gitlab.com/gitlab-org/gitlab-foss.git

Diffstat (limited to 'doc')
-rw-r--r--doc/.vale/gitlab/LatinTerms.yml1
-rw-r--r--doc/.vale/gitlab/Wordy.yml1
-rw-r--r--doc/administration/audit_event_streaming/audit_event_types.md7
-rw-r--r--doc/administration/audit_event_streaming/graphql_api.md23
-rw-r--r--doc/administration/audit_event_streaming/index.md95
-rw-r--r--doc/administration/audit_events.md181
-rw-r--r--doc/administration/auditor_users.md3
-rw-r--r--doc/administration/auth/ldap/index.md2
-rw-r--r--doc/administration/backup_restore/backup_gitlab.md9
-rw-r--r--doc/administration/cicd.md102
-rw-r--r--doc/administration/dedicated/index.md50
-rw-r--r--doc/administration/geo/disaster_recovery/index.md1
-rw-r--r--doc/administration/geo/index.md3
-rw-r--r--doc/administration/geo/replication/troubleshooting.md8
-rw-r--r--doc/administration/geo/setup/index.md2
-rw-r--r--doc/administration/gitaly/configure_gitaly.md166
-rw-r--r--doc/administration/gitaly/img/gitaly_adaptive_concurrency_limit.pngbin0 -> 36052 bytes
-rw-r--r--doc/administration/gitaly/index.md86
-rw-r--r--doc/administration/gitaly/monitoring.md41
-rw-r--r--doc/administration/gitaly/recovery.md54
-rw-r--r--doc/administration/gitaly/troubleshooting.md37
-rw-r--r--doc/administration/inactive_project_deletion.md19
-rw-r--r--doc/administration/incoming_email.md7
-rw-r--r--doc/administration/instance_limits.md13
-rw-r--r--doc/administration/integration/plantuml.md9
-rw-r--r--doc/administration/logs/index.md6
-rw-r--r--doc/administration/logs/log_parsing.md20
-rw-r--r--doc/administration/merge_request_diffs.md117
-rw-r--r--doc/administration/moderate_users.md39
-rw-r--r--doc/administration/monitoring/performance/performance_bar.md4
-rw-r--r--doc/administration/monitoring/prometheus/gitlab_metrics.md10
-rw-r--r--doc/administration/monitoring/prometheus/index.md4
-rw-r--r--doc/administration/monitoring/prometheus/web_exporter.md8
-rw-r--r--doc/administration/operations/puma.md31
-rw-r--r--doc/administration/package_information/supported_os.md2
-rw-r--r--doc/administration/packages/container_registry.md44
-rw-r--r--doc/administration/pages/index.md4
-rw-r--r--doc/administration/postgresql/external.md7
-rw-r--r--doc/administration/postgresql/external_metrics.md33
-rw-r--r--doc/administration/postgresql/external_upgrade.md48
-rw-r--r--doc/administration/postgresql/index.md5
-rw-r--r--doc/administration/raketasks/geo.md84
-rw-r--r--doc/administration/raketasks/github_import.md6
-rw-r--r--doc/administration/reference_architectures/10k_users.md43
-rw-r--r--doc/administration/reference_architectures/1k_users.md37
-rw-r--r--doc/administration/reference_architectures/25k_users.md43
-rw-r--r--doc/administration/reference_architectures/2k_users.md33
-rw-r--r--doc/administration/reference_architectures/3k_users.md45
-rw-r--r--doc/administration/reference_architectures/50k_users.md43
-rw-r--r--doc/administration/reference_architectures/5k_users.md46
-rw-r--r--doc/administration/reference_architectures/index.md118
-rw-r--r--doc/administration/review_spam_logs.md40
-rw-r--r--doc/administration/settings/continuous_integration.md16
-rw-r--r--doc/administration/settings/gitaly_timeouts.md10
-rw-r--r--doc/administration/settings/jira_cloud_app.md76
-rw-r--r--doc/administration/settings/rate_limits_on_git_ssh_operations.md3
-rw-r--r--doc/administration/settings/scim_setup.md4
-rw-r--r--doc/administration/settings/sign_in_restrictions.md2
-rw-r--r--doc/administration/settings/slack_app.md8
-rw-r--r--doc/administration/settings/usage_statistics.md40
-rw-r--r--doc/administration/sidekiq/index.md37
-rw-r--r--doc/administration/sidekiq/processing_specific_job_classes.md37
-rw-r--r--doc/administration/sidekiq/sidekiq_troubleshooting.md22
-rw-r--r--doc/administration/silent_mode/index.md9
-rw-r--r--doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md4
-rw-r--r--doc/api/api_resources.md18
-rw-r--r--doc/api/bulk_imports.md23
-rw-r--r--doc/api/container_registry.md19
-rw-r--r--doc/api/dependency_list_export.md10
-rw-r--r--doc/api/deployments.md4
-rw-r--r--doc/api/geo_nodes.md14
-rw-r--r--doc/api/geo_sites.md3
-rw-r--r--doc/api/graphql/reference/index.md1236
-rw-r--r--doc/api/group_iterations.md4
-rw-r--r--doc/api/group_protected_environments.md17
-rw-r--r--doc/api/groups.md17
-rw-r--r--doc/api/import.md12
-rw-r--r--doc/api/invitations.md1
-rw-r--r--doc/api/iterations.md4
-rw-r--r--doc/api/jobs.md5
-rw-r--r--doc/api/lint.md4
-rw-r--r--doc/api/member_roles.md15
-rw-r--r--doc/api/merge_request_approvals.md2
-rw-r--r--doc/api/merge_requests.md4
-rw-r--r--doc/api/packages.md62
-rw-r--r--doc/api/pipelines.md56
-rw-r--r--doc/api/projects.md3
-rw-r--r--doc/api/protected_environments.md6
-rw-r--r--doc/api/rest/index.md1
-rw-r--r--doc/api/runners.md84
-rw-r--r--doc/api/saml.md12
-rw-r--r--doc/api/scim.md14
-rw-r--r--doc/api/settings.md13
-rw-r--r--doc/api/users.md12
-rw-r--r--doc/architecture/blueprints/cdot_orders/index.md265
-rw-r--r--doc/architecture/blueprints/cells/impacted_features/personal-access-tokens.md28
-rw-r--r--doc/architecture/blueprints/cells/index.md2
-rw-r--r--doc/architecture/blueprints/ci_pipeline_components/img/catalogs.pngbin30325 -> 0 bytes
-rw-r--r--doc/architecture/blueprints/ci_pipeline_components/index.md59
-rw-r--r--doc/architecture/blueprints/cloud_connector/decisions/001_lb_entry_point.md52
-rw-r--r--doc/architecture/blueprints/cloud_connector/index.md12
-rw-r--r--doc/architecture/blueprints/container_registry_metadata_database/index.md10
-rw-r--r--doc/architecture/blueprints/container_registry_metadata_database_self_managed_rollout/index.md2
-rw-r--r--doc/architecture/blueprints/email_ingestion/index.md2
-rw-r--r--doc/architecture/blueprints/feature_flags_usage_in_dev_and_ops/index.md285
-rw-r--r--doc/architecture/blueprints/gitlab_ml_experiments/index.md67
-rw-r--r--doc/architecture/blueprints/gitlab_steps/gitlab-ci.md247
-rw-r--r--doc/architecture/blueprints/gitlab_steps/index.md15
-rw-r--r--doc/architecture/blueprints/gitlab_steps/step-definition.md368
-rw-r--r--doc/architecture/blueprints/gitlab_steps/steps-syntactic-sugar.md66
-rw-r--r--doc/architecture/blueprints/google_artifact_registry_integration/index.md2
-rw-r--r--doc/architecture/blueprints/new_diffs.md29
-rw-r--r--doc/architecture/blueprints/observability_logging/diagrams.drawio1
-rw-r--r--doc/architecture/blueprints/observability_logging/index.md632
-rw-r--r--doc/architecture/blueprints/observability_logging/system_overview.pngbin0 -> 76330 bytes
-rw-r--r--doc/architecture/blueprints/organization/diagrams/organization-isolation-broken.drawio.pngbin0 -> 57795 bytes
-rw-r--r--doc/architecture/blueprints/organization/diagrams/organization-isolation.drawio.pngbin0 -> 56021 bytes
-rw-r--r--doc/architecture/blueprints/organization/index.md3
-rw-r--r--doc/architecture/blueprints/organization/isolation.md152
-rw-r--r--doc/architecture/blueprints/runner_admission_controller/index.md97
-rw-r--r--doc/architecture/blueprints/secret_detection/index.md124
-rw-r--r--doc/architecture/blueprints/secret_manager/decisions/002_gcp_kms.md101
-rw-r--r--doc/architecture/blueprints/secret_manager/decisions/003_go_service.md37
-rw-r--r--doc/architecture/blueprints/secret_manager/decisions/004_staleless_kms.md49
-rw-r--r--doc/architecture/blueprints/secret_manager/index.md18
-rw-r--r--doc/architecture/blueprints/work_items/index.md32
-rw-r--r--doc/ci/chatops/index.md61
-rw-r--r--doc/ci/cloud_services/azure/index.md21
-rw-r--r--doc/ci/cloud_services/google_cloud/index.md4
-rw-r--r--doc/ci/components/index.md94
-rw-r--r--doc/ci/debugging.md295
-rw-r--r--doc/ci/docker/using_docker_build.md4
-rw-r--r--doc/ci/docker/using_docker_images.md75
-rw-r--r--doc/ci/enable_or_disable_ci.md62
-rw-r--r--doc/ci/environments/deployment_approvals.md129
-rw-r--r--doc/ci/environments/kubernetes_dashboard.md14
-rw-r--r--doc/ci/index.md31
-rw-r--r--doc/ci/jobs/ci_job_token.md40
-rw-r--r--doc/ci/jobs/index.md67
-rw-r--r--doc/ci/jobs/job_control.md24
-rw-r--r--doc/ci/migration/bamboo.md780
-rw-r--r--doc/ci/migration/github_actions.md4
-rw-r--r--doc/ci/migration/jenkins.md4
-rw-r--r--doc/ci/pipelines/merge_request_pipelines.md22
-rw-r--r--doc/ci/pipelines/merge_trains.md35
-rw-r--r--doc/ci/pipelines/merged_results_pipelines.md13
-rw-r--r--doc/ci/pipelines/settings.md24
-rw-r--r--doc/ci/quick_start/index.md2
-rw-r--r--doc/ci/runners/new_creation_workflow.md23
-rw-r--r--doc/ci/runners/runners_scope.md6
-rw-r--r--doc/ci/runners/saas/linux_saas_runner.md2
-rw-r--r--doc/ci/runners/saas/macos_saas_runner.md28
-rw-r--r--doc/ci/secrets/azure_key_vault.md66
-rw-r--r--doc/ci/testing/browser_performance_testing.md3
-rw-r--r--doc/ci/testing/code_coverage.md5
-rw-r--r--doc/ci/testing/code_quality.md47
-rw-r--r--doc/ci/triggers/index.md1
-rw-r--r--doc/ci/troubleshooting.md558
-rw-r--r--doc/ci/variables/index.md72
-rw-r--r--doc/ci/variables/predefined_variables.md12
-rw-r--r--doc/ci/yaml/gitlab_ci_yaml.md92
-rw-r--r--doc/ci/yaml/img/job_running_v13_10.pngbin57525 -> 0 bytes
-rw-r--r--doc/ci/yaml/img/pipeline_status.pngbin54243 -> 0 bytes
-rw-r--r--doc/ci/yaml/img/rollback.pngbin41693 -> 0 bytes
-rw-r--r--doc/ci/yaml/index.md705
-rw-r--r--doc/ci/yaml/inputs.md86
-rw-r--r--doc/development/ai_architecture.md5
-rw-r--r--doc/development/ai_features/duo_chat.md37
-rw-r--r--doc/development/ai_features/index.md134
-rw-r--r--doc/development/api_graphql_styleguide.md9
-rw-r--r--doc/development/backend/create_source_code_be/gitaly_touch_points.md6
-rw-r--r--doc/development/bulk_import.md9
-rw-r--r--doc/development/cells/index.md1
-rw-r--r--doc/development/code_review.md10
-rw-r--r--doc/development/contributing/first_contribution.md2
-rw-r--r--doc/development/contributing/img/bot_ready.pngbin9367 -> 0 bytes
-rw-r--r--doc/development/contributing/img/bot_ready_v16_6.pngbin0 -> 7163 bytes
-rw-r--r--doc/development/dangerbot.md7
-rw-r--r--doc/development/database/avoiding_downtime_in_migrations.md11
-rw-r--r--doc/development/database/clickhouse/clickhouse_within_gitlab.md45
-rw-r--r--doc/development/database/database_lab.md2
-rw-r--r--doc/development/database/iterating_tables_in_batches.md4
-rw-r--r--doc/development/database/loose_foreign_keys.md6
-rw-r--r--doc/development/database/multiple_databases.md10
-rw-r--r--doc/development/database/understanding_explain_plans.md1
-rw-r--r--doc/development/development_processes.md57
-rw-r--r--doc/development/distributed_tracing.md4
-rw-r--r--doc/development/documentation/styleguide/index.md36
-rw-r--r--doc/development/documentation/styleguide/word_list.md30
-rw-r--r--doc/development/documentation/versions.md5
-rw-r--r--doc/development/documentation/workflow.md12
-rw-r--r--doc/development/ee_features.md24
-rw-r--r--doc/development/experiment_guide/implementing_experiments.md2
-rw-r--r--doc/development/export_csv.md2
-rw-r--r--doc/development/fe_guide/graphql.md37
-rw-r--r--doc/development/fe_guide/security.md51
-rw-r--r--doc/development/fe_guide/sentry.md5
-rw-r--r--doc/development/fe_guide/storybook.md34
-rw-r--r--doc/development/fe_guide/style/scss.md96
-rw-r--r--doc/development/fe_guide/style/typescript.md215
-rw-r--r--doc/development/fe_guide/type_hinting.md215
-rw-r--r--doc/development/feature_flags/controls.md11
-rw-r--r--doc/development/feature_flags/index.md6
-rw-r--r--doc/development/gems.md5
-rw-r--r--doc/development/gitaly.md43
-rw-r--r--doc/development/github_importer.md46
-rw-r--r--doc/development/i18n/externalization.md2
-rw-r--r--doc/development/i18n/proofreader.md1
-rw-r--r--doc/development/img/runner_fleet_dashboard.pngbin0 -> 38440 bytes
-rw-r--r--doc/development/index.md2
-rw-r--r--doc/development/internal_analytics/index.md53
-rw-r--r--doc/development/internal_analytics/internal_event_instrumentation/local_setup_and_debugging.md53
-rw-r--r--doc/development/internal_analytics/internal_event_instrumentation/quick_start.md24
-rw-r--r--doc/development/internal_analytics/metrics/metrics_dictionary.md2
-rw-r--r--doc/development/internal_analytics/service_ping/index.md119
-rw-r--r--doc/development/internal_api/index.md4
-rw-r--r--doc/development/migration_style_guide.md20
-rw-r--r--doc/development/permissions/custom_roles.md4
-rw-r--r--doc/development/pipelines/index.md33
-rw-r--r--doc/development/repository_storage_moves/index.md102
-rw-r--r--doc/development/rubocop_development_guide.md48
-rw-r--r--doc/development/ruby_upgrade.md14
-rw-r--r--doc/development/runner_fleet_dashboard.md245
-rw-r--r--doc/development/testing_guide/end_to_end/beginners_guide.md10
-rw-r--r--doc/development/testing_guide/end_to_end/capybara_to_chemlab_migration_guide.md38
-rw-r--r--doc/development/utilities.md2
-rw-r--r--doc/development/wikis.md3
-rw-r--r--doc/devsecops.md60
-rw-r--r--doc/gitlab-basics/start-using-git.md9
-rw-r--r--doc/install/aws/eks_clusters_aws.md49
-rw-r--r--doc/install/aws/gitlab_hybrid_on_aws.md380
-rw-r--r--doc/install/aws/gitlab_sre_for_aws.md98
-rw-r--r--doc/install/aws/index.md880
-rw-r--r--doc/install/aws/manual_install_aws.md859
-rw-r--r--doc/install/docker.md5
-rw-r--r--doc/install/installation.md2
-rw-r--r--doc/install/relative_url.md2
-rw-r--r--doc/install/requirements.md5
-rw-r--r--doc/integration/advanced_search/elasticsearch.md33
-rw-r--r--doc/integration/advanced_search/elasticsearch_troubleshooting.md10
-rw-r--r--doc/integration/jenkins.md1
-rw-r--r--doc/integration/jira/connect-app.md8
-rw-r--r--doc/integration/jira/development_panel.md2
-rw-r--r--doc/integration/jira/issues.md3
-rw-r--r--doc/integration/kerberos.md2
-rw-r--r--doc/integration/mattermost/index.md1
-rw-r--r--doc/integration/oauth2_generic.md3
-rw-r--r--doc/integration/shibboleth.md2
-rw-r--r--doc/operations/feature_flags.md25
-rw-r--r--doc/operations/incident_management/manage_incidents.md2
-rw-r--r--doc/policy/experiment-beta-support.md6
-rw-r--r--doc/security/email_verification.md10
-rw-r--r--doc/security/reset_user_password.md4
-rw-r--r--doc/security/token_overview.md33
-rw-r--r--doc/security/unlock_user.md10
-rw-r--r--doc/solutions/cloud/aws/gitaly_sre_for_aws.md91
-rw-r--r--doc/solutions/cloud/aws/gitlab_aws_integration.md103
-rw-r--r--doc/solutions/cloud/aws/gitlab_aws_partner_designations.md38
-rw-r--r--doc/solutions/cloud/aws/gitlab_instance_on_aws.md55
-rw-r--r--doc/solutions/cloud/aws/gitlab_single_box_on_aws.md51
-rw-r--r--doc/solutions/cloud/aws/img/all-aws-partner-designations.pngbin0 -> 12275 bytes
-rw-r--r--doc/solutions/cloud/aws/index.md84
-rw-r--r--doc/solutions/cloud/index.md13
-rw-r--r--doc/solutions/index.md19
-rw-r--r--doc/subscriptions/bronze_starter.md2
-rw-r--r--doc/subscriptions/gitlab_com/index.md13
-rw-r--r--doc/subscriptions/gitlab_dedicated/index.md5
-rw-r--r--doc/subscriptions/self_managed/index.md39
-rw-r--r--doc/topics/autodevops/cicd_variables.md3
-rw-r--r--doc/topics/autodevops/customize.md5
-rw-r--r--doc/topics/offline/quick_start_guide.md2
-rw-r--r--doc/tutorials/build_application.md2
-rw-r--r--doc/tutorials/left_sidebar/index.md6
-rw-r--r--doc/tutorials/product_analytics_onboarding_website_project/index.md139
-rw-r--r--doc/update/deprecations.md229
-rw-r--r--doc/update/versions/gitlab_15_changes.md10
-rw-r--r--doc/update/versions/gitlab_16_changes.md171
-rw-r--r--doc/user/ai_features.md130
-rw-r--r--doc/user/analytics/analytics_dashboards.md6
-rw-r--r--doc/user/analytics/dora_metrics.md68
-rw-r--r--doc/user/analytics/value_streams_dashboard.md4
-rw-r--r--doc/user/application_security/container_scanning/index.md140
-rw-r--r--doc/user/application_security/continuous_vulnerability_scanning/index.md5
-rw-r--r--doc/user/application_security/dast/browser_based.md33
-rw-r--r--doc/user/application_security/dast/checks/89.1.md37
-rw-r--r--doc/user/application_security/dast/checks/917.1.md33
-rw-r--r--doc/user/application_security/dast/checks/94.1.md53
-rw-r--r--doc/user/application_security/dast/checks/94.2.md51
-rw-r--r--doc/user/application_security/dast/checks/94.3.md45
-rw-r--r--doc/user/application_security/dast/checks/943.1.md30
-rw-r--r--doc/user/application_security/dast/checks/index.md6
-rw-r--r--doc/user/application_security/dast/proxy-based.md7
-rw-r--r--doc/user/application_security/dependency_scanning/index.md33
-rw-r--r--doc/user/application_security/get-started-security.md48
-rw-r--r--doc/user/application_security/index.md12
-rw-r--r--doc/user/application_security/policies/scan-execution-policies.md6
-rw-r--r--doc/user/application_security/policies/scan-result-policies.md179
-rw-r--r--doc/user/application_security/sast/customize_rulesets.md4
-rw-r--r--doc/user/application_security/sast/index.md4
-rw-r--r--doc/user/application_security/sast/rules.md2
-rw-r--r--doc/user/application_security/sast/troubleshooting.md10
-rw-r--r--doc/user/application_security/secret_detection/index.md50
-rw-r--r--doc/user/application_security/security_dashboard/img/group_security_dashboard.pngbin0 -> 234627 bytes
-rw-r--r--doc/user/application_security/security_dashboard/img/project_security_dashboard.pngbin0 -> 157184 bytes
-rw-r--r--doc/user/application_security/security_dashboard/img/security_center_dashboard_v15_10.pngbin22361 -> 0 bytes
-rw-r--r--doc/user/application_security/security_dashboard/index.md166
-rw-r--r--doc/user/application_security/terminology/index.md2
-rw-r--r--doc/user/application_security/vulnerabilities/img/create_mr_from_vulnerability_v13_4.pngbin16106 -> 0 bytes
-rw-r--r--doc/user/application_security/vulnerabilities/img/create_mr_from_vulnerability_v13_4_updated.pngbin0 -> 65832 bytes
-rw-r--r--doc/user/application_security/vulnerabilities/index.md15
-rw-r--r--doc/user/application_security/vulnerability_report/index.md85
-rw-r--r--doc/user/clusters/agent/gitops/example_repository_structure.md2
-rw-r--r--doc/user/clusters/agent/gitops/flux_oci_tutorial.md2
-rw-r--r--doc/user/clusters/agent/gitops/flux_tutorial.md1
-rw-r--r--doc/user/clusters/agent/install/index.md4
-rw-r--r--doc/user/clusters/agent/user_access.md59
-rw-r--r--doc/user/clusters/agent/vulnerabilities.md21
-rw-r--r--doc/user/compliance/compliance_center/index.md7
-rw-r--r--doc/user/compliance/license_list.md2
-rw-r--r--doc/user/compliance/license_scanning_of_cyclonedx_files/index.md9
-rw-r--r--doc/user/custom_roles.md89
-rw-r--r--doc/user/discussions/img/add_internal_note_v15_0.pngbin18963 -> 0 bytes
-rw-r--r--doc/user/discussions/img/add_internal_note_v16_6.pngbin0 -> 8531 bytes
-rw-r--r--doc/user/discussions/img/create_thread_v16_6.pngbin0 -> 14366 bytes
-rw-r--r--doc/user/discussions/img/discussion_comment.pngbin18323 -> 0 bytes
-rw-r--r--doc/user/discussions/img/quickly_assign_commenter_v13_1.pngbin43849 -> 0 bytes
-rw-r--r--doc/user/discussions/img/quickly_assign_commenter_v16_6.pngbin0 -> 11074 bytes
-rw-r--r--doc/user/discussions/index.md12
-rw-r--r--doc/user/feature_flags.md2
-rw-r--r--doc/user/free_push_limit.md4
-rw-r--r--doc/user/gitlab_duo_chat.md67
-rw-r--r--doc/user/group/access_and_permissions.md9
-rw-r--r--doc/user/group/epics/manage_epics.md2
-rw-r--r--doc/user/group/import/index.md21
-rw-r--r--doc/user/group/index.md6
-rw-r--r--doc/user/group/manage.md28
-rw-r--r--doc/user/group/reporting/git_abuse_rate_limit.md2
-rw-r--r--doc/user/group/saml_sso/group_sync.md2
-rw-r--r--doc/user/group/saml_sso/index.md40
-rw-r--r--doc/user/group/saml_sso/troubleshooting.md20
-rw-r--r--doc/user/group/saml_sso/troubleshooting_scim.md19
-rw-r--r--doc/user/group/value_stream_analytics/index.md15
-rw-r--r--doc/user/img/snippet_clone_button_v13_0.pngbin33081 -> 0 bytes
-rw-r--r--doc/user/img/snippet_intro_v13_11.pngbin15293 -> 0 bytes
-rw-r--r--doc/user/img/snippet_sample_v16_6.pngbin0 -> 34750 bytes
-rw-r--r--doc/user/infrastructure/clusters/connect/new_gke_cluster.md6
-rw-r--r--doc/user/infrastructure/iac/index.md1
-rw-r--r--doc/user/infrastructure/iac/mr_integration.md11
-rw-r--r--doc/user/infrastructure/iac/terraform_state.md10
-rw-r--r--doc/user/markdown.md3
-rw-r--r--doc/user/okrs.md18
-rw-r--r--doc/user/organization/index.md38
-rw-r--r--doc/user/packages/composer_repository/index.md2
-rw-r--r--doc/user/packages/container_registry/index.md2
-rw-r--r--doc/user/packages/container_registry/reduce_container_registry_storage.md51
-rw-r--r--doc/user/packages/container_registry/troubleshoot_container_registry.md27
-rw-r--r--doc/user/packages/generic_packages/index.md6
-rw-r--r--doc/user/packages/maven_repository/index.md97
-rw-r--r--doc/user/packages/npm_registry/index.md10
-rw-r--r--doc/user/packages/nuget_repository/index.md17
-rw-r--r--doc/user/packages/package_registry/supported_functionality.md6
-rw-r--r--doc/user/permissions.md11
-rw-r--r--doc/user/product_analytics/index.md56
-rw-r--r--doc/user/product_analytics/instrumentation/browser_sdk.md282
-rw-r--r--doc/user/product_analytics/instrumentation/index.md15
-rw-r--r--doc/user/profile/account/delete_account.md10
-rw-r--r--doc/user/profile/account/two_factor_authentication.md6
-rw-r--r--doc/user/profile/comment_templates.md9
-rw-r--r--doc/user/profile/img/comment_template_v16_6.pngbin0 -> 15154 bytes
-rw-r--r--doc/user/profile/img/saved_replies_dropdown_v16_0.pngbin16149 -> 0 bytes
-rw-r--r--doc/user/profile/index.md7
-rw-r--r--doc/user/profile/notifications.md30
-rw-r--r--doc/user/profile/personal_access_tokens.md36
-rw-r--r--doc/user/profile/preferences.md16
-rw-r--r--doc/user/profile/service_accounts.md4
-rw-r--r--doc/user/project/codeowners/index.md50
-rw-r--r--doc/user/project/deploy_tokens/index.md6
-rw-r--r--doc/user/project/import/github.md14
-rw-r--r--doc/user/project/import/jira.md4
-rw-r--r--doc/user/project/index.md17
-rw-r--r--doc/user/project/integrations/aws_codepipeline.md4
-rw-r--r--doc/user/project/integrations/gitlab_slack_application.md14
-rw-r--r--doc/user/project/issues/associate_zoom_meeting.md2
-rw-r--r--doc/user/project/issues/img/zoom-quickaction-button.pngbin43369 -> 0 bytes
-rw-r--r--doc/user/project/issues/img/zoom_quickaction_button_v16_6.pngbin0 -> 8668 bytes
-rw-r--r--doc/user/project/issues/issue_weight.md3
-rw-r--r--doc/user/project/members/index.md1
-rw-r--r--doc/user/project/members/share_project_with_groups.md5
-rw-r--r--doc/user/project/merge_requests/ai_in_merge_requests.md10
-rw-r--r--doc/user/project/merge_requests/approvals/settings.md23
-rw-r--r--doc/user/project/merge_requests/cherry_pick_changes.md13
-rw-r--r--doc/user/project/merge_requests/dependencies.md6
-rw-r--r--doc/user/project/merge_requests/drafts.md35
-rw-r--r--doc/user/project/merge_requests/index.md29
-rw-r--r--doc/user/project/merge_requests/merge_when_pipeline_succeeds.md2
-rw-r--r--doc/user/project/merge_requests/revert_changes.md2
-rw-r--r--doc/user/project/merge_requests/reviews/data_usage.md2
-rw-r--r--doc/user/project/merge_requests/reviews/img/comment-on-any-diff-line_v13_10.pngbin21304 -> 0 bytes
-rw-r--r--doc/user/project/merge_requests/reviews/img/comment_on_any_diff_line_v16_6.pngbin0 -> 12677 bytes
-rw-r--r--doc/user/project/merge_requests/reviews/img/mr_review_new_comment_v15_3.pngbin32927 -> 0 bytes
-rw-r--r--doc/user/project/merge_requests/reviews/img/mr_review_new_comment_v16_6.pngbin0 -> 11833 bytes
-rw-r--r--doc/user/project/merge_requests/reviews/img/mr_summary_comment_v15_4.pngbin61841 -> 0 bytes
-rw-r--r--doc/user/project/merge_requests/reviews/img/mr_summary_comment_v16_6.pngbin0 -> 16816 bytes
-rw-r--r--doc/user/project/merge_requests/reviews/index.md20
-rw-r--r--doc/user/project/merge_requests/status_checks.md4
-rw-r--r--doc/user/project/pages/public_folder.md15
-rw-r--r--doc/user/project/protected_branches.md47
-rw-r--r--doc/user/project/push_options.md3
-rw-r--r--doc/user/project/repository/branches/index.md37
-rw-r--r--doc/user/project/repository/code_suggestions/index.md34
-rw-r--r--doc/user/project/repository/code_suggestions/self_managed.md2
-rw-r--r--doc/user/project/repository/code_suggestions/troubleshooting.md3
-rw-r--r--doc/user/project/repository/forking_workflow.md7
-rw-r--r--doc/user/project/repository/reducing_the_repo_size_using_git.md7
-rw-r--r--doc/user/project/service_desk/configure.md2
-rw-r--r--doc/user/project/service_desk/using_service_desk.md5
-rw-r--r--doc/user/project/settings/project_access_tokens.md2
-rw-r--r--doc/user/project/system_notes.md18
-rw-r--r--doc/user/project/wiki/index.md6
-rw-r--r--doc/user/read_only_namespaces.md2
-rw-r--r--doc/user/report_abuse.md9
-rw-r--r--doc/user/reserved_names.md39
-rw-r--r--doc/user/search/index.md22
-rw-r--r--doc/user/shortcuts.md5
-rw-r--r--doc/user/snippets.md9
-rw-r--r--doc/user/storage_management_automation.md34
-rw-r--r--doc/user/tasks.md19
-rw-r--r--doc/user/usage_quotas.md137
-rw-r--r--doc/user/workspace/index.md12
429 files changed, 13289 insertions, 5598 deletions
diff --git a/doc/.vale/gitlab/LatinTerms.yml b/doc/.vale/gitlab/LatinTerms.yml
index 0bac0448bb1..0f098979b16 100644
--- a/doc/.vale/gitlab/LatinTerms.yml
+++ b/doc/.vale/gitlab/LatinTerms.yml
@@ -15,3 +15,4 @@ swap:
e\. g\.: for example
i\.e\.: that is
i\. e\.: that is
+ via: "Use 'with', 'through', or 'by using' instead."
diff --git a/doc/.vale/gitlab/Wordy.yml b/doc/.vale/gitlab/Wordy.yml
index 808bedad35a..9c472f66570 100644
--- a/doc/.vale/gitlab/Wordy.yml
+++ b/doc/.vale/gitlab/Wordy.yml
@@ -10,6 +10,7 @@ link: https://docs.gitlab.com/ee/development/documentation/styleguide/word_list.
level: suggestion
ignorecase: true
swap:
+ a number of: "Specify the number or remove the phrase."
as well as: "Use 'and' instead of 'as well as'."
note that: "Remove the phrase 'note that'."
please: "Use 'please' only if we've inconvenienced the user."
diff --git a/doc/administration/audit_event_streaming/audit_event_types.md b/doc/administration/audit_event_streaming/audit_event_types.md
index 3b2ae098469..88212045d8e 100644
--- a/doc/administration/audit_event_streaming/audit_event_types.md
+++ b/doc/administration/audit_event_streaming/audit_event_types.md
@@ -37,6 +37,7 @@ Audit event types belong to the following product categories.
| Name | Description | Saved to database | Streamed | Introduced in |
|:-----|:------------|:------------------|:---------|:--------------|
| [`amazon_s3_configuration_created`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132443) | Triggered when Amazon S3 configuration for audit events streaming is created| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [16.5](https://gitlab.com/gitlab-org/gitlab/-/issues/423229) |
+| [`amazon_s3_configuration_deleted`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133695) | Triggered when Amazon S3 configuration for audit events streaming is deleted.| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [16.5](https://gitlab.com/gitlab-org/gitlab/-/issues/423229) |
| [`amazon_s3_configuration_updated`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133691) | Triggered when Amazon S3 configuration for audit events streaming is updated.| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [16.5](https://gitlab.com/gitlab-org/gitlab/-/issues/423229) |
| [`audit_events_streaming_headers_create`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/92068) | Triggered when a streaming header for audit events is created| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [15.3](https://gitlab.com/gitlab-org/gitlab/-/issues/366350) |
| [`audit_events_streaming_headers_destroy`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/92068) | Triggered when a streaming header for audit events is deleted| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [15.3](https://gitlab.com/gitlab-org/gitlab/-/issues/366350) |
@@ -44,6 +45,7 @@ Audit event types belong to the following product categories.
| [`audit_events_streaming_instance_headers_destroy`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127228) | Triggered when a streaming header for instance level external audit event destination is deleted| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [16.3](https://gitlab.com/gitlab-org/gitlab/-/issues/417433) |
| [`audit_events_streaming_instance_headers_update`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127228) | Triggered when a streaming header for instance level external audit event destination is updated| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [16.3](https://gitlab.com/gitlab-org/gitlab/-/issues/417433) |
| [`create_event_streaming_destination`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/74632) | Event triggered when an external audit event destination is created| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [14.6](https://gitlab.com/gitlab-org/gitlab/-/issues/344664) |
+| [`create_http_namespace_filter`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/136047) | Event triggered when a namespace filter for an external audit event destination for a top-level group is created.| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [16.6](https://gitlab.com/gitlab-org/gitlab/-/issues/424176) |
| [`create_instance_event_streaming_destination`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123882) | Event triggered when an instance level external audit event destination is created| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [16.2](https://gitlab.com/gitlab-org/gitlab/-/issues/404730) |
| [`destroy_event_streaming_destination`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/74632) | Event triggered when an external audit event destination is deleted| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [14.6](https://gitlab.com/gitlab-org/gitlab/-/issues/344664) |
| [`destroy_instance_event_streaming_destination`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/125846) | Event triggered when an instance level external audit event destination is deleted| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [16.2](https://gitlab.com/gitlab-org/gitlab/-/issues/404730) |
@@ -171,6 +173,7 @@ Audit event types belong to the following product categories.
| [`ci_variable_created`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/91983) | Triggered when a CI variable is created at a project level| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [15.2](https://gitlab.com/gitlab-org/gitlab/-/issues/363090) |
| [`ci_variable_deleted`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/91983) | Triggered when a project's CI variable is deleted| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [15.2](https://gitlab.com/gitlab-org/gitlab/-/issues/363090) |
| [`ci_variable_updated`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/91983) | Triggered when a project's CI variable is updated| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [15.2](https://gitlab.com/gitlab-org/gitlab/-/issues/363090) |
+| [`destroy_pipeline`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/135255) | Event triggered when a pipeline is deleted| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [16.6](https://gitlab.com/gitlab-org/gitlab/-/issues/339041) |
### Deployment management
@@ -227,6 +230,8 @@ Audit event types belong to the following product categories.
| Name | Description | Saved to database | Streamed | Introduced in |
|:-----|:------------|:------------------|:---------|:--------------|
+| [`create_ssh_certificate`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134556) | Event triggered when an SSH certificate is created.| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [16.6](https://gitlab.com/gitlab-org/gitlab/-/issues/427413) |
+| [`delete_ssh_certificate`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134556) | Event triggered when an SSH certificate is deleted.| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [16.6](https://gitlab.com/gitlab-org/gitlab/-/issues/427413) |
| [`group_created`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121005) | Event triggered when a group is created.| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [16.3](https://gitlab.com/gitlab-org/gitlab/-/issues/411595) |
| [`group_lfs_enabled_updated`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/106079) | Event triggered when a groups lfs enabled is updated.| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [15.7](https://gitlab.com/gitlab-org/gitlab/-/issues/369323) |
| [`group_membership_lock_updated`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/106079) | Event triggered when a groups membership lock is updated.| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [15.7](https://gitlab.com/gitlab-org/gitlab/-/issues/369323) |
@@ -306,7 +311,6 @@ Audit event types belong to the following product categories.
| Name | Description | Saved to database | Streamed | Introduced in |
|:-----|:------------|:------------------|:---------|:--------------|
| [`experiment_features_enabled_updated`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/118222) | Event triggered on toggling setting for enabling experiment AI features| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [16.0](https://gitlab.com/gitlab-org/gitlab/-/issues/404856/) |
-| [`third_party_ai_features_enabled_updated`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/118222) | Event triggered on toggling setting for enabling third-party AI features| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [16.0](https://gitlab.com/gitlab-org/gitlab/-/issues/404856/) |
### Portfolio management
@@ -442,6 +446,7 @@ Audit event types belong to the following product categories.
| [`email_confirmation_sent`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/129261) | Triggered when users add or change and email address and it needs to be confirmed.| **{dotted-circle}** No | **{check-circle}** Yes | GitLab [16.3](https://gitlab.com/gitlab-org/gitlab/-/issues/377625) |
| [`remove_ssh_key`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/65615) | Audit event triggered when a SSH key is removed| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [14.1](https://gitlab.com/gitlab-org/gitlab/-/issues/220127) |
| [`user_admin_status_updated`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/65168) | Adds an audit event when a user is either made an administrator, or removed as an administrator| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [14.1](https://gitlab.com/gitlab-org/gitlab/-/issues/323905) |
+| [`user_auditor_status_updated`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/136456) | Adds an audit event when a user is either made an auditor, or removed as an auditor| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [16.6](https://gitlab.com/gitlab-org/gitlab/-/issues/430235) |
| [`user_email_address_updated`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/2103) | Adds an audit event when a user updates their email address| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [10.1](https://gitlab.com/gitlab-org/gitlab-ee/issues/1370) |
| [`user_profile_visiblity_updated`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/129149) | Triggered when user toggles private profile user setting| **{dotted-circle}** No | **{check-circle}** Yes | GitLab [16.3](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/129149) |
| [`user_username_updated`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/106086) | Event triggered on updating a user's username| **{check-circle}** Yes | **{check-circle}** Yes | GitLab [15.7](https://gitlab.com/gitlab-org/gitlab/-/issues/369329) |
diff --git a/doc/administration/audit_event_streaming/graphql_api.md b/doc/administration/audit_event_streaming/graphql_api.md
index 6e1a3424929..58668902b8e 100644
--- a/doc/administration/audit_event_streaming/graphql_api.md
+++ b/doc/administration/audit_event_streaming/graphql_api.md
@@ -177,9 +177,8 @@ Prerequisites:
- Owner role for a top-level group.
-Users with the Owner role for a group can update streaming destinations' custom HTTP headers using the
-`auditEventsStreamingHeadersUpdate` mutation type. You can retrieve the custom HTTP headers ID
-by [listing all the custom HTTP headers](#list-streaming-destinations) for the group.
+To update streaming destinations for a group, use the `externalAuditEventDestinationUpdate` mutation type. You can retrieve the destination's ID
+by [listing all the streaming destinations](#list-streaming-destinations) for the group.
```graphql
mutation {
@@ -206,6 +205,24 @@ Streaming destination is updated if:
- The returned `errors` object is empty.
- The API responds with `200 OK`.
+Users with the Owner role for a group can update streaming destinations' custom HTTP headers using the
+`auditEventsStreamingHeadersUpdate` mutation type. You can retrieve the custom HTTP headers ID
+by [listing all the custom HTTP headers](#list-streaming-destinations) for the group.
+
+```graphql
+mutation {
+ auditEventsStreamingHeadersUpdate(input: { headerId: "gid://gitlab/AuditEvents::Streaming::Header/2", key: "new-key", value: "new-value", active: false }) {
+ errors
+ header {
+ id
+ key
+ value
+ active
+ }
+ }
+}
+```
+
Group owners can remove an HTTP header using the GraphQL `auditEventsStreamingHeadersDestroy` mutation. You can retrieve the header ID
by [listing all the custom HTTP headers](#list-streaming-destinations) for the group.
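
For example, a destroy request might look like the following sketch, which assumes the same `headerId` input pattern as the update mutation above:

```graphql
mutation {
  # The header ID is a placeholder; retrieve the real one by listing the custom HTTP headers.
  auditEventsStreamingHeadersDestroy(input: { headerId: "gid://gitlab/AuditEvents::Streaming::Header/2" }) {
    errors
  }
}
```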
diff --git a/doc/administration/audit_event_streaming/index.md b/doc/administration/audit_event_streaming/index.md
index 8f40dc6c34c..09474db1e08 100644
--- a/doc/administration/audit_event_streaming/index.md
+++ b/doc/administration/audit_event_streaming/index.md
@@ -206,7 +206,9 @@ To add Google Cloud Logging streaming destinations to a top-level group:
1. Select **Secure > Audit events**.
1. On the main area, select **Streams** tab.
1. Select **Add streaming destination** and select **Google Cloud Logging** to show the section for adding destinations.
-1. Enter the Google project ID, Google client email, log ID, and Google private key to add.
+1. Enter a random string to use as a name for the new destination.
+1. Enter the Google project ID, Google client email, and Google private key from the previously created Google Cloud service account key to add to the new destination.
+1. Enter a random string to use as a log ID for the new destination. You can use this later to filter log results in Google Cloud.
1. Select **Add** to add the new streaming destination.
#### List Google Cloud Logging destinations
@@ -236,7 +238,9 @@ To update Google Cloud Logging streaming destinations to a top-level group:
1. Select **Secure > Audit events**.
1. On the main area, select **Streams** tab.
1. Select the Google Cloud Logging stream to expand.
-1. Enter the Google project ID, Google client email, and log ID to update.
+1. Enter a random string to use as a name for the destination.
+1. Enter the Google project ID and Google client email from the previously created Google Cloud service account key to update the destination.
+1. Enter a random string to update the log ID for the destination. You can use this later to filter log results in Google Cloud.
1. Select **Add a new private key** and enter a Google private key to update the private key.
1. Select **Save** to update the streaming destination.
@@ -255,6 +259,85 @@ To delete Google Cloud Logging streaming destinations to a top-level group:
1. Select **Delete destination**.
1. Confirm by selecting **Delete destination** in the dialog.
+### AWS S3 destinations
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132603) in GitLab 16.6 [with a flag](../feature_flags.md) named `allow_streaming_audit_events_to_amazon_s3`. Enabled by default.
+
+FLAG:
+On self-managed GitLab, by default this feature is available. To hide the feature per group, an administrator can [disable the feature flag](../feature_flags.md) named `allow_streaming_audit_events_to_amazon_s3`.
+On GitLab.com, this feature is available.
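+
+For example, on a self-managed instance an administrator might run something like the following in the Rails console to toggle the flag for a single group. This is a sketch; the group path is a placeholder:
+
+```ruby
+# Hide the AWS S3 streaming option for one group (path is illustrative).
+Feature.disable(:allow_streaming_audit_events_to_amazon_s3, Group.find_by_full_path('my-top-level-group'))
+
+# Re-enable it later.
+Feature.enable(:allow_streaming_audit_events_to_amazon_s3, Group.find_by_full_path('my-top-level-group'))
+```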
+
+Manage AWS S3 destinations for top-level groups.
+
+#### Prerequisites
+
+Before setting up AWS S3 streaming audit events, you must:
+
+1. Create an access key for AWS with the appropriate credentials and permissions. This access key is used to configure audit log streaming authentication.
+   For more information, see [Managing access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html?icmpid=docs_iam_console#Using_CreateAccessKey).
+1. Create an AWS S3 bucket. This bucket is used to store audit log streaming data. For more information, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html).
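+
+For example, you might create both prerequisites with the AWS CLI. This is only a sketch: the IAM user name, bucket name, and region are placeholders, and your organization's IAM policies determine the exact permissions required.
+
+```shell
+# Create an access key for an existing IAM user (placeholder user name).
+aws iam create-access-key --user-name gitlab-audit-streamer
+
+# Create the bucket that stores the streamed audit events (placeholder bucket name and region).
+aws s3api create-bucket --bucket example-gitlab-audit-events --region us-east-1
+```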
+
+#### Add a new AWS S3 destination
+
+Prerequisites:
+
+- Owner role for a top-level group.
+
+To add AWS S3 streaming destinations to a top-level group:
+
+1. On the left sidebar, select **Search or go to** and find your group.
+1. Select **Secure > Audit events**.
+1. On the main area, select **Streams** tab.
+1. Select **Add streaming destination** and select **AWS S3** to show the section for adding destinations.
+1. Enter a random string to use as a name for the new destination.
+1. Enter the Access Key ID, Secret Access Key, Bucket Name, and AWS Region from the previously created AWS access key and bucket to add to the new destination.
+1. Select **Add** to add the new streaming destination.
+
+#### List AWS S3 destinations
+
+Prerequisites:
+
+- Owner role for a top-level group.
+
+To list AWS S3 streaming destinations for a top-level group:
+
+1. On the left sidebar, select **Search or go to** and find your group.
+1. Select **Secure > Audit events**.
+1. On the main area, select **Streams** tab.
+1. Select the AWS S3 stream to expand and see all the fields.
+
+#### Update an AWS S3 destination
+
+Prerequisites:
+
+- Owner role for a top-level group.
+
+To update AWS S3 streaming destinations for a top-level group:
+
+1. On the left sidebar, select **Search or go to** and find your group.
+1. Select **Secure > Audit events**.
+1. On the main area, select **Streams** tab.
+1. Select the AWS S3 stream to expand.
+1. Enter a random string to use as a name for the destination.
+1. Enter the Access Key ID, Secret Access Key, Bucket Name, and AWS Region from the previously created AWS access key and bucket to update the destination.
+1. Select **Add a new Secret Access Key** and enter an AWS Secret Access Key to update the Secret Access Key.
+1. Select **Save** to update the streaming destination.
+
+#### Delete an AWS S3 streaming destination
+
+Prerequisites:
+
+- Owner role for a top-level group.
+
+To delete AWS S3 streaming destinations for a top-level group:
+
+1. On the left sidebar, select **Search or go to** and find your group.
+1. Select **Secure > Audit events**.
+1. On the main area, select the **Streams** tab.
+1. Select the AWS S3 stream to expand.
+1. Select **Delete destination**.
+1. Confirm by selecting **Delete destination** in the dialog.
+
## Instance streaming destinations **(ULTIMATE SELF)**
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/398107) in GitLab 16.1 [with a flag](../feature_flags.md) named `ff_external_audit_events`. Disabled by default.
@@ -446,7 +529,9 @@ To add Google Cloud Logging streaming destinations to an instance:
1. On the left sidebar, select **Monitoring > Audit Events**.
1. On the main area, select **Streams** tab.
1. Select **Add streaming destination** and select **Google Cloud Logging** to show the section for adding destinations.
-1. Enter the Google project ID, Google client email, log ID, and Google private key to add.
+1. Enter a random string to use as a name for the new destination.
+1. Enter the Google project ID, Google client email, and Google private key from the previously created Google Cloud service account key to add to the new destination.
+1. Enter a random string to use as a log ID for the new destination. You can use this later to filter log results in Google Cloud.
1. Select **Add** to add the new streaming destination.
#### List Google Cloud Logging destinations
@@ -476,7 +561,9 @@ To update Google Cloud Logging streaming destinations to an instance:
1. On the left sidebar, select **Monitoring > Audit Events**.
1. On the main area, select **Streams** tab.
1. Select the Google Cloud Logging stream to expand.
-1. Enter the Google project ID, Google client email, and log ID to update.
+1. Enter a random string to use as a name for the destination.
+1. Enter the Google project ID and Google client email from the previously created Google Cloud service account key to update the destination.
+1. Enter a random string to update the log ID for the destination. You can use this later to filter log results in Google Cloud.
1. Select **Add a new private key** and enter a Google private key to update the private key.
1. Select **Save** to update the streaming destination.
diff --git a/doc/administration/audit_events.md b/doc/administration/audit_events.md
index 736f381e9d7..ba1a4ca05c4 100644
--- a/doc/administration/audit_events.md
+++ b/doc/administration/audit_events.md
@@ -6,171 +6,146 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Audit events **(PREMIUM ALL)**
-Use audit events to track important events, including who performed the related action and when.
-You can use audit events to track, for example:
+A security audit is an in-depth analysis and review of your infrastructure that is used to identify
+areas of concern and potentially hazardous practices. To assist with the audit process, GitLab provides
+audit events, which allow you to track a variety of actions within GitLab.
+
+For example, you can use audit events to track:
- Who changed the permission level of a particular user for a GitLab project, and when.
- Who added a new user or removed a user, and when.
-Audit events are similar to the [log system](logs/index.md).
-
-The GitLab API, database, and `audit_json.log` record many audit events. Some audit events are only available through
-[streaming audit events](audit_event_streaming.md).
+These events can be used in an audit to assess risk, strengthen security measures, respond to incidents, and adhere to compliance requirements. For a complete list of the audit events GitLab provides, see [Audit event types](../administration/audit_event_streaming/audit_event_types.md).
-You can also generate an [audit report](audit_reports.md) of audit events.
+## Prerequisites
-NOTE:
-You can't configure a retention policy for audit events, but epic
-[7917](https://gitlab.com/groups/gitlab-org/-/epics/7917) proposes to change this.
+To view specific types of audit events, you need a minimum role.
-## Time zones
-
-> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/242014) in GitLab 15.7, GitLab UI shows dates and times in the user's local time zone instead of UTC.
+- To view the group audit events of all users in a group, you must have the [Owner role](../user/permissions.md#roles) for the group.
+- To view the project audit events of all users in a project, you must have at least the [Maintainer role](../user/permissions.md#roles) for the project.
+- To view the group and project audit events based on your own actions in a group or project, you must have at least the [Developer role](../user/permissions.md#roles)
+ for the group or project.
-The time zone used for audit events depends on where you view them:
+Users with the [Auditor access level](auditor_users.md) can see group and project events for all users.
-- In GitLab UI, your local time zone (GitLab 15.7 and later) or UTC (GitLab 15.6 and earlier) is used.
-- The [Audit Events API](../api/audit_events.md) returns dates and times in UTC by default, or the
- [configured time zone](timezone.md) on a self-managed GitLab instance.
-- In `audit_json.log`, UTC is used.
-- In CSV exports, UTC is used.
+## Viewing audit events
-## View audit events
+Audit events can be viewed at the group, project, instance, and sign-in level. Each level logs a different set of audit events.
-Depending on the events you want to view, at a minimum you must have:
-
-- For group audit events of all users in the group, the Owner role for the group.
-- For project audit events of all users in the project, the Maintainer role for the project.
-- For group and project audit events based on your own actions, the Developer role for the group or project.
-- [Auditor users](auditor_users.md) can see group and project events for all users.
-
-You can view audit events scoped to a group or project.
+### Group audit events
To view a group's audit events:
-1. Go to the group.
+1. On the left sidebar, select **Search or go to** and find your group.
1. On the left sidebar, select **Secure > Audit events**.
+1. Filter the audit events by the member of the group (user) who performed the action and date range.
-Group events do not include project audit events. Group events can also be accessed using the
-[Group Audit Events API](../api/audit_events.md#group-audit-events). Group event queries are limited to a maximum of 30
-days.
+Group audit events can also be accessed using the [Group Audit Events API](../api/audit_events.md#group-audit-events). Group audit event queries are limited to a maximum of 30 days.
-To view a project's audit events:
+### Project audit events
-1. Go to the project.
+1. On the left sidebar, select **Search or go to** and find your project.
1. On the left sidebar, select **Secure > Audit events**.
+1. Filter the audit events by the member of the project (user) who performed the action and date range.
-Project events can also be accessed using the [Project Audit Events API](../api/audit_events.md#project-audit-events).
-Project event queries are limited to a maximum of 30 days.
+Project audit events can also be accessed using the [Project Audit Events API](../api/audit_events.md#project-audit-events). Project audit event queries are limited to a maximum of 30 days.
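+
+For example, a minimal request to the Project Audit Events API might look like the following; the token, domain, and project ID are placeholders:
+
+```shell
+# List audit events for an example project (ID 7) on an example instance.
+curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/7/audit_events"
+```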
-## View instance audit events **(PREMIUM SELF)**
+### Instance audit events **(PREMIUM SELF)**
You can view audit events from user actions across an entire GitLab instance.
-
To view instance audit events:
1. On the left sidebar, select **Search or go to**.
1. Select **Admin Area**.
1. On the left sidebar, select **Monitoring > Audit Events**.
+1. Filter by the following:
+ - Member of the project (user) who performed the action
+ - Group
+ - Project
+ - Date Range
+
+### Sign-in audit events **(FREE ALL)**
+
+Successful sign-in events are the only audit events available at all tiers. To see successful sign-in events:
+
+1. On the left sidebar, select your avatar.
+1. Select **Edit profile > Authentication log**.
-### Export to CSV
+After upgrading to a paid tier, you can also see successful sign-in events on audit event pages.
+
+## Exporting audit events
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/1449) in GitLab 13.4.
> - [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/285441) in GitLab 13.7.
> - Entity type `Gitlab::Audit::InstanceScope` for instance audit events [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418185) in GitLab 16.2.
-You can export the current view (including filters) of your instance audit events as a CSV file. To export the instance
-audit events to CSV:
+You can export the current view (including filters) of your instance audit events as a
+CSV (comma-separated values) file. To export the instance audit events to CSV:
1. On the left sidebar, select **Search or go to**.
1. Select **Admin Area**.
1. On the left sidebar, select **Monitoring > Audit Events**.
-1. Select the available search [filters](#filter-audit-events).
+1. Select the available search filters.
1. Select **Export as CSV**.
-The exported file:
-
-- Is sorted by `created_at` in ascending order.
-- Is limited to a maximum of 100 000 events. The remaining records are truncated when this limit is reached.
-
-Data is encoded with:
-
-- Comma as the column delimiter.
-- `"` to quote fields if necessary.
-- New lines separate rows.
+A download confirmation dialog then appears for you to download the CSV file. The exported CSV is limited
+to a maximum of 100,000 events. The remaining records are truncated when this limit is reached.
-The first row contains the headers, which are listed in the following table along with a description of the values:
+### Audit event CSV encoding
-| Column | Description |
-|:---------------------|:-------------------------------------------------------------------|
-| **ID** | Audit event `id`. |
-| **Author ID** | ID of the author. |
-| **Author Name** | Full name of the author. |
-| **Entity ID** | ID of the scope. |
-| **Entity Type** | Type of the scope (`Project`, `Group`, `User`, or `Gitlab::Audit::InstanceScope`). |
-| **Entity Path** | Path of the scope. |
-| **Target ID** | ID of the target. |
-| **Target Type** | Type of the target. |
-| **Target Details** | Details of the target. |
-| **Action** | Description of the action. |
-| **IP Address** | IP address of the author who performed the action. |
-| **Created At (UTC)** | Formatted as `YYYY-MM-DD HH:MM:SS`. |
+The exported CSV file is encoded as follows:
-## View sign-in events **(FREE ALL)**
+- `,` is used as the column delimiter.
+- `"` is used to quote fields if necessary.
+- `\n` is used to separate rows.
-Successful sign-in events are the only audit events available at all tiers. To see successful sign-in events:
+The first row contains the headers, which are listed in the following table along
+with a description of the values:
-1. On the left sidebar, select your avatar.
-1. Select **Edit profile > Authentication log**.
-
-After upgrading to a paid tier, you can also see successful sign-in events on audit event pages.
+| Column | Description |
+| --------------------- | ---------------------------------------------------------------------------------- |
+| **ID** | Audit event `id`. |
+| **Author ID** | ID of the author. |
+| **Author Name** | Full name of the author. |
+| **Entity ID** | ID of the scope. |
+| **Entity Type** | Type of the scope (`Project`, `Group`, `User`, or `Gitlab::Audit::InstanceScope`). |
+| **Entity Path** | Path of the scope. |
+| **Target ID** | ID of the target. |
+| **Target Type** | Type of the target. |
+| **Target Details** | Details of the target. |
+| **Action** | Description of the action. |
+| **IP Address** | IP address of the author who performed the action. |
+| **Created At (UTC)** | Formatted as `YYYY-MM-DD HH:MM:SS`. |
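+
+For example, a single exported row (with illustrative values only, matching the headers above) might look like:
+
+```plaintext
+ID,Author ID,Author Name,Entity ID,Entity Type,Entity Path,Target ID,Target Type,Target Details,Action,IP Address,Created At (UTC)
+18,42,Administrator,1,Group,example-group,96,User,example-user,Added user access as Developer,127.0.0.1,2023-11-14 08:19:56
+```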
-## Filter audit events
-
-From audit events pages, different filters are available depending on the page you're on.
-
-| Audit event page | Available filter |
-|:-----------------|:-----------------------------------------------------------------------------------------------------------------------|
-| Project | User (member of the project) who performed the action. |
-| Group | User (member of the group) who performed the action. |
-| Instance | Group, project, or user. |
-| All | Date range buttons and pickers (maximum range of 31 days). Default is from the first day of the month to today's date. |
+All items are sorted by `created_at` in ascending order.
## User impersonation
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/536) in GitLab 13.0.
> - Impersonation session events included in group audit events in GitLab 14.8.
-When a user is [impersonated](../administration/admin_area.md#user-impersonation), their actions are logged as audit events
-with additional details:
+When a user is [impersonated](../administration/admin_area.md#user-impersonation), their actions are logged as audit events with the following additional details:
-- Audit events include information about the impersonating administrator. These audit events are visible in audit event
- pages depending on the audit event type (group, project, or user).
-- Extra audit events are recorded for the start and end of the administrator's impersonation session. These audit events
- are visible as:
- - Instance audit events.
- - Group audit events for all groups the user belongs to. For performance reasons, group audit events are limited to
- the oldest 20 groups you belong to.
+- Audit events include information about the impersonating administrator.
+- Extra audit events are recorded for the start and end of the administrator's impersonation session.
![Audit event with impersonated user](img/impersonated_audit_events_v15_7.png)
-## Available audit events
+## Time zones
-For a list of available audit events, see [Audit event types](../administration/audit_event_streaming/audit_event_types.md).
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/242014) in GitLab 15.7, GitLab UI shows dates and times in the user's local time zone instead of UTC.
-## Unsupported events
+The time zone used for audit events depends on where you view them:
-Some events are not tracked in audit events. The following epics and issues propose support for more events:
+- In GitLab UI, your local time zone (GitLab 15.7 and later) or UTC (GitLab 15.6 and earlier) is used.
+- The [Audit Events API](../api/audit_events.md) returns dates and times in UTC by default, or the
+ [configured time zone](timezone.md) on a self-managed GitLab instance.
+- In CSV exports, UTC is used.
-- [Project settings and activity](https://gitlab.com/groups/gitlab-org/-/epics/474).
-- [Group settings and activity](https://gitlab.com/groups/gitlab-org/-/epics/475).
-- [Instance-level settings and activity](https://gitlab.com/groups/gitlab-org/-/epics/476).
-- [Deployment Approval activity](https://gitlab.com/gitlab-org/gitlab/-/issues/354782).
-- [Approval rules processing by a non GitLab user](https://gitlab.com/gitlab-org/gitlab/-/issues/407384).
+## Contribute to audit events
If you don't see the event you want, you can either:
- Use the **Audit Event Proposal** issue template to
- [create an issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Audit%20Event%20Proposal) to
- request it.
+ [create an issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Audit%20Event%20Proposal) to request it.
- [Add it yourself](../development/audit_event_guide/index.md).
diff --git a/doc/administration/auditor_users.md b/doc/administration/auditor_users.md
index e9df9cc6e37..09d68e82782 100644
--- a/doc/administration/auditor_users.md
+++ b/doc/administration/auditor_users.md
@@ -25,6 +25,9 @@ Situations where auditor access for users could be helpful include:
you can create an account with auditor access and then share the credentials
with those users to which you want to grant access.
+NOTE:
+An auditor user counts as a billable user and consumes a license seat.
+
## Add a user with auditor access
To create a new user account with auditor access (or change an existing user):
diff --git a/doc/administration/auth/ldap/index.md b/doc/administration/auth/ldap/index.md
index bf2b3d7e53e..0c42ce90346 100644
--- a/doc/administration/auth/ldap/index.md
+++ b/doc/administration/auth/ldap/index.md
@@ -448,7 +448,7 @@ These LDAP sync configuration settings are available:
| Setting | Description | Required | Examples |
|-------------------|-------------|----------|----------|
-| `group_base` | Base used to search for groups. | **{dotted-circle}** No (required when `external_groups` is configured) | `'ou=groups,dc=gitlab,dc=example'` |
+| `group_base` | Base used to search for groups. All valid groups have this base as part of their DN. | **{dotted-circle}** No (required when `external_groups` is configured) | `'ou=groups,dc=gitlab,dc=example'` |
| `admin_group` | The CN of a group containing GitLab administrators. Not `cn=administrators` or the full DN. | **{dotted-circle}** No | `'administrators'` |
| `external_groups` | An array of CNs of groups containing users that should be considered external. Not `cn=interns` or the full DN. | **{dotted-circle}** No | `['interns', 'contractors']` |
| `sync_ssh_keys` | The LDAP attribute containing a user's public SSH key. | **{dotted-circle}** No | `'sshPublicKey'` or false if not set |
diff --git a/doc/administration/backup_restore/backup_gitlab.md b/doc/administration/backup_restore/backup_gitlab.md
index 05a330bf3f5..5c0fcbbc4ef 100644
--- a/doc/administration/backup_restore/backup_gitlab.md
+++ b/doc/administration/backup_restore/backup_gitlab.md
@@ -437,7 +437,9 @@ sudo -u git -H bundle exec rake gitlab:backup:create SKIP=tar RAILS_ENV=producti
#### Create server-side repository backups
-> [Introduced](https://gitlab.com/gitlab-org/gitaly/-/issues/4941) in GitLab 16.3.
+> - [Introduced](https://gitlab.com/gitlab-org/gitaly/-/issues/4941) in GitLab 16.3.
+> - Server-side support for restoring a specified backup instead of the latest backup [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132188) in GitLab 16.6.
+> - Server-side support for creating incremental backups [introduced](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/6475) in GitLab 16.6.
Instead of storing large repository backups in the backup archive, repository
backups can be configured so that the Gitaly node that hosts each repository is
@@ -504,6 +506,7 @@ sudo -u git -H bundle exec rake gitlab:backup:create GITLAB_BACKUP_MAX_CONCURREN
> - Introduced in GitLab 14.9 [with a flag](../feature_flags.md) named `incremental_repository_backup`. Disabled by default.
> - [Enabled on self-managed](https://gitlab.com/gitlab-org/gitlab/-/issues/355945) in GitLab 14.10.
> - `PREVIOUS_BACKUP` option [introduced](https://gitlab.com/gitlab-org/gitaly/-/issues/4184) in GitLab 15.0.
+> - Server-side support for creating incremental backups [introduced](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/6475) in GitLab 16.6.
FLAG:
On self-managed GitLab, by default this feature is available. To hide the feature, an administrator can [disable the feature flag](../feature_flags.md) named `incremental_repository_backup`.
@@ -853,7 +856,7 @@ For the Linux package (Omnibus):
## If you have CNAME buckets (foo.example.com), you might run into SSL issues
## when uploading backups ("hostname foo.example.com.storage.googleapis.com
- ## does not match the server certificate"). In that case, uncomnent the following
+ ## does not match the server certificate"). In that case, uncomment the following
## setting. See: https://github.com/fog/fog/issues/2834
#'path_style' => true
}
@@ -1272,7 +1275,7 @@ Gitaly Cluster [does not support snapshot backups](../gitaly/index.md#snapshot-b
When considering using file system data transfer or snapshots:
- Don't use these methods to migrate from one operating system to another. The operating systems of the source and destination should be as similar as possible. For example,
- don't use these methods to migrate from Ubuntu to Fedora.
+ don't use these methods to migrate from Ubuntu to RHEL.
- Data consistency is very important. You should stop GitLab with `sudo gitlab-ctl stop` before doing a file system transfer (with `rsync`, for example) or taking a
snapshot.
diff --git a/doc/administration/cicd.md b/doc/administration/cicd.md
index 7a6316a1e50..10bc60fe399 100644
--- a/doc/administration/cicd.md
+++ b/doc/administration/cicd.md
@@ -18,7 +18,7 @@ CI/CD to be disabled by default in new projects by modifying the settings in:
- `gitlab.rb` for Linux package installations.
Existing projects that already had CI/CD enabled are unchanged. Also, this setting only changes
-the project default, so project owners [can still enable CI/CD in the project settings](../ci/enable_or_disable_ci.md).
+the project default, so project owners [can still enable CI/CD in the project settings](../ci/pipelines/settings.md#disable-gitlab-cicd-pipelines).
For self-compiled installations:
@@ -93,14 +93,96 @@ To change the frequency of the pipeline schedule worker:
For example, to set the maximum frequency of pipelines to twice a day, set `pipeline_schedule_worker_cron`
to a cron value of `0 */12 * * *` (`00:00` and `12:00` every day).
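+
+For a Linux package installation, this is typically set in `/etc/gitlab/gitlab.rb` (a sketch; verify that the `gitlab_rails['pipeline_schedule_worker_cron']` setting name applies to your version):
+
+```ruby
+# /etc/gitlab/gitlab.rb
+# Run the pipeline schedule worker at 00:00 and 12:00 every day.
+gitlab_rails['pipeline_schedule_worker_cron'] = "0 */12 * * *"
+```
+
+Reconfigure GitLab for the change to take effect.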
-<!-- ## Troubleshooting
+## Disaster recovery
-Include any troubleshooting steps that you can foresee. If you know beforehand what issues
-one might have when setting this up, or when something is changed, or on upgrading, it's
-important to describe those, too. Think of things that may go wrong and include them here.
-This is important to minimize requests for support, and to avoid doc comments with
-questions that you know someone might ask.
+You can disable some important but computationally expensive parts of the application
+to relieve stress on the database during ongoing downtime.
-Each scenario can be a third-level heading, for example `### Getting error message X`.
-If you have none to add when creating a doc, leave this section in place
-but commented out to help encourage others to add to it in the future. -->
+### Disable fair scheduling on shared runners
+
+When clearing a large backlog of jobs, you can temporarily enable the `ci_queueing_disaster_recovery_disable_fair_scheduling`
+[feature flag](../administration/feature_flags.md). This flag disables fair scheduling
+on shared runners, which reduces system resource usage on the `jobs/request` endpoint.
+
+When enabled, jobs are processed in the order they were put in the system, instead of
+balanced across many projects.
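+
+A minimal Rails console sketch for toggling this flag (assuming [Rails console](../administration/operations/rails_console.md) access):
+
+```ruby
+# Enable the fair scheduling bypass while clearing the backlog.
+Feature.enable(:ci_queueing_disaster_recovery_disable_fair_scheduling)
+
+# Disable it again after recovery.
+Feature.disable(:ci_queueing_disaster_recovery_disable_fair_scheduling)
+```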
+
+### Disable compute quota enforcement
+
+To disable the enforcement of [compute quotas](../ci/pipelines/cicd_minutes.md) on shared runners, you can temporarily
+enable the `ci_queueing_disaster_recovery_disable_quota` [feature flag](../administration/feature_flags.md).
+This flag reduces system resource usage on the `jobs/request` endpoint.
+
+When enabled, jobs created in the last hour can run in projects which are out of quota.
+Earlier jobs are already canceled by a periodic background worker (`StuckCiJobsWorker`).
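+
+Similarly, a minimal Rails console sketch for this flag:
+
+```ruby
+# Bypass compute quota enforcement during the incident.
+Feature.enable(:ci_queueing_disaster_recovery_disable_quota)
+
+# Check whether the flag is currently enabled.
+Feature.enabled?(:ci_queueing_disaster_recovery_disable_quota)
+```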
+
+## CI/CD troubleshooting Rails console commands
+
+The following commands are run in the [Rails console](../administration/operations/rails_console.md#starting-a-rails-console-session).
+
+WARNING:
+Any command that changes data directly could be damaging if not run correctly, or under the right conditions.
+We highly recommend running them in a test environment with a backup of the instance ready to be restored, just in case.
+
+### Cancel stuck pending pipelines
+
+```ruby
+project = Project.find_by_full_path('<project_path>')
+Ci::Pipeline.where(project_id: project.id).where(status: 'pending').count
+Ci::Pipeline.where(project_id: project.id).where(status: 'pending').each {|p| p.cancel if p.stuck?}
+Ci::Pipeline.where(project_id: project.id).where(status: 'pending').count
+```
+
+### Try merge request integration
+
+```ruby
+project = Project.find_by_full_path('<project_path>')
+mr = project.merge_requests.find_by(iid: <merge_request_iid>)
+mr.project.try(:ci_integration)
+```
+
+### Validate the `.gitlab-ci.yml` file
+
+```ruby
+project = Project.find_by_full_path('<project_path>')
+# Fetch the CI/CD configuration from the project's default branch
+content = project.ci_config_for(project.repository.root_ref_sha)
+Gitlab::Ci::Lint.new(project: project, current_user: User.first).validate(content)
+```
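+
+The returned result can be inspected directly; a sketch, assuming the current `Gitlab::Ci::Lint::Result` interface:
+
+```ruby
+result = Gitlab::Ci::Lint.new(project: project, current_user: User.first).validate(content)
+result.valid?  # => true if the configuration is valid
+result.errors  # => array of error messages, if any
+```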
+
+### Disable AutoDevOps on existing projects
+
+```ruby
+# Turn off Auto DevOps for every existing project
+Project.find_each do |project|
+  project.auto_devops_attributes = { "enabled" => "0" }
+  project.save
+end
+```
+
+### Obtain runners registration token
+
+```ruby
+Gitlab::CurrentSettings.current_application_settings.runners_registration_token
+```
+
+### Seed runners registration token
+
+```ruby
+app_setting = Gitlab::CurrentSettings.current_application_settings
+app_setting.set_runners_registration_token('<new-runners-registration-token>')
+app_setting.save!
+```
+
+### Run pipeline schedules manually
+
+You can run pipeline schedules manually through the Rails console to reveal any errors that are usually not visible.
+
+```ruby
+# schedule_id can be obtained from the Edit Pipeline Schedule page
+schedule = Ci::PipelineSchedule.find_by(id: <schedule_id>)
+
+# Select the user that you want to run the schedule for
+user = User.find_by_username('<username>')
+
+# Run the schedule
+ps = Ci::CreatePipelineService.new(schedule.project, user, ref: schedule.ref).execute!(:schedule, ignore_skip_ci: true, save_on_errors: false, schedule: schedule)
+```
diff --git a/doc/administration/dedicated/index.md b/doc/administration/dedicated/index.md
index 2889fb9b389..16efc353c84 100644
--- a/doc/administration/dedicated/index.md
+++ b/doc/administration/dedicated/index.md
@@ -38,7 +38,7 @@ After you first sign in to Switchboard, you must update your password and set up
The following stages guide you through a series of four steps to provide the information required to create your GitLab Dedicated tenant.
1. Confirm account details: Confirm key attributes of your GitLab Dedicated account:
- - Reference architecture: Corresponds with the number of users you provided to your account team when beginning the onboarding process. For more information, see [reference architectures](../../administration/reference_architectures/index.md).
+ - Reference architecture: Corresponds with the number of users you provided to your account team when beginning the onboarding process. For more information, see [reference architectures](../../subscriptions/gitlab_dedicated/index.md#availability-and-scalability).
- Total repository storage size: Corresponds with the storage size you provided to your account team when beginning the onboarding process.
- If you need to make changes to these attributes, [submit a support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650).
1. Tenant configuration: Provides the minimum required information needed to create your GitLab Dedicated tenant:
@@ -214,7 +214,9 @@ Make sure the AWS KMS keys are replicated to your desired primary, secondary and
## Configuration changes
-To change or update the configuration for your GitLab Dedicated instance, open a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650) with your request. You can request configuration changes for the options originally specified during onboarding, or for any of the following optional features.
+With Switchboard, you can make a limited set of configuration changes to your GitLab Dedicated instance. As Switchboard matures, more configuration options will become available.
+
+To change or update the configuration of your GitLab Dedicated instance, use Switchboard following the instructions in the relevant section or open a [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650) with your request. You can request configuration changes for the options originally specified during onboarding, or for any of the following optional features.
The turnaround time to process configuration change requests is [documented in the GitLab handbook](https://about.gitlab.com/handbook/engineering/infrastructure/team/gitlab-dedicated/#handling-configuration-changes-for-tenant-environments).
@@ -278,10 +280,22 @@ To enable an Outbound Private Link:
GitLab then configures the tenant instance to create the necessary Endpoint Interfaces based on the service names you provided. Any matching outbound
connections made from the tenant GitLab instance are directed through the PrivateLink into your VPC.
-#### Custom certificates
+### Custom certificates
In some cases, the GitLab Dedicated instance can't reach an internal service you own because it exposes a certificate that can't be validated using a public Certification Authority (CA). In these cases, custom certificates are required.
+#### Add a custom certificate with Switchboard
+
+1. Log in to [Switchboard](https://console.gitlab-dedicated.com/).
+1. At the top of the page, select **Configuration**.
+1. Expand **Custom Certificate Authorities**.
+1. Select **+ Add Certificate**.
+1. Paste the certificate into the text box.
+1. Select **Save**.
+1. Scroll up to the top of the page and select whether to apply the changes immediately or during the next maintenance window.
+
+#### Add a custom certificate with a Support Request
+
To request that GitLab add custom certificates when communicating with your services over PrivateLink, attach the custom public certificate files to your [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650).
#### Maximum number of reverse PrivateLink connections
@@ -292,6 +306,19 @@ GitLab Dedicated limits the number of reverse PrivateLink connections to 10.
GitLab Dedicated allows you to control which IP addresses can access your instance through an IP allowlist.
+#### Add an IP to the allowlist with Switchboard
+
+1. Log in to [Switchboard](https://console.gitlab-dedicated.com/).
+1. At the top of the page, select **Configuration**.
+1. Expand **Allowed Source List Config / IP allowlist**.
+1. Turn on the **Enable** toggle.
+1. Select **Add Item**.
+1. Enter the IP address and description. To add another IP address, repeat steps 5 and 6.
+1. Select **Save**.
+1. Scroll up to the top of the page and select whether to apply the changes immediately or during the next maintenance window.
+
+#### Add an IP to the allowlist with a Support Request
+
Specify a comma separated list of IP addresses that can access your GitLab Dedicated instance in your [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650). After the configuration has been applied, when an IP not on the allowlist tries to access your instance, the connection is refused.
### SAML
@@ -303,6 +330,23 @@ Prerequisites:
- You must configure the identity provider before sending the required data to GitLab.
+#### Activate SAML with Switchboard
+
+To activate SAML for your GitLab Dedicated instance:
+
+1. Log in to [Switchboard](https://console.gitlab-dedicated.com/).
+1. At the top of the page, select **Configuration**.
+1. Expand **SAML Config**.
+1. Turn on the **Enable** toggle.
+1. Complete the fields.
+1. Select **Save**.
+1. Scroll up to the top of the page and select whether to apply the changes immediately or during the next maintenance window.
+1. To verify the SAML configuration is successful:
+ - Check that the SSO button description is displayed on your instance's sign-in page.
+ - Go to the metadata URL of your instance (`https://INSTANCE-URL/users/auth/saml/metadata`). This page can be used to simplify much of the configuration of the identity provider, and manually validate the settings.
+
+#### Activate SAML with a Support Request
+
To activate SAML for your GitLab Dedicated instance:
1. To make the necessary changes, include the desired [SAML configuration block](../../integration/saml.md#configure-saml-support-in-gitlab) for your GitLab application in your [support ticket](https://support.gitlab.com/hc/en-us/requests/new?ticket_form_id=4414917877650). At a minimum, GitLab needs the following information to enable SAML for your instance:
diff --git a/doc/administration/geo/disaster_recovery/index.md b/doc/administration/geo/disaster_recovery/index.md
index d6f6211ed4c..2f636dc6ba4 100644
--- a/doc/administration/geo/disaster_recovery/index.md
+++ b/doc/administration/geo/disaster_recovery/index.md
@@ -88,6 +88,7 @@ Note the following when promoting a secondary:
- If you encounter an `ActiveRecord::RecordInvalid: Validation failed: Name has already been taken`
error message during this process, for more information, see this
[troubleshooting advice](../replication/troubleshooting.md#fixing-errors-during-a-failover-or-when-promoting-a-secondary-to-a-primary-site).
+- You should [point the primary domain DNS at the newly promoted site](#step-4-optional-updating-the-primary-domain-dns-record). Otherwise, runners must be registered again with the newly promoted site, and all Git remotes, bookmarks, and external integrations must be updated.
#### Promoting a **secondary** site running on a single node running GitLab 14.5 and later
diff --git a/doc/administration/geo/index.md b/doc/administration/geo/index.md
index 78bd685e06f..e8b2cb38563 100644
--- a/doc/administration/geo/index.md
+++ b/doc/administration/geo/index.md
@@ -19,8 +19,6 @@ Fetching large repositories can take a long time for teams located far from a si
Geo provides local, read-only sites of your GitLab instances. This can reduce the time it takes
to clone and fetch large repositories, speeding up development.
-For a video introduction to Geo, see [Introduction to GitLab Geo - GitLab Features](https://www.youtube.com/watch?v=-HDLxSjEh6w).
-
To make sure you're using the right version of the documentation, go to [the Geo page on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/administration/geo/index.md) and choose the appropriate release from the **Switch branch/tag** dropdown list. For example, [`v13.7.6-ee`](https://gitlab.com/gitlab-org/gitlab/-/blob/v13.7.6-ee/doc/administration/geo/index.md).
Geo uses a set of defined terms that are described in the [Geo Glossary](glossary.md).
@@ -208,6 +206,7 @@ This list of limitations only reflects the latest version of GitLab. If you are
- For Git over SSH, to make the project clone URL display correctly regardless of which site you are browsing, secondary sites must use the same port as the primary. [GitLab issue #339262](https://gitlab.com/gitlab-org/gitlab/-/issues/339262) proposes to remove this limitation.
- Git push over SSH against a secondary site does not work for pushes over 1.86 GB. [GitLab issue #413109](https://gitlab.com/gitlab-org/gitlab/-/issues/413109) tracks this bug.
- Backups [cannot be run on secondaries](replication/troubleshooting.md#message-error-canceling-statement-due-to-conflict-with-recovery).
+- Git clone and fetch requests with option `--depth` over SSH against a secondary site do not work and hang indefinitely if the secondary site is not up to date at the time the request is initiated. For more information, see [issue 391980](https://gitlab.com/gitlab-org/gitlab/-/issues/391980).
### Limitations on replication/verification
diff --git a/doc/administration/geo/replication/troubleshooting.md b/doc/administration/geo/replication/troubleshooting.md
index 3c2d43d196a..dd021695800 100644
--- a/doc/administration/geo/replication/troubleshooting.md
+++ b/doc/administration/geo/replication/troubleshooting.md
@@ -1231,7 +1231,7 @@ status
### Failed verification of Uploads on the primary Geo site
-If some Uploads verification is failing on the primary Geo site with the `verification_checksum: nil` and `verification_failure: Error during verification: undefined method 'underscore' for NilClass:Class` errros, this can be due to orphaned Uploads. The parent record owning the Upload (the Upload's `model`) has somehow been deleted, but the Upload record still exists. These verification failures are false.
+If verification of some uploads is failing on the primary Geo site with `verification_checksum = nil` and the ``verification_failure = Error during verification: undefined method `underscore' for NilClass:Class`` error, this can be caused by orphaned uploads. The parent record that owns the upload (the upload's `model`) has somehow been deleted, but the `Upload` record still exists. These verification failures are false positives.
You can find these errors in the `geo.log` file on the primary Geo site.
@@ -1249,7 +1249,7 @@ You can delete these Upload records on the primary Geo site to get rid of these
uploads = Geo::UploadState.where(
verification_checksum: nil,
verification_state: 3,
- verification_failure: "Error during verification: undefined method 'underscore' for NilClass:Class"
+ verification_failure: "Error during verification: undefined method `underscore' for NilClass:Class"
).pluck(:upload_id)
uploads_deleted = 0
@@ -1434,8 +1434,8 @@ If you are using the Linux package installation, something might have failed dur
### GitLab indicates that more than 100% of repositories were synced
-This can be caused by orphaned records in the project registry. You can clear them
-[using the Rake task to remove orphaned project registries](../../../administration/raketasks/geo.md#remove-orphaned-project-registries).
+This can be caused by orphaned records in the project registry. These records are cleaned up
+periodically by a registry worker, so give it some time to resolve the issue on its own.
### Secondary site shows "Unhealthy" in UI after changing the value of `external_url` for the primary site
diff --git a/doc/administration/geo/setup/index.md b/doc/administration/geo/setup/index.md
index ea3bb5afc24..f59dec17f8b 100644
--- a/doc/administration/geo/setup/index.md
+++ b/doc/administration/geo/setup/index.md
@@ -31,6 +31,8 @@ a single-node Geo site or a multi-node Geo site.
If both Geo sites are based on the [1K reference architecture](../../reference_architectures/1k_users.md), follow
[Set up Geo for two single-node sites](two_single_node_sites.md).
+If using external PostgreSQL services, for example Amazon RDS, follow [Set up Geo for two single-node sites (with external PostgreSQL services)](two_single_node_external_services.md).
+
Depending on your GitLab deployment, [additional configuration](#additional-configuration) for LDAP, object storage, and the Container Registry might be required.
### Multi-node Geo sites
diff --git a/doc/administration/gitaly/configure_gitaly.md b/doc/administration/gitaly/configure_gitaly.md
index f62f0a5a4e2..15ace9c4ed9 100644
--- a/doc/administration/gitaly/configure_gitaly.md
+++ b/doc/administration/gitaly/configure_gitaly.md
@@ -27,6 +27,7 @@ The following configuration options are also available:
- Enabling [TLS support](#enable-tls-support).
- Limiting [RPC concurrency](#limit-rpc-concurrency).
+- Limiting [pack-objects concurrency](#limit-pack-objects-concurrency).
## About the Gitaly token
@@ -361,7 +362,7 @@ Configure Gitaly server in one of two ways:
WARNING:
If directly copying repository data from a GitLab server to Gitaly, ensure that the metadata file,
default path `/var/opt/gitlab/git-data/repositories/.gitaly-metadata`, is not included in the transfer.
-Copying this file causes GitLab to use the [Rugged patches](index.md#direct-access-to-git-in-gitlab) for repositories hosted on the Gitaly server,
+Copying this file causes GitLab to use direct disk access for repositories hosted on the Gitaly server,
leading to `Error creating pipeline` and `Commit not found` errors, or stale data.
### Configure Gitaly clients
@@ -665,6 +666,8 @@ Configure Gitaly with TLS in one of two ways:
```
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation).
+1. Run `sudo gitlab-rake gitlab:gitaly:check` on the Gitaly client (for example, the
+ Rails application) to confirm it can connect to Gitaly servers.
1. Verify Gitaly traffic is being served over TLS by
[observing the types of Gitaly connections](#observe-type-of-gitaly-connections).
1. Optional. Improve security by:
@@ -751,6 +754,43 @@ Configure Gitaly with TLS in one of two ways:
::EndTabs
+#### Update the certificates
+
+To update the Gitaly certificates after initial configuration:
+
+::Tabs
+
+:::TabTitle Linux package (Omnibus)
+
+If the content of your SSL certificates under the `/etc/gitlab/ssl` directory has been updated, but no configuration changes have been made to
+`/etc/gitlab/gitlab.rb`, then reconfiguring GitLab doesn't affect Gitaly. Instead, you must restart Gitaly manually for the certificates to be loaded
+by the Gitaly process:
+
+```shell
+sudo gitlab-ctl restart gitaly
+```
+
+If you change or update the certificates in `/etc/gitlab/trusted-certs` without making changes to the `/etc/gitlab/gitlab.rb` file, you must:
+
+1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) so the symlinks for the trusted certificates are updated.
+1. Restart Gitaly manually for the certificates to be loaded by the Gitaly process:
+
+ ```shell
+ sudo gitlab-ctl restart gitaly
+ ```
+
+:::TabTitle Self-compiled (source)
+
+If the content of your SSL certificates under the `/etc/gitlab/ssl` directory has been updated, you must
+[restart GitLab](../restart_gitlab.md#self-compiled-installations) for the certificates to be loaded by the Gitaly process.
+
+If you change or update the certificates in `/usr/local/share/ca-certificates`, you must:
+
+1. Run `sudo update-ca-certificates` to update the system's trusted store.
+1. [Restart GitLab](../restart_gitlab.md#self-compiled-installations) for the certificates to be loaded by the Gitaly process.
+
+::EndTabs
+
### Observe type of Gitaly connections
For information on observing the type of Gitaly connections being served, see the
@@ -866,6 +906,126 @@ When the pack-object cache is enabled, pack-objects limiting kicks in only if th
You can observe the behavior of this queue using Gitaly logs and Prometheus. For more information, see
[Monitor Gitaly pack-objects concurrency limiting](monitoring.md#monitor-gitaly-pack-objects-concurrency-limiting).
+## Adaptive concurrency limiting
+
+> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/10734) in GitLab 16.6.
+
+Gitaly supports two concurrency limits:
+
+- An [RPC concurrency limit](#limit-rpc-concurrency), which allows you to configure a maximum number of simultaneous in-flight requests for each
+ Gitaly RPC. The limit is scoped by RPC and repository.
+- A [Pack-objects concurrency limit](#limit-pack-objects-concurrency), which restricts the number of concurrent Git data transfer requests by IP.
+
+If a limit is exceeded, either:
+
+- The request is put in a queue.
+- The request is rejected if the queue is full or if the request remains in the queue for too long.
+
+Both of these concurrency limits can be configured statically. Though static limits can yield good protection results, they have some drawbacks:
+
+- Static limits are not good for all usage patterns. There is no one-size-fits-all value. If the limit is too low, big repositories are
+ negatively impacted. If the limit is too high, the protection is essentially lost.
+- It's tedious to maintain a reasonable value for the concurrency limit, especially when the workload of each repository changes over time.
+- A request can be rejected even though the server is idle because the rate doesn't factor in the load on the server.
+
+You can overcome all of these drawbacks and keep the benefits of concurrency limiting by configuring adaptive concurrency limits. Adaptive
+concurrency limits are optional and build on the two concurrency limiting types. They use the Additive Increase/Multiplicative Decrease (AIMD)
+algorithm. Each adaptive limit:
+
+- Gradually increases up to a certain upper limit during typical process functioning.
+- Quickly decreases when the host machine has a resource problem.
+
+This mechanism provides some headroom for the machine to "breathe" and speeds up current in-flight requests.
+
+![Gitaly Adaptive Concurrency Limit](img/gitaly_adaptive_concurrency_limit.png)
+
+The adaptive limiter calibrates the limits every 30 seconds and:
+
+- Increases the limits by one until reaching the upper limit.
+- Decreases the limits by half when the top-level cgroup either has memory usage that exceeds 90% (excluding highly-evictable page caches)
+  or has its CPU throttled for 50% or more of the observation time.
+
+For more information about the technical implementation of this system, see
+[the design blueprint](../../architecture/blueprints/gitaly_adaptive_concurrency_limit/index.md).
+
+Adaptive limiting is enabled for each RPC or pack-objects cache individually. However, limits are calibrated at the same time.
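+
+The following is an illustrative sketch of the AIMD idea only, not Gitaly's implementation, using hypothetical names:
+
+```ruby
+# Illustrative only: adjust a limit once per calibration tick.
+def calibrate(limit, min_limit:, max_limit:, under_pressure:)
+  if under_pressure
+    [limit / 2, min_limit].max # multiplicative decrease
+  else
+    [limit + 1, max_limit].min # additive increase
+  end
+end
+
+limit = 20
+limit = calibrate(limit, min_limit: 10, max_limit: 40, under_pressure: false) # => 21
+limit = calibrate(limit, min_limit: 10, max_limit: 40, under_pressure: true)  # => 10
+```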
+
+### Enable adaptiveness for RPC concurrency
+
+Prerequisites:
+
+- Because adaptive limiting depends on [control groups](#control-groups), control groups must be enabled before using adaptive limiting.
+
+The following example configures an adaptive limit for RPC concurrency:
+
+```ruby
+# in /etc/gitlab/gitlab.rb
+gitaly['configuration'] = {
+ # ...
+ concurrency: [
+ {
+ rpc: '/gitaly.SmartHTTPService/PostUploadPackWithSidechannel',
+ max_queue_wait: '1s',
+ max_queue_size: 10,
+ adaptive: true,
+ min_limit: 10,
+ initial_limit: 20,
+ max_limit: 40
+ },
+ {
+ rpc: '/gitaly.SSHService/SSHUploadPackWithSidechannel',
+ max_queue_wait: '10s',
+ max_queue_size: 20,
+ adaptive: true,
+ min_limit: 10,
+ initial_limit: 50,
+ max_limit: 100
+ },
+ ],
+}
+```
+
+In this example:
+
+- `adaptive` sets whether adaptiveness is enabled. If set to `true`, the `max_per_repo` value is ignored in favor of the following configuration.
+- `initial_limit` is the per-repository concurrency limit to use when Gitaly starts.
+- `max_limit` is the maximum per-repository concurrency limit of the configured RPC. Gitaly increases the current limit
+  until it reaches this number.
+- `min_limit` is the minimum per-repository concurrency limit of the configured RPC. When the host machine has a resource problem,
+  Gitaly quickly reduces the limit until it reaches this value.
+
+For more information, see [RPC concurrency](#limit-rpc-concurrency).
+
+### Enable adaptiveness for pack-objects concurrency
+
+Prerequisites:
+
+- Because adaptive limiting depends on [control groups](#control-groups), control groups must be enabled before using adaptive limiting.
+
+The following example configures an adaptive limit for pack-objects concurrency:
+
+```ruby
+# in /etc/gitlab/gitlab.rb
+gitaly['pack_objects_limiting'] = {
+ 'max_queue_length' => 200,
+ 'max_queue_wait' => '60s',
+ 'adaptive' => true,
+ 'min_limit' => 10,
+ 'initial_limit' => 20,
+ 'max_limit' => 40
+}
+```
+
+In this example:
+
+- `adaptive` sets whether adaptiveness is enabled. If set to `true`, the value of `max_concurrency` is ignored in favor of the following configuration.
+- `initial_limit` is the per-IP concurrency limit to use when Gitaly starts.
+- `max_limit` is the maximum per-IP concurrency limit for pack-objects. Gitaly increases the current limit until it reaches this number.
+- `min_limit` is the minimum per-IP concurrency limit for pack-objects. When the host machine has a resource problem, Gitaly quickly
+  reduces the limit until it reaches this value.
+
+For more information, see [pack-objects concurrency](#limit-pack-objects-concurrency).
+
## Control groups
WARNING:
@@ -1673,7 +1833,9 @@ Gitaly fails to start up if either:
## Configure server-side backups
-> [Introduced](https://gitlab.com/gitlab-org/gitaly/-/issues/4941) in GitLab 16.3.
+> - [Introduced](https://gitlab.com/gitlab-org/gitaly/-/issues/4941) in GitLab 16.3.
+> - Server-side support for restoring a specified backup instead of the latest backup [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132188) in GitLab 16.6.
+> - Server-side support for creating incremental backups [introduced](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/6475) in GitLab 16.6.
Repository backups can be configured so that the Gitaly node that hosts each
repository is responsible for creating the backup and streaming it to
diff --git a/doc/administration/gitaly/img/gitaly_adaptive_concurrency_limit.png b/doc/administration/gitaly/img/gitaly_adaptive_concurrency_limit.png
new file mode 100644
index 00000000000..ce6bb1a8dfc
--- /dev/null
+++ b/doc/administration/gitaly/img/gitaly_adaptive_concurrency_limit.png
Binary files differ
diff --git a/doc/administration/gitaly/index.md b/doc/administration/gitaly/index.md
index 46f6a5829c8..6784ff4d970 100644
--- a/doc/administration/gitaly/index.md
+++ b/doc/administration/gitaly/index.md
@@ -587,92 +587,6 @@ off Gitaly Cluster to a sharded Gitaly instance:
1. [Move the repositories](../operations/moving_repositories.md#moving-repositories) to the newly created storage. You can
move them by shard or by group, which gives you the opportunity to spread them over multiple Gitaly servers.
-## Direct access to Git in GitLab
-
-Direct access to Git uses code in GitLab known as the "Rugged patches".
-
-Before Gitaly existed, what are now Gitaly clients accessed Git repositories directly, either:
-
-- On a local disk in the case of a single-machine Linux package installation.
-- Using NFS in the case of a horizontally-scaled GitLab installation.
-
-In addition to running plain `git` commands, GitLab used a Ruby library called
-[Rugged](https://github.com/libgit2/rugged). Rugged is a wrapper around
-[libgit2](https://libgit2.org/), a stand-alone implementation of Git in the form of a C library.
-
-Over time it became clear that Rugged, particularly in combination with
-[Unicorn](https://yhbt.net/unicorn/), is extremely efficient. Because `libgit2` is a library and
-not an external process, there was very little overhead between:
-
-- GitLab application code that tried to look up data in Git repositories.
-- The Git implementation itself.
-
-Because the combination of Rugged and Unicorn was so efficient, the GitLab application code ended up
-with lots of duplicate Git object lookups. For example, looking up the default branch commit a dozen
-times in one request. We could write inefficient code without poor performance.
-
-When we migrated these Git lookups to Gitaly calls, we suddenly had a much higher fixed cost per Git
-lookup. Even when Gitaly is able to re-use an already-running `git` process (for example, to look up
-a commit), you still have:
-
-- The cost of a network roundtrip to Gitaly.
-- Inside Gitaly, a write/read roundtrip on the Unix pipes that connect Gitaly to the `git` process.
-
-Using GitLab.com to measure, we reduced the number of Gitaly calls per request until we no longer felt
-the efficiency loss of losing Rugged. It also helped that we run Gitaly itself directly on the Git
-file servers, rather than by using NFS mounts. This gave us a speed boost that counteracted the
-negative effect of not using Rugged anymore.
-
-Unfortunately, other deployments of GitLab could not remove NFS like we did on GitLab.com, and they
-got the worst of both worlds:
-
-- The slowness of NFS.
-- The increased inherent overhead of Gitaly.
-
-The code removed from GitLab during the Gitaly migration project affected these deployments. As a
-performance workaround for these NFS-based deployments, we re-introduced some of the old Rugged
-code. This re-introduced code is informally referred to as the "Rugged patches".
-
-### Automatic detection
-
-> Automatic detection for Rugged [disabled](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/95445) in GitLab 15.3.
-
-FLAG:
-On self-managed GitLab, by default automatic detection of whether Rugged should be used (per storage) is not available.
-To make it available, an administrator can [disable the feature flag](../../administration/feature_flags.md) named
-`skip_rugged_auto_detect`.
-
-The Ruby methods that perform direct Git access are behind
-[feature flags](../../development/gitaly.md#legacy-rugged-code), disabled by default. It wasn't
-convenient to set feature flags to get the best performance, so we added an automatic mechanism that
-enables direct Git access.
-
-When GitLab calls a function that has a "Rugged patch", it performs two checks:
-
-- Is the feature flag for this patch set in the database? If so, the feature flag setting controls
- the GitLab use of "Rugged patch" code.
-- If the feature flag is not set, GitLab tries accessing the file system underneath the
- Gitaly server directly. If it can, it uses the "Rugged patch":
- - If using Puma and [thread count](../../install/requirements.md#puma-threads) is set
- to `1`.
-
-The result of these checks is cached.
-
-To see if GitLab can access the repository file system directly, we use the following heuristic:
-
-- Gitaly ensures that the file system has a metadata file in its root with a UUID in it.
-- Gitaly reports this UUID to GitLab by using the `ServerInfo` RPC.
-- GitLab Rails tries to read the metadata file directly. If it exists, and if the UUIDs match,
- assume we have direct access.
-
-Direct Git access is:
-
-- [Disabled](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/95445) by default in GitLab 15.3 and later for
- compatibility with [Praefect-generated replica paths](#praefect-generated-replica-paths-gitlab-150-and-later). It
- can be enabled if Rugged [feature flags](../../development/gitaly.md#legacy-rugged-code) are enabled.
-- Enabled by default in GitLab 15.2 and earlier because it fills in the correct repository paths in the GitLab
- configuration file `config/gitlab.yml`. This satisfies the UUID check.
-
### Transition to Gitaly Cluster
For the sake of removing complexity, we must remove direct Git access in GitLab. However, we can't
diff --git a/doc/administration/gitaly/monitoring.md b/doc/administration/gitaly/monitoring.md
index cbf5722f2c5..5d8de42666b 100644
--- a/doc/administration/gitaly/monitoring.md
+++ b/doc/administration/gitaly/monitoring.md
@@ -90,6 +90,47 @@ In Prometheus, look for the following metrics:
- `gitaly_pack_objects_queued` indicates how many requests for pack-objects processes are waiting due to the concurrency limit being reached.
- `gitaly_pack_objects_acquiring_seconds` indicates how long a request for a pack-object process has to wait due to concurrency limits before being processed.
+## Monitor Gitaly adaptive concurrency limiting
+
+> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/10734) in GitLab 16.6.
+
+You can observe specific behavior of [adaptive concurrency limiting](configure_gitaly.md#adaptive-concurrency-limiting) using Gitaly logs and Prometheus.
+
+In the [Gitaly logs](../logs/index.md#gitaly-logs), you can identify logs related to the adaptive concurrency limiting when the current limits are adjusted.
+You can filter the content of the logs (`msg`) for "Multiplicative decrease" and "Additive increase" messages.
+
+| Log Field | Description |
+|:---|:---|
+| `limit` | The name of the limit being adjusted. |
+| `previous_limit` | The previous limit before it was increased or decreased. |
+| `new_limit` | The new limit after it was increased or decreased. |
+| `watcher` | The resource watcher that decided the node is under pressure. For example: `CgroupCpu` or `CgroupMemory`. |
+| `reason` | The reason behind limit adjustment. |
+| `stats.*` | Some statistics behind an adjustment decision. They are for debugging purposes. |
+
+Example log:
+
+```json
+{
+  "msg": "Multiplicative decrease",
+  "limit": "pack-objects",
+  "new_limit": 14,
+  "previous_limit": 29,
+  "reason": "cgroup CPU throttled too much",
+  "watcher": "CgroupCpu",
+  "stats.time_diff": 15.0,
+  "stats.throttled_duration": 13.0,
+  "stats.throttled_threshold": 0.5
+}
+```
+
+In Prometheus, look for the following metrics:
+
+- `gitaly_concurrency_limiting_current_limit` indicates the current limit value of an adaptive concurrency limit.
+- `gitaly_concurrency_limiting_watcher_errors_total` indicates the total number of watcher errors while fetching resource metrics.
+- `gitaly_concurrency_limiting_backoff_events_total` indicates the total number of backoff events, which occur when the limits are
+  adjusted due to resource pressure.
+
## Monitor Gitaly cgroups
You can observe the status of [control groups (cgroups)](configure_gitaly.md#control-groups) using Prometheus:
diff --git a/doc/administration/gitaly/recovery.md b/doc/administration/gitaly/recovery.md
index 45bde083a1a..6779823c941 100644
--- a/doc/administration/gitaly/recovery.md
+++ b/doc/administration/gitaly/recovery.md
@@ -15,12 +15,17 @@ You can add and replace Gitaly nodes on a Gitaly Cluster.
### Add new Gitaly nodes
-To add a new Gitaly node to a Gitaly Cluster that has [replication factor](praefect.md#configure-replication-factor):
+The steps to add a new Gitaly node to a Gitaly Cluster depend on whether a [custom replication factor](praefect.md#configure-replication-factor) is set.
-- Set, set the [replication factor](praefect.md#configure-replication-factor) for each repository using `set-replication-factor` Praefect command. New repositories are
- replicated based on [replication factor](praefect.md#configure-replication-factor). Praefect doesn't automatically replicate existing repositories to the new Gitaly node.
-- Not set, add the new node in your [Praefect configuration](praefect.md#praefect) under `praefect['virtual_storages']`. Praefect automatically replicates all data to any
- new Gitaly node added to the configuration.
+#### Custom replication factor
+
+If a custom replication factor is set, set the [replication factor](praefect.md#configure-replication-factor) for each repository using the
+`set-replication-factor` Praefect command. New repositories are replicated based on the [replication factor](praefect.md#configure-replication-factor). Praefect doesn't automatically replicate existing repositories to the new Gitaly node.
+
+#### Default replication factor
+
+If the default replication factor is used, add the new node in your [Praefect configuration](praefect.md#praefect) under `praefect['virtual_storages']`.
+Praefect automatically replicates all data to any new Gitaly node added to the configuration.
### Replace an existing Gitaly node
@@ -33,32 +38,37 @@ To use the same name for the replacement node, use [repository verifier](praefec
#### With a node with a different name
-To use a different name for the replacement node for a Gitaly Cluster that has [replication factor](praefect.md#configure-replication-factor):
+The steps to use a different name for the replacement node in a Gitaly Cluster depend on whether a [custom replication factor](praefect.md#configure-replication-factor)
+is set.
-- Set, use [`praefect set-replication-factor`](praefect.md#configure-replication-factor) to set the replication factor per repository again to get new storage assigned.
- For example:
+##### Custom replication factor set
- ```shell
- $ sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml set-replication-factor -virtual-storage default -relative-path @hashed/3f/db/3fdba35f04dc8c462986c992bcf875546257113072a909c162f7e470e581e278.git -replication-factor 2
+If a custom replication factor is set, use [`praefect set-replication-factor`](praefect.md#configure-replication-factor) to set the replication factor per repository again to get new storage assigned. For example:
- current assignments: gitaly-1, gitaly-2
- ```
+```shell
+$ sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml set-replication-factor -virtual-storage default -relative-path @hashed/3f/db/3fdba35f04dc8c462986c992bcf875546257113072a909c162f7e470e581e278.git -replication-factor 2
+
+current assignments: gitaly-1, gitaly-2
+```
+
+To reassign all repositories from the old storage to the new one, after configuring the new Gitaly node:
- To reassign all repositories from the old storage to the new one, after configuring the new Gitaly node:
+1. Connect to Praefect database:
- 1. Connect to Praefect database:
+ ```shell
+ /opt/gitlab/embedded/bin/psql -h <psql host> -U <user> -d <database name>
+ ```
- ```shell
- /opt/gitlab/embedded/bin/psql -h <psql host> -U <user> -d <database name>
- ```
+1. Update the `repository_assignments` table to replace the old Gitaly node name (for example, `old-gitaly`) with the new Gitaly node name
+ (for example, `new-gitaly`):
- 1. Update `repository_assignments` table to replace the old Gitaly node name (for example, `old-gitaly`) with the new Gitaly node name (for example, `new-gitaly`):
+ ```sql
+ UPDATE repository_assignments SET storage='new-gitaly' WHERE storage='old-gitaly';
+ ```
- ```sql
- UPDATE repository_assignments SET storage='new-gitaly' WHERE storage='old-gitaly';
- ```
+##### Default replication factor
-- Not set, replace the node in the configuration. The old node's state remains in the Praefect database but it is ignored.
+If the default replication factor is used, replace the node in the configuration. The old node's state remains in the Praefect database but it is ignored.
## Primary node failure
diff --git a/doc/administration/gitaly/troubleshooting.md b/doc/administration/gitaly/troubleshooting.md
index 556bc29b76f..17687cbb181 100644
--- a/doc/administration/gitaly/troubleshooting.md
+++ b/doc/administration/gitaly/troubleshooting.md
@@ -387,6 +387,43 @@ If Git pushes are too slow when Dynatrace is enabled, disable Dynatrace.
One way to resolve this is to make sure the entry is correct for the GitLab internal API URL configured in `gitlab.rb` with `gitlab_rails['internal_api_url']`.
+### Changes (diffs) don't load for new merge requests when using Gitaly TLS
+
+After enabling [Gitaly with TLS](configure_gitaly.md#enable-tls-support), changes (diffs) for new merge requests are not generated
+and you see the following message in GitLab:
+
+```plaintext
+Building your merge request... This page will update when the build is complete
+```
+
+Gitaly must be able to connect to itself to complete some operations. If the Gitaly certificate is not trusted by the Gitaly server,
+merge request diffs can't be generated.
+
+If Gitaly can't connect to itself, you see messages in the [Gitaly logs](../../administration/logs/index.md#gitaly-logs) like the following messages:
+
+```json
+{
+ "level":"warning",
+ "msg":"[core] [Channel #16 SubChannel #17] grpc: addrConn.createTransport failed to connect to {Addr: \"ext-gitaly.example.com:9999\", ServerName: \"ext-gitaly.example.com:9999\", }. Err: connection error: desc = \"transport: authentication handshake failed: tls: failed to verify certificate: x509: certificate signed by unknown authority\"",
+ "pid":820,
+ "system":"system",
+ "time":"2023-11-06T05:40:04.169Z"
+}
+{
+ "level":"info",
+ "msg":"[core] [Server #3] grpc: Server.Serve failed to create ServerTransport: connection error: desc = \"ServerHandshake(\\\"x.x.x.x:x\\\") failed: wrapped server handshake: remote error: tls: bad certificate\"",
+ "pid":820,
+ "system":"system",
+ "time":"2023-11-06T05:40:04.169Z"
+}
+```
+
+To resolve the problem, ensure that you have added your Gitaly certificate to the `/etc/gitlab/trusted-certs` folder on the Gitaly server,
+and then:
+
+1. [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) so the certificates are symlinked.
+1. Restart Gitaly manually with `sudo gitlab-ctl restart gitaly` so the certificates are loaded by the Gitaly process.
+
## Gitaly fails to fork processes stored on `noexec` file systems
Because of changes [introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/5999) in GitLab 14.10, applying the `noexec` option to a mount
diff --git a/doc/administration/inactive_project_deletion.md b/doc/administration/inactive_project_deletion.md
index b7f71505e70..7ccd3455011 100644
--- a/doc/administration/inactive_project_deletion.md
+++ b/doc/administration/inactive_project_deletion.md
@@ -34,10 +34,13 @@ To configure deletion of inactive projects:
1. Select **Save changes**.
Inactive projects that meet the criteria are scheduled for deletion and a warning email is sent. If the
-projects remain inactive, they are deleted after the specified duration.
+projects remain inactive, they are deleted after the specified duration. These projects are deleted even if
+[the project is archived](../user/project/settings/index.md#archive-a-project).
### Configuration example
+#### Example 1
+
If you use these settings:
- **Delete inactive projects** enabled.
@@ -52,6 +55,20 @@ If a project is more than 50 MB and it is inactive for:
- More than 6 months: A deletion warning email is sent. This mail includes the date that the project will be deleted.
- More than 12 months: The project is scheduled for deletion.
+#### Example 2
+
+If you use these settings:
+
+- **Delete inactive projects** enabled.
+- **Delete inactive projects that exceed** set to `0`.
+- **Delete project after** set to `12`.
+- **Send warning email** set to `11`.
+
+If a project exists that has already been inactive for more than 12 months when you configure these settings:
+
+- A deletion warning email is sent immediately. This email includes the date that the project will be deleted.
+- The project is scheduled for deletion 1 month (12 months - 11 months) after the warning email is sent.
+
## Determine when a project was last active
You can view a project's activities and determine when the project was last active in the following ways:
diff --git a/doc/administration/incoming_email.md b/doc/administration/incoming_email.md
index 6948009aab2..33afaf19220 100644
--- a/doc/administration/incoming_email.md
+++ b/doc/administration/incoming_email.md
@@ -68,7 +68,8 @@ this method only supports replies, and not the other features of [incoming email
## Accepted headers
-> Accepting `Received` headers [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/81489) in GitLab 14.9.
+> - Accepting `Received` headers [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/81489) in GitLab 14.9.
+> - Accepting `Cc` headers [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/348572) in GitLab 16.5.
Email is processed correctly when a configured email address is present in one of the following headers
(sorted in the order they are checked):
@@ -77,6 +78,7 @@ Email is processed correctly when a configured email address is present in one o
- `Delivered-To`
- `Envelope-To` or `X-Envelope-To`
- `Received`
+- `Cc`
The `References` header is also accepted, however it is used specifically to relate email responses to existing discussion threads. It is not used for creating issues by email.
@@ -86,8 +88,7 @@ also checks accepted headers.
Usually, the "To" field contains the email address of the primary receiver.
However, it might not include the configured GitLab email address if:
-- The address is in the "CC" field.
-- The address was included when using "Reply all".
+- The address is in the "BCC" field.
- The email was forwarded.
The `Received` header can contain multiple email addresses. These are checked in the order that they appear.
diff --git a/doc/administration/instance_limits.md b/doc/administration/instance_limits.md
index 8f03a2224ec..d5855e3c832 100644
--- a/doc/administration/instance_limits.md
+++ b/doc/administration/instance_limits.md
@@ -309,7 +309,7 @@ The number of seconds GitLab waits for an HTTP response after sending a webhook.
To change the webhook timeout value:
-1. Edit `/etc/gitlab/gitlab.rb`:
+1. Edit `/etc/gitlab/gitlab.rb` on all GitLab nodes that are running Sidekiq:
```ruby
gitlab_rails['webhook_timeout'] = 60
@@ -992,18 +992,19 @@ Set the limit to `0` to disable it.
## Math rendering limits
-> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132939) in GitLab 16.5.
+> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132939) in GitLab 16.5.
+> - [Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/368009) the 50-node limit from Wiki and repository files.
GitLab imposes default limits when rendering math in Markdown fields. These limits provide better security and performance.
-The limits for issues, merge requests, wikis, and repositories:
+The limits for issues, merge requests, epics, wikis, and repository files:
-- Maximum number of nodes rendered: `50`.
- Maximum number of macro expansions: `1000`.
-- Maximum user-specified size in em: `20`.
+- Maximum user-specified size in [em](https://en.wikipedia.org/wiki/Em_(typography)): `20`.
-The limits for issues and merge requests:
+The limits for issues, merge requests, and epics:
+- Maximum number of nodes rendered: `50`.
- Maximum number of characters in a math block: `1000`.
- Maximum rendering time: `2000 ms`.
diff --git a/doc/administration/integration/plantuml.md b/doc/administration/integration/plantuml.md
index 0155f0300d4..dae400ff755 100644
--- a/doc/administration/integration/plantuml.md
+++ b/doc/administration/integration/plantuml.md
@@ -180,7 +180,7 @@ see the [Tomcat Documentation](https://tomcat.apache.org/tomcat-10.1-doc/index.h
1. Install and configure Tomcat 10:
```shell
- wget https://dlcdn.apache.org/tomcat/tomcat-10/v10.1.9/bin/apache-tomcat-10.1.9.tar.gz -P /tmp
+ wget https://dlcdn.apache.org/tomcat/tomcat-10/v10.1.15/bin/apache-tomcat-10.1.15.tar.gz -P /tmp
sudo tar xzvf /tmp/apache-tomcat-10*tar.gz -C /opt/tomcat --strip-components=1
sudo chown -R tomcat:tomcat /opt/tomcat/
sudo chmod -R u+x /opt/tomcat/bin
@@ -266,12 +266,11 @@ see the [Tomcat Documentation](https://tomcat.apache.org/tomcat-10.1-doc/index.h
1. Install PlantUML and copy the `.war` file:
- Use the [latest release](https://github.com/plantuml/plantuml-server/releases) of plantuml-jsp (example: plantuml-jsp-v1.2023.8.war). For context, see [this issue](https://github.com/plantuml/plantuml-server/issues/265).
+ Use the [latest release](https://github.com/plantuml/plantuml-server/releases) of plantuml-jsp (example: plantuml-jsp-v1.2023.12.war). For context, see [this issue](https://github.com/plantuml/plantuml-server/issues/265).
```shell
- cd /
- wget https://github.com/plantuml/plantuml-server/releases/download/v1.2023.8/plantuml-jsp-v1.2023.8.war
- sudo cp plantuml-jsp-v1.2023.8.war /opt/tomcat/webapps/plantuml.war
+ wget -P /tmp https://github.com/plantuml/plantuml-server/releases/download/v1.2023.12/plantuml-jsp-v1.2023.12.war
+ sudo cp /tmp/plantuml-jsp-v1.2023.12.war /opt/tomcat/webapps/plantuml.war
sudo chown tomcat:tomcat /opt/tomcat/webapps/plantuml.war
sudo systemctl restart tomcat
```
diff --git a/doc/administration/logs/index.md b/doc/administration/logs/index.md
index e7277ab3186..3bb26681fae 100644
--- a/doc/administration/logs/index.md
+++ b/doc/administration/logs/index.md
@@ -806,12 +806,12 @@ GraphQL queries are recorded in the file. For example:
{"query_string":"query IntrospectionQuery{__schema {queryType { name },mutationType { name }}}...(etc)","variables":{"a":1,"b":2},"complexity":181,"depth":1,"duration_s":7}
```
-## `clickhouse.log` **(SAAS)**
+## `clickhouse.log`
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133371) in GitLab 16.5.
-The `clickhouse.log` file logs information related to
-Clickhouse database client within GitLab.
+The `clickhouse.log` file logs information related to the
+ClickHouse database client in GitLab.
## `migrations.log`
diff --git a/doc/administration/logs/log_parsing.md b/doc/administration/logs/log_parsing.md
index 21ce3d7f17f..b281620fcf3 100644
--- a/doc/administration/logs/log_parsing.md
+++ b/doc/administration/logs/log_parsing.md
@@ -96,10 +96,10 @@ grep <PROJECT_NAME> <FILE> | jq .
jq 'select(.duration_s > 5000)' <FILE>
```
-#### Find all project requests with more than 5 rugged calls
+#### Find all project requests with more than 5 Gitaly calls
```shell
-grep <PROJECT_NAME> <FILE> | jq 'select(.rugged_calls > 5)'
+grep <PROJECT_NAME> <FILE> | jq 'select(.gitaly_calls > 5)'
```
#### Find all requests with a Gitaly duration > 10 seconds
@@ -273,8 +273,8 @@ jq --raw-output --slurp '
.[2]."grpc.time_ms",
.[0]."grpc.request.glProjectPath"
]
- | @sh' current \
-| awk 'BEGIN { printf "%7s %10s %10s %10s\t%s\n", "CT", "MAX DURS", "", "", "PROJECT" }
+ | @sh' current |
+ awk 'BEGIN { printf "%7s %10s %10s %10s\t%s\n", "CT", "MAX DURS", "", "", "PROJECT" }
{ printf "%7u %7u ms, %7u ms, %7u ms\t%s\n", $1, $2, $3, $4, $5 }'
```
@@ -288,12 +288,18 @@ jq --raw-output --slurp '
...
```
+#### Types of user and project activity overview
+
+```shell
+jq --raw-output '[.username, ."grpc.method", ."grpc.request.glProjectPath"] | @tsv' current | sort | uniq -c | sort -n
+```
+
#### Find all projects affected by a fatal Git problem
```shell
-grep "fatal: " current | \
- jq '."grpc.request.glProjectPath"' | \
- sort | uniq
+grep "fatal: " current |
+ jq '."grpc.request.glProjectPath"' |
+ sort | uniq
```
### Parsing `gitlab-shell/gitlab-shell.log`
diff --git a/doc/administration/merge_request_diffs.md b/doc/administration/merge_request_diffs.md
index 746dccb99d6..9c4ddcdc094 100644
--- a/doc/administration/merge_request_diffs.md
+++ b/doc/administration/merge_request_diffs.md
@@ -21,7 +21,9 @@ that only [stores outdated diffs](#alternative-in-database-storage) outside of d
## Using external storage
-For Linux package installations:
+::Tabs
+
+:::TabTitle Linux package (Omnibus)
1. Edit `/etc/gitlab/gitlab.rb` and add the following line:
@@ -41,7 +43,7 @@ For Linux package installations:
1. Save the file and [reconfigure GitLab](restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
GitLab then migrates your existing merge request diffs to external storage.
-For self-compiled installations:
+:::TabTitle Self-compiled (source)
1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following
lines:
@@ -65,6 +67,8 @@ For self-compiled installations:
1. Save the file and [restart GitLab](restart_gitlab.md#self-compiled-installations) for the changes to take effect.
GitLab then migrates your existing merge request diffs to external storage.
+::EndTabs
+
## Using object storage
WARNING:
@@ -74,7 +78,9 @@ Instead of storing the external diffs on disk, we recommended the use of an obje
store like AWS S3 instead. This configuration relies on valid AWS credentials to
be configured already.
-For Linux package installations:
+::Tabs
+
+:::TabTitle Linux package (Omnibus)
1. Edit `/etc/gitlab/gitlab.rb` and add the following line:
@@ -86,7 +92,7 @@ For Linux package installations:
1. Save the file and [reconfigure GitLab](restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
GitLab then migrates your existing merge request diffs to external storage.
-For self-compiled installations:
+:::TabTitle Self-compiled (source)
1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following
lines:
@@ -100,6 +106,8 @@ For self-compiled installations:
1. Save the file and [restart GitLab](restart_gitlab.md#self-compiled-installations) for the changes to take effect.
GitLab then migrates your existing merge request diffs to external storage.
+::EndTabs
+
[Read more about using object storage with GitLab](object_storage.md).
### Object Storage Settings
@@ -123,7 +131,9 @@ then `object_store:`. On Linux package installations, they are prefixed by
See [the available connection settings for different providers](object_storage.md#configure-the-connection-settings).
-For Linux package installations:
+::Tabs
+
+:::TabTitle Linux package (Omnibus)
1. Edit `/etc/gitlab/gitlab.rb` and add the following lines by replacing with
the values you want:
@@ -153,7 +163,7 @@ For Linux package installations:
1. Save the file and [reconfigure GitLab](restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
-For self-compiled installations:
+:::TabTitle Self-compiled (source)
1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following
lines:
@@ -173,6 +183,8 @@ For self-compiled installations:
1. Save the file and [restart GitLab](restart_gitlab.md#self-compiled-installations) for the changes to take effect.
+::EndTabs
+
## Alternative in-database storage
Enabling external diffs may reduce the performance of merge requests, as they
@@ -182,7 +194,9 @@ in the database.
To enable this feature, perform the following steps:
-For Linux package installations:
+::Tabs
+
+:::TabTitle Linux package (Omnibus)
1. Edit `/etc/gitlab/gitlab.rb` and add the following line:
@@ -192,7 +206,7 @@ For Linux package installations:
1. Save the file and [reconfigure GitLab](restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
-For self-compiled installations:
+:::TabTitle Self-compiled (source)
1. Edit `/home/git/gitlab/config/gitlab.yml` and add or amend the following
lines:
@@ -205,6 +219,8 @@ For self-compiled installations:
1. Save the file and [restart GitLab](restart_gitlab.md#self-compiled-installations) for the changes to take effect.
+::EndTabs
+
With this feature enabled, diffs are initially stored in the database, rather
than externally. They are moved to external storage after any of these
conditions become true:
@@ -217,64 +233,45 @@ These rules strike a balance between space and performance by only storing
frequently-accessed diffs in the database. Diffs that are less likely to be
accessed are moved to external storage instead.
-## Correcting incorrectly-migrated diffs
-
-Versions of GitLab earlier than `v13.0.0` would incorrectly record the location
-of some merge request diffs when [external diffs in object storage](#object-storage-settings)
-were enabled. This mainly affected imported merge requests, and was resolved
-with [this merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/31005).
-
-If you are using object storage, or have never used on-disk storage for external
-diffs, the **Changes** tab for some merge requests fails to load with a 500 error,
-and the exception for that error is of this form:
-
-```plain
-Errno::ENOENT (No such file or directory @ rb_sysopen - /var/opt/gitlab/gitlab-rails/shared/external-diffs/merge_request_diffs/mr-6167082/diff-8199789)
-```
-
-Then you are affected by this issue. Because it's not possible to safely determine
-all these conditions automatically, we've provided a Rake task in GitLab v13.2.0
-that you can run manually to correct the data:
-
-For Linux package installations:
-
-```shell
-sudo gitlab-rake gitlab:external_diffs:force_object_storage
-```
-
-For self-compiled installations:
+## Switching from external storage to object storage
-```shell
-sudo -u git -H bundle exec rake gitlab:external_diffs:force_object_storage RAILS_ENV=production
-```
+Automatic migration moves diffs stored in the database, but it does not move diffs between storage types.
+To switch from external storage to object storage:
-Environment variables can be provided to modify the behavior of the task. The
-available variables are:
+1. Move files stored on local or NFS storage to object storage manually.
+1. Run this Rake task to change their location in the database.
-| Name | Default value | Purpose |
-| ---- | ------------- | ------- |
-| `ANSI` | `true` | Use ANSI escape codes to make output more understandable |
-| `BATCH_SIZE` | `1000` | Iterate through the table in batches of this size |
-| `START_ID` | `nil` | If set, begin scanning at this ID |
-| `END_ID` | `nil` | If set, stop scanning at this ID |
-| `UPDATE_DELAY` | `1` | Number of seconds to sleep between updates |
+ For Linux package installations:
-The `START_ID` and `END_ID` variables may be used to run the update in parallel,
-by assigning different processes to different parts of the table. The `BATCH`
-and `UPDATE_DELAY` parameters allow the speed of the migration to be traded off
-against concurrent access to the table. The `ANSI` parameter should be set to
-false if your terminal does not support ANSI escape codes.
+ ```shell
+ sudo gitlab-rake gitlab:external_diffs:force_object_storage
+ ```
-By default, `sudo` does not preserve existing environment variables. You should append them, rather than prefix them.
+ For self-compiled installations:
-```shell
-sudo gitlab-rake gitlab:external_diffs:force_object_storage START_ID=59946109 END_ID=59946109 UPDATE_DELAY=5
-```
+ ```shell
+ sudo -u git -H bundle exec rake gitlab:external_diffs:force_object_storage RAILS_ENV=production
+ ```
-## Switching from external storage to object storage
+ By default, `sudo` does not preserve existing environment variables. You should
+ append them, rather than prefix them, like this:
-Automatic migration moves diffs stored in the database, but it does not move diffs between storage types.
-To switch from external storage to object storage:
+ ```shell
+ sudo gitlab-rake gitlab:external_diffs:force_object_storage START_ID=59946109 END_ID=59946109 UPDATE_DELAY=5
+ ```
-1. Move files stored on local or NFS storage to object storage manually.
-1. Run the Rake task in the [previous section](#correcting-incorrectly-migrated-diffs) to change their location in the database.
+These environment variables modify the behavior of the Rake task:
+
+| Name | Default value | Purpose |
+|----------------|---------------|---------|
+| `ANSI` | `true` | Use ANSI escape codes to make output more understandable. |
+| `BATCH_SIZE` | `1000` | Iterate through the table in batches of this size. |
+| `START_ID` | `nil` | If set, begin scanning at this ID. |
+| `END_ID` | `nil` | If set, stop scanning at this ID. |
+| `UPDATE_DELAY` | `1` | Number of seconds to sleep between updates. |
+
+- `START_ID` and `END_ID` can be used to run the update in parallel,
+  by assigning different processes to different parts of the table (see the example after this list).
+- `BATCH_SIZE` and `UPDATE_DELAY` let you trade off the speed of the migration
+  against concurrent access to the table.
+- `ANSI` should be set to `false` if your terminal does not support ANSI escape codes.
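+
+For example, a minimal sketch of running the task in two terminals in parallel by splitting the ID range (the boundary IDs below are hypothetical; choose them based on the ID range of your merge request diffs):
+
+```shell
+# Terminal 1: process the first half of the ID range
+sudo gitlab-rake gitlab:external_diffs:force_object_storage START_ID=1 END_ID=30000000 UPDATE_DELAY=1
+
+# Terminal 2: process the remainder of the ID range
+sudo gitlab-rake gitlab:external_diffs:force_object_storage START_ID=30000001 UPDATE_DELAY=1
+```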
diff --git a/doc/administration/moderate_users.md b/doc/administration/moderate_users.md
index b30294c5fe0..c12eb2b9a95 100644
--- a/doc/administration/moderate_users.md
+++ b/doc/administration/moderate_users.md
@@ -287,6 +287,45 @@ You can also delete a user and their contributions, such as merge requests, issu
NOTE:
Before GitLab 15.1, groups for which the deleted user was the only direct owner were also deleted.
+## Trust and untrust users
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132402) in GitLab 16.5.
+
+You can trust and untrust users from the Admin Area.
+
+By default, a user is not trusted, and issues, notes, and snippets they create are blocked if they are identified as spam. When you trust a user, their issues, notes, and snippets are not blocked as spam.
+
+Prerequisite:
+
+- You must be an administrator.
+
+::Tabs
+
+:::TabTitle Trust a user
+
+1. On the left sidebar, select **Search or go to**.
+1. Select **Admin Area**.
+1. Select **Overview > Users**.
+1. Select a user.
+1. From the **User administration** dropdown list, select **Trust user**.
+1. On the confirmation dialog, select **Trust user**.
+
+The user is trusted.
+
+:::TabTitle Untrust a user
+
+1. On the left sidebar, select **Search or go to**.
+1. Select **Admin Area**.
+1. Select **Overview > Users**.
+1. Select the **Trusted** tab.
+1. Select a user.
+1. From the **User administration** dropdown list, select **Untrust user**.
+1. On the confirmation dialog, select **Untrust user**.
+
+The user is untrusted.
+
+::EndTabs
+
## Troubleshooting
When moderating users, you may need to perform bulk actions on them based on certain conditions. The following rails console scripts show some examples of this. You may [start a rails console session](../administration/operations/rails_console.md#starting-a-rails-console-session) and use scripts similar to the following:
diff --git a/doc/administration/monitoring/performance/performance_bar.md b/doc/administration/monitoring/performance/performance_bar.md
index 12fa79b3c13..95717f0c54f 100644
--- a/doc/administration/monitoring/performance/performance_bar.md
+++ b/doc/administration/monitoring/performance/performance_bar.md
@@ -17,6 +17,8 @@ For example:
## Available information
+> Rugged calls [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/421591) in GitLab 16.6.
+
From left to right, the performance bar displays:
- **Current Host**: the current host serving the page.
@@ -37,8 +39,6 @@ From left to right, the performance bar displays:
- **Gitaly calls**: the time taken (in milliseconds) and the total number of
[Gitaly](../../gitaly/index.md) calls. Select to display a modal window with more
details.
-- **Rugged calls**: the time taken (in milliseconds) and the total number of
- Rugged calls. Select to display a modal window with more details.
- **Redis calls**: the time taken (in milliseconds) and the total number of
Redis calls. Select to display a modal window with more details.
- **Elasticsearch calls**: the time taken (in milliseconds) and the total number of
diff --git a/doc/administration/monitoring/prometheus/gitlab_metrics.md b/doc/administration/monitoring/prometheus/gitlab_metrics.md
index 9efe39b8d3a..2eb482cae69 100644
--- a/doc/administration/monitoring/prometheus/gitlab_metrics.md
+++ b/doc/administration/monitoring/prometheus/gitlab_metrics.md
@@ -48,8 +48,6 @@ The following metrics are available:
| `gitlab_ci_runner_authentication_failure_total` | Counter | 15.2 | Total number of times that runner authentication has failed
| `gitlab_ghost_user_migration_lag_seconds` | Gauge | 15.6 | The waiting time in seconds of the oldest scheduled record for ghost user migration | |
| `gitlab_ghost_user_migration_scheduled_records_total` | Gauge | 15.6 | The total number of scheduled ghost user migrations | |
-| `job_waiter_started_total` | Counter | 12.9 | Number of batches of jobs started where a web request is waiting for the jobs to complete | `worker` |
-| `job_waiter_timeouts_total` | Counter | 12.9 | Number of batches of jobs that timed out where a web request is waiting for the jobs to complete | `worker` |
| `gitlab_ci_active_jobs` | Histogram | 14.2 | Count of active jobs when pipeline is created | |
| `gitlab_database_transaction_seconds` | Histogram | 12.1 | Time spent in database transactions, in seconds | |
| `gitlab_method_call_duration_seconds` | Histogram | 10.2 | Method calls real duration | `controller`, `action`, `module`, `method` |
@@ -245,7 +243,6 @@ configuration option in `gitlab.yml`. These metrics are served from the
| `geo_cursor_last_event_timestamp` | Gauge | 10.2 | Last UNIX timestamp of the event log processed by the secondary | `url` |
| `geo_status_failed_total` | Counter | 10.2 | Number of times retrieving the status from the Geo Node failed | `url` |
| `geo_last_successful_status_check_timestamp` | Gauge | 10.2 | Last timestamp when the status was successfully updated | `url` |
-| `geo_job_artifacts_synced_missing_on_primary` | Gauge | 10.7 | Number of job artifacts marked as synced due to the file missing on the primary | `url` |
| `geo_package_files` | Gauge | 13.0 | Number of package files on primary | `url` |
| `geo_package_files_checksummed` | Gauge | 13.0 | Number of package files checksummed on primary | `url` |
| `geo_package_files_checksum_failed` | Gauge | 13.0 | Number of package files failed to calculate the checksum on primary | `url` |
@@ -386,7 +383,12 @@ configuration option in `gitlab.yml`. These metrics are served from the
| `geo_project_repositories_verification_total` | Gauge | 16.2 | Number of Project Repositories to attempt to verify on secondary | `url` |
| `geo_project_repositories_verified` | Gauge | 16.2 | Number of Project Repositories successfully verified on secondary | `url` |
| `geo_project_repositories_verification_failed` | Gauge | 16.2 | Number of Project Repositories that failed verification on secondary | `url` |
-
+| `geo_repositories_synced` | Gauge | 10.2 | Deprecated for removal in 17.0. Missing in 16.3 and 16.4. Replaced by `geo_project_repositories_synced`. Number of repositories synced on secondary | `url` |
+| `geo_repositories_failed` | Gauge | 10.2 | Deprecated for removal in 17.0. Missing in 16.3 and 16.4. Replaced by `geo_project_repositories_failed`. Number of repositories failed to sync on secondary | `url` |
+| `geo_repositories_checksummed` | Gauge | 10.7 | Deprecated for removal in 17.0. Missing in 16.3 and 16.4. Replaced by `geo_project_repositories_checksummed`. Number of repositories checksummed on primary | `url` |
+| `geo_repositories_checksum_failed` | Gauge | 10.7 | Deprecated for removal in 17.0. Missing in 16.3 and 16.4. Replaced by `geo_project_repositories_checksum_failed`. Number of repositories failed to calculate the checksum on primary | `url` |
+| `geo_repositories_verified` | Gauge | 10.7 | Deprecated for removal in 17.0. Missing in 16.3 and 16.4. Replaced by `geo_project_repositories_verified`. Number of repositories successfully verified on secondary | `url` |
+| `geo_repositories_verification_failed` | Gauge | 10.7 | Deprecated for removal in 17.0. Missing in 16.3 and 16.4. Replaced by `geo_project_repositories_verification_failed`. Number of repositories that failed verification on secondary | `url` |
| `gitlab_memwd_violations_total` | Counter | 15.9 | Total number of times a Sidekiq process violated a memory threshold | |
| `gitlab_memwd_violations_handled_total` | Counter | 15.9 | Total number of times Sidekiq process memory violations were handled | |
| `sidekiq_watchdog_running_jobs_total` | Counter | 15.9 | Current running jobs when RSS limit was reached | `worker_class` |
diff --git a/doc/administration/monitoring/prometheus/index.md b/doc/administration/monitoring/prometheus/index.md
index df6dd87c896..01b1851ab7f 100644
--- a/doc/administration/monitoring/prometheus/index.md
+++ b/doc/administration/monitoring/prometheus/index.md
@@ -302,6 +302,10 @@ update the firewall on the instance to only allow traffic from your Prometheus I
static_configs:
- targets:
- 1.1.1.1:9236
+ - job_name: registry
+ static_configs:
+ - targets:
+ - 1.1.1.1:5001
```
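+
+For example, to check that the registry target is reachable before Prometheus scrapes it, you can request the metrics endpoint directly. This sketch assumes the registry debug address on the GitLab node listens on port `5001` with Prometheus metrics enabled at the default `/metrics` path:
+
+```shell
+# Should return Prometheus-formatted metrics, such as registry_* series
+curl --silent http://1.1.1.1:5001/metrics | head
+```
+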
WARNING:
diff --git a/doc/administration/monitoring/prometheus/web_exporter.md b/doc/administration/monitoring/prometheus/web_exporter.md
index a2dee80f6d4..fbf4a109813 100644
--- a/doc/administration/monitoring/prometheus/web_exporter.md
+++ b/doc/administration/monitoring/prometheus/web_exporter.md
@@ -71,3 +71,11 @@ To serve metrics via HTTPS instead of HTTP, enable TLS in the exporter settings:
When TLS is enabled, the same `port` and `address` is used as described above.
The metrics server cannot serve both HTTP and HTTPS at the same time.
+
+## Troubleshooting
+
+### Docker container runs out of space
+
+When running [GitLab in Docker](../../../install/docker.md), your container might run out of space. This can happen if you enable certain features that increase your space consumption, such as the web exporter.
+
+To work around this issue, [update your `shm-size`](../../../install/docker.md#devshm-mount-not-having-enough-space-in-docker-container).
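+
+For example, a minimal sketch of passing a larger `/dev/shm` when starting the container. The `256m` value is only an illustration; size it for your workload:
+
+```shell
+docker run --detach \
+  --shm-size 256m \
+  --hostname gitlab.example.com \
+  --publish 443:443 --publish 80:80 --publish 22:22 \
+  --name gitlab \
+  gitlab/gitlab-ee:latest
+```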
diff --git a/doc/administration/operations/puma.md b/doc/administration/operations/puma.md
index f16f1ac46ae..89f1574697f 100644
--- a/doc/administration/operations/puma.md
+++ b/doc/administration/operations/puma.md
@@ -140,37 +140,6 @@ When running Puma in single mode, some features are not supported:
For more information, see [epic 5303](https://gitlab.com/groups/gitlab-org/-/epics/5303).
-## Performance caveat when using Puma with Rugged
-
-For deployments where NFS is used to store Git repositories, GitLab uses
-[direct Git access](../gitaly/index.md#direct-access-to-git-in-gitlab) to improve performance by using
-[Rugged](https://github.com/libgit2/rugged).
-
-Rugged usage is automatically enabled if direct Git access [is available](../gitaly/index.md#automatic-detection) and
-Puma is running single threaded, unless it is disabled by a [feature flag](../../development/gitaly.md#legacy-rugged-code).
-
-MRI Ruby uses a Global VM Lock (GVL). GVL allows MRI Ruby to be multi-threaded, but running at
-most on a single core.
-
-Git includes intensive I/O operations. When Rugged uses a thread for a long period of time,
-other threads that might be processing requests can starve. Puma running in single thread mode
-does not have this issue, because concurrently at most one request is being processed.
-
-GitLab is working to remove Rugged usage. Even though performance without Rugged
-is acceptable today, in some cases it might be still beneficial to run with it.
-
-Given the caveat of running Rugged with multi-threaded Puma, and acceptable
-performance of Gitaly, we disable Rugged usage if Puma multi-threaded is
-used (when Puma is configured to run with more than one thread).
-
-This default behavior may not be the optimal configuration in some situations. If Rugged
-plays an important role in your deployment, we suggest you benchmark to find the
-optimal configuration:
-
-- The safest option is to start with single-threaded Puma.
-- To force Rugged to be used with multi-threaded Puma, you can use a
- [feature flag](../../development/gitaly.md#legacy-rugged-code).
-
## Configuring Puma to listen over SSL
Puma, when deployed with a Linux package installation, listens over a Unix socket by
diff --git a/doc/administration/package_information/supported_os.md b/doc/administration/package_information/supported_os.md
index 2064ee2a8e2..ab579ca93c6 100644
--- a/doc/administration/package_information/supported_os.md
+++ b/doc/administration/package_information/supported_os.md
@@ -24,7 +24,7 @@ architecture.
| ------------------------------------------------------------ | ------------------------------ | --------------- | :----------------------------------------------------------: | ---------- | ------------------------------------------------------------ |
| AlmaLinux 8 | GitLab CE / GitLab EE 14.5.0 | x86_64, aarch64 | [AlmaLinux Install Documentation](https://about.gitlab.com/install/#almalinux) | 2029 | <https://almalinux.org/> |
| AlmaLinux 9 | GitLab CE / GitLab EE 16.0.0 | x86_64, aarch64 | [AlmaLinux Install Documentation](https://about.gitlab.com/install/#almalinux) | 2032 | <https://almalinux.org/> |
-| CentOS 7 | GitLab CE / GitLab EE 7.10.0 | x86_64 | [CentOS Install Documentation](https://about.gitlab.com/install/#centos-7) | June 2024 | <https://wiki.centos.org/About/Product> |
+| CentOS 7 | GitLab CE / GitLab EE 7.10.0 | x86_64 | [CentOS Install Documentation](https://about.gitlab.com/install/#centos-7) | June 2024 | <https://www.centos.org/about/> |
| Debian 10 | GitLab CE / GitLab EE 12.2.0 | amd64, arm64 | [Debian Install Documentation](https://about.gitlab.com/install/#debian) | 2024 | <https://wiki.debian.org/LTS> |
| Debian 11 | GitLab CE / GitLab EE 14.6.0 | amd64, arm64 | [Debian Install Documentation](https://about.gitlab.com/install/#debian) | 2026 | <https://wiki.debian.org/LTS> |
| Debian 12 | GitLab CE / GitLab EE 16.1.0 | amd64, arm64 | [Debian Install Documentation](https://about.gitlab.com/install/#debian) | TBD | <https://wiki.debian.org/LTS> |
diff --git a/doc/administration/packages/container_registry.md b/doc/administration/packages/container_registry.md
index dcc6b768eed..74dd71c19bf 100644
--- a/doc/administration/packages/container_registry.md
+++ b/doc/administration/packages/container_registry.md
@@ -9,7 +9,11 @@ info: To determine the technical writer assigned to the Stage/Group associated w
With the GitLab Container Registry, every project can have its
own space to store Docker images.
-Read more about the Docker Registry in [the Docker documentation](https://docs.docker.com/registry/introduction/).
+For more details about the Distribution Registry:
+
+- [Configuration](https://distribution.github.io/distribution/about/configuration/)
+- [Storage drivers](https://distribution.github.io/distribution/storage-drivers/)
+- [Deploy a registry server](https://distribution.github.io/distribution/about/deploying/)
This document is the administrator's guide. To learn how to use the GitLab Container
Registry, see the [user documentation](../../user/packages/container_registry/index.md).
@@ -33,14 +37,12 @@ Otherwise, the Container Registry is not enabled. To enable it:
The Container Registry works under HTTPS by default. You can use HTTP
but it's not recommended and is beyond the scope of this document.
-Read the [insecure Registry documentation](https://docs.docker.com/registry/insecure/)
-if you want to implement this.
### Self-compiled installations
If you self-compiled your GitLab installation:
-1. You must [deploy a registry](https://docs.docker.com/registry/deploying/) using the image corresponding to the
+1. You must deploy a registry using the image corresponding to the
version of GitLab you are installing
(for example: `registry.gitlab.com/gitlab-org/build/cng/gitlab-container-registry:v3.15.0-gitlab`)
1. After the installation is complete, to enable it, you must configure the Registry's
@@ -70,15 +72,15 @@ Where:
| `host` | The host URL under which the Registry runs and users can use. |
| `port` | The port the external Registry domain listens on. |
| `api_url` | The internal API URL under which the Registry is exposed. It defaults to `http://localhost:5000`. Do not change this unless you are setting up an [external Docker registry](#use-an-external-container-registry-with-gitlab-as-an-auth-endpoint). |
-| `key` | The private key location that is a pair of Registry's `rootcertbundle`. Read the [token auth configuration documentation](https://docs.docker.com/registry/configuration/#token). |
-| `path` | This should be the same directory like specified in Registry's `rootdirectory`. Read the [storage configuration documentation](https://docs.docker.com/registry/configuration/#storage). This path needs to be readable by the GitLab user, the web-server user and the Registry user. Read more in [#configure-storage-for-the-container-registry](#configure-storage-for-the-container-registry). |
-| `issuer` | This should be the same value as configured in Registry's `issuer`. Read the [token auth configuration documentation](https://docs.docker.com/registry/configuration/#token). |
+| `key` | The location of the private key that pairs with the Registry's `rootcertbundle`. |
+| `path` | This should be the same directory as specified in the Registry's `rootdirectory`. This path must be readable by the GitLab user, the web-server user, and the Registry user. Read more in [#configure-storage-for-the-container-registry](#configure-storage-for-the-container-registry). |
+| `issuer` | This should be the same value as configured in the Registry's `issuer`. |
A Registry init file is not shipped with GitLab if you install it from source.
Hence, [restarting GitLab](../restart_gitlab.md#self-compiled-installations) does not restart the Registry should
you modify its settings. Read the upstream documentation on how to achieve that.
-At the **absolute** minimum, make sure your [Registry configuration](https://docs.docker.com/registry/configuration/#auth)
+At the **absolute** minimum, make sure your Registry configuration
has `container_registry` as the service and `https://gitlab.example.com/jwt/auth`
as the realm:
@@ -383,9 +385,6 @@ The different supported drivers are:
Although most S3 compatible services (like [MinIO](https://min.io/)) should work with the Container Registry, we only guarantee support for AWS S3. Because we cannot assert the correctness of third-party S3 implementations, we can debug issues, but we cannot patch the registry unless an issue is reproducible against an AWS S3 bucket.
-Read more about the individual driver's configuration options in the
-[Docker Registry docs](https://docs.docker.com/registry/configuration/#storage).
-
### Use file system
If you want to store your images on the file system, you can change the storage
@@ -532,14 +531,14 @@ To configure the `gcs` storage driver for a Linux package installation:
}
```
- GitLab supports all [available parameters](https://docs.docker.com/registry/storage-drivers/gcs/).
+ GitLab supports all available parameters for this storage driver.
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.
#### Self-compiled installations
Configuring the storage driver is done in the registry configuration YAML file created
-when you [deployed your Docker registry](https://docs.docker.com/registry/deploying/).
+when you deployed your Docker registry.
`s3` storage driver example:
@@ -638,11 +637,11 @@ you can pull from the Container Registry, but you cannot push.
<!--- start_remove The following content will be removed on remove_date: '2023-10-22' -->
WARNING:
-The default configuration for the storage driver is scheduled to be [changed](https://gitlab.com/gitlab-org/container-registry/-/issues/854) in GitLab 16.0. The storage driver will use `/` as the default root directory. You can add `trimlegacyrootprefix: false` to your current configuration now to avoid any disruptions. For more information, see the [Container Registry configuration](https://gitlab.com/gitlab-org/container-registry/-/tree/master/docs-gitlab#azure-storage-driver) documentation.
+The default configuration for the storage driver is scheduled to be [changed](https://gitlab.com/gitlab-org/container-registry/-/issues/854) in GitLab 16.0. The storage driver will use `/` as the default root directory. You can add `trimlegacyrootprefix: false` to your current configuration now to avoid any disruptions. For more information, see the [Container Registry configuration](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/upstream-differences.md#azure-storage-driver) documentation.
<!--- end_remove -->
When moving from an existing file system or another object storage provider to Azure Object Storage, you must configure the registry to use the standard root directory.
-Configure it by setting [`trimlegacyrootprefix: true`](https://gitlab.com/gitlab-org/container-registry/-/blob/a3f64464c3ec1c5a599c0a2daa99ebcbc0100b9a/docs-gitlab/README.md#azure-storage-driver) in the Azure storage driver section of the registry configuration.
+Configure it by setting [`trimlegacyrootprefix: true`](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/upstream-differences.md#azure-storage-driver) in the Azure storage driver section of the registry configuration.
Without this configuration, the Azure storage driver uses `//` instead of `/` as the first section of the root path, rendering the migrated images inaccessible.
::Tabs
@@ -675,7 +674,7 @@ storage:
::EndTabs
-By default, Azure Storage Driver uses the `core.windows.net` realm. You can set another value for `realm` in the `azure` section (for example, `core.usgovcloudapi.net` for Azure Government Cloud). For more information, see the [Docker documentation](https://docs.docker.com/registry/storage-drivers/azure/).
+By default, Azure Storage Driver uses the `core.windows.net` realm. You can set another value for `realm` in the `azure` section (for example, `core.usgovcloudapi.net` for Azure Government Cloud).
### Disable redirect for storage driver
@@ -876,8 +875,7 @@ You can use GitLab as an auth endpoint with an external container registry.
- `gitlab_rails['registry_api_url'] = "http://<external_registry_host>:5000"`
must be changed to match the host where Registry is installed.
It must also specify `https` if the external registry is
- configured to use TLS. Read more on the
- [Docker registry documentation](https://docs.docker.com/registry/deploying/).
+ configured to use TLS.
1. A certificate-key pair is required for GitLab and the external container
registry to communicate securely. You need to create a certificate-key
@@ -972,7 +970,7 @@ To configure a notification endpoint for a Linux package installation:
:::TabTitle Self-compiled (source)
Configuring the notification endpoint is done in your registry configuration YAML file created
-when you [deployed your Docker registry](https://docs.docker.com/registry/deploying/).
+when you deployed your Docker registry.
Example:
@@ -1028,7 +1026,7 @@ projects.each do |p|
end
if project_total_size > 0
- projects_and_size << [p.project_id, p.creator.id, project_total_size, p.full_path]
+ projects_and_size << [p.project_id, p.creator&.id, project_total_size, p.full_path]
end
end
@@ -1374,7 +1372,7 @@ By default, the container registry uses object storage to persist metadata
related to container images. This method to store metadata limits how efficiently
the data can be accessed, especially data spanning multiple images, such as when listing tags.
By using a database to store this data, many new features are possible, including
-[online garbage collection](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs-gitlab/db/online-garbage-collection.md)
+[online garbage collection](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/gitlab/online-garbage-collection.md)
which removes old data automatically with zero downtime.
This database works in conjunction with the object storage already used by the registry, but does not replace object storage.
@@ -1580,7 +1578,7 @@ You can add a configuration option for backwards compatibility.
:::TabTitle Self-compiled (source)
-1. Edit the YAML configuration file you created when you [deployed the registry](https://docs.docker.com/registry/deploying/). Add the following snippet:
+1. Edit the YAML configuration file you created when you deployed the registry. Add the following snippet:
```yaml
compatibility:
@@ -1632,7 +1630,7 @@ and a simple solution would be to enable relative URLs in the Registry.
:::TabTitle Self-compiled (source)
-1. Edit the YAML configuration file you created when you [deployed the registry](https://docs.docker.com/registry/deploying/). Add the following snippet:
+1. Edit the YAML configuration file you created when you deployed the registry. Add the following snippet:
```yaml
http:
diff --git a/doc/administration/pages/index.md b/doc/administration/pages/index.md
index f64c53e28a2..97acbf717fe 100644
--- a/doc/administration/pages/index.md
+++ b/doc/administration/pages/index.md
@@ -200,13 +200,13 @@ then run `gitlab-ctl reconfigure`. For more information, read
**Requirements:**
- [Wildcard DNS setup](#dns-configuration)
-- [TLS-terminating load balancer](../../install/aws/manual_install_aws.md#load-balancer)
+- [TLS-terminating load balancer](../../install/aws/index.md#load-balancer)
---
URL scheme: `https://<namespace>.example.io/<project_slug>`
-This setup is primarily intended to be used when [installing a GitLab POC on Amazon Web Services](../../install/aws/manual_install_aws.md). This includes a TLS-terminating [classic load balancer](../../install/aws/manual_install_aws.md#load-balancer) that listens for HTTPS connections, manages TLS certificates, and forwards HTTP traffic to the instance.
+This setup is primarily intended to be used when [installing a GitLab POC on Amazon Web Services](../../install/aws/index.md). This includes a TLS-terminating [classic load balancer](../../install/aws/index.md#load-balancer) that listens for HTTPS connections, manages TLS certificates, and forwards HTTP traffic to the instance.
1. In `/etc/gitlab/gitlab.rb` specify the following configuration:
diff --git a/doc/administration/postgresql/external.md b/doc/administration/postgresql/external.md
index a9f857d8f00..b9bfda80b83 100644
--- a/doc/administration/postgresql/external.md
+++ b/doc/administration/postgresql/external.md
@@ -63,7 +63,6 @@ pg_dump: error: Error message from server: SSL SYSCALL error: EOF detected
To resolve this error, ensure that you are meeting the
[minimum PostgreSQL requirements](../../install/requirements.md#postgresql-requirements). After
-upgrading your RDS instance to a suitable version, you should be able to perform a backup without
-this error. Refer to issue #64763
-([Segmentation fault citing `LooseForeignKeys::CleanupWorker` causes complete database restart](https://gitlab.com/gitlab-org/gitlab/-/issues/364763))
-for more information.
+upgrading your RDS instance to a [supported version](../../install/requirements.md#database),
+you should be able to perform a backup without this error.
+See [issue 64763](https://gitlab.com/gitlab-org/gitlab/-/issues/364763) for more information.
diff --git a/doc/administration/postgresql/external_metrics.md b/doc/administration/postgresql/external_metrics.md
new file mode 100644
index 00000000000..fc4c5652a18
--- /dev/null
+++ b/doc/administration/postgresql/external_metrics.md
@@ -0,0 +1,33 @@
+---
+stage: Data Stores
+group: Database
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Monitoring and logging setup for external databases
+
+External PostgreSQL database systems have different logging options for monitoring performance and troubleshooting. However, they are not enabled by default. This page provides logging recommendations for self-managed PostgreSQL, and for some major providers of PostgreSQL managed services.
+
+## Recommended PostgreSQL logging settings
+
+You should enable the following logging settings:
+
+- `log_statement=ddl`: logs changes to the database model definition (DDL), such as `CREATE`, `ALTER`, or `DROP` of objects. This helps track recent model changes that could be causing performance issues, and helps identify security breaches and human errors.
+- `log_lock_waits=on`: logs processes that hold [locks](https://www.postgresql.org/docs/current/explicit-locking.html) for long periods, a common cause of poor query performance.
+- `log_temp_files=0`: logs all usage of temporary files. Heavy or unusual temporary file usage can indicate poor query performance.
+- `log_autovacuum_min_duration=0`: logs all autovacuum executions. Autovacuum is a key component of overall PostgreSQL engine performance, and this log is essential for troubleshooting and tuning if dead tuples are not being removed from tables.
+- `log_min_duration_statement=1000`: logs slow queries (slower than 1 second).
+
+The full description of these parameters can be found in the
+[PostgreSQL error reporting and logging documentation](https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT).
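+
+On self-managed PostgreSQL, these parameters can be changed at runtime with `ALTER SYSTEM` followed by a configuration reload. A minimal sketch, assuming you can connect to the database as a superuser (`<database_host>` and `<superuser>` are placeholders):
+
+```shell
+# Apply the recommended logging settings and reload the configuration (no restart required)
+psql --host=<database_host> --username=<superuser> --dbname=postgres <<'SQL'
+ALTER SYSTEM SET log_statement = 'ddl';
+ALTER SYSTEM SET log_lock_waits = 'on';
+ALTER SYSTEM SET log_temp_files = 0;
+ALTER SYSTEM SET log_autovacuum_min_duration = 0;
+ALTER SYSTEM SET log_min_duration_statement = 1000;
+SELECT pg_reload_conf();
+SQL
+```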
+
+## Amazon RDS
+
+The Amazon Relational Database Service (RDS) provides a large number of [monitoring metrics](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Monitoring.html) and [logging interfaces](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Monitor_Logs_Events.html). Here are a few you should configure:
+
+- Change all of the above [recommended PostgreSQL logging settings](#recommended-postgresql-logging-settings) through [RDS Parameter Groups](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html). See the example after this list.
+  - Because the recommended logging parameters are [dynamic in RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.Parameters.html), you don't need to reboot after changing these settings.
+  - The PostgreSQL logs can be observed through the [RDS console](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/logs-events-streams-console.html).
+- Enable [RDS Performance Insights](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html) to visualize your database load with many important performance metrics of a PostgreSQL database engine.
+- Enable [RDS Enhanced Monitoring](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.html) to monitor the operating system metrics. These metrics can indicate bottlenecks in your underlying hardware and OS that are impacting your database performance.
+  - In production environments, set the monitoring interval to 10 seconds (or less) to capture micro bursts of resource usage that can be the cause of many performance issues. Set `Granularity=10` in the console or `monitoring-interval=10` in the CLI.
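+
+For example, a minimal sketch of applying one of the recommended settings through the AWS CLI. The parameter group name `gitlab-postgres-params` is hypothetical; use the parameter group attached to your RDS instance:
+
+```shell
+aws rds modify-db-parameter-group \
+  --db-parameter-group-name gitlab-postgres-params \
+  --parameters "ParameterName=log_min_duration_statement,ParameterValue=1000,ApplyMethod=immediate"
+```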
diff --git a/doc/administration/postgresql/external_upgrade.md b/doc/administration/postgresql/external_upgrade.md
new file mode 100644
index 00000000000..3e2c3b09853
--- /dev/null
+++ b/doc/administration/postgresql/external_upgrade.md
@@ -0,0 +1,48 @@
+---
+stage: Data Stores
+group: Database
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Upgrading external PostgreSQL databases
+
+When upgrading your PostgreSQL database engine, it is important to follow all steps
+recommended by the PostgreSQL community and your cloud provider. Two
+kinds of upgrades exist for PostgreSQL databases:
+
+- **Minor version upgrades**: These include only bug and security fixes. They are
+ always backward-compatible with your existing application database model.
+
+ The minor version upgrade process consists of replacing the PostgreSQL binaries
+ and restarting the database service. The data directory remains unchanged.
+
+- **Major version upgrades**: These change the internal storage format and the database
+ catalog. As a result, object statistics used by the query optimizer
+ [are not transferred to the new version](https://www.postgresql.org/docs/current/pgupgrade.html)
+ and must be rebuilt with `ANALYZE`.
+
+ Not following the documented major version upgrade process often results in
+ poor database performance and high CPU use on the database server.
+
+All major cloud providers support in-place major version upgrades of database
+instances, using the `pg_upgrade` utility. However, you must follow the pre- and
+post-upgrade steps to reduce the risk of performance degradation or database disruption.
+
+Carefully read the major version upgrade steps for your external database platform:
+
+- [Amazon RDS for PostgreSQL](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html#USER_UpgradeDBInstance.PostgreSQL.MajorVersion.Process)
+- [Azure Database for PostgreSQL Flexible Server](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-major-version-upgrade)
+- [Google Cloud SQL for PostgreSQL](https://cloud.google.com/sql/docs/postgres/upgrade-major-db-version-inplace)
+- [PostgreSQL community `pg_upgrade`](https://www.postgresql.org/docs/current/pgupgrade.html)
+
+## Always `ANALYZE` your database after a major version upgrade
+
+It is mandatory to run the [`ANALYZE` operation](https://www.postgresql.org/docs/current/sql-analyze.html)
+to refresh the `pg_statistic` table after a major version upgrade, because optimizer statistics
+[are not transferred by `pg_upgrade`](https://www.postgresql.org/docs/current/pgupgrade.html).
+Run `ANALYZE` for all databases on the upgraded PostgreSQL service, instance, or cluster.
+
+To speed up the `ANALYZE` operation, use the
+[`vacuumdb` utility](https://www.postgresql.org/docs/current/app-vacuumdb.html)
+with `--analyze-only --jobs=njobs`, which runs `njobs` `ANALYZE` commands in
+parallel.
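+
+For example, a minimal sketch of analyzing all databases on the upgraded instance with four parallel jobs (`<database_host>` and `<superuser>` are placeholders; adjust the job count to your hardware):
+
+```shell
+vacuumdb --host=<database_host> --username=<superuser> \
+  --all --analyze-only --jobs=4
+```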
diff --git a/doc/administration/postgresql/index.md b/doc/administration/postgresql/index.md
index af0a86c3d72..4d73ba49846 100644
--- a/doc/administration/postgresql/index.md
+++ b/doc/administration/postgresql/index.md
@@ -30,6 +30,10 @@ your own external PostgreSQL server.
Read how to [set up an external PostgreSQL instance](external.md).
+When setting up an external database, there are monitoring and logging settings that are useful for troubleshooting various database-related issues.
+Read more about [monitoring and logging setup for external databases](external_metrics.md).
+
### PostgreSQL replication and failover for Linux package installations **(PREMIUM SELF)**
This setup is for when you have installed GitLab using the
@@ -47,3 +51,4 @@ Read how to [set up PostgreSQL replication and failover](replication_and_failove
- [Moving GitLab databases to a different PostgreSQL instance](moving.md)
- [Multiple databases](multiple_databases.md)
- [Database guides for GitLab development](../../development/database/index.md)
+- [Upgrade external database](external_upgrade.md)
diff --git a/doc/administration/raketasks/geo.md b/doc/administration/raketasks/geo.md
index c6bc891f529..a4b14b132db 100644
--- a/doc/administration/raketasks/geo.md
+++ b/doc/administration/raketasks/geo.md
@@ -2,82 +2,14 @@
stage: Systems
group: Geo
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+remove_date: '2024-02-06'
+redirect_to: '../../update/deprecations.md#geo-housekeeping-rake-tasks'
---
-# Geo Rake tasks **(PREMIUM SELF)**
+# Geo Rake tasks (removed) **(PREMIUM SELF)**
-The following Rake tasks are for [Geo installations](../geo/index.md).
-See also [troubleshooting Geo](../geo/replication/troubleshooting.md) for additional Geo Rake tasks.
-
-## Git housekeeping
-
-There are few tasks you can run to schedule a Git housekeeping to start at the
-next repository sync in a **secondary** node:
-
-### Incremental Repack
-
-This is equivalent of running `git repack -d` on a _bare_ repository.
-
-- Linux package installations:
-
- ```shell
- sudo gitlab-rake geo:git:housekeeping:incremental_repack
- ```
-
-- Self-compiled installations:
-
- ```shell
- sudo -u git -H bundle exec rake geo:git:housekeeping:incremental_repack RAILS_ENV=production
- ```
-
-### Full Repack
-
-This is equivalent of running `git repack -d -A --pack-kept-objects` on a
-_bare_ repository which optionally, writes a reachability bitmap index
-when this is enabled in GitLab.
-
-- Linux package installations:
-
- ```shell
- sudo gitlab-rake geo:git:housekeeping:full_repack
- ```
-
-- Self-compiled installations:
-
- ```shell
- sudo -u git -H bundle exec rake geo:git:housekeeping:full_repack RAILS_ENV=production
- ```
-
-### GC
-
-This is equivalent of running `git gc` on a _bare_ repository, optionally writing
-a reachability bitmap index when this is enabled in GitLab.
-
-- Linux package installations:
-
- ```shell
- sudo gitlab-rake geo:git:housekeeping:gc
- ```
-
-- Self-compiled installations:
-
- ```shell
- sudo -u git -H bundle exec rake geo:git:housekeeping:gc RAILS_ENV=production
- ```
-
-## Remove orphaned project registries
-
-Under certain conditions your project registry can contain obsolete records, you
-can remove them using the Rake task `geo:run_orphaned_project_registry_cleaner`:
-
-- Linux package installations:
-
- ```shell
- sudo gitlab-rake geo:run_orphaned_project_registry_cleaner
- ```
-
-- Self-compiled installations:
-
- ```shell
- sudo -u git -H bundle exec rake geo:run_orphaned_project_registry_cleaner RAILS_ENV=production
- ```
+The Geo housekeeping Rake tasks were
+[deprecated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/125927) in
+GitLab 16.3 and
+[removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/130565) in
+GitLab 16.5.
diff --git a/doc/administration/raketasks/github_import.md b/doc/administration/raketasks/github_import.md
index 82f3ffa2193..a4d52899f21 100644
--- a/doc/administration/raketasks/github_import.md
+++ b/doc/administration/raketasks/github_import.md
@@ -4,11 +4,15 @@ group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
-# GitHub import Rake task **(FREE SELF)**
+# GitHub import Rake task (deprecated) **(FREE SELF)**
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/390690) in GitLab 15.9, Rake task no longer automatically creates namespaces or groups that don't exist.
> - Requirement for Maintainer role instead of Developer role introduced in GitLab 16.0 and backported to GitLab 15.11.1 and GitLab 15.10.5.
+WARNING:
+This feature was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/428225) in GitLab 16.6 and is planned for
+removal in GitLab 17.0. Use the [GitHub import feature](../../user/project/import/github.md) instead.
+
To retrieve and import GitHub repositories, you need a [GitHub personal access token](https://github.com/settings/tokens).
A username should be passed as the second argument to the Rake task,
which becomes the owner of the project. You can resume an import
diff --git a/doc/administration/reference_architectures/10k_users.md b/doc/administration/reference_architectures/10k_users.md
index 2e208c4eca1..2203f4b3a02 100644
--- a/doc/administration/reference_architectures/10k_users.md
+++ b/doc/administration/reference_architectures/10k_users.md
@@ -6,18 +6,21 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Reference architecture: up to 10,000 users **(PREMIUM SELF)**
-This page describes GitLab reference architecture for up to 10,000 users. For a
-full list of reference architectures, see
+This page describes the GitLab reference architecture designed to handle a load of up to 10,000 users
+with notable headroom.
+
+For a full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
-> - **Supported users (approximate):** 10,000
+NOTE:
+Before deploying this architecture, it's recommended to read through the [main documentation](index.md) first,
+specifically the [Before you start](index.md#before-you-start) and [Deciding which architecture to use](index.md#deciding-which-architecture-to-use) sections.
+
+> - **Target load:** API: 200 RPS, Web: 20 RPS, Git (Pull): 20 RPS, Git (Push): 4 RPS
> - **High Availability:** Yes ([Praefect](#configure-praefect-postgresql) needs a third-party PostgreSQL solution for HA)
> - **Estimated Costs:** [See cost table](index.md#cost-to-run)
> - **Cloud Native Hybrid Alternative:** [Yes](#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative)
-> - **Validation and test results:** The Quality Engineering team does [regular smoke and performance tests](index.md#validation-and-test-results) to ensure the reference architectures remain compliant
-> - **Test requests per second (RPS) rates:** API: 200 RPS, Web: 20 RPS, Git (Pull): 20 RPS, Git (Push): 4 RPS
-> - **[Latest Results](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/10k)**
-> - **Unsure which Reference Architecture to use?** [Go to this guide for more info](index.md#deciding-which-architecture-to-use).
+> - **Unsure which Reference Architecture to use?** [Go to this guide for more info](index.md#deciding-which-architecture-to-use)
| Service | Nodes | Configuration | GCP | AWS |
|------------------------------------------|-------|-------------------------|------------------|----------------|
@@ -144,6 +147,27 @@ monitor .[#7FFFD4,norank]u--> elb
Before starting, see the [requirements](index.md#requirements) for reference architectures.
+## Testing methodology
+
+The 10k architecture is designed to cover a large majority of workflows and is regularly
+[smoke and performance tested](index.md#validation-and-test-results) by the Quality Engineering team
+against the following endpoint throughput targets:
+
+- API: 200 RPS
+- Web: 20 RPS
+- Git (Pull): 20 RPS
+- Git (Push): 4 RPS
+
+The above targets were selected based on real customer data of total environmental loads corresponding to the user count,
+including CI and other workloads, along with substantial additional headroom added.
+
+If you have metrics suggesting regularly higher throughput than the above endpoint targets, [large monorepos](index.md#large-monorepos),
+or notable [additional workloads](index.md#additional-workloads), these can notably impact the performance of the environment, and [further adjustments may be required](index.md#scaling-an-environment).
+If this applies to you, we strongly recommend referring to the linked documentation, as well as reaching out to your [Customer Success Manager](https://handbook.gitlab.com/job-families/sales/customer-success-management/) or our [Support team](https://about.gitlab.com/support/) for further guidance.
+
+Testing is done regularly via our [GitLab Performance Tool (GPT)](https://gitlab.com/gitlab-org/quality/performance) and its dataset, which is available for anyone to use.
+The results of this testing are [available publicly on the GPT wiki](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest). For more information on our testing strategy [refer to this section of the documentation](index.md#validation-and-test-results).
+
## Setup components
To set up GitLab and its components to accommodate up to 10,000 users:
@@ -1307,7 +1331,7 @@ This is how this would work with a Linux package PostgreSQL setup:
1. Create the new user `praefect`, replacing `<praefect_postgresql_password>`:
```shell
- CREATE ROLE praefect WITH LOGIN CREATEDB PASSWORD <praefect_postgresql_password>;
+ CREATE ROLE praefect WITH LOGIN CREATEDB PASSWORD '<praefect_postgresql_password>';
```
1. Reconnect to the PostgreSQL server, this time as the `praefect` user:
@@ -1763,7 +1787,8 @@ Updates to example must be made at:
-->
```ruby
- roles ["sidekiq_role"]
+ # https://docs.gitlab.com/omnibus/roles/#sidekiq-roles
+ roles(["sidekiq_role"])
# External URL
## This should match the URL of the external load balancer
diff --git a/doc/administration/reference_architectures/1k_users.md b/doc/administration/reference_architectures/1k_users.md
index 2f7c8209a44..362da0bd7c6 100644
--- a/doc/administration/reference_architectures/1k_users.md
+++ b/doc/administration/reference_architectures/1k_users.md
@@ -6,24 +6,18 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Reference architecture: up to 1,000 users **(FREE SELF)**
-This page describes GitLab reference architecture for up to 1,000 users. For a
-full list of reference architectures, see
-[Available reference architectures](index.md#available-reference-architectures).
+This page describes the GitLab reference architecture designed to handle a load of up to 1,000 users
+with notable headroom (non-HA standalone).
-If you are serving up to 1,000 users, and you don't have strict availability
-requirements, a [standalone](index.md#standalone-non-ha) single-node solution with
-frequent backups is appropriate for
-many organizations.
+For a full list of reference architectures, see
+[Available reference architectures](index.md#available-reference-architectures).
-> - **Supported users (approximate):** 1,000
+> - **Target load:** API: 20 RPS, Web: 2 RPS, Git (Pull): 2 RPS, Git (Push): 1 RPS
> - **High Availability:** No. For a highly-available environment, you can
> follow a modified [3K reference architecture](3k_users.md#supported-modifications-for-lower-user-counts-ha).
> - **Estimated Costs:** [See cost table](index.md#cost-to-run)
> - **Cloud Native Hybrid:** No. For a cloud native hybrid environment, you
> can follow a [modified hybrid reference architecture](#cloud-native-hybrid-reference-architecture-with-helm-charts).
-> - **Validation and test results:** The Quality Engineering team does [regular smoke and performance tests](index.md#validation-and-test-results) to ensure the reference architectures remain compliant
-> - **Test requests per second (RPS) rates:** API: 20 RPS, Web: 2 RPS, Git (Pull): 2 RPS, Git (Push): 1 RPS
-> - **[Latest Results](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/1k)**
> - **Unsure which Reference Architecture to use?** [Go to this guide for more info](index.md#deciding-which-architecture-to-use).
| Users | Configuration | GCP | AWS | Azure |
@@ -73,6 +67,27 @@ WARNING:
**However, if you have [large monorepos](index.md#large-monorepos) (larger than several gigabytes) or [additional workloads](index.md#additional-workloads) these can *significantly* impact the performance of the environment and further adjustments may be required.**
If this applies to you, we strongly recommended referring to the linked documentation as well as reaching out to your [Customer Success Manager](https://handbook.gitlab.com/job-families/sales/customer-success-management/) or our [Support team](https://about.gitlab.com/support/) for further guidance.
+## Testing methodology
+
+The 1k architecture is designed to cover a large majority of workflows and is regularly
+[smoke and performance tested](index.md#validation-and-test-results) by the Quality Engineering team
+against the following endpoint throughput targets:
+
+- API: 20 RPS
+- Web: 2 RPS
+- Git (Pull): 2 RPS
+- Git (Push): 1 RPS
+
+The above targets were selected based on real customer data of the total environment load corresponding to the user count,
+including CI and other workloads, with substantial additional headroom added.
+
+If your metrics suggest regularly higher throughput than the above endpoint targets, [large monorepos](index.md#large-monorepos),
+or notable [additional workloads](index.md#additional-workloads), these can notably impact the performance of the environment and [further adjustments may be required](index.md#scaling-an-environment).
+If this applies to you, we strongly recommend referring to the linked documentation as well as reaching out to your [Customer Success Manager](https://handbook.gitlab.com/job-families/sales/customer-success-management/) or our [Support team](https://about.gitlab.com/support/) for further guidance.
+
+Testing is done regularly via our [GitLab Performance Tool (GPT)](https://gitlab.com/gitlab-org/quality/performance) and its dataset, which is available for anyone to use.
+The results of this testing are [available publicly on the GPT wiki](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest). For more information on our testing strategy, [refer to this section of the documentation](index.md#validation-and-test-results).
+
## Setup instructions
To install GitLab for this default reference architecture, use the standard
diff --git a/doc/administration/reference_architectures/25k_users.md b/doc/administration/reference_architectures/25k_users.md
index 355fe45cc2f..a5d44edf877 100644
--- a/doc/administration/reference_architectures/25k_users.md
+++ b/doc/administration/reference_architectures/25k_users.md
@@ -6,18 +6,21 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Reference architecture: up to 25,000 users **(PREMIUM SELF)**
-This page describes GitLab reference architecture for up to 25,000 users. For a
-full list of reference architectures, see
+This page describes the GitLab reference architecture designed for the load of up to 25,000 users
+with notable headroom.
+
+For a full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
-> - **Supported users (approximate):** 25,000
+NOTE:
+Before deploying this architecture, it's recommended to read through the [main documentation](index.md) first,
+specifically the [Before you start](index.md#before-you-start) and [Deciding which architecture to use](index.md#deciding-which-architecture-to-use) sections.
+
+> - **Target load:** API: 500 RPS, Web: 50 RPS, Git (Pull): 50 RPS, Git (Push): 10 RPS
> - **High Availability:** Yes ([Praefect](#configure-praefect-postgresql) needs a third-party PostgreSQL solution for HA)
> - **Estimated Costs:** [See cost table](index.md#cost-to-run)
> - **Cloud Native Hybrid Alternative:** [Yes](#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative)
-> - **Validation and test results:** The Quality Engineering team does [regular smoke and performance tests](index.md#validation-and-test-results) to ensure the reference architectures remain compliant
-> - **Test requests per second (RPS) rates:** API: 500 RPS, Web: 50 RPS, Git (Pull): 50 RPS, Git (Push): 10 RPS
-> - **[Latest Results](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/25k)**
-> - **Unsure which Reference Architecture to use?** [Go to this guide for more info](index.md#deciding-which-architecture-to-use).
+> - **Unsure which Reference Architecture to use?** [Go to this guide for more info](index.md#deciding-which-architecture-to-use)
| Service | Nodes | Configuration | GCP | AWS |
|------------------------------------------|-------|-------------------------|------------------|--------------|
@@ -144,6 +147,27 @@ monitor .[#7FFFD4,norank]u--> elb
Before starting, see the [requirements](index.md#requirements) for reference architectures.
+## Testing methodology
+
+The 25k architecture is designed to cover a large majority of workflows and is regularly
+[smoke and performance tested](index.md#validation-and-test-results) by the Quality Engineering team
+against the following endpoint throughput targets:
+
+- API: 500 RPS
+- Web: 50 RPS
+- Git (Pull): 50 RPS
+- Git (Push): 10 RPS
+
+The above targets were selected based on real customer data of the total environment load corresponding to the user count,
+including CI and other workloads, with substantial additional headroom added.
+
+If your metrics suggest regularly higher throughput than the above endpoint targets, [large monorepos](index.md#large-monorepos),
+or notable [additional workloads](index.md#additional-workloads), these can notably impact the performance of the environment and [further adjustments may be required](index.md#scaling-an-environment).
+If this applies to you, we strongly recommend referring to the linked documentation as well as reaching out to your [Customer Success Manager](https://handbook.gitlab.com/job-families/sales/customer-success-management/) or our [Support team](https://about.gitlab.com/support/) for further guidance.
+
+Testing is done regularly via our [GitLab Performance Tool (GPT)](https://gitlab.com/gitlab-org/quality/performance) and its dataset, which is available for anyone to use.
+The results of this testing are [available publicly on the GPT wiki](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest). For more information on our testing strategy, [refer to this section of the documentation](index.md#validation-and-test-results).
+
## Setup components
To set up GitLab and its components to accommodate up to 25,000 users:
@@ -1324,7 +1348,7 @@ This is how this would work with a Linux package PostgreSQL setup:
1. Create the new user `praefect`, replacing `<praefect_postgresql_password>`:
```shell
- CREATE ROLE praefect WITH LOGIN CREATEDB PASSWORD <praefect_postgresql_password>;
+ CREATE ROLE praefect WITH LOGIN CREATEDB PASSWORD '<praefect_postgresql_password>';
```
1. Reconnect to the PostgreSQL server, this time as the `praefect` user:
@@ -1780,7 +1804,8 @@ Updates to example must be made at:
-->
```ruby
- roles ["sidekiq_role"]
+ # https://docs.gitlab.com/omnibus/roles/#sidekiq-roles
+ roles(["sidekiq_role"])
# External URL
## This should match the URL of the external load balancer
diff --git a/doc/administration/reference_architectures/2k_users.md b/doc/administration/reference_architectures/2k_users.md
index 5814d6c1e2d..fb8b9d8de45 100644
--- a/doc/administration/reference_architectures/2k_users.md
+++ b/doc/administration/reference_architectures/2k_users.md
@@ -6,18 +6,17 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Reference architecture: up to 2,000 users **(FREE SELF)**
-This page describes GitLab reference architecture for up to 2,000 users.
+This page describes the GitLab reference architecture designed for the load of up to 2,000 users
+with notable headroom (non-HA).
+
For a full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
-> - **Supported users (approximate):** 2,000
+> - **Target load:** API: 40 RPS, Web: 4 RPS, Git (Pull): 4 RPS, Git (Push): 1 RPS
> - **High Availability:** No. For a highly-available environment, you can
> follow a modified [3K reference architecture](3k_users.md#supported-modifications-for-lower-user-counts-ha).
> - **Estimated Costs:** [See cost table](index.md#cost-to-run)
> - **Cloud Native Hybrid:** [Yes](#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative)
-> - **Validation and test results:** The Quality Engineering team does [regular smoke and performance tests](index.md#validation-and-test-results) to ensure the reference architectures remain compliant
-> - **Test requests per second (RPS) rates:** API: 40 RPS, Web: 4 RPS, Git (Pull): 4 RPS, Git (Push): 1 RPS
-> - **[Latest Results](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/2k)**
> - **Unsure which Reference Architecture to use?** [Go to this guide for more info](index.md#deciding-which-architecture-to-use).
| Service | Nodes | Configuration | GCP | AWS | Azure |
@@ -81,6 +80,27 @@ monitor .[#7FFFD4,norank]u--> elb
Before starting, see the [requirements](index.md#requirements) for reference architectures.
+## Testing methodology
+
+The 2k architecture is designed to cover a large majority of workflows and is regularly
+[smoke and performance tested](index.md#validation-and-test-results) by the Quality Engineering team
+against the following endpoint throughput targets:
+
+- API: 40 RPS
+- Web: 4 RPS
+- Git (Pull): 4 RPS
+- Git (Push): 1 RPS
+
+The above targets were selected based on real customer data of the total environment load corresponding to the user count,
+including CI and other workloads, with substantial additional headroom added.
+
+If your metrics suggest regularly higher throughput than the above endpoint targets, [large monorepos](index.md#large-monorepos),
+or notable [additional workloads](index.md#additional-workloads), these can notably impact the performance of the environment and [further adjustments may be required](index.md#scaling-an-environment).
+If this applies to you, we strongly recommend referring to the linked documentation as well as reaching out to your [Customer Success Manager](https://handbook.gitlab.com/job-families/sales/customer-success-management/) or our [Support team](https://about.gitlab.com/support/) for further guidance.
+
+Testing is done regularly via our [GitLab Performance Tool (GPT)](https://gitlab.com/gitlab-org/quality/performance) and its dataset, which is available for anyone to use.
+The results of this testing are [available publicly on the GPT wiki](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest). For more information on our testing strategy, [refer to this section of the documentation](index.md#validation-and-test-results).
+
## Setup components
To set up GitLab and its components to accommodate up to 2,000 users:
@@ -609,7 +629,8 @@ Updates to example must be made at:
-->
```ruby
- roles ["sidekiq_role"]
+ # https://docs.gitlab.com/omnibus/roles/#sidekiq-roles
+ roles(["sidekiq_role"])
# External URL
external_url 'https://gitlab.example.com'
diff --git a/doc/administration/reference_architectures/3k_users.md b/doc/administration/reference_architectures/3k_users.md
index 1fd8239c93f..73b0291ab95 100644
--- a/doc/administration/reference_architectures/3k_users.md
+++ b/doc/administration/reference_architectures/3k_users.md
@@ -6,27 +6,20 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Reference architecture: up to 3,000 users **(PREMIUM SELF)**
-This GitLab reference architecture can help you deploy GitLab to up to 3,000
-users, and then maintain uptime and access for those users. You can also use
-this architecture to provide improved GitLab uptime and availability for fewer
-than 3,000 users. For fewer users, reduce the stated node sizes as needed.
+This page describes the GitLab reference architecture designed for the load of up to 3,000 users
+with notable headroom.
-If maintaining a high level of uptime for your GitLab environment isn't a
-requirement, or if you don't have the expertise to maintain this sort of
-environment, we recommend using the non-HA [2,000-user reference architecture](2k_users.md)
-for your GitLab installation. If HA is still a requirement, there's several supported
-tweaks you can make to this architecture to reduce complexity as detailed here.
+This architecture is the smallest one available with HA built in. If you require HA but
+have a lower user count or total load, the [Supported modifications for lower user counts](#supported-modifications-for-lower-user-counts-ha)
+section details how to reduce this architecture's size while maintaining HA.
For a full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
-> - **Supported users (approximate):** 3,000
+> - **Target load:** API: 60 RPS, Web: 6 RPS, Git (Pull): 6 RPS, Git (Push): 1 RPS
> - **High Availability:** Yes, although [Praefect](#configure-praefect-postgresql) needs a third-party PostgreSQL solution
> - **Estimated Costs:** [See cost table](index.md#cost-to-run)
> - **Cloud Native Hybrid Alternative:** [Yes](#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative)
-> - **Validation and test results:** The Quality Engineering team does [regular smoke and performance tests](index.md#validation-and-test-results) to ensure the reference architectures remain compliant
-> - **Test requests per second (RPS) rates:** API: 60 RPS, Web: 6 RPS, Git (Pull): 6 RPS, Git (Push): 1 RPS
-> - **[Latest Results](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/3k)**
> - **Unsure which Reference Architecture to use?** [Go to this guide for more info](index.md#deciding-which-architecture-to-use).
| Service | Nodes | Configuration | GCP | AWS |
@@ -149,6 +142,27 @@ monitor .[#7FFFD4,norank]u--> elb
Before starting, see the [requirements](index.md#requirements) for reference architectures.
+## Testing methodology
+
+The 3k architecture is designed to cover a large majority of workflows and is regularly
+[smoke and performance tested](index.md#validation-and-test-results) by the Quality Engineering team
+against the following endpoint throughput targets:
+
+- API: 60 RPS
+- Web: 6 RPS
+- Git (Pull): 6 RPS
+- Git (Push): 1 RPS
+
+The above targets were selected based on real customer data of the total environment load corresponding to the user count,
+including CI and other workloads, with substantial additional headroom added.
+
+If your metrics suggest regularly higher throughput than the above endpoint targets, [large monorepos](index.md#large-monorepos),
+or notable [additional workloads](index.md#additional-workloads), these can notably impact the performance of the environment and [further adjustments may be required](index.md#scaling-an-environment).
+If this applies to you, we strongly recommend referring to the linked documentation as well as reaching out to your [Customer Success Manager](https://handbook.gitlab.com/job-families/sales/customer-success-management/) or our [Support team](https://about.gitlab.com/support/) for further guidance.
+
+Testing is done regularly via our [GitLab Performance Tool (GPT)](https://gitlab.com/gitlab-org/quality/performance) and its dataset, which is available for anyone to use.
+The results of this testing are [available publicly on the GPT wiki](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest). For more information on our testing strategy, [refer to this section of the documentation](index.md#validation-and-test-results).
+
## Setup components
To set up GitLab and its components to accommodate up to 3,000 users:
@@ -1248,7 +1262,7 @@ This is how this would work with a Linux package PostgreSQL setup:
1. Create the new user `praefect`, replacing `<praefect_postgresql_password>`:
```shell
- CREATE ROLE praefect WITH LOGIN CREATEDB PASSWORD <praefect_postgresql_password>;
+ CREATE ROLE praefect WITH LOGIN CREATEDB PASSWORD '<praefect_postgresql_password>';
```
1. Reconnect to the PostgreSQL server, this time as the `praefect` user:
@@ -1708,7 +1722,8 @@ Updates to example must be made at:
-->
```ruby
- roles ["sidekiq_role"]
+ # https://docs.gitlab.com/omnibus/roles/#sidekiq-roles
+ roles(["sidekiq_role"])
# External URL
## This should match the URL of the external load balancer
diff --git a/doc/administration/reference_architectures/50k_users.md b/doc/administration/reference_architectures/50k_users.md
index 72ddd347856..ca39468a76e 100644
--- a/doc/administration/reference_architectures/50k_users.md
+++ b/doc/administration/reference_architectures/50k_users.md
@@ -6,18 +6,21 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Reference architecture: up to 50,000 users **(PREMIUM SELF)**
-This page describes GitLab reference architecture for up to 50,000 users. For a
-full list of reference architectures, see
+This page describes the GitLab reference architecture designed for the load of up to 50,000 users
+with notable headroom.
+
+For a full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
-> - **Supported users (approximate):** 50,000
+NOTE:
+Before deploying this architecture, it's recommended to read through the [main documentation](index.md) first,
+specifically the [Before you start](index.md#before-you-start) and [Deciding which architecture to use](index.md#deciding-which-architecture-to-use) sections.
+
+> - **Target load:** API: 1000 RPS, Web: 100 RPS, Git (Pull): 100 RPS, Git (Push): 20 RPS
> - **High Availability:** Yes ([Praefect](#configure-praefect-postgresql) needs a third-party PostgreSQL solution for HA)
> - **Estimated Costs:** [See cost table](index.md#cost-to-run)
> - **Cloud Native Hybrid Alternative:** [Yes](#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative)
-> - **Validation and test results:** The Quality Engineering team does [regular smoke and performance tests](index.md#validation-and-test-results) to ensure the reference architectures remain compliant
-> - **Test requests per second (RPS) rates:** API: 1000 RPS, Web: 100 RPS, Git (Pull): 100 RPS, Git (Push): 20 RPS
-> - **[Latest Results](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/50k)**
-> - **Unsure which Reference Architecture to use?** [Go to this guide for more info](index.md#deciding-which-architecture-to-use).
+> - **Unsure which Reference Architecture to use?** [Go to this guide for more info](index.md#deciding-which-architecture-to-use)
| Service | Nodes | Configuration | GCP | AWS |
|------------------------------------------|-------|-------------------------|------------------|---------------|
@@ -144,6 +147,27 @@ monitor .[#7FFFD4,norank]u--> elb
Before starting, see the [requirements](index.md#requirements) for reference architectures.
+## Testing methodology
+
+The 50k architecture is designed to cover a large majority of workflows and is regularly
+[smoke and performance tested](index.md#validation-and-test-results) by the Quality Engineering team
+against the following endpoint throughput targets:
+
+- API: 1000 RPS
+- Web: 100 RPS
+- Git (Pull): 100 RPS
+- Git (Push): 20 RPS
+
+The above targets were selected based on real customer data of the total environment load corresponding to the user count,
+including CI and other workloads, with substantial additional headroom added.
+
+If your metrics suggest regularly higher throughput than the above endpoint targets, [large monorepos](index.md#large-monorepos),
+or notable [additional workloads](index.md#additional-workloads), these can notably impact the performance of the environment and [further adjustments may be required](index.md#scaling-an-environment).
+If this applies to you, we strongly recommend referring to the linked documentation as well as reaching out to your [Customer Success Manager](https://handbook.gitlab.com/job-families/sales/customer-success-management/) or our [Support team](https://about.gitlab.com/support/) for further guidance.
+
+Testing is done regularly via our [GitLab Performance Tool (GPT)](https://gitlab.com/gitlab-org/quality/performance) and its dataset, which is available for anyone to use.
+The results of this testing are [available publicly on the GPT wiki](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest). For more information on our testing strategy, [refer to this section of the documentation](index.md#validation-and-test-results).
+
## Setup components
To set up GitLab and its components to accommodate up to 50,000 users:
@@ -1320,7 +1344,7 @@ This is how this would work with a Linux package PostgreSQL setup:
1. Create the new user `praefect`, replacing `<praefect_postgresql_password>`:
```shell
- CREATE ROLE praefect WITH LOGIN CREATEDB PASSWORD <praefect_postgresql_password>;
+ CREATE ROLE praefect WITH LOGIN CREATEDB PASSWORD '<praefect_postgresql_password>';
```
1. Reconnect to the PostgreSQL server, this time as the `praefect` user:
@@ -1776,7 +1800,8 @@ Updates to example must be made at:
-->
```ruby
- roles ["sidekiq_role"]
+ # https://docs.gitlab.com/omnibus/roles/#sidekiq-roles
+ roles(["sidekiq_role"])
# External URL
## This should match the URL of the external load balancer
diff --git a/doc/administration/reference_architectures/5k_users.md b/doc/administration/reference_architectures/5k_users.md
index e2bf0aa59f4..e908565e27e 100644
--- a/doc/administration/reference_architectures/5k_users.md
+++ b/doc/administration/reference_architectures/5k_users.md
@@ -6,25 +6,21 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Reference architecture: up to 5,000 users **(PREMIUM SELF)**
-This page describes GitLab reference architecture for up to 5,000 users. For a
-full list of reference architectures, see
+This page describes the GitLab reference architecture designed for the load of up to 5,000 users
+with notable headroom.
+
+For a full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
NOTE:
-This reference architecture is designed to help your organization achieve a
-highly-available GitLab deployment. If you do not have the expertise or need to
-maintain a highly-available environment, you can have a simpler and less
-costly-to-operate environment by using the
-[2,000-user reference architecture](2k_users.md).
+Before deploying this architecture, it's recommended to read through the [main documentation](index.md) first,
+specifically the [Before you start](index.md#before-you-start) and [Deciding which architecture to use](index.md#deciding-which-architecture-to-use) sections.
-> - **Supported users (approximate):** 5,000
+> - **Target load:** API: 100 RPS, Web: 10 RPS, Git (Pull): 10 RPS, Git (Push): 2 RPS
> - **High Availability:** Yes ([Praefect](#configure-praefect-postgresql) needs a third-party PostgreSQL solution for HA)
> - **Estimated Costs:** [See cost table](index.md#cost-to-run)
> - **Cloud Native Hybrid Alternative:** [Yes](#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative)
-> - **Validation and test results:** The Quality Engineering team does [regular smoke and performance tests](index.md#validation-and-test-results) to ensure the reference architectures remain compliant
-> - **Test requests per second (RPS) rates:** API: 100 RPS, Web: 10 RPS, Git (Pull): 10 RPS, Git (Push): 2 RPS
-> - **[Latest Results](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/5k)**
-> - **Unsure which Reference Architecture to use?** [Go to this guide for more info](index.md#deciding-which-architecture-to-use).
+> - **Unsure which Reference Architecture to use?** [Go to this guide for more info](index.md#deciding-which-architecture-to-use)
| Service | Nodes | Configuration | GCP | AWS |
|-------------------------------------------|-------|-------------------------|-----------------|--------------|
@@ -146,6 +142,27 @@ monitor .[#7FFFD4,norank]u--> elb
Before starting, see the [requirements](index.md#requirements) for reference architectures.
+## Testing methodology
+
+The 5k architecture is designed to cover a large majority of workflows and is regularly
+[smoke and performance tested](index.md#validation-and-test-results) by the Quality Engineering team
+against the following endpoint throughput targets:
+
+- API: 100 RPS
+- Web: 10 RPS
+- Git (Pull): 10 RPS
+- Git (Push): 2 RPS
+
+The above targets were selected based on real customer data of the total environment load corresponding to the user count,
+including CI and other workloads, with substantial additional headroom added.
+
+If your metrics suggest regularly higher throughput than the above endpoint targets, [large monorepos](index.md#large-monorepos),
+or notable [additional workloads](index.md#additional-workloads), these can notably impact the performance of the environment and [further adjustments may be required](index.md#scaling-an-environment).
+If this applies to you, we strongly recommend referring to the linked documentation as well as reaching out to your [Customer Success Manager](https://handbook.gitlab.com/job-families/sales/customer-success-management/) or our [Support team](https://about.gitlab.com/support/) for further guidance.
+
+Testing is done regularly via our [GitLab Performance Tool (GPT)](https://gitlab.com/gitlab-org/quality/performance) and its dataset, which is available for anyone to use.
+The results of this testing are [available publicly on the GPT wiki](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest). For more information on our testing strategy, [refer to this section of the documentation](index.md#validation-and-test-results).
+
## Setup components
To set up GitLab and its components to accommodate up to 5,000 users:
@@ -1242,7 +1259,7 @@ This is how this would work with a Linux package PostgreSQL setup:
1. Create the new user `praefect`, replacing `<praefect_postgresql_password>`:
```shell
- CREATE ROLE praefect WITH LOGIN CREATEDB PASSWORD <praefect_postgresql_password>;
+ CREATE ROLE praefect WITH LOGIN CREATEDB PASSWORD '<praefect_postgresql_password>';
```
1. Reconnect to the PostgreSQL server, this time as the `praefect` user:
@@ -1696,7 +1713,8 @@ Updates to example must be made at:
-->
```ruby
- roles ["sidekiq_role"]
+ # https://docs.gitlab.com/omnibus/roles/#sidekiq-roles
+ roles(["sidekiq_role"])
# External URL
## This should match the URL of the external load balancer
diff --git a/doc/administration/reference_architectures/index.md b/doc/administration/reference_architectures/index.md
index 44aa3d648ad..fcbfaf46009 100644
--- a/doc/administration/reference_architectures/index.md
+++ b/doc/administration/reference_architectures/index.md
@@ -12,36 +12,37 @@ GitLab Quality Engineering and Support teams to provide recommended deployments
## Available reference architectures
-Depending on your workflow, the following recommended reference architectures
-may need to be adapted accordingly. Your workload is influenced by factors
-including how active your users are, how much automation you use, mirroring,
-and repository/change size. Additionally, the displayed memory values are
-provided by [GCP machine types](https://cloud.google.com/compute/docs/machine-resource).
-For different cloud vendors, attempt to select options that best match the
-provided architecture.
+The following Reference Architectures are available as recommended starting points for your environment.
+
+The architectures are named in terms of user count. Each architecture is designed against
+the _total_ load that corresponds to such a user count, based on real data, with substantial headroom added to cover most scenarios such as CI or other automated workloads.
+
+However, in some cases, known heavy scenarios such as [large monorepos](#large-monorepos) or notable [additional workloads](#additional-workloads) may require adjustments.
+
+For each Reference Architecture, the details of what it has been tested against can be found in the "Testing methodology" section of its page.
### GitLab package (Omnibus)
-The following reference architectures, where the GitLab package is used, are available:
+Below is a list of Linux package-based reference architectures:
-- [Up to 1,000 users](1k_users.md)
-- [Up to 2,000 users](2k_users.md)
-- [Up to 3,000 users](3k_users.md)
-- [Up to 5,000 users](5k_users.md)
-- [Up to 10,000 users](10k_users.md)
-- [Up to 25,000 users](25k_users.md)
-- [Up to 50,000 users](50k_users.md)
+- [Up to 1,000 users](1k_users.md) <span style="color: darkgrey;">_API: 20 RPS, Web: 2 RPS, Git (Pull): 2 RPS, Git (Push): 1 RPS_</span>
+- [Up to 2,000 users](2k_users.md) <span style="color: darkgrey;">_API: 40 RPS, Web: 4 RPS, Git (Pull): 4 RPS, Git (Push): 1 RPS_</span>
+- [Up to 3,000 users](3k_users.md) <span style="color: darkgrey;">_API: 60 RPS, Web: 6 RPS, Git (Pull): 6 RPS, Git (Push): 1 RPS_</span>
+- [Up to 5,000 users](5k_users.md) <span style="color: darkgrey;">_API: 100 RPS, Web: 10 RPS, Git (Pull): 10 RPS, Git (Push): 2 RPS_</span>
+- [Up to 10,000 users](10k_users.md) <span style="color: darkgrey;">_API: 200 RPS, Web: 20 RPS, Git (Pull): 20 RPS, Git (Push): 4 RPS_</span>
+- [Up to 25,000 users](25k_users.md) <span style="color: darkgrey;">_API: 500 RPS, Web: 50 RPS, Git (Pull): 50 RPS, Git (Push): 10 RPS_</span>
+- [Up to 50,000 users](50k_users.md) <span style="color: darkgrey;">_API: 1000 RPS, Web: 100 RPS, Git (Pull): 100 RPS, Git (Push): 20 RPS_</span>
### Cloud native hybrid
-The following Cloud Native Hybrid reference architectures, where select recommended components can be run in Kubernetes, are available:
+Below is a list of Cloud Native Hybrid reference architectures, where select recommended components can be run in Kubernetes:
-- [Up to 2,000 users](2k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative)
-- [Up to 3,000 users](3k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative)
-- [Up to 5,000 users](5k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative)
-- [Up to 10,000 users](10k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative)
-- [Up to 25,000 users](25k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative)
-- [Up to 50,000 users](50k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative)
+- [Up to 2,000 users](2k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) <span style="color: darkgrey;">_API: 40 RPS, Web: 4 RPS, Git (Pull): 4 RPS, Git (Push): 1 RPS_</span>
+- [Up to 3,000 users](3k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) <span style="color: darkgrey;">_API: 60 RPS, Web: 6 RPS, Git (Pull): 6 RPS, Git (Push): 1 RPS_</span>
+- [Up to 5,000 users](5k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) <span style="color: darkgrey;">_API: 100 RPS, Web: 10 RPS, Git (Pull): 10 RPS, Git (Push): 2 RPS_</span>
+- [Up to 10,000 users](10k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) <span style="color: darkgrey;">_API: 200 RPS, Web: 20 RPS, Git (Pull): 20 RPS, Git (Push): 4 RPS_</span>
+- [Up to 25,000 users](25k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) <span style="color: darkgrey;">_API: 500 RPS, Web: 50 RPS, Git (Pull): 50 RPS, Git (Push): 10 RPS_</span>
+- [Up to 50,000 users](50k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) <span style="color: darkgrey;">_API: 1000 RPS, Web: 100 RPS, Git (Pull): 100 RPS, Git (Push): 20 RPS_</span>
## Before you start
@@ -63,6 +64,19 @@ As a general guide, **the more performant and/or resilient you want your environ
This section explains the designs you can choose from. It begins with the least complexity, goes to the most, and ends with a decision tree.
+### Expected Load (RPS)
+
+The first thing to check is the expected load your environment would need to serve.
+
+The Reference Architectures have been designed with substantial headroom by default, but it's recommended to also check the
+load each architecture has been tested against, as listed in the "Testing methodology" section of each page,
+and compare those values with the load you expect on your existing GitLab environment to help select the right Reference Architecture
+size.
+
+Load is given in terms of Requests per Second (RPS) for each endpoint type (API, Web, Git). For your existing infrastructure, this information
+can typically be surfaced by most reputable monitoring solutions or by other means such as load balancer metrics. For example, on existing GitLab environments,
+[Prometheus metrics](../monitoring/prometheus/gitlab_metrics.md) such as `gitlab_transaction_duration_seconds` can be used to see this data.
+
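+A minimal sketch of pulling this data from Prometheus follows. The server address, label names, and time window are assumptions for illustration only; adjust them to your own monitoring setup:
+
+```shell
+# Rough sketch: estimate per-controller request rates from a Prometheus server
+# that already scrapes your GitLab environment. The address, labels, and the
+# 7-day window are assumptions - adjust to your own setup.
+curl --silent --get "http://prometheus.example.com:9090/api/v1/query" \
+  --data-urlencode "query=sum by (controller) (rate(gitlab_transaction_duration_seconds_count[7d]))"
+```
+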
### Standalone (non-HA)
For environments serving 2,000 or fewer users, we generally recommend a standalone approach by deploying a non-highly available single or multi-node environment. With this approach, you can employ strategies such as [automated backups](../../administration/backup_restore/backup_gitlab.md#configuring-cron-to-make-daily-backups) for recovery to provide a good level of RPO / RTO while avoiding the complexities that come with HA.
@@ -144,10 +158,11 @@ Below you can find the above guidance in the form of a decision tree. It's recom
```mermaid
%%{init: { 'theme': 'base' } }%%
graph TD
- L1A(<b>What Reference Architecture should I use?</b>)
+ L0A(<b>What Reference Architecture should I use?</b>)
+ L1A(<b>What is your <a href=#expected-load-rps>expected load</a>?</b>)
- L2A(3,000 users or more?)
- L2B(2,000 users or less?)
+ L2A("Equivalent to <a href=3k_users.md#testing-methodology>3,000 users</a> or more?")
+ L2B("Equivalent to <a href=2k_users.md#testing-methodology>2,000 users</a> or less?")
L3A("<a href=#do-you-need-high-availability-ha>Do you need HA?</a><br>(or Zero-Downtime Upgrades)")
L3B[Do you have experience with<br/>and want additional resilience<br/>with select components in Kubernetes?]
@@ -157,6 +172,7 @@ graph TD
L4C><b>Recommendation</b><br><br>Cloud Native Hybrid architecture<br>closest to user count]
L4D>"<b>Recommendation</b><br><br>Standalone 1K or 2K<br/>architecture with Backups"]
+ L0A --> L1A
L1A --> L2A
L1A --> L2B
L2A -->|Yes| L3B
@@ -191,13 +207,22 @@ Before implementing a reference architecture, refer to the following requirement
These reference architectures were built and tested on Google Cloud Platform (GCP) using the
[Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
CPU platform as a lowest common denominator baseline ([Sysbench benchmark](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks)).
+Newer, similarly-sized CPUs are supported and may have improved performance as a result.
-Newer, similarly-sized CPUs are supported and may have improved performance as a result. For Linux package environments,
-ARM-based equivalents are also supported.
+ARM CPUs are supported for Linux package environments as well as for any [Cloud Provider services](#cloud-provider-services) where applicable.
NOTE:
Any "burstable" instance types are not recommended due to inconsistent performance.
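+
+If you want to compare your own hardware against this baseline, a basic `sysbench` CPU run along the lines of the following sketch can help. Treat the flags shown as illustrative assumptions rather than the exact parameters used for the published results:
+
+```shell
+# Illustrative only: benchmark a candidate machine's CPU with sysbench.
+# The prime ceiling and thread count are assumptions - align them with the
+# published benchmark setup before comparing results.
+sysbench cpu --cpu-max-prime=20000 --threads="$(nproc)" run
+```
+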
+### Supported disk types
+
+As general guidance, most standard disk types are expected to work for GitLab, but be aware of the following specific callouts:
+
+- [Gitaly](../gitaly/index.md#disk-requirements) requires at least 8,000 input/output operations per second (IOPS) for read operations, and 2,000 IOPS for write operations.
+- We don't recommend the use of any disk types that are "burstable" due to inconsistent performance.
+
+Outside of the above callouts, disk types are expected to work for GitLab, and your choice of disk depends on your specific requirements in areas such as durability or cost.
+
### Supported infrastructure
As a general guidance, GitLab should run on most infrastructure such as reputable Cloud Providers (AWS, GCP, Azure) and
@@ -356,6 +381,12 @@ If you choose to use a third party external service:
Redis is primarily single threaded. For the 10,000 user and above Reference Architectures, separate out the instances as specified into Cache and Persistent data to achieve optimum performance at this scale.
+### Recommendation notes for Object Storage
+
+GitLab has been tested against [various Object Storage providers](../object_storage.md#supported-object-storage-providers) that are expected to work.
+
+As general guidance, it's recommended to use a reputable solution that has full S3 compatibility.
+
#### Unsupported database services
Several database cloud provider services are known not to support the above or have been found to have other issues and aren't recommended:
@@ -649,22 +680,35 @@ You should upgrade a Reference Architecture in the same order as you created it.
### Scaling an environment
-Scaling a GitLab environment is designed to be as seamless as possible.
+Scaling a GitLab environment is designed to be as flexible and seamless as possible.
+
+Depending on your circumstances, scaling can be done iteratively or wholesale to the next size of architecture.
+For example, if any of your GitLab Rails, Sidekiq, Gitaly, Redis, or PostgreSQL nodes are consistently oversaturated, increase their resources accordingly while leaving the rest of the environment as is.
-In terms of the Reference Architectures, you would look to the next size and adjust accordingly.
-Most setups would only need vertical scaling, but there are some specific areas that can be adjusted depending on the setup:
+If you're expecting a large increase in users, you may elect to scale up the whole environment to the next
+size of architecture.
+
+If the overall design is being followed, you can scale the environment vertically as required.
+
+If robust metrics are in place that show the environment is over-provisioned, you can apply the same process for
+scaling downwards. You should take an iterative approach when scaling downwards to ensure there are no issues.
+
+#### Scaling from a non-HA to an HA architecture
+
+While in most cases only vertical scaling is required to increase an environment's resources, if you are moving to an HA environment,
+there may be some additional steps required, as shown below:
- If you're scaling from a non-HA environment to an HA environment, various components are recommended to be deployed in their HA forms:
- - Redis to multi-node Redis w/ Redis Sentinel
- - Postgres to multi-node Postgres w/ Consul + PgBouncer
- - Gitaly to Gitaly Cluster w/ Praefect
+ - [Redis to multi-node Redis w/ Redis Sentinel](../redis/replication_and_failover.md#switching-from-an-existing-single-machine-installation)
+ - [Postgres to multi-node Postgres w/ Consul + PgBouncer](../postgresql/moving.md)
+ - [Gitaly to Gitaly Cluster w/ Praefect](../gitaly/index.md#migrate-to-gitaly-cluster)
- From 10k users and higher, Redis is recommended to be split into multiple HA servers as it's single threaded.
Conversely, if you have robust metrics in place that show the environment is over-provisioned, you can apply the same process for
scaling downwards. You should take an iterative approach when scaling downwards, however, to ensure there are no issues.
-### How to monitor your environment
+### Monitoring
+
+There are numerous options available to monitor your infrastructure, as well as [GitLab itself](../monitoring/index.md), and you should refer to your chosen monitoring solution's documentation for more information.
-To monitor your GitLab environment, you can use the tools
-[bundled with GitLab](../monitoring/index.md), but it's also possible to use third-party
-options if desired.
+Of note, the GitLab application is bundled with [Prometheus as well as various Prometheus-compatible exporters](../monitoring/prometheus/index.md) that can be hooked into your solution.
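+
+As a minimal illustration (assuming a Linux package installation with default ports, which may differ in your configuration), you can confirm the bundled monitoring endpoints respond before pointing an external solution at them:
+
+```shell
+# Illustrative sketch: check that the bundled monitoring endpoints respond
+# locally. Ports shown are common Linux package defaults and are assumptions.
+curl --silent "http://localhost:9090/-/ready"           # bundled Prometheus
+curl --silent "http://localhost:9100/metrics" | head    # node_exporter metrics
+```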
diff --git a/doc/administration/review_spam_logs.md b/doc/administration/review_spam_logs.md
new file mode 100644
index 00000000000..e3b96cdae95
--- /dev/null
+++ b/doc/administration/review_spam_logs.md
@@ -0,0 +1,40 @@
+---
+stage: Govern
+group: Anti-Abuse
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+type: reference, howto
+---
+
+# Review spam logs **(FREE SELF)**
+
+GitLab tracks user activity and flags certain behavior for potential spam.
+
+In the Admin Area, a GitLab administrator can view and resolve spam logs.
+
+## Manage spam logs
+
+> **Trust user** [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131812) in GitLab 16.5.
+
+View and resolve spam logs to moderate user activity in your instance.
+
+To view spam logs:
+
+1. On the left sidebar, select **Search or go to**.
+1. Select **Admin Area**.
+1. Select **Spam Logs**.
+1. Optional. To resolve a spam log, select a log and then select **Remove user**, **Block user**, **Remove log**, or **Trust user**.
+
+### Resolving spam logs
+
+You can resolve a spam log with one of the following effects:
+
+| Option | Description |
+|---------|-------------|
+| **Remove user** | The user is [deleted](../user/profile/account/delete_account.md) from the instance. |
+| **Block user** | The user is blocked from the instance. The spam log remains in the list. |
+| **Remove log** | The spam log is removed from the list. |
+| **Trust user** | The user is trusted, and can create issues, notes, snippets, and merge requests without being blocked for spam. Spam logs are not created for trusted users. |
+
+NOTE:
+Users can be [blocked](../api/users.md#block-user) and
+[unblocked](../api/users.md#unblock-user) using the GitLab API.
diff --git a/doc/administration/settings/continuous_integration.md b/doc/administration/settings/continuous_integration.md
index 841b6e644eb..0e2a512302d 100644
--- a/doc/administration/settings/continuous_integration.md
+++ b/doc/administration/settings/continuous_integration.md
@@ -266,6 +266,22 @@ To enable or disable the banner:
1. Select or clear the **Enable pipeline suggestion banner** checkbox.
1. Select **Save changes**.
+## Enable or disable the external redirect page for job artifacts
+
+By default, GitLab Pages shows an external redirect page when a user tries to view
+a job artifact served by GitLab Pages. This page warns about the potential for
+malicious user-generated content, as described in
+[issue 352611](https://gitlab.com/gitlab-org/gitlab/-/issues/352611).
+
+Administrators of self-managed instances can disable the external redirect warning page
+so that job artifact pages are displayed directly:
+
+1. On the left sidebar, select **Search or go to**.
+1. Select **Admin Area**.
+1. Select **Settings > CI/CD**.
+1. Expand **Continuous Integration and Deployment**.
+1. Clear the **Enable the external redirect page for job artifacts** checkbox.
+
## Required pipeline configuration **(ULTIMATE SELF)**
> - [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/352316) from GitLab Premium to GitLab Ultimate in 15.0.
diff --git a/doc/administration/settings/gitaly_timeouts.md b/doc/administration/settings/gitaly_timeouts.md
index 3304db3d148..1cab1e9fd01 100644
--- a/doc/administration/settings/gitaly_timeouts.md
+++ b/doc/administration/settings/gitaly_timeouts.md
@@ -20,8 +20,10 @@ To access Gitaly timeout settings:
The following timeouts are available.
-| Timeout | Default | Description |
-|:--------|:-----------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Timeout | Default | Description |
+|:--------|:-----------|:------------|
| Default | 55 seconds | Timeout for most Gitaly calls (not enforced for `git` `fetch` and `push` operations, or Sidekiq jobs). For example, checking if a repository exists on disk. Makes sure that Gitaly calls made within a web request cannot exceed the entire request timeout. It should be shorter than the [worker timeout](../operations/puma.md#change-the-worker-timeout) that can be configured for [Puma](../../install/requirements.md#puma-settings). If a Gitaly call timeout exceeds the worker timeout, the remaining time from the worker timeout is used to avoid having to terminate the worker. |
-| Fast | 10 seconds | Timeout for fast Gitaly operations used within requests, sometimes multiple times. For example, checking if a repository exists on disk. If fast operations exceed this threshold, there may be a problem with a storage shard. Failing fast can help maintain the stability of the GitLab instance. |
-| Medium | 30 seconds | Timeout for Gitaly operations that should be fast (possibly within requests) but preferably not used multiple times within a request. For example, loading blobs. Timeout that should be set between Default and Fast. |
+| Fast | 10 seconds | Timeout for fast Gitaly operations used within requests, sometimes multiple times. For example, checking if a repository exists on disk. If fast operations exceed this threshold, there may be a problem with a storage shard. Failing fast can help maintain the stability of the GitLab instance. |
+| Medium | 30 seconds | Timeout for Gitaly operations that should be fast (possibly within requests) but preferably not used multiple times within a request. For example, loading blobs. Timeout that should be set between Default and Fast. |
+
+You can also [configure negotiation timeouts](../gitaly/configure_gitaly.md#configure-negotiation-timeouts).
diff --git a/doc/administration/settings/jira_cloud_app.md b/doc/administration/settings/jira_cloud_app.md
index f4f1db3617e..8ff2a9acdb8 100644
--- a/doc/administration/settings/jira_cloud_app.md
+++ b/doc/administration/settings/jira_cloud_app.md
@@ -37,6 +37,9 @@ To create an OAuth application on your self-managed instance:
- If you're installing the app from the official marketplace listing, enter `https://gitlab.com/-/jira_connect/oauth_callbacks`.
- If you're installing the app manually, enter `<instance_url>/-/jira_connect/oauth_callbacks` and replace `<instance_url>` with the URL of your instance.
1. Clear the **Trusted** and **Confidential** checkboxes.
+
+ NOTE:
+ You must clear these checkboxes to avoid errors.
1. In **Scopes**, select the `api` checkbox only.
1. Select **Save application**.
1. Copy the **Application ID** value.
@@ -45,6 +48,28 @@ To create an OAuth application on your self-managed instance:
1. Paste the **Application ID** value into **Jira Connect Application ID**.
1. Select **Save changes**.
+## Jira user requirements
+
+> Support for the `org-admins` group [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/420687) in GitLab 16.6.
+
+In your [Atlassian organization](https://admin.atlassian.com), you must ensure that the Jira user that is used to set up the GitLab for Jira Cloud app is a member of
+either:
+
+- The Organization Administrators (`org-admins`) group. Newer Atlassian organizations are using
+ [centralized user management](https://support.atlassian.com/user-management/docs/give-users-admin-permissions/#Centralized-user-management-content),
+ which contains the `org-admins` group. Existing Atlassian organizations are being migrated to centralized user management.
+  If available, you should use the `org-admins` group to indicate which Jira users can manage the GitLab for Jira Cloud app. Alternatively, you can use the
+  `site-admins` group.
+- The Site Administrators (`site-admins`) group. The `site-admins` group was used under
+ [original user management](https://support.atlassian.com/user-management/docs/give-users-admin-permissions/#Original-user-management-content).
+
+If necessary:
+
+1. [Create your preferred group](https://support.atlassian.com/user-management/docs/create-groups/).
+1. [Edit the group](https://support.atlassian.com/user-management/docs/edit-a-group/) to add your Jira user as a member of it.
+1. If you customized your global permissions in Jira, you might also need to grant the
+ [`Browse users and groups` permission](https://confluence.atlassian.com/jirakb/unable-to-browse-for-users-and-groups-120521888.html) to the Jira user.
+
## Connect the GitLab for Jira Cloud app
> Introduced in GitLab 15.7.
@@ -76,6 +101,7 @@ With this method:
- Set up an internet-facing reverse proxy in front of your self-managed instance. To secure this proxy further, only allow inbound
traffic from [Atlassian IP addresses](https://support.atlassian.com/organization-administration/docs/ip-addresses-and-domains-for-atlassian-cloud-products/#Outgoing-Connections).
- Add [GitLab IP addresses](../../user/gitlab_com/index.md#ip-range) to the allowlist of your firewall.
+- The Jira user that installs and configures the GitLab for Jira Cloud app must meet certain [requirements](#jira-user-requirements).
### Set up your instance
@@ -144,6 +170,7 @@ To support your self-managed instance with Jira Cloud, do one of the following:
- The instance must be publicly available.
- You must set up [OAuth authentication](#set-up-oauth-authentication).
+- The Jira user that installs and configures the GitLab for Jira Cloud app must meet certain [requirements](#jira-user-requirements).
### Install the app in development mode
@@ -314,6 +341,8 @@ To resolve this issue, ensure all prerequisites for your installation method hav
- [Prerequisites for connecting the GitLab for Jira Cloud app](#prerequisites)
- [Prerequisites for installing the GitLab for Jira Cloud app manually](#prerequisites-1)
+If you have configured a Jira Connect Proxy URL and the problem persists after checking the prerequisites, review [Debugging Jira Connect Proxy issues](#debugging-jira-connect-proxy-issues).
+
If you're using GitLab 15.8 and earlier and have previously enabled both the `jira_connect_oauth_self_managed`
and the `jira_connect_oauth` feature flags, you must disable the `jira_connect_oauth_self_managed` flag
due to a [known issue](https://gitlab.com/gitlab-org/gitlab/-/issues/388943). To check for these flags:
@@ -331,6 +360,46 @@ due to a [known issue](https://gitlab.com/gitlab-org/gitlab/-/issues/388943). To
Feature.disable(:jira_connect_oauth_self_managed)
```
+#### Debugging Jira Connect Proxy issues
+
+If you are using a self-managed GitLab instance and you have configured `https://gitlab.com` for the Jira Connect Proxy URL when
+[setting up the OAuth authentication](#set-up-oauth-authentication), you can inspect the network traffic in your browser's development
+tools while reproducing the `Failed to update the GitLab instance` error to see a more precise error.
+
+You should see a `GET` request to `https://gitlab.com/-/jira_connect/installations`.
+
+This request should return a `200` status code, but it can return a `422` status code if there was a problem. The response body can be checked for the error.
+
+If you cannot resolve the problem and you are a GitLab customer, contact [GitLab Support](https://about.gitlab.com/support/) for assistance. Provide
+GitLab Support with:
+
+1. Your GitLab self-managed instance URL.
+1. Your GitLab.com username.
+1. If possible, the `X-Request-Id` response header for the failed `GET` request to `https://gitlab.com/-/jira_connect/installations`.
+1. Optional. [A HAR file that captured the problem](https://support.zendesk.com/hc/en-us/articles/4408828867098-Generating-a-HAR-file-for-troubleshooting).
+
+The GitLab Support team can then look up why this is failing in the GitLab.com server logs.
+
+##### Process for GitLab Support
+
+NOTE:
+These steps can only be completed by GitLab Support.
+
+In Kibana, the logs should be filtered for `json.meta.caller_id: JiraConnect::InstallationsController#update` and `NOT json.status: 200`.
+If you have been provided the `X-Request-Id` value, you can use that against `json.correlation_id` to narrow down the results.
+
+Each `GET` request to the Jira Connect Proxy URL `https://gitlab.com/-/jira_connect/installations` generates two log entries.
+
+For the first log:
+
+- `json.status` is `422`.
+- `json.params.value` should match the GitLab self-managed URL `[[FILTERED], {"instance_url"=>"https://gitlab.example.com"}]`.
+
+For the second log:
+
+- `json.message` is `Proxy lifecycle event received error response` or similar.
+- `json.jira_status_code` and `json.jira_body` might contain details on why GitLab.com wasn't able to connect back to the self-managed instance.
+
### `Failed to link group`
After you connect the GitLab for Jira Cloud app for self-managed instances, you might get one of these errors:
@@ -349,9 +418,6 @@ When you check the browser console, you might see the following message:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://gitlab.example.com/-/jira_connect/oauth_application_id. (Reason: CORS header 'Access-Control-Allow-Origin' missing). Status code: 403.
```
-`403` status code is returned if:
-
-- The user information cannot be fetched from Jira.
-- The authenticated Jira user does not have [site administrator](https://support.atlassian.com/user-management/docs/give-users-admin-permissions/#Make-someone-a-site-admin) access.
+A `403` status code is returned if the user information cannot be fetched from Jira because of insufficient permissions.
-To resolve this issue, ensure the authenticated user is a Jira site administrator and try again.
+To resolve this issue, ensure that the Jira user that installs and configures the GitLab for Jira Cloud app meets certain [requirements](#jira-user-requirements).
diff --git a/doc/administration/settings/rate_limits_on_git_ssh_operations.md b/doc/administration/settings/rate_limits_on_git_ssh_operations.md
index 677d8fea195..4e60fd55b19 100644
--- a/doc/administration/settings/rate_limits_on_git_ssh_operations.md
+++ b/doc/administration/settings/rate_limits_on_git_ssh_operations.md
@@ -20,8 +20,6 @@ Each command has a rate limit of 600 per minute. For example:
Because the same commands are shared by `git-upload-pack`, `git pull`, and `git clone`, they share a rate limit.
-Users on self-managed GitLab can disable this rate limit.
-
## Configure GitLab Shell operation limit
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123761) in GitLab 16.2.
@@ -33,4 +31,5 @@ Users on self-managed GitLab can disable this rate limit.
1. Select **Settings > Network**.
1. Expand **Git SSH operations rate limit**.
1. Enter a value for **Maximum number of Git operations per minute**.
+ - To disable the rate limit, set it to `0`.
1. Select **Save changes**.
diff --git a/doc/administration/settings/scim_setup.md b/doc/administration/settings/scim_setup.md
index 432c8598cf7..45020fdfb59 100644
--- a/doc/administration/settings/scim_setup.md
+++ b/doc/administration/settings/scim_setup.md
@@ -53,3 +53,7 @@ adding them to the SCIM identity provider.
After the identity provider performs a sync based on its configured schedule,
the user's SCIM identity is reactivated and their GitLab instance access is restored.
+
+## Troubleshooting
+
+See our [troubleshooting SCIM guide](../../user/group/saml_sso/troubleshooting_scim.md).
diff --git a/doc/administration/settings/sign_in_restrictions.md b/doc/administration/settings/sign_in_restrictions.md
index 6d38610192b..942b706b9a3 100644
--- a/doc/administration/settings/sign_in_restrictions.md
+++ b/doc/administration/settings/sign_in_restrictions.md
@@ -118,7 +118,7 @@ The following access methods are **not** protected by Admin Mode:
In other words, administrators who are otherwise limited by Admin Mode can still use
Git clients without additional authentication steps.
-To use the GitLab REST- or GraphQL API, administrators must [create a personal access token](../../user/profile/personal_access_tokens.md#create-a-personal-access-token) with the [`admin_mode` scope](../../user/profile/personal_access_tokens.md#personal-access-token-scopes).
+To use the GitLab REST or GraphQL API, administrators must [create a personal access token](../../user/profile/personal_access_tokens.md#create-a-personal-access-token) or [OAuth token](../../api/oauth2.md) with the [`admin_mode` scope](../../user/profile/personal_access_tokens.md#personal-access-token-scopes).
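+
+For example, a REST API call made with such a token might look like the following sketch (the token value and instance URL are placeholders):
+
+```shell
+# Illustrative only: call an admin-only REST API endpoint with a personal
+# access token that has the admin_mode scope. Replace the placeholders.
+curl --header "PRIVATE-TOKEN: <admin-mode-token>" \
+  "https://gitlab.example.com/api/v4/application/settings"
+```
+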
If an administrator with a personal access token with the `admin_mode` scope loses their administrator access, that user cannot access the API as an administrator even though they still have the token with the `admin_mode` scope.
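+
+For example, a quick way to confirm that a token carries the `admin_mode` scope is to call an
+administrator-only endpoint, such as the [application settings API](../../api/settings.md).
+In this sketch, `<your_access_token>` is a placeholder for the token:
+
+```shell
+curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/application/settings"
+```
+
+If Admin Mode is enabled and the token does not have the `admin_mode` scope, the request should return `403 Forbidden`.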
diff --git a/doc/administration/settings/slack_app.md b/doc/administration/settings/slack_app.md
index ef756dfeff7..de11da281e4 100644
--- a/doc/administration/settings/slack_app.md
+++ b/doc/administration/settings/slack_app.md
@@ -105,9 +105,13 @@ To enable the GitLab for Slack app functionality, your network must allow inboun
## Troubleshooting
-### Slash commands return `/gitlab failed with the error "dispatch_failed"` in Slack
+When administering the GitLab for Slack app for self-managed instances, you might encounter the following issues.
+
+For GitLab.com, see [GitLab for Slack app](../../user/project/integrations/gitlab_slack_application.md#troubleshooting).
+
+### Slash commands return an error in Slack
Slash commands might return `/gitlab failed with the error "dispatch_failed"` in Slack. To resolve this issue, ensure:
-- The GitLab for Slack app is properly [configured](#configure-the-settings), and the **Enable GitLab for Slack app** checkbox is selected.
+- The GitLab for Slack app is properly [configured](#configure-the-settings) and the **Enable GitLab for Slack app** checkbox is selected.
- Your GitLab instance [allows requests to and from Slack](#connectivity-requirements).
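+
+To check the second point, one option is to call Slack's `api.test` method from the GitLab
+server. This is only a connectivity probe and is not part of the app configuration:
+
+```shell
+curl --silent "https://slack.com/api/api.test"
+```
+
+A response such as `{"ok":true}` indicates that outbound requests to Slack succeed.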
diff --git a/doc/administration/settings/usage_statistics.md b/doc/administration/settings/usage_statistics.md
index 4887ebd8cfe..b9080f49f5d 100644
--- a/doc/administration/settings/usage_statistics.md
+++ b/doc/administration/settings/usage_statistics.md
@@ -9,7 +9,8 @@ info: To determine the technical writer assigned to the Stage/Group associated w
GitLab Inc. periodically collects information about your instance in order
to perform various actions.
-All usage statistics are [opt-out](#enable-or-disable-usage-statistics).
+For free self-managed instances, all usage statistics are [opt-out](#enable-or-disable-service-ping).
+For information about other tiers, see [Customer Product Usage Information](https://about.gitlab.com/handbook/legal/privacy/customer-product-usage-information/#service-ping-formerly-known-as-usage-ping).
## Service Ping
@@ -63,6 +64,13 @@ In the following table, you can see:
| [Issue analytics](../../user/group/issues_analytics/index.md) | GitLab 16.5 and later |
| [Custom Text in Emails](../../administration/settings/email.md#custom-additional-text) | GitLab 16.5 and later |
| [Contribution analytics](../../user/group/contribution_analytics/index.md) | GitLab 16.5 and later |
+| [Group file templates](../../user/group/manage.md#group-file-templates) | GitLab 16.6 and later |
+| [Group webhooks](../../user/project/integrations/webhooks.md#group-webhooks) | GitLab 16.6 and later |
+| [Service Level Agreement countdown timer](../../operations/incident_management/incidents.md#service-level-agreement-countdown-timer) | GitLab 16.6 and later |
+| [Lock project membership to group](../../user/group/access_and_permissions.md#prevent-members-from-being-added-to-projects-in-a-group) | GitLab 16.6 and later |
+| [Users and permissions report](../../administration/admin_area.md#user-permission-export) | GitLab 16.6 and later |
+| [Advanced search](../../user/search/advanced_search.md) | GitLab 16.6 and later |
+| [DevOps Adoption](../../user/group/devops_adoption/index.md) | GitLab 16.6 and later |
### Enable registration features
@@ -95,7 +103,16 @@ This information is used, among other things, to identify to which versions
patches must be backported, making sure active GitLab instances remain
secure.
-If you [disable version check](#enable-or-disable-usage-statistics), this information isn't collected.
+If you [disable version check](#enable-or-disable-version-check), this information isn't collected.
+
+### Enable or disable version check
+
+1. On the left sidebar, select **Search or go to**.
+1. Select **Admin Area**.
+1. Select **Settings > Metrics and profiling**.
+1. Expand **Usage statistics**.
+1. Select or clear the **Enable version check** checkbox.
+1. Select **Save changes**.
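+
+Alternatively, the same setting can be changed from the command line with the
+[Rails Runner](../operations/rails_console.md#using-the-rails-runner). This is a sketch that
+assumes the checkbox maps to the `version_check_enabled` application setting:
+
+```shell
+# Assumption: the "Enable version check" checkbox is stored as `version_check_enabled`.
+sudo gitlab-rails runner 'ApplicationSetting.current.update!(version_check_enabled: false)'
+```
+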
### Request flow example
@@ -121,23 +138,26 @@ GitLab instance to the host `version.gitlab.com` on port `443`.
If your GitLab instance is behind a proxy, set the appropriate
[proxy configuration variables](https://docs.gitlab.com/omnibus/settings/environment-variables.html).
-## Enable or disable usage statistics
+## Enable or disable Service Ping
+
+### Through the UI
-To enable or disable Service Ping and version check:
+To enable or disable Service Ping:
1. On the left sidebar, select **Search or go to**.
1. Select **Admin Area**.
1. Select **Settings > Metrics and profiling**.
1. Expand **Usage statistics**.
-1. Select or clear the **Enable version check** and **Enable Service Ping** checkboxes.
+1. Select or clear the **Enable Service Ping** checkbox.
1. Select **Save changes**.
NOTE:
-Service Ping settings only control whether the data is being shared with GitLab, or used only internally.
+The effect of disabling Service Ping depends on the instance's tier. For more information, see [Customer Product Usage Information](https://about.gitlab.com/handbook/legal/privacy/customer-product-usage-information/#service-ping-formerly-known-as-usage-ping).
+Service Ping settings control only whether the data is shared with GitLab or kept for internal use by the instance.
Even if you disable Service Ping, the `gitlab_service_ping_worker` background job still periodically generates a Service Ping payload for your instance.
-The payload is available in the [Service Usage data](#manually-upload-service-ping-payload) admin section.
+The payload is available in the [Metrics and profiling](#manually-upload-service-ping-payload) admin section.
-## Disable usage statistics with the configuration file
+### Through the configuration file
NOTE:
The method to disable Service Ping in the GitLab configuration file does not work in
@@ -189,7 +209,7 @@ You can view the exact JSON payload sent to GitLab Inc. in the Admin Area. To vi
1. Sign in as a user with administrator access.
1. On the left sidebar, select **Search or go to**.
1. Select **Admin Area**.
-1. Select **Settings > Service usage data**.
+1. Select **Settings > Metrics and profiling > Usage statistics**.
1. Select **Preview payload**.
For an example payload, see [Example Service Ping payload](../../development/internal_analytics/service_ping/index.md#example-service-ping-payload).
@@ -207,7 +227,7 @@ To upload the payload manually:
1. Sign in as a user with administrator access.
1. On the left sidebar, select **Search or go to**.
1. Select **Admin Area**.
-1. Select **Settings > Service usage data**.
+1. Select **Settings > Metrics and profiling > Usage statistics**.
1. Select **Download payload**.
1. Save the JSON file.
1. Visit [Service usage data center](https://version.gitlab.com/usage_data/new).
diff --git a/doc/administration/sidekiq/index.md b/doc/administration/sidekiq/index.md
index 10fadc40a82..0a7974c9622 100644
--- a/doc/administration/sidekiq/index.md
+++ b/doc/administration/sidekiq/index.md
@@ -95,27 +95,8 @@ Updates to example must be made at:
-->
```ruby
- ########################################
- ##### Services Disabled ###
- ########################################
- #
- # When running GitLab on just one server, you have a single `gitlab.rb`
- # to enable all services you want to run.
- # When running GitLab on N servers, you have N `gitlab.rb` files.
- # Enable only the services you want to run on each
- # specific server, while disabling all others.
- #
- gitaly['enable'] = false
- postgresql['enable'] = false
- redis['enable'] = false
- nginx['enable'] = false
- puma['enable'] = false
- gitlab_workhorse['enable'] = false
- prometheus['enable'] = false
- alertmanager['enable'] = false
- grafana['enable'] = false
- gitlab_exporter['enable'] = false
- gitlab_kas['enable'] = false
+ # https://docs.gitlab.com/omnibus/roles/#sidekiq-roles
+ roles(["sidekiq_role"])
##
## To maintain uniformity of links across nodes, the
@@ -375,20 +356,6 @@ To enable LDAP with the synchronization worker for Sidekiq:
If you use [SAML Group Sync](../../user/group/saml_sso/group_sync.md), you must configure [SAML Groups](../../integration/saml.md#configure-users-based-on-saml-group-membership) on all your Sidekiq nodes.
-## Disable Rugged
-
-Calls into Rugged, Ruby bindings for `libgit2`, [lock the Sidekiq processes (GVL)](https://silverhammermba.github.io/emberb/c/#c-in-ruby-threads),
-blocking all jobs on that worker from proceeding. If Rugged calls performed by Sidekiq are slow, this can cause significant delays in
-background task processing.
-
-By default, Rugged is used when Git repository data is stored on local storage or on an NFS mount.
-Using Rugged is recommended when using NFS, but if
-you are using local storage, disabling Rugged can improve Sidekiq performance:
-
-```shell
-sudo gitlab-rake gitlab:features:disable_rugged
-```
-
## Related topics
- [Extra Sidekiq processes](extra_sidekiq_processes.md)
diff --git a/doc/administration/sidekiq/processing_specific_job_classes.md b/doc/administration/sidekiq/processing_specific_job_classes.md
index 696b0b9444c..74cbb6ca89b 100644
--- a/doc/administration/sidekiq/processing_specific_job_classes.md
+++ b/doc/administration/sidekiq/processing_specific_job_classes.md
@@ -179,14 +179,16 @@ nodes. In this example, we exclude all import-related jobs from a Sidekiq node.
sudo gitlab-ctl reconfigure
```
-### Migrating from queue selectors to routing rules
+## Migrating from queue selectors to routing rules
We recommend GitLab deployments add more Sidekiq processes listening to all queues, as in the
[Reference Architectures](../reference_architectures/index.md). For very large-scale deployments, we recommend
[routing rules](#routing-rules) instead of [queue selectors](#queue-selectors-deprecated). We use routing rules on GitLab.com as
it helps to lower the load on Redis.
-To migrate from queue selectors to routing rules:
+### Single node setup
+
+To migrate from queue selectors to routing rules in a [single node setup](../reference_architectures/index.md#standalone-non-ha):
1. Open `/etc/gitlab/gitlab.rb`.
1. Set `sidekiq['queue_selector']` to `false`.
@@ -213,9 +215,11 @@ NOTE:
It is important to run the Rake task immediately after reconfiguring GitLab.
After reconfiguring GitLab, existing jobs are not processed until the Rake task starts to migrate the jobs.
+#### Migration example
+
The following example better illustrates the migration process above:
-1. Check the following content of `/etc/gitlab/gitlab.rb`:
+1. In `/etc/gitlab/gitlab.rb`, check the `urgency` queries in `sidekiq['queue_groups']`. For example:
```ruby
sidekiq['routing_rules'] = []
@@ -228,7 +232,7 @@ The following example better illustrates the migration process above:
]
```
-1. Update `/etc/gitlab/gitlab.rb` to use routing rules:
+1. Use these same `urgency` queries to update `/etc/gitlab/gitlab.rb` to use routing rules:
```ruby
sidekiq['min_concurrency'] = 20
@@ -270,6 +274,31 @@ in a queue group entry is 1, while `min_concurrency` is set to `0`, and `max_con
concurrency is set to `2` instead. A concurrency of `2` might be too low in most cases, except for very highly-CPU
bound tasks.
+### Multiple node setup
+
+For a multiple node setup:
+
+- Reconfigure all GitLab Rails and Sidekiq nodes with the same `sidekiq['routing_rules']` setting.
+- Alternate between GitLab Rails and Sidekiq nodes as you update and reconfigure the nodes. This ensures the newly configured Sidekiq is ready to consume jobs from the new set of
+ queues during the migration. Otherwise, the new jobs hang until the end of the migration.
+
+Consider the following example of three GitLab Rails nodes and two Sidekiq nodes. To migrate from queue selectors to routing rules:
+
+1. In Sidekiq 1, follow all steps in [single node setup](#single-node-setup) except one:
+   **do not** run the Rake task to [migrate existing jobs](sidekiq_job_migration.md) yet.
+1. Configure the external load balancer to stop sending traffic to Rails 1. This step ensures Rails 1 does not serve requests while the Rails process restarts. For more information, see [issue 428794](https://gitlab.com/gitlab-org/gitlab/-/issues/428794#note_1619505870).
+1. In Rails 1, update `/etc/gitlab/gitlab.rb` to use the same `sidekiq['routing_rules']` setting as Sidekiq 1.
+ Only `sidekiq['routing_rules']` is required in Rails nodes.
+1. Configure the external load balancer to send traffic to Rails 1 again.
+1. Repeat steps 1 to 4 for Sidekiq 2 and Rails 2.
+1. Repeat steps 2 to 4 for Rails 3.
+1. If there are more Sidekiq nodes than Rails nodes, follow step 1 on the remaining Sidekiq nodes.
+1. Run the Rake task to [migrate existing jobs](sidekiq_job_migration.md):
+
+ ```shell
+ sudo gitlab-rake gitlab:sidekiq:migrate_jobs:retry gitlab:sidekiq:migrate_jobs:schedule gitlab:sidekiq:migrate_jobs:queued
+ ```
+
<!--- end_remove -->
## Worker matching query
diff --git a/doc/administration/sidekiq/sidekiq_troubleshooting.md b/doc/administration/sidekiq/sidekiq_troubleshooting.md
index 9ae2a59251a..2990110150f 100644
--- a/doc/administration/sidekiq/sidekiq_troubleshooting.md
+++ b/doc/administration/sidekiq/sidekiq_troubleshooting.md
@@ -536,6 +536,28 @@ The list of available jobs can be found in the [workers](https://gitlab.com/gitl
For more information about Sidekiq jobs, see the [Sidekiq-cron](https://github.com/sidekiq-cron/sidekiq-cron#work-with-job) documentation.
+## Disabling cron jobs
+
+You can disable any Sidekiq cron job in the [Monitoring section of the Admin Area](../admin_area.md#monitoring-section). You can also perform the same action from the command line with the [Rails Runner](../operations/rails_console.md#using-the-rails-runner).
+
+To disable all cron jobs:
+
+```shell
+sudo gitlab-rails runner 'Sidekiq::Cron::Job.all.map(&:disable!)'
+```
+
+To enable all cron jobs:
+
+```shell
+sudo gitlab-rails runner 'Sidekiq::Cron::Job.all.map(&:enable!)'
+```
+
+To enable or disable only a subset of the jobs at a time, use name matching. For example, to enable only the jobs with `geo` in the name:
+
+```shell
+sudo gitlab-rails runner 'Sidekiq::Cron::Job.all.select { |j| j.name.match("geo") }.map(&:enable!)'
+```
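+
+You can also look up a single job by name with `Sidekiq::Cron::Job.find` and toggle it on its
+own. In this sketch, `<job_name>` is a placeholder for one of the job names listed in the
+Monitoring section:
+
+```shell
+sudo gitlab-rails runner 'job = Sidekiq::Cron::Job.find("<job_name>"); job.disable! if job'
+```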
+
## Clearing a Sidekiq job deduplication idempotency key
Occasionally, jobs that are expected to run (for example, cron jobs) are observed to not run at all. When checking the logs, there might be instances where jobs are seen to not run with a `"job_status": "deduplicated"`.
diff --git a/doc/administration/silent_mode/index.md b/doc/administration/silent_mode/index.md
index 379b00536f3..4f68a765585 100644
--- a/doc/administration/silent_mode/index.md
+++ b/doc/administration/silent_mode/index.md
@@ -4,10 +4,11 @@ group: Geo
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
-# GitLab Silent Mode **(FREE SELF EXPERIMENT)**
+# GitLab Silent Mode **(FREE SELF)**
-> - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/9826) in GitLab 15.11. This feature is an [Experiment](../../policy/experiment-beta-support.md#experiment).
-> - Enabling and disabling Silent Mode through the web UI was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131090) in GitLab 16.4
+> - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/9826) in GitLab 15.11. This feature was an [Experiment](../../policy/experiment-beta-support.md#experiment).
+> - Enabling and disabling Silent Mode through the web UI was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131090) in GitLab 16.4.
+> - Silent Mode was updated to [Generally Available (GA)](../../policy/experiment-beta-support.md#generally-available-ga) in GitLab 16.6.
Silent Mode allows you to silence outbound communication, such as emails, from GitLab. Silent Mode is not intended to be used on environments which are in-use. Two use-cases are:
@@ -76,7 +77,7 @@ It may take up to a minute to take effect. [Issue 405433](https://gitlab.com/git
## Behavior of GitLab features in Silent Mode
-This section documents the current behavior of GitLab when Silent Mode is enabled. While Silent Mode is an Experiment, the behavior may change without notice. The work for the first iteration of Silent Mode is tracked by [Epic 9826](https://gitlab.com/groups/gitlab-org/-/epics/9826).
+This section documents the current behavior of GitLab when Silent Mode is enabled. The work for the first iteration of Silent Mode is tracked by [Epic 9826](https://gitlab.com/groups/gitlab-org/-/epics/9826).
When Silent Mode is enabled, a banner is displayed at the top of the page for all users stating the setting is enabled and **All outbound communications are blocked.**.
diff --git a/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md b/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md
index 9432836c22b..01c75c32366 100644
--- a/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md
+++ b/doc/administration/troubleshooting/gitlab_rails_cheat_sheet.md
@@ -46,11 +46,11 @@ This content has been moved to [Troubleshooting Repository mirroring](../../user
## CI
-This content has been moved to [Troubleshooting CI/CD](../../ci/troubleshooting.md).
+This content has been moved to [Troubleshooting CI/CD](../cicd.md#cicd-troubleshooting-rails-console-commands).
## License
-This content has been moved to [Activate GitLab EE with a license file or key](../../administration/license_file.md).
+This content has been moved to [Activate GitLab EE with a license file or key](../license_file.md).
## Registry
diff --git a/doc/api/api_resources.md b/doc/api/api_resources.md
index 3c3430dead4..76c91b00eb9 100644
--- a/doc/api/api_resources.md
+++ b/doc/api/api_resources.md
@@ -8,14 +8,14 @@ info: To determine the technical writer assigned to the Stage/Group associated w
Available resources for the [GitLab REST API](index.md) can be grouped in the following contexts:
-- [Projects](#project-resources).
-- [Groups](#group-resources).
-- [Standalone](#standalone-resources).
+- [Projects](#project-resources)
+- [Groups](#group-resources)
+- [Standalone](#standalone-resources)
See also:
-- Adding [deploy keys for multiple projects](deploy_keys.md#add-deploy-keys-to-multiple-projects).
-- [API Resources for various templates](#templates-api-resources).
+- Adding [deploy keys for multiple projects](deploy_keys.md#add-deploy-keys-to-multiple-projects)
+- [API Resources for various templates](#templates-api-resources)
## Project resources
@@ -206,7 +206,7 @@ The following API resources are available outside of project and group contexts
Endpoints are available for:
-- [Dockerfile templates](templates/dockerfiles.md).
-- [`.gitignore` templates](templates/gitignores.md).
-- [GitLab CI/CD YAML templates](templates/gitlab_ci_ymls.md).
-- [Open source license templates](templates/licenses.md).
+- [Dockerfile templates](templates/dockerfiles.md)
+- [`.gitignore` templates](templates/gitignores.md)
+- [GitLab CI/CD YAML templates](templates/gitlab_ci_ymls.md)
+- [Open source license templates](templates/licenses.md)
diff --git a/doc/api/bulk_imports.md b/doc/api/bulk_imports.md
index db508d1edfa..0f9df4eba31 100644
--- a/doc/api/bulk_imports.md
+++ b/doc/api/bulk_imports.md
@@ -257,3 +257,26 @@ curl --request GET --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab
"updated_at": "2021-06-18T09:46:27.003Z"
}
```
+
+## Get list of failed import records for group or project migration entity
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/428016) in GitLab 16.6.
+
+```plaintext
+GET /bulk_imports/:id/entities/:entity_id/failures
+```
+
+```shell
+curl --request GET --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/bulk_imports/1/entities/2/failures"
+```
+
+```json
+{
+ "relation": "issues",
+ "exception_message": "Error!",
+ "exception_class": "StandardError",
+ "correlation_id_value": "06289e4b064329a69de7bb2d7a1b5a97",
+ "source_url": "https://gitlab.example/project/full/path/-/issues/1",
+ "source_title": "Issue title"
+}
+```
diff --git a/doc/api/container_registry.md b/doc/api/container_registry.md
index 901b0b93529..35b74965d2e 100644
--- a/doc/api/container_registry.md
+++ b/doc/api/container_registry.md
@@ -425,7 +425,7 @@ curl --request DELETE --data-urlencode 'name_regex_delete=dev-.+' \
Beside the group- and project-specific GitLab APIs explained above,
the Container Registry has its own endpoints.
To query those, follow the Registry's built-in mechanism to obtain and use an
-[authentication token](https://docs.docker.com/registry/spec/auth/token/).
+[authentication token](https://distribution.github.io/distribution/spec/auth/token/).
NOTE:
These are different from project or personal access tokens in the GitLab application.
@@ -436,7 +436,7 @@ These are different from project or personal access tokens in the GitLab applica
GET ${CI_SERVER_URL}/jwt/auth?service=container_registry&scope=*
```
-You must specify the correct [scopes and actions](https://docs.docker.com/registry/spec/auth/scope/) to retrieve a valid token:
+You must specify the correct [scopes and actions](https://distribution.github.io/distribution/spec/auth/scope/) to retrieve a valid token:
```shell
$ SCOPE="repository:${CI_REGISTRY_IMAGE}:delete" #or push,pull
@@ -448,17 +448,28 @@ $ curl --request GET --user "${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD}" \
### Delete image tags by reference
+> Endpoint `v2/<name>/manifests/<tag>` [introduced](https://gitlab.com/gitlab-org/container-registry/-/issues/1091) and endpoint `v2/<name>/tags/reference/<tag>` [deprecated](https://gitlab.com/gitlab-org/container-registry/-/issues/1094) in GitLab 16.4.
+
+<!--- start_remove The following content will be removed on remove_date: '2024-08-15' -->
+
+WARNING:
+Endpoint `v2/<name>/tags/reference/<tag>` was [deprecated](https://gitlab.com/gitlab-org/container-registry/-/issues/1095)
+in GitLab 16.4 and is planned for removal in GitLab 17.0. Use [`v2/<name>/manifests/<tag>`](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/docker/v2/api.md#delete-manifest) instead.
+This change is a breaking change.
+
+<!--- end_remove -->
+
```plaintext
DELETE http(s)://${CI_REGISTRY}/v2/${CI_REGISTRY_IMAGE}/tags/reference/${CI_COMMIT_SHORT_SHA}
```
You can use the token retrieved with the predefined `CI_REGISTRY_USER` and `CI_REGISTRY_PASSWORD` variables to delete container image tags by reference on your GitLab instance.
-The `tag_delete` [Container-Registry-Feature](https://gitlab.com/gitlab-org/container-registry/-/tree/v3.61.0-gitlab/docs-gitlab#api) must be enabled.
+The `tag_delete` [Container-Registry-Feature](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/docker/v2/api.md#delete-tag) must be enabled.
```shell
$ curl --request DELETE --header "Authorization: Bearer <token_from_above>" \
--header "Accept: application/vnd.docker.distribution.manifest.v2+json" \
- "https://gitlab.example.com:5050/v2/${CI_REGISTRY_IMAGE}/tags/reference/${CI_COMMIT_SHORT_SHA}"
+ "https://gitlab.example.com:5050/v2/${CI_REGISTRY_IMAGE}/manifests/${CI_COMMIT_SHORT_SHA}"
```
### Listing all container repositories
diff --git a/doc/api/dependency_list_export.md b/doc/api/dependency_list_export.md
index 744309a402e..db43ea238c1 100644
--- a/doc/api/dependency_list_export.md
+++ b/doc/api/dependency_list_export.md
@@ -23,7 +23,7 @@ and subject to change without notice.
Create a new CycloneDX JSON export for all the project dependencies detected in a pipeline.
-If an authenticated user doesn't have permission to [read_dependency](../user/custom_roles.md#custom-role-requirements),
+If an authenticated user does not have the [`read_dependency`](../user/custom_roles.md#available-permissions) permission,
this request returns a `403 Forbidden` status code.
SBOM exports can be only accessed by the export's author.
@@ -59,7 +59,7 @@ Example response:
Get a single dependency list export.
```plaintext
-GET /security/dependency_list_exports/:id
+GET /dependency_list_exports/:id
```
| Attribute | Type | Required | Description |
@@ -67,7 +67,7 @@ GET /security/dependency_list_exports/:id
| `id` | integer | yes | The ID of the dependency list export. |
```shell
-curl --header "PRIVATE-TOKEN: <private_token>" "https://gitlab.example.com/api/v4/security/dependency_list_exports/2"
+curl --header "PRIVATE-TOKEN: <private_token>" "https://gitlab.example.com/api/v4/dependency_list_exports/2"
```
The status code is `202 Accepted` when the dependency list export is being generated, and `200 OK` when it's ready.
@@ -88,7 +88,7 @@ Example response:
Download a single dependency list export.
```plaintext
-GET /security/dependency_list_exports/:id/download
+GET /dependency_list_exports/:id/download
```
| Attribute | Type | Required | Description |
@@ -96,7 +96,7 @@ GET /security/dependency_list_exports/:id/download
| `id` | integer | yes | The ID of the dependency list export. |
```shell
-curl --header "PRIVATE-TOKEN: <private_token>" "https://gitlab.example.com/api/v4/security/dependency_list_exports/2/download"
+curl --header "PRIVATE-TOKEN: <private_token>" "https://gitlab.example.com/api/v4/dependency_list_exports/2/download"
```
The response is `404 Not Found` if the dependency list export is not finished yet or was not found.
diff --git a/doc/api/deployments.md b/doc/api/deployments.md
index aad3567879a..2dbc4bd0831 100644
--- a/doc/api/deployments.md
+++ b/doc/api/deployments.md
@@ -306,7 +306,7 @@ When the [unified approval setting](../ci/environments/deployment_approvals.md#u
}
```
-When the [multiple approval rules](../ci/environments/deployment_approvals.md#multiple-approval-rules) is configured, deployments created by users on GitLab Premium or Ultimate include the `approval_summary` property:
+When [multiple approval rules](../ci/environments/deployment_approvals.md#add-multiple-approval-rules) are configured, deployments created by users on GitLab Premium or Ultimate include the `approval_summary` property:
```json
{
@@ -547,7 +547,7 @@ POST /projects/:id/deployments/:deployment_id/approval
| `deployment_id` | integer | yes | The ID of the deployment. |
| `status` | string | yes | The status of the approval (either `approved` or `rejected`). |
| `comment` | string | no | A comment to go with the approval |
-| `represented_as`| string | no | The name of the User/Group/Role to use for the approval, when the user belongs to [multiple approval rules](../ci/environments/deployment_approvals.md#multiple-approval-rules). |
+| `represented_as`| string | no | The name of the User/Group/Role to use for the approval, when the user belongs to [multiple approval rules](../ci/environments/deployment_approvals.md#add-multiple-approval-rules). |
```shell
curl --data "status=approved&comment=Looks good to me&represented_as=security" \
diff --git a/doc/api/geo_nodes.md b/doc/api/geo_nodes.md
index 3f7fd537abf..c376d7a6774 100644
--- a/doc/api/geo_nodes.md
+++ b/doc/api/geo_nodes.md
@@ -332,7 +332,6 @@ Example response:
"job_artifacts_count": 2,
"job_artifacts_synced_count": null,
"job_artifacts_failed_count": null,
- "job_artifacts_synced_missing_on_primary_count": 0,
"job_artifacts_synced_in_percentage": "0.00%",
"projects_count": 41,
"repositories_count": 41,
@@ -470,7 +469,6 @@ Example response:
"job_artifacts_verification_failed_count": 0,
"job_artifacts_synced_in_percentage": "100.00%",
"job_artifacts_verified_in_percentage": "100.00%",
- "job_artifacts_synced_missing_on_primary_count": 0,
"ci_secure_files_count": 5,
"ci_secure_files_checksum_total_count": 5,
"ci_secure_files_checksummed_count": 5,
@@ -483,7 +481,6 @@ Example response:
"ci_secure_files_verification_failed_count": 0,
"ci_secure_files_synced_in_percentage": "100.00%",
"ci_secure_files_verified_in_percentage": "100.00%",
- "ci_secure_files_synced_missing_on_primary_count": 0,
"dependency_proxy_blobs_count": 5,
"dependency_proxy_blobs_checksum_total_count": 5,
"dependency_proxy_blobs_checksummed_count": 5,
@@ -496,13 +493,11 @@ Example response:
"dependency_proxy_blobs_verification_failed_count": 0,
"dependency_proxy_blobs_synced_in_percentage": "100.00%",
"dependency_proxy_blobs_verified_in_percentage": "100.00%",
- "dependency_proxy_blobs_synced_missing_on_primary_count": 0,
"container_repositories_count": 5,
"container_repositories_synced_count": 5,
"container_repositories_failed_count": 0,
"container_repositories_registry_count": 5,
"container_repositories_synced_in_percentage": "100.00%",
- "container_repositories_synced_missing_on_primary_count": 0,
"container_repositories_checksum_total_count": 0,
"container_repositories_checksummed_count": 0,
"container_repositories_checksum_failed_count": 0,
@@ -569,7 +564,6 @@ Example response:
"job_artifacts_count": 2,
"job_artifacts_synced_count": 1,
"job_artifacts_failed_count": 1,
- "job_artifacts_synced_missing_on_primary_count": 0,
"job_artifacts_synced_in_percentage": "50.00%",
"design_management_repositories_count": 5,
"design_management_repositories_synced_count": 5,
@@ -695,7 +689,6 @@ Example response:
"job_artifacts_verification_failed_count": 0,
"job_artifacts_synced_in_percentage": "100.00%",
"job_artifacts_verified_in_percentage": "100.00%",
- "job_artifacts_synced_missing_on_primary_count": 0,
"dependency_proxy_blobs_count": 5,
"dependency_proxy_blobs_checksum_total_count": 5,
"dependency_proxy_blobs_checksummed_count": 5,
@@ -708,13 +701,11 @@ Example response:
"dependency_proxy_blobs_verification_failed_count": 0,
"dependency_proxy_blobs_synced_in_percentage": "100.00%",
"dependency_proxy_blobs_verified_in_percentage": "100.00%",
- "dependency_proxy_blobs_synced_missing_on_primary_count": 0,
"container_repositories_count": 5,
"container_repositories_synced_count": 5,
"container_repositories_failed_count": 0,
"container_repositories_registry_count": 5,
"container_repositories_synced_in_percentage": "100.00%",
- "container_repositories_synced_missing_on_primary_count": 0,
"container_repositories_checksum_total_count": 0,
"container_repositories_checksummed_count": 0,
"container_repositories_checksum_failed_count": 0,
@@ -785,7 +776,6 @@ Example response:
"job_artifacts_count": 2,
"job_artifacts_synced_count": 1,
"job_artifacts_failed_count": 1,
- "job_artifacts_synced_missing_on_primary_count": 0,
"job_artifacts_synced_in_percentage": "50.00%",
"projects_count": 41,
"repositories_count": 41,
@@ -896,7 +886,6 @@ Example response:
"job_artifacts_verification_failed_count": 0,
"job_artifacts_synced_in_percentage": "100.00%",
"job_artifacts_verified_in_percentage": "100.00%",
- "job_artifacts_synced_missing_on_primary_count": 0,
"ci_secure_files_count": 5,
"ci_secure_files_checksum_total_count": 5,
"ci_secure_files_checksummed_count": 5,
@@ -909,7 +898,6 @@ Example response:
"ci_secure_files_verification_failed_count": 0,
"ci_secure_files_synced_in_percentage": "100.00%",
"ci_secure_files_verified_in_percentage": "100.00%",
- "ci_secure_files_synced_missing_on_primary_count": 0,
"dependency_proxy_blobs_count": 5,
"dependency_proxy_blobs_checksum_total_count": 5,
"dependency_proxy_blobs_checksummed_count": 5,
@@ -922,13 +910,11 @@ Example response:
"dependency_proxy_blobs_verification_failed_count": 0,
"dependency_proxy_blobs_synced_in_percentage": "100.00%",
"dependency_proxy_blobs_verified_in_percentage": "100.00%",
- "dependency_proxy_blobs_synced_missing_on_primary_count": 0,
"container_repositories_count": 5,
"container_repositories_synced_count": 5,
"container_repositories_failed_count": 0,
"container_repositories_registry_count": 5,
"container_repositories_synced_in_percentage": "100.00%",
- "container_repositories_synced_missing_on_primary_count": 0,
"container_repositories_checksum_total_count": 0,
"container_repositories_checksummed_count": 0,
"container_repositories_checksum_failed_count": 0,
diff --git a/doc/api/geo_sites.md b/doc/api/geo_sites.md
index eaf813ae201..95691960a78 100644
--- a/doc/api/geo_sites.md
+++ b/doc/api/geo_sites.md
@@ -292,7 +292,6 @@ Example response:
[
{
"geo_node_id": 1,
- "job_artifacts_synced_missing_on_primary_count": null,
"projects_count": 19,
"container_repositories_replication_enabled": null,
"lfs_objects_count": 0,
@@ -510,7 +509,6 @@ Example response:
},
{
"geo_node_id": 2,
- "job_artifacts_synced_missing_on_primary_count": null,
"projects_count": 19,
"container_repositories_replication_enabled": null,
"lfs_objects_count": 0,
@@ -744,7 +742,6 @@ Example response:
```json
{
"geo_node_id": 2,
- "job_artifacts_synced_missing_on_primary_count": null,
"projects_count": 19,
"container_repositories_replication_enabled": null,
"lfs_objects_count": 0,
diff --git a/doc/api/graphql/reference/index.md b/doc/api/graphql/reference/index.md
index 6015323f7f7..4a1b536fd40 100644
--- a/doc/api/graphql/reference/index.md
+++ b/doc/api/graphql/reference/index.md
@@ -136,7 +136,8 @@ Returns [`CiCatalogResource`](#cicatalogresource).
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="querycicatalogresourceid"></a>`id` | [`CiCatalogResourceID!`](#cicatalogresourceid) | CI/CD Catalog resource global ID. |
+| <a id="querycicatalogresourcefullpath"></a>`fullPath` | [`ID`](#id) | CI/CD Catalog resource full path. |
+| <a id="querycicatalogresourceid"></a>`id` | [`CiCatalogResourceID`](#cicatalogresourceid) | CI/CD Catalog resource global ID. |
### `Query.ciCatalogResources`
@@ -157,7 +158,9 @@ four standard [pagination arguments](#connection-pagination-arguments):
| Name | Type | Description |
| ---- | ---- | ----------- |
| <a id="querycicatalogresourcesprojectpath"></a>`projectPath` | [`ID`](#id) | Project with the namespace catalog. |
-| <a id="querycicatalogresourcessort"></a>`sort` | [`CiCatalogResourceSort`](#cicatalogresourcesort) | Sort Catalog Resources by given criteria. |
+| <a id="querycicatalogresourcesscope"></a>`scope` | [`CiCatalogResourceScope`](#cicatalogresourcescope) | Scope of the returned catalog resources. |
+| <a id="querycicatalogresourcessearch"></a>`search` | [`String`](#string) | Search term to filter the catalog resources by name or description. |
+| <a id="querycicatalogresourcessort"></a>`sort` | [`CiCatalogResourceSort`](#cicatalogresourcesort) | Sort catalog resources by given criteria. |
### `Query.ciConfig`
@@ -324,6 +327,26 @@ Returns [`ExplainVulnerabilityPrompt`](#explainvulnerabilityprompt).
| ---- | ---- | ----------- |
| <a id="queryexplainvulnerabilitypromptvulnerabilityid"></a>`vulnerabilityId` | [`VulnerabilityID!`](#vulnerabilityid) | Vulnerability to generate a prompt for. |
+### `Query.frecentGroups`
+
+A user's frecently visited groups. Requires the `frecent_namespaces_suggestions` feature flag to be enabled.
+
+WARNING:
+**Introduced** in 16.6.
+This feature is an Experiment. It can be changed or removed at any time.
+
+Returns [`[Group!]`](#group).
+
+### `Query.frecentProjects`
+
+A user's frecently visited projects. Requires the `frecent_namespaces_suggestions` feature flag to be enabled.
+
+WARNING:
+**Introduced** in 16.6.
+This feature is an Experiment. It can be changed or removed at any time.
+
+Returns [`[Project!]`](#project).
+
### `Query.geoNode`
Find a Geo node.
@@ -505,6 +528,22 @@ This field returns a [connection](#connections). It accepts the
four standard [pagination arguments](#connection-pagination-arguments):
`before: String`, `after: String`, `first: Int`, `last: Int`.
+### `Query.memberRole`
+
+Finds a single custom role.
+
+WARNING:
+**Introduced** in 16.6.
+This feature is an Experiment. It can be changed or removed at any time.
+
+Returns [`MemberRole`](#memberrole).
+
+#### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="querymemberroleid"></a>`id` | [`MemberRoleID`](#memberroleid) | Global ID of the member role to look up. |
+
### `Query.memberRolePermissions`
List of all customizable permissions.
@@ -627,6 +666,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
| Name | Type | Description |
| ---- | ---- | ----------- |
+| <a id="queryprojectsfullpaths"></a>`fullPaths` | [`[String!]`](#string) | Filter projects by full paths. You cannot provide more than 50 full paths. |
| <a id="queryprojectsids"></a>`ids` | [`[ID!]`](#id) | Filter projects by IDs. |
| <a id="queryprojectsmembership"></a>`membership` | [`Boolean`](#boolean) | Return only projects that the current user is a member of. |
| <a id="queryprojectssearch"></a>`search` | [`String`](#string) | Search query, which can be for the project name, a path, or a description. |
@@ -702,6 +742,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
| Name | Type | Description |
| ---- | ---- | ----------- |
| <a id="queryrunnersactive"></a>`active` **{warning-solid}** | [`Boolean`](#boolean) | **Deprecated** in 14.8. This was renamed. Use: `paused`. |
+| <a id="queryrunnerscreatorid"></a>`creatorId` | [`UserID`](#userid) | Filter runners by creator ID. |
| <a id="queryrunnerspaused"></a>`paused` | [`Boolean`](#boolean) | Filter runners by `paused` (true) or `active` (false) status. |
| <a id="queryrunnerssearch"></a>`search` | [`String`](#string) | Filter by full token or partial text in description field. |
| <a id="queryrunnerssort"></a>`sort` | [`CiRunnerSort`](#cirunnersort) | Sort order of results. |
@@ -709,6 +750,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
| <a id="queryrunnerstaglist"></a>`tagList` | [`[String!]`](#string) | Filter by tags associated with the runner (comma-separated or array). |
| <a id="queryrunnerstype"></a>`type` | [`CiRunnerType`](#cirunnertype) | Filter runners by type. |
| <a id="queryrunnersupgradestatus"></a>`upgradeStatus` | [`CiRunnerUpgradeStatus`](#cirunnerupgradestatus) | Filter by upgrade status. |
+| <a id="queryrunnersversionprefix"></a>`versionPrefix` **{warning-solid}** | [`String`](#string) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Filter runners by version. Runners that contain runner managers with the version at the start of the search term are returned. For example, the search term '14.' returns runner managers with versions '14.11.1' and '14.2.3'. |
### `Query.snippets`
@@ -1218,6 +1260,7 @@ Input type: `AiActionInput`
| <a id="mutationaiactiongeneratecommitmessage"></a>`generateCommitMessage` | [`AiGenerateCommitMessageInput`](#aigeneratecommitmessageinput) | Input for generate_commit_message AI action. |
| <a id="mutationaiactiongeneratedescription"></a>`generateDescription` | [`AiGenerateDescriptionInput`](#aigeneratedescriptioninput) | Input for generate_description AI action. |
| <a id="mutationaiactiongeneratetestfile"></a>`generateTestFile` | [`GenerateTestFileInput`](#generatetestfileinput) | Input for generate_test_file AI action. |
+| <a id="mutationaiactionresolvevulnerability"></a>`resolveVulnerability` | [`AiResolveVulnerabilityInput`](#airesolvevulnerabilityinput) | Input for resolve_vulnerability AI action. |
| <a id="mutationaiactionsummarizecomments"></a>`summarizeComments` | [`AiSummarizeCommentsInput`](#aisummarizecommentsinput) | Input for summarize_comments AI action. |
| <a id="mutationaiactionsummarizereview"></a>`summarizeReview` | [`AiSummarizeReviewInput`](#aisummarizereviewinput) | Input for summarize_review AI action. |
| <a id="mutationaiactiontanukibot"></a>`tanukiBot` | [`AiTanukiBotInput`](#aitanukibotinput) | Input for tanuki_bot AI action. |
@@ -1276,94 +1319,112 @@ Input type: `AlertTodoCreateInput`
| <a id="mutationalerttodocreateissue"></a>`issue` | [`Issue`](#issue) | Issue created after mutation. |
| <a id="mutationalerttodocreatetodo"></a>`todo` | [`Todo`](#todo) | To-do item after mutation. |
-### `Mutation.amazonS3ConfigurationCreate`
+### `Mutation.approveDeployment`
-Input type: `AmazonS3ConfigurationCreateInput`
+Input type: `ApproveDeploymentInput`
#### Arguments
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="mutationamazons3configurationcreateaccesskeyxid"></a>`accessKeyXid` | [`String!`](#string) | Access key ID of the Amazon S3 account. |
-| <a id="mutationamazons3configurationcreateawsregion"></a>`awsRegion` | [`String!`](#string) | AWS region where the bucket is created. |
-| <a id="mutationamazons3configurationcreatebucketname"></a>`bucketName` | [`String!`](#string) | Name of the bucket where the audit events would be logged. |
-| <a id="mutationamazons3configurationcreateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
-| <a id="mutationamazons3configurationcreategrouppath"></a>`groupPath` | [`ID!`](#id) | Group path. |
-| <a id="mutationamazons3configurationcreatename"></a>`name` | [`String`](#string) | Destination name. |
-| <a id="mutationamazons3configurationcreatesecretaccesskey"></a>`secretAccessKey` | [`String!`](#string) | Secret access key of the Amazon S3 account. |
+| <a id="mutationapprovedeploymentclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationapprovedeploymentcomment"></a>`comment` | [`String`](#string) | Comment to go with the approval. |
+| <a id="mutationapprovedeploymentid"></a>`id` | [`DeploymentID!`](#deploymentid) | ID of the deployment. |
+| <a id="mutationapprovedeploymentrepresentedas"></a>`representedAs` | [`String`](#string) | Name of the User/Group/Role to use for the approval, when the user belongs to multiple approval rules. |
+| <a id="mutationapprovedeploymentstatus"></a>`status` | [`DeploymentsApprovalStatus!`](#deploymentsapprovalstatus) | Status of the approval (either `APPROVED` or `REJECTED`). |
#### Fields
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="mutationamazons3configurationcreateamazons3configuration"></a>`amazonS3Configuration` | [`AmazonS3ConfigurationType`](#amazons3configurationtype) | configuration created. |
-| <a id="mutationamazons3configurationcreateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
-| <a id="mutationamazons3configurationcreateerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+| <a id="mutationapprovedeploymentclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationapprovedeploymentdeploymentapproval"></a>`deploymentApproval` | [`DeploymentApproval!`](#deploymentapproval) | DeploymentApproval after mutation. |
+| <a id="mutationapprovedeploymenterrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
-### `Mutation.amazonS3ConfigurationUpdate`
+### `Mutation.artifactDestroy`
-Input type: `AmazonS3ConfigurationUpdateInput`
+Input type: `ArtifactDestroyInput`
#### Arguments
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="mutationamazons3configurationupdateaccesskeyxid"></a>`accessKeyXid` | [`String`](#string) | Access key ID of the Amazon S3 account. |
-| <a id="mutationamazons3configurationupdateawsregion"></a>`awsRegion` | [`String`](#string) | AWS region where the bucket is created. |
-| <a id="mutationamazons3configurationupdatebucketname"></a>`bucketName` | [`String`](#string) | Name of the bucket where the audit events would be logged. |
-| <a id="mutationamazons3configurationupdateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
-| <a id="mutationamazons3configurationupdateid"></a>`id` | [`AuditEventsAmazonS3ConfigurationID!`](#auditeventsamazons3configurationid) | ID of the Amazon S3 configuration to update. |
-| <a id="mutationamazons3configurationupdatename"></a>`name` | [`String`](#string) | Destination name. |
-| <a id="mutationamazons3configurationupdatesecretaccesskey"></a>`secretAccessKey` | [`String`](#string) | Secret access key of the Amazon S3 account. |
+| <a id="mutationartifactdestroyclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationartifactdestroyid"></a>`id` | [`CiJobArtifactID!`](#cijobartifactid) | ID of the artifact to delete. |
#### Fields
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="mutationamazons3configurationupdateamazons3configuration"></a>`amazonS3Configuration` | [`AmazonS3ConfigurationType`](#amazons3configurationtype) | Updated Amazon S3 configuration. |
-| <a id="mutationamazons3configurationupdateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
-| <a id="mutationamazons3configurationupdateerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+| <a id="mutationartifactdestroyartifact"></a>`artifact` | [`CiJobArtifact`](#cijobartifact) | Deleted artifact. |
+| <a id="mutationartifactdestroyclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationartifactdestroyerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
-### `Mutation.approveDeployment`
+### `Mutation.auditEventsAmazonS3ConfigurationCreate`
-Input type: `ApproveDeploymentInput`
+Input type: `AuditEventsAmazonS3ConfigurationCreateInput`
#### Arguments
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="mutationapprovedeploymentclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
-| <a id="mutationapprovedeploymentcomment"></a>`comment` | [`String`](#string) | Comment to go with the approval. |
-| <a id="mutationapprovedeploymentid"></a>`id` | [`DeploymentID!`](#deploymentid) | ID of the deployment. |
-| <a id="mutationapprovedeploymentrepresentedas"></a>`representedAs` | [`String`](#string) | Name of the User/Group/Role to use for the approval, when the user belongs to multiple approval rules. |
-| <a id="mutationapprovedeploymentstatus"></a>`status` | [`DeploymentsApprovalStatus!`](#deploymentsapprovalstatus) | Status of the approval (either `APPROVED` or `REJECTED`). |
+| <a id="mutationauditeventsamazons3configurationcreateaccesskeyxid"></a>`accessKeyXid` | [`String!`](#string) | Access key ID of the Amazon S3 account. |
+| <a id="mutationauditeventsamazons3configurationcreateawsregion"></a>`awsRegion` | [`String!`](#string) | AWS region where the bucket is created. |
+| <a id="mutationauditeventsamazons3configurationcreatebucketname"></a>`bucketName` | [`String!`](#string) | Name of the bucket where the audit events would be logged. |
+| <a id="mutationauditeventsamazons3configurationcreateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationauditeventsamazons3configurationcreategrouppath"></a>`groupPath` | [`ID!`](#id) | Group path. |
+| <a id="mutationauditeventsamazons3configurationcreatename"></a>`name` | [`String`](#string) | Destination name. |
+| <a id="mutationauditeventsamazons3configurationcreatesecretaccesskey"></a>`secretAccessKey` | [`String!`](#string) | Secret access key of the Amazon S3 account. |
#### Fields
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="mutationapprovedeploymentclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
-| <a id="mutationapprovedeploymentdeploymentapproval"></a>`deploymentApproval` | [`DeploymentApproval!`](#deploymentapproval) | DeploymentApproval after mutation. |
-| <a id="mutationapprovedeploymenterrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+| <a id="mutationauditeventsamazons3configurationcreateamazons3configuration"></a>`amazonS3Configuration` | [`AmazonS3ConfigurationType`](#amazons3configurationtype) | configuration created. |
+| <a id="mutationauditeventsamazons3configurationcreateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationauditeventsamazons3configurationcreateerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
-### `Mutation.artifactDestroy`
+### `Mutation.auditEventsAmazonS3ConfigurationDelete`
-Input type: `ArtifactDestroyInput`
+Input type: `AuditEventsAmazonS3ConfigurationDeleteInput`
#### Arguments
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="mutationartifactdestroyclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
-| <a id="mutationartifactdestroyid"></a>`id` | [`CiJobArtifactID!`](#cijobartifactid) | ID of the artifact to delete. |
+| <a id="mutationauditeventsamazons3configurationdeleteclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationauditeventsamazons3configurationdeleteid"></a>`id` | [`AuditEventsAmazonS3ConfigurationID!`](#auditeventsamazons3configurationid) | ID of the Amazon S3 configuration to destroy. |
#### Fields
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="mutationartifactdestroyartifact"></a>`artifact` | [`CiJobArtifact`](#cijobartifact) | Deleted artifact. |
-| <a id="mutationartifactdestroyclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
-| <a id="mutationartifactdestroyerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+| <a id="mutationauditeventsamazons3configurationdeleteclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationauditeventsamazons3configurationdeleteerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+
+### `Mutation.auditEventsAmazonS3ConfigurationUpdate`
+
+Input type: `AuditEventsAmazonS3ConfigurationUpdateInput`
+
+#### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationauditeventsamazons3configurationupdateaccesskeyxid"></a>`accessKeyXid` | [`String`](#string) | Access key ID of the Amazon S3 account. |
+| <a id="mutationauditeventsamazons3configurationupdateawsregion"></a>`awsRegion` | [`String`](#string) | AWS region where the bucket is created. |
+| <a id="mutationauditeventsamazons3configurationupdatebucketname"></a>`bucketName` | [`String`](#string) | Name of the bucket where the audit events would be logged. |
+| <a id="mutationauditeventsamazons3configurationupdateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationauditeventsamazons3configurationupdateid"></a>`id` | [`AuditEventsAmazonS3ConfigurationID!`](#auditeventsamazons3configurationid) | ID of the Amazon S3 configuration to update. |
+| <a id="mutationauditeventsamazons3configurationupdatename"></a>`name` | [`String`](#string) | Destination name. |
+| <a id="mutationauditeventsamazons3configurationupdatesecretaccesskey"></a>`secretAccessKey` | [`String`](#string) | Secret access key of the Amazon S3 account. |
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationauditeventsamazons3configurationupdateamazons3configuration"></a>`amazonS3Configuration` | [`AmazonS3ConfigurationType`](#amazons3configurationtype) | Updated Amazon S3 configuration. |
+| <a id="mutationauditeventsamazons3configurationupdateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationauditeventsamazons3configurationupdateerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
### `Mutation.auditEventsStreamingDestinationEventsAdd`
@@ -1505,6 +1566,27 @@ Input type: `AuditEventsStreamingHeadersUpdateInput`
| <a id="mutationauditeventsstreamingheadersupdateerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
| <a id="mutationauditeventsstreamingheadersupdateheader"></a>`header` | [`AuditEventStreamingHeader`](#auditeventstreamingheader) | Updates header. |
+### `Mutation.auditEventsStreamingHttpNamespaceFiltersAdd`
+
+Input type: `AuditEventsStreamingHTTPNamespaceFiltersAddInput`
+
+#### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationauditeventsstreaminghttpnamespacefiltersaddclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationauditeventsstreaminghttpnamespacefiltersadddestinationid"></a>`destinationId` | [`AuditEventsExternalAuditEventDestinationID!`](#auditeventsexternalauditeventdestinationid) | Destination ID. |
+| <a id="mutationauditeventsstreaminghttpnamespacefiltersaddgrouppath"></a>`groupPath` | [`ID`](#id) | Full path of the group. |
+| <a id="mutationauditeventsstreaminghttpnamespacefiltersaddprojectpath"></a>`projectPath` | [`ID`](#id) | Full path of the project. |
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationauditeventsstreaminghttpnamespacefiltersaddclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationauditeventsstreaminghttpnamespacefiltersadderrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+| <a id="mutationauditeventsstreaminghttpnamespacefiltersaddnamespacefilter"></a>`namespaceFilter` | [`AuditEventStreamingHTTPNamespaceFilter`](#auditeventstreaminghttpnamespacefilter) | Namespace filter created. |
+
### `Mutation.auditEventsStreamingInstanceHeadersCreate`
Input type: `AuditEventsStreamingInstanceHeadersCreateInput`
@@ -1792,6 +1874,28 @@ Input type: `BulkRunnerDeleteInput`
| <a id="mutationbulkrunnerdeletedeletedids"></a>`deletedIds` | [`[CiRunnerID!]`](#cirunnerid) | IDs of records effectively deleted. Only present if operation was performed synchronously. |
| <a id="mutationbulkrunnerdeleteerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+### `Mutation.catalogResourceUnpublish`
+
+WARNING:
+**Introduced** in 16.6.
+This feature is an Experiment. It can be changed or removed at any time.
+
+Input type: `CatalogResourceUnpublishInput`
+
+#### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationcatalogresourceunpublishclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationcatalogresourceunpublishid"></a>`id` | [`CiCatalogResourceID!`](#cicatalogresourceid) | Global ID of the catalog resource to unpublish. |
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationcatalogresourceunpublishclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationcatalogresourceunpublisherrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+
### `Mutation.catalogResourcesCreate`
WARNING:
@@ -2246,6 +2350,34 @@ Input type: `CreateComplianceFrameworkInput`
| <a id="mutationcreatecomplianceframeworkerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
| <a id="mutationcreatecomplianceframeworkframework"></a>`framework` | [`ComplianceFramework`](#complianceframework) | Created compliance framework. |
+### `Mutation.createContainerRegistryProtectionRule`
+
+Creates a protection rule to restrict access to a project's container registry. Available only when feature flag `container_registry_protected_containers` is enabled.
+
+WARNING:
+**Introduced** in 16.6.
+This feature is an Experiment. It can be changed or removed at any time.
+
+Input type: `CreateContainerRegistryProtectionRuleInput`
+
+#### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationcreatecontainerregistryprotectionruleclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationcreatecontainerregistryprotectionrulecontainerpathpattern"></a>`containerPathPattern` | [`String!`](#string) | ContainerRegistryname protected by the protection rule. For example `@my-scope/my-container-*`. Wildcard character `*` allowed. |
+| <a id="mutationcreatecontainerregistryprotectionruledeleteprotecteduptoaccesslevel"></a>`deleteProtectedUpToAccessLevel` | [`ContainerRegistryProtectionRuleAccessLevel!`](#containerregistryprotectionruleaccesslevel) | Max GitLab access level to prevent from deleting container images in the container registry. For example `DEVELOPER`, `MAINTAINER`, `OWNER`. |
+| <a id="mutationcreatecontainerregistryprotectionruleprojectpath"></a>`projectPath` | [`ID!`](#id) | Full path of the project where a protection rule is located. |
+| <a id="mutationcreatecontainerregistryprotectionrulepushprotecteduptoaccesslevel"></a>`pushProtectedUpToAccessLevel` | [`ContainerRegistryProtectionRuleAccessLevel!`](#containerregistryprotectionruleaccesslevel) | Max GitLab access level to prevent from pushing container images to the container registry. For example `DEVELOPER`, `MAINTAINER`, `OWNER`. |
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationcreatecontainerregistryprotectionruleclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationcreatecontainerregistryprotectionrulecontainerregistryprotectionrule"></a>`containerRegistryProtectionRule` | [`ContainerRegistryProtectionRule`](#containerregistryprotectionrule) | Container registry protection rule after mutation. |
+| <a id="mutationcreatecontainerregistryprotectionruleerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+
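+For example, with the `container_registry_protected_containers` feature flag enabled, a request might look like the following. The project path and path pattern are placeholder values; the access levels use example values from the descriptions above:
+
+```graphql
+mutation {
+  createContainerRegistryProtectionRule(
+    input: {
+      projectPath: "my-group/my-project"                          # placeholder project path
+      containerPathPattern: "my-group/my-project/my-container-*"  # placeholder pattern
+      pushProtectedUpToAccessLevel: DEVELOPER
+      deleteProtectedUpToAccessLevel: MAINTAINER
+    }
+  ) {
+    containerRegistryProtectionRule {
+      id
+    }
+    errors
+  }
+}
+```
+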
### `Mutation.createCustomEmoji`
WARNING:
@@ -2990,6 +3122,31 @@ Input type: `DeleteAnnotationInput`
| <a id="mutationdeleteannotationclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
| <a id="mutationdeleteannotationerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+### `Mutation.deletePackagesProtectionRule`
+
+Deletes a protection rule for packages. Available only when feature flag `packages_protected_packages` is enabled.
+
+WARNING:
+**Introduced** in 16.6.
+This feature is an Experiment. It can be changed or removed at any time.
+
+Input type: `DeletePackagesProtectionRuleInput`
+
+#### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationdeletepackagesprotectionruleclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationdeletepackagesprotectionruleid"></a>`id` | [`PackagesProtectionRuleID!`](#packagesprotectionruleid) | Global ID of the package protection rule to delete. |
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationdeletepackagesprotectionruleclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationdeletepackagesprotectionruleerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+| <a id="mutationdeletepackagesprotectionrulepackageprotectionrule"></a>`packageProtectionRule` | [`PackagesProtectionRule`](#packagesprotectionrule) | Packages protection rule that was deleted successfully. |
+
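+For example, a request might look like the following. The protection rule global ID is a placeholder value:
+
+```graphql
+mutation {
+  deletePackagesProtectionRule(
+    input: { id: "gid://gitlab/Packages::Protection::Rule/1" } # placeholder global ID
+  ) {
+    errors
+  }
+}
+```
+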
### `Mutation.designManagementDelete`
Input type: `DesignManagementDeleteInput`
@@ -3856,7 +4013,7 @@ Input type: `ExternalAuditEventDestinationUpdateInput`
### `Mutation.geoRegistriesBulkUpdate`
-Mutates multiple Geo registries for a given registry class. Does not mutate the registries if `geo_registries_update_mutation` feature flag is disabled.
+Mutates multiple Geo registries for a given registry class.
WARNING:
**Introduced** in 16.4.
@@ -3882,7 +4039,7 @@ Input type: `GeoRegistriesBulkUpdateInput`
### `Mutation.geoRegistriesUpdate`
-Mutates a Geo registry. Does not mutate the registry entry if `geo_registries_update_mutation` feature flag is disabled.
+Mutates a Geo registry.
WARNING:
**Introduced** in 16.1.
@@ -4962,6 +5119,40 @@ Input type: `MarkAsSpamSnippetInput`
| <a id="mutationmarkasspamsnippeterrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
| <a id="mutationmarkasspamsnippetsnippet"></a>`snippet` | [`Snippet`](#snippet) | Snippet after mutation. |
+### `Mutation.memberRoleCreate`
+
+WARNING:
+**Introduced** in 16.5.
+This feature is an Experiment. It can be changed or removed at any time.
+
+Input type: `MemberRoleCreateInput`
+
+#### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationmemberrolecreateadmingroupmember"></a>`adminGroupMember` | [`Boolean`](#boolean) | Permission to admin group members. |
+| <a id="mutationmemberrolecreateadminmergerequest"></a>`adminMergeRequest` | [`Boolean`](#boolean) | Permission to admin merge requests. |
+| <a id="mutationmemberrolecreateadminvulnerability"></a>`adminVulnerability` | [`Boolean`](#boolean) | Permission to admin vulnerability. |
+| <a id="mutationmemberrolecreatearchiveproject"></a>`archiveProject` | [`Boolean`](#boolean) | Permission to archive projects. |
+| <a id="mutationmemberrolecreatebaseaccesslevel"></a>`baseAccessLevel` | [`MemberAccessLevel!`](#memberaccesslevel) | Base access level for the custom role. |
+| <a id="mutationmemberrolecreateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationmemberrolecreatedescription"></a>`description` | [`String`](#string) | Description of the member role. |
+| <a id="mutationmemberrolecreategrouppath"></a>`groupPath` | [`ID!`](#id) | Group the member role to mutate is in. |
+| <a id="mutationmemberrolecreatemanageprojectaccesstokens"></a>`manageProjectAccessTokens` | [`Boolean`](#boolean) | Permission to admin project access tokens. |
+| <a id="mutationmemberrolecreatename"></a>`name` | [`String`](#string) | Name of the member role. |
+| <a id="mutationmemberrolecreatereadcode"></a>`readCode` | [`Boolean`](#boolean) | Permission to read code. |
+| <a id="mutationmemberrolecreatereaddependency"></a>`readDependency` | [`Boolean`](#boolean) | Permission to read dependency. |
+| <a id="mutationmemberrolecreatereadvulnerability"></a>`readVulnerability` | [`Boolean`](#boolean) | Permission to read vulnerability. |
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationmemberrolecreateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationmemberrolecreateerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+| <a id="mutationmemberrolecreatememberrole"></a>`memberRole` | [`MemberRole`](#memberrole) | Updated member role. |
+
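+For example, a request might look like the following. The group path, role name, and selected permissions are placeholder values:
+
+```graphql
+mutation {
+  memberRoleCreate(
+    input: {
+      groupPath: "my-group"      # placeholder group path
+      name: "Custom developer"   # placeholder role name
+      baseAccessLevel: DEVELOPER # example MemberAccessLevel value
+      readCode: true
+      readVulnerability: true
+    }
+  ) {
+    errors
+  }
+}
+```
+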
### `Mutation.memberRoleUpdate`
Input type: `MemberRoleUpdateInput`
@@ -4989,6 +5180,10 @@ Accepts a merge request.
When accepted, the source branch will be scheduled to merge into the target branch, either
immediately if possible, or using one of the automatic merge strategies.
+[In GitLab 16.5](https://gitlab.com/gitlab-org/gitlab/-/issues/421510), the merging happens asynchronously.
+This results in `mergeRequest` and `state` not updating after a mutation request,
+because the merging may not have happened yet.
+
Input type: `MergeRequestAcceptInput`
#### Arguments
@@ -5456,6 +5651,30 @@ Input type: `OncallScheduleUpdateInput`
| <a id="mutationoncallscheduleupdateerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
| <a id="mutationoncallscheduleupdateoncallschedule"></a>`oncallSchedule` | [`IncidentManagementOncallSchedule`](#incidentmanagementoncallschedule) | On-call schedule. |
+### `Mutation.organizationCreate`
+
+WARNING:
+**Introduced** in 16.6.
+This feature is an Experiment. It can be changed or removed at any time.
+
+Input type: `OrganizationCreateInput`
+
+#### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationorganizationcreateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationorganizationcreatename"></a>`name` | [`String!`](#string) | Name for the organization. |
+| <a id="mutationorganizationcreatepath"></a>`path` | [`String!`](#string) | Path for the organization. |
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationorganizationcreateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationorganizationcreateerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+| <a id="mutationorganizationcreateorganization"></a>`organization` | [`Organization`](#organization) | Organization created. |
+
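+For example, a request might look like the following. The organization name and path are placeholder values:
+
+```graphql
+mutation {
+  organizationCreate(
+    input: {
+      name: "My organization" # placeholder name
+      path: "my-organization" # placeholder path
+    }
+  ) {
+    errors
+  }
+}
+```
+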
### `Mutation.pagesMarkOnboardingComplete`
Input type: `PagesMarkOnboardingCompleteInput`
@@ -5839,6 +6058,45 @@ Input type: `ProjectSetLockedInput`
| <a id="mutationprojectsetlockederrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
| <a id="mutationprojectsetlockedproject"></a>`project` | [`Project`](#project) | Project after mutation. |
+### `Mutation.projectSubscriptionCreate`
+
+Input type: `ProjectSubscriptionCreateInput`
+
+#### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationprojectsubscriptioncreateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationprojectsubscriptioncreateprojectpath"></a>`projectPath` | [`String!`](#string) | Full path of the downstream project of the Project Subscription. |
+| <a id="mutationprojectsubscriptioncreateupstreampath"></a>`upstreamPath` | [`String!`](#string) | Full path of the upstream project of the Project Subscription. |
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationprojectsubscriptioncreateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationprojectsubscriptioncreateerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+| <a id="mutationprojectsubscriptioncreatesubscription"></a>`subscription` | [`CiSubscriptionsProject`](#cisubscriptionsproject) | Project Subscription created by the mutation. |
+
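+For example, a request might look like the following. The project paths are placeholder values:
+
+```graphql
+mutation {
+  projectSubscriptionCreate(
+    input: {
+      projectPath: "my-group/downstream-project" # placeholder downstream project path
+      upstreamPath: "my-group/upstream-project"  # placeholder upstream project path
+    }
+  ) {
+    subscription {
+      id
+    }
+    errors
+  }
+}
+```
+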
+### `Mutation.projectSubscriptionDelete`
+
+Input type: `ProjectSubscriptionDeleteInput`
+
+#### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationprojectsubscriptiondeleteclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationprojectsubscriptiondeletesubscriptionid"></a>`subscriptionId` | [`CiSubscriptionsProjectID!`](#cisubscriptionsprojectid) | ID of the subscription to delete. |
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationprojectsubscriptiondeleteclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationprojectsubscriptiondeleteerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+| <a id="mutationprojectsubscriptiondeleteproject"></a>`project` | [`Project`](#project) | Project after mutation. |
+
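+For example, a request might look like the following. The subscription global ID is a placeholder value:
+
+```graphql
+mutation {
+  projectSubscriptionDelete(
+    input: { subscriptionId: "gid://gitlab/Ci::Subscriptions::Project/1" } # placeholder global ID
+  ) {
+    errors
+  }
+}
+```
+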
### `Mutation.projectSyncFork`
WARNING:
@@ -6415,7 +6673,7 @@ Input type: `SecurityPolicyProjectAssignInput`
### `Mutation.securityPolicyProjectCreate`
-Creates and assigns a security policy project for the given project (`full_path`).
+Creates and assigns a security policy project for the given project or group (`full_path`).
Input type: `SecurityPolicyProjectCreateInput`
@@ -7156,8 +7414,8 @@ Input type: `UpdateNamespacePackageSettingsInput`
| <a id="mutationupdatenamespacepackagesettingsmavenpackagerequestsforwarding"></a>`mavenPackageRequestsForwarding` | [`Boolean`](#boolean) | Indicates whether Maven package forwarding is allowed for this namespace. |
| <a id="mutationupdatenamespacepackagesettingsnamespacepath"></a>`namespacePath` | [`ID!`](#id) | Namespace path where the namespace package setting is located. |
| <a id="mutationupdatenamespacepackagesettingsnpmpackagerequestsforwarding"></a>`npmPackageRequestsForwarding` | [`Boolean`](#boolean) | Indicates whether npm package forwarding is allowed for this namespace. |
-| <a id="mutationupdatenamespacepackagesettingsnugetduplicateexceptionregex"></a>`nugetDuplicateExceptionRegex` | [`UntrustedRegexp`](#untrustedregexp) | When nuget_duplicates_allowed is false, you can publish duplicate packages with names that match this regex. Otherwise, this setting has no effect. Error is raised if `nuget_duplicates_option` feature flag is disabled. |
-| <a id="mutationupdatenamespacepackagesettingsnugetduplicatesallowed"></a>`nugetDuplicatesAllowed` | [`Boolean`](#boolean) | Indicates whether duplicate NuGet packages are allowed for this namespace. Error is raised if `nuget_duplicates_option` feature flag is disabled. |
+| <a id="mutationupdatenamespacepackagesettingsnugetduplicateexceptionregex"></a>`nugetDuplicateExceptionRegex` | [`UntrustedRegexp`](#untrustedregexp) | When nuget_duplicates_allowed is false, you can publish duplicate packages with names that match this regex. Otherwise, this setting has no effect. |
+| <a id="mutationupdatenamespacepackagesettingsnugetduplicatesallowed"></a>`nugetDuplicatesAllowed` | [`Boolean`](#boolean) | Indicates whether duplicate NuGet packages are allowed for this namespace. |
| <a id="mutationupdatenamespacepackagesettingspypipackagerequestsforwarding"></a>`pypiPackageRequestsForwarding` | [`Boolean`](#boolean) | Indicates whether PyPI package forwarding is allowed for this namespace. |
#### Fields
@@ -7441,6 +7699,83 @@ Input type: `UserSetNamespaceCommitEmailInput`
| <a id="mutationusersetnamespacecommitemailerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
| <a id="mutationusersetnamespacecommitemailnamespacecommitemail"></a>`namespaceCommitEmail` | [`NamespaceCommitEmail`](#namespacecommitemail) | User namespace commit email after mutation. |
+### `Mutation.valueStreamCreate`
+
+Creates a value stream.
+
+WARNING:
+**Introduced** in 16.6.
+This feature is an Experiment. It can be changed or removed at any time.
+
+Input type: `ValueStreamCreateInput`
+
+#### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationvaluestreamcreateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationvaluestreamcreatename"></a>`name` | [`String!`](#string) | Value stream name. |
+| <a id="mutationvaluestreamcreatenamespacepath"></a>`namespacePath` | [`ID!`](#id) | Full path of the namespace(project or group) the value stream is created in. |
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationvaluestreamcreateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationvaluestreamcreateerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+| <a id="mutationvaluestreamcreatevaluestream"></a>`valueStream` | [`ValueStream`](#valuestream) | Created value stream. |
+
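+For example, a request might look like the following; the namespace path and name are placeholder values. The `valueStreamDestroy` and `valueStreamUpdate` mutations below follow the same input and payload pattern:
+
+```graphql
+mutation {
+  valueStreamCreate(
+    input: {
+      namespacePath: "my-group" # placeholder group or project path
+      name: "Feature delivery"  # placeholder value stream name
+    }
+  ) {
+    errors
+  }
+}
+```
+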
+### `Mutation.valueStreamDestroy`
+
+Destroys a value stream.
+
+WARNING:
+**Introduced** in 16.6.
+This feature is an Experiment. It can be changed or removed at any time.
+
+Input type: `ValueStreamDestroyInput`
+
+#### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationvaluestreamdestroyclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationvaluestreamdestroyid"></a>`id` | [`AnalyticsCycleAnalyticsValueStreamID!`](#analyticscycleanalyticsvaluestreamid) | Global ID of the value stream to destroy. |
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationvaluestreamdestroyclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationvaluestreamdestroyerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+| <a id="mutationvaluestreamdestroyvaluestream"></a>`valueStream` | [`ValueStream`](#valuestream) | Value stream deleted after mutation. |
+
+### `Mutation.valueStreamUpdate`
+
+Updates a value stream.
+
+WARNING:
+**Introduced** in 16.6.
+This feature is an Experiment. It can be changed or removed at any time.
+
+Input type: `ValueStreamUpdateInput`
+
+#### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationvaluestreamupdateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationvaluestreamupdateid"></a>`id` | [`AnalyticsCycleAnalyticsValueStreamID!`](#analyticscycleanalyticsvaluestreamid) | Global ID of the value stream to update. |
+| <a id="mutationvaluestreamupdatename"></a>`name` | [`String!`](#string) | Value stream name. |
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="mutationvaluestreamupdateclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="mutationvaluestreamupdateerrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+| <a id="mutationvaluestreamupdatevaluestream"></a>`valueStream` | [`ValueStream`](#valuestream) | Updated value stream. |
+
### `Mutation.vulnerabilitiesDismiss`
Input type: `VulnerabilitiesDismissInput`
@@ -10749,6 +11084,29 @@ The edge type for [`MemberInterface`](#memberinterface).
| <a id="memberinterfaceedgecursor"></a>`cursor` | [`String!`](#string) | A cursor for use in pagination. |
| <a id="memberinterfaceedgenode"></a>`node` | [`MemberInterface`](#memberinterface) | The item at the end of the edge. |
+#### `MemberRoleConnection`
+
+The connection type for [`MemberRole`](#memberrole).
+
+##### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="memberroleconnectionedges"></a>`edges` | [`[MemberRoleEdge]`](#memberroleedge) | A list of edges. |
+| <a id="memberroleconnectionnodes"></a>`nodes` | [`[MemberRole]`](#memberrole) | A list of nodes. |
+| <a id="memberroleconnectionpageinfo"></a>`pageInfo` | [`PageInfo!`](#pageinfo) | Information to aid in pagination. |
+
+#### `MemberRoleEdge`
+
+The edge type for [`MemberRole`](#memberrole).
+
+##### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="memberroleedgecursor"></a>`cursor` | [`String!`](#string) | A cursor for use in pagination. |
+| <a id="memberroleedgenode"></a>`node` | [`MemberRole`](#memberrole) | The item at the end of the edge. |
+
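+Like other connection types, `MemberRoleConnection` can be traversed through `edges` (with per-item cursors) or the flattened `nodes` list, together with `pageInfo` and the standard pagination arguments. For example, a sketch using the `Group.memberRoles` field documented later on this page; the group path is a placeholder value and the `id` selection assumes `MemberRole` exposes an `id` field:
+
+```graphql
+query {
+  group(fullPath: "my-group") { # placeholder group path
+    memberRoles(first: 10) {
+      edges {
+        cursor
+        node {
+          id # assumes `MemberRole` exposes `id`
+        }
+      }
+      pageInfo {
+        endCursor
+        hasNextPage
+      }
+    }
+  }
+}
+```
+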
#### `MergeAccessLevelConnection`
The connection type for [`MergeAccessLevel`](#mergeaccesslevel).
@@ -11123,6 +11481,29 @@ The edge type for [`OncallParticipantType`](#oncallparticipanttype).
| <a id="oncallparticipanttypeedgecursor"></a>`cursor` | [`String!`](#string) | A cursor for use in pagination. |
| <a id="oncallparticipanttypeedgenode"></a>`node` | [`OncallParticipantType`](#oncallparticipanttype) | The item at the end of the edge. |
+#### `OrganizationConnection`
+
+The connection type for [`Organization`](#organization).
+
+##### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="organizationconnectionedges"></a>`edges` | [`[OrganizationEdge]`](#organizationedge) | A list of edges. |
+| <a id="organizationconnectionnodes"></a>`nodes` | [`[Organization]`](#organization) | A list of nodes. |
+| <a id="organizationconnectionpageinfo"></a>`pageInfo` | [`PageInfo!`](#pageinfo) | Information to aid in pagination. |
+
+#### `OrganizationEdge`
+
+The edge type for [`Organization`](#organization).
+
+##### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="organizationedgecursor"></a>`cursor` | [`String!`](#string) | A cursor for use in pagination. |
+| <a id="organizationedgenode"></a>`node` | [`Organization`](#organization) | The item at the end of the edge. |
+
#### `OrganizationUserConnection`
The connection type for [`OrganizationUser`](#organizationuser).
@@ -11355,6 +11736,29 @@ The edge type for [`PathLock`](#pathlock).
| <a id="pathlockedgecursor"></a>`cursor` | [`String!`](#string) | A cursor for use in pagination. |
| <a id="pathlockedgenode"></a>`node` | [`PathLock`](#pathlock) | The item at the end of the edge. |
+#### `PendingGroupMemberConnection`
+
+The connection type for [`PendingGroupMember`](#pendinggroupmember).
+
+##### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="pendinggroupmemberconnectionedges"></a>`edges` | [`[PendingGroupMemberEdge]`](#pendinggroupmemberedge) | A list of edges. |
+| <a id="pendinggroupmemberconnectionnodes"></a>`nodes` | [`[PendingGroupMember]`](#pendinggroupmember) | A list of nodes. |
+| <a id="pendinggroupmemberconnectionpageinfo"></a>`pageInfo` | [`PageInfo!`](#pageinfo) | Information to aid in pagination. |
+
+#### `PendingGroupMemberEdge`
+
+The edge type for [`PendingGroupMember`](#pendinggroupmember).
+
+##### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="pendinggroupmemberedgecursor"></a>`cursor` | [`String!`](#string) | A cursor for use in pagination. |
+| <a id="pendinggroupmemberedgenode"></a>`node` | [`PendingGroupMember`](#pendinggroupmember) | The item at the end of the edge. |
+
#### `PipelineArtifactRegistryConnection`
The connection type for [`PipelineArtifactRegistry`](#pipelineartifactregistry).
@@ -12931,7 +13335,38 @@ An abuse report.
| Name | Type | Description |
| ---- | ---- | ----------- |
+| <a id="abusereportcommenters"></a>`commenters` | [`UserCoreConnection!`](#usercoreconnection) | All commenters on this noteable. (see [Connections](#connections)) |
+| <a id="abusereportdiscussions"></a>`discussions` | [`DiscussionConnection!`](#discussionconnection) | All discussions on this noteable. (see [Connections](#connections)) |
+| <a id="abusereportid"></a>`id` | [`AbuseReportID!`](#abusereportid) | Global ID of the abuse report. |
| <a id="abusereportlabels"></a>`labels` | [`LabelConnection`](#labelconnection) | Labels of the abuse report. (see [Connections](#connections)) |
+| <a id="abusereportuserpermissions"></a>`userPermissions` | [`AbuseReportPermissions!`](#abusereportpermissions) | Permissions for the current user on the resource. |
+
+#### Fields with arguments
+
+##### `AbuseReport.notes`
+
+All notes on this noteable.
+
+Returns [`NoteConnection!`](#noteconnection).
+
+This field returns a [connection](#connections). It accepts the
+four standard [pagination arguments](#connection-pagination-arguments):
+`before: String`, `after: String`, `first: Int`, `last: Int`.
+
+###### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="abusereportnotesfilter"></a>`filter` | [`NotesFilterType`](#notesfiltertype) | Type of notes collection: ALL_NOTES, ONLY_COMMENTS, ONLY_ACTIVITY. |
+
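+For example, a sketch that fetches only comment notes. It assumes the report is reachable through a top-level `abuseReport` field (check the `Query` entries in this reference); the global ID is a placeholder value:
+
+```graphql
+query {
+  abuseReport(id: "gid://gitlab/AbuseReport/1") { # assumed root field; placeholder global ID
+    notes(filter: ONLY_COMMENTS, first: 20) {
+      nodes {
+        body
+      }
+      pageInfo {
+        endCursor
+        hasNextPage
+      }
+    }
+  }
+}
+```
+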
+### `AbuseReportPermissions`
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="abusereportpermissionscreatenote"></a>`createNote` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_note` on this resource. |
+| <a id="abusereportpermissionsreadabusereport"></a>`readAbuseReport` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_abuse_report` on this resource. |
### `AccessLevel`
@@ -13046,12 +13481,13 @@ A user with add-on data.
| <a id="addonusernamespace"></a>`namespace` | [`Namespace`](#namespace) | Personal namespace of the user. |
| <a id="addonusernamespacecommitemails"></a>`namespaceCommitEmails` | [`NamespaceCommitEmailConnection`](#namespacecommitemailconnection) | User's custom namespace commit emails. (see [Connections](#connections)) |
| <a id="addonuserorganization"></a>`organization` | [`String`](#string) | Who the user represents or works for. |
+| <a id="addonuserorganizations"></a>`organizations` **{warning-solid}** | [`OrganizationConnection`](#organizationconnection) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Organizations where the user has access. |
| <a id="addonuserpreferencesgitpodpath"></a>`preferencesGitpodPath` | [`String`](#string) | Web path to the Gitpod section within user preferences. |
| <a id="addonuserprofileenablegitpodpath"></a>`profileEnableGitpodPath` | [`String`](#string) | Web path to enable Gitpod for the user. |
| <a id="addonuserprojectmemberships"></a>`projectMemberships` | [`ProjectMemberConnection`](#projectmemberconnection) | Project memberships of the user. (see [Connections](#connections)) |
| <a id="addonuserpronouns"></a>`pronouns` | [`String`](#string) | Pronouns of the user. |
| <a id="addonuserpublicemail"></a>`publicEmail` | [`String`](#string) | User's public email. |
-| <a id="addonusersavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. Will not return saved replies if `saved_replies` feature flag is disabled. (see [Connections](#connections)) |
+| <a id="addonusersavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. (see [Connections](#connections)) |
| <a id="addonuserstate"></a>`state` | [`UserState!`](#userstate) | State of the user. |
| <a id="addonuserstatus"></a>`status` | [`UserStatus`](#userstatus) | User status. |
| <a id="addonusertwitter"></a>`twitter` | [`String`](#string) | Twitter username of the user. |
@@ -13210,7 +13646,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
##### `AddOnUser.savedReply`
-Saved reply authored by the user. Will not return saved reply if `saved_replies` feature flag is disabled.
+Saved reply authored by the user.
Returns [`SavedReply`](#savedreply).
@@ -13589,11 +14025,24 @@ Describes a rule for who can approve merge requests.
| <a id="approvalruleinvalid"></a>`invalid` | [`Boolean`](#boolean) | Indicates if the rule is invalid and cannot be approved. |
| <a id="approvalrulename"></a>`name` | [`String`](#string) | Name of the rule. |
| <a id="approvalruleoverridden"></a>`overridden` | [`Boolean`](#boolean) | Indicates if the rule was overridden for the merge request. |
+| <a id="approvalrulescanresultpolicies"></a>`scanResultPolicies` | [`[ApprovalScanResultPolicy!]`](#approvalscanresultpolicy) | List of scan result policies associated with the rule. |
| <a id="approvalrulesection"></a>`section` | [`String`](#string) | Named section of the Code Owners file that the rule applies to. |
| <a id="approvalrulesourcerule"></a>`sourceRule` | [`ApprovalRule`](#approvalrule) | Source rule used to create the rule. |
| <a id="approvalruletype"></a>`type` | [`ApprovalRuleType`](#approvalruletype) | Type of the rule. |
| <a id="approvalruleusers"></a>`users` | [`UserCoreConnection`](#usercoreconnection) | List of users added as approvers for the rule. (see [Connections](#connections)) |
+### `ApprovalScanResultPolicy`
+
+Represents the scan result policy.
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="approvalscanresultpolicyapprovalsrequired"></a>`approvalsRequired` | [`Int!`](#int) | Represents the required approvals defined in the policy. |
+| <a id="approvalscanresultpolicyname"></a>`name` | [`String!`](#string) | Represents the name of the policy. |
+| <a id="approvalscanresultpolicyreporttype"></a>`reportType` | [`ApprovalReportType!`](#approvalreporttype) | Represents the report_type of the approval rule. |
+
### `AssetType`
Represents a vulnerability asset type.
@@ -13623,6 +14072,18 @@ Represents the YAML definitions for audit events defined in `ee/config/audit_eve
| <a id="auditeventdefinitionsavedtodatabase"></a>`savedToDatabase` | [`Boolean!`](#boolean) | Indicates if the event is saved to PostgreSQL database. |
| <a id="auditeventdefinitionstreamed"></a>`streamed` | [`Boolean!`](#boolean) | Indicates if the event is streamed to an external destination. |
+### `AuditEventStreamingHTTPNamespaceFilter`
+
+Represents a subgroup or project filter that belongs to an external audit event streaming destination.
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="auditeventstreaminghttpnamespacefilterexternalauditeventdestination"></a>`externalAuditEventDestination` | [`ExternalAuditEventDestination!`](#externalauditeventdestination) | Destination to which the filter belongs. |
+| <a id="auditeventstreaminghttpnamespacefilterid"></a>`id` | [`ID!`](#id) | ID of the filter. |
+| <a id="auditeventstreaminghttpnamespacefilternamespace"></a>`namespace` | [`Namespace!`](#namespace) | Group or project namespace the filter belongs to. |
+
### `AuditEventStreamingHeader`
Represents a HTTP header key/value that belongs to an audit streaming destination.
@@ -13636,6 +14097,18 @@ Represents a HTTP header key/value that belongs to an audit streaming destinatio
| <a id="auditeventstreamingheaderkey"></a>`key` | [`String!`](#string) | Key of the header. |
| <a id="auditeventstreamingheadervalue"></a>`value` | [`String!`](#string) | Value of the header. |
+### `AuditEventsStreamingHTTPNamespaceFiltersAddPayload`
+
+Autogenerated return type of AuditEventsStreamingHTTPNamespaceFiltersAdd.
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="auditeventsstreaminghttpnamespacefiltersaddpayloadclientmutationid"></a>`clientMutationId` | [`String`](#string) | A unique identifier for the client performing the mutation. |
+| <a id="auditeventsstreaminghttpnamespacefiltersaddpayloaderrors"></a>`errors` | [`[String!]!`](#string) | Errors encountered during execution of the mutation. |
+| <a id="auditeventsstreaminghttpnamespacefiltersaddpayloadnamespacefilter"></a>`namespaceFilter` | [`AuditEventStreamingHTTPNamespaceFilter`](#auditeventstreaminghttpnamespacefilter) | Namespace filter created. |
+
### `AuditEventsStreamingInstanceHeader`
Represents a HTTP header key/value that belongs to an instance level audit streaming destination.
@@ -13672,18 +14145,20 @@ Core representation of a GitLab user.
| <a id="autocompleteduserid"></a>`id` | [`ID!`](#id) | ID of the user. |
| <a id="autocompleteduseride"></a>`ide` | [`Ide`](#ide) | IDE settings. |
| <a id="autocompleteduserjobtitle"></a>`jobTitle` | [`String`](#string) | Job title of the user. |
+| <a id="autocompleteduserlastactivityon"></a>`lastActivityOn` | [`Date`](#date) | Date the user last performed any actions. |
| <a id="autocompleteduserlinkedin"></a>`linkedin` | [`String`](#string) | LinkedIn profile name of the user. |
| <a id="autocompleteduserlocation"></a>`location` | [`String`](#string) | Location of the user. |
| <a id="autocompletedusername"></a>`name` | [`String!`](#string) | Human-readable name of the user. Returns `****` if the user is a project bot and the requester does not have permission to view the project. |
| <a id="autocompletedusernamespace"></a>`namespace` | [`Namespace`](#namespace) | Personal namespace of the user. |
| <a id="autocompletedusernamespacecommitemails"></a>`namespaceCommitEmails` | [`NamespaceCommitEmailConnection`](#namespacecommitemailconnection) | User's custom namespace commit emails. (see [Connections](#connections)) |
| <a id="autocompleteduserorganization"></a>`organization` | [`String`](#string) | Who the user represents or works for. |
+| <a id="autocompleteduserorganizations"></a>`organizations` **{warning-solid}** | [`OrganizationConnection`](#organizationconnection) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Organizations where the user has access. |
| <a id="autocompleteduserpreferencesgitpodpath"></a>`preferencesGitpodPath` | [`String`](#string) | Web path to the Gitpod section within user preferences. |
| <a id="autocompleteduserprofileenablegitpodpath"></a>`profileEnableGitpodPath` | [`String`](#string) | Web path to enable Gitpod for the user. |
| <a id="autocompleteduserprojectmemberships"></a>`projectMemberships` | [`ProjectMemberConnection`](#projectmemberconnection) | Project memberships of the user. (see [Connections](#connections)) |
| <a id="autocompleteduserpronouns"></a>`pronouns` | [`String`](#string) | Pronouns of the user. |
| <a id="autocompleteduserpublicemail"></a>`publicEmail` | [`String`](#string) | User's public email. |
-| <a id="autocompletedusersavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. Will not return saved replies if `saved_replies` feature flag is disabled. (see [Connections](#connections)) |
+| <a id="autocompletedusersavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. (see [Connections](#connections)) |
| <a id="autocompleteduserstate"></a>`state` | [`UserState!`](#userstate) | State of the user. |
| <a id="autocompleteduserstatus"></a>`status` | [`UserStatus`](#userstatus) | User status. |
| <a id="autocompletedusertwitter"></a>`twitter` | [`String`](#string) | Twitter username of the user. |
@@ -13834,7 +14309,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
##### `AutocompletedUser.savedReply`
-Saved reply authored by the user. Will not return saved reply if `saved_replies` feature flag is disabled.
+Saved reply authored by the user.
Returns [`SavedReply`](#savedreply).
@@ -14939,6 +15414,17 @@ Represents the Geo replication and verification state of a ci_secure_file.
| <a id="cistagename"></a>`name` | [`String`](#string) | Name of the stage. |
| <a id="cistagestatus"></a>`status` | [`String`](#string) | Status of the pipeline stage. |
+### `CiSubscriptionsProject`
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="cisubscriptionsprojectauthor"></a>`author` | [`UserCore`](#usercore) | Author of the subscription. |
+| <a id="cisubscriptionsprojectdownstreamproject"></a>`downstreamProject` | [`Project`](#project) | Downstream project of the subscription. |
+| <a id="cisubscriptionsprojectid"></a>`id` | [`CiSubscriptionsProjectID`](#cisubscriptionsprojectid) | Global ID of the subscription. |
+| <a id="cisubscriptionsprojectupstreamproject"></a>`upstreamProject` | [`Project`](#project) | Upstream project of the subscription. |
+
### `CiTemplate`
GitLab CI/CD configuration template.
@@ -15079,6 +15565,7 @@ Represents reports comparison for code quality.
| Name | Type | Description |
| ---- | ---- | ----------- |
| <a id="codequalityreportscomparerreport"></a>`report` | [`CodequalityReportsComparerReport`](#codequalityreportscomparerreport) | Compared codequality report. |
+| <a id="codequalityreportscomparerstatus"></a>`status` | [`CodequalityReportsComparerReportGenerationStatus`](#codequalityreportscomparerreportgenerationstatus) | Compared codequality report generation status. |
### `CodequalityReportsComparerReport`
@@ -15091,7 +15578,7 @@ Represents compared code quality report.
| <a id="codequalityreportscomparerreportexistingerrors"></a>`existingErrors` | [`[CodequalityReportsComparerReportDegradation!]`](#codequalityreportscomparerreportdegradation) | All code quality degradations. |
| <a id="codequalityreportscomparerreportnewerrors"></a>`newErrors` | [`[CodequalityReportsComparerReportDegradation!]!`](#codequalityreportscomparerreportdegradation) | New code quality degradations. |
| <a id="codequalityreportscomparerreportresolvederrors"></a>`resolvedErrors` | [`[CodequalityReportsComparerReportDegradation!]`](#codequalityreportscomparerreportdegradation) | Resolved code quality degradations. |
-| <a id="codequalityreportscomparerreportstatus"></a>`status` | [`CodequalityReportsComparerReportStatus!`](#codequalityreportscomparerreportstatus) | Status of report. |
+| <a id="codequalityreportscomparerreportstatus"></a>`status` | [`CodequalityReportsComparerStatus!`](#codequalityreportscomparerstatus) | Status of report. |
| <a id="codequalityreportscomparerreportsummary"></a>`summary` | [`CodequalityReportsComparerReportSummary!`](#codequalityreportscomparerreportsummary) | Codequality report summary. |
### `CodequalityReportsComparerReportDegradation`
@@ -15412,6 +15899,19 @@ A tag expiration policy designed to keep only the images that matter most.
| <a id="containerexpirationpolicyolderthan"></a>`olderThan` | [`ContainerExpirationPolicyOlderThanEnum`](#containerexpirationpolicyolderthanenum) | Tags older that this will expire. |
| <a id="containerexpirationpolicyupdatedat"></a>`updatedAt` | [`Time!`](#time) | Timestamp of when the container expiration policy was updated. |
+### `ContainerRegistryProtectionRule`
+
+A container registry protection rule designed to prevent users with a certain access level or lower from altering the container registry.
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="containerregistryprotectionrulecontainerpathpattern"></a>`containerPathPattern` | [`String!`](#string) | Container repository path pattern protected by the protection rule. For example `@my-scope/my-container-*`. Wildcard character `*` allowed. |
+| <a id="containerregistryprotectionruledeleteprotecteduptoaccesslevel"></a>`deleteProtectedUpToAccessLevel` | [`ContainerRegistryProtectionRuleAccessLevel!`](#containerregistryprotectionruleaccesslevel) | Max GitLab access level to prevent from pushing container images to the container registry. For example `DEVELOPER`, `MAINTAINER`, `OWNER`. |
+| <a id="containerregistryprotectionruleid"></a>`id` | [`ContainerRegistryProtectionRuleID!`](#containerregistryprotectionruleid) | ID of the container registry protection rule. |
+| <a id="containerregistryprotectionrulepushprotecteduptoaccesslevel"></a>`pushProtectedUpToAccessLevel` | [`ContainerRegistryProtectionRuleAccessLevel!`](#containerregistryprotectionruleaccesslevel) | Max GitLab access level to prevent from pushing container images to the container registry. For example `DEVELOPER`, `MAINTAINER`, `OWNER`. |
+
### `ContainerRepository`
A container repository.
@@ -15595,9 +16095,9 @@ A custom emoji uploaded by user.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="customemojipermissionscreatecustomemoji"></a>`createCustomEmoji` | [`Boolean!`](#boolean) | Indicates the user can perform `create_custom_emoji` on this resource. |
-| <a id="customemojipermissionsdeletecustomemoji"></a>`deleteCustomEmoji` | [`Boolean!`](#boolean) | Indicates the user can perform `delete_custom_emoji` on this resource. |
-| <a id="customemojipermissionsreadcustomemoji"></a>`readCustomEmoji` | [`Boolean!`](#boolean) | Indicates the user can perform `read_custom_emoji` on this resource. |
+| <a id="customemojipermissionscreatecustomemoji"></a>`createCustomEmoji` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_custom_emoji` on this resource. |
+| <a id="customemojipermissionsdeletecustomemoji"></a>`deleteCustomEmoji` | [`Boolean!`](#boolean) | If `true`, the user can perform `delete_custom_emoji` on this resource. |
+| <a id="customemojipermissionsreadcustomemoji"></a>`readCustomEmoji` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_custom_emoji` on this resource. |
### `CustomerRelationsContact`
@@ -15834,7 +16334,7 @@ Check permissions for the current user on site profile.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="dastsiteprofilepermissionscreateondemanddastscan"></a>`createOnDemandDastScan` | [`Boolean!`](#boolean) | Indicates the user can perform `create_on_demand_dast_scan` on this resource. |
+| <a id="dastsiteprofilepermissionscreateondemanddastscan"></a>`createOnDemandDastScan` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_on_demand_dast_scan` on this resource. |
### `DastSiteValidation`
@@ -16059,8 +16559,8 @@ Approval summary of the deployment.
| Name | Type | Description |
| ---- | ---- | ----------- |
| <a id="deploymentpermissionsapprovedeployment"></a>`approveDeployment` | [`Boolean!`](#boolean) | Indicates the user can perform `approve_deployment` on this resource. This field can only be resolved for one environment in any single request. |
-| <a id="deploymentpermissionsdestroydeployment"></a>`destroyDeployment` | [`Boolean!`](#boolean) | Indicates the user can perform `destroy_deployment` on this resource. |
-| <a id="deploymentpermissionsupdatedeployment"></a>`updateDeployment` | [`Boolean!`](#boolean) | Indicates the user can perform `update_deployment` on this resource. |
+| <a id="deploymentpermissionsdestroydeployment"></a>`destroyDeployment` | [`Boolean!`](#boolean) | If `true`, the user can perform `destroy_deployment` on this resource. |
+| <a id="deploymentpermissionsupdatedeployment"></a>`updateDeployment` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_deployment` on this resource. |
### `DeploymentTag`
@@ -16394,6 +16894,22 @@ four standard [pagination arguments](#connection-pagination-arguments):
| <a id="designversiondesignsatversionfilenames"></a>`filenames` | [`[String!]`](#string) | Filters designs by their filename. |
| <a id="designversiondesignsatversionids"></a>`ids` | [`[DesignManagementDesignID!]`](#designmanagementdesignid) | Filters designs by their ID. |
+### `DetailedImportStatus`
+
+Details of the import status of a project.
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="detailedimportstatusid"></a>`id` | [`ProjectImportStateID`](#projectimportstateid) | ID of the import state. |
+| <a id="detailedimportstatuslasterror"></a>`lastError` | [`String`](#string) | Last error of the import. |
+| <a id="detailedimportstatuslastsuccessfulupdateat"></a>`lastSuccessfulUpdateAt` | [`Time`](#time) | Time of the last successful update. |
+| <a id="detailedimportstatuslastupdateat"></a>`lastUpdateAt` | [`Time`](#time) | Time of the last update. |
+| <a id="detailedimportstatuslastupdatestartedat"></a>`lastUpdateStartedAt` | [`Time`](#time) | Time of the start of the last update. |
+| <a id="detailedimportstatusstatus"></a>`status` | [`String`](#string) | Current status of the import. |
+| <a id="detailedimportstatusurl"></a>`url` | [`String`](#string) | Import url. |
+
### `DetailedStatus`
#### Fields
@@ -16692,9 +17208,9 @@ Returns [`Deployment`](#deployment).
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="environmentpermissionsdestroyenvironment"></a>`destroyEnvironment` | [`Boolean!`](#boolean) | Indicates the user can perform `destroy_environment` on this resource. |
-| <a id="environmentpermissionsstopenvironment"></a>`stopEnvironment` | [`Boolean!`](#boolean) | Indicates the user can perform `stop_environment` on this resource. |
-| <a id="environmentpermissionsupdateenvironment"></a>`updateEnvironment` | [`Boolean!`](#boolean) | Indicates the user can perform `update_environment` on this resource. |
+| <a id="environmentpermissionsdestroyenvironment"></a>`destroyEnvironment` | [`Boolean!`](#boolean) | If `true`, the user can perform `destroy_environment` on this resource. |
+| <a id="environmentpermissionsstopenvironment"></a>`stopEnvironment` | [`Boolean!`](#boolean) | If `true`, the user can perform `stop_environment` on this resource. |
+| <a id="environmentpermissionsupdateenvironment"></a>`updateEnvironment` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_environment` on this resource. |
### `Epic`
@@ -16936,8 +17452,10 @@ Total weight of open and closed descendant issues.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="epicdescendantweightsclosedissues"></a>`closedIssues` | [`Int`](#int) | Total weight of completed (closed) issues in this epic, including epic descendants. |
-| <a id="epicdescendantweightsopenedissues"></a>`openedIssues` | [`Int`](#int) | Total weight of opened issues in this epic, including epic descendants. |
+| <a id="epicdescendantweightsclosedissues"></a>`closedIssues` **{warning-solid}** | [`Int`](#int) | **Deprecated** in 16.6. Use `closedIssuesTotal`. |
+| <a id="epicdescendantweightsclosedissuestotal"></a>`closedIssuesTotal` | [`BigInt`](#bigint) | Total weight of completed (closed) issues in this epic, including epic descendants, encoded as a string. |
+| <a id="epicdescendantweightsopenedissues"></a>`openedIssues` **{warning-solid}** | [`Int`](#int) | **Deprecated** in 16.6. Use `OpenedIssuesTotal`. |
+| <a id="epicdescendantweightsopenedissuestotal"></a>`openedIssuesTotal` | [`BigInt`](#bigint) | Total weight of opened issues in this epic, including epic descendants, encoded as a string. |
### `EpicHealthStatus`
@@ -17167,14 +17685,14 @@ Check permissions for the current user on an epic.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="epicpermissionsadminepic"></a>`adminEpic` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_epic` on this resource. |
-| <a id="epicpermissionsawardemoji"></a>`awardEmoji` | [`Boolean!`](#boolean) | Indicates the user can perform `award_emoji` on this resource. |
-| <a id="epicpermissionscreateepic"></a>`createEpic` | [`Boolean!`](#boolean) | Indicates the user can perform `create_epic` on this resource. |
-| <a id="epicpermissionscreatenote"></a>`createNote` | [`Boolean!`](#boolean) | Indicates the user can perform `create_note` on this resource. |
-| <a id="epicpermissionsdestroyepic"></a>`destroyEpic` | [`Boolean!`](#boolean) | Indicates the user can perform `destroy_epic` on this resource. |
-| <a id="epicpermissionsreadepic"></a>`readEpic` | [`Boolean!`](#boolean) | Indicates the user can perform `read_epic` on this resource. |
-| <a id="epicpermissionsreadepiciid"></a>`readEpicIid` | [`Boolean!`](#boolean) | Indicates the user can perform `read_epic_iid` on this resource. |
-| <a id="epicpermissionsupdateepic"></a>`updateEpic` | [`Boolean!`](#boolean) | Indicates the user can perform `update_epic` on this resource. |
+| <a id="epicpermissionsadminepic"></a>`adminEpic` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_epic` on this resource. |
+| <a id="epicpermissionsawardemoji"></a>`awardEmoji` | [`Boolean!`](#boolean) | If `true`, the user can perform `award_emoji` on this resource. |
+| <a id="epicpermissionscreateepic"></a>`createEpic` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_epic` on this resource. |
+| <a id="epicpermissionscreatenote"></a>`createNote` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_note` on this resource. |
+| <a id="epicpermissionsdestroyepic"></a>`destroyEpic` | [`Boolean!`](#boolean) | If `true`, the user can perform `destroy_epic` on this resource. |
+| <a id="epicpermissionsreadepic"></a>`readEpic` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_epic` on this resource. |
+| <a id="epicpermissionsreadepiciid"></a>`readEpicIid` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_epic_iid` on this resource. |
+| <a id="epicpermissionsupdateepic"></a>`updateEpic` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_epic` on this resource. |
### `EscalationPolicyType`
@@ -17250,6 +17768,7 @@ Represents an external resource to send audit events to.
| <a id="externalauditeventdestinationheaders"></a>`headers` | [`AuditEventStreamingHeaderConnection!`](#auditeventstreamingheaderconnection) | List of additional HTTP headers sent with each event. (see [Connections](#connections)) |
| <a id="externalauditeventdestinationid"></a>`id` | [`ID!`](#id) | ID of the destination. |
| <a id="externalauditeventdestinationname"></a>`name` | [`String!`](#string) | Name of the external destination to send audit events to. |
+| <a id="externalauditeventdestinationnamespacefilter"></a>`namespaceFilter` | [`AuditEventStreamingHTTPNamespaceFilter`](#auditeventstreaminghttpnamespacefilter) | List of subgroup or project filters for the destination. |
| <a id="externalauditeventdestinationverificationtoken"></a>`verificationToken` | [`String!`](#string) | Verification token to validate source of event. |
### `ExternalIssue`
@@ -17782,6 +18301,7 @@ GPG signature for a signed commit.
| <a id="grouppackagesettings"></a>`packageSettings` | [`PackageSettings`](#packagesettings) | Package settings for the namespace. |
| <a id="groupparent"></a>`parent` | [`Group`](#group) | Parent group. |
| <a id="grouppath"></a>`path` | [`String!`](#string) | Path of the namespace. |
+| <a id="grouppendingmembers"></a>`pendingMembers` **{warning-solid}** | [`PendingGroupMemberConnection`](#pendinggroupmemberconnection) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. A pending membership of a user within this group. |
| <a id="groupprojectcreationlevel"></a>`projectCreationLevel` | [`String`](#string) | Permission level required to create projects in the group. |
| <a id="grouprecentissueboards"></a>`recentIssueBoards` | [`BoardConnection`](#boardconnection) | List of recently visited boards of the group. Maximum size is 4. (see [Connections](#connections)) |
| <a id="grouprepositorysizeexcessprojectcount"></a>`repositorySizeExcessProjectCount` | [`Int!`](#int) | Number of projects in the root namespace where the repository size exceeds the limit. This only applies to namespaces under Project limit enforcement. |
@@ -18394,6 +18914,26 @@ four standard [pagination arguments](#connection-pagination-arguments):
| <a id="grouplabelsonlygrouplabels"></a>`onlyGroupLabels` | [`Boolean`](#boolean) | Include only group level labels. |
| <a id="grouplabelssearchterm"></a>`searchTerm` | [`String`](#string) | Search term to find labels with. |
+##### `Group.memberRoles`
+
+Member roles available for the group.
+
+WARNING:
+**Introduced** in 16.5.
+This feature is an Experiment. It can be changed or removed at any time.
+
+Returns [`MemberRoleConnection`](#memberroleconnection).
+
+This field returns a [connection](#connections). It accepts the
+four standard [pagination arguments](#connection-pagination-arguments):
+`before: String`, `after: String`, `first: Int`, `last: Int`.
+
+###### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="groupmemberrolesid"></a>`id` | [`MemberRoleID`](#memberroleid) | Global ID of the member role to look up. |
+
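+For example, a sketch that looks up a single member role by global ID. The group path and member role ID are placeholder values, and the `id` selection assumes `MemberRole` exposes an `id` field:
+
+```graphql
+query {
+  group(fullPath: "my-group") {                    # placeholder group path
+    memberRoles(id: "gid://gitlab/MemberRole/1") { # placeholder member role global ID
+      nodes {
+        id
+      }
+    }
+  }
+}
+```
+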
##### `Group.mergeRequestViolations`
Compliance violations reported on merge requests merged within the group.
@@ -18519,6 +19059,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
| <a id="grouppackagesincludeversionless"></a>`includeVersionless` | [`Boolean`](#boolean) | Include versionless packages. |
| <a id="grouppackagespackagename"></a>`packageName` | [`String`](#string) | Search a package by name. |
| <a id="grouppackagespackagetype"></a>`packageType` | [`PackageTypeEnum`](#packagetypeenum) | Filter a package by type. |
+| <a id="grouppackagespackageversion"></a>`packageVersion` | [`String`](#string) | Filter a package by version. If used in combination with `include_versionless`, then no versionless packages are returned. |
| <a id="grouppackagessort"></a>`sort` | [`PackageGroupSort`](#packagegroupsort) | Sort packages by this criteria. |
| <a id="grouppackagesstatus"></a>`status` | [`PackageStatus`](#packagestatus) | Filter a package by status. |
@@ -18595,6 +19136,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
| Name | Type | Description |
| ---- | ---- | ----------- |
| <a id="grouprunnersactive"></a>`active` **{warning-solid}** | [`Boolean`](#boolean) | **Deprecated** in 14.8. This was renamed. Use: `paused`. |
+| <a id="grouprunnerscreatorid"></a>`creatorId` | [`UserID`](#userid) | Filter runners by creator ID. |
| <a id="grouprunnersmembership"></a>`membership` | [`CiRunnerMembershipFilter`](#cirunnermembershipfilter) | Control which runners to include in the results. |
| <a id="grouprunnerspaused"></a>`paused` | [`Boolean`](#boolean) | Filter runners by `paused` (true) or `active` (false) status. |
| <a id="grouprunnerssearch"></a>`search` | [`String`](#string) | Filter by full token or partial text in description field. |
@@ -18603,6 +19145,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
| <a id="grouprunnerstaglist"></a>`tagList` | [`[String!]`](#string) | Filter by tags associated with the runner (comma-separated or array). |
| <a id="grouprunnerstype"></a>`type` | [`CiRunnerType`](#cirunnertype) | Filter runners by type. |
| <a id="grouprunnersupgradestatus"></a>`upgradeStatus` | [`CiRunnerUpgradeStatus`](#cirunnerupgradestatus) | Filter by upgrade status. |
+| <a id="grouprunnersversionprefix"></a>`versionPrefix` **{warning-solid}** | [`String`](#string) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Filter runners by version. Runners that contain runner managers with the version at the start of the search term are returned. For example, the search term '14.' returns runner managers with versions '14.11.1' and '14.2.3'. |
##### `Group.scanExecutionPolicies`
@@ -18842,9 +19385,9 @@ Represents a Group Membership.
| <a id="groupmembercreatedat"></a>`createdAt` | [`Time`](#time) | Date and time the membership was created. |
| <a id="groupmembercreatedby"></a>`createdBy` | [`UserCore`](#usercore) | User that authorized membership. |
| <a id="groupmemberexpiresat"></a>`expiresAt` | [`Time`](#time) | Date and time the membership expires. |
-| <a id="groupmembergroup"></a>`group` | [`Group`](#group) | Group that a User is a member of. |
+| <a id="groupmembergroup"></a>`group` | [`Group`](#group) | Group that a user is a member of. |
| <a id="groupmemberid"></a>`id` | [`ID!`](#id) | ID of the member. |
-| <a id="groupmembernotificationemail"></a>`notificationEmail` | [`String`](#string) | Group notification email for User. Only available for admins. |
+| <a id="groupmembernotificationemail"></a>`notificationEmail` | [`String`](#string) | Group notification email for user. Only available for admins. |
| <a id="groupmemberupdatedat"></a>`updatedAt` | [`Time`](#time) | Date and time the membership was last updated. |
| <a id="groupmemberuser"></a>`user` | [`UserCore`](#usercore) | User that is associated with the member object. |
| <a id="groupmemberuserpermissions"></a>`userPermissions` | [`GroupPermissions!`](#grouppermissions) | Permissions for the current user on the resource. |
@@ -18869,9 +19412,9 @@ Returns [`UserMergeRequestInteraction`](#usermergerequestinteraction).
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="grouppermissionscreatecustomemoji"></a>`createCustomEmoji` | [`Boolean!`](#boolean) | Indicates the user can perform `create_custom_emoji` on this resource. |
-| <a id="grouppermissionscreateprojects"></a>`createProjects` | [`Boolean!`](#boolean) | Indicates the user can perform `create_projects` on this resource. |
-| <a id="grouppermissionsreadgroup"></a>`readGroup` | [`Boolean!`](#boolean) | Indicates the user can perform `read_group` on this resource. |
+| <a id="grouppermissionscreatecustomemoji"></a>`createCustomEmoji` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_custom_emoji` on this resource. |
+| <a id="grouppermissionscreateprojects"></a>`createProjects` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_projects` on this resource. |
+| <a id="grouppermissionsreadgroup"></a>`readGroup` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_group` on this resource. |
### `GroupReleaseStats`
@@ -19459,15 +20002,15 @@ Check permissions for the current user on a issue.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="issuepermissionsadminissue"></a>`adminIssue` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_issue` on this resource. |
-| <a id="issuepermissionscreatedesign"></a>`createDesign` | [`Boolean!`](#boolean) | Indicates the user can perform `create_design` on this resource. |
-| <a id="issuepermissionscreatenote"></a>`createNote` | [`Boolean!`](#boolean) | Indicates the user can perform `create_note` on this resource. |
-| <a id="issuepermissionsdestroydesign"></a>`destroyDesign` | [`Boolean!`](#boolean) | Indicates the user can perform `destroy_design` on this resource. |
-| <a id="issuepermissionsreaddesign"></a>`readDesign` | [`Boolean!`](#boolean) | Indicates the user can perform `read_design` on this resource. |
-| <a id="issuepermissionsreadissue"></a>`readIssue` | [`Boolean!`](#boolean) | Indicates the user can perform `read_issue` on this resource. |
-| <a id="issuepermissionsreopenissue"></a>`reopenIssue` | [`Boolean!`](#boolean) | Indicates the user can perform `reopen_issue` on this resource. |
-| <a id="issuepermissionsupdatedesign"></a>`updateDesign` | [`Boolean!`](#boolean) | Indicates the user can perform `update_design` on this resource. |
-| <a id="issuepermissionsupdateissue"></a>`updateIssue` | [`Boolean!`](#boolean) | Indicates the user can perform `update_issue` on this resource. |
+| <a id="issuepermissionsadminissue"></a>`adminIssue` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_issue` on this resource. |
+| <a id="issuepermissionscreatedesign"></a>`createDesign` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_design` on this resource. |
+| <a id="issuepermissionscreatenote"></a>`createNote` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_note` on this resource. |
+| <a id="issuepermissionsdestroydesign"></a>`destroyDesign` | [`Boolean!`](#boolean) | If `true`, the user can perform `destroy_design` on this resource. |
+| <a id="issuepermissionsreaddesign"></a>`readDesign` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_design` on this resource. |
+| <a id="issuepermissionsreadissue"></a>`readIssue` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_issue` on this resource. |
+| <a id="issuepermissionsreopenissue"></a>`reopenIssue` | [`Boolean!`](#boolean) | If `true`, the user can perform `reopen_issue` on this resource. |
+| <a id="issuepermissionsupdatedesign"></a>`updateDesign` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_design` on this resource. |
+| <a id="issuepermissionsupdateissue"></a>`updateIssue` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_issue` on this resource. |
### `IssueStatusCountsType`
@@ -19633,9 +20176,10 @@ Represents the Geo replication and verification state of a job_artifact.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="jobpermissionsreadbuild"></a>`readBuild` | [`Boolean!`](#boolean) | Indicates the user can perform `read_build` on this resource. |
-| <a id="jobpermissionsreadjobartifacts"></a>`readJobArtifacts` | [`Boolean!`](#boolean) | Indicates the user can perform `read_job_artifacts` on this resource. |
-| <a id="jobpermissionsupdatebuild"></a>`updateBuild` | [`Boolean!`](#boolean) | Indicates the user can perform `update_build` on this resource. |
+| <a id="jobpermissionscancelbuild"></a>`cancelBuild` | [`Boolean!`](#boolean) | If `true`, the user can perform `cancel_build` on this resource. |
+| <a id="jobpermissionsreadbuild"></a>`readBuild` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_build` on this resource. |
+| <a id="jobpermissionsreadjobartifacts"></a>`readJobArtifacts` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_job_artifacts` on this resource. |
+| <a id="jobpermissionsupdatebuild"></a>`updateBuild` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_build` on this resource. |
### `Kas`
@@ -19741,7 +20285,7 @@ Represents an entry from the Cloud License history.
| <a id="linkedworkitemtypelinkid"></a>`linkId` | [`WorkItemsRelatedWorkItemLinkID!`](#workitemsrelatedworkitemlinkid) | Global ID of the link. |
| <a id="linkedworkitemtypelinktype"></a>`linkType` | [`String!`](#string) | Type of link. |
| <a id="linkedworkitemtypelinkupdatedat"></a>`linkUpdatedAt` | [`Time!`](#time) | Timestamp the link was updated. |
-| <a id="linkedworkitemtypeworkitem"></a>`workItem` | [`WorkItem!`](#workitem) | Linked work item. |
+| <a id="linkedworkitemtypeworkitem"></a>`workItem` | [`WorkItem`](#workitem) | Linked work item. |
### `Location`
@@ -19776,9 +20320,19 @@ Represents a member role.
| Name | Type | Description |
| ---- | ---- | ----------- |
+| <a id="memberroleadmingroupmember"></a>`adminGroupMember` **{warning-solid}** | [`Boolean`](#boolean) | **Introduced** in 16.5. This feature is an Experiment. It can be changed or removed at any time. Permission to admin group members. |
+| <a id="memberroleadminmergerequest"></a>`adminMergeRequest` **{warning-solid}** | [`Boolean`](#boolean) | **Introduced** in 16.5. This feature is an Experiment. It can be changed or removed at any time. Permission to admin merge requests. |
+| <a id="memberroleadminvulnerability"></a>`adminVulnerability` **{warning-solid}** | [`Boolean`](#boolean) | **Introduced** in 16.5. This feature is an Experiment. It can be changed or removed at any time. Permission to admin vulnerability. |
+| <a id="memberrolearchiveproject"></a>`archiveProject` **{warning-solid}** | [`Boolean`](#boolean) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Permission to archive projects. |
+| <a id="memberrolebaseaccesslevel"></a>`baseAccessLevel` **{warning-solid}** | [`AccessLevel!`](#accesslevel) | **Introduced** in 16.5. This feature is an Experiment. It can be changed or removed at any time. Base access level for the custom role. |
| <a id="memberroledescription"></a>`description` | [`String`](#string) | Description of the member role. |
+| <a id="memberroleenabledpermissions"></a>`enabledPermissions` **{warning-solid}** | [`[MemberRolePermission!]`](#memberrolepermission) | **Introduced** in 16.5. This feature is an Experiment. It can be changed or removed at any time. Array of all permissions enabled for the custom role. |
| <a id="memberroleid"></a>`id` | [`MemberRoleID!`](#memberroleid) | ID of the member role. |
+| <a id="memberrolemanageprojectaccesstokens"></a>`manageProjectAccessTokens` **{warning-solid}** | [`Boolean`](#boolean) | **Introduced** in 16.5. This feature is an Experiment. It can be changed or removed at any time. Permission to admin project access tokens. |
| <a id="memberrolename"></a>`name` | [`String!`](#string) | Name of the member role. |
+| <a id="memberrolereadcode"></a>`readCode` **{warning-solid}** | [`Boolean`](#boolean) | **Introduced** in 16.5. This feature is an Experiment. It can be changed or removed at any time. Permission to read code. |
+| <a id="memberrolereaddependency"></a>`readDependency` **{warning-solid}** | [`Boolean`](#boolean) | **Introduced** in 16.5. This feature is an Experiment. It can be changed or removed at any time. Permission to read dependency. |
+| <a id="memberrolereadvulnerability"></a>`readVulnerability` **{warning-solid}** | [`Boolean`](#boolean) | **Introduced** in 16.5. This feature is an Experiment. It can be changed or removed at any time. Permission to read vulnerability. |
### `MergeAccessLevel`
@@ -20030,6 +20584,7 @@ A user assigned to a merge request.
| <a id="mergerequestassigneeid"></a>`id` | [`ID!`](#id) | ID of the user. |
| <a id="mergerequestassigneeide"></a>`ide` | [`Ide`](#ide) | IDE settings. |
| <a id="mergerequestassigneejobtitle"></a>`jobTitle` | [`String`](#string) | Job title of the user. |
+| <a id="mergerequestassigneelastactivityon"></a>`lastActivityOn` | [`Date`](#date) | Date the user last performed any actions. |
| <a id="mergerequestassigneelinkedin"></a>`linkedin` | [`String`](#string) | LinkedIn profile name of the user. |
| <a id="mergerequestassigneelocation"></a>`location` | [`String`](#string) | Location of the user. |
| <a id="mergerequestassigneemergerequestinteraction"></a>`mergeRequestInteraction` | [`UserMergeRequestInteraction`](#usermergerequestinteraction) | Details of this user's interactions with the merge request. |
@@ -20037,12 +20592,13 @@ A user assigned to a merge request.
| <a id="mergerequestassigneenamespace"></a>`namespace` | [`Namespace`](#namespace) | Personal namespace of the user. |
| <a id="mergerequestassigneenamespacecommitemails"></a>`namespaceCommitEmails` | [`NamespaceCommitEmailConnection`](#namespacecommitemailconnection) | User's custom namespace commit emails. (see [Connections](#connections)) |
| <a id="mergerequestassigneeorganization"></a>`organization` | [`String`](#string) | Who the user represents or works for. |
+| <a id="mergerequestassigneeorganizations"></a>`organizations` **{warning-solid}** | [`OrganizationConnection`](#organizationconnection) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Organizations where the user has access. |
| <a id="mergerequestassigneepreferencesgitpodpath"></a>`preferencesGitpodPath` | [`String`](#string) | Web path to the Gitpod section within user preferences. |
| <a id="mergerequestassigneeprofileenablegitpodpath"></a>`profileEnableGitpodPath` | [`String`](#string) | Web path to enable Gitpod for the user. |
| <a id="mergerequestassigneeprojectmemberships"></a>`projectMemberships` | [`ProjectMemberConnection`](#projectmemberconnection) | Project memberships of the user. (see [Connections](#connections)) |
| <a id="mergerequestassigneepronouns"></a>`pronouns` | [`String`](#string) | Pronouns of the user. |
| <a id="mergerequestassigneepublicemail"></a>`publicEmail` | [`String`](#string) | User's public email. |
-| <a id="mergerequestassigneesavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. Will not return saved replies if `saved_replies` feature flag is disabled. (see [Connections](#connections)) |
+| <a id="mergerequestassigneesavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. (see [Connections](#connections)) |
| <a id="mergerequestassigneestate"></a>`state` | [`UserState!`](#userstate) | State of the user. |
| <a id="mergerequestassigneestatus"></a>`status` | [`UserStatus`](#userstatus) | User status. |
| <a id="mergerequestassigneetwitter"></a>`twitter` | [`String`](#string) | Twitter username of the user. |
@@ -20181,7 +20737,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
##### `MergeRequestAssignee.savedReply`
-Saved reply authored by the user. Will not return saved reply if `saved_replies` feature flag is disabled.
+Saved reply authored by the user.
Returns [`SavedReply`](#savedreply).
@@ -20310,6 +20866,7 @@ The author of the merge request.
| <a id="mergerequestauthorid"></a>`id` | [`ID!`](#id) | ID of the user. |
| <a id="mergerequestauthoride"></a>`ide` | [`Ide`](#ide) | IDE settings. |
| <a id="mergerequestauthorjobtitle"></a>`jobTitle` | [`String`](#string) | Job title of the user. |
+| <a id="mergerequestauthorlastactivityon"></a>`lastActivityOn` | [`Date`](#date) | Date the user last performed any actions. |
| <a id="mergerequestauthorlinkedin"></a>`linkedin` | [`String`](#string) | LinkedIn profile name of the user. |
| <a id="mergerequestauthorlocation"></a>`location` | [`String`](#string) | Location of the user. |
| <a id="mergerequestauthormergerequestinteraction"></a>`mergeRequestInteraction` | [`UserMergeRequestInteraction`](#usermergerequestinteraction) | Details of this user's interactions with the merge request. |
@@ -20317,12 +20874,13 @@ The author of the merge request.
| <a id="mergerequestauthornamespace"></a>`namespace` | [`Namespace`](#namespace) | Personal namespace of the user. |
| <a id="mergerequestauthornamespacecommitemails"></a>`namespaceCommitEmails` | [`NamespaceCommitEmailConnection`](#namespacecommitemailconnection) | User's custom namespace commit emails. (see [Connections](#connections)) |
| <a id="mergerequestauthororganization"></a>`organization` | [`String`](#string) | Who the user represents or works for. |
+| <a id="mergerequestauthororganizations"></a>`organizations` **{warning-solid}** | [`OrganizationConnection`](#organizationconnection) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Organizations where the user has access. |
| <a id="mergerequestauthorpreferencesgitpodpath"></a>`preferencesGitpodPath` | [`String`](#string) | Web path to the Gitpod section within user preferences. |
| <a id="mergerequestauthorprofileenablegitpodpath"></a>`profileEnableGitpodPath` | [`String`](#string) | Web path to enable Gitpod for the user. |
| <a id="mergerequestauthorprojectmemberships"></a>`projectMemberships` | [`ProjectMemberConnection`](#projectmemberconnection) | Project memberships of the user. (see [Connections](#connections)) |
| <a id="mergerequestauthorpronouns"></a>`pronouns` | [`String`](#string) | Pronouns of the user. |
| <a id="mergerequestauthorpublicemail"></a>`publicEmail` | [`String`](#string) | User's public email. |
-| <a id="mergerequestauthorsavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. Will not return saved replies if `saved_replies` feature flag is disabled. (see [Connections](#connections)) |
+| <a id="mergerequestauthorsavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. (see [Connections](#connections)) |
| <a id="mergerequestauthorstate"></a>`state` | [`UserState!`](#userstate) | State of the user. |
| <a id="mergerequestauthorstatus"></a>`status` | [`UserStatus`](#userstatus) | User status. |
| <a id="mergerequestauthortwitter"></a>`twitter` | [`String`](#string) | Twitter username of the user. |
@@ -20461,7 +21019,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
##### `MergeRequestAuthor.savedReply`
-Saved reply authored by the user. Will not return saved reply if `saved_replies` feature flag is disabled.
+Saved reply authored by the user.
Returns [`SavedReply`](#savedreply).
@@ -20653,6 +21211,7 @@ A user participating in a merge request.
| <a id="mergerequestparticipantid"></a>`id` | [`ID!`](#id) | ID of the user. |
| <a id="mergerequestparticipantide"></a>`ide` | [`Ide`](#ide) | IDE settings. |
| <a id="mergerequestparticipantjobtitle"></a>`jobTitle` | [`String`](#string) | Job title of the user. |
+| <a id="mergerequestparticipantlastactivityon"></a>`lastActivityOn` | [`Date`](#date) | Date the user last performed any actions. |
| <a id="mergerequestparticipantlinkedin"></a>`linkedin` | [`String`](#string) | LinkedIn profile name of the user. |
| <a id="mergerequestparticipantlocation"></a>`location` | [`String`](#string) | Location of the user. |
| <a id="mergerequestparticipantmergerequestinteraction"></a>`mergeRequestInteraction` | [`UserMergeRequestInteraction`](#usermergerequestinteraction) | Details of this user's interactions with the merge request. |
@@ -20660,12 +21219,13 @@ A user participating in a merge request.
| <a id="mergerequestparticipantnamespace"></a>`namespace` | [`Namespace`](#namespace) | Personal namespace of the user. |
| <a id="mergerequestparticipantnamespacecommitemails"></a>`namespaceCommitEmails` | [`NamespaceCommitEmailConnection`](#namespacecommitemailconnection) | User's custom namespace commit emails. (see [Connections](#connections)) |
| <a id="mergerequestparticipantorganization"></a>`organization` | [`String`](#string) | Who the user represents or works for. |
+| <a id="mergerequestparticipantorganizations"></a>`organizations` **{warning-solid}** | [`OrganizationConnection`](#organizationconnection) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Organizations where the user has access. |
| <a id="mergerequestparticipantpreferencesgitpodpath"></a>`preferencesGitpodPath` | [`String`](#string) | Web path to the Gitpod section within user preferences. |
| <a id="mergerequestparticipantprofileenablegitpodpath"></a>`profileEnableGitpodPath` | [`String`](#string) | Web path to enable Gitpod for the user. |
| <a id="mergerequestparticipantprojectmemberships"></a>`projectMemberships` | [`ProjectMemberConnection`](#projectmemberconnection) | Project memberships of the user. (see [Connections](#connections)) |
| <a id="mergerequestparticipantpronouns"></a>`pronouns` | [`String`](#string) | Pronouns of the user. |
| <a id="mergerequestparticipantpublicemail"></a>`publicEmail` | [`String`](#string) | User's public email. |
-| <a id="mergerequestparticipantsavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. Will not return saved replies if `saved_replies` feature flag is disabled. (see [Connections](#connections)) |
+| <a id="mergerequestparticipantsavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. (see [Connections](#connections)) |
| <a id="mergerequestparticipantstate"></a>`state` | [`UserState!`](#userstate) | State of the user. |
| <a id="mergerequestparticipantstatus"></a>`status` | [`UserStatus`](#userstatus) | User status. |
| <a id="mergerequestparticipanttwitter"></a>`twitter` | [`String`](#string) | Twitter username of the user. |
@@ -20804,7 +21364,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
##### `MergeRequestParticipant.savedReply`
-Saved reply authored by the user. Will not return saved reply if `saved_replies` feature flag is disabled.
+Saved reply authored by the user.
Returns [`SavedReply`](#savedreply).
@@ -20918,16 +21478,16 @@ Check permissions for the current user on a merge request.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="mergerequestpermissionsadminmergerequest"></a>`adminMergeRequest` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_merge_request` on this resource. |
-| <a id="mergerequestpermissionscanapprove"></a>`canApprove` | [`Boolean!`](#boolean) | Indicates the user can perform `can_approve` on this resource. |
-| <a id="mergerequestpermissionscanmerge"></a>`canMerge` | [`Boolean!`](#boolean) | Indicates the user can perform `can_merge` on this resource. |
-| <a id="mergerequestpermissionscherrypickoncurrentmergerequest"></a>`cherryPickOnCurrentMergeRequest` | [`Boolean!`](#boolean) | Indicates the user can perform `cherry_pick_on_current_merge_request` on this resource. |
-| <a id="mergerequestpermissionscreatenote"></a>`createNote` | [`Boolean!`](#boolean) | Indicates the user can perform `create_note` on this resource. |
-| <a id="mergerequestpermissionspushtosourcebranch"></a>`pushToSourceBranch` | [`Boolean!`](#boolean) | Indicates the user can perform `push_to_source_branch` on this resource. |
-| <a id="mergerequestpermissionsreadmergerequest"></a>`readMergeRequest` | [`Boolean!`](#boolean) | Indicates the user can perform `read_merge_request` on this resource. |
-| <a id="mergerequestpermissionsremovesourcebranch"></a>`removeSourceBranch` | [`Boolean!`](#boolean) | Indicates the user can perform `remove_source_branch` on this resource. |
-| <a id="mergerequestpermissionsrevertoncurrentmergerequest"></a>`revertOnCurrentMergeRequest` | [`Boolean!`](#boolean) | Indicates the user can perform `revert_on_current_merge_request` on this resource. |
-| <a id="mergerequestpermissionsupdatemergerequest"></a>`updateMergeRequest` | [`Boolean!`](#boolean) | Indicates the user can perform `update_merge_request` on this resource. |
+| <a id="mergerequestpermissionsadminmergerequest"></a>`adminMergeRequest` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_merge_request` on this resource. |
+| <a id="mergerequestpermissionscanapprove"></a>`canApprove` | [`Boolean!`](#boolean) | If `true`, the user can perform `can_approve` on this resource. |
+| <a id="mergerequestpermissionscanmerge"></a>`canMerge` | [`Boolean!`](#boolean) | If `true`, the user can perform `can_merge` on this resource. |
+| <a id="mergerequestpermissionscherrypickoncurrentmergerequest"></a>`cherryPickOnCurrentMergeRequest` | [`Boolean!`](#boolean) | If `true`, the user can perform `cherry_pick_on_current_merge_request` on this resource. |
+| <a id="mergerequestpermissionscreatenote"></a>`createNote` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_note` on this resource. |
+| <a id="mergerequestpermissionspushtosourcebranch"></a>`pushToSourceBranch` | [`Boolean!`](#boolean) | If `true`, the user can perform `push_to_source_branch` on this resource. |
+| <a id="mergerequestpermissionsreadmergerequest"></a>`readMergeRequest` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_merge_request` on this resource. |
+| <a id="mergerequestpermissionsremovesourcebranch"></a>`removeSourceBranch` | [`Boolean!`](#boolean) | If `true`, the user can perform `remove_source_branch` on this resource. |
+| <a id="mergerequestpermissionsrevertoncurrentmergerequest"></a>`revertOnCurrentMergeRequest` | [`Boolean!`](#boolean) | If `true`, the user can perform `revert_on_current_merge_request` on this resource. |
+| <a id="mergerequestpermissionsupdatemergerequest"></a>`updateMergeRequest` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_merge_request` on this resource. |
### `MergeRequestReviewLlmSummary`
@@ -20969,6 +21529,7 @@ A user assigned to a merge request as a reviewer.
| <a id="mergerequestreviewerid"></a>`id` | [`ID!`](#id) | ID of the user. |
| <a id="mergerequestrevieweride"></a>`ide` | [`Ide`](#ide) | IDE settings. |
| <a id="mergerequestreviewerjobtitle"></a>`jobTitle` | [`String`](#string) | Job title of the user. |
+| <a id="mergerequestreviewerlastactivityon"></a>`lastActivityOn` | [`Date`](#date) | Date the user last performed any actions. |
| <a id="mergerequestreviewerlinkedin"></a>`linkedin` | [`String`](#string) | LinkedIn profile name of the user. |
| <a id="mergerequestreviewerlocation"></a>`location` | [`String`](#string) | Location of the user. |
| <a id="mergerequestreviewermergerequestinteraction"></a>`mergeRequestInteraction` | [`UserMergeRequestInteraction`](#usermergerequestinteraction) | Details of this user's interactions with the merge request. |
@@ -20976,12 +21537,13 @@ A user assigned to a merge request as a reviewer.
| <a id="mergerequestreviewernamespace"></a>`namespace` | [`Namespace`](#namespace) | Personal namespace of the user. |
| <a id="mergerequestreviewernamespacecommitemails"></a>`namespaceCommitEmails` | [`NamespaceCommitEmailConnection`](#namespacecommitemailconnection) | User's custom namespace commit emails. (see [Connections](#connections)) |
| <a id="mergerequestreviewerorganization"></a>`organization` | [`String`](#string) | Who the user represents or works for. |
+| <a id="mergerequestreviewerorganizations"></a>`organizations` **{warning-solid}** | [`OrganizationConnection`](#organizationconnection) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Organizations where the user has access. |
| <a id="mergerequestreviewerpreferencesgitpodpath"></a>`preferencesGitpodPath` | [`String`](#string) | Web path to the Gitpod section within user preferences. |
| <a id="mergerequestreviewerprofileenablegitpodpath"></a>`profileEnableGitpodPath` | [`String`](#string) | Web path to enable Gitpod for the user. |
| <a id="mergerequestreviewerprojectmemberships"></a>`projectMemberships` | [`ProjectMemberConnection`](#projectmemberconnection) | Project memberships of the user. (see [Connections](#connections)) |
| <a id="mergerequestreviewerpronouns"></a>`pronouns` | [`String`](#string) | Pronouns of the user. |
| <a id="mergerequestreviewerpublicemail"></a>`publicEmail` | [`String`](#string) | User's public email. |
-| <a id="mergerequestreviewersavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. Will not return saved replies if `saved_replies` feature flag is disabled. (see [Connections](#connections)) |
+| <a id="mergerequestreviewersavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. (see [Connections](#connections)) |
| <a id="mergerequestreviewerstate"></a>`state` | [`UserState!`](#userstate) | State of the user. |
| <a id="mergerequestreviewerstatus"></a>`status` | [`UserStatus`](#userstatus) | User status. |
| <a id="mergerequestreviewertwitter"></a>`twitter` | [`String`](#string) | Twitter username of the user. |
@@ -21120,7 +21682,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
##### `MergeRequestReviewer.savedReply`
-Saved reply authored by the user. Will not return saved reply if `saved_replies` feature flag is disabled.
+Saved reply authored by the user.
Returns [`SavedReply`](#savedreply).
@@ -21573,12 +22135,12 @@ Represents the network policy.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="notepermissionsadminnote"></a>`adminNote` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_note` on this resource. |
-| <a id="notepermissionsawardemoji"></a>`awardEmoji` | [`Boolean!`](#boolean) | Indicates the user can perform `award_emoji` on this resource. |
-| <a id="notepermissionscreatenote"></a>`createNote` | [`Boolean!`](#boolean) | Indicates the user can perform `create_note` on this resource. |
-| <a id="notepermissionsreadnote"></a>`readNote` | [`Boolean!`](#boolean) | Indicates the user can perform `read_note` on this resource. |
-| <a id="notepermissionsrepositionnote"></a>`repositionNote` | [`Boolean!`](#boolean) | Indicates the user can perform `reposition_note` on this resource. |
-| <a id="notepermissionsresolvenote"></a>`resolveNote` | [`Boolean!`](#boolean) | Indicates the user can perform `resolve_note` on this resource. |
+| <a id="notepermissionsadminnote"></a>`adminNote` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_note` on this resource. |
+| <a id="notepermissionsawardemoji"></a>`awardEmoji` | [`Boolean!`](#boolean) | If `true`, the user can perform `award_emoji` on this resource. |
+| <a id="notepermissionscreatenote"></a>`createNote` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_note` on this resource. |
+| <a id="notepermissionsreadnote"></a>`readNote` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_note` on this resource. |
+| <a id="notepermissionsrepositionnote"></a>`repositionNote` | [`Boolean!`](#boolean) | If `true`, the user can perform `reposition_note` on this resource. |
+| <a id="notepermissionsresolvenote"></a>`resolveNote` | [`Boolean!`](#boolean) | If `true`, the user can perform `resolve_note` on this resource. |
### `NugetDependencyLinkMetadata`
@@ -21638,6 +22200,7 @@ Active period time range for on-call rotation.
| <a id="organizationname"></a>`name` **{warning-solid}** | [`String!`](#string) | **Introduced** in 16.4. This feature is an Experiment. It can be changed or removed at any time. Name of the organization. |
| <a id="organizationorganizationusers"></a>`organizationUsers` **{warning-solid}** | [`OrganizationUserConnection!`](#organizationuserconnection) | **Introduced** in 16.4. This feature is an Experiment. It can be changed or removed at any time. Users with access to the organization. |
| <a id="organizationpath"></a>`path` **{warning-solid}** | [`String!`](#string) | **Introduced** in 16.4. This feature is an Experiment. It can be changed or removed at any time. Path of the organization. |
+| <a id="organizationweburl"></a>`webUrl` **{warning-solid}** | [`String!`](#string) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Web URL of the organization. |
#### Fields with arguments
@@ -21682,10 +22245,21 @@ A user with access to the organization.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="organizationuserbadges"></a>`badges` **{warning-solid}** | [`[String!]`](#string) | **Introduced** in 16.4. This feature is an Experiment. It can be changed or removed at any time. Badges describing the user within the organization. |
+| <a id="organizationuserbadges"></a>`badges` **{warning-solid}** | [`[OrganizationUserBadge!]`](#organizationuserbadge) | **Introduced** in 16.4. This feature is an Experiment. It can be changed or removed at any time. Badges describing the user within the organization. |
| <a id="organizationuserid"></a>`id` **{warning-solid}** | [`ID!`](#id) | **Introduced** in 16.4. This feature is an Experiment. It can be changed or removed at any time. ID of the organization user. |
| <a id="organizationuseruser"></a>`user` **{warning-solid}** | [`UserCore!`](#usercore) | **Introduced** in 16.4. This feature is an Experiment. It can be changed or removed at any time. User that is associated with the organization. |
+### `OrganizationUserBadge`
+
+An organization user badge.
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="organizationuserbadgetext"></a>`text` | [`String!`](#string) | Badge text. |
+| <a id="organizationuserbadgevariant"></a>`variant` | [`String!`](#string) | Badge variant. |
+
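+A minimal sketch of reading badges for organization users, assuming the experimental top-level `organization` query and a placeholder global ID:
+
+```graphql
+query {
+  organization(id: "gid://gitlab/Organizations::Organization/1") {
+    organizationUsers {
+      nodes {
+        user {
+          username
+        }
+        # Badge text and variant as described by OrganizationUserBadge.
+        badges {
+          text
+          variant
+        }
+      }
+    }
+  }
+}
+```
+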
### `Package`
Represents a package with pipelines in the Package Registry.
@@ -21695,7 +22269,7 @@ Represents a package with pipelines in the Package Registry.
| Name | Type | Description |
| ---- | ---- | ----------- |
| <a id="package_links"></a>`_links` | [`PackageLinks!`](#packagelinks) | Map of links to perform actions on the package. |
-| <a id="packagecandestroy"></a>`canDestroy` | [`Boolean!`](#boolean) | Whether the user can destroy the package. |
+| <a id="packagecandestroy"></a>`canDestroy` **{warning-solid}** | [`Boolean!`](#boolean) | **Deprecated** in 16.6. Superseded by `user_permissions` field. See `Types::PermissionTypes::Package` type. |
| <a id="packagecreatedat"></a>`createdAt` | [`Time!`](#time) | Date of creation. |
| <a id="packageid"></a>`id` | [`PackagesPackageID!`](#packagespackageid) | ID of the package. |
| <a id="packagemetadata"></a>`metadata` | [`PackageMetadata`](#packagemetadata) | Package metadata. |
@@ -21707,6 +22281,7 @@ Represents a package with pipelines in the Package Registry.
| <a id="packagestatusmessage"></a>`statusMessage` | [`String`](#string) | Status message. |
| <a id="packagetags"></a>`tags` | [`PackageTagConnection`](#packagetagconnection) | Package tags. (see [Connections](#connections)) |
| <a id="packageupdatedat"></a>`updatedAt` | [`Time!`](#time) | Date of most recent update. |
+| <a id="packageuserpermissions"></a>`userPermissions` | [`PackagePermissions!`](#packagepermissions) | Permissions for the current user on the resource. |
| <a id="packageversion"></a>`version` | [`String`](#string) | Version string. |
### `PackageBase`
@@ -21718,7 +22293,7 @@ Represents a package in the Package Registry.
| Name | Type | Description |
| ---- | ---- | ----------- |
| <a id="packagebase_links"></a>`_links` | [`PackageLinks!`](#packagelinks) | Map of links to perform actions on the package. |
-| <a id="packagebasecandestroy"></a>`canDestroy` | [`Boolean!`](#boolean) | Whether the user can destroy the package. |
+| <a id="packagebasecandestroy"></a>`canDestroy` **{warning-solid}** | [`Boolean!`](#boolean) | **Deprecated** in 16.6. Superseded by `user_permissions` field. See `Types::PermissionTypes::Package` type. |
| <a id="packagebasecreatedat"></a>`createdAt` | [`Time!`](#time) | Date of creation. |
| <a id="packagebaseid"></a>`id` | [`PackagesPackageID!`](#packagespackageid) | ID of the package. |
| <a id="packagebasemetadata"></a>`metadata` | [`PackageMetadata`](#packagemetadata) | Package metadata. |
@@ -21729,6 +22304,7 @@ Represents a package in the Package Registry.
| <a id="packagebasestatusmessage"></a>`statusMessage` | [`String`](#string) | Status message. |
| <a id="packagebasetags"></a>`tags` | [`PackageTagConnection`](#packagetagconnection) | Package tags. (see [Connections](#connections)) |
| <a id="packagebaseupdatedat"></a>`updatedAt` | [`Time!`](#time) | Date of most recent update. |
+| <a id="packagebaseuserpermissions"></a>`userPermissions` | [`PackagePermissions!`](#packagepermissions) | Permissions for the current user on the resource. |
| <a id="packagebaseversion"></a>`version` | [`String`](#string) | Version string. |
### `PackageComposerJsonType`
@@ -21778,7 +22354,7 @@ Represents a package details in the Package Registry.
| Name | Type | Description |
| ---- | ---- | ----------- |
| <a id="packagedetailstype_links"></a>`_links` | [`PackageLinks!`](#packagelinks) | Map of links to perform actions on the package. |
-| <a id="packagedetailstypecandestroy"></a>`canDestroy` | [`Boolean!`](#boolean) | Whether the user can destroy the package. |
+| <a id="packagedetailstypecandestroy"></a>`canDestroy` **{warning-solid}** | [`Boolean!`](#boolean) | **Deprecated** in 16.6. Superseded by `user_permissions` field. See `Types::PermissionTypes::Package` type. |
| <a id="packagedetailstypecomposerconfigrepositoryurl"></a>`composerConfigRepositoryUrl` | [`String`](#string) | Url of the Composer setup endpoint. |
| <a id="packagedetailstypecomposerurl"></a>`composerUrl` | [`String`](#string) | Url of the Composer endpoint. |
| <a id="packagedetailstypeconanurl"></a>`conanUrl` | [`String`](#string) | Url of the Conan project endpoint. |
@@ -21802,6 +22378,7 @@ Represents a package details in the Package Registry.
| <a id="packagedetailstypestatusmessage"></a>`statusMessage` | [`String`](#string) | Status message. |
| <a id="packagedetailstypetags"></a>`tags` | [`PackageTagConnection`](#packagetagconnection) | Package tags. (see [Connections](#connections)) |
| <a id="packagedetailstypeupdatedat"></a>`updatedAt` | [`Time!`](#time) | Date of most recent update. |
+| <a id="packagedetailstypeuserpermissions"></a>`userPermissions` | [`PackagePermissions!`](#packagepermissions) | Permissions for the current user on the resource. |
| <a id="packagedetailstypeversion"></a>`version` | [`String`](#string) | Version string. |
| <a id="packagedetailstypeversions"></a>`versions` | [`PackageBaseConnection`](#packagebaseconnection) | Other versions of the package. (see [Connections](#connections)) |
@@ -21913,6 +22490,14 @@ Represents links to perform actions on the package.
| ---- | ---- | ----------- |
| <a id="packagelinkswebpath"></a>`webPath` | [`String`](#string) | Path to the package details page. |
+### `PackagePermissions`
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="packagepermissionsdestroypackage"></a>`destroyPackage` | [`Boolean!`](#boolean) | If `true`, the user can perform `destroy_package` on this resource. |
+
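+Because `canDestroy` is deprecated in favor of `userPermissions`, clients can migrate to a selection like the following sketch. The project path is a placeholder and the fields shown are a minimal subset.
+
+```graphql
+query {
+  project(fullPath: "my-group/my-project") {
+    packages(first: 10) {
+      nodes {
+        name
+        # Replacement for the deprecated `canDestroy` field.
+        userPermissions {
+          destroyPackage
+        }
+      }
+    }
+  }
+}
+```
+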
### `PackageSettings`
Namespace-level Package Registry settings.
@@ -21932,8 +22517,8 @@ Namespace-level Package Registry settings.
| <a id="packagesettingsmavenpackagerequestsforwardinglocked"></a>`mavenPackageRequestsForwardingLocked` | [`Boolean!`](#boolean) | Indicates whether Maven package forwarding settings are locked by a parent namespace. |
| <a id="packagesettingsnpmpackagerequestsforwarding"></a>`npmPackageRequestsForwarding` | [`Boolean`](#boolean) | Indicates whether npm package forwarding is allowed for this namespace. |
| <a id="packagesettingsnpmpackagerequestsforwardinglocked"></a>`npmPackageRequestsForwardingLocked` | [`Boolean!`](#boolean) | Indicates whether npm package forwarding settings are locked by a parent namespace. |
-| <a id="packagesettingsnugetduplicateexceptionregex"></a>`nugetDuplicateExceptionRegex` | [`UntrustedRegexp`](#untrustedregexp) | When nuget_duplicates_allowed is false, you can publish duplicate packages with names that match this regex. Otherwise, this setting has no effect. Error is raised if `nuget_duplicates_option` feature flag is disabled. |
-| <a id="packagesettingsnugetduplicatesallowed"></a>`nugetDuplicatesAllowed` | [`Boolean!`](#boolean) | Indicates whether duplicate NuGet packages are allowed for this namespace. Error is raised if `nuget_duplicates_option` feature flag is disabled. |
+| <a id="packagesettingsnugetduplicateexceptionregex"></a>`nugetDuplicateExceptionRegex` | [`UntrustedRegexp`](#untrustedregexp) | When nuget_duplicates_allowed is false, you can publish duplicate packages with names that match this regex. Otherwise, this setting has no effect. |
+| <a id="packagesettingsnugetduplicatesallowed"></a>`nugetDuplicatesAllowed` | [`Boolean!`](#boolean) | Indicates whether duplicate NuGet packages are allowed for this namespace. |
| <a id="packagesettingspypipackagerequestsforwarding"></a>`pypiPackageRequestsForwarding` | [`Boolean`](#boolean) | Indicates whether PyPI package forwarding is allowed for this namespace. |
| <a id="packagesettingspypipackagerequestsforwardinglocked"></a>`pypiPackageRequestsForwardingLocked` | [`Boolean!`](#boolean) | Indicates whether PyPI package forwarding settings are locked by a parent namespace. |
@@ -21969,6 +22554,7 @@ A packages protection rule designed to protect packages from being pushed by use
| Name | Type | Description |
| ---- | ---- | ----------- |
+| <a id="packagesprotectionruleid"></a>`id` | [`PackagesProtectionRuleID!`](#packagesprotectionruleid) | ID of the package protection rule. |
| <a id="packagesprotectionrulepackagenamepattern"></a>`packageNamePattern` | [`String!`](#string) | Package name protected by the protection rule. For example `@my-scope/my-package-*`. Wildcard character `*` allowed. |
| <a id="packagesprotectionrulepackagetype"></a>`packageType` | [`PackagesProtectionRulePackageType!`](#packagesprotectionrulepackagetype) | Package type protected by the protection rule. For example `NPM`. |
| <a id="packagesprotectionrulepushprotecteduptoaccesslevel"></a>`pushProtectedUpToAccessLevel` | [`PackagesProtectionRuleAccessLevel!`](#packagesprotectionruleaccesslevel) | Max GitLab access level unable to push a package. For example `DEVELOPER`, `MAINTAINER`, `OWNER`. |
@@ -22022,6 +22608,46 @@ Represents a file or directory in the project repository that has been locked.
| <a id="pathlockpath"></a>`path` | [`String`](#string) | Locked path. |
| <a id="pathlockuser"></a>`user` | [`UserCore`](#usercore) | User that has locked this path. |
+### `PendingGroupMember`
+
+Represents a Pending Group Membership.
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="pendinggroupmemberaccesslevel"></a>`accessLevel` | [`AccessLevel`](#accesslevel) | GitLab::Access level. |
+| <a id="pendinggroupmemberapproved"></a>`approved` | [`Boolean`](#boolean) | Whether the pending group member has been approved. |
+| <a id="pendinggroupmemberavatarurl"></a>`avatarUrl` | [`String`](#string) | URL to avatar image file of the pending group member. |
+| <a id="pendinggroupmembercreatedat"></a>`createdAt` | [`Time`](#time) | Date and time the membership was created. |
+| <a id="pendinggroupmembercreatedby"></a>`createdBy` | [`UserCore`](#usercore) | User that authorized membership. |
+| <a id="pendinggroupmemberemail"></a>`email` | [`String`](#string) | Public email of the pending group member. |
+| <a id="pendinggroupmemberexpiresat"></a>`expiresAt` | [`Time`](#time) | Date and time the membership expires. |
+| <a id="pendinggroupmembergroup"></a>`group` | [`Group`](#group) | Group that a user is a member of. |
+| <a id="pendinggroupmemberid"></a>`id` | [`ID!`](#id) | ID of the member. |
+| <a id="pendinggroupmemberinvited"></a>`invited` | [`Boolean`](#boolean) | Whether the pending group member has been invited. |
+| <a id="pendinggroupmembername"></a>`name` | [`String`](#string) | Name of the pending group member. |
+| <a id="pendinggroupmembernotificationemail"></a>`notificationEmail` | [`String`](#string) | Group notification email for user. Only available for admins. |
+| <a id="pendinggroupmemberupdatedat"></a>`updatedAt` | [`Time`](#time) | Date and time the membership was last updated. |
+| <a id="pendinggroupmemberuser"></a>`user` | [`UserCore`](#usercore) | User that is associated with the member object. |
+| <a id="pendinggroupmemberuserpermissions"></a>`userPermissions` | [`GroupPermissions!`](#grouppermissions) | Permissions for the current user on the resource. |
+| <a id="pendinggroupmemberusername"></a>`username` | [`String`](#string) | Username of the pending group member. |
+| <a id="pendinggroupmemberweburl"></a>`webUrl` | [`String`](#string) | Web URL of the pending group member. |
+
+#### Fields with arguments
+
+##### `PendingGroupMember.mergeRequestInteraction`
+
+Find a merge request.
+
+Returns [`UserMergeRequestInteraction`](#usermergerequestinteraction).
+
+###### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="pendinggroupmembermergerequestinteractionid"></a>`id` | [`MergeRequestID!`](#mergerequestid) | Global ID of the merge request. |
+
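+As a rough sketch, pending members could be listed with a query like the one below. The group path is a placeholder, and the parent field name `pendingMembers` is an assumption in this example; confirm the exact field exposed on `Group` before relying on it.
+
+```graphql
+query {
+  group(fullPath: "my-group") {
+    # `pendingMembers` is assumed here as the field returning PendingGroupMember records.
+    pendingMembers {
+      nodes {
+        name
+        email
+        invited
+        approved
+      }
+    }
+  }
+}
+```
+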
### `Pipeline`
#### Fields
@@ -22067,7 +22693,7 @@ Represents a file or directory in the project repository that has been locked.
| <a id="pipelinesourcejob"></a>`sourceJob` | [`CiJob`](#cijob) | Job where pipeline was triggered from. |
| <a id="pipelinestages"></a>`stages` | [`CiStageConnection`](#cistageconnection) | Stages of the pipeline. (see [Connections](#connections)) |
| <a id="pipelinestartedat"></a>`startedAt` | [`Time`](#time) | Timestamp when the pipeline was started. |
-| <a id="pipelinestatus"></a>`status` | [`PipelineStatusEnum!`](#pipelinestatusenum) | Status of the pipeline (CREATED, WAITING_FOR_RESOURCE, PREPARING, PENDING, RUNNING, FAILED, SUCCESS, CANCELED, SKIPPED, MANUAL, SCHEDULED). |
+| <a id="pipelinestatus"></a>`status` | [`PipelineStatusEnum!`](#pipelinestatusenum) | Status of the pipeline (CREATED, WAITING_FOR_RESOURCE, PREPARING, WAITING_FOR_CALLBACK, PENDING, RUNNING, FAILED, SUCCESS, CANCELED, SKIPPED, MANUAL, SCHEDULED). |
| <a id="pipelinestuck"></a>`stuck` | [`Boolean!`](#boolean) | If the pipeline is stuck. |
| <a id="pipelinetestreportsummary"></a>`testReportSummary` | [`TestReportSummary!`](#testreportsummary) | Summary of the test report generated by the pipeline. |
| <a id="pipelinetotaljobs"></a>`totalJobs` | [`Int!`](#int) | The total number of jobs in the pipeline. |
@@ -22240,9 +22866,10 @@ Represents pipeline counts for the project.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="pipelinepermissionsadminpipeline"></a>`adminPipeline` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_pipeline` on this resource. |
-| <a id="pipelinepermissionsdestroypipeline"></a>`destroyPipeline` | [`Boolean!`](#boolean) | Indicates the user can perform `destroy_pipeline` on this resource. |
-| <a id="pipelinepermissionsupdatepipeline"></a>`updatePipeline` | [`Boolean!`](#boolean) | Indicates the user can perform `update_pipeline` on this resource. |
+| <a id="pipelinepermissionsadminpipeline"></a>`adminPipeline` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_pipeline` on this resource. |
+| <a id="pipelinepermissionscancelpipeline"></a>`cancelPipeline` | [`Boolean!`](#boolean) | If `true`, the user can perform `cancel_pipeline` on this resource. |
+| <a id="pipelinepermissionsdestroypipeline"></a>`destroyPipeline` | [`Boolean!`](#boolean) | If `true`, the user can perform `destroy_pipeline` on this resource. |
+| <a id="pipelinepermissionsupdatepipeline"></a>`updatePipeline` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_pipeline` on this resource. |
### `PipelineSchedule`
@@ -22278,10 +22905,10 @@ Represents a pipeline schedule.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="pipelineschedulepermissionsadminpipelineschedule"></a>`adminPipelineSchedule` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_pipeline_schedule` on this resource. |
-| <a id="pipelineschedulepermissionsplaypipelineschedule"></a>`playPipelineSchedule` | [`Boolean!`](#boolean) | Indicates the user can perform `play_pipeline_schedule` on this resource. |
+| <a id="pipelineschedulepermissionsadminpipelineschedule"></a>`adminPipelineSchedule` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_pipeline_schedule` on this resource. |
+| <a id="pipelineschedulepermissionsplaypipelineschedule"></a>`playPipelineSchedule` | [`Boolean!`](#boolean) | If `true`, the user can perform `play_pipeline_schedule` on this resource. |
| <a id="pipelineschedulepermissionstakeownershippipelineschedule"></a>`takeOwnershipPipelineSchedule` **{warning-solid}** | [`Boolean!`](#boolean) | **Deprecated** in 15.9. Use admin_pipeline_schedule permission to determine if the user can take ownership of a pipeline schedule. |
-| <a id="pipelineschedulepermissionsupdatepipelineschedule"></a>`updatePipelineSchedule` | [`Boolean!`](#boolean) | Indicates the user can perform `update_pipeline_schedule` on this resource. |
+| <a id="pipelineschedulepermissionsupdatepipelineschedule"></a>`updatePipelineSchedule` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_pipeline_schedule` on this resource. |
### `PipelineScheduleVariable`
@@ -22396,6 +23023,7 @@ Represents vulnerability finding of a security report on the pipeline.
| <a id="projectdependencyproxypackagessetting"></a>`dependencyProxyPackagesSetting` **{warning-solid}** | [`DependencyProxyPackagesSetting`](#dependencyproxypackagessetting) | **Introduced** in 16.5. This feature is an Experiment. It can be changed or removed at any time. Packages Dependency Proxy settings for the project. Requires the packages and dependency proxy to be enabled in the config. Requires the packages feature to be enabled at the project level. Returns `null` if `packages_dependency_proxy_maven` feature flag is disabled. |
| <a id="projectdescription"></a>`description` | [`String`](#string) | Short description of the project. |
| <a id="projectdescriptionhtml"></a>`descriptionHtml` | [`String`](#string) | GitLab Flavored Markdown rendering of `description`. |
+| <a id="projectdetailedimportstatus"></a>`detailedImportStatus` | [`DetailedImportStatus`](#detailedimportstatus) | Detailed import status of the project. |
| <a id="projectdora"></a>`dora` | [`Dora`](#dora) | Project's DORA metrics. |
| <a id="projectflowmetrics"></a>`flowMetrics` **{warning-solid}** | [`ProjectValueStreamAnalyticsFlowMetrics`](#projectvaluestreamanalyticsflowmetrics) | **Introduced** in 15.10. This feature is an Experiment. It can be changed or removed at any time. Flow metrics for value stream analytics. |
| <a id="projectforkscount"></a>`forksCount` | [`Int!`](#int) | Number of times the project has been forked. |
@@ -23291,6 +23919,26 @@ four standard [pagination arguments](#connection-pagination-arguments):
| <a id="projectlabelsincludeancestorgroups"></a>`includeAncestorGroups` | [`Boolean`](#boolean) | Include labels from ancestor groups. |
| <a id="projectlabelssearchterm"></a>`searchTerm` | [`String`](#string) | Search term to find labels with. |
+##### `Project.memberRoles`
+
+Member roles available for the project.
+
+WARNING:
+**Introduced** in 16.5.
+This feature is an Experiment. It can be changed or removed at any time.
+
+Returns [`MemberRoleConnection`](#memberroleconnection).
+
+This field returns a [connection](#connections). It accepts the
+four standard [pagination arguments](#connection-pagination-arguments):
+`before: String`, `after: String`, `first: Int`, `last: Int`.
+
+###### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="projectmemberrolesid"></a>`id` | [`MemberRoleID`](#memberroleid) | Global ID of the member role to look up. |
+
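+A minimal sketch of listing a project's member roles, using a placeholder project path. These fields are part of the Experiment described above and can change at any time.
+
+```graphql
+query {
+  project(fullPath: "my-group/my-project") {
+    memberRoles {
+      nodes {
+        id
+        name
+        description
+      }
+    }
+  }
+}
+```
+
+To look up a single role, pass the `id` argument shown in the table above instead of listing all roles.
+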
##### `Project.mergeRequest`
A single merge request of the project.
@@ -23416,6 +24064,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
| <a id="projectpackagesincludeversionless"></a>`includeVersionless` | [`Boolean`](#boolean) | Include versionless packages. |
| <a id="projectpackagespackagename"></a>`packageName` | [`String`](#string) | Search a package by name. |
| <a id="projectpackagespackagetype"></a>`packageType` | [`PackageTypeEnum`](#packagetypeenum) | Filter a package by type. |
+| <a id="projectpackagespackageversion"></a>`packageVersion` | [`String`](#string) | Filter a package by version. If used in combination with `include_versionless`, then no versionless packages are returned. |
| <a id="projectpackagessort"></a>`sort` | [`PackageSort`](#packagesort) | Sort packages by this criteria. |
| <a id="projectpackagesstatus"></a>`status` | [`PackageStatus`](#packagestatus) | Filter a package by status. |
@@ -23604,6 +24253,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
| Name | Type | Description |
| ---- | ---- | ----------- |
| <a id="projectrunnersactive"></a>`active` **{warning-solid}** | [`Boolean`](#boolean) | **Deprecated** in 14.8. This was renamed. Use: `paused`. |
+| <a id="projectrunnerscreatorid"></a>`creatorId` | [`UserID`](#userid) | Filter runners by creator ID. |
| <a id="projectrunnerspaused"></a>`paused` | [`Boolean`](#boolean) | Filter runners by `paused` (true) or `active` (false) status. |
| <a id="projectrunnerssearch"></a>`search` | [`String`](#string) | Filter by full token or partial text in description field. |
| <a id="projectrunnerssort"></a>`sort` | [`CiRunnerSort`](#cirunnersort) | Sort order of results. |
@@ -23611,6 +24261,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
| <a id="projectrunnerstaglist"></a>`tagList` | [`[String!]`](#string) | Filter by tags associated with the runner (comma-separated or array). |
| <a id="projectrunnerstype"></a>`type` | [`CiRunnerType`](#cirunnertype) | Filter runners by type. |
| <a id="projectrunnersupgradestatus"></a>`upgradeStatus` | [`CiRunnerUpgradeStatus`](#cirunnerupgradestatus) | Filter by upgrade status. |
+| <a id="projectrunnersversionprefix"></a>`versionPrefix` **{warning-solid}** | [`String`](#string) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Filter runners by version. Runners that contain runner managers with the version at the start of the search term are returned. For example, the search term '14.' returns runner managers with versions '14.11.1' and '14.2.3'. |
##### `Project.scanExecutionPolicies`
@@ -23960,50 +24611,50 @@ Returns [`UserMergeRequestInteraction`](#usermergerequestinteraction).
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="projectpermissionsadminoperations"></a>`adminOperations` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_operations` on this resource. |
-| <a id="projectpermissionsadminpathlocks"></a>`adminPathLocks` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_path_locks` on this resource. |
-| <a id="projectpermissionsadminproject"></a>`adminProject` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_project` on this resource. |
-| <a id="projectpermissionsadminremotemirror"></a>`adminRemoteMirror` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_remote_mirror` on this resource. |
-| <a id="projectpermissionsadminwiki"></a>`adminWiki` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_wiki` on this resource. |
-| <a id="projectpermissionsarchiveproject"></a>`archiveProject` | [`Boolean!`](#boolean) | Indicates the user can perform `archive_project` on this resource. |
-| <a id="projectpermissionschangenamespace"></a>`changeNamespace` | [`Boolean!`](#boolean) | Indicates the user can perform `change_namespace` on this resource. |
-| <a id="projectpermissionschangevisibilitylevel"></a>`changeVisibilityLevel` | [`Boolean!`](#boolean) | Indicates the user can perform `change_visibility_level` on this resource. |
-| <a id="projectpermissionscreatedeployment"></a>`createDeployment` | [`Boolean!`](#boolean) | Indicates the user can perform `create_deployment` on this resource. |
-| <a id="projectpermissionscreatedesign"></a>`createDesign` | [`Boolean!`](#boolean) | Indicates the user can perform `create_design` on this resource. |
-| <a id="projectpermissionscreateissue"></a>`createIssue` | [`Boolean!`](#boolean) | Indicates the user can perform `create_issue` on this resource. |
-| <a id="projectpermissionscreatelabel"></a>`createLabel` | [`Boolean!`](#boolean) | Indicates the user can perform `create_label` on this resource. |
-| <a id="projectpermissionscreatemergerequestfrom"></a>`createMergeRequestFrom` | [`Boolean!`](#boolean) | Indicates the user can perform `create_merge_request_from` on this resource. |
-| <a id="projectpermissionscreatemergerequestin"></a>`createMergeRequestIn` | [`Boolean!`](#boolean) | Indicates the user can perform `create_merge_request_in` on this resource. |
-| <a id="projectpermissionscreatepages"></a>`createPages` | [`Boolean!`](#boolean) | Indicates the user can perform `create_pages` on this resource. |
-| <a id="projectpermissionscreatepipeline"></a>`createPipeline` | [`Boolean!`](#boolean) | Indicates the user can perform `create_pipeline` on this resource. |
-| <a id="projectpermissionscreatepipelineschedule"></a>`createPipelineSchedule` | [`Boolean!`](#boolean) | Indicates the user can perform `create_pipeline_schedule` on this resource. |
-| <a id="projectpermissionscreatesnippet"></a>`createSnippet` | [`Boolean!`](#boolean) | Indicates the user can perform `create_snippet` on this resource. |
-| <a id="projectpermissionscreatewiki"></a>`createWiki` | [`Boolean!`](#boolean) | Indicates the user can perform `create_wiki` on this resource. |
-| <a id="projectpermissionsdestroydesign"></a>`destroyDesign` | [`Boolean!`](#boolean) | Indicates the user can perform `destroy_design` on this resource. |
-| <a id="projectpermissionsdestroypages"></a>`destroyPages` | [`Boolean!`](#boolean) | Indicates the user can perform `destroy_pages` on this resource. |
-| <a id="projectpermissionsdestroywiki"></a>`destroyWiki` | [`Boolean!`](#boolean) | Indicates the user can perform `destroy_wiki` on this resource. |
-| <a id="projectpermissionsdownloadcode"></a>`downloadCode` | [`Boolean!`](#boolean) | Indicates the user can perform `download_code` on this resource. |
-| <a id="projectpermissionsdownloadwikicode"></a>`downloadWikiCode` | [`Boolean!`](#boolean) | Indicates the user can perform `download_wiki_code` on this resource. |
-| <a id="projectpermissionsforkproject"></a>`forkProject` | [`Boolean!`](#boolean) | Indicates the user can perform `fork_project` on this resource. |
-| <a id="projectpermissionspushcode"></a>`pushCode` | [`Boolean!`](#boolean) | Indicates the user can perform `push_code` on this resource. |
-| <a id="projectpermissionspushtodeleteprotectedbranch"></a>`pushToDeleteProtectedBranch` | [`Boolean!`](#boolean) | Indicates the user can perform `push_to_delete_protected_branch` on this resource. |
-| <a id="projectpermissionsreadcommitstatus"></a>`readCommitStatus` | [`Boolean!`](#boolean) | Indicates the user can perform `read_commit_status` on this resource. |
-| <a id="projectpermissionsreadcycleanalytics"></a>`readCycleAnalytics` | [`Boolean!`](#boolean) | Indicates the user can perform `read_cycle_analytics` on this resource. |
-| <a id="projectpermissionsreaddesign"></a>`readDesign` | [`Boolean!`](#boolean) | Indicates the user can perform `read_design` on this resource. |
-| <a id="projectpermissionsreadenvironment"></a>`readEnvironment` | [`Boolean!`](#boolean) | Indicates the user can perform `read_environment` on this resource. |
-| <a id="projectpermissionsreadmergerequest"></a>`readMergeRequest` | [`Boolean!`](#boolean) | Indicates the user can perform `read_merge_request` on this resource. |
-| <a id="projectpermissionsreadpagescontent"></a>`readPagesContent` | [`Boolean!`](#boolean) | Indicates the user can perform `read_pages_content` on this resource. |
-| <a id="projectpermissionsreadproject"></a>`readProject` | [`Boolean!`](#boolean) | Indicates the user can perform `read_project` on this resource. |
-| <a id="projectpermissionsreadprojectmember"></a>`readProjectMember` | [`Boolean!`](#boolean) | Indicates the user can perform `read_project_member` on this resource. |
-| <a id="projectpermissionsreadwiki"></a>`readWiki` | [`Boolean!`](#boolean) | Indicates the user can perform `read_wiki` on this resource. |
-| <a id="projectpermissionsremoveforkproject"></a>`removeForkProject` | [`Boolean!`](#boolean) | Indicates the user can perform `remove_fork_project` on this resource. |
-| <a id="projectpermissionsremovepages"></a>`removePages` | [`Boolean!`](#boolean) | Indicates the user can perform `remove_pages` on this resource. |
-| <a id="projectpermissionsremoveproject"></a>`removeProject` | [`Boolean!`](#boolean) | Indicates the user can perform `remove_project` on this resource. |
-| <a id="projectpermissionsrenameproject"></a>`renameProject` | [`Boolean!`](#boolean) | Indicates the user can perform `rename_project` on this resource. |
-| <a id="projectpermissionsrequestaccess"></a>`requestAccess` | [`Boolean!`](#boolean) | Indicates the user can perform `request_access` on this resource. |
-| <a id="projectpermissionsupdatepages"></a>`updatePages` | [`Boolean!`](#boolean) | Indicates the user can perform `update_pages` on this resource. |
-| <a id="projectpermissionsupdatewiki"></a>`updateWiki` | [`Boolean!`](#boolean) | Indicates the user can perform `update_wiki` on this resource. |
-| <a id="projectpermissionsuploadfile"></a>`uploadFile` | [`Boolean!`](#boolean) | Indicates the user can perform `upload_file` on this resource. |
+| <a id="projectpermissionsadminoperations"></a>`adminOperations` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_operations` on this resource. |
+| <a id="projectpermissionsadminpathlocks"></a>`adminPathLocks` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_path_locks` on this resource. |
+| <a id="projectpermissionsadminproject"></a>`adminProject` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_project` on this resource. |
+| <a id="projectpermissionsadminremotemirror"></a>`adminRemoteMirror` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_remote_mirror` on this resource. |
+| <a id="projectpermissionsadminwiki"></a>`adminWiki` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_wiki` on this resource. |
+| <a id="projectpermissionsarchiveproject"></a>`archiveProject` | [`Boolean!`](#boolean) | If `true`, the user can perform `archive_project` on this resource. |
+| <a id="projectpermissionschangenamespace"></a>`changeNamespace` | [`Boolean!`](#boolean) | If `true`, the user can perform `change_namespace` on this resource. |
+| <a id="projectpermissionschangevisibilitylevel"></a>`changeVisibilityLevel` | [`Boolean!`](#boolean) | If `true`, the user can perform `change_visibility_level` on this resource. |
+| <a id="projectpermissionscreatedeployment"></a>`createDeployment` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_deployment` on this resource. |
+| <a id="projectpermissionscreatedesign"></a>`createDesign` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_design` on this resource. |
+| <a id="projectpermissionscreateissue"></a>`createIssue` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_issue` on this resource. |
+| <a id="projectpermissionscreatelabel"></a>`createLabel` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_label` on this resource. |
+| <a id="projectpermissionscreatemergerequestfrom"></a>`createMergeRequestFrom` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_merge_request_from` on this resource. |
+| <a id="projectpermissionscreatemergerequestin"></a>`createMergeRequestIn` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_merge_request_in` on this resource. |
+| <a id="projectpermissionscreatepages"></a>`createPages` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_pages` on this resource. |
+| <a id="projectpermissionscreatepipeline"></a>`createPipeline` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_pipeline` on this resource. |
+| <a id="projectpermissionscreatepipelineschedule"></a>`createPipelineSchedule` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_pipeline_schedule` on this resource. |
+| <a id="projectpermissionscreatesnippet"></a>`createSnippet` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_snippet` on this resource. |
+| <a id="projectpermissionscreatewiki"></a>`createWiki` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_wiki` on this resource. |
+| <a id="projectpermissionsdestroydesign"></a>`destroyDesign` | [`Boolean!`](#boolean) | If `true`, the user can perform `destroy_design` on this resource. |
+| <a id="projectpermissionsdestroypages"></a>`destroyPages` | [`Boolean!`](#boolean) | If `true`, the user can perform `destroy_pages` on this resource. |
+| <a id="projectpermissionsdestroywiki"></a>`destroyWiki` | [`Boolean!`](#boolean) | If `true`, the user can perform `destroy_wiki` on this resource. |
+| <a id="projectpermissionsdownloadcode"></a>`downloadCode` | [`Boolean!`](#boolean) | If `true`, the user can perform `download_code` on this resource. |
+| <a id="projectpermissionsdownloadwikicode"></a>`downloadWikiCode` | [`Boolean!`](#boolean) | If `true`, the user can perform `download_wiki_code` on this resource. |
+| <a id="projectpermissionsforkproject"></a>`forkProject` | [`Boolean!`](#boolean) | If `true`, the user can perform `fork_project` on this resource. |
+| <a id="projectpermissionspushcode"></a>`pushCode` | [`Boolean!`](#boolean) | If `true`, the user can perform `push_code` on this resource. |
+| <a id="projectpermissionspushtodeleteprotectedbranch"></a>`pushToDeleteProtectedBranch` | [`Boolean!`](#boolean) | If `true`, the user can perform `push_to_delete_protected_branch` on this resource. |
+| <a id="projectpermissionsreadcommitstatus"></a>`readCommitStatus` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_commit_status` on this resource. |
+| <a id="projectpermissionsreadcycleanalytics"></a>`readCycleAnalytics` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_cycle_analytics` on this resource. |
+| <a id="projectpermissionsreaddesign"></a>`readDesign` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_design` on this resource. |
+| <a id="projectpermissionsreadenvironment"></a>`readEnvironment` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_environment` on this resource. |
+| <a id="projectpermissionsreadmergerequest"></a>`readMergeRequest` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_merge_request` on this resource. |
+| <a id="projectpermissionsreadpagescontent"></a>`readPagesContent` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_pages_content` on this resource. |
+| <a id="projectpermissionsreadproject"></a>`readProject` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_project` on this resource. |
+| <a id="projectpermissionsreadprojectmember"></a>`readProjectMember` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_project_member` on this resource. |
+| <a id="projectpermissionsreadwiki"></a>`readWiki` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_wiki` on this resource. |
+| <a id="projectpermissionsremoveforkproject"></a>`removeForkProject` | [`Boolean!`](#boolean) | If `true`, the user can perform `remove_fork_project` on this resource. |
+| <a id="projectpermissionsremovepages"></a>`removePages` | [`Boolean!`](#boolean) | If `true`, the user can perform `remove_pages` on this resource. |
+| <a id="projectpermissionsremoveproject"></a>`removeProject` | [`Boolean!`](#boolean) | If `true`, the user can perform `remove_project` on this resource. |
+| <a id="projectpermissionsrenameproject"></a>`renameProject` | [`Boolean!`](#boolean) | If `true`, the user can perform `rename_project` on this resource. |
+| <a id="projectpermissionsrequestaccess"></a>`requestAccess` | [`Boolean!`](#boolean) | If `true`, the user can perform `request_access` on this resource. |
+| <a id="projectpermissionsupdatepages"></a>`updatePages` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_pages` on this resource. |
+| <a id="projectpermissionsupdatewiki"></a>`updateWiki` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_wiki` on this resource. |
+| <a id="projectpermissionsuploadfile"></a>`uploadFile` | [`Boolean!`](#boolean) | If `true`, the user can perform `upload_file` on this resource. |
### `ProjectRepositoryRegistry`
@@ -24062,7 +24713,13 @@ Represents the source of a security policy belonging to a project.
| <a id="projectstatisticsbuildartifactssize"></a>`buildArtifactsSize` | [`Float!`](#float) | Build artifacts size of the project in bytes. |
| <a id="projectstatisticscommitcount"></a>`commitCount` | [`Float!`](#float) | Commit count of the project. |
| <a id="projectstatisticscontainerregistrysize"></a>`containerRegistrySize` | [`Float`](#float) | Container Registry size of the project in bytes. |
+| <a id="projectstatisticscostfactoredbuildartifactssize"></a>`costFactoredBuildArtifactsSize` **{warning-solid}** | [`Float!`](#float) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Build artifacts size in bytes with any applicable cost factor for forks applied. This will equal build_artifacts_size if there is no applicable cost factor. |
+| <a id="projectstatisticscostfactoredlfsobjectssize"></a>`costFactoredLfsObjectsSize` **{warning-solid}** | [`Float!`](#float) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. LFS objects size in bytes with any applicable cost factor for forks applied. This will equal lfs_objects_size if there is no applicable cost factor. |
+| <a id="projectstatisticscostfactoredpackagessize"></a>`costFactoredPackagesSize` **{warning-solid}** | [`Float!`](#float) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Packages size in bytes with any applicable cost factor for forks applied. This will equal packages_size if there is no applicable cost factor. |
+| <a id="projectstatisticscostfactoredrepositorysize"></a>`costFactoredRepositorySize` **{warning-solid}** | [`Float!`](#float) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Repository size in bytes with any applicable cost factor for forks applied. This will equal repository_size if there is no applicable cost factor. |
+| <a id="projectstatisticscostfactoredsnippetssize"></a>`costFactoredSnippetsSize` **{warning-solid}** | [`Float!`](#float) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Snippets size in bytes with any applicable cost factor for forks applied. This will equal snippets_size if there is no applicable cost factor. |
| <a id="projectstatisticscostfactoredstoragesize"></a>`costFactoredStorageSize` **{warning-solid}** | [`Float!`](#float) | **Introduced** in 16.2. This feature is an Experiment. It can be changed or removed at any time. Storage size in bytes with any applicable cost factor for forks applied. This will equal storage_size if there is no applicable cost factor. |
+| <a id="projectstatisticscostfactoredwikisize"></a>`costFactoredWikiSize` **{warning-solid}** | [`Float!`](#float) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Wiki size in bytes with any applicable cost factor for forks applied. This will equal wiki_size if there is no applicable cost factor. |
| <a id="projectstatisticslfsobjectssize"></a>`lfsObjectsSize` | [`Float!`](#float) | Large File Storage (LFS) object size of the project in bytes. |
| <a id="projectstatisticspackagessize"></a>`packagesSize` | [`Float!`](#float) | Packages size of the project in bytes. |
| <a id="projectstatisticspipelineartifactssize"></a>`pipelineArtifactsSize` | [`Float`](#float) | CI Pipeline artifacts size in bytes. |
@@ -24316,8 +24973,14 @@ Pypi metadata.
| Name | Type | Description |
| ---- | ---- | ----------- |
+| <a id="pypimetadataauthoremail"></a>`authorEmail` | [`String`](#string) | Author email address(es) in RFC-822 format. |
+| <a id="pypimetadatadescription"></a>`description` | [`String`](#string) | Longer description that can run to several paragraphs. |
+| <a id="pypimetadatadescriptioncontenttype"></a>`descriptionContentType` | [`String`](#string) | Markup syntax used in the description field. |
| <a id="pypimetadataid"></a>`id` | [`PackagesPypiMetadatumID!`](#packagespypimetadatumid) | ID of the metadatum. |
+| <a id="pypimetadatakeywords"></a>`keywords` | [`String`](#string) | List of keywords, separated by commas. |
+| <a id="pypimetadatametadataversion"></a>`metadataVersion` | [`String`](#string) | Metadata version. |
| <a id="pypimetadatarequiredpython"></a>`requiredPython` | [`String`](#string) | Required Python version of the Pypi package. |
+| <a id="pypimetadatasummary"></a>`summary` | [`String`](#string) | One-line summary of the description. |
### `QueryComplexity`
@@ -24702,11 +25365,11 @@ Check permissions for the current user on a requirement.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="requirementpermissionsadminrequirement"></a>`adminRequirement` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_requirement` on this resource. |
-| <a id="requirementpermissionscreaterequirement"></a>`createRequirement` | [`Boolean!`](#boolean) | Indicates the user can perform `create_requirement` on this resource. |
-| <a id="requirementpermissionsdestroyrequirement"></a>`destroyRequirement` | [`Boolean!`](#boolean) | Indicates the user can perform `destroy_requirement` on this resource. |
-| <a id="requirementpermissionsreadrequirement"></a>`readRequirement` | [`Boolean!`](#boolean) | Indicates the user can perform `read_requirement` on this resource. |
-| <a id="requirementpermissionsupdaterequirement"></a>`updateRequirement` | [`Boolean!`](#boolean) | Indicates the user can perform `update_requirement` on this resource. |
+| <a id="requirementpermissionsadminrequirement"></a>`adminRequirement` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_requirement` on this resource. |
+| <a id="requirementpermissionscreaterequirement"></a>`createRequirement` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_requirement` on this resource. |
+| <a id="requirementpermissionsdestroyrequirement"></a>`destroyRequirement` | [`Boolean!`](#boolean) | If `true`, the user can perform `destroy_requirement` on this resource. |
+| <a id="requirementpermissionsreadrequirement"></a>`readRequirement` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_requirement` on this resource. |
+| <a id="requirementpermissionsupdaterequirement"></a>`updateRequirement` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_requirement` on this resource. |
### `RequirementStatesCount`
@@ -24755,10 +25418,10 @@ Counts of requirements by their state.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="runnerpermissionsassignrunner"></a>`assignRunner` | [`Boolean!`](#boolean) | Indicates the user can perform `assign_runner` on this resource. |
-| <a id="runnerpermissionsdeleterunner"></a>`deleteRunner` | [`Boolean!`](#boolean) | Indicates the user can perform `delete_runner` on this resource. |
-| <a id="runnerpermissionsreadrunner"></a>`readRunner` | [`Boolean!`](#boolean) | Indicates the user can perform `read_runner` on this resource. |
-| <a id="runnerpermissionsupdaterunner"></a>`updateRunner` | [`Boolean!`](#boolean) | Indicates the user can perform `update_runner` on this resource. |
+| <a id="runnerpermissionsassignrunner"></a>`assignRunner` | [`Boolean!`](#boolean) | If `true`, the user can perform `assign_runner` on this resource. |
+| <a id="runnerpermissionsdeleterunner"></a>`deleteRunner` | [`Boolean!`](#boolean) | If `true`, the user can perform `delete_runner` on this resource. |
+| <a id="runnerpermissionsreadrunner"></a>`readRunner` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_runner` on this resource. |
+| <a id="runnerpermissionsupdaterunner"></a>`updateRunner` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_runner` on this resource. |
### `RunnerPlatform`
@@ -25252,12 +25915,12 @@ Represents how the blob content should be displayed.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="snippetpermissionsadminsnippet"></a>`adminSnippet` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_snippet` on this resource. |
-| <a id="snippetpermissionsawardemoji"></a>`awardEmoji` | [`Boolean!`](#boolean) | Indicates the user can perform `award_emoji` on this resource. |
-| <a id="snippetpermissionscreatenote"></a>`createNote` | [`Boolean!`](#boolean) | Indicates the user can perform `create_note` on this resource. |
-| <a id="snippetpermissionsreadsnippet"></a>`readSnippet` | [`Boolean!`](#boolean) | Indicates the user can perform `read_snippet` on this resource. |
-| <a id="snippetpermissionsreportsnippet"></a>`reportSnippet` | [`Boolean!`](#boolean) | Indicates the user can perform `report_snippet` on this resource. |
-| <a id="snippetpermissionsupdatesnippet"></a>`updateSnippet` | [`Boolean!`](#boolean) | Indicates the user can perform `update_snippet` on this resource. |
+| <a id="snippetpermissionsadminsnippet"></a>`adminSnippet` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_snippet` on this resource. |
+| <a id="snippetpermissionsawardemoji"></a>`awardEmoji` | [`Boolean!`](#boolean) | If `true`, the user can perform `award_emoji` on this resource. |
+| <a id="snippetpermissionscreatenote"></a>`createNote` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_note` on this resource. |
+| <a id="snippetpermissionsreadsnippet"></a>`readSnippet` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_snippet` on this resource. |
+| <a id="snippetpermissionsreportsnippet"></a>`reportSnippet` | [`Boolean!`](#boolean) | If `true`, the user can perform `report_snippet` on this resource. |
+| <a id="snippetpermissionsupdatesnippet"></a>`updateSnippet` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_snippet` on this resource. |
### `SnippetRepositoryRegistry`
@@ -25642,7 +26305,7 @@ Describes an incident management timeline event.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="timelogpermissionsadmintimelog"></a>`adminTimelog` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_timelog` on this resource. |
+| <a id="timelogpermissionsadmintimelog"></a>`adminTimelog` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_timelog` on this resource. |
### `Todo`
@@ -25812,18 +26475,20 @@ Core representation of a GitLab user.
| <a id="usercoreid"></a>`id` | [`ID!`](#id) | ID of the user. |
| <a id="usercoreide"></a>`ide` | [`Ide`](#ide) | IDE settings. |
| <a id="usercorejobtitle"></a>`jobTitle` | [`String`](#string) | Job title of the user. |
+| <a id="usercorelastactivityon"></a>`lastActivityOn` | [`Date`](#date) | Date the user last performed any actions. |
| <a id="usercorelinkedin"></a>`linkedin` | [`String`](#string) | LinkedIn profile name of the user. |
| <a id="usercorelocation"></a>`location` | [`String`](#string) | Location of the user. |
| <a id="usercorename"></a>`name` | [`String!`](#string) | Human-readable name of the user. Returns `****` if the user is a project bot and the requester does not have permission to view the project. |
| <a id="usercorenamespace"></a>`namespace` | [`Namespace`](#namespace) | Personal namespace of the user. |
| <a id="usercorenamespacecommitemails"></a>`namespaceCommitEmails` | [`NamespaceCommitEmailConnection`](#namespacecommitemailconnection) | User's custom namespace commit emails. (see [Connections](#connections)) |
| <a id="usercoreorganization"></a>`organization` | [`String`](#string) | Who the user represents or works for. |
+| <a id="usercoreorganizations"></a>`organizations` **{warning-solid}** | [`OrganizationConnection`](#organizationconnection) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Organizations where the user has access. |
| <a id="usercorepreferencesgitpodpath"></a>`preferencesGitpodPath` | [`String`](#string) | Web path to the Gitpod section within user preferences. |
| <a id="usercoreprofileenablegitpodpath"></a>`profileEnableGitpodPath` | [`String`](#string) | Web path to enable Gitpod for the user. |
| <a id="usercoreprojectmemberships"></a>`projectMemberships` | [`ProjectMemberConnection`](#projectmemberconnection) | Project memberships of the user. (see [Connections](#connections)) |
| <a id="usercorepronouns"></a>`pronouns` | [`String`](#string) | Pronouns of the user. |
| <a id="usercorepublicemail"></a>`publicEmail` | [`String`](#string) | User's public email. |
-| <a id="usercoresavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. Will not return saved replies if `saved_replies` feature flag is disabled. (see [Connections](#connections)) |
+| <a id="usercoresavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. (see [Connections](#connections)) |
| <a id="usercorestate"></a>`state` | [`UserState!`](#userstate) | State of the user. |
| <a id="usercorestatus"></a>`status` | [`UserStatus`](#userstatus) | User status. |
| <a id="usercoretwitter"></a>`twitter` | [`String`](#string) | Twitter username of the user. |
@@ -25962,7 +26627,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
##### `UserCore.savedReply`
-Saved reply authored by the user. Will not return saved reply if `saved_replies` feature flag is disabled.
+Saved reply authored by the user.
Returns [`SavedReply`](#savedreply).
@@ -26092,7 +26757,7 @@ fields relate to interactions between the two entities.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="userpermissionscreatesnippet"></a>`createSnippet` | [`Boolean!`](#boolean) | Indicates the user can perform `create_snippet` on this resource. |
+| <a id="userpermissionscreatesnippet"></a>`createSnippet` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_snippet` on this resource. |
### `UserPreferences`
@@ -26114,6 +26779,17 @@ fields relate to interactions between the two entities.
| <a id="userstatusmessage"></a>`message` | [`String`](#string) | User status message. |
| <a id="userstatusmessagehtml"></a>`messageHtml` | [`String`](#string) | HTML of the user status message. |
+### `ValueStream`
+
+#### Fields
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="valuestreamid"></a>`id` | [`AnalyticsCycleAnalyticsValueStreamID!`](#analyticscycleanalyticsvaluestreamid) | ID of the value stream. |
+| <a id="valuestreamname"></a>`name` | [`String!`](#string) | Name of the value stream. |
+| <a id="valuestreamnamespace"></a>`namespace` | [`Namespace!`](#namespace) | Namespace the value stream belongs to. |
+| <a id="valuestreamproject"></a>`project` **{warning-solid}** | [`Project`](#project) | **Introduced** in 15.6. This feature is an Experiment. It can be changed or removed at any time. Project the value stream belongs to, returns empty if it belongs to a group. |
+
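+A minimal sketch of listing value streams for a group follows. It assumes the group exposes a `valueStreams` connection (the exact query path may differ on your instance); the token, hostname, and group path are placeholders.
+
+```shell
+# Sketch: list value streams, assuming a `valueStreams` connection on the group.
+# <your_access_token>, gitlab.example.com, and my-group are placeholders.
+curl --request POST \
+  --header "Authorization: Bearer <your_access_token>" \
+  --header "Content-Type: application/json" \
+  --data '{"query": "query { group(fullPath: \"my-group\") { valueStreams { nodes { id name } } } }"}' \
+  "https://gitlab.example.com/api/graphql"
+```
+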
### `ValueStreamAnalyticsMetric`
#### Fields
@@ -26675,15 +27351,15 @@ Check permissions for the current user on a vulnerability.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="vulnerabilitypermissionsadminvulnerability"></a>`adminVulnerability` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_vulnerability` on this resource. |
-| <a id="vulnerabilitypermissionsadminvulnerabilityexternalissuelink"></a>`adminVulnerabilityExternalIssueLink` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_vulnerability_external_issue_link` on this resource. |
-| <a id="vulnerabilitypermissionsadminvulnerabilityissuelink"></a>`adminVulnerabilityIssueLink` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_vulnerability_issue_link` on this resource. |
-| <a id="vulnerabilitypermissionscreatevulnerabilityexport"></a>`createVulnerabilityExport` | [`Boolean!`](#boolean) | Indicates the user can perform `create_vulnerability_export` on this resource. |
-| <a id="vulnerabilitypermissionscreatevulnerabilityfeedback"></a>`createVulnerabilityFeedback` | [`Boolean!`](#boolean) | Indicates the user can perform `create_vulnerability_feedback` on this resource. |
-| <a id="vulnerabilitypermissionsdestroyvulnerabilityfeedback"></a>`destroyVulnerabilityFeedback` | [`Boolean!`](#boolean) | Indicates the user can perform `destroy_vulnerability_feedback` on this resource. |
-| <a id="vulnerabilitypermissionsreadvulnerability"></a>`readVulnerability` | [`Boolean!`](#boolean) | Indicates the user can perform `read_vulnerability` on this resource. |
-| <a id="vulnerabilitypermissionsreadvulnerabilityfeedback"></a>`readVulnerabilityFeedback` | [`Boolean!`](#boolean) | Indicates the user can perform `read_vulnerability_feedback` on this resource. |
-| <a id="vulnerabilitypermissionsupdatevulnerabilityfeedback"></a>`updateVulnerabilityFeedback` | [`Boolean!`](#boolean) | Indicates the user can perform `update_vulnerability_feedback` on this resource. |
+| <a id="vulnerabilitypermissionsadminvulnerability"></a>`adminVulnerability` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_vulnerability` on this resource. |
+| <a id="vulnerabilitypermissionsadminvulnerabilityexternalissuelink"></a>`adminVulnerabilityExternalIssueLink` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_vulnerability_external_issue_link` on this resource. |
+| <a id="vulnerabilitypermissionsadminvulnerabilityissuelink"></a>`adminVulnerabilityIssueLink` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_vulnerability_issue_link` on this resource. |
+| <a id="vulnerabilitypermissionscreatevulnerabilityexport"></a>`createVulnerabilityExport` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_vulnerability_export` on this resource. |
+| <a id="vulnerabilitypermissionscreatevulnerabilityfeedback"></a>`createVulnerabilityFeedback` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_vulnerability_feedback` on this resource. |
+| <a id="vulnerabilitypermissionsdestroyvulnerabilityfeedback"></a>`destroyVulnerabilityFeedback` | [`Boolean!`](#boolean) | If `true`, the user can perform `destroy_vulnerability_feedback` on this resource. |
+| <a id="vulnerabilitypermissionsreadvulnerability"></a>`readVulnerability` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_vulnerability` on this resource. |
+| <a id="vulnerabilitypermissionsreadvulnerabilityfeedback"></a>`readVulnerabilityFeedback` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_vulnerability_feedback` on this resource. |
+| <a id="vulnerabilitypermissionsupdatevulnerabilityfeedback"></a>`updateVulnerabilityFeedback` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_vulnerability_feedback` on this resource. |
### `VulnerabilityRemediationType`
@@ -26876,14 +27552,14 @@ Check permissions for the current user on a work item.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="workitempermissionsadminparentlink"></a>`adminParentLink` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_parent_link` on this resource. |
-| <a id="workitempermissionsadminworkitem"></a>`adminWorkItem` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_work_item` on this resource. |
-| <a id="workitempermissionsadminworkitemlink"></a>`adminWorkItemLink` | [`Boolean!`](#boolean) | Indicates the user can perform `admin_work_item_link` on this resource. |
-| <a id="workitempermissionscreatenote"></a>`createNote` | [`Boolean!`](#boolean) | Indicates the user can perform `create_note` on this resource. |
-| <a id="workitempermissionsdeleteworkitem"></a>`deleteWorkItem` | [`Boolean!`](#boolean) | Indicates the user can perform `delete_work_item` on this resource. |
-| <a id="workitempermissionsreadworkitem"></a>`readWorkItem` | [`Boolean!`](#boolean) | Indicates the user can perform `read_work_item` on this resource. |
-| <a id="workitempermissionssetworkitemmetadata"></a>`setWorkItemMetadata` | [`Boolean!`](#boolean) | Indicates the user can perform `set_work_item_metadata` on this resource. |
-| <a id="workitempermissionsupdateworkitem"></a>`updateWorkItem` | [`Boolean!`](#boolean) | Indicates the user can perform `update_work_item` on this resource. |
+| <a id="workitempermissionsadminparentlink"></a>`adminParentLink` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_parent_link` on this resource. |
+| <a id="workitempermissionsadminworkitem"></a>`adminWorkItem` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_work_item` on this resource. |
+| <a id="workitempermissionsadminworkitemlink"></a>`adminWorkItemLink` | [`Boolean!`](#boolean) | If `true`, the user can perform `admin_work_item_link` on this resource. |
+| <a id="workitempermissionscreatenote"></a>`createNote` | [`Boolean!`](#boolean) | If `true`, the user can perform `create_note` on this resource. |
+| <a id="workitempermissionsdeleteworkitem"></a>`deleteWorkItem` | [`Boolean!`](#boolean) | If `true`, the user can perform `delete_work_item` on this resource. |
+| <a id="workitempermissionsreadworkitem"></a>`readWorkItem` | [`Boolean!`](#boolean) | If `true`, the user can perform `read_work_item` on this resource. |
+| <a id="workitempermissionssetworkitemmetadata"></a>`setWorkItemMetadata` | [`Boolean!`](#boolean) | If `true`, the user can perform `set_work_item_metadata` on this resource. |
+| <a id="workitempermissionsupdateworkitem"></a>`updateWorkItem` | [`Boolean!`](#boolean) | If `true`, the user can perform `update_work_item` on this resource. |
### `WorkItemType`
@@ -27411,6 +28087,14 @@ All possible ways to specify the API surface for an API fuzzing scan.
| <a id="apifuzzingscanmodeopenapi"></a>`OPENAPI` | The API surface is specified by a OPENAPI file. |
| <a id="apifuzzingscanmodepostman"></a>`POSTMAN` | The API surface is specified by a POSTMAN file. |
+### `ApprovalReportType`
+
+| Value | Description |
+| ----- | ----------- |
+| <a id="approvalreporttypeany_merge_request"></a>`ANY_MERGE_REQUEST` | Represents report_type for any_merge_request related approval rules. |
+| <a id="approvalreporttypelicense_scanning"></a>`LICENSE_SCANNING` | Represents report_type for license scanning related approval rules. |
+| <a id="approvalreporttypescan_finding"></a>`SCAN_FINDING` | Represents report_type for vulnerability check related approval rules. |
+
### `ApprovalRuleType`
The kind of an approval rule.
@@ -27464,24 +28148,26 @@ Types of blob viewers.
| <a id="blobviewerstyperich"></a>`rich` | Rich blob viewers type. |
| <a id="blobviewerstypesimple"></a>`simple` | Simple blob viewers type. |
+### `CiCatalogResourceScope`
+
+Values for scoping catalog resources.
+
+| Value | Description |
+| ----- | ----------- |
+| <a id="cicatalogresourcescopeall"></a>`ALL` | All catalog resources visible to the current user. |
+
### `CiCatalogResourceSort`
Values for sorting catalog resources.
| Value | Description |
| ----- | ----------- |
-| <a id="cicatalogresourcesortcreated_asc"></a>`CREATED_ASC` | Created at ascending order. |
-| <a id="cicatalogresourcesortcreated_desc"></a>`CREATED_DESC` | Created at descending order. |
+| <a id="cicatalogresourcesortcreated_asc"></a>`CREATED_ASC` | Created date by ascending order. |
+| <a id="cicatalogresourcesortcreated_desc"></a>`CREATED_DESC` | Created date by descending order. |
| <a id="cicatalogresourcesortlatest_released_at_asc"></a>`LATEST_RELEASED_AT_ASC` | Latest release date by ascending order. |
| <a id="cicatalogresourcesortlatest_released_at_desc"></a>`LATEST_RELEASED_AT_DESC` | Latest release date by descending order. |
| <a id="cicatalogresourcesortname_asc"></a>`NAME_ASC` | Name by ascending order. |
| <a id="cicatalogresourcesortname_desc"></a>`NAME_DESC` | Name by descending order. |
-| <a id="cicatalogresourcesortupdated_asc"></a>`UPDATED_ASC` | Updated at ascending order. |
-| <a id="cicatalogresourcesortupdated_desc"></a>`UPDATED_DESC` | Updated at descending order. |
-| <a id="cicatalogresourcesortcreated_asc"></a>`created_asc` **{warning-solid}** | **Deprecated** in 13.5. This was renamed. Use: `CREATED_ASC`. |
-| <a id="cicatalogresourcesortcreated_desc"></a>`created_desc` **{warning-solid}** | **Deprecated** in 13.5. This was renamed. Use: `CREATED_DESC`. |
-| <a id="cicatalogresourcesortupdated_asc"></a>`updated_asc` **{warning-solid}** | **Deprecated** in 13.5. This was renamed. Use: `UPDATED_ASC`. |
-| <a id="cicatalogresourcesortupdated_desc"></a>`updated_desc` **{warning-solid}** | **Deprecated** in 13.5. This was renamed. Use: `UPDATED_DESC`. |
### `CiConfigIncludeType`
@@ -27586,6 +28272,7 @@ Values for sorting inherited variables.
| <a id="cijobstatusscheduled"></a>`SCHEDULED` | A job that is scheduled. |
| <a id="cijobstatusskipped"></a>`SKIPPED` | A job that is skipped. |
| <a id="cijobstatussuccess"></a>`SUCCESS` | A job that is success. |
+| <a id="cijobstatuswaiting_for_callback"></a>`WAITING_FOR_CALLBACK` | A job that is waiting for callback. |
| <a id="cijobstatuswaiting_for_resource"></a>`WAITING_FOR_RESOURCE` | A job that is waiting for resource. |
### `CiJobTokenScopeDirection`
@@ -27690,15 +28377,25 @@ Values for sorting variables.
| <a id="codequalitydegradationseverityminor"></a>`MINOR` | Code Quality degradation has a status of minor. |
| <a id="codequalitydegradationseverityunknown"></a>`UNKNOWN` | Code Quality degradation has a status of unknown. |
-### `CodequalityReportsComparerReportStatus`
+### `CodequalityReportsComparerReportGenerationStatus`
-Report comparison status.
+Represents the generation status of the compared codequality report.
| Value | Description |
| ----- | ----------- |
-| <a id="codequalityreportscomparerreportstatusfailed"></a>`FAILED` | Report failed to generate. |
-| <a id="codequalityreportscomparerreportstatusnot_found"></a>`NOT_FOUND` | Head report or base report not found. |
-| <a id="codequalityreportscomparerreportstatussuccess"></a>`SUCCESS` | Report successfully generated. |
+| <a id="codequalityreportscomparerreportgenerationstatuserror"></a>`ERROR` | An error happened while generating the report. |
+| <a id="codequalityreportscomparerreportgenerationstatusparsed"></a>`PARSED` | Report was generated. |
+| <a id="codequalityreportscomparerreportgenerationstatusparsing"></a>`PARSING` | Report is being generated. |
+
+### `CodequalityReportsComparerStatus`
+
+Represents the state of the code quality report.
+
+| Value | Description |
+| ----- | ----------- |
+| <a id="codequalityreportscomparerstatusfailed"></a>`FAILED` | Report generated and there are new code quality degradations. |
+| <a id="codequalityreportscomparerstatusnot_found"></a>`NOT_FOUND` | Head report or base report not found. |
+| <a id="codequalityreportscomparerstatussuccess"></a>`SUCCESS` | No degradations found in the head pipeline report. |
### `CommitActionMode`
@@ -27873,6 +28570,16 @@ Values for sorting contacts.
| <a id="containerexpirationpolicyolderthanenumsixty_days"></a>`SIXTY_DAYS` | 60 days until tags are automatically removed. |
| <a id="containerexpirationpolicyolderthanenumthirty_days"></a>`THIRTY_DAYS` | 30 days until tags are automatically removed. |
+### `ContainerRegistryProtectionRuleAccessLevel`
+
+Access level of a container registry protection rule resource.
+
+| Value | Description |
+| ----- | ----------- |
+| <a id="containerregistryprotectionruleaccessleveldeveloper"></a>`DEVELOPER` | Developer access. |
+| <a id="containerregistryprotectionruleaccesslevelmaintainer"></a>`MAINTAINER` | Maintainer access. |
+| <a id="containerregistryprotectionruleaccesslevelowner"></a>`OWNER` | Owner access. |
+
### `ContainerRepositoryCleanupStatus`
Status of the tags cleanup of a container repository.
@@ -28693,6 +29400,21 @@ Name of access levels of a group or project member.
| <a id="memberaccesslevelnameowner"></a>`OWNER` | Owner access. |
| <a id="memberaccesslevelnamereporter"></a>`REPORTER` | Reporter access. |
+### `MemberRolePermission`
+
+Member role permission.
+
+| Value | Description |
+| ----- | ----------- |
+| <a id="memberrolepermissionadmin_group_member"></a>`ADMIN_GROUP_MEMBER` | Allows admin access to group members. |
+| <a id="memberrolepermissionadmin_merge_request"></a>`ADMIN_MERGE_REQUEST` | Allows admin access to the merge requests. |
+| <a id="memberrolepermissionadmin_vulnerability"></a>`ADMIN_VULNERABILITY` | Allows admin access to the vulnerability reports. |
+| <a id="memberrolepermissionarchive_project"></a>`ARCHIVE_PROJECT` | Allows to archive projects. |
+| <a id="memberrolepermissionmanage_project_access_tokens"></a>`MANAGE_PROJECT_ACCESS_TOKENS` | Allows manage access to the project access tokens. |
+| <a id="memberrolepermissionread_code"></a>`READ_CODE` | Allows read-only access to the source code. |
+| <a id="memberrolepermissionread_dependency"></a>`READ_DEPENDENCY` | Allows read-only access to the dependencies. |
+| <a id="memberrolepermissionread_vulnerability"></a>`READ_VULNERABILITY` | Allows read-only access to the vulnerability reports. |
+
### `MemberSort`
Values for sorting members.
@@ -28727,8 +29449,9 @@ State of a review of a GitLab merge request.
| Value | Description |
| ----- | ----------- |
-| <a id="mergerequestreviewstatereviewed"></a>`REVIEWED` | The merge request is reviewed. |
-| <a id="mergerequestreviewstateunreviewed"></a>`UNREVIEWED` | The merge request is unreviewed. |
+| <a id="mergerequestreviewstaterequested_changes"></a>`REQUESTED_CHANGES` | Merge request reviewer has requested changes. |
+| <a id="mergerequestreviewstatereviewed"></a>`REVIEWED` | Merge request reviewer has reviewed. |
+| <a id="mergerequestreviewstateunreviewed"></a>`UNREVIEWED` | Awaiting review from merge request reviewer. |
### `MergeRequestSort`
@@ -29015,6 +29738,7 @@ Values for package manager.
| <a id="packagemanagerpip"></a>`PIP` | Package manager: pip. |
| <a id="packagemanagerpipenv"></a>`PIPENV` | Package manager: pipenv. |
| <a id="packagemanagerpnpm"></a>`PNPM` | Package manager: pnpm. |
+| <a id="packagemanagerpoetry"></a>`POETRY` | Package manager: poetry. |
| <a id="packagemanagersbt"></a>`SBT` | Package manager: sbt. |
| <a id="packagemanagersetuptools"></a>`SETUPTOOLS` | Package manager: setuptools. |
| <a id="packagemanageryarn"></a>`YARN` | Package manager: yarn. |
@@ -29149,6 +29873,7 @@ Event type of the pipeline associated with a merge request.
| <a id="pipelinestatusenumscheduled"></a>`SCHEDULED` | Pipeline is scheduled to run. |
| <a id="pipelinestatusenumskipped"></a>`SKIPPED` | Pipeline was skipped. |
| <a id="pipelinestatusenumsuccess"></a>`SUCCESS` | Pipeline completed successfully. |
+| <a id="pipelinestatusenumwaiting_for_callback"></a>`WAITING_FOR_CALLBACK` | Pipeline is waiting for an external action. |
| <a id="pipelinestatusenumwaiting_for_resource"></a>`WAITING_FOR_RESOURCE` | A resource (for example, a runner) that the pipeline requires to run is unavailable. |
### `ProductAnalyticsState`
@@ -29565,7 +30290,6 @@ Name of the feature that the callout is for.
| <a id="usercalloutfeaturenameenumci_deprecation_warning_for_types_keyword"></a>`CI_DEPRECATION_WARNING_FOR_TYPES_KEYWORD` | Callout feature name for ci_deprecation_warning_for_types_keyword. |
| <a id="usercalloutfeaturenameenumcloud_licensing_subscription_activation_banner"></a>`CLOUD_LICENSING_SUBSCRIPTION_ACTIVATION_BANNER` | Callout feature name for cloud_licensing_subscription_activation_banner. |
| <a id="usercalloutfeaturenameenumcluster_security_warning"></a>`CLUSTER_SECURITY_WARNING` | Callout feature name for cluster_security_warning. |
-| <a id="usercalloutfeaturenameenumcreate_runner_workflow_banner"></a>`CREATE_RUNNER_WORKFLOW_BANNER` | Callout feature name for create_runner_workflow_banner. |
| <a id="usercalloutfeaturenameenumeoa_bronze_plan_banner"></a>`EOA_BRONZE_PLAN_BANNER` | Callout feature name for eoa_bronze_plan_banner. |
| <a id="usercalloutfeaturenameenumfeature_flags_new_version"></a>`FEATURE_FLAGS_NEW_VERSION` | Callout feature name for feature_flags_new_version. |
| <a id="usercalloutfeaturenameenumgcp_signup_offer"></a>`GCP_SIGNUP_OFFER` | Callout feature name for gcp_signup_offer. |
@@ -29580,7 +30304,7 @@ Name of the feature that the callout is for.
| <a id="usercalloutfeaturenameenumnamespace_storage_limit_alert_error_threshold"></a>`NAMESPACE_STORAGE_LIMIT_ALERT_ERROR_THRESHOLD` | Callout feature name for namespace_storage_limit_alert_error_threshold. |
| <a id="usercalloutfeaturenameenumnamespace_storage_limit_alert_warning_threshold"></a>`NAMESPACE_STORAGE_LIMIT_ALERT_WARNING_THRESHOLD` | Callout feature name for namespace_storage_limit_alert_warning_threshold. |
| <a id="usercalloutfeaturenameenumnamespace_storage_pre_enforcement_banner"></a>`NAMESPACE_STORAGE_PRE_ENFORCEMENT_BANNER` | Callout feature name for namespace_storage_pre_enforcement_banner. |
-| <a id="usercalloutfeaturenameenumnew_navigation_callout"></a>`NEW_NAVIGATION_CALLOUT` | Callout feature name for new_navigation_callout. |
+| <a id="usercalloutfeaturenameenumnew_nav_for_everyone_callout"></a>`NEW_NAV_FOR_EVERYONE_CALLOUT` | Callout feature name for new_nav_for_everyone_callout. |
| <a id="usercalloutfeaturenameenumnew_top_level_group_alert"></a>`NEW_TOP_LEVEL_GROUP_ALERT` | Callout feature name for new_top_level_group_alert. |
| <a id="usercalloutfeaturenameenumnew_user_signups_cap_reached"></a>`NEW_USER_SIGNUPS_CAP_REACHED` | Callout feature name for new_user_signups_cap_reached. |
| <a id="usercalloutfeaturenameenumpersonal_access_token_expiry"></a>`PERSONAL_ACCESS_TOKEN_EXPIRY` | Callout feature name for personal_access_token_expiry. |
@@ -29952,6 +30676,12 @@ A `AlertManagementHttpIntegrationID` is a global ID. It is encoded as a string.
An example `AlertManagementHttpIntegrationID` is: `"gid://gitlab/AlertManagement::HttpIntegration/1"`.
+### `AnalyticsCycleAnalyticsValueStreamID`
+
+A `AnalyticsCycleAnalyticsValueStreamID` is a global ID. It is encoded as a string.
+
+An example `AnalyticsCycleAnalyticsValueStreamID` is: `"gid://gitlab/Analytics::CycleAnalytics::ValueStream/1"`.
+
### `AnalyticsDevopsAdoptionEnabledNamespaceID`
A `AnalyticsDevopsAdoptionEnabledNamespaceID` is a global ID. It is encoded as a string.
@@ -30098,6 +30828,12 @@ A `CiStageID` is a global ID. It is encoded as a string.
An example `CiStageID` is: `"gid://gitlab/Ci::Stage/1"`.
+### `CiSubscriptionsProjectID`
+
+A `CiSubscriptionsProjectID` is a global ID. It is encoded as a string.
+
+An example `CiSubscriptionsProjectID` is: `"gid://gitlab/Ci::Subscriptions::Project/1"`.
+
### `CiTriggerID`
A `CiTriggerID` is a global ID. It is encoded as a string.
@@ -30134,6 +30870,12 @@ A `ComplianceManagementFrameworkID` is a global ID. It is encoded as a string.
An example `ComplianceManagementFrameworkID` is: `"gid://gitlab/ComplianceManagement::Framework/1"`.
+### `ContainerRegistryProtectionRuleID`
+
+A `ContainerRegistryProtectionRuleID` is a global ID. It is encoded as a string.
+
+An example `ContainerRegistryProtectionRuleID` is: `"gid://gitlab/ContainerRegistry::Protection::Rule/1"`.
+
### `ContainerRepositoryID`
A `ContainerRepositoryID` is a global ID. It is encoded as a string.
@@ -30537,6 +31279,12 @@ A `PackagesPackageID` is a global ID. It is encoded as a string.
An example `PackagesPackageID` is: `"gid://gitlab/Packages::Package/1"`.
+### `PackagesProtectionRuleID`
+
+A `PackagesProtectionRuleID` is a global ID. It is encoded as a string.
+
+An example `PackagesProtectionRuleID` is: `"gid://gitlab/Packages::Protection::Rule/1"`.
+
### `PackagesPypiMetadatumID`
A `PackagesPypiMetadatumID` is a global ID. It is encoded as a string.
@@ -30559,6 +31307,12 @@ A `ProjectID` is a global ID. It is encoded as a string.
An example `ProjectID` is: `"gid://gitlab/Project/1"`.
+### `ProjectImportStateID`
+
+A `ProjectImportStateID` is a global ID. It is encoded as a string.
+
+An example `ProjectImportStateID` is: `"gid://gitlab/ProjectImportState/1"`.
+
### `ReleaseID`
A `ReleaseID` is a global ID. It is encoded as a string.
@@ -31039,6 +31793,7 @@ Implementations:
Implementations:
- [`GroupMember`](#groupmember)
+- [`PendingGroupMember`](#pendinggroupmember)
- [`ProjectMember`](#projectmember)
##### Fields
@@ -31071,6 +31826,7 @@ Returns [`UserMergeRequestInteraction`](#usermergerequestinteraction).
Implementations:
+- [`AbuseReport`](#abusereport)
- [`AlertManagementAlert`](#alertmanagementalert)
- [`BoardEpic`](#boardepic)
- [`Design`](#design)
@@ -31245,18 +32001,20 @@ Implementations:
| <a id="userid"></a>`id` | [`ID!`](#id) | ID of the user. |
| <a id="useride"></a>`ide` | [`Ide`](#ide) | IDE settings. |
| <a id="userjobtitle"></a>`jobTitle` | [`String`](#string) | Job title of the user. |
+| <a id="userlastactivityon"></a>`lastActivityOn` | [`Date`](#date) | Date the user last performed any actions. |
| <a id="userlinkedin"></a>`linkedin` | [`String`](#string) | LinkedIn profile name of the user. |
| <a id="userlocation"></a>`location` | [`String`](#string) | Location of the user. |
| <a id="username"></a>`name` | [`String!`](#string) | Human-readable name of the user. Returns `****` if the user is a project bot and the requester does not have permission to view the project. |
| <a id="usernamespace"></a>`namespace` | [`Namespace`](#namespace) | Personal namespace of the user. |
| <a id="usernamespacecommitemails"></a>`namespaceCommitEmails` | [`NamespaceCommitEmailConnection`](#namespacecommitemailconnection) | User's custom namespace commit emails. (see [Connections](#connections)) |
| <a id="userorganization"></a>`organization` | [`String`](#string) | Who the user represents or works for. |
+| <a id="userorganizations"></a>`organizations` **{warning-solid}** | [`OrganizationConnection`](#organizationconnection) | **Introduced** in 16.6. This feature is an Experiment. It can be changed or removed at any time. Organizations where the user has access. |
| <a id="userpreferencesgitpodpath"></a>`preferencesGitpodPath` | [`String`](#string) | Web path to the Gitpod section within user preferences. |
| <a id="userprofileenablegitpodpath"></a>`profileEnableGitpodPath` | [`String`](#string) | Web path to enable Gitpod for the user. |
| <a id="userprojectmemberships"></a>`projectMemberships` | [`ProjectMemberConnection`](#projectmemberconnection) | Project memberships of the user. (see [Connections](#connections)) |
| <a id="userpronouns"></a>`pronouns` | [`String`](#string) | Pronouns of the user. |
| <a id="userpublicemail"></a>`publicEmail` | [`String`](#string) | User's public email. |
-| <a id="usersavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. Will not return saved replies if `saved_replies` feature flag is disabled. (see [Connections](#connections)) |
+| <a id="usersavedreplies"></a>`savedReplies` | [`SavedReplyConnection`](#savedreplyconnection) | Saved replies authored by the user. (see [Connections](#connections)) |
| <a id="userstate"></a>`state` | [`UserState!`](#userstate) | State of the user. |
| <a id="userstatus"></a>`status` | [`UserStatus`](#userstatus) | User status. |
| <a id="usertwitter"></a>`twitter` | [`String`](#string) | Twitter username of the user. |
@@ -31395,7 +32153,7 @@ four standard [pagination arguments](#connection-pagination-arguments):
###### `User.savedReply`
-Saved reply authored by the user. Will not return saved reply if `saved_replies` feature flag is disabled.
+Saved reply authored by the user.
Returns [`SavedReply`](#savedreply).
@@ -31545,9 +32303,21 @@ see the associated mutation type above.
| Name | Type | Description |
| ---- | ---- | ----------- |
| <a id="aichatinputcontent"></a>`content` | [`String!`](#string) | Content of the message. |
+| <a id="aichatinputcurrentfile"></a>`currentFile` **{warning-solid}** | [`AiCurrentFileInput`](#aicurrentfileinput) | **Deprecated:** This feature is an Experiment. It can be changed or removed at any time. Introduced in 16.6. |
| <a id="aichatinputnamespaceid"></a>`namespaceId` | [`NamespaceID`](#namespaceid) | Global ID of the namespace the user is acting on. |
| <a id="aichatinputresourceid"></a>`resourceId` | [`AiModelID`](#aimodelid) | Global ID of the resource to mutate. |
+### `AiCurrentFileInput`
+
+#### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="aicurrentfileinputcontentabovecursor"></a>`contentAboveCursor` | [`String`](#string) | Content above cursor. |
+| <a id="aicurrentfileinputcontentbelowcursor"></a>`contentBelowCursor` | [`String`](#string) | Content below cursor. |
+| <a id="aicurrentfileinputfilename"></a>`fileName` | [`String!`](#string) | File name. |
+| <a id="aicurrentfileinputselectedtext"></a>`selectedText` | [`String!`](#string) | Selected text. |
+
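+A minimal sketch of passing the current file context alongside a chat prompt follows. It assumes the `aiAction` mutation accepts `AiChatInput` (including `currentFile`) through its `chat` argument and returns `requestId` and `errors`; the token, hostname, file name, selected text, and prompt are placeholders.
+
+```shell
+# Sketch: send a chat prompt with current-file context via the assumed aiAction
+# mutation. <your_access_token>, gitlab.example.com, app.rb, and the prompt text
+# are placeholders; adjust the mutation and payload fields to your schema.
+curl --request POST \
+  --header "Authorization: Bearer <your_access_token>" \
+  --header "Content-Type: application/json" \
+  --data '{"query": "mutation { aiAction(input: { chat: { content: \"Explain this function\", currentFile: { fileName: \"app.rb\", selectedText: \"def call; end\" } } }) { requestId errors } }"}' \
+  "https://gitlab.example.com/api/graphql"
+```
+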
### `AiExplainCodeInput`
#### Arguments
@@ -31606,6 +32376,14 @@ see the associated mutation type above.
| <a id="aigeneratedescriptioninputdescriptiontemplatename"></a>`descriptionTemplateName` | [`String`](#string) | Name of the description template to use to generate message off of. |
| <a id="aigeneratedescriptioninputresourceid"></a>`resourceId` | [`AiModelID!`](#aimodelid) | Global ID of the resource to mutate. |
+### `AiResolveVulnerabilityInput`
+
+#### Arguments
+
+| Name | Type | Description |
+| ---- | ---- | ----------- |
+| <a id="airesolvevulnerabilityinputresourceid"></a>`resourceId` | [`AiModelID!`](#aimodelid) | Global ID of the resource to mutate. |
+
### `AiSummarizeCommentsInput`
#### Arguments
@@ -32174,8 +32952,10 @@ A time-frame defined as a closed inclusive range of two dates.
| Name | Type | Description |
| ---- | ---- | ----------- |
-| <a id="unionedepicfilterinputauthorusername"></a>`authorUsername` | [`[String!]`](#string) | Filters epics that are authored by one of the given users. |
-| <a id="unionedepicfilterinputlabelname"></a>`labelName` | [`[String!]`](#string) | Filters epics that have at least one of the given labels. |
+| <a id="unionedepicfilterinputauthorusername"></a>`authorUsername` **{warning-solid}** | [`[String!]`](#string) | **Deprecated:** Use authorUsernames instead. Deprecated in 16.6. |
+| <a id="unionedepicfilterinputauthorusernames"></a>`authorUsernames` | [`[String!]`](#string) | Filters epics that are authored by one of the given users. |
+| <a id="unionedepicfilterinputlabelname"></a>`labelName` **{warning-solid}** | [`[String!]`](#string) | **Deprecated:** Use labelNames instead. Deprecated in 16.6. |
+| <a id="unionedepicfilterinputlabelnames"></a>`labelNames` | [`[String!]`](#string) | Filters epics that have at least one of the given labels. |
### `UnionedIssueFilterInput`
diff --git a/doc/api/group_iterations.md b/doc/api/group_iterations.md
index a2e23e29d89..b8cb1b7e053 100644
--- a/doc/api/group_iterations.md
+++ b/doc/api/group_iterations.md
@@ -16,6 +16,10 @@ There's a separate [project iterations API](iterations.md) page.
Returns a list of group iterations.
+Iterations created by **Enable automatic scheduling** in
+[Iteration cadences](../user/group/iterations/index.md#iteration-cadences) return `null` for
+the `title` and `description` fields.
+
```plaintext
GET /groups/:id/iterations
GET /groups/:id/iterations?state=opened
diff --git a/doc/api/group_protected_environments.md b/doc/api/group_protected_environments.md
index a7b0eee08b7..3010c9794b6 100644
--- a/doc/api/group_protected_environments.md
+++ b/doc/api/group_protected_environments.md
@@ -109,7 +109,7 @@ POST /groups/:id/protected_environments
| `name` | string | yes | The deployment tier of the protected environment. One of `production`, `staging`, `testing`, `development`, or `other`. Read more about [deployment tiers](../ci/environments/index.md#deployment-tier-of-environments).|
| `deploy_access_levels` | array | yes | Array of access levels allowed to deploy, with each described by a hash. One of `user_id`, `group_id` or `access_level`. They take the form of `{user_id: integer}`, `{group_id: integer}` or `{access_level: integer}` respectively. |
| `required_approval_count` | integer | no | The number of approvals required to deploy to this environment. |
-| `approval_rules` | array | no | Array of access levels allowed to approve, with each described by a hash. One of `user_id`, `group_id` or `access_level`. They take the form of `{user_id: integer}`, `{group_id: integer}` or `{access_level: integer}` respectively. You can also specify the number of required approvals from the specified entity with `required_approvals` field. See [Multiple approval rules](../ci/environments/deployment_approvals.md#multiple-approval-rules) for more information. |
+| `approval_rules` | array | no | Array of access levels allowed to approve, with each described by a hash. One of `user_id`, `group_id` or `access_level`. They take the form of `{user_id: integer}`, `{group_id: integer}` or `{access_level: integer}` respectively. You can also specify the number of required approvals from the specified entity with `required_approvals` field. See [Multiple approval rules](../ci/environments/deployment_approvals.md#add-multiple-approval-rules) for more information. |
The assignable `user_id` are the users who belong to the given group with the Maintainer role (or above).
The assignable `group_id` are the subgroups under the given group.
@@ -136,6 +136,19 @@ Example response:
}
```
+An example with multiple approval rules:
+
+```shell
+curl --header 'Content-Type: application/json' --request POST \
+ --data '{"name": "production", "deploy_access_levels": [{"group_id": 138}], "approval_rules": [{"group_id": 134}, {"group_id": 135, "required_approvals": 2}]}' \
+ --header "PRIVATE-TOKEN: <your_access_token>" \
+ "https://gitlab.example.com/api/v4/groups/128/protected_environments"
+```
+
+In this configuration, the operator group (`"group_id": 138`) can run the deployment job
+for `production` only after both the QA group (`"group_id": 134`) and the security group
+(`"group_id": 135`) have approved the deployment.
+
## Update a protected environment
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/351854) in GitLab 15.4.
@@ -152,7 +165,7 @@ PUT /groups/:id/protected_environments/:name
| `name` | string | yes | The deployment tier of the protected environment. One of `production`, `staging`, `testing`, `development`, or `other`. Read more about [deployment tiers](../ci/environments/index.md#deployment-tier-of-environments).|
| `deploy_access_levels` | array | no | Array of access levels allowed to deploy, with each described by a hash. One of `user_id`, `group_id` or `access_level`. They take the form of `{user_id: integer}`, `{group_id: integer}` or `{access_level: integer}` respectively. |
| `required_approval_count` | integer | no | The number of approvals required to deploy to this environment. |
-| `approval_rules` | array | no | Array of access levels allowed to approve, with each described by a hash. One of `user_id`, `group_id` or `access_level`. They take the form of `{user_id: integer}`, `{group_id: integer}` or `{access_level: integer}` respectively. You can also specify the number of required approvals from the specified entity with `required_approvals` field. See [Multiple approval rules](../ci/environments/deployment_approvals.md#multiple-approval-rules) for more information. |
+| `approval_rules` | array | no | Array of access levels allowed to approve, with each described by a hash. One of `user_id`, `group_id` or `access_level`. They take the form of `{user_id: integer}`, `{group_id: integer}` or `{access_level: integer}` respectively. You can also specify the number of required approvals from the specified entity with `required_approvals` field. See [Multiple approval rules](../ci/environments/deployment_approvals.md#add-multiple-approval-rules) for more information. |
To update:
diff --git a/doc/api/groups.md b/doc/api/groups.md
index 6b17af63853..c9ec64e83db 100644
--- a/doc/api/groups.md
+++ b/doc/api/groups.md
@@ -59,6 +59,7 @@ GET /groups
"auto_devops_enabled": null,
"subgroup_creation_level": "owner",
"emails_disabled": null,
+ "emails_enabled": null,
"mentions_disabled": null,
"lfs_enabled": true,
"default_branch_protection": 2,
@@ -97,6 +98,7 @@ GET /groups?statistics=true
"auto_devops_enabled": null,
"subgroup_creation_level": "owner",
"emails_disabled": null,
+ "emails_enabled": null,
"mentions_disabled": null,
"lfs_enabled": true,
"default_branch_protection": 2,
@@ -181,6 +183,7 @@ GET /groups/:id/subgroups
"auto_devops_enabled": null,
"subgroup_creation_level": "owner",
"emails_disabled": null,
+ "emails_enabled": null,
"mentions_disabled": null,
"lfs_enabled": true,
"default_branch_protection": 2,
@@ -242,6 +245,7 @@ GET /groups/:id/descendant_groups
"auto_devops_enabled": null,
"subgroup_creation_level": "owner",
"emails_disabled": null,
+ "emails_enabled": null,
"mentions_disabled": null,
"lfs_enabled": true,
"default_branch_protection": 2,
@@ -267,6 +271,7 @@ GET /groups/:id/descendant_groups
"auto_devops_enabled": null,
"subgroup_creation_level": "owner",
"emails_disabled": null,
+ "emails_enabled": null,
"mentions_disabled": null,
"lfs_enabled": true,
"default_branch_protection": 2,
@@ -467,6 +472,7 @@ Example response:
"pages_access_level":"enabled",
"security_and_compliance_access_level":"enabled",
"emails_disabled":null,
+ "emails_enabled": null,
"shared_runners_enabled":true,
"lfs_enabled":true,
"creator_id":1,
@@ -818,7 +824,8 @@ Parameters:
| `avatar` | mixed | no | Image file for avatar of the group. [Introduced in GitLab 12.9](https://gitlab.com/gitlab-org/gitlab/-/issues/36681) |
| `default_branch_protection` | integer | no | See [Options for `default_branch_protection`](#options-for-default_branch_protection). Defaults to the global-level default branch protection setting. |
| `description` | string | no | The group's description. |
-| `emails_disabled` | boolean | no | Disable email notifications. |
+| `emails_disabled` | boolean | no | _([Deprecated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127899) in GitLab 16.5.)_ Disable email notifications. Use `emails_enabled` instead. |
+| `emails_enabled` | boolean | no | Enable email notifications. |
| `lfs_enabled` | boolean | no | Enable/disable Large File Storage (LFS) for the projects in this group. |
| `mentions_disabled` | boolean | no | Prevent the group from being mentioned. |
| `parent_id` | integer | no | The parent group ID for creating nested group. |
@@ -975,7 +982,8 @@ PUT /groups/:id
| `avatar` | mixed | no | Image file for avatar of the group. [Introduced in GitLab 12.9](https://gitlab.com/gitlab-org/gitlab/-/issues/36681) |
| `default_branch_protection` | integer | no | See [Options for `default_branch_protection`](#options-for-default_branch_protection). |
| `description` | string | no | The description of the group. |
-| `emails_disabled` | boolean | no | Disable email notifications. |
+| `emails_disabled` | boolean | no | _([Deprecated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/127899) in GitLab 16.5.)_ Disable email notifications. Use `emails_enabled` instead. |
+| `emails_enabled` | boolean | no | Enable email notifications. |
| `lfs_enabled` | boolean | no | Enable/disable Large File Storage (LFS) for the projects in this group. |
| `mentions_disabled` | boolean | no | Prevent the group from being mentioned. |
| `prevent_sharing_groups_outside_hierarchy` | boolean | no | See [Prevent group sharing outside the group hierarchy](../user/group/access_and_permissions.md#prevent-group-sharing-outside-the-group-hierarchy). This attribute is only available on top-level groups. [Introduced in GitLab 14.1](https://gitlab.com/gitlab-org/gitlab/-/issues/333721) |
@@ -1332,6 +1340,10 @@ Example response:
}
```
+| Attribute | Type | Required | Description |
+| --------- | --------------- | -------- | ----------- |
+| `expires_at` | date | no | Personal access token expiry date. When left blank, the token follows the [standard rule of expiry for personal access tokens](../user/profile/personal_access_tokens.md#when-personal-access-tokens-expire). |
+
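+For example, a token with an explicit expiry date can be requested like this (a sketch with
+placeholder group and user IDs and a placeholder date, assuming the service account token endpoint
+`POST /groups/:id/service_accounts/:user_id/personal_access_tokens`):
+
+```shell
+curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
+     --data "name=service-account-token" \
+     --data "scopes[]=api" \
+     --data "expires_at=2025-01-31" \
+     "https://gitlab.example.com/api/v4/groups/35/service_accounts/71/personal_access_tokens"
+```
+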
### Rotate a Personal Access Token for Service Account User
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/406781) in GitLab 16.1.
@@ -1476,6 +1488,7 @@ PUT /groups/:id/hooks/:hook_id
| `releases_events` | boolean | no | Trigger hook on release events. |
| `subgroup_events` | boolean | no | Trigger hook on subgroup events. |
| `enable_ssl_verification` | boolean | no | Do SSL verification when triggering the hook. |
+| `service_access_tokens_expiration_enforced` | boolean | no | Require service account access tokens to have an expiration date. |
| `token` | string | no | Secret token to validate received payloads. Not returned in the response. When you change the webhook URL, the secret token is reset and not retained. |
### Delete group hook
diff --git a/doc/api/import.md b/doc/api/import.md
index 677848a0ed3..4b7abfdfec1 100644
--- a/doc/api/import.md
+++ b/doc/api/import.md
@@ -34,7 +34,6 @@ POST /import/github
| `target_namespace` | string | yes | Namespace to import repository into. Supports subgroups like `/namespace/subgroup`. In GitLab 15.8 and later, must not be blank |
| `github_hostname` | string | no | Custom GitHub Enterprise hostname. Do not set for GitHub.com. |
| `optional_stages` | object | no | [Additional items to import](../user/project/import/github.md#select-additional-items-to-import). [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/373705) in GitLab 15.5 |
-| `additional_access_tokens` | string | no | Comma-separated list of [additional](#use-multiple-github-personal-access-tokens) GitHub personal access tokens. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/337232) in GitLab 16.2 |
| `timeout_strategy` | string | no | Strategy for handling import timeouts. Valid values are `optimistic` (continue to next stage of import) or `pessimistic` (fail immediately). Defaults to `pessimistic`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/422979) in GitLab 16.5. |
```shell
@@ -80,17 +79,6 @@ Example response:
}
```
-### Use multiple GitHub personal access tokens
-
-> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/337232) in GitLab 16.2.
-
-The GitHub import API can accept more than one GitHub personal access token using the `additional_access_tokens`
-property so the API can make more calls to GitHub before hitting the rate limit. The additional GitHub personal access
-tokens:
-
-- Cannot be from the same account because they would all share one rate limit.
-- Must have the same permissions and sufficient privileges to the repositories to import.
-
### Import a public project through the API using a group access token
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/362683) in GitLab 15.7, projects are not imported into a [bot user's](../user/group/settings/group_access_tokens.md#bot-users-for-groups) namespace in any circumstances. Projects imported into a bot user's namespace could not be deleted by users with valid tokens, which represented a security risk.
diff --git a/doc/api/invitations.md b/doc/api/invitations.md
index e3619932fea..0bf38b6e616 100644
--- a/doc/api/invitations.md
+++ b/doc/api/invitations.md
@@ -43,6 +43,7 @@ POST /projects/:id/invitations
| `access_level` | integer | yes | A valid access level |
| `expires_at` | string | no | A date string in the format YEAR-MONTH-DAY |
| `invite_source` | string | no | The source of the invitation that starts the member creation process. See [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/327120). |
+| `member_role_id` **(ULTIMATE ALL)** | integer | no | Assigns the new member to the provided custom role. ([Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134100) in GitLab 16.6.) |
```shell
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
diff --git a/doc/api/iterations.md b/doc/api/iterations.md
index 364cca9c977..ef718fffe0a 100644
--- a/doc/api/iterations.md
+++ b/doc/api/iterations.md
@@ -18,6 +18,10 @@ As of GitLab 13.5, we don't have project-level iterations, but you can use this
Returns a list of project iterations.
+Iterations created by **Enable automatic scheduling** in
+[Iteration cadences](../user/group/iterations/index.md#iteration-cadences) return `null` for
+the `title` and `description` fields.
+
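+For example, listing the open iterations of such a project with the endpoints below might look
+like this (a sketch with a placeholder project ID); automatically scheduled iterations in the
+response carry `null` in their `title` and `description` fields:
+
+```shell
+curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/42/iterations?state=opened"
+```
+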
```plaintext
GET /projects/:id/iterations
GET /projects/:id/iterations?state=opened
diff --git a/doc/api/jobs.md b/doc/api/jobs.md
index 92ab12ec0d0..06fd354f2be 100644
--- a/doc/api/jobs.md
+++ b/doc/api/jobs.md
@@ -14,8 +14,9 @@ Get a list of jobs in a project. Jobs are sorted in descending order of their ID
By default, this request returns 20 results at a time because the API results [are paginated](rest/index.md#pagination)
-This endpoint supports both offset-based and [keyset-based](rest/index.md#keyset-based-pagination) pagination. Keyset-based
-pagination is recommended when requesting consecutive pages of results.
+NOTE:
+This endpoint supports both offset-based and [keyset-based](rest/index.md#keyset-based-pagination) pagination, but keyset-based
+pagination is strongly recommended when requesting consecutive pages of results.
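+
+For example, the first keyset-paginated page of the endpoint below could be requested as follows
+(a sketch assuming the generic keyset parameters; subsequent pages are fetched from the URL
+returned in the `Link` response header):
+
+```shell
+curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/1/jobs?pagination=keyset&per_page=50"
+```
+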
```plaintext
GET /projects/:id/jobs
diff --git a/doc/api/lint.md b/doc/api/lint.md
index 7b288c34343..45ae739ef86 100644
--- a/doc/api/lint.md
+++ b/doc/api/lint.md
@@ -20,7 +20,7 @@ POST /projects/:id/ci/lint
| `content` | string | Yes | The CI/CD configuration content. |
| `dry_run` | boolean | No | Run [pipeline creation simulation](../ci/lint.md#simulate-a-pipeline), or only do static check. Default: `false`. |
| `include_jobs` | boolean | No | If the list of jobs that would exist in a static check or pipeline simulation should be included in the response. Default: `false`. |
-| `ref` | string | No | When `dry_run` is `true`, sets the branch or tag to use. Defaults to the project's default branch when not set. |
+| `ref` | string | No | When `dry_run` is `true`, sets the branch or tag context to use to validate the CI/CD YAML configuration. Defaults to the project's default branch when not set. |
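+
+For example, a dry-run validation scoped to a feature branch might pass `ref` alongside the
+configuration, as in this sketch with placeholder content (the documented example request
+follows below):
+
+```shell
+curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" --header "Content-Type: application/json" \
+     --data '{"content": "test_job:\n  script: echo 1", "dry_run": true, "ref": "my-feature-branch"}' \
+     "https://gitlab.example.com/api/v4/projects/:id/ci/lint"
+```
+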
Example request:
@@ -71,7 +71,7 @@ GET /projects/:id/ci/lint
|----------------|---------|----------|-------------|
| `dry_run` | boolean | No | Run pipeline creation simulation, or only do static check. |
| `include_jobs` | boolean | No | If the list of jobs that would exist in a static check or pipeline simulation should be included in the response. Default: `false`. |
-| `ref` | string | No | When `dry_run` is `true`, sets the branch or tag to use. Defaults to the project's default branch when not set. |
+| `ref` | string | No | When `dry_run` is `true`, sets the branch or tag context to use to validate the CI/CD YAML configuration. Defaults to the project's default branch when not set. |
| `sha` | string | No | The commit SHA of a branch or tag. Defaults to the SHA of the head of the project's default branch when not set. |
Example request:
diff --git a/doc/api/member_roles.md b/doc/api/member_roles.md
index 79f7bc2b3ad..63de583de25 100644
--- a/doc/api/member_roles.md
+++ b/doc/api/member_roles.md
@@ -13,12 +13,14 @@ info: To determine the technical writer assigned to the Stage/Group associated w
> - [Read dependency added](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/126247) in GitLab 16.3.
> - [Name and description fields added](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/126423) in GitLab 16.3.
> - [Admin merge request introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/128302) in GitLab 16.4 [with a flag](../administration/feature_flags.md) named `admin_merge_request`. Disabled by default.
-> - [Admin group members introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131914) in GitLab 16.5 [with a flag](../administration/feature_flags.md) named `admin_group_member`. Disabled by default.
+> - [Feature flag `admin_merge_request` removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132578) in GitLab 16.5.
+> - [Admin group members introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131914) in GitLab 16.5 [with a flag](../administration/feature_flags.md) named `admin_group_member`. Disabled by default. The feature flag has been removed in GitLab 16.6.
> - [Manage project access tokens introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132342) in GitLab 16.5 [with a flag](../administration/feature_flags.md) named `manage_project_access_tokens`. Disabled by default.
+> - [Archive project introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134998) in GitLab 16.6 [with a flag](../administration/feature_flags.md) named `archive_project`. Disabled by default.
FLAG:
-On self-managed GitLab, by default these two features are not available. To make them available, an administrator can [enable the feature flags](../administration/feature_flags.md) named `admin_merge_request` and `admin_member_custom_role`.
-On GitLab.com, this feature is not available.
+On self-managed GitLab, by default these features are not available. To make them available, an administrator can [enable the feature flags](../administration/feature_flags.md) named `admin_group_member`, `manage_project_access_tokens`, and `archive_project`.
+On GitLab.com, these features are not available.
## List all member roles of a group
@@ -48,6 +50,7 @@ If successful, returns [`200`](rest/index.md#status-codes) and the following res
| `[].read_vulnerability` | boolean | Permission to read project vulnerabilities. |
| `[].admin_group_member` | boolean | Permission to admin members of a group. |
| `[].manage_project_access_tokens` | boolean | Permission to manage project access tokens. |
+| `[].archive_project` | boolean | Permission to archive projects. |
Example request:
@@ -70,7 +73,8 @@ Example response:
"read_code": true,
"read_dependency": false,
"read_vulnerability": false,
- "manage_project_access_tokens": false
+ "manage_project_access_tokens": false,
+ "archive_project": false
},
{
"id": 3,
@@ -83,7 +87,8 @@ Example response:
"read_code": false,
"read_dependency": true,
"read_vulnerability": true,
- "manage_project_access_tokens": false
+ "manage_project_access_tokens": false,
+ "archive_project": false
}
]
```
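
A member role that grants the new permission could then be created through the group member roles
endpoint. The following is a sketch only: it assumes the `POST /groups/:id/member_roles` endpoint
described elsewhere on this page accepts `archive_project` as a boolean parameter, and it uses a
placeholder group ID:

```shell
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
     --data "name=Archiver" \
     --data "base_access_level=20" \
     --data "archive_project=true" \
     "https://gitlab.example.com/api/v4/groups/84/member_roles"
```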
diff --git a/doc/api/merge_request_approvals.md b/doc/api/merge_request_approvals.md
index fd8026d3077..628f274c38f 100644
--- a/doc/api/merge_request_approvals.md
+++ b/doc/api/merge_request_approvals.md
@@ -1039,7 +1039,7 @@ Supported attributes:
| Attribute | Type | Required | Description |
|---------------------|-------------------|------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `id` | integer or string | Yes | The ID or [URL-encoded path of a project](rest/index.md#namespaced-path-encoding). |
-| `approval_password` | string | No | Current user's password. Required if [**Require user password to approve**](../user/project/merge_requests/approvals/settings.md#require-user-password-to-approve) is enabled in the project settings. |
+| `approval_password` | string | No | Current user's password. Required if [**Require user re-authentication to approve**](../user/project/merge_requests/approvals/settings.md#require-user-re-authentication-to-approve) is enabled in the project settings. |
| `merge_request_iid` | integer | Yes | The IID of the merge request. |
| `sha` | string | No | The `HEAD` of the merge request. |
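
For example, approving a merge request when re-authentication is required might look like this
(a sketch with placeholder project and merge request IDs):

```shell
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
     --data "approval_password=<your_password>" \
     "https://gitlab.example.com/api/v4/projects/5/merge_requests/10/approve"
```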
diff --git a/doc/api/merge_requests.md b/doc/api/merge_requests.md
index e32c6a2ab56..bf071e9ae51 100644
--- a/doc/api/merge_requests.md
+++ b/doc/api/merge_requests.md
@@ -1161,6 +1161,10 @@ Example response:
]
```
+NOTE:
+This endpoint is subject to [Merge requests diff limits](../administration/instance_limits.md#diff-limits).
+Merge requests that exceed the diff limits return limited results.
+
## List merge request pipelines
Get a list of merge request pipelines. The pagination parameters `page` and
diff --git a/doc/api/packages.md b/doc/api/packages.md
index a378be26a24..7c8dfeb8710 100644
--- a/doc/api/packages.md
+++ b/doc/api/packages.md
@@ -8,11 +8,11 @@ info: To determine the technical writer assigned to the Stage/Group associated w
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/349418) support for [GitLab CI/CD job token](../ci/jobs/ci_job_token.md) authentication for the project-level API in GitLab 15.3.
-This is the API documentation of [GitLab Packages](../administration/packages/index.md).
+The API documentation of [GitLab Packages](../administration/packages/index.md).
## List packages
-### Within a project
+### For a project
Get a list of project packages. All package types are included in results. When
accessed without authentication, only packages of public projects are returned.
@@ -23,15 +23,16 @@ packages.
GET /projects/:id/packages
```
-| Attribute | Type | Required | Description |
-| --------- | ---- | -------- | ----------- |
-| `id` | integer/string | yes | ID or [URL-encoded path of the project](rest/index.md#namespaced-path-encoding) |
-| `order_by`| string | no | The field to use as order. One of `created_at` (default), `name`, `version`, or `type`. |
-| `sort` | string | no | The direction of the order, either `asc` (default) for ascending order or `desc` for descending order. |
-| `package_type` | string | no | Filter the returned packages by type. One of `conan`, `maven`, `npm`, `pypi`, `composer`, `nuget`, `helm`, `terraform_module`, or `golang`. (_Introduced in GitLab 12.9_)
-| `package_name` | string | no | Filter the project packages with a fuzzy search by name. (_Introduced in GitLab 12.9_)
-| `include_versionless` | boolean | no | When set to true, versionless packages are included in the response. (_Introduced in GitLab 13.8_)
-| `status` | string | no | Filter the returned packages by status. One of `default` (default), `hidden`, `processing`, `error`, or `pending_destruction`. (_Introduced in GitLab 13.9_)
+| Attribute | Type | Required | Description |
+|:----------------------|:---------------|:---------|:------------|
+| `id` | integer/string | yes | ID or [URL-encoded path of the project](rest/index.md#namespaced-path-encoding). |
+| `order_by` | string | no | The field to use as order. One of `created_at` (default), `name`, `version`, or `type`. |
+| `sort` | string | no | The direction of the order, either `asc` (default) for ascending order or `desc` for descending order. |
+| `package_type` | string | no | Filter the returned packages by type. One of `conan`, `maven`, `npm`, `pypi`, `composer`, `nuget`, `helm`, `terraform_module`, or `golang`. |
+| `package_name` | string | no | Filter the project packages with a fuzzy search by name. |
+| `package_version` | string | no | Filter the project packages by version. If used in combination with `include_versionless`, then no versionless packages are returned. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/349065) in GitLab 16.6. |
+| `include_versionless` | boolean | no | When set to true, versionless packages are included in the response. |
+| `status` | string | no | Filter the returned packages by status. One of `default` (default), `hidden`, `processing`, `error`, or `pending_destruction`. |
```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/:id/packages"
@@ -76,9 +77,7 @@ By default, the `GET` request returns 20 results, because the API is [paginated]
Although you can filter packages by status, working with packages that have a `processing` status
can result in malformed data or broken packages.
-### Within a group
-
-> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/18871) in GitLab 12.5.
+### For a group
Get a list of project packages at the group level.
When accessed without authentication, only packages of public projects are returned.
@@ -89,26 +88,22 @@ packages.
GET /groups/:id/packages
```
-| Attribute | Type | Required | Description |
-| --------- | ---- | -------- | ----------- |
-| `id` | integer/string | yes | ID or [URL-encoded path of the group](rest/index.md#namespaced-path-encoding). |
-| `exclude_subgroups` | boolean | false | If the parameter is included as true, packages from projects from subgroups are not listed. Default is `false`. |
-| `order_by`| string | no | The field to use as order. One of `created_at` (default), `name`, `version`, `type`, or `project_path`. |
-| `sort` | string | no | The direction of the order, either `asc` (default) for ascending order or `desc` for descending order. |
-| `package_type` | string | no | Filter the returned packages by type. One of `conan`, `maven`, `npm`, `pypi`, `composer`, `nuget`, `helm`, or `golang`. (_Introduced in GitLab 12.9_) |
-| `package_name` | string | no | Filter the project packages with a fuzzy search by name. (_[Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/30980) in GitLab 13.0_)
-| `include_versionless` | boolean | no | When set to true, versionless packages are included in the response. (_Introduced in GitLab 13.8_)
-| `status` | string | no | Filter the returned packages by status. One of `default` (default), `hidden`, `processing`, `error`, or `pending_destruction`. (_Introduced in GitLab 13.9_)
+| Attribute | Type | Required | Description |
+|:----------------------|:---------------|:---------|:------------|
+| `id` | integer/string | yes | ID or [URL-encoded path of the group](rest/index.md#namespaced-path-encoding). |
+| `exclude_subgroups`   | boolean        | no       | If the parameter is included as `true`, packages from projects in subgroups are not listed. Default is `false`. |
+| `order_by` | string | no | The field to use as order. One of `created_at` (default), `name`, `version`, `type`, or `project_path`. |
+| `sort` | string | no | The direction of the order, either `asc` (default) for ascending order or `desc` for descending order. |
+| `package_type` | string | no | Filter the returned packages by type. One of `conan`, `maven`, `npm`, `pypi`, `composer`, `nuget`, `helm`, or `golang`. |
+| `package_name` | string | no | Filter the project packages with a fuzzy search by name. |
+| `package_version` | string | no | Filter the returned packages by version. If used in combination with `include_versionless`, then no versionless packages are returned. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/349065) in GitLab 16.6. |
+| `include_versionless` | boolean | no | When set to true, versionless packages are included in the response. |
+| `status` | string | no | Filter the returned packages by status. One of `default` (default), `hidden`, `processing`, `error`, or `pending_destruction`. |
```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/groups/:id/packages?exclude_subgroups=false"
```
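
For example, to list only the npm packages published at a specific version anywhere in a group
(a sketch using the new `package_version` filter with a placeholder group ID):

```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/groups/25/packages?package_type=npm&package_version=1.2.3"
```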
-> **Deprecation:**
->
-> The `pipeline` attribute in the response is deprecated in favor of `pipelines`, which was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/44348) in GitLab 13.6. Both are available until 13.7.
-> The `build_info` attribute in the response is deprecated in favor of `pipeline`, which was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/28040) in GitLab 12.10.
-
Example response:
```json
@@ -195,11 +190,6 @@ GET /projects/:id/packages/:package_id
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/:id/packages/:package_id"
```
-> **Deprecation:**
->
-> The `pipeline` attribute in the response is deprecated in favor of `pipelines`, which was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/44348) in GitLab 13.6. Both are available until 13.7.
-> The `build_info` attribute in the response is deprecated in favor of `pipeline`, which was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/28040) in GitLab 12.10.
-
Example response:
```json
@@ -213,7 +203,7 @@ Example response:
"delete_api_path": "/namespace1/project1/-/packages/1"
},
"created_at": "2019-11-27T03:37:38.711Z",
- "last_downloaded_at": "2022-09-07T07:51:50.504Z"
+ "last_downloaded_at": "2022-09-07T07:51:50.504Z",
"pipelines": [
{
"id": 123,
@@ -425,8 +415,6 @@ deleting a package can introduce a [dependency confusion risk](../user/packages/
## Delete a package file
-> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/32107) in GitLab 13.12.
-
WARNING:
Deleting a package file may corrupt your package making it unusable or unpullable from your package
manager client. When deleting a package file, be sure that you understand what you're doing.
diff --git a/doc/api/pipelines.md b/doc/api/pipelines.md
index e908f4adb34..50616974ae1 100644
--- a/doc/api/pipelines.md
+++ b/doc/api/pipelines.md
@@ -38,7 +38,7 @@ GET /projects/:id/pipelines
| `id` | integer/string | Yes | The ID or [URL-encoded path of the project](rest/index.md#namespaced-path-encoding) |
| `scope` | string | No | The scope of pipelines, one of: `running`, `pending`, `finished`, `branches`, `tags` |
| `status` | string | No | The status of pipelines, one of: `created`, `waiting_for_resource`, `preparing`, `pending`, `running`, `success`, `failed`, `canceled`, `skipped`, `manual`, `scheduled` |
-| `source` | string | No | In [GitLab 14.3 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/325439), how the pipeline was triggered, one of: `push`, `web`, `trigger`, `schedule`, `api`, `external`, `pipeline`, `chat`, `webide`, `merge_request_event`, `external_pull_request_event`, `parent_pipeline`, `ondemand_dast_scan`, or `ondemand_dast_validation`. |
+| `source` | string | No | In [GitLab 14.3 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/325439), how the pipeline was triggered, one of: `api`, `chat`, `external`, `external_pull_request_event`, `merge_request_event`, `ondemand_dast_scan`, `ondemand_dast_validation`, `parent_pipeline`, `pipeline`, `push`, `schedule`, `security_orchestration_policy`, `trigger`, `web`, or `webide`. |
| `ref` | string | No | The ref of pipelines |
| `sha` | string | No | The SHA of pipelines |
| `yaml_errors` | boolean | No | Returns pipelines with invalid configurations |
@@ -518,3 +518,57 @@ DELETE /projects/:id/pipelines/:pipeline_id
```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" --request "DELETE" "https://gitlab.example.com/api/v4/projects/1/pipelines/46"
```
+
+## Update pipeline metadata
+
+You can update the metadata of a pipeline. The metadata contains the name of the pipeline.
+
+```plaintext
+PUT /projects/:id/pipelines/:pipeline_id/metadata
+```
+
+| Attribute | Type | Required | Description |
+|---------------|----------------|----------|-------------|
+| `id` | integer/string | Yes | The ID or [URL-encoded path of the project](rest/index.md#namespaced-path-encoding) |
+| `pipeline_id` | integer | Yes | The ID of a pipeline |
+| `name` | string | Yes | The new name of the pipeline |
+
+Sample request:
+
+```shell
+curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" --data "name=Some new pipeline name" "https://gitlab.example.com/api/v4/projects/1/pipelines/46/metadata"
+```
+
+Sample response:
+
+```json
+{
+ "id": 46,
+ "iid": 11,
+ "project_id": 1,
+ "status": "running",
+ "ref": "main",
+ "sha": "a91957a858320c0e17f3a0eca7cfacbff50ea29a",
+ "before_sha": "a91957a858320c0e17f3a0eca7cfacbff50ea29a",
+ "tag": false,
+ "yaml_errors": null,
+ "user": {
+ "name": "Administrator",
+ "username": "root",
+ "id": 1,
+ "state": "active",
+ "avatar_url": "http://www.gravatar.com/avatar/e64c7d89f26bd1972efa854d13d7dd61?s=80&d=identicon",
+ "web_url": "http://localhost:3000/root"
+ },
+ "created_at": "2016-08-11T11:28:34.085Z",
+ "updated_at": "2016-08-11T11:32:35.169Z",
+ "started_at": null,
+ "finished_at": "2016-08-11T11:32:35.145Z",
+ "committed_at": null,
+ "duration": null,
+ "queued_duration": 0.010,
+ "coverage": null,
+ "web_url": "https://example.com/foo/bar/pipelines/46",
+ "name": "Some new pipeline name"
+}
+```
diff --git a/doc/api/projects.md b/doc/api/projects.md
index f909f376fce..f4a9e396930 100644
--- a/doc/api/projects.md
+++ b/doc/api/projects.md
@@ -57,6 +57,8 @@ GET /projects
| `id_after` | integer | No | Limit results to projects with IDs greater than the specified ID. |
| `id_before` | integer | No | Limit results to projects with IDs less than the specified ID. |
| `imported` | boolean | No | Limit results to projects which were imported from external systems by current user. |
+| `include_hidden` **(PREMIUM ALL)** | boolean | No | Include hidden projects. _(administrators only)_ |
+| `include_pending_delete` | boolean | No | Include projects pending deletion. _(administrators only)_ |
| `last_activity_after` | datetime | No | Limit results to projects with last activity after specified time. Format: ISO 8601 (`YYYY-MM-DDTHH:MM:SSZ`) |
| `last_activity_before` | datetime | No | Limit results to projects with last activity before specified time. Format: ISO 8601 (`YYYY-MM-DDTHH:MM:SSZ`) |
| `membership` | boolean | No | Limit by projects that the current user is a member of. |
@@ -1794,6 +1796,7 @@ POST /projects/:id/fork
| `namespace` | integer or string | No | _(Deprecated)_ The ID or path of the namespace that the project is forked to. |
| `path` | string | No | The path assigned to the resultant project after forking. |
| `visibility` | string | No | The [visibility level](#project-visibility-level) assigned to the resultant project after forking. |
+| `branches` | string | No | Branches to fork (empty for all branches). |
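+
+For example, to fork only the default branch of a project (a sketch with a placeholder project ID;
+how to pass multiple branch names is not covered here):
+
+```shell
+curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/5/fork?branches=main"
+```
+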
## List forks of a project
diff --git a/doc/api/protected_environments.md b/doc/api/protected_environments.md
index 5a25844c754..8b502d78d0d 100644
--- a/doc/api/protected_environments.md
+++ b/doc/api/protected_environments.md
@@ -117,11 +117,11 @@ POST /projects/:id/protected_environments
| `name` | string | yes | The name of the environment. |
| `deploy_access_levels` | array | yes | Array of access levels allowed to deploy, with each described by a hash. |
| `required_approval_count` | integer | no | The number of approvals required to deploy to this environment. |
-| `approval_rules` | array | no | Array of access levels allowed to approve, with each described by a hash. See [Multiple approval rules](../ci/environments/deployment_approvals.md#multiple-approval-rules) for more information. |
+| `approval_rules` | array | no | Array of access levels allowed to approve, with each described by a hash. See [Multiple approval rules](../ci/environments/deployment_approvals.md#add-multiple-approval-rules). |
Elements in the `deploy_access_levels` and `approval_rules` array should be one of `user_id`, `group_id` or
`access_level`, and take the form `{user_id: integer}`, `{group_id: integer}` or
-`{access_level: integer}`. Optionally you can specify the `group_inheritance_type` on each as one of the [valid group inheritance types](#group-inheritance-types).
+`{access_level: integer}`. Optionally, you can specify the `group_inheritance_type` on each as one of the [valid group inheritance types](#group-inheritance-types).
Each user must have access to the project and each group must [have this project shared](../user/project/members/share_project_with_groups.md).
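
For example, a project-level protected environment with a group-based approval rule might be
created like this (a sketch with placeholder IDs; access level `40` corresponds to the Maintainer
role):

```shell
curl --header 'Content-Type: application/json' --request POST \
     --data '{"name": "production", "deploy_access_levels": [{"access_level": 40}], "approval_rules": [{"group_id": 134, "required_approvals": 2}]}' \
     --header "PRIVATE-TOKEN: <your_access_token>" \
     "https://gitlab.example.com/api/v4/projects/22/protected_environments"
```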
@@ -187,7 +187,7 @@ PUT /projects/:id/protected_environments/:name
| `name` | string | yes | The name of the environment. |
| `deploy_access_levels` | array | no | Array of access levels allowed to deploy, with each described by a hash. |
| `required_approval_count` | integer | no | The number of approvals required to deploy to this environment. |
-| `approval_rules` | array | no | Array of access levels allowed to approve, with each described by a hash. See [Multiple approval rules](../ci/environments/deployment_approvals.md#multiple-approval-rules) for more information. |
+| `approval_rules` | array | no | Array of access levels allowed to approve, with each described by a hash. See [Multiple approval rules](../ci/environments/deployment_approvals.md#add-multiple-approval-rules) for more information. |
Elements in the `deploy_access_levels` and `approval_rules` array should be one of `user_id`, `group_id` or
`access_level`, and take the form `{user_id: integer}`, `{group_id: integer}` or
diff --git a/doc/api/rest/index.md b/doc/api/rest/index.md
index 039129d24c6..fd98952185b 100644
--- a/doc/api/rest/index.md
+++ b/doc/api/rest/index.md
@@ -804,6 +804,7 @@ For questions about these integrations, use the [GitLab community forum](https:/
### `C#`
- [`GitLabApiClient`](https://github.com/nmklotas/GitLabApiClient)
+- [`NGitLab`](https://github.com/ubisoft/NGitLab)
### Go
diff --git a/doc/api/runners.md b/doc/api/runners.md
index dba37edcb01..372ce397332 100644
--- a/doc/api/runners.md
+++ b/doc/api/runners.md
@@ -52,13 +52,14 @@ GET /runners?paused=true
GET /runners?tag_list=tag1,tag2
```
-| Attribute | Type | Required | Description |
-|------------|--------------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `scope` | string | no | Deprecated: Use `type` or `status` instead. The scope of runners to return, one of: `active`, `paused`, `online` and `offline`; showing all runners if none provided |
-| `type` | string | no | The type of runners to return, one of: `instance_type`, `group_type`, `project_type` |
-| `status` | string | no | The status of runners to return, one of: `online`, `offline`, `stale`, and `never_contacted`. `active` and `paused` are also possible values which were deprecated in GitLab 14.8 and will be removed in GitLab 16.0 |
-| `paused` | boolean | no | Whether to include only runners that are accepting or ignoring new jobs |
-| `tag_list` | string array | no | A list of runner tags |
+| Attribute | Type | Required | Description |
+|------------------|--------------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `scope` | string | no | Deprecated: Use `type` or `status` instead. The scope of runners to return, one of: `active`, `paused`, `online` and `offline`; showing all runners if none provided |
+| `type` | string | no | The type of runners to return, one of: `instance_type`, `group_type`, `project_type` |
+| `status` | string | no | The status of runners to return, one of: `online`, `offline`, `stale`, and `never_contacted`. `active` and `paused` are also possible values which were deprecated in GitLab 14.8 and will be removed in a future version of the REST API |
+| `paused` | boolean | no | Whether to include only runners that are accepting or ignoring new jobs |
+| `tag_list` | string array | no | A list of runner tags |
+| `version_prefix` | string | no | The prefix of the version of the runners to return. For example, `15.0`, `14`, `16.1.241` |
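+
+For example, to list only the runners available to you that run a 16.1.x release (a sketch using
+the new `version_prefix` filter):
+
+```shell
+curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/runners?version_prefix=16.1"
+```
+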
```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/runners"
@@ -66,11 +67,11 @@ curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/a
NOTE:
The `active` and `paused` values in the `status` query parameter were deprecated [in GitLab 14.8](https://gitlab.com/gitlab-org/gitlab/-/issues/347211)
-and will be removed in [GitLab 16.0](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). They are replaced by the `paused` query parameter.
+and will be removed in [a future version of the REST API](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). They are replaced by the `paused` query parameter.
NOTE:
The `active` attribute in the response was deprecated [in GitLab 14.8](https://gitlab.com/gitlab-org/gitlab/-/issues/347211)
-and will be removed in [GitLab 16.0](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). It is replaced by the `paused` attribute.
+and will be removed in [a future version of the REST API](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). It is replaced by the `paused` attribute.
Example response:
@@ -117,13 +118,14 @@ GET /runners/all?paused=true
GET /runners/all?tag_list=tag1,tag2
```
-| Attribute | Type | Required | Description |
-|------------|--------------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `scope` | string | no | Deprecated: Use `type` or `status` instead. The scope of runners to return, one of: `specific`, `shared`, `active`, `paused`, `online` and `offline`; showing all runners if none provided |
-| `type` | string | no | The type of runners to return, one of: `instance_type`, `group_type`, `project_type` |
-| `status` | string | no | The status of runners to return, one of: `online`, `offline`, `stale`, and `never_contacted`. `active` and `paused` are also possible values which were deprecated in GitLab 14.8 and will be removed in GitLab 16.0 |
-| `paused` | boolean | no | Whether to include only runners that are accepting or ignoring new jobs |
-| `tag_list` | string array | no | A list of runner tags |
+| Attribute | Type | Required | Description |
+|------------------|--------------|----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `scope` | string | no | Deprecated: Use `type` or `status` instead. The scope of runners to return, one of: `specific`, `shared`, `active`, `paused`, `online` and `offline`; showing all runners if none provided |
+| `type` | string | no | The type of runners to return, one of: `instance_type`, `group_type`, `project_type` |
+| `status` | string | no | The status of runners to return, one of: `online`, `offline`, `stale`, and `never_contacted`. `active` and `paused` are also possible values which were deprecated in GitLab 14.8 and will be removed in a future version of the REST API |
+| `paused` | boolean | no | Whether to include only runners that are accepting or ignoring new jobs |
+| `tag_list` | string array | no | A list of runner tags |
+| `version_prefix` | string | no | The prefix of the version of the runners to return. For example, `15.0`, `14`, `16.1.241` |
```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/runners/all"
@@ -131,11 +133,11 @@ curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/a
NOTE:
The `active` and `paused` values in the `status` query parameter were deprecated [in GitLab 14.8](https://gitlab.com/gitlab-org/gitlab/-/issues/347211)
-and will be removed in [GitLab 16.0](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). They are replaced by the `paused` query parameter.
+and will be removed in [a future version of the REST API](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). They are replaced by the `paused` query parameter.
NOTE:
The `active` attribute in the response was deprecated [in GitLab 14.8](https://gitlab.com/gitlab-org/gitlab/-/issues/347211)
-and will be removed in [GitLab 16.0](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). It is replaced by the `paused` attribute.
+and will be removed in [a future version of the REST API](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). It is replaced by the `paused` attribute.
Example response:
@@ -221,7 +223,7 @@ and removed in [GitLab 13.0](https://gitlab.com/gitlab-org/gitlab/-/issues/21432
NOTE:
The `active` attribute in the response was deprecated [in GitLab 14.8](https://gitlab.com/gitlab-org/gitlab/-/issues/347211)
-and will be removed in [GitLab 16.0](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). It is replaced by the `paused` attribute.
+and will be removed in [a future version of the REST API](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). It is replaced by the `paused` attribute.
Example response:
@@ -291,7 +293,7 @@ and [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/214322) in GitLab 13
NOTE:
The `active` query parameter was deprecated [in GitLab 14.8](https://gitlab.com/gitlab-org/gitlab/-/issues/347211)
-and will be removed in [GitLab 16.0](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). It is replaced by the `paused` attribute.
+and will be removed in [a future version of the REST API](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). It is replaced by the `paused` attribute.
Example response:
@@ -361,7 +363,7 @@ curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" \
NOTE:
The `active` form attribute was deprecated [in GitLab 14.8](https://gitlab.com/gitlab-org/gitlab/-/issues/347211)
-and will be removed in [GitLab 16.0](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). It is replaced by the `paused` attribute.
+and will be removed in [a future version of the REST API](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). It is replaced by the `paused` attribute.
## List runner's jobs
@@ -468,14 +470,15 @@ GET /projects/:id/runners/all?paused=true
GET /projects/:id/runners?tag_list=tag1,tag2
```
-| Attribute | Type | Required | Description |
-|------------|----------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `id` | integer/string | yes | The ID or [URL-encoded path of the project](rest/index.md#namespaced-path-encoding) owned by the authenticated user |
-| `scope` | string | no | Deprecated: Use `type` or `status` instead. The scope of runners to return, one of: `active`, `paused`, `online` and `offline`; showing all runners if none provided |
-| `type` | string | no | The type of runners to return, one of: `instance_type`, `group_type`, `project_type` |
-| `status` | string | no | The status of runners to return, one of: `online`, `offline`, `stale`, and `never_contacted`. `active` and `paused` are also possible values which were deprecated in GitLab 14.8 and will be removed in GitLab 16.0 |
-| `paused` | boolean | no | Whether to include only runners that are accepting or ignoring new jobs |
-| `tag_list` | string array | no | A list of runner tags |
+| Attribute | Type | Required | Description |
+|------------------|----------------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `id` | integer/string | yes | The ID or [URL-encoded path of the project](rest/index.md#namespaced-path-encoding) owned by the authenticated user |
+| `scope` | string | no | Deprecated: Use `type` or `status` instead. The scope of runners to return, one of: `active`, `paused`, `online` and `offline`; showing all runners if none provided |
+| `type` | string | no | The type of runners to return, one of: `instance_type`, `group_type`, `project_type` |
+| `status` | string | no | The status of runners to return, one of: `online`, `offline`, `stale`, and `never_contacted`. `active` and `paused` are also possible values which were deprecated in GitLab 14.8 and will be removed in a future version of the REST API |
+| `paused` | boolean | no | Whether to include only runners that are accepting or ignoring new jobs |
+| `tag_list` | string array | no | A list of runner tags |
+| `version_prefix` | string | no | The prefix of the version of the runners to return. For example, `15.0`, `14`, `16.1.241` |
```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/9/runners"
@@ -483,11 +486,11 @@ curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/a
NOTE:
The `active` and `paused` values in the `status` query parameter were deprecated [in GitLab 14.8](https://gitlab.com/gitlab-org/gitlab/-/issues/347211)
-and will be removed in [GitLab 16.0](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). They are replaced by the `paused` query parameter.
+and will be removed in [a future version of the REST API](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). They are replaced by the `paused` query parameter.
NOTE:
The `active` attribute in the response was deprecated [in GitLab 14.8](https://gitlab.com/gitlab-org/gitlab/-/issues/347211)
-and will be removed in [GitLab 16.0](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). It is replaced by the `paused` attribute.
+and will be removed in [a future version of the REST API](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). It is replaced by the `paused` attribute.
Example response:
@@ -585,13 +588,14 @@ GET /groups/:id/runners/all?paused=true
GET /groups/:id/runners?tag_list=tag1,tag2
```
-| Attribute | Type | Required | Description |
-|------------|----------------|----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `id` | integer | yes | The ID of the group owned by the authenticated user |
-| `type` | string | no | The type of runners to return, one of: `instance_type`, `group_type`, `project_type`. The `project_type` value is [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/351466) and will be removed in GitLab 15.0 |
-| `status` | string | no | The status of runners to return, one of: `online`, `offline`, `stale`, and `never_contacted`. `active` and `paused` are also possible values which were deprecated in GitLab 14.8 and will be removed in GitLab 16.0 |
-| `paused` | boolean | no | Whether to include only runners that are accepting or ignoring new jobs |
-| `tag_list` | string array | no | A list of runner tags |
+| Attribute | Type | Required | Description |
+|------------------|----------------|----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `id` | integer | yes | The ID of the group owned by the authenticated user |
+| `type` | string | no | The type of runners to return, one of: `instance_type`, `group_type`, `project_type`. The `project_type` value is [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/351466) and will be removed in GitLab 15.0 |
+| `status` | string | no | The status of runners to return, one of: `online`, `offline`, `stale`, and `never_contacted`. `active` and `paused` are also possible values which were deprecated in GitLab 14.8 and will be removed in a future version of the REST API |
+| `paused` | boolean | no | Whether to include only runners that are accepting or ignoring new jobs |
+| `tag_list` | string array | no | A list of runner tags |
+| `version_prefix` | string | no | The prefix of the version of the runners to return. For example, `15.0`, `14`, `16.1.241` |
```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/groups/9/runners"
@@ -599,11 +603,11 @@ curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/a
NOTE:
The `active` and `paused` values in the `status` query parameter were deprecated [in GitLab 14.8](https://gitlab.com/gitlab-org/gitlab/-/issues/347211)
-and will be removed in [GitLab 16.0](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). They are replaced by the `paused` query parameter.
+and will be removed in [a future version of the REST API](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). They are replaced by the `paused` query parameter.
NOTE:
The `active` attribute in the response was deprecated [in GitLab 14.8](https://gitlab.com/gitlab-org/gitlab/-/issues/347211)
-and will be removed in [GitLab 16.0](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). It is replaced by the `paused` attribute.
+and will be removed in [a future version of the REST API](https://gitlab.com/gitlab-org/gitlab/-/issues/351109). It is replaced by the `paused` attribute.
Example response:
diff --git a/doc/api/saml.md b/doc/api/saml.md
index 911586933fa..5c6eee2b73c 100644
--- a/doc/api/saml.md
+++ b/doc/api/saml.md
@@ -43,7 +43,7 @@ Example response:
```json
[
{
- "extern_uid": "4",
+ "extern_uid": "yrnZW46BrtBFqM7xDzE7dddd",
"user_id": 48
}
]
@@ -67,14 +67,14 @@ Supported attributes:
Example request:
```shell
-curl --location --request GET "https://gitlab.example.com/api/v4/groups/33/saml/sydney_jones" --header "PRIVATE-TOKEN: <PRIVATE TOKEN>"
+curl --location --request GET "https://gitlab.example.com/api/v4/groups/33/saml/yrnZW46BrtBFqM7xDzE7dddd" --header "PRIVATE-TOKEN: <PRIVATE TOKEN>"
```
Example response:
```json
{
- "extern_uid": "4",
+ "extern_uid": "yrnZW46BrtBFqM7xDzE7dddd",
"user_id": 48
}
```
@@ -101,9 +101,9 @@ Supported attributes:
Example request:
```shell
-curl --location --request PATCH "https://gitlab.example.com/api/v4/groups/33/saml/sydney_jones" \
+curl --location --request PATCH "https://gitlab.example.com/api/v4/groups/33/saml/yrnZW46BrtBFqM7xDzE7dddd" \
--header "PRIVATE-TOKEN: <PRIVATE TOKEN>" \
---form "extern_uid=sydney_jones_new"
+--form "extern_uid=be20d8dcc028677c931e04f387"
```
## Delete a single SAML identity
@@ -124,7 +124,7 @@ Supported attributes:
Example request:
```shell
-curl --request DELETE --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/groups/33/saml/sydney_jones"
+curl --request DELETE --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/groups/33/saml/be20d8dcc028677c931e04f387"
```
diff --git a/doc/api/scim.md b/doc/api/scim.md
index 8840935e646..f3be1a479a8 100644
--- a/doc/api/scim.md
+++ b/doc/api/scim.md
@@ -8,7 +8,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/98354) in GitLab 15.5.
-The GitLab SCIM API manages SCIM identities within groups and provides the `/Users` endpoint. The base URL is `/api/scim/v2/groups/:group_path/Users/`.
+The GitLab SCIM API manages SCIM identities within groups and provides the `/groups/:group_id/scim/identities` and `/groups/:group_id/scim/:uid` endpoints. The base URL is `<http|https>://<GitLab host>/api/v4`.
To use this API, [Group SSO](../user/group/saml_sso/index.md) must be enabled for the group.
This API is only in use where [SCIM for Group SSO](../user/group/saml_sso/scim_setup.md) is enabled. It's a prerequisite to the creation of SCIM identities.
@@ -53,7 +53,7 @@ Example response:
```json
[
{
- "extern_uid": "4",
+ "extern_uid": "be20d8dcc028677c931e04f387",
"user_id": 48,
"active": true
}
@@ -85,14 +85,14 @@ Supported attributes:
Example request:
```shell
-curl --location --request GET "https://gitlab.example.com/api/v4/groups/33/scim/sydney_jones" --header "PRIVATE-TOKEN: <PRIVATE TOKEN>"
+curl --location --request GET "https://gitlab.example.com/api/v4/groups/33/scim/be20d8dcc028677c931e04f387" --header "PRIVATE-TOKEN: <PRIVATE TOKEN>"
```
Example response:
```json
{
- "extern_uid": "4",
+ "extern_uid": "be20d8dcc028677c931e04f387",
"user_id": 48,
"active": true
}
@@ -122,9 +122,9 @@ Parameters:
Example request:
```shell
-curl --location --request PATCH "https://gitlab.example.com/api/v4/groups/33/scim/sydney_jones" \
+curl --location --request PATCH "https://gitlab.example.com/api/v4/groups/33/scim/be20d8dcc028677c931e04f387" \
--header "PRIVATE-TOKEN: <PRIVATE TOKEN>" \
---form "extern_uid=sydney_jones_new"
+--form "extern_uid=yrnZW46BrtBFqM7xDzE7dddd"
```
## Delete a single SCIM identity
@@ -145,7 +145,7 @@ Supported attributes:
Example request:
```shell
-curl --request DELETE --header "Content-Type: application/json" --header "Authorization: Bearer <your_access_token>" "https://gitlab.example.com/api/v4/groups/33/scim/sydney_jones"
+curl --request DELETE --header "Content-Type: application/json" --header "Authorization: Bearer <your_access_token>" "https://gitlab.example.com/api/v4/groups/33/scim/yrnZW46BrtBFqM7xDzE7dddd"
```
diff --git a/doc/api/settings.md b/doc/api/settings.md
index 03877c6c489..9c0a1e8e4a8 100644
--- a/doc/api/settings.md
+++ b/doc/api/settings.md
@@ -19,6 +19,8 @@ For information on how to control the application settings cache for an instance
> - `always_perform_delayed_deletion` feature flag [enabled](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/113332) in GitLab 15.11.
> - `delayed_project_deletion` and `delayed_group_deletion` attributes removed in GitLab 16.0.
+> - `in_product_marketing_emails_enabled` attribute [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/418137) in GitLab 16.6.
+> - `repository_storages` attribute [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/429675) in GitLab 16.6.
List the current [application settings](#list-of-settings-that-can-be-accessed-via-api-calls)
of the GitLab instance.
@@ -215,7 +217,6 @@ Example response:
"container_registry_token_expire_delay": 5,
"decompress_archive_file_timeout": 210,
"package_registry_cleanup_policies_worker_capacity": 2,
- "repository_storages": ["default"],
"plantuml_enabled": false,
"plantuml_url": null,
"diagramsnet_enabled": true,
@@ -433,6 +434,7 @@ listed in the descriptions of the relevant settings.
| `gitaly_timeout_fast` | integer | no | Gitaly fast operation timeout, in seconds. Some Gitaly operations are expected to be fast. If they exceed this threshold, there may be a problem with a storage shard and 'failing fast' can help maintain the stability of the GitLab instance. Set to `0` to disable timeouts. |
| `gitaly_timeout_medium` | integer | no | Medium Gitaly timeout, in seconds. This should be a value between the Fast and the Default timeout. Set to `0` to disable timeouts. |
| `gitlab_dedicated_instance` | boolean | no | Indicates whether the instance was provisioned for GitLab Dedicated. |
+| `gitlab_shell_operation_limit` | integer | no | Maximum number of Git operations per minute a user can perform. Default: `600`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/412088) in GitLab 16.2. |
| `grafana_enabled` | boolean | no | Enable Grafana. |
| `grafana_url` | string | no | Grafana URL. |
| `gravatar_enabled` | boolean | no | Enable Gravatar. |
@@ -452,9 +454,10 @@ listed in the descriptions of the relevant settings.
| `housekeeping_optimize_repository_period`| integer | no | Number of Git pushes after which an incremental `git repack` is run. |
| `html_emails_enabled` | boolean | no | Enable HTML emails. |
| `import_sources` | array of strings | no | Sources to allow project import from, possible values: `github`, `bitbucket`, `bitbucket_server`, `fogbugz`, `git`, `gitlab_project`, `gitea`, and `manifest`. |
-| `in_product_marketing_emails_enabled` | boolean | no | Enable [in-product marketing emails](../user/profile/notifications.md#global-notification-settings). Enabled by default. |
| `invisible_captcha_enabled` | boolean | no | Enable Invisible CAPTCHA spam detection during sign-up. Disabled by default. |
| `issues_create_limit` | integer | no | Max number of issue creation requests per minute per user. Disabled by default.|
+| `jira_connect_application_key` | string | no | Application ID of the OAuth application that should be used to authenticate with the GitLab for Jira Cloud app. |
+| `jira_connect_proxy_url` | string | no | URL of the GitLab instance that should be used as a proxy for the GitLab for Jira Cloud app. |
| `keep_latest_artifact` | boolean | no | Prevent the deletion of the artifacts from the most recent successful jobs, regardless of the expiry time. Enabled by default. |
| `local_markdown_version` | integer | no | Increase this value when any cached Markdown should be invalidated. |
| `mailgun_signing_key` | string | no | The Mailgun HTTP webhook signing key for receiving events from webhook. |
@@ -534,13 +537,13 @@ listed in the descriptions of the relevant settings.
| `repository_checks_enabled` | boolean | no | GitLab periodically runs `git fsck` in all project and wiki repositories to look for silent disk corruption issues. |
| `repository_size_limit` **(PREMIUM ALL)** | integer | no | Size limit per repository (MB) |
| `repository_storages_weighted` | hash of strings to integers | no | (GitLab 13.1 and later) Hash of storage names taken from `gitlab.yml` to [weights](../administration/repository_storage_paths.md#configure-where-new-repositories-are-stored). New projects are created in one of these stores, chosen by a weighted random selection. |
-| `repository_storages` | array of strings | no | (GitLab 13.0 and earlier) List of names of enabled storage paths, taken from `gitlab.yml`. New projects are created in one of these stores, chosen at random. |
| `require_admin_approval_after_user_signup` | boolean | no | When enabled, any user that signs up for an account using the registration form is placed under a **Pending approval** state and has to be explicitly [approved](../administration/moderate_users.md) by an administrator. |
| `require_two_factor_authentication` | boolean | no | (**If enabled, requires:** `two_factor_grace_period`) Require all users to set up Two-factor authentication. |
| `restricted_visibility_levels` | array of strings | no | Selected levels cannot be used by non-Administrator users for groups, projects or snippets. Can take `private`, `internal` and `public` as a parameter. Default is `null`, which means there is no restriction. [Changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131203) in GitLab 16.4: cannot select levels that are set as `default_project_visibility` and `default_group_visibility`. |
| `rsa_key_restriction` | integer | no | The minimum allowed bit length of an uploaded RSA key. Default is `0` (no restriction). `-1` disables RSA keys. |
| `session_expire_delay` | integer | no | Session duration in minutes. GitLab restart is required to apply changes. |
| `security_policy_global_group_approvers_enabled` | boolean | no | Whether to look up scan result policy approval groups globally or within project hierarchies. |
+| `service_access_tokens_expiration_enforced` | boolean | no | Flag to indicate whether the token expiry date can be optional for service account users. |
| `shared_runners_enabled` | boolean | no | (**If enabled, requires:** `shared_runners_text` and `shared_runners_minutes`) Enable shared runners for new projects. |
| `shared_runners_minutes` **(PREMIUM ALL)** | integer | required by: `shared_runners_enabled` | Set the maximum number of compute minutes that a group can use on shared runners per month. |
| `shared_runners_text` | string | required by: `shared_runners_enabled` | Shared runners text. |
@@ -572,6 +575,7 @@ listed in the descriptions of the relevant settings.
| `spam_check_endpoint_url` | string | no | URL of the external Spamcheck service endpoint. Valid URI schemes are `grpc` or `tls`. Specifying `tls` forces communication to be encrypted.|
| `spam_check_api_key` | string | no | API key used by GitLab for accessing the Spam Check service endpoint. |
| `suggest_pipeline_enabled` | boolean | no | Enable pipeline suggestion banner. |
+| `enable_artifact_external_redirect_warning_page` | boolean | no | Show the external redirect page that warns you about user-generated content in GitLab Pages. |
| `terminal_max_session_time` | integer | no | Maximum time for web terminal websocket connection (in seconds). Set to `0` for unlimited time. |
| `terms` | text | required by: `enforce_terms` | (**Required by:** `enforce_terms`) Markdown content for the ToS. |
| `throttle_authenticated_api_enabled` | boolean | no | (**If enabled, requires:** `throttle_authenticated_api_period_in_seconds` and `throttle_authenticated_api_requests_per_period`) Enable authenticated API request rate limit. Helps reduce request volume (for example, from crawlers or abusive bots). |
@@ -613,9 +617,6 @@ listed in the descriptions of the relevant settings.
| `valid_runner_registrars` | array of strings | no | List of types which are allowed to register a GitLab Runner. Can be `[]`, `['group']`, `['project']` or `['group', 'project']`. |
| `whats_new_variant` | string | no | What's new variant, possible values: `all_tiers`, `current_tier`, and `disabled`. |
| `wiki_page_max_content_bytes` | integer | no | Maximum wiki page content size in **bytes**. Default: 52428800 Bytes (50 MB). The minimum value is 1024 bytes. |
-| `jira_connect_application_key` | String | no | Application ID of the OAuth application that should be used to authenticate with the GitLab for Jira Cloud app |
-| `jira_connect_proxy_url` | String | no | URL of the GitLab instance that should be used as a proxy for the GitLab for Jira Cloud app |
-| `gitlab_shell_operation_limit` | integer | no | Maximum number of Git operations per minute a user can perform. Default: `600`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/412088) in GitLab 16.2. |
### Configure inactive project deletion
diff --git a/doc/api/users.md b/doc/api/users.md
index 118008848f3..cb9951a1c45 100644
--- a/doc/api/users.md
+++ b/doc/api/users.md
@@ -2142,9 +2142,14 @@ Example response:
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131923) in GitLab 16.5.
Use this API to create a new personal access token for the currently authenticated user.
-For security purposes, the scopes are limited to only `k8s_proxy` and by default the token will expire by
-the end of the day it was created at.
-Token values are returned once so, make sure you save it as you can't access it again.
+For security purposes, the token:
+
+- Is limited to the [`k8s_proxy` scope](../user/profile/personal_access_tokens.md#personal-access-token-scopes).
+ This scope grants permission to perform Kubernetes API calls using the agent for Kubernetes.
+- By default, expires at the end of the day it was created on.
+
+Token values are returned once, so make sure you save the token as you cannot access
+it again.
```plaintext
POST /user/personal_access_tokens
@@ -2331,6 +2336,7 @@ Prerequisites:
- You must be an administrator or have the Owner role of the target namespace or project.
- For `instance_type`, you must be an administrator of the GitLab instance.
+- For `group_type` or `project_type` with the Owner role, an administrator must not have [restricted runner registration by all users in an instance](../administration/settings/continuous_integration.md#restrict-runner-registration-by-all-users-in-an-instance).
- An access token with the `create_runner` scope.
Be sure to copy or save the `token` in the response; the value cannot be retrieved again.
diff --git a/doc/architecture/blueprints/cdot_orders/index.md b/doc/architecture/blueprints/cdot_orders/index.md
new file mode 100644
index 00000000000..924a50d2b8a
--- /dev/null
+++ b/doc/architecture/blueprints/cdot_orders/index.md
@@ -0,0 +1,265 @@
+---
+status: proposed
+creation-date: "2023-10-12"
+authors: [ "@tyleramos" ]
+coach: "@fabiopitino"
+approvers: [ "@tgolubeva", "@jameslopez" ]
+owning-stage: "~devops::fulfillment"
+participating-stages: []
+---
+
+# Align CustomersDot Orders with Zuora Orders
+
+## Summary
+
+The [GitLab Customers Portal](https://customers.gitlab.com/) is an application separate from the GitLab product that allows GitLab Customers to manage their account and subscriptions, and to perform tasks like purchasing additional seats. More information about the Customers Portal can be found in [the GitLab docs](../../../subscriptions/customers_portal.md). Internally, the application is known as [CustomersDot](https://gitlab.com/gitlab-org/customers-gitlab-com) (also known as CDot).
+
+GitLab uses [Zuora's platform](https://about.gitlab.com/handbook/business-technology/enterprise-applications/guides/zuora/) to manage their subscription-based services. CustomersDot integrates directly with Zuora Billing and treats [Zuora Billing](https://about.gitlab.com/handbook/finance/accounting/finance-ops/billing-ops/zuora-billing/) as the single source of truth for subscription data.
+
+CustomersDot stores some subscription and order data locally, in the form of the `orders` database table, which at times can be out of sync with Zuora Billing. The main objective for this blueprint is to lay out a plan for improving the integration with Zuora Billing, making it more reliable, accurate, and performant.
+
+## Motivation
+
+Working with the `Order` model in CustomersDot has been a challenge for Fulfillment engineers. It is difficult to trust `Order` data as it can get out of sync with the single source of truth for subscription data, Zuora Billing. This has led to bugs, confusion and delays in feature development. An [epic exists for aligning CustomersDot Orders with Zuora objects](https://gitlab.com/groups/gitlab-org/-/epics/9748) which lists a variety of issues related to these data integrity problems. The motivation of this blueprint is to develop a better data architecture in CustomersDot for Subscriptions and associated data models which builds trust and reduces bugs.
+
+### Goals
+
+This re-architecture project has several objectives:
+
+- Increase the accuracy of CustomersDot data pertaining to Subscriptions and their entitlements. This data is stored as `Order` records in CustomersDot. It is not granular enough to represent what the customer has purchased, and it is error prone, as shown by the following issues:
+ - [Multiple order records for the same subscription](https://gitlab.com/gitlab-org/customers-gitlab-com/-/issues/6971)
+ - [Multiple subscriptions active for the same namespace](https://gitlab.com/gitlab-org/customers-gitlab-com/-/issues/6972)
+ - [Support Multiple Active Orders on a Namespace](https://gitlab.com/groups/gitlab-org/-/epics/9486)
+- Continue to align with Zuora Billing being the SSoT for Subscription and Order data.
+- Decrease reliance on Zuora Billing uptime.
+- Improve CustomersDot performance by storing relevant Subscription data locally and keeping it in sync with Zuora Billing. This could be a key piece to making Seat Link more efficient and reliable.
+- Eliminate confusion between CustomersDot Orders, which contain data more closely resembling a Subscription, and [Zuora Orders](https://knowledgecenter.zuora.com/Zuora_Billing/Manage_subscription_transactions/Orders), which represent a transaction between a customer and merchant and can apply to multiple Subscriptions.
+  - The CustomersDot `orders` table contains a mixture of Zuora Subscription data and trials, along with GitLab-specific metadata like sync timestamps with GitLab.com. GitLab does not store trial subscriptions in Zuora at this time.
+
+## Proposal
+
+As the list of goals above shows, there are many desired outcomes we would like to see at the end of implementation. To reach these goals, we will break this work up into smaller iterations.
+
+1. [Phase one: Zuora Subscription Cache](#phase-one-zuora-subscription-cache)
+
+ The first iteration focuses on adding a local cache for Zuora Subscription objects, including Rate Plans, Rate Plan Charges, and Rate Plan Charge Tiers, in CustomersDot.
+
+1. [Phase two: Utilize Zuora Cache Models](#phase-two-utilize-zuora-cache-models)
+
+ The second phase involves using the Zuora cache models introduced in phase one. Any code in CustomersDot that makes a read request to Zuora for Subscription data should be replaced with an ActiveRecord query. This should result in a big performance improvement.
+
+1. [Phase three: Transition from `Order` to `Subscription`](#phase-three-transition-from-order-to-subscription)
+
+ The next iteration focuses on transitioning away from the CustomersDot `Order` model to a new model for Subscription.
+
+## Design and implementation details
+
+### Phase one: Zuora Subscription Cache
+
+The first phase for this blueprint focuses on adding new models for caching Zuora Subscription data locally in CustomersDot. These local data models will allow CustomersDot to query the local database for Zuora Subscriptions. Currently, this requires querying Zuora directly, which can be problematic if Zuora is experiencing downtime. Zuora also has rate limits for API usage, which we want to avoid hitting as CustomersDot continues to scale.
+
+This phase will consist of creating the new data models, building the mechanisms to keep the local data in sync with Zuora, and backfilling the existing data. It will be important that the local cache models are read-only for most of the application to ensure the data is always in sync. Only the syncing mechanism should have the ability to write to these models.
+
+#### Proposed DB schema
+
+```mermaid
+erDiagram
+ "Zuora::Subscription" ||--|{ "Zuora::RatePlan" : "has many"
+ "Zuora::RatePlan" ||--|{ "Zuora::RatePlanCharge" : "has many"
+ "Zuora::RatePlanCharge" ||--|{ "Zuora::RatePlanChargeTier" : "has many"
+
+ "Zuora::Subscription" {
+ string(64) zuora_id PK "`id` field on Zuora Subscription"
+ string(64) account_id
+ string name
+ string(64) previous_subscription_id
+ string status
+ date term_start_date
+ date term_end_date
+ int version
+ boolean auto_renew "null:false default:false"
+ date cancelled_date
+ string(64) created_by_id
+ integer current_term
+ string current_term_period_type
+ string eoa_starter_bronze_offer_accepted__c
+ string external_subscription_id__c
+ string external_subscription_source__c
+ string git_lab_namespace_id__c
+ string git_lab_namespace_name__c
+ integer initial_term
+ string(64) invoice_owner_id
+ string notes
+ string opportunity_id__c
+ string(64) original_id
+ string(64) ramp_id
+ string renewal_subscription__c__c
+ integer renewal_term
+ date subscription_end_date
+ date subscription_start_date
+ string turn_on_auto_renew__c
+ string turn_on_cloud_licensing__c
+ string turn_on_operational_metrics__c
+ string turn_on_seat_reconciliation__c
+ datetime created_date
+ datetime updated_date
+ datetime created_at
+ datetime updated_at
+ }
+
+ "Zuora::RatePlan" {
+ string(64) zuora_id PK "`id` field on Zuora RatePlan"
+ string(64) subscription_id FK
+ string name
+ string(64) product_rate_plan_id
+ datetime created_date
+ datetime updated_date
+ datetime created_at
+ datetime updated_at
+ }
+
+ "Zuora::RatePlanCharge" {
+ string(64) zuora_id PK "`id` field on Zuora RatePlanCharge"
+ string(64) rate_plan_id FK
+ string(64) product_rate_plan_charge_id
+ int quantity
+ date effective_start_date
+ date effective_end_date
+ string price_change_option
+ string charge_number
+ string charge_type
+ boolean is_last_segment "null:false default:false"
+ int segment
+ int mrr
+ int tcv
+ int dmrc
+ int dtcv
+ string(64) subscription_id
+ string(64) subscription_owner_id
+ int version
+ datetime created_date
+ datetime updated_date
+ datetime created_at
+ datetime updated_at
+ }
+
+ "Zuora::RatePlanChargeTier" {
+ string zuora_id PK "`id` field on Zuora RatePlanChargeTier"
+ string rate_plan_charge_id FK
+ string price
+ datetime created_date
+ datetime updated_date
+ datetime created_at
+ datetime updated_at
+ }
+```
+
+#### Notes
+
+- The namespace `Zuora` is already taken by the classes used to extend `IronBank` resource classes. It was decided to move these to the namespace `Zuora::Remote` to indicate these are intended to reach out to Zuora. This frees up the `Zuora` namespace to be used to group the models related to Zuora cached data.
+- All versions of Zuora Subscriptions will be stored in this table to support displaying current as well as future purchases when Zuora is down. One of the guiding principles from the Architecture Review meeting on 2023-08-06 was "Customers should be able to view and access what they purchased even if Zuora is down". Given that customers can make future-dated purchases, CustomersDot needs to store current and future versions of Subscriptions.
+- `zuora_id` would be the primary key, given we want to avoid the field name `id`, which has special meaning in ActiveRecord.
+- The timezone for Zuora Billing is configured as Pacific Time. Let's account for this timezone as we sync data from Zuora into CDot's cached models to allow for more accurate comparisons.
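+
+To make the notes above concrete, a cached model could look roughly like the following sketch. The class, table, and association names are assumptions derived from the proposed schema, not final implementation details:
+
+```ruby
+# Sketch only: a read-only cached copy of a Zuora Subscription.
+module Zuora
+  class Subscription < ApplicationRecord
+    self.table_name = 'zuora_subscriptions'
+    self.primary_key = 'zuora_id'
+
+    has_many :rate_plans,
+             class_name: 'Zuora::RatePlan',
+             foreign_key: :subscription_id
+
+    # Most of the application treats cached records as read-only.
+    # The sync mechanism can rely on bulk writes (for example, `upsert_all`)
+    # that do not go through instance-level persistence.
+    def readonly?
+      true
+    end
+  end
+end
+```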
+
+#### Keeping data in sync with Zuora
+
+CDot currently receives and processes `Order Processed` Zuora callouts for Order actions like `Update Product` ([full list](https://gitlab.com/gitlab-org/customers-gitlab-com/-/blob/64c5d17bac38bef1156e9a15008cc7d2b9aa46a9/lib/zuora/order.rb#L26)). These callouts help to keep CustomersDot in sync with Zuora and trigger provisioning events. They will also be important for keeping `Zuora::Subscription` and related cached models in sync with changes in Zuora.
+
+These existing callouts would not be sufficient to cover all changes to a Zuora Subscription, however. In particular, changes to custom fields may not be captured by them. We will need to create custom events and callouts for any custom field cached in CustomersDot to ensure CDot stays in sync with Zuora. This should only affect `Zuora::Subscription`, as no custom fields are used by CustomersDot on any of the other proposed cached resources at this time.
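+
+As a rough illustration, and assuming the hypothetical method and attribute names below, the callout processing could refresh the cache with a bulk upsert keyed on `zuora_id`, which also plays well with read-only cache models because it bypasses instance-level persistence:
+
+```ruby
+# Sketch: refresh the cached subscription row when an `Order Processed`
+# callout is processed. `remote` stands for the subscription data already
+# fetched from Zuora through the Zuora::Remote classes.
+def cache_subscription!(remote)
+  Zuora::Subscription.upsert(
+    {
+      zuora_id: remote.id,
+      account_id: remote.account_id,
+      name: remote.name,
+      status: remote.status,
+      version: remote.version,
+      term_start_date: remote.term_start_date,
+      term_end_date: remote.term_end_date
+    },
+    unique_by: :zuora_id
+  )
+end
+```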
+
+#### Rollout of Zuora Cache models
+
+For the first iteration of introducing the cached Zuora data models, we will take an incremental approach to the rollout. There should be no impact on existing functionality as we build out the models, start populating the data through callouts, and backfill these models. Once this is in place, we will iteratively update existing features to use these cached data models instead of querying Zuora directly.
+
+We will make this transition using many small, scoped feature flags, rather than one large feature flag gating all of the new logic that uses these cache models. This will help us deliver more quickly and reduce how long feature flag logic and its test cases must be maintained.
+
+Testing can be performed before the cached models are used in the codebase, to ensure the data integrity of the cached models.
+
+### Phase two: Utilize Zuora Cache Models
+
+This is the second phase of work in the Orders re-architecture. In this phase, the focus will be on using the new Zuora cache data models introduced in phase one. Querying Zuora for Subscription data is fundamental to CustomersDot, so there are plenty of places that will need to be updated. Wherever CDot reads Subscription data from Zuora, the read can be replaced by querying the local cache data models instead. This should result in a big performance boost by avoiding third-party requests, particularly in components like the Seat Link Service.
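+
+As an illustration, a read that currently goes over the network could be swapped for a local query along these lines (the `Zuora::Remote` name follows the namespacing note from phase one; the exact call being replaced is hypothetical):
+
+```ruby
+# Sketch: resolve a subscription from the local cache instead of Zuora.
+def find_cached_subscription(subscription_name)
+  # Before: a remote read through the IronBank-backed classes, subject to
+  # Zuora downtime and API rate limits, for example:
+  #   Zuora::Remote::Subscription.where(name: subscription_name).first
+  #
+  # After: a local ActiveRecord query against the cached models,
+  # picking the latest synced version of the subscription.
+  Zuora::Subscription
+    .where(name: subscription_name)
+    .order(version: :desc)
+    .first
+end
+```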
+
+This transition will be completed using many small, scoped feature flags, rather than one large feature flag gating all of the new logic that uses these cache models. This will help us deliver more quickly and reduce how long feature flag logic and its test cases must be maintained.
+
+### Phase three: Transition from `Order` to `Subscription`
+
+The third phase for this blueprint focuses on transitioning away from the CustomersDot `Order` model to a new model for `Subscription`. This phase will consist of creating a new model for `Subscription`, supporting both models during the transition period, updating existing code to use `Subscription`, and finally removing the `Order` model once it is no longer needed.
+
+Replacing the `Order` model with a `Subscription` model should address the goal of eliminating confusion around the `Order` model. The data stored in the CustomersDot `Order` model does not correspond to a Zuora Order. It more closely resembles a Zuora Subscription with some additional metadata about syncing with GitLab.com. The transition to a `Subscription` model, along with the local cache layer in phase one, should address the goal of better data accuracy and building trust in CustomersDot data.
+
+#### Proposed DB schema
+
+```mermaid
+erDiagram
+ Subscription ||--|{ "Zuora::Subscription" : "has many"
+
+ Subscription {
+ bigint id PK
+ bigint billing_account_id
+ string(64) zuora_account_id
+ string(64) zuora_subscription_id
+ string zuora_subscription_name
+ string gitlab_namespace_id
+ string gitlab_namespace_name
+ datetime last_extra_ci_minutes_sync_at
+ datetime increased_billing_rate_notified_at
+ boolean reconciliation_accepted "null:false default:false"
+ datetime seat_overage_notified_at
+ datetime auto_renew_error_notified_at
+ date monthly_seat_digest_notified_on
+ datetime created_at
+ datetime updated_at
+ }
+
+ "Zuora::Subscription" {
+ string(64) zuora_id PK "`id` field on Zuora Subscription"
+ string(64) account_id
+ string name
+ }
+```
+
+#### Notes
+
+- The name for this model is up for debate given a `Subscription` model already exists. The existing model could be renamed with the hope of eventually replacing it with the new model.
+- This model serves as a record of the Subscription that is modifiable by the CDot application, whereas the `Zuora::Subscription` table should remain read-only.
+- `zuora_account_id` could be added as a convenience but could also be fetched via the `billing_account`.
+- There will be one `Subscription` record per actual subscription instead of a Subscription version.
+ - This has the advantage of avoiding duplication of fields like `gitlab_namespace_id` or `last_extra_ci_minutes_sync_at`.
+ - The `zuora_subscription_id` column could be removed or kept as a reference to the latest Zuora Subscription version.
+
+#### Keeping data in sync with Zuora
+
+The `Subscription` model should stay in sync with Zuora as subscriptions are created or updated. This model will be synced when we sync `Zuora::Subscription` records, similar to how the cached models are synced when processing Zuora callouts, as described in phase one. When saving a new version of a `Zuora::Subscription`, CDot could update the `Subscription` record with the matching `zuora_subscription_name`, or create a `Subscription` if one does not exist. The `zuora_subscription_id` would be set to the latest version on typical updates. Most of the data on `Subscription` is GitLab metadata (for example, `last_extra_ci_minutes_sync_at`), so it wouldn't need to be updated.
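+
+A minimal sketch of that update path, reusing the column names from the proposed schema above (account transfers are handled separately, as discussed below):
+
+```ruby
+# Sketch: when a new Zuora::Subscription version is cached, make sure a
+# matching Subscription exists and points at the latest version.
+def sync_subscription_record!(zuora_subscription)
+  subscription = Subscription.find_or_initialize_by(
+    zuora_subscription_name: zuora_subscription.name
+  )
+
+  # GitLab-specific metadata columns (for example, last_extra_ci_minutes_sync_at)
+  # are intentionally left untouched here.
+  subscription.update!(zuora_subscription_id: zuora_subscription.zuora_id)
+end
+```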
+
+The exceptions to this update rule are the `zuora_account_id` and `billing_account_id` attributes. Let's consider the current behavior when processing an `Order Processed` callout in CDot if the `zuora_account_id` changes for a Zuora Subscription:
+
+1. The Billing Account Membership is updated to the new Billing Account for the CDot `Customer` matching the Sold To email address.
+1. CDot attempts to find the CDot `Order` with the new `billing_account_id` and `subscription_name`.
+1. If an `Order` isn't found matching these criteria, a new `Order` is created. This leads to two `Order` records for the same Zuora Subscription.
+
+This scenario should be avoided for the new `Subscription` model. One `Subscription` should exist for a unique `Zuora::Subscription` name. If the Zuora Subscription transfers Accounts, the `Subscription` should as well.
+
+#### Unknowns
+
+Several unknowns are outlined below. As we get further into implementation, these unknowns should become clearer.
+
+##### Trial data in Subscription?
+
+The CDot `Order` model contains paid subscription data as well as trials. For `Subscription`, we could choose to continue to have paid subscription and trial data together in the same table, or break them into their own models.
+
+The `orders` table has fields for `customer_id` and `trial` which only really concern trials. Should these fields be added to the `Subscription` table? Should `Subscription` contain trial information if it doesn't exist in Zuora?
+
+If trial orders were broken out into their own table, these are the columns likely needed for a (SaaS) `trials` table:
+
+- `customer_id`
+- `product_rate_plan_id` (or rename to `plan_id` or use `plan_code`)
+- `quantity`
+- `start_date`
+- `end_date`
+- `gl_namespace_id`
+- `gl_namespace_name`
+
+### Resources
+
+- [FY24Q3 OKR - Create plan to align CustomersDot Orders to Zuora Orders](https://gitlab.com/gitlab-com/gitlab-OKRs/-/work_items/3378)
+- [Epic &9748 - Align CustomersDot Orders to Zuora objects](https://gitlab.com/groups/gitlab-org/-/epics/9748)
diff --git a/doc/architecture/blueprints/cells/impacted_features/personal-access-tokens.md b/doc/architecture/blueprints/cells/impacted_features/personal-access-tokens.md
index 3aca9f1e116..a493a1c4395 100644
--- a/doc/architecture/blueprints/cells/impacted_features/personal-access-tokens.md
+++ b/doc/architecture/blueprints/cells/impacted_features/personal-access-tokens.md
@@ -17,13 +17,37 @@ we can document the reasons for not choosing this approach.
## 1. Definition
-Personal Access Tokens associated with a User are a way for Users to interact with the API of GitLab to perform operations.
-Personal Access Tokens today are scoped to the User, and can access all Groups that a User has access to.
+Personal Access Tokens (PATs) associated with a User are a way for Users to interact with the API of GitLab to perform operations.
+PATs today are scoped to the User, and can access all Groups that a User has access to.
## 2. Data flow
## 3. Proposal
+### 3.1. Organization-scoped PATs
+
+Pros:
+
+- Can be managed entirely from the Rails application.
+- Increased security. The PAT is limited to a single Organization.
+
+Cons:
+
+- A different PAT is needed for each Organization.
+- Cannot tell at a glance if a PAT will apply to a certain Project/Namespace.
+
+### 3.2. Cluster-wide PATs
+
+Pros:
+
+- User does not have to worry about which scope the PAT applies to.
+
+Cons:
+
+- User has to worry about the wide-ranging scope of the PAT (for example, separation of personal items versus work items).
+- An Organization cannot limit the scope of a PAT to only that Organization.
+- Increases complexity. All cluster-wide data will likely be moved to a separate [data access layer](../../cells/index.md#1-data-access-layer).
+
## 4. Evaluation
## 4.1. Pros
diff --git a/doc/architecture/blueprints/cells/index.md b/doc/architecture/blueprints/cells/index.md
index 1366d308487..c9a03830a4a 100644
--- a/doc/architecture/blueprints/cells/index.md
+++ b/doc/architecture/blueprints/cells/index.md
@@ -338,6 +338,7 @@ Below is a list of known affected features with preliminary proposed solutions.
- [Cells: Global Search](impacted_features/global-search.md)
- [Cells: GraphQL](impacted_features/graphql.md)
- [Cells: Organizations](impacted_features/organizations.md)
+- [Cells: Personal Access Tokens](impacted_features/personal-access-tokens.md)
- [Cells: Personal Namespaces](impacted_features/personal-namespaces.md)
- [Cells: Secrets](impacted_features/secrets.md)
- [Cells: Snippets](impacted_features/snippets.md)
@@ -354,7 +355,6 @@ The following list of impacted features only represents placeholders that still
- [Cells: Group Transfer](impacted_features/group-transfer.md)
- [Cells: Issues](impacted_features/issues.md)
- [Cells: Merge Requests](impacted_features/merge-requests.md)
-- [Cells: Personal Access Tokens](impacted_features/personal-access-tokens.md)
- [Cells: Project Transfer](impacted_features/project-transfer.md)
- [Cells: Router Endpoints Classification](impacted_features/router-endpoints-classification.md)
- [Cells: Schema changes (Postgres and Elasticsearch migrations)](impacted_features/schema-changes.md)
diff --git a/doc/architecture/blueprints/ci_pipeline_components/img/catalogs.png b/doc/architecture/blueprints/ci_pipeline_components/img/catalogs.png
deleted file mode 100644
index 8c83aede186..00000000000
--- a/doc/architecture/blueprints/ci_pipeline_components/img/catalogs.png
+++ /dev/null
Binary files differ
diff --git a/doc/architecture/blueprints/ci_pipeline_components/index.md b/doc/architecture/blueprints/ci_pipeline_components/index.md
index 46b8f361949..9fdbf8cb70b 100644
--- a/doc/architecture/blueprints/ci_pipeline_components/index.md
+++ b/doc/architecture/blueprints/ci_pipeline_components/index.md
@@ -105,6 +105,7 @@ identifying abstract concepts and are subject to changes as we refine the design
allows components to be pinned to a specific revision.
- **Step** is a type of component that contains a collection of instructions for job execution.
- **Template** is a type of component that contains a snippet of CI/CD configuration that can be [included](../../../ci/yaml/includes.md) in a project's pipeline configuration.
+- **Publishing** is the act of listing a version of the resource (for example, a project release) on the Catalog.
## Definition of pipeline component
@@ -524,17 +525,26 @@ spec:
The CI Catalog is an index of resources that users can leverage in CI/CD. It initially
contains a list of components repositories that users can discover and use in their pipelines.
+Users only see resources allowed by their permissions and the project visibility level.
+Unauthenticated users only see public resources.
+
+Project admins are responsible for setting the project's visibility to private or public.
+The CI Catalog should not provide security features such as preventing projects from appearing in the Community Catalog.
+If the project is public, it's visible to the world anyway.
+
+The Catalog page can provide different filters to refine the user search, including
+predefined filters such as resources from groups the user is a member of.
In the future, the Catalog could also contain other types of resources (for example:
-integrations, project templates, etc.).
+integrations, project templates, or container images).
To list a components repository in the Catalog we need to mark the project as being a
-catalog resource. We do that initially with an API endpoint, similar to changing a project setting.
+catalog resource. We do that initially with a project setting.
-Once a project is marked as a "catalog resource" it can be displayed in the Catalog.
+Once a project is marked as a "catalog resource", it can eventually be displayed in the Catalog.
-We could create a database record when the API endpoint is used and remove the record when
-the same is disabled/removed.
+We could create a database record when the setting is enabled and modify the record's state when
+the setting is disabled.
## Catalog resource
@@ -552,9 +562,6 @@ Other properties of a catalog resource:
- indicators of popularity (stars, forks).
- categorization: user should select a category and or define search tags
-As soon as a components repository is marked as being a "catalog resource"
-we should be seeing the resource listed in the Catalog.
-
Initially for the resource, the project may not have any released tags.
Users would be able to use the components repository by specifying a branch name or
commit SHA for the version. However, these types of version qualifiers should not
@@ -564,10 +571,14 @@ be listed in the catalog resource's page for various reasons:
- Branches and tags may not be meaningful for the end-user.
- Branches and tags don't communicate versioning thoroughly.
+To list a catalog resource in the Catalog, we first need to create a release for
+the project.
+
## Releasing new resource versions to the Catalog
-The versions that should be displayed for the resource should be the project [releases](../../../user/project/releases/index.md).
-Creating project releases is an official act of versioning a resource.
+The versions that will be published for the resource should be the project
+[releases](../../../user/project/releases/index.md). Creating project releases is an official
+act of versioning a resource.
A resource page would have:
@@ -599,29 +610,6 @@ For example: index the content of `spec:` section for CI components.
See an [example of development workflow](dev_workflow.md) for a components repository.
-## Availability of CI catalog as a feature
-
-We plan to introduce 2 features of CI catalog as separate views:
-
-1. **Namespace Catalog (GitLab Ultimate):** allows organizations to share and discover catalog resources
- created inside the top-level namespace.
- Users will be able to access the Namespace Catalog from a project or subgroup inside the top-level
- namespace.
-1. **Community Catalog (GitLab free):** allows anyone in a GitLab instance to share and discover catalog
- resources. The Community Catalog presents only resources/projects that are public.
-
-If a resource in a Namespace Catalog is made public (changing the project's visibility) the resource is
-available in both Namespace Catalog (because it comes from there) as well as the Community Catalog
-(because it's public).
-
-![Namespace and Community Catalogs](img/catalogs.png)
-
-There is only 1 CI catalog. The Namespace and Community Catalogs are different views of the CI catalog.
-
-Project admins are responsible for setting the project private or public.
-The CI Catalog should not provide security functionalities like prevent projects from appearing in the Community Catalog.
-If the project is public it's visible to the world anyway.
-
## Note about future resource types
In the future, to support multiple types of resources in the Catalog we could
@@ -673,6 +661,8 @@ metadata:
## Iterations
+The first plan of iterations consisted of:
+
1. Experimentation phase
- Build an MVC behind a feature flag with `namespace` actor.
- Enable the feature flag only for `gitlab-com` and `gitlab-org` namespaces to initiate the dogfooding.
@@ -691,6 +681,9 @@ metadata:
components from GitLab.com or from repository exports.
- Iterate on feedback.
+In October 2023, after releasing the namespace view (previously called the private catalog view) as an Experiment, we changed
+focus, moving away from 2 separate views (namespace view and global view) and combining the UX into a single global view.
+
## Limits
Any MVC that exposes a feature should be added with limitations from the beginning.
diff --git a/doc/architecture/blueprints/cloud_connector/decisions/001_lb_entry_point.md b/doc/architecture/blueprints/cloud_connector/decisions/001_lb_entry_point.md
new file mode 100644
index 00000000000..d49b702be94
--- /dev/null
+++ b/doc/architecture/blueprints/cloud_connector/decisions/001_lb_entry_point.md
@@ -0,0 +1,52 @@
+---
+owning-stage: "~devops::data stores"
+description: 'Cloud Connector ADR 001: Use load balancer as single entry point'
+---
+
+# Cloud Connector ADR 001: Load balancer as single entry point
+
+## Context
+
+The original iteration of the blueprint suggested standing up a dedicated Cloud Connector edge service,
+through which all traffic that uses features under the Cloud Connector umbrella would pass.
+
+The primary reasons we wanted this to be a dedicated service were to:
+
+1. **Provide a single entry point for customers.** We identified the ability for any GitLab instance
+ around the world to consume Cloud Connector features through a single endpoint such as
+ `cloud.gitlab.com` as a must-have property.
+1. **Have the ability to execute custom logic.** There was a desire from product to create a space where we can
+ run cross-cutting business logic such as application-level rate limiting, which is hard or impossible to
+ do using a traditional load balancer such as HAProxy.
+
+## Decision
+
+We decided to take a smaller incremental step toward having a "smart router" by focusing on
+the ability to provide a single endpoint through which Cloud Connector traffic enters our
+infrastructure. This can be accomplished using simpler means than deploying dedicated services, specifically
+by pulling in a load balancing layer listening at `cloud.gitlab.com` that can also perform simple routing
+tasks to forward traffic into feature backends.
+
+Our reasons for this decision were:
+
+1. **Unclear requirements for custom logic to run.** We are still exploring how and to what extent we would
+ apply rate limiting logic at the Cloud Connector level. This is being explored in
+ [issue 429592](https://gitlab.com/gitlab-org/gitlab/-/issues/429592). Because we need to have a single
+ entry point by January, and because we think we will not be ready by then to implement such logic at the
+ Cloud Connector level, a web service is not required yet.
+1. **New use cases found that are not suitable to run through a dedicated service.** We started to work with
+ the Observability group to see how we can bring the GitLab Observability Backend (GOB) to Cloud Connector
+ customers in [MR 131577](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131577).
+ In this discussion it became clear that due to the large amounts of traffic and data volume passing
+ through GOB each day, putting another service in front of this stack does not provide a sensible
+ risk/benefit trade-off. Instead, we will probably split traffic and make Cloud Connector components
+ available through other means for special cases like these (for example, through a Cloud Connector library).
+
+We are exploring several options for load-balancing this new endpoint in [issue 429818](https://gitlab.com/gitlab-org/gitlab/-/issues/429818)
+and are working with the `Infrastructure:Foundations` team to deploy this in [issue 24711](https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/24711).
+
+## Consequences
+
+We have not yet discarded the plan to build a smart router eventually, either as a service or
+through other means, but have delayed this decision in the face of uncertainty at both a product
+and technical level. We will reassess how to proceed in Q1 2024.
diff --git a/doc/architecture/blueprints/cloud_connector/index.md b/doc/architecture/blueprints/cloud_connector/index.md
index 840e17a438a..9aef8bc7a98 100644
--- a/doc/architecture/blueprints/cloud_connector/index.md
+++ b/doc/architecture/blueprints/cloud_connector/index.md
@@ -68,7 +68,7 @@ Introducing a dedicated edge service for Cloud Connector serves the following go
we do not currently support.
- **Independently scalable.** For reasons of fault tolerance and scalability, it is beneficial to have all SM traffic go
through a separate service. For example, if an excess of unexpected requests arrive from SM instances due to a bug
- in a milestone release, this traffic could be absorbed at the CC gateway level without cascading downstream, thus leaving
+ in a milestone release, this traffic could be absorbed at the CC gateway level without cascading further, thus leaving
SaaS users unaffected.
### Non-goals
@@ -82,6 +82,10 @@ Introducing a dedicated edge service for Cloud Connector serves the following go
other systems using public key cryptographic checks. We may move some of the code around that currently implements this,
however.
+## Decisions
+
+- [ADR-001: Use load balancer as single entry point](decisions/001_lb_entry_point.md)
+
## Proposal
We propose to make two major changes to the current architecture:
@@ -133,7 +137,7 @@ The new service would be made available at `cloud.gitlab.com` and act as a "smar
It will have the following responsibilities:
1. **Request handling.** The service will make decisions about whether a particular request is handled
- in the service itself or forwarded to a downstream service. For example, a request to `/ai/code_suggestions/completions`
+ in the service itself or forwarded to other backends. For example, a request to `/ai/code_suggestions/completions`
could be handled by forwarding this request to an appropriate endpoint in the AI gateway unchanged, while a request
to `/-/metrics` could be handled by the service itself. As mentioned in [non-goals](#non-goals), the latter would not
include domain logic as it pertains to an end user feature, but rather cross-cutting logic such as telemetry, or
@@ -141,14 +145,14 @@ It will have the following responsibilities:
When handling requests, the service should be unopinionated about which protocol is used, to the extent possible.
Reasons for injecting custom logic could be setting additional HTTP header fields. A design principle should be
- to not require CC service deployments if a downstream service merely changes request payload or endpoint definitions. However,
+ to not require CC service deployments if a backend service merely changes request payload or endpoint definitions. However,
supporting more protocols on top of HTTP may require adding support in the CC service itself.
1. **Authentication/authorization.** The service will be the first point of contact for authenticating clients and verifying
they are authorized to use a particular CC feature. This will include fetching and caching public keys served from GitLab SaaS
and CustomersDot to decode JWT access tokens sent by GitLab instances, including matching token scopes to feature endpoints
to ensure an instance is eligible to consume this feature. This functionality will largely be lifted out of the AI gateway
where it currently lives. To maintain a ZeroTrust environment, the service will implement a more lightweight auth/z protocol
- with internal services downstream that merely performs general authenticity checks but forgoes billing and permission
+ with internal backends that merely performs general authenticity checks but forgoes billing and permission
related scoping checks. What this protocol will look like is to be decided, and might be further explored in
[Discussion: Standardized Authentication and Authorization between internal services and GitLab Rails](https://gitlab.com/gitlab-org/gitlab/-/issues/421983).
1. **Organization-level rate limits.** It is to be decided if this is needed, but there could be value in having application-level rate limits
diff --git a/doc/architecture/blueprints/container_registry_metadata_database/index.md b/doc/architecture/blueprints/container_registry_metadata_database/index.md
index 243270afdb2..c9f7f1c0d27 100644
--- a/doc/architecture/blueprints/container_registry_metadata_database/index.md
+++ b/doc/architecture/blueprints/container_registry_metadata_database/index.md
@@ -30,7 +30,7 @@ graph LR
R -- Write/read metadata --> B
```
-Client applications (for example, GitLab Rails and Docker CLI) interact with the Container Registry through its [HTTP API](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/api.md). The most common operations are pushing and pulling images to/from the registry, which require a series of HTTP requests in a specific order. The request flow for these operations is detailed in the [Request flow](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs-gitlab/push-pull-request-flow.md).
+Client applications (for example, GitLab Rails and Docker CLI) interact with the Container Registry through its [HTTP API](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/gitlab/api.md). The most common operations are pushing and pulling images to/from the registry, which require a series of HTTP requests in a specific order. The request flow for these operations is detailed in the [Request flow](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/push-pull-request-flow.md).
The registry supports multiple [storage backends](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/configuration.md#storage), including Google Cloud Storage (GCS) which is used for the GitLab.com registry. In the storage backend, images are stored as blobs, deduplicated, and shared across repositories. These are then linked (like a symlink) to each repository that relies on them, giving them access to the central storage location.
@@ -69,7 +69,7 @@ Please refer to the [Docker documentation](https://docs.docker.com/registry/spec
##### Push and Pull
-Push and pull commands are used to upload and download images, more precisely manifests and blobs. The push/pull flow is described in the [documentation](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs-gitlab/push-pull-request-flow.md).
+Push and pull commands are used to upload and download images, more precisely manifests and blobs. The push/pull flow is described in the [documentation](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/push-pull-request-flow.md).
#### GitLab Rails
@@ -86,7 +86,7 @@ The single entrypoint for the registry is the [HTTP API](https://gitlab.com/gitl
| [Check if manifest exists](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/api.md#existing-manifests) | **{check-circle}** Yes | **{dotted-circle}** No | Used to get the digest of a manifest by tag. This is then used to pull the manifest and show the tag details in the UI. |
| [Pull manifest](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/api.md#pulling-an-image-manifest) | **{check-circle}** Yes | **{dotted-circle}** No | Used to show the image size and the manifest digest in the tag details UI. |
| [Pull blob](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/api.md#pulling-a-layer) | **{check-circle}** Yes | **{dotted-circle}** No | Used to show the configuration digest and the creation date in the tag details UI. |
-| [Delete tag](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/api.md#deleting-a-tag) | **{check-circle}** Yes | **{check-circle}** Yes | Used to delete a tag from the UI and in background (cleanup policies). |
+| [Delete tag](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/api.md#delete-tag) | **{check-circle}** Yes | **{check-circle}** Yes | Used to delete a tag from the UI and in background (cleanup policies). |
A valid authentication token is generated in GitLab Rails and embedded in all these requests before sending them to the registry.
@@ -154,7 +154,7 @@ Following the GitLab [Go standards and style guidelines](../../../development/go
The design and development of the registry database adhere to the GitLab [database guidelines](../../../development/database/index.md). Being a Go application, the required tooling to support the database will have to be developed, such as for running database migrations.
-Running *online* and [*post deployment*](../../../development/database/post_deployment_migrations.md) migrations is already supported by the registry CLI, as described in the [documentation](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs-gitlab/database-migrations.md).
+Running *online* and [*post deployment*](../../../development/database/post_deployment_migrations.md) migrations is already supported by the registry CLI, as described in the [documentation](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/database-migrations.md).
#### Partitioning
@@ -224,7 +224,7 @@ This is a list of all the registry HTTP API operations and how they depend on th
| [Check API version](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/api.md#api-version-check) | `GET` | `/v2/` | **{dotted-circle}** No | **{dotted-circle}** No | **{check-circle}** Yes |
| [List repositories](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/api.md#listing-repositories) | `GET` | `/v2/_catalog` | **{check-circle}** Yes | **{dotted-circle}** No | **{dotted-circle}** No |
| [List repository tags](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/api.md#listing-image-tags) | `GET` | `/v2/<name>/tags/list` | **{check-circle}** Yes | **{dotted-circle}** No | **{check-circle}** Yes |
-| [Delete tag](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/api.md#deleting-a-tag) | `DELETE` | `/v2/<name>/tags/reference/<reference>` | **{check-circle}** Yes | **{dotted-circle}** No | **{check-circle}** Yes |
+| [Delete tag](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/api.md#delete-tag) | `DELETE` | `/v2/<name>/manifests/<reference>` | **{check-circle}** Yes | **{dotted-circle}** No | **{check-circle}** Yes |
| [Check if manifest exists](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/api.md#existing-manifests) | `HEAD` | `/v2/<name>/manifests/<reference>` | **{check-circle}** Yes | **{dotted-circle}** No | **{check-circle}** Yes |
| [Pull manifest](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/api.md#pulling-an-image-manifest) | `GET` | `/v2/<name>/manifests/<reference>` | **{check-circle}** Yes | **{dotted-circle}** No | **{check-circle}** Yes |
| [Push manifest](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/api.md#pushing-an-image-manifest) | `PUT` | `/v2/<name>/manifests/<reference>` | **{check-circle}** Yes | **{dotted-circle}** No | **{dotted-circle}** No |
diff --git a/doc/architecture/blueprints/container_registry_metadata_database_self_managed_rollout/index.md b/doc/architecture/blueprints/container_registry_metadata_database_self_managed_rollout/index.md
index 84a95e3e7c3..d91f2fdddbf 100644
--- a/doc/architecture/blueprints/container_registry_metadata_database_self_managed_rollout/index.md
+++ b/doc/architecture/blueprints/container_registry_metadata_database_self_managed_rollout/index.md
@@ -160,7 +160,7 @@ import which would lead to greater consistency across all storage driver impleme
### The Import Tool
-The [import tool](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs-gitlab/database-import-tool.md)
+The [import tool](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/database-import-tool.md)
is a well-validated component of the Container Registry project that we have used
from the beginning as a way to perform local testing. This tool is a thin wrapper
over the core import functionality — the code which handles the import logic has
diff --git a/doc/architecture/blueprints/email_ingestion/index.md b/doc/architecture/blueprints/email_ingestion/index.md
index 9579a903133..59086aed86a 100644
--- a/doc/architecture/blueprints/email_ingestion/index.md
+++ b/doc/architecture/blueprints/email_ingestion/index.md
@@ -36,7 +36,7 @@ The current implementation lacks scalability and requires significant infrastruc
Because we are using a fork of the `mail_room` gem ([`gitlab-mail_room`](https://gitlab.com/gitlab-org/ruby/gems/gitlab-mail_room)), which contains some GitLab-specific features that won't be ported upstream, we have a notable maintenance overhead.
-The [Service Desk Single-Engineer-Group (SEG)](https://about.gitlab.com/handbook/engineering/incubation/service-desk/) started work on [customizable email addresses for Service Desk](https://gitlab.com/gitlab-org/gitlab/-/issues/329990) and [released the first iteration in beta in `16.4`](https://about.gitlab.com/releases/2023/09/22/gitlab-16-4-released/#custom-email-address-for-service-desk). As a [MVC we introduced a `Forwarding & SMTP` mode](https://gitlab.com/gitlab-org/gitlab/-/issues/329990#note_1201344150) where administrators set up email forwarding from their custom email address to the projects' `incoming_mail` email address. They also provide SMTP credentials so GitLab can send emails from the custom email address on their behalf. We don't need any additional email ingestion other than the existing mechanics for this approach to work.
+The [Service Desk Single-Engineer-Group (SEG)](https://about.gitlab.com/handbook/engineering/development/incubation/service-desk/) started work on [customizable email addresses for Service Desk](https://gitlab.com/gitlab-org/gitlab/-/issues/329990) and [released the first iteration in beta in `16.4`](https://about.gitlab.com/releases/2023/09/22/gitlab-16-4-released/#custom-email-address-for-service-desk). As an [MVC we introduced a `Forwarding & SMTP` mode](https://gitlab.com/gitlab-org/gitlab/-/issues/329990#note_1201344150) where administrators set up email forwarding from their custom email address to the projects' `incoming_mail` email address. They also provide SMTP credentials so GitLab can send emails from the custom email address on their behalf. We don't need any additional email ingestion other than the existing mechanics for this approach to work.
As a second iteration we'd like to add Microsoft Graph support for custom email addresses for Service Desk as well. Therefore we need a way to ingest more than the system defined two addresses. We will explore a solution path for Microsoft Graph support where privileged users can connect a custom email account and we can [receive messages via a Microsoft Graph webhook (`Outlook message`)](https://learn.microsoft.com/en-us/graph/webhooks#supported-resources). GitLab would need a public endpoint to receive updates on emails. That might not work for Self-managed instances, so we'll need direct email ingestion for Microsoft customers as well. But using the webhook approach could improve performance and efficiency for GitLab SaaS where we potentially have thousands of mailboxes to poll.
diff --git a/doc/architecture/blueprints/feature_flags_usage_in_dev_and_ops/index.md b/doc/architecture/blueprints/feature_flags_usage_in_dev_and_ops/index.md
new file mode 100644
index 00000000000..ad6dd755607
--- /dev/null
+++ b/doc/architecture/blueprints/feature_flags_usage_in_dev_and_ops/index.md
@@ -0,0 +1,285 @@
+---
+status: proposed
+creation-date: "2023-11-01"
+authors: [ "@rymai" ]
+coach: "@DylanGriffith"
+approvers: []
+owning-stage: "~devops::non_devops"
+participating-stages: []
+---
+
+# Feature Flags usage in GitLab development and operations
+
+This blueprint builds upon [the Development Feature Flags Architecture blueprint](../feature_flags_development/index.md).
+
+## Summary
+
+Feature flags are critical to both developing and operating GitLab, but in the current state
+of the process, they can lead to production issues and introduce a lot of manual work and maintenance.
+
+The goal of this blueprint is to make the process safer, more maintainable, lightweight, automated, and transparent.
+
+## Motivations
+
+### Feature flag use-cases
+
+Feature flags can be used for different purposes:
+
+- De-risking GitLab.com deployments (most feature flags): Allows us to quickly enable or disable
+  a feature flag in production in the event of a production incident.
+- Work-in-progress feature: Some features are complex and need to be implemented through several MRs. Until they're fully implemented, they need
+  to be hidden from users. In that case, the feature flag allows all the changes to be merged into the main branch without actually exposing
+  the feature yet.
+- Beta features: We might
+ [not be confident we'll be able to scale, support, and maintain a feature](https://about.gitlab.com/handbook/product/gitlab-the-product/#experiment-beta-ga)
+ in its current form for every designed use case ([example](https://gitlab.com/gitlab-org/gitlab/-/issues/336070#note_1523983444)).
+ There are also scenarios where a feature is not complete enough to be considered an MVC.
+ Providing a flag in this case allows engineers and customers to disable the new feature until it's performant enough.
+- Operations: Site reliability engineers or Support engineers can use these flags to
+  disable potentially resource-heavy features in order to bring the instance back to a
+  more stable and available state. Another example is SaaS-only features.
+- Experiment: A/B testing on GitLab.com.
+- Worker (special `ops` feature flag): Used for controlling Sidekiq worker behavior, such as deferring Sidekiq jobs.
+
+We need to better categorize our feature flags.
+
+### Production incidents related to feature flags
+
+Feature flags have caused production incidents on GitLab.com ([1](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/5289), [2](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/4155), [3](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/16366)).
+
+We need to prevent this for the sake of GitLab.com stability.
+
+### Technical debt caused by feature flags
+
+Feature flags are also becoming an ever-growing source of technical debt: there are currently
+[591 feature flags in the GitLab codebase](../../../user/feature_flags.md).
+
+We need to reduce the feature flag count for the sake of the long-term maintainability and quality of the GitLab codebase.
+
+## Goal
+
+The goal of this blueprint is to improve the feature flag process by making it:
+
+- safer
+- more maintainable
+- more lightweight and automated
+- more transparent
+
+## Challenges
+
+### Complex feature flag rollout process
+
+The feature flag rollout process is currently:
+
+- Complex: Rollout issues are very manual and include a lot of checkboxes
+  (including irrelevant checkboxes).
+  Engineers often don't use these issues, which tend to become stale and forgotten over time.
+- Not very transparent: Feature flag changes are logged in several places far from the rollout
+ issue, which makes it hard to understand the latest feature flag state.
+- Far from production processes: Rollout issues are created in the `gitlab-org/gitlab` project
+ (far from the production issue tracker).
+- There is no consistent path to rolling out feature flags: we leave it to the judgment of the
+  engineer to trade off between speed and safety. There should be a standardized set of rollout
+  steps.
+
+### Technical debt and codebase complexity
+
+[The challenges from the Development Feature Flags Architecture blueprint still stand](../feature_flags_development/index.md#challenges).
+
+Additionally, there are new challenges:
+
+- If a feature flag is enabled by default, and is disabled in an on-premise installation,
+  then when the feature flag is removed, the feature suddenly becomes enabled on the
+  on-premise instance and cannot be rolled back to the previous behavior.
+
+### Multiple sources of truth for feature flag default states and observability
+
+We currently show the feature flag default states in several places, for different intended audiences:
+
+**GitLab customers**
+
+- [User documentation](../../../user/feature_flags.md):
+ List all feature flags and their metadata so that GitLab customers can tweak feature flags on
+ their instance. Also useful for GitLab.com users that want to check the default state of a feature flag.
+
+**Site reliability and Delivery engineers**
+
+- [Internal GitLab.com feature flag state change issues](https://gitlab.com/gitlab-com/gl-infra/feature-flag-log/-/issues):
+ For each change of a feature flag state on GitLab.com, an issue is created in this project.
+- [Internal GitLab.com feature flag state change logs](https://nonprod-log.gitlab.net):
+ Filter logs with `source: feature` and `env: gprd` to see feature flag state change logs.
+
+**GitLab Engineering & Infra/Quality Directors / VPs, and CTO**
+
+- [Internal Sisense dashboard](https://app.periscopedata.com/app/gitlab/792066/Engineering-::-Feature-Flags):
+ Feature flag metrics over time, grouped per DevOps groups.
+
+**GitLab Engineering and Product managers**
+
+- ["Feature flags requiring attention" monthly reports](https://gitlab.com/gitlab-org/quality/triage-reports/-/issues/?sort=created_date&state=opened&search=Feature%20flags&in=TITLE&assignee_id=None&first_page_size=100):
+ Same data as the above Internal Sisense dashboard but for a specific DevOps
+ group, presented in an issue and assigned to the group's Engineering managers.
+
+**Anyone who wants to check feature flag default states**
+
+- [Unofficial feature flags dashboard](https://samdbeckham.gitlab.io/feature-flags/):
+ A user-friendly dashboard which provides useful filtering.
+
+This leads to confusion for almost all feature flag stakeholders (Development engineers, Engineering managers, Site reliability, Delivery engineers).
+
+## Proposal
+
+### Improve feature flags implementation and usage
+
+- [Reduce the likelihood of mis-configuration and human-error at the implementation step](https://gitlab.com/groups/gitlab-org/-/epics/11553)
+ - Remove the "percentage of time" strategy in favor of "percentage of actors"
+- [Improve the feature flag development documentation](https://gitlab.com/groups/gitlab-org/-/epics/5324)
+
+### Introduce new feature flag `type`s
+
+It's clear that the `development` feature flag type actually includes several use-cases:
+
+- GitLab.com deployment de-risking. YAML value: `gitlab_com_derisk`.
+- Work-in-progress feature. YAML value: `wip`. Once the feature is complete, the feature flag type can be changed to `beta`
+ if there are still some doubts about the scalability of the feature.
+- Beta features. YAML value: `beta`.
+
+Notes:
+
+- These new types replace the broad `development` type, which shouldn't be used going forward.
+- Backward compatibility will be kept until there are no `development` feature flags left in the codebase.
+
+### Introduce constraints per feature flag type
+
+Each feature flag type will be assigned specific constraints regarding:
+
+- Allowed values for the `default_enabled` attribute
+- Maximum Lifespan (MLS): the duration starting at the introduction of the feature flag (i.e. when it's merged into `master`).
+ We don't start the lifespan at the global GitLab.com enablement (or `default_enabled: true` when
+ applicable) so that there's an incentive to roll out and delete feature flags as quickly as possible.
+
+The MLS will be enforced through automation, reporting & regular review meetings at the section level.
+
+Following are the constraints for each feature flag type:
+
+- `gitlab_com_derisk`
+ - `default_enabled` **must not** be set to `true`. This kind of feature flag is meant to lower the risk on GitLab.com, thus
+ there's no need to keep the flag in the codebase after it's been enabled on GitLab.com.
+ **`default_enabled: true` will not have any effect for this type of feature flag.**
+ - Maximum Lifespan: 2 months.
+ - Additional note: This type of feature flag won't be documented in the [All feature flags in GitLab](../../../user/feature_flags.md)
+ page given they're short-lived and deployment-related.
+- `wip`
+ - `default_enabled` **must not** be set to `true`. If needed, this type can be changed to `beta` once the feature is complete.
+ - Maximum Lifespan: 4 months.
+- `beta`
+ - `default_enabled` can be set to `true` so that a feature can be "released" to everyone in Beta with the possibility to disable
+ it in the case of scalability issues (ideally it should only be disabled for this reason on specific on-premise installations).
+ - Maximum Lifespan: 6 months.
+- `ops`
+ - `default_enabled` can be set to `true`.
+ - Maximum Lifespan: Unlimited.
+ - Additional note: Remember that using this type should follow a conscious decision not to introduce an instance setting.
+- `experiment`
+ - `default_enabled` **must not** be set to `true`.
+ - Maximum Lifespan: 6 months.
+
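+For illustration, a flag using the `wip` type under these constraints could be defined as follows.
+This is a minimal sketch: the flag name, URLs, milestone, and group are hypothetical, and
+`default_enabled` is omitted because it must not be `true` for this type:
+
+```yaml
+---
+name: new_merge_request_widget
+introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123456
+rollout_issue_url: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/12345
+milestone: '16.6'
+type: wip
+group: group::code review
+```
+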
+### Introduce a new `feature_issue_url` field
+
+Keeping the URL to the original feature issue will allow automated cross-linking from the rollout
+and logging issues. The new field for this information is `feature_issue_url`.
+
+For instance:
+
+```yaml
+---
+name: auto_devops_banner_disabled
+feature_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/12345
+introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/678910
+rollout_issue_url: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/9876
+milestone: '16.5'
+type: gitlab_com_derisk
+group: group::pipeline execution
+```
+
+```yaml
+---
+name: ai_mr_creation
+feature_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/12345
+introduced_by_url: https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/14218
+rollout_issue_url: https://gitlab.com/gitlab-com/gl-infra/production/-/issues/83652
+milestone: '16.3'
+type: beta
+group: group::code review
+default_enabled: true
+```
+
+### Streamline the feature flag rollout process
+
+1. (Process) Transition to **create rollout issues in the
+ [Production issue tracker](https://gitlab.com/gitlab-com/gl-infra/production/-/issues)** and adapt the
+ template to be closer to the
+ [Change management issue template](https://gitlab.com/gitlab-com/gl-infra/production/-/blob/master/.gitlab/issue_templates/change_management.md)
+ (see [this issue](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/2780) for inspiration).
+ That way, the rollout issue would only concern the actual production changes (i.e. enablement/disablement
+ of the flag on production) and should be closed as soon as the production change is confirmed to work as expected.
+1. (Automation) Automate most rollout steps, such as:
+ - (Done) [Let the author know that their feature has been deployed to staging / canary / production environments](https://gitlab.com/gitlab-org/quality/triage-ops/-/issues/1403)
+ - (Done) [Cross-link actual feature flag state change (from Chatops project) to rollout issues](https://gitlab.com/gitlab-org/gitlab/-/issues/290770)
+ - (Done) [Let the author know that their `default_enabled: true` MR has been deployed to production and that the feature flag can be removed from production](https://gitlab.com/gitlab-org/quality/triage-ops/-/merge_requests/2482)
+ - Automate the creation of rollout issues when a feature flag is first introduced in a merge request,
+ and provide a diff suggestion to fill the `rollout_issue_url` field (Danger)
+ - Check and enforce feature flag definition constraints in merge requests (Danger)
+ - Provide a diff suggestion to correct the `milestone` field when it's not the same value as
+ the MR milestone (Danger)
+ - Upon feature flag state change, notify on Slack the group responsible for it (chatops)
+ - 7 days before the Maximum Lifespan of a feature flag is reached, automatically create a "cleanup MR" with the group label set, and
+ assigned to the feature flag author (if they're still with GitLab). We could take advantage of the [automation of repetitive developer tasks](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134487)
+ - Enforce Maximum Lifespan of feature flags through automated reporting & regular review at the section level
+1. (Documentation/process) Ensure the rollout DRI stays online for a few hours after enabling a feature flag (ideally they'd enable the flag at the
+ beginning of their day) in case of any issue with the feature flag
+1. (Process) Provide a standardized set of rollout steps. Trade-offs to consider include:
+ - Likelihood of errors occurring
+ - Total actors (users / requests / projects / groups) affected by the feature flag rollout,
+ e.g. a rollout to only 1% of actors is still risky if it means 100,000 users cannot log in
+ - How long to wait between each step. Some feature flags only need to wait 10 minutes per step, some
+ flags should wait 24 hours. Ideally there should be automation to actively verify there
+ is no adverse effect for each step.
+
+### Provide better SSOT for the feature flag default states and current states & state changes on GitLab.com
+
+**GitLab customers**
+
+- [User documentation](../../../user/feature_flags.md):
+ Keep the current page but add filtering and sorting, similarly to the
+ [unofficial feature flags dashboard](https://samdbeckham.gitlab.io/feature-flags/).
+
+**Site reliability and Delivery engineers**
+
+We [assessed the usefulness of feature flag state change logging strategies](https://gitlab.com/gitlab-org/quality/engineering-productivity/team/-/issues/309)
+and it appears that both
+[internal GitLab.com feature flag state change issues](https://gitlab.com/gitlab-com/gl-infra/feature-flag-log/-/issues)
+and [internal GitLab.com feature flag state change logs](https://nonprod-log.gitlab.net) are useful for different
+audiences.
+
+**GitLab Engineering & Infra/Quality Directors / VPs, and CTO**
+
+- [Internal Sisense dashboard](https://app.periscopedata.com/app/gitlab/792066/Engineering-::-Feature-Flags):
+ Streamline the current dashboard to be more useful for its stakeholders.
+
+**GitLab Engineering and Product managers**
+
+- ["Feature flags requiring attention" monthly reports](https://gitlab.com/gitlab-org/quality/triage-reports/-/issues/?sort=created_date&state=opened&search=Feature%20flags&in=TITLE&assignee_id=None&first_page_size=100):
+ Make the current reports more actionable by linking to automatically created MRs for removing feature flags as well as improving documentation and best-practices around feature flags.
+
+## Iterations
+
+This work is being done as part of dedicated epic:
+[Improve internal usage of Feature Flags](https://gitlab.com/groups/gitlab-org/-/epics/3551).
+This epic describes the meta reasons for making these changes.
+
+## Resources
+
+- [What Are Feature Flags?](https://launchdarkly.com/blog/what-are-feature-flags/#:~:text=Feature%20flags%20are%20a%20software,portions%20of%20code%20are%20executed)
+- [Feature Flags Best Practices](https://featureflags.io/feature-flags-best-practices/)
+- [Short-lived or Long-lived Flags? Explaining Feature Flag lifespans](https://configcat.com/blog/2022/07/08/how-long-should-you-keep-feature-flags/)
diff --git a/doc/architecture/blueprints/gitlab_ml_experiments/index.md b/doc/architecture/blueprints/gitlab_ml_experiments/index.md
index e0675bb5be6..b9830778902 100644
--- a/doc/architecture/blueprints/gitlab_ml_experiments/index.md
+++ b/doc/architecture/blueprints/gitlab_ml_experiments/index.md
@@ -120,51 +120,46 @@ However, Service-Integration will establish certain necessary and optional requi
###### Ease of Use, Ownership Requirements
-1. <a name="R100">`R100`</a>: Required: the platform should be easy to use: imagine Heroku with [GitLab Production Readiness-approved](https://about.gitlab.com/handbook/engineering/infrastructure/production/readiness/) defaults.
-1. <a name="R110">`R110`</a>: Required: with the exception of an Infrastructure-led onboarding process, services are owned, deployed and managed by stage-group teams. In other words,services follow a "You Build It, You Run It" model of ownership.
-1. <a name="R120">`R120`</a>: Required: programming-language agnostic: no requirements for services. Services should be packaged as container images.
-1. <a name="R130">`R130`</a>: Recommended: Each service should be evaluated against the GitLab.com [Service Maturity Model](https://about.gitlab.com/handbook/engineering/infrastructure/service-maturity-model/).
-1. <a name="R140">`R140`</a>: Recommended: services using the platform have expedited production-readiness processes.
- 1. Production-readiness requirements graded by service maturity: low-traffic, low-maturity experimental services will have lower requirement thresholds than more mature services.
- 1. By default, the platform should provide services with defaults that would pass production-readiness review for the lowest service maturity-level.
- 1. At introduction, lowest maturity services can be deployed without production readiness, provided the meet certain automatically validated requirements. This removes Infrastructure gate-keeping from being a blocker to experimental service delivery.
+| ID | Required | Detail | Epic/Issue | Done? |
+|---|---|---|---|---|
+| `R100` | Required | The platform should be easy to use: imagine Heroku with [GitLab Production Readiness-approved](https://about.gitlab.com/handbook/engineering/infrastructure/production/readiness/) defaults. | [Runway to [BETA] : Increased Adoption and Self Service](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/1115) | **{dotted-circle}** No |
+| `R110` | Required | With the exception of an Infrastructure-led onboarding process, services are owned, deployed and managed by stage-group teams. In other words, services follow a “You Build It, You Run It” model of ownership. | [[Paused] Discussion: Tiered Support Model for Runway](https://gitlab.com/gitlab-com/gl-infra/platform/runway/team/-/issues/97) | **{dotted-circle}** No |
+| `R120` | Required | Programming-language agnostic: no requirements for services. Services should be packaged as container images.| [Runway to [BETA] : Increased Adoption and Self Service](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/1115) | **{dotted-circle}** No |
+| `R130` | Recommended | Each service should be evaluated against the GitLab.com [Service Maturity Model](https://about.gitlab.com/handbook/engineering/infrastructure/service-maturity-model/).| [Discussion: Introduce an 'Infrastructure Well-Architected Service Framework'](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/2537) | **{dotted-circle}** No |
+| `R140` | Recommended | Services using the platform have expedited production-readiness processes. {::nomarkdown}<ol><li>Production-readiness requirements graded by service maturity: low-traffic, low-maturity experimental services will have lower requirement thresholds than more mature services. </li><li> By default, the platform should provide services with defaults that would pass production-readiness review for the lowest service maturity-level. </li><li> At introduction, lowest maturity services can be deployed without production readiness, provided they meet certain automatically validated requirements. This removes Infrastructure gate-keeping from being a blocker to experimental service delivery.</li></ol>{:/} | | |
###### Observability Requirements
-1. <a name="R200">`R200`</a>: Required: the platform must provide SLIs for services out-of-the-box.
- 1. While it is recommended that services expose internal metrics, it is not mandatory. The platform will provide monitoring from the load-balancer. This is to speed up deployment by removing barriers to experimentation.
- 1. For services that provide internal metrics scrape endpoints, the platform must be configurable to collect these.
- 1. The platform must provide generic load-balancer level SLIs for all services. Service owners must be able to select from constructing SLIs from internal application metrics, the platform-provided external SLIs, or a combination of both.
-1. <a name="R210">`R210`</a>: Required: Observability dashboards, rules, alerts (with per-term routing) must be generated from a manifest.
-1. <a name="R220">`R220`</a>:Required: standardized logging infrastructure.
- 1. Mandate that all logging emitted from services must be Structured JSON. Text logs are permitted but not recommended.
- 1. See [Common Service Libraries](#common-service-libraries) for more details of building common SDKs for observability.
+| ID | Required | Detail | Epic/Issue | Done? |
+|---|---|---|---|---|
+| `R200` | Required | The platform must provide SLIs for services out-of-the-box.{::nomarkdown}<ol><li>While it is recommended that services expose internal metrics, it is not mandatory. The platform will provide monitoring from the load-balancer. This is to speed up deployment by removing barriers to experimentation.</li><li>For services that provide internal metrics scrape endpoints, the platform must be configurable to collect these.</li><li>The platform must provide generic load-balancer level SLIs for all services. Service owners must be able to select from constructing SLIs from internal application metrics, the platform-provided external SLIs, or a combination of both.</li></ol>{:/} | [Observability: Default Metrics](https://gitlab.com/gitlab-com/gl-infra/platform/runway/team/-/issues/72), [Observability: Custom Metrics](https://gitlab.com/gitlab-com/gl-infra/platform/runway/team/-/issues/67) | **{check-circle}** Yes |
+| `R210` | Required | Observability dashboards, rules, alerts (with per-term routing) must be generated from a manifest. | [Observability: Metrics Catalog](https://gitlab.com/gitlab-com/gl-infra/platform/runway/team/-/issues/74) | **{check-circle}** Yes |
+| `R220` | Required | Standardized logging infrastructure.{::nomarkdown}<ol><li>Mandate that all logging emitted from services must be Structured JSON. Text logs are permitted but not recommended.</li><li>See <a href="#common-service-libraries">Common Service Libraries</a> for more details of building common SDKs for observability.</li></ol>{:/} | [Observability: Logs in Elasticsearch for model-gateway](https://gitlab.com/gitlab-com/gl-infra/platform/runway/team/-/issues/75), [Observability: Runway logs available to users](https://gitlab.com/gitlab-com/gl-infra/platform/runway/team/-/issues/84) | |
###### Deployment Requirements
-1. <a name="R300">`R300`</a>: Required: No secrets stored in CI/CD.
- 1. Authentication with Cloud Provider Resources should be exclusively via OIDC, managed as part of the platform.
- 1. Secrets should be stored in the Infrastructure-provided Hashicorp Vault for the environment and passed to applications through files or environment variables.
- 1. Generation and management of service account tokens should be done declaratively, without manual interaction.
-1. <a name="R310">`R310`</a>: Required: multiple environment should be supported, eg Staging and Production.
-1. <a name="R320">`R320`</a>: Required: the platform should be cost-effective. Kubernetes clusters should support multiple services and teams.
-1. <a name="R330">`R330`</a>: Recommended: gradual rollouts, rollbacks, blue-green deployments.
-1. <a name="R340">`R340`</a>: Required: services should be isolated from one another.
-1. <a name="R350">`R350`</a>: Recommended: services should have the ability to specify node characteristic requirements (eg, GPU).
-1. <a name="R360">`R360`</a>: Required: Developers should not need knowledge of Helm, Kubernetes, Prometheus in order to deploy. All required values are configured and validated in project-hosted manifest before generating Kubernetes manifests, Prometheus rules, etc.
-1. <a name="R370">`R370`</a>: Initially services should be synchronous only - using REST or GRPC requests.
- 1. This does not however preclude long-running HTTP(s) requests, for example long-polling or Websocket requests.
-1. <a name="R390">`R390`</a>: Each service hosted in its own GitLab repository with deployment manifest stored in the repository.
- 1. Continuous deployments that are initiated from the CI pipeline of the corresponding GitLab repository.
+| ID | Required | Detail | Epic/Issue | Done? |
+|---|---|---|---|---|
+| `R300` | Required | No secrets stored in CI/CD. {::nomarkdown} <ol><li>Authentication with Cloud Provider Resources should be exclusively via OIDC, managed as part of the platform.</li><li> Secrets should be stored in the Infrastructure-provided Hashicorp Vault for the environment and passed to applications through files or environment variables. </li><li>Generation and management of service account tokens should be done declaratively, without manual interaction.</li></ol>{:/} | [Secrets Management](https://gitlab.com/gitlab-com/gl-infra/platform/runway/team/-/issues/52) | **{dotted-circle}** No |
+| `R310` | Required | Multiple environments should be supported, eg Staging and Production. | | **{check-circle}** Yes |
+| `R320` | Required | The platform should be cost-effective. Kubernetes clusters should support multiple services and teams. | | |
+| `R330` | Recommended | Gradual rollouts, rollbacks, blue-green deployments. | | |
+| `R340` | Required | Services should be isolated from one another. | | |
+| `R350` | Recommended | Services should have the ability to specify node characteristic requirements (eg, GPU). | | |
+| `R360` | Required | Developers should not need knowledge of Helm, Kubernetes, Prometheus in order to deploy. All required values are configured and validated in project-hosted manifest before generating Kubernetes manifests, Prometheus rules, etc. | | |
+| `R370` | | Initially services should be synchronous only - using REST or GRPC requests.{::nomarkdown}<ol><li>This does not however preclude long-running HTTP(s) requests, for example long-polling or Websocket requests.</li></ol>{:/} | | |
+| `R390` | | Each service hosted in its own GitLab repository with deployment manifest stored in the repository. {::nomarkdown}<ol><li>Continuous deployments that are initiated from the CI pipeline of the corresponding GitLab repository.</li></ol>{:/} | | |
##### Security Requirements
-1. <a name="R400">`R400`</a>: stateful services deployed on the platform that utilize their own stateful storage (for example, custom deployed Postgres instance), must not store application security tokens, cloud-provider service keys or other long-lived security tokens in their stateful stores.
-1. <a name="R410">`R410`</a>: long-lived shared secrets are discouraged, and should be referenced in the service manifest as such, to allow for accounting and monitoring.
-1. <a name="R420">`R420`</a>: services using long-lived shared secrets should ensure that secret rotation can take place without downtime.
- 1. During a rotation, old and new generations of secrets should pass authentication, allowing gradual roll-out of new secrets.
+| ID | Required | Detail | Epic/Issue | Done? |
+|---|---|---|---|---|
+| `R400` | | Stateful services deployed on the platform that utilize their own stateful storage (for example, custom deployed Postgres instance), must not store application security tokens, cloud-provider service keys or other long-lived security tokens in their stateful stores. | | |
+| `R410` | | Long-lived shared secrets are discouraged, and should be referenced in the service manifest as such, to allow for accounting and monitoring. | | |
+| `R420` | | Services using long-lived shared secrets should ensure that secret rotation can take place without downtime. {::nomarkdown}<ol><li>During a rotation, old and new generations of secrets should pass authentication, allowing gradual roll-out of new secrets.</li></ol>{:/} | | |
##### Common Service Libraries
-1. <a name="R500">`R500`</a>: Experimental services would be strongly encouraged to adopt and use [LabKit](https://gitlab.com/gitlab-org/labkit) (for Go services), or [LabKit-Ruby](https://gitlab.com/gitlab-org/ruby/gems/labkit-ruby) for observability, context, correlation, FIPs verification, etc.
- 1. At present, there is no LabKit-Python library, but some experiments will run in Python, so building a library to providing observability, context, correlation services in Python will be required.
+| ID | Required | Detail | Epic/Issue | Done? |
+|---|---|---|---|---|
+| `R500` | Required | Experimental services would be strongly encouraged to adopt and use [LabKit](https://gitlab.com/gitlab-org/labkit) (for Go services), or [LabKit-Ruby](https://gitlab.com/gitlab-org/ruby/gems/labkit-ruby) for observability, context, correlation, FIPs verification, etc. {::nomarkdown}<ol><li>At present, there is no LabKit-Python library, but some experiments will run in Python, so building a library to provide observability, context, and correlation services in Python will be required. </li></ol>{:/} | | |
diff --git a/doc/architecture/blueprints/gitlab_steps/gitlab-ci.md b/doc/architecture/blueprints/gitlab_steps/gitlab-ci.md
new file mode 100644
index 00000000000..8f97c307b37
--- /dev/null
+++ b/doc/architecture/blueprints/gitlab_steps/gitlab-ci.md
@@ -0,0 +1,247 @@
+---
+owning-stage: "~devops::verify"
+description: Usage of the [GitLab Steps](index.md) with [`.gitlab-ci.yml`](../../../ci/yaml/index.md).
+---
+
+# Usage of the [GitLab Steps](index.md) with [`.gitlab-ci.yml`](../../../ci/yaml/index.md)
+
+This document describes how [GitLab Steps](index.md) are integrated into the `.gitlab-ci.yml`.
+
+GitLab Steps will be integrated using a three-stage execution cycle
+and replace `before_script:`, `script:` and `after_script:`.
+
+- `setup:`: Execution stage responsible for provisioning the environment,
+ including cloning the repository, restoring artifacts, or installing all dependencies.
+ This stage will replace the implicit cloning, artifact restoration, and cache download.
+- `run:`: Execution stage responsible for running a test, build,
+ or any other main command required by that job.
+- `teardown:`: Execution stage responsible for cleaning the environment,
+ uploading artifacts, or storing cache. This stage will replace implicit
+ artifacts and cache uploads.
+
+Before we can achieve three-stage execution we will ship minimal initial support
+that does not require any prior GitLab integration.
+
+## Phase 1: Initial support
+
+Initially the Step Runner will be used externally, without any prior dependencies
+on GitLab:
+
+- The `step-runner` will be provided as part of a container image.
+- The `step-runner` will be explicitly run in the `script:` section.
+- The `$STEPS` environment variable will be executed as [`type: steps`](step-definition.md#the-steps-step-type).
+
+```yaml
+hello-world:
+ image: registry.gitlab.com/gitlab-org/step-runner
+ variables:
+ STEPS: |
+ - step: gitlab.com/josephburnett/component-hello-steppy@master
+ inputs:
+ greeting: "hello world"
+ script:
+ - /step-runner ci
+```
+
+## Phase 2: The addition of `run:` to `.gitlab-ci.yml`
+
+In Phase 2 we will add `run:` as a first class way to use GitLab Steps:
+
+- `run:` will use a [`type: steps`](step-definition.md#the-steps-step-type) syntax.
+- `run:` will replace usage of `before_script`, `script` and `after_script`.
+- All existing functionality for Git cloning, artifacts, and cache would continue to be supported.
+- It is yet to be defined how we would support `after_script`, which is executed unconditionally
+ or when the job is canceled.
+- `run:` will not be allowed to be combined with `before_script:`, `script:` or `after_script:`.
+- GitLab Rails would not parse `run:`, instead it would only perform static validation
+ with a JSON schema provided by the Step Runner.
+
+```yaml
+hello-world:
+ image: registry.gitlab.com/gitlab-org/step-runner
+ run:
+ - step: gitlab.com/josephburnett/component-hello-steppy@master
+ inputs:
+ greeting: "hello world"
+```
+
+The following example would **fail** syntax validation:
+
+```yaml
+hello-world:
+ image: registry.gitlab.com/gitlab-org/step-runner
+ run:
+ - step: gitlab.com/josephburnett/component-hello-steppy@master
+ inputs:
+ greeting: "hello world"
+ script: echo "This is ambiguous and invalid example"
+```
+
+### Transitioning from `before_script:`, `script:` and `after_script:`
+
+GitLab Rails would automatically convert the `*script:` syntax into relevant `run:` specification:
+
+- Today `before_script:` and `script:` are joined together as a single script for execution.
+- The `after_script:` section is always executed in a separate context, representing a separate step to be executed.
+- It is yet to be defined how we would retain the existing behavior of `after_script`, which is always executed
+ regardless of the job status or timeout, and uses a separate timeout.
+- We would retain all implicit behavior which defines all environment variables when translating `script:`
+ into step-based execution.
+
+For example, this CI/CD configuration:
+
+```yaml
+hello-world:
+ before_script:
+ - echo "Run before_script"
+ script:
+ - echo "Run script"
+ after_script:
+ - echo "Run after_script"
+```
+
+Could be translated into this equivalent specification:
+
+```yaml
+hello-world:
+ run:
+ - step: gitlab.com/gitlab-org/components/steps/legacy/script@v1.0
+ inputs:
+ script:
+ - echo "Run before_script"
+ - echo "Run script"
+ - step: gitlab.com/gitlab-org/components/steps/legacy/script@v1.0
+ inputs:
+ script:
+ - echo "Run after_script"
+ when: always
+```
+
+## Phase 3: The addition of `setup:` and `teardown:` to `.gitlab-ci.yml`
+
+The addition of `setup:` and `teardown:` will replace the implicit functions
+provided by GitLab Runner: Git clone, artifacts and cache handling:
+
+- The usage of `setup:` would stop GitLab Runner from implicitly cloning the repository.
+- `artifacts:` and `cache:`, when specified, would be translated and appended to `setup:` and `teardown:`
+ to provide backward compatibility for the old syntax.
+- `release:`, when specified, would be translated and appended to `teardown:`
+ to provide backward compatibility for the old syntax.
+- `setup:` and `teardown:` could be used in `default:` to simplify support
+ of common workflows, like how the repository is cloned or how the artifacts are handled.
+- The split into 3-stage execution additionally improves composability of steps with `extends:`.
+- The `hooks:pre_get_sources_script` would be implemented similar to [`script:`](#transitioning-from-before_script-script-and-after_script)
+ and be prepended to `setup:`.
+
+For example, this CI/CD configuration:
+
+```yaml
+rspec:
+ script:
+ - echo "This job uses a cache."
+ artifacts:
+ paths: [binaries/, .config]
+ cache:
+ key: binaries-cache
+ paths: [binaries/*.apk, .config]
+```
+
+Could be translated into this equivalent specification executed by a step runner:
+
+```yaml
+rspec:
+ setup:
+ - step: gitlab.com/gitlab-org/components/git/clone@v1.0
+ - step: gitlab.com/gitlab-org/components/artifacts/download@v1.0
+ - step: gitlab.com/gitlab-org/components/cache/restore@v1.0
+ inputs:
+ key: binaries-cache
+ run:
+ - step: gitlab.com/gitlab-org/components/steps/legacy/script@v1.0
+ inputs:
+ script:
+ - echo "This job uses a cache."
+ teardown:
+ - step: gitlab.com/gitlab-org/components/artifacts/upload@v1.0
+ inputs:
+ paths: [binaries/, .config]
+ - step: gitlab.com/gitlab-org/components/cache/restore@v1.0
+ inputs:
+ key: binaries-cache
+ paths: [binaries/*.apk, .config]
+```
+
+### Inheriting common operations with `default:`
+
+`setup:` and `teardown:` are likely to become very verbose over time. One way to simplify them
+is to allow inheriting the common `setup:` and `teardown:` operations
+with `default:`.
+
+The previous example could be simplified to:
+
+```yaml
+default:
+ setup:
+ - step: gitlab.com/gitlab-org/components/git/clone@v1.0
+ - step: gitlab.com/gitlab-org/components/artifacts/download@v1.0
+ - step: gitlab.com/gitlab-org/components/cache/restore@v1.0
+ inputs:
+ key: binaries-cache
+ teardown:
+ - step: gitlab.com/gitlab-org/components/artifacts/upload@v1.0
+ inputs:
+ paths: [binaries/, .config]
+ - step: gitlab.com/gitlab-org/components/cache/restore@v1.0
+ inputs:
+ key: binaries-cache
+ paths: [binaries/*.apk, .config]
+
+rspec:
+ run:
+ - step: gitlab.com/gitlab-org/components/steps/legacy/script@v1.0
+ inputs:
+ script:
+ - echo "This job uses a cache."
+
+linter:
+ run:
+ - step: gitlab.com/gitlab-org/components/steps/legacy/script@v1.0
+ inputs:
+ script:
+ - echo "Run linting"
+```
+
+### Parallel jobs and `setup:`
+
+With the introduction of `setup:` at some point in the future we will introduce
+an efficient way to parallelize the jobs:
+
+- `setup:` would define all steps required to provision the environment.
+- The result of `setup:` would be snapshotted and distributed as the base
+ for all parallel jobs, if `parallel: N` is used.
+- The `run:` and `teardown:` would be run on top of the cloned job and all its services.
+- The runner would control and intelligently distribute all parallel
+ jobs, significantly cutting the resource requirements for fixed
+ parts of the job (Git clone, artifacts, installing dependencies).
+
+```yaml
+rspec-parallel:
+ image: ruby:3.2
+ services: [postgres, redis]
+ parallel: 10
+ setup:
+ - step: gitlab.com/gitlab-org/components/git/clone@v1.0
+ - step: gitlab.com/gitlab-org/components/artifacts/download@v1.0
+ inputs:
+ jobs: [setup-all]
+ - script: bundle install --without production
+ run:
+ - script: bundle exec knapsack
+```
+
+Potential GitLab Runner flow:
+
+1. Runner receives the `rspec-parallel` job with `setup:` and `parallel:` configured.
+1. Runner executes a job on top of a Kubernetes cluster using block volumes, up to the end of `setup:`.
+1. Runner then runs 10 parallel jobs in Kubernetes, overlaying the block volume from step 2,
+ and continues execution of `run:` and `teardown:`.
diff --git a/doc/architecture/blueprints/gitlab_steps/index.md b/doc/architecture/blueprints/gitlab_steps/index.md
index 74c9ba1498d..5e3becfec19 100644
--- a/doc/architecture/blueprints/gitlab_steps/index.md
+++ b/doc/architecture/blueprints/gitlab_steps/index.md
@@ -33,12 +33,12 @@ shows a need for a better way to define CI job execution.
## Motivation
-Even though the current [`.gitlab-ci.yml`](../../../ci/yaml/gitlab_ci_yaml.md) is reasonably flexible, it easily becomes very
+Even though the current [`.gitlab-ci.yml`](../../../ci/index.md#the-gitlab-ciyml-file) is reasonably flexible, it easily becomes very
complex when trying to support complex workflows. This complexity is represented
with repetetitve patterns, a purpose-specific syntax, or a complex sequence of commands
to execute.
-This is particularly challenging, because the [`.gitlab-ci.yml`](../../../ci/yaml/gitlab_ci_yaml.md)
+This is particularly challenging, because the [`.gitlab-ci.yml`](../../../ci/index.md#the-gitlab-ciyml-file)
is inflexible on more complex workflows that require fine-tuning or special behavior
for the CI job execution. Its prescriptive approach how to handle Git cloning,
when artifacts are downloaded, or how the shell script is being executed quite often
@@ -46,7 +46,7 @@ results in the need to work around the system for pipelines that are not "standa
or when new features are requested.
This proves especially challenging when trying to add a new syntax to the
-[`.gitlab-ci.yml`](../../../ci/yaml/gitlab_ci_yaml.md)
+[`.gitlab-ci.yml`](../../../ci/index.md#the-gitlab-ciyml-file)
to support a specific feature, like [`secure files`](../../../ci/secure_files/index.md)
or `release:` keyword. Adding these special features on a syntax level
results in a more complex config, which is harder to maintain, and more complex
@@ -131,7 +131,14 @@ TBD
## Proposal
-TBD
+### GitLab Steps definition and syntax
+
+- [Step Definition](step-definition.md).
+- [Syntactic Sugar extensions](steps-syntactic-sugar.md).
+
+### Integration of GitLab Steps in `.gitlab-ci.yml`
+
+- [Usage of the GitLab Steps with `.gitlab-ci.yml`](gitlab-ci.md).
## Design and implementation details
diff --git a/doc/architecture/blueprints/gitlab_steps/step-definition.md b/doc/architecture/blueprints/gitlab_steps/step-definition.md
new file mode 100644
index 00000000000..08ca1ab7c31
--- /dev/null
+++ b/doc/architecture/blueprints/gitlab_steps/step-definition.md
@@ -0,0 +1,368 @@
+---
+owning-stage: "~devops::verify"
+description: The Step Definition for [GitLab Steps](index.md).
+---
+
+# The Step definition
+
+A step is the minimum executable unit that a user can provide and is defined in a `step.yml` file.
+
+The following step definition describes the minimal syntax supported.
+The syntax is extended with [syntactic sugar](steps-syntactic-sugar.md).
+
+A step definition consists of two documents. The purpose of the document split is
+to distinguish between the declaration and implementation:
+
+1. [Specification / Declaration](#step-specification):
+
+ Provides the specification which describes step inputs and outputs,
+ as well as any other metadata that might be needed by the step in the future (license, author, etc.).
+ In programming language terms, this is similar to a function declaration with arguments and return values.
+
+1. [Implementation](#step-implementation):
+
+ The implementation part of the document describes how to execute the step, including how the environment
+ has to be configured, or how actions can be configured.
+
+## Example step that prints a message to stdout
+
+In the following step example:
+
+1. The declaration specifies that the step accepts a single input named `message`.
+ The `message` is a required argument that needs to be provided when running the step
+ because it does not define `default:`.
+1. The implementation section specifies that the step is of type `exec`. When run, the step
+ will execute an `echo` command with a single argument (the `message` value).
+
+```yaml
+# .gitlab/ci/steps/exec-echo.yaml
+spec:
+ inputs:
+ message:
+---
+type: exec
+exec:
+ command: [echo, "${{inputs.message}}"]
+```
+
+## Step specification
+
+The step specification currently only defines inputs and outputs:
+
+- Inputs:
+ - Can be required or optional.
+ - Have a name and can have a description.
+ - Can contain a list of accepted options. Options limit what value can be provided for the input.
+ - Can define matching regexp. The matching regexp limits what value can be provided for the input.
+ - Can be expanded with the usage of syntax `${{ inputs.input_name }}`.
+- All **input values** can be accessed when `type: exec` is used,
+ by decoding the `$STEP_JSON` file, which provides information about the context of the execution.
+- Outputs:
+ - Have a name and can have a description.
+ - Can be set by writing to a special [dotenv](https://github.com/bkeepers/dotenv) file named:
+ `$OUTPUT_FILE` with a format of `output_name=VALUE` per output.
+
+For example:
+
+```yaml
+spec:
+ inputs:
+ message_with_default:
+ default: "Hello World"
+ message_that_is_required:
+ description: "This description explains that the input is required, because it does not specify a default:"
+ type_with_limited_options:
+ options: [bash, powershell, detect]
+ type_with_default_and_limited_options:
+ default: bash
+ options: [bash, powershell, detect]
+ description: "Since the options are provided, the default: needs to be one of the options"
+ version_with_matching_regexp:
+ match: ^v\d+\.\d+$
+ description: "The match pattern only allows values similar to `v1.2`"
+ outputs:
+ code_coverage:
+ description: "Measured code coverage that was calculated as part of the step"
+---
+type: steps
+steps:
+ - step: ./bash-script.yaml
+ inputs:
+ script: "echo Code Coverage = 95.4% >> $OUTPUT_FILE"
+```
+
+## Step Implementation
+
+The step definition can use the following types to implement the step:
+
+- `type: exec`: Run a binary command, using STDOUT/STDERR for tracing the executed process.
+- `type: steps`: Run a sequence of steps.
+- `type: parallel` (Planned): Run all steps in parallel, waiting for all of them to finish.
+- `type: grpc` (Planned): Run a binary command but use gRPC for intra-process communication.
+- `type: container` (Planned): Run a nested Step Runner in a container image of choice,
+ transferring all execution flow.
+
+### The `exec` step type
+
+The ability to run binary commands is one of the primitive functions:
+
+- The command to execute is defined by the `exec:` section.
+- The result of the execution is the exit code of the command to be executed, unless the default behavior is overwritten.
+- The default working directory in which the command is executed is the directory in which the
+ step is located.
+- By default, the command is not time-limited, but can be time-limited during job execution with `timeout:`.
+
+For example, an `exec` step with no inputs:
+
+```yaml
+spec:
+---
+type: exec
+exec:
+ command: [/bin/bash, ./my-script.sh]
+ timeout: 30m
+ workdir: /tmp
+```
+
+#### Example step that executes user-defined command
+
+The following example is a minimal step definition that executes a user-provided command:
+
+- The declaration section specifies that the step accepts a single input named `script`.
+- The `script` input is a required argument that needs to be provided when running the step
+ because no `default:` is defined.
+- The implementation section specifies that the step is of type `exec`. When run, the step
+ will execute in `bash` passing the user command with `-c` argument.
+- The command to be executed will be prefixed with `set -veo pipefail` to print the execution
+ to the job log and exit on the first failure.
+
+```yaml
+# .gitlab/ci/steps/exec-script.yaml
+
+spec:
+ inputs:
+ script:
+ description: 'Run user script.'
+---
+type: exec
+exec:
+ command: [/usr/bin/env, bash, -c, "set -veo pipefail; ${{inputs.script}}"]
+```
+
+### The `steps` step type
+
+The ability to run multiple steps in sequence is one of the primitive functions:
+
+- A sequence of steps is defined by an array of step references: `steps: []`.
+- The next step is run only if previous step succeeded, unless the default behavior is overwritten.
+- The result of the execution is either:
+ - A failure at the first failed step.
+ - Success if all steps in sequence succeed.
+
+#### Steps that use other steps
+
+The `steps` type depends extensively on being able to use other steps.
+Each item in a sequence can reference other external steps, for example:
+
+```yaml
+spec:
+---
+type: steps
+steps:
+ - step: ./.gitlab/ci/steps/ruby/install.yml
+ inputs:
+ version: 3.1
+ env:
+ HTTP_TIMEOUT: 10s
+ - step: gitlab.com/gitlab-org/components/bash/script@v1.0
+ inputs:
+ script: echo Hello World
+```
+
+The `step:` value is a string that describes where the step definition is located:
+
+- **Local**: The definition can be retrieved from a local source with `step: ./path/to/local/step.yml`.
+ A local reference is used when the path starts with `./` or `../`.
+ The resolved path to another local step is always **relative** to the location of the current step.
+ There is no limitation on where the step is located in the repository.
+- **Remote**: The definition can also be retrieved from a remote source with `step: gitlab.com/gitlab-org/components/bash/script@v1.0`.
+ Using a FQDN makes the Step Runner pull the repository or archive containing
+ the step, using the version provided after the `@`.
+
+The `inputs:` section is a list of key-value pairs. The `inputs:` specify values
+that are passed and matched against the [step specification](#step-specification).
+
+The `env:` section is a list of key-value pairs. `env:` exposes the given environment
+variables to all children steps, including [`type: exec`](#the-exec-step-type) or [`type: steps`](#the-steps-step-type).
+
+#### Remote Steps
+
+To use remote steps with `step: gitlab.com/gitlab-org/components/bash/script@v1.0`
+the step definitions must be stored in a structured way. The step definitions:
+
+- Must be stored in the `steps/` folder.
+- Can be nested in sub-directories.
+- Can be referenced by the directory name alone if the step definition
+ is stored in a `step.yml` file.
+
+For example, the file structure for a repository hosted in `git clone https://gitlab.com/gitlab-org/components.git`:
+
+```plaintext
+├── steps/
+│   ├── secret_detection.yml
+│   ├── sast/
+│   │   └── step.yml
+│   └── dast/
+│       ├── java.yml
+│       └── ruby.yml
+```
+
+This structure exposes the following steps:
+
+- `step: gitlab.com/gitlab-org/components/secret_detection@v1.0`: From the definition stored at `steps/secret_detection.yml`.
+- `step: gitlab.com/gitlab-org/components/sast@v1.0`: From the definition stored at `steps/sast/step.yml`.
+- `step: gitlab.com/gitlab-org/components/dast/java@v1.0`: From the definition stored at `steps/dast/java.yml`.
+- `step: gitlab.com/gitlab-org/components/dast/ruby@v1.0`: From the definition stored at `steps/dast/ruby.yml`.
+
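+These remote steps could then be referenced from any other step definition, for example
+(a sketch; the `@v1.0` version tags are assumed):
+
+```yaml
+spec:
+---
+type: steps
+steps:
+  - step: gitlab.com/gitlab-org/components/sast@v1.0
+  - step: gitlab.com/gitlab-org/components/dast/ruby@v1.0
+```
+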
+#### Example step that runs other steps
+
+The following example is a minimal step definition that
+runs other steps that are local to the current step.
+
+- The declaration specifies that the step accepts two inputs, each with
+ a default value.
+- The implementation section specifies that the step is of type `steps`, meaning
+ the step will execute the listed steps in sequence. The usage of a top-level
+ `env:` makes the `HTTP_TIMEOUT` variable available in all executed steps.
+
+```yaml
+spec:
+ inputs:
+ ruby_version:
+ default: 3.1
+ http_timeout:
+ default: 10s
+---
+type: steps
+env:
+ HTTP_TIMEOUT: ${{inputs.http_timeout}}
+steps:
+ - step: ./.gitlab/ci/steps/exec-echo.yaml
+ inputs:
+ message: "Installing Ruby ${{inputs.ruby_version}}..."
+ - step: ./.gitlab/ci/ruby/install.yaml
+ inputs:
+ version: ${{inputs.ruby_version}}
+```
+
+## Context and interpolation
+
+Every step definition is executed in a context object which
+stores the following information that can be used by the step definition:
+
+- `inputs`: The list of inputs, including user-provided or default.
+- `outputs`: The list of expected outputs.
+- `env`: The current environment variable values.
+- `job`: The metadata about the current job being executed.
+ - `job.project`: Information about the project, for example ID, name, or full path.
+ - `job.variables`: All [CI/CD Variables](../../../ci/variables/predefined_variables.md) as provided by the CI/CD execution,
+ including project variables, predefined variables, etc.
+ - `job.pipeline`: Information about the current executed pipeline, like the ID, name, full path
+- `step`: Information about the current executed step, like the location of the step, the version used, or the [specification](#step-specification).
+- `steps` (only for `type: exec`): Information about each step in the sequence to be run, containing information about the
+ result of the step execution, like status or trace log.
+ - `steps.<name-of-the-step>.status`: The status of the step, like `success` or `failed`.
+ - `steps.<name-of-the-step>.outputs.<output-name>`: To fetch the output provided by the step
+
+The context object is used to enable support for the interpolation in the form of `${{ <value> }}`.
+
+Interpolation:
+
+- Is forbidden in the [step specification](#step-specification) section.
+ The specification is static configuration that should not be affected by the runtime environment.
+- Can be used in the [step implementation](#step-implementation) section. The implementation
+ describes the runtime set of instructions for how the step should be executed.
+- Is applied to every value of the hash of each data structure.
+- Is possible only for the *values* of each hash (for now). The interpolation of *keys* is forbidden.
+- Is done when executing and passing control to a given step, instead of running
+ it once when the configuration is loaded. This enables chaining outputs to inputs, or making steps depend on the execution
+ of earlier steps.
+
+For example:
+
+```yaml
+# .gitlab/ci/steps/exec-echo.yaml
+spec:
+ inputs:
+ timeout:
+ default: 10s
+ bash_support_version:
+---
+type: steps
+env:
+ HTTP_TIMEOUT: ${{inputs.timeout}}
+ PROJECT_ID: ${{job.project.id}}
+steps:
+ - step: ./my/local/step/to/echo.yml
+ inputs:
+ message: "I'm currently building a project: ${{job.project.full_path}}"
+ - step: gitlab.com/gitlab-org/components/bash/script@v${{inputs.bash_support_version}}
+```
+
+## Reference data structures describing YAML document
+
+```go
+package main
+
+import "time"
+
+type StepEnvironment map[string]string
+
+type StepSpecInput struct {
+ Default *string `yaml:"default"`
+ Description string `yaml:"description"`
+ Options *[]string `yaml:"options"`
+ Match *string `yaml:"match"`
+}
+
+type StepSpecOutput struct {
+}
+
+type StepSpecInputs map[string]StepSpecInput
+type StepSpecOutputs map[string]StepSpecOutput
+
+type StepSpec struct {
+ Inputs StepSpecInputs `yaml:"inputs"`
+ Outputs StepSpecOutputs `yaml:"outputs"`
+}
+
+type StepSpecDoc struct {
+ Spec StepSpec `yaml:"spec"`
+}
+
+type StepType string
+
+const StepTypeExec StepType = "exec"
+const StepTypeSteps StepType = "steps"
+
+type StepDefinition struct {
+ Def StepSpecDoc `yaml:"-"`
+ Env StepEnvironment `yaml:"env"`
+ Steps *StepDefinitionSequence `yaml:"steps"`
+ Exec *StepDefinitionExec `yaml:"exec"`
+}
+
+type StepDefinitionExec struct {
+ Command []string `yaml:"command"`
+ WorkingDir *string `yaml:"working_dir"`
+ Timeout *time.Duration `yaml:"timeout"`
+}
+
+type StepDefinitionSequence []StepReference
+
+type StepReferenceInputs map[string]string
+
+type StepReference struct {
+ Step string `yaml:"step"`
+ Inputs StepReferenceInputs `yaml:"inputs"`
+ Env StepEnvironment `yaml:"env"`
+}
+```
diff --git a/doc/architecture/blueprints/gitlab_steps/steps-syntactic-sugar.md b/doc/architecture/blueprints/gitlab_steps/steps-syntactic-sugar.md
new file mode 100644
index 00000000000..3ca54a45477
--- /dev/null
+++ b/doc/architecture/blueprints/gitlab_steps/steps-syntactic-sugar.md
@@ -0,0 +1,66 @@
+---
+owning-stage: "~devops::verify"
+description: The Syntactic Sugar extensions to the Step Definition
+---
+
+# The Syntactic Sugar extensions to the Step Definition
+
+[The Step Definition](step-definition.md) describes a minimal required syntax
+to be supported. To aid common workflows the following syntactic sugar is used
+to extend different parts of that document.
+
+## Syntactic Sugar for Step Reference
+
+Each of the syntactic sugar extensions is converted into a simple
+[step reference](step-definition.md#steps-that-use-other-steps).
+
+### Easily execute scripts in a target environment
+
+`script:` is a shorthand syntax to aid execution of simple scripts. It cannot be combined with `step:`
+and is run by an externally stored step component provided by GitLab.
+
+The GitLab-provided step component performs shell auto-detection based on the running system
+unless overridden, similar to how GitLab Runner does that today.
+
+`inputs:` and `env:` can be used for additional control of some aspects of that step component.
+
+For example:
+
+```yaml
+spec:
+---
+type: steps
+steps:
+ - script: bundle exec rspec
+ - script: bundle exec rspec
+ inputs:
+ shell: sh # Force runner to use `sh` shell, instead of performing auto-detection
+```
+
+This syntax example translates into the following equivalent syntax for
+execution by the Step Runner:
+
+```yaml
+spec:
+---
+type: steps
+steps:
+ - step: gitlab.com/gitlab-org/components/steps/script@v1.0
+ inputs:
+ script: bundle exec rspec
+ - step: gitlab.com/gitlab-org/components/steps/script@v1.0
+ inputs:
+ script: bundle exec rspec
+ shell: sh # Force runner to use `sh` shell, instead of performing auto-detection
+```
+
+This syntax example is **invalid** (and ambiguous) because `script:` and `step:` cannot be used together:
+
+```yaml
+spec:
+---
+type: steps
+steps:
+ - step: gitlab.com/my-component/ruby/install@v1.0
+ script: bundle exec rspec
+```
diff --git a/doc/architecture/blueprints/google_artifact_registry_integration/index.md b/doc/architecture/blueprints/google_artifact_registry_integration/index.md
index 4c2bfe95c5e..ef66ae33b2a 100644
--- a/doc/architecture/blueprints/google_artifact_registry_integration/index.md
+++ b/doc/architecture/blueprints/google_artifact_registry_integration/index.md
@@ -116,6 +116,6 @@ One alternative solution considered was to use the Docker/OCI API provided by GA
- **Multiple Requests**: To retrieve all the required information about each image, multiple requests to different endpoints (listing tags, obtaining image manifests, and image configuration blobs) would have been necessary, leading to a `1+N` performance issue.
-GitLab had previously faced significant challenges with the last two limitations, prompting the development of a custom [GitLab Container Registry API](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs-gitlab/api.md) to address them. Additionally, GitLab decided to [deprecate support](../../../update/deprecations.md#use-of-third-party-container-registries-is-deprecated) for connecting to third-party container registries using the Docker/OCI API due to these same limitations and the increased cost of maintaining two solutions in parallel. As a result, there is an ongoing effort to replace the use of the Docker/OCI API endpoints with custom API endpoints for all container registry functionalities in GitLab.
+GitLab had previously faced significant challenges with the last two limitations, prompting the development of a custom [GitLab Container Registry API](https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/spec/gitlab/api.md) to address them. Additionally, GitLab decided to [deprecate support](../../../update/deprecations.md#use-of-third-party-container-registries-is-deprecated) for connecting to third-party container registries using the Docker/OCI API due to these same limitations and the increased cost of maintaining two solutions in parallel. As a result, there is an ongoing effort to replace the use of the Docker/OCI API endpoints with custom API endpoints for all container registry functionalities in GitLab.
Considering these factors, the decision was made to build the GAR integration from scratch using the proprietary GAR API. This approach provides more flexibility and control over the integration and can serve as a foundation for future expansions, such as support for other GAR artifact formats.
diff --git a/doc/architecture/blueprints/new_diffs.md b/doc/architecture/blueprints/new_diffs.md
index b5aeb9b8aa8..af1e4679c14 100644
--- a/doc/architecture/blueprints/new_diffs.md
+++ b/doc/architecture/blueprints/new_diffs.md
@@ -68,6 +68,35 @@ compared with the pros and cons of alternatives.
## Design and implementation details
+### Workspace & Artifacts
+
+- We will store implementation details like metrics, budgets, and development & architectural patterns here in the docs
+- We will store large bodies of research, the results of audits, etc. in the [wiki](https://gitlab.com/gitlab-com/create-stage/new-diffs/-/wikis/home) of the [New Diffs project](https://gitlab.com/gitlab-com/create-stage/new-diffs)
+- We will store audio & video recordings on the public YouTube channel in the Code Review / New Diffs playlist
+- We will store drafts, meeting notes, and other temporary documents in public Google docs
+
+### Definitions
+
+#### Maintainability
+
+Maintainable projects are _simple_ projects.
+
+Simplicity is the opposite of complexity. This uses a definition of simple and complex [described by Rich Hickey in "Simple Made Easy"](https://www.infoq.com/presentations/Simple-Made-Easy/) (Strange Loop, 2011).
+
+- Maintainable code is simple (single task, single concept, separate from other things).
+- Maintainable projects expand on simple code by having simple structure (folders define classes of behaviors, e.g. you can be assured that a component directory will never initiate a network call, because that would be complecting visual display with data access)
+- Maintainable applications flow out of simple organization and simple code. The old saying is that a cluttered desk is representative of a cluttered mind. Rigorous discipline on simplicity will be represented in our output (the product). By being strict about working simply, we will naturally produce applications where our users can more easily reason about their behavior.
+
+#### Done
+
+GitLab has an existing [definition of done](/ee/development/contributing/merge_request_workflow.md#definition-of-done) which is geared primarily toward identifying when an MR is ready to be merged.
+
+In addition to the items in the GitLab definition of done, work on new diffs should also adhere to the following requirements:
+
+- Meets or exceeds all metrics
+ - Meets or exceeds our minimum accessibility metrics (these are explicitly not part of our defined priorities, since they are non-negotiable)
+- All work is fully documented for engineers (user documentation is a requirement of the standard definition of done)
+
<!--
This section should contain enough information that the specifics of your
change are understandable. This may include API specs (though not always
diff --git a/doc/architecture/blueprints/observability_logging/diagrams.drawio b/doc/architecture/blueprints/observability_logging/diagrams.drawio
new file mode 100644
index 00000000000..79b05247437
--- /dev/null
+++ b/doc/architecture/blueprints/observability_logging/diagrams.drawio
@@ -0,0 +1 @@
+<mxfile host="Electron" modified="2023-10-29T14:03:45.654Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/20.7.4 Chrome/106.0.5249.199 Electron/21.3.3 Safari/537.36" etag="mgCNcxJzZIj4Fii1_swS" version="20.7.4" type="device"><diagram id="eudcs1I04LxSKviLHc7n" name="Page-1">7VrZcuMoFP0aP9olCS3Wo+MsPVPpjnuSqkk/TWGJ2FSwcSG8zdcPWEiWQF5aXuRUzZPhglgOh8Pl4hboT1ZPDM7G32mMSMux4lUL3Lccx3aBK36kZZ1aur6fGkYMx6rS1vCK/0XKaCnrHMcoKVXklBKOZ2VjRKdTFPGSDTJGl+VqH5SUe53BETIMrxEkpvVvHPOxmoUTbO3fEB6Ns55tP0xLJjCrrGaSjGFMlwUTeGiBPqOUp6nJqo+IBC/DJf3ucUdpPjCGpvyYD+gIg8WPhP75GFv47m244iuvrVpZQDJXE/5jOmIoSdSY+ToDgtH5NEayLbsF7pZjzNHrDEaydCmWXtjGfEJUsTm2rCPEOFoVTGqsT4hOEGdrUSWnjsJNEQcEKr/cLoOdYTsuLIGvbFCt/ChveguOSCh8qrF6/t5++Qn+wqHDrX8GsfVu9WAFVn2Co89vdJ4gAy6x2jOZjNYEC9wYOAzaMEX4eZgbYPQ52uD+MueiGaTsSbpLbC9H2oC1AvydSAeOhrTnmUhbFUiHl0LaMZD+OUebIb8itsCRCXcNdp6EmeuEJcxyvSpgFlRAZoNLYeYZmO3k5AdBq57URoEFmsYqeR8RmCQ4KmO1Bdba1BZDfVdFm8wvWdLxsuz9qljzfp3lVpi/F9K/CuntJzKzzogtcwPEsEAHMVXjAxPSp4SyzXwAsB4fQSDXW4G+3RAoNmTd0KKEzlmEDm94DtkI8UN0NalSoIJXQYXMxhCBHC/Kw62ih+phQLGYSM5EPyjvXkP/0mmqr4rng9aQB3bIQNZQioPR0Iat+bTrE9g3CPzy9vAsLGLFiTjaxao7PuFSF2VqJFNvaAo3c+g1rgigW1YEF1ScV/t4cHZFCK6qCFYn9Iqi0LY6Vi1Z6Hg7hWGfBmxa1gXjQsLgmMKw27tqShgCfT936wqDpZ113esKQ/cUYbi7OWHwrKaFITwF0B+nAQpZpC55rnUZfP3GhTfzBfcpb0Flp1R69ncxTMY5igXEpH0AuRC16cbiWPIukXBGP/MrqWPosiaTLkKe45b1uiTX9rFSncvzfh/OOq/ihke6Yt0mFdfVrqxuXVfMDbWGbK2hCyuubV51L+A6aBR98OwQlCla75Jh/4Y3cbqDsC+scpCu/v90PQddzXjBl6VrcNN8bVReneAAzY7lKwi1q5p/XYfWBrfIV6cOYe1bZmt4U2x1z8RWD1yZra7B1l56R9hB2fmEpBXAnXTscQTJMxwiMqAJ5phKL3ZIOaeTQoUewSNZwKl2X6Bp5LufP++c6dagx7wrXhdABTku97hwlfhtURPUtaNODCYRlONZt0NCo89mNnhKzaZ2ONDdnroBFsMR0x8JLr3DzdBr8+fRnsPIr+vtXzNG6B/L4Zs6pL6sS3WVYPfvUXhfSKWdv3HdMoe7X4LDXY3DTl0Z1uLcbnBlDpuB7kY43LHdo0ODbbcDdjPZuR0qHxtAbJbKukehOwJHU9nWqHzlJxvbfGJ4wpzAobC9DBPEFnCICeYS2oGA7IOyiUhGZJ5IJhisX+IJgRt39fLvNfofj/J8gQN+WEUC3f874nIgstt/gKXob/9HBx7+Aw==</diagram></mxfile> \ No newline at end of file
diff --git a/doc/architecture/blueprints/observability_logging/index.md b/doc/architecture/blueprints/observability_logging/index.md
new file mode 100644
index 00000000000..d8259e0a736
--- /dev/null
+++ b/doc/architecture/blueprints/observability_logging/index.md
@@ -0,0 +1,632 @@
+---
+status: proposed
+creation-date: "2023-10-29"
+authors: [ "@vespian_gl" ]
+coach: "@mappelman"
+approvers: [ "@sguyon", "@nicholasklick" ]
+owning-stage: "~monitor::observability"
+participating-stages: []
+---
+
+<!-- vale gitlab.FutureTense = NO -->
+
+# GitLab Observability - Logging
+
+## Summary
+
+This design document outlines a system for storing and querying logs, which will be a part of GitLab Observability Backend (GOB), together with [tracing](../observability_tracing/index.md) and [metrics](../observability_metrics/index.md).
+At its core, the system leverages the [OpenTelemetry logging](https://opentelemetry.io/docs/specs/otel/logs/) specification for data ingestion and the ClickHouse database for storage.
+Users will interact with the data through the GitLab UI.
+The system itself is multi-tenant and offers our users a way to store their application logs, query them, and, in future iterations, correlate them with other observability signals (traces, errors, metrics, etc...).
+
+## Motivation
+
+After [tracing](../observability_tracing/index.md) and [metrics](../observability_metrics/index.md), logging is the last observability signal that we need to support to be able to provide our users with a fully-fledged observability solution.
+
+One could argue that logging itself is also the most important observability signal because it is so widespread.
+It predates metrics and tracing in the history of application observability and is usually implemented as one of the first things during development.
+
+Without logging support, it would be very hard, if not impossible, for our users to fully understand the performance and operation of the applications they develop with the help of our platform.
+
+### Goals
+
+- **multi-tenant**: each user and their data should be isolated from others that are using the platform.
+ Users may query only the data that they have sent to the platform.
+- **follows OpenTelemetry standards**: logs ingestion should follow the [OpenTelemetry protocol](https://opentelemetry.io/docs/specs/otel/logs/data-model/).
+ Apart from being able to re-use the tooling and know-how that was already developed for OpenTelemetry protocol, we will not have to reinvent the wheel when it comes to wire protocol and data storage format.
+- **uses ClickHouse as a data storage backend**: ClickHouse has become the go-to solution for observability data at GitLab for a plethora of reasons.
+ Our tracing and metrics solutions already use it, so logging should be consistent with it and not introduce new dependencies.
+- **Users can query their data using reasonably complex queries**: storing logs by itself will not bring much value to our users; they also need to be able to search and filter them.
+
+### Non-Goals
+
+- **complex query support and logs analytics** - at least in the first iteration we do not plan to support complex queries, in particular `GROUP BY` queries that users may want to use for quantitative logs analytics.
+ Supporting it is not trivial and requires some more research and work in the area of query language syntax.
+- **advanced data retention** - logs differ from traces and metrics concerning legal requirements.
+ Authorities may request logs stored by us as part of e.g. ongoing investigations.
+ In the initial iteration, we need to caution our users that our system is not ready for that and they need a secondary system for now if they intend to store e.g. access logs.
+ We will need more work around logs/data integrity and long-term storage policies to handle this use case.
+- **data deletion** - apart from the case where the data simply expires after a predefined storage period, we do not plan to support deleting individual logs by users.
+ This is left for later iterations.
+- **linking logs to traces** - we do not intend to support linking logs to traces in the first iteration, at least not in the UI.
+- **logs sampling** - for traces we expect users to sample their data before sending it to us while we focus only on enforcing the limits/quotas.
+ Logs should follow this pattern.
+ The log sampling implementation seems immature as well - a log sampler is [implemented in OTEL Collector](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/14920), but it is not clear if it can work together with traces sampling, and there is no official specification ([issue](https://github.com/open-telemetry/opentelemetry-specification/issues/2237), [pull request](https://github.com/open-telemetry/opentelemetry-specification/pull/2482)).
+
+## Proposal
+
+The architecture of logs ingestion follows the patterns outlined in the [tracing](../observability_tracing/index.md) and [metrics](../observability_metrics/index.md) proposals:
+
+![System Overview](system_overview.png)
+
+We re-use the components that were introduced by these proposals, so there are not going to be any new services added.
+Each top-level GitLab namespace has its own OTEL collector to which ingestion requests are directed by the cluster-wide Ingress.
+On the other hand, there is a single, cluster-wide query service that handles queries from users.
+The query service is tenant-aware.
+Rate-limiting of the user requests is done at the Ingress level.
+The cluster-wide Ingress is currently done using Traefik, and it is shared with all other services in the cluster.
+
+### Ingestion path
+
+We receive Log objects from customers in the JSON format over HTTP.
+The request arrives at the cluster-wide Ingress which routes the request to the appropriate OTEL collector.
+The collector then processes this request and executes INSERT statements against ClickHouse.
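+
+For illustration only, an abbreviated OTLP/HTTP JSON payload carrying a single log record could look like the following (field names follow the OpenTelemetry protobuf JSON mapping; all values are made up):
+
+```json
+{
+  "resourceLogs": [
+    {
+      "resource": {
+        "attributes": [
+          { "key": "service.name", "value": { "stringValue": "checkout" } }
+        ]
+      },
+      "scopeLogs": [
+        {
+          "logRecords": [
+            {
+              "timeUnixNano": "1698585825000000000",
+              "severityNumber": 9,
+              "severityText": "INFO",
+              "body": { "stringValue": "user logged in" }
+            }
+          ]
+        }
+      ]
+    }
+  ]
+}
+```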
+
+### Read path
+
+GOB exposes an HTTP/JSON API that, for example, the GitLab UI uses to query and then render logs.
+The cluster-wide Ingress routes the requests to the query service, which in turn parses the API request and executes an SQL query against ClickHouse.
+The results are then formatted into a JSON response and sent back to the client.
+
+## Design and implementation details
+
+### Legacy code
+
+Contrary to trace and metric signals, the handling of logging signals is heavily influenced by the large amount of legacy code that needs to be supported.
+For metrics and tracing, OpenTelemetry specification defines new APIs and SDKs that can be leveraged.
+With logs, OpenTelemetry acts more like a bridge and enables legacy libraries/code to send their data to us.
+
+Users may create Log signals from plain log files using [filelogreceiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelogreceiver) or [fluentd](https://docs.fluentbit.io/manual/pipeline/outputs/opentelemetry).
+Existing log libraries may use [Log Bridge API](https://opentelemetry.io/docs/specs/otel/logs/bridge-api/) to emit logs using OTEL protocol.
+In time the ecosystem will most probably develop and the number of options will grow.
+The assumption is made that _how_ logs are ingested is up to the user.
+
+Hence we expose only an HTTP endpoint that accepts logs in OTEL format and assume that logs are already properly parsed and formatted.
+
+### Logs, Events, and Span Events
+
+Log messages can be sent using three different objects according to the OTEL specification:
+
+- [Log](https://opentelemetry.io/docs/specs/otel/logs/)
+- [Event](https://opentelemetry.io/docs/specs/otel/logs/event-api/)
+- [Span Event](https://opentelemetry.io/docs/concepts/signals/traces/#span-events)
+
+At least in the first iteration, we can support only one of Logs, Events, or Span Events.
+
+We can't rely on Span Events, as there is a lot of legacy code that cannot or will not implement tracing for various reasons.
+
+Even though Events use the same data model internally, their semantics differ.
+Logs have a mandatory severity level as a first-class parameter that Events do not need to have, and Events have a mandatory `event.name` and optional `event.domain` keys in the `Attributes` field of the Log record.
+Further, logs typically have messages in string form and events have data in the form of key-value pairs.
+There is a [discussion](https://github.com/open-telemetry/oteps/blob/main/text/0202-events-and-logs-api.md) to separate Log and Event APIs.
+More information on the differences between these two can be found [here](https://github.com/open-telemetry/oteps/blob/main/text/0202-events-and-logs-api.md#subtle-differences-between-logs-and-events).
+
+From the perspective of a developer/potential user, there seems to be no logging use case that couldn't be modeled as a Log record instead of sending an Event explicitly.
+Examples that the community gives e.g. [here](https://github.com/open-telemetry/opentelemetry-specification/issues/3254) or [here](https://github.com/open-telemetry/oteps/blob/main/text/0202-events-and-logs-api.md#subtle-differences-between-logs-and-events) are not convincing enough and could simply be modeled as Log records.
+
+Hence the decision to only support Log objects seems like a boring and simple solution.
+
+### Rate-limiting
+
+Similar to traces, rate limiting of logging data ingestion will be done at the Ingress level.
+As part of [the forward-auth](https://doc.traefik.io/traefik/middlewares/http/forwardauth/) flow, Traefik will forward the request to Gatekeeper which in turn leverages Redis for counting.
+This is currently done only for [the ingestion path](https://gitlab.com/gitlab-org/opstrace/opstrace/-/merge_requests/2236).
+Please check the MR description for more details on how it works.
+The read path rate limiting implementation is tracked [here](https://gitlab.com/gitlab-org/opstrace/opstrace/-/issues/2356).
+
+### Database schema
+
+[OpenTelemetry specification](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/logs/data-model.md) defines a set of fields that are required by the implementations.
+There are some small discrepancies between the documented schema and the [protobuf definition](https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/logs/v1/logs.proto), namely, TraceFlags is defined as an 8-bit field in the documentation whereas it is a 32-bit wide field in the proto definition.
+The remaining 24 bits are reserved.
+The Log message body may be any object and there is [no size limitation for the record](https://github.com/open-telemetry/opentelemetry-specification/issues/1251).
+For the purpose of this design document, we will assume that it is going to be an arbitrary string, either plain text or e.g. JSON, without length limits.
+
+#### Data filtering
+
+The schema uses Bloom Filters extensively.
+They never produce false negatives, but false positives are still possible; hence we will not be able to provide `!=` queries to users.
+The `Body` field is a special case, as it uses a [`tokenbf_v1` tokenized Bloom Filter](https://clickhouse.com/docs/en/optimize/skipping-indexes#bloom-filter-types).
+The `tokenbf_v1` skipping index seems like a simpler and more lightweight approach than the `ngrambf_v1` index.
+Based on the very preliminary benchmarks below, the `ngrambf_v1` index will also be much more difficult to tune.
+The limitation, though, is that our users will only be able to search for full words for now.
+We (gu)estimate that there may be up to 10,000 different words in a given granule, and we aim for a 0.1% probability of false positives.
+Using [this tool](https://krisives.github.io/bloom-calculator/), the optimal filter size was calculated to be 143776 bits with 10 hash functions.
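+
+For reference, these numbers follow from the standard Bloom filter sizing formulas for `n` distinct tokens per granule and a target false-positive probability `p`:
+
+```math
+m = -\frac{n \ln p}{(\ln 2)^2} = -\frac{10000 \cdot \ln 0.001}{(\ln 2)^2} \approx 143776 \text{ bits}, \qquad k = \frac{m}{n} \ln 2 \approx 10 \text{ hash functions}
+```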
+
+#### Skipping indexes, `==`, `!=` and `LIKE` operators
+
+Skipping indexes only narrow down the set of granules that need to be scanned.
+`==` and `LIKE` queries work as they should; `!=` always results in a full scan due to Bloom Filter limitations.
+At least in the first iteration, we will not make the `!=` operator available to users.
+
+Based on the data, it may be much easier for us to tune the `tokenbf_v1` filter in the first iteration than the `ngrambf_v1` one, because with `ngrambf_v1` queries almost always result in a full scan for any reasonably big dataset.
+The reason is that the number of n-grams in the index is much higher than the number of tokens, hence matches are more frequent for data with a high cardinality of words/symbols.
+
+A very preliminary benchmark was conducted to verify these assumptions.
+
+As testing data, we used the following table schemas and inserts/functions.
+They simulate a single tenant, as we want to focus only on the `Body` field.
+Normally the primary index would allow us to skip granules where there is no data for a given tenant.
+
+`tokenbf_v1` version of the table:
+
+```plaintext
+CREATE TABLE tbl2
+(
+ `Timestamp` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `TraceId` String CODEC(ZSTD(1)),
+ `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
+ `Duration` UInt8 CODEC(ZSTD(1)),
+ `SpanName` LowCardinality(String) CODEC(ZSTD(1)),
+ `Body` String CODEC(ZSTD(1)),
+ INDEX idx_body Body TYPE tokenbf_v1(143776, 10, 0) GRANULARITY 1
+)
+ENGINE = MergeTree
+PARTITION BY toDate(Timestamp)
+ORDER BY (ServiceName, SpanName, toUnixTimestamp(Timestamp), TraceId)
+SETTINGS index_granularity = 8192
+```
+
+`ngrambf_v1` version of the table:
+
+```plaintext
+CREATE TABLE tbl3
+(
+ `Timestamp` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `TraceId` String CODEC(ZSTD(1)),
+ `ServiceName` LowCardinality(String) CODEC(ZSTD(1)),
+ `Duration` UInt8 CODEC(ZSTD(1)),
+ `SpanName` LowCardinality(String) CODEC(ZSTD(1)),
+ `Body` String CODEC(ZSTD(1)),
+ INDEX idx_body Body TYPE ngrambf_v1(4,143776, 10, 0) GRANULARITY 1
+)
+ENGINE = MergeTree
+PARTITION BY toDate(Timestamp)
+ORDER BY (ServiceName, SpanName, toUnixTimestamp(Timestamp), TraceId)
+SETTINGS index_granularity = 8192
+```
+
+In both cases, their `Body` fields were filled with data that simulates a JSON map object:
+
+```plaintext
+CREATE FUNCTION genmap AS (n) -> arrayMap (x-> (x::String, (x*(rand()%40000+1))::String), range(1, n));
+
+INSERT INTO tbl(2|3)
+SELECT
+ now() - randUniform(1, 1_000_000) as Timestamp,
+ randomPrintableASCII(2) as TraceId,
+ randomPrintableASCII(2) as ServiceName,
+ rand32() as Duration,
+ randomPrintableASCII(2) as SpanName,
+ toJSONString(genmap(rand()%40+1)::Map(String, String)) as Body
+FROM numbers(10_000_000);
+```
+
+In the case of the `tokenbf_v1` table, we have:
+
+- `==` equality works, skipping index resulted in 224/1264 granules scanned:
+
+```plaintext
+zara.engel.vespian.net :) explain indexes=1 select count(*) from tbl2 where Body == '{"1":"14732","2":"29464","3":"44196","4":"58928","5":"73660","6":"88392","7":"103124","8":"117856","9":"132588","10":"147320","11":"162052"}'
+
+EXPLAIN indexes = 1
+SELECT count(*)
+FROM tbl2
+WHERE Body = '{"1":"14732","2":"29464","3":"44196","4":"58928","5":"73660","6":"88392","7":"103124","8":"117856","9":"132588","10":"147320","11":"162052"}'
+
+Query id: 60827945-a9b0-42f9-86a8-dfe77758a6b1
+
+┌─explain───────────────────────────────────────────┐
+│ Expression ((Projection + Before ORDER BY)) │
+│ Aggregating │
+│ Expression (Before GROUP BY) │
+│ Filter (WHERE) │
+│ ReadFromMergeTree (logging.tbl2) │
+│ Indexes: │
+│ MinMax │
+│ Condition: true │
+│ Parts: 69/69 │
+│ Granules: 1264/1264 │
+│ Partition │
+│ Condition: true │
+│ Parts: 69/69 │
+│ Granules: 1264/1264 │
+│ PrimaryKey │
+│ Condition: true │
+│ Parts: 69/69 │
+│ Granules: 1264/1264 │
+│ Skip │
+│ Name: idx_body │
+│ Description: tokenbf_v1 GRANULARITY 1 │
+│ Parts: 62/69 │
+│ Granules: 224/1264 │
+└───────────────────────────────────────────────────┘
+
+23 rows in set. Elapsed: 0.019 sec.
+```
+
+- `!=` inequality works as well, but results in a full scan - all granules were scanned:
+
+```plaintext
+zara.engel.vespian.net :) explain indexes=1 select count(*) from tbl2 where Body != '{"1":"14732","2":"29464","3":"44196","4":"58928","5":"73660","6":"88392","7":"103124","8":"117856","9":"132588","10":"147320","11":"162052"}'
+
+EXPLAIN indexes = 1
+SELECT count(*)
+FROM tbl2
+WHERE Body != '{"1":"14732","2":"29464","3":"44196","4":"58928","5":"73660","6":"88392","7":"103124","8":"117856","9":"132588","10":"147320","11":"162052"}'
+
+Query id: 01584696-30d8-4711-8469-44d4f2629c98
+
+┌─explain───────────────────────────────────────────┐
+│ Expression ((Projection + Before ORDER BY)) │
+│ Aggregating │
+│ Expression (Before GROUP BY) │
+│ Filter (WHERE) │
+│ ReadFromMergeTree (logging.tbl2) │
+│ Indexes: │
+│ MinMax │
+│ Condition: true │
+│ Parts: 69/69 │
+│ Granules: 1264/1264 │
+│ Partition │
+│ Condition: true │
+│ Parts: 69/69 │
+│ Granules: 1264/1264 │
+│ PrimaryKey │
+│ Condition: true │
+│ Parts: 69/69 │
+│ Granules: 1264/1264 │
+│ Skip │
+│ Name: idx_body │
+│ Description: tokenbf_v1 GRANULARITY 1 │
+│ Parts: 69/69 │
+│ Granules: 1264/1264 │
+└───────────────────────────────────────────────────┘
+
+23 rows in set. Elapsed: 0.017 sec.
+```
+
+- `LIKE` queries work, 271/1264 granules scanned:
+
+```plaintext
+zara.engel.vespian.net :) explain indexes=1 select * from tbl2 where Body like '%"11":"162052"%';
+
+EXPLAIN indexes = 1
+SELECT *
+FROM tbl2
+WHERE Body LIKE '%"11":"162052"%'
+
+Query id: 86e99d7a-6567-4000-badc-d0b8b2dc8936
+
+┌─explain─────────────────────────────────────┐
+│ Expression ((Projection + Before ORDER BY)) │
+│ ReadFromMergeTree (logging.tbl2) │
+│ Indexes: │
+│ MinMax │
+│ Condition: true │
+│ Parts: 69/69 │
+│ Granules: 1264/1264 │
+│ Partition │
+│ Condition: true │
+│ Parts: 69/69 │
+│ Granules: 1264/1264 │
+│ PrimaryKey │
+│ Condition: true │
+│ Parts: 69/69 │
+│ Granules: 1264/1264 │
+│ Skip │
+│ Name: idx_body │
+│ Description: tokenbf_v1 GRANULARITY 1 │
+│ Parts: 64/69 │
+│ Granules: 271/1264 │
+└─────────────────────────────────────────────┘
+
+20 rows in set. Elapsed: 0.047 sec.
+```
+
+The `ngrambf_v1` index will be much harder to tune and use correctly:
+
+- equality using n-gram indexes works as well, but due to the high number of n-grams in the Bloom filter, we aren't skipping many granules:
+
+```plaintext
+zara.engel.vespian.net :) explain indexes=1 select count(*) from tbl3 where Body == '{"1":"14732","2":"29464","3":"44196","4":"58928","5":"73660","6":"88392","7":"103124","8":"117856","9":"132588","10":"147320","11":"162052"}'
+
+EXPLAIN indexes = 1
+SELECT count(*)
+FROM tbl3
+WHERE Body = '{"1":"14732","2":"29464","3":"44196","4":"58928","5":"73660","6":"88392","7":"103124","8":"117856","9":"132588","10":"147320","11":"162052"}'
+
+Query id: 22836e2d-5e49-4f51-b23c-facf5a3102c2
+
+┌─explain───────────────────────────────────────────┐
+│ Expression ((Projection + Before ORDER BY)) │
+│ Aggregating │
+│ Expression (Before GROUP BY) │
+│ Filter (WHERE) │
+│ ReadFromMergeTree (logging.tbl3) │
+│ Indexes: │
+│ MinMax │
+│ Condition: true │
+│ Parts: 60/60 │
+│ Granules: 1257/1257 │
+│ Partition │
+│ Condition: true │
+│ Parts: 60/60 │
+│ Granules: 1257/1257 │
+│ PrimaryKey │
+│ Condition: true │
+│ Parts: 60/60 │
+│ Granules: 1257/1257 │
+│ Skip │
+│ Name: idx_body │
+│ Description: ngrambf_v1 GRANULARITY 1 │
+│ Parts: 60/60 │
+│ Granules: 1239/1257 │
+└───────────────────────────────────────────────────┘
+
+23 rows in set. Elapsed: 0.025 sec.
+```
+
+- inequality here also results in a full scan:
+
+```plaintext
+zara.engel.vespian.net :) explain indexes=1 select count(*) from tbl3 where Body != '{"1":"14732","2":"29464","3":"44196","4":"58928","5":"73660","6":"88392","7":"103124","8":"117856","9":"132588","10":"147320","11":"162052"}'
+
+EXPLAIN indexes = 1
+SELECT count(*)
+FROM tbl3
+WHERE Body != '{"1":"14732","2":"29464","3":"44196","4":"58928","5":"73660","6":"88392","7":"103124","8":"117856","9":"132588","10":"147320","11":"162052"}'
+
+Query id: 2378c885-65b0-4be0-9564-fa7ba7c79172
+
+┌─explain───────────────────────────────────────────┐
+│ Expression ((Projection + Before ORDER BY)) │
+│ Aggregating │
+│ Expression (Before GROUP BY) │
+│ Filter (WHERE) │
+│ ReadFromMergeTree (logging.tbl3) │
+│ Indexes: │
+│ MinMax │
+│ Condition: true │
+│ Parts: 60/60 │
+│ Granules: 1257/1257 │
+│ Partition │
+│ Condition: true │
+│ Parts: 60/60 │
+│ Granules: 1257/1257 │
+│ PrimaryKey │
+│ Condition: true │
+│ Parts: 60/60 │
+│ Granules: 1257/1257 │
+│ Skip │
+│ Name: idx_body │
+│ Description: ngrambf_v1 GRANULARITY 1 │
+│ Parts: 60/60 │
+│ Granules: 1257/1257 │
+└───────────────────────────────────────────────────┘
+
+23 rows in set. Elapsed: 0.022 sec.
+```
+
+- `LIKE` statements work, but result in a full scan as the n-grams match almost all the granules:
+
+```plaintext
+zara.engel.vespian.net :) explain indexes=1 select * from tbl3 where Body like '%"11":"162052"%';
+
+EXPLAIN indexes = 1
+SELECT *
+FROM tbl3
+WHERE Body LIKE '%"11":"162052"%'
+
+Query id: 957d8c98-819e-4487-93ac-868ffe0485ec
+
+┌─explain─────────────────────────────────────┐
+│ Expression ((Projection + Before ORDER BY)) │
+│ ReadFromMergeTree (logging.tbl3) │
+│ Indexes: │
+│ MinMax │
+│ Condition: true │
+│ Parts: 60/60 │
+│ Granules: 1257/1257 │
+│ Partition │
+│ Condition: true │
+│ Parts: 60/60 │
+│ Granules: 1257/1257 │
+│ PrimaryKey │
+│ Condition: true │
+│ Parts: 60/60 │
+│ Granules: 1257/1257 │
+│ Skip │
+│ Name: idx_body │
+│ Description: ngrambf_v1 GRANULARITY 1 │
+│ Parts: 60/60 │
+│ Granules: 1251/1257 │
+└─────────────────────────────────────────────┘
+
+20 rows in set. Elapsed: 0.023 sec.
+```
+
+#### Data Deduplication
+
+To provide a cost-efficient service, we need to think about deduplicating the data we receive from our users.
+ClickHouse [ReplacingMergeTree](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/replacingmergetree) deduplicates data automatically based on the primary key.
+We can't include all the relevant `Log` entry fields in the primary key, hence the idea of a Fingerprint as the very last part of the primary key.
+We do not normally use it for indexing; it is there to prevent unique records from being deduplicated away during merges.
+The fingerprint calculation algorithm and length have not been chosen yet; we may use the same one that `metrics` uses to calculate its Fingerprint.
+For now, we assume that it is 128-bit wide (16 8-bit chars).
+The columns we use for fingerprint calculation are the columns that are not present in the primary key: `Body`, `ResourceAttributes`, and `LogAttributes`.
+Due to its very high cardinality, the fingerprint needs to go into the last place in the primary index.
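+
+A minimal sketch of how such a fingerprint could be computed, assuming `sipHash128` is picked as the hash (it conveniently returns a `FixedString(16)`, matching the assumed 128-bit width); the actual algorithm is still an open question:
+
+```plaintext
+-- Sketch only: hash the columns that are not part of the primary key,
+-- using the `logs` table defined below.
+SELECT
+    sipHash128(Body, toJSONString(ResourceAttributes), toJSONString(LogAttributes)) AS Fingerprint
+FROM logs
+LIMIT 1
+```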
+
+#### Data Retention
+
+There is a legal question of how long logs need to be stored and whether we allow for their deletion (e.g. due to the leak of some private data or data related to an investigation).
+In some jurisdictions, logs need to be kept for years and there must be no way to delete them.
+This affects deduplication unless we include the ObservedTimestamp in the fingerprint.
+As pointed out in the `Non-Goals` section, this is an issue we are going to tackle in future iterations.
+
+#### Ingestion-time fields
+
+We intentionally do not pull [semantic convention fields](https://opentelemetry.io/docs/specs/semconv/general/logs/) into separate columns, as users will use a countless number of log formats, and it will probably not be possible to identify properties worth becoming a column.
+
+The `ObservedTimestamp` field is set by the collector during ingestion.
+Users query by the `Timestamp` field and the log pruning is driven by the `ObservedTimestamp` field.
+The disadvantage of this approach is that `TTL DELETE` may not remove parts as early as we would like, because the primary index and the TTL column differ, so the data may not be well localized.
+This seems like a good tradeoff though.
+We will offer users a predefined storage period that starts at ingestion.
+If users ingest logs with timestamps in the future or the past, the pruning of old logs could start too early or too late.
+Users could also abuse the claimed log timestamp to delay pruning.
+The `ObservedTimestamp` approach does not have these issues.
+
+During ingestion, the `SeverityText` field is parsed into `SeverityNumber` if the `SeverityNumber` field has not been set.
+Queries will use the `SeverityNumber` field, as it is more efficient than plain text and offers higher granularity.
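+
+As an illustration, the mapping could follow the OpenTelemetry data model ranges (TRACE=1, DEBUG=5, INFO=9, WARN=13, ERROR=17, FATAL=21). It is shown below as a ClickHouse expression for brevity, even though the actual parsing would happen in the collector during ingestion:
+
+```plaintext
+-- Sketch only: map SeverityText to the lowest SeverityNumber of its range.
+WITH 'warn' AS SeverityText
+SELECT multiIf(
+    upper(SeverityText) = 'TRACE', 1,
+    upper(SeverityText) = 'DEBUG', 5,
+    upper(SeverityText) = 'INFO', 9,
+    upper(SeverityText) = 'WARN', 13,
+    upper(SeverityText) = 'ERROR', 17,
+    upper(SeverityText) = 'FATAL', 21,
+    0) AS SeverityNumber -- returns 13
+```
+
+The resulting table schema is: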
+
+```plaintext
+DROP TABLE if exists logs;
+CREATE TABLE logs
+(
+ `ProjectId` String CODEC(ZSTD(1)),
+ `Fingerprint` FixedString(16) CODEC(ZSTD(1)),
+ `Timestamp` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `ObservedTimestamp` DateTime64(9) CODEC(Delta(8), ZSTD(1)),
+ `TraceId` FixedString(16) CODEC(ZSTD(1)),
+ `SpanId` FixedString(8) CODEC(ZSTD(1)),
+ `TraceFlags` UInt32 CODEC(ZSTD(1)),
+ `SeverityText` LowCardinality(String) CODEC(ZSTD(1)),
+ `SeverityNumber` UInt8 CODEC(ZSTD(1)),
+ `ServiceName` String CODEC(ZSTD(1)),
+ `Body` String CODEC(ZSTD(1)),
+ `ResourceAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ `LogAttributes` Map(LowCardinality(String), String) CODEC(ZSTD(1)),
+ INDEX idx_trace_id TraceId TYPE bloom_filter(0.001) GRANULARITY 1,
+ INDEX idx_span_id SpanId TYPE bloom_filter(0.001) GRANULARITY 1,
+ INDEX idx_trace_flags TraceFlags TYPE set(2) GRANULARITY 1,
+ INDEX idx_res_attr_key mapKeys(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_res_attr_value mapValues(ResourceAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_log_attr_key mapKeys(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_log_attr_value mapValues(LogAttributes) TYPE bloom_filter(0.01) GRANULARITY 1,
+ INDEX idx_body Body TYPE tokenbf_v1(143776, 10, 0) GRANULARITY 1
+)
+ENGINE = ReplacingMergeTree
+PARTITION BY toDate(Timestamp)
+ORDER BY (ProjectId, ServiceName, SeverityNumber, toUnixTimestamp(Timestamp), TraceId, Fingerprint)
+TTL toDateTime(ObservedTimestamp) + toIntervalDay(30)
+SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1;
+```
+
+### Query API, querying UI
+
+The main idea behind the query API/workflow introduced by this proposal is to give users the freedom to query while at the same time limiting both query complexity and query resource usage/execution time.
+We can't foresee how users are going to query their data, nor how the data will look exactly - some will use Attributes, some will just focus on log level, etc...
+
+In ClickHouse, individual queries [may have settings](https://clickhouse.com/docs/knowledgebase/configure-a-user-setting), which include [query complexity settings](https://clickhouse.com/docs/en/operations/settings/query-complexity).
+The query limits would be appended to each query automatically by the query service when constructing SQL statements.
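+
+For example, the templated SQL could have limits appended as query-level settings. The setting names below are existing ClickHouse settings, but the concrete values are hypothetical and would need tuning:
+
+```plaintext
+SELECT Timestamp, SeverityNumber, Body
+FROM logs
+WHERE ProjectId = '42'
+ORDER BY Timestamp DESC
+LIMIT 100
+SETTINGS max_execution_time = 30, max_rows_to_read = 100000000, max_result_rows = 10000
+```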
+
+Fulltext queries on the Log entry `Body` field would be handled transparently by the query service as well, thanks to ClickHouse optimizing `LIKE` queries using Bloom Filters and tokenization of the search term.
+In future iterations we may want to consider n-gram tokenization; for now, queries will be limited to full words only.
+
+It is up for debate whether we want to deduplicate log entries in the UI in case the user ingests duplicates.
+We could use the `max(ObservedTimestamp)` function to avoid duplicated entries in the time between records ingestion and ReplacingMergeTree's eventual deduplication kicking in.
+Definitely not in the first iteration though.
+
+The query service would also transparently translate the `SeverityText` attributes of the query into `SeverityNumber` while constructing the query.
+
+#### Query Service API schema
+
+We can't allow the UI to send us SQL queries, as that would open up the possibility of users abusing the system.
+We are also unable to support all the use cases that users could come up with when given the full flexibility of the SQL query language.
+So the idea is for the UI to provide a simple creator-like experience that guides users.
+Something very similar to what GitLab currently has for searching MRs and Issues.
+The UI code would then translate the query that the user came up with into JSON and send it to the query service for processing.
+Based on the JSON received, the query service would then template an SQL query together with the query limits we mentioned above.
+
+For now, the UI and the JSON API would support only a basic set of operations on given fields:
+
+- Timestamp: `>`, `<`, `==`
+- TraceId: `==`, later iterations `in`
+- SpanId: `==`, later iterations `in`
+- TraceFlags: `==`, `!=`, later iterations:`in`, `notIn`
+- SeverityText: `==`, `!=`, later iterations: `in`, `notIn`
+- SeverityNumber: `<`,`>`, `==`, `!=`, later iterations: `in`, `notIn`
+- ServiceName: `==`, `!=`, later iterations: `in`, `notIn`
+- Body: `==`, `CONTAINS`
+- ResourceAttributes: `key==value`, `mapContains(key)`
+- LogAttributes: `key==value`, `mapContains(key)`
+
+The format of the intermediate JSON could look like the following:
+
+```json
+{
+  "query": [
+    {
+      "type": "()|AND|OR",
+      "operands": [...]
+    },
+    {
+      "type": "==|!=|<|>|CONTAINS",
+      "column": "...",
+      "val": "..."
+    }
+  ]
+}
+```
+
+The `==|!=|<|>|CONTAINS` operators are non-nesting operands; they operate on concrete columns and result in `WHERE` conditions after being processed by the query service.
+The `()|AND|OR` operators are nesting operands and can only include non-nesting operands.
+We may defer the implementation of the nesting operands for later iterations.
+There is an implicit AND between the operands at the top level of the query structure.
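+
+As an illustration, a user query such as `ServiceName == "checkout" AND Body CONTAINS "timeout"` could be templated by the query service into something like the following; the exact templating is an implementation detail:
+
+```plaintext
+SELECT Timestamp, SeverityNumber, ServiceName, Body
+FROM logs
+WHERE ProjectId = '42'
+  AND ServiceName = 'checkout'
+  AND Body LIKE '%timeout%'
+ORDER BY Timestamp DESC
+LIMIT 100
+```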
+
+The query schema is intentionally kept simple compared to [the one used in the metrics proposal](../observability_metrics/index.md#api-structure).
+We may add fields like `QueryContext`, `BackendContext`, etc... in later iterations once a need arises.
+For now, we keep the schema as simple as possible and just make sure that the API is versioned so that we can change it easily in the future.
+
+## Open questions
+
+### Logging SDK Maturity
+
+The OTEL standard does not intend to provide a standalone SDK for logging like it did for e.g. tracing.
+It may consider doing so only for a programming language that does not have its own logging libraries, which should be a pretty rare case.
+All the existing logging libraries should instead use the [Bridge API](https://opentelemetry.io/docs/specs/otel/logs/bridge-api/) to interact with the OTEL collector and send logs using the OTEL Logs standard.
+
+The majority of languages have already made the required adjustments, except for Go.
+There is only very minimal support for Go ([repo](https://github.com/agoda-com/opentelemetry-go), [repo](https://github.com/agoda-com/opentelemetry-logs-go)).
+The official Uber Zap repository has only an [issue](https://github.com/uber-go/zap/issues/654) about emitting events in spans.
+The OpenTelemetry [status page](https://opentelemetry.io/docs/instrumentation/go/) states that Go support is not implemented yet.
+
+The lack of native OTEL SDK support for emitting logs in Go may be an issue for us if we want to dogfood logging.
+We could work around these limitations to a large extent by parsing log files using [filelogreceiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelogreceiver) or [fluentd](https://docs.fluentbit.io/manual/pipeline/outputs/opentelemetry).
+Contributing and improving the support of Go in OTEL is also a valid option.
+
+## Future work
+
+### Support for != operators in queries
+
+Bloom filters that we use in schemas do not allow for testing if the given term is NOT present in the log entry's body/attributes.
+This is a small but valid use case.
+A solution for that may be [inverted indexes](https://clickhouse.com/blog/clickhouse-search-with-inverted-indices), but these are still an experimental feature.
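+
+A rough sketch of what adding such an index to the `logs` table could look like; because the feature is experimental, the exact syntax and setting name may change between ClickHouse versions:
+
+```plaintext
+-- Sketch only: experimental inverted index on the Body column.
+SET allow_experimental_inverted_index = true;
+ALTER TABLE logs ADD INDEX idx_body_inverted Body TYPE inverted;
+ALTER TABLE logs MATERIALIZE INDEX idx_body_inverted;
+```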
+
+### Documentation
+
+As part of the documentation effort, we may want to provide examples of how sending data to GOB can be done in different languages (uber-zap, logrus, log4j, etc...) just like we do for error tracking.
+Some of the applications can't be easily modified to send data to us (e.g. systemd/journald), and log tailing/parsing needs to be employed using [filelogreceiver](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/filelogreceiver) or [fluentd](https://docs.fluentbit.io/manual/pipeline/outputs/opentelemetry).
+We could probably address both cases above by instrumenting our infrastructure and linking to our code from the documentation.
+This way we can dogfood our solution, save some money (the GCE logging solution is pretty expensive), and give users real-life examples of how they can instrument their infrastructure.
+
+This could be one of the follow-up tasks once we are done with the implementation.
+
+### User query resource usage monitoring
+
+Long-term, we will need a way to monitor the number of user queries that fail due to limit enforcement, and resource usage in general, to fine-tune the query limits and make sure that users are not restricted too aggressively.
+
+## Iterations
+
+Please refer to [Observability Group planning epic](https://gitlab.com/groups/gitlab-org/opstrace/-/epics/92) and its linked issues for up-to-date information.
diff --git a/doc/architecture/blueprints/observability_logging/system_overview.png b/doc/architecture/blueprints/observability_logging/system_overview.png
new file mode 100644
index 00000000000..30c6510c3dc
--- /dev/null
+++ b/doc/architecture/blueprints/observability_logging/system_overview.png
Binary files differ
diff --git a/doc/architecture/blueprints/organization/diagrams/organization-isolation-broken.drawio.png b/doc/architecture/blueprints/organization/diagrams/organization-isolation-broken.drawio.png
new file mode 100644
index 00000000000..cd1301bb0bc
--- /dev/null
+++ b/doc/architecture/blueprints/organization/diagrams/organization-isolation-broken.drawio.png
Binary files differ
diff --git a/doc/architecture/blueprints/organization/diagrams/organization-isolation.drawio.png b/doc/architecture/blueprints/organization/diagrams/organization-isolation.drawio.png
new file mode 100644
index 00000000000..a9ff4ae5165
--- /dev/null
+++ b/doc/architecture/blueprints/organization/diagrams/organization-isolation.drawio.png
Binary files differ
diff --git a/doc/architecture/blueprints/organization/index.md b/doc/architecture/blueprints/organization/index.md
index 258a624e371..49bf18442e9 100644
--- a/doc/architecture/blueprints/organization/index.md
+++ b/doc/architecture/blueprints/organization/index.md
@@ -323,6 +323,7 @@ In iteration 2, an Organization MVC Experiment will be released. We will test th
- Organizations can be deleted.
- Organization Owners can access the Activity page for the Organization.
- Forking across Organizations will be defined.
+- [Organization Isolation](isolation.md) will be finished to meet the requirements of the initial set of customers.
### Iteration 3: Organization MVC Beta (FY25Q1)
@@ -333,6 +334,7 @@ In iteration 3, the Organization MVC Beta will be released.
- Organization Owners can create, edit and delete Groups from the Groups overview.
- Organization Owners can create, edit and delete Projects from the Projects overview.
- The Organization URL path can be changed.
+- [Organization Isolation](isolation.md) is available.
### Iteration 4: Organization MVC GA (FY25Q2)
@@ -398,3 +400,4 @@ See [Organization: Frequently Asked Questions](organization-faq.md).
- [Cells blueprint](../cells/index.md)
- [Cells epic](https://gitlab.com/groups/gitlab-org/-/epics/7582)
- [Namespaces](../../../user/namespace/index.md)
+- [Organization Isolation](isolation.md)
diff --git a/doc/architecture/blueprints/organization/isolation.md b/doc/architecture/blueprints/organization/isolation.md
new file mode 100644
index 00000000000..238269c4329
--- /dev/null
+++ b/doc/architecture/blueprints/organization/isolation.md
@@ -0,0 +1,152 @@
+---
+status: ongoing
+creation-date: "2023-10-11"
+authors: [ "@DylanGriffith" ]
+coach:
+approvers: [ "@lohrc", "@alexpooley" ]
+owning-stage: "~devops::data stores"
+participating-stages: []
+---
+
+<!-- vale gitlab.FutureTense = NO -->
+
+# Organization Isolation
+
+This blueprint details requirements for Organizations to be isolated.
+Watch a [video introduction](https://www.youtube.com/watch?v=kDinjEHVVi0) that summarizes what Organization isolation is and why we need it.
+Read more about what an Organization is in [Organization](index.md).
+
+## What?
+
+<img src="diagrams/organization-isolation.drawio.png" width="800">
+
+All Cell-local data and functionality in GitLab (all data except the few
+things that need to exist on all Cells in the cluster) must be isolated.
+Isolation means that data or features can never cross Organization boundaries.
+Many features in GitLab can link data together.
+A few examples of things that would be disallowed by Organization Isolation are:
+
+1. [Related issues](../../../user/project/issues/related_issues.md): Users would not be able to take an issue in one Project in `Organization A` and relate that issue to another issue in a Project in `Organization B`.
+1. [Share a project/group with a group](../../../user/group/manage.md#share-a-group-with-another-group): Users would not be allowed to share a Group or Project in `Organization A` with another Group or Project in `Organization B`.
+1. [System notes](../../../user/project/system_notes.md): Users would not get a system note added to an issue in `Organization A` if it is mentioned in a comment on an issue in `Organization B`.
+
+## Why?
+
+<img src="diagrams/organization-isolation-broken.drawio.png" width="800">
+
+[GitLab Cells](../cells/index.md) depend on using the Organization as the sharding key, which will allow us to shard data between different Cells.
+Initially, when we start rolling out Organizations, we will be working with a single Cell `Cell 1`.
+`Cell 1` is our current GitLab.com deployment.
+Newly created Organizations will be created on `Cell 1`.
+Once Cells are ready, we will deploy `Cell 2` and begin migrating Organizations from `Cell 1` to `Cell 2`.
+Migrating workloads off will be critical to allowing us to rebalance our data across a fleet of servers and eventually run much smaller GitLab instances (and databases).
+
+If today we allowed users to create Organizations that linked to data in other Organizations, these links would suddenly break when an Organization is moved to a different Cell (because it won't know about the other Organization).
+For this reason we need to ensure from the very beginning of rolling out Organizations to customers that it is impossible to create any links that cross the Organization boundary, even when Organizations are still on the same Cell.
+If we don't, we will create even more mixed up related data that cannot be migrated between Cells.
+Not fulfilling the requirement of isolation means we risk creating a new top-level data wrapper (Organization) that cannot actually be used as a sharding key.
+
+The Cells project initially started with the assumption that we'd be able to shard by top-level Groups.
+We quickly learned that there were no constraints in the application that isolated top-level Groups.
+Many users (including ourselves) had created multiple top-level Groups and linked data across them.
+So we decided that the only way to create a viable sharding key was to create another wrapper around top-level Groups.
+Organizations were something our customers already wanted, to gain more administrative capabilities like those available in self-managed instances and to aggregate data across multiple top-level Groups, so this became a logical choice.
+Again, this led us to realize that we cannot allow multiple Organizations to get mixed together the same way top-level Groups did, otherwise we will end up back where we started.
+
+## How?
+
+Multiple POCs have been implemented to demonstrate how we will provide robust developer-facing and customer-facing constraints in the GitLab application and database that enforce the described isolation requirement.
+These are:
+
+1. [Enforce Organization Isolation based on `project_id` and `namespace_id` column on every table](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133576)
+1. [Enforce Organization Isolation based on `organization_id` on every table](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/129889)
+1. [Validate if a top-level group is isolated to be migrated to an Organization](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131968)
+
+The major constraint these POCs were trying to overcome was that there is no standard way in the GitLab application or database to even determine what Organization (or Project or namespace) a piece of data belongs to.
+This means that the first step is to implement a standard way to efficiently find the parent Organization for any model or row in the database.
+
+The proposed solution is to ensure that every single table in the `gitlab_main_cell` and `gitlab_ci_cell` (Cell-local) databases includes a valid sharding key that is either `project_id` or `namespace_id`.
+At first we considered enforcing everything to have an `organization_id`, but we determined that this would be too expensive to update for customers that need to migrate large Groups out of the default Organization.
+The added benefit is that more than half of our tables already have one of these columns.
+Additionally, if we can't consistently attribute data to a top-level Group, then we won't be able to validate if a top-level Group is safe to be moved to a new Organization.
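+
+As a minimal illustration of what "every Cell-local table carries a sharding key" means in practice, a hypothetical Cell-local table would carry a `project_id` (or `namespace_id`) column pointing at its parent:
+
+```sql
+-- Hypothetical example table: the sharding key column makes every row
+-- attributable to a Project and, through it, to a single Organization.
+CREATE TABLE example_cell_local_records (
+  id bigserial PRIMARY KEY,
+  project_id bigint NOT NULL, -- sharding key, references projects(id)
+  payload text
+);
+```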
+
+Once we have consistent sharding keys, we can use them to validate on insert that data does not cross any Organization boundaries.
+We can also use these sharding keys to help us decide whether:
+
+- Existing namespaces in the default Organization can be moved safely to a new Organization, because the namespace is already isolated.
+- The namespace owner would need to remove some links before migrating to a new Organization.
+- A set of namespaces is isolated as a group and could be moved together in bulk to a new Organization.
+
+## Detailed steps
+
+1. Implement developer facing documentation explaining the requirement to add these sharding keys and how they should choose between `project_id` and `namespace_id`.
+1. Add a way to declare a sharding key in `db/docs` and automatically populate it for all tables that already have a sharding key.
+1. Implement automation in our CI pipelines and/or DB migrations that makes it impossible to create new tables without a sharding key.
+1. Implement a way for people to declare a desired sharding key in `db/docs` as
+   well as a path to the parent table from which it is migrated. This will only
+   be needed temporarily for tables that don't have a sharding key.
+1. Attempt to populate as many "desired sharding keys" as possible in an
+   automated way and delegate the MRs to other teams.
+1. Fan out issues to other teams to manually populate the remaining "desired
+   sharding keys".
+1. Start manually creating, and then automating the creation of, migrations for
+   tables to populate sharding keys from the "desired sharding key".
+1. Once all tables have sharding keys or "desired sharding key", we ship an
+ evolved version of the
+ [POC](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133576), which
+ will enforce that newly inserted data cannot cross Organization boundaries.
+ This may need to be expanded to more than just foreign keys, and should also
+ include loose foreign keys and possibly any relationships described in
+ models. It can temporarily depend on inferring, at runtime, the sharding key
+ from the "desired sharding key" which will be a less performant option while
+ we backfill the sharding keys to all tables but allow us to unblock
+ implementing the isolation rules and user experience of isolation.
+1. Finish migration of ~300 tables that are missing a sharding key:
+ 1. The Tenant Scale team migrates the first few tables.
+ 1. We build a dashboard showing our progress and continue to create
+ automated MRs for the sharding keys that can be automatically inferred
+ and automate creating issues for all the sharding keys that can't be
+ automatically inferred
+1. Validate that all existing `project_id` and `namespace_id` columns on all Cell-local tables can reliably be assumed to be the sharding key. This requires assigning issues to teams to confirm that these columns aren't used for some other purpose that would actually not be suitable. If there is an issue with a table we need to migrate and rename these columns, and then add a new `project_id` or `namespace_id` column with the correct sharding key.
+1. We allow customers to create new Organizations without the option to migrate namespaces into them. All namespaces need to be newly created in their new Organization.
+1. Implement new functionality in GitLab similar to the [POC](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131968), which allows a namespace owner to see if their namespace is fully isolated.
+1. Implement functionality that allows namespace owners to migrate an existing namespace from one Organization to another. Most likely this will be existing customers that want to migrate their namespace out of the default Organization into a newly created Organization. Only isolated namespaces as implemented in the previous step will be allowed to move.
+1. Expand functionality to validate if a namespace is isolated, so that users can select multiple namespaces they own and validate that the selected group of namespaces is isolated. Links between the selected namespaces would stay intact.
+1. Implement functionality that allows namespace owners to migrate multiple existing namespaces from one Organization to another. Only isolated namespaces as implemented in the previous step will be allowed to move.
+1. We build better tooling to help namespace owners with cleaning up unwanted links outside of their namespace, to allow more customers to migrate to a new Organization. This step would depend on the number of existing customers that actually have links to clean up.
+
+The implementation of this effort will be tracked in [#11670](https://gitlab.com/groups/gitlab-org/-/epics/11670).
+
+## Alternatives considered
+
+### Add any data that need to cross Organizations to cluster-wide tables
+
+We plan on having some data at the cluster level in our Cells architecture (for example,
+Users), so it might stand to reason that we could make any data that needs to cross
+Organization boundaries cluster-wide, and that this would solve the problem.
+
+This could be an option for a limited set of features and may turn out to be
+necessary for some critical workflows.
+However, this should not become the default option, because it will ultimately lead to the Cells architecture not achieving the horizontal scaling goals.
+Features like [sharing a group with a group](../../../user/group/manage.md#share-a-group-with-another-group) are very tightly connected to some of the worst performing functionality in our
+application with regard to scalability.
+We are hoping that by splitting up our databases in Cells we will be able to unlock more scaling headroom and reduce the problems associated with supporting these features.
+
+### Do nothing and treat these anomalies as an acceptable edge case
+
+This idea hasn't been explored deeply but is rejected on the basis that these
+anomalies will appear as data loss while moving customer data between Cells.
+Data loss is a very serious kind of bug, especially when customers are not opting into being moved between servers.
+
+### Solve these problems feature by feature
+
+This could be done, for example, by implementing an application rule that
+prevents users from adding an issue link between Projects on different Organizations.
+We would need to find all such features by asking teams, and
+they would need to fix them all as a special case business rule.
+
+This may be a viable, less robust option, but it does not give us a lot of confidence in our system.
+Without a robust way to ensure that all Organization data is isolated, we would have to trust that each feature we implement has been manually checked.
+This creates a real risk that we miss something, and again we would end up with customer data loss.
+Another challenge here is that if we are not confident in our isolation constraints, then we may end up attributing various unrelated bugs to possible data loss.
+As such it could become a rabbit hole to debug all kinds of unrelated bugs.
diff --git a/doc/architecture/blueprints/runner_admission_controller/index.md b/doc/architecture/blueprints/runner_admission_controller/index.md
index 92c824527ec..21dc1d53303 100644
--- a/doc/architecture/blueprints/runner_admission_controller/index.md
+++ b/doc/architecture/blueprints/runner_admission_controller/index.md
@@ -1,7 +1,7 @@
---
status: proposed
creation-date: "2023-03-07"
-authors: [ "@ajwalker" ]
+authors: [ "@ajwalker", "@johnwparent" ]
coach: [ "@ayufan" ]
approvers: [ "@DarrenEastman", "@engineering-manager" ]
owning-stage: "~devops::<stage>"
@@ -14,7 +14,7 @@ The GitLab `admission controller` (inspired by the [Kubernetes admission control
An admission controller can be registered to the GitLab instance and receive a payload containing jobs to be created. Admission controllers can be _mutating_, _validating_, or both.
-- When _mutating_, mutatable job information can be modified and sent back to the GitLab instance. Jobs can be modified to conform to organizational policy, security requirements, or have, for example, their tag list modified so that they're routed to specific runners.
+- When _mutating_, mutable job information can be modified and sent back to the GitLab instance. Jobs can be modified to conform to organizational policy, security requirements, or have, for example, their tag list modified so that they're routed to specific runners.
- When _validating_, a job can be denied execution.
## Motivation
@@ -35,12 +35,12 @@ Before going further, it is helpful to level-set the current job handling mechan
- On the request from a runner to the API for a job, the database is queried to verify that the job parameters matches that of the runner. In other words, when runners poll a GitLab instance for a job to execute they're assigned a job if it matches the specified criteria.
- If the job matches the runner in question, then the GitLab instance connects the job to the runner and changes the job state to running. In other words, GitLab connects the `job` object with the `Runner` object.
- A runner can be configured to run un-tagged jobs. Tags are the primary mechanism used today to enable customers to have some control of which Runners run certain types of jobs.
-- So while runners are scoped to the instance, group, or project, there are no additional access control mechanisms today that can easily be expanded on to deny access to a runner based on a user or group identifier.
+- So while runners are scoped to the instance, group, or project, there are no additional access control mechanisms today that can be expanded on to deny access to a runner based on a user or group identifier.
-The current CI jobs queue logic is as follows. **Note - in the code ww still use the very old `build` naming construct, but we've migrated from `build` to `job` in the product and documentation.
+The current CI jobs queue logic is as follows. **Note**: in the code we still use the very old `build` naming construct, but we've migrated from `build` to `job` in the product and documentation.
```ruby
-jobs =
+jobs =
if runner.instance_type?
jobs_for_shared_runner
elsif runner.group_type?
@@ -96,22 +96,31 @@ Each runner has a tag such as `zone_a`, `zone_b`. In this scenario the customer
1. When a job is created the `project information` (`project_id`, `job_id`, `api_token`) will be used to query GitLab for specific details.
1. If the `user_id` matches then the admissions controller modifies the job tag list. `zone_a` is added to the tag list as the controller has detected that the user triggering the job should have their jobs run IN `zone_a`.
+**Scenario 3**: Runner pool with specific tag scheme, user only has access to a specific subset
+
+Each runner has a tag identifier unique to that runner, e.g. `DiscoveryOne`, `tugNostromo`, `MVSeamus`, etc. Users have arbitrary access to these runners; however, we don't want to fail a job on access denial. Instead, we want to prevent the job from being executed on runners to which the user does not have access. We also don't want to reduce the pool of runners the job can be run on.
+
+1. Configure an admissions controller to mutate jobs based on `user_id`.
+1. When a job is created the `project information` (`project_id`, `job_id`, `api_token`) will be used to query GitLab for specific details.
+1. The admission controller queries available runners with the `user_id` and collects all runners for which the job cannot be run. If this is _all_ runners, the admission controller rejects the job, which is dropped. No tags are modified, and a message is included indicating the reasoning. If there are runners for which the user has permissions, the admission controller filters out the runners for which the user has no permissions.
+
### MVC
#### Admission controller
1. A single admission controller can be registered at the instance level only.
-1. The admission controller must respond within 30 seconds.
-1. The admission controller will receive an array of individual jobs. These jobs may or may not be related to each other. The response must contain only responses to the jobs made as part of the request.
+1. The admission controller must respond within 1 hour.
+1. The admission controller will receive individual jobs. The response must contain only a response to that job.
+1. The admission controller will receive an API callback for rejection and acceptance, with the acceptance callback accepting mutation parameters.
#### Job Lifecycle
-1. The lifecycle of a job will be updated to include a new `validating` state.
+1. The `preparing` job state will be expanded to include the validation process prerequisite.
```mermaid
stateDiagram-v2
- created --> validating
- state validating {
+ created --> preparing
+ state preparing {
[*] --> accept
[*] --> reject
}
@@ -127,10 +136,12 @@ Each runner has a tag such as `zone_a`, `zone_b`. In this scenario the customer
executed --> created: retry
```
-1. When the state is `validating`, the mutating webhook payload is sent to the admission controller.
-1. For jobs where the webhook times out (30 seconds) their status should be set as though the admission was denied. This should
+1. When the state is `preparing`, the mutating webhook payload is sent to the admission controller asynchronously. This will be retried a number of times as needed.
+1. The `preparing` state will wait for a response from the webhook or until timeout.
+1. The UI should be updated with the current status of the job prerequisites and admission.
+1. For jobs where the webhook times out (1 hour) their status should be set as though the admission was denied with a timeout reasoning. This should
be rare in typical circumstances.
-1. Jobs with denied admission can be retried. Retried jobs will be resent to the admission controller along with any mutations that they received previously.
+1. Jobs with denied admission can be retried. Retried jobs will be resent to the admission controller without tag mutations or runner filtering reset.
1. [`allow_failure`](../../../ci/yaml/index.md#allow_failure) should be updated to support jobs that fail on denied admissions, for example:
```yaml
@@ -141,8 +152,8 @@ be rare in typical circumstances.
on_denied_admission: true
```
-1. The UI should be updated to display the reason for any job mutations (if provided).
-1. A table in the database should be created to store the mutations. Any changes that were made, like tags, should be persisted and attached to `ci_builds` with `acts_as_taggable :admission_tags`.
+1. The UI should be updated to display the reason for any job mutations (if provided) or rejection.
+1. Tag modifications applied by the Admission Controller should be persisted by the system, with associated reasoning for any modifications, acceptances, or rejections.
#### Payload
@@ -153,8 +164,10 @@ be rare in typical circumstances.
1. The response payload is comprised of individual job entries consisting of:
- Job ID.
- Admission state: `accepted` or `denied`.
- - Mutations: Only `tags` is supported for now. The tags provided replaces the original tag list.
+ - Mutations: `additions` and `removals`. `additions` supplements the existing set of tags, `removals` removes tags from the current tag list
- Reason: A controller can provide a reason for admission and mutation.
+ - Accepted Runners: runners to be considered for job matching. Can be empty to allow all runners.
+ - Rejected Runners: runners that should not be considered for job matching. Can be empty to exclude no runners.
##### Example request
@@ -170,7 +183,9 @@ be rare in typical circumstances.
...
},
"tags": [ "docker", "windows" ]
- },
+ }
+]
+[
{
"id": 245,
"variables": {
@@ -180,7 +195,9 @@ be rare in typical circumstances.
...
},
"tags": [ "linux", "eu-west" ]
- },
+ }
+]
+[
{
"id": 666,
"variables": {
@@ -202,20 +219,29 @@ be rare in typical circumstances.
"id": 123,
"admission": "accepted",
"reason": "it's always-allow-day-wednesday"
- },
+ }
+]
+[
{
"id": 245,
"admission": "accepted",
- "mutations": {
- "tags": [ "linux", "us-west" ]
+ "tags": {
+ "add": [ "linux", "us-west" ],
+ "remove": [...]
},
- "reason": "user is US employee: retagged region"
- },
+ "runners": {
+ "accepted_ids": ["822993167"],
+ "rejected_ids": ["822993168"]
+ },
+ "reason": "user is US employee: retagged region; user only has uid on runner 822993167"
+ }
+]
+[
{
"id": 666,
"admission": "rejected",
"reason": "you have no power here"
- },
+ }
]
```
@@ -229,13 +255,32 @@ be rare in typical circumstances.
### Implementation Details
-1. _placeholder for steps required to code the admissions controller MVC_
+#### GitLab
+
+1. Expand the `preparing` state to engage the validation process via the `prerequisite` interface.
+1. Amend the `preparing` state to indicate to the user, via the UI and API, the status of job preparation with regard to the job prerequisites.
+ 1. Should indicate status of each prerequisite resource for the job separately as they are asynchronous
+ 1. Should indicate overall prerequisite status
+1. Introduce a 1 hour timeout for the entire `preparing` state
+1. Add an `AdmissionValidation` prerequisite to the `preparing` status dependencies via `Gitlab::Ci::Build::Prerequisite::Factory`
+1. Convert the Prerequisite factory and `preparing` status to operate asynchronously
+1. Convert `PreparingBuildService` to operate asynchronously
+1. `PreparingBuildService` transitions the job from `preparing` to `failed` or `pending`, depending on whether validation succeeds.
+1. `AdmissionValidation` performs a reasonable number of retries when sending the request
+1. Add API endpoint for Webhook/Admission Controller response callback
+ 1. Accepts Parameters:
+ - Acceptance/Rejection
+ - Reason String
+ - Tag mutations (if accepted, otherwise ignored)
+ 1. Callback encodes one time auth token
+1. Introduce a new failure reason for validation rejection
+1. The admission controller's impact on the job should be persisted
+1. Per-job runner selection filtering, based on the response from the admission controller (mutating webhook), should be added
## Technical issues to resolve
| issue | resolution|
| ------ | ------ |
-|We may have conflicting tag-sets as mutating controller can make it possible to define AND, OR and NONE logical definition of tags. This can get quite complex quickly. | |
|Rule definition for the queue web hook|
|What data to send to the admissions controller? Is it a subset or all of the [predefined variables](../../../ci/variables/predefined_variables.md)?|
|Is the `queueing web hook` able to run at GitLab.com scale? On GitLab.com we would trigger millions of webhooks per second and the concern is that would overload Sidekiq or be used to abuse the system.
diff --git a/doc/architecture/blueprints/secret_detection/index.md b/doc/architecture/blueprints/secret_detection/index.md
index fc97ca71d7f..76bf6dd4088 100644
--- a/doc/architecture/blueprints/secret_detection/index.md
+++ b/doc/architecture/blueprints/secret_detection/index.md
@@ -26,28 +26,22 @@ job logs, and project management features such as issues, epics, and MRs.
### Goals
-- Support asynchronous secret detection for the following scan targets:
- - push events
- - issuable creation
- - issuable updates
- - issuable comments
+- Support platform-wide detection of tokens to avoid secret leaks
+- Prevent exposure by rejecting detected secrets
+- Provide scalable means of detection without harming end user experience
-### Non-Goals
+See [target types](#target-types) for scan target priorities.
-The current proposal is limited to asynchronous detection and alerting only.
+### Non-Goals
-**Blocking** secrets on push events is high-risk to a critical path and
-would require extensive performance profiling before implementing. See
-[a recent example](https://gitlab.com/gitlab-org/gitlab/-/issues/246819#note_1164411983)
-of a customer incident where this was attempted.
+The initial proposal is limited to detection and alerting across the platform, with rejection only
+during [pre-receive Git interactions and browser-based detection](#iterations).
Secret revocation and rotation is also beyond the scope of this new capability.
Scanned object types beyond the scope of this MVC include:
-- Media types (JPEGs, PDFs,...)
-- Snippets
-- Wikis
+See [target types](#target-types) for scan target priorities.
#### Management UI
@@ -69,7 +63,13 @@ which remain focused on active detection.
## Proposal
-To achieve scalable secret detection for a variety of domain objects a dedicated
+The first iteration of the experimental capability will feature a blocking
+pre-receive hook implemented within the Rails application. This iteration
+will be released in an experimental state to select users and provide an
+opportunity for the team to profile the capability before considering extraction
+into a dedicated service.
+
+In the future state, to achieve scalable secret detection for a variety of domain objects a dedicated
scanning service must be created and deployed alongside the GitLab distribution.
This is referred to as the `SecretScanningService`.
@@ -94,10 +94,10 @@ as self-managed instances.
The critical paths as outlined under [goals above](#goals) cover two major object
types: Git blobs (corresponding to push events) and arbitrary text blobs.
-The detection flow for push events relies on subscribing to the PostReceive hook
-to enqueue Sidekiq requests to the `SecretScanningService`. The `SecretScanningService`
-service fetches enqueued refs, queries Gitaly for the ref blob contents, scans
-the commit contents, and notifies the Rails application when a secret is detected.
+The detection flow for push events relies on subscribing to the PreReceive hook
+to scan commit data using the [PushCheck interface](https://gitlab.com/gitlab-org/gitlab/blob/3f1653f5706cd0e7bbd60ed7155010c0a32c681d/lib/gitlab/checks/push_check.rb). This `SecretScanningService`
+fetches the specified blob contents from Gitaly, scans
+the commit contents, and rejects the push when a secret is detected.
See [Push event detection flow](#push-event-detection-flow) for sequence.
The detection flow for arbitrary text blobs, such as issue comments, relies on
@@ -112,13 +112,33 @@ storage. See discussion [in this issue](https://gitlab.com/groups/gitlab-org/-/e
around scanning during streaming and the added complexity in buffering lookbacks
for arbitrary trace chunks.
-In any case of detection, the Rails application manually creates a vulnerability
+In the case of a push detection, the commit is rejected and an error is returned to the end user.
+In any other case of detection, the Rails application manually creates a vulnerability
using the `Vulnerabilities::ManuallyCreateService` to surface the finding in the
existing Vulnerability Management UI.
See [technical discovery](https://gitlab.com/gitlab-org/gitlab/-/issues/376716)
for further background exploration.
+### Target types
+
+Target object types refer to the scanning targets prioritized for detection of leaked secrets.
+
+In order of priority, this includes:
+
+1. non-binary Git blobs
+1. job logs
+1. issuable creation (issues, MRs, epics)
+1. issuable updates (issues, MRs, epics)
+1. issuable comments (issues, MRs, epics)
+
+Targets out of scope for the initial phases include:
+
+- Media types (JPEG, PDF, ...)
+- Snippets
+- Wikis
+- Container images
+
### Token types
The existing Secret Detection configuration covers ~100 rules across a variety
@@ -135,16 +155,17 @@ Token types to identify in order of importance:
### Detection engine
-Our current secret detection offering utilizes [Gitleaks](https://github.com/zricethezav/gitleaks/)
+Our current secret detection offering uses [Gitleaks](https://github.com/zricethezav/gitleaks/)
for all secret scanning in pipeline contexts. By using its `--no-git` configuration
we can scan arbitrary text blobs outside of a repository context and continue to
-utilize it for non-pipeline scanning.
+use it for non-pipeline scanning.
-Given our existing familiarity with the tool and its extensibility, it should
-remain our engine of choice. Changes to the detection engine are out of scope
-unless benchmarking unveils performance concerns.
+In the case of PreReceive detection, we rely on a combination of keyword/substring matches
+for pre-filtering and `re2` for regex detections. See the [spike issue](https://gitlab.com/gitlab-org/gitlab/-/issues/423832) for initial benchmarks.
-Notable alternatives include high-performance regex engines such as [hyperscan](https://github.com/intel/hyperscan) or it's portable fork [vectorscan](https://github.com/VectorCamp/vectorscan).
+Changes to the detection engine are out of scope until benchmarking unveils performance concerns.
+
+Notable alternatives include high-performance regex engines such as [Hyperscan](https://github.com/intel/hyperscan) or its portable fork [Vectorscan](https://github.com/VectorCamp/vectorscan).
### High-level architecture
@@ -167,37 +188,42 @@ for past discussion around scaling approaches.
sequenceDiagram
autonumber
actor User
- User->>+Workhorse: git push
+ User->>+Workhorse: git push with-secret
+ Workhorse->>+Gitaly: tcp
+ Gitaly->>+Rails: PreReceive
+ Rails->>-Gitaly: ListAllBlobs
+ Gitaly->>-Rails: ListAllBlobsResponse
+
+ Rails->>+GitLabSecretDetection: Scan(blob)
+ GitLabSecretDetection->>-Rails: found
+
+ Rails->>User: rejected: secret found
+
+ User->>+Workhorse: git push without-secret
Workhorse->>+Gitaly: tcp
- Gitaly->>+Rails: grpc
- Sidekiq->>+Rails: poll job
- Rails->>-Sidekiq: PostReceive worker
- Sidekiq-->>+Sidekiq: enqueue PostReceiveSecretScanWorker
-
- Sidekiq->>+Rails: poll job
- loop PostReceiveSecretScanWorker
- Rails->>-Sidekiq: PostReceiveSecretScanWorker
- Sidekiq->>+SecretScanningSvc: ScanBlob(ref)
- SecretScanningSvc->>+Sidekiq: accepted
- Note right of SecretScanningSvc: Scanning job enqueued
- Sidekiq-->>+Rails: done
- SecretScanningSvc->>+Gitaly: retrieve blob
- SecretScanningSvc->>+SecretScanningSvc: scan blob
- SecretScanningSvc->>+Rails: secret found
- end
+ Gitaly->>+Rails: PreReceive
+ Rails->>-Gitaly: ListAllBlobs
+ Gitaly->>-Rails: ListAllBlobsResponse
+
+ Rails->>+GitLabSecretDetection: Scan(blob)
+ GitLabSecretDetection->>-Rails: not_found
+
+ Rails->>User: OK
```
## Iterations
- ✓ Define [requirements for detection coverage and actions](https://gitlab.com/gitlab-org/gitlab/-/issues/376716)
-- ✓ Implement [Clientside detection of GitLab tokens in comments/issues](https://gitlab.com/gitlab-org/gitlab/-/issues/368434)
-- PoC of secret scanning service
- - Benchmarking of issuables, comments, job logs and blobs to gain confidence that the total costs will be viable
- - Capacity planning for addition of service component to Reference Architectures headroom
- - Service capabilities
+- ✓ Implement [Browser-based detection of GitLab tokens in comments/issues](https://gitlab.com/gitlab-org/gitlab/-/issues/368434)
+- ✓ [PoC of secret scanning service](https://gitlab.com/gitlab-org/secure/pocs/secret-detection-go-poc/)
+- ✓ [PoC of secret scanning gem](https://gitlab.com/gitlab-org/gitlab/-/issues/426823)
+- [Pre-Production Performance Profiling for pre-receive PoCs](https://gitlab.com/gitlab-org/gitlab/-/issues/428499)
+ - Profiling service capabilities
+ - ✓ [Benchmarking regex performance between Ruby and Go approaches](https://gitlab.com/gitlab-org/gitlab/-/issues/423832)
- gRPC commit retrieval from Gitaly
- - blob scanning
+ - transfer latency, CPU, and memory footprint
- Implementation of secret scanning service MVC (targeting individual commits)
+- Capacity planning for addition of service component to Reference Architectures headroom
- Security and readiness review
- Deployment and monitoring
- Implementation of secret scanning service MVC (targeting arbitrary text blobs)
diff --git a/doc/architecture/blueprints/secret_manager/decisions/002_gcp_kms.md b/doc/architecture/blueprints/secret_manager/decisions/002_gcp_kms.md
new file mode 100644
index 00000000000..c750164632f
--- /dev/null
+++ b/doc/architecture/blueprints/secret_manager/decisions/002_gcp_kms.md
@@ -0,0 +1,101 @@
+---
+owning-stage: "~devops::verify"
+description: 'GitLab Secrets Manager ADR 002: Use GCP Key Management Service'
+---
+
+# GitLab Secrets Manager ADR 002: Use GCP Key Management Service
+
+## Context
+
+Following from [ADR 001: Use envelope encryption](001_envelop_encryption.md), we need to find a solution to securely
+store asymmetric keys belonging to each vault.
+
+## Decision
+
+We decided to rely on Google Cloud Platform (GCP) Key Management Service (KMS) to manage the asymmetric keys
+used by the GitLab Secrets Manager vaults.
+
+Using GCP provides a few advantages:
+
+1. Avoid implementing our own secure storage of cryptographic keys.
+1. Support for Hardware Security Modules (HSM).
+
+```mermaid
+sequenceDiagram
+ participant A as Client
+ participant B as GitLab Rails
+ participant C as GitLab Secrets Service
+ participant D as GCP Key Management Service
+
+ Note over B,D: Initialize vault for project/group/organization
+
+ B->>C: Initialize vault - create key pair
+
+ Note over D: Incurs cost per key
+ C->>D: Create new asymmetric key
+ D->>C: Returns public key
+ C->>B: Returns vault public key
+ B->>B: Stores vault public key
+
+ Note over A,C: Creating a new secret
+
+ A->>B: Create new secret
+ B->>B: Generate new symmetric data key
+ B->>B: Encrypts secret with data key
+ B->>B: Encrypts data key with vault public key
+ B->>B: Stores envelope (encrypted secret + encrypted data key)
+ B-->>B: Discards plain-text data key
+ B->>A: Success
+
+ Note over A,D: Retrieving a secret
+
+ A->>B: Get secret
+ B->>B: Retrieves envelope (encrypted secret + encrypted data key)
+ B->>C: Decrypt data key
+ Note over D: Incurs cost per decryption request
+ C->>D: Decrypt data key
+ D->>C: Returns plain-text data key
+ C->>B: Returns plain-text data key
+ B->>B: Decrypts secret
+ B-->>B: Discards plain-text data key
+ B->>A: Returns secret
+```
+
+For security purposes, we decided to use a Hardware Security Module (HSM) to protect the keys in GCP KMS.
+
+## Consequences
+
+### Authentication
+
+With keys stored in GCP KMS, we need to de-multiplex between identities configured in GCP KMS and
+identities defined in GitLab so that decryption requests can be authenticated accordingly.
+
+### Cost
+
+With the use of GCP KMS, we need to account for the following cost factors:
+
+1. Number of keys required
+1. Number of key operations
+1. HSM Protection level
+
+The number of keys required would be dependent on the number of projects, groups, and organizations using this feature.
+A single asymmetric key is required for each project, group or organization.
+
+Each cryptographic key operation would also incur a cost that varies by protection level.
+Based on the proposed design above, a cost would be incurred for each secret decryption request.
+
+We may implement a multi-tier protection level, supporting different protection types for different users.
+
+For details, see the [GCP KMS pricing table](https://cloud.google.com/kms/pricing).
+
+### Feature availability for Self-Managed customers
+
+Using GCP KMS as a backend means that this solution cannot be deployed into self-managed environments.
+To make it available to Self-Managed customers, this needs to be a GitLab Cloud Connector feature.
+
+## Alternatives
+
+We considered generating and storing private keys within GitLab Secrets Service,
+but this would not meet the requirements for [FIPS Compliance](../../../../development/fips_compliance.md).
+
+On the other hand, GCP HSM Keys comply with [FIPS 140-2 Level 3](https://cloud.google.com/docs/security/key-management-deep-dive#fips_140-2_validation).
diff --git a/doc/architecture/blueprints/secret_manager/decisions/003_go_service.md b/doc/architecture/blueprints/secret_manager/decisions/003_go_service.md
new file mode 100644
index 00000000000..561a1bde24e
--- /dev/null
+++ b/doc/architecture/blueprints/secret_manager/decisions/003_go_service.md
@@ -0,0 +1,37 @@
+---
+owning-stage: "~devops::verify"
+description: 'GitLab Secrets Manager ADR 003: Implement Secrets Manager in Go'
+---
+
+# GitLab Secrets Manager ADR 003: Implement Secrets Manager in Go
+
+Following [ADR-002](002_gcp_kms.md) highlighting the need to integrate with GCP
+services, we do need to decide what tech stack is going to be used to build
+GitLab Secrets Manager Service (GSMS).
+
+## Context
+
+At GitLab, we usually build satellite services around GitLab Rails in Go.
+This is an especially good choice of technology for services that may heavily
+leverage concurrency and caching, where cache could be invalidated / refreshed
+asynchronously.
+
+The Go-based [GCP KMS](https://cloud.google.com/kms/docs/reference/libraries#client-libraries-usage-go)
+client library also seems to expose a reliable interface to access KMS.
+
+## Decision
+
+Implement GitLab Secrets Manager Service in Go. Use
+[labkit](https://gitlab.com/gitlab-org/labkit) as a minimalist library to
+provide common functionality shared by satellite services.
+
+## Consequences
+
+The team that is going to own GitLab Secrets Manager feature will need to gain
+more Go expertise.
+
+## Alternatives
+
+We considered implementing GitLab Secrets Manager Service in Ruby, but we
+concluded that using Ruby would not allow us to build a service that is
+efficient enough.
diff --git a/doc/architecture/blueprints/secret_manager/decisions/004_staleless_kms.md b/doc/architecture/blueprints/secret_manager/decisions/004_staleless_kms.md
new file mode 100644
index 00000000000..3de8adfd3a7
--- /dev/null
+++ b/doc/architecture/blueprints/secret_manager/decisions/004_staleless_kms.md
@@ -0,0 +1,49 @@
+---
+owning-stage: "~devops::verify"
+description: 'GitLab Secrets Manager ADR 004: Stateless Key Management Service'
+---
+
+# GitLab Secrets Manager ADR 004: Stateless Key Management Service
+
+In [ADR-002](002_gcp_kms.md) we decided that we want to use Google's Cloud Key
+Management Service to store private encryption keys. This will allow us to meet
+various compliance requirements more easily.
+
+In this ADR we are going to describe the desired architecture of GitLab Secrets
+Management Service, making it a stateless service that is not connected to any
+persistent datastore other than ephemeral local storage.
+
+## Context
+
+## Decision
+
+Make GitLab Secrets Management Service a stateless application that is not
+connected to global data storage, such as a relational or NoSQL database.
+
+We are only going to support local block storage, presumably only for caching
+purposes.
+
+To manage decryption costs wisely, we would need to implement
+multi-tier protection layers, and in-memory, per-instance,
+[symmetric decryption key](001_envelop_encryption.md) caching, with cache TTL
+depending on the protection tier. A hardware or software key can be used in
+Google's Cloud KMS, depending on the tier too.
+
+## Consequences
+
+1. All private keys are going to be stored in Google's Cloud KMS.
+1. Multi-tier protection will be implemented, with higher tiers offering more protection.
+1. Protection tier will be defined on per-organization level on the GitLab Rails Service side.
+1. Depending on the protection level used, symmetric decryption keys can be in-memory cached.
+1. The symmetric key's cache must not be valid for more than 24 hours.
+1. The highest protection tier will use Hardware Security Module and no caching.
+1. The GitLab Secrets Management Service will not store access-control metadata.
+1. Identity de-multiplexing will happen on GitLab Rails Service side.
+1. Decryption requests will be signed by an organization's public key.
+1. The service will verify the decryption requestor's identity by checking the signature.
+
+## Alternatives
+
+We considered using a relational database, or a NoSQL database, both
+self-managed and managed by a Cloud Provider, but concluded that this would add
+a lot of complexity and would weaken the security posture of the service.
diff --git a/doc/architecture/blueprints/secret_manager/index.md b/doc/architecture/blueprints/secret_manager/index.md
index 2a840f8d846..ac30f3399d8 100644
--- a/doc/architecture/blueprints/secret_manager/index.md
+++ b/doc/architecture/blueprints/secret_manager/index.md
@@ -59,12 +59,18 @@ This blueprint does not cover the following:
- Secrets such as access tokens created within GitLab to allow external resources to access GitLab, e.g personal access tokens.
+## Decisions
+
+- [ADR-001: Use envelope encryption](decisions/001_envelop_encryption.md)
+- [ADR-002: Use GCP Key Management Service](decisions/002_gcp_kms.md)
+- [ADR-003: Build Secrets Manager in Go](decisions/003_go_service.md)
+
## Proposal
The secrets manager feature will consist of three core components:
1. GitLab Rails
-1. GitLab Secrets Service
+1. GitLab Secrets Manager Service
1. GCP Key Management
At a high level, secrets will be stored using unique encryption keys in order to achieve isolation
@@ -86,13 +92,15 @@ The plain-text secret would be encrypted using a single use data key.
The data key is then encrypted using the public key belonging to the group or project.
Both, the encrypted secret and the encrypted data key, are being stored in the database.
-**2. GitLab Secrets Manager**
+**2. GitLab Secrets Manager Service**
-GitLab Secrets Manager will be a new component in the GitLab overall architecture. This component serves the following purpose:
+GitLab Secrets Manager Service will be a new component in the GitLab overall architecture. This component serves the following purpose:
1. Correlating GitLab identities into GCP identities for access control.
1. A proxy over GCP Key Management for decrypting operations.
+[The service will use Go-based tech stack](decisions/003_go_service.md) and [labkit](https://gitlab.com/gitlab-org/labkit).
+
**3. GCP Key Management**
We choose to leverage GCP Key Management to build on the security and trust that GCP provides on cryptographic operations.
@@ -120,10 +128,6 @@ Hence, GCP Key Management is the natural choice for a cloud-based key management
To extend this service to self-managed GitLab instances, we would consider using GitLab Cloud Connector as a proxy between
self-managed GitLab instances and the GitLab Secrets Manager.
-## Decision Records
-
-- [001: Use envelope encryption](decisions/001_envelop_encryption.md)
-
## Alternative Solutions
Other solutions we have explored:
diff --git a/doc/architecture/blueprints/work_items/index.md b/doc/architecture/blueprints/work_items/index.md
index e12bb4d8773..74690d34088 100644
--- a/doc/architecture/blueprints/work_items/index.md
+++ b/doc/architecture/blueprints/work_items/index.md
@@ -64,7 +64,7 @@ You can also refer to fields of [Work Item](../../../api/graphql/reference/index
All Work Item types share the same pool of predefined widgets and are customized by which widgets are active on a specific type. The list of widgets for any certain Work Item type is currently predefined and is not customizable. However, in the future we plan to allow users to create new Work Item types and define a set of widgets for them.
-### Work Item widget types (updating)
+### Widget types (updating)
| Widget | Description | Feature flag | Write permission | GraphQL Subscription Support |
|---|---|---|---|---|
@@ -86,6 +86,36 @@ All Work Item types share the same pool of predefined widgets and are customized
| [WorkItemWidgetTestReports](../../../api/graphql/reference/index.md#workitemwidgettestreports) | Test reports associated with a work item | | | |
| [WorkItemWidgetWeight](../../../api/graphql/reference/index.md#workitemwidgetweight) | Set weight of a work item | |`Reporter`|No|
+#### Widget availability (updating)
+
+| Widget | Epic | Issue | Task | Objective | Key Result |
+|---|---|---|---|---|---|
+| [WorkItemWidgetAssignees](../../../api/graphql/reference/index.md#workitemwidgetassignees) | ✅ | ✅ | ✅ | ✅ | ✅ |
+| [WorkItemWidgetAwardEmoji](../../../api/graphql/reference/index.md#workitemwidgetawardemoji) | ✅ | ✔️ | ✅ | ✅ | ✅ |
+| [WorkItemWidgetCurrentUserTodos](../../../api/graphql/reference/index.md#workitemwidgetcurrentusertodos) | ✅ | ✅ | ✅ | ✅ | ✅ |
+| [WorkItemWidgetDescription](../../../api/graphql/reference/index.md#workitemwidgetdescription) | ✅ | ✅ | ✅ | ✅ | ✅ |
+| [WorkItemWidgetHealthStatus](../../../api/graphql/reference/index.md#workitemwidgethealthstatus) | ✅ | ✅ | ✅ | ✅ | ✅ |
+| [WorkItemWidgetHierarchy](../../../api/graphql/reference/index.md#workitemwidgethierarchy) | ✔️ | ✔️ | ❌ | ✅ | ❌ |
+| [WorkItemWidgetIteration](../../../api/graphql/reference/index.md#workitemwidgetiteration) | ❌ | ✅ | ✅ | ❌ | ❌ |
+| [WorkItemWidgetLabels](../../../api/graphql/reference/index.md#workitemwidgetlabels) | ✅ | ✅ | ✅ | ✅ | ✅ |
+| [WorkItemWidgetLinkedItems](../../../api/graphql/reference/index.md#workitemwidgetlinkeditems) | ✔️ | ✔️ | ✔️ | ✅ | ✅ |
+| [WorkItemWidgetMilestone](../../../api/graphql/reference/index.md#workitemwidgetmilestone) | 🔍 | ✅ | ✅ | ✅ | ❌ |
+| [WorkItemWidgetNotes](../../../api/graphql/reference/index.md#workitemwidgetnotes) | ✅ | ✅ | ✅ | ✅ | ✅ |
+| [WorkItemWidgetNotifications](../../../api/graphql/reference/index.md#workitemwidgetnotifications) | ✅ | ✅ | ✅ | ✅ | ✅ |
+| [WorkItemWidgetProgress](../../../api/graphql/reference/index.md#workitemwidgetprogress) | ❌ | ❌ | ❌ | ✅ | ✅ |
+| [WorkItemWidgetStartAndDueDate](../../../api/graphql/reference/index.md#workitemwidgetstartandduedate) | 🔍 | ✅ | ✅ | ❌ | ✅ |
+| [WorkItemWidgetStatus](../../../api/graphql/reference/index.md#workitemwidgetstatus) | ❓ | ❓ | ❓ | ❓ | ❓ |
+| [WorkItemWidgetTestReports](../../../api/graphql/reference/index.md#workitemwidgettestreports) | ❌ | ❌ | ❌ | ❌ | ❌ |
+| [WorkItemWidgetWeight](../../../api/graphql/reference/index.md#workitemwidgetweight) | 🔍 | ✅ | ✅ | ❌ | ❌ |
+
+##### Legend
+
+- ✅ - Widget available
+- ✔️ - Widget planned to be available
+- ❌ - Widget not available
+- ❓ - Widget pending for consideration
+- 🔍 - Alternative widget planned
+
### Work item relationships
Work items can be related to other work items in a number of different ways:
diff --git a/doc/ci/chatops/index.md b/doc/ci/chatops/index.md
index 10276df6291..454266942f6 100644
--- a/doc/ci/chatops/index.md
+++ b/doc/ci/chatops/index.md
@@ -14,10 +14,20 @@ type: index, concepts, howto
Use GitLab ChatOps to interact with CI/CD jobs through chat services
like Slack.
-Many organizations use chat services to collaborate, troubleshoot, and plan work. With ChatOps,
+Many organizations use Slack or Mattermost to collaborate, troubleshoot, and plan work. With ChatOps,
you can discuss work with your team, run CI/CD jobs, and view job output, all from the same
application.
+## Slash command integrations
+
+You can trigger ChatOps with the [`run` slash command](../../user/project/integrations/gitlab_slack_application.md#slash-commands).
+
+The following integrations are available:
+
+- [GitLab for Slack app](../../user/project/integrations/gitlab_slack_application.md) (recommended for Slack)
+- [Slack slash commands](../../user/project/integrations/slack_slash_commands.md)
+- [Mattermost slash commands](../../user/project/integrations/mattermost_slash_commands.md)
+
## ChatOps workflow and CI/CD configuration
ChatOps looks for the specified job in the
@@ -37,7 +47,7 @@ run as part of the standard CI/CD pipeline.
ChatOps passes the following [CI/CD variables](../variables/index.md#predefined-cicd-variables)
to the job:
-- `CHAT_INPUT` - The arguments passed to `/project-name run`.
+- `CHAT_INPUT` - The arguments passed to the `run` slash command.
- `CHAT_CHANNEL` - The name of the chat channel the job is run from.
- `CHAT_USER_ID` - The chat service ID of the user who runs the job.
@@ -47,30 +57,13 @@ When the job runs:
- If the job completes in more than 30 minutes, you must use a method like the
[Slack API](https://api.slack.com/) to send data to the channel.
-## Run a CI/CD job
-
-Prerequisite:
-
-- You must have at least the Developer role for the project.
-
-You can run a CI/CD job on the default branch from chat. To run a CI/CD job:
-
-- In the chat client, enter `/<project-name> run <job name> <arguments>` where:
-
- - `<project-name>` is the name of the project.
- - `<job name>` is the name of the CI/CD job to run.
- - `<arguments>` is the arguments to pass to the CI/CD job.
-
-ChatOps schedules a pipeline that contains only the specified job.
-Other [slash commands](../../user/project/integrations/gitlab_slack_application.md#slash-commands) are also available.
-
### Exclude a job from ChatOps
To prevent a job from being run from chat:
- In `.gitlab-ci.yml`, set the job to `except: [chat]`.
-## Customize the ChatOps reply
+### Customize the ChatOps reply
ChatOps sends the output for a job with a single command to the
channel as a reply. For example, when the following job runs,
@@ -108,8 +101,34 @@ ls:
- echo -e "section_start:$( date +%s ):chat_reply\r\033[0K\n$( ls -la )\nsection_end:$( date +%s ):chat_reply\r\033[0K"
```
+## Trigger a CI/CD job using ChatOps
+
+Prerequisites:
+
+- You must have at least the Developer role for the project.
+- The project must be configured to use a slash command integration.
+
+You can run a CI/CD job on the default branch from Slack or Mattermost.
+
+The slash command to trigger a CI/CD job depends on which slash command integration
+is configured for the project.
+
+- For the GitLab for Slack app, use `/gitlab <project-name> run <job name> <arguments>`.
+- For Slack or Mattermost slash commands, use `/<trigger-name> run <job name> <arguments>`.
+
+Where:
+
+- `<project-name>` is the name of the project.
+- `<job name>` is the name of the CI/CD job to run.
+- `<arguments>` are the arguments to pass to the CI/CD job.
+- `<trigger-name>` is the trigger name configured for the Slack or Mattermost integration.
+
+ChatOps schedules a pipeline that contains only the specified job.
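+
+For example, a minimal job (the job name `chatops-echo` is illustrative) that can only be
+triggered from chat and echoes the chat arguments back to the channel might look like:
+
+```yaml
+chatops-echo:
+  only: [chat]   # run this job only for chat-triggered pipelines
+  script:
+    - echo "Run by $CHAT_USER_ID in $CHAT_CHANNEL with arguments: $CHAT_INPUT"
+```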
+
## Related topics
-- [The official GitLab ChatOps icon](img/gitlab-chatops-icon.png)
- [A repository of common ChatOps scripts](https://gitlab.com/gitlab-com/chatops)
that GitLab uses to interact with GitLab.com
+- [GitLab for Slack app](../../user/project/integrations/gitlab_slack_application.md)
+- [Slack slash commands](../../user/project/integrations/slack_slash_commands.md)
+- [Mattermost slash commands](../../user/project/integrations/mattermost_slash_commands.md)
+- [The official GitLab ChatOps icon](img/gitlab-chatops-icon.png)
diff --git a/doc/ci/cloud_services/azure/index.md b/doc/ci/cloud_services/azure/index.md
index 3a882cf6820..b26533562f4 100644
--- a/doc/ci/cloud_services/azure/index.md
+++ b/doc/ci/cloud_services/azure/index.md
@@ -25,6 +25,7 @@ Prerequisites:
- Access to the corresponding Azure Active Directory Tenant with at least the `Application Developer` access level.
- A local installation of the [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli).
Alternatively, you can follow all the steps below with the [Azure Cloud Shell](https://portal.azure.com/#cloudshell/).
+- Your GitLab instance must be publicly accessible over the internet, because Azure must be able to connect to the GitLab OIDC endpoint.
- A GitLab project.
To complete this tutorial:
@@ -167,3 +168,23 @@ CI/CD variables, from the Azure Portal:
Azure AD federated identity credentials.
Review [Connect to cloud services](../index.md) for further details.
+
+### `Request to External OIDC endpoint failed` message
+
+If you receive the error `ERROR: AADSTS501661: Request to External OIDC endpoint failed.`,
+you should verify that your GitLab instance is publicly accessible from the internet.
+
+Azure must be able to access the following GitLab endpoints to authenticate with OIDC:
+
+- `GET /.well-known/openid-configuration`
+- `GET /oauth/discovery/keys`
+
+If you update your firewall and still receive this error, [clear the Redis cache](../../../administration/raketasks/maintenance.md#clear-redis-cache)
+and try again.
+
+### `No matching federated identity record found for presented assertion audience` message
+
+If you receive the error `ERROR: AADSTS700212: No matching federated identity record found for presented assertion audience 'https://gitlab.com'`,
+you should verify that your CI/CD job uses the correct `aud` value.
+
+The `aud` value should match the audience used to [create the federated identity credentials](#create-azure-ad-federated-identity-credentials).
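+
+For example, a sketch of where the `aud` value is set in a job (the token name `AZURE_JWT` and
+the job name are assumptions for illustration):
+
+```yaml
+authenticate-with-azure:
+  id_tokens:
+    AZURE_JWT:
+      aud: https://gitlab.com   # must match the audience configured in Azure AD
+  script:
+    - echo "Use $AZURE_JWT to authenticate against Azure"
+```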
diff --git a/doc/ci/cloud_services/google_cloud/index.md b/doc/ci/cloud_services/google_cloud/index.md
index a733f3d59cb..fd8aca7045c 100644
--- a/doc/ci/cloud_services/google_cloud/index.md
+++ b/doc/ci/cloud_services/google_cloud/index.md
@@ -22,6 +22,10 @@ This tutorial assumes you have a Google Cloud account and a Google Cloud project
Your account must have at least the **Workload Identity Pool Admin** permission
on the Google Cloud project.
+NOTE:
+If you would prefer to use a Terraform module and a CI/CD template instead of this tutorial,
+see [How OIDC can simplify authentication of GitLab CI/CD pipelines with Google Cloud](https://about.gitlab.com/blog/2023/06/28/introduction-of-oidc-modules-for-integration-between-google-cloud-and-gitlab-ci/).
+
To complete this tutorial:
1. [Create the Google Cloud Workload Identity Pool](#create-the-google-cloud-workload-identity-pool).
diff --git a/doc/ci/components/index.md b/doc/ci/components/index.md
index a3d6d7224e4..3d46ec5bbd5 100644
--- a/doc/ci/components/index.md
+++ b/doc/ci/components/index.md
@@ -4,14 +4,12 @@ group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
-# CI/CD components **(FREE ALL EXPERIMENT)**
+# CI/CD components **(FREE ALL BETA)**
> - Introduced as an [experimental feature](../../policy/experiment-beta-support.md) in GitLab 16.0, [with a flag](../../administration/feature_flags.md) named `ci_namespace_catalog_experimental`. Disabled by default.
> - [Enabled on GitLab.com and self-managed](https://gitlab.com/groups/gitlab-org/-/epics/9897) in GitLab 16.2.
> - [Feature flag `ci_namespace_catalog_experimental` removed](https://gitlab.com/gitlab-org/gitlab/-/issues/394772) in GitLab 16.3.
-
-This feature is an experimental feature and [an epic exists](https://gitlab.com/groups/gitlab-org/-/epics/9897)
-to track future work. Tell us about your use case by leaving comments in the epic.
+> - [Moved](https://gitlab.com/gitlab-com/www-gitlab-com/-/merge_requests/130824) to [Beta status](../../policy/experiment-beta-support.md) in GitLab 16.6.
A CI/CD component is a reusable single pipeline configuration unit. Use them to compose an entire pipeline configuration or a small part of a larger pipeline.
@@ -29,6 +27,8 @@ A components repository is a GitLab project with a repository that hosts one or
If a component requires different versioning from other components, the component should be migrated to its own components repository.
+One component repository can have a maximum of 10 components.
+
## Create a components repository
To create a components repository, you must:
@@ -65,17 +65,17 @@ the file structure should be similar to:
```plaintext
├── templates/
-│ └── only_template.yml
+│ └── secret-detection.yml
├── README.md
└── .gitlab-ci.yml
```
-This example component could be referenced with a path similar to `gitlab.com/my-username/my-component/only_template@<version>`,
+This example component could be referenced with a path similar to `gitlab.com/my-namespace/my-project/secret-detection@<version>`,
if the project is:
- On GitLab.com
-- Named `my-component`
-- In a personal namespace named `my-username`
+- Named `my-project`
+- In a personal namespace or group named `my-namespace`
The templates directory and the suffix of the configuration file should be excluded from the referenced path.
@@ -85,26 +85,32 @@ If the project contains multiple components, then the file structure should be s
├── README.md
├── .gitlab-ci.yml
└── templates/
- └── all-scans.yml
+ ├── all-scans.yml
└── secret-detection.yml
```
These components would be referenced with these paths:
-- `gitlab.com/my-username/my-component/all-scans`
-- `gitlab.com/my-username/my-component/secret-detection`
+- `gitlab.com/my-namespace/my-project/all-scans@<version>`
+- `gitlab.com/my-namespace/my-project/secret-detection@<version>`
+
+You can also define a component as a directory if you want to bundle multiple related files together.
+In this case, GitLab expects a `template.yml` file to be present in that directory:
-You can omit the filename in the path if the configuration file is named `template.yml`.
-For example, the following component could be referenced with `gitlab.com/my-username/my-component/dast`:
+For example:
```plaintext
├── README.md
├── .gitlab-ci.yml
-├── templates/
-│ └── dast
-│ └── template.yml
+└── templates/
+ └── dast
+ ├── docs.md
+ ├── Dockerfile
+ └── template.yml
```
+In this example, the component could be referenced with `gitlab.com/my-namespace/my-project/dast@<version>`.
+
#### Component configurations saved in any directory (deprecated)
WARNING:
@@ -117,8 +123,8 @@ Components configurations can be saved through the following directory structure
components, each file must be in a separate subdirectory.
- `README.md`: A documentation file explaining the details of all the components in the repository.
-For example, if the project is on GitLab.com, named `my-component`, and in a personal
-namespace named `my-username`:
+For example, if the project is on GitLab.com, named `my-project`, and in a personal
+namespace named `my-namespace`:
- Containing a single component and a simple pipeline to test the component, then
the file structure might be:
@@ -132,7 +138,7 @@ namespace named `my-username`:
The `.gitlab-ci.yml` file is not required for a CI/CD component to work, but
[testing the component](#test-the-component) in a pipeline in the project is recommended.
- This component is referenced with the path `gitlab.com/my-username/my-component@<version>`.
+ This component is referenced with the path `gitlab.com/my-namespace/my-project@<version>`.
- Containing one default component and multiple sub-components, then the file structure
might be:
@@ -149,9 +155,9 @@ namespace named `my-username`:
These components are identified by these paths:
- - `gitlab.com/my-username/my-component`
- - `gitlab.com/my-username/my-component/unit`
- - `gitlab.com/my-username/my-component/integration`
+ - `gitlab.com/my-namespace/my-project`
+ - `gitlab.com/my-namespace/my-project/unit`
+ - `gitlab.com/my-namespace/my-project/integration`
It is possible to have a components repository with no default component, by having
no `template.yml` in the root directory.
@@ -169,19 +175,41 @@ Nesting of components is not possible. For example:
## Release a component
-To create a release for a CI/CD component, use either:
+To create a release for a CI/CD component, use the [`release`](../yaml/index.md#release)
+keyword in a CI/CD pipeline.
+
+For example:
+
+```yaml
+create-release:
+ stage: deploy
+ image: registry.gitlab.com/gitlab-org/release-cli:latest
+ rules:
+ - if: $CI_COMMIT_TAG =~ /^v\d+/
+ script: echo "Creating release $CI_COMMIT_TAG"
+ release:
+ tag_name: $CI_COMMIT_TAG
+ description: "Release $CI_COMMIT_TAG of components repository $CI_PROJECT_PATH"
+```
+
+In this example, the job runs only for tags formatted as `v` + version number.
+If all previous jobs succeed, the release is created.
-- The [`release`](../yaml/index.md#release) keyword in a CI/CD pipeline. Like in the
- [component testing example](#test-the-component), you can set a component to automatically
- be released after all tests pass in pipelines for new tags.
-- The [UI for creating a release](../../user/project/releases/index.md#create-a-release).
+Like in the [component testing example](#test-the-component), you can set a component to automatically
+be released after all tests pass in pipelines for new tags.
-All released versions of the components are displayed in the [CI/CD Catalog](catalog.md)
-page for the given resource, providing users with information about official releases.
+All released versions of the components repositories are displayed in the [CI/CD Catalog](catalog.md),
+providing users with information about official releases.
Components [can be used](#use-a-component-in-a-cicd-configuration) without being released,
-but only with a commit SHA or a branch name. To enable the use of tags or the `~latest` version keyword,
-you must create a release.
+by using the commit SHA or ref. However, the `~latest` version keyword can only be used with released tags.
+
+NOTE:
+The `~latest` keyword always returns the most recent release, not the release with
+the latest semantic version. For example, if you first release `v2.0.0`, and later release
+a patch fix like `v1.5.1`, then `~latest` returns the `v1.5.1` release.
+[Issue #427286](https://gitlab.com/gitlab-org/gitlab/-/issues/427286) proposes to
+change this behavior.
## Use a component in a CI/CD configuration
@@ -190,7 +218,7 @@ For example:
```yaml
include:
- - component: gitlab.example.com/my-namespace/my-component@1.0
+ - component: gitlab.example.com/my-namespace/my-project@1.0
inputs:
stage: build
```
@@ -395,7 +423,7 @@ For example:
```yaml
include:
# include the component located in the current project from the current SHA
- - component: gitlab.com/$CI_PROJECT_PATH@$CI_COMMIT_SHA
+ - component: gitlab.com/$CI_PROJECT_PATH/my-project@$CI_COMMIT_SHA
inputs:
stage: build
diff --git a/doc/ci/debugging.md b/doc/ci/debugging.md
new file mode 100644
index 00000000000..5bcf834b61d
--- /dev/null
+++ b/doc/ci/debugging.md
@@ -0,0 +1,295 @@
+---
+stage: Verify
+group: Pipeline Authoring
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+type: reference
+---
+
+# Debugging CI/CD pipelines **(FREE ALL)**
+
+GitLab provides several tools to help make it easier to debug your CI/CD configuration.
+
+If you are unable to resolve pipeline issues, you can get help from:
+
+- The [GitLab community forum](https://forum.gitlab.com/)
+- GitLab [Support](https://about.gitlab.com/support/)
+
+## Verify syntax
+
+An early source of problems can be incorrect syntax. The pipeline shows a `yaml invalid`
+badge and does not start running if any syntax or formatting problems are found.
+
+### Edit `.gitlab-ci.yml` with the pipeline editor
+
+The [pipeline editor](pipeline_editor/index.md) is the recommended editing
+experience (rather than the single file editor or the Web IDE). It includes:
+
+- Code completion suggestions that ensure you are only using accepted keywords.
+- Automatic syntax highlighting and validation.
+- The [CI/CD configuration visualization](pipeline_editor/index.md#visualize-ci-configuration),
+ a graphical representation of your `.gitlab-ci.yml` file.
+
+### Edit `.gitlab-ci.yml` locally
+
+If you prefer to edit your pipeline configuration locally, you can use the
+GitLab CI/CD schema in your editor to verify basic syntax issues. Any
+[editor with Schemastore support](https://www.schemastore.org/json/#editors) uses
+the GitLab CI/CD schema by default.
+
+If you need to link to the schema directly, use this URL:
+
+```plaintext
+https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/editor/schema/ci.json
+```
+
+To see the full list of custom tags covered by the CI/CD schema, check the
+latest version of the schema.
+
+### Verify syntax with CI Lint tool
+
+You can use the [CI Lint tool](lint.md) to verify that the syntax of a CI/CD configuration
+snippet is correct. Paste in full `.gitlab-ci.yml` files or individual job configurations,
+to verify the basic syntax.
+
+When a `.gitlab-ci.yml` file is present in a project, you can also use the CI Lint
+tool to [simulate the creation of a full pipeline](lint.md#simulate-a-pipeline).
+It does deeper verification of the configuration syntax.
+
+## Verify variables
+
+A key part of troubleshooting CI/CD is to verify which variables are present in a
+pipeline, and what their values are. A lot of pipeline configuration is dependent
+on variables, and verifying them is one of the fastest ways to find the source of
+a problem.
+
+[Export the full list of variables](variables/index.md#list-all-variables)
+available in each problematic job. Check if the variables you expect are present,
+and check if their values are what you expect.
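+
+One quick way to do this (a sketch; the job name `debug-variables` is illustrative) is a
+temporary job that prints the whole environment:
+
+```yaml
+debug-variables:
+  stage: .pre
+  script:
+    - export   # prints every variable available to the job, including CI/CD variables
+  rules:
+    - when: manual
+      allow_failure: true   # optional job: the rest of the pipeline is not blocked
+```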
+
+## Job configuration issues
+
+A lot of common pipeline issues can be fixed by analyzing the behavior of the `rules`
+or `only/except` configuration used to [control when jobs are added to a pipeline](jobs/job_control.md).
+You shouldn't use these two configurations in the same pipeline, as they behave differently.
+It's hard to predict how a pipeline runs with this mixed behavior. `rules` is the preferred
+choice for controlling jobs, as `only` and `except` are no longer being actively developed.
+
+If your `rules` or `only/except` configuration makes use of [predefined variables](variables/predefined_variables.md)
+like `CI_PIPELINE_SOURCE` or `CI_MERGE_REQUEST_ID`, you should [verify them](#verify-variables)
+as the first troubleshooting step.
+
+### Jobs or pipelines don't run when expected
+
+The `rules` or `only/except` keywords are what determine whether or not a job is
+added to a pipeline. If a pipeline runs, but a job is not added to the pipeline,
+it's usually due to `rules` or `only/except` configuration issues.
+
+If a pipeline does not seem to run at all, with no error message, it may also be
+due to `rules` or `only/except` configuration, or the `workflow: rules` keyword.
+
+If you are converting from `only/except` to the `rules` keyword, you should check
+the [`rules` configuration details](yaml/index.md#rules) carefully. The behavior
+of `only/except` and `rules` is different and can cause unexpected behavior when migrating
+between the two.
+
+The [common `if` clauses for `rules`](jobs/job_control.md#common-if-clauses-for-rules)
+can be very helpful for examples of how to write rules that behave the way you expect.
+
+### A job with the `changes` keyword runs unexpectedly
+
+A common reason a job is added to a pipeline unexpectedly is because the `changes`
+keyword always evaluates to true in certain cases. For example, `changes` is always
+true in certain pipeline types, including scheduled pipelines and pipelines for tags.
+
+The `changes` keyword is used in combination with [`only/except`](yaml/index.md#onlychanges--exceptchanges)
+or [`rules`](yaml/index.md#ruleschanges). It's recommended to only use `changes` with
+`if` sections in `rules` or `only/except` configuration that ensures the job is only added to
+branch pipelines or merge request pipelines.
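+
+For example, a sketch of a rule (the job name and paths are illustrative) that only evaluates
+`changes` in merge request pipelines:
+
+```yaml
+build-docs:
+  script: echo "Docs changed"
+  rules:
+    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
+      changes:
+        - docs/**/*
+```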
+
+### Two pipelines run at the same time
+
+Two pipelines can run when pushing a commit to a branch that has an open merge request
+associated with it. Usually one pipeline is a merge request pipeline, and the other
+is a branch pipeline.
+
+This situation is usually caused by the `rules` configuration, and there are several ways to
+[prevent duplicate pipelines](jobs/job_control.md#avoid-duplicate-pipelines).
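+
+One common configuration (a sketch of the documented `workflow: rules` pattern) prefers the
+merge request pipeline and suppresses the branch pipeline while a merge request is open:
+
+```yaml
+workflow:
+  rules:
+    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
+    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
+      when: never
+    - if: $CI_COMMIT_BRANCH
+```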
+
+### No pipeline or the wrong type of pipeline runs
+
+Before a pipeline can run, GitLab evaluates all the jobs in the configuration and tries
+to add them to all available pipeline types. A pipeline does not run if no jobs are added
+to it at the end of the evaluation.
+
+If a pipeline did not run, it's likely that all the jobs had `rules` or `only/except` that
+blocked them from being added to the pipeline.
+
+If the wrong pipeline type ran, then the `rules` or `only/except` configuration should
+be checked to make sure the jobs are added to the correct pipeline type. For
+example, if a merge request pipeline did not run, the jobs may have been added to
+a branch pipeline instead.
+
+It's also possible that your [`workflow: rules`](yaml/index.md#workflow) configuration
+blocked the pipeline, or allowed the wrong pipeline type.
+
+### Pipeline with many jobs fails to start
+
+A pipeline that has more jobs than the instance's defined [CI/CD limits](../administration/settings/continuous_integration.md#set-cicd-limits)
+fails to start.
+
+To reduce the number of jobs in a single pipeline, you can split your `.gitlab-ci.yml`
+configuration into more independent [parent-child pipelines](../ci/pipelines/pipeline_architectures.md#parent-child-pipelines).
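+
+For example, a sketch of a parent job (the file name `child-pipeline.yml` is illustrative) that
+moves part of the work into a child pipeline:
+
+```yaml
+run-child-pipeline:
+  trigger:
+    include: child-pipeline.yml   # configuration file stored in the same repository
+    strategy: depend              # optional: mirror the child pipeline's status
+```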
+
+## Pipeline warnings
+
+Pipeline configuration warnings are shown when you:
+
+- [Validate configuration with the CI Lint tool](yaml/index.md).
+- [Manually run a pipeline](pipelines/index.md#run-a-pipeline-manually).
+
+### `Job may allow multiple pipelines to run for a single action` warning
+
+When you use [`rules`](yaml/index.md#rules) with a `when` clause without an `if`
+clause, multiple pipelines may run. Usually this occurs when you push a commit to
+a branch that has an open merge request associated with it.
+
+To [prevent duplicate pipelines](jobs/job_control.md#avoid-duplicate-pipelines), use
+[`workflow: rules`](yaml/index.md#workflow) or rewrite your rules to control
+which pipelines can run.
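+
+For example, instead of a bare `- when: manual` entry, scope the rule with an `if` clause so the
+job is only added to one pipeline type (the job name and condition are illustrative):
+
+```yaml
+deploy:
+  script: echo "Deploying"
+  rules:
+    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
+      when: manual
+```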
+
+## Troubleshooting
+
+For help with a specific area, see:
+
+- [Caching](caching/index.md#troubleshooting).
+- [CI/CD job tokens](jobs/ci_job_token.md).
+- [Container Registry](../user/packages/container_registry/troubleshoot_container_registry.md).
+- [Docker](docker/using_docker_build.md#troubleshooting).
+- [Downstream pipelines](pipelines/downstream_pipelines.md#troubleshooting).
+- [Environments](environments/deployment_safety.md#ensure-only-one-deployment-job-runs-at-a-time).
+- [GitLab Runner](https://docs.gitlab.com/runner/faq/).
+- [ID tokens](secrets/id_token_authentication.md#troubleshooting).
+- [Jobs](jobs/index.md#troubleshooting).
+- [Job control](jobs/job_control.md).
+- [Job artifacts](jobs/job_artifacts_troubleshooting.md).
+- [Merge request pipelines](pipelines/merge_request_pipelines.md#troubleshooting),
+ [merged results pipelines](pipelines/merged_results_pipelines.md#troubleshooting),
+ and [Merge trains](pipelines/merge_trains.md#troubleshooting).
+- [Pipeline editor](pipeline_editor/index.md#troubleshooting).
+- [Variables](variables/index.md#troubleshooting).
+- [YAML `includes` keyword](yaml/includes.md#troubleshooting).
+- [YAML `script` keyword](yaml/script.md#troubleshooting).
+
+Otherwise, review the following troubleshooting sections for known status messages
+and error messages.
+
+### `A CI/CD pipeline must run and be successful before merge` message
+
+This message is shown if the [**Pipelines must succeed**](../user/project/merge_requests/merge_when_pipeline_succeeds.md#require-a-successful-pipeline-for-merge)
+setting is enabled in the project and a pipeline has not yet run successfully.
+This also applies if the pipeline has not been created yet, or if you are waiting
+for an external CI service.
+
+If you don't use pipelines for your project, then you should disable **Pipelines must succeed**
+so you can accept merge requests.
+
+### `Checking ability to merge automatically` message
+
+If your merge request is stuck with a `Checking ability to merge automatically`
+message that does not disappear after a few minutes, you can try one of these workarounds:
+
+- Refresh the merge request page.
+- Close and reopen the merge request.
+- Rebase the merge request with the `/rebase` [quick action](../user/project/quick_actions.md).
+- If you have already confirmed the merge request is ready to be merged, you can merge
+ it with the `/merge` quick action.
+
+This issue is [resolved](https://gitlab.com/gitlab-org/gitlab/-/issues/229352) in GitLab 15.5.
+
+### `Checking pipeline status` message
+
+This message displays when the merge request does not yet have a pipeline associated with the
+latest commit. This might be because:
+
+- GitLab hasn't finished creating the pipeline yet.
+- You are using an external CI service and GitLab hasn't heard back from the service yet.
+- You are not using CI/CD pipelines in your project.
+- You are using CI/CD pipelines in your project, but your configuration prevented a pipeline from running on the source branch for your merge request.
+- The latest pipeline was deleted (this is a [known issue](https://gitlab.com/gitlab-org/gitlab/-/issues/214323)).
+- The source branch of the merge request is on a private fork.
+
+After the pipeline is created, the message updates with the pipeline status.
+
+### `Project <group/project> not found or access denied` message
+
+This message is shown if configuration is added with [`include`](yaml/index.md#include) and either:
+
+- The configuration refers to a project that can't be found.
+- The user that is running the pipeline is unable to access any included projects.
+
+To resolve this, check that:
+
+- The path of the project is in the format `my-group/my-project` and does not include
+ any folders in the repository.
+- The user running the pipeline is a [member of the projects](../user/project/members/index.md#add-users-to-a-project)
+ that contain the included files. Users must also have the [permission](../user/permissions.md#job-permissions)
+ to run CI/CD jobs in the same projects.
+
+### `The parsed YAML is too big` message
+
+This message displays when the YAML configuration is too large or nested too deeply.
+YAML files with a large number of includes, and thousands of lines overall, are
+more likely to hit this memory limit. For example, a YAML file that is 200 kb is
+likely to hit the default memory limit.
+
+To reduce the configuration size, you can:
+
+- Check the length of the expanded CI/CD configuration in the pipeline editor's
+ [Full configuration](pipeline_editor/index.md#view-full-configuration) tab. Look for
+ duplicated configuration that can be removed or simplified.
+- Move long or repeated `script` sections into standalone scripts in the project.
+- Use [parent and child pipelines](pipelines/downstream_pipelines.md#parent-child-pipelines) to move some
+ work to jobs in an independent child pipeline.
+
+On a self-managed instance, you can [increase the size limits](../administration/instance_limits.md#maximum-size-and-depth-of-cicd-configuration-yaml-files).
+
+### `500` error when editing the `.gitlab-ci.yml` file
+
+A [loop of included configuration files](pipeline_editor/index.md#configuration-validation-currently-not-available-message)
+can cause a `500` error when editing the `.gitlab-ci.yml` file with the [web editor](../user/project/repository/web_editor.md).
+
+Ensure that included configuration files do not create a loop of references to each other.
+
+### `Failed to pull image` messages
+
+> **Allow access to this project with a CI_JOB_TOKEN** setting [renamed to **Limit access _to_ this project**](https://gitlab.com/gitlab-org/gitlab/-/issues/411406) in GitLab 16.3.
+
+A runner might return a `Failed to pull image` message when trying to pull a container image
+in a CI/CD job.
+
+The runner authenticates with a [CI/CD job token](jobs/ci_job_token.md)
+when fetching a container image defined with [`image`](yaml/index.md#image)
+from another project's container registry.
+
+If the job token settings prevent access to the other project's container registry,
+the runner returns an error message.
+
+For example:
+
+- ```plaintext
+ WARNING: Failed to pull image with policy "always": Error response from daemon: pull access denied for registry.example.com/path/to/project, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
+ ```
+
+- ```plaintext
+ WARNING: Failed to pull image with policy "": image pull failed: rpc error: code = Unknown desc = failed to pull and unpack image "registry.example.com/path/to/project/image:v1.2.3": failed to resolve reference "registry.example.com/path/to/project/image:v1.2.3": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
+ ```
+
+These errors can happen if the following are both true:
+
+- The [**Limit access _to_ this project**](jobs/ci_job_token.md#limit-job-token-scope-for-public-or-internal-projects)
+ option is enabled in the private project hosting the image.
+- The job attempting to fetch the image is running in a project that is not listed in
+ the private project's allowlist.
+
+To resolve this issue, add any projects with CI/CD jobs that fetch images from the container
+registry to the target project's [job token allowlist](jobs/ci_job_token.md#allow-access-to-your-project-with-a-job-token).
diff --git a/doc/ci/docker/using_docker_build.md b/doc/ci/docker/using_docker_build.md
index 269ce2c3212..2505089e4be 100644
--- a/doc/ci/docker/using_docker_build.md
+++ b/doc/ci/docker/using_docker_build.md
@@ -390,9 +390,7 @@ sudo gitlab-runner register -n \
--docker-volumes /var/run/docker.sock:/var/run/docker.sock
```
-To use more complex Docker-in-Docker configurations, such as is necessary to run Code Quality checks
-with Code Climate, you need to ensure that the paths to the build directory are the same on the host
-as well as inside the Docker container. For more details, see
+For complex Docker-in-Docker setups, such as Code Quality checks with Code Climate, the build directory path must be the same on the host and inside the Docker container. For more details, see
[Improve Code Quality performance with private runners](../testing/code_quality.md#improve-code-quality-performance-with-private-runners).
#### Enable registry mirror for `docker:dind` service
diff --git a/doc/ci/docker/using_docker_images.md b/doc/ci/docker/using_docker_images.md
index 455731f6c65..dd6cd2099a9 100644
--- a/doc/ci/docker/using_docker_images.md
+++ b/doc/ci/docker/using_docker_images.md
@@ -471,3 +471,78 @@ REPOSITORY TAG DIGE
gitlab/gitlab-ee latest sha256:723aa6edd8f122d50cae490b1743a616d54d4a910db892314d68470cc39dfb24 (...)
gitlab/gitlab-runner latest sha256:4a18a80f5be5df44cb7575f6b89d1fdda343297c6fd666c015c0e778b276e726 (...)
```
+
+## Create a custom GitLab Runner Docker image
+
+You can create a custom GitLab Runner Docker image that packages the AWS CLI and the Amazon ECR Credential Helper.
+This setup enables secure, streamlined interactions with AWS services, especially for containerized applications.
+For example, teams that deploy microservices on AWS can use this setup to manage, deploy, and update Docker images
+on Amazon ECR without error-prone manual credential management.
+
+1. [Authenticate GitLab with AWS](../cloud_deployment/index.md#authenticate-gitlab-with-aws).
+1. Create a `Dockerfile` with the following content:
+
+ ```Dockerfile
+ # Control package versions
+ ARG GITLAB_RUNNER_VERSION=v16.4.0
+ ARG AWS_CLI_VERSION=2.2.30
+
+ # AWS CLI and Amazon ECR Credential Helper
+ FROM amazonlinux as aws-tools
+ # Redeclare the build argument so it is available in this stage
+ ARG AWS_CLI_VERSION
+ RUN set -e \
+ && yum update -y \
+ && yum install -y --allowerasing git make gcc curl unzip \
+ && curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64-${AWS_CLI_VERSION}.zip" --output "awscliv2.zip" \
+ && unzip awscliv2.zip && ./aws/install -i /usr/local/bin \
+ && yum clean all
+
+ # Download and install ECR Credential Helper
+ RUN curl --location --output /usr/local/bin/docker-credential-ecr-login "https://github.com/awslabs/amazon-ecr-credential-helper/releases/latest/download/docker-credential-ecr-login-linux-amd64"
+ RUN chmod +x /usr/local/bin/docker-credential-ecr-login
+
+ # Configure the ECR Credential Helper
+ RUN mkdir -p /root/.docker
+ RUN echo '{ "credsStore": "ecr-login" }' > /root/.docker/config.json
+
+ # Final image based on GitLab Runner
+ FROM gitlab/gitlab-runner:${GITLAB_RUNNER_VERSION}
+
+ # Install necessary packages
+ RUN apt-get update \
+ && apt-get install -y --no-install-recommends jq procps curl unzip groff libgcrypt20 tar gzip less openssh-client \
+ && apt-get clean && rm -rf /var/lib/apt/lists/*
+
+ # Copy AWS CLI and Amazon ECR Credential Helper binaries
+ COPY --from=aws-tools /usr/local/bin/ /usr/local/bin/
+
+ # Copy ECR Credential Helper Configuration
+ COPY --from=aws-tools /root/.docker/config.json /root/.docker/config.json
+ ```
+
+1. To build the custom GitLab Runner Docker image, add a job like the following example to your `.gitlab-ci.yml` file:
+
+ ```yaml
+ variables:
+ DOCKER_DRIVER: overlay2
+ IMAGE_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
+ GITLAB_RUNNER_VERSION: v16.4.0
+ AWS_CLI_VERSION: 2.13.21
+
+ stages:
+ - build
+
+ build-image:
+ stage: build
+ script:
+ - echo "Logging into GitLab Container Registry..."
+ - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
+ - echo "Building Docker image..."
+ - docker build --build-arg GITLAB_RUNNER_VERSION=${GITLAB_RUNNER_VERSION} --build-arg AWS_CLI_VERSION=${AWS_CLI_VERSION} -t ${IMAGE_NAME} .
+ - echo "Pushing Docker image to GitLab Container Registry..."
+ - docker push ${IMAGE_NAME}
+ rules:
+ - changes:
+ - Dockerfile
+ ```
+
+1. [Register the runner](https://docs.gitlab.com/runner/register/index.html#docker).
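+
+   For example, a registration command for the image built in the previous step might
+   look like the following sketch. The image path, GitLab URL, and registration token
+   are placeholders:
+
+   ```shell
+   docker run --rm -it \
+     -v /srv/gitlab-runner/config:/etc/gitlab-runner \
+     registry.example.com/my-group/my-project:main register \
+       --non-interactive \
+       --url "https://gitlab.example.com/" \
+       --registration-token "<registration_token>" \
+       --executor "docker" \
+       --docker-image "alpine:latest" \
+       --description "runner-with-aws-tools"
+   ```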
diff --git a/doc/ci/enable_or_disable_ci.md b/doc/ci/enable_or_disable_ci.md
index 3081b8d1b39..d8a2fd66228 100644
--- a/doc/ci/enable_or_disable_ci.md
+++ b/doc/ci/enable_or_disable_ci.md
@@ -1,59 +1,11 @@
---
-stage: Verify
-group: Pipeline Execution
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
-type: howto
+redirect_to: 'pipelines/settings.md#disable-gitlab-cicd-pipelines'
+remove_date: '2024-01-30'
---
-# Disabling GitLab CI/CD **(FREE ALL)**
+This document was moved to [another location](pipelines/settings.md#disable-gitlab-cicd-pipelines).
-GitLab CI/CD is enabled by default on all new projects.
-If you use an external CI/CD server like Jenkins or Drone CI, you can
-disable GitLab CI/CD to avoid conflicts with the commits status
-API.
-
-You can disable GitLab CI/CD:
-
-- [For each project](#disable-cicd-in-a-project).
-- [For all new projects on an instance](../administration/cicd.md).
-
-These changes do not apply to projects in an
-[external integration](../user/project/integrations/index.md#available-integrations).
-
-## Disable CI/CD in a project
-
-When you disable GitLab CI/CD:
-
-- The **CI/CD** item in the left sidebar is removed.
-- The `/pipelines` and `/jobs` pages are no longer available.
-- Existing jobs and pipelines are hidden, not removed.
-
-To disable GitLab CI/CD in your project:
-
-1. On the left sidebar, select **Search or go to** and find your project.
-1. Select **Settings > General**.
-1. Expand **Visibility, project features, permissions**.
-1. In the **Repository** section, turn off **CI/CD**.
-1. Select **Save changes**.
-
-## Enable CI/CD in a project
-
-To enable GitLab CI/CD in your project:
-
-1. On the left sidebar, select **Search or go to** and find your project.
-1. Select **Settings > General**.
-1. Expand **Visibility, project features, permissions**.
-1. In the **Repository** section, turn on **CI/CD**.
-1. Select **Save changes**.
-
-<!-- ## Troubleshooting
-
-Include any troubleshooting steps that you can foresee. If you know beforehand what issues
-one might have when setting this up, or when something is changed, or on upgrading, it's
-important to describe those, too. Think of things that may go wrong and include them here.
-This is important to minimize requests for support, and to avoid doc comments with
-questions that you know someone might ask.
-
-Each scenario can be a third-level heading, for example `### Getting error message X`.
-If you have none to add when creating a doc, leave this section in place
-but commented out to help encourage others to add to it in the future. -->
+<!-- This redirect file can be deleted after <2024-01-30>. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Redirects that point to docs in a different project or site (link is not relative and starts with `https:`) expire in one year. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
diff --git a/doc/ci/environments/deployment_approvals.md b/doc/ci/environments/deployment_approvals.md
index 754dcafb9f7..b14ee5eb3eb 100644
--- a/doc/ci/environments/deployment_approvals.md
+++ b/doc/ci/environments/deployment_approvals.md
@@ -23,6 +23,10 @@ require approvals for deployments to production environments.
You can require approvals for deployments to protected environments in
a project.
+Prerequisite:
+
+- To update an environment, you must have at least the Maintainer role.
+
To configure deployment approvals for a project:
1. Create a deployment job in the `.gitlab-ci.yml` file of your project:
@@ -41,10 +45,26 @@ To configure deployment approvals for a project:
The job does not need to be manual (`when: manual`).
-1. Add the required [approval rules](#multiple-approval-rules).
+1. Add the required [approval rules](#add-multiple-approval-rules).
The environments in your project require approval before deployment.
+### Add multiple approval rules
+
+> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/345678) in GitLab 14.10 with a flag named `deployment_approval_rules`. Disabled by default.
+> - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/345678) in GitLab 15.0. [Feature flag `deployment_approval_rules`](https://gitlab.com/gitlab-org/gitlab/-/issues/345678) removed.
+> - UI configuration [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/378445) in GitLab 15.11.
+
+Add multiple approval rules to control who can approve and execute deployment jobs.
+
+To configure multiple approval rules, use the [CI/CD settings](protected_environments.md#protecting-environments).
+You can [also use the API](../../api/group_protected_environments.md#protect-a-single-environment).
+
+All jobs deploying to the environment are blocked and wait for approvals before running.
+Make sure the number of required approvals is less than the number of users allowed to deploy.
+
+After a deployment job is approved, you must [run the job manually](../jobs/job_control.md#run-a-manual-job).
+
<!--- start_remove The following content will be removed on remove_date: '2024-05-22' -->
### Unified approval setting (deprecated)
@@ -62,7 +82,7 @@ To configure approvals for a protected environment:
- Using the [REST API](../../api/protected_environments.md#protect-a-single-environment),
set the `required_approval_count` field to 1 or more.
-After this is configured, all jobs deploying to this environment automatically go into a blocked state and wait for approvals before running. Ensure that the number of required approvals is less than the number of users allowed to deploy.
+After this setting is configured, all jobs deploying to this environment automatically go into a blocked state and wait for approvals before running. Ensure that the number of required approvals is less than the number of users allowed to deploy.
Example:
@@ -73,46 +93,8 @@ curl --header 'Content-Type: application/json' --request POST \
"https://gitlab.example.com/api/v4/projects/22034114/protected_environments"
```
-NOTE:
-To protect, update, or unprotect an environment, you must have at least the
-Maintainer role.
-
<!--- end_remove -->
-### Multiple approval rules
-
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/345678) in GitLab 14.10 with a flag named `deployment_approval_rules`. Disabled by default.
-> - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/345678) in GitLab 15.0. [Feature flag `deployment_approval_rules`](https://gitlab.com/gitlab-org/gitlab/-/issues/345678) removed.
-> - UI configuration [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/378445) in GitLab 15.11.
-
-- Using the [REST API](../../api/group_protected_environments.md#protect-a-single-environment).
- - `deploy_access_levels` represents which entity can execute the deployment job.
- - `approval_rules` represents which entity can approve the deployment job.
-- Using the [UI](protected_environments.md#protecting-environments).
- - **Allowed to deploy** sets which entities can execute the deployment job.
- - **Approvers** sets which entities can approve the deployment job.
-
-After this is configured, all jobs deploying to this environment automatically go into a blocked state and wait for approvals before running. Ensure that the number of required approvals is less than the number of users allowed to deploy. Once a deployment job is approved, it must be [run manually](../jobs/job_control.md#run-a-manual-job).
-
-A configuration that uses the REST API might look like:
-
-```shell
-curl --header 'Content-Type: application/json' --request POST \
- --data '{"name": "production", "deploy_access_levels": [{"group_id": 138}], "approval_rules": [{"group_id": 134}, {"group_id": 135, "required_approvals": 2}]}' \
- --header "PRIVATE-TOKEN: <your_access_token>" \
- "https://gitlab.example.com/api/v4/groups/128/protected_environments"
-```
-
-With this setup:
-
-- The operator group (`group_id: 138`) has permission to execute the deployment jobs to the `production` environment in the organization (`group_id: 128`).
-- The QA tester group (`group_id: 134`) and security group (`group_id: 135`) have permission to approve the deployment jobs to the `production` environment in the organization (`group_id: 128`).
-- Unless two approvals from security group and one approval from QA tester group have been collected, the operator group can't execute the deployment jobs.
-
-NOTE:
-To protect, update, or unprotect an environment, you must have at least the
-Maintainer role.
-
### Migrate to multiple approval rules
You can migrate a protected environment from unified approval rules to multiple
@@ -128,7 +110,7 @@ To migrate with the UI:
1. From the **Environment** list, select your environment.
1. For each entity allowed to deploy to the environment:
1. Select **Add approval rules**.
- 1. In the modal window, select which entity is allowed to approve the
+ 1. On the dialog, select which entity is allowed to approve the
deployment job.
1. Enter the number of required approvals.
1. Select **Save**.
@@ -154,6 +136,9 @@ require `Administrator` to approve every deployment job in `Production`.
> - Automatic approval [removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/124638) in GitLab 16.2 due to [usability issues](https://gitlab.com/gitlab-org/gitlab/-/issues/391258).
By default, the user who triggers a deployment pipeline can't also approve the deployment job.
+
+A GitLab administrator can approve or reject all deployments.
+
To allow self-approval of a deployment job:
1. On the left sidebar, select **Search or go to** and find your project.
@@ -165,55 +150,53 @@ To allow self-approval of a deployment job:
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/342180/) in GitLab 14.9
-Using either the GitLab UI or the API, you can:
+Using the GitLab UI or the API, you can:
- Approve a deployment to allow it to proceed.
- Reject a deployment to prevent it.
-NOTE:
-GitLab administrators can approve or reject all deployments.
+Prerequisites:
-### Approve or reject a deployment using the UI
+- You have permission to deploy to the protected environment.
-Prerequisites:
+::Tabs
-- Permission to deploy to the protected environment.
+:::TabTitle With the UI
-To approve or reject a deployment to a protected environment using the UI:
+To approve or reject a deployment with the UI:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Operate > Environments**.
1. Select the environment's name.
1. In the deployment's row, select **Approval options** (**{thumb-up}**).
- Before approving or rejecting the deployment, you can view the number of approvals granted and
- remaining, also who has approved or rejected it.
+ Before you approve or reject the deployment, you can view the deployment's approval details.
1. Optional. Add a comment which describes your reason for approving or rejecting the deployment.
1. Select **Approve** or **Reject**.
-### Approve or reject a deployment using the API
+:::TabTitle With the API
-Prerequisites:
+To approve or reject a deployment with the API:
-- Permission to deploy to the protected environment.
+- Pass the required attributes to the deployment endpoint.
-To approve or reject a deployment to a protected environment using the API, pass the
-required attributes. For more details, see
-[Approve or reject a blocked deployment](../../api/deployments.md#approve-or-reject-a-blocked-deployment).
+For details, see [Approve or reject a blocked deployment](../../api/deployments.md#approve-or-reject-a-blocked-deployment).
-Example:
+For example:
```shell
curl --data "status=approved&comment=Looks good to me" \
--header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/1/deployments/1/approval"
```
+::EndTabs
+
### View the approval details of a deployment
Prerequisites:
-- Permission to deploy to the protected environment.
+- You have permission to deploy to the protected environment.
-A deployment to a protected environment can only proceed after all required approvals have been
+A deployment to a protected environment can proceed only after all required approvals have been
granted.
To view the approval details of a deployment:
@@ -230,25 +213,31 @@ The approval status details are shown:
- Users who have granted approval
- History of approvals or rejections
-## How to see blocked deployments
+## View blocked deployments
+
+Use the UI or API to review the status of your deployments, including whether a deployment is blocked.
-### Using the UI
+::Tabs
+
+:::TabTitle With the UI
+
+To view your deployments:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Operate > Environments**.
1. Select the environment being deployed to.
-1. Look for the `blocked` label.
-### Using the API
+A deployment with the **blocked** label is blocked.
+
+:::TabTitle With the API
+
+To view your deployments:
+
+- Using the [deployments API](../../api/deployments.md#get-a-specific-deployment), get a specific deployment, or a list of all deployments in a project.
-Use the [Deployments API](../../api/deployments.md#get-a-specific-deployment) to see deployments.
+The `status` field indicates whether a deployment is blocked.
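+
+For example, a request for a specific deployment might look like this sketch
+(the project and deployment IDs are placeholders):
+
+```shell
+curl --header "PRIVATE-TOKEN: <your_access_token>" \
+  "https://gitlab.example.com/api/v4/projects/1/deployments/42"
+```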
-- The `status` field indicates if a deployment is blocked.
-- When the [unified approval setting](#unified-approval-setting-deprecated) is configured:
- - The `pending_approval_count` field indicates how many approvals are remaining to run a deployment.
- - The `approvals` field contains the deployment's approvals.
-- When the [multiple approval rules](#multiple-approval-rules) is configured:
- - The `approval_summary` field contains the current approval status per rule.
+::EndTabs
## Related topics
diff --git a/doc/ci/environments/kubernetes_dashboard.md b/doc/ci/environments/kubernetes_dashboard.md
index 0f9e1d808ec..42fa560ad76 100644
--- a/doc/ci/environments/kubernetes_dashboard.md
+++ b/doc/ci/environments/kubernetes_dashboard.md
@@ -55,6 +55,17 @@ Prerequisites:
## View a dashboard
+> Kubernetes watch API integration [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/422945) in GitLab 16.6 [with a flag](../../administration/feature_flags.md) named `k8s_watch_api`. Disabled by default.
+
+FLAG:
+On self-managed GitLab, by default the Kubernetes watch API integration is not available.
+To make it available, an administrator can [enable the feature flag](../../administration/feature_flags.md) named `k8s_watch_api`.
+On GitLab.com, this feature is not available.
+
+View a dashboard to see the status of any connected clusters.
+If the `k8s_watch_api` feature flag is enabled, the status of your
+pods and the Flux reconciliation state update in real time.
+
To view a configured dashboard:
1. On the left sidebar, select **Search or go to** and find your project.
@@ -72,8 +83,7 @@ You can review the sync status of your Flux deployments from a dashboard.
To display the deployment status, your dashboard must be able to retrieve the `Kustomization` and `HelmRelease` resources,
which requires a namespace to be configured for the environment.
-By default, GitLab searches the `Kustomization` and `HelmRelease` resources for the name of the project slug.
-You can specify the resource names with the **Flux resource** dropdown list in the environment settings.
+GitLab searches the `Kustomization` and `HelmRelease` resources specified by the **Flux resource** dropdown list in the environment settings.
A dashboard displays one of the following status badges:
diff --git a/doc/ci/index.md b/doc/ci/index.md
index 413116b0e51..c0c63d13d3a 100644
--- a/doc/ci/index.md
+++ b/doc/ci/index.md
@@ -21,23 +21,34 @@ If you're new to GitLab CI/CD, start by reviewing some of the commonly used term
### The `.gitlab-ci.yml` file
-To use GitLab CI/CD, you start with a `.gitlab-ci.yml` file at the root of your project.
-In this file, you specify the list of things you want to do, like test and deploy your application.
-This file follows the YAML format and has its own special syntax.
+To use GitLab CI/CD, you start with a `.gitlab-ci.yml` file at the root of your project
+which contains the configuration for your CI/CD pipeline. This file follows the YAML format
+and has its own special syntax.
You can name this file anything you want, but `.gitlab-ci.yml` is the most common name.
-Use the pipeline editor to edit the `.gitlab-ci.yml` file and test the syntax before you commit changes.
+
+In the `.gitlab-ci.yml` file, you can define:
+
+- The tasks you want to complete, for example test and deploy your application.
+- Other configuration files and templates you want to include.
+- Dependencies and caches.
+- The commands you want to run in sequence and those you want to run in parallel.
+- The location to deploy your application to.
+- Whether you want to run the scripts automatically or trigger any of them manually.
**Get started:**
- [Create your first `.gitlab-ci.yml` file](quick_start/index.md).
- [View all the possible keywords that you can use in the `.gitlab-ci.yml` file](yaml/index.md).
+- Use the [pipeline editor](pipeline_editor/index.md) to edit or [visualize](pipeline_editor/index.md#visualize-ci-configuration)
+ your CI/CD configuration.
### Runners
Runners are the agents that run your jobs. These agents can run on physical machines or virtual instances.
In your `.gitlab-ci.yml` file, you can specify a container image you want to use when running the job.
-The runner loads the image and runs the job either locally or in the container.
+The runner loads the image, clones your project, and runs the job either locally or in the container.
If you use GitLab.com, SaaS runners on Linux, Windows, and macOS are already available for use. And you can register your own
runners on GitLab.com if you'd like.
@@ -68,16 +79,20 @@ Pipelines are made up of jobs and stages:
### CI/CD variables
CI/CD variables help you customize jobs by making values defined elsewhere accessible to jobs.
-They can be hard-coded in your `.gitlab-ci.yml` file, project settings, or dynamically generated
-[predefined variables](variables/predefined_variables.md).
+They can be hard-coded in your `.gitlab-ci.yml` file, project settings, or dynamically generated.
**Get started:**
- [Learn more about CI/CD variables](variables/index.md).
+- [Learn about dynamically generated predefined variables](variables/predefined_variables.md).
### CI/CD components
-A [CI/CD component](components/index.md) is a reusable single pipeline configuration unit. Use them to compose an entire pipeline configuration or a small part of a larger pipeline.
+A CI/CD component is a reusable single pipeline configuration unit. Use them to compose an entire pipeline configuration or a small part of a larger pipeline.
+
+**Get started:**
+
+- [Learn more about CI/CD components](components/index.md).
## Videos
diff --git a/doc/ci/jobs/ci_job_token.md b/doc/ci/jobs/ci_job_token.md
index a335794b209..cf8b4ccd092 100644
--- a/doc/ci/jobs/ci_job_token.md
+++ b/doc/ci/jobs/ci_job_token.md
@@ -22,6 +22,7 @@ You can use a GitLab CI/CD job token to authenticate with specific API endpoints
- [Get job token's job](../../api/jobs.md#get-job-tokens-job).
- [Pipeline triggers](../../api/pipeline_triggers.md), using the `token=` parameter
to [trigger a multi-project pipeline](../pipelines/downstream_pipelines.md#trigger-a-multi-project-pipeline-by-using-the-api).
+- [Update pipeline metadata](../../api/pipelines.md#update-pipeline-metadata)
- [Releases](../../api/releases/index.md) and [Release links](../../api/releases/links.md).
- [Terraform plan](../../user/infrastructure/index.md).
- [Deployments](../../api/deployments.md).
@@ -69,9 +70,7 @@ tries to steal tokens from other jobs.
You can control what projects a CI/CD job token can access to increase the
job token's security. A job token might give extra permissions that aren't necessary
-to access specific private resources. The job token scope only controls access
-to private projects. If an accessed project is public or internal, token scoping does
-not apply.
+to access specific private resources.
When enabled, and the job token is being used to access a different project:
@@ -80,7 +79,7 @@ When enabled, and the job token is being used to access a different project:
- The accessed project must have the project attempting to access it [added to the allowlist](#add-a-project-to-the-job-token-scope-allowlist).
If a job token is leaked, it could potentially be used to access private data
-to the job token's user. By limiting the job token access scope, private data cannot
+accessible to the job token's user. By limiting the job token access scope, project data cannot
be accessed unless projects are explicitly authorized.
There is a proposal to add more strategic control of the access permissions,
@@ -100,8 +99,7 @@ their `CI_JOB_TOKEN`.
For example, project `A` can add project `B` to the allowlist. CI/CD jobs
in project `B` (the "allowed project") can now use their CI/CD job token to
-authenticate API calls to access project `A`. If project `A` is public or internal,
-the project can be accessed by project `B` without adding it to the allowlist.
+authenticate API calls to access project `A`.
By default, the allowlist of any project only includes itself.
@@ -109,6 +107,32 @@ It is a security risk to disable this feature, so project maintainers or owners
keep this setting enabled at all times. Add projects to the allowlist only when cross-project
access is needed.
+### Limit job token scope for public or internal projects
+
+Projects can use a job token to authenticate with public or internal projects for
+the following actions without being added to the allowlist:
+
+- Fetch artifacts
+- Access the container registry
+- Access the package registry
+- Access releases, deployments, and environments
+
+To limit access to these actions to only the projects on the allowlist, set the visibility
+of each feature to be only accessible to project members:
+
+Prerequisite:
+
+- You must have the Maintainer role for the project.
+
+1. On the left sidebar, select **Search or go to** and find your project.
+1. On the left sidebar, select **Settings > General**.
+1. Expand **Visibility, project features, permissions**.
+1. Set the visibility to **Only project members** for the features you want to restrict access to.
+ - The ability to fetch artifacts is controlled by the CI/CD visibility setting.
+1. Select **Save changes**.
+
+Triggering pipelines and fetching Terraform plans are not affected by feature visibility.
+
### Disable the job token scope allowlist
> **Allow access to this project with a CI_JOB_TOKEN** setting [renamed to **Limit access _to_ this project**](https://gitlab.com/gitlab-org/gitlab/-/issues/411406) in GitLab 16.3.
@@ -180,9 +204,7 @@ limited only by the user's access permissions.
For example, when the setting is enabled, jobs in a pipeline in project `A` have
a `CI_JOB_TOKEN` scope limited to project `A`. If the job needs to use the token
-to make an API request to a private project `B`, then `B` must be added to the allowlist for `A`.
-If project `B` is public or internal, you do not need to add
-`B` to the allowlist to grant access.
+to make an API request to project `B`, then `B` must be added to the allowlist for `A`.
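+
+For example, a job in project `A` might use its job token to call the releases API of
+project `B`. This is a sketch, and the project ID is a placeholder:
+
+```shell
+curl --header "JOB-TOKEN: $CI_JOB_TOKEN" \
+  "https://gitlab.example.com/api/v4/projects/<project-B-id>/releases"
+```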
### Configure the job token scope
diff --git a/doc/ci/jobs/index.md b/doc/ci/jobs/index.md
index 90a64ea7569..b5fc32e69dc 100644
--- a/doc/ci/jobs/index.md
+++ b/doc/ci/jobs/index.md
@@ -297,7 +297,8 @@ For example, if you start rolling out new code and:
## Expand and collapse job log sections
-> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/14664) in GitLab 12.0.
+> - [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/14664) in GitLab 12.0.
+> - Support for multi-line command output in Bash shells [introduced](https://gitlab.com/gitlab-org/gitlab-runner/-/merge_requests/3486) in GitLab 16.5 behind the [GitLab Runner feature flag](https://docs.gitlab.com/runner/configuration/feature-flags.html) `FF_SCRIPT_SECTIONS`.
Job logs are divided into sections that can be collapsed or expanded. Each section displays
the duration.
@@ -397,3 +398,67 @@ The behavior of deployment jobs can be controlled with
[deployment safety](../environments/deployment_safety.md) settings like
[preventing outdated deployment jobs](../environments/deployment_safety.md#prevent-outdated-deployment-jobs)
and [ensuring only one deployment job runs at a time](../environments/deployment_safety.md#ensure-only-one-deployment-job-runs-at-a-time).
+
+## Troubleshooting
+
+### Job log slow to update
+
+When you visit the job log page for a running job, there could be a delay of up to
+60 seconds before a log update. The default refresh time is 60 seconds, but after
+the log is viewed in the UI one time, log updates should occur every 3 seconds.
+
+### `get_sources` job section fails because of an HTTP/2 problem
+
+Sometimes, jobs fail with the following cURL error:
+
+```plaintext
+++ git -c 'http.userAgent=gitlab-runner <version>' fetch origin +refs/pipelines/<id>:refs/pipelines/<id> ...
+error: RPC failed; curl 16 HTTP/2 send again with decreased length
+fatal: ...
+```
+
+You can work around this problem by configuring Git and `libcurl` to
+[use HTTP/1.1](https://git-scm.com/docs/git-config#Documentation/git-config.txt-httpversion).
+The configuration can be added to:
+
+- A job's [`pre_get_sources_script`](../yaml/index.md#hookspre_get_sources_script):
+
+ ```yaml
+ job_name:
+ hooks:
+ pre_get_sources_script:
+ - git config --global http.version "HTTP/1.1"
+ ```
+
+- The [runner's `config.toml`](https://docs.gitlab.com/runner/configuration/advanced-configuration.html)
+ with [Git configuration environment variables](https://git-scm.com/docs/git-config#ENVIRONMENT):
+
+ ```toml
+ [[runners]]
+ ...
+ environment = [
+ "GIT_CONFIG_COUNT=1",
+ "GIT_CONFIG_KEY_0=http.version",
+ "GIT_CONFIG_VALUE_0=HTTP/1.1"
+ ]
+ ```
+
+### Job using `resource_group` gets stuck **(FREE SELF)**
+
+If a job using [`resource_group`](../yaml/index.md#resource_group) gets stuck, a
+GitLab administrator can try running the following commands from the [rails console](../../administration/operations/rails_console.md#starting-a-rails-console-session):
+
+```ruby
+# find resource group by name
+resource_group = Project.find_by_full_path('...').resource_groups.find_by(key: 'the-group-name')
+busy_resources = resource_group.resources.where('build_id IS NOT NULL')
+
+# identify which builds are occupying the resource
+# (there is usually only one)
+busy_resources.pluck(:build_id)
+
+# before freeing the resource, check why this build is still holding it.
+# Is it stuck? Was it forcefully dropped by the system?
+
+# free up the busy resources
+busy_resources.update_all(build_id: nil)
+```
diff --git a/doc/ci/jobs/job_control.md b/doc/ci/jobs/job_control.md
index 1065ee93389..0c8e4fc593f 100644
--- a/doc/ci/jobs/job_control.md
+++ b/doc/ci/jobs/job_control.md
@@ -174,8 +174,7 @@ multiple pipelines. You don't have to explicitly configure rules for multiple ty
of pipeline to trigger them accidentally.
Some configurations that have the potential to cause duplicate pipelines cause a
-[pipeline warning](../troubleshooting.md#pipeline-warnings) to be displayed.
-[Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/219431) in GitLab 13.3.
+[pipeline warning](../debugging.md#pipeline-warnings) to be displayed.
For example:
@@ -209,7 +208,7 @@ To avoid duplicate pipelines, you can:
You can also avoid duplicate pipelines by changing the job rules to avoid either push (branch)
pipelines or merge request pipelines. However, if you use a `- when: always` rule without
-`workflow: rules`, GitLab still displays a [pipeline warning](../troubleshooting.md#pipeline-warnings).
+`workflow: rules`, GitLab still displays a [pipeline warning](../debugging.md#pipeline-warnings).
For example, the following does not trigger double pipelines, but is not recommended
without `workflow: rules`:
@@ -933,7 +932,7 @@ types the variables can control for:
| `CI_COMMIT_BRANCH` | Yes | | | Yes |
| `CI_COMMIT_TAG` | | Yes | | Yes, if the scheduled pipeline is configured to run on a tag. |
| `CI_PIPELINE_SOURCE = push` | Yes | Yes | | |
-| `CI_PIPELINE_SOURCE = scheduled` | | | | Yes |
+| `CI_PIPELINE_SOURCE = schedule` | | | | Yes |
| `CI_PIPELINE_SOURCE = merge_request_event` | | | Yes | |
| `CI_MERGE_REQUEST_IID` | | | Yes | |
@@ -1194,3 +1193,20 @@ To run protected manual jobs:
- Add the administrator as a direct member of the private project (any role)
- [Impersonate a user](../../administration/admin_area.md#user-impersonation) who is a
direct member of the project.
+
+### A CI/CD job does not use newer configuration when run again
+
+The configuration for a pipeline is only fetched when the pipeline is created.
+When you rerun a job, it uses the same configuration each time. If you update configuration files,
+including separate files added with [`include`](../yaml/index.md#include), you must
+start a new pipeline to use the new configuration.
+
+### `Job may allow multiple pipelines to run for a single action` warning
+
+When you use [`rules`](../yaml/index.md#rules) with a `when` clause without an `if`
+clause, multiple pipelines may run. Usually this occurs when you push a commit to
+a branch that has an open merge request associated with it.
+
+To [prevent duplicate pipelines](#avoid-duplicate-pipelines), use
+[`workflow: rules`](../yaml/index.md#workflow) or rewrite your rules to control
+which pipelines can run.
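+
+For example, a `workflow: rules` configuration that runs merge request pipelines and
+branch pipelines, but not both for the same change, might look like this sketch:
+
+```yaml
+workflow:
+  rules:
+    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
+    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
+      when: never
+    - if: $CI_COMMIT_BRANCH
+```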
diff --git a/doc/ci/migration/bamboo.md b/doc/ci/migration/bamboo.md
new file mode 100644
index 00000000000..93091d2a30a
--- /dev/null
+++ b/doc/ci/migration/bamboo.md
@@ -0,0 +1,780 @@
+---
+stage: Verify
+group: Pipeline Authoring
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+type: index, howto
+---
+
+# Migrating from Bamboo **(FREE ALL)**
+
+This migration guide looks at how you can migrate from Atlassian Bamboo to GitLab CI/CD.
+The focus is on [Bamboo Specs YAML](https://docs.atlassian.com/bamboo-specs-docs/8.1.12/specs.html?yaml)
+exported from the Bamboo UI or stored in Spec repositories.
+
+## GitLab CI/CD Primer
+
+If you are new to GitLab CI/CD, use the [Getting started guide](../index.md) to learn
+the basic concepts and how to create your first [`.gitlab-ci.yml` file](../quick_start/index.md).
+If you already have some experience using GitLab CI/CD, you can review [keywords reference documentation](../yaml/index.md)
+to see the full list of available keywords.
+
+You can also take a look at [Auto DevOps](../../topics/autodevops/index.md), which automatically
+builds, tests, and deploys your application using a collection of
+pre-configured features and integrations.
+
+## Key similarities and differences
+
+### Offerings
+
+Atlassian offers Bamboo in its Cloud (SaaS) or Data center (Self-managed) options.
+A third Server option is scheduled for [EOL on February 15, 2024](https://about.gitlab.com/blog/2023/09/26/atlassian-server-ending-move-to-a-single-devsecops-platform/).
+
+These options are similar to GitLab [SaaS](../../subscriptions/gitlab_com/index.md)
+and [Self-Managed](../../subscriptions/self_managed/index.md). GitLab also offers
+[GitLab Dedicated](../../subscriptions/gitlab_dedicated/index.md), a fully isolated
+single-tenant SaaS service.
+
+### Agents vs Runners
+
+Bamboo uses [agents](https://confluence.atlassian.com/confeval/development-tools-evaluator-resources/bamboo/bamboo-remote-agents-and-local-agents)
+to run builds and deployments. Agents can be local agents running on the Bamboo server or
+remote agents running external to the server.
+
+GitLab uses a similar concept to agents called [runners](https://docs.gitlab.com/runner/)
+which use [executors](https://docs.gitlab.com/runner/executors/) to run builds.
+
+Examples of executors are shell, Docker, or Kubernetes. You can choose to use GitLab [SaaS runners](../runners/index.md)
+or deploy your own [self-managed runners](https://docs.gitlab.com/runner/install/index.md).
+
+### Workflow
+
+[Bamboo workflow](https://confluence.atlassian.com/bamboo/understanding-the-bamboo-ci-server-289277285.html)
+is organized into projects. Projects are used to organize Plans, along with variables,
+shared credentials, and permissions needed by multiple plans. A plan groups jobs into
+stages and links to code repositories where applications to be built are hosted.
+Repositories could be in Bitbucket, GitLab, or other services.
+
+A job is a series of tasks that are executed sequentially on the same Bamboo agent.
+CI and deployments are treated separately in Bamboo. [Deployment project workflow](https://confluence.atlassian.com/bamboo/deployment-projects-workflow-362971857.html)
+is different from the build plans workflow. [Learn more](https://confluence.atlassian.com/bamboo/understanding-the-bamboo-ci-server-289277285.html)
+about Bamboo workflow.
+
+GitLab CI/CD uses a similar workflow. Jobs are organized into [stages](../yaml/index.md#stage),
+and projects have individual `.gitlab-ci.yml` configuration files or include existing templates.
+
+### Templating and Configuration as Code
+
+#### Bamboo Specs
+
+Bamboo plans can be configured in either the Web UI or with Bamboo Specs.
+[Bamboo Specs](https://confluence.atlassian.com/bamboo/bamboo-specs-894743906.html)
+is configuration as code, which can be written in Java or YAML. [YAML Specs](https://docs.atlassian.com/bamboo-specs-docs/8.1.12/specs.html?yaml)
+is the easiest to use but does not cover every Bamboo feature. [Java Specs](https://docs.atlassian.com/bamboo-specs-docs/8.1.12/specs.html?java)
+has complete Bamboo feature coverage and can be written in any JVM language like Groovy, Scala, or Kotlin.
+If you configured your plans using the Web UI, you can [export your Bamboo configuration](https://confluence.atlassian.com/bamboo/exporting-existing-plan-configuration-to-bamboo-yaml-specs-1018270696.html)
+into Bamboo Specs.
+
+Bamboo Specs can also be [repository-stored](https://confluence.atlassian.com/bamboo/enabling-repository-stored-bamboo-specs-938641941.html).
+
+#### `.gitlab-ci.yml` configuration file
+
+GitLab, by default, uses a [`.gitlab-ci.yml` file](../yaml/index.md) for CI/CD configuration.
+Alternatively, [Auto DevOps](../../topics/autodevops/index.md) can automatically build,
+test, and deploy your application without a manually configured `.gitlab-ci.yml` file.
+
+GitLab CI/CD configuration can be organized into templates that are reusable across projects.
+GitLab also provides pre-built [templates](../examples/index.md#cicd-templates)
+that help you get started quickly and avoid re-inventing the wheel.
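+
+For example, a project might include a shared template that is stored in another project.
+This is a sketch, and the project path and file name are hypothetical:
+
+```yaml
+include:
+  - project: 'my-group/pipeline-templates'
+    ref: main
+    file: '/templates/build.yml'
+```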
+
+### Configuration
+
+#### Bamboo YAML Spec syntax
+
+This Bamboo Spec was exported from a Bamboo Server instance, which creates quite verbose output:
+
+```yaml
+version: 2
+plan:
+ project-key: AB
+ key: TP
+ name: test plan
+stages:
+- Default Stage:
+ manual: false
+ final: false
+ jobs:
+ - Default Job
+Default Job:
+ key: JOB1
+ tasks:
+ - checkout:
+ force-clean-build: false
+ description: Checkout Default Repository
+ - script:
+ interpreter: SHELL
+ scripts:
+ - |-
+ ruby -v # Print out ruby version for debugging
+ bundle config set --local deployment true # Install dependencies into ./vendor/ruby
+ bundle install -j $(nproc)
+ rubocop
+ rspec spec
+ description: run bundler
+ artifact-subscriptions: []
+repositories:
+- Demo Project:
+ scope: global
+triggers:
+- polling:
+ period: '180'
+branches:
+ create: manually
+ delete: never
+ link-to-jira: true
+notifications: []
+labels: []
+dependencies:
+ require-all-stages-passing: false
+ enabled-for-branches: true
+ block-strategy: none
+ plans: []
+other:
+ concurrent-build-plugin: system-default
+
+---
+
+version: 2
+plan:
+ key: AB-TP
+plan-permissions:
+- users:
+ - root
+ permissions:
+ - view
+ - edit
+ - build
+ - clone
+ - admin
+ - view-configuration
+- roles:
+ - logged-in
+ - anonymous
+ permissions:
+ - view
+...
+
+```
+
+A GitLab CI/CD `.gitlab-ci.yml` configuration with similar behavior would be:
+
+```yaml
+default:
+ image: ruby:latest
+
+stages:
+- default-stage
+
+job1:
+ stage: default-stage
+ script:
+ - ruby -v # Print out ruby version for debugging
+ - bundle config set --local deployment true # Install dependencies into ./vendor/ruby
+ - bundle install -j $(nproc)
+ - rubocop
+ - rspec spec
+```
+
+### Common Configurations
+
+This section reviews some common Bamboo configurations and the GitLab CI/CD equivalents.
+
+#### Workflow
+
+GitLab CI/CD and Bamboo are structured differently. With GitLab, CI/CD can be enabled
+in a project in a number of ways: by adding a `.gitlab-ci.yml` file to the project,
+through a compliance pipeline in the group the project belongs to, or by enabling Auto DevOps.
+Pipelines are then triggered automatically based on rules, or based on context when Auto DevOps is used.
+
+In Bamboo, [repositories must be added](https://confluence.atlassian.com/bamboo0903/linking-to-source-code-repositories-1236445195.html)
+to a Bamboo project, authentication must be provided, and [triggers](https://confluence.atlassian.com/bamboo0903/triggering-builds-1236445226.html)
+must be set. Repositories added to projects are available to all plans in the project.
+Plans used for testing and building applications are called Build plans.
+
+#### Build Plans
+
+Build Plans in Bamboo are composed of Stages that run sequentially to build an application and generate artifacts where relevant. A Build Plan requires
+a default repository attached to it, or inherits linked repositories from its parent project.
+Variables, triggers, and relationships between different plans can be defined at the plan level.
+
+An example of a Bamboo build plan:
+
+```yaml
+version: 2
+plan:
+ project-key: SAMPLE
+ name: Build Ruby App
+ key: BUILD-APP
+
+stages:
+ - Test App:
+ jobs:
+ - Test Application
+ - Perform Security checks
+ - Build App:
+ jobs:
+ - Build Application
+
+Test Application:
+ tasks:
+ - script:
+ - # Run tests
+
+Perform Security checks:
+ tasks:
+ - script:
+ - # Run Security Checks
+
+Build Application:
+ tasks:
+ - script:
+ - # Run builds
+```
+
+In this example:
+
+- Plan Specs include a YAML Spec version. Version 2 is the latest.
+- The `project-key` links the plan to its parent project. The key is specified when creating the project.
+- Plan `key` uniquely identifies the plan.
+
+In GitLab CI/CD, a Bamboo Build plan is similar to the `.gitlab-ci.yml` file in a project,
+which can include CI/CD scripts from other projects or templates.
+
+The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:
+
+```yaml
+default:
+ image: alpine:latest
+
+stages:
+ - test
+ - build
+
+test-application:
+ stage: test
+ script:
+ - # Run tests
+
+security-checks:
+ stage: test
+ script:
+ - # Run Security Checks
+
+build-application:
+ stage: build
+ script:
+ - # Run builds
+```
+
+#### Container Images
+
+Builds and deployments are run by default on the Bamboo agent's native operating system,
+but can be configured to run in containers. To make jobs run in a container, Bamboo uses
+the `docker` keyword at the plan or job level.
+
+For example, in a Bamboo build plan:
+
+```yaml
+version: 2
+plan:
+ project-key: SAMPLE
+ name: Build Ruby App
+ key: BUILD-APP
+
+docker: alpine:latest
+
+stages:
+ - Build App:
+ jobs:
+ - Build Application
+
+Build Application:
+ tasks:
+ - script:
+ - # Run builds
+ docker:
+ image: alpine:edge
+```
+
+In GitLab CI/CD, you only need the `image` keyword.
+
+The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:
+
+```yaml
+default:
+ image: alpine:latest
+
+stages:
+ - build
+
+build-application:
+ stage: build
+ script:
+ - # Run builds
+ image:
+ name: alpine:edge
+```
+
+#### Variables
+
+Bamboo has the following types of [variables](https://confluence.atlassian.com/bamboo/bamboo-variables-289277087.html)
+based on scope:
+
+- Build-specific variables which are evaluated at build time. For example `${bamboo.planKey}`.
+- System variables inherited from the Bamboo instance or system environment.
+- Global variables defined at the instance level and accessible to every plan.
+- Project variables defined at the project level and accessible by plans in the same project.
+- Plan variables specific to a plan.
+
+You can access variables in Bamboo using the format `${system.variableName}` for System variables
+and `${bamboo.variableName}` for other types of variables. When you use a variable in a script task,
+the full stops are converted to underscores, so `${bamboo.variableName}` becomes `$bamboo_variableName`.
+
+In GitLab, [CI/CD variables](../variables/index.md) can be defined at these levels:
+
+- Instance.
+- Group.
+- Project.
+- At the global level in the CI/CD configuration.
+- At the job level in the CI/CD configuration.
+
+Like Bamboo's System and Global variables, GitLab has [predefined CI/CD variables](../variables/predefined_variables.md)
+that are available to every job.
+
+Defining variables in CI/CD scripts is similar in both Bamboo and GitLab.
+
+For example, in a Bamboo build plan:
+
+```yaml
+version: 2
+# ...
+variables:
+ username: admin
+ releaseType: milestone
+
+Default job:
+ tasks:
+ - script: echo "$bamboo_username is the DRI for $bamboo_releaseType"
+```
+
+The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:
+
+```yaml
+variables:
+ GLOBAL_VAR: "A global variable"
+
+job1:
+ variables:
+ JOB_VAR: "A job variable"
+ script:
+ - echo "Variables are '$GLOBAL_VAR' and '$JOB_VAR'"
+```
+
+In GitLab CI/CD, variables are accessed like regular Shell script variables. For example, `$VARIABLE_NAME`.
+
+#### Jobs and Tasks
+
+In both GitLab and Bamboo, jobs in the same stage run in parallel, except where there is a dependency
+that needs to be met before a job runs.
+
+The number of jobs that can run in Bamboo depends on the availability of Bamboo agents
+and the Bamboo license size. With [GitLab CI/CD](../jobs/index.md), the number of parallel
+jobs depends on the number of runners integrated with the GitLab instance and the
+concurrency set in the runners.
+
+In Bamboo, Jobs are composed of [Tasks](https://confluence.atlassian.com/bamboo/configuring-tasks-289277036.html),
+which can be:
+
+- A set of commands run as a [script](https://confluence.atlassian.com/bamboo/script-289277046.html).
+- Predefined tasks like source code checkout, artifact download, and other tasks available in the
+ Atlassian [tasks marketplace](https://marketplace.atlassian.com/addons/app/bamboo).
+
+For example, in a Bamboo build plan:
+
+```yaml
+version: 2
+#...
+
+Default Job:
+ key: JOB1
+ tasks:
+ - checkout:
+ force-clean-build: false
+ description: Checkout Default Repository
+ - script:
+ interpreter: SHELL
+ scripts:
+ - |-
+ ruby -v
+ bundle config set --local deployment true
+ bundle install -j $(nproc)
+ description: run bundler
+other:
+ concurrent-build-plugin: system-default
+```
+
+The equivalent of Tasks in GitLab is the `script`, which specifies the commands
+for the runner to execute.
+
+For example, in a GitLab CI/CD `.gitlab-ci.yml` file:
+
+```yaml
+job1:
+ script: "bundle exec rspec"
+
+job2:
+ script:
+ - ruby -v
+ - bundle config set --local deployment true
+ - bundle install -j $(nproc)
+```
+
+With GitLab, you can use [CI/CD templates](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/lib/gitlab/ci/templates)
+and [CI/CD components](../components/index.md) to compose your pipelines without the need to write
+everything yourself.
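+
+For example, a pipeline might include a CI/CD component. This is a sketch, and the
+component path and version are placeholders:
+
+```yaml
+include:
+  - component: gitlab.example.com/my-group/my-components/build@1.0.0
+```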
+
+#### Conditionals
+
+In Bamboo, every task can have conditions that determine if a task runs.
+
+For example, in a Bamboo build plan:
+
+```yaml
+version: 2
+# ...
+tasks:
+ - script:
+ interpreter: SHELL
+ scripts:
+ - echo "Hello"
+ conditions:
+ - variable:
+ equals:
+ planRepository.branch: development
+```
+
+With GitLab, this can be done with the `rules` keyword to [control when jobs run](../jobs/job_control.md) in GitLab CI/CD.
+
+For example, in a GitLab CI/CD `.gitlab-ci.yml` file:
+
+```yaml
+job:
+ script: echo "Hello, Rules!"
+ rules:
+ - if: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == "development"
+```
+
+#### Triggers
+
+Bamboo has a number of options for [triggering builds](https://confluence.atlassian.com/bamboo/triggering-builds-289276897.html),
+which can be based on code changes, a schedule, the outcomes of other plans, or on demand.
+A plan can be configured to periodically poll a project for new changes,
+as shown below.
+
+For example, in a Bamboo build plan:
+
+```yaml
+version: 2
+#...
+triggers:
+- polling:
+ period: '180'
+```
+
+GitLab CI/CD pipelines can be triggered based on code change, on schedule, or triggered by
+other jobs or API calls. GitLab CI/CD pipelines do not need to use polling, but can be triggered
+on schedule as well.
+
+You can configure when pipelines themselves run with the [`workflow` keyword](../yaml/workflow.md),
+and `rules`.
+
+For example, in a GitLab CI/CD `.gitlab-ci.yml` file:
+
+```yaml
+workflow:
+  rules:
+    - changes:
+        - .gitlab/**/**.md
+      when: never
+    # run the pipeline for all other changes
+    - when: always
+```
+
+#### Artifacts
+
+You can define Job artifacts using the `artifacts` keyword in both GitLab and Bamboo.
+
+For example, in a Bamboo build plan:
+
+```yaml
+version: 2
+# ...
+ artifacts:
+ -
+ name: Test Reports
+ location: target/reports
+ pattern: '*.xml'
+ required: false
+ shared: false
+ -
+ name: Special Reports
+ location: target/reports
+ pattern: 'special/*.xml'
+ shared: true
+```
+
+In this example, artifacts are defined with a name, location, pattern, and the optional
+ability to share the artifacts with other jobs or plans. You can also define jobs that
+subscribe to the artifact.
+
+`artifact-subscriptions` is used to access artifacts from another job in the same plan,
+for example:
+
+```yaml
+Test app:
+ artifact-subscriptions:
+ -
+ artifact: Test Reports
+ destination: deploy
+```
+
+`artifact-download` is used to access artifacts from jobs in a different plan, for example:
+
+```yaml
+version: 2
+# ...
+ tasks:
+ - artifact-download:
+ source-plan: PROJECTKEY-PLANKEY
+```
+
+You need to provide the key of the plan you are downloading artifacts from in the `source-plan` keyword.
+
+In GitLab, all [artifacts](../jobs/job_artifacts.md) from completed jobs in earlier
+stages are downloaded by default.
+
+For example, in a GitLab CI/CD `.gitlab-ci.yml` file:
+
+```yaml
+stages:
+ - build
+
+pdf:
+ stage: build
+ script: #generate XML reports
+ artifacts:
+ name: "test-report-files"
+ untracked: true
+ paths:
+ - target/reports
+```
+
+In this example:
+
+- The name of the artifact is specified explicitly, but you can make it dynamic by using a CI/CD variable.
+- The `untracked` keyword sets the artifact to also include Git untracked files,
+  along with those specified explicitly with `paths`.
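+
+If a later job should fetch artifacts from specific jobs only, similar to Bamboo's
+`artifact-subscriptions`, you can list those jobs with the `dependencies` keyword.
+This sketch assumes a later stage exists for the `use-reports` job:
+
+```yaml
+use-reports:
+  stage: test
+  dependencies:
+    - pdf  # fetch artifacts only from the pdf job
+  script:
+    - ls target/reports
+```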
+
+#### Caching
+
+In Bamboo, [Git caches](https://confluence.atlassian.com/bamkb/how-stored-git-caches-speed-up-builds-690848923.html)
+can be used to speed up builds. Git caches are configured in Bamboo administration settings
+and are stored either on the Bamboo server or remote agents.
+
+GitLab supports both Git Caches and Job cache. [Caches](../caching/index.md) are defined per job
+using the `cache` keyword.
+
+For example, in a GitLab CI/CD `.gitlab-ci.yml` file:
+
+```yaml
+test-job:
+ stage: build
+ cache:
+ - key:
+ files:
+ - Gemfile.lock
+ paths:
+ - vendor/ruby
+ - key:
+ files:
+ - yarn.lock
+ paths:
+ - .yarn-cache/
+ script:
+ - bundle config set --local path 'vendor/ruby'
+ - bundle install
+ - yarn install --cache-folder .yarn-cache
+ - echo Run tests...
+```
+
+#### Deployment Projects
+
+Bamboo has [Deployments project](https://confluence.atlassian.com/bamboo/deployment-projects-338363438.html),
+which link to Build plans to track, fetch, and deploy artifacts to [deployment environments](https://confluence.atlassian.com/bamboo0903/creating-a-deployment-environment-1236445634.html).
+
+When creating a project you link it to a build plan, specify the deployment environment
+and the tasks to perform the deployments. A [deployment task](https://confluence.atlassian.com/bamboo0903/tasks-for-deployment-environments-1236445662.html)
+can either be a script or a Bamboo task from the Atlassian marketplace.
+
+For example in a Deployment project Spec:
+
+```yaml
+version: 2
+
+deployment:
+ name: Deploy ruby app
+ source-plan: build-app
+
+release-naming: release-1.0
+
+environments:
+ - Production
+
+Production:
+ tasks:
+ - # scripts to deploy app to production
+ - ./.ci/deploy_prod.sh
+```
+
+In GitLab CI/CD, you can create a [deployment job](../jobs/index.md#deployment-jobs)
+that deploys to an [environment](../environments/index.md) or creates a [release](../../user/project/releases/index.md).
+
+For example, in a GitLab CI/CD `.gitlab-ci.yml` file:
+
+```yaml
+deploy-to-production:
+ stage: deploy
+ script:
+ - # Run Deployment script
+ - ./.ci/deploy_prod.sh
+ environment:
+ name: production
+```
+
+To create a release instead, use the [`release`](../yaml/index.md#release)
+keyword with the [release-cli](https://gitlab.com/gitlab-org/release-cli/-/tree/master/docs)
+tool to create releases for [Git tags](../../user/project/repository/tags/index.md).
+
+For example, in a GitLab CI/CD `.gitlab-ci.yml` file:
+
+```yaml
+release_job:
+ stage: release
+ image: registry.gitlab.com/gitlab-org/release-cli:latest
+ rules:
+ - if: $CI_COMMIT_TAG # Run this job when a tag is created manually
+ script:
+ - echo "Building release version"
+ release:
+ tag_name: $CI_COMMIT_TAG
+ name: 'Release $CI_COMMIT_TAG'
+ description: 'Release created using the release-cli.'
+```
+
+### Security scanning features
+
+Bamboo relies on third-party tasks provided in the Atlassian Marketplace to run security scans.
+GitLab provides [security scanners](../../user/application_security/index.md) out-of-the-box to detect
+vulnerabilities in all parts of the SDLC. You can add these scanners in GitLab by using templates. For example, to add
+SAST scanning to your pipeline, add the following to your `.gitlab-ci.yml` file:
+
+```yaml
+include:
+ - template: Security/SAST.gitlab-ci.yml
+```
+
+You can customize the behavior of security scanners by using CI/CD variables, for example
+with the [SAST scanners](../../user/application_security/sast/index.md#available-cicd-variables).
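+
+For example, a sketch that overrides a SAST analyzer setting with a CI/CD variable.
+The excluded paths are placeholders:
+
+```yaml
+include:
+  - template: Security/SAST.gitlab-ci.yml
+
+variables:
+  SAST_EXCLUDED_PATHS: "spec, test, tests, tmp"
+```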
+
+### Secrets Management
+
+Privileged information, often referred to as "secrets", is sensitive information
+or credentials you need in your CI/CD workflow. You might use secrets to unlock protected resources
+or sensitive information in tools, applications, containers, and cloud-native environments.
+
+Secrets management in Bamboo is usually handled using [Shared credentials](https://confluence.atlassian.com/bamboo/shared-credentials-424313357.html),
+or with third-party applications from the Atlassian Marketplace.
+
+For secrets management in GitLab, you can use one of the supported integrations
+for an external service. These services securely store secrets outside of your GitLab project,
+though you must have a subscription for the service:
+
+- [HashiCorp Vault](../secrets/id_token_authentication.md#automatic-id-token-authentication-with-hashicorp-vault)
+- [Azure Key Vault](../secrets/azure_key_vault.md).
+
+GitLab also supports [OIDC authentication](../secrets/id_token_authentication.md)
+for other third party services that support OIDC.
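+
+For example, a job can use an ID token to fetch a secret from HashiCorp Vault. This is
+a minimal sketch that assumes a Vault server at `vault.example.com` and a configured
+`VAULT_SERVER_URL` CI/CD variable; the secret path is a placeholder:
+
+```yaml
+read-secret-job:
+  id_tokens:
+    VAULT_ID_TOKEN:
+      aud: https://vault.example.com
+  secrets:
+    DATABASE_PASSWORD:
+      vault: production/db/password@ops  # secret `production/db`, field `password`, engine mount `ops`
+      token: $VAULT_ID_TOKEN
+  script:
+    - echo "The secret value is stored in the file at $DATABASE_PASSWORD"
+```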
+
+Additionally, you can make credentials available to jobs by storing them in CI/CD variables, though secrets
+stored in plain text are susceptible to accidental exposure, [the same as in Bamboo](https://confluence.atlassian.com/bamboo/bamboo-specs-encryption-970268127.html).
+You should always store sensitive information in [masked](../variables/index.md#mask-a-cicd-variable)
+and [protected](../variables/index.md#protect-a-cicd-variable) variables, which mitigates
+some of the risk.
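+
+For example, a job can reference a variable that is defined as masked and protected in the
+project settings instead of in the configuration file (the variable name and script are illustrative):
+
+```yaml
+deploy-job:
+  stage: deploy
+  script:
+    # $DEPLOY_TOKEN is defined as a masked, protected CI/CD variable in the
+    # project settings, not in this file.
+    - ./.ci/deploy_prod.sh --token "$DEPLOY_TOKEN"
+  environment:
+    name: production
+```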
+
+Also, never store secrets as variables in your `.gitlab-ci.yml` file, which is visible to all
+users with access to the project. Storing sensitive information in variables should
+only be done in [the project, group, or instance settings](../variables/index.md#define-a-cicd-variable-in-the-ui).
+
+Review the [security guidelines](../variables/index.md#cicd-variable-security) to improve
+the safety of your CI/CD variables.
+
+### Migration Plan
+
+The following list of recommended steps was created after observing organizations
+that were able to quickly complete this migration.
+
+#### Create a Migration Plan
+
+Before starting a migration, you should create a [migration plan](plan_a_migration.md)
+to prepare for the migration. For a migration from Bamboo, ask yourself
+the following questions:
+
+- What Bamboo Tasks are used by jobs in Bamboo today?
+  - Do you know what these Tasks do exactly?
+  - Do any Tasks wrap a common build tool? For example, Maven, Gradle, or NPM?
+- What is installed on the Bamboo agents?
+- Are there any shared libraries in use?
+- How are you authenticating from Bamboo? Are you using SSH keys, API tokens, or other secrets?
+- Are there other projects that you need to access from your pipeline?
+- Are there credentials in Bamboo to access outside services? For example, Ansible Tower,
+  Artifactory, or other cloud providers or deployment targets?
+
+#### Prerequisites
+
+Before doing any migration work, you should first:
+
+1. Get familiar with GitLab.
+   - Read about the [key GitLab CI/CD features](../../ci/index.md).
+   - Follow tutorials to create [your first GitLab pipeline](../quick_start/index.md)
+     and [more complex pipelines](../quick_start/tutorial.md) that build, test, and deploy
+     a static site.
+   - Review the [`.gitlab-ci.yml` keyword reference](../yaml/index.md).
+1. Set up and configure GitLab.
+1. Test your GitLab instance.
+   - Ensure [runners](../runners/index.md) are available, either by using shared GitLab.com runners or installing new runners.
+
+#### Migration Steps
+
+1. Migrate projects from your SCM solution to GitLab.
+   - (Recommended) You can use the available [importers](../../user/project/import/index.md)
+     to automate mass imports from external SCM providers.
+   - You can [import repositories by URL](../../user/project/import/repo_by_url.md).
+1. Create a `.gitlab-ci.yml` file in each project.
+1. Export your Bamboo projects and plans as YAML Specs.
+1. Migrate the Bamboo YAML Spec configuration to GitLab CI/CD jobs and configure them to show results
+   directly in merge requests, as shown in the sketch after this list.
+1. Migrate deployment jobs by using [cloud deployment templates](../cloud_deployment/index.md),
+   [environments](../environments/index.md), and the [GitLab agent for Kubernetes](../../user/clusters/agent/index.md).
+1. Check if any CI/CD configuration can be reused across different projects, then create
+   and share CI/CD templates.
+1. Check the [pipeline efficiency documentation](../pipelines/pipeline_efficiency.md)
+   to learn how to make your GitLab CI/CD pipelines faster and more efficient.
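+
+For example, a Bamboo build plan that compiles and tests a Java application might translate to a
+`.gitlab-ci.yml` similar to the following sketch. The Maven commands and report paths are placeholders
+for whatever your Bamboo Tasks run:
+
+```yaml
+stages:
+  - build
+  - test
+
+build-app:
+  stage: build
+  script:
+    # Replace with the commands your Bamboo build Tasks run.
+    - ./mvnw package
+
+test-app:
+  stage: test
+  script:
+    - ./mvnw test
+  artifacts:
+    reports:
+      # JUnit test results are shown directly in merge requests.
+      junit: target/surefire-reports/TEST-*.xml
+```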
+
+If you have questions that are not answered here, the [GitLab community forum](https://forum.gitlab.com/)
+can be a great resource.
diff --git a/doc/ci/migration/github_actions.md b/doc/ci/migration/github_actions.md
index 86ce6c4846a..46d15f506ac 100644
--- a/doc/ci/migration/github_actions.md
+++ b/doc/ci/migration/github_actions.md
@@ -39,7 +39,7 @@ functionality.
### Configuration file
GitHub Actions can be configured with a [workflow YAML file](https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions#understanding-the-workflow-file).
-GitLab CI/CD uses a [`.gitlab-ci.yml` YAML file](../../ci/yaml/gitlab_ci_yaml.md) by default.
+GitLab CI/CD uses a [`.gitlab-ci.yml` YAML file](../../ci/index.md#the-gitlab-ciyml-file) by default.
For example, in a GitHub Actions `workflow` file:
@@ -88,7 +88,7 @@ from GitHub Actions to GitLab CI/CD.
generate automated CI/CD jobs that are triggered when certain event take place, for example
pushing a new commit. A GitHub Action workflow is a YAML file defined in the `.github/workflows`
directory located in the root of the repository. The GitLab equivalent is the
-[`.gitlab-ci.yml` configuration file](../../ci/yaml/gitlab_ci_yaml.md) which also resides
+[`.gitlab-ci.yml` configuration file](../../ci/index.md#the-gitlab-ciyml-file) which also resides
in the repository's root directory.
#### Jobs
diff --git a/doc/ci/migration/jenkins.md b/doc/ci/migration/jenkins.md
index e9f39e2d7af..4352b495e7b 100644
--- a/doc/ci/migration/jenkins.md
+++ b/doc/ci/migration/jenkins.md
@@ -38,7 +38,7 @@ functionality.
### Configuration file
-Jenkins can be configured with a [`Jenkinsfile` in the Groovy format](https://www.jenkins.io/doc/book/pipeline/jenkinsfile/). GitLab CI/CD uses a [`.gitlab-ci.yml` YAML file](../../ci/yaml/gitlab_ci_yaml.md) by default.
+Jenkins can be configured with a [`Jenkinsfile` in the Groovy format](https://www.jenkins.io/doc/book/pipeline/jenkinsfile/). GitLab CI/CD uses a [`.gitlab-ci.yml` YAML file](../../ci/index.md#the-gitlab-ciyml-file) by default.
Example of a `Jenkinsfile`:
@@ -101,7 +101,7 @@ from Jenkins to GitLab CI/CD.
[Jenkins pipelines](https://www.jenkins.io/doc/book/pipeline/) generate automated CI/CD jobs
that are triggered when certain event take place, such as a new commit being pushed.
-A Jenkins pipeline is defined in a `Jenkinsfile`. The GitLab equivalent is the [`.gitlab-ci.yml` configuration file](../../ci/yaml/gitlab_ci_yaml.md).
+A Jenkins pipeline is defined in a `Jenkinsfile`. The GitLab equivalent is the [`.gitlab-ci.yml` configuration file](../../ci/index.md#the-gitlab-ciyml-file).
Jenkins does not provide a place to store source code, so the `Jenkinsfile` must be stored
in a separate source control repository.
diff --git a/doc/ci/pipelines/merge_request_pipelines.md b/doc/ci/pipelines/merge_request_pipelines.md
index 37febfd90ee..fb1c19d8770 100644
--- a/doc/ci/pipelines/merge_request_pipelines.md
+++ b/doc/ci/pipelines/merge_request_pipelines.md
@@ -262,3 +262,25 @@ Some possible reasons for this error message:
If **Run pipeline** is available, but the project does not have merge request pipelines
enabled, do not use this option. You can push a commit or rebase the branch to trigger
new branch pipelines.
+
+### `Merge blocked: pipeline must succeed. Push a new commit that fixes the failure` message
+
+This message is shown if the merge request pipeline, [merged results pipeline](merged_results_pipelines.md),
+or [merge train pipeline](merge_trains.md) has failed or been canceled.
+This does not happen when a branch pipeline fails.
+
+If a merge request pipeline or merged results pipeline was canceled or failed, you can:
+
+- Re-run the entire pipeline by selecting **Run pipeline** in the pipeline tab in the merge request.
+- [Retry only the jobs that failed](index.md#view-pipelines). This is not necessary if you re-run the entire pipeline.
+- Push a new commit to fix the failure.
+
+If the merge train pipeline has failed, you can:
+
+- Check the failure and determine if you can use the [`/merge` quick action](../../user/project/quick_actions.md) to immediately add the merge request to the train again.
+- Re-run the entire pipeline by selecting **Run pipeline** in the pipeline tab in the merge request, then add the merge request to the train again.
+- Push a commit to fix the failure, then add the merge request to the train again.
+
+If the merge train pipeline was canceled before the merge request was merged, without a failure, you can:
+
+- Add it to the train again.
diff --git a/doc/ci/pipelines/merge_trains.md b/doc/ci/pipelines/merge_trains.md
index b7f081886a6..a54087262e7 100644
--- a/doc/ci/pipelines/merge_trains.md
+++ b/doc/ci/pipelines/merge_trains.md
@@ -90,6 +90,8 @@ are cancelled.
## Enable merge trains
+> `disable_merge_trains` feature flag [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/282477) in GitLab 16.5.
+
Prerequisites:
- You must have the Maintainer role.
@@ -97,17 +99,15 @@ Prerequisites:
- Your pipeline must be [configured to use merge request pipelines](merge_request_pipelines.md#prerequisites).
Otherwise your merge requests may become stuck in an unresolved state or your pipelines
might be dropped.
+- You must have [merged results pipelines enabled](merged_results_pipelines.md#enable-merged-results-pipelines).
To enable merge trains:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > Merge requests**.
1. In the **Merge method** section, verify that **Merge commit** is selected.
-1. In the **Merge options** section:
- - In GitLab 13.6 and later, select **Enable merged results pipelines** and **Enable merge trains**.
- - In GitLab 13.5 and earlier, select **Enable merge trains and pipelines for merged results**.
- Additionally, [a feature flag](#disable-merge-trains-in-gitlab-135-and-earlier)
- must be set correctly.
+1. In the **Merge options** section, ensure **Enable merged results pipelines** is enabled
+   and select **Enable merge trains**.
1. Select **Save changes**.
## Start a merge train
@@ -174,31 +174,6 @@ WARNING:
Merging immediately can use a lot of CI/CD resources. Use this option
only in critical situations.
-## Disable merge trains in GitLab 13.5 and earlier **(PREMIUM SELF)**
-
-In [GitLab 13.6 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/244831),
-you can [enable or disable merge trains in the project settings](#enable-merge-trains).
-
-In GitLab 13.5 and earlier, merge trains are automatically enabled when
-[merged results pipelines](merged_results_pipelines.md) are enabled.
-To use merged results pipelines but not merge trains, enable the `disable_merge_trains`
-[feature flag](../../user/feature_flags.md).
-
-[GitLab administrators with access to the GitLab Rails console](../../administration/feature_flags.md)
-can enable the feature flag to disable merge trains:
-
-```ruby
-Feature.enable(:disable_merge_trains)
-```
-
-After you enable this feature flag, GitLab cancels existing merge trains.
-
-To disable the feature flag, which enables merge trains again:
-
-```ruby
-Feature.disable(:disable_merge_trains)
-```
-
## Troubleshooting
### Merge request dropped from the merge train
diff --git a/doc/ci/pipelines/merged_results_pipelines.md b/doc/ci/pipelines/merged_results_pipelines.md
index e4f739e8242..afe7a450370 100644
--- a/doc/ci/pipelines/merged_results_pipelines.md
+++ b/doc/ci/pipelines/merged_results_pipelines.md
@@ -61,19 +61,6 @@ Upgrade to 13.8 or later, or make sure the `:merge_ref_auto_sync`
[feature flag is enabled](../../administration/feature_flags.md#check-if-a-feature-flag-is-enabled)
on your GitLab instance.
-### Pipelines fail intermittently with a `fatal: reference is not a tree:` error
-
-Merged results pipelines run on a merge ref for a merge request
-(`refs/merge-requests/<iid>/merge`), so the Git reference could be overwritten at an
-unexpected time.
-
-For example, when a source or target branch is advanced, the pipeline fails with
-the `fatal: reference is not a tree:` error, which indicates that the checkout-SHA
-is not found in the merge ref.
-
-This behavior was improved in GitLab 12.4 by introducing [persistent pipeline refs](../troubleshooting.md#fatal-reference-is-not-a-tree-error).
-Upgrade to GitLab 12.4 or later to resolve the problem.
-
### Successful merged results pipeline overrides a failed branch pipeline
A failed branch pipeline is sometimes ignored when the
diff --git a/doc/ci/pipelines/settings.md b/doc/ci/pipelines/settings.md
index 265fd674190..321eae183eb 100644
--- a/doc/ci/pipelines/settings.md
+++ b/doc/ci/pipelines/settings.md
@@ -204,6 +204,7 @@ You can define how long a job can run before it times out.
1. Expand **General pipelines**.
1. In the **Timeout** field, enter the number of minutes, or a human-readable value like `2 hours`.
Must be 10 minutes or more, and less than one month. Default is 60 minutes.
+ Pending jobs are dropped after 24 hours of inactivity.
Jobs that exceed the timeout are marked as failed.
@@ -213,3 +214,26 @@ You can override this value [for individual runners](../runners/configure_runner
You can use [pipeline badges](../../user/project/badges.md) to indicate the pipeline status and
test coverage of your projects. These badges are determined by the latest successful pipeline.
+
+## Disable GitLab CI/CD pipelines
+
+GitLab CI/CD pipelines are enabled by default on all new projects. If you use an external CI/CD server like
+Jenkins or Drone CI, you can disable GitLab CI/CD to avoid conflicts with the commit status API.
+
+You can disable GitLab CI/CD per project or [for all new projects on an instance](../../administration/cicd.md).
+
+When you disable GitLab CI/CD:
+
+- The **CI/CD** item in the left sidebar is removed.
+- The `/pipelines` and `/jobs` pages are no longer available.
+- Existing jobs and pipelines are hidden, not removed.
+
+To disable GitLab CI/CD in your project:
+
+1. On the left sidebar, select **Search or go to** and find your project.
+1. Select **Settings > General**.
+1. Expand **Visibility, project features, permissions**.
+1. In the **Repository** section, turn off **CI/CD**.
+1. Select **Save changes**.
+
+These changes do not apply to projects in an [external integration](../../user/project/integrations/index.md#available-integrations).
diff --git a/doc/ci/quick_start/index.md b/doc/ci/quick_start/index.md
index 8e6fa965aa4..1f8c33a9700 100644
--- a/doc/ci/quick_start/index.md
+++ b/doc/ci/quick_start/index.md
@@ -9,7 +9,7 @@ type: reference
This tutorial shows you how to configure and run your first CI/CD pipeline in GitLab.
-If you are already familiar with basic CI/CD concepts, you can learn about
+If you are already familiar with [basic CI/CD concepts](../index.md), you can learn about
common keywords in [Tutorial: Create a complex pipeline](tutorial.md).
## Prerequisites
diff --git a/doc/ci/runners/new_creation_workflow.md b/doc/ci/runners/new_creation_workflow.md
index 3465aaf94fc..022f7af11ec 100644
--- a/doc/ci/runners/new_creation_workflow.md
+++ b/doc/ci/runners/new_creation_workflow.md
@@ -99,7 +99,7 @@ If you specify a runner authentication token with:
Authentication tokens have the prefix, `glrt-`.
To ensure minimal disruption to your automation workflow,
-[legacy-compatible registration processing](https://docs.gitlab.com/runner/register/#legacy-compatible-registration-processing)
+[legacy-compatible registration processing](https://docs.gitlab.com/runner/register/#legacy-compatible-registration-process)
triggers if a runner authentication token is specified in the legacy parameter `--registration-token`.
Example command for GitLab 15.9:
@@ -202,5 +202,22 @@ you can set it to any string - it will be ignored when `runner-token` is present
## Known issues
-- When you use the new registration workflow to register your runners with the Helm chart, the pod name is not visible
- in the runner details page. For more information, see [issue 423523](https://gitlab.com/gitlab-org/gitlab/-/issues/423523).
+### Pod name is not visible in runner details page
+
+When you use the new registration workflow to register your runners with the Helm chart, the pod name is not visible
+in the runner details page.
+For more information, see [issue 423523](https://gitlab.com/gitlab-org/gitlab/-/issues/423523).
+
+### Runner authentication token does not update when rotated
+
+When you use the new registration workflow to register your runners with the GitLab Operator,
+the runner authentication token referenced by the Custom Resource Definition does not update when the token is rotated.
+This occurs when:
+
+- You're using a runner authentication token (prefixed with `glrt-`) in a secret
+  [referenced by a Custom Resource Definition](https://docs.gitlab.com/runner/install/operator.html#install-gitlab-runner).
+- The runner authentication token is due to expire.
+  For more information about runner authentication token expiration,
+  see [Authentication token security](configure_runners.md#authentication-token-security).
+
+For more information, see [issue 186](https://gitlab.com/gitlab-org/gl-openshift/gitlab-runner-operator/-/issues/186).
diff --git a/doc/ci/runners/runners_scope.md b/doc/ci/runners/runners_scope.md
index 5341f19fbbc..6b6493db2c4 100644
--- a/doc/ci/runners/runners_scope.md
+++ b/doc/ci/runners/runners_scope.md
@@ -89,7 +89,7 @@ To create a shared runner:
1. Select **CI/CD > Runners**.
1. Select **Register an instance runner**.
1. Copy the registration token.
-1. [Register the runner](https://docs.gitlab.com/runner/register/).
+1. [Register the runner](https://docs.gitlab.com/runner/register/#register-with-a-runner-registration-token-deprecated).
### Pause or resume a shared runner
@@ -289,7 +289,7 @@ To create a group runner:
These instructions include the token, URL, and a command to register a runner.
Alternately, you can copy the registration token and follow the documentation for
-how to [register a runner](https://docs.gitlab.com/runner/register/).
+how to [register a runner](https://docs.gitlab.com/runner/register/#register-with-a-runner-registration-token-deprecated).
### View group runners
@@ -481,7 +481,7 @@ To create a project runner:
1. Select **Settings > CI/CD**.
1. Expand **Runners**.
1. In the **Project runners** section, note the URL and token.
-1. [Register the runner](https://docs.gitlab.com/runner/register/).
+1. [Register the runner](https://docs.gitlab.com/runner/register/#register-with-a-runner-registration-token-deprecated).
The runner is now enabled for the project.
diff --git a/doc/ci/runners/saas/linux_saas_runner.md b/doc/ci/runners/saas/linux_saas_runner.md
index c026ccf3d22..23a9b26a8d7 100644
--- a/doc/ci/runners/saas/linux_saas_runner.md
+++ b/doc/ci/runners/saas/linux_saas_runner.md
@@ -28,6 +28,8 @@ For Free, Premium, and Ultimate plan customers, jobs on these instances consume
The `small` machine type is set as default. If no [tag](../../yaml/index.md#tags) keyword in your `.gitlab-ci.yml` file is specified,
the jobs will run on this default runner.
+Compute minutes are consumed at [different rates](../../pipelines/cicd_minutes.md#additional-costs-on-gitlab-saas), depending on the type of machine that is used.
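+
+For example, to run a job on a specific machine type, you would select it with the
+[`tags`](../../yaml/index.md#tags) keyword (a sketch; the tag name for each machine type is listed on this page):
+
+```yaml
+build-job:
+  tags:
+    - saas-linux-medium-amd64
+  script:
+    # Your build commands.
+    - make build
+```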
+
All SaaS runners on Linux currently run on
[`n2d-standard`](https://cloud.google.com/compute/docs/general-purpose-machines#n2d_machines) general-purpose compute from GCP.
The machine type and underlying processor type can change. Jobs optimized for a specific processor design could behave inconsistently.
diff --git a/doc/ci/runners/saas/macos_saas_runner.md b/doc/ci/runners/saas/macos_saas_runner.md
index 1445ae58bd4..b503fea4f2f 100644
--- a/doc/ci/runners/saas/macos_saas_runner.md
+++ b/doc/ci/runners/saas/macos_saas_runner.md
@@ -34,34 +34,26 @@ In comparison to our SaaS runners on Linux, where you can run any Docker image,
GitLab SaaS provides a set of VM images for macOS.
You can execute your build in one of the following images, which you specify
-in your `.gitlab-ci.yml` file.
-
-Each image runs a specific version of macOS and Xcode.
+in your `.gitlab-ci.yml` file. Each image runs a specific version of macOS and Xcode.
| VM image | Status |
|----------------------------|--------|
-| `macos-12-xcode-13` | `GA` |
| `macos-12-xcode-14` | `GA` |
-| `macos-13-xcode-14` | `Beta` |
-
-## Image update policy for macOS
+| `macos-13-xcode-14` | `GA` |
+| `macos-14-xcode-15` | `Beta` |
-macOS and Xcode follow a yearly release cadence, during which GitLab increments its versions synchronously. GitLab typically supports multiple versions of preinstalled tools. For more information, see
-a [full list of preinstalled software](https://gitlab.com/gitlab-org/ci-cd/shared-runners/images/job-images/-/tree/main/toolchain).
+If no image is specified, the macOS runner uses `macos-13-xcode-14`.
-GitLab provides `stable` and `latest` macOS images that follow different update patterns:
+## Image update policy for macOS
-- **Stable image:** The `stable` images and installed components are updated every release. Images without the `:latest` prefix are considered stable images.
-- **Latest image:** The `latest` images are typically updated on a weekly cadence and use a `:latest` prefix in the image name. Using the `latest` image results in more regularly updated components and shorter update times for Homebrew or asdf. The `latest` images are used to test software components before releasing the components to the `stable` images.
-By definition, the `latest` images are always Beta.
-A `latest` image is not available.
+macOS and Xcode follow a yearly release cadence, during which GitLab increments its versions synchronously. GitLab typically supports multiple versions of preinstalled tools. For more information, see the [full list of preinstalled software](https://gitlab.com/gitlab-org/ci-cd/shared-runners/images/job-images/-/tree/main/toolchain).
-### Image release process
+When Apple releases a new macOS version, GitLab releases a new `stable` image based on that OS in the
+next GitLab release. The new image is initially in Beta.
-When Apple releases a new macOS version, GitLab releases both `stable` and `latest` images based on the OS in the next release. Both images are Beta.
+With the release of the first patch to macOS, the `stable` image becomes Generally Available (GA). As only two GA images are supported at a time, the prior OS version becomes deprecated and is deleted after three months in accordance with the [supported image lifecycle](../index.md#supported-image-lifecycle).
-With the release of the first patch to macOS, the `stable` image becomes Generally Available (GA).
-As only two GA images are supported at a time, the prior OS version becomes deprecated and is deleted after three months in accordance with the [supported image lifecycle](../index.md#supported-image-lifecycle).
+The `stable` images and installed components are updated every release, to keep the preinstalled software up-to-date.
## Example `.gitlab-ci.yml` file
diff --git a/doc/ci/secrets/azure_key_vault.md b/doc/ci/secrets/azure_key_vault.md
index 645ab5db0d1..d8a511e8bdf 100644
--- a/doc/ci/secrets/azure_key_vault.md
+++ b/doc/ci/secrets/azure_key_vault.md
@@ -9,14 +9,19 @@ type: concepts, howto
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/271271) in GitLab and GitLab Runner 16.3.
+NOTE:
+A [bug was discovered](https://gitlab.com/gitlab-org/gitlab/-/issues/424746) and this feature might not work as expected or at all. A fix is scheduled for a future release.
+
You can use secrets stored in the [Azure Key Vault](https://azure.microsoft.com/en-us/products/key-vault/)
in your GitLab CI/CD pipelines.
Prerequisites:
-- Have a key vault on Azure.
-- Have an application with key vault permissions.
-- [Configure OpenID Connect in Azure to retrieve temporary credentials](../../ci/cloud_services/azure/index.md).
+- Have a [Key Vault](https://learn.microsoft.com/en-us/azure/key-vault/general/quick-create-portal) on Azure.
+  - Your IAM user must be [granted the **Key Vault Administrator** role assignment](https://learn.microsoft.com/en-us/azure/role-based-access-control/quickstart-assign-role-user-portal#grant-access)
+    for the **resource group** assigned to the Key Vault. Otherwise, you can't create secrets inside the Key Vault.
+- [Configure OpenID Connect in Azure to retrieve temporary credentials](../../ci/cloud_services/azure/index.md). These
+  steps include instructions on how to create an Azure AD application for Key Vault access.
- Add [CI/CD variables to your project](../variables/index.md#for-a-project) to provide details about your Vault server:
- `AZURE_KEY_VAULT_SERVER_URL`: The URL of your Azure Key Vault server, such as `https://vault.example.com`.
- `AZURE_CLIENT_ID`: The client ID of the Azure application.
@@ -31,19 +36,64 @@ You can use a secret stored in your Azure Key Vault in a job by defining it with
job:
id_tokens:
AZURE_JWT:
- aud: 'azure'
+ aud: 'https://gitlab.com'
secrets:
DATABASE_PASSWORD:
- token: AZURE_JWT
+ token: $AZURE_JWT
azure_key_vault:
name: 'test'
- version: 'test'
+ version: '00000000000000000000000000000000'
```
In this example:
-- `name` is the name of the secret.
-- `version` is the version of the secret.
+- `aud` is the audience, which must match the audience used when [creating the federated identity credentials](../../ci/cloud_services/azure/index.md#create-azure-ad-federated-identity-credentials).
+- `name` is the name of the secret in Azure Key Vault.
+- `version` is the version of the secret in Azure Key Vault. The version is a generated
+  GUID without dashes, which can be found on the Azure Key Vault secrets page.
- GitLab fetches the secret from Azure Key Vault and stores the value in a temporary file.
The path to this file is stored in a `DATABASE_PASSWORD` CI/CD variable, similar to
[file type CI/CD variables](../variables/index.md#use-file-type-cicd-variables).
+
+## Troubleshooting
+
+Refer to [OIDC for Azure troubleshooting](../../ci/cloud_services/azure/index.md#troubleshooting) for help with
+general problems that can occur when setting up OIDC with Azure.
+
+### `JWT token is invalid or malformed` message
+
+You might receive this error when fetching secrets from Azure Key Vault:
+
+```plaintext
+RESPONSE 400 Bad Request
+AADSTS50027: JWT token is invalid or malformed.
+```
+
+This occurs due to a known issue in GitLab Runner where the JWT token isn't parsed correctly.
+A fix is [scheduled for a future GitLab Runner release](https://gitlab.com/gitlab-org/gitlab/-/issues/424746).
+
+### `Caller is not authorized to perform action on resource` message
+
+You might receive this error when fetching secrets from Azure Key Vault:
+
+```plaintext
+RESPONSE 403: 403 Forbidden
+ERROR CODE: Forbidden
+Caller is not authorized to perform action on resource.\r\nIf role assignments, deny assignments or role definitions were changed recently, please observe propagation time.
+ForbiddenByRbac
+```
+
+If your Azure Key Vault uses RBAC, you must assign the **Key Vault Secrets User** role to your Azure AD
+application.
+
+For example:
+
+```shell
+appId=$(az ad app list --display-name gitlab-oidc --query '[0].appId' -otsv)
+az role assignment create --assignee $appId --role "Key Vault Secrets User" --scope /subscriptions/<subscription-id>
+```
+
+You can find your subscription ID in:
+
+- The [Azure Portal](https://learn.microsoft.com/en-us/azure/azure-portal/get-subscription-tenant-id#find-your-azure-subscription).
+- The [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/manage-azure-subscriptions-azure-cli#get-the-active-subscription).
diff --git a/doc/ci/testing/browser_performance_testing.md b/doc/ci/testing/browser_performance_testing.md
index 9e81f243e50..d8c66c2d4d5 100644
--- a/doc/ci/testing/browser_performance_testing.md
+++ b/doc/ci/testing/browser_performance_testing.md
@@ -55,6 +55,8 @@ merge request targeting that branch.
## Configuring Browser Performance Testing
+> Support for the `SITESPEED_DOCKER_OPTIONS` variable [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134024) in GitLab 16.6.
+
This example shows how to run the [sitespeed.io container](https://hub.docker.com/r/sitespeedio/sitespeed.io/)
on your code by using GitLab CI/CD and [sitespeed.io](https://www.sitespeed.io)
using Docker-in-Docker.
@@ -91,6 +93,7 @@ You can also customize the jobs with CI/CD variables:
- `SITESPEED_IMAGE`: Configure the Docker image to use for the job (default `sitespeedio/sitespeed.io`), but not the image version.
- `SITESPEED_VERSION`: Configure the version of the Docker image to use for the job (default `14.1.0`).
- `SITESPEED_OPTIONS`: Configure any additional sitespeed.io options as required (default `nil`). Refer to the [sitespeed.io documentation](https://www.sitespeed.io/documentation/sitespeed.io/configuration/) for more details.
+- `SITESPEED_DOCKER_OPTIONS`: Configure any additional Docker options (default `nil`). Refer to the [Docker options documentation](https://docs.docker.com/engine/reference/commandline/run/#options) for more details.
For example, you can override the number of runs sitespeed.io
makes on the given URL, and change the version:
diff --git a/doc/ci/testing/code_coverage.md b/doc/ci/testing/code_coverage.md
index fb846f52a72..a39586a9eb0 100644
--- a/doc/ci/testing/code_coverage.md
+++ b/doc/ci/testing/code_coverage.md
@@ -40,7 +40,10 @@ using the [`coverage`](../yaml/index.md#coverage) keyword.
#### Test coverage examples
-Use this regex for commonly used test tools.
+The following list shows sample regex patterns for many common test coverage tools.
+If the tooling has changed after these samples were created, or if the tooling was customized,
+the regex might not work. Test the regex carefully to make sure it correctly finds the
+coverage in the tool's output:
<!-- vale gitlab.Spelling = NO -->
diff --git a/doc/ci/testing/code_quality.md b/doc/ci/testing/code_quality.md
index 1d857b8f543..6b4275d8055 100644
--- a/doc/ci/testing/code_quality.md
+++ b/doc/ci/testing/code_quality.md
@@ -12,6 +12,8 @@ Use Code Quality to analyze your source code's quality and complexity. This help
project's code simple, readable, and easier to maintain. Code Quality should supplement your
other review processes, not replace them.
+Code Quality runs in CI/CD pipelines, and helps you avoid merging changes that would degrade your code's quality.
+
Code Quality uses the open source Code Climate tool, and selected
[plugins](https://docs.codeclimate.com/docs/list-of-engines), to analyze your source code.
To confirm if your code's languages are covered, see the Code Climate list of
@@ -20,9 +22,6 @@ You can extend the code coverage either by using Code Climate
[Analysis Plugins](https://docs.codeclimate.com/docs/list-of-engines) or a
[custom tool](#implement-a-custom-tool).
-Run Code Quality reports in your CI/CD pipeline to verify changes don't degrade your code's quality,
-_before_ committing them to the default branch.
-
## Features per tier
Different features are available in different [GitLab tiers](https://about.gitlab.com/pricing/),
@@ -344,9 +343,9 @@ code_quality:
> [Introduced](https://gitlab.com/gitlab-org/ci-cd/codequality/-/merge_requests/30) in GitLab 13.7.
Using a private container image registry can reduce the time taken to download images, and also
-reduce external dependencies. Because of the nested architecture of container execution, the
-registry prefix must be specifically configured to be passed down into CodeClimate's subsequent
-`docker pull` commands for individual engines.
+reduce external dependencies. You must configure the registry prefix to be passed down
+to CodeClimate's subsequent `docker pull` commands for individual engines, because of
+the nested method of container execution.
The following variables can address all of the required image pulls:
@@ -710,3 +709,39 @@ Replace `gitlab.example.com` with the actual domain of the registry.
mount_path = "/etc/docker/certs.d/gitlab.example.com/ca.crt"
sub_path = "gitlab.example.com.crt"
```
+
+### Failed to load Code Quality report
+
+The Code Quality report can fail to load when there are issues parsing data from the artifact file.
+To gain insight into the errors, you can execute a GraphQL query using the following steps:
+
+1. Go to the pipeline details page.
+1. Append `.json` to the URL.
+1. Copy the `iid` of the pipeline.
+1. Go to [GraphiQL explorer](../../api/graphql/index.md#graphiql).
+1. Run the following query:
+
+   ```graphql
+   {
+     project(fullPath: "<fullpath-to-your-project>") {
+       pipeline(iid: "<iid>") {
+         codeQualityReports {
+           count
+           nodes {
+             line
+             description
+             path
+             fingerprint
+             severity
+           }
+           pageInfo {
+             hasNextPage
+             hasPreviousPage
+             startCursor
+             endCursor
+           }
+         }
+       }
+     }
+   }
+   ```
diff --git a/doc/ci/triggers/index.md b/doc/ci/triggers/index.md
index 698118f457f..ee1e05c4fc9 100644
--- a/doc/ci/triggers/index.md
+++ b/doc/ci/triggers/index.md
@@ -14,6 +14,7 @@ When authenticating with the API, you can use:
- A [pipeline trigger token](#create-a-pipeline-trigger-token) to trigger a branch or tag pipeline.
- A [CI/CD job token](../jobs/ci_job_token.md) to [trigger a multi-project pipeline](../pipelines/downstream_pipelines.md#trigger-a-multi-project-pipeline-by-using-the-api).
+- A [personal access token](../../user/profile/personal_access_tokens.md).
## Create a pipeline trigger token
diff --git a/doc/ci/troubleshooting.md b/doc/ci/troubleshooting.md
index cc7e5594466..77ee6b11d92 100644
--- a/doc/ci/troubleshooting.md
+++ b/doc/ci/troubleshooting.md
@@ -1,555 +1,11 @@
---
-stage: Verify
-group: Pipeline Authoring
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
-type: reference
+redirect_to: 'debugging.md'
+remove_date: '2024-02-01'
---
-# Troubleshooting CI/CD **(FREE ALL)**
+This document was moved to [another location](debugging.md).
-GitLab provides several tools to help make troubleshooting your pipelines easier.
-
-This guide also lists common issues and possible solutions.
-
-## Verify syntax
-
-An early source of problems can be incorrect syntax. The pipeline shows a `yaml invalid`
-badge and does not start running if any syntax or formatting problems are found.
-
-### Edit `.gitlab-ci.yml` with the pipeline editor
-
-The [pipeline editor](pipeline_editor/index.md) is the recommended editing
-experience (rather than the single file editor or the Web IDE). It includes:
-
-- Code completion suggestions that ensure you are only using accepted keywords.
-- Automatic syntax highlighting and validation.
-- The [CI/CD configuration visualization](pipeline_editor/index.md#visualize-ci-configuration),
- a graphical representation of your `.gitlab-ci.yml` file.
-
-### Edit `.gitlab-ci.yml` locally
-
-If you prefer to edit your pipeline configuration locally, you can use the
-GitLab CI/CD schema in your editor to verify basic syntax issues. Any
-[editor with Schemastore support](https://www.schemastore.org/json/#editors) uses
-the GitLab CI/CD schema by default.
-
-If you need to link to the schema directly, it
-is at:
-
-```plaintext
-https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/editor/schema/ci.json
-```
-
-To see the full list of custom tags covered by the CI/CD schema, check the
-latest version of the schema.
-
-### Verify syntax with CI Lint tool
-
-The [CI Lint tool](lint.md) is a simple way to ensure the syntax of a CI/CD configuration
-file is correct. Paste in full `.gitlab-ci.yml` files or individual jobs configuration,
-to verify the basic syntax.
-
-When a `.gitlab-ci.yml` file is present in a project, you can also use the CI Lint
-tool to [simulate the creation of a full pipeline](lint.md#simulate-a-pipeline).
-It does deeper verification of the configuration syntax.
-
-## Verify variables
-
-A key part of troubleshooting CI/CD is to verify which variables are present in a
-pipeline, and what their values are. A lot of pipeline configuration is dependent
-on variables, and verifying them is one of the fastest ways to find the source of
-a problem.
-
-[Export the full list of variables](variables/index.md#list-all-variables)
-available in each problematic job. Check if the variables you expect are present,
-and check if their values are what you expect.
-
-## GitLab CI/CD documentation
-
-The [complete `.gitlab-ci.yml` reference](yaml/index.md) contains a full list of
-every keyword you can use to configure your pipelines.
-
-You can also look at a large number of pipeline configuration [examples](examples/index.md)
-and [templates](examples/index.md#cicd-templates).
-
-### Documentation for pipeline types
-
-Branch pipelines are the most basic type.
-Other pipeline types have their own detailed usage guides that you should read
-if you are using that type:
-
-- [Multi-project pipelines](pipelines/downstream_pipelines.md#multi-project-pipelines): Have your pipeline trigger
- a pipeline in a different project.
-- [Parent/child pipelines](pipelines/downstream_pipelines.md#parent-child-pipelines): Have your main pipeline trigger
- and run separate pipelines in the same project. You can also
- [dynamically generate the child pipeline's configuration](pipelines/downstream_pipelines.md#dynamic-child-pipelines)
- at runtime.
-- [Merge request pipelines](pipelines/merge_request_pipelines.md): Run a pipeline
- in the context of a merge request.
- - [Merged results pipelines](pipelines/merged_results_pipelines.md):
- Merge request pipelines that run on the combined source and target branch
- - [Merge trains](pipelines/merge_trains.md):
- Multiple merged results pipelines that queue and run automatically before
- changes are merged.
-
-### Troubleshooting Guides for CI/CD features
-
-Troubleshooting guides are available for some CI/CD features and related topics:
-
-- [Container Registry](../user/packages/container_registry/troubleshoot_container_registry.md)
-- [GitLab Runner](https://docs.gitlab.com/runner/faq/)
-- [Merge Trains](pipelines/merge_trains.md#troubleshooting)
-- [Docker Build](docker/using_docker_build.md#troubleshooting)
-- [Environments](environments/deployment_safety.md#ensure-only-one-deployment-job-runs-at-a-time)
-
-## Common CI/CD issues
-
-A lot of common pipeline issues can be fixed by analyzing the behavior of the `rules`
-or `only/except` configuration. You shouldn't use these two configurations in the same
-pipeline, as they behave differently. It's hard to predict how a pipeline runs with
-this mixed behavior.
-
-If your `rules` or `only/except` configuration makes use of [predefined variables](variables/predefined_variables.md)
-like `CI_PIPELINE_SOURCE`, `CI_MERGE_REQUEST_ID`, you should [verify them](#verify-variables)
-as the first troubleshooting step.
-
-### Jobs or pipelines don't run when expected
-
-The `rules` or `only/except` keywords are what determine whether or not a job is
-added to a pipeline. If a pipeline runs, but a job is not added to the pipeline,
-it's usually due to `rules` or `only/except` configuration issues.
-
-If a pipeline does not seem to run at all, with no error message, it may also be
-due to `rules` or `only/except` configuration, or the `workflow: rules` keyword.
-
-If you are converting from `only/except` to the `rules` keyword, you should check
-the [`rules` configuration details](yaml/index.md#rules) carefully. The behavior
-of `only/except` and `rules` is different and can cause unexpected behavior when migrating
-between the two.
-
-The [common `if` clauses for `rules`](jobs/job_control.md#common-if-clauses-for-rules)
-can be very helpful for examples of how to write rules that behave the way you expect.
-
-#### Two pipelines run at the same time
-
-Two pipelines can run when pushing a commit to a branch that has an open merge request
-associated with it. Usually one pipeline is a merge request pipeline, and the other
-is a branch pipeline.
-
-This situation is usually caused by the `rules` configuration, and there are several ways to
-[prevent duplicate pipelines](jobs/job_control.md#avoid-duplicate-pipelines).
-
-#### A job is not in the pipeline
-
-GitLab determines if a job is added to a pipeline based on the [`only/except`](yaml/index.md#only--except)
-or [`rules`](yaml/index.md#rules) defined for the job. If it didn't run, it's probably
-not evaluating as you expect.
-
-#### No pipeline or the wrong type of pipeline runs
-
-Before a pipeline can run, GitLab evaluates all the jobs in the configuration and tries
-to add them to all available pipeline types. A pipeline does not run if no jobs are added
-to it at the end of the evaluation.
-
-If a pipeline did not run, it's likely that all the jobs had `rules` or `only/except` that
-blocked them from being added to the pipeline.
-
-If the wrong pipeline type ran, then the `rules` or `only/except` configuration should
-be checked to make sure the jobs are added to the correct pipeline type. For
-example, if a merge request pipeline did not run, the jobs may have been added to
-a branch pipeline instead.
-
-It's also possible that your [`workflow: rules`](yaml/index.md#workflow) configuration
-blocked the pipeline, or allowed the wrong pipeline type.
-
-### Pipeline with many jobs fails to start
-
-A Pipeline that has more jobs than the instance's defined [CI/CD limits](../administration/settings/continuous_integration.md#set-cicd-limits)
-fails to start.
-
-To reduce the number of jobs in your pipeline, you can split your `.gitlab-ci.yml`
-configuration using [parent-child pipelines](../ci/pipelines/pipeline_architectures.md#parent-child-pipelines).
-
-### A job runs unexpectedly
-
-A common reason a job is added to a pipeline unexpectedly is because the `changes`
-keyword always evaluates to true in certain cases. For example, `changes` is always
-true in certain pipeline types, including scheduled pipelines and pipelines for tags.
-
-The `changes` keyword is used in combination with [`only/except`](yaml/index.md#onlychanges--exceptchanges)
-or [`rules`](yaml/index.md#ruleschanges)). It's recommended to use `changes` with
-`rules` or `only/except` configuration that ensures the job is only added to branch
-pipelines or merge request pipelines.
-
-### "fatal: reference is not a tree" error
-
-> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/17043) in GitLab 12.4.
-
-Previously, you'd have encountered unexpected pipeline failures when you force-pushed
-a branch to its remote repository. To illustrate the problem, suppose you've had the current workflow:
-
-1. A user creates a feature branch named `example` and pushes it to a remote repository.
-1. A new pipeline starts running on the `example` branch.
-1. A user rebases the `example` branch on the latest default branch and force-pushes it to its remote repository.
-1. A new pipeline starts running on the `example` branch again, however,
- the previous pipeline (2) fails because of `fatal: reference is not a tree:` error.
-
-This occurs because the previous pipeline cannot find a checkout-SHA (which is associated with the pipeline record)
-from the `example` branch that the commit history has already been overwritten by the force-push.
-Similarly, [Merged results pipelines](pipelines/merged_results_pipelines.md)
-might have failed intermittently due to [the same reason](pipelines/merged_results_pipelines.md#pipelines-fail-intermittently-with-a-fatal-reference-is-not-a-tree-error).
-
-As of GitLab 12.4, we've improved this behavior by persisting pipeline refs exclusively.
-To illustrate its life cycle:
-
-1. A pipeline is created on a feature branch named `example`.
-1. A persistent pipeline ref is created at `refs/pipelines/<pipeline-id>`,
- which retains the checkout-SHA of the associated pipeline record.
- This persistent ref stays intact during the pipeline execution,
- even if the commit history of the `example` branch has been overwritten by force-push.
-1. The runner fetches the persistent pipeline ref and gets source code from the checkout-SHA.
-1. When the pipeline finishes, its persistent ref is cleaned up in a background process.
-
-### `get_sources` job section fails because of an HTTP/2 problem
-
-Sometimes, jobs fail with the following cURL error:
-
-```plaintext
-++ git -c 'http.userAgent=gitlab-runner <version>' fetch origin +refs/pipelines/<id>:refs/pipelines/<id> ...
-error: RPC failed; curl 16 HTTP/2 send again with decreased length
-fatal: ...
-```
-
-You can work around this problem by configuring Git and `libcurl` to
-[use HTTP/1.1](https://git-scm.com/docs/git-config#Documentation/git-config.txt-httpversion).
-The configuration can be added to:
-
-- A job's [`pre_get_sources_script`](yaml/index.md#hookspre_get_sources_script):
-
- ```yaml
- job_name:
- hooks:
- pre_get_sources_script:
- - git config --global http.version "HTTP/1.1"
- ```
-
-- The [runner's `config.toml`](https://docs.gitlab.com/runner/configuration/advanced-configuration.html)
- with [Git configuration environment variables](https://git-scm.com/docs/git-config#ENVIRONMENT):
-
- ```toml
- [[runners]]
- ...
- environment = [
- "GIT_CONFIG_COUNT=1",
- "GIT_CONFIG_KEY_0=http.version",
- "GIT_CONFIG_VALUE_0=HTTP/1.1"
- ]
- ```
-
-### Merge request pipeline messages
-
-The merge request pipeline widget shows information about the pipeline status in
-a merge request. It's displayed above the [ability to merge status widget](#merge-request-status-messages).
-
-#### "Checking ability to merge automatically" message
-
-There is a [known issue](https://gitlab.com/gitlab-org/gitlab/-/issues/229352)
-where a merge request can be stuck with the `Checking ability to merge automatically`
-message.
-
-If your merge request has this message and it does not disappear after a few minutes,
-you can try one of these workarounds:
-
-- Refresh the merge request page.
-- Close & Re-open the merge request.
-- Rebase the merge request with the `/rebase` [quick action](../user/project/quick_actions.md).
-- If you have already confirmed the merge request is ready to be merged, you can merge
- it with the `/merge` quick action.
-
-#### "Checking pipeline status" message
-
-This message is shown when the merge request has no pipeline associated with the
-latest commit yet. This might be because:
-
-- GitLab hasn't finished creating the pipeline yet.
-- You are using an external CI service and GitLab hasn't heard back from the service yet.
-- You are not using CI/CD pipelines in your project.
-- You are using CI/CD pipelines in your project, but your configuration prevented a pipeline from running on the source branch for your merge request.
-- The latest pipeline was deleted (this is a [known issue](https://gitlab.com/gitlab-org/gitlab/-/issues/214323)).
-- The source branch of the merge request is on a private fork.
-
-After the pipeline is created, the message updates with the pipeline status.
-
-### Merge request status messages
-
-The merge request status widget shows:
-
-- If the merge request is ready to merge. If the merge request can't be merged, the reason is displayed.
-- **Merge**, if the pipeline is complete, or **Set to auto-merge** if the pipeline is still running.
-
-#### "A CI/CD pipeline must run and be successful before merge" message
-
-This message is shown if the [Pipelines must succeed](../user/project/merge_requests/merge_when_pipeline_succeeds.md#require-a-successful-pipeline-for-merge)
-setting is enabled in the project and a pipeline has not yet run successfully.
-This also applies if the pipeline has not been created yet, or if you are waiting
-for an external CI service. If you don't use pipelines for your project, then you
-should disable **Pipelines must succeed** so you can accept merge requests.
-
-#### "Merge blocked: pipeline must succeed. Push a new commit that fixes the failure" message
-
-This message is shown if the [merge request pipeline](pipelines/merge_request_pipelines.md),
-[merged results pipeline](pipelines/merged_results_pipelines.md),
-or [merge train pipeline](pipelines/merge_trains.md)
-has failed or been canceled.
-This does not happen when a basic branch pipeline fails.
-
-If a merge request pipeline or merged result pipeline was canceled or failed, you can:
-
-- Re-run the entire pipeline by selecting **Run pipeline** in the pipeline tab in the merge request.
-- [Retry only the jobs that failed](pipelines/index.md#view-pipelines). If you re-run the entire pipeline, this is not necessary.
-- Push a new commit to fix the failure.
-
-If the merge train pipeline has failed, you can:
-
-- Check the failure and determine if you can use the [`/merge` quick action](../user/project/quick_actions.md) to immediately add the merge request to the train again.
-- Re-run the entire pipeline by selecting **Run pipeline** in the pipeline tab in the merge request, then add the merge request to the train again.
-- Push a commit to fix the failure, then add the merge request to the train again.
-
-If the merge train pipeline was canceled before the merge request was merged, without a failure, you can:
-
-- Add it to the train again.
-
-### Merge request rules widget shows a scan result policy is invalid or duplicated **(ULTIMATE SELF)**
-
-On GitLab self-managed 15.0 and later, the most likely cause is that the project was exported from a
-group and imported into another, and had scan result policy rules. These rules are stored in a
-separate project to the one that was exported. As a result, the project contains policy rules that
-reference entities that don't exist in the imported project's group. The result is policy rules that
-are invalid, duplicated, or both.
-
-To remove all invalid scan result policy rules from a GitLab instance, an administrator can run
-the following script in the [Rails console](../administration/operations/rails_console.md).
-
-```ruby
-Project.joins(:approval_rules).where(approval_rules: { report_type: %i[scan_finding license_scanning] }).where.not(approval_rules: { security_orchestration_policy_configuration_id: nil }).find_in_batches.flat_map do |batch|
- batch.map do |project|
- # Get projects and their configuration_ids for applicable project rules
- [project, project.approval_rules.where(report_type: %i[scan_finding license_scanning]).pluck(:security_orchestration_policy_configuration_id).uniq]
- end.uniq.map do |project, configuration_ids| # We take only unique combinations of project + configuration_ids
- # If we find more configurations than what is available for the project, we take records with the extra configurations
- [project, configuration_ids - project.all_security_orchestration_policy_configurations.pluck(:id)]
- end.select { |_project, configuration_ids| configuration_ids.any? }
-end.each do |project, configuration_ids|
- # For each found pair project + ghost configuration, we remove these rules for a given project
- Security::OrchestrationPolicyConfiguration.where(id: configuration_ids).each do |configuration|
- configuration.delete_scan_finding_rules_for_project(project.id)
- end
- # Ensure we sync any potential rules from new group's policy
- Security::ScanResultPolicies::SyncProjectWorker.perform_async(project.id)
-end
-```
-
-### Project `group/project` not found or access denied
-
-This message is shown if configuration is added with [`include`](yaml/index.md#include) and one of the following:
-
-- The configuration refers to a project that can't be found.
-- The user that is running the pipeline is unable to access any included projects.
-
-To resolve this, check that:
-
-- The path of the project is in the format `my-group/my-project` and does not include
- any folders in the repository.
-- The user running the pipeline is a [member of the projects](../user/project/members/index.md#add-users-to-a-project)
- that contain the included files. Users must also have the [permission](../user/permissions.md#job-permissions)
- to run CI/CD jobs in the same projects.
-
-### "The parsed YAML is too big" message
-
-This message displays when the YAML configuration is too large or nested too deeply.
-YAML files with a large number of includes, and thousands of lines overall, are
-more likely to hit this memory limit. For example, a YAML file that is 200kb is
-likely to hit the default memory limit.
-
-To reduce the configuration size, you can:
-
-- Check the length of the expanded CI/CD configuration in the pipeline editor's
- [Full configuration](pipeline_editor/index.md#view-full-configuration) tab. Look for
- duplicated configuration that can be removed or simplified.
-- Move long or repeated `script` sections into standalone scripts in the project.
-- Use [parent and child pipelines](pipelines/downstream_pipelines.md#parent-child-pipelines) to move some
- work to jobs in an independent child pipeline.
-
-On a self-managed instance, you can [increase the size limits](../administration/instance_limits.md#maximum-size-and-depth-of-cicd-configuration-yaml-files).
-
-### Error 500 when editing the `.gitlab-ci.yml` file
-
-A [loop of included configuration files](pipeline_editor/index.md#configuration-validation-currently-not-available-message)
-can cause a `500` error when editing the `.gitlab-ci.yml` file with the [web editor](../user/project/repository/web_editor.md).
-
-### A CI/CD job does not use newer configuration when run again
-
-The configuration for a pipeline is only fetched when the pipeline is created.
-When you rerun a job, uses the same configuration each time. If you update configuration files,
-including separate files added with [`include`](yaml/index.md#include), you must
-start a new pipeline to use the new configuration.
-
-### Unable to pull image from another project
-
-> **Allow access to this project with a CI_JOB_TOKEN** setting [renamed to **Limit access _to_ this project**](https://gitlab.com/gitlab-org/gitlab/-/issues/411406) in GitLab 16.3.
-
-When a runner tries to pull an image from a private project, the job could fail with the following error:
-
-```shell
-WARNING: Failed to pull image with policy "always": Error response from daemon: pull access denied for registry.example.com/path/to/project, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
-```
-
-This error can happen if the following are both true:
-
-- The **Limit access _to_ this project** option is enabled in the private project
- hosting the image.
-- The job attempting to fetch the image is running for a project that is not listed in
- the private project's allowlist.
-
-The recommended solution is to [add your project to the private project's job token scope allowlist](jobs/ci_job_token.md#add-a-project-to-the-job-token-scope-allowlist).
-
-## Pipeline warnings
-
-Pipeline configuration warnings are shown when you:
-
-- [Validate configuration with the CI Lint tool](yaml/index.md).
-- [Manually run a pipeline](pipelines/index.md#run-a-pipeline-manually).
-
-### "Job may allow multiple pipelines to run for a single action" warning
-
-When you use [`rules`](yaml/index.md#rules) with a `when` clause without an `if`
-clause, multiple pipelines may run. Usually this occurs when you push a commit to
-a branch that has an open merge request associated with it.
-
-To [prevent duplicate pipelines](jobs/job_control.md#avoid-duplicate-pipelines), use
-[`workflow: rules`](yaml/index.md#workflow) or rewrite your rules to control
-which pipelines can run.
-
-### Console workaround if job using `resource_group` gets stuck **(FREE SELF)**
-
-```ruby
-# find resource group by name
-resource_group = Project.find_by_full_path('...').resource_groups.find_by(key: 'the-group-name')
-busy_resources = resource_group.resources.where('build_id IS NOT NULL')
-
-# identify which builds are occupying the resource
-# (I think it should be 1 as of today)
-busy_resources.pluck(:build_id)
-
-# it's good to check why this build is holding the resource.
-# Is it stuck? Has it been forcefully dropped by the system?
-# free up busy resources
-busy_resources.update_all(build_id: nil)
-```
-
-### Job log slow to update
-
-When you visit the job log page for a running job, there could be a delay of up to
-60 seconds before the log updates. The default refresh time is 60 seconds, but after
-the log is viewed in the UI, the following log updates should occur every 3 seconds.
-
-## Disaster recovery
-
-You can disable some important but computationally expensive parts of the application
-to relieve stress on the database during ongoing downtime.
-
-### Disable fair scheduling on shared runners
-
-When clearing a large backlog of jobs, you can temporarily enable the `ci_queueing_disaster_recovery_disable_fair_scheduling`
-[feature flag](../administration/feature_flags.md). This flag disables fair scheduling
-on shared runners, which reduces system resource usage on the `jobs/request` endpoint.
-
-When enabled, jobs are processed in the order they were put in the system, instead of
-balanced across many projects.
-
-### Disable compute quota enforcement
-
-To disable the enforcement of [compute quotas](pipelines/cicd_minutes.md) on shared runners, you can temporarily
-enable the `ci_queueing_disaster_recovery_disable_quota` [feature flag](../administration/feature_flags.md).
-This flag reduces system resource usage on the `jobs/request` endpoint.
-
-When enabled, jobs created in the last hour can run in projects which are out of quota.
-Earlier jobs are already canceled by a periodic background worker (`StuckCiJobsWorker`).
-
-## CI/CD troubleshooting Rails console commands
-
-The following commands are run in the [Rails console](../administration/operations/rails_console.md#starting-a-rails-console-session).
-
-WARNING:
-Any command that changes data directly could be damaging if not run correctly, or under the right conditions.
-We highly recommend running them in a test environment with a backup of the instance ready to be restored, just in case.
-
-### Cancel stuck pending pipelines
-
-```ruby
-project = Project.find_by_full_path('<project_path>')
-Ci::Pipeline.where(project_id: project.id).where(status: 'pending').count
-Ci::Pipeline.where(project_id: project.id).where(status: 'pending').each {|p| p.cancel if p.stuck?}
-Ci::Pipeline.where(project_id: project.id).where(status: 'pending').count
-```
-
-### Try merge request integration
-
-```ruby
-project = Project.find_by_full_path('<project_path>')
-mr = project.merge_requests.find_by(iid: <merge_request_iid>)
-mr.project.try(:ci_integration)
-```
-
-### Validate the `.gitlab-ci.yml` file
-
-```ruby
-project = Project.find_by_full_path('<project_path>')
-content = project.repository.gitlab_ci_yml_for(project.repository.root_ref_sha)
-Gitlab::Ci::Lint.new(project: project, current_user: User.first).validate(content)
-```
-
-### Disable AutoDevOps on Existing Projects
-
-```ruby
-Project.all.each do |p|
- p.auto_devops_attributes={"enabled"=>"0"}
- p.save
-end
-```
-
-### Obtain runners registration token
-
-```ruby
-Gitlab::CurrentSettings.current_application_settings.runners_registration_token
-```
-
-### Seed runners registration token
-
-```ruby
-appSetting = Gitlab::CurrentSettings.current_application_settings
-appSetting.set_runners_registration_token('<new-runners-registration-token>')
-appSetting.save!
-```
-
-### Run pipeline schedules manually
-
-You can run pipeline schedules manually through the Rails console to reveal any errors that are usually not visible.
-
-```ruby
-# schedule_id can be obtained from Edit Pipeline Schedule page
-schedule = Ci::PipelineSchedule.find_by(id: <schedule_id>)
-
-# Select the user that you want to run the schedule for
-user = User.find_by_username('<username>')
-
-# Run the schedule
-ps = Ci::CreatePipelineService.new(schedule.project, user, ref: schedule.ref).execute!(:schedule, ignore_skip_ci: true, save_on_errors: false, schedule: schedule)
-```
-
-## How to get help
-
-If you are unable to resolve pipeline issues, you can get help from:
-
-- The [GitLab community forum](https://forum.gitlab.com/)
-- GitLab [Support](https://about.gitlab.com/support/)
+<!-- This redirect file can be deleted after <2024-02-01>. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Redirects that point to docs in a different project or site (link is not relative and starts with `https:`) expire in one year. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
diff --git a/doc/ci/variables/index.md b/doc/ci/variables/index.md
index 975157ff917..0c05129fb1e 100644
--- a/doc/ci/variables/index.md
+++ b/doc/ci/variables/index.md
@@ -278,11 +278,15 @@ The method used to mask variables [limits what can be included in a masked varia
The value of the variable must:
- Be a single line.
-- Be 8 characters or longer, consisting only of:
- - Characters from the Base64 alphabet (RFC4648).
- - The `@`, `:`, `.`, or `~` characters.
+- Be 8 characters or longer.
- Not match the name of an existing predefined or custom CI/CD variable.
+Additionally, if [variable expansion](#prevent-cicd-variable-expansion) is enabled,
+the value can contain only:
+
+- Characters from the Base64 alphabet (RFC4648).
+- The `@`, `:`, `.`, or `~` characters.
+
Different versions of [GitLab Runner](../runners/index.md) have different masking limitations:
| Version | Limitations |
@@ -703,6 +707,68 @@ to enable the `restrict_user_defined_variables` setting. The setting is `disable
If you [store your CI/CD configurations in a different repository](../../ci/pipelines/settings.md#specify-a-custom-cicd-configuration-file),
use this setting for control over the environment the pipeline runs in.
+## Exporting variables
+
+Scripts executed in separate shell contexts do not share exports, aliases,
+local function definitions, or any other local shell updates.
+
+This means that if a job fails, variables created by user-defined scripts are not
+exported.
+
+When runners execute jobs defined in `.gitlab-ci.yml`:
+
+- Scripts specified in `before_script` and the main script are executed together in
+ a single shell context, and are concatenated.
+- Scripts specified in `after_script` run in a shell context completely separate from
+ the `before_script` and the specified scripts.
+
+Regardless of the shell the scripts are executed in, the runner output includes:
+
+- Predefined variables.
+- Variables defined in:
+  - Instance, group, or project CI/CD settings.
+  - The `.gitlab-ci.yml` file in the `variables:` section.
+  - The `.gitlab-ci.yml` file in the `secrets:` section.
+  - The `config.toml`.
+
+The runner does not preserve manual exports, shell aliases, or functions executed in the body of the script, such as `export MY_VARIABLE=1`, across shell contexts.
+
+For example, consider the scripts defined in the following `.gitlab-ci.yml` file:
+
+```yaml
+job:
+  variables:
+    JOB_DEFINED_VARIABLE: "job variable"
+  before_script:
+    - echo "This is the 'before_script' script"
+    - export MY_VARIABLE="variable"
+  script:
+    - echo "This is the 'script' script"
+    - echo "JOB_DEFINED_VARIABLE's value is ${JOB_DEFINED_VARIABLE}"
+    - echo "CI_COMMIT_SHA's value is ${CI_COMMIT_SHA}"
+    - echo "MY_VARIABLE's value is ${MY_VARIABLE}"
+  after_script:
+    - echo "JOB_DEFINED_VARIABLE's value is ${JOB_DEFINED_VARIABLE}"
+    - echo "CI_COMMIT_SHA's value is ${CI_COMMIT_SHA}"
+    - echo "MY_VARIABLE's value is ${MY_VARIABLE}"
+```
+
+When the runner executes the job:
+
+1. `before_script` is executed:
+   1. Prints to the output.
+   1. Defines the `MY_VARIABLE` variable.
+1. `script` is executed:
+   1. Prints to the output.
+   1. Prints the value of `JOB_DEFINED_VARIABLE`.
+   1. Prints the value of `CI_COMMIT_SHA`.
+   1. Prints the value of `MY_VARIABLE`.
+1. `after_script` is executed in a new, separate shell context:
+   1. Prints to the output.
+   1. Prints the value of `JOB_DEFINED_VARIABLE`.
+   1. Prints the value of `CI_COMMIT_SHA`.
+   1. Prints an empty value for `MY_VARIABLE`. The variable value cannot be detected because `after_script` runs in a shell context separate from `before_script`.
+
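+Exported shell variables also do not persist into other jobs in the pipeline. If a later
+job needs a value computed in an earlier job, one common approach is a dotenv report
+artifact (`artifacts:reports:dotenv`) instead of `export`. The following sketch uses
+hypothetical job and variable names and is separate from the example above:
+
+```yaml
+build-job:
+  stage: build
+  script:
+    # Write the value to a dotenv file instead of exporting it
+    - echo "BUILD_VERSION=1.2.3" >> build.env
+  artifacts:
+    reports:
+      dotenv: build.env
+
+deploy-job:
+  stage: deploy
+  script:
+    # BUILD_VERSION is available here as a CI/CD variable created from the dotenv report
+    - echo "Deploying version ${BUILD_VERSION}"
+```
+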
## Related topics
- You can configure [Auto DevOps](../../topics/autodevops/index.md) to pass CI/CD variables
diff --git a/doc/ci/variables/predefined_variables.md b/doc/ci/variables/predefined_variables.md
index a77ba781d7d..cd23b903d30 100644
--- a/doc/ci/variables/predefined_variables.md
+++ b/doc/ci/variables/predefined_variables.md
@@ -107,10 +107,10 @@ as it can cause the pipeline to behave unexpectedly.
| `CI_PROJECT_URL` | 8.10 | 0.5 | The HTTP(S) address of the project. |
| `CI_PROJECT_VISIBILITY` | 10.3 | all | The project visibility. Can be `internal`, `private`, or `public`. |
| `CI_PROJECT_CLASSIFICATION_LABEL` | 14.2 | all | The project [external authorization classification label](../../administration/settings/external_authorization.md). |
-| `CI_REGISTRY_IMAGE` | 8.10 | 0.5 | The address of the project's Container Registry. Only available if the Container Registry is enabled for the project. |
+| `CI_REGISTRY` | 8.10 | 0.5 | Address of the [Container Registry](../../user/packages/container_registry/index.md) server, formatted as `<host>[:<port>]`. For example: `registry.gitlab.example.com`. Only available if the Container Registry is enabled for the GitLab instance. |
+| `CI_REGISTRY_IMAGE` | 8.10 | 0.5 | Base address for the container registry to push, pull, or tag project's images, formatted as `<host>[:<port>]/<project_full_path>`. For example: `registry.gitlab.example.com/my_group/my_project`. Image names must follow the [container registry naming convention](../../user/packages/container_registry/index.md#naming-convention-for-your-container-images). Only available if the Container Registry is enabled for the project. |
| `CI_REGISTRY_PASSWORD` | 9.0 | all | The password to push containers to the project's GitLab Container Registry. Only available if the Container Registry is enabled for the project. This password value is the same as the `CI_JOB_TOKEN` and is valid only as long as the job is running. Use the `CI_DEPLOY_PASSWORD` for long-lived access to the registry |
| `CI_REGISTRY_USER` | 9.0 | all | The username to push containers to the project's GitLab Container Registry. Only available if the Container Registry is enabled for the project. |
-| `CI_REGISTRY` | 8.10 | 0.5 | The address of the GitLab Container Registry. Only available if the Container Registry is enabled for the project. This variable includes a `:port` value if one is specified in the registry configuration. |
| `CI_REPOSITORY_URL` | 9.0 | all | The full path to Git clone (HTTP) the repository with a [CI/CD job token](../jobs/ci_job_token.md), in the format `https://gitlab-ci-token:$CI_JOB_TOKEN@gitlab.example.com/my-group/my-project.git`. |
| `CI_RUNNER_DESCRIPTION` | 8.10 | 0.5 | The description of the runner. |
| `CI_RUNNER_EXECUTABLE_ARCH` | all | 10.6 | The OS/architecture of the GitLab Runner executable. Might not be the same as the environment of the executor. |
@@ -140,9 +140,9 @@ as it can cause the pipeline to behave unexpectedly.
| `GITLAB_CI` | all | all | Available for all jobs executed in CI/CD. `true` when available. |
| `GITLAB_FEATURES` | 10.6 | all | The comma-separated list of licensed features available for the GitLab instance and license. |
| `GITLAB_USER_EMAIL` | 8.12 | all | The email of the user who started the pipeline, unless the job is a manual job. In manual jobs, the value is the email of the user who started the job. |
-| `GITLAB_USER_ID` | 8.12 | all | The ID of the user who started the pipeline, unless the job is a manual job. In manual jobs, the value is the ID of the user who started the job. |
+| `GITLAB_USER_ID` | 8.12 | all | The numeric ID of the user who started the pipeline, unless the job is a manual job. In manual jobs, the value is the ID of the user who started the job. |
| `GITLAB_USER_LOGIN` | 10.0 | all | The username of the user who started the pipeline, unless the job is a manual job. In manual jobs, the value is the username of the user who started the job. |
-| `GITLAB_USER_NAME` | 10.0 | all | The name of the user who started the pipeline, unless the job is a manual job. In manual jobs, the value is the name of the user who started the job. |
+| `GITLAB_USER_NAME` | 10.0 | all | The display name of the user who started the pipeline, unless the job is a manual job. In manual jobs, the value is the name of the user who started the job. |
| `KUBECONFIG` | 14.2 | all | The path to the `kubeconfig` file with contexts for every shared agent connection. Only available when a [GitLab agent is authorized to access the project](../../user/clusters/agent/ci_cd_workflow.md#authorize-the-agent). |
| `TRIGGER_PAYLOAD` | 13.9 | all | The webhook payload. Only available when a pipeline is [triggered with a webhook](../triggers/index.md#access-webhook-payload). |
@@ -157,8 +157,8 @@ These variables are available when:
|---------------------------------------------|--------|--------|-------------|
| `CI_MERGE_REQUEST_APPROVED` | 14.1 | all | Approval status of the merge request. `true` when [merge request approvals](../../user/project/merge_requests/approvals/index.md) is available and the merge request has been approved. |
| `CI_MERGE_REQUEST_ASSIGNEES` | 11.9 | all | Comma-separated list of usernames of assignees for the merge request. |
-| `CI_MERGE_REQUEST_ID` | 11.6 | all | The instance-level ID of the merge request. This is a unique ID across all projects on GitLab. |
-| `CI_MERGE_REQUEST_IID` | 11.6 | all | The project-level IID (internal ID) of the merge request. This ID is unique for the current project. |
+| `CI_MERGE_REQUEST_ID` | 11.6 | all | The instance-level ID of the merge request. This is a unique ID across all projects on the GitLab instance. |
+| `CI_MERGE_REQUEST_IID` | 11.6 | all | The project-level IID (internal ID) of the merge request. This ID is unique for the current project, and is the number used in the merge request URL, page title, and other visible locations. |
| `CI_MERGE_REQUEST_LABELS` | 11.9 | all | Comma-separated label names of the merge request. |
| `CI_MERGE_REQUEST_MILESTONE` | 11.9 | all | The milestone title of the merge request. |
| `CI_MERGE_REQUEST_PROJECT_ID` | 11.6 | all | The ID of the project of the merge request. |
diff --git a/doc/ci/yaml/gitlab_ci_yaml.md b/doc/ci/yaml/gitlab_ci_yaml.md
index 920abf50546..a0e1ce04fad 100644
--- a/doc/ci/yaml/gitlab_ci_yaml.md
+++ b/doc/ci/yaml/gitlab_ci_yaml.md
@@ -1,89 +1,11 @@
---
-stage: Verify
-group: Pipeline Authoring
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
-type: reference
+redirect_to: '../index.md#the-gitlab-ciyml-file'
+remove_date: '2024-01-30'
---
-# The `.gitlab-ci.yml` file **(FREE ALL)**
+This document was moved to [another location](../index.md#the-gitlab-ciyml-file).
-To use GitLab CI/CD, you need:
-
-- Application code hosted in a Git repository.
-- A file called [`.gitlab-ci.yml`](index.md) in the root of your repository, which
- contains the CI/CD configuration.
-
-In the `.gitlab-ci.yml` file, you can define:
-
-- The scripts you want to run.
-- Other configuration files and templates you want to include.
-- Dependencies and caches.
-- The commands you want to run in sequence and those you want to run in parallel.
-- The location to deploy your application to.
-- Whether you want to run the scripts automatically or trigger any of them manually.
-
-The scripts are grouped into **jobs**, and jobs run as part of a larger
-**pipeline**. You can group multiple independent jobs into **stages** that run in a defined order.
-The CI/CD configuration needs at least one job that is [not hidden](../jobs/index.md#hide-jobs).
-
-You should organize your jobs in a sequence that suits your application and is in accordance with
-the tests you wish to perform. To [visualize](../pipeline_editor/index.md#visualize-ci-configuration) the process, imagine
-the scripts you add to jobs are the same as CLI commands you run on your computer.
-
-When you add a `.gitlab-ci.yml` file to your
-repository, GitLab detects it and an application called [GitLab Runner](https://docs.gitlab.com/runner/)
-runs the scripts defined in the jobs.
-
-A `.gitlab-ci.yml` file might contain:
-
-```yaml
-stages:
- - build
- - test
-
-build-code-job:
- stage: build
- script:
- - echo "Check the ruby version, then build some Ruby project files:"
- - ruby -v
- - rake
-
-test-code-job1:
- stage: test
- script:
- - echo "If the files are built successfully, test some files with one command:"
- - rake test1
-
-test-code-job2:
- stage: test
- script:
- - echo "If the files are built successfully, test other files with a different command:"
- - rake test2
-```
-
-In this example, the `build-code-job` job in the `build` stage runs first. It outputs
-the Ruby version the job is using, then runs `rake` to build project files.
-If this job completes successfully, the two `test-code-job` jobs in the `test` stage start
-in parallel and run tests on the files.
-
-The full pipeline in the example is composed of three jobs, grouped into two stages,
-`build` and `test`. The pipeline runs every time changes are pushed to any
-branch in the project.
-
-GitLab CI/CD not only executes the jobs but also shows you what's happening during execution,
-just as you would see in your terminal:
-
-![job running](img/job_running_v13_10.png)
-
-You create the strategy for your app and GitLab runs the pipeline
-according to what you've defined. Your pipeline status is also
-displayed by GitLab:
-
-![pipeline status](img/pipeline_status.png)
-
-If anything goes wrong, you can
-[roll back](../environments/index.md#retry-or-roll-back-a-deployment) the changes:
-
-![rollback button](img/rollback.png)
-
-[View the full syntax for the `.gitlab-ci.yml` file](index.md).
+<!-- This redirect file can be deleted after <2024-01-30>. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Redirects that point to docs in a different project or site (link is not relative and starts with `https:`) expire in one year. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
diff --git a/doc/ci/yaml/img/job_running_v13_10.png b/doc/ci/yaml/img/job_running_v13_10.png
deleted file mode 100644
index b1f21b8445f..00000000000
--- a/doc/ci/yaml/img/job_running_v13_10.png
+++ /dev/null
Binary files differ
diff --git a/doc/ci/yaml/img/pipeline_status.png b/doc/ci/yaml/img/pipeline_status.png
deleted file mode 100644
index 96881f072e1..00000000000
--- a/doc/ci/yaml/img/pipeline_status.png
+++ /dev/null
Binary files differ
diff --git a/doc/ci/yaml/img/rollback.png b/doc/ci/yaml/img/rollback.png
deleted file mode 100644
index 38e0552f4f1..00000000000
--- a/doc/ci/yaml/img/rollback.png
+++ /dev/null
Binary files differ
diff --git a/doc/ci/yaml/index.md b/doc/ci/yaml/index.md
index 66a5fe61a1d..9b781ca6d13 100644
--- a/doc/ci/yaml/index.md
+++ b/doc/ci/yaml/index.md
@@ -5,13 +5,14 @@ info: To determine the technical writer assigned to the Stage/Group associated w
type: reference
---
-# `.gitlab-ci.yml` keyword reference **(FREE ALL)**
+# CI/CD YAML syntax reference **(FREE ALL)**
This document lists the configuration options for the GitLab `.gitlab-ci.yml` file.
This file is where you define the CI/CD jobs that make up your pipeline.
-- To create your own `.gitlab-ci.yml` file, try a tutorial that demonstrates a
- [simple](../quick_start/index.md) or [complex](../quick_start/tutorial.md) pipeline.
+- If you are already familiar with [basic CI/CD concepts](../index.md), try creating
+ your own `.gitlab-ci.yml` file by following a tutorial that demonstrates a [simple](../quick_start/index.md)
+ or [complex](../quick_start/tutorial.md) pipeline.
- For a collection of examples, see [GitLab CI/CD examples](../examples/index.md).
- To view a large `.gitlab-ci.yml` file used in an enterprise, see the
[`.gitlab-ci.yml` file for `gitlab`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab-ci.yml).
@@ -35,6 +36,12 @@ A GitLab CI/CD pipeline configuration includes:
| [`variables`](#variables) | Define CI/CD variables for all jobs in the pipeline. |
| [`workflow`](#workflow) | Control what types of pipeline run. |
+- [Header keywords](#header-keywords)
+
+ | Keyword | Description |
+ |-----------------|:------------|
+ | [`spec`](#spec) | Define specifications for external configuration files. |
+
- [Jobs](../jobs/index.md) configured with [job keywords](#job-keywords):
| Keyword | Description |
@@ -349,6 +356,42 @@ include:
- All [nested includes](includes.md#use-nested-includes) are executed without context as a public user,
so you can only include public projects or templates. No variables are available in the `include` section of nested includes.
+#### `include:inputs`
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/391331) in GitLab 15.11 as a Beta feature.
+
+Use `include:inputs` to set the values for input parameters when the included configuration
+uses [`spec:inputs`](#specinputs) and is added to the pipeline.
+
+**Keyword type**: Global keyword.
+
+**Possible inputs**: A string, numeric value, or boolean.
+
+**Example of `include:inputs`**:
+
+```yaml
+include:
+  - local: 'custom_configuration.yml'
+    inputs:
+      website: "My website"
+```
+
+In this example:
+
+- The configuration contained in `custom_configuration.yml` is added to the pipeline,
+ with a `website` input set to a value of `My website` for the included configuration.
+
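+For this to work, `custom_configuration.yml` must declare the `website` input in its
+header with [`spec:inputs`](#specinputs). A minimal sketch, in which the job itself is
+only illustrative:
+
+```yaml
+spec:
+  inputs:
+    website:
+---
+
+scan-website:
+  script: echo "Scanning $[[ inputs.website ]]"
+```
+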
+**Additional details**:
+
+- If the included configuration file uses [`spec:inputs:type`](#specinputstype),
+ the input value must match the defined type.
+- If the included configuration file uses [`spec:inputs:options`](#specinputsoptions),
+ the input value must match one of the listed options.
+
+**Related topics**:
+
+- [Set input values when using `include`](inputs.md#set-input-values-when-using-include).
+
### `stages`
Use `stages` to define stages that contain groups of jobs. Use [`stage`](#stage)
@@ -592,6 +635,193 @@ When the branch is something else:
- Use [`inherit:variables`](#inheritvariables) in the trigger job and list the
exact variables you want to forward to the downstream pipeline.
+## Header keywords
+
+Some keywords must be defined in a header section of a YAML configuration file.
+The header must be at the top of the file, separated from the rest of the configuration
+with `---`.
+
+### `spec`
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/391331) in GitLab 15.11 as a Beta feature.
+
+Add a `spec` section to the header of a YAML file to configure the behavior of a pipeline
+when a configuration is added to the pipeline with the `include` keyword.
+
+#### `spec:inputs`
+
+You can use `spec:inputs` to define input parameters for the CI/CD configuration you intend to add
+to a pipeline with `include`. Use `include:inputs` to define the values to use when the pipeline runs.
+
+Use the inputs to customize the behavior of the configuration when it is included in another CI/CD configuration.
+
+Use the interpolation format `$[[ inputs.input-id ]]` to reference the values outside of the header section.
+Inputs are evaluated and interpolated when the configuration is fetched during pipeline creation, but before the
+configuration is merged with the contents of the `.gitlab-ci.yml` file.
+
+**Keyword type**: Header keyword. `specs` must be declared at the top of the configuration file,
+in a header section.
+
+**Possible inputs**: A hash of strings representing the expected inputs.
+
+**Example of `spec:inputs`**:
+
+```yaml
+spec:
+  inputs:
+    environment:
+    job-stage:
+---
+
+scan-website:
+  stage: $[[ inputs.job-stage ]]
+  script: ./scan-website $[[ inputs.environment ]]
+```
+
+**Additional details**:
+
+- Inputs are mandatory unless you use [`spec:inputs:default`](#specinputsdefault)
+ to set a default value.
+- Inputs expect strings unless you use [`spec:inputs:type`](#specinputstype) to set a
+ different input type.
+- A string containing an interpolation block must not exceed 1 MB.
+- The string inside an interpolation block must not exceed 1 KB.
+
+**Related topics**:
+
+- [Define input parameters with `spec:inputs`](inputs.md#define-input-parameters-with-specinputs).
+
+##### `spec:inputs:default`
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/391331) in GitLab 15.11 as a Beta feature.
+
+Inputs are mandatory when included, unless you set a default value with `spec:inputs:default`.
+
+Use `default: null` to have no default value.
+
+**Keyword type**: Header keyword. `specs` must be declared at the top of the configuration file,
+in a header section.
+
+**Possible inputs**: A string representing the default value, or `null`.
+
+**Example of `spec:inputs:default`**:
+
+```yaml
+spec:
+  inputs:
+    website:
+    user:
+      default: 'test-user'
+    flags:
+      default: null
+---
+
+# The pipeline configuration would follow...
+```
+
+In this example:
+
+- `website` is mandatory and must be defined.
+- `user` is optional. If not defined, the value is `test-user`.
+- `flags` is optional. If not defined, it has no value.
+
+**Additional details**:
+
+- If an input uses both `default` and [`options`](#specinputsoptions), the default value
+ must be one of the listed options. If not, the pipeline fails with a validation error.
+
+##### `spec:inputs:description`
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/415637) in GitLab 16.5.
+
+Use `description` to give a description to a specific input. The description does
+not affect the behavior of the input and is only used to help users of the file
+understand the input.
+
+**Keyword type**: Header keyword. `specs` must be declared at the top of the configuration file,
+in a header section.
+
+**Possible inputs**: A string representing the description.
+
+**Example of `spec:inputs:description`**:
+
+```yaml
+spec:
+  inputs:
+    flags:
+      description: 'Sample description of the `flags` input details.'
+---
+
+# The pipeline configuration would follow...
+```
+
+##### `spec:inputs:options`
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/393401) in GitLab 16.6.
+
+Inputs can use `options` to specify a list of allowed values for an input.
+The limit is 50 options per input.
+
+**Keyword type**: Header keyword. `specs` must be declared at the top of the configuration file,
+in a header section.
+
+**Possible inputs**: An array of input options.
+
+**Example of `spec:inputs:options`**:
+
+```yaml
+spec:
+  inputs:
+    environment:
+      options:
+        - development
+        - staging
+        - production
+---
+
+# The pipeline configuration would follow...
+```
+
+In this example:
+
+- `environment` is mandatory and must be defined with one of the values in the list.
+
+**Additional details**:
+
+- If an input uses both [`default`](#specinputsdefault) and `options`, the default value
+ must be one of the listed options. If not, the pipeline fails with a validation error.
+
+##### `spec:inputs:type`
+
+By default, inputs expect strings. Use `spec:inputs:type` to set a different required
+type for inputs.
+
+**Keyword type**: Header keyword. `specs` must be declared at the top of the configuration file,
+in a header section.
+
+**Possible inputs**: Can be one of:
+
+- `string`, to accept string inputs (default when not defined).
+- `number`, to only accept numeric inputs.
+- `boolean`, to only accept `true` or `false` inputs.
+
+**Example of `spec:inputs:type`**:
+
+```yaml
+spec:
+  inputs:
+    job_name:
+    website:
+      type: string
+    port:
+      type: number
+    available:
+      type: boolean
+---
+
+# The pipeline configuration would follow...
+```
+
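+When a file declares typed inputs like the example above, the values set with
+[`include:inputs`](#includeinputs) must match those types. A sketch, assuming the
+specification above is saved in a hypothetical `typed-configuration.yml` file:
+
+```yaml
+include:
+  - local: 'typed-configuration.yml'
+    inputs:
+      job_name: my-job               # a string, the default type
+      website: "https://example.com" # a string
+      port: 8080                     # must be a number
+      available: true                # must be a boolean
+```
+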
## Job keywords
The following topics explain how to use keywords to configure CI/CD pipelines.
@@ -2025,7 +2255,7 @@ Use `hooks:pre_get_sources_script` to specify a list of commands to execute on t
before cloning the Git repository and any submodules.
You can use it for example to:
-- Adjust the [Git configuration](../troubleshooting.md#get_sources-job-section-fails-because-of-an-http2-problem).
+- Adjust the [Git configuration](../jobs/index.md#get_sources-job-section-fails-because-of-an-http2-problem).
- Export [tracing variables](../../topics/git/useful_git_commands.md).
**Possible inputs**: An array including:
@@ -2421,8 +2651,8 @@ This example creates four paths of execution:
**Additional details**:
- The maximum number of jobs that a single job can have in the `needs` array is limited:
- - For GitLab.com, the limit is 50. For more information, see our
- [infrastructure issue](https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues/7541).
+ - For GitLab.com, the limit is 50. For more information, see
+ [issue 350398](https://gitlab.com/gitlab-org/gitlab/-/issues/350398).
- For self-managed instances, the default limit is 50. This limit [can be changed](../../administration/cicd.md#set-the-needs-job-limit).
- If `needs` refers to a job that uses the [`parallel`](#parallel) keyword,
it depends on all jobs created in parallel, not just one job. It also downloads
@@ -2793,226 +3023,6 @@ The `linux:rspec` job runs as soon as the `linux:build: [aws, app1]` job finishe
script: echo "Running rspec on linux..."
```
-### `only` / `except`
-
-NOTE:
-`only` and `except` are not being actively developed. To control when to add jobs to pipelines,
-use [`rules`](#rules) instead.
-
-You can use `only` and `except` to control when to add jobs to pipelines.
-
-- Use `only` to define when a job runs.
-- Use `except` to define when a job **does not** run.
-
-See [specify when jobs run with `only` and `except`](../jobs/job_control.md#specify-when-jobs-run-with-only-and-except)
-for more details and examples.
-
-#### `only:refs` / `except:refs`
-
-NOTE:
-`only:refs` and `except:refs` are not being actively developed. To use refs, regular expressions,
-or variables to control when to add jobs to pipelines, use [`rules:if`](#rulesif) instead.
-
-Use the `only:refs` and `except:refs` keywords to control when to add jobs to a
-pipeline based on branch names or pipeline types.
-
-**Keyword type**: Job keyword. You can use it only as part of a job.
-
-**Possible inputs**: An array including any number of:
-
-- Branch names, for example `main` or `my-feature-branch`.
-- [Regular expressions](../jobs/job_control.md#only--except-regex-syntax)
- that match against branch names, for example `/^feature-.*/`.
-- The following keywords:
-
- | **Value** | **Description** |
- | -------------------------|-----------------|
- | `api` | For pipelines triggered by the [pipelines API](../../api/pipelines.md#create-a-new-pipeline). |
- | `branches` | When the Git reference for a pipeline is a branch. |
- | `chat` | For pipelines created by using a [GitLab ChatOps](../chatops/index.md) command. |
- | `external` | When you use CI services other than GitLab. |
- | `external_pull_requests` | When an external pull request on GitHub is created or updated (See [Pipelines for external pull requests](../ci_cd_for_external_repos/index.md#pipelines-for-external-pull-requests)). |
- | `merge_requests` | For pipelines created when a merge request is created or updated. Enables [merge request pipelines](../pipelines/merge_request_pipelines.md), [merged results pipelines](../pipelines/merged_results_pipelines.md), and [merge trains](../pipelines/merge_trains.md). |
- | `pipelines` | For [multi-project pipelines](../pipelines/downstream_pipelines.md#multi-project-pipelines) created by [using the API with `CI_JOB_TOKEN`](../pipelines/downstream_pipelines.md#trigger-a-multi-project-pipeline-by-using-the-api), or the [`trigger`](#trigger) keyword. |
- | `pushes` | For pipelines triggered by a `git push` event, including for branches and tags. |
- | `schedules` | For [scheduled pipelines](../pipelines/schedules.md). |
- | `tags` | When the Git reference for a pipeline is a tag. |
- | `triggers` | For pipelines created by using a [trigger token](../triggers/index.md#configure-cicd-jobs-to-run-in-triggered-pipelines). |
- | `web` | For pipelines created by selecting **Run pipeline** in the GitLab UI, from the project's **Build > Pipelines** section. |
-
-**Example of `only:refs` and `except:refs`**:
-
-```yaml
-job1:
- script: echo
- only:
- - main
- - /^issue-.*$/
- - merge_requests
-
-job2:
- script: echo
- except:
- - main
- - /^stable-branch.*$/
- - schedules
-```
-
-**Additional details**:
-
-- Scheduled pipelines run on specific branches, so jobs configured with `only: branches`
- run on scheduled pipelines too. Add `except: schedules` to prevent jobs with `only: branches`
- from running on scheduled pipelines.
-- `only` or `except` used without any other keywords are equivalent to `only: refs`
- or `except: refs`. For example, the following two jobs configurations have the same
- behavior:
-
- ```yaml
- job1:
- script: echo
- only:
- - branches
-
- job2:
- script: echo
- only:
- refs:
- - branches
- ```
-
-- If a job does not use `only`, `except`, or [`rules`](#rules), then `only` is set to `branches`
- and `tags` by default.
-
- For example, `job1` and `job2` are equivalent:
-
- ```yaml
- job1:
- script: echo "test"
-
- job2:
- script: echo "test"
- only:
- - branches
- - tags
- ```
-
-#### `only:variables` / `except:variables`
-
-NOTE:
-`only:variables` and `except:variables` are not being actively developed. To use refs,
-regular expressions, or variables to control when to add jobs to pipelines, use [`rules:if`](#rulesif) instead.
-
-Use the `only:variables` or `except:variables` keywords to control when to add jobs
-to a pipeline, based on the status of [CI/CD variables](../variables/index.md).
-
-**Keyword type**: Job keyword. You can use it only as part of a job.
-
-**Possible inputs**:
-
-- An array of [CI/CD variable expressions](../jobs/job_control.md#cicd-variable-expressions).
-
-**Example of `only:variables`**:
-
-```yaml
-deploy:
- script: cap staging deploy
- only:
- variables:
- - $RELEASE == "staging"
- - $STAGING
-```
-
-**Related topics**:
-
-- [`only:variables` and `except:variables` examples](../jobs/job_control.md#only-variables--except-variables-examples).
-
-#### `only:changes` / `except:changes`
-
-NOTE:
-`only:changes` and `except:changes` are not being actively developed. To use changed files
-to control when to add a job to a pipeline, use [`rules:changes`](#ruleschanges) instead.
-
-Use the `changes` keyword with `only` to run a job, or with `except` to skip a job,
-when a Git push event modifies a file.
-
-Use `changes` in pipelines with the following refs:
-
-- `branches`
-- `external_pull_requests`
-- `merge_requests` (see additional details about [using `only:changes` with merge request pipelines](../jobs/job_control.md#use-onlychanges-with-merge-request-pipelines))
-
-**Keyword type**: Job keyword. You can use it only as part of a job.
-
-**Possible inputs**: An array including any number of:
-
-- Paths to files.
-- Wildcard paths for single directories, for example `path/to/directory/*`, or a directory
- and all its subdirectories, for example `path/to/directory/**/*`.
-- Wildcard [glob](https://en.wikipedia.org/wiki/Glob_(programming)) paths for all
- files with the same extension or multiple extensions, for example `*.md` or `path/to/directory/*.{rb,py,sh}`.
- See the [Ruby `fnmatch` documentation](https://docs.ruby-lang.org/en/master/File.html#method-c-fnmatch)
- for the supported syntax list.
-- Wildcard paths to files in the root directory, or all directories, wrapped in double quotes.
- For example `"*.json"` or `"**/*.json"`.
-
-**Example of `only:changes`**:
-
-```yaml
-docker build:
- script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
- only:
- refs:
- - branches
- changes:
- - Dockerfile
- - docker/scripts/*
- - dockerfiles/**/*
- - more_scripts/*.{rb,py,sh}
- - "**/*.json"
-```
-
-**Additional details**:
-
-- `changes` resolves to `true` if any of the matching files are changed (an `OR` operation).
-- If you use refs other than `branches`, `external_pull_requests`, or `merge_requests`,
- `changes` can't determine if a given file is new or old and always returns `true`.
-- If you use `only: changes` with other refs, jobs ignore the changes and always run.
-- If you use `except: changes` with other refs, jobs ignore the changes and never run.
-
-**Related topics**:
-
-- [`only: changes` and `except: changes` examples](../jobs/job_control.md#onlychanges--exceptchanges-examples).
-- If you use `changes` with [only allow merge requests to be merged if the pipeline succeeds](../../user/project/merge_requests/merge_when_pipeline_succeeds.md#require-a-successful-pipeline-for-merge),
- you should [also use `only:merge_requests`](../jobs/job_control.md#use-onlychanges-with-merge-request-pipelines).
-- [Jobs or pipelines can run unexpectedly when using `only: changes`](../jobs/job_control.md#jobs-or-pipelines-run-unexpectedly-when-using-changes).
-
-#### `only:kubernetes` / `except:kubernetes`
-
-NOTE:
-`only:refs` and `except:refs` are not being actively developed. To control if jobs are added
-to the pipeline when the Kubernetes service is active in the project, use [`rules:if`](#rulesif)
-with the [`CI_KUBERNETES_ACTIVE`](../variables/predefined_variables.md) predefined CI/CD variable instead.
-
-Use `only:kubernetes` or `except:kubernetes` to control if jobs are added to the pipeline
-when the Kubernetes service is active in the project.
-
-**Keyword type**: Job-specific. You can use it only as part of a job.
-
-**Possible inputs**:
-
-- The `kubernetes` strategy accepts only the `active` keyword.
-
-**Example of `only:kubernetes`**:
-
-```yaml
-deploy:
- only:
- kubernetes: active
-```
-
-In this example, the `deploy` job runs only when the Kubernetes service is active
-in the project.
-
### `pages`
Use `pages` to define a [GitLab Pages](../../user/project/pages/index.md) job that
@@ -4929,9 +4939,9 @@ The following keywords are deprecated.
### Globally-defined `image`, `services`, `cache`, `before_script`, `after_script`
-Defining `image`, `services`, `cache`, `before_script`, and
-`after_script` globally is deprecated. Support could be removed
-from a future release.
+Defining `image`, `services`, `cache`, `before_script`, and `after_script` globally is deprecated.
+Using these keywords at the top level is still possible to ensure backwards compatibility,
+but could be scheduled for removal in a future milestone.
Use [`default`](#default) instead. For example:
@@ -4949,14 +4959,233 @@ default:
- rm -rf tmp/
```
-<!-- ## Troubleshooting
+### `only` / `except`
+
+NOTE:
+`only` and `except` are deprecated and not being actively developed. These keywords
+are still usable to ensure backwards compatibility, but could be scheduled for removal
+in a future milestone. To control when to add jobs to pipelines, use [`rules`](#rules) instead.
+
+You can use `only` and `except` to control when to add jobs to pipelines.
+
+- Use `only` to define when a job runs.
+- Use `except` to define when a job **does not** run.
+
+See [specify when jobs run with `only` and `except`](../jobs/job_control.md#specify-when-jobs-run-with-only-and-except)
+for more details and examples.
-Include any troubleshooting steps that you can foresee. If you know beforehand what issues
-one might have when setting this up, or when something is changed, or on upgrading, it's
-important to describe those, too. Think of things that may go wrong and include them here.
-This is important to minimize requests for support, and to avoid doc comments with
-questions that you know someone might ask.
+#### `only:refs` / `except:refs`
-Each scenario can be a third-level heading, for example, `### Getting error message X`.
-If you have none to add when creating a doc, leave this section in place
-but commented out to help encourage others to add to it in the future. -->
+NOTE:
+`only:refs` and `except:refs` are deprecated and not being actively developed. These keywords
+are still usable to ensure backwards compatibility, but could be scheduled for removal
+in a future milestone. To use refs, regular expressions, or variables to control
+when to add jobs to pipelines, use [`rules:if`](#rulesif) instead.
+
+You can use the `only:refs` and `except:refs` keywords to control when to add jobs to a
+pipeline based on branch names or pipeline types.
+
+**Keyword type**: Job keyword. You can use it only as part of a job.
+
+**Possible inputs**: An array including any number of:
+
+- Branch names, for example `main` or `my-feature-branch`.
+- [Regular expressions](../jobs/job_control.md#only--except-regex-syntax)
+ that match against branch names, for example `/^feature-.*/`.
+- The following keywords:
+
+ | **Value** | **Description** |
+ | -------------------------|-----------------|
+ | `api` | For pipelines triggered by the [pipelines API](../../api/pipelines.md#create-a-new-pipeline). |
+ | `branches` | When the Git reference for a pipeline is a branch. |
+ | `chat` | For pipelines created by using a [GitLab ChatOps](../chatops/index.md) command. |
+ | `external` | When you use CI services other than GitLab. |
+ | `external_pull_requests` | When an external pull request on GitHub is created or updated (See [Pipelines for external pull requests](../ci_cd_for_external_repos/index.md#pipelines-for-external-pull-requests)). |
+ | `merge_requests` | For pipelines created when a merge request is created or updated. Enables [merge request pipelines](../pipelines/merge_request_pipelines.md), [merged results pipelines](../pipelines/merged_results_pipelines.md), and [merge trains](../pipelines/merge_trains.md). |
+ | `pipelines` | For [multi-project pipelines](../pipelines/downstream_pipelines.md#multi-project-pipelines) created by [using the API with `CI_JOB_TOKEN`](../pipelines/downstream_pipelines.md#trigger-a-multi-project-pipeline-by-using-the-api), or the [`trigger`](#trigger) keyword. |
+ | `pushes` | For pipelines triggered by a `git push` event, including for branches and tags. |
+ | `schedules` | For [scheduled pipelines](../pipelines/schedules.md). |
+ | `tags` | When the Git reference for a pipeline is a tag. |
+ | `triggers` | For pipelines created by using a [trigger token](../triggers/index.md#configure-cicd-jobs-to-run-in-triggered-pipelines). |
+ | `web` | For pipelines created by selecting **Run pipeline** in the GitLab UI, from the project's **Build > Pipelines** section. |
+
+**Example of `only:refs` and `except:refs`**:
+
+```yaml
+job1:
+  script: echo
+  only:
+    - main
+    - /^issue-.*$/
+    - merge_requests
+
+job2:
+  script: echo
+  except:
+    - main
+    - /^stable-branch.*$/
+    - schedules
+```
+
+**Additional details**:
+
+- Scheduled pipelines run on specific branches, so jobs configured with `only: branches`
+  run on scheduled pipelines too. Add `except: schedules` to prevent jobs with `only: branches`
+  from running on scheduled pipelines.
+- `only` or `except` used without any other keywords are equivalent to `only: refs`
+  or `except: refs`. For example, the following two job configurations have the same
+  behavior:
+
+  ```yaml
+  job1:
+    script: echo
+    only:
+      - branches
+
+  job2:
+    script: echo
+    only:
+      refs:
+        - branches
+  ```
+
+- If a job does not use `only`, `except`, or [`rules`](#rules), then `only` is set to `branches`
+  and `tags` by default.
+
+  For example, `job1` and `job2` are equivalent:
+
+  ```yaml
+  job1:
+    script: echo "test"
+
+  job2:
+    script: echo "test"
+    only:
+      - branches
+      - tags
+  ```
+
+#### `only:variables` / `except:variables`
+
+NOTE:
+`only:variables` and `except:variables` are deprecated and not being actively developed.
+These keywords are still usable to ensure backwards compatibility, but could be scheduled
+for removal in a future milestone. To use refs, regular expressions, or variables
+to control when to add jobs to pipelines, use [`rules:if`](#rulesif) instead.
+
+You can use the `only:variables` or `except:variables` keywords to control when to add jobs
+to a pipeline, based on the status of [CI/CD variables](../variables/index.md).
+
+**Keyword type**: Job keyword. You can use it only as part of a job.
+
+**Possible inputs**:
+
+- An array of [CI/CD variable expressions](../jobs/job_control.md#cicd-variable-expressions).
+
+**Example of `only:variables`**:
+
+```yaml
+deploy:
+  script: cap staging deploy
+  only:
+    variables:
+      - $RELEASE == "staging"
+      - $STAGING
+```
+
+**Related topics**:
+
+- [`only:variables` and `except:variables` examples](../jobs/job_control.md#only-variables--except-variables-examples).
+
+#### `only:changes` / `except:changes`
+
+NOTE:
+`only:changes` and `except:changes` are deprecated and not being actively developed.
+These keywords are still usable to ensure backwards compatibility, but could be scheduled
+for removal in a future milestone. To use changed files to control when to add a job to a pipeline,
+use [`rules:changes`](#ruleschanges) instead.
+
+Use the `changes` keyword with `only` to run a job, or with `except` to skip a job,
+when a Git push event modifies a file.
+
+Use `changes` in pipelines with the following refs:
+
+- `branches`
+- `external_pull_requests`
+- `merge_requests` (see additional details about [using `only:changes` with merge request pipelines](../jobs/job_control.md#use-onlychanges-with-merge-request-pipelines))
+
+**Keyword type**: Job keyword. You can use it only as part of a job.
+
+**Possible inputs**: An array including any number of:
+
+- Paths to files.
+- Wildcard paths for single directories, for example `path/to/directory/*`, or a directory
+ and all its subdirectories, for example `path/to/directory/**/*`.
+- Wildcard [glob](https://en.wikipedia.org/wiki/Glob_(programming)) paths for all
+ files with the same extension or multiple extensions, for example `*.md` or `path/to/directory/*.{rb,py,sh}`.
+ See the [Ruby `fnmatch` documentation](https://docs.ruby-lang.org/en/master/File.html#method-c-fnmatch)
+ for the supported syntax list.
+- Wildcard paths to files in the root directory, or all directories, wrapped in double quotes.
+ For example `"*.json"` or `"**/*.json"`.
+
+**Example of `only:changes`**:
+
+```yaml
+docker build:
+  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
+  only:
+    refs:
+      - branches
+    changes:
+      - Dockerfile
+      - docker/scripts/*
+      - dockerfiles/**/*
+      - more_scripts/*.{rb,py,sh}
+      - "**/*.json"
+```
+
+**Additional details**:
+
+- `changes` resolves to `true` if any of the matching files are changed (an `OR` operation).
+- If you use refs other than `branches`, `external_pull_requests`, or `merge_requests`,
+ `changes` can't determine if a given file is new or old and always returns `true`.
+- If you use `only: changes` with other refs, jobs ignore the changes and always run.
+- If you use `except: changes` with other refs, jobs ignore the changes and never run.
+
+**Related topics**:
+
+- [`only: changes` and `except: changes` examples](../jobs/job_control.md#onlychanges--exceptchanges-examples).
+- If you use `changes` with [only allow merge requests to be merged if the pipeline succeeds](../../user/project/merge_requests/merge_when_pipeline_succeeds.md#require-a-successful-pipeline-for-merge),
+ you should [also use `only:merge_requests`](../jobs/job_control.md#use-onlychanges-with-merge-request-pipelines).
+- [Jobs or pipelines can run unexpectedly when using `only: changes`](../jobs/job_control.md#jobs-or-pipelines-run-unexpectedly-when-using-changes).
+
+#### `only:kubernetes` / `except:kubernetes`
+
+NOTE:
+`only:kubernetes` and `except:kubernetes` are deprecated and not being actively developed.
+These keywords are still usable to ensure backwards compatibility, but could be scheduled
+for removal in a future milestone. To control if jobs are added to the pipeline when
+the Kubernetes service is active in the project, use [`rules:if`](#rulesif) with the
+[`CI_KUBERNETES_ACTIVE`](../variables/predefined_variables.md) predefined CI/CD variable instead.
+
+Use `only:kubernetes` or `except:kubernetes` to control if jobs are added to the pipeline
+when the Kubernetes service is active in the project.
+
+**Keyword type**: Job-specific. You can use it only as part of a job.
+
+**Possible inputs**:
+
+- The `kubernetes` strategy accepts only the `active` keyword.
+
+**Example of `only:kubernetes`**:
+
+```yaml
+deploy:
+  only:
+    kubernetes: active
+```
+
+In this example, the `deploy` job runs only when the Kubernetes service is active
+in the project.
diff --git a/doc/ci/yaml/inputs.md b/doc/ci/yaml/inputs.md
index 9e084cf0020..089d6bc5b62 100644
--- a/doc/ci/yaml/inputs.md
+++ b/doc/ci/yaml/inputs.md
@@ -4,34 +4,34 @@ group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
-# Define inputs for configuration added with `include` **(FREE ALL BETA)**
+# Define inputs for configuration added with `include` **(FREE ALL)**
-> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/391331) in GitLab 15.11 as a Beta feature.
-
-FLAG:
-`spec` and `inputs` are experimental [Open Beta features](../../policy/experiment-beta-support.md#beta)
-and subject to change without notice.
+> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/391331) in GitLab 15.11 as a Beta feature.
+> - Made generally available in GitLab 16.6.
## Define input parameters with `spec:inputs`
-> `description` keyword [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/415637) in GitLab 16.5.
+> - `description` keyword [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/415637) in GitLab 16.5.
+> - `options` keyword [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/393401) in GitLab 16.6.
Use `spec:inputs` to define input parameters for CI/CD configuration intended to be added
-to a pipeline with `include`. Use [`include:inputs`](#set-input-parameter-values-with-includeinputs)
+to a pipeline with `include`. Use [`include:inputs`](#set-input-values-when-using-include)
to define the values to use when the pipeline runs.
The specs must be declared at the top of the configuration file, in a header section.
Separate the header from the rest of the configuration with `---`.
Use the interpolation format `$[[ inputs.input-id ]]` to reference the values outside of the header section.
-The inputs are evaluated and interpolated once, when the configuration is fetched
-during pipeline creation, but before the configuration is merged with the contents of the `.gitlab-ci.yml`.
+The inputs are evaluated and interpolated when the configuration is fetched during pipeline creation, but before the
+configuration is merged with the contents of the `.gitlab-ci.yml` file.
+
+For example, in a file named `custom_website_scan.yml`:
```yaml
spec:
  inputs:
-    environment:
    job-stage:
+    environment:
---
scan-website:
@@ -41,58 +41,58 @@ scan-website:
When using `spec:inputs`:
-- Defined inputs are mandatory by default.
-- Inputs can be made optional by specifying a `default`. Use `default: null` to have no default value.
-- You can optionally use `description` to give a description to a specific input.
+- Inputs are mandatory by default.
+- Inputs must be strings by default.
- A string containing an interpolation block must not exceed 1 MB.
- The string inside an interpolation block must not exceed 1 KB.
-For example, a `custom_configuration.yml`:
-
-```yaml
-spec:
- inputs:
- website:
- user:
- default: 'test-user'
- flags:
- default: null
- description: 'Sample description of the `flags` input detail.'
----
-
-# The pipeline configuration would follow...
-```
-
-In this example:
+Additionally, use the following keywords, which are combined in the example after this list:
-- `website` is mandatory and must be defined.
-- `user` is optional. If not defined, the value is `test-user`.
-- `flags` is optional. If not defined, it has no value. The optional description should give details about the input.
+- [`spec:inputs:default`](index.md#specinputsdefault) to define default values for inputs
+ when not specified. When you specify a default, the inputs are no longer mandatory.
+- [`spec:inputs:description`](index.md#specinputsdescription) to give a description to
+ a specific input. The description does not affect the input, but can help people
+ understand the input details or expected values.
+- [`spec:inputs:options`](index.md#specinputsoptions) to specify a list of allowed values
+ for an input.
+- [`spec:inputs:type`](index.md#specinputstype) to force a specific input type, which
+ can be `string` (the default type), `number`, or `boolean`.
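+
+A sketch that combines these keywords in one header, with illustrative input names and values:
+
+```yaml
+spec:
+  inputs:
+    environment:
+      description: 'Deployment target for the included jobs.'
+      options:
+        - staging
+        - production
+      default: 'staging'
+    parallel_jobs:
+      type: number
+      default: 2
+---
+
+# The pipeline configuration would follow...
+```
+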
-## Set input parameter values with `include:inputs`
+## Set input values when using `include`
> `include:with` [renamed to `include:inputs`](https://gitlab.com/gitlab-org/gitlab/-/issues/406780) in GitLab 16.0.
-Use `include:inputs` to set the values for the parameters when the included configuration
-is added to the pipeline.
+Use [`include:inputs`](index.md#includeinputs) to set the values for the parameters
+when the included configuration is added to the pipeline.
-For example, to include a `custom_configuration.yml` that has the same specs
+For example, to include a `custom_website_scan.yml` that has the same specs
as the [example above](#define-input-parameters-with-specinputs):
```yaml
include:
-  - local: 'custom_configuration.yml'
+  - local: 'custom_website_scan.yml'
    inputs:
-      website: "My website"
+      job-stage: post-deploy
+      environment: production
+
+stages:
+  - build
+  - test
+  - deploy
+  - post-deploy
+
+# The pipeline configuration would follow...
```
-In this example:
+In this example, the included configuration is added with:
-- `website` has a value of `My website` for the included configuration.
+- `job-stage` set to `post-deploy`, so the included job runs in the custom `post-deploy` stage.
+- `environment` set to `production`, so the included job runs for the production environment.
### Use `include:inputs` with multiple files
-`inputs` must be specified separately for each included file. For example:
+[`inputs`](index.md#includeinputs) must be specified separately for each included file.
+For example:
```yaml
include:
diff --git a/doc/development/ai_architecture.md b/doc/development/ai_architecture.md
index f03ffa748fa..54ad52f0c39 100644
--- a/doc/development/ai_architecture.md
+++ b/doc/development/ai_architecture.md
@@ -55,9 +55,8 @@ It is possible to utilize other models or technologies, however they will need t
The following models have been approved for use:
-- [OpenAI models](https://platform.openai.com/docs/models)
- Google's [Vertex AI](https://cloud.google.com/vertex-ai) and [model garden](https://cloud.google.com/model-garden)
-- [AI Code Suggestions](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/tree/main)
+- [Anthropic models](https://docs.anthropic.com/claude/reference/selecting-a-model)
- [Suggested reviewer](https://gitlab.com/gitlab-org/modelops/applied-ml/applied-ml-updates/-/issues/10)
### Vector stores
@@ -77,7 +76,7 @@ A [draft MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/122035) has b
The index function has been updated to improve search quality. This was tested locally by setting the `ivfflat.probes` value to `10` with the following SQL command:
```ruby
-Embedding::TanukiBotMvc.connection.execute("SET ivfflat.probes = 10")
+::Embedding::Vertex::GitlabDocumentation.connection.execute("SET ivfflat.probes = 10")
```
Setting the `probes` value for indexing improves results, as per the neighbor [documentation](https://github.com/ankane/neighbor#indexing).
diff --git a/doc/development/ai_features/duo_chat.md b/doc/development/ai_features/duo_chat.md
index 841123c803a..ad044f4a923 100644
--- a/doc/development/ai_features/duo_chat.md
+++ b/doc/development/ai_features/duo_chat.md
@@ -12,7 +12,6 @@ NOTE:
Use [this snippet](https://gitlab.com/gitlab-org/gitlab/-/snippets/2554994) for help automating the following section.
1. [Enable Anthropic API features](index.md#configure-anthropic-access).
-1. [Enable OpenAI support](index.md#configure-openai-access).
1. [Ensure the embedding database is configured](index.md#set-up-the-embedding-database).
1. Ensure that your current branch is up-to-date with `master`.
1. To access the GitLab Duo Chat interface, in the lower-left corner of any page, select **Help** and **Ask GitLab Duo Chat**.
@@ -86,19 +85,45 @@ gdk start
tail -f log/llm.log
```
-## Testing GitLab Duo Chat with predefined questions
+## Testing GitLab Duo Chat against real LLMs
-Because success of answers to user questions in GitLab Duo Chat heavily depends on toolchain and prompts of each tool, it's common that even a minor change in a prompt or a tool impacts processing of some questions. To make sure that a change in the toolchain doesn't break existing functionality, you can use the following rspecs to validate answers to some predefined questions:
+Because the success of answers to user questions in GitLab Duo Chat depends heavily
+on the toolchain and prompts of each tool, even a minor change in a
+prompt or a tool can affect how some questions are processed.
+
+To make sure that a change in the toolchain doesn't break existing
+functionality, you can use the following RSpec tests to validate answers to some
+predefined questions when using real LLMs:
```ruby
-export OPENAI_API_KEY='<key>'
-export ANTHROPIC_API_KEY='<key>'
-REAL_AI_REQUEST=1 rspec ee/spec/lib/gitlab/llm/chain/agents/zero_shot/executor_spec.rb
+export VERTEX_AI_EMBEDDINGS='true' # if using Vertex embeddings
+export ANTHROPIC_API_KEY='<key>' # can use dev value of Gitlab::CurrentSettings
+export VERTEX_AI_CREDENTIALS='<vertex-ai-credentials>' # can set as dev value of Gitlab::CurrentSettings.vertex_ai_credentials
+export VERTEX_AI_PROJECT='<vertex-project-name>' # can use dev value of Gitlab::CurrentSettings.vertex_ai_project
+
+REAL_AI_REQUEST=1 bundle exec rspec ee/spec/lib/gitlab/llm/chain/agents/zero_shot/executor_real_requests_spec.rb
```
When you need to update the test questions that require documentation embeddings,
make sure a new fixture is generated and committed together with the change.
+## Running the rspecs tagged with `real_ai_request`
+
+The RSpec tests tagged with the metadata `real_ai_request` can be run in the GitLab project's CI by triggering
+the `rspec-ee unit gitlab-duo-chat` job.
+The job runs with Vertex APIs enabled. The CI jobs are optional and allowed to fail to account for
+the non-deterministic nature of LLM responses.
+
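+For reference, a spec opts into these runs through standard RSpec metadata. The following is a minimal sketch; the description and example body are placeholders, not an actual spec from the codebase:
+
+```ruby
+# Minimal sketch: the :real_ai_request tag marks examples that call real LLMs.
+RSpec.describe 'GitLab Duo Chat zero-shot agent', :real_ai_request do
+  it 'answers a predefined documentation question' do
+    # Call the chat executor and assert on the real LLM answer here.
+  end
+end
+```
+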
+### Management of credentials and API keys for CI jobs
+
+All API keys required to run the RSpec tests should be [masked](../../ci/variables/index.md#mask-a-cicd-variable).
+
+The exception is GCP credentials, because they contain characters that prevent them from being masked.
+Because `rspec-ee unit gitlab-duo-chat` needs to run on MR branches, GCP credentials cannot be added as a protected variable
+and must be added as a regular CI variable.
+For security, the GCP credentials added to the GitLab project's CI, and the associated GCP project,
+must be sandboxed and must not have access to any production infrastructure.
+
## GraphQL Subscription
The GraphQL Subscription for Chat behaves slightly differently because it's user-centric. A user could have Chat open on multiple browser tabs, or also on their IDE.
diff --git a/doc/development/ai_features/index.md b/doc/development/ai_features/index.md
index 4401a7e3fb1..df1627f2dc3 100644
--- a/doc/development/ai_features/index.md
+++ b/doc/development/ai_features/index.md
@@ -15,7 +15,6 @@ info: To determine the technical writer assigned to the Stage/Group associated w
- Background workers execute
- GraphQL subscriptions deliver results back in real time
- Abstraction for
- - OpenAI
- Google Vertex AI
- Anthropic
- Rate Limiting
@@ -28,7 +27,6 @@ info: To determine the technical writer assigned to the Stage/Group associated w
- Automatic Markdown Rendering of responses
- Centralised Group Level settings for experiment and 3rd party
- Experimental API endpoints for exploration of AI APIs by GitLab team members without the need for credentials
- - OpenAI
- Google Vertex AI
- Anthropic
@@ -36,7 +34,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
Apply the following two feature flags to any AI feature work:
-- A general that applies to all AI features.
+- A general flag (`ai_global_switch`) that applies to all AI features.
- A flag specific to that feature. The feature flag name [must be different](../feature_flags/index.md#feature-flags-for-licensed-features) than the licensed feature name.
See the [feature flag tracker](https://gitlab.com/gitlab-org/gitlab/-/issues/405161) for the list of all feature flags and how to use them.
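
For example, a feature entry point typically checks both flags. The method name and the `:my_ai_feature` flag below are placeholders for illustration:

```ruby
# Illustrative guard combining the global AI switch with a feature-specific flag.
# `:my_ai_feature` is a placeholder flag name, not a real flag.
def my_ai_feature_enabled?(user)
  Feature.enabled?(:ai_global_switch, type: :ops) &&
    Feature.enabled?(:my_ai_feature, user)
end
```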
@@ -58,20 +56,19 @@ Use [this snippet](https://gitlab.com/gitlab-org/gitlab/-/snippets/2554994) for
1. Enable the required general feature flags:
```ruby
- Feature.enable(:openai_experimentation)
+ Feature.enable(:ai_global_switch, type: :ops)
```
1. Ensure you have followed [the process to obtain an EE license](https://about.gitlab.com/handbook/developer-onboarding/#working-on-gitlab-ee-developer-licenses) for your local instance
1. Simulate the GDK to [simulate SaaS](../ee_features.md#simulate-a-saas-instance) and ensure the group you want to test has an Ultimate license
-1. Enable `Experimental features` and `Third-party AI services`
+1. Enable `Experimental features`:
1. Go to the group with the Ultimate license
1. **Group Settings** > **General** -> **Permissions and group features**
1. Enable **Experiment features**
- 1. Enable **Third-party AI services**
1. Enable the specific feature flag for the feature you want to test
1. Set the required access token. To receive an access token:
1. For Vertex, follow the [instructions below](#configure-gcp-vertex-access).
- 1. For all other providers, like Anthropic or OpenAI, create an access request where `@m_gill`, `@wayne`, and `@timzallmann` are the tech stack owners.
+ 1. For all other providers, like Anthropic, create an access request where `@m_gill`, `@wayne`, and `@timzallmann` are the tech stack owners.
### Set up the embedding database
@@ -117,12 +114,6 @@ In order to obtain a GCP service key for local development, please follow the st
Gitlab::CurrentSettings.update(vertex_ai_project: PROJECT_ID)
```
-### Configure OpenAI access
-
-```ruby
-Gitlab::CurrentSettings.update(openai_api_key: "<open-ai-key>")
-```
-
### Configure Anthropic access
```ruby
@@ -131,36 +122,9 @@ Gitlab::CurrentSettings.update!(anthropic_api_key: <insert API key>)
### Populating embeddings and using embeddings fixture
-Currently we have embeddings generate both with OpenAI and VertexAI. Bellow sections explain how to populate
+Embeddings are generated through the VertexAI text embeddings endpoint. The sections below explain how to populate
embeddings in the DB or extract embeddings to be used in specs.
-FLAG:
-We are moving towards having VertexAI embeddings only, so eventually the OpenAI embeddings support will be drop
-as well as the section bellow will be removed.
-
-#### OpenAI embeddings
-
-To seed your development database with the embeddings for GitLab Documentation,
-you may use the pre-generated embeddings and a Rake task.
-
-```shell
-RAILS_ENV=development bundle exec rake gitlab:llm:embeddings:seed_pre_generated
-```
-
-The DBCleaner gem we use clear the database tables before each test runs.
-Instead of fully populating the table `tanuki_bot_mvc` where we store OpenAI embeddings for the documentations,
-we can add a few selected embeddings to the table from a pre-generated fixture.
-
-For instance, to test that the question "How can I reset my password" is correctly
-retrieving the relevant embeddings and answered, we can extract the top N closet embeddings
-to the question into a fixture and only restore a small number of embeddings quickly.
-To facilitate an extraction process, a Rake task been written.
-You can add or remove the questions needed to be tested in the Rake task and run the task to generate a new fixture.
-
-```shell
-RAILS_ENV=development bundle exec rake gitlab:llm:embeddings:extract_embeddings
-```
-
#### VertexAI embeddings
To seed your development database with the embeddings for GitLab Documentation,
@@ -210,9 +174,6 @@ Use the [experimental REST API endpoints](https://gitlab.com/gitlab-org/gitlab/-
The endpoints are:
-- `https://gitlab.example.com/api/v4/ai/experimentation/openai/completions`
-- `https://gitlab.example.com/api/v4/ai/experimentation/openai/embeddings`
-- `https://gitlab.example.com/api/v4/ai/experimentation/openai/chat/completions`
- `https://gitlab.example.com/api/v4/ai/experimentation/anthropic/complete`
- `https://gitlab.example.com/api/v4/ai/experimentation/vertex/chat`
@@ -257,11 +218,9 @@ mutation {
}
```
-The GraphQL API then uses the [OpenAI Client](https://gitlab.com/gitlab-org/gitlab/blob/master/ee/lib/gitlab/llm/open_ai/client.rb)
+The GraphQL API then uses the [Anthropic Client](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/gitlab/llm/anthropic/client.rb)
to send the response.
-Remember that other clients are available and you should not use OpenAI.
-
#### How to receive a response
The API requests to AI providers are handled in a background job. We therefore do not keep the request alive and the Frontend needs to match the request to the response from the subscription.
@@ -302,7 +261,7 @@ To not have many concurrent subscriptions, you should also only subscribe to the
#### Current abstraction layer flow
-The following graph uses OpenAI as an example. You can use different providers.
+The following graph uses VertexAI as an example. You can use different providers.
```mermaid
flowchart TD
@@ -311,9 +270,9 @@ B --> C[Llm::ExecuteMethodService]
C --> D[One of services, for example: Llm::GenerateSummaryService]
D -->|scheduled| E[AI worker:Llm::CompletionWorker]
E -->F[::Gitlab::Llm::Completions::Factory]
-F -->G[`::Gitlab::Llm::OpenAi::Completions::...` class using `::Gitlab::Llm::OpenAi::Templates::...` class]
-G -->|calling| H[Gitlab::Llm::OpenAi::Client]
-H --> |response| I[::Gitlab::Llm::OpenAi::ResponseService]
+F -->G[`::Gitlab::Llm::VertexAi::Completions::...` class using `::Gitlab::Llm::Templates::...` class]
+G -->|calling| H[Gitlab::Llm::VertexAi::Client]
+H --> |response| I[::Gitlab::Llm::GraphqlSubscriptionResponseService]
I --> J[GraphqlTriggers.ai_completion_response]
J --> K[::GitlabSchema.subscriptions.trigger]
```
@@ -419,11 +378,11 @@ end
We recommend using [policies](../policies.md) to handle authorization for a feature. Currently we need to make sure to cover the following checks:
-1. General AI feature flag is enabled
+1. General AI feature flag (`ai_global_switch`) is enabled
1. Feature specific feature flag is enabled
1. The namespace has the required license for the feature
1. User is a member of the group/project
-1. `experiment_features_enabled` and `third_party_ai_features_enabled` flags are set on the `Namespace`
+1. `experiment_features_enabled` settings are set on the `Namespace`
For our example, we need to implement the `allowed?(:amazing_new_ai_feature)` call. As an example, you can look at the [Issue Policy for the summarize comments feature](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/policies/ee/issue_policy.rb). In our example case, we want to implement the feature for Issues as well:
@@ -436,7 +395,7 @@ module EE
prepended do
with_scope :subject
condition(:ai_available) do
- ::Feature.enabled?(:openai_experimentation)
+ ::Feature.enabled?(:ai_global_switch, type: :ops)
end
with_scope :subject
@@ -501,10 +460,9 @@ Caching has following limitations:
### Check if feature is allowed for this resource based on namespace settings
-There are two settings allowed on root namespace level that restrict the use of AI features:
+There is one setting on the root namespace level that restricts the use of AI features:
- `experiment_features_enabled`
-- `third_party_ai_features_enabled`.
To check if that feature is allowed for a given namespace, call:
@@ -512,46 +470,39 @@ To check if that feature is allowed for a given namespace, call:
Gitlab::Llm::StageCheck.available?(namespace, :name_of_the_feature)
```
-Add the name of the feature to the `Gitlab::Llm::StageCheck` class. There are arrays there that differentiate
-between experimental and beta features.
+Add the name of the feature to the `Gitlab::Llm::StageCheck` class. There are
+arrays there that differentiate between experimental and beta features.
This way we are ready for the following different cases:
-- If the feature is not in any array, the check will return `true`. For example, the feature was moved to GA and does not use a third-party setting.
-- If feature is in GA, but uses a third-party setting, the class will return a proper answer based on the namespace third-party setting.
+- If the feature is not in any array, the check will return `true`. For example, the feature was moved to GA.
To move the feature from the experimental phase to the beta phase, move the name of the feature from the `EXPERIMENTAL_FEATURES` array to the `BETA_FEATURES` array.
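
As a rough sketch (the real class may differ), the arrays could look like the following, using `:amazing_new_ai_feature` from this example:

```ruby
# Hypothetical shape of the arrays in Gitlab::Llm::StageCheck. Moving the
# feature symbol from EXPERIMENTAL_FEATURES to BETA_FEATURES promotes it to beta.
EXPERIMENTAL_FEATURES = [:amazing_new_ai_feature].freeze
BETA_FEATURES = [].freeze
```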
### Implement calls to AI APIs and the prompts
The `CompletionWorker` will call the `Completions::Factory` which will initialize the Service and execute the actual call to the API.
-In our example, we will use OpenAI and implement two new classes:
+In our example, we will use VertexAI and implement two new classes:
```ruby
-# /ee/lib/gitlab/llm/open_ai/completions/amazing_new_ai_feature.rb
+# /ee/lib/gitlab/llm/vertex_ai/completions/amazing_new_ai_feature.rb
module Gitlab
module Llm
- module OpenAi
+ module VertexAi
module Completions
- class AmazingNewAiFeature
- def initialize(ai_prompt_class)
- @ai_prompt_class = ai_prompt_class
- end
+ class AmazingNewAiFeature < Gitlab::Llm::Completions::Base
+ def execute
+ prompt = ai_prompt_class.new(options[:user_input]).to_prompt
- def execute(user, issue, options)
- options = ai_prompt_class.get_options(options[:messages])
+ response = Gitlab::Llm::VertexAi::Client.new(user).text(content: prompt)
- ai_response = Gitlab::Llm::OpenAi::Client.new(user).chat(content: nil, **options)
+ response_modifier = ::Gitlab::Llm::VertexAi::ResponseModifiers::Predictions.new(response)
- ::Gitlab::Llm::OpenAi::ResponseService.new(user, issue, ai_response, options: {}).execute(
- Gitlab::Llm::OpenAi::ResponseModifiers::Chat.new
- )
+ ::Gitlab::Llm::GraphqlSubscriptionResponseService.new(
+ user, nil, response_modifier, options: response_options
+ ).execute
end
-
- private
-
- attr_reader :ai_prompt_class
end
end
end
@@ -560,28 +511,23 @@ end
```
```ruby
-# /ee/lib/gitlab/llm/open_ai/templates/amazing_new_ai_feature.rb
+# /ee/lib/gitlab/llm/vertex_ai/templates/amazing_new_ai_feature.rb
module Gitlab
module Llm
- module OpenAi
+ module VertexAi
module Templates
class AmazingNewAiFeature
- TEMPERATURE = 0.3
-
- def self.get_options(messages)
- system_content = <<-TEMPLATE
- You are an assistant that writes code for the following input:
- """
- TEMPLATE
-
- {
- messages: [
- { role: "system", content: system_content },
- { role: "user", content: messages },
- ],
- temperature: TEMPERATURE
- }
+ def initialize(user_input)
+ @user_input = user_input
+ end
+
+ def to_prompt
+ <<-PROMPT
+ You are an assistant that writes code for the following context:
+
+ context: #{user_input}
+ PROMPT
end
end
end
diff --git a/doc/development/api_graphql_styleguide.md b/doc/development/api_graphql_styleguide.md
index 3662b21eb9e..318f9bed6d3 100644
--- a/doc/development/api_graphql_styleguide.md
+++ b/doc/development/api_graphql_styleguide.md
@@ -154,7 +154,14 @@ developers must familiarize themselves with our [Deprecation and Removal process
Breaking changes are:
- Removing or renaming a field, argument, enum value, or mutation.
-- Changing the type of a field, argument or enum value.
+- Changing the type or type name of an argument. The type of an argument
+ is declared by the client when [using variables](https://graphql.org/learn/queries/#variables),
+ and a change would cause a query using the old type name to be rejected by the API.
+- Changing the [_scalar type_](https://graphql.org/learn/schema/#scalar-types) of a field or enum
+ value where it results in a change to how the value serializes to JSON.
+ For example, a change from a JSON String to a JSON Number, or a change to how a String is formatted.
+ A change to another [_object type_](https://graphql.org/learn/schema/#object-types-and-fields) can be
+ allowed so long as all scalar type fields of the object continue to serialize in the same way.
- Raising the [complexity](#max-complexity) of a field or complexity multipliers in a resolver.
- Changing a field from being _not_ nullable (`null: false`) to nullable (`null: true`), as
discussed in [Nullable fields](#nullable-fields).
diff --git a/doc/development/backend/create_source_code_be/gitaly_touch_points.md b/doc/development/backend/create_source_code_be/gitaly_touch_points.md
index c689af2f150..98607c7f6c7 100644
--- a/doc/development/backend/create_source_code_be/gitaly_touch_points.md
+++ b/doc/development/backend/create_source_code_be/gitaly_touch_points.md
@@ -19,9 +19,3 @@ All access to Gitaly from other parts of GitLab are through Create: Source Code
After a call is made to Gitaly, Git `commit` information is stored in memory. This information is wrapped by the [Ruby `Commit` Model](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/commit.rb), which is a wrapper around [`Gitlab::Git::Commit`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/git/commit.rb).
The `Commit` model acts like an ActiveRecord object, but it does not have a PostgreSQL backend. Instead, it maps back to Gitaly RPCs.
-
-## Rugged Patches
-
-Historically in GitLab, access to the server-based `git` repositories was provided through the [rugged](https://github.com/libgit2/rugged) RubyGem, which provides Ruby bindings to `libgit2`. This was further extended by what is termed "Rugged Patches", [a set of extensions to the Rugged library](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/57317). Rugged implementations of some of the most commonly-used RPCs can be [enabled via feature flags](../../gitaly.md#legacy-rugged-code).
-
-Rugged access requires the use of a NFS file system, a direction GitLab is moving away from in favor of Gitaly Cluster. Rugged has been proposed for [deprecation and removal](https://gitlab.com/gitlab-org/gitaly/-/issues/1690). Several large customers are still using NFS, and a specific removal date is not planned at this point.
diff --git a/doc/development/bulk_import.md b/doc/development/bulk_import.md
index 081af2b4e17..502bee97c9c 100644
--- a/doc/development/bulk_import.md
+++ b/doc/development/bulk_import.md
@@ -51,3 +51,12 @@ and its users.
The migration process starts with the creation of a [`BulkImport`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/bulk_import.rb)
record to keep track of the migration. From there all the code related to the
GitLab Group Migration can be found under the new `BulkImports` namespace in all the application layers.
+
+### Idempotency
+
+To ensure we don't get duplicate entries when re-running the same Sidekiq job, we cache each entry as it's processed and skip entries if they're present in the cache.
+
+There are two different strategies:
+
+- `BulkImports::Pipeline::HexdigestCacheStrategy`, which caches a hexdigest representation of the data.
+- `BulkImports::Pipeline::IndexCacheStrategy`, which caches the last processed index of an entry in a pipeline.
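+
+The following is a conceptual sketch of the index-based approach, not the actual `BulkImports` API:
+
+```ruby
+# Conceptual sketch only: remember the last processed index so that a retried
+# Sidekiq job skips entries it already handled. `import` and `cache` are
+# placeholders, not real BulkImports methods.
+def process(entries, cache)
+  last_index = cache.fetch(:last_processed_index, -1)
+
+  entries.each_with_index do |entry, index|
+    next if index <= last_index # already handled in a previous attempt
+
+    import(entry)
+    cache[:last_processed_index] = index
+  end
+end
+```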
diff --git a/doc/development/cells/index.md b/doc/development/cells/index.md
index 30dccd91c9d..1ab88e0d8c6 100644
--- a/doc/development/cells/index.md
+++ b/doc/development/cells/index.md
@@ -16,6 +16,7 @@ To make the application work within the GitLab Cells architecture, we need to fi
Here is the suggested approach:
1. Pick a workflow to fix.
+1. Find out which tables are affected when performing the chosen workflow. For example, [this note](https://gitlab.com/gitlab-org/gitlab/-/issues/428600#note_1610331742) describes how to identify all tables that are affected when a project is created in a group.
1. For each table affected by the chosen workflow, choose the appropriate
[GitLab schema](../database/multiple_databases.md#gitlab-schema).
1. Identify all cross-joins, cross-transactions, and cross-database foreign keys for
diff --git a/doc/development/code_review.md b/doc/development/code_review.md
index 8e6ea3d68e9..c2f2a7643ae 100644
--- a/doc/development/code_review.md
+++ b/doc/development/code_review.md
@@ -115,10 +115,10 @@ It picks reviewers and maintainers from the list at the
page, with these behaviors:
- It doesn't pick people whose Slack or [GitLab status](../user/profile/index.md#set-your-current-status):
- - Contains the string `OOO`, `PTO`, `Parental Leave`, or `Friends and Family`.
+ - Contains the string `OOO`, `PTO`, `Parental Leave`, `Friends and Family`, or `Conference`.
- GitLab user **Busy** indicator is set to `True`.
- Emoji is from one of these categories:
- - **On leave** - 🌴 `:palm_tree:`, 🏖️ `:beach:`, ⛱ `:beach_umbrella:`, 🏖 `:beach_with_umbrella:`, 🌞 `:sun_with_face:`, 🎡 `:ferris_wheel:`
+ - **On leave** - 🌴 `:palm_tree:`, 🏖️ `:beach:`, ⛱ `:beach_umbrella:`, 🏖 `:beach_with_umbrella:`, 🌞 `:sun_with_face:`, 🎡 `:ferris_wheel:`, 🏙 `:cityscape:`
- **Out sick** - 🌡️ `:thermometer:`, 🤒 `:face_with_thermometer:`
- **At capacity** - 🔴 `:red_circle:`
- **Focus mode** - 💡 `:bulb:` (focusing on their team's work)
@@ -295,6 +295,10 @@ up confusion or verify that the end result matches what they had in mind, to
database specialists to get input on the data model or specific queries, or to
any other developer to get an in-depth review of the solution.
+If you know you'll need many merge requests to deliver a feature (for example, you created a proof of concept and it is clear the feature will consist of 10+ merge requests),
+consider identifying reviewers and maintainers who already understand the feature and share its context with you, and direct all merge requests to them.
+The best DRI for finding these reviewers is the EM or Staff Engineer. Having stable reviewer counterparts for multiple merge requests with the same context improves efficiency.
+
If your merge request touches more than one domain (for example, Dynamic Analysis and GraphQL), ask for reviews from an expert from each domain.
If an author is unsure if a merge request needs a [domain expert's](#domain-experts) opinion,
@@ -764,7 +768,7 @@ A merge request may benefit from being considered a customer critical priority b
Properties of customer critical merge requests:
-- The [VP of Development](https://about.gitlab.com/job-families/engineering/development/management/vp/) ([@clefelhocz1](https://gitlab.com/clefelhocz1)) is the approver for deciding if a merge request qualifies as customer critical. Also, if two of his direct reports approve, that can also serve as approval.
+- A senior director or higher in Development must approve that a merge request qualifies as customer-critical. Alternatively, if two of their direct reports approve, that can also serve as approval.
- The DRI applies the `customer-critical-merge-request` label to the merge request.
- It is required that the reviewers and maintainers involved with a customer critical merge request are engaged as soon as this decision is made.
- It is required to prioritize work for those involved on a customer critical merge request so that they have the time available necessary to focus on it.
diff --git a/doc/development/contributing/first_contribution.md b/doc/development/contributing/first_contribution.md
index 3477590f40b..834f34328bc 100644
--- a/doc/development/contributing/first_contribution.md
+++ b/doc/development/contributing/first_contribution.md
@@ -343,7 +343,7 @@ Now you're ready to push changes from the community fork to the main GitLab repo
1. If you're happy with this merge request and want to start the review process, type
`@gitlab-bot ready` in a comment and then select **Comment**.
- ![GitLab bot ready comment](img/bot_ready.png)
+ ![GitLab bot ready comment](img/bot_ready_v16_6.png)
Someone from GitLab will look at your request and let you know what the next steps are.
diff --git a/doc/development/contributing/img/bot_ready.png b/doc/development/contributing/img/bot_ready.png
deleted file mode 100644
index 85116c8957b..00000000000
--- a/doc/development/contributing/img/bot_ready.png
+++ /dev/null
Binary files differ
diff --git a/doc/development/contributing/img/bot_ready_v16_6.png b/doc/development/contributing/img/bot_ready_v16_6.png
new file mode 100644
index 00000000000..a26971eefad
--- /dev/null
+++ b/doc/development/contributing/img/bot_ready_v16_6.png
Binary files differ
diff --git a/doc/development/dangerbot.md b/doc/development/dangerbot.md
index ef1e563b668..476d370e7ee 100644
--- a/doc/development/dangerbot.md
+++ b/doc/development/dangerbot.md
@@ -158,10 +158,9 @@ To enable the Dangerfile on another existing GitLab project, complete the follow
- if: $CI_SERVER_HOST == "gitlab.com"
```
-1. If your project is in the `gitlab-org` group, you don't need to set up any token as the `DANGER_GITLAB_API_TOKEN`
- variable is available at the group level. If not, follow these last steps:
- 1. Create a [Project access tokens](../user/project/settings/project_access_tokens.md).
- 1. Add the token as a CI/CD project variable named `DANGER_GITLAB_API_TOKEN`.
+1. Create a [project access token](../user/project/settings/project_access_tokens.md) with the `api` scope,
+   the `Developer` role (so that it can add labels), and no expiration date (which defaults to the maximum of one year).
+1. Add the token as a CI/CD project variable named `DANGER_GITLAB_API_TOKEN`.
You should add the ~"Danger bot" label to the merge request before sending it
for review.
diff --git a/doc/development/database/avoiding_downtime_in_migrations.md b/doc/development/database/avoiding_downtime_in_migrations.md
index 27ffd356df6..3b4b45935b9 100644
--- a/doc/development/database/avoiding_downtime_in_migrations.md
+++ b/doc/development/database/avoiding_downtime_in_migrations.md
@@ -583,7 +583,7 @@ visualized in Thanos ([see an example](https://thanos-query.ops.gitlab.net/graph
### Swap the columns (release N + 1)
-After the background is completed and the new `bigint` columns are populated for all records, we can
+After the background migration is complete and the new `bigint` columns are populated for all records, we can
swap the columns. Swapping is done with post-deployment migration. The exact process depends on the
table being converted, but in general it's done in the following steps:
@@ -591,8 +591,11 @@ table being converted, but in general it's done in the following steps:
migration has finished ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L13-18)).
If the migration has not completed, the subsequent steps fail anyway. By checking in advance we
aim to have more helpful error message.
-1. Create indexes using the `bigint` columns that match the existing indexes using the `integer`
-column ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L28-34)).
+1. Use the `add_bigint_column_indexes` helper method from the `Gitlab::Database::MigrationHelpers::ConvertToBigint` module
+   to create indexes with the `bigint` columns that match the existing indexes using the `integer` column
+   (see the sketch after this list).
+   - The helper method is expected to create all required `bigint` indexes, but double-check that none of the
+     existing indexes are missed. More information about the helper can be
+     found in merge request [135781](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/135781).
1. Create foreign keys (FK) using the `bigint` columns that match the existing FK using the
`integer` column. Do this both for FK referencing other tables, and FK that reference the table
that is being migrated ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L36-43)).
@@ -603,6 +606,8 @@ that is being migrated ([see an example](https://gitlab.com/gitlab-org/gitlab/-/
1. Swap the defaults ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L59-62)).
1. Swap the PK constraint (if any) ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L64-68)).
1. Remove old indexes and rename new ones ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L70-72)).
+   - Names of the `bigint` indexes created using the `add_bigint_column_indexes` helper can be retrieved by calling
+     `bigint_index_name` from the `Gitlab::Database::MigrationHelpers::ConvertToBigint` module.
1. Remove old foreign keys (if still present) and rename new ones ([see an example](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb#L74)).
See example [merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/66088), and [migration](https://gitlab.com/gitlab-org/gitlab/-/blob/41fbe34a4725a4e357a83fda66afb382828767b2/db/post_migrate/20210707210916_finalize_ci_stages_bigint_conversion.rb).
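
The following sketch shows how the `add_bigint_column_indexes` helper might be called in such a post-deployment migration. The migration class, table name, column name, and the helper's exact signature are assumptions for illustration; verify the interface in `Gitlab::Database::MigrationHelpers::ConvertToBigint` and merge request [135781](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/135781).

```ruby
# Illustrative sketch only: the helper signature and class names are assumptions.
class FinalizeExampleBigintConversion < Gitlab::Database::Migration[2.1]
  include Gitlab::Database::MigrationHelpers::ConvertToBigint # assumed to be required

  disable_ddl_transaction!

  def up
    # Create bigint indexes matching the existing integer-column indexes
    # (assumed arguments: table name, integer column name).
    add_bigint_column_indexes(:example_table, :example_column)
  end

  def down
    # No-op in this sketch; the real migration removes the created indexes.
  end
end
```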
diff --git a/doc/development/database/clickhouse/clickhouse_within_gitlab.md b/doc/development/database/clickhouse/clickhouse_within_gitlab.md
index 297776429d7..2f7a3c4dfe0 100644
--- a/doc/development/database/clickhouse/clickhouse_within_gitlab.md
+++ b/doc/development/database/clickhouse/clickhouse_within_gitlab.md
@@ -45,22 +45,39 @@ ClickHouse::Client.select('SELECT 1', :main)
## Database schema and migrations
-For the ClickHouse database there are no established schema migration procedures yet. We have very basic tooling to build up the database schema in the test environment from scratch using timestamp-prefixed SQL files.
-
-You can create a table by placing a new SQL file in the `db/click_house/main` folder:
-
-```sql
-// 20230811124511_create_issues.sql
-CREATE TABLE issues
-(
- id UInt64 DEFAULT 0,
- title String DEFAULT ''
-)
-ENGINE = MergeTree
-PRIMARY KEY (id)
+There are `bundle exec rake gitlab:clickhouse:migrate` and `bundle exec rake gitlab:clickhouse:rollback` tasks
+(introduced in [!136103](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/136103)).
+
+You can create a migration by adding a Ruby migration file in the `db/click_house/migrate` folder. The file name must be prefixed with a timestamp, in the format `YYYYMMDDHHMMSS_description_of_migration.rb`. For example:
+
+```ruby
+# 20230811124511_create_issues.rb
+# frozen_string_literal: true
+
+class CreateIssues < ClickHouse::Migration
+ def up
+ execute <<~SQL
+ CREATE TABLE issues
+ (
+ id UInt64 DEFAULT 0,
+ title String DEFAULT ''
+ )
+ ENGINE = MergeTree
+ PRIMARY KEY (id)
+ SQL
+ end
+
+ def down
+ execute <<~SQL
+      DROP TABLE issues
+ SQL
+ end
+end
```
-When you're working locally in your development environment, you can create or re-create your table schema by executing the respective `CREATE TABLE` statement. Alternatively, you can use the following snippet in the Rails console:
+When you're working locally in your development environment, you can create or re-create your table schema by
+executing `rake gitlab:clickhouse:rollback` and `rake gitlab:clickhouse:migrate`.
+Alternatively, you can use the following snippet in the Rails console:
```ruby
require_relative 'spec/support/database/click_house/hooks.rb'
diff --git a/doc/development/database/database_lab.md b/doc/development/database/database_lab.md
index 7edb8ab4de5..7cdf034844d 100644
--- a/doc/development/database/database_lab.md
+++ b/doc/development/database/database_lab.md
@@ -18,7 +18,7 @@ schema changes, like additional indexes or columns, in an isolated copy of produ
1. Select **Sign in with Google**. (Not GitLab, as you need Google SSO to connect with our project.)
1. After you sign in, select the GitLab organization and then visit "Ask Joe" in the sidebar.
1. Select the database you're testing against:
- - Most queries for the GitLab project run against `gitlab-production-tunnel-pg12`.
+ - Most queries for the GitLab project run against `gitlab-production-main`.
- If the query is for a CI table, select `gitlab-production-ci`.
- If the query is for the container registry, select `gitlab-production-registry`.
1. Type `explain <Query Text>` in the chat box to get a plan.
diff --git a/doc/development/database/iterating_tables_in_batches.md b/doc/development/database/iterating_tables_in_batches.md
index 84b82b16255..44a8c72ea2c 100644
--- a/doc/development/database/iterating_tables_in_batches.md
+++ b/doc/development/database/iterating_tables_in_batches.md
@@ -523,14 +523,14 @@ and resumed at any point. This capability is demonstrated in the following code
stop_at = Time.current + 3.minutes
count, last_value = Issue.each_batch_count do
- Time.current > stop_at # condition for stopping the counting
+ stop_at.past? # condition for stopping the counting
end
# Continue the counting later
stop_at = Time.current + 3.minutes
count, last_value = Issue.each_batch_count(last_count: count, last_value: last_value) do
- Time.current > stop_at
+ stop_at.past?
end
```
diff --git a/doc/development/database/loose_foreign_keys.md b/doc/development/database/loose_foreign_keys.md
index fd380bee385..08d618a26ae 100644
--- a/doc/development/database/loose_foreign_keys.md
+++ b/doc/development/database/loose_foreign_keys.md
@@ -251,8 +251,12 @@ When the loose foreign key definition is no longer needed (parent table is remov
we need to remove the definition from the YAML file and ensure that we don't leave pending deleted
records in the database.
-1. Remove the deletion tracking trigger from the parent table (if the parent table is still there).
1. Remove the loose foreign key definition from the configuration (`config/gitlab_loose_foreign_keys.yml`).
+
+The deletion tracking trigger needs to be removed only when the parent table no longer uses loose foreign keys.
+If the model still has at least one `loose_foreign_key` definition remaining, then these steps can be skipped:
+
+1. Remove the trigger from the parent table (if the parent table is still there).
1. Remove leftover deleted records from the `loose_foreign_keys_deleted_records` table.
Migration for removing the trigger:
diff --git a/doc/development/database/multiple_databases.md b/doc/development/database/multiple_databases.md
index 79e1d3c0578..a045d8ad144 100644
--- a/doc/development/database/multiple_databases.md
+++ b/doc/development/database/multiple_databases.md
@@ -49,11 +49,21 @@ The usage of schema enforces the base class to be used:
### Guidelines on choosing between `gitlab_main_cell` and `gitlab_main_clusterwide` schema
+Depending on the use case, your feature may be [cell-local or clusterwide](../../architecture/blueprints/cells/index.md#how-do-i-decide-whether-to-move-my-feature-to-the-cluster-cell-or-organization-level) and hence the tables used for the feature should also use the appropriate schema.
+
When you choose the appropriate schema for tables, consider the following guidelines as part of the [Cells](../../architecture/blueprints/cells/index.md) architecture:
- Default to `gitlab_main_cell`: We expect most tables to be assigned to the `gitlab_main_cell` schema by default. Choose this schema if the data in the table is related to `projects` or `namespaces`.
- Consult with the Tenant Scale group: If you believe that the `gitlab_main_clusterwide` schema is more suitable for a table, seek approval from the Tenant Scale group. This is crucial because it has scaling implications and may require reconsideration of the schema choice.
+To understand how existing tables are classified, you can use [this dashboard](https://manojmj.gitlab.io/tenant-scale-schema-progress/).
+
+After a schema has been assigned, the merge request pipeline might fail due to one or more of the following reasons, which can be rectified by following the linked guidelines:
+
+- [Cross-database joins](#suggestions-for-removing-cross-database-joins)
+- [Cross-database transactions](#fixing-cross-database-transactions)
+- [Cross-database foreign keys](#foreign-keys-that-cross-databases)
+
### The impact of `gitlab_schema`
The usage of `gitlab_schema` has a significant impact on the application.
diff --git a/doc/development/database/understanding_explain_plans.md b/doc/development/database/understanding_explain_plans.md
index 92688eb01dc..3e8978e1046 100644
--- a/doc/development/database/understanding_explain_plans.md
+++ b/doc/development/database/understanding_explain_plans.md
@@ -352,7 +352,6 @@ Indexes:
"index_users_on_static_object_token" UNIQUE, btree (static_object_token)
"index_users_on_unlock_token" UNIQUE, btree (unlock_token)
"index_on_users_name_lower" btree (lower(name::text))
- "index_users_on_accepted_term_id" btree (accepted_term_id)
"index_users_on_admin" btree (admin)
"index_users_on_created_at" btree (created_at)
"index_users_on_email_trigram" gin (email gin_trgm_ops)
diff --git a/doc/development/development_processes.md b/doc/development/development_processes.md
index 1cdf667a35f..fa221d5b51f 100644
--- a/doc/development/development_processes.md
+++ b/doc/development/development_processes.md
@@ -1,7 +1,7 @@
---
stage: none
group: unassigned
-info: "See the Technical Writers assigned to Development Guidelines: https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-development-guidelines"
+info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/ee/development/development_processes.html#development-guidelines-review.
---
# Development processes
@@ -35,32 +35,12 @@ Complementary reads:
### Development guidelines review
-When you submit a change to the GitLab development guidelines, who
-you ask for reviews depends on the level of change.
+For changes to development guidelines, request review and approval from an experienced GitLab Team Member.
-#### Wording, style, or link changes
-
-Not all changes require extensive review. For example, MRs that don't change the
-content's meaning or function can be reviewed, approved, and merged by any
-maintainer or Technical Writer. These can include:
-
-- Typo fixes.
-- Clarifying links, such as to external programming language documentation.
-- Changes to comply with the [Documentation Style Guide](documentation/index.md)
- that don't change the intent of the documentation page.
-
-#### Specific changes
-
-If the MR proposes changes that are limited to a particular stage, group, or team,
-request a review and approval from an experienced GitLab Team Member in that
-group. For example, if you're documenting a new internal API used exclusively by
+For example, if you're documenting a new internal API used exclusively by
a given group, request an engineering review from one of the group's members.
-After the engineering review is complete, assign the MR to the
-[Technical Writer associated with the stage and group](https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments)
-in the modified documentation page's metadata.
-If the page is not assigned to a specific group, follow the
-[Technical Writing review process for development guidelines](https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-development-guidelines).
+Small fixes, like typos, can be merged by any user with at least the Maintainer role.
#### Broader changes
@@ -85,7 +65,6 @@ In these cases, use the following workflow:
- [Quality](https://about.gitlab.com/handbook/engineering/quality/)
- [Engineering Productivity](https://about.gitlab.com/handbook/engineering/quality/engineering-productivity/)
- [Infrastructure](https://about.gitlab.com/handbook/engineering/infrastructure/)
- - [Technical Writing](https://about.gitlab.com/handbook/product/ux/technical-writing/)
You can skip this step for MRs authored by EMs or Staff Engineers responsible
for their area.
@@ -97,15 +76,15 @@ In these cases, use the following workflow:
author / approver of the MR.
If this is a significant change across multiple areas, request final review
- and approval from the VP of Development, the DRI for Development Guidelines,
- @clefelhocz1.
+ and approval from the VP of Development, who is the DRI for development guidelines.
+
+Any Maintainer can merge the MR.
-1. After all approvals are complete, assign the MR to the
- [Technical Writer associated with the stage and group](https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments)
- in the modified documentation page's metadata.
- If the page is not assigned to a specific group, follow the
- [Technical Writing review process for development guidelines](https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-development-guidelines).
- The Technical Writer may ask for additional approvals as previously suggested before merging the MR.
+#### Technical writing reviews
+
+If you would like a review by a technical writer, post a message in the `#docs` Slack channel.
+However, technical writers do not need to review the content, and any Maintainer
+other than the MR author can merge.
### Reviewer values
@@ -114,6 +93,8 @@ In these cases, use the following workflow:
As a reviewer or as a reviewee, make sure to familiarize yourself with
the [reviewer values](https://about.gitlab.com/handbook/engineering/workflow/reviewer-values/) we strive for at GitLab.
+Also, any doc content should follow the [Documentation Style Guide](documentation/index.md).
+
## Language-specific guides
### Go guides
@@ -123,3 +104,13 @@ the [reviewer values](https://about.gitlab.com/handbook/engineering/workflow/rev
### Shell Scripting guides
- [Shell scripting standards and style guidelines](shell_scripting_guide/index.md)
+
+## Clear written communication
+
+When writing a comment in an issue, a merge request, or any other mode of communication,
+follow the IETF standard [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt) when using terms like
+"MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY",
+and "OPTIONAL".
+
+This ensures that different team members from different cultures have a clear understanding of
+the terms being used.
diff --git a/doc/development/distributed_tracing.md b/doc/development/distributed_tracing.md
index da6af8b95ef..56c114ba8de 100644
--- a/doc/development/distributed_tracing.md
+++ b/doc/development/distributed_tracing.md
@@ -221,8 +221,8 @@ This configuration string uses the Jaeger driver `opentracing://jaeger` with the
| Name | Example | Description |
|------|-------|-------------|
| `udp_endpoint` | `localhost:6831` | This is the default. Configures Jaeger to send trace information to the UDP listener on port `6831` using compact thrift protocol. Note that we've experienced some issues with the [Jaeger Client for Ruby](https://github.com/salemove/jaeger-client-ruby) when using this protocol. |
-| `sampler` | `probabalistic` | Configures Jaeger to use a probabilistic random sampler. The rate of samples is configured by the `sampler_param` value. |
-| `sampler_param` | `0.01` | Use a ratio of `0.01` to configure the `probabalistic` sampler to randomly sample _1%_ of traces. |
+| `sampler` | `probabilistic` | Configures Jaeger to use a probabilistic random sampler. The rate of samples is configured by the `sampler_param` value. |
+| `sampler_param` | `0.01` | Use a ratio of `0.01` to configure the `probabilistic` sampler to randomly sample _1%_ of traces. |
| `service_name` | `api` | Override the service name used by the Jaeger backend. This parameter takes precedence over the application-supplied value. |
NOTE:
diff --git a/doc/development/documentation/styleguide/index.md b/doc/development/documentation/styleguide/index.md
index c3df15f1890..6158d60a0ba 100644
--- a/doc/development/documentation/styleguide/index.md
+++ b/doc/development/documentation/styleguide/index.md
@@ -1281,11 +1281,10 @@ You can use an automatic screenshot generator to take and compress screenshots.
#### Extending the tool
-To add an additional **screenshot generator**, complete the following steps:
+To add an additional screenshot generator:
-1. Locate the `spec/docs_screenshots` directory.
-1. Add a new file with a `_docs.rb` extension.
-1. Be sure to include the following information in the file:
+1. In the `spec/docs_screenshots` directory, add a new file with a `_docs.rb` extension.
+1. Add the following information to your file:
```ruby
require 'spec_helper'
@@ -1298,29 +1297,29 @@ To add an additional **screenshot generator**, complete the following steps:
end
```
-1. In addition, every `it` block must include the path where the screenshot is saved:
+1. To each `it` block, add the path where the screenshot is saved:
```ruby
- it 'user/packages/container_registry/img/project_image_repositories_list'
+ it '<path/to/images/directory>'
```
-##### Full page screenshots
+You can take a screenshot of a page with `visit <path>`.
+To avoid blank screenshots, use `expect` to wait for the content to load.
-To take a full page screenshot, `visit the page` and perform any expectation on real content (to have capybara wait till the page is ready and not take a white screenshot).
+##### Single-element screenshots
-##### Element screenshot
+You can take a screenshot of a single element.
-To have the screenshot focuses few more steps are needed:
+- Add the following to your screenshot generator file:
-- **find the area**: `screenshot_area = find('#js-registry-policies')`
-- **scroll the area in focus**: `scroll_to screenshot_area`
-- **wait for the content**: `expect(screenshot_area).to have_content 'Expiration interval'`
-- **set the crop area**: `set_crop_data(screenshot_area, 20)`
-
-In particular, `set_crop_data` accepts as arguments: a `DOM` element and a
-padding. The padding is added around the element, enlarging the screenshot area.
+ ```ruby
+ screenshot_area = find('<element>') # Find the element
+ scroll_to screenshot_area # Scroll to the element
+ expect(screenshot_area).to have_content '<content>' # Wait for the content you want to capture
+ set_crop_data(screenshot_area, <padding>) # Capture the element with added padding
+ ```
-Use `spec/docs_screenshots/container_registry_docs.rb` as a guide and as an example to create your own scripts.
+Use `spec/docs_screenshots/container_registry_docs.rb` as a guide to create your own scripts.
## Emoji
@@ -1731,6 +1730,7 @@ Some pages won't have a tier badge, because no obvious tier badge applies. For e
- Tutorials.
- Pages that compare features from different tiers.
- Pages in the `/development` folder. These pages are automatically assigned a `Contribute` badge.
+- Pages in the `/solutions` folder. These pages are automatically assigned a `Solutions` badge.
##### Administrator documentation tier badges
diff --git a/doc/development/documentation/styleguide/word_list.md b/doc/development/documentation/styleguide/word_list.md
index ad2cbee974b..1888d72f991 100644
--- a/doc/development/documentation/styleguide/word_list.md
+++ b/doc/development/documentation/styleguide/word_list.md
@@ -26,6 +26,15 @@ For guidance not on this page, we defer to these style guides:
<!-- Disable trailing punctuation in heading rule https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md#md026---trailing-punctuation-in-heading -->
<!-- markdownlint-disable MD026 -->
+## `.gitlab-ci.yml` file
+
+Use backticks and lowercase for **the `.gitlab-ci.yml` file**.
+
+When possible, use the full phrase: **the `.gitlab-ci.yml` file**
+
+Although users can specify another name for their CI/CD configuration file,
+in most cases, still refer to **the `.gitlab-ci.yml` file**.
+
## `&`
Do not use Latin abbreviations. Use **and** instead, unless you are documenting a UI element that uses an `&`.
@@ -383,9 +392,14 @@ Use **confirmation dialog** to describe the dialog that asks you to confirm an a
Do not use **confirmation box** or **confirmation dialog box**. See also [**dialog**](#dialog).
-## Container Registry
+## container registry
+
+When documenting GitLab container registry features and functionality, use lowercase.
+
+Use:
-Use title case for the GitLab Container Registry.
+- The GitLab container registry supports A, B, and C.
+- You can push a Docker image to your project's container registry.
## currently
@@ -783,7 +797,9 @@ Do not use **handy**. If the user doesn't find the feature or process to be hand
## high availability, HA
-Do not use **high availability** or **HA**. Instead, direct readers to the GitLab [reference architectures](../../../administration/reference_architectures/index.md) for information about configuring GitLab for handling greater amounts of users.
+Do not use **high availability** or **HA**, except in the GitLab [reference architectures](../../../administration/reference_architectures/index.md#high-availability-ha). Instead, direct readers to the reference architectures for more information about configuring GitLab to handle larger numbers of users.
+
+Do not use phrases like **high availability setup** to mean a multiple node environment. Instead, use **multi-node setup** or similar.
## higher
@@ -1303,6 +1319,14 @@ For example, you might write something like:
Use lowercase for **push rules**.
+## `README` file
+
+Use backticks and lowercase for **the `README` file**, or **the `README.md` file**.
+
+When possible, use the full phrase: **the `README` file**
+
+For plural, use **`README` files**.
+
## recommend, we recommend
Instead of **we recommend**, use **you should**. We want to talk to the user the way
diff --git a/doc/development/documentation/versions.md b/doc/development/documentation/versions.md
index dadae134f4c..bd83ed7eff2 100644
--- a/doc/development/documentation/versions.md
+++ b/doc/development/documentation/versions.md
@@ -119,9 +119,8 @@ To deprecate a page or topic:
You can add any additional context-specific details that might help users.
-1. Add the following HTML comments above and below the content.
- For `remove_date`, set a date three months after the release where it
- will be removed.
+1. Add the following HTML comments above and below the content. For `remove_date`,
+ set a date three months after the [release where it will be removed](https://about.gitlab.com/releases/).
```markdown
<!--- start_remove The following content will be removed on remove_date: 'YYYY-MM-DD' -->
diff --git a/doc/development/documentation/workflow.md b/doc/development/documentation/workflow.md
index eb1ea28d3b8..5c99f5c48df 100644
--- a/doc/development/documentation/workflow.md
+++ b/doc/development/documentation/workflow.md
@@ -36,6 +36,13 @@ A member of the Technical Writing team adds these labels:
`docs::` prefix. For example, `~docs::improvement`.
- The [`~Technical Writing` team label](../labels/index.md#team-labels).
+NOTE:
+With the exception of `/doc/development/documentation`,
+technical writers do not review content in the `doc/development` directory.
+Any Maintainer can merge content in the `doc/development` directory.
+If you would like a technical writer review of content in the `doc/development` directory,
+ask in the `#docs` Slack channel.
+
## Post-merge reviews
If not assigned to a Technical Writer for review prior to merging, a review must be scheduled
@@ -65,6 +72,11 @@ Remember:
- The Technical Writer can also help decide that documentation can be merged without Technical
writer review, with the review to occur soon after merge.
+## Pages with no tech writer review
+
+The documentation under `/doc/solutions` is created, maintained, copy edited,
+and merged by the Solutions Architect team.
+
## Do not use ChatGPT or AI-generated content for the docs
GitLab documentation is distributed under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/), which presupposes that GitLab owns the documentation.
diff --git a/doc/development/ee_features.md b/doc/development/ee_features.md
index 10943b2d135..d05249f3d3f 100644
--- a/doc/development/ee_features.md
+++ b/doc/development/ee_features.md
@@ -38,10 +38,10 @@ context rich definitions around the reason the feature is SaaS-only.
1. Add the new feature to `FEATURE` in `ee/lib/ee/gitlab/saas.rb`.
```ruby
- FEATURES = %w[purchases/additional_minutes some_domain/new_feature_name].freeze
+ FEATURES = %i[purchases_additional_minutes some_domain_new_feature_name].freeze
```
-1. Use the new feature in code with `Gitlab::Saas.feature_available?('some_domain/new_feature_name')`.
+1. Use the new feature in code with `Gitlab::Saas.feature_available?(:some_domain_new_feature_name)`.
#### SaaS-only feature definition and validation
@@ -68,7 +68,7 @@ Each SaaS feature is defined in a separate YAML file consisting of a number of f
Prepend the `ee/lib/ee/gitlab/saas.rb` module and override the `Gitlab::Saas.feature_available?` method.
```ruby
-JH_DISABLED_FEATURES = %w[some_domain/new_feature_name].freeze
+JH_DISABLED_FEATURES = %i[some_domain_new_feature_name].freeze
override :feature_available?
def feature_available?(feature)
@@ -78,7 +78,7 @@ end
### Do not use SaaS-only features for functionality in CE
-`Gitlab::Saas.feature_vailable?` must not appear in CE.
+`Gitlab::Saas.feature_available?` must not appear in CE.
See [extending CE with EE guide](#extend-ce-features-with-ee-backend-code).
### SaaS-only features in tests
@@ -88,30 +88,30 @@ It is strongly advised to include automated tests for all code affected by a Saa
to ensure the feature works properly.
To enable a SaaS-only feature in a test, use the `stub_saas_features`
-helper. For example, to globally disable the `purchases/additional_minutes` feature
+helper. For example, to globally disable the `purchases_additional_minutes` feature
flag in a test:
```ruby
-stub_saas_features('purchases/additional_minutes' => false)
+stub_saas_features(purchases_additional_minutes: false)
-::Gitlab::Saas.feature_available?('purchases/additional_minutes') # => false
+::Gitlab::Saas.feature_available?(:purchases_additional_minutes) # => false
```
A common pattern of testing both paths looks like:
```ruby
it 'purchases_additional_minutes is not available' do
- # tests assuming purchases/additional_minutes is not enabled by default
- ::Gitlab::Saas.feature_available?('purchases/additional_minutes') # => false
+ # tests assuming purchases_additional_minutes is not enabled by default
+ ::Gitlab::Saas.feature_available?(:purchases_additional_minutes) # => false
end
-context 'when purchases/additional_minutes is available' do
+context 'when purchases_additional_minutes is available' do
before do
- stub_saas_features('purchases/additional_minutes' => true)
+ stub_saas_features(purchases_additional_minutes: true)
end
it 'returns true' do
- ::Gitlab::Saas.feature_available?('purchases/additional_minutes') # => true
+ ::Gitlab::Saas.feature_available?(:purchases_additional_minutes) # => true
end
end
```
diff --git a/doc/development/experiment_guide/implementing_experiments.md b/doc/development/experiment_guide/implementing_experiments.md
index 83369ad8e34..15b8f8fc192 100644
--- a/doc/development/experiment_guide/implementing_experiments.md
+++ b/doc/development/experiment_guide/implementing_experiments.md
@@ -8,7 +8,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
## Implementing an experiment
-[Examples](https://gitlab.com/gitlab-org/growth/growth/-/wikis/GLEX-Framework-code-examples)
+[Examples](https://gitlab.com/groups/gitlab-org/growth/-/wikis/GLEX-How-Tos)
Start by generating a feature flag using the `bin/feature-flag` command as you
usually would for a development feature flag, making sure to use `experiment` for
diff --git a/doc/development/export_csv.md b/doc/development/export_csv.md
index 9b0205166bf..ce0a6e026ff 100644
--- a/doc/development/export_csv.md
+++ b/doc/development/export_csv.md
@@ -10,7 +10,7 @@ This document lists the different implementations of CSV export in GitLab codeba
| Export type | How it works | Advantages | Disadvantages | Existing examples |
|---|---|---|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Streaming | - Query and yield data in batches to a response stream.<br>- Download starts immediately. | - Report available immediately. | - No progress indicator.<br>- Requires a reliable connection. | [Export Audit Event Log](../administration/audit_events.md#export-to-csv) |
+| Streaming | - Query and yield data in batches to a response stream.<br>- Download starts immediately. | - Report available immediately. | - No progress indicator.<br>- Requires a reliable connection. | [Export Audit Event Log](../administration/audit_events.md#exporting-audit-events) |
| Downloading | - Query and write data in batches to a temporary file.<br>- Loads the file into memory.<br>- Sends the file to the client. | - Report available immediately. | - Large amount of data might cause request timeout.<br>- Memory intensive.<br>- Request expires when user navigates to a different page. | - [Export Chain of Custody Report](../user/compliance/compliance_center/index.md#chain-of-custody-report)<br>- [Export License Usage File](../subscriptions/self_managed/index.md#export-your-license-usage) |
| As email attachment | - Asynchronously process the query with background job.<br>- Email uses the export as an attachment. | - Asynchronous processing. | - Requires users use a different app (email) to download the CSV.<br>- Email providers may limit attachment size. | - [Export issues](../user/project/issues/csv_export.md)<br>- [Export merge requests](../user/project/merge_requests/csv_export.md) |
| As downloadable link in email (*) | - Asynchronously process the query with background job.<br>- Email uses an export link. | - Asynchronous processing.<br>- Bypasses email provider attachment size limit. | - Requires users use a different app (email).<br>- Requires additional storage and cleanup. | [Export User Permissions](https://gitlab.com/gitlab-org/gitlab/-/issues/1772) |
diff --git a/doc/development/fe_guide/graphql.md b/doc/development/fe_guide/graphql.md
index 99070f3d31c..5807c9c5621 100644
--- a/doc/development/fe_guide/graphql.md
+++ b/doc/development/fe_guide/graphql.md
@@ -974,28 +974,6 @@ const data = store.readQuery({
Read more about the `@connection` directive in [Apollo's documentation](https://www.apollographql.com/docs/react/caching/advanced-topics/#the-connection-directive).
-### Managing performance
-
-The Apollo client batches queries by default. Given 3 deferred queries,
-Apollo groups them into one request, sends the single request to the server, and
-responds after all 3 queries have completed.
-
-If you need to have queries sent as individual requests, additional context can be provided
-to tell Apollo to do this.
-
-```javascript
-export default {
- apollo: {
- user: {
- query: QUERY_IMPORT,
- context: {
- isSingleRequest: true,
- }
- }
- },
-};
-```
-
#### Polling and Performance
While the Apollo client has support for simple polling, for performance reasons, our [ETag-based caching](../polling.md) is preferred to hitting the database each time.
@@ -1081,21 +1059,6 @@ await this.$apollo.mutate({
});
```
-ETags depend on the request being a `GET` instead of GraphQL's usual `POST`. Our default link library does not support `GET` requests, so we must let our default Apollo client know to use a different library. Keep in mind, this means your app cannot batch queries.
-
-```javascript
-/* componentMountIndex.js */
-
-const apolloProvider = new VueApollo({
- defaultClient: createDefaultClient(
- {},
- {
- useGet: true,
- },
- ),
-});
-```
-
Finally, we can add a visibility check so that the component pauses polling when the browser tab is not active. This should lessen the request load on the page.
```javascript
diff --git a/doc/development/fe_guide/security.md b/doc/development/fe_guide/security.md
index d578449e578..4e06c22b383 100644
--- a/doc/development/fe_guide/security.md
+++ b/doc/development/fe_guide/security.md
@@ -12,57 +12,6 @@ info: To determine the technical writer assigned to the Stage/Group associated w
[Qualys SSL Labs Server Test](https://www.ssllabs.com/ssltest/analyze.html) are good resources for finding
potential problems and ensuring compliance with security best practices.
-<!-- Uncomment these sections when CSP/SRI are implemented.
-### Content Security Policy (CSP)
-
-Content Security Policy is a web standard that intends to mitigate certain
-forms of Cross-Site Scripting (XSS) as well as data injection.
-
-Content Security Policy rules should be taken into consideration when
-implementing new features, especially those that may rely on connection with
-external services.
-
-GitLab's CSP is used for the following:
-
-- Blocking plugins like Flash and Silverlight from running at all on our pages.
-- Blocking the use of scripts and stylesheets downloaded from external sources.
-- Upgrading `http` requests to `https` when possible.
-- Preventing `iframe` elements from loading in most contexts.
-
-Some exceptions include:
-
-- Scripts from Google Analytics and Matomo if either is enabled.
-- Connecting with GitHub, Bitbucket, GitLab.com, etc. to allow project importing.
-- Connecting with Google, Twitter, GitHub, etc. to allow OAuth authentication.
-
-We use [the Secure Headers gem](https://github.com/twitter/secureheaders) to enable Content
-Security Policy headers in the GitLab Rails app.
-
-Some resources on implementing Content Security Policy:
-
-- [MDN Article on CSP](https://developer.mozilla.org/en-US/docs/Web/Security/CSP)
-- [GitHub's CSP Journey on the GitHub Engineering Blog](https://github.blog/2016-04-12-githubs-csp-journey/)
-- The Dropbox Engineering Blog's series on CSP: [1](https://blogs.dropbox.com/tech/2015/09/on-csp-reporting-and-filtering/), [2](https://blogs.dropbox.com/tech/2015/09/unsafe-inline-and-nonce-deployment/), [3](https://blogs.dropbox.com/tech/2015/09/csp-the-unexpected-eval/), [4](https://blogs.dropbox.com/tech/2015/09/csp-third-party-integrations-and-privilege-separation/)
-
-### Subresource Integrity (SRI)
-
-Subresource Integrity prevents malicious assets from being provided by a CDN by
-guaranteeing that the asset downloaded is identical to the asset the server
-is expecting.
-
-The Rails app generates a unique hash of the asset, which is used as the
-asset's `integrity` attribute. The browser generates the hash of the asset
-on-load and will reject the asset if the hashes do not match.
-
-All CSS and JavaScript assets should use Subresource Integrity.
-
-Some resources on implementing Subresource Integrity:
-
-- [MDN Article on SRI](https://developer.mozilla.org/en-us/docs/web/security/subresource_integrity)
-- [Subresource Integrity on the GitHub Engineering Blog](https://github.blog/2015-09-19-subresource-integrity/)
-
--->
-
## Including external resources
External fonts, CSS, and JavaScript should never be used with the exception of
diff --git a/doc/development/fe_guide/sentry.md b/doc/development/fe_guide/sentry.md
index 929de1499c7..95a170b7976 100644
--- a/doc/development/fe_guide/sentry.md
+++ b/doc/development/fe_guide/sentry.md
@@ -39,7 +39,7 @@ to our Sentry instance under the project
The most common way to report errors to Sentry is to call `captureException(error)`, for example:
```javascript
-import * as Sentry from '@sentry/browser';
+import * as Sentry from '~/sentry/sentry_browser_wrapper';
try {
// Code that may fail in runtime
@@ -53,6 +53,9 @@ about, or have no control over. For example, we shouldn't report validation erro
out a form incorrectly. However, if that form submission fails because of a server error,
this is an error we want Sentry to know about.
+By default, your local development instance does not have Sentry configured. Calls to Sentry are
+stubbed and shown in the console with a `[Sentry stub]` prefix for debugging.
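+
+As a minimal sketch (the error message is made up; the wrapper import and `captureException` are the ones shown above), a call like this is expected to be stubbed locally instead of being sent to Sentry:
+
+```javascript
+import * as Sentry from '~/sentry/sentry_browser_wrapper';
+
+// In local development this call is stubbed: instead of reporting the error,
+// it is logged to the browser console with a `[Sentry stub]` prefix.
+Sentry.captureException(new Error('Testing local error reporting'));
+```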
+
### Unhandled/unknown errors
Additionally, we capture unhandled errors automatically in all of our pages.
diff --git a/doc/development/fe_guide/storybook.md b/doc/development/fe_guide/storybook.md
index 6049dd7c7d3..cbda9d5efa2 100644
--- a/doc/development/fe_guide/storybook.md
+++ b/doc/development/fe_guide/storybook.md
@@ -135,3 +135,37 @@ export const Default = Template.bind({});
Default.args = {};
```
+
+## Using a Vuex store
+
+To write a story for a component that requires access to a Vuex store, use the `createVuexStore` method provided in
+the Story context.
+
+```javascript
+import Vue from 'vue';
+import { withVuexStore } from 'storybook_addons/vuex_store';
+import DurationChart from './duration-chart.vue';
+
+const Template = (_, { argTypes, createVuexStore }) => {
+ return {
+ components: { DurationChart },
+ store: createVuexStore({
+ state: {},
+ getters: {},
+ modules: {},
+ }),
+ props: Object.keys(argTypes),
+ template: '<duration-chart />',
+ };
+};
+
+export default {
+ component: DurationChart,
+ title: 'ee/analytics/cycle_analytics/components/duration_chart',
+ decorators: [withVuexStore],
+};
+
+export const Default = Template.bind({});
+
+Default.args = {};
+```
diff --git a/doc/development/fe_guide/style/scss.md b/doc/development/fe_guide/style/scss.md
index e760b0adaaa..400b178d9a4 100644
--- a/doc/development/fe_guide/style/scss.md
+++ b/doc/development/fe_guide/style/scss.md
@@ -6,18 +6,11 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# SCSS style guide
-This style guide recommends best practices for SCSS to make styles easy to read,
-easy to maintain, and performant for the end-user.
-
-## Rules
-
-Our CSS is a mixture of current and legacy approaches. That means sometimes it may be difficult to follow this guide to the letter; it means you are likely to run into exceptions, where following the guide is difficult to impossible without major effort. In those cases, you may work with your reviewers and maintainers to identify an approach that does not fit these rules. Try to limit these cases.
-
-### Utility Classes
+## Utility Classes
In order to reduce the generation of more CSS as our site grows, prefer the use of utility classes over adding new CSS. In complex cases, CSS can be addressed by adding component classes.
-#### Where are utility classes defined?
+### Where are utility classes defined?
Prefer the use of [utility classes defined in GitLab UI](https://gitlab.com/gitlab-org/gitlab-ui/-/blob/main/doc/css.md#utilities).
@@ -27,6 +20,8 @@ An easy list of classes can also be [seen on Unpkg](https://unpkg.com/browse/@gi
<!-- vale gitlab.Spelling = YES -->
+You can also use an extension like [CSS Class completion](https://marketplace.visualstudio.com/items?itemName=Zignd.html-css-class-completion).
+
Classes in [`utilities.scss`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/stylesheets/utilities.scss) and [`common.scss`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/stylesheets/framework/common.scss) are being deprecated.
Classes in [`common.scss`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/stylesheets/framework/common.scss) that use non-design-system values should be avoided. Use classes with conforming values instead.
@@ -40,13 +35,13 @@ GitLab differs from the scale used in the Bootstrap library. For a Bootstrap pad
utility, you may need to double the size of the applied utility to achieve the same visual
result (such as `ml-1` becoming `gl-ml-2`).
-#### Where should you put new utility classes?
+### Where should you put new utility classes?
If a class you need has not been added to GitLab UI, you get to add it! Follow the naming patterns documented in the [utility files](https://gitlab.com/gitlab-org/gitlab-ui/-/tree/main/src/scss/utility-mixins) and refer to the [GitLab UI CSS documentation](https://gitlab.com/gitlab-org/gitlab-ui/-/blob/main/doc/contributing/adding_css.md#adding-utility-mixins) for more details, especially about adding responsive and stateful rules.
If it is not possible to wait for a GitLab UI update (generally one day), add the class to [`utilities.scss`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/stylesheets/utilities.scss) following the same naming conventions documented in GitLab UI. A follow-up issue to backport the class to GitLab UI and delete it from GitLab should be opened.
-#### When should you create component classes?
+### When should you create component classes?
We recommend a "utility-first" approach.
@@ -60,7 +55,7 @@ Inspiration:
- <https://tailwindcss.com/docs/utility-first>
- <https://tailwindcss.com/docs/extracting-components>
-#### Utility mixins
+### Utility mixins
In addition to utility classes GitLab UI provides utility mixins named after the utility classes.
@@ -95,7 +90,7 @@ For example prefer `display: flex` over `@include gl-display-flex`. Utility mixi
}
```
-### Naming
+## Naming
Filenames should use `snake_case`.
@@ -119,6 +114,23 @@ CSS classes should use the `lowercase-hyphenated` format rather than
}
```
+Avoid making compound class names with SCSS `&` features. It makes
+searching for usages harder, and provides limited benefit.
+
+```scss
+// Bad
+.class {
+ &-name {
+ color: orange;
+ }
+}
+
+// Good
+.class-name {
+ color: #fff;
+}
+```
+
Class names should be used instead of tag name selectors.
Using tag name selectors is discouraged because they can affect
unintended elements in the hierarchy.
@@ -154,53 +166,47 @@ the page.
}
```
-### Selectors with a `js-` Prefix
-
-Do not use any selector prefixed with `js-` for styling purposes. These
-selectors are intended for use only with JavaScript to allow for removal or
-renaming without breaking styling.
-
-### Variables
-
-Before adding a new variable for a color or a size, guarantee:
-
-- There isn't an existing one.
-- There isn't a similar one we can use instead.
-
-### Using `extend` at-rule
+## Nesting
-Usage of the `extend` at-rule is prohibited due to [memory leaks](https://gitlab.com/gitlab-org/gitlab/-/issues/323021) and [the rule doesn't work as it should to](https://sass-lang.com/documentation/breaking-changes/extend-compound). Use mixins instead:
+Avoid unnecessary nesting. The extra specificity of a wrapper component
+makes things harder to override.
```scss
// Bad
-.gl-pt-3 {
- padding-top: 12px;
-}
-
-.my-element {
- @extend .gl-pt-3;
-}
+.component-container {
+ .component-header {
+ /* ... */
+ }
-// compiles to
-.gl-pt-3, .my-element {
- padding-top: 12px;
+ .component-body {
+ /* ... */
+ }
}
// Good
-@mixin gl-pt-3 {
- padding-top: 12px;
+.component-container {
+ /* ... */
}
-.my-element {
- @include gl-pt-3;
+.component-header {
+ /* ... */
}
-// compiles to
-.my-element {
- padding-top: 12px;
+.component-body {
+ /* ... */
}
```
+## Selectors with a `js-` Prefix
+
+Do not use any selector prefixed with `js-` for styling purposes. These
+selectors are intended for use only with JavaScript to allow for removal or
+renaming without breaking styling.
+
+## Using `extend` at-rule
+
+Usage of the `extend` at-rule is prohibited because of [memory leaks](https://gitlab.com/gitlab-org/gitlab/-/issues/323021) and because [the rule doesn't work as it should](https://sass-lang.com/documentation/breaking-changes/extend-compound).
+
## Linting
We use [stylelint](https://stylelint.io) to check for style guide conformity. It uses the
diff --git a/doc/development/fe_guide/style/typescript.md b/doc/development/fe_guide/style/typescript.md
new file mode 100644
index 00000000000..529459097b4
--- /dev/null
+++ b/doc/development/fe_guide/style/typescript.md
@@ -0,0 +1,215 @@
+---
+type: reference, dev
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# TypeScript
+
+## History with GitLab
+
+TypeScript has been [considered](https://gitlab.com/gitlab-org/frontend/rfcs/-/issues/35),
+discussed, promoted, and rejected for years at GitLab. The general
+conclusion is that we are unable to integrate TypeScript into the main
+project because the costs outweigh the benefits.
+
+- The main project has **a lot** of pre-existing code that is not strongly typed.
+- The main contributors to the main project are not all familiar with TypeScript.
+
+Apart from the main project, TypeScript has been profitably employed in
+a handful of satellite projects.
+
+## Projects using TypeScript
+
+The following GitLab projects use TypeScript:
+
+- [`gitlab-web-ide`](https://gitlab.com/gitlab-org/gitlab-web-ide/)
+- [`gitlab-vscode-extension`](https://gitlab.com/gitlab-org/gitlab-vscode-extension/)
+- [`gitlab-language-server-for-code-suggestions`](https://gitlab.com/gitlab-org/editor-extensions/gitlab-language-server-for-code-suggestions)
+- [`gitlab-org/cluster-integration/javascript-client`](https://gitlab.com/gitlab-org/cluster-integration/javascript-client)
+
+## Recommendations
+
+### Setup ESLint and TypeScript configuration
+
+When setting up a new TypeScript project, configure strict type-safety rules for
+ESLint and TypeScript. This ensures that the project remains as type-safe as possible.
+
+The [GitLab Workflow Extension](https://gitlab.com/gitlab-org/gitlab-vscode-extension/)
+project is a good model for a TypeScript project's boilerplate and configuration.
+Consider copying the `tsconfig.json` and `.eslintrc.json` from there.
+
+For `tsconfig.json`:
+
+- Use [`"strict": true`](https://www.typescriptlang.org/tsconfig#strict).
+ This enforces the strongest type-checking capabilities in the project and
+ prohibits overriding type-safety.
+- Use [`"skipLibCheck": true`](https://www.typescriptlang.org/tsconfig#skipLibCheck).
+  This improves compile time by only checking referenced `.d.ts`
+ files as opposed to all `.d.ts` files in `node_modules`.
+
+For `.eslintrc.json` (or `.eslintrc.js`):
+
+- Make sure that TypeScript-specific parsing and linting are placed in an `overrides` entry
+ for `**/*.ts` files. This way, linting regular `.js` files
+ remains unaffected by the TypeScript-specific rules.
+- Extend from [`plugin:@typescript-eslint/recommended`](https://typescript-eslint.io/rules?supported-rules=recommended)
+ which has some very sensible defaults, such as:
+ - [`"@typescript-eslint/no-explicit-any": "error"`](https://typescript-eslint.io/rules/no-explicit-any/)
+ - [`"@typescript-eslint/no-unsafe-assignment": "error"`](https://typescript-eslint.io/rules/no-unsafe-assignment/)
+ - [`"@typescript-eslint/no-unsafe-return": "error"`](https://typescript-eslint.io/rules/no-unsafe-return)
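+
+As a rough sketch (not the canonical setup), an `.eslintrc.js` along these lines keeps the TypeScript rules scoped to `.ts` files; the `parserOptions.project` path and the exact rule list are assumptions that vary per project:
+
+```javascript
+// Minimal .eslintrc.js sketch for a TypeScript project.
+module.exports = {
+  root: true,
+  overrides: [
+    {
+      // TypeScript-specific parsing and linting live in an `overrides` entry,
+      // so plain .js files keep their existing rules.
+      files: ['**/*.ts'],
+      parser: '@typescript-eslint/parser',
+      parserOptions: {
+        // Required for type-aware rules such as no-unsafe-assignment/no-unsafe-return.
+        project: './tsconfig.json',
+      },
+      plugins: ['@typescript-eslint'],
+      extends: ['plugin:@typescript-eslint/recommended'],
+      rules: {
+        '@typescript-eslint/no-explicit-any': 'error',
+        '@typescript-eslint/no-unsafe-assignment': 'error',
+        '@typescript-eslint/no-unsafe-return': 'error',
+      },
+    },
+  ],
+};
+```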
+
+### Avoid `any`
+
+Avoid `any` at all costs. This should already be configured in the project's linter,
+but it's worth calling out here.
+
+Developers commonly resort to `any` when dealing with data structures that cross
+domain boundaries, such as handling HTTP responses or interacting with untyped
+libraries. This appears convenient at first. However, opting for a well-defined type (or using
+`unknown` and employing type narrowing through predicates) carries substantial benefits.
+
+```typescript
+// Bad :(
+function handleMessage(data: any) {
+ console.log("We don't know what data is. This could blow up!", data.special.stuff);
+}
+
+// Good :)
+function handleMessage(data: unknown) {
+ console.log("Sometimes it's okay that it remains unknown.", JSON.stringify(data));
+}
+
+// Also good :)
+function isFooMessage(data: unknown): data is { foo: string } {
+  return typeof data === 'object' && data !== null && 'foo' in data;
+}
+
+function handleMessage(data: unknown) {
+ if (isFooMessage(data)) {
+ console.log("We know it's a foo now. This is safe!", data.foo);
+ }
+}
+```
+
+### Avoid casting with `<>` or `as`
+
+Avoid casting with `<>` or `as` as much as possible.
+
+Type casting explicitly circumvents type-safety. Consider using
+[type predicates](https://www.typescriptlang.org/docs/handbook/2/narrowing.html#using-type-predicates).
+
+```typescript
+// Bad :(
+function handler(data: unknown) {
+ console.log((data as StuffContainer).stuff);
+}
+
+// Good :)
+function hasStuff(data: unknown): data is StuffContainer {
+ if (data && typeof data === 'object') {
+ return 'stuff' in data;
+ }
+
+ return false;
+}
+
+function handler(data: unknown) {
+ if (hasStuff(data)) {
+ // No casting needed :)
+    console.log(data.stuff);
+    return;
+  }
+
+  throw new Error('Expected data to have stuff. Catastrophic consequences might follow...');
+}
+
+```
+
+There are some rare cases where this might be acceptable (consider
+[this test utility](https://gitlab.com/gitlab-org/gitlab-web-ide/-/blob/3ea8191ed066811caa4fb108713e7538b8d8def1/packages/vscode-extension-web-ide/test-utils/createFakePartial.ts#L1)). However, 99% of the
+time, there's a better way.
+
+### Prefer `interface` over `type` for new structures
+
+Prefer declaring a new `interface` over declaring a new `type` alias when defining new structures.
+
+Interfaces and type aliases have a lot of cross-over, but only interfaces can be used
+with the `implements` keyword. A class is not able to `implement` a `type` (only an `interface`),
+so using `type` would restrict the usability of the structure.
+
+```typescript
+// Bad :(
+type Fooer = {
+ foo: () => string;
+}
+
+// Good :)
+interface Fooer {
+ foo: () => string;
+}
+```
+
+From the [TypeScript guide](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#differences-between-type-aliases-and-interfaces):
+
+> If you would like a heuristic, use `interface` until you need to use features from `type`.
+
+### Use `type` to define aliases for existing types
+
+Use `type` to define aliases for existing types, classes, or interfaces. Use
+the TypeScript [Utility Types](https://www.typescriptlang.org/docs/handbook/utility-types.html)
+to provide transformations.
+
+```typescript
+interface Config {
+ foo: string;
+
+ isBad: boolean;
+}
+
+// Bad :(
+type PartialConfig = {
+ foo?: string;
+
+ isBad?: boolean;
+}
+
+// Good :)
+type PartialConfig = Partial<Config>;
+```
+
+### Use union types to improve inference
+
+```typescript
+// Bad :(
+interface Foo { type: string }
+interface FooBar extends Foo { bar: string }
+interface FooZed extends Foo { zed: string }
+
+const doThing = (foo: Foo) => {
+ if (foo.type === 'bar') {
+ // Casting bad :(
+ console.log((foo as FooBar).bar);
+ }
+}
+
+// Good :)
+interface FooBar { type: 'bar', bar: string }
+interface FooZed { type: 'zed', zed: string }
+type Foo = FooBar | FooZed;
+
+const doThing = (foo: Foo) => {
+ if (foo.type === 'bar') {
+ // No casting needed :) - TS knows we are FooBar now
+ console.log(foo.bar);
+ }
+}
+```
+
+## Future plans
+
+- Shared ESLint configuration to reuse across TypeScript projects.
+
+## Related topics
+
+- [TypeScript Handbook](https://www.typescriptlang.org/docs/handbook/intro.html)
+- [TypeScript notes in GitLab Workflow Extension](https://gitlab.com/gitlab-org/gitlab-vscode-extension/-/blob/main/docs/developer/coding-guidelines.md?ref_type=heads#typescript)
diff --git a/doc/development/fe_guide/type_hinting.md b/doc/development/fe_guide/type_hinting.md
new file mode 100644
index 00000000000..026bf855e27
--- /dev/null
+++ b/doc/development/fe_guide/type_hinting.md
@@ -0,0 +1,215 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Type hinting overview
+
+The frontend codebase of the GitLab project currently does not require or enforce types. Adding
+type annotations is optional, and we don't currently enforce any type safety in the JavaScript
+codebase. However, type annotations can be very helpful in adding clarity to the codebase,
+especially in shared utility code. This document covers how type hinting currently works,
+how to add new type annotations, and how to set up type hinting in the GitLab project.
+
+## JSDoc
+
+[JSDoc](https://jsdoc.app/) is a tool to document and describe types in JavaScript code, using
+specially formed comments. JSDoc's types vocabulary is relatively limited, but it is widely
+supported [by many IDEs](https://en.wikipedia.org/wiki/JSDoc#JSDoc_in_use).
+
+### Examples
+
+#### Describing functions
+
+Use [`@param`](https://jsdoc.app/tags-param.html) and [`@returns`](https://jsdoc.app/tags-returns.html)
+to describe a function's parameters and return type:
+
+```javascript
+/**
+ * Adds two numbers
+ * @param {number} a first number
+ * @param {number} b second number
+ * @returns {number} sum of two numbers
+ */
+function add(a, b) {
+ return a + b;
+}
+```
+
+##### Optional parameters
+
+Use square brackets `[]` around a parameter name to mark it as optional. A default value can be
+provided by using the `[name=value]` syntax:
+
+```javascript
+/**
+ * Adds two numbers
+ * @param {number} a first number
+ * @param {number} [b=1] optional second number, defaults to 1
+ * @returns {number} sum of two numbers
+ */
+function increment(a, b = 1) {
+  return a + b;
+}
+```
+
+##### Object parameters
+
+Functions that accept objects can be typed by using `object.field` notation in `@param` names:
+
+```javascript
+/**
+ * Builds a URL from a path and an optional anchor
+ * @param {object} config
+ * @param {string} config.path path
+ * @param {string} [config.anchor] anchor
+ * @returns {string}
+ */
+function createUrl(config) {
+ if (config.anchor) {
+    return config.path + '#' + config.anchor;
+  }
+  return config.path;
+}
+```
+
+#### Annotating types of variables that are not immediately assigned a value
+
+For tools and IDEs it's hard to infer the type of a variable that doesn't immediately receive a value.
+We can use the [`@type`](https://jsdoc.app/tags-type.html) notation to assign a type to such variables:
+
+```javascript
+/** @type {number} */
+let value;
+```
+
+Consult [JSDoc official website](https://jsdoc.app/) for more syntax details.
+
+### Tips for using JSDoc
+
+#### Use lower-case names for basic types
+
+While both uppercase `Boolean` and lowercase `boolean` are acceptable, in most cases when we need a
+primitive or an object, the lowercase versions are the right choice: `boolean`, `number`, `string`,
+`symbol`, `object`.
+
+```javascript
+/**
+ * Translates `text`.
+ * @param {string} text - The text to be translated
+ * @returns {string} The translated text
+ */
+const gettext = (text) => locale.gettext(ensureSingleLine(text));
+```
+
+#### Use well-known types
+
+Well-known types, like `HTMLDivElement` or `Intl`, are available and can be used directly:
+
+```javascript
+/** @type {HTMLDivElement} */
+let element;
+```
+
+```javascript
+/**
+ * Creates an instance of Intl.DateTimeFormat for the current locale.
+ * @param {Intl.DateTimeFormatOptions} [formatOptions] - for available options, please see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/DateTimeFormat
+ * @returns {Intl.DateTimeFormat}
+ */
+const createDateTimeFormat = (formatOptions) =>
+ Intl.DateTimeFormat(getPreferredLocales(), formatOptions);
+```
+
+#### Import existing type definitions via `import('path/to/module')`
+
+Here are examples of how to annotate the type of Vue Test Utils wrapper variables that are not
+immediately assigned:
+
+```javascript
+/** @type {import('helpers/vue_test_utils_helper').ExtendedWrapper} */
+let wrapper;
+// ...
+wrapper = mountExtended(/* ... */);
+```
+
+```javascript
+/** @type {import('@vue/test-utils').Wrapper} */
+let wrapper;
+// ...
+wrapper = shallowMount(/* ... */);
+```
+
+NOTE:
+`import()` is [not a native JSDoc construct](https://github.com/jsdoc/jsdoc/issues/1645), but it is
+recognized by many IDEs and tools. In this case we're aiming for better clarity in the code and
+improved Developer Experience with an IDE.
+
+#### JSDoc is limited
+
+As stated above, JSDoc has a limited vocabulary, so using it might not describe a type fully.
+However, sometimes it's possible to use a third-party library's type definitions to make type inference
+work for our code. Here's an example of such an approach:
+
+```diff
+- export const mountExtended = (...args) => extendedWrapper(mount(...args));
++ import { compose } from 'lodash/fp';
++ export const mountExtended = compose(extendedWrapper, mount);
+```
+
+Here we use the TypeScript type definitions of the `compose` function to add inferred types to the
+`mountExtended` function. In this case the `mountExtended` arguments have the same types as the `mount`
+arguments, and the return type is the same as the `extendedWrapper` return type.
+
+We can still use JSDoc syntax to add a description to the function, for example:
+
+```javascript
+/** Mounts a component and returns an extended wrapper for it */
+export const mountExtended = compose(extendedWrapper, mount);
+```
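+
+As a hypothetical usage sketch (the component path, prop, and test ID below are made up; `mountExtended` and `findByTestId` are the helpers from `vue_test_utils_helper`), the inferred types mean an IDE can hint both the `mount` options and the extended wrapper helpers:
+
+```javascript
+import { mountExtended } from 'helpers/vue_test_utils_helper';
+import MyWidget from '~/my_widget/components/my_widget.vue';
+
+// Arguments are hinted like `mount` arguments, and the result is hinted as an
+// extended wrapper, so helpers such as `findByTestId` are suggested by the IDE.
+const wrapper = mountExtended(MyWidget, { propsData: { title: 'Hello' } });
+
+wrapper.findByTestId('title');
+```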
+
+## System requirements
+
+Some setup might be required for type definitions from the GitLab codebase and from third-party packages to
+be properly displayed in IDEs and tools.
+
+### Aliases
+
+Our codebase uses many aliases for imports. For example, `import Api from '~/api';` imports the
+`app/assets/javascripts/api.js` file. But IDEs might not know that alias, and thus might not know the
+type of `Api`. To fix that for most IDEs, we need to create a
+[`jsconfig.json`](https://code.visualstudio.com/docs/languages/jsconfig) file.
+
+There is a script in the GitLab project that can generate a `jsconfig.json` file based on the webpack
+configuration and current environment variables. To generate or update the `jsconfig.json` file, run
+the following from the GitLab project root:
+
+```shell
+node scripts/frontend/create_jsconfig.js
+```
+
+`jsconfig.json` is on the gitignore list, so creating or changing it does not cause Git changes in
+the GitLab project. This also means it is not included in Git pulls, so it has to be manually
+generated or updated.
+
+### 3rd party TypeScript definitions
+
+While more and more libraries use TypeScript for type definitions, some still might have JSDoc-annotated
+types or no types at all. To cover that gap, the TypeScript community started the
+[DefinitelyTyped](https://github.com/DefinitelyTyped/DefinitelyTyped) initiative, which creates and
+maintains standalone type definitions for popular JavaScript libraries. We can use those definitions
+by either explicitly installing the type packages (`yarn add -D "@types/lodash"`) or by using a
+feature called [Automatic Type Acquisition (ATA)](https://www.typescriptlang.org/tsconfig#typeAcquisition),
+which is available in some language services
+(for example, [ATA in VS Code](https://github.com/microsoft/TypeScript/wiki/JavaScript-Language-Service-in-Visual-Studio#user-content--automatic-acquisition-of-type-definitions)).
+
+Automatic Type Acquisition (ATA) automatically fetches type definitions from the DefinitelyTyped
+list. But for ATA to work, a globally installed `npm` might be required. IDEs can provide fallback
+configuration options to set the location of the `npm` executable. Consult your IDE documentation for
+details.
+
+Because ATA is not guaranteed to work, and Lodash is a backbone for many of our utility functions,
+we have [DefinitelyTyped definitions for Lodash](https://www.npmjs.com/package/@types/lodash)
+explicitly added to our `devDependencies` in the `package.json`. This ensures that everyone gets
+type hints for `lodash`-based functions out of the box.
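+
+For example, with the Lodash type definitions available, an editor can hint the members of `lodash` return values even in plain JavaScript files. The snippet below is only an illustration:
+
+```javascript
+import { debounce } from 'lodash';
+
+// With the DefinitelyTyped definitions installed, IDEs know that `debounce`
+// returns a debounced function and can suggest its `cancel` and `flush` members.
+const save = debounce(() => console.log('saved'), 500);
+
+save();
+save.cancel();
+```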
diff --git a/doc/development/feature_flags/controls.md b/doc/development/feature_flags/controls.md
index 6c46780a5d7..6e0f0e8dbcf 100644
--- a/doc/development/feature_flags/controls.md
+++ b/doc/development/feature_flags/controls.md
@@ -507,15 +507,8 @@ Once the above MR has been merged, you should:
When a feature gate has been removed from the codebase, the feature
record still exists in the database that the flag was deployed to.
-The record can be deleted once the MR is deployed to each environment:
+The record can be deleted once the MR is deployed to all the environments:
```shell
-/chatops run feature delete some_feature --dev
-/chatops run feature delete some_feature --staging
-```
-
-Then, you can delete it from production after the MR is deployed to prod:
-
-```shell
-/chatops run feature delete some_feature
+/chatops run feature delete <feature-flag-name> --dev --ops --pre --staging --staging-ref --production
```
diff --git a/doc/development/feature_flags/index.md b/doc/development/feature_flags/index.md
index 552a4ccc84b..c1a5963e97f 100644
--- a/doc/development/feature_flags/index.md
+++ b/doc/development/feature_flags/index.md
@@ -203,7 +203,7 @@ Only feature flags that have a YAML definition file can be used when running the
```shell
$ bin/feature-flag my_feature_flag
>> Specify the group introducing the feature flag, like `group::project management`:
-?> group::application performance
+?> group::cloud connector
>> URL of the MR introducing the feature flag (enter to skip):
?> https://gitlab.com/gitlab-org/gitlab/-/merge_requests/38602
@@ -218,7 +218,7 @@ create config/feature_flags/development/my_feature_flag.yml
name: my_feature_flag
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/38602
rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/232533
-group: group::application performance
+group: group::cloud connector
type: development
default_enabled: false
```
@@ -625,7 +625,7 @@ A common pattern of testing both paths looks like:
```ruby
it 'ci_live_trace works' do
# tests assuming ci_live_trace is enabled in tests by default
- Feature.enabled?(:ci_live_trace) # => true
+ Feature.enabled?(:ci_live_trace) # => true
end
context 'when ci_live_trace is disabled' do
diff --git a/doc/development/gems.md b/doc/development/gems.md
index c9672483e8d..54d6e6dc30d 100644
--- a/doc/development/gems.md
+++ b/doc/development/gems.md
@@ -254,13 +254,12 @@ The project for a new Gem should always be created in [`gitlab-org/ruby/gems` na
1. Create a project in the [`gitlab-org/ruby/gems` group](https://gitlab.com/gitlab-org/ruby/gems/) (or in a subgroup of it):
1. Follow the [instructions for new projects](https://about.gitlab.com/handbook/engineering/gitlab-repositories/#creating-a-new-project).
1. Follow the instructions for setting up a [CI/CD configuration](https://about.gitlab.com/handbook/engineering/gitlab-repositories/#cicd-configuration).
- 1. Use the [shared CI/CD config](https://gitlab.com/gitlab-org/quality/pipeline-common/-/blob/master/ci/gem-release.yml)
+ 1. Use the [gem-release CI component](https://gitlab.com/gitlab-org/quality/pipeline-common/-/tree/master/gem-release)
to release and publish new gem versions by adding the following to their `.gitlab-ci.yml`:
```yaml
include:
- - project: 'gitlab-org/quality/pipeline-common'
- file: '/ci/gem-release.yml'
+ - component: gitlab.com/gitlab-org/quality/pipeline-common/gem-release@<REPLACE WITH LATEST TAG FROM https://gitlab.com/gitlab-org/quality/pipeline-common/-/releases>
```
This job will handle building and publishing the gem (it uses a `gitlab_rubygems` Rubygems.org
diff --git a/doc/development/gitaly.md b/doc/development/gitaly.md
index e6a853c107e..ed7fb6325d6 100644
--- a/doc/development/gitaly.md
+++ b/doc/development/gitaly.md
@@ -41,8 +41,8 @@ To read or write Git data, a request has to be made to Gitaly. This means that
if you're developing a new feature where you need data that's not yet available
in `lib/gitlab/git` changes have to be made to Gitaly.
-There should be no new code that touches Git repositories via disk access (for example,
-Rugged, `git`, `rm -rf`) anywhere in the `gitlab` repository. Anything that
+There should be no new code that touches Git repositories via disk access
+anywhere in the `gitlab` repository. Anything that
needs direct access to the Git repository *must* be implemented in Gitaly, and
exposed via an RPC.
@@ -64,45 +64,6 @@ rm -rf tmp/tests/gitaly
During RSpec tests, the Gitaly instance writes logs to `gitlab/log/gitaly-test.log`.
-## Legacy Rugged code
-
-While Gitaly can handle all Git access, many of GitLab customers still
-run Gitaly atop NFS. The legacy Rugged implementation for Git calls may
-be faster than the Gitaly RPC due to N+1 Gitaly calls and other
-reasons. See [the issue](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/57317) for more
-details.
-
-Until GitLab has eliminated most of these inefficiencies or the use of
-NFS is discontinued for Git data, Rugged implementations of some of the
-most commonly-used RPCs can be enabled via feature flags:
-
-- `rugged_find_commit`
-- `rugged_get_tree_entries`
-- `rugged_tree_entry`
-- `rugged_commit_is_ancestor`
-- `rugged_commit_tree_entry`
-- `rugged_list_commits_by_oid`
-
-A convenience Rake task can be used to enable or disable these flags
-all together. To enable:
-
-```shell
-bundle exec rake gitlab:features:enable_rugged
-```
-
-To disable:
-
-```shell
-bundle exec rake gitlab:features:disable_rugged
-```
-
-Most of this code exists in the `lib/gitlab/git/rugged_impl` directory.
-
-NOTE:
-You should *not* have to add or modify code related to Rugged unless explicitly discussed with the
-[Gitaly Team](https://gitlab.com/groups/gl-gitaly/group_members). This code does not work on GitLab.com or other GitLab
-instances that do not use NFS.
-
## `TooManyInvocationsError` errors
During development and testing, you may experience `Gitlab::GitalyClient::TooManyInvocationsError` failures.
diff --git a/doc/development/github_importer.md b/doc/development/github_importer.md
index 45554ae465d..9ce95cf7da1 100644
--- a/doc/development/github_importer.md
+++ b/doc/development/github_importer.md
@@ -34,21 +34,42 @@ The importer's codebase is broken up into the following directories:
## Architecture overview
-When a GitHub project is imported, we schedule and execute a job for the
-`RepositoryImportWorker` worker as all other importers. However, unlike other
-importers, we don't immediately perform the work necessary. Instead work is
-divided into separate stages, with each stage consisting out of a set of Sidekiq
-jobs that are executed. Between every stage a job is scheduled that periodically
-checks if all work of the current stage is completed, advancing the import
-process to the next stage when this is the case. The worker handling this is
-called `Gitlab::GithubImport::AdvanceStageWorker`.
+When a GitHub project is imported, work is divided into separate stages, with
+each stage consisting of a set of Sidekiq jobs that are executed. Between
+every stage a job is scheduled that periodically checks if all work of the
+current stage is completed, advancing the import process to the next stage when
+this is the case. The worker handling this is called
+`Gitlab::GithubImport::AdvanceStageWorker`.
+
+- An import is initiated via an API request to
+ [`POST /import/github`](https://gitlab.com/gitlab-org/gitlab/-/blob/18878b90991e2d478f3c79a68013b156d83b5db8/lib/api/import_github.rb#L42)
+- The API endpoint calls [`Import::GitHubService`](https://gitlab.com/gitlab-org/gitlab/-/blob/18878b90991e2d478f3c79a68013b156d83b5db8/lib/api/import_github.rb#L43).
+- Which calls
+ [`Gitlab::LegacyGithubImport::ProjectCreator`](https://gitlab.com/gitlab-org/gitlab/-/blob/18878b90991e2d478f3c79a68013b156d83b5db8/app/services/import/github_service.rb#L31-38)
+- Which calls
+ [`Projects::CreateService`](https://gitlab.com/gitlab-org/gitlab/-/blob/18878b90991e2d478f3c79a68013b156d83b5db8/lib/gitlab/legacy_github_import/project_creator.rb#L30)
+- Which calls
+ [`@project.import_state.schedule`](https://gitlab.com/gitlab-org/gitlab/-/blob/18878b90991e2d478f3c79a68013b156d83b5db8/app/services/projects/create_service.rb#L325)
+- Which calls
+ [`project.add_import_job`](https://gitlab.com/gitlab-org/gitlab/-/blob/1d154fa0b9121566aebf3afe3d28808d025cc5af/app/models/project_import_state.rb#L43)
+- Which calls
+ [`RepositoryImportWorker`](https://gitlab.com/gitlab-org/gitlab/-/blob/1d154fa0b9121566aebf3afe3d28808d025cc5af/app/models/project.rb#L1105)
## Stages
### 1. RepositoryImportWorker
-This worker starts the import process by scheduling a job for the
-next worker.
+This worker calls
+[`Projects::ImportService.new.execute`](https://gitlab.com/gitlab-org/gitlab/-/blob/651e6a0139396ed6fa9ce73e27587ca88f9f4d96/app/workers/repository_import_worker.rb#L23-24),
+which calls
+[`importer.execute`](https://gitlab.com/gitlab-org/gitlab/-/blob/fcccaaac8d62191ad233cebeffc67111145b1ad7/app/services/projects/import_service.rb#L143).
+
+In this context, `importer` is an instance of
+[`Gitlab::ImportSources.importer(project.import_type)`](https://gitlab.com/gitlab-org/gitlab/-/blob/fcccaaac8d62191ad233cebeffc67111145b1ad7/app/services/projects/import_service.rb#L149),
+which for `github` import types maps to
+[`ParallelImporter`](https://gitlab.com/gitlab-org/gitlab/-/blob/651e6a0139396ed6fa9ce73e27587ca88f9f4d96/lib/gitlab/import_sources.rb#L13).
+
+`ParallelImporter` schedules a job for the next worker.
### 2. Stage::ImportRepositoryWorker
@@ -222,9 +243,8 @@ them to GitLab users. Other data such as issue pages and comments typically only
We handle the rate limit by doing the following:
-1. After we hit the rate limit, we either:
- - Automatically reschedule jobs in such a way that they are not executed until the rate limit has been reset.
- - Move onto another GitHub access token if multiple GitHub access tokens were passed to the API.
+1. After we hit the rate limit, we automatically reschedule jobs in such a way that they are not executed until the rate
+ limit has been reset.
1. We cache the mapping of GitHub users to GitLab users in Redis.
More information on user caching can be found below.
diff --git a/doc/development/i18n/externalization.md b/doc/development/i18n/externalization.md
index 68c2778eabe..1ce35b254f1 100644
--- a/doc/development/i18n/externalization.md
+++ b/doc/development/i18n/externalization.md
@@ -232,7 +232,7 @@ If strings are reused throughout a component, it can be useful to define these s
If we are reusing the same translated string in multiple components, it is tempting to add them to a `constants.js` file instead and import them across our components. However, there are multiple pitfalls to this approach:
- It creates distance between the HTML template and the copy, adding an additional level of complexity while navigating our codebase.
-- Copy strings are rarely, if ever, truly the same entity. The benefit of having a reusable variable is to have one easy place to go to update a value, but for copy it is quite common to have similar strings that aren't quite the same.
+- The benefit of having a reusable variable is to have one easy place to go to update a value, but for copy it is quite common to have similar strings that aren't quite the same.
Another practice to avoid when exporting copy strings is to import them in specs. While it might seem like a much more efficient test (if we change the copy, the test will still pass!) it creates additional problems:
diff --git a/doc/development/i18n/proofreader.md b/doc/development/i18n/proofreader.md
index cea59bae41b..f24ebacab18 100644
--- a/doc/development/i18n/proofreader.md
+++ b/doc/development/i18n/proofreader.md
@@ -140,7 +140,6 @@ are very appreciative of the work done by translators and proofreaders!
- Rıfat Ünalmış (Rifat Unalmis) - [GitLab](https://gitlab.com/runalmis), [Crowdin](https://crowdin.com/profile/runalmis)
- İsmail Arılık - [GitLab](https://gitlab.com/ismailarilik), [Crowdin](https://crowdin.com/profile/ismailarilik)
- Ukrainian
- - Volodymyr Sobotovych - [GitLab](https://gitlab.com/wheleph), [Crowdin](https://crowdin.com/profile/wheleph)
- Andrew Vityuk - [GitLab](https://gitlab.com/3_1_3_u), [Crowdin](https://crowdin.com/profile/andruwa13)
- Welsh
- Delyth Prys - [GitLab](https://gitlab.com/Delyth), [Crowdin](https://crowdin.com/profile/DelythPrys)
diff --git a/doc/development/img/runner_fleet_dashboard.png b/doc/development/img/runner_fleet_dashboard.png
new file mode 100644
index 00000000000..242ebf4aea9
--- /dev/null
+++ b/doc/development/img/runner_fleet_dashboard.png
Binary files differ
diff --git a/doc/development/index.md b/doc/development/index.md
index 71ab54c8a73..abc19645ecb 100644
--- a/doc/development/index.md
+++ b/doc/development/index.md
@@ -10,7 +10,7 @@ description: "Development Guidelines: learn how to contribute to GitLab."
Learn how to contribute to the development of the GitLab product.
-This content is intended for GitLab team members as well as members of the wider community.
+This content is intended for both GitLab team members and members of the wider community.
- [Contribute to GitLab development](contributing/index.md)
- [Contribute to GitLab Runner development](https://docs.gitlab.com/runner/development/)
diff --git a/doc/development/internal_analytics/index.md b/doc/development/internal_analytics/index.md
index 64b9c7af037..b0e47233777 100644
--- a/doc/development/internal_analytics/index.md
+++ b/doc/development/internal_analytics/index.md
@@ -14,6 +14,13 @@ when developing new features or instrumenting existing ones.
## Fundamental concepts
+<div class="video-fallback">
+ See the video about <a href="https://www.youtube.com/watch?v=GtFNXbjygWo">the concepts of events and metrics.</a>
+</div>
+<figure class="video_container">
+ <iframe src="https://www.youtube-nocookie.com/embed/GtFNXbjygWo" frameborder="0" allowfullscreen="true"> </iframe>
+</figure>
+
Events and metrics are the foundation of the internal analytics system.
Understanding the difference between the two concepts is vital to using the system.
@@ -50,9 +57,53 @@ such as the value of a setting or the count of rows in a database table.
- To instrument an event-based metric, see the [internal event tracking quick start guide](internal_event_instrumentation/quick_start.md).
- To instrument a metric that observes the GitLab instance's state, see [the metrics instrumentation](metrics/metrics_instrumentation.md).
-## Data flow
+## Data availability
For GitLab there is an essential difference in analytics setup between SaaS and self-managed or GitLab Dedicated instances.
+On our SaaS instance, both individual events and pre-computed metrics are available for analysis.
+Additionally, page views on SaaS are automatically instrumented.
+For self-managed, only the metrics instrumented in the version installed on the instance are available.
+
+## Data discovery
+
+The data visualization tools [Sisense](https://about.gitlab.com/handbook/business-technology/data-team/platform/sisensecdt/) and [Tableau](https://about.gitlab.com/handbook/business-technology/data-team/platform/tableau/),
+which have access to our Data Warehouse, can be used to query the internal analytics data.
+
+### Querying metrics
+
+The following example query returns all values reported for a given metric (here, `counts.users_visiting_dashboard_weekly`) within the last six months and the corresponding `instance_id`:
+
+```sql
+SELECT
+ date_trunc('week', ping_created_at),
+ dim_instance_id,
+ metric_value
+FROM common.fct_ping_instance_metric_rolling_6_months --model limited to last 6 months for performance
+WHERE metrics_path = 'counts.users_visiting_dashboard_weekly' --set to metric of interest
+ORDER BY ping_created_at DESC
+```
+
+For a list of other metrics tables refer to the [Data Models Cheat Sheet](https://about.gitlab.com/handbook/product/product-analysis/data-model-cheat-sheet/#commonly-used-data-models).
+
+### Querying events
+
+The following example query returns the number of daily event occurrences for the `feature_used` event.
+
+```sql
+SELECT
+ behavior_date,
+  COUNT(*) AS event_occurrences
+FROM common_mart.mart_behavior_structured_event
+WHERE event_action = 'feature_used'
+AND event_category = 'InternalEventTracking'
+AND behavior_date > '2023-08-01' --restricted minimum date for performance
+GROUP BY 1 ORDER BY 1 DESC
+```
+
+For a list of other event tables refer to the [Data Models Cheat Sheet](https://about.gitlab.com/handbook/product/product-analysis/data-model-cheat-sheet/#commonly-used-data-models-2).
+
+## Data flow
+
On SaaS event records are directly sent to a collection system, called Snowplow, and imported into our data warehouse.
Self-managed and GitLab Dedicated instances record event counts locally. Every week, a process called Service Ping sends the current
values for all pre-defined and active metrics to our data warehouse. For GitLab.com, metrics are calculated directly in the data warehouse.
diff --git a/doc/development/internal_analytics/internal_event_instrumentation/local_setup_and_debugging.md b/doc/development/internal_analytics/internal_event_instrumentation/local_setup_and_debugging.md
index d68e5565775..d9f45a2d93e 100644
--- a/doc/development/internal_analytics/internal_event_instrumentation/local_setup_and_debugging.md
+++ b/doc/development/internal_analytics/internal_event_instrumentation/local_setup_and_debugging.md
@@ -14,7 +14,7 @@ Internal events are using a tool called Snowplow under the hood. To develop and
| Snowplow Micro | Yes | Yes | Yes | No | No |
For local development you will have to either [setup a local event collector](#setup-local-event-collector) or [configure a remote event collector](#configure-a-remote-event-collector).
-We recommend the local setup when actively developing new events.
+We recommend using the local setup together with the [internal events monitor](#internal-events-monitor) when actively developing new events.
## Setup local event collector
@@ -68,6 +68,57 @@ You can configure your self-managed GitLab instance to use a custom Snowplow col
1. Select **Save changes**.
+## Internal Events Monitor
+
+<div class="video-fallback">
+ Watch the demo video about the <a href="https://www.youtube.com/watch?v=R7vT-VEzZOI">Internal Events Tracking Monitor</a>
+</div>
+<figure class="video_container">
+ <iframe src="https://www.youtube-nocookie.com/embed/R7vT-VEzZOI" frameborder="0" allowfullscreen="true"> </iframe>
+</figure>
+
+To understand how events are triggered and metrics are updated while you use the Rails app locally or `rails console`,
+you can use the monitor.
+
+Start the monitor and list one or more events that you would like to monitor. In this example we would like to monitor `i_code_review_user_create_mr`.
+
+```shell
+rails runner scripts/internal_events/monitor.rb i_code_review_user_create_mr
+```
+
+The monitor shows two tables. The top table lists all the metrics that are defined on the `i_code_review_user_create_mr` event.
+The second-to-last column shows the value of each metric when the monitor was started, and the last column shows the current value of each metric.
+The bottom table lists selected properties of all Snowplow events that match the event name.
+
+If a new `i_code_review_user_create_mr` event is fired, the metric values are updated and a new event appears in the `SNOWPLOW EVENTS` table.
+
+The monitor output looks like the following:
+
+```plaintext
+Updated at 2023-10-11 10:17:59 UTC
+Monitored events: i_code_review_user_create_mr
+
++--------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| RELEVANT METRICS |
++-----------------------------------------------------------------------------+------------------------------+-----------------------+---------------+---------------+
+| Key Path | Monitored Events | Instrumentation Class | Initial Value | Current Value |
++-----------------------------------------------------------------------------+------------------------------+-----------------------+---------------+---------------+
+| counts_monthly.aggregated_metrics.code_review_category_monthly_active_users | i_code_review_user_create_mr | AggregatedMetric | 13 | 14 |
+| counts_monthly.aggregated_metrics.code_review_group_monthly_active_users | i_code_review_user_create_mr | AggregatedMetric | 13 | 14 |
+| counts_weekly.aggregated_metrics.code_review_category_monthly_active_users | i_code_review_user_create_mr | AggregatedMetric | 0 | 1 |
+| counts_weekly.aggregated_metrics.code_review_group_monthly_active_users | i_code_review_user_create_mr | AggregatedMetric | 0 | 1 |
+| redis_hll_counters.code_review.i_code_review_user_create_mr_monthly | i_code_review_user_create_mr | RedisHLLMetric | 8 | 9 |
+| redis_hll_counters.code_review.i_code_review_user_create_mr_weekly | i_code_review_user_create_mr | RedisHLLMetric | 0 | 1 |
++-----------------------------------------------------------------------------+------------------------------+-----------------------+---------------+---------------+
++---------------------------------------------------------------------------------------------------------+
+| SNOWPLOW EVENTS |
++------------------------------+--------------------------+---------+--------------+------------+---------+
+| Event Name | Collector Timestamp | user_id | namespace_id | project_id | plan |
++------------------------------+--------------------------+---------+--------------+------------+---------+
+| i_code_review_user_create_mr | 2023-10-11T10:17:15.504Z | 29 | 93 | | default |
++------------------------------+--------------------------+---------+--------------+------------+---------+
+```
+
## Snowplow Analytics Debugger Chrome Extension
[Snowplow Analytics Debugger](https://chrome.google.com/webstore/detail/snowplow-analytics-debugg/jbnlcgeengmijcghameodeaenefieedm) is a browser extension for testing frontend events.
diff --git a/doc/development/internal_analytics/internal_event_instrumentation/quick_start.md b/doc/development/internal_analytics/internal_event_instrumentation/quick_start.md
index 271cb5f98a6..15ad4266d1b 100644
--- a/doc/development/internal_analytics/internal_event_instrumentation/quick_start.md
+++ b/doc/development/internal_analytics/internal_event_instrumentation/quick_start.md
@@ -148,3 +148,27 @@ Sometimes we want to send internal events when the component is rendered or load
= render Pajamas::ButtonComponent.new(button_options: { data: { event_tracking_load: 'true', event_tracking: 'i_devops' } }) do
= _("New project")
```
+
+### Props
+
+Apart from `eventName`, the `trackEvent` method also supports `extra` and `context` props.
+
+- `extra`: Use this property to append supplementary information to GitLab standard context.
+- `context`: Use this property to attach an additional context, if needed.
+
+The following example shows how to use the `extra` and `context` props with the `trackEvent` method:
+
+```javascript
+this.trackEvent('i_code_review_user_apply_suggestion', {
+ extra: {
+ projectId : 123,
+ },
+ context: {
+ schema: 'iglu:com.gitlab/design_management_context/jsonschema/1-0-0',
+ data: {
+ 'design-version-number': '1.0.0',
+ 'design-is-current-version': '1.0.1',
+ },
+ },
+});
+```
diff --git a/doc/development/internal_analytics/metrics/metrics_dictionary.md b/doc/development/internal_analytics/metrics/metrics_dictionary.md
index afdbd17c63b..6a3291eaba5 100644
--- a/doc/development/internal_analytics/metrics/metrics_dictionary.md
+++ b/doc/development/internal_analytics/metrics/metrics_dictionary.md
@@ -104,7 +104,7 @@ A metric's time frame is calculated based on the `time_frame` field and the `dat
We use the following categories to classify a metric:
- `operational`: Required data for operational purposes.
-- `optional`: Default value for a metric. Data that is optional to collect. This can be [enabled or disabled](../../../administration/settings/usage_statistics.md#enable-or-disable-usage-statistics) in the Admin Area.
+- `optional`: Default value for a metric. Data that is optional to collect. This can be [enabled or disabled](../../../administration/settings/usage_statistics.md#enable-or-disable-service-ping) in the Admin Area.
- `subscription`: Data related to licensing.
- `standard`: Standard set of identifiers that are included when collecting data.
diff --git a/doc/development/internal_analytics/service_ping/index.md b/doc/development/internal_analytics/service_ping/index.md
index bae4e35149d..f010884272b 100644
--- a/doc/development/internal_analytics/service_ping/index.md
+++ b/doc/development/internal_analytics/service_ping/index.md
@@ -22,7 +22,7 @@ and sales teams understand how GitLab is used. The data helps to:
Service Ping information is not anonymous. It's linked to the instance's hostname, but does
not contain project names, usernames, or any other specific data.
-Service Ping is enabled by default. However, you can [disable](../../../administration/settings/usage_statistics.md#enable-or-disable-usage-statistics) it on any self-managed instance. When Service Ping is enabled, GitLab gathers data from the other instances and can show your instance's usage statistics to your users.
+Service Ping is enabled by default. However, you can [disable](../../../administration/settings/usage_statistics.md#enable-or-disable-service-ping) certain metrics on any self-managed instance. When Service Ping is enabled, GitLab gathers data from the other instances and can show your instance's usage statistics to your users.
## Service Ping terminology
@@ -38,13 +38,8 @@ We use the following terminology to describe the Service Ping components:
### Limitations
-- Service Ping does not track frontend events things like page views, link clicks, or user sessions.
-- Service Ping focuses only on aggregated backend events.
-
-Because of these limitations we recommend you:
-
-- Instrument your products with Snowplow for more detailed analytics on GitLab.com.
-- Use Service Ping to track aggregated backend events on self-managed instances.
+- Service Ping delivers only [metrics](../index.md#metric), not individual events.
+- A metric has to be present and instrumented in the codebase of a GitLab version to be delivered in Service Ping for that version.
## Service Ping request flow
@@ -358,14 +353,6 @@ The following is example content of the Service Ping payload.
}
```
-## Notable changes
-
-In GitLab 14.6, [`flavor`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/75587) was added to try to detect the underlying managed database variant.
-Possible values are "Amazon Aurora PostgreSQL", "PostgreSQL on Amazon RDS", "Cloud SQL for PostgreSQL",
-"Azure Database for PostgreSQL - Flexible Server", or "null".
-
-In GitLab 13.5, `pg_system_id` was added to send the [PostgreSQL system identifier](https://www.2ndquadrant.com/en/blog/support-for-postgresqls-system-identifier-in-barman/).
-
## Export Service Ping data
Rake tasks exist to export Service Ping data in different formats.
@@ -390,105 +377,7 @@ bin/rake gitlab:usage_data:dump_non_sql_in_json
bin/rake gitlab:usage_data:dump_sql_in_yaml > ~/Desktop/usage-metrics-2020-09-02.yaml
```
-## Generate Service Ping
-
-To generate Service Ping, use [Teleport](https://goteleport.com/docs/) or a detached screen session on a remote server.
-
-### Triggering
-
-#### Trigger Service Ping with Teleport
-
-1. Request temporary [access](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/teleport/Connect_to_Rails_Console_via_Teleport.md#how-to-use-teleport-to-connect-to-rails-console) to the required environment.
-1. After your approval is issued, [access the Rails console](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/teleport/Connect_to_Rails_Console_via_Teleport.md#access-approval).
-1. Run `GitlabServicePingWorker.new.perform('triggered_from_cron' => false)`.
-
-#### Trigger Service Ping with a detached screen session
-
-1. Connect to bastion with agent forwarding:
-
- ```shell
- ssh -A lb-bastion.gprd.gitlab.com
- ```
-
-1. Create named screen:
-
- ```shell
- screen -S <username>_usage_ping_<date>
- ```
-
-1. Connect to console host:
-
- ```shell
- ssh $USER-rails@console-01-sv-gprd.c.gitlab-production.internal
- ```
-
-1. Run:
-
- ```shell
- GitlabServicePingWorker.new.perform('triggered_from_cron' => false)
- ```
-
-1. To detach from screen, press `ctrl + A`, `ctrl + D`.
-1. Exit from bastion:
-
- ```shell
- exit
- ```
-
-1. Get the metrics duration from logs:
-
-Search in Google Console logs for `time_elapsed`. [Query example](https://cloudlogging.app.goo.gl/nWheZvD8D3nWazNe6).
-
-### Verification (After approx 30 hours)
-
-#### Verify with Teleport
-
-1. Follow [the steps](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/teleport/Connect_to_Rails_Console_via_Teleport.md#how-to-use-teleport-to-connect-to-rails-console) to request a new access to the required environment and connect to the Rails console
-1. Check the last payload in `raw_usage_data` table: `RawUsageData.last.payload`
-1. Check the when the payload was sent: `RawUsageData.last.sent_at`
-
-#### Verify using detached screen session
-
-1. Reconnect to bastion:
-
- ```shell
- ssh -A lb-bastion.gprd.gitlab.com
- ```
-
-1. Find your screen session:
-
- ```shell
- screen -ls
- ```
-
-1. Attach to your screen session:
-
- ```shell
- screen -x 14226.mwawrzyniak_usage_ping_2021_01_22
- ```
-
-1. Check the last payload in `raw_usage_data` table:
-
- ```shell
- RawUsageData.last.payload
- ```
-
-1. Check the when the payload was sent:
-
- ```shell
- RawUsageData.last.sent_at
- ```
-
-### Skip database write operations
-
-To skip database write operations, DevOps report creation, and storage of usage data payload, pass an optional argument:
-
-```shell
-skip_db_write:
-GitlabServicePingWorker.new.perform('triggered_from_cron' => false, 'skip_db_write' => true)
-```
-
-### Fallback values for Service Ping
+## Fallback values for Service Ping
We return fallback values in these cases:
diff --git a/doc/development/internal_api/index.md b/doc/development/internal_api/index.md
index f9b494b80c2..9b5bafaad8f 100644
--- a/doc/development/internal_api/index.md
+++ b/doc/development/internal_api/index.md
@@ -1215,7 +1215,7 @@ Example response:
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/9388) in GitLab 11.10.
-The group SCIM API implements the [RFC7644 protocol](https://www.rfc-editor.org/rfc/rfc7644). As this API is for
+The group SCIM API partially implements the [RFC7644 protocol](https://www.rfc-editor.org/rfc/rfc7644). This API provides the `/groups/:group_path/Users` and `/groups/:group_path/Users/:id` endpoints. The base URL is `<http|https>://<GitLab host>/api/scim/v2`. Because this API is for
**system** use for SCIM provider integration, it is subject to change without notice.
To use this API, enable [Group SSO](../../user/group/saml_sso/index.md) for the group.
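+
+For illustration only, the following is a minimal sketch of listing a group's SCIM identities with Ruby's `Net::HTTP`,
+assuming the group's SCIM token is sent as a bearer token. The host and group path are placeholder values; adjust them
+to your environment.
+
+```ruby
+require "net/http"
+require "json"
+require "uri"
+
+gitlab_host = "https://gitlab.example.com"         # placeholder
+group_path  = "my-group"                           # placeholder
+scim_token  = ENV.fetch("GITLAB_GROUP_SCIM_TOKEN") # token from the group's SSO SCIM configuration
+
+uri = URI("#{gitlab_host}/api/scim/v2/groups/#{group_path}/Users")
+request = Net::HTTP::Get.new(uri, "Authorization" => "Bearer #{scim_token}")
+
+response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
+
+# SCIM list responses (RFC7644) include a totalResults attribute.
+puts JSON.parse(response.body)["totalResults"]
+```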
@@ -1452,7 +1452,7 @@ Returns an empty response with a `204` status code if successful.
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/378599) in GitLab 15.8.
-The Instance SCIM API implements the [RFC7644 protocol](https://www.rfc-editor.org/rfc/rfc7644). As this API is for
+The instance SCIM API partially implements the [RFC7644 protocol](https://www.rfc-editor.org/rfc/rfc7644). This API provides the `/application/Users` and `/application/Users/:id` endpoints. The base URL is `<http|https>://<GitLab host>/api/scim/v2`. Because this API is for
**system** use for SCIM provider integration, it is subject to change without notice.
To use this API, enable [SAML SSO](../../integration/saml.md) for the instance.
diff --git a/doc/development/migration_style_guide.md b/doc/development/migration_style_guide.md
index 29181dd1b9d..afb36519b8d 100644
--- a/doc/development/migration_style_guide.md
+++ b/doc/development/migration_style_guide.md
@@ -1563,3 +1563,23 @@ Any table which has some high read operation compared to current [high-traffic t
As a general rule, we discourage adding columns to high-traffic tables that are purely for
analytics or reporting of GitLab.com. This can have negative performance impacts for all
self-managed instances without providing direct feature value to them.
+
+## Milestone
+
+Beginning in GitLab 16.6, all new migrations must specify a milestone, using the following syntax:
+
+```ruby
+class AddFooToBar < Gitlab::Database::Migration[2.2]
+ milestone '16.6'
+
+ def change
+ # Your migration here
+ end
+end
+```
+
+Adding the correct milestone to a migration enables us to logically partition migrations into
+their corresponding GitLab minor versions. This:
+
+- Simplifies the upgrade process.
+- Alleviates potential migration ordering issues that arise when we rely solely on the migration's timestamp for ordering.
diff --git a/doc/development/permissions/custom_roles.md b/doc/development/permissions/custom_roles.md
index a060d7a740b..1630ea7b9ab 100644
--- a/doc/development/permissions/custom_roles.md
+++ b/doc/development/permissions/custom_roles.md
@@ -200,6 +200,10 @@ Examples of merge requests adding new abilities to custom roles:
You should make sure a new custom roles ability is under a feature flag.
+### Privilege escalation consideration
+
+A base role typically has permissions that allow creation or management of artifacts corresponding to the base role when interacting with that artifact. For example, when a `Developer` creates an access token for a project, it is created with `Developer` access encoded into that credential. It is important to keep in mind that as new custom permissions are created, there might be a risk of elevated privileges when interacting with GitLab artifacts, and appropriate safeguards or base role checks should be added.
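+
+A minimal, hypothetical sketch (not GitLab's actual implementation) of the kind of safeguard this implies:
+clamp any access level encoded into a newly created credential to the member's base role. The numeric levels
+follow the conventional GitLab access levels, and the method name is illustrative only.
+
+```ruby
+BASE_ROLE_LEVELS = { guest: 10, reporter: 20, developer: 30, maintainer: 40, owner: 50 }.freeze
+
+def safe_credential_access_level(member_base_role:, requested_role:)
+  base      = BASE_ROLE_LEVELS.fetch(member_base_role)
+  requested = BASE_ROLE_LEVELS.fetch(requested_role)
+
+  # Never encode more access into the credential than the base role allows.
+  [requested, base].min
+end
+
+safe_credential_access_level(member_base_role: :developer, requested_role: :maintainer)
+# => 30 (Developer), even though Maintainer (40) was requested
+```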
+
### Consuming seats
If a new user with a role `Guest` is added to a member role that includes enablement of an ability that is **not** in the `CUSTOMIZABLE_PERMISSIONS_EXEMPT_FROM_CONSUMING_SEAT` array, a seat is consumed. We simply want to make sure we are charging Ultimate customers for guest users, who have "elevated" abilities. This only applies to billable users on SaaS (billable users that are counted towards namespace subscription). More details about this topic can be found in [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/390269).
diff --git a/doc/development/pipelines/index.md b/doc/development/pipelines/index.md
index 2266bdbe459..77f91300a57 100644
--- a/doc/development/pipelines/index.md
+++ b/doc/development/pipelines/index.md
@@ -610,15 +610,26 @@ Exceptions to this general guideline should be motivated and documented.
### Ruby versions testing
-We're running Ruby 3.0 on GitLab.com, as well as for merge requests and the default branch.
-To prepare for the next release, Ruby 3.1, we also run our test suite against Ruby 3.1 on
-a dedicated 2-hourly scheduled pipelines.
+We're running Ruby 3.0 on GitLab.com, as well as for the default branch.
+To prepare for the next Ruby version, we run merge requests in Ruby 3.1.
-For merge requests, you can add the `pipeline:run-in-ruby3_1` label to switch
-the Ruby version used for running the whole test suite to 3.1. When you do
-this, the test suite will no longer run in Ruby 3.0 (default), and an
-additional job `verify-ruby-3.0` will also run and always fail to remind us to
-remove the label and run in Ruby 3.0 before merging the merge request.
+This takes effect when
+[Run merge requests in Ruby 3.1 by default](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134290)
+is merged. See the
+[Ruby 3.1 epic](https://gitlab.com/groups/gitlab-org/-/epics/10034)
+for the roadmap to make Ruby 3.1 the default everywhere.
+
+To make sure both Ruby versions are working, we also run our test suite
+against both Ruby 3.0 and Ruby 3.1 on dedicated 2-hourly scheduled pipelines.
+
+For merge requests, you can add the `pipeline:run-in-ruby3_0` label to switch
+the Ruby version to 3.0. When you do this, the test suite will no longer run
+in Ruby 3.1 (default for merge requests).
+
+When the pipeline is running in a Ruby version not considered default, an
+additional job `verify-default-ruby` will also run and always fail to remind
+us to remove the label and run in default Ruby before merging the merge
+request. At the moment both Ruby 3.0 and Ruby 3.1 are considered default.
This should let us:
@@ -632,17 +643,17 @@ Our test suite runs against PostgreSQL 14 as GitLab.com runs on PostgreSQL 14 an
We do run our test suite against PostgreSQL 14 on nightly scheduled pipelines.
-We also run our test suite against PostgreSQL 12 and PostgreSQL 13 upon specific database library changes in merge requests and `main` pipelines (with the `rspec db-library-code pg12` and `rspec db-library-code pg13` jobs).
+We also run our test suite against PostgreSQL 13 upon specific database library changes in merge requests and `main` pipelines (with the `rspec db-library-code pg13` job).
#### Current versions testing
| Where? | PostgreSQL version | Ruby version |
|--------------------------------------------------------------------------------------------------|-------------------------------------------------|-----------------------|
-| Merge requests | 14 (default version), 13 for DB library changes | 3.0 (default version) |
+| Merge requests | 14 (default version), 13 for DB library changes | 3.1 |
| `master` branch commits | 14 (default version), 13 for DB library changes | 3.0 (default version) |
| `maintenance` scheduled pipelines for the `master` branch (every even-numbered hour) | 14 (default version), 13 for DB library changes | 3.0 (default version) |
| `maintenance` scheduled pipelines for the `ruby3_1` branch (every odd-numbered hour), see below. | 14 (default version), 13 for DB library changes | 3.1 |
-| `nightly` scheduled pipelines for the `master` branch | 14 (default version), 12, 13, 15 | 3.0 (default version) |
+| `nightly` scheduled pipelines for the `master` branch | 14 (default version), 13, 15 | 3.0 (default version) |
There are 2 pipeline schedules used for testing Ruby 3.1. One is triggering a
pipeline in `ruby3_1-sync` branch, which updates the `ruby3_1` branch with latest
diff --git a/doc/development/repository_storage_moves/index.md b/doc/development/repository_storage_moves/index.md
new file mode 100644
index 00000000000..578bc1eabee
--- /dev/null
+++ b/doc/development/repository_storage_moves/index.md
@@ -0,0 +1,102 @@
+---
+stage: Create
+group: Source Code
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Project Repository Storage Moves
+
+This document was created to help contributors understand the code design of
+[project repository storage moves](../../api/project_repository_storage_moves.md).
+Read this document before making changes to the code for this feature.
+
+This document is intentionally limited to an overview of how the code is
+designed, as code can change often. To understand how a specific part of the
+feature works, view the code and the specs. The details here explain how the
+major components of project repository storage moves work.
+
+NOTE:
+This document should be updated when parts of the codebase referenced in this
+document are updated, removed, or new parts are added.
+
+## Business logic
+
+- `Projects::RepositoryStorageMove`: Tracks the move, includes state machine.
+ - Defined in `app/models/projects/repository_storage_move.rb`.
+- `RepositoryStorageMovable`: Contains the state machine logic, validators, and some helper methods.
+ - Defined in `app/models/concerns/repository_storage_movable.rb`.
+- `Project`: The project model.
+ - Defined in `app/models/project.rb`.
+- `CanMoveRepositoryStorage`: Contains helper methods that are mixed into `Project`.
+ - Defined in `app/models/concerns/can_move_repository_storage.rb`.
+- `API::ProjectRepositoryStorageMoves`: API class for project repository storage moves.
+ - Defined in `lib/api/project_repository_storage_moves.rb`.
+- `Entities::Projects::RepositoryStorageMove`: API entity for serializing the `Projects::RepositoryStorageMove` model.
+ - Defined in `lib/api/entities/projects/repository_storage_moves.rb`.
+- `Projects::ScheduleBulkRepositoryShardMovesService`: Service to schedule bulk moves.
+ - Defined in `app/services/projects/schedule_bulk_repository_shard_moves_service.rb`.
+- `ScheduleBulkRepositoryShardMovesMethods`: Generic methods for bulk moves.
+ - Defined in `app/services/concerns/schedule_bulk_repository_shard_moves_methods.rb`.
+- `Projects::ScheduleBulkRepositoryShardMovesWorker`: Worker to handle bulk moves.
+ - Defined in `app/workers/projects/schedule_bulk_repository_shard_moves_worker.rb`.
+- `Projects::UpdateRepositoryStorageWorker`: Finds repository storage move and then calls the update storage service.
+ - Defined in `app/workers/projects/update_repository_storage_worker.rb`.
+- `UpdateRepositoryStorageWorker`: Module containing generic logic for `Projects::UpdateRepositoryStorageWorker`.
+ - Defined in `app/workers/concerns/update_repository_storage_worker.rb`.
+- `Projects::UpdateRepositoryStorageService`: Performs the move.
+ - Defined in `app/services/projects/update_repository_storage_service.rb`.
+- `UpdateRepositoryStorageMethods`: Module with generic methods included in `Projects::UpdateRepositoryStorageService`.
+ - Defined in `app/services/concerns/update_repository_storage_methods.rb`.
+- `Projects::UpdateService`: Schedules move if the passed parameters request a move.
+ - Defined in `app/services/projects/update_service.rb`.
+- `PoolRepository`: Ruby object representing Gitaly `ObjectPool`.
+ - Defined in `app/models/pool_repository.rb`.
+- `ObjectPool::CreateWorker`: Worker to create an `ObjectPool` via `Gitaly`.
+ - Defined in `app/workers/object_pool/create_worker.rb`.
+- `ObjectPool::JoinWorker`: Worker to join an `ObjectPool` via `Gitaly`.
+ - Defined in `app/workers/object_pool/join_worker.rb`.
+- `ObjectPool::ScheduleJoinWorker`: Worker to schedule an `ObjectPool::JoinWorker`.
+ - Defined in `app/workers/object_pool/schedule_join_worker.rb`.
+- `ObjectPool::DestroyWorker`: Worker to destroy an `ObjectPool` via `Gitaly`.
+ - Defined in `app/workers/object_pool/destroy_worker.rb`.
+- `ObjectPoolQueue`: Module to configure `ObjectPool` workers.
+ - Defined in `app/workers/concerns/object_pool_queue.rb`.
+- `Repositories::ReplicateService`: Handles replication of data from one repository to another.
+ - Defined in `app/services/repositories/replicate_service.rb`.
+
+## Flow
+
+These flowcharts should help explain the flow from the endpoints down to the
+models for different features.
+
+### Schedule a repository storage move via the API
+
+```mermaid
+graph TD
+ A[<code>POST /api/:version/project_repository_storage_moves</code>] --> C
+ B[<code>POST /api/:version/projects/:id/repository_storage_moves</code>] --> D
+ C[Schedule move for each project in shard] --> D[Set state to scheduled]
+ D --> E[<code>after_transition callback</code>]
+ E --> F{<code>set_repository_read_only!</code>}
+ F -->|success| H[Schedule repository update worker]
+ F -->|error| G[Set state to failed]
+```
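+
+For orientation, the following is a minimal sketch of calling the single-project endpoint shown above
+(`POST /api/:version/projects/:id/repository_storage_moves`) with Ruby's `Net::HTTP`. The host, project ID, and
+destination storage name are placeholder values; see the
+[project repository storage moves API](../../api/project_repository_storage_moves.md) for the exact parameters.
+
+```ruby
+require "net/http"
+require "json"
+require "uri"
+
+gitlab_host = "https://gitlab.example.com"  # placeholder
+project_id  = 42                            # placeholder
+token       = ENV.fetch("GITLAB_API_TOKEN") # token with API scope
+
+uri = URI("#{gitlab_host}/api/v4/projects/#{project_id}/repository_storage_moves")
+request = Net::HTTP::Post.new(uri, "PRIVATE-TOKEN" => token, "Content-Type" => "application/json")
+request.body = { destination_storage_name: "storage2" }.to_json
+
+response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
+puts response.code # expect a 2xx status when the move is scheduled
+```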
+
+### Moving the storage after being scheduled
+
+```mermaid
+graph TD
+ A[Repository update worker scheduled] --> B{State is scheduled?}
+ B -->|Yes| C[Set state to started]
+ B -->|No| D[Return success]
+ C --> E{Same filesystem?}
+ E -.-> G[Set project repo to writable]
+ E -->|Yes| F["Mirror repositories (project, wiki, design, & pool)"]
+ G --> H[Update repo storage value]
+ H --> I[Set state to finished]
+ I --> J[Associate project with new pool repository]
+ J --> K[Unlink old pool repository]
+ K --> L[Update project repository storage values]
+ L --> N[Remove old paths if same filesystem]
+ N --> M[Set state to finished]
+```
diff --git a/doc/development/rubocop_development_guide.md b/doc/development/rubocop_development_guide.md
index 6568d025ca5..807544b71d4 100644
--- a/doc/development/rubocop_development_guide.md
+++ b/doc/development/rubocop_development_guide.md
@@ -28,15 +28,51 @@ discussions, nitpicking, or back-and-forth in reviews. The
[GitLab Ruby style guide](backend/ruby_style_guide.md) includes a non-exhaustive
list of styles that commonly come up in reviews and are not enforced.
-By default, we should not
-[disable a RuboCop rule inline](https://docs.rubocop.org/rubocop/configuration.html#disabling-cops-within-source-code), because it negates agreed-upon code standards that the rule is attempting to apply to the codebase.
-
-If you must use inline disable, provide the reason on the MR and ensure the reviewers agree
-before merging.
-
Additionally, we have dedicated
[test-specific style guides and best practices](testing_guide/index.md).
+## Disabling rules inline
+
+By default, RuboCop rules should not be
+[disabled inline](https://docs.rubocop.org/rubocop/configuration.html#disabling-cops-within-source-code),
+because it negates agreed-upon code standards that the rule is attempting to
+apply to the codebase.
+
+If you must use an inline disable, provide the reason as a code comment on
+the same line where the rule is disabled.
+
+More context can go into code comments above this inline disable comment. To
+keep those comments brief, link to a resource (an issue, an epic, and so on) that provides
+detailed context.
+
+For example:
+
+```ruby
+# bad
+module Types
+ module Domain
+ # rubocop:disable Graphql/AuthorizeTypes
+ class SomeType < BaseObject
+ object.public_send(action) # rubocop:disable GitlabSecurity/PublicSend
+ end
+ # rubocop:enable Graphql/AuthorizeTypes
+ end
+end
+
+# good
+module Types
+ module Domain
+    # rubocop:disable Graphql/AuthorizeTypes -- already authorized in parent entity
+ class SomeType < BaseObject
+ # At this point `action` is safe to be used in `public_send`.
+ # See https://gitlab.com/gitlab-org/gitlab/-/issues/123457890.
+ object.public_send(action) # rubocop:disable GitlabSecurity/PublicSend -- User input verified
+ end
+ # rubocop:enable Graphql/AuthorizeTypes
+ end
+end
+```
+
## Creating new RuboCop cops
Typically it is better for the linting rules to be enforced programmatically as it
diff --git a/doc/development/ruby_upgrade.md b/doc/development/ruby_upgrade.md
index 52f0f72e72a..61bc629e8c8 100644
--- a/doc/development/ruby_upgrade.md
+++ b/doc/development/ruby_upgrade.md
@@ -84,6 +84,8 @@ order reversed as described above.
Tracking this work in an epic is useful to get a sense of progress. For larger upgrades, include a
timeline in the epic description so stakeholders know when the final switch is expected to go live.
+Include the designated [performance testing template](https://gitlab.com/gitlab-org/quality/performance-testing/ruby-rollout-performance-testing)
+to help ensure the upgrade meets performance standards.
Break changes to individual repositories into separate issues under this epic.
@@ -141,14 +143,13 @@ A [build matrix definition](../ci/yaml/index.md#parallelmatrix) can do this effi
#### Decide which repositories to update
-When upgrading Ruby, consider updating the following repositories:
+When upgrading Ruby, consider updating the repositories in the [`ruby/gems` group](https://gitlab.com/gitlab-org/ruby/gems/) as well.
+For reference, here is a list of merge requests that have updated Ruby for some of these projects in the past:
-- [Gitaly](https://gitlab.com/gitlab-org/gitaly) ([example](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/3771))
- [GitLab LabKit](https://gitlab.com/gitlab-org/labkit-ruby) ([example](https://gitlab.com/gitlab-org/labkit-ruby/-/merge_requests/79))
- [GitLab Exporter](https://gitlab.com/gitlab-org/ruby/gems/gitlab-exporter) ([example](https://gitlab.com/gitlab-org/ruby/gems/gitlab-exporter/-/merge_requests/150))
- [GitLab Experiment](https://gitlab.com/gitlab-org/ruby/gems/gitlab-experiment) ([example](https://gitlab.com/gitlab-org/ruby/gems/gitlab-experiment/-/merge_requests/128))
- [Gollum Lib](https://gitlab.com/gitlab-org/gollum-lib) ([example](https://gitlab.com/gitlab-org/gollum-lib/-/merge_requests/21))
-- [GitLab Helm Chart](https://gitlab.com/gitlab-org/charts/gitlab) ([example](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2162))
- [GitLab Sidekiq fetcher](https://gitlab.com/gitlab-org/sidekiq-reliable-fetch) ([example](https://gitlab.com/gitlab-org/sidekiq-reliable-fetch/-/merge_requests/33))
- [Prometheus Ruby Mmap Client](https://gitlab.com/gitlab-org/prometheus-client-mmap) ([example](https://gitlab.com/gitlab-org/prometheus-client-mmap/-/merge_requests/59))
- [GitLab-mail_room](https://gitlab.com/gitlab-org/gitlab-mail_room) ([example](https://gitlab.com/gitlab-org/gitlab-mail_room/-/merge_requests/16))
@@ -213,8 +214,6 @@ the new Ruby to be the new default.
The last step is to use the new Ruby in production. This
requires updating Omnibus and production Docker images to use the new version.
-Helm charts may also have to be updated if there were changes to related systems that maintain
-their own charts (such as `gitlab-exporter`.)
To use the new Ruby in production, update the following projects:
@@ -222,6 +221,11 @@ To use the new Ruby in production, update the following projects:
- [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab) ([example](https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/5545))
- [Self-compiled installations](../install/installation.md): update the [Ruby system version check](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/system_check/app/ruby_version_check.rb)
+Charts like the [GitLab Helm Chart](https://gitlab.com/gitlab-org/charts/gitlab) should also be updated if
+they use Ruby in some capacity, for example
+to run tests (see [this example](https://gitlab.com/gitlab-org/charts/gitlab/-/merge_requests/2162)), though
+this may not strictly be necessary.
+
If you submit a change management request, coordinate the rollout with infrastructure
engineers. When dealing with larger upgrades, involve [Release Managers](https://about.gitlab.com/community/release-managers/)
in the rollout plan.
diff --git a/doc/development/runner_fleet_dashboard.md b/doc/development/runner_fleet_dashboard.md
new file mode 100644
index 00000000000..2a7c7d05453
--- /dev/null
+++ b/doc/development/runner_fleet_dashboard.md
@@ -0,0 +1,245 @@
+---
+stage: Verify
+group: Runner
+info: >-
+ To determine the technical writer assigned to the Stage/Group associated with
+ this page, see
+ https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Runner Fleet Dashboard **(ULTIMATE BETA)**
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/424495) in GitLab 16.6 behind several [feature flags](#enable-feature-flags).
+
+This feature is in [BETA](../policy/experiment-beta-support.md).
+To join the list of users testing this feature, contact us in
+[epic 11180](https://gitlab.com/groups/gitlab-org/-/epics/11180).
+
+GitLab administrators can use the Runner Fleet Dashboard to assess the health of their instance runners.
+The Runner Fleet Dashboard shows:
+
+- Recent CI errors caused by runner infrastructure.
+- Number of concurrent jobs executed on the busiest runners.
+- Histogram of job queue times (available only with ClickHouse).
+
+There is a proposal to introduce [more features](#whats-next) to the Runner Fleet Dashboard.
+
+![Runner Fleet Dashboard](img/runner_fleet_dashboard.png)
+
+## View the Runner Fleet Dashboard
+
+Prerequisites:
+
+- You must be an administrator.
+
+To view the runner fleet dashboard:
+
+1. On the left sidebar, select **Search or go to**.
+1. Select **Admin Area**.
+1. Select **Runners**.
+1. Select **Fleet dashboard**.
+
+Most of the dashboard works without any additional actions, with the
+exception of the **Wait time to pick a job** chart and the [proposed features](#whats-next).
+These features require setting up additional infrastructure, as described on this page.
+
+To test the Runner Fleet Dashboard and gather feedback, we have launched an early adopters program
+for some customers to try this feature.
+
+## Requirements
+
+To test the Runner Fleet Dashboard as part of the early adopters program, you must:
+
+- Run GitLab 16.6 or above.
+- Have an [Ultimate license](https://about.gitlab.com/pricing/).
+- Be able to run a ClickHouse database. We recommend using [ClickHouse Cloud](https://clickhouse.cloud/).
+
+## Setup
+
+To set up ClickHouse as the GitLab data storage:
+
+1. [Run a ClickHouse cluster and configure the database](#run-and-configure-clickhouse).
+1. [Configure the GitLab connection to ClickHouse](#configure-the-gitlab-connection-to-clickhouse).
+1. [Enable the feature flags](#enable-feature-flags).
+
+### Run and configure ClickHouse
+
+The most straightforward way to run ClickHouse is with [ClickHouse Cloud](https://clickhouse.cloud/).
+You can also [run ClickHouse on your own server](https://clickhouse.com/docs/en/install). Refer to the ClickHouse
+documentation regarding [recommendations for self-managed instances](https://clickhouse.com/docs/en/install#recommendations-for-self-managed-clickhouse).
+
+When you run ClickHouse on a hosted server, several factors can affect resource consumption, such as the number
+of builds that run on your instance each month, the selected hardware, and the data center chosen to host ClickHouse.
+Regardless, the cost should not be significant.
+
+NOTE:
+ClickHouse is a secondary data store for GitLab. All your data is still stored in PostgreSQL,
+and only duplicated in ClickHouse for analytics purposes.
+
+To create the necessary user and database objects:
+
+1. Generate a secure password and save it.
+1. Sign in to the ClickHouse SQL console.
+1. Execute the following command. Replace `PASSWORD_HERE` with the generated password.
+
+ ```sql
+ CREATE DATABASE gitlab_clickhouse_main_production;
+ CREATE USER gitlab IDENTIFIED WITH sha256_password BY 'PASSWORD_HERE';
+ CREATE ROLE gitlab_app;
+ GRANT SELECT, INSERT, ALTER, CREATE, UPDATE, DROP, TRUNCATE, OPTIMIZE ON gitlab_clickhouse_main_production.* TO gitlab_app;
+ GRANT gitlab_app TO gitlab;
+ ```
+
+1. Connect to the `gitlab_clickhouse_main_production` database (or switch to it in the ClickHouse Cloud UI).
+
+1. To create the required database objects, execute:
+
+ ```sql
+ CREATE TABLE ci_finished_builds
+ (
+ id UInt64 DEFAULT 0,
+ project_id UInt64 DEFAULT 0,
+ pipeline_id UInt64 DEFAULT 0,
+ status LowCardinality(String) DEFAULT '',
+ created_at DateTime64(6, 'UTC') DEFAULT now(),
+ queued_at DateTime64(6, 'UTC') DEFAULT now(),
+ finished_at DateTime64(6, 'UTC') DEFAULT now(),
+ started_at DateTime64(6, 'UTC') DEFAULT now(),
+ runner_id UInt64 DEFAULT 0,
+ runner_manager_system_xid String DEFAULT '',
+ runner_run_untagged Boolean DEFAULT FALSE,
+ runner_type UInt8 DEFAULT 0,
+ runner_manager_version LowCardinality(String) DEFAULT '',
+ runner_manager_revision LowCardinality(String) DEFAULT '',
+ runner_manager_platform LowCardinality(String) DEFAULT '',
+ runner_manager_architecture LowCardinality(String) DEFAULT '',
+ duration Int64 MATERIALIZED age('ms', started_at, finished_at),
+ queueing_duration Int64 MATERIALIZED age('ms', queued_at, started_at)
+ )
+ ENGINE = ReplacingMergeTree
+ ORDER BY (status, runner_type, project_id, finished_at, id)
+ PARTITION BY toYear(finished_at);
+
+ CREATE TABLE ci_finished_builds_aggregated_queueing_delay_percentiles
+ (
+ status LowCardinality(String) DEFAULT '',
+ runner_type UInt8 DEFAULT 0,
+ started_at_bucket DateTime64(6, 'UTC') DEFAULT now(),
+
+ count_builds AggregateFunction(count),
+ queueing_duration_quantile AggregateFunction(quantile, Int64)
+ )
+ ENGINE = AggregatingMergeTree()
+ ORDER BY (started_at_bucket, status, runner_type);
+
+ CREATE MATERIALIZED VIEW ci_finished_builds_aggregated_queueing_delay_percentiles_mv
+ TO ci_finished_builds_aggregated_queueing_delay_percentiles
+ AS
+ SELECT
+ status,
+ runner_type,
+ toStartOfInterval(started_at, INTERVAL 5 minute) AS started_at_bucket,
+
+ countState(*) as count_builds,
+ quantileState(queueing_duration) AS queueing_duration_quantile
+ FROM ci_finished_builds
+ GROUP BY status, runner_type, started_at_bucket;
+ ```
+
+### Configure the GitLab connection to ClickHouse
+
+::Tabs
+
+:::TabTitle Linux package
+
+To provide GitLab with ClickHouse credentials:
+
+1. Edit `/etc/gitlab/gitlab.rb`:
+
+ ```ruby
+ gitlab_rails['clickhouse_databases']['main']['database'] = 'gitlab_clickhouse_main_production'
+ gitlab_rails['clickhouse_databases']['main']['url'] = 'https://example.com/path'
+ gitlab_rails['clickhouse_databases']['main']['username'] = 'gitlab'
+ gitlab_rails['clickhouse_databases']['main']['password'] = 'PASSWORD_HERE' # replace with the actual password
+ ```
+
+1. Save the file and reconfigure GitLab:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+:::TabTitle Helm chart (Kubernetes)
+
+1. Save the ClickHouse password as a Kubernetes Secret:
+
+ ```shell
+ kubectl create secret generic gitlab-clickhouse-password --from-literal="main_password=PASSWORD_HERE"
+ ```
+
+1. Export the Helm values:
+
+ ```shell
+ helm get values gitlab > gitlab_values.yaml
+ ```
+
+1. Edit `gitlab_values.yaml`:
+
+ ```yaml
+ global:
+ clickhouse:
+ enabled: true
+ main:
+ username: default
+ password:
+ secret: gitlab-clickhouse-password
+ key: main_password
+ database: gitlab_clickhouse_main_production
+ url: 'http://example.com'
+ ```
+
+1. Save the file and apply the new values:
+
+ ```shell
+ helm upgrade -f gitlab_values.yaml gitlab gitlab/gitlab
+ ```
+
+::EndTabs
+
+To verify that your connection is set up successfully:
+
+1. Log in to the [Rails console](../administration/operations/rails_console.md#starting-a-rails-console-session).
+1. Execute the following:
+
+ ```ruby
+ ClickHouse::Client.select('SELECT 1', :main)
+ ```
+
+   If successful, the command returns `[{"1"=>1}]`.
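+
+   To also confirm that the tables created earlier are reachable, you can, for example, count the
+   rows in `ci_finished_builds` through the same client:
+
+   ```ruby
+   ClickHouse::Client.select('SELECT count() FROM ci_finished_builds', :main)
+   ```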
+
+### Enable feature flags
+
+Features that use ClickHouse are currently under development and are disabled by feature flags.
+
+To enable these features, [enable](../administration/feature_flags.md#how-to-enable-and-disable-features-behind-flags)
+the following feature flags:
+
+| Feature flag name | Purpose |
+|------------------------------------|---------------------------------------------------------------------------|
+| `ci_data_ingestion_to_click_house` | Enables synchronization of new finished CI builds to the ClickHouse database. |
+| `clickhouse_ci_analytics` | Enables the **Wait time to pick a job** chart. |
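+
+For example, from the [Rails console](../administration/operations/rails_console.md#starting-a-rails-console-session),
+enabling both flags for the whole instance might look like this:
+
+```ruby
+Feature.enable(:ci_data_ingestion_to_click_house)
+Feature.enable(:clickhouse_ci_analytics)
+```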
+
+## What's next
+
+Support for usage and cost analysis is proposed in
+[epic 11183](https://gitlab.com/groups/gitlab-org/-/epics/11183).
+
+## Feedback
+
+To help us improve the Runner Fleet Dashboard, you can provide feedback in
+[issue 421737](https://gitlab.com/gitlab-org/gitlab/-/issues/421737).
+In particular:
+
+- How easy or difficult it was to set up GitLab to make the dashboard work.
+- How useful you found the dashboard.
+- What other information you would like to see on that dashboard.
+- Any other related thoughts and ideas.
diff --git a/doc/development/testing_guide/end_to_end/beginners_guide.md b/doc/development/testing_guide/end_to_end/beginners_guide.md
index 12f90e0d88c..4a3aec97d29 100644
--- a/doc/development/testing_guide/end_to_end/beginners_guide.md
+++ b/doc/development/testing_guide/end_to_end/beginners_guide.md
@@ -127,7 +127,7 @@ Assign `product_group` metadata and specify what product group this test belongs
module QA
RSpec.describe 'Manage' do
- describe 'Login', product_group: :authentication_and_authorization do
+ describe 'Login', product_group: :authentication do
end
end
@@ -142,7 +142,7 @@ writing end-to-end tests is to write test case descriptions as `it` blocks:
```ruby
module QA
RSpec.describe 'Manage' do
- describe 'Login', product_group: :authentication_and_authorization do
+ describe 'Login', product_group: :authentication do
it 'can login' do
end
@@ -166,7 +166,7 @@ Begin by logging in.
module QA
RSpec.describe 'Manage' do
- describe 'Login', product_group: :authentication_and_authorization do
+ describe 'Login', product_group: :authentication do
it 'can login' do
Flow::Login.sign_in
@@ -189,7 +189,7 @@ should answer the question "What do we test?"
module QA
RSpec.describe 'Manage' do
- describe 'Login', product_group: :authentication_and_authorization do
+ describe 'Login', product_group: :authentication do
it 'can login' do
Flow::Login.sign_in
@@ -236,7 +236,7 @@ a call to `sign_in`.
module QA
RSpec.describe 'Manage' do
- describe 'Login', product_group: :authentication_and_authorization do
+ describe 'Login', product_group: :authentication do
before do
Flow::Login.sign_in
end
diff --git a/doc/development/testing_guide/end_to_end/capybara_to_chemlab_migration_guide.md b/doc/development/testing_guide/end_to_end/capybara_to_chemlab_migration_guide.md
index 7bac76c88e8..025f998c0c9 100644
--- a/doc/development/testing_guide/end_to_end/capybara_to_chemlab_migration_guide.md
+++ b/doc/development/testing_guide/end_to_end/capybara_to_chemlab_migration_guide.md
@@ -35,44 +35,6 @@ Given the view:
| ------ | ----- |
| ![before](img/gl-capybara_V13_12.png) | ![after](img/gl-chemlab_V13_12.png) |
-<!--
-```ruby
-# frozen_string_literal: true
-
-module QA
- module Page
- class Form < Page::Base
- view '_form.html' do
- element :first_name
- element :last_name
- element :company_name
- element :user_name
- element :password
- element :continue
- end
- end
- end
-end
-```
-```ruby
-# frozen_string_literal: true
-
-module QA
- module Page
- class Form < Chemlab::Page
- text_field :first_name
- text_field :last_name
- text_field :company_name
- text_field :user_name
- text_field :password
-
- button :continue
- end
- end
-end
-```
--->
-
## Key Differences
### Page Library Design vs Page Object Design
diff --git a/doc/development/utilities.md b/doc/development/utilities.md
index 343d03b9d68..83b87d6d289 100644
--- a/doc/development/utilities.md
+++ b/doc/development/utilities.md
@@ -206,7 +206,7 @@ Refer to [`strong_memoize.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/maste
# good
def expensive_method(arg)
- strong_memoize_with(:expensive_method, arg)
+ strong_memoize_with(:expensive_method, arg) do
# ...
end
end
diff --git a/doc/development/wikis.md b/doc/development/wikis.md
index a814fa76ec9..eca43f6df03 100644
--- a/doc/development/wikis.md
+++ b/doc/development/wikis.md
@@ -28,9 +28,6 @@ Some notable gems that are used for wikis are:
| Component | Description | Gem name | GitLab project | Upstream project |
|:--------------|:-----------------------------------------------|:-------------------------------|:--------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------|
| `gitlab` | Markup renderer, depends on various other gems | `gitlab-markup` | [`gitlab-org/gitlab-markup`](https://gitlab.com/gitlab-org/gitlab-markup) | [`github/markup`](https://github.com/github/markup) |
-| `gollum-lib` | Main Gollum library | `gitlab-gollum-lib` | [`gitlab-org/gollum-lib`](https://gitlab.com/gitlab-org/gollum-lib) | [`gollum/gollum-lib`](https://github.com/gollum/gollum-lib) |
-| | Gollum Git adapter for Rugged | `gitlab-gollum-rugged_adapter` | [`gitlab-org/gitlab-gollum-rugged_adapter`](https://gitlab.com/gitlab-org/gitlab-gollum-rugged_adapter) | [`gollum/rugged_adapter`](https://github.com/gollum/rugged_adapter) |
-| | Rugged (also used in Gitaly itself) | `rugged` | - | [`libgit2/rugged`](https://github.com/libgit2/rugged) |
### Notes on Gollum
diff --git a/doc/devsecops.md b/doc/devsecops.md
new file mode 100644
index 00000000000..f035121898a
--- /dev/null
+++ b/doc/devsecops.md
@@ -0,0 +1,60 @@
+---
+stage: none
+group: unassigned
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+description: 'Learn how to use and administer GitLab, the most scalable Git-based fully integrated platform for software development.'
+---
+
+# GitLab: The DevSecOps platform
+
+DevSecOps is a combination of development, security, and operations.
+It is an approach to software development that integrates security throughout the development lifecycle.
+
+## DevSecOps compared to DevOps
+
+DevOps combines development and operations, with the intent to increase the efficiency,
+speed, and security of software development and delivery.
+
+DevOps means working together to conceive, build, and deliver secure software at top speed.
+DevOps practices include automation, collaboration, fast feedback, and iterative improvement.
+
+DevSecOps is an evolution of DevOps. DevSecOps includes application security practices in every stage of software development.
+
+Throughout the development process, tools and methods protect and monitor your live applications.
+New attack surfaces, like containers and orchestrators, must also be monitored and protected.
+DevSecOps tools automate security workflows to create an adaptable process for your development
+and security teams, improving collaboration and breaking down silos.
+By embedding security into the software development lifecycle, you can consistently secure fast-moving
+and iterative processes, improving efficiency without sacrificing quality.
+
+## DevSecOps fundamentals
+
+DevSecOps fundamentals include:
+
+- Automation
+- Collaboration
+- Policy guardrails
+- Visibility
+
+For details, see [this article about DevSecOps](https://about.gitlab.com/topics/devsecops/).
+
+## Is DevSecOps right for you?
+
+If your organization is facing any of the following challenges, a DevSecOps approach might be for you.
+
+- **Development, security, and operations teams are siloed.**
+ If development and operations are isolated from security issues,
+ they can't build secure software. And if security teams aren't part of the development process,
+ they can't identify risks proactively. DevSecOps brings teams together to improve workflows
+ and share ideas. Organizations might even see improved employee morale and retention.
+
+- **Long development cycles are making it difficult to meet customer or stakeholder demands.**
+ One reason for the struggle could be security. DevSecOps implements security at every step of
+  the development lifecycle, meaning that solid security doesn't require the whole process to come to a halt.
+
+- **You're migrating to the cloud (or considering it).**
+  Moving to the cloud often means bringing on new development processes, tools, and systems.
+  It's a great time to make processes faster and more secure, and DevSecOps could make that a lot easier.
+
+To get started with DevSecOps,
+[learn more, and try GitLab Ultimate for free](https://about.gitlab.com/solutions/security-compliance/).
diff --git a/doc/gitlab-basics/start-using-git.md b/doc/gitlab-basics/start-using-git.md
index 91fa91e3a6a..c46b89f7620 100644
--- a/doc/gitlab-basics/start-using-git.md
+++ b/doc/gitlab-basics/start-using-git.md
@@ -117,8 +117,10 @@ This connection requires you to add credentials. You can either use SSH or HTTPS
Clone with SSH when you want to authenticate only one time.
1. Authenticate with GitLab by following the instructions in the [SSH documentation](../user/ssh.md).
-1. Go to your project's landing page and select **Clone**. Copy the URL for **Clone with SSH**.
-1. Open a terminal and go to the directory where you want to clone the files. Git automatically creates a folder with the repository name and downloads the files there.
+1. On the left sidebar, select **Search or go to** and find the project you want to clone.
+1. On the right-hand side of the page, select **Clone**, then copy the URL for **Clone with SSH**.
+1. Open a terminal and go to the directory where you want to clone the files.
+ Git automatically creates a folder with the repository name and downloads the files there.
1. Run this command:
```shell
@@ -139,7 +141,8 @@ You can also
Clone with HTTPS when you want to authenticate each time you perform an operation
between your computer and GitLab.
-1. Go to your project's landing page and select **Clone**. Copy the URL for **Clone with HTTPS**.
+1. On the left sidebar, select **Search or go to** and find the project you want to clone.
+1. On the right-hand side of the page, select **Clone**, then copy the URL for **Clone with HTTPS**.
1. Open a terminal and go to the directory where you want to clone the files.
1. Run the following command. Git automatically creates a folder with the repository name and downloads the files there.
diff --git a/doc/install/aws/eks_clusters_aws.md b/doc/install/aws/eks_clusters_aws.md
index 45ba46fce1e..b05749bdde3 100644
--- a/doc/install/aws/eks_clusters_aws.md
+++ b/doc/install/aws/eks_clusters_aws.md
@@ -1,46 +1,11 @@
---
-stage: Systems
-group: Distribution
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+redirect_to: '../../solutions/cloud/aws/index.md'
+remove_date: '2024-03-31'
---
-# EKS cluster provisioning best practices **(FREE SELF)**
+This document was moved to [Solutions](../../solutions/cloud/aws/index.md).
-GitLab can be used to provision an EKS cluster into AWS, however, it necessarily focuses on a basic EKS configuration. Using the AWS tools can help with advanced cluster configuration, automation, and maintenance.
-
-This documentation is not for clusters for deployment of GitLab itself, but instead clusters purpose built for:
-
-- EKS Clusters for GitLab Runners
-- Application Deployment Clusters for GitLab review apps
-- Application Deployment Cluster for production applications
-
-Information on deploying GitLab onto EKS can be found in [Provisioning GitLab Cloud Native Hybrid on AWS EKS](gitlab_hybrid_on_aws.md).
-
-## Use `eksctl`
-
-Using `eksctl` enables the following when building an EKS Cluster:
-
-- You have various cluster configuration options:
- - Selection of operating system: Amazon Linux 2, Windows, Bottlerocket
- - Selection of Hardware Architecture: x86, ARM, GPU
- - Selection of Kubernetes version (the GitLab-managed clusters for your project's applications have [specific Kubernetes version requirements](../../user/clusters/agent/index.md#supported-kubernetes-versions-for-gitlab-features))
-- It can deploy high value-add items to the cluster, including:
- - A bastion host to keep the cluster endpoint private and possible perform performance testing.
- - Prometheus and Grafana for monitoring.
-- EKS Autoscaler for automatic K8s Node scaling.
-- 2 or 3 Availability Zones (AZ) spread for balance between High Availability (HA) and cost control.
-- Ability to specify spot compute.
-
-Read more about configuring Amazon EKS in the [`eksctl` guide](https://eksctl.io/getting-started/) and the [Amazon EKS User Guide](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html).
-
-## Inject GitLab configuration for integrating clusters
-
-Read more how to [configure an App Deployment cluster](../../user/project/clusters/add_existing_cluster.md) and extract information from it to integrate it into GitLab.
-
-## Provision GitLab Runners using Helm charts
-
-Read how to [use the GitLab Runner Helm Chart](https://docs.gitlab.com/runner/install/kubernetes.html) to deploy a runner into a cluster.
-
-## Runner Cache
-
-Because the EKS Quick Start provides for EFS provisioning, the best approach is to use EFS for runner caching. Eventually we will publish information on using an S3 bucket for runner caching here.
+<!-- This redirect file can be deleted after <YYYY-MM-DD>. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html --> \ No newline at end of file
diff --git a/doc/install/aws/gitlab_hybrid_on_aws.md b/doc/install/aws/gitlab_hybrid_on_aws.md
index b39f39f293e..84474e6615c 100644
--- a/doc/install/aws/gitlab_hybrid_on_aws.md
+++ b/doc/install/aws/gitlab_hybrid_on_aws.md
@@ -1,377 +1,11 @@
---
-stage: Systems
-group: Distribution
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+redirect_to: '../../solutions/cloud/aws/gitlab_instance_on_aws.md'
+remove_date: '2024-03-31'
---
-{::options parse_block_html="true" /}
+This document was moved to [Solutions](../../solutions/cloud/aws/gitlab_instance_on_aws.md).
-# Provision GitLab Cloud Native Hybrid on AWS EKS **(FREE SELF)**
-
-GitLab "Cloud Native Hybrid" is a hybrid of the cloud native technology Kubernetes (EKS) and EC2. While as much of the GitLab application as possible runs in Kubernetes or on AWS services (PaaS), the GitLab service Gitaly must still be run on EC2. Gitaly is a layer designed to overcome limitations of the Git binaries in a horizontally scaled architecture. You can read more here about why Gitaly was built and why the limitations of Git mean that it must currently run on instance compute in [Git Characteristics That Make Horizontal Scaling Difficult](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/DESIGN.md#git-characteristics-that-make-horizontal-scaling-difficult).
-
-Amazon provides a managed Kubernetes service offering known as [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/).
-
-## Tested AWS Bill of Materials by reference architecture size
-
-| GitLab Cloud Native Hybrid Ref Arch | GitLab Baseline Performance Test Results (using the Linux package on instances) | AWS Bill of Materials (BOM) for CNH | AWS Build Performance Testing Results for [CNH](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/5K/5k-QuickStart-ARM-RDS-Redis_v13-12-3-ee_2021-07-23_140128/5k-QuickStart-ARM-RDS-Redis_v13-12-3-ee_2021-07-23_140128_results.txt) | CNH Cost Estimate 3 AZs* |
-| ------------------------------------------------------------ | ------------------------------------------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| [2K Linux package installation](../../administration/reference_architectures/2k_users.md) | [2K Baseline](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/2k) | [2K Cloud Native Hybrid on EKS](#2k-cloud-native-hybrid-on-eks) | GPT Test Results | [1 YR Ec2 Compute Savings + 1 YR RDS & ElastiCache RIs](https://calculator.aws/#/estimate?id=544bcf1162beae6b8130ad257d081cdf9d4504e3)<br />(2 AZ Cost Estimate is in BOM Below) |
-| [3K](../../administration/reference_architectures/3k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) | [3k Baseline](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/3k) | [3K Cloud Native Hybrid on EKS](#3k-cloud-native-hybrid-on-eks) | [3K Full Fixed Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/3K/3k-QuickStart-ARM-RDS-Cache_v13-12-3-ee_2021-07-23_124216/3k-QuickStart-ARM-RDS-Cache_v13-12-3-ee_2021-07-23_124216_results.txt)<br /><br />[3K Elastic Auto Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/3K/3k-QuickStart-AutoScale-ARM-RDS-Cache_v13-12-3-ee_2021-07-23_194200/3k-QuickStart-AutoScale-ARM-RDS-Cache_v13-12-3-ee_2021-07-23_194200_results.txt) | [1 YR Ec2 Compute Savings + 1 YR RDS & ElastiCache RIs](https://calculator.aws/#/estimate?id=f1294fec554e21be999711cddcdab9c5e7f83f14)<br />(2 AZ Cost Estimate is in BOM Below) |
-| [5K](../../administration/reference_architectures/5k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) | [5k Baseline](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/5k) | [5K Cloud Native Hybrid on EKS](#5k-cloud-native-hybrid-on-eks) | [5K Full Fixed Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/5K/5k-QuickStart-ARM-RDS-Redis_v13-12-3-ee_2021-07-23_140128/5k-QuickStart-ARM-RDS-Redis_v13-12-3-ee_2021-07-23_140128_results.txt)<br /><br />[5K AutoScale from 25% GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/5K/5k-QuickStart-AutoScale-From-25Percent-ARM-RDS-Redis_v13-12-3-ee_2021-07-24_102717/5k-QuickStart-AutoScale-From-25Percent-ARM-RDS-Redis_v13-12-3-ee_2021-07-24_102717_results.txt) | [1 YR Ec2 Compute Savings + 1 YR RDS & ElastiCache RIs](https://calculator.aws/#/estimate?id=330ee43c5b14662db5df6e52b34898d181a09e16) |
-| [10K](../../administration/reference_architectures/10k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) | [10k Baseline](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/10k) | [10K Cloud Native Hybrid on EKS](#10k-cloud-native-hybrid-on-eks) | [10K Full Fixed Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/10K/GL-CloudNative-10k-RDS-Graviton_v13-12-3-ee_2021-07-08_194647/GL-CloudNative-10k-RDS-Graviton_v13-12-3-ee_2021-07-08_194647_results.txt)<br /><br />[10K Elastic Auto Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/10K/GL-CloudNative-10k-AutoScaling-Test_v13-12-3-ee_2021-07-09_115139/GL-CloudNative-10k-AutoScaling-Test_v13-12-3-ee_2021-07-09_115139_results.txt) | [10K 1 YR Ec2 Compute Savings + 1 YR RDS & ElastiCache RIs](https://calculator.aws/#/estimate?id=5ac2e07a22e01c36ee76b5477c5a046cd1bea792) |
-| [50K](../../administration/reference_architectures/50k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) | [50k Baseline](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Benchmarks/Latest/50k) | [50K Cloud Native Hybrid on EKS](#50k-cloud-native-hybrid-on-eks) | [50K Full Fixed Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/50K/50k-Fixed-Scale-Test_v13-12-3-ee_2021-08-13_172819/50k-Fixed-Scale-Test_v13-12-3-ee_2021-08-13_172819_results.txt)<br /><br />[10K Elastic Auto Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/50K/50k-AutoScale-Test_v13-12-3-ee_2021-08-13_192633/50k-AutoScale-Test_v13-12-3-ee_2021-08-13_192633.txt) | [50K 1 YR Ec2 Compute Savings + 1 YR RDS & ElastiCache RIs](https://calculator.aws/#/estimate?id=b9c9d6ac1d4a7848011d2050cef3120931fb7c22) |
-
-\*Cost calculations for actual implementations are a rough guideline with the following considerations:
-
-- Actual choices about instance types should be based on GPT testing of your configuration.
-- The first year of actual usage will reveal potential savings due to lower than expected usage, especially for ramping migrations where the full loading takes months, so be careful not to commit to savings plans too early or for too long.
-- The cost estimates assume full scale of the Kubernetes cluster nodes 24 x 7 x 365. Savings due to 'idling scale-in' are not considered because they are highly dependent on the usage patterns of the specific implementation.
-- Costs such as GitLab Runners, data egress and storage costs are not included as they are very dependent on the configuration of a specific implementation and on development behaviors (for example, frequency of committing or frequency of builds).
-- These estimates will change over time as GitLab tests and optimizes compute choices.
-
-## Available Infrastructure as Code for GitLab Cloud Native Hybrid
-
-The [GitLab Environment Toolkit (GET)](https://gitlab.com/gitlab-org/gitlab-environment-toolkit/-/blob/main/README.md) is a set of opinionated Terraform
-and Ansible scripts. These scripts help with the deployment of Linux package or Cloud Native Hybrid environments on selected cloud providers and are used
-by GitLab developers for [GitLab Dedicated](../../subscriptions/gitlab_dedicated/index.md) (for example).
-
-You can use the GitLab Environment Toolkit to deploy a Cloud Native Hybrid environment on AWS. However, it's not required and may not support every valid
-permutation. That said, the scripts are presented as-is and you can adapt them accordingly.
-
-### Two and Three Zone High Availability
-
-While GitLab Reference Architectures generally encourage three zone redundancy, AWS Quick Starts and AWS Well Architected consider two zone redundancy as AWS Well Architected. Individual implementations should weigh the costs of two and three zone configurations against their own high availability requirements for a final configuration.
-
-Gitaly Cluster uses a consistency voting system to implement strong consistency between synchronized nodes. Regardless of the number of availability zones implemented, there will always need to be a minimum of three Gitaly and three Praefect nodes in the cluster to avoid voting stalemates cause by an even number of nodes.
-
-### Streamlined Performance Testing of AWS Quick Start Prepared GitLab Instances
-
-A set of performance testing instructions have been abbreviated for testing a GitLab instance prepared using the AWS Quick Start for GitLab Cloud Native Hybrid on EKS. They assume zero familiarity with GitLab Performance Tool. They can be accessed here: [Performance Testing an Instance Prepared using AWS Quick Start for GitLab Cloud Native Hybrid on EKS](https://gitlab.com/guided-explorations/aws/implementation-patterns/getting-started-gitlab-aws-quick-start/-/wikis/Easy-Performance-Testing-for-AWS-Quick-Start-for-GitLab-CNH).
-
-### AWS GovCloud Support for AWS Quick Start for GitLab CNH on EKS
-
-The AWS Quick Start for GitLab Cloud Native Hybrid on EKS has been tested with GovCloud and works with the following restrictions and understandings.
-
-- GovCloud does not have public Route53 hosted zones, so you must set the following parameters:
-
- | CloudFormation Quick Start form field | CloudFormation Parameter | Setting |
- | --------------------------------------------------- | ------------------------ | ------- |
- | **Create Route 53 hosted zone** | CreatedHostedZone | No |
- | **Request AWS Certificate Manager SSL certificate** | CreateSslCertificate | No |
-
-- The Quick Start creates public load balancer IPs, so that you can easily configure your local hosts file to get to the GUI for GitLab when deploying tests. However, you may need to manually alter this if public load balancers are not part of your provisioning plan. We are planning to make non-public load balancers a configuration option issue link: [Short Term: Documentation and/or Automation for private GitLab instance with no internet Ingress](https://github.com/aws-quickstart/quickstart-eks-gitlab/issues/55)
-- As of 2021-08-19, AWS GovCloud has Graviton instances for Amazon RDS PostgreSQL available, but does not for ElastiCache Redis.
-- It is challenging to get the Quick Start template to load in GovCloud from the Standard Quick Start URL, so the generic ones are provided here:
- - [Launch for New VPC in us-gov-east-1](https://us-gov-east-1.console.amazonaws-us-gov.com/cloudformation/home?region=us-gov-east-1#/stacks/quickcreate?templateUrl=https://aws-quickstart.s3.us-east-1.amazonaws.com/quickstart-eks-gitlab/templates/gitlab-entry-new-vpc.template.yaml&stackName=Gitlab-for-EKS-New-VPC)
- - [Launch for New VPC in us-gov-west-1](https://us-gov-west-1.console.amazonaws-us-gov.com/cloudformation/home?region=us-gov-west-1#/stacks/quickcreate?templateUrl=https://aws-quickstart.s3.us-east-1.amazonaws.com/quickstart-eks-gitlab/templates/gitlab-entry-new-vpc.template.yaml&stackName=Gitlab-for-EKS-New-VPC)
-
-## AWS PaaS qualified for all GitLab implementations
-
-For both implementations that used the Linux package or Cloud Native Hybrid implementations, the following GitLab Service roles can be performed by AWS Services (PaaS). Any PaaS solutions that require preconfigured sizing based on the scale of your instance will also be listed in the per-instance size Bill of Materials lists. Those PaaS that do not require specific sizing, are not repeated in the BOM lists (for example, AWS Certification Manager).
-
-These services have been tested with GitLab.
-
-Some services, such as log aggregation and outbound email, are not specified by GitLab, but are noted where they are provided.
-
-| GitLab Services | AWS PaaS (Tested) | Provided by AWS Cloud <br />Native Hybrid Quick Start |
-| ------------------------------------------------------------ | ------------------------------ | ------------------------------------------------------------ |
-| <u>Tested PaaS Mentioned in Reference Architectures</u> | | |
-| **PostgreSQL Database** | Amazon RDS PostgreSQL | Yes. |
-| **Redis Caching** | Redis ElastiCache | Yes. |
-| **Gitaly Cluster (Git Repository Storage)**<br />(Including Praefect and PostgreSQL) | ASG and Instances | Yes - ASG and Instances<br />**Note: Gitaly cannot be put into a Kubernetes Cluster.** |
-| **All GitLab storages besides Git Repository Storage**<br />(Includes Git-LFS which is S3 Compatible) | AWS S3 | Yes |
-| | | |
-| <u>Tested PaaS for Supplemental Services</u> | | |
-| **Front End Load Balancing** | AWS ELB | Yes |
-| **Internal Load Balancing** | AWS ELB | Yes |
-| **Outbound Email Services** | AWS Simple Email Service (SES) | Yes |
-| **Certificate Authority and Management** | AWS Certificate Manager (ACM) | Yes |
-| **DNS** | AWS Route53 (tested) | Yes |
-| **GitLab and Infrastructure Log Aggregation** | AWS CloudWatch Logs | Yes (ContainerInsights Agent for EKS) |
-| **Infrastructure Performance Metrics** | AWS CloudWatch Metrics | Yes |
-| | | |
-| <u>Supplemental Services and Configurations (Tested)</u> | | |
-| **Prometheus for GitLab** | AWS EKS (Cloud Native Only) | Yes |
-| **Grafana for GitLab** | AWS EKS (Cloud Native Only) | Yes |
-| **Administrative Access to GitLab Backend** | Bastion Host in VPC | Yes - HA - Preconfigured for Cluster Management. |
-| **Encryption (In Transit / At Rest)** | AWS KMS | Yes |
-| **Secrets Storage for Provisioning** | AWS Secrets Manager | Yes |
-| **Configuration Data for Provisioning** | AWS Parameter Store | Yes |
-| **AutoScaling Kubernetes** | EKS AutoScaling Agent | Yes |
-
-## GitLab Cloud Native Hybrid on AWS
-
-### 2K Cloud Native Hybrid on EKS
-
-**2K Cloud Native Hybrid on EKS Bill of Materials (BOM)**
-
-**GPT Test Results**
-
-- TBD
-
-**Deploy Now**
-
-Deploy Now links leverage the AWS Quick Start automation and only pre-populate the number of instances and instance types for the Quick Start based on the Bill of Materials below. You must provide appropriate input for all other parameters by following the guidance in the [Quick Start documentation's Deployment steps](https://aws-quickstart.github.io/quickstart-eks-gitlab/#_deployment_steps) section.
-
-- **Deploy Now: AWS Quick Start for 2 AZs**
-- **Deploy Now: AWS Quick Start for 3 AZs**
-
-NOTE:
-On Demand pricing is used in this table for comparisons, but should not be used for budgeting or purchasing AWS resources for a GitLab production instance. Do not use these tables to calculate actual monthly or yearly price estimates. Instead, use the AWS Calculator links in the "GitLab on AWS Compute" table above and customize the estimate with your desired savings plan.
-
-**BOM Total:** = Bill of Materials Total - this is what you use when building this configuration
-
-**Ref Arch Raw Total:** = The totals if the configuration was built on regular VMs with no PaaS services. Configuring on pure VMs generally requires additional VMs for cluster management activities.
-
-**Idle Configuration (Scaled-In)** = can be used to scale in during times of low demand and/or for warm standby Geo instances. Requires configuration, testing, and management of EKS autoscaling to meet your internal requirements.
-
-| Service | Ref Arch Raw (Full Scaled) | AWS BOM | Example Full Scaled Cost<br />(On Demand, US East) |
-| ------------------------------------------------------------ | -------------------------- | ------------------------------------------------------------ | -------------------------------------------------- |
-| Webservice | 12 vCPU,16 GB | | |
-| Sidekiq | 2 vCPU, 8 GB | | |
-| Supporting services such as NGINX, Prometheus, etc | 2 vCPU, 8 GB | | |
-| **GitLab Ref Arch Raw Total K8s Node Capacity** | 16 vCPU, 32 GB | | |
-| One Node for Overhead and Miscellaneous (EKS Cluster AutoScaler, Grafana, Prometheus, etc) | + 8 vCPU, 16 GB | | |
-| **Grand Total w/ Overheads**<br />Minimum hosts = 3 | 24 vCPU, 48 GB | **c5.2xlarge** <br />(8vCPU/16 GB) x 3 nodes<br />24 vCPU, 48 GB | $1.02/hr |
-| **Idle Configuration (Scaled-In)** | 16 vCPU, 32 GB | **c5.2xlarge** x 2 | $0.68/hr |
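-
-For a quick sanity check of the hourly figures in the table above, you can multiply the node count by an assumed on-demand rate. The sketch below is illustrative only; the `rate_per_hour` value is an assumption and must be replaced with current pricing from the AWS Calculator.
-
-```shell
-# Illustrative only: assumed us-east-1 on-demand rate for c5.2xlarge; verify with the AWS Calculator.
-nodes=3
-rate_per_hour=0.34
-echo "$nodes * $rate_per_hour" | bc   # => 1.02 (USD/hr for the EKS node group)
-```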
-
-NOTE:
-If EKS node autoscaling is employed, it is likely that your average loading will run lower than this, especially during non-working hours and weekends.
-
-| Non-Kubernetes Compute | Ref Arch Raw Total | AWS BOM<br />(Directly Usable in AWS Quick Start) | Example Cost<br />US East, 3 AZ | Example Cost<br />US East, 2 AZ |
-| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------- | ------------------------------- | ------------------------------- |
-| **Bastion Host (Quick Start)** | 1 HA instance in ASG | **t2.micro** for prod, **m4.2xlarge** for performance testing | | |
-| **PostgreSQL**<br />AWS Amazon RDS PostgreSQL Nodes Configuration (GPT tested) | 2vCPU, 7.5 GB<br />Tested with Graviton ARM | **db.r6g.large** x 3 nodes <br />(6vCPU, 48 GB) | 3 nodes x $0.26 = $0.78/hr | 3 nodes x $0.26 = $0.78/hr |
-| **Redis** | 1vCPU, 3.75GB<br />(across 12 nodes for Redis Cache, Redis Queues/Shared State, Sentinel Cache, Sentinel Queues/Shared State) | **cache.m6g.large** x 3 nodes<br />(6vCPU, 19 GB) | 3 nodes x $0.15 = $0.45/hr | 2 nodes x $0.15 = $0.30/hr |
-| **<u>Gitaly Cluster</u>** [Details](gitlab_sre_for_aws.md#gitaly-sre-considerations) | [Gitaly & Praefect Must Have an Uneven Node Count for HA](gitlab_sre_for_aws.md#gitaly-and-praefect-elections) | | | |
-| Gitaly Instances (in ASG) | 12 vCPU, 45GB<br />(across 3 nodes) | **m5.xlarge** x 3 nodes<br />(12 vCPU, 48 GB) | $0.192 x 3 = $0.58/hr | $0.192 x 3 = $0.58/hr |
-| | The GitLab Reference Architecture for 2K is not highly available and therefore has a single Gitaly node and no Praefect. AWS Quick Starts must be HA, so the Quick Start implements Praefect from the 3K Reference Architecture to meet that requirement. | | | |
-| Praefect (Instances in ASG with load balancer) | 6 vCPU, 10 GB<br />([across 3 nodes](gitlab_sre_for_aws.md#gitaly-and-praefect-elections)) | **c5.large** x 3 nodes<br />(6 vCPU, 12 GB) | $0.09 x 3 = $0.21/hr | $0.09 x 3 = $0.21/hr |
-| Praefect PostgreSQL(1) (AWS RDS) | 6 vCPU, 5.4 GB<br />([across 3 nodes](gitlab_sre_for_aws.md#gitaly-and-praefect-elections)) | Not applicable; reuses GitLab PostgreSQL | $0 | $0 |
-| Internal Load Balancing Node | 2 vCPU, 1.8 GB | AWS ELB | $0.10/hr | $0.10/hr |
-
-### 3K Cloud Native Hybrid on EKS
-
-**3K Cloud Native Hybrid on EKS Bill of Materials (BOM)**
-
-**GPT Test Results**
-
-- [3K Full Fixed Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/3K/3k-QuickStart-ARM-RDS-Cache_v13-12-3-ee_2021-07-23_124216/3k-QuickStart-ARM-RDS-Cache_v13-12-3-ee_2021-07-23_124216_results.txt)
-
-- [3K AutoScale from 25% GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/3K/3k-QuickStart-AutoScale-ARM-RDS-Cache_v13-12-3-ee_2021-07-23_194200/3k-QuickStart-AutoScale-ARM-RDS-Cache_v13-12-3-ee_2021-07-23_194200_results.txt)
-
- Elastic Auto Scale GPT Test Results start with an idle scaled cluster and then run the standard GPT test to determine whether the EKS Auto Scaler performs well enough to keep up with performance test demands. In general, this is a substantially harder ramp than the scaling required when ramping is driven by standard production workloads.
-
-**Deploy Now**
-
-Deploy Now links leverage the AWS Quick Start automation and only pre-populate the number of instances and instance types for the Quick Start based on the Bill of Materials below. You must provide appropriate input for all other parameters by following the guidance in the [Quick Start documentation's Deployment steps](https://aws-quickstart.github.io/quickstart-eks-gitlab/#_deployment_steps) section.
-
-- **[Deploy Now: AWS Quick Start for 2 AZs](https://us-east-2.console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/quickcreate?templateUrl=https://aws-quickstart.s3.us-east-1.amazonaws.com/quickstart-eks-gitlab/templates/gitlab-entry-new-vpc.template.yaml&stackName=Gitlab-EKS-3K-Users-2AZs&param_NumberOfAZs=2&param_NodeInstanceType=c5.2xlarge&param_NumberOfNodes=3&param_MaxNumberOfNodes=3&param_DBInstanceClass=db.r6g.xlarge&param_CacheNodes=2&param_CacheNodeType=cache.m6g.large&param_GitalyInstanceType=m5.large&param_NumberOfGitalyReplicas=3&param_PraefectInstanceType=c5.large&param_NumberOfPraefectReplicas=3)**
-- **[Deploy Now: AWS Quick Start for 3 AZs](https://us-east-2.console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/quickcreate?templateUrl=https://aws-quickstart.s3.us-east-1.amazonaws.com/quickstart-eks-gitlab/templates/gitlab-entry-new-vpc.template.yaml&stackName=Gitlab-EKS-3K-Users-3AZs&param_NumberOfAZs=3&param_NodeInstanceType=c5.2xlarge&param_NumberOfNodes=3&param_MaxNumberOfNodes=3&param_DBInstanceClass=db.r6g.xlarge&param_CacheNodes=3&param_CacheNodeType=cache.m6g.large&param_GitalyInstanceType=m5.large&param_NumberOfGitalyReplicas=3&param_PraefectInstanceType=c5.large&param_NumberOfPraefectReplicas=3)**
-
-NOTE:
-On Demand pricing is used in this table for comparisons, but should not be used for budgeting or purchasing AWS resources for a GitLab production instance. Do not use these tables to calculate actual monthly or yearly price estimates. Instead, use the AWS Calculator links in the "GitLab on AWS Compute" table above and customize the estimate with your desired savings plan.
-
-**BOM Total:** = Bill of Materials Total - this is what you use when building this configuration
-
-**Ref Arch Raw Total:** = The totals if the configuration was built on regular VMs with no PaaS services. Configuring on pure VMs generally requires additional VMs for cluster management activities.
-
-**Idle Configuration (Scaled-In)** = can be used to scale in during times of low demand and/or for warm standby Geo instances. Requires configuration, testing, and management of EKS autoscaling to meet your internal requirements.
-
-| Service | Ref Arch Raw (Full Scaled) | AWS BOM | Example Full Scaled Cost<br />(On Demand, US East) |
-| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | -------------------------------------------------- |
-| Webservice | [4 pods](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/examples/ref/3k.yaml#L7) x ([5 vCPU & 6.25 GB](../../administration/reference_architectures/3k_users.md#webservice)) = <br />20 vCPU, 25 GB | | |
-| Sidekiq | [8 pods](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/examples/ref/3k.yaml#L24) x ([1 vCPU & 2 GB](../../administration/reference_architectures/3k_users.md#sidekiq)) = <br />8 vCPU, 16 GB | | |
-| Supporting services such as NGINX, Prometheus, etc | [2 allocations](../../administration/reference_architectures/3k_users.md#cluster-topology) x ([2 vCPU and 7.5 GB](../../administration/reference_architectures/3k_users.md#cluster-topology)) = <br />4 vCPU, 15 GB | | |
-| **GitLab Ref Arch Raw Total K8s Node Capacity** | 32 vCPU, 56 GB | | |
-| One Node for Overhead and Miscellaneous (EKS Cluster AutoScaler, Grafana, Prometheus, etc) | + 16 vCPU, 32GB | | |
-| **Grand Total w/ Overheads Full Scale**<br />Minimum hosts = 3 | 48 vCPU, 88 GB | **c5.2xlarge** (8vCPU/16 GB) x 5 nodes<br />40 vCPU, 80 GB<br />[Full Fixed Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/3K/3k-QuickStart-ARM-RDS-Cache_v13-12-3-ee_2021-07-23_124216/3k-QuickStart-ARM-RDS-Cache_v13-12-3-ee_2021-07-23_124216_results.txt) | $1.70/hr |
-| **Possible Idle Configuration (Scaled-In 75% - round up)**<br />Pod autoscaling must also be adjusted to enable a lower idling configuration. | 24 vCPU, 48 GB | c5.2xlarge x 4 | $1.36/hr |
-
-Other combinations of node type and quantity can be used to meet the Grand Total. Due to the CPU and memory requirements of pods, hosts that are overly small may have significant unused capacity.
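-
-As a rough way to evaluate an alternative node type, divide the Grand Total vCPU and memory by the capacity of a single node and take the larger (rounded-up) result. The sketch below assumes c5.4xlarge (16 vCPU, 32 GB) purely as an example; it does not account for pod packing or per-node system overhead, so treat the result as a starting point only.
-
-```shell
-# Grand Total with overheads for the 3K hybrid (from the table above).
-req_vcpu=48; req_gb=88
-# Assumed alternative node type: c5.4xlarge (16 vCPU, 32 GB).
-node_vcpu=16; node_gb=32
-by_cpu=$(( (req_vcpu + node_vcpu - 1) / node_vcpu ))   # ceiling division
-by_mem=$(( (req_gb + node_gb - 1) / node_gb ))
-echo $(( by_cpu > by_mem ? by_cpu : by_mem ))          # => 3 nodes
-```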
-
-NOTE:
-If EKS node autoscaling is employed, it is likely that your average loading will run lower than this, especially during non-working hours and weekends.
-
-| Non-Kubernetes Compute | Ref Arch Raw Total | AWS BOM<br />(Directly Usable in AWS Quick Start) | Example Cost<br />US East, 3 AZ | Example Cost<br />US East, 2 AZ |
-| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------- | ------------------------------- | ------------------------------------------------------------ |
-| **Bastion Host (Quick Start)** | 1 HA instance in ASG | **t2.micro** for prod, **m4.2xlarge** for performance testing | | |
-| **PostgreSQL**<br />Amazon RDS PostgreSQL Nodes Configuration (GPT tested) | 18vCPU, 36 GB <br />(across 9 nodes for PostgreSQL, PgBouncer, Consul)<br />Tested with Graviton ARM | **db.r6g.xlarge** x 3 nodes <br />(12vCPU, 96 GB) | 3 nodes x $0.52 = $1.56/hr | 3 nodes x $0.52 = $1.56/hr |
-| **Redis** | 6vCPU, 18 GB<br />(across 6 nodes for Redis Cache, Sentinel) | **cache.m6g.large** x 3 nodes<br />(6vCPU, 19 GB) | 3 nodes x $0.15 = $0.45/hr | 2 nodes x $0.15 = $0.30/hr |
-| **<u>Gitaly Cluster</u>** [Details](gitlab_sre_for_aws.md#gitaly-sre-considerations) | | | | |
-| Gitaly Instances (in ASG) | 12 vCPU, 45GB<br />([across 3 nodes](gitlab_sre_for_aws.md#gitaly-and-praefect-elections)) | **m5.large** x 3 nodes<br />(12 vCPU, 48 GB) | $0.192 x 3 = $0.58/hr | [Gitaly & Praefect Must Have an Uneven Node Count for HA](gitlab_sre_for_aws.md#gitaly-and-praefect-elections) |
-| Praefect (Instances in ASG with load balancer) | 6 vCPU, 5.4 GB<br />([across 3 nodes](gitlab_sre_for_aws.md#gitaly-and-praefect-elections)) | **c5.large** x 3 nodes<br />(6 vCPU, 12 GB) | $0.09 x 3 = $0.21/hr | [Gitaly & Praefect Must Have an Uneven Node Count for HA](gitlab_sre_for_aws.md#gitaly-and-praefect-elections) |
-| Praefect PostgreSQL(1) (Amazon RDS) | 6 vCPU, 5.4 GB<br />([across 3 nodes](gitlab_sre_for_aws.md#gitaly-and-praefect-elections)) | Not applicable; reuses GitLab PostgreSQL | $0 | |
-| Internal Load Balancing Node | 2 vCPU, 1.8 GB | AWS ELB | $0.10/hr | $0.10/hr |
-
-### 5K Cloud Native Hybrid on EKS
-
-**5K Cloud Native Hybrid on EKS Bill of Materials (BOM)**
-
-**GPT Test Results**
-
-- [5K Full Fixed Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/5K/5k-QuickStart-ARM-RDS-Redis_v13-12-3-ee_2021-07-23_140128/5k-QuickStart-ARM-RDS-Redis_v13-12-3-ee_2021-07-23_140128_results.txt)
-
-- [5K AutoScale from 25% GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/5K/5k-QuickStart-AutoScale-From-25Percent-ARM-RDS-Redis_v13-12-3-ee_2021-07-24_102717/5k-QuickStart-AutoScale-From-25Percent-ARM-RDS-Redis_v13-12-3-ee_2021-07-24_102717_results.txt)
-
- Elastic Auto Scale GPT Test Results start with an idle scaled cluster and then run the standard GPT test to determine whether the EKS Auto Scaler performs well enough to keep up with performance test demands. In general, this is a substantially harder ramp than the scaling required when ramping is driven by standard production workloads.
-
-**Deploy Now**
-
-Deploy Now links leverage the AWS Quick Start automation and only prepopulate the number of instances and instance types for the Quick Start based on the Bill of Materials below. You must provide appropriate input for all other parameters by following the guidance in the [Quick Start documentation's Deployment steps](https://aws-quickstart.github.io/quickstart-eks-gitlab/#_deployment_steps) section.
-
-- **[Deploy Now: AWS Quick Start for 2 AZs](https://us-east-2.console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/quickcreate?templateUrl=https://aws-quickstart.s3.us-east-1.amazonaws.com/quickstart-eks-gitlab/templates/gitlab-entry-new-vpc.template.yaml&stackName=Gitlab-EKS-5K-Users-2AZs&param_NumberOfAZs=2&param_NodeInstanceType=c5.2xlarge&param_NumberOfNodes=5&param_MaxNumberOfNodes=5&param_DBInstanceClass=db.r6g.2xlarge&param_CacheNodes=2&param_CacheNodeType=cache.m6g.xlarge&param_GitalyInstanceType=m5.2xlarge&param_NumberOfGitalyReplicas=2&param_PraefectInstanceType=c5.large&param_NumberOfPraefectReplicas=2)**
-- **[Deploy Now: AWS Quick Start for 3 AZs](https://us-east-2.console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/quickcreate?templateUrl=https://aws-quickstart.s3.us-east-1.amazonaws.com/quickstart-eks-gitlab/templates/gitlab-entry-new-vpc.template.yaml&stackName=Gitlab-EKS-5K-Users-3AZs&param_NumberOfAZs=3&param_NodeInstanceType=c5.2xlarge&param_NumberOfNodes=5&param_MaxNumberOfNodes=5&param_DBInstanceClass=db.r6g.2xlarge&param_CacheNodes=3&param_CacheNodeType=cache.m6g.xlarge&param_GitalyInstanceType=m5.2xlarge&param_NumberOfGitalyReplicas=3&param_PraefectInstanceType=c5.large&param_NumberOfPraefectReplicas=3)**
-
-NOTE:
-On Demand pricing is used in this table for comparisons, but should not be used for budgeting or purchasing AWS resources for a GitLab production instance. Do not use these tables to calculate actual monthly or yearly price estimates. Instead, use the AWS Calculator links in the "GitLab on AWS Compute" table above and customize the estimate with your desired savings plan.
-
-**BOM Total:** = Bill of Materials Total - this is what you use when building this configuration
-
-**Ref Arch Raw Total:** = The totals if the configuration was built on regular VMs with no PaaS services. Configuring on pure VMs generally requires additional VMs for cluster management activities.
-
-**Idle Configuration (Scaled-In)** = can be used to scale in during times of low demand and/or for warm standby Geo instances. Requires configuration, testing, and management of EKS autoscaling to meet your internal requirements.
-
-| Service | Ref Arch Raw (Full Scaled) | AWS BOM | Example Full Scaled Cost<br />(On Demand, US East) |
-| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | -------------------------------------------------- |
-| Webservice | [10 pods](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/examples/ref/5k.yaml#L7) x ([5 vCPU & 6.25GB](../../administration/reference_architectures/5k_users.md#webservice)) = <br />50 vCPU, 62.5 GB | | |
-| Sidekiq | [8 pods](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/examples/ref/5k.yaml#L24) x ([1 vCPU & 2 GB](../../administration/reference_architectures/5k_users.md#sidekiq)) = <br />8 vCPU, 16 GB | | |
-| Supporting services such as NGINX, Prometheus, etc | [2 allocations](../../administration/reference_architectures/5k_users.md#cluster-topology) x ([2 vCPU and 7.5 GB](../../administration/reference_architectures/5k_users.md#cluster-topology)) = <br />4 vCPU, 15 GB | | |
-| **GitLab Ref Arch Raw Total K8s Node Capacity** | 62 vCPU, 96.5 GB | | |
-| One Node for Quick Start Overhead and Miscellaneous (EKS Cluster AutoScaler, Grafana, Prometheus, etc) | + 8 vCPU, 16 GB | | |
-| **Grand Total w/ Overheads Full Scale**<br />Minimum hosts = 3 | 70 vCPU, 112.5 GB | **c5.2xlarge** (8vCPU/16 GB) x 9 nodes<br />72 vCPU, 144 GB<br />[Full Fixed Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/5K/5k-QuickStart-ARM-RDS-Redis_v13-12-3-ee_2021-07-23_140128/5k-QuickStart-ARM-RDS-Redis_v13-12-3-ee_2021-07-23_140128_results.txt) | $2.38/hr |
-| **Possible Idle Configuration (Scaled-In 75% - round up)**<br />Pod autoscaling must also be adjusted to enable a lower idling configuration. | 24 vCPU, 48 GB | c5.2xlarge x 7 | $1.85/hr |
-
-Other combinations of node type and quantity can be used to meet the Grand Total. Due to the CPU and memory requirements of pods, hosts that are overly small may have significant unused capacity.
-
-NOTE:
-If EKS node autoscaling is employed, it is likely that your average loading will run lower than this, especially during non-working hours and weekends.
-
-| Non-Kubernetes Compute | Ref Arch Raw Total | AWS BOM<br />(Directly Usable in AWS Quick Start) | Example Cost<br />US East, 3 AZ | Example Cost<br />US East, 2 AZ |
-| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------- | ------------------------------- | ------------------------------------------------------------ |
-| **Bastion Host (Quick Start)** | 1 HA instance in ASG | **t2.micro** for prod, **m4.2xlarge** for performance testing | | |
-| **PostgreSQL**<br />Amazon RDS PostgreSQL Nodes Configuration (GPT tested) | 21vCPU, 51 GB <br />(across 9 nodes for PostgreSQL, PgBouncer, Consul)<br />Tested with Graviton ARM | **db.r6g.2xlarge** x 3 nodes <br />(24vCPU, 192 GB) | 3 nodes x $1.04 = $3.12/hr | 3 nodes x $1.04 = $3.12/hr |
-| **Redis** | 9vCPU, 27GB<br />(across 6 nodes for Redis, Sentinel) | **cache.m6g.xlarge** x 3 nodes<br />(12vCPU, 39GB) | 3 nodes x $0.30 = $0.90/hr | 2 nodes x $0.30 = $0.60/hr |
-| **<u>Gitaly Cluster</u>** [Details](gitlab_sre_for_aws.md#gitaly-sre-considerations) | | | | |
-| Gitaly Instances (in ASG) | 24 vCPU, 90GB<br />([across 3 nodes](gitlab_sre_for_aws.md#gitaly-and-praefect-elections)) | **m5.2xlarge** x 3 nodes<br />(24 vCPU, 96GB) | $0.384 x 3 = $1.15/hr | [Gitaly & Praefect Must Have an Uneven Node Count for HA](gitlab_sre_for_aws.md#gitaly-and-praefect-elections) |
-| Praefect (Instances in ASG with load balancer) | 6 vCPU, 5.4 GB<br />([across 3 nodes](gitlab_sre_for_aws.md#gitaly-and-praefect-elections)) | **c5.large** x 3 nodes<br />(6 vCPU, 12 GB) | $0.09 x 3 = $0.21/hr | [Gitaly & Praefect Must Have an Uneven Node Count for HA](gitlab_sre_for_aws.md#gitaly-and-praefect-elections) |
-| Praefect PostgreSQL(1) (Amazon RDS) | 6 vCPU, 5.4 GB<br />([across 3 nodes](gitlab_sre_for_aws.md#gitaly-and-praefect-elections)) | Not applicable; reuses GitLab PostgreSQL | $0 | |
-| Internal Load Balancing Node | 2 vCPU, 1.8 GB | AWS ELB | $0.10/hr | $0.10/hr |
-
-### 10K Cloud Native Hybrid on EKS
-
-**10K Cloud Native Hybrid on EKS Bill of Materials (BOM)**
-
-**GPT Test Results**
-
-- [10K Full Fixed Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/10K/GL-CloudNative-10k-RDS-Graviton_v13-12-3-ee_2021-07-08_194647/GL-CloudNative-10k-RDS-Graviton_v13-12-3-ee_2021-07-08_194647_results.txt)
-
-- [10K Elastic Auto Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/10K/GL-CloudNative-10k-AutoScaling-Test_v13-12-3-ee_2021-07-09_115139/GL-CloudNative-10k-AutoScaling-Test_v13-12-3-ee_2021-07-09_115139_results.txt)
-
- Elastic Auto Scale GPT Test Results start with an idle scaled cluster and then run the standard GPT test to determine whether the EKS Auto Scaler performs well enough to keep up with performance test demands. In general, this is a substantially harder ramp than the scaling required when ramping is driven by standard production workloads.
-
-**Deploy Now**
-
-Deploy Now links leverage the AWS Quick Start automation and only prepopulate the number of instances and instance types for the Quick Start based on the Bill of Materials below. You must provide appropriate input for all other parameters by following the guidance in the [Quick Start documentation's Deployment steps](https://aws-quickstart.github.io/quickstart-eks-gitlab/#_deployment_steps) section.
-
-- **[Deploy Now: AWS Quick Start for 3 AZs](https://us-east-2.console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/quickcreate?templateUrl=https://aws-quickstart.s3.us-east-1.amazonaws.com/quickstart-eks-gitlab/templates/gitlab-entry-new-vpc.template.yaml&stackName=Gitlab-EKS-10K-Users-3AZs&param_NumberOfAZs=3&param_NodeInstanceType=c5.4xlarge&param_NumberOfNodes=9&param_MaxNumberOfNodes=9&param_DBInstanceClass=db.r6g.2xlarge&param_CacheNodes=3&param_CacheNodeType=cache.m6g.2xlarge&param_GitalyInstanceType=m5.4xlarge&param_NumberOfGitalyReplicas=3&param_PraefectInstanceType=c5.large&param_NumberOfPraefectReplicas=3)**
-
-NOTE:
-On Demand pricing is used in this table for comparisons, but should not be used for budgeting or purchasing AWS resources for a GitLab production instance. Do not use these tables to calculate actual monthly or yearly price estimates. Instead, use the AWS Calculator links in the "GitLab on AWS Compute" table above and customize the estimate with your desired savings plan.
-
-**BOM Total:** = Bill of Materials Total - this is what you use when building this configuration
-
-**Ref Arch Raw Total:** = The totals if the configuration was built on regular VMs with no PaaS services. Configuring on pure VMs generally requires additional VMs for cluster management activities.
-
-**Idle Configuration (Scaled-In)** = can be used to scale in during times of low demand and/or for warm standby Geo instances. Requires configuration, testing, and management of EKS autoscaling to meet your internal requirements.
-
-| Service | Ref Arch Raw (Full Scaled) | AWS BOM<br />(Directly Usable in AWS Quick Start) | Example Full Scaled Cost<br />(On Demand, US East) |
-| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | -------------------------------------------------- |
-| Webservice | [20 pods](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/examples/ref/10k.yaml#L7) x ([5 vCPU & 6.25 GB](../../administration/reference_architectures/10k_users.md#webservice)) = <br />100 vCPU, 125 GB | | |
-| Sidekiq | [14 pods](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/examples/ref/10k.yaml#L24) x ([1 vCPU & 2 GB](../../administration/reference_architectures/10k_users.md#sidekiq))<br />14 vCPU, 28 GB | | |
-| Supporting services such as NGINX, Prometheus, etc | [2 allocations](../../administration/reference_architectures/10k_users.md#cluster-topology) x ([2 vCPU and 7.5 GB](../../administration/reference_architectures/10k_users.md#cluster-topology))<br />4 vCPU, 15 GB | | |
-| **GitLab Ref Arch Raw Total K8s Node Capacity** | 128 vCPU, 158 GB | | |
-| One Node for Overhead and Miscellaneous (EKS Cluster AutoScaler, Grafana, Prometheus, etc) | + 16 vCPU, 32GB | | |
-| **Grand Total w/ Overheads Fully Scaled**<br />Minimum hosts = 3 | 142 vCPU, 190 GB | **c5.4xlarge** (16vCPU/32GB) x 9 nodes<br />144 vCPU, 288GB<br /><br />[Full Fixed Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/10K/GL-CloudNative-10k-RDS-Graviton_v13-12-3-ee_2021-07-08_194647/GL-CloudNative-10k-RDS-Graviton_v13-12-3-ee_2021-07-08_194647_results.txt) | $6.12/hr |
-| **Possible Idle Configuration (Scaled-In 75% - round up)**<br />Pod autoscaling must also be adjusted to enable a lower idling configuration. | 40 vCPU, 80 GB | c5.4xlarge x 7<br /><br />[Elastic Auto Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/10K/GL-CloudNative-10k-AutoScaling-Test_v13-12-3-ee_2021-07-09_115139/GL-CloudNative-10k-AutoScaling-Test_v13-12-3-ee_2021-07-09_115139_results.txt) | $4.76/hr |
-
-Other combinations of node type and quantity can be used to meet the Grand Total. Due to the CPU and memory requirements of pods, hosts that are overly small may have significant unused capacity.
-
-NOTE:
-If EKS node autoscaling is employed, it is likely that your average loading will run lower than this, especially during non-working hours and weekends.
-
-| Non-Kubernetes Compute | Ref Arch Raw Total | AWS BOM | Example Cost<br />US East, 3 AZ | Example Cost<br />US East, 2 AZ |
-| ------------------------------------------------------------ | ------------------------------ | ------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| **Bastion Host (Quick Start)** | 1 HA instance in ASG | **t2.micro** for prod, **m4.2xlarge** for performance testing | | |
-| **PostgreSQL**<br />Amazon RDS PostgreSQL Nodes Configuration (GPT tested) | 36vCPU, 102 GB <br />(across 9 nodes for PostgreSQL, PgBouncer, Consul) | **db.r6g.2xlarge** x 3 nodes <br />(24vCPU, 192 GB) | 3 nodes x $1.04 = $3.12/hr | 3 nodes x $1.04 = $3.12/hr |
-| **Redis** | 30vCPU, 114 GB<br />(across 12 nodes for Redis Cache, Redis Queues/Shared State, Sentinel Cache, Sentinel Queues/Shared State) | **cache.m5.2xlarge** x 3 nodes<br />(24vCPU, 78GB) | 3 nodes x $0.62 = $1.86/hr | 2 nodes x $0.62 = $1.24/hr |
-| **<u>Gitaly Cluster</u>** [Details](gitlab_sre_for_aws.md#gitaly-sre-considerations) | | | | |
-| Gitaly Instances (in ASG) | 48 vCPU, 180 GB<br />([across 3 nodes](gitlab_sre_for_aws.md#gitaly-and-praefect-elections)) | **m5.4xlarge** x 3 nodes<br />(48 vCPU, 180 GB) | $0.77 x 3 = $2.31/hr | [Gitaly & Praefect Must Have an Uneven Node Count for HA](gitlab_sre_for_aws.md#gitaly-and-praefect-elections) |
-| Praefect (Instances in ASG with load balancer) | 6 vCPU, 5.4 GB<br />([across 3 nodes](gitlab_sre_for_aws.md#gitaly-and-praefect-elections)) | **c5.large** x 3 nodes<br />(6 vCPU, 12 GB) | $0.09 x 3 = $0.21/hr | [Gitaly & Praefect Must Have an Uneven Node Count for HA](gitlab_sre_for_aws.md#gitaly-and-praefect-elections) |
-| Praefect PostgreSQL(1) (Amazon RDS) | 6 vCPU, 5.4 GB<br />([across 3 nodes](gitlab_sre_for_aws.md#gitaly-and-praefect-elections)) | Not applicable; reuses GitLab PostgreSQL | $0 | |
-| Internal Load Balancing Node | 2 vCPU, 1.8 GB | AWS ELB | $0.10/hr | $0.10/hr |
-
-### 50K Cloud Native Hybrid on EKS
-
-**50K Cloud Native Hybrid on EKS Bill of Materials (BOM)**
-
-**GPT Test Results**
-
-- [50K Full Fixed Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/50K/50k-Fixed-Scale-Test_v13-12-3-ee_2021-08-13_172819/50k-Fixed-Scale-Test_v13-12-3-ee_2021-08-13_172819_results.txt)
-
-- [50K Elastic Auto Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/50K/50k-AutoScale-Test_v13-12-3-ee_2021-08-13_192633/50k-AutoScale-Test_v13-12-3-ee_2021-08-13_192633.txt)
-
- Elastic Auto Scale GPT Test Results start with an idle scaled cluster and then run the standard GPT test to determine whether the EKS Auto Scaler performs well enough to keep up with performance test demands. In general, this is a substantially harder ramp than the scaling required when ramping is driven by standard production workloads.
-
-**Deploy Now**
-
-Deploy Now links leverage the AWS Quick Start automation and only prepopulate the number of instances and instance types for the Quick Start based on the Bill of Materials below. You must provide appropriate input for all other parameters by following the guidance in the [Quick Start documentation's Deployment steps](https://aws-quickstart.github.io/quickstart-eks-gitlab/#_deployment_steps) section.
-
-- **[Deploy Now: AWS Quick Start for 3 AZs - 1/4 Scale EKS](https://us-east-2.console.aws.amazon.com/cloudformation/home?region=us-east-2#/stacks/quickcreate?templateUrl=https://aws-quickstart.s3.us-east-1.amazonaws.com/quickstart-eks-gitlab/templates/gitlab-entry-new-vpc.template.yaml&stackName=Gitlab-EKS-50K-Users-3AZs&param_NumberOfAZs=3&param_NodeInstanceType=c5.4xlarge&param_NumberOfNodes=7&param_MaxNumberOfNodes=9&param_DBInstanceClass=db.r6g.8xlarge&param_CacheNodes=3&param_CacheNodeType=cache.m6g.2xlarge&param_GitalyInstanceType=m5.16xlarge&param_NumberOfGitalyReplicas=3&param_PraefectInstanceType=c5.xlarge&param_NumberOfPraefectReplicas=3)**
-
-NOTE:
-On Demand pricing is used in this table for comparisons, but should not be used for budgeting or purchasing AWS resources for a GitLab production instance. Do not use these tables to calculate actual monthly or yearly price estimates. Instead, use the AWS Calculator links in the "GitLab on AWS Compute" table above and customize the estimate with your desired savings plan.
-
-**BOM Total:** = Bill of Materials Total - this is what you use when building this configuration
-
-**Ref Arch Raw Total:** = The totals if the configuration was built on regular VMs with no PaaS services. Configuring on pure VMs generally requires additional VMs for cluster management activities.
-
-**Idle Configuration (Scaled-In)** = can be used to scale in during times of low demand and/or for warm standby Geo instances. Requires configuration, testing, and management of EKS autoscaling to meet your internal requirements.
-
-| Service | Ref Arch Raw (Full Scaled) | AWS BOM<br />(Directly Usable in AWS Quick Start) | Example Full Scaled Cost<br />(On Demand, US East) |
-| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | -------------------------------------------------- |
-| Webservice | [80 pods](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/examples/ref/10k.yaml#L7) x ([5 vCPU & 6.25 GB](../../administration/reference_architectures/10k_users.md#webservice)) = <br />400 vCPU, 500 GB | | |
-| Sidekiq | [14 pods](https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/examples/ref/10k.yaml#L24) x ([1 vCPU & 2 GB](../../administration/reference_architectures/10k_users.md#sidekiq))<br />14 vCPU, 28 GB | | |
-| Supporting services such as NGINX, Prometheus, etc | [2 allocations](../../administration/reference_architectures/10k_users.md#cluster-topology) x ([2 vCPU and 7.5 GB](../../administration/reference_architectures/10k_users.md#cluster-topology))<br />4 vCPU, 15 GB | | |
-| **GitLab Ref Arch Raw Total K8s Node Capacity** | 428 vCPU, 533 GB | | |
-| One Node for Overhead and Miscellaneous (EKS Cluster AutoScaler, Grafana, Prometheus, etc) | + 16 vCPU, 32GB | | |
-| **Grand Total w/ Overheads Fully Scaled**<br />Minimum hosts = 3 | 444 vCPU, 565 GB | **c5.4xlarge** (16vCPU/32GB) x 28 nodes<br />448 vCPU, 896GB<br /><br />[Full Fixed Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/50K/50k-Fixed-Scale-Test_v13-12-3-ee_2021-08-13_172819/50k-Fixed-Scale-Test_v13-12-3-ee_2021-08-13_172819_results.txt) | $19.04/hr |
-| **Possible Idle Configuration (Scaled-In 75% - round up)**<br />Pod autoscaling must also be adjusted to enable a lower idling configuration. | 40 vCPU, 80 GB | c5.4xlarge x 10<br /><br />[Elastic Auto Scale GPT Test Results](https://gitlab.com/guided-explorations/aws/implementation-patterns/gitlab-cloud-native-hybrid-on-eks/-/blob/master/gitlab-alliances-testing/50K/50k-AutoScale-Test_v13-12-3-ee_2021-08-13_192633/50k-AutoScale-Test_v13-12-3-ee_2021-08-13_192633.txt) | $6.80/hr |
-
-Other combinations of node type and quantity can be used to meet the Grand Total. Due to the CPU and memory requirements of pods, hosts that are overly small may have significant unused capacity.
-
-NOTE:
-If EKS node autoscaling is employed, it is likely that your average loading will run lower than this, especially during non-working hours and weekends.
-
-| Non-Kubernetes Compute | Ref Arch Raw Total | AWS BOM | Example Cost<br />US East, 3 AZ | Example Cost<br />US East, 2 AZ |
-| ------------------------------------------------------------ | ------------------------------------------------------------ | --------------------------------------------------------- | ------------------------------- | ------------------------------------------------------------ |
-| **Bastion Host (Quick Start)** | 1 HA instance in ASG | **t2.micro** for prod, **m4.2xlarge** for performance testing | | |
-| **PostgreSQL**<br />Amazon RDS PostgreSQL Nodes Configuration (GPT tested) | 96vCPU, 360 GB <br />(across 3 nodes) | **db.r6g.8xlarge** x 3 nodes <br />(96vCPU, 768 GB total) | 3 nodes x $4.15 = $12.45/hr | 3 nodes x $4.15 = $12.45/hr |
-| **Redis** | 30vCPU, 114 GB<br />(across 12 nodes for Redis Cache, Redis Queues/Shared State, Sentinel Cache, Sentinel Queues/Shared State) | **cache.m6g.2xlarge** x 3 nodes<br />(24vCPU, 78GB total) | 3 nodes x $0.60 = $1.80/hr | 2 nodes x $0.60 = $1.20/hr |
-| **<u>Gitaly Cluster</u>** [Details](gitlab_sre_for_aws.md#gitaly-sre-considerations) | | | | |
-| Gitaly Instances (in ASG) | 64 vCPU, 240GB x [3 nodes](gitlab_sre_for_aws.md#gitaly-and-praefect-elections) | **m5.16xlarge** x 3 nodes<br />(64 vCPU, 256 GB each) | $3.07 x 3 = $9.21/hr | [Gitaly & Praefect Must Have an Uneven Node Count for HA](gitlab_sre_for_aws.md#gitaly-and-praefect-elections) |
-| Praefect (Instances in ASG with load balancer) | 4 vCPU, 3.6 GB x [3 nodes](gitlab_sre_for_aws.md#gitaly-and-praefect-elections) | **c5.xlarge** x 3 nodes<br />(4 vCPU, 8 GB each) | $0.17 x 3 = $0.51/hr | [Gitaly & Praefect Must Have an Uneven Node Count for HA](gitlab_sre_for_aws.md#gitaly-and-praefect-elections) |
-| Praefect PostgreSQL(1) (AWS RDS) | 2 vCPU, 1.8 GB x [3 nodes](gitlab_sre_for_aws.md#gitaly-and-praefect-elections) | Not applicable; reuses GitLab PostgreSQL | $0 | |
-| Internal Load Balancing Node | 2 vCPU, 1.8 GB | AWS ELB | $0.10/hr | $0.10/hr |
-
-## Helpful Resources
-
-- [Architecting Kubernetes clusters — choosing a worker node size](https://learnk8s.io/kubernetes-node-size)
-
-DISCLAIMER:
-This page contains information related to upcoming products, features, and functionality.
-It is important to note that the information presented is for informational purposes only.
-Please do not rely on this information for purchasing or planning purposes.
-As with all projects, the items mentioned on this page are subject to change or delay.
-The development, release, and timing of any products, features, or functionality remain at the
-sole discretion of GitLab Inc.
+<!-- This redirect file can be deleted after <YYYY-MM-DD>. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html --> \ No newline at end of file
diff --git a/doc/install/aws/gitlab_sre_for_aws.md b/doc/install/aws/gitlab_sre_for_aws.md
index 5f3fe9fefac..222bcbc1ed8 100644
--- a/doc/install/aws/gitlab_sre_for_aws.md
+++ b/doc/install/aws/gitlab_sre_for_aws.md
@@ -1,95 +1,11 @@
---
-stage: Systems
-group: Distribution
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
-description: Doing SRE for GitLab instances and runners on AWS.
+redirect_to: '../../solutions/cloud/aws/gitaly_sre_for_aws.md'
+remove_date: '2024-03-31'
---
-# GitLab Site Reliability Engineering for AWS **(FREE SELF)**
+This document was moved to [Solutions](../../solutions/cloud/aws/gitaly_sre_for_aws.md).
-## Gitaly SRE considerations
-
-Gitaly is an embedded service for Git Repository Storage. Gitaly and Gitaly Cluster have been engineered by GitLab to overcome fundamental challenges with horizontal scaling of the open source Git binaries that must be used on the service side of GitLab. Here is in-depth technical reading on the topic:
-
-### Why Gitaly was built
-
-If you would like to understand the underlying rationale for why GitLab had to invest in creating Gitaly, read the following minimal list of topics:
-
-- [Git characteristics that make horizontal scaling difficult](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/DESIGN.md#git-characteristics-that-make-horizontal-scaling-difficult)
-- [Git architectural characteristics and assumptions](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/DESIGN.md#git-architectural-characteristics-and-assumptions)
-- [Affects on horizontal compute architecture](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/DESIGN.md#affects-on-horizontal-compute-architecture)
-- [Evidence to back building a new horizontal layer to scale Git](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/DESIGN.md#evidence-to-back-building-a-new-horizontal-layer-to-scale-git)
-
-### Gitaly and Praefect elections
-
-As part of Gitaly cluster consistency, Praefect nodes must occasionally vote on what data copy is the most accurate. This requires an uneven number of Praefect nodes to avoid stalemates. This means that for HA, Gitaly and Praefect require a minimum of three nodes.
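-
-The arithmetic behind this is simple majority voting: a vote needs more than half of the nodes to agree. The sketch below only illustrates that majority rule; it is not Praefect code.
-
-```shell
-# Majority quorum = floor(nodes / 2) + 1
-for nodes in 2 3 4 5; do
-  echo "$nodes nodes -> quorum of $(( nodes / 2 + 1 ))"
-done
-# With 2 nodes the quorum is 2, so any single failure blocks voting.
-# With 3 nodes the quorum is 2, so the cluster tolerates one failure.
-```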
-
-### Gitaly performance monitoring
-
-Complete performance metrics should be collected for Gitaly instances to identify bottlenecks, because bottlenecks can stem from disk I/O, network I/O, or memory.
-
-### Gitaly performance guidelines
-
-Gitaly functions as the primary Git Repository Storage in GitLab. However, it's not a streaming file server. It also does a lot of demanding computing work, such as preparing and caching Git packfiles, which informs some of the performance recommendations below.
-
-NOTE:
-All recommendations are for production configurations, including performance testing. For test configurations, like training or functional testing, you can use less expensive options. However, you should adjust or rebuild if performance is an issue.
-
-#### Overall recommendations
-
-- Production-grade Gitaly must be implemented on instance compute due to the characteristics described above and below.
-- Never use [burstable instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances.html) (such as `t2`, `t3`, `t4g`) for Gitaly.
-- Always use at least the [AWS Nitro generation of instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances) to ensure many of the below concerns are automatically handled.
-- Use Amazon Linux 2 to ensure that all [AWS oriented hardware and OS optimizations](https://aws.amazon.com/amazon-linux-2/faqs/) are maximized without additional configuration or SRE management.
-
-#### CPU and memory recommendations
-
-- The general GitLab Gitaly node recommendations for CPU and Memory assume relatively even loading across repositories. GitLab Performance Tool (GPT) testing of any non-characteristic repositories and/or SRE monitoring of Gitaly metrics may inform when to choose memory and/or CPU higher than general recommendations.
-
-**To accommodate:**
-
-- Git packfile operations are memory and CPU intensive.
-- If repository commit traffic is dense, large, or very frequent, then more CPU and Memory are required to handle the load. Patterns such as storing binaries and/or busy or large monorepos are examples that can cause high loading.
-
-#### Disk I/O recommendations
-
-- Use only SSD storage and the [class of Elastic Block Store (EBS) storage](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html) that suits your durability and speed requirements.
-- When not using provisioned EBS I/O, EBS volume size determines the I/O level, so provisioning volumes that are much larger than needed can be the least expensive way to improve EBS I/O.
-- If Gitaly performance monitoring shows signs of disk stress, then one of the provisioned IOPS levels can be chosen. EBS IOPS levels also have enhanced durability, which may be appealing for some implementations aside from performance considerations.
-
-**To accommodate:**
-
-- Gitaly storage is expected to be local (not NFS of any type, including EFS).
-- Gitaly servers also need disk space for building and caching Git packfiles. This is above and beyond the permanent storage of your Git Repositories.
-- Git packfiles are cached in Gitaly. Creation of packfiles in temporary disk benefits from fast disk, and disk caching of packfiles benefits from ample disk space.
-
-#### Network I/O recommendations
-
-- Use only instance types [from the list of ones that support Elastic Network Adapter (ENA) advanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#instance-type-summary-table) to ensure that cluster replication latency is not due to instance level network I/O bottlenecks.
-- Choose instance sizes with more than 10 Gbps of network bandwidth only if needed, and only after monitoring and/or stress testing has proven a node-level network bottleneck.
-
-**To accommodate:**
-
-- Gitaly nodes do the main work of streaming repositories for push and pull operations (to developer endpoints and to CI/CD).
-- Gitaly servers need reasonably low latency between cluster nodes and with Praefect services in order for the cluster to maintain operational and data integrity.
-- Gitaly nodes should be selected with network bottleneck avoidance as a primary consideration.
-- Gitaly nodes should be monitored for network saturation.
-- Not all networking issues can be solved through optimizing the node level networking:
- - Gitaly cluster node replication depends on all networking between nodes.
- - Gitaly networking performance to pull and push endpoints depends on all networking in between.
-
-### AWS Gitaly backup
-
-Due to the nature of how Praefect tracks the replication metadata of Gitaly disk information, the best backup method is [the official backup and restore Rake tasks](../../administration/backup_restore/index.md).
-
-### AWS Gitaly recovery
-
-Gitaly Cluster does not support snapshot backups because these can cause issues where the Praefect database becomes out of sync with the disk storage. Due to the nature of how Praefect rebuilds the replication metadata of Gitaly disk information during a restore, the best recovery method is [the official backup and restore Rake tasks](../../administration/backup_restore/index.md).
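-
-On a Linux package installation, the official Rake tasks are invoked from a GitLab Rails node. A minimal sketch, assuming default backup settings:
-
-```shell
-# Create a backup (repository data is streamed from Gitaly/Praefect as part of the task).
-sudo gitlab-backup create
-
-# Restore from a specific backup archive; <timestamp_and_version> is a placeholder
-# for the backup file name prefix, for example 1693291013_2023_08_29_16.3.0.
-sudo gitlab-backup restore BACKUP=<timestamp_and_version>
-```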
-
-### Gitaly HA in EKS quick start
-
-The [AWS GitLab Cloud Native Hybrid on EKS Quick Start](gitlab_hybrid_on_aws.md#available-infrastructure-as-code-for-gitlab-cloud-native-hybrid) for GitLab Cloud Native implements Gitaly as a multi-zone, self-healing infrastructure. It has specific code for reestablishing a Gitaly node when one fails, including AZ failure.
-
-### Gitaly long term management
-
-Gitaly node disk sizes must be monitored and increased to accommodate Git repository growth and Gitaly temporary and caching storage needs. The storage configuration on all nodes should be kept identical.
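-
-As a quick manual spot check of repository storage utilization, you can inspect the Gitaly storage path on each node. The path below assumes the Linux package default; adjust it if your `git_data_dirs` configuration differs.
-
-```shell
-# Default Gitaly storage location for Linux package installations.
-df -h /var/opt/gitlab/git-data
-```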
+<!-- This redirect file can be deleted after <YYYY-MM-DD>. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html --> \ No newline at end of file
diff --git a/doc/install/aws/index.md b/doc/install/aws/index.md
index febe54a8bb6..2c1f2529426 100644
--- a/doc/install/aws/index.md
+++ b/doc/install/aws/index.md
@@ -3,167 +3,857 @@ stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Read through the GitLab installation methods.
-type: index
---
-# AWS implementation patterns **(FREE SELF)**
+{::options parse_block_html="true" /}
-GitLab [Reference Architectures](../../administration/reference_architectures/index.md) give qualified and tested guidance on the recommended ways GitLab can be configured to meet the performance requirements of various workloads. Reference Architectures are purpose-designed to be non-implementation specific so they can be extrapolated to as many environments as possible. This generally means they have a highly-granular "machine" to "server role" specification and focus on system elements that impact performance. This is what enables Reference Architectures to be adaptable to the broadest number of supported implementations.
+# Installing a GitLab POC on Amazon Web Services (AWS) **(FREE SELF)**
-Implementation patterns are built on the foundational information and testing done for Reference Architectures and allow architects and implementers at GitLab, GitLab Customers, and GitLab Partners to build out deployments with less experimentation and a higher degree of confidence that the results perform as expected. A more thorough discussion of implementation patterns is below in [Additional details on implementation patterns](#additional-details-on-implementation-patterns).
+This page offers a walkthrough of a common configuration for GitLab on AWS using the official Linux package. You should customize it to accommodate your needs.
-## AWS Implementation patterns information
+NOTE:
+For organizations with 1,000 users or less, the recommended AWS installation method is to launch an EC2 single box [Linux package installation](https://about.gitlab.com/install/) and implement a snapshot strategy for backing up the data. See the [1,000 user reference architecture](../../administration/reference_architectures/1k_users.md) for more information.
+
+## Getting started for production-grade GitLab
+
+NOTE:
+This document is an installation guide for a proof of concept instance. It is not a reference architecture and it does not result in a highly available configuration.
+
+Following this guide exactly results in a proof of concept instance that roughly equates to a **scaled down** version of a **two availability zone implementation** of the **Non-HA** [2000 User Reference Architecture](../../administration/reference_architectures/2k_users.md). The 2K reference architecture is not HA because it is primarily intended to provide some scaling while keeping costs and complexity low. The [3000 User Reference Architecture](../../administration/reference_architectures/3k_users.md) is the smallest size that is fully highly available. It has additional service roles to achieve HA; most notably, it uses Gitaly Cluster to achieve HA for Git repository storage and specifies triple redundancy.
+
+GitLab maintains and tests two main types of Reference Architectures. The **Linux package architectures** are implemented on instance compute while **Cloud Native Hybrid architectures** maximize the use of a Kubernetes cluster. Cloud Native Hybrid reference architecture specifications are addendum sections to the Reference Architecture size pages that start by describing the Linux package architecture. For example, the 3000 User Cloud Native Reference Architecture is in the subsection titled [Cloud Native Hybrid reference architecture with Helm Charts (alternative)](../../administration/reference_architectures/3k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) in the 3000 User Reference Architecture page.
+
+### Getting started for production-grade Linux package installations
+
+The Infrastructure as Code tooling [GitLab Environment Toolkit (GET)](https://gitlab.com/gitlab-org/gitlab-environment-toolkit/-/tree/main) is the best place to start when building with the Linux package on AWS, especially if you are targeting an HA setup. While it does not automate everything, it does complete complex setups like Gitaly Cluster for you. GET is open source, so anyone can build on top of it and contribute improvements to it.
+
+### Getting started for production-grade Cloud Native Hybrid GitLab
+
+The [GitLab Environment Toolkit (GET)](https://gitlab.com/gitlab-org/gitlab-environment-toolkit/-/blob/main/README.md) is a set of opinionated Terraform and Ansible scripts. These scripts help with the deployment of Linux package or Cloud Native Hybrid environments on selected cloud providers and are used by GitLab developers for [GitLab Dedicated](../../subscriptions/gitlab_dedicated/index.md) (for example).
+
+You can use the GitLab Environment Toolkit to deploy a Cloud Native Hybrid environment on AWS. However, it's not required and may not support every valid permutation. That said, the scripts are presented as-is and you can adapt them accordingly.
+
+## Introduction
+
+For the most part, we make use of the Linux package in our setup, but we also leverage native AWS services. Instead of using the Linux package-bundled PostgreSQL and Redis, we use Amazon RDS and ElastiCache.
+
+In this guide, we go through a multi-node setup: we start by configuring our
+Virtual Private Cloud and subnets, then integrate services such as RDS for our
+database server and ElastiCache as a Redis cluster, and finally manage them in
+an auto scaling group with custom scaling policies.
+
+## Requirements
+
+In addition to having a basic familiarity with [AWS](https://docs.aws.amazon.com/) and [Amazon EC2](https://docs.aws.amazon.com/ec2/), you need:
+
+- [An AWS account](https://console.aws.amazon.com/console/home)
+- [To create or upload an SSH key](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
+ to connect to the instance via SSH
+- A domain name for the GitLab instance
+- An SSL/TLS certificate to secure your domain. If you do not already own one, you can provision a free public SSL/TLS certificate through [AWS Certificate Manager](https://aws.amazon.com/certificate-manager/) (ACM) for use with the [Elastic Load Balancer](#load-balancer) we create.
-The following are the currently available implementation patterns for GitLab when it is implemented on AWS.
+NOTE:
+It can take a few hours to validate a certificate provisioned through ACM. To avoid delays later, request your certificate as soon as possible.
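+
+If you prefer the AWS CLI over the console, a certificate request can also be issued as shown below. This is a sketch only: `gitlab.example.com` is a placeholder for your own domain, and DNS validation still has to be completed in Route 53 or your DNS provider.
+
+```shell
+aws acm request-certificate \
+  --domain-name gitlab.example.com \
+  --validation-method DNS
+```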
+
+## Architecture
+
+Below is a diagram of the recommended architecture.
+
+![AWS architecture diagram](img/aws_ha_architecture_diagram.png)
+
+## AWS costs
+
+GitLab uses the following AWS services, with links to pricing information:
+
+- **EC2**: GitLab is deployed on shared hardware, for which
+ [on-demand pricing](https://aws.amazon.com/ec2/pricing/on-demand/) applies.
+ If you want to run GitLab on a dedicated or reserved instance, see the
+ [EC2 pricing page](https://aws.amazon.com/ec2/pricing/) for information about
+ its cost.
+- **S3**: GitLab uses S3 ([pricing page](https://aws.amazon.com/s3/pricing/)) to
+ store backups, artifacts, and LFS objects.
+- **ELB**: A Classic Load Balancer ([pricing page](https://aws.amazon.com/elasticloadbalancing/pricing/)),
+ used to route requests to the GitLab instances.
+- **RDS**: An Amazon Relational Database Service using PostgreSQL
+ ([pricing page](https://aws.amazon.com/rds/postgresql/pricing/)).
+- **ElastiCache**: An in-memory cache environment ([pricing page](https://aws.amazon.com/elasticache/pricing/)),
+ used to provide a Redis configuration.
+
+## Create an IAM EC2 instance role and profile
+
+As we are using [Amazon S3 object storage](#amazon-s3-object-storage), our EC2 instances must have read, write, and list permissions for our S3 buckets. To avoid embedding AWS keys in our GitLab configuration, we make use of an [IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) to grant our GitLab instance this access. We must create an IAM policy to attach to our IAM role:
+
+### Create an IAM Policy
+
+1. Go to the IAM dashboard and select **Policies** in the left menu.
+1. Select **Create policy**, select the `JSON` tab, and add a policy. We want to [follow security best practices and grant _least privilege_](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege), giving our role only the permissions needed to perform the required actions.
+ 1. Assuming you prefix the S3 bucket names with `gl-` as shown in the diagram, add the following policy:
+
+ ```json
+ { "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "s3:PutObject",
+ "s3:GetObject",
+ "s3:DeleteObject",
+ "s3:PutObjectAcl"
+ ],
+ "Resource": "arn:aws:s3:::gl-*/*"
+ },
+ {
+ "Effect": "Allow",
+ "Action": [
+ "s3:ListBucket",
+ "s3:AbortMultipartUpload",
+ "s3:ListMultipartUploadParts",
+ "s3:ListBucketMultipartUploads"
+ ],
+ "Resource": "arn:aws:s3:::gl-*"
+ }
+ ]
+ }
+ ```
+
+1. Select **Review policy**, give your policy a name (we use `gl-s3-policy`), and select **Create policy**.
+
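+If you prefer to script this step with the [AWS CLI](https://docs.aws.amazon.com/cli/), a rough equivalent of the console steps is below. The local file name `gl-s3-policy.json` is our own choice, not part of the console flow:
+
+```shell
+# Save the policy JSON above to a local file, then create the managed policy.
+# Note the policy ARN in the output; the IAM role in the next section needs it.
+aws iam create-policy \
+  --policy-name gl-s3-policy \
+  --policy-document file://gl-s3-policy.json
+```
+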
+### Create an IAM Role
+
+1. Still on the IAM dashboard, select **Roles** in the left menu, and
+ select **Create role**.
+1. Create a new role by selecting **AWS service > EC2**, then select
+ **Next: Permissions**.
+1. In the policy filter, search for the `gl-s3-policy` we created above, select it, and select **Tags**.
+1. Add tags if needed and select **Review**.
+1. Give the role a name (we use `GitLabS3Access`) and select **Create Role**.
+
+We use this role when we [create a launch configuration](#create-a-launch-configuration) later on.
+
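+As a rough CLI sketch of the same role and its instance profile (the trust policy file name and `<account-id>` placeholder are assumptions; the console creates the instance profile for you, while the CLI requires it explicitly):
+
+```shell
+# Trust policy that lets EC2 instances assume the role.
+cat > ec2-trust-policy.json <<'EOF'
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Principal": { "Service": "ec2.amazonaws.com" },
+      "Action": "sts:AssumeRole"
+    }
+  ]
+}
+EOF
+
+aws iam create-role \
+  --role-name GitLabS3Access \
+  --assume-role-policy-document file://ec2-trust-policy.json
+
+# Attach the gl-s3-policy created above.
+aws iam attach-role-policy \
+  --role-name GitLabS3Access \
+  --policy-arn "arn:aws:iam::<account-id>:policy/gl-s3-policy"
+
+# EC2 consumes the role through an instance profile.
+aws iam create-instance-profile --instance-profile-name GitLabS3Access
+aws iam add-role-to-instance-profile \
+  --instance-profile-name GitLabS3Access \
+  --role-name GitLabS3Access
+```
+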
+## Configuring the network
+
+We start by creating a VPC for our GitLab cloud infrastructure, then
+we can create subnets to have public and private instances in at least
+two [Availability Zones (AZs)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html). Public subnets require a Route Table and an associated
+Internet Gateway.
-### GitLab Site Reliability Engineering (SRE) for AWS
+### Creating the Virtual Private Cloud (VPC)
-[GitLab Site Reliability Engineering (SRE) for AWS](gitlab_sre_for_aws.md) - information for planning, implementing, upgrading, and long term management of GitLab instances and runners on AWS.
+We now create a VPC, a virtual networking environment that you control:
-### Patterns to Install GitLab Cloud Native Hybrid on AWS EKS (HA)
+1. Sign in to [Amazon Web Services](https://console.aws.amazon.com/vpc/home).
+1. Select **Your VPCs** from the left menu and then select **Create VPC**.
+ At the "Name tag" enter `gitlab-vpc` and at the "IPv4 CIDR block" enter
+ `10.0.0.0/16`. If you don't require dedicated hardware, you can leave
+ "Tenancy" as default. Select **Yes, Create** when ready.
-[Provision GitLab Cloud Native Hybrid on AWS EKS (HA)](gitlab_hybrid_on_aws.md). This document includes instructions, patterns, and automation for installing GitLab Cloud Native Hybrid on AWS EKS. It also includes [Bill of Materials](https://en.wikipedia.org/wiki/Bill_of_materials) listings and links to Infrastructure as Code. GitLab Cloud Native Hybrid is the supported way to put as much of GitLab as possible into Kubernetes.
+ ![Create VPC](img/create_vpc.png)
-### Patterns to Install GitLab by using the Linux package on AWS EC2 (HA)
+1. Select the VPC, select **Actions**, select **Edit DNS resolution**, and enable DNS resolution. Select **Save** when done.
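+
+If you script your infrastructure, a minimal CLI sketch of the same VPC setup is below (`<vpc-id>` is a placeholder for the ID returned by the first command):
+
+```shell
+# Create the VPC and tag it with the same name used in the console steps.
+aws ec2 create-vpc \
+  --cidr-block 10.0.0.0/16 \
+  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=gitlab-vpc}]'
+
+# Enable DNS resolution for the VPC.
+aws ec2 modify-vpc-attribute --vpc-id <vpc-id> --enable-dns-support '{"Value":true}'
+```
+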
-[Installing a GitLab POC on Amazon Web Services (AWS)](manual_install_aws.md) - instructions for installing GitLab on EC2 instances. Manual instructions to build a GitLab instance or create your own Infrastructure as Code (IaC).
+### Subnets
-### Patterns for EKS cluster provisioning
+Now, let's create some subnets in different Availability Zones. Make sure
+that each subnet is associated with the VPC we just created and
+that the CIDR blocks don't overlap. This also
+allows us to enable multi-AZ deployments for redundancy.
-[EKS Cluster Provisioning Patterns](eks_clusters_aws.md) - considerations for setting up EKS cluster for runners and for integrating.
+We create both private and public subnets so that the load balancer and
+RDS instances can be placed in the appropriate subnets as well:
-### Patterns for Scaling HA GitLab Runner on AWS EC2 Auto Scaling group (ASG)
+1. Select **Subnets** from the left menu.
+1. Select **Create subnet**. Give it a descriptive name tag based on the IP,
+ for example `gitlab-public-10.0.0.0`, select the VPC we created previously, select an availability zone (we use `us-west-2a`),
+   and at the IPv4 CIDR block let's give it a /24 subnet, `10.0.0.0/24`:
+
+ ![Create subnet](img/create_subnet.png)
+
+1. Follow the same steps to create all subnets:
+
+ | Name tag | Type | Availability Zone | CIDR block |
+ | ------------------------- | ------- | ----------------- | ------------- |
+ | `gitlab-public-10.0.0.0` | public | `us-west-2a` | `10.0.0.0/24` |
+ | `gitlab-private-10.0.1.0` | private | `us-west-2a` | `10.0.1.0/24` |
+ | `gitlab-public-10.0.2.0` | public | `us-west-2b` | `10.0.2.0/24` |
+ | `gitlab-private-10.0.3.0` | private | `us-west-2b` | `10.0.3.0/24` |
-The following repository is self-contained in regard to enabling this pattern: [GitLab HA Scaling Runner Vending Machine for AWS EC2 ASG](https://gitlab.com/guided-explorations/aws/gitlab-runner-autoscaling-aws-asg/). The [feature list for this implementation pattern](https://gitlab.com/guided-explorations/aws/gitlab-runner-autoscaling-aws-asg/-/blob/main/FEATURES.md) is good to review to understand the complete value it can deliver.
+1. Once all the subnets are created, enable **Auto-assign IPv4** for the two public subnets:
+ 1. Select each public subnet in turn, select **Actions**, and select **Modify auto-assign IP settings**. Enable the option and save.
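+
+A hedged CLI sketch of the same subnet layout is below; repeat `create-subnet` for each row of the table, and the IDs are placeholders:
+
+```shell
+# Example for the first public subnet; repeat with the other name tags,
+# Availability Zones, and CIDR blocks from the table above.
+aws ec2 create-subnet \
+  --vpc-id <vpc-id> \
+  --availability-zone us-west-2a \
+  --cidr-block 10.0.0.0/24 \
+  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=gitlab-public-10.0.0.0}]'
+
+# Enable auto-assignment of public IPv4 addresses on each of the two public subnets.
+aws ec2 modify-subnet-attribute --subnet-id <public-subnet-id> --map-public-ip-on-launch
+```
+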
-### Patterns for Using GitLab with AWS
+### Internet Gateway
-[The Guided Explorations' subgroup for AWS](https://gitlab.com/guided-explorations/aws) contains a variety of working example projects for:
+Now, still on the same dashboard, go to Internet Gateways and
+create a new one:
-- Using GitLab and AWS together.
-- Running GitLab infrastructure on AWS.
-- Retrieving temporary credentials for access to AWS services.
+1. Select **Internet Gateways** from the left menu.
+1. Select **Create internet gateway**, give it the name `gitlab-gateway` and
+ select **Create**.
+1. Select it from the table, and then under the **Actions** dropdown list choose
+ "Attach to VPC".
+
+ ![Create gateway](img/create_gateway.png)
+
+1. Choose `gitlab-vpc` from the list and hit **Attach**.
+
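+The equivalent CLI calls are roughly as follows (the gateway and VPC IDs are placeholders):
+
+```shell
+aws ec2 create-internet-gateway \
+  --tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=gitlab-gateway}]'
+
+# Attach the gateway to the VPC we created earlier.
+aws ec2 attach-internet-gateway --internet-gateway-id <igw-id> --vpc-id <vpc-id>
+```
+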
+### Create NAT Gateways
+
+Instances deployed in our private subnets must connect to the internet for updates, but should not be reachable from the public internet. To achieve this, we make use of [NAT Gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) deployed in each of our public subnets:
-## AWS known issues list
+1. Go to the VPC dashboard and select **NAT Gateways** in the left menu bar.
+1. Select **Create NAT Gateway** and complete the following:
+ 1. **Subnet**: Select `gitlab-public-10.0.0.0` from the dropdown list.
+ 1. **Elastic IP Allocation ID**: Enter an existing Elastic IP or select **Allocate Elastic IP address** to allocate a new IP to your NAT gateway.
+ 1. Add tags if needed.
+ 1. Select **Create NAT Gateway**.
-Known issues are gathered from within GitLab and from customer reported issues. Customers successfully implement GitLab with a variety of "as a Service" components that GitLab has not specifically been designed for, nor has ongoing testing for. While GitLab does take partner technologies very seriously, the highlighting of known issues here is a convenience for implementers and it does not imply that GitLab has targeted compatibility with, nor carries any type of guarantee of running on the partner technology where the issues occur. Consult individual issues to understand the GitLab stance and plans on any given known issue.
+Create a second NAT gateway but this time place it in the second public subnet, `gitlab-public-10.0.2.0`.
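+
+A rough CLI sketch for one NAT gateway is below; repeat it with the second public subnet, and all IDs are placeholders:
+
+```shell
+# Allocate an Elastic IP for the NAT gateway.
+aws ec2 allocate-address --domain vpc
+
+# Create the NAT gateway in the first public subnet, using the allocation ID returned above.
+aws ec2 create-nat-gateway \
+  --subnet-id <gitlab-public-10.0.0.0-subnet-id> \
+  --allocation-id <eip-allocation-id>
+```
+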
-See the [GitLab AWS known issues list](https://gitlab.com/gitlab-com/alliances/aws/public-tracker/-/issues?label_name%5B%5D=AWS+Known+Issue) for a complete list.
+### Route Tables
-## Provision a single GitLab instance on AWS
+#### Public Route Table
-If you want to provision a single GitLab instance on AWS, you have two options:
+We must create a route table for our public subnets to reach the internet via the internet gateway we created in the previous step.
-- The marketplace subscription
-- The official GitLab AMIs
+On the VPC dashboard:
-### Marketplace subscription
+1. Select **Route Tables** from the left menu.
+1. Select **Create Route Table**.
+1. At the "Name tag" enter `gitlab-public` and choose `gitlab-vpc` under "VPC".
+1. Select **Create**.
-GitLab provides a 5 user subscription as an AWS Marketplace subscription to help teams of all sizes to get started with an Ultimate licensed instance in record time. The Marketplace subscription can be easily upgraded to any GitLab licensing via an AWS Marketplace Private Offer, with the convenience of continued AWS billing. No migration is necessary to obtain a larger, non-time based license from GitLab. Per-minute licensing is automatically removed when you accept the private offer.
+We now must add our internet gateway as a new target and have
+it receive traffic from any destination.
-For a tutorial on provisioning a GitLab Instance via a Marketplace Subscription, [use this tutorial](https://gitlab.awsworkshop.io/040_partner_setup.html). The tutorial links to the [GitLab Ultimate Marketplace Listing](https://aws.amazon.com/marketplace/pp/prodview-g6ktjmpuc33zk), but you can also use the [GitLab Premium Marketplace Listing](https://aws.amazon.com/marketplace/pp/prodview-amk6tacbois2k) to provision an instance.
+1. Select **Route Tables** from the left menu and select the `gitlab-public`
+ route to show the options at the bottom.
+1. Select the **Routes** tab, select **Edit routes > Add route** and set `0.0.0.0/0`
+ as the destination. In the target column, select the `gitlab-gateway` we created previously.
+ Select **Save routes** when done.
-### Official GitLab releases as AMIs
+Next, we must associate the **public** subnets to the route table:
-GitLab produces Amazon Machine Images (AMI) during the regular release process. The AMIs can be used for single instance GitLab installation or, by configuring `/etc/gitlab/gitlab.rb`, can be specialized for specific GitLab service roles (for example a Gitaly server). Older releases remain available and can be used to migrate an older GitLab server to AWS.
+1. Select the **Subnet Associations** tab and select **Edit subnet associations**.
+1. Check only the public subnets and select **Save**.
-Initial licensing can either be the Free Enterprise License (EE) or the open source Community Edition (CE). The Enterprise Edition provides the easiest path forward to a licensed version if the need arises.
+#### Private Route Tables
-Currently the Amazon AMI uses the Amazon prepared Ubuntu AMI (x86 and ARM are available) as its starting point.
+We also must create two private route tables so that instances in each private subnet can reach the internet via the NAT gateway in the corresponding public subnet in the same availability zone.
+
+1. Follow the same steps as above to create two private route tables. Name them `gitlab-private-a` and `gitlab-private-b`.
+1. Next, add a new route to each of the private route tables where the destination is `0.0.0.0/0` and the target is one of the NAT gateways we created earlier.
+ 1. Add the NAT gateway we created in `gitlab-public-10.0.0.0` as the target for the new route in the `gitlab-private-a` route table.
+   1. Similarly, add the NAT gateway in `gitlab-public-10.0.2.0` as the target for the new route in the `gitlab-private-b` route table.
+1. Lastly, associate each private subnet with a private route table.
+ 1. Associate `gitlab-private-10.0.1.0` with `gitlab-private-a`.
+ 1. Associate `gitlab-private-10.0.3.0` with `gitlab-private-b`.
+
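+For reference, a CLI sketch of the public route table and one of the private route tables follows (IDs are placeholders; repeat the private steps for `gitlab-private-b` with the second NAT gateway and subnet):
+
+```shell
+# Public route table: route 0.0.0.0/0 through the internet gateway and
+# associate it with both public subnets.
+aws ec2 create-route-table --vpc-id <vpc-id> \
+  --tag-specifications 'ResourceType=route-table,Tags=[{Key=Name,Value=gitlab-public}]'
+aws ec2 create-route --route-table-id <gitlab-public-rt-id> \
+  --destination-cidr-block 0.0.0.0/0 --gateway-id <igw-id>
+aws ec2 associate-route-table --route-table-id <gitlab-public-rt-id> --subnet-id <public-subnet-a-id>
+aws ec2 associate-route-table --route-table-id <gitlab-public-rt-id> --subnet-id <public-subnet-b-id>
+
+# Private route table A: route 0.0.0.0/0 through the NAT gateway in the same AZ.
+aws ec2 create-route-table --vpc-id <vpc-id> \
+  --tag-specifications 'ResourceType=route-table,Tags=[{Key=Name,Value=gitlab-private-a}]'
+aws ec2 create-route --route-table-id <gitlab-private-a-rt-id> \
+  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id <nat-gateway-a-id>
+aws ec2 associate-route-table --route-table-id <gitlab-private-a-rt-id> --subnet-id <private-subnet-a-id>
+```
+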
+## Load Balancer
+
+We create a load balancer to evenly distribute inbound traffic on ports `80` and `443` across our GitLab application servers. Based on the [scaling policies](#create-an-auto-scaling-group) we create later, instances are added to or removed from our load balancer as needed. Additionally, the load balancer performs health checks on our instances.
+
+On the EC2 dashboard, look for Load Balancer in the left navigation bar:
+
+1. Select **Create Load Balancer**.
+ 1. Choose the **Classic Load Balancer**.
+ 1. Give it a name (we use `gitlab-loadbalancer`) and for the **Create LB Inside** option, select `gitlab-vpc` from the dropdown list.
+ 1. In the **Listeners** section, set the following listeners:
+ - HTTP port 80 for both load balancer and instance protocol and ports
+ - TCP port 22 for both load balancer and instance protocols and ports
+ - HTTPS port 443 for load balancer protocol and ports, forwarding to HTTP port 80 on the instance (we configure GitLab to listen on port 80 [later in the guide](#add-support-for-proxied-ssl))
+ 1. In the **Select Subnets** section, select both public subnets from the list so that the load balancer can route traffic to both availability zones.
+1. We add a security group for our load balancer to act as a firewall to control what traffic is allowed through. Select **Assign Security Groups** and select **Create a new security group**, give it a name
+ (we use `gitlab-loadbalancer-sec-group`) and description, and allow both HTTP and HTTPS traffic
+ from anywhere (`0.0.0.0/0, ::/0`). Also allow SSH traffic, select a custom source, and add a single trusted IP address or an IP address range in CIDR notation. This allows users to perform Git actions over SSH.
+1. Select **Configure Security Settings** and set the following:
+ 1. Select an SSL/TLS certificate from ACM or upload a certificate to IAM.
+ 1. Under **Select a Cipher**, pick a predefined security policy from the dropdown list. You can see a breakdown of [Predefined SSL Security Policies for Classic Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) in the AWS documentation. Check the GitLab codebase for a list of [supported SSL ciphers and protocols](https://gitlab.com/gitlab-org/gitlab/-/blob/9ee7ad433269b37251e0dd5b5e00a0f00d8126b4/lib/support/nginx/gitlab-ssl#L97-99).
+1. Select **Configure Health Check** and set up a health check for your EC2 instances.
+ 1. For **Ping Protocol**, select HTTP.
+ 1. For **Ping Port**, enter 80.
+   1. For **Ping Path**, we recommend that you [use the Readiness check endpoint](../../administration/load_balancer.md#readiness-check). You must add [the VPC IP Address Range (CIDR)](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-groups.html#elb-vpc-nacl) to the [IP allowlist](../../administration/monitoring/ip_allowlist.md) for the [Health Check endpoints](../../administration/monitoring/health_check.md).
+ 1. Keep the default **Advanced Details** or adjust them according to your needs.
+1. Select **Add EC2 Instances** - don't add anything as we create an Auto Scaling Group later to manage instances for us.
+1. Select **Add Tags** and add any tags you need.
+1. Select **Review and Create**, review all your settings, and select **Create** if you're happy.
+
+After the Load Balancer is up and running, you can revisit your Security
+Groups to refine the access only through the ELB and any other requirements
+you might have.
+
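+If you want to capture the same load balancer as code, a Classic Load Balancer sketch might look like this (the certificate ARN, security group ID, and subnet IDs are placeholders):
+
+```shell
+aws elb create-load-balancer \
+  --load-balancer-name gitlab-loadbalancer \
+  --subnets <public-subnet-a-id> <public-subnet-b-id> \
+  --security-groups <gitlab-loadbalancer-sec-group-id> \
+  --listeners \
+    "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
+    "Protocol=TCP,LoadBalancerPort=22,InstanceProtocol=TCP,InstancePort=22" \
+    "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=<acm-certificate-arn>"
+
+# Health check against the readiness endpoint, matching the console settings above.
+aws elb configure-health-check \
+  --load-balancer-name gitlab-loadbalancer \
+  --health-check "Target=HTTP:80/-/readiness,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=3"
+```
+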
+### Configure DNS for Load Balancer
+
+On the Route 53 dashboard, select **Hosted zones** in the left navigation bar:
+
+1. Select an existing hosted zone or, if you do not already have one for your domain, select **Create Hosted Zone**, enter your domain name, and select **Create**.
+1. Select **Create Record Set** and provide the following values:
+ 1. **Name:** Use the domain name (the default value) or enter a subdomain.
+ 1. **Type:** Select **A - IPv4 address**.
+ 1. **Alias:** Defaults to **No**. Select **Yes**.
+ 1. **Alias Target:** Find the **ELB Classic Load Balancers** section and select the classic load balancer we created earlier.
+ 1. **Routing Policy:** We use **Simple** but you can choose a different policy based on your use case.
+ 1. **Evaluate Target Health:** We set this to **No** but you can choose to have the load balancer route traffic based on target health.
+ 1. Select **Create**.
+1. If you registered your domain through Route 53, you're done. If you used a different domain registrar, you must update your DNS records with your domain registrar. You must:
+ 1. Select **Hosted zones** and select the domain you added above.
+ 1. You see a list of `NS` records. From your domain registrar's administrator panel, add each of these as `NS` records to your domain's DNS records. These steps may vary between domain registrars. If you're stuck, Google **"name of your registrar" add DNS records** and you should find a help article specific to your domain registrar.
+
+The steps for doing this vary depending on which registrar you use and are beyond the scope of this guide.
+
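+If you manage DNS as code, the same alias record can be created with the CLI. This sketch assumes you already know your hosted zone ID, and the load balancer's DNS name and canonical hosted zone ID (both returned by `aws elb describe-load-balancers`); `gitlab.example.com` is only an example domain:
+
+```shell
+aws route53 change-resource-record-sets \
+  --hosted-zone-id <your-hosted-zone-id> \
+  --change-batch '{
+    "Changes": [{
+      "Action": "UPSERT",
+      "ResourceRecordSet": {
+        "Name": "gitlab.example.com",
+        "Type": "A",
+        "AliasTarget": {
+          "HostedZoneId": "<elb-canonical-hosted-zone-id>",
+          "DNSName": "<elb-dns-name>",
+          "EvaluateTargetHealth": false
+        }
+      }
+    }]
+  }'
+```
+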
+## PostgreSQL with RDS
+
+For our database server, we use Amazon RDS for PostgreSQL, which offers Multi-AZ
+deployments for redundancy (Aurora is **not** supported). First we create a security group and subnet group, then we
+create the actual RDS instance.
+
+### RDS Security Group
+
+We need a security group for our database that allows inbound traffic from the instances we deploy in our `gitlab-loadbalancer-sec-group` later on:
+
+1. From the EC2 dashboard, select **Security Groups** from the left menu bar.
+1. Select **Create security group**.
+1. Give it a name (we use `gitlab-rds-sec-group`), a description, and select the `gitlab-vpc` from the **VPC** dropdown list.
+1. In the **Inbound rules** section, select **Add rule** and set the following:
+ 1. **Type:** search for and select the **PostgreSQL** rule.
+ 1. **Source type:** set as "Custom".
+ 1. **Source:** select the `gitlab-loadbalancer-sec-group` we created earlier.
+1. When done, select **Create security group**.
+
+### RDS Subnet Group
+
+1. Go to the RDS dashboard and select **Subnet Groups** from the left menu.
+1. Select **Create DB Subnet Group**.
+1. Under **Subnet group details**, enter a name (we use `gitlab-rds-group`), a description, and choose the `gitlab-vpc` from the VPC dropdown list.
+1. From the **Availability Zones** dropdown list, select the Availability Zones that include the subnets you've configured. In our case, we add `us-west-2a` and `us-west-2b`.
+1. From the **Subnets** dropdown list, select the two private subnets (`10.0.1.0/24` and `10.0.3.0/24`) as we defined them in the [subnets section](#subnets).
+1. Select **Create** when ready.
+
+### Create the database
+
+WARNING:
+Avoid using burstable instances (t class instances) for the database as this could lead to performance issues due to CPU credits running out during sustained periods of high load.
+
+Now, it's time to create the database:
+
+1. Go to the RDS dashboard, select **Databases** from the left menu, and select **Create database**.
+1. Select **Standard Create** for the database creation method.
+1. Select **PostgreSQL** as the database engine and select the minimum PostgreSQL version as defined for your GitLab version in our [database requirements](../../install/requirements.md#postgresql-requirements).
+1. Because this is a production server, let's choose **Production** from the **Templates** section.
+1. Under **Settings**, use:
+ - `gitlab-db-ha` for the DB instance identifier.
+ - `gitlab` for a master username.
+ - A very secure password for the master password.
+
+ Make a note of these as we need them later.
+
+1. For the DB instance size, select **Standard classes** and select an instance size that meets your requirements from the dropdown list. We use a `db.m4.large` instance.
+1. Under **Storage**, configure the following:
+ 1. Select **Provisioned IOPS (SSD)** from the storage type dropdown list. Provisioned IOPS (SSD) storage is best suited for this use (though you can choose General Purpose (SSD) to reduce the costs). Read more about it at [Storage for Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html).
+ 1. Allocate storage and set provisioned IOPS. We use the minimum values, `100` and `1000`, respectively.
+ 1. Enable storage autoscaling (optional) and set a maximum storage threshold.
+1. Under **Availability & durability**, select **Create a standby instance** to have a standby RDS instance provisioned in a different [Availability Zone](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html).
+1. Under **Connectivity**, configure the following:
+ 1. Select the VPC we created earlier (`gitlab-vpc`) from the **Virtual Private Cloud (VPC)** dropdown list.
+ 1. Expand the **Additional connectivity configuration** section and select the subnet group (`gitlab-rds-group`) we created earlier.
+ 1. Set public accessibility to **No**.
+   1. Under **VPC security group**, select **Choose existing** and select the `gitlab-rds-sec-group` we created above from the dropdown list.
+ 1. Leave the database port as the default `5432`.
+1. For **Database authentication**, select **Password authentication**.
+1. Expand the **Additional configuration** section and complete the following:
+ 1. The initial database name. We use `gitlabhq_production`.
+ 1. Configure your preferred backup settings.
+ 1. The only other change we make here is to disable auto minor version updates under **Maintenance**.
+ 1. Leave all the other settings as is or tweak according to your needs.
+ 1. If you're happy, select **Create database**.
+
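+The subnet group and database can also be sketched with the CLI. The values mirror the console choices above; the password, engine version, and IDs are placeholders, and you should pick the engine version from the database requirements linked earlier:
+
+```shell
+aws rds create-db-subnet-group \
+  --db-subnet-group-name gitlab-rds-group \
+  --db-subnet-group-description "Private subnets for the GitLab database" \
+  --subnet-ids <private-subnet-a-id> <private-subnet-b-id>
+
+aws rds create-db-instance \
+  --db-instance-identifier gitlab-db-ha \
+  --engine postgres \
+  --engine-version <postgresql-version> \
+  --db-instance-class db.m4.large \
+  --db-name gitlabhq_production \
+  --master-username gitlab \
+  --master-user-password <secure-password> \
+  --multi-az \
+  --no-publicly-accessible \
+  --storage-type io1 \
+  --allocated-storage 100 \
+  --iops 1000 \
+  --db-subnet-group-name gitlab-rds-group \
+  --vpc-security-group-ids <gitlab-rds-sec-group-id>
+```
+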
+Now that the database is created, let's move on to setting up Redis with ElastiCache.
+
+## Redis with ElastiCache
+
+ElastiCache is an in-memory hosted caching solution. Redis maintains its own
+persistence and is used to store session data, temporary cache information, and background job queues for the GitLab application.
+
+### Create a Redis Security Group
+
+1. Go to the EC2 dashboard.
+1. Select **Security Groups** from the left menu.
+1. Select **Create security group** and fill in the details. Give it a name (we use `gitlab-redis-sec-group`),
+   add a description, and choose the VPC we created previously.
+1. In the **Inbound rules** section, select **Add rule** and add a **Custom TCP** rule, set port `6379`, and set the "Custom" source as the `gitlab-loadbalancer-sec-group` we created earlier.
+1. When done, select **Create security group**.
+
+### Redis Subnet Group
+
+1. Go to the ElastiCache dashboard from your AWS console.
+1. Go to **Subnet Groups** in the left menu, and create a new subnet group (we name ours `gitlab-redis-group`).
+ Make sure to select our VPC and its [private subnets](#subnets).
+1. Select **Create** when ready.
+
+ ![ElastiCache subnet](img/ec_subnet.png)
+
+### Create the Redis Cluster
+
+1. Go back to the ElastiCache dashboard.
+1. Select **Redis** on the left menu and select **Create** to create a new
+ Redis cluster. Do not enable **Cluster Mode** as it is [not supported](../../administration/redis/replication_and_failover_external.md#requirements). Even without cluster mode on, you still get the
+ chance to deploy Redis in multiple availability zones.
+1. In the settings section:
+ 1. Give the cluster a name (`gitlab-redis`) and a description.
+ 1. For the version, select the latest.
+ 1. Leave the port as `6379` because this is what we used in our Redis security group above.
+ 1. Select the node type (at least `cache.t3.medium`, but adjust to your needs) and the number of replicas.
+1. In the advanced settings section:
+ 1. Select the multi-AZ auto-failover option.
+ 1. Select the subnet group we created previously.
+ 1. Manually select the preferred availability zones, and under "Replica 2"
+ choose a different zone than the other two.
+
+ ![Redis availability zones](img/ec_az.png)
+
+1. In the security settings, edit the security groups and choose the
+ `gitlab-redis-sec-group` we had previously created.
+1. Leave the rest of the settings at their default values or edit them to your liking.
+1. When done, select **Create**.
+
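+As a CLI sketch of the same cluster (the security group ID is a placeholder, and the node type, replica count, and Redis version are yours to adjust):
+
+```shell
+aws elasticache create-replication-group \
+  --replication-group-id gitlab-redis \
+  --replication-group-description "Redis for GitLab" \
+  --engine redis \
+  --cache-node-type cache.t3.medium \
+  --num-cache-clusters 3 \
+  --automatic-failover-enabled \
+  --multi-az-enabled \
+  --cache-subnet-group-name gitlab-redis-group \
+  --security-group-ids <gitlab-redis-sec-group-id> \
+  --port 6379
+```
+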
+## Setting up Bastion Hosts
+
+Because our GitLab instances are in private subnets, we need a way to connect
+to these instances with SSH for actions that include making configuration changes
+and performing upgrades. One way of doing this is by using a [bastion host](https://en.wikipedia.org/wiki/Bastion_host),
+sometimes also referred to as a jump box.
NOTE:
-When deploying a GitLab instance using the official AMI, the root password to the instance is the EC2 **Instance** ID (not the AMI ID). This way of setting the root account password is specific to official GitLab published AMIs ONLY.
+If you do not want to maintain bastion hosts, you can set up [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) for access to instances. This is beyond the scope of this document.
+
+### Create Bastion Host A
+
+1. Go to the EC2 Dashboard and select **Launch instance**.
+1. Select the **Ubuntu Server 18.04 LTS (HVM)** AMI.
+1. Choose an instance type. We use a `t2.micro` as we only use the bastion host to SSH into our other instances.
+1. Select **Configure Instance Details**.
+ 1. Under **Network**, select the `gitlab-vpc` from the dropdown list.
+ 1. Under **Subnet**, select the public subnet we created earlier (`gitlab-public-10.0.0.0`).
+ 1. Double check that under **Auto-assign Public IP** you have **Use subnet setting (Enable)** selected.
+ 1. Leave everything else as default and select **Add Storage**.
+1. For storage, we leave everything as default, keeping only the 8 GB root volume. We do not store anything on this instance.
+1. Select **Add Tags** and on the next screen select **Add Tag**.
+ 1. We only set `Key: Name` and `Value: Bastion Host A`.
+1. Select **Configure Security Group**.
+ 1. Select **Create a new security group**, enter a **Security group name** (we use `bastion-sec-group`), and add a description.
+ 1. We enable SSH access from anywhere (`0.0.0.0/0`). If you want stricter security, specify a single IP address or an IP address range in CIDR notation.
+   1. Select **Review and Launch**.
+1. Review all your settings and, if you're happy, select **Launch**.
+1. Acknowledge that you have access to an existing key pair or create a new one. Select **Launch Instance**.
+
+Confirm that you can SSH into the instance:
+
+1. On the EC2 Dashboard, select **Instances** in the left menu.
+1. Select **Bastion Host A** from your list of instances.
+1. Select **Connect** and follow the connection instructions.
+1. If you are able to connect successfully, let's move on to setting up our second bastion host for redundancy.
+
+### Create Bastion Host B
+
+1. Create an EC2 instance following the same steps as above with the following changes:
+ 1. For the **Subnet**, select the second public subnet we created earlier (`gitlab-public-10.0.2.0`).
+ 1. Under the **Add Tags** section, we set `Key: Name` and `Value: Bastion Host B` so that we can easily identify our two instances.
+ 1. For the security group, select the existing `bastion-sec-group` we created above.
+
+### Use SSH Agent Forwarding
+
+EC2 instances running Linux use private key files for SSH authentication. You connect to your bastion host using an SSH client and the private key file stored on your client. Because the private key file is not present on the bastion host, you are not able to connect to your instances in private subnets.
+
+Storing private key files on your bastion host is a bad idea. To get around this, use SSH agent forwarding on your client. See [Securely Connect to Linux Instances Running in a Private Amazon VPC](https://aws.amazon.com/blogs/security/securely-connect-to-linux-instances-running-in-a-private-amazon-vpc/) for a step-by-step guide on how to use SSH agent forwarding.
+
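+For example, assuming your key is loaded into your local SSH agent and your AMIs use the `ubuntu` login user, a typical two-hop connection looks roughly like this (the IP addresses are placeholders):
+
+```shell
+# On your workstation: add the key to the agent and forward it to the bastion.
+ssh-add ~/.ssh/<your-key>.pem
+ssh -A ubuntu@<bastion-host-public-ip>
+
+# From the bastion: hop to the GitLab instance in the private subnet.
+ssh ubuntu@<gitlab-instance-private-ip>
+```
+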
+## Install GitLab and create custom AMI
+
+We need a preconfigured, custom GitLab AMI to use in our launch configuration later. As a starting point, we use the official GitLab AMI to create a GitLab instance. Then, we add our custom configuration for PostgreSQL, Redis, and Gitaly. If you prefer, instead of using the official GitLab AMI, you can also spin up an EC2 instance of your choosing and [manually install GitLab](https://about.gitlab.com/install/).
+
+### Install GitLab
+
+From the EC2 dashboard:
+
+1. Use the section below titled "[Find official GitLab-created AMI IDs on AWS](#find-official-gitlab-created-ami-ids-on-aws)" to find the correct AMI to launch.
+1. After selecting **Launch** on the desired AMI, select an instance type based on your workload. Consult the [hardware requirements](../../install/requirements.md#hardware-requirements) to choose one that fits your needs (at least `c5.xlarge`, which is sufficient to accommodate 100 users).
+1. Select **Configure Instance Details**:
+ 1. In the **Network** dropdown list, select `gitlab-vpc`, the VPC we created earlier.
+ 1. In the **Subnet** dropdown list, select `gitlab-private-10.0.1.0` from the list of subnets we created earlier.
+ 1. Double check that **Auto-assign Public IP** is set to `Use subnet setting (Disable)`.
+ 1. Select **Add Storage**.
+ 1. The root volume is 8GiB by default and should be enough given that we do not store any data there.
+1. Select **Add Tags** and add any tags you may need. In our case, we only set `Key: Name` and `Value: GitLab`.
+1. Select **Configure Security Group**. Check **Select an existing security group** and select the `gitlab-loadbalancer-sec-group` we created earlier.
+1. Select **Review and launch** followed by **Launch** if you're happy with your settings.
+1. Finally, acknowledge that you have access to the selected private key file or create a new one. Select **Launch Instances**.
+
+### Add custom configuration
+
+Connect to your GitLab instance via **Bastion Host A** using [SSH Agent Forwarding](#use-ssh-agent-forwarding). Once connected, add the following custom configuration:
+
+#### Disable Let's Encrypt
+
+Because we're adding our SSL certificate at the load balancer, we do not need the GitLab built-in support for Let's Encrypt. Let's Encrypt [is enabled by default](https://docs.gitlab.com/omnibus/settings/ssl/index.html#enable-the-lets-encrypt-integration) when using an `https` domain in GitLab 10.7 and later, so we must explicitly disable it:
+
+1. Open `/etc/gitlab/gitlab.rb` and disable it:
+
+ ```ruby
+ letsencrypt['enable'] = false
+ ```
+
+1. Save the file and reconfigure for the changes to take effect:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+#### Install the required extensions for PostgreSQL
+
+From your GitLab instance, connect to the RDS instance to verify access and to install the required `pg_trgm` and `btree_gist` extensions.
+
+To find the host or endpoint, go to **Amazon RDS > Databases** and select the database you created earlier. Look for the endpoint under the **Connectivity & security** tab.
+
+Do not include the colon and port number:
+
+```shell
+sudo /opt/gitlab/embedded/bin/psql -U gitlab -h <rds-endpoint> -d gitlabhq_production
+```
+
+At the `psql` prompt, create the extensions and then quit the session:
+
+```shell
+psql (10.9)
+Type "help" for help.
+
+gitlab=# CREATE EXTENSION pg_trgm;
+gitlab=# CREATE EXTENSION btree_gist;
+gitlab=# \q
+```
+
+#### Configure GitLab to connect to PostgreSQL and Redis
+
+1. Edit `/etc/gitlab/gitlab.rb`, find the `external_url 'http://<domain>'` option
+ and change it to the `https` domain you are using.
+
+1. Look for the GitLab database settings and uncomment as necessary. In
+   our case, we specify the database adapter, encoding, host, name,
+   username, and password:
+
+ ```ruby
+ # Disable the built-in Postgres
+ postgresql['enable'] = false
+
+ # Fill in the connection details
+ gitlab_rails['db_adapter'] = "postgresql"
+ gitlab_rails['db_encoding'] = "unicode"
+ gitlab_rails['db_database'] = "gitlabhq_production"
+ gitlab_rails['db_username'] = "gitlab"
+ gitlab_rails['db_password'] = "mypassword"
+ gitlab_rails['db_host'] = "<rds-endpoint>"
+ ```
+
+1. Next, we must configure the Redis section by adding the host and
+ uncommenting the port:
-Instances running on Community Edition (CE) require a migration to Enterprise Edition (EE) to subscribe to the GitLab Premium or Ultimate plan. If you want to pursue a subscription, using the Free-forever plan of Enterprise Edition is the least disruptive method.
+ ```ruby
+ # Disable the built-in Redis
+ redis['enable'] = false
+
+ # Fill in the connection details
+ gitlab_rails['redis_host'] = "<redis-endpoint>"
+ gitlab_rails['redis_port'] = 6379
+ ```
+
+1. Finally, reconfigure GitLab for the changes to take effect:
+
+ ```shell
+ sudo gitlab-ctl reconfigure
+ ```
+
+1. You can also run a check and a service status to make sure
+   everything has been set up correctly:
+
+ ```shell
+ sudo gitlab-rake gitlab:check
+ sudo gitlab-ctl status
+ ```
+
+#### Set up Gitaly
+
+WARNING:
+In this architecture, having a single Gitaly server creates a single point of failure. Use
+[Gitaly Cluster](../../administration/gitaly/praefect.md) to remove this limitation.
+
+Gitaly is a service that provides high-level RPC access to Git repositories.
+It should be enabled and configured on a separate EC2 instance in one of the
+[private subnets](#subnets) we configured previously.
+
+Let's create an EC2 instance where we install Gitaly:
+
+1. From the EC2 dashboard, select **Launch instance**.
+1. Choose an AMI. In this example, we select the **Ubuntu Server 18.04 LTS (HVM), SSD Volume Type**.
+1. Choose an instance type. We pick a `c5.xlarge`.
+1. Select **Configure Instance Details**.
+ 1. In the **Network** dropdown list, select `gitlab-vpc`, the VPC we created earlier.
+ 1. In the **Subnet** dropdown list, select `gitlab-private-10.0.1.0` from the list of subnets we created earlier.
+ 1. Double check that **Auto-assign Public IP** is set to `Use subnet setting (Disable)`.
+ 1. Select **Add Storage**.
+1. Increase the Root volume size to `20 GiB` and change the **Volume Type** to `Provisioned IOPS SSD (io1)`. (This is an arbitrary size. Create a volume big enough for your repository storage requirements.)
+   1. For **IOPS** set `1000` (20 GiB x 50 IOPS). You can provision up to 50 IOPS per GiB. If you select a larger volume, increase the IOPS accordingly. Workloads where many small files are written in a serialized manner, like `git`, require performant storage, hence the choice of `Provisioned IOPS SSD (io1)`.
+1. Select **Add Tags** and add your tags. In our case, we only set `Key: Name` and `Value: Gitaly`.
+1. Select **Configure Security Group** and let's **Create a new security group**.
+ 1. Give your security group a name and description. We use `gitlab-gitaly-sec-group` for both.
+ 1. Create a **Custom TCP** rule and add port `8075` to the **Port Range**. For the **Source**, select the `gitlab-loadbalancer-sec-group`.
+ 1. Also add an inbound rule for SSH from the `bastion-sec-group` so that we can connect using [SSH Agent Forwarding](#use-ssh-agent-forwarding) from the Bastion hosts.
+1. Select **Review and launch** followed by **Launch** if you're happy with your settings.
+1. Finally, acknowledge that you have access to the selected private key file or create a new one. Select **Launch Instances**.
NOTE:
-Because any given GitLab upgrade might involve data disk updates or database schema upgrades, swapping out the AMI is not sufficient for taking upgrades.
+Instead of storing configuration _and_ repository data on the root volume, you can also choose to add an additional EBS volume for repository storage. Follow the same guidance as above. See the [Amazon EBS pricing](https://aws.amazon.com/ebs/pricing/). We do not recommend using EFS as it may negatively impact the performance of GitLab. You can review the [relevant documentation](../../administration/nfs.md#avoid-using-cloud-based-file-systems) for more details.
+
+Now that we have our EC2 instance ready, follow the [documentation to install GitLab and set up Gitaly on its own server](../../administration/gitaly/configure_gitaly.md#run-gitaly-on-its-own-server). Perform the client setup steps from that document on the [GitLab instance we created](#install-gitlab) above.
+
+#### Add Support for Proxied SSL
+
+As we are terminating SSL at our [load balancer](#load-balancer), follow the steps at [Supporting proxied SSL](https://docs.gitlab.com/omnibus/settings/ssl/index.html#configure-a-reverse-proxy-or-load-balancer-ssl-termination) to configure this in `/etc/gitlab/gitlab.rb`.
+
+Remember to run `sudo gitlab-ctl reconfigure` after saving the changes to the `gitlab.rb` file.
+
+#### Fast lookup of authorized SSH keys
+
+The public SSH keys for users allowed to access GitLab are stored in `/var/opt/gitlab/.ssh/authorized_keys`. Typically we'd use shared storage so that all the instances are able to access this file when a user performs a Git action over SSH. Because we do not have shared storage in our setup, we update our configuration to authorize SSH users via indexed lookup in the GitLab database.
+
+Follow the instructions at [Set up fast SSH key lookup](../../administration/operations/fast_ssh_key_lookup.md#set-up-fast-lookup) to switch from using the `authorized_keys` file to the database.
-1. Log in to the AWS Web Console, so that selecting the links in the following step take you directly to the AMI list.
-1. Pick the edition you want:
+If you do not configure fast lookup, Git actions over SSH result in the following error:
- - [GitLab Enterprise Edition](https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Images:visibility=public-images;ownerAlias=782774275127;search=GitLab%20EE;sort=desc:name): If you want to unlock the enterprise features, a license is needed.
- - [GitLab Community Edition](https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Images:visibility=public-images;ownerAlias=782774275127;search=GitLab%20CE;sort=desc:name): The open source version of GitLab.
- - [GitLab Premium or Ultimate Marketplace (pre-licensed)](https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Images:visibility=public-images;source=Marketplace;search=GitLab%20EE;sort=desc:name): 5 user license built into per-minute billing.
+```shell
+Permission denied (publickey).
+fatal: Could not read from remote repository.
-1. AMI IDs are unique per region. After you've loaded any of these editions, in the upper-right corner, select the desired target region of the console to see the appropriate AMIs.
-1. After the console is loaded, you can add additional search criteria to narrow further. For instance, type `13.` to find only 13.x versions.
-1. To launch an EC2 Machine with one of the listed AMIs, check the box at the start of the relevant row, and select **Launch** near the top of left of the page.
+Please make sure you have the correct access rights
+and the repository exists.
+```
+
+#### Configure host keys
+
+Ordinarily we would manually copy the contents (private and public keys) of `/etc/ssh/` on the primary application server to `/etc/ssh` on all secondary servers. This prevents false man-in-the-middle attack alerts when accessing servers in your cluster behind a load balancer.
+
+We automate this by creating static host keys as part of our custom AMI. Because these host keys would otherwise be regenerated every time an EC2 instance boots up, "hard coding" them into our custom AMI serves as a workaround.
+
+On your GitLab instance run the following:
+
+```shell
+sudo mkdir /etc/ssh_static
+sudo cp -R /etc/ssh/* /etc/ssh_static
+```
+
+In `/etc/ssh/sshd_config` update the following:
+
+```shell
+# HostKeys for protocol version 2
+HostKey /etc/ssh_static/ssh_host_rsa_key
+HostKey /etc/ssh_static/ssh_host_dsa_key
+HostKey /etc/ssh_static/ssh_host_ecdsa_key
+HostKey /etc/ssh_static/ssh_host_ed25519_key
+```
+
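+After editing `sshd_config`, the SSH daemon usually needs to be restarted for the new host key paths to take effect. On Ubuntu, that is typically:
+
+```shell
+sudo systemctl restart ssh
+```
+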
+#### Amazon S3 object storage
+
+Because we're not using NFS for shared storage, we use [Amazon S3](https://aws.amazon.com/s3/) buckets to store backups, artifacts, LFS objects, uploads, merge request diffs, container registry images, and more. Our documentation includes [instructions on how to configure object storage](../../administration/object_storage.md) for each of these data types, and other information about using object storage with GitLab.
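+
+If you have not created the buckets yet, you can do so with the CLI. The bucket name below is only an example (S3 bucket names must be globally unique, so add your own suffix), and the region must match the one you are deploying into:
+
+```shell
+# Repeat for each data type you plan to store in S3 (artifacts, LFS objects, uploads, and so on).
+aws s3api create-bucket \
+  --bucket gl-artifacts-<unique-suffix> \
+  --region us-west-2 \
+  --create-bucket-configuration LocationConstraint=us-west-2
+```
+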
NOTE:
-If you are trying to restore from an older version of GitLab while moving to AWS, find the
-[Enterprise and Community Editions before GitLab 11.10.3](https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Images:visibility=public-images;ownerAlias=855262394183;sort=desc:name).
+Because we are using the [AWS IAM profile](#create-an-iam-role) we created earlier, be sure to omit the AWS access key and secret access key/value pairs when configuring object storage. Instead, use `'use_iam_profile' => true` in your configuration as shown in the object storage documentation linked above.
+
+Remember to run `sudo gitlab-ctl reconfigure` after saving the changes to the `gitlab.rb` file.
+
+---
+
+That concludes the configuration changes for our GitLab instance. Next, we create a custom AMI based on this instance to use for our launch configuration and auto scaling group.
+
+### Log in for the first time
+
+Using the domain name you used when setting up [DNS for the load balancer](#configure-dns-for-load-balancer), you should now be able to visit GitLab in your browser.
+
+Depending on how you installed GitLab, and provided you did not change the password by any other means, the default password is either:
-## Additional details on implementation patterns
+- Your instance ID if you used the official GitLab AMI.
+- A randomly generated password stored for 24 hours in `/etc/gitlab/initial_root_password`.
-GitLab implementation patterns build upon [GitLab Reference Architectures](../../administration/reference_architectures/index.md) in the following ways.
+To change the default password, log in as the `root` user with the default password and [change it in the user profile](../../user/profile/user_passwords.md#change-your-password).
-### Cloud platform well architected compliance
+When our [auto scaling group](#create-an-auto-scaling-group) spins up new instances, we are able to sign in with username `root` and the newly created password.
-Testing-backed architectural qualification is a fundamental concept behind implementation patterns:
+### Create custom AMI
-- Implementation patterns maintain GitLab Reference Architecture compliance and provide [GitLab Performance Tool](https://gitlab.com/gitlab-org/quality/performance) (GPT) reports to demonstrate adherence to them.
-- Implementation patterns may be qualified by and/or contributed to by the technology vendor. For instance, an implementation pattern for AWS may be officially reviewed by AWS.
-- Implementation patterns may specify and test Cloud Platform PaaS services for suitability for GitLab. This testing can be coordinated and help qualify these technologies for Reference Architectures. For instance, qualifying compatibility with and availability of runtime versions of top level PaaS such as those for PostgreSQL and Redis.
-- Implementation patterns can provided qualified testing for platform limitations, for example, ensuring Gitaly Cluster can work correctly on specific Cloud Platform availability zone latency and throughput characteristics or qualifying what levels of available platform partner local disk performance is workable for Gitaly server to operate with integrity.
+On the EC2 dashboard:
-### Platform partner specificity
+1. Select the `GitLab` instance we [created earlier](#install-gitlab).
+1. Select **Actions**, scroll down to **Image** and select **Create Image**.
+1. Give your image a name and description (we use `GitLab-Source` for both).
+1. Leave everything else as default and select **Create Image**.
-Implementation patterns enable platform-specific terminology, best practice architecture, and platform-specific build manifests:
+Now we have a custom AMI that we use to create our launch configuration in the next step.
-- Implementation patterns are more vendor specific. For instance, advising specific compute instances / VMs / nodes instead of vCPUs or other generalized measures.
-- Implementation patterns are oriented to implementing good architecture for the vendor in view.
-- Implementation patterns are written to an audience who is familiar with building on the infrastructure that the implementation pattern targets. For example, if the implementation pattern is for GCP, the specific terminology of GCP is used - including using the specific names for PaaS services.
-- Implementation patterns can test and qualify if the versions of PaaS available are compatible with GitLab (for example, PostgreSQL, Redis, etc.).
+## Deploy GitLab inside an auto scaling group
+
+### Create a launch configuration
+
+From the EC2 dashboard:
+
+1. Select **Launch Configurations** from the left menu and select **Create launch configuration**.
+1. Select **My AMIs** from the left menu and select the `GitLab-Source` custom AMI we created above.
+1. Select an instance type best suited for your needs (at least a `c5.xlarge`) and select **Configure details**.
+1. Enter a name for your launch configuration (we use `gitlab-ha-launch-config`).
+1. **Do not** check **Request Spot Instance**.
+1. From the **IAM Role** dropdown list, pick the `GitLabS3Access` instance role we [created earlier](#create-an-iam-ec2-instance-role-and-profile).
+1. Leave the rest as defaults and select **Add Storage**.
+1. The root volume is 8GiB by default and should be enough given that we do not store any data there. Select **Configure Security Group**.
+1. Check **Select an existing security group** and select the `gitlab-loadbalancer-sec-group` we created earlier.
+1. Select **Review**, review your changes, and select **Create launch configuration**.
+1. Acknowledge that you have access to the private key or create a new one. Select **Create launch configuration**.
+
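+A CLI sketch of the same launch configuration follows (the AMI ID, security group ID, and key pair name are placeholders):
+
+```shell
+aws autoscaling create-launch-configuration \
+  --launch-configuration-name gitlab-ha-launch-config \
+  --image-id <gitlab-source-ami-id> \
+  --instance-type c5.xlarge \
+  --iam-instance-profile GitLabS3Access \
+  --security-groups <gitlab-loadbalancer-sec-group-id> \
+  --key-name <your-key-pair-name>
+```
+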
+### Create an auto scaling group
+
+1. After the launch configuration is created, select **Create an Auto Scaling group using this launch configuration** to start creating the auto scaling group.
+1. Enter a **Group name** (we use `gitlab-auto-scaling-group`).
+1. For **Group size**, enter the number of instances you want to start with (we enter `2`).
+1. Select the `gitlab-vpc` from the **Network** dropdown list.
+1. Add both the private [subnets we created earlier](#subnets).
+1. Expand the **Advanced Details** section and check the **Receive traffic from one or more load balancers** option.
+1. From the **Classic Load Balancers** dropdown list, select the load balancer we created earlier.
+1. For **Health Check Type**, select **ELB**.
+1. We leave our **Health Check Grace Period** as the default `300` seconds. Select **Configure scaling policies**.
+1. Check **Use scaling policies to adjust the capacity of this group**.
+1. For this group we scale between 2 and 4 instances where one instance is added if CPU
+utilization is greater than 60% and one instance is removed if it falls
+to less than 45%.
+
+![Auto scaling group policies](img/policies.png)
+
+1. Finally, configure notifications and tags as you see fit, review your changes, and create the
+auto scaling group.
+
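+A rough CLI equivalent of the group is below. As a simpler stand-in for the two step scaling policies described above, this sketch uses a single target tracking policy on average CPU utilization (the subnet IDs are placeholders):
+
+```shell
+aws autoscaling create-auto-scaling-group \
+  --auto-scaling-group-name gitlab-auto-scaling-group \
+  --launch-configuration-name gitlab-ha-launch-config \
+  --min-size 2 \
+  --max-size 4 \
+  --desired-capacity 2 \
+  --vpc-zone-identifier "<private-subnet-a-id>,<private-subnet-b-id>" \
+  --load-balancer-names gitlab-loadbalancer \
+  --health-check-type ELB \
+  --health-check-grace-period 300
+
+# Keep average CPU utilization around 60% across the group.
+aws autoscaling put-scaling-policy \
+  --auto-scaling-group-name gitlab-auto-scaling-group \
+  --policy-name gitlab-cpu-target \
+  --policy-type TargetTrackingScaling \
+  --target-tracking-configuration '{
+    "PredefinedMetricSpecification": { "PredefinedMetricType": "ASGAverageCPUUtilization" },
+    "TargetValue": 60.0
+  }'
+```
+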
+As the auto scaling group is created, you see your new instances spinning up in your EC2 dashboard. You also see the new instances added to your load balancer. After the instances pass the health check, they are ready to start receiving traffic from the load balancer.
+
+Because our instances are created by the auto scaling group, go back to your instances and terminate the [instance we created manually above](#install-gitlab). We only needed this instance to create our custom AMI.
+
+## Health check and monitoring with Prometheus
+
+Apart from Amazon CloudWatch, which you can enable on various services,
+GitLab provides its own integrated monitoring solution based on Prometheus.
+For more information about how to set it up, see
+[GitLab Prometheus](../../administration/monitoring/prometheus/index.md).
+
+GitLab also has various [health check endpoints](../../administration/monitoring/health_check.md)
+that you can ping to get status reports.
+
+## GitLab Runner
+
+If you want to take advantage of [GitLab CI/CD](../../ci/index.md), you have to
+set up at least one [runner](https://docs.gitlab.com/runner/).
+
+Read more on configuring an
+[autoscaling GitLab Runner on AWS](https://docs.gitlab.com/runner/configuration/runner_autoscale_aws/).
+
+## Backup and restore
+
+GitLab provides [a tool to back up](../../administration/backup_restore/index.md)
+and restore its Git data, database, attachments, LFS objects, and so on.
+
+Some important things to know:
+
+- The backup/restore tool **does not** store some configuration files, like secrets; you
+ must [configure this yourself](../../administration/backup_restore/backup_gitlab.md#storing-configuration-files).
+- By default, the backup files are stored locally, but you can
+  [back up GitLab using S3](../../administration/backup_restore/backup_gitlab.md#using-amazon-s3).
+- You can [exclude specific directories from the backup](../../administration/backup_restore/backup_gitlab.md#excluding-specific-directories-from-the-backup).
+
+### Backing up GitLab
+
+To back up GitLab:
+
+1. SSH into your instance.
+1. Take a backup:
+
+ ```shell
+ sudo gitlab-backup create
+ ```
+
+NOTE:
+For GitLab 12.1 and earlier, use `gitlab-rake gitlab:backup:create`.
+
+### Restoring GitLab from a backup
+
+To restore GitLab, first review the [restore documentation](../../administration/backup_restore/index.md#restore-gitlab),
+and primarily the restore prerequisites. Then, follow the steps under the
+[Linux package installations section](../../administration/backup_restore/restore_gitlab.md#restore-for-linux-package-installations).
+
+## Updating GitLab
+
+GitLab releases a new version every month on the [release date](https://about.gitlab.com/releases/). Whenever a new version is
+released, you can update your GitLab instance:
+
+1. SSH into your instance.
+1. Take a backup:
+
+ ```shell
+ sudo gitlab-backup create
+ ```
+
+NOTE:
+For GitLab 12.1 and earlier, use `gitlab-rake gitlab:backup:create`.
-### Platform as a Service (PaaS) specification and usage
+1. Update the repositories and install GitLab:
-Platform as a Service options are a huge portion of the value provided by Cloud Platforms as they simplify operational complexity and reduce the SRE and security skilling required to operate advanced, highly available technology services. Implementation patterns can be pre-qualified against the partner PaaS options.
+ ```shell
+ sudo apt update
+ sudo apt install gitlab-ee
+ ```
-- Implementation patterns help implementers understand what PaaS options are known to work and how to choose between PaaS solutions when a single platform has more than one PaaS option for the same GitLab role.
-- For instance, where reference architectures do not have a specific recommendation on what technology is leveraged for GitLab outbound email services or what the sizing should be - a Reference Implementation may advise using a cloud providers Email as a Service (PaaS) and possibly even with specific settings.
+After a few minutes, the new version should be up and running.
-### Cost optimizing engineering
+## Find official GitLab-created AMI IDs on AWS
-Cost engineering is a fundamental aspect of Cloud Architecture and frequently the savings capabilities available on a platform exert strong influence on how to build out scaled computing.
+Read more on how to use [GitLab releases as AMIs](../../solutions/cloud/aws/gitlab_single_box_on_aws.md#official-gitlab-releases-as-amis).
-- Implementation patterns may define GPT tested autoscaling for various aspects of GitLab infrastructure, including minimum idling configurations and scaling speeds.
-- Implementation patterns may provide GPT testing for advised configurations that go beyond the scope of reference architectures, for instance GPT tested elastic scaling configurations for Cloud Native Hybrid that enable lower resourcing during periods of lower usage (for example on the weekend).
-- Implementation patterns may engineer specifically for the savings models available on a platform provider. An AWS example would be maximizing the occurrence of a specific instance type for taking advantage of reserved instances.
-- Implementation patterns may leverage ephemeral compute where appropriate and with appropriate customer guidelines. For instance, a Kubernetes node group dedicated to runners on ephemeral compute (with appropriate GitLab Runner tagging to indicate the compute type).
-- Implementation patterns may include vendor specific cost calculators.
+## Conclusion
-### Actionability and automatability orientation
+In this guide, we mostly went through scaling and some redundancy options;
+your mileage may vary.
-Implementation patterns are one step closer to specifics that can be used as a source for build instructions and automation code:
+Keep in mind that all solutions come with a trade-off between
+cost/complexity and uptime. The more uptime you want, the more complex the solution.
+And the more complex the solution, the more work is involved in setting up and
+maintaining it.
-- Implementation patterns enable builders to generate a list of vendor specific resources required to implement GitLab for a given Reference Architecture.
-- Implementation patterns enable builders to use manual instructions or to create automation to build out the reference implementation.
+Have a read through these other resources and feel free to
+[open an issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new)
+to request additional material:
-## Supplementary implementation patterns
+- [Scaling GitLab](../../administration/reference_architectures/index.md):
+ GitLab supports several different types of clustering.
+- [Geo replication](../../administration/geo/index.md):
+ Geo is the solution for widely distributed development teams.
+- [Linux package](https://docs.gitlab.com/omnibus/) - Everything you must know
+ about administering your GitLab instance.
+- [Add a license](../../administration/license.md):
+ Activate all GitLab Enterprise Edition functionality with a license.
+- [Pricing](https://about.gitlab.com/pricing/): Pricing for the different tiers.
-Implementation patterns may also provide specialized implementations beyond the scope of reference architecture compliance, especially where the cost of enablement can be more appropriately managed.
+## Troubleshooting
-For example:
+### Instances are failing health checks
-- Small, self-contained GitLab instances for per-person administration training, perhaps on Kubernetes so that a deployment cluster is self-contained as well.
-- GitLab Runner implementation patterns, including using platform-specific PaaS.
+If your instances are failing the load balancer's health checks, verify that they are returning a status `200` from the health check endpoint we configured earlier. Any other status, including redirects like status `302`, causes the health check to fail.
-## Intended audiences and contributors
+You may have to set a password on the `root` user to prevent automatic redirects on the sign-in endpoint before health checks pass.
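+
+To check what an instance is actually returning, you can query the readiness endpoint directly from a host inside the VPC, such as a bastion host (the instance IP is a placeholder):
+
+```shell
+curl -i "http://<instance-private-ip>/-/readiness"
+```
+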
-The primary audiences for and contributors to this information is the GitLab **Implementation Eco System** which consists of at least:
+### "The change you requested was rejected (422)"
-GitLab Implementation Community:
+If you see this page when trying to set a password via the web interface, make sure `external_url` in `gitlab.rb` matches the domain you are making a request from, and run `sudo gitlab-ctl reconfigure` after making any changes to it.
-- Customers
-- GitLab Channel Partners (Integrators)
-- Platform Partners
+### Some job logs are not uploaded to object storage
-GitLab Internal Implementation Teams:
+When the GitLab deployment is scaled up to more than one node, some job logs may not be uploaded to [object storage](../../administration/object_storage.md) properly. [Incremental logging is required](../../administration/object_storage.md#alternatives-to-file-system-storage) for CI to use object storage.
-- Quality / Distribution / Self-Managed
-- Alliances
-- Training
-- Support
-- Professional Services
-- Public Sector
+Enable [incremental logging](../../administration/job_logs.md#enable-or-disable-incremental-logging) if it has not already been enabled.
diff --git a/doc/install/aws/manual_install_aws.md b/doc/install/aws/manual_install_aws.md
index a952180674c..0019c8c3472 100644
--- a/doc/install/aws/manual_install_aws.md
+++ b/doc/install/aws/manual_install_aws.md
@@ -1,856 +1,11 @@
---
-stage: Systems
-group: Distribution
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+redirect_to: 'index.md'
+remove_date: '2024-03-31'
---
-{::options parse_block_html="true" /}
+This document was moved to [AWS](index.md).
-# Installing a GitLab POC on Amazon Web Services (AWS) **(FREE SELF)**
-
-This page offers a walkthrough of a common configuration for GitLab on AWS using the official Linux package. You should customize it to accommodate your needs.
-
-NOTE:
-For organizations with 1,000 users or less, the recommended AWS installation method is to launch an EC2 single box [Linux package installation](https://about.gitlab.com/install/) and implement a snapshot strategy for backing up the data. See the [1,000 user reference architecture](../../administration/reference_architectures/1k_users.md) for more information.
-
-## Getting started for production-grade GitLab
-
-NOTE:
-This document is an installation guide for a proof of concept instance. It is not a reference architecture and it does not result in a highly available configuration.
-
-Following this guide exactly results in a proof of concept instance that roughly equates to a **scaled down** version of a **two availability zone implementation** of the **Non-HA** [2000 User Reference Architecture](../../administration/reference_architectures/2k_users.md). The 2K reference architecture is not HA because it is primarily intended to provide some scaling while keeping costs and complexity low. The [3000 User Reference Architecture](../../administration/reference_architectures/3k_users.md) is the smallest reference architecture that is highly available. It has additional service roles to achieve HA; most notably, it uses Gitaly Cluster to achieve HA for Git repository storage and specifies triple redundancy.
-
-GitLab maintains and tests two main types of Reference Architectures. The **Linux package architectures** are implemented on instance compute while **Cloud Native Hybrid architectures** maximize the use of a Kubernetes cluster. Cloud Native Hybrid reference architecture specifications are addendum sections to the Reference Architecture size pages that start by describing the Linux package architecture. For example, the 3000 User Cloud Native Reference Architecture is in the subsection titled [Cloud Native Hybrid reference architecture with Helm Charts (alternative)](../../administration/reference_architectures/3k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) in the 3000 User Reference Architecture page.
-
-### Getting started for production-grade Linux package installations
-
-The Infrastructure as Code tooling [GitLab Environment Tool (GET)](https://gitlab.com/gitlab-org/gitlab-environment-toolkit/-/tree/main) is the best place to start for building using the Linux package on AWS and most especially if you are targeting an HA setup. While it does not automate everything, it does complete complex setups like Gitaly Cluster for you. GET is open source so anyone can build on top of it and contribute improvements to it.
-
-### Getting started for production-grade Cloud Native Hybrid GitLab
-
-For the Cloud Native Hybrid architectures there are two Infrastructure as Code options which are compared in GitLab Cloud Native Hybrid on AWS EKS implementation pattern in the section [Available Infrastructure as Code for GitLab Cloud Native Hybrid](gitlab_hybrid_on_aws.md#available-infrastructure-as-code-for-gitlab-cloud-native-hybrid). It compares the [GitLab Environment Toolkit](https://gitlab.com/gitlab-org/gitlab-environment-toolkit/-/tree/main) to the AWS Quick Start for GitLab Cloud Native Hybrid on EKS which was co-developed by GitLab and AWS. GET and the AWS Quick Start are both open source so anyone can build on top of them and contribute improvements to them.
-
-## Introduction
-
-For the most part, we make use of the Linux package in our setup, but we also leverage native AWS services. Instead of using the Linux package-bundled PostgreSQL and Redis, we use Amazon RDS and ElastiCache.
-
-In this guide, we go through a multi-node setup where we start by
-configuring our Virtual Private Cloud and subnets to later integrate
-services such as RDS for our database server and ElastiCache as a Redis
-cluster to finally manage them in an auto scaling group with custom
-scaling policies.
-
-## Requirements
-
-In addition to having a basic familiarity with [AWS](https://docs.aws.amazon.com/) and [Amazon EC2](https://docs.aws.amazon.com/ec2/), you need:
-
-- [An AWS account](https://console.aws.amazon.com/console/home)
-- [To create or upload an SSH key](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
- to connect to the instance via SSH
-- A domain name for the GitLab instance
-- An SSL/TLS certificate to secure your domain. If you do not already own one, you can provision a free public SSL/TLS certificate through [AWS Certificate Manager](https://aws.amazon.com/certificate-manager/)(ACM) for use with the [Elastic Load Balancer](#load-balancer) we create.
-
-NOTE:
-It can take a few hours to validate a certificate provisioned through ACM. To avoid delays later, request your certificate as soon as possible.
-
-## Architecture
-
-Below is a diagram of the recommended architecture.
-
-![AWS architecture diagram](img/aws_ha_architecture_diagram.png)
-
-## AWS costs
-
-GitLab uses the following AWS services, with links to pricing information:
-
-- **EC2**: GitLab is deployed on shared hardware, for which
- [on-demand pricing](https://aws.amazon.com/ec2/pricing/on-demand/) applies.
- If you want to run GitLab on a dedicated or reserved instance, see the
- [EC2 pricing page](https://aws.amazon.com/ec2/pricing/) for information about
- its cost.
-- **S3**: GitLab uses S3 ([pricing page](https://aws.amazon.com/s3/pricing/)) to
- store backups, artifacts, and LFS objects.
-- **ELB**: A Classic Load Balancer ([pricing page](https://aws.amazon.com/elasticloadbalancing/pricing/)),
- used to route requests to the GitLab instances.
-- **RDS**: An Amazon Relational Database Service using PostgreSQL
- ([pricing page](https://aws.amazon.com/rds/postgresql/pricing/)).
-- **ElastiCache**: An in-memory cache environment ([pricing page](https://aws.amazon.com/elasticache/pricing/)),
- used to provide a Redis configuration.
-
-## Create an IAM EC2 instance role and profile
-
-As we are using [Amazon S3 object storage](#amazon-s3-object-storage), our EC2 instances must have read, write, and list permissions for our S3 buckets. To avoid embedding AWS keys in our GitLab configuration, we make use of an [IAM Role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) to grant our GitLab instance this access. We must create an IAM policy to attach to our IAM role:
-
-### Create an IAM Policy
-
-1. Go to the IAM dashboard and select **Policies** in the left menu.
-1. Select **Create policy**, select the `JSON` tab, and add a policy. We want to [follow security best practices and grant _least privilege_](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege), giving our role only the permissions needed to perform the required actions.
- 1. Assuming you prefix the S3 bucket names with `gl-` as shown in the diagram, add the following policy:
-
- ```json
- { "Version": "2012-10-17",
- "Statement": [
- {
- "Effect": "Allow",
- "Action": [
- "s3:PutObject",
- "s3:GetObject",
- "s3:DeleteObject",
- "s3:PutObjectAcl"
- ],
- "Resource": "arn:aws:s3:::gl-*/*"
- },
- {
- "Effect": "Allow",
- "Action": [
- "s3:ListBucket",
- "s3:AbortMultipartUpload",
- "s3:ListMultipartUploadParts",
- "s3:ListBucketMultipartUploads"
- ],
- "Resource": "arn:aws:s3:::gl-*"
- }
- ]
- }
- ```
-
-1. Select **Review policy**, give your policy a name (we use `gl-s3-policy`), and select **Create policy**.
-
-### Create an IAM Role
-
-1. Still on the IAM dashboard, select **Roles** in the left menu, and
- select **Create role**.
-1. Create a new role by selecting **AWS service > EC2**, then select
- **Next: Permissions**.
-1. In the policy filter, search for the `gl-s3-policy` we created above, select it, and select **Tags**.
-1. Add tags if needed and select **Review**.
-1. Give the role a name (we use `GitLabS3Access`) and select **Create Role**.
-
-We use this role when we [create a launch configuration](#create-a-launch-configuration) later on.
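-
-If you prefer to script these IAM steps, the following AWS CLI sketch creates the same policy, role, and an instance profile for EC2. It assumes the policy JSON above is saved as `gl-s3-policy.json`; the trust policy file name and the account ID are placeholders:
-
-```shell
-# Trust policy that lets EC2 assume the role (standard EC2 service principal).
-cat > ec2-trust-policy.json <<'EOF'
-{
-  "Version": "2012-10-17",
-  "Statement": [
-    { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }
-  ]
-}
-EOF
-
-aws iam create-policy --policy-name gl-s3-policy --policy-document file://gl-s3-policy.json
-aws iam create-role --role-name GitLabS3Access --assume-role-policy-document file://ec2-trust-policy.json
-aws iam attach-role-policy --role-name GitLabS3Access \
-  --policy-arn "arn:aws:iam::<your-account-id>:policy/gl-s3-policy"
-
-# EC2 consumes roles through an instance profile, so wrap the role in one.
-aws iam create-instance-profile --instance-profile-name GitLabS3Access
-aws iam add-role-to-instance-profile --instance-profile-name GitLabS3Access --role-name GitLabS3Access
-```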
-
-## Configuring the network
-
-We start by creating a VPC for our GitLab cloud infrastructure, then
-we can create subnets to have public and private instances in at least
-two [Availability Zones (AZs)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html). Public subnets require a route table and an associated
-Internet Gateway.
-
-### Creating the Virtual Private Cloud (VPC)
-
-We now create a VPC, a virtual networking environment that you control:
-
-1. Sign in to [Amazon Web Services](https://console.aws.amazon.com/vpc/home).
-1. Select **Your VPCs** from the left menu and then select **Create VPC**.
- At the "Name tag" enter `gitlab-vpc` and at the "IPv4 CIDR block" enter
- `10.0.0.0/16`. If you don't require dedicated hardware, you can leave
- "Tenancy" as default. Select **Yes, Create** when ready.
-
- ![Create VPC](img/create_vpc.png)
-
-1. Select the VPC, select **Actions**, select **Edit DNS resolution**, and enable DNS resolution. Select **Save** when done.
-
-### Subnets
-
-Now, let's create some subnets in different Availability Zones. Make sure
-that each subnet is associated with the VPC we just created and
-that CIDR blocks don't overlap. This also
-allows us to enable multi AZ for redundancy.
-
-We create both public and private subnets, to be used by the load
-balancer and the RDS instances:
-
-1. Select **Subnets** from the left menu.
-1. Select **Create subnet**. Give it a descriptive name tag based on the IP,
- for example `gitlab-public-10.0.0.0`, select the VPC we created previously, select an availability zone (we use `us-west-2a`),
-   and at the IPv4 CIDR block let's give it a /24 subnet, `10.0.0.0/24`:
-
- ![Create subnet](img/create_subnet.png)
-
-1. Follow the same steps to create all subnets:
-
- | Name tag | Type | Availability Zone | CIDR block |
- | ------------------------- | ------- | ----------------- | ------------- |
- | `gitlab-public-10.0.0.0` | public | `us-west-2a` | `10.0.0.0/24` |
- | `gitlab-private-10.0.1.0` | private | `us-west-2a` | `10.0.1.0/24` |
- | `gitlab-public-10.0.2.0` | public | `us-west-2b` | `10.0.2.0/24` |
- | `gitlab-private-10.0.3.0` | private | `us-west-2b` | `10.0.3.0/24` |
-
-1. Once all the subnets are created, enable **Auto-assign IPv4** for the two public subnets:
- 1. Select each public subnet in turn, select **Actions**, and select **Modify auto-assign IP settings**. Enable the option and save.
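-
-The same subnet layout can also be built with the AWS CLI. A sketch for one Availability Zone, assuming `$VPC_ID` holds the ID of `gitlab-vpc` (repeat for `us-west-2b` with the CIDR blocks from the table above):
-
-```shell
-# One public and one private subnet in us-west-2a.
-aws ec2 create-subnet --vpc-id "$VPC_ID" --availability-zone us-west-2a --cidr-block 10.0.0.0/24
-aws ec2 create-subnet --vpc-id "$VPC_ID" --availability-zone us-west-2a --cidr-block 10.0.1.0/24
-
-# Tag each subnet with its name from the table.
-aws ec2 create-tags --resources <public-subnet-id> --tags Key=Name,Value=gitlab-public-10.0.0.0
-
-# Enable auto-assign public IPv4 on the public subnets only.
-aws ec2 modify-subnet-attribute --subnet-id <public-subnet-id> --map-public-ip-on-launch
-```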
-
-### Internet Gateway
-
-Now, still on the same dashboard, go to Internet Gateways and
-create a new one:
-
-1. Select **Internet Gateways** from the left menu.
-1. Select **Create internet gateway**, give it the name `gitlab-gateway` and
- select **Create**.
-1. Select it from the table, and then under the **Actions** dropdown list choose
- "Attach to VPC".
-
- ![Create gateway](img/create_gateway.png)
-
-1. Choose `gitlab-vpc` from the list and hit **Attach**.
-
-### Create NAT Gateways
-
-Instances deployed in our private subnets must connect to the internet for updates, but should not be reachable from the public internet. To achieve this, we make use of [NAT Gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) deployed in each of our public subnets:
-
-1. Go to the VPC dashboard and select **NAT Gateways** in the left menu bar.
-1. Select **Create NAT Gateway** and complete the following:
- 1. **Subnet**: Select `gitlab-public-10.0.0.0` from the dropdown list.
- 1. **Elastic IP Allocation ID**: Enter an existing Elastic IP or select **Allocate Elastic IP address** to allocate a new IP to your NAT gateway.
- 1. Add tags if needed.
- 1. Select **Create NAT Gateway**.
-
-Create a second NAT gateway but this time place it in the second public subnet, `gitlab-public-10.0.2.0`.
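-
-If you are scripting the build, each NAT gateway amounts to an Elastic IP plus a public subnet; a CLI sketch with placeholder IDs:
-
-```shell
-# Allocate an Elastic IP and note the returned AllocationId.
-aws ec2 allocate-address --domain vpc
-
-# Create the NAT gateway in a public subnet using that allocation.
-aws ec2 create-nat-gateway \
-  --subnet-id <gitlab-public-subnet-id> \
-  --allocation-id <eipalloc-id>
-```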
-
-### Route Tables
-
-#### Public Route Table
-
-We must create a route table for our public subnets to reach the internet via the internet gateway we created in the previous step.
-
-On the VPC dashboard:
-
-1. Select **Route Tables** from the left menu.
-1. Select **Create Route Table**.
-1. At the "Name tag" enter `gitlab-public` and choose `gitlab-vpc` under "VPC".
-1. Select **Create**.
-
-We now must add our internet gateway as a new target and have
-it receive traffic from any destination.
-
-1. Select **Route Tables** from the left menu and select the `gitlab-public`
- route to show the options at the bottom.
-1. Select the **Routes** tab, select **Edit routes > Add route** and set `0.0.0.0/0`
- as the destination. In the target column, select the `gitlab-gateway` we created previously.
- Select **Save routes** when done.
-
-Next, we must associate the **public** subnets to the route table:
-
-1. Select the **Subnet Associations** tab and select **Edit subnet associations**.
-1. Check only the public subnets and select **Save**.
-
-#### Private Route Tables
-
-We also must create two private route tables so that instances in each private subnet can reach the internet via the NAT gateway in the corresponding public subnet in the same availability zone.
-
-1. Follow the same steps as above to create two private route tables. Name them `gitlab-private-a` and `gitlab-private-b`.
-1. Next, add a new route to each of the private route tables where the destination is `0.0.0.0/0` and the target is one of the NAT gateways we created earlier.
- 1. Add the NAT gateway we created in `gitlab-public-10.0.0.0` as the target for the new route in the `gitlab-private-a` route table.
-   1. Similarly, add the NAT gateway in `gitlab-public-10.0.2.0` as the target for the new route in the `gitlab-private-b` route table.
-1. Lastly, associate each private subnet with a private route table.
- 1. Associate `gitlab-private-10.0.1.0` with `gitlab-private-a`.
- 1. Associate `gitlab-private-10.0.3.0` with `gitlab-private-b`.
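-
-The route tables, routes, and associations above can also be scripted; a CLI sketch with placeholder IDs:
-
-```shell
-# Public route table: default route to the internet gateway.
-aws ec2 create-route-table --vpc-id "$VPC_ID"
-aws ec2 create-route --route-table-id <gitlab-public-rtb-id> \
-  --destination-cidr-block 0.0.0.0/0 --gateway-id <igw-id>
-aws ec2 associate-route-table --route-table-id <gitlab-public-rtb-id> --subnet-id <public-subnet-id>
-
-# Private route tables: default route to the NAT gateway in the same AZ.
-aws ec2 create-route --route-table-id <gitlab-private-a-rtb-id> \
-  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id <nat-gateway-a-id>
-aws ec2 associate-route-table --route-table-id <gitlab-private-a-rtb-id> --subnet-id <private-subnet-a-id>
-```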
-
-## Load Balancer
-
-We create a load balancer to evenly distribute inbound traffic on ports `80` and `443` across our GitLab application servers. Based on the [scaling policies](#create-an-auto-scaling-group) we create later, instances are added to or removed from our load balancer as needed. Additionally, the load balancer performs health checks on our instances.
-
-On the EC2 dashboard, look for Load Balancer in the left navigation bar:
-
-1. Select **Create Load Balancer**.
- 1. Choose the **Classic Load Balancer**.
- 1. Give it a name (we use `gitlab-loadbalancer`) and for the **Create LB Inside** option, select `gitlab-vpc` from the dropdown list.
- 1. In the **Listeners** section, set the following listeners:
- - HTTP port 80 for both load balancer and instance protocol and ports
- - TCP port 22 for both load balancer and instance protocols and ports
- - HTTPS port 443 for load balancer protocol and ports, forwarding to HTTP port 80 on the instance (we configure GitLab to listen on port 80 [later in the guide](#add-support-for-proxied-ssl))
- 1. In the **Select Subnets** section, select both public subnets from the list so that the load balancer can route traffic to both availability zones.
-1. We add a security group for our load balancer to act as a firewall to control what traffic is allowed through. Select **Assign Security Groups** and select **Create a new security group**, give it a name
- (we use `gitlab-loadbalancer-sec-group`) and description, and allow both HTTP and HTTPS traffic
- from anywhere (`0.0.0.0/0, ::/0`). Also allow SSH traffic, select a custom source, and add a single trusted IP address or an IP address range in CIDR notation. This allows users to perform Git actions over SSH.
-1. Select **Configure Security Settings** and set the following:
- 1. Select an SSL/TLS certificate from ACM or upload a certificate to IAM.
- 1. Under **Select a Cipher**, pick a predefined security policy from the dropdown list. You can see a breakdown of [Predefined SSL Security Policies for Classic Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) in the AWS documentation. Check the GitLab codebase for a list of [supported SSL ciphers and protocols](https://gitlab.com/gitlab-org/gitlab/-/blob/9ee7ad433269b37251e0dd5b5e00a0f00d8126b4/lib/support/nginx/gitlab-ssl#L97-99).
-1. Select **Configure Health Check** and set up a health check for your EC2 instances.
- 1. For **Ping Protocol**, select HTTP.
- 1. For **Ping Port**, enter 80.
- 1. For **Ping Path** - we recommend that you [use the Readiness check endpoint](../../administration/load_balancer.md#readiness-check). You must add [the VPC IP Address Range (CIDR)](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-groups.html#elb-vpc-nacl) to the [IP allowlist](../../administration/monitoring/ip_allowlist.md) for the [Health Check endpoints](../../administration/monitoring/health_check.md)
- 1. Keep the default **Advanced Details** or adjust them according to your needs.
-1. Select **Add EC2 Instances** - don't add anything as we create an Auto Scaling Group later to manage instances for us.
-1. Select **Add Tags** and add any tags you need.
-1. Select **Review and Create**, review all your settings, and select **Create** if you're happy.
-
-After the Load Balancer is up and running, you can revisit your Security
-Groups to restrict access so that traffic is allowed only through the ELB,
-plus any other access you might require.
-
-### Configure DNS for Load Balancer
-
-On the Route 53 dashboard, select **Hosted zones** in the left navigation bar:
-
-1. Select an existing hosted zone or, if you do not already have one for your domain, select **Create Hosted Zone**, enter your domain name, and select **Create**.
-1. Select **Create Record Set** and provide the following values:
- 1. **Name:** Use the domain name (the default value) or enter a subdomain.
- 1. **Type:** Select **A - IPv4 address**.
- 1. **Alias:** Defaults to **No**. Select **Yes**.
- 1. **Alias Target:** Find the **ELB Classic Load Balancers** section and select the classic load balancer we created earlier.
- 1. **Routing Policy:** We use **Simple** but you can choose a different policy based on your use case.
- 1. **Evaluate Target Health:** We set this to **No** but you can choose to have the load balancer route traffic based on target health.
- 1. Select **Create**.
-1. If you registered your domain through Route 53, you're done. If you used a different domain registrar, you must update your DNS records with your domain registrar. You must:
- 1. Select **Hosted zones** and select the domain you added above.
- 1. You see a list of `NS` records. From your domain registrar's administrator panel, add each of these as `NS` records to your domain's DNS records. These steps may vary between domain registrars. If you're stuck, Google **"name of your registrar" add DNS records** and you should find a help article specific to your domain registrar.
-
-The steps for doing this vary depending on which registrar you use and are beyond the scope of this guide.
-
-## PostgreSQL with RDS
-
-For our database server we use Amazon RDS for PostgreSQL which offers Multi AZ
-for redundancy (Aurora is **not** supported). First we create a security group and subnet group, then we
-create the actual RDS instance.
-
-### RDS Security Group
-
-We need a security group for our database that allows inbound traffic from the instances we deploy in our `gitlab-loadbalancer-sec-group` later on:
-
-1. From the EC2 dashboard, select **Security Groups** from the left menu bar.
-1. Select **Create security group**.
-1. Give it a name (we use `gitlab-rds-sec-group`), a description, and select the `gitlab-vpc` from the **VPC** dropdown list.
-1. In the **Inbound rules** section, select **Add rule** and set the following:
- 1. **Type:** search for and select the **PostgreSQL** rule.
- 1. **Source type:** set as "Custom".
- 1. **Source:** select the `gitlab-loadbalancer-sec-group` we created earlier.
-1. When done, select **Create security group**.
-
-### RDS Subnet Group
-
-1. Go to the RDS dashboard and select **Subnet Groups** from the left menu.
-1. Select **Create DB Subnet Group**.
-1. Under **Subnet group details**, enter a name (we use `gitlab-rds-group`), a description, and choose the `gitlab-vpc` from the VPC dropdown list.
-1. From the **Availability Zones** dropdown list, select the Availability Zones that include the subnets you've configured. In our case, we add `eu-west-2a` and `eu-west-2b`.
-1. From the **Subnets** dropdown list, select the two private subnets (`10.0.1.0/24` and `10.0.3.0/24`) as we defined them in the [subnets section](#subnets).
-1. Select **Create** when ready.
-
-### Create the database
-
-WARNING:
-Avoid using burstable instances (t class instances) for the database as this could lead to performance issues due to CPU credits running out during sustained periods of high load.
-
-Now, it's time to create the database:
-
-1. Go to the RDS dashboard, select **Databases** from the left menu, and select **Create database**.
-1. Select **Standard Create** for the database creation method.
-1. Select **PostgreSQL** as the database engine and select the minimum PostgreSQL version as defined for your GitLab version in our [database requirements](../../install/requirements.md#postgresql-requirements).
-1. Because this is a production server, let's choose **Production** from the **Templates** section.
-1. Under **Settings**, use:
- - `gitlab-db-ha` for the DB instance identifier.
- - `gitlab` for a master username.
- - A very secure password for the master password.
-
- Make a note of these as we need them later.
-
-1. For the DB instance size, select **Standard classes** and select an instance size that meets your requirements from the dropdown list. We use a `db.m4.large` instance.
-1. Under **Storage**, configure the following:
- 1. Select **Provisioned IOPS (SSD)** from the storage type dropdown list. Provisioned IOPS (SSD) storage is best suited for this use (though you can choose General Purpose (SSD) to reduce the costs). Read more about it at [Storage for Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html).
- 1. Allocate storage and set provisioned IOPS. We use the minimum values, `100` and `1000`, respectively.
- 1. Enable storage autoscaling (optional) and set a maximum storage threshold.
-1. Under **Availability & durability**, select **Create a standby instance** to have a standby RDS instance provisioned in a different [Availability Zone](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html).
-1. Under **Connectivity**, configure the following:
- 1. Select the VPC we created earlier (`gitlab-vpc`) from the **Virtual Private Cloud (VPC)** dropdown list.
- 1. Expand the **Additional connectivity configuration** section and select the subnet group (`gitlab-rds-group`) we created earlier.
- 1. Set public accessibility to **No**.
- 1. Under **VPC security group**, select **Choose existing** and select the `gitlab-rds-sec-group` we create above from the dropdown list.
- 1. Leave the database port as the default `5432`.
-1. For **Database authentication**, select **Password authentication**.
-1. Expand the **Additional configuration** section and complete the following:
- 1. The initial database name. We use `gitlabhq_production`.
- 1. Configure your preferred backup settings.
- 1. The only other change we make here is to disable auto minor version updates under **Maintenance**.
- 1. Leave all the other settings as is or tweak according to your needs.
- 1. If you're happy, select **Create database**.
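-
-If you prefer to provision the database with the AWS CLI instead of the console, a minimal sketch using the values chosen above (the security group ID and password are placeholders):
-
-```shell
-aws rds create-db-instance \
-  --db-instance-identifier gitlab-db-ha \
-  --engine postgres \
-  --db-instance-class db.m4.large \
-  --storage-type io1 --allocated-storage 100 --iops 1000 \
-  --multi-az \
-  --master-username gitlab --master-user-password '<secure-password>' \
-  --db-name gitlabhq_production --port 5432 \
-  --db-subnet-group-name gitlab-rds-group \
-  --vpc-security-group-ids <gitlab-rds-sec-group-id> \
-  --no-publicly-accessible
-```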
-
-Now that the database is created, let's move on to setting up Redis with ElastiCache.
-
-## Redis with ElastiCache
-
-ElastiCache is an in-memory hosted caching solution. Redis maintains its own
-persistence and is used to store session data, temporary cache information, and background job queues for the GitLab application.
-
-### Create a Redis Security Group
-
-1. Go to the EC2 dashboard.
-1. Select **Security Groups** from the left menu.
-1. Select **Create security group** and fill in the details. Give it a name (we use `gitlab-redis-sec-group`),
-   add a description, and choose the VPC we created previously.
-1. In the **Inbound rules** section, select **Add rule** and add a **Custom TCP** rule, set port `6379`, and set the "Custom" source as the `gitlab-loadbalancer-sec-group` we created earlier.
-1. When done, select **Create security group**.
-
-### Redis Subnet Group
-
-1. Go to the ElastiCache dashboard from your AWS console.
-1. Go to **Subnet Groups** in the left menu, and create a new subnet group (we name ours `gitlab-redis-group`).
- Make sure to select our VPC and its [private subnets](#subnets).
-1. Select **Create** when ready.
-
- ![ElastiCache subnet](img/ec_subnet.png)
-
-### Create the Redis Cluster
-
-1. Go back to the ElastiCache dashboard.
-1. Select **Redis** on the left menu and select **Create** to create a new
- Redis cluster. Do not enable **Cluster Mode** as it is [not supported](../../administration/redis/replication_and_failover_external.md#requirements). Even without cluster mode on, you still get the
- chance to deploy Redis in multiple availability zones.
-1. In the settings section:
- 1. Give the cluster a name (`gitlab-redis`) and a description.
- 1. For the version, select the latest.
- 1. Leave the port as `6379` because this is what we used in our Redis security group above.
- 1. Select the node type (at least `cache.t3.medium`, but adjust to your needs) and the number of replicas.
-1. In the advanced settings section:
- 1. Select the multi-AZ auto-failover option.
- 1. Select the subnet group we created previously.
- 1. Manually select the preferred availability zones, and under "Replica 2"
- choose a different zone than the other two.
-
- ![Redis availability zones](img/ec_az.png)
-
-1. In the security settings, edit the security groups and choose the
- `gitlab-redis-sec-group` we had previously created.
-1. Leave the rest of the settings to their default values or edit to your liking.
-1. When done, select **Create**.
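-
-The equivalent AWS CLI call, in case you want to script the Redis setup (the security group ID is a placeholder; the multi-AZ option is selected in the console steps above):
-
-```shell
-aws elasticache create-replication-group \
-  --replication-group-id gitlab-redis \
-  --replication-group-description "Redis for GitLab" \
-  --engine redis \
-  --cache-node-type cache.t3.medium \
-  --num-cache-clusters 3 \
-  --automatic-failover-enabled \
-  --cache-subnet-group-name gitlab-redis-group \
-  --security-group-ids <gitlab-redis-sec-group-id> \
-  --port 6379
-```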
-
-## Setting up Bastion Hosts
-
-Because our GitLab instances are in private subnets, we need a way to connect
-to these instances with SSH for actions that include making configuration changes
-and performing upgrades. One way of doing this is by using a [bastion host](https://en.wikipedia.org/wiki/Bastion_host),
-sometimes also referred to as a jump box.
-
-NOTE:
-If you do not want to maintain bastion hosts, you can set up [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) for access to instances. This is beyond the scope of this document.
-
-### Create Bastion Host A
-
-1. Go to the EC2 Dashboard and select **Launch instance**.
-1. Select the **Ubuntu Server 18.04 LTS (HVM)** AMI.
-1. Choose an instance type. We use a `t2.micro` as we only use the bastion host to SSH into our other instances.
-1. Select **Configure Instance Details**.
- 1. Under **Network**, select the `gitlab-vpc` from the dropdown list.
- 1. Under **Subnet**, select the public subnet we created earlier (`gitlab-public-10.0.0.0`).
- 1. Double check that under **Auto-assign Public IP** you have **Use subnet setting (Enable)** selected.
- 1. Leave everything else as default and select **Add Storage**.
-1. For storage, we leave everything as default and only add an 8GB root volume. We do not store anything on this instance.
-1. Select **Add Tags** and on the next screen select **Add Tag**.
- 1. We only set `Key: Name` and `Value: Bastion Host A`.
-1. Select **Configure Security Group**.
- 1. Select **Create a new security group**, enter a **Security group name** (we use `bastion-sec-group`), and add a description.
- 1. We enable SSH access from anywhere (`0.0.0.0/0`). If you want stricter security, specify a single IP address or an IP address range in CIDR notation.
- 1. Select **Review and Launch**
-1. Review all your settings and, if you're happy, select **Launch**.
-1. Acknowledge that you have access to an existing key pair or create a new one. Select **Launch Instance**.
-
-Confirm that you can SSH into the instance:
-
-1. On the EC2 Dashboard, select **Instances** in the left menu.
-1. Select **Bastion Host A** from your list of instances.
-1. Select **Connect** and follow the connection instructions.
-1. If you are able to connect successfully, let's move on to setting up our second bastion host for redundancy.
-
-### Create Bastion Host B
-
-1. Create an EC2 instance following the same steps as above with the following changes:
- 1. For the **Subnet**, select the second public subnet we created earlier (`gitlab-public-10.0.2.0`).
- 1. Under the **Add Tags** section, we set `Key: Name` and `Value: Bastion Host B` so that we can easily identify our two instances.
- 1. For the security group, select the existing `bastion-sec-group` we created above.
-
-### Use SSH Agent Forwarding
-
-EC2 instances running Linux use private key files for SSH authentication. You connect to your bastion host using an SSH client and the private key file stored on your client. Because the private key file is not present on the bastion host, you are not able to connect to your instances in private subnets.
-
-Storing private key files on your bastion host is a bad idea. To get around this, use SSH agent forwarding on your client. See [Securely Connect to Linux Instances Running in a Private Amazon VPC](https://aws.amazon.com/blogs/security/securely-connect-to-linux-instances-running-in-a-private-amazon-vpc/) for a step-by-step guide on how to use SSH agent forwarding.
-
-## Install GitLab and create custom AMI
-
-We need a preconfigured, custom GitLab AMI to use in our launch configuration later. As a starting point, we use the official GitLab AMI to create a GitLab instance. Then, we add our custom configuration for PostgreSQL, Redis, and Gitaly. If you prefer, instead of using the official GitLab AMI, you can also spin up an EC2 instance of your choosing and [manually install GitLab](https://about.gitlab.com/install/).
-
-### Install GitLab
-
-From the EC2 dashboard:
-
-1. Use the section below titled "[Find official GitLab-created AMI IDs on AWS](#find-official-gitlab-created-ami-ids-on-aws)" to find the correct AMI to launch.
-1. After selecting **Launch** on the desired AMI, select an instance type based on your workload. Consult the [hardware requirements](../../install/requirements.md#hardware-requirements) to choose one that fits your needs (at least `c5.xlarge`, which is sufficient to accommodate 100 users).
-1. Select **Configure Instance Details**:
- 1. In the **Network** dropdown list, select `gitlab-vpc`, the VPC we created earlier.
- 1. In the **Subnet** dropdown list, select `gitlab-private-10.0.1.0` from the list of subnets we created earlier.
- 1. Double check that **Auto-assign Public IP** is set to `Use subnet setting (Disable)`.
- 1. Select **Add Storage**.
- 1. The root volume is 8GiB by default and should be enough given that we do not store any data there.
-1. Select **Add Tags** and add any tags you may need. In our case, we only set `Key: Name` and `Value: GitLab`.
-1. Select **Configure Security Group**. Check **Select an existing security group** and select the `gitlab-loadbalancer-sec-group` we created earlier.
-1. Select **Review and launch** followed by **Launch** if you're happy with your settings.
-1. Finally, acknowledge that you have access to the selected private key file or create a new one. Select **Launch Instances**.
-
-### Add custom configuration
-
-Connect to your GitLab instance via **Bastion Host A** using [SSH Agent Forwarding](#use-ssh-agent-forwarding). Once connected, add the following custom configuration:
-
-#### Disable Let's Encrypt
-
-Because we're adding our SSL certificate at the load balancer, we do not need the GitLab built-in support for Let's Encrypt. Let's Encrypt [is enabled by default](https://docs.gitlab.com/omnibus/settings/ssl/index.html#enable-the-lets-encrypt-integration) when using an `https` domain in GitLab 10.7 and later, so we must explicitly disable it:
-
-1. Open `/etc/gitlab/gitlab.rb` and disable it:
-
- ```ruby
- letsencrypt['enable'] = false
- ```
-
-1. Save the file and reconfigure for the changes to take effect:
-
- ```shell
- sudo gitlab-ctl reconfigure
- ```
-
-#### Install the required extensions for PostgreSQL
-
-From your GitLab instance, connect to the RDS instance to verify access and to install the required `pg_trgm` and `btree_gist` extensions.
-
-To find the host or endpoint, go to **Amazon RDS > Databases** and select the database you created earlier. Look for the endpoint under the **Connectivity & security** tab.
-
-Do not include the colon and port number:
-
-```shell
-sudo /opt/gitlab/embedded/bin/psql -U gitlab -h <rds-endpoint> -d gitlabhq_production
-```
-
-At the `psql` prompt create the extension and then quit the session:
-
-```shell
-psql (10.9)
-Type "help" for help.
-
-gitlab=# CREATE EXTENSION pg_trgm;
-gitlab=# CREATE EXTENSION btree_gist;
-gitlab=# \q
-```
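-
-To confirm both extensions were created before moving on, you can list them non-interactively; for example:
-
-```shell
-sudo /opt/gitlab/embedded/bin/psql -U gitlab -h <rds-endpoint> -d gitlabhq_production \
-  -c "SELECT extname FROM pg_extension;"
-```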
-
-#### Configure GitLab to connect to PostgreSQL and Redis
-
-1. Edit `/etc/gitlab/gitlab.rb`, find the `external_url 'http://<domain>'` option
- and change it to the `https` domain you are using.
-
-1. Look for the GitLab database settings and uncomment as necessary. In
- our current case we specify the database adapter, encoding, host, name,
- username, and password:
-
- ```ruby
- # Disable the built-in Postgres
- postgresql['enable'] = false
-
- # Fill in the connection details
- gitlab_rails['db_adapter'] = "postgresql"
- gitlab_rails['db_encoding'] = "unicode"
- gitlab_rails['db_database'] = "gitlabhq_production"
- gitlab_rails['db_username'] = "gitlab"
- gitlab_rails['db_password'] = "mypassword"
- gitlab_rails['db_host'] = "<rds-endpoint>"
- ```
-
-1. Next, we must configure the Redis section by adding the host and
- uncommenting the port:
-
- ```ruby
- # Disable the built-in Redis
- redis['enable'] = false
-
- # Fill in the connection details
- gitlab_rails['redis_host'] = "<redis-endpoint>"
- gitlab_rails['redis_port'] = 6379
- ```
-
-1. Finally, reconfigure GitLab for the changes to take effect:
-
- ```shell
- sudo gitlab-ctl reconfigure
- ```
-
-1. You can also run a check and a service status to make sure
-   everything has been set up correctly:
-
- ```shell
- sudo gitlab-rake gitlab:check
- sudo gitlab-ctl status
- ```
-
-#### Set up Gitaly
-
-WARNING:
-In this architecture, having a single Gitaly server creates a single point of failure. Use
-[Gitaly Cluster](../../administration/gitaly/praefect.md) to remove this limitation.
-
-Gitaly is a service that provides high-level RPC access to Git repositories.
-It should be enabled and configured on a separate EC2 instance in one of the
-[private subnets](#subnets) we configured previously.
-
-Let's create an EC2 instance where we install Gitaly:
-
-1. From the EC2 dashboard, select **Launch instance**.
-1. Choose an AMI. In this example, we select the **Ubuntu Server 18.04 LTS (HVM), SSD Volume Type**.
-1. Choose an instance type. We pick a `c5.xlarge`.
-1. Select **Configure Instance Details**.
- 1. In the **Network** dropdown list, select `gitlab-vpc`, the VPC we created earlier.
- 1. In the **Subnet** dropdown list, select `gitlab-private-10.0.1.0` from the list of subnets we created earlier.
- 1. Double check that **Auto-assign Public IP** is set to `Use subnet setting (Disable)`.
- 1. Select **Add Storage**.
-1. Increase the Root volume size to `20 GiB` and change the **Volume Type** to `Provisioned IOPS SSD (io1)`. (This is an arbitrary size. Create a volume big enough for your repository storage requirements.)
-   1. For **IOPS** set `1000` (20 GiB x 50 IOPS). You can provision up to 50 IOPS per GiB. If you select a larger volume, increase the IOPS accordingly. Workloads where many small files are written in a serialized manner, like `git`, require performant storage, hence the choice of `Provisioned IOPS SSD (io1)`.
-1. Select **Add Tags** and add your tags. In our case, we only set `Key: Name` and `Value: Gitaly`.
-1. Select **Configure Security Group** and let's **Create a new security group**.
- 1. Give your security group a name and description. We use `gitlab-gitaly-sec-group` for both.
- 1. Create a **Custom TCP** rule and add port `8075` to the **Port Range**. For the **Source**, select the `gitlab-loadbalancer-sec-group`.
- 1. Also add an inbound rule for SSH from the `bastion-sec-group` so that we can connect using [SSH Agent Forwarding](#use-ssh-agent-forwarding) from the Bastion hosts.
-1. Select **Review and launch** followed by **Launch** if you're happy with your settings.
-1. Finally, acknowledge that you have access to the selected private key file or create a new one. Select **Launch Instances**.
-
-NOTE:
-Instead of storing configuration _and_ repository data on the root volume, you can also choose to add an additional EBS volume for repository storage. Follow the same guidance as above. See the [Amazon EBS pricing](https://aws.amazon.com/ebs/pricing/). We do not recommend using EFS as it may negatively impact the performance of GitLab. You can review the [relevant documentation](../../administration/nfs.md#avoid-using-cloud-based-file-systems) for more details.
-
-Now that we have our EC2 instance ready, follow the [documentation to install GitLab and set up Gitaly on its own server](../../administration/gitaly/configure_gitaly.md#run-gitaly-on-its-own-server). Perform the client setup steps from that document on the [GitLab instance we created](#install-gitlab) above.
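-
-After the client-side configuration is complete, you can verify that the GitLab instance can reach Gitaly over port `8075`:
-
-```shell
-sudo gitlab-rake gitlab:gitaly:check
-```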
-
-#### Add Support for Proxied SSL
-
-As we are terminating SSL at our [load balancer](#load-balancer), follow the steps at [Supporting proxied SSL](https://docs.gitlab.com/omnibus/settings/ssl/index.html#configure-a-reverse-proxy-or-load-balancer-ssl-termination) to configure this in `/etc/gitlab/gitlab.rb`.
-
-Remember to run `sudo gitlab-ctl reconfigure` after saving the changes to the `gitlab.rb` file.
-
-#### Fast lookup of authorized SSH keys
-
-The public SSH keys for users allowed to access GitLab are stored in `/var/opt/gitlab/.ssh/authorized_keys`. Typically we'd use shared storage so that all the instances are able to access this file when a user performs a Git action over SSH. Because we do not have shared storage in our setup, we update our configuration to authorize SSH users via indexed lookup in the GitLab database.
-
-Follow the instructions at [Set up fast SSH key lookup](../../administration/operations/fast_ssh_key_lookup.md#set-up-fast-lookup) to switch from using the `authorized_keys` file to the database.
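-
-On a Linux package installation, the core of that change is pointing `sshd` at the GitLab Shell lookup command. A sketch of the `/etc/ssh/sshd_config` additions described in the linked document (verify the paths against your installation before relying on it):
-
-```shell
-# /etc/ssh/sshd_config
-AuthorizedKeysCommand /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell-authorized-keys-check git %u %k
-AuthorizedKeysCommandUser git
-
-# Then reload the SSH daemon, for example:
-# sudo systemctl restart ssh
-```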
-
-If you do not configure fast lookup, Git actions over SSH result in the following error:
-
-```shell
-Permission denied (publickey).
-fatal: Could not read from remote repository.
-
-Please make sure you have the correct access rights
-and the repository exists.
-```
-
-#### Configure host keys
-
-Ordinarily we would manually copy the contents (private and public keys) of `/etc/ssh/` on the primary application server to `/etc/ssh` on all secondary servers. This prevents false man-in-the-middle attack alerts when accessing servers in your cluster behind a load balancer.
-
-We automate this by baking static host keys into our custom AMI. Because the keys in `/etc/ssh` are regenerated every time a new EC2 instance boots, "hard coding" a static copy of them into the custom AMI serves as a workaround.
-
-On your GitLab instance run the following:
-
-```shell
-sudo mkdir /etc/ssh_static
-sudo cp -R /etc/ssh/* /etc/ssh_static
-```
-
-In `/etc/ssh/sshd_config` update the following:
-
-```shell
-# HostKeys for protocol version 2
-HostKey /etc/ssh_static/ssh_host_rsa_key
-HostKey /etc/ssh_static/ssh_host_dsa_key
-HostKey /etc/ssh_static/ssh_host_ecdsa_key
-HostKey /etc/ssh_static/ssh_host_ed25519_key
-```
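-
-After restarting `sshd`, you can confirm that every instance built from the custom AMI presents the same host keys by comparing fingerprints; for example:
-
-```shell
-sudo systemctl restart ssh
-for key in /etc/ssh_static/*.pub; do ssh-keygen -lf "$key"; done
-```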
-
-#### Amazon S3 object storage
-
-Because we're not using NFS for shared storage, we use [Amazon S3](https://aws.amazon.com/s3/) buckets to store backups, artifacts, LFS objects, uploads, merge request diffs, container registry images, and more. Our documentation includes [instructions on how to configure object storage](../../administration/object_storage.md) for each of these data types, and other information about using object storage with GitLab.
-
-NOTE:
-Because we are using the [AWS IAM profile](#create-an-iam-role) we created earlier, be sure to omit the AWS access key and secret access key/value pairs when configuring object storage. Instead, use `'use_iam_profile' => true` in your configuration as shown in the object storage documentation linked above.
-
-Remember to run `sudo gitlab-ctl reconfigure` after saving the changes to the `gitlab.rb` file.
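-
-To verify from the GitLab instance that the IAM instance profile is actually attached (so that `use_iam_profile` can obtain credentials), you can query the instance metadata service; a quick check:
-
-```shell
-# Prints the attached role name, for example GitLabS3Access.
-# If IMDSv2 is enforced on the instance, a session token is required instead.
-curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
-```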
-
----
-
-That concludes the configuration changes for our GitLab instance. Next, we create a custom AMI based on this instance to use for our launch configuration and auto scaling group.
-
-### Log in for the first time
-
-Using the domain name you used when setting up [DNS for the load balancer](#configure-dns-for-load-balancer), you should now be able to visit GitLab in your browser.
-
-Depending on how you installed GitLab and if you did not change the password by any other means, the default password is either:
-
-- Your instance ID if you used the official GitLab AMI.
-- A randomly generated password stored for 24 hours in `/etc/gitlab/initial_root_password`.
-
-To change the default password, log in as the `root` user with the default password and [change it in the user profile](../../user/profile/user_passwords.md#change-your-password).
-
-When our [auto scaling group](#create-an-auto-scaling-group) spins up new instances, we are able to sign in with username `root` and the newly created password.
-
-### Create custom AMI
-
-On the EC2 dashboard:
-
-1. Select the `GitLab` instance we [created earlier](#install-gitlab).
-1. Select **Actions**, scroll down to **Image** and select **Create Image**.
-1. Give your image a name and description (we use `GitLab-Source` for both).
-1. Leave everything else as default and select **Create Image**.
-
-Now we have a custom AMI that we use to create our launch configuration in the next step.
-
-## Deploy GitLab inside an auto scaling group
-
-### Create a launch configuration
-
-From the EC2 dashboard:
-
-1. Select **Launch Configurations** from the left menu and select **Create launch configuration**.
-1. Select **My AMIs** from the left menu and select the `GitLab` custom AMI we created above.
-1. Select an instance type best suited for your needs (at least a `c5.xlarge`) and select **Configure details**.
-1. Enter a name for your launch configuration (we use `gitlab-ha-launch-config`).
-1. **Do not** check **Request Spot Instance**.
-1. From the **IAM Role** dropdown list, pick the `GitLabS3Access` instance role we [created earlier](#create-an-iam-ec2-instance-role-and-profile).
-1. Leave the rest as defaults and select **Add Storage**.
-1. The root volume is 8GiB by default and should be enough given that we do not store any data there. Select **Configure Security Group**.
-1. Check **Select an existing security group** and select the `gitlab-loadbalancer-sec-group` we created earlier.
-1. Select **Review**, review your changes, and select **Create launch configuration**.
-1. Acknowledge that you have access to the private key or create a new one. Select **Create launch configuration**.
-
-### Create an auto scaling group
-
-1. After the launch configuration is created, select **Create an Auto Scaling group using this launch configuration** to start creating the auto scaling group.
-1. Enter a **Group name** (we use `gitlab-auto-scaling-group`).
-1. For **Group size**, enter the number of instances you want to start with (we enter `2`).
-1. Select the `gitlab-vpc` from the **Network** dropdown list.
-1. Add both the private [subnets we created earlier](#subnets).
-1. Expand the **Advanced Details** section and check the **Receive traffic from one or more load balancers** option.
-1. From the **Classic Load Balancers** dropdown list, select the load balancer we created earlier.
-1. For **Health Check Type**, select **ELB**.
-1. We leave our **Health Check Grace Period** as the default `300` seconds. Select **Configure scaling policies**.
-1. Check **Use scaling policies to adjust the capacity of this group**.
-1. For this group we scale between 2 and 4 instances where one instance is added if CPU
-utilization is greater than 60% and one instance is removed if it falls
-to less than 45%.
-
-![Auto scaling group policies](img/policies.png)
-
-1. Finally, configure notifications and tags as you see fit, review your changes, and create the
-auto scaling group.
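-
-As an alternative to the step scaling policy described above, a similar result can be reached from the CLI with a target tracking policy; a sketch, where the target value is only an example:
-
-```shell
-cat > cpu-target-tracking.json <<'EOF'
-{
-  "PredefinedMetricSpecification": { "PredefinedMetricType": "ASGAverageCPUUtilization" },
-  "TargetValue": 50.0
-}
-EOF
-
-aws autoscaling put-scaling-policy \
-  --auto-scaling-group-name gitlab-auto-scaling-group \
-  --policy-name gitlab-cpu-target \
-  --policy-type TargetTrackingScaling \
-  --target-tracking-configuration file://cpu-target-tracking.json
-```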
-
-As the auto scaling group is created, you see your new instances spinning up in your EC2 dashboard. You also see the new instances added to your load balancer. After the instances pass the health check, they are ready to start receiving traffic from the load balancer.
-
-Because our instances are created by the auto scaling group, go back to your instances and terminate the [instance we created manually above](#install-gitlab). We only needed this instance to create our custom AMI.
-
-## Health check and monitoring with Prometheus
-
-Apart from Amazon's CloudWatch, which you can enable on various services,
-GitLab provides its own integrated monitoring solution based on Prometheus.
-For more information about how to set it up, see
-[GitLab Prometheus](../../administration/monitoring/prometheus/index.md).
-
-GitLab also has various [health check endpoints](../../administration/monitoring/health_check.md)
-that you can ping and get reports.
-
-## GitLab Runner
-
-If you want to take advantage of [GitLab CI/CD](../../ci/index.md), you have to
-set up at least one [runner](https://docs.gitlab.com/runner/).
-
-Read more on configuring an
-[autoscaling GitLab Runner on AWS](https://docs.gitlab.com/runner/configuration/runner_autoscale_aws/).
-
-## Backup and restore
-
-GitLab provides [a tool to back up](../../administration/backup_restore/index.md)
-and restore its Git data, database, attachments, LFS objects, and so on.
-
-Some important things to know:
-
-- The backup/restore tool **does not** store some configuration files, like secrets; you
- must [configure this yourself](../../administration/backup_restore/backup_gitlab.md#storing-configuration-files).
-- By default, the backup files are stored locally, but you can
- [backup GitLab using S3](../../administration/backup_restore/backup_gitlab.md#using-amazon-s3).
-- You can [exclude specific directories from the backup](../../administration/backup_restore/backup_gitlab.md#excluding-specific-directories-from-the-backup).
-
-### Backing up GitLab
-
-To back up GitLab:
-
-1. SSH into your instance.
-1. Take a backup:
-
- ```shell
- sudo gitlab-backup create
- ```
-
-NOTE:
-For GitLab 12.1 and earlier, use `gitlab-rake gitlab:backup:create`.
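-
-To take backups on a schedule rather than manually, one common approach is a root crontab entry; the timing below is only an example:
-
-```shell
-sudo crontab -e
-# Add, for example, a daily backup at 02:00:
-# 0 2 * * * /opt/gitlab/bin/gitlab-backup create CRON=1
-```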
-
-### Restoring GitLab from a backup
-
-To restore GitLab, first review the [restore documentation](../../administration/backup_restore/index.md#restore-gitlab),
-and primarily the restore prerequisites. Then, follow the steps under the
-[Linux package installations section](../../administration/backup_restore/restore_gitlab.md#restore-for-linux-package-installations).
-
-## Updating GitLab
-
-GitLab releases a new version every month on the 22nd. Whenever a new version is
-released, you can update your GitLab instance:
-
-1. SSH into your instance
-1. Take a backup:
-
- ```shell
- sudo gitlab-backup create
- ```
-
-NOTE:
-For GitLab 12.1 and earlier, use `gitlab-rake gitlab:backup:create`.
-
-1. Update the repositories and install GitLab:
-
- ```shell
- sudo apt update
- sudo apt install gitlab-ee
- ```
-
-After a few minutes, the new version should be up and running.
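-
-To upgrade to a specific version rather than the latest, you can pin the package version; the version string below is only an example:
-
-```shell
-# List the available versions, then install a specific one.
-apt-cache madison gitlab-ee
-sudo apt install gitlab-ee=16.5.1-ee.0
-```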
-
-## Find official GitLab-created AMI IDs on AWS
-
-Read more on how to use [GitLab releases as AMIs](index.md#official-gitlab-releases-as-amis).
-
-## Conclusion
-
-In this guide, we went mostly through scaling and some redundancy options;
-your mileage may vary.
-
-Keep in mind that all solutions come with a trade-off between
-cost/complexity and uptime. The more uptime you want, the more complex the solution.
-And the more complex the solution, the more work is involved in setting up and
-maintaining it.
-
-Have a read through these other resources and feel free to
-[open an issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new)
-to request additional material:
-
-- [Scaling GitLab](../../administration/reference_architectures/index.md):
- GitLab supports several different types of clustering.
-- [Geo replication](../../administration/geo/index.md):
- Geo is the solution for widely distributed development teams.
-- [Linux package](https://docs.gitlab.com/omnibus/) - Everything you must know
- about administering your GitLab instance.
-- [Add a license](../../administration/license.md):
- Activate all GitLab Enterprise Edition functionality with a license.
-- [Pricing](https://about.gitlab.com/pricing/): Pricing for the different tiers.
-
-## Troubleshooting
-
-### Instances are failing health checks
-
-If your instances are failing the load balancer's health checks, verify that they are returning a status `200` from the health check endpoint we configured earlier. Any other status, including redirects like status `302`, causes the health check to fail.
-
-You may have to set a password on the `root` user to prevent automatic redirects on the sign-in endpoint before health checks pass.
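-
-To see exactly what the load balancer sees, request the health check path directly from a host inside the VPC and inspect the returned status code; for example:
-
-```shell
-# The requesting IP must be on the health check IP allowlist configured earlier.
-curl -s -o /dev/null -w '%{http_code}\n' "http://<instance-private-ip>/-/readiness"
-```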
-
-### "The change you requested was rejected (422)"
-
-If you see this page when trying to set a password via the web interface, make sure `external_url` in `gitlab.rb` matches the domain you are making a request from, and run `sudo gitlab-ctl reconfigure` after making any changes to it.
-
-### Some job logs are not uploaded to object storage
-
-When the GitLab deployment is scaled up to more than one node, some job logs may not be uploaded to [object storage](../../administration/object_storage.md) properly. [Incremental logging is required](../../administration/object_storage.md#alternatives-to-file-system-storage) for CI to use object storage.
-
-Enable [incremental logging](../../administration/job_logs.md#enable-or-disable-incremental-logging) if it has not already been enabled.
+<!-- This redirect file can be deleted after <YYYY-MM-DD>. -->
+<!-- Redirects that point to other docs in the same project expire in three months. -->
+<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html --> \ No newline at end of file
diff --git a/doc/install/docker.md b/doc/install/docker.md
index ac15b5490ce..0ba41e06b65 100644
--- a/doc/install/docker.md
+++ b/doc/install/docker.md
@@ -35,7 +35,10 @@ to community resources (such as IRC or forums) to seek help from other users.
## Prerequisites
-Docker is required. See the [official installation documentation](https://docs.docker.com/get-docker/).
+To use the GitLab Docker images:
+
+- You must install Docker.
+- You must use a valid externally-accessible hostname. Do not use `localhost`.
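+
+For example, the hostname is passed to the container at run time. A minimal sketch (the image tag is a placeholder, and the volume and port options covered in the sections below are omitted):
+
+```shell
+docker run --detach \
+  --hostname gitlab.example.com \
+  --publish 443:443 --publish 80:80 --publish 22:22 \
+  --name gitlab \
+  gitlab/gitlab-ee:<version>-ee.0
+```
+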
## Set up the volumes location
diff --git a/doc/install/installation.md b/doc/install/installation.md
index 68e69316f46..c8682fc154f 100644
--- a/doc/install/installation.md
+++ b/doc/install/installation.md
@@ -48,7 +48,7 @@ If the highest number stable branch is unclear, check the [GitLab blog](https://
| [Ruby](#2-ruby) | `3.0.x` | From GitLab 15.10, Ruby 3.0 is required. You must use the standard MRI implementation of Ruby. We love [JRuby](https://www.jruby.org/) and [Rubinius](https://github.com/rubinius/rubinius#the-rubinius-language-platform), but GitLab needs several Gems that have native extensions. |
| [RubyGems](#3-rubygems) | `3.4.x` | A specific RubyGems version is not fully needed, but it's recommended to update so you can enjoy some known performance improvements. |
| [Go](#4-go) | `1.20.x` | From GitLab 16.4, Go 1.20 or later is required. |
-| [Git](#git) | `2.41.x` | From GitLab 16.2, Git 2.41.x and later is required. You should use the [Git version provided by Gitaly](#git). |
+| [Git](#git) | `2.42.x` | From GitLab 16.5, Git 2.42.x and later is required. You should use the [Git version provided by Gitaly](#git). |
| [Node.js](#5-node) | `18.17.x` | From GitLab 16.3, Node.js 18.17 or later is required. |
## GitLab directory structure
diff --git a/doc/install/relative_url.md b/doc/install/relative_url.md
index 885dcba952e..07e1f150521 100644
--- a/doc/install/relative_url.md
+++ b/doc/install/relative_url.md
@@ -6,7 +6,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Install GitLab under a relative URL **(FREE SELF)**
-While we recommend to install GitLab on its own (sub)domain, sometimes
+While you should install GitLab on its own (sub)domain, sometimes
this is not possible due to a variety of reasons. In that case, GitLab can also
be installed under a relative URL, for example `https://example.com/gitlab`.
diff --git a/doc/install/requirements.md b/doc/install/requirements.md
index 81244594a59..d20a5ecc561 100644
--- a/doc/install/requirements.md
+++ b/doc/install/requirements.md
@@ -46,7 +46,6 @@ Memory requirements are dependent on the number of users and expected workload.
The following is the recommended minimum Memory hardware guidance for a handful of example GitLab user base sizes.
- **4 GB RAM** is the **required** minimum memory size and supports up to 500 users
- - Our [Memory Team](https://about.gitlab.com/handbook/engineering/development/enablement/data_stores/application_performance/) is working to reduce the memory requirement.
- 8 GB RAM supports up to 1000 users
- More users? Consult the [reference architectures page](../administration/reference_architectures/index.md)
@@ -227,12 +226,10 @@ optimal settings for your infrastructure.
### Puma threads
-The recommended number of threads is dependent on several factors, including total memory, and use
-of [legacy Rugged code](../administration/gitaly/index.md#direct-access-to-git-in-gitlab).
+The recommended number of threads is dependent on several factors, including total memory.
- If the operating system has a maximum 2 GB of memory, the recommended number of threads is `1`.
  A higher value results in excess swapping, and decreases performance.
-- If legacy Rugged code is in use, the recommended number of threads is `1`.
- In all other cases, the recommended number of threads is `4`. We don't recommend setting this
higher, due to how [Ruby MRI multi-threading](https://en.wikipedia.org/wiki/Global_interpreter_lock)
works.
diff --git a/doc/integration/advanced_search/elasticsearch.md b/doc/integration/advanced_search/elasticsearch.md
index 986bdb9a667..ef756be3ba4 100644
--- a/doc/integration/advanced_search/elasticsearch.md
+++ b/doc/integration/advanced_search/elasticsearch.md
@@ -972,6 +972,15 @@ For the steps below, consider the entry of `sidekiq['routing_rules']`:
At least one process in `sidekiq['queue_groups']` has to include the `mailers` queue, otherwise mailers jobs are not processed at all.
+NOTE:
+Routing rules (`sidekiq['routing_rules']`) must be the same across all GitLab nodes (especially GitLab Rails and Sidekiq nodes).
+
+WARNING:
+When starting multiple processes, the number of processes cannot exceed the number of CPU
+cores you want to dedicate to Sidekiq. Each Sidekiq process can use only one CPU core, subject
+to the available workload and concurrency settings. For more details, see how to
+[run multiple Sidekiq processes](../../administration/sidekiq/extra_sidekiq_processes.md).
+
### Single node, two processes
To create both an indexing and a non-indexing Sidekiq process in one node:
@@ -998,12 +1007,12 @@ To create both an indexing and a non-indexing Sidekiq process in one node:
1. Save the file and [reconfigure GitLab](../../administration/restart_gitlab.md)
for the changes to take effect.
+1. On all other Rails and Sidekiq nodes, ensure that `sidekiq['routing_rules']` is the same as above.
+1. Run the Rake task to [migrate existing jobs](../../administration/sidekiq/sidekiq_job_migration.md):
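+
+   ```shell
+   # Same migration command as shown in the two-node example later on this page.
+   sudo gitlab-rake gitlab:sidekiq:migrate_jobs:retry gitlab:sidekiq:migrate_jobs:schedule gitlab:sidekiq:migrate_jobs:queued
+   ```
+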
-WARNING:
-When starting multiple processes, the number of processes cannot exceed the number of CPU
-cores you want to dedicate to Sidekiq. Each Sidekiq process can use only one CPU core, subject
-to the available workload and concurrency settings. For more details, see how to
-[run multiple Sidekiq processes](../../administration/sidekiq/extra_sidekiq_processes.md).
+NOTE:
+It is important to run the Rake task immediately after reconfiguring GitLab.
+After reconfiguring GitLab, existing jobs are not processed until the Rake task starts to migrate the jobs.
### Two nodes, one process for each
@@ -1035,6 +1044,8 @@ for the changes to take effect.
```ruby
sidekiq['enable'] = true
+ sidekiq['queue_selector'] = false
+
sidekiq['routing_rules'] = [
["feature_category=global_search", "global_search"],
["*", "default"],
@@ -1048,10 +1059,18 @@ for the changes to take effect.
sidekiq['max_concurrency'] = 20
```
- to set up a non-indexing Sidekiq process.
-
+1. On all other Rails and Sidekiq nodes, ensure that `sidekiq['routing_rules']` is the same as above.
1. Save the file and [reconfigure GitLab](../../administration/restart_gitlab.md)
for the changes to take effect.
+1. Run the Rake task to [migrate existing jobs](../../administration/sidekiq/sidekiq_job_migration.md):
+
+ ```shell
+ sudo gitlab-rake gitlab:sidekiq:migrate_jobs:retry gitlab:sidekiq:migrate_jobs:schedule gitlab:sidekiq:migrate_jobs:queued
+ ```
+
+NOTE:
+It is important to run the Rake task immediately after reconfiguring GitLab.
+After reconfiguring GitLab, existing jobs are not processed until the Rake task starts to migrate the jobs.
## Reverting to Basic Search
diff --git a/doc/integration/advanced_search/elasticsearch_troubleshooting.md b/doc/integration/advanced_search/elasticsearch_troubleshooting.md
index df1e1f49083..1531e01577f 100644
--- a/doc/integration/advanced_search/elasticsearch_troubleshooting.md
+++ b/doc/integration/advanced_search/elasticsearch_troubleshooting.md
@@ -519,3 +519,13 @@ unexpectedly high `buff/cache` usage.
When you reindex, you might get a `Couldn't load task status` error. A `sliceId must be greater than 0 but was [-1]` error might also appear on the Elasticsearch host. As a workaround, consider [reindexing from scratch](../../integration/advanced_search/elasticsearch_troubleshooting.md#last-resort-to-recreate-an-index) or upgrading to GitLab 16.3.
For more information, see [issue 422938](https://gitlab.com/gitlab-org/gitlab/-/issues/422938).
+
+## Migration `BackfillProjectPermissionsInBlobs` has been halted in GitLab 15.11
+
+In GitLab 15.11, it is possible for the `BackfillProjectPermissionsInBlobs` migration to be halted with the following error message in the `elasticsearch.log`:
+
+```shell
+migration has failed with NoMethodError:undefined method `<<' for nil:NilClass, no retries left
+```
+
+If `BackfillProjectPermissionsInBlobs` is the only halted migration, you can upgrade to the latest patch version of GitLab 16.0, which includes [the fix](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/118494). Otherwise, you can ignore the error as it will not affect the current functionality of advanced search.
diff --git a/doc/integration/jenkins.md b/doc/integration/jenkins.md
index 260bb3f7108..b90ae3c3b79 100644
--- a/doc/integration/jenkins.md
+++ b/doc/integration/jenkins.md
@@ -157,6 +157,7 @@ If you cannot [provide GitLab with your Jenkins server URL and authentication in
- [GitLab Jenkins Integration](https://about.gitlab.com/solutions/jenkins/)
- [How to set up Jenkins on your local machine](../development/integrations/jenkins.md)
- [How to migrate from Jenkins to GitLab CI/CD](../ci/migration/jenkins.md)
+- [Jenkins to GitLab: The ultimate guide to modernizing your CI/CD environment](https://about.gitlab.com/blog/2023/11/01/jenkins-gitlab-ultimate-guide-to-modernizing-cicd-environment/?utm_campaign=devrel&utm_source=twitter&utm_medium=social&utm_budget=devrel)
## Troubleshooting
diff --git a/doc/integration/jira/connect-app.md b/doc/integration/jira/connect-app.md
index 4f0adb2771a..78cfc406d19 100644
--- a/doc/integration/jira/connect-app.md
+++ b/doc/integration/jira/connect-app.md
@@ -107,9 +107,7 @@ After you connect the GitLab for Jira Cloud app, you might get this error:
Failed to link group. Please try again.
```
-`403` status code is returned if:
+A `403` status code is returned if the user information cannot be fetched from Jira because of insufficient permissions.
-- The user information cannot be fetched from Jira.
-- The authenticated Jira user does not have [site administrator](https://support.atlassian.com/user-management/docs/give-users-admin-permissions/#Make-someone-a-site-admin) access.
-
-To resolve this issue, ensure the authenticated user is a Jira site administrator and try again.
+To resolve this issue, ensure that the Jira user that installs and configures the GitLab for Jira Cloud app meets certain
+[requirements](../../administration/settings/jira_cloud_app.md#jira-user-requirements).
diff --git a/doc/integration/jira/development_panel.md b/doc/integration/jira/development_panel.md
index 02838239156..70e3534a32b 100644
--- a/doc/integration/jira/development_panel.md
+++ b/doc/integration/jira/development_panel.md
@@ -49,6 +49,8 @@ You can [view GitLab activity for a Jira issue](https://support.atlassian.com/ji
in the Jira development panel by referring to the Jira issue by ID in GitLab. The information displayed in the development panel
depends on where you mention the Jira issue ID in GitLab.
+For the [GitLab for Jira Cloud app](connect-app.md), the following information is displayed.
+
| GitLab: where you mention the Jira issue ID | Jira development panel: what information is displayed |
|------------------------------------------------|-------------------------------------------------------|
| Merge request title or description | Link to the merge request<br>Link to the deployment<br>Link to the pipeline through merge request title<br>Link to the pipeline through merge request description <sup>1</sup><br>Link to the branch <sup>2</sup><br>Reviewer information and approval status <sup>3</sup> |
diff --git a/doc/integration/jira/issues.md b/doc/integration/jira/issues.md
index ae4b726327c..f6716f49ea5 100644
--- a/doc/integration/jira/issues.md
+++ b/doc/integration/jira/issues.md
@@ -117,6 +117,9 @@ For example, use any of these trigger words to close the Jira issue `PROJECT-1`:
The commit or merge request must target your project's [default branch](../../user/project/repository/branches/default.md).
You can change your project's default branch in [project settings](../../user/project/settings/index.md).
+When your branch name matches the Jira issue ID, `Closes <JIRA-ID>` is automatically appended to your existing merge request template.
+If you do not want to close the issue, [disable automatic issue closing](../../user/project/issues/managing_issues.md#disable-automatic-issue-closing).
+
### Use case for closing issues
Consider this example:
diff --git a/doc/integration/kerberos.md b/doc/integration/kerberos.md
index 77d70010aa5..a01d31421ec 100644
--- a/doc/integration/kerberos.md
+++ b/doc/integration/kerberos.md
@@ -4,7 +4,7 @@ group: Authentication
info: "To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments"
---
-# Use Kerberos as an OAuth 2.0 authentication provider **(PREMIUM SELF)**
+# Use Kerberos as an OAuth 2.0 authentication provider **(FREE SELF)**
GitLab can integrate with [Kerberos](https://web.mit.edu/kerberos/) as an authentication mechanism.
diff --git a/doc/integration/mattermost/index.md b/doc/integration/mattermost/index.md
index 73f3140db2b..c8a58f0692f 100644
--- a/doc/integration/mattermost/index.md
+++ b/doc/integration/mattermost/index.md
@@ -338,6 +338,7 @@ Below is a list of Mattermost version changes for GitLab 14.0 and later:
| GitLab version | Mattermost version | Notes |
| :------------- | :----------------- | ---------------------------------------------------------------------------------------- |
+| 16.6 | 9.1 | |
| 16.5 | 9.0 | |
| 16.4 | 8.1 | |
| 16.3 | 8.0 | |
diff --git a/doc/integration/oauth2_generic.md b/doc/integration/oauth2_generic.md
index fa65020a4dc..6bcecffaeda 100644
--- a/doc/integration/oauth2_generic.md
+++ b/doc/integration/oauth2_generic.md
@@ -6,6 +6,9 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Use Generic OAuth2 gem as an OAuth 2.0 authentication provider **(FREE SELF)**
+NOTE:
+If your provider supports the OpenID specification, you should use [`omniauth-openid-connect`](../administration/auth/oidc.md) as your authentication provider.
+
The [`omniauth-oauth2-generic` gem](https://gitlab.com/satorix/omniauth-oauth2-generic) allows single sign-on (SSO) between GitLab
and your OAuth 2.0 provider, or any OAuth 2.0 provider compatible with this gem.
diff --git a/doc/integration/shibboleth.md b/doc/integration/shibboleth.md
index bfb75eba402..f30f073bf08 100644
--- a/doc/integration/shibboleth.md
+++ b/doc/integration/shibboleth.md
@@ -4,7 +4,7 @@ group: Authentication
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
-# Use Shibboleth as an OAuth 2.0 authentication provider **(FREE ALL)**
+# Use Shibboleth as an OAuth 2.0 authentication provider **(FREE SELF)**
NOTE:
Use the [GitLab SAML integration](saml.md) to integrate specific Shibboleth identity providers (IdPs). For Shibboleth federation support (Discovery Service), use this document.
diff --git a/doc/operations/feature_flags.md b/doc/operations/feature_flags.md
index 5fd497a79e1..fe21f0db1c7 100644
--- a/doc/operations/feature_flags.md
+++ b/doc/operations/feature_flags.md
@@ -24,8 +24,7 @@ To contribute to the development of the GitLab product, view
## How it works
-GitLab uses [Unleash](https://github.com/Unleash/unleash), a feature
-toggle service.
+GitLab offers an [Unleash](https://github.com/Unleash/unleash)-compatible API for feature flags.
By enabling or disabling a flag in GitLab, your application
can determine which features to enable or disable.
@@ -76,10 +75,9 @@ is 200. For GitLab SaaS, the maximum number is determined by [tier](https://abou
You can apply a feature flag strategy across multiple environments, without defining
the strategy multiple times.
-GitLab feature flags use [Unleash](https://docs.getunleash.io/) as the feature flag
-engine. In Unleash, there are [strategies](https://docs.getunleash.io/reference/activation-strategies)
-for granular feature flag controls. GitLab feature flags can have multiple strategies,
-and the supported strategies are:
+GitLab feature flags are based on [Unleash](https://docs.getunleash.io/). In Unleash, there are
+[strategies](https://docs.getunleash.io/reference/activation-strategies) for granular feature
+flag controls. GitLab feature flags can have multiple strategies, and the supported strategies are:
- [All users](#all-users)
- [Percent of Users](#percent-of-users)
@@ -372,7 +370,11 @@ end
### Unleash Proxy example
As of [Unleash Proxy](https://docs.getunleash.io/reference/unleash-proxy) version
-0.2, the proxy is compatible with feature flags. To run a Docker container to
+0.2, the proxy is compatible with feature flags.
+
+You should use Unleash Proxy for production on GitLab.com. See the [performance note](#maximum-supported-clients-in-application-nodes) for details.
+
+To run a Docker container to
connect to your project's feature flags, run the following command:
```shell
@@ -418,10 +420,8 @@ Read [How it works](#how-it-works) section before diving into the details.
### Maximum supported clients in application nodes
-GitLab accepts client requests as much as possible until it hits the [rate limiting](../security/rate_limits.md).
-At the moment, the feature flag API falls into **Unauthenticated traffic (from a given IP address)**
-in the [GitLab.com specific limits](../user/gitlab_com/index.md),
-so it's **500 requests per minute**.
+GitLab accepts as many client requests as possible until it hits the [rate limit](../security/rate_limits.md).
+The feature flag API is considered **Unauthenticated traffic (from a given IP address)**. For GitLab.com, see the [GitLab.com specific limits](../user/gitlab_com/index.md).
The polling rate is configurable in SDKs. Provided that all clients are requesting from the same IP:
@@ -429,7 +429,8 @@ The polling rate is configurable in SDKs. Provided that all clients are requesti
- Request once per 15 sec ... 125 clients can be supported.
For applications looking for a more scalable solution, you should use [Unleash Proxy](#unleash-proxy-example).
-This proxy server sits between the server and clients. It requests to the server as a behalf of the client groups,
+On GitLab.com, you should use Unleash Proxy to reduce the chance of being rate limited across endpoints.
+This proxy server sits between the server and clients. It makes requests to the server on behalf of the client groups,
so the number of outbound requests can be greatly reduced.
There is also an [issue](https://gitlab.com/gitlab-org/gitlab/-/issues/295472) to give more
diff --git a/doc/operations/incident_management/manage_incidents.md b/doc/operations/incident_management/manage_incidents.md
index ba21a210359..1b48de9e478 100644
--- a/doc/operations/incident_management/manage_incidents.md
+++ b/doc/operations/incident_management/manage_incidents.md
@@ -114,7 +114,7 @@ To view an incident's [details page](incidents.md#incident-details), select it f
Whether you can view an incident depends on the [project visibility level](../../user/public_access.md) and
the incident's confidentiality status:
-- Public project and a non-confidential incident: You don't have to be a member of the project.
+- Public project and a non-confidential incident: Anyone can view the incident.
- Private project and non-confidential incident: You must have at least the Guest role for the project.
- Confidential incident (regardless of project visibility): You must have at least the Reporter role for the project.
diff --git a/doc/policy/experiment-beta-support.md b/doc/policy/experiment-beta-support.md
index a87a72d7910..41ffaec3aa4 100644
--- a/doc/policy/experiment-beta-support.md
+++ b/doc/policy/experiment-beta-support.md
@@ -13,7 +13,7 @@ All other features are considered to be Generally Available (GA).
## Experiment
Support is not provided for features listed as "Experimental" or "Alpha" or any similar designation. Issues regarding such features should be opened in the GitLab issue tracker. Teams should release features as GA from the start unless there are strong reasons to release them as Experiment or Beta versions first.
-All Experimental features must [initiate Production Readiness Review](https://about.gitlab.com/handbook/engineering/infrastructure/production/readiness/#process) and complete the [experiment section in the readiness template](https://gitlab.com/gitlab-com/gl-infra/readiness/-/blob/master/.gitlab/issue_templates/production_readiness.md#experiment).
+All Experimental features that [meet the review criteria](https://about.gitlab.com/handbook/engineering/infrastructure/production/readiness/#criteria-for-starting-a-production-readiness-review) must [initiate Production Readiness Review](https://about.gitlab.com/handbook/engineering/infrastructure/production/readiness/#process) and complete the [experiment section in the readiness template](https://gitlab.com/gitlab-com/gl-infra/readiness/-/blob/master/.gitlab/issue_templates/production_readiness.md#experiment).
Experimental features are:
@@ -36,7 +36,7 @@ Experimental features are:
## Beta
Commercially-reasonable efforts are made to provide limited support for features designated as "Beta," with the expectation that issues require extra time and assistance from development to troubleshoot.
-All Beta features must complete all sections up to and including the [beta section in the readiness template](https://gitlab.com/gitlab-com/gl-infra/readiness/-/blob/master/.gitlab/issue_templates/production_readiness.md#beta) by following the [Production Readiness Review process](https://about.gitlab.com/handbook/engineering/infrastructure/production/readiness/#process).
+All Beta features that [meet the review criteria](https://about.gitlab.com/handbook/engineering/infrastructure/production/readiness/#criteria-for-starting-a-production-readiness-review) must complete all sections up to and including the [beta section in the readiness template](https://gitlab.com/gitlab-com/gl-infra/readiness/-/blob/master/.gitlab/issue_templates/production_readiness.md#beta) by following the [Production Readiness Review process](https://about.gitlab.com/handbook/engineering/infrastructure/production/readiness/#process).
Beta features are:
@@ -56,7 +56,7 @@ Beta features are:
## Generally Available (GA)
-Generally Available features must complete the [Production Readiness Review](https://about.gitlab.com/handbook/engineering/infrastructure/production/readiness) and complete all sections up to and including the [GA section in the readiness template](https://gitlab.com/gitlab-com/gl-infra/readiness/-/blob/master/.gitlab/issue_templates/production_readiness.md#general-availability).
+Generally Available features that [meet the review criteria](https://about.gitlab.com/handbook/engineering/infrastructure/production/readiness/#criteria-for-starting-a-production-readiness-review) must complete the [Production Readiness Review](https://about.gitlab.com/handbook/engineering/infrastructure/production/readiness) and complete all sections up to and including the [GA section in the readiness template](https://gitlab.com/gitlab-com/gl-infra/readiness/-/blob/master/.gitlab/issue_templates/production_readiness.md#general-availability).
GA features are:
diff --git a/doc/security/email_verification.md b/doc/security/email_verification.md
index d87f43dec6a..67d8764a118 100644
--- a/doc/security/email_verification.md
+++ b/doc/security/email_verification.md
@@ -18,6 +18,8 @@ you must verify your identity or reset your password to sign in to GitLab.
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For a demo, see [Require email verification - demo](https://www.youtube.com/watch?v=wU6BVEGB3Y0).
+On GitLab.com, if you don't receive a verification email, select **Resend Code** before you contact the support team.
+
## Accounts without two-factor authentication (2FA)
An account is locked when either:
@@ -36,10 +38,12 @@ To unlock your account, sign in and enter the verification code. You can also
## Accounts with 2FA or OAuth
-An account is locked when there are three or more failed sign-in attempts.
+An account is locked after ten or more failed sign-in attempts, or after the number of attempts
+defined in the [configurable locked user policy](unlock_user.md#self-managed-users).
-Accounts with 2FA or OAuth are automatically unlocked after 30 minutes. To unlock an account manually,
-reset your password.
+Accounts with 2FA or OAuth are automatically unlocked after ten minutes, or after the period
+defined in the [configurable locked user policy](unlock_user.md#self-managed-users).
+To unlock an account manually, reset your password.
## Related topics
diff --git a/doc/security/reset_user_password.md b/doc/security/reset_user_password.md
index d79ede70abd..9835509897e 100644
--- a/doc/security/reset_user_password.md
+++ b/doc/security/reset_user_password.md
@@ -168,3 +168,7 @@ attempt to fix this issue in a Rails console. For example, if a new `root` passw
The password might be too short, too weak, or not meet complexity
requirements. Ensure the password you are attempting to set meets all
[password requirements](../user/profile/user_passwords.md#password-requirements).
+
+### Expired password
+
+You might not be able to reset a user's expired password due to the [Password Expired error on Git Fetch via SSH for LDAP users](../topics/git/troubleshooting_git.md#password-expired-error-on-git-fetch-via-ssh-for-ldap-user).
diff --git a/doc/security/token_overview.md b/doc/security/token_overview.md
index c56fe0b9260..82e16694470 100644
--- a/doc/security/token_overview.md
+++ b/doc/security/token_overview.md
@@ -222,6 +222,39 @@ This table shows available scopes per token. Scopes can be limited further on to
1. Runner registration and authentication tokens don't provide direct access to repositories, but can be used to register and authenticate a new runner that may execute jobs which do have access to the repository.
1. Limited to certain [endpoints](../ci/jobs/ci_job_token.md).
+## Token prefixes
+
+The following tables show the prefixes for each type of token where applicable.
+
+### GitLab tokens
+
+| Token name | Prefix |
+|-----------------------------------|--------------------|
+| Personal access token | `glpat-` |
+| OAuth Application Secret | `gloas-` |
+| Impersonation token | Not applicable. |
+| Project access token | Not applicable. |
+| Group access token | Not applicable. |
+| Deploy token | Not applicable. |
+| Deploy key | Not applicable. |
+| Runner registration token | Not applicable. |
+| Runner authentication token | `glrt-` |
+| Job token | Not applicable. |
+| Trigger token | `glptt-` |
+| Legacy runner registration token  | `GR1348941`        |
+| Feed token | `glft-` |
+| Incoming mail token | `glimt-` |
+| GitLab Agent for Kubernetes token | `glagent-` |
+| GitLab session cookies | `_gitlab_session=` |
+
+### External system tokens
+
+| Token name | Prefix |
+|-----------------|-----------------|
+| Omamori tokens | `omamori_pat_` |
+| AWS credentials | `AKIA` |
+| GCP credentials | Not applicable. |
+
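+As a rough illustration, you can scan a working tree for strings that start with these prefixes. The suffix pattern and minimum length in this example are assumptions for demonstration only, not the exact token formats:
+
+```shell
+# Illustrative scan for candidate GitLab token strings by prefix.
+grep --recursive --extended-regexp --line-number \
+  '(glpat|gloas|glrt|glptt|glft|glimt|glagent)-[0-9A-Za-z_-]{8,}' .
+```
+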
## Security considerations
1. Treat access tokens like passwords and keep them secure.
diff --git a/doc/security/unlock_user.md b/doc/security/unlock_user.md
index fe10274ce5a..8184bdfdd8c 100644
--- a/doc/security/unlock_user.md
+++ b/doc/security/unlock_user.md
@@ -18,10 +18,14 @@ By default, users are locked after 10 failed sign-in attempts. These users remai
In GitLab 16.5 and later, administrators can [use the API](../api/settings.md#list-of-settings-that-can-be-accessed-via-api-calls) to configure:
-- The number of failed sign-in attempts that locks a user.
-- The time period in minutes that the locked user is locked for, after the maximum number of failed sign-in attempts is reached.
+- The number of failed sign-in attempts that locks a user (`max_login_attempts`).
+- The time period in minutes that the locked user is locked for, after the maximum number of failed sign-in attempts is reached (`failed_login_attempts_unlock_period_in_minutes`).
-For example, an administrator can configure that five failed sign-in attempts locks a user, and that user will be locked for 60 minutes.
+For example, an administrator can use the following API call to configure that five failed sign-in attempts lock a user for 60 minutes:
+
+```shell
+curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/application/settings?max_login_attempts=5&failed_login_attempts_unlock_period_in_minutes=60"
+```
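+
+To verify the change, you can read the settings back; the two values should appear in the JSON response, depending on your GitLab version:
+
+```shell
+curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/application/settings"
+```
+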
## GitLab.com users
diff --git a/doc/solutions/cloud/aws/gitaly_sre_for_aws.md b/doc/solutions/cloud/aws/gitaly_sre_for_aws.md
new file mode 100644
index 00000000000..318316b95b8
--- /dev/null
+++ b/doc/solutions/cloud/aws/gitaly_sre_for_aws.md
@@ -0,0 +1,91 @@
+---
+stage: Solutions Architecture
+group: Solutions Architecture
+info: This page is owned by the Solutions Architecture team.
+description: Doing SRE for Gitaly instances on AWS.
+---
+
+# SRE Considerations for Gitaly on AWS **(FREE SELF)**
+
+## Gitaly SRE considerations
+
+Gitaly is an embedded service for Git Repository Storage. Gitaly and Gitaly Cluster have been engineered by GitLab to overcome fundamental challenges with horizontal scaling of the open source Git binaries that must be used on the service side of GitLab. The following sections provide in-depth technical reading on the topic.
+
+### Why Gitaly was built
+
+If you would like to understand the underlying rationale on why GitLab had to invest in creating Gitaly, read the following minimal list of topics:
+
+- [Git characteristics that make horizontal scaling difficult](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/DESIGN.md#git-characteristics-that-make-horizontal-scaling-difficult)
+- [Git architectural characteristics and assumptions](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/DESIGN.md#git-architectural-characteristics-and-assumptions)
+- [Affects on horizontal compute architecture](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/DESIGN.md#affects-on-horizontal-compute-architecture)
+- [Evidence to back building a new horizontal layer to scale Git](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/DESIGN.md#evidence-to-back-building-a-new-horizontal-layer-to-scale-git)
+
+### Gitaly and Praefect elections
+
+As part of Gitaly cluster consistency, Praefect nodes must occasionally vote on which data copy is the most accurate. This requires an odd number of Praefect nodes to avoid stalemates. This means that, for HA, Gitaly and Praefect each require a minimum of three nodes.
+
+### Gitaly performance monitoring
+
+Complete performance metrics should be collected for Gitaly instances for identification of bottlenecks, as they could have to do with disk IO, network IO, or memory.
+
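+As a quick spot check (assuming the common Linux package default of a Gitaly Prometheus listener on `localhost:9236`), you can confirm that metrics are being emitted on a Gitaly node:
+
+```shell
+# List a few Gitaly-prefixed metrics from the local Prometheus endpoint.
+# Adjust the address if prometheus_listen_addr is configured differently.
+curl --silent http://localhost:9236/metrics | grep '^gitaly_' | head
+```
+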
+### Gitaly performance guidelines
+
+Gitaly functions as the primary Git Repository Storage in GitLab. However, it's not a streaming file server. It also does a lot of demanding computing work, such as preparing and caching Git packfiles, which informs some of the performance recommendations below.
+
+NOTE:
+All recommendations are for production configurations, including performance testing. For test configurations, like training or functional testing, you can use less expensive options. However, you should adjust or rebuild if performance is an issue.
+
+#### Overall recommendations
+
+- Production-grade Gitaly must be implemented on instance compute due to the characteristics described above and below.
+- Never use [burstable instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances.html) (such as `t2`, `t3`, `t4g`) for Gitaly.
+- Always use at least the [AWS Nitro generation of instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances) to ensure many of the below concerns are automatically handled.
+- Use Amazon Linux 2 to ensure that all [AWS oriented hardware and OS optimizations](https://aws.amazon.com/amazon-linux-2/faqs/) are maximized without additional configuration or SRE management.
+
+#### CPU and memory recommendations
+
+- The general GitLab Gitaly node recommendations for CPU and memory assume relatively even loading across repositories. GitLab Performance Tool (GPT) testing of any atypical repositories, or SRE monitoring of Gitaly metrics, may indicate when to choose more memory or CPU than the general recommendations.
+
+**To accommodate:**
+
+- Git packfile operations are memory and CPU intensive.
+- If repository commit traffic is dense, large, or very frequent, then more CPU and Memory are required to handle the load. Patterns such as storing binaries and/or busy or large monorepos are examples that can cause high loading.
+
+#### Disk I/O recommendations
+
+- Use only SSD storage and the [class of Elastic Block Store (EBS) storage](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html) that suits your durability and speed requirements.
+- When not using provisioned EBS IO, EBS volume size determines the I/O level, so provisioning volumes that are much larger than needed can be the least expensive way to improve EBS IO.
+- If Gitaly performance monitoring shows signs of disk stress, then one of the provisioned IOPS levels can be chosen. Provisioned IOPS volumes also have enhanced durability, which may be appealing for some implementations aside from performance considerations. An illustrative provisioning example follows this section.
+
+**To accommodate:**
+
+- Gitaly storage is expected to be local (not NFS of any type including EFS).
+- Gitaly servers also need disk space for building and caching Git packfiles. This is above and beyond the permanent storage of your Git Repositories.
+- Git packfiles are cached in Gitaly. Creation of packfiles in temporary disk benefits from fast disk, and disk caching of packfiles benefits from ample disk space.
+
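+For illustration only, the following sketch provisions a `gp3` data volume with explicit IOPS and throughput; the zone, size, IOPS, and throughput values are placeholders, not recommendations:
+
+```shell
+# Illustrative only: create a gp3 volume with provisioned IOPS and throughput.
+# Replace the values with the results of your own sizing and monitoring work.
+aws ec2 create-volume \
+  --availability-zone us-east-1a \
+  --volume-type gp3 \
+  --size 500 \
+  --iops 6000 \
+  --throughput 500
+```
+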
+#### Network I/O recommendations
+
+- Use only instance types [from the list of ones that support Elastic Network Adapter (ENA) advanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#instance-type-summary-table) to ensure that cluster replication latency is not due to instance-level network I/O bottlenecks. An illustrative check follows this section.
+- Choose instance sizes with more than 10 Gbps of network bandwidth only if needed, and only after monitoring or stress testing has proven a node-level network bottleneck.
+
+**To accommodate:**
+
+- Gitaly nodes do the main work of streaming repositories for push and pull operations (to add development endpoints, and to CI/CD).
+- Gitaly servers need reasonably low latency between cluster nodes and with Praefect services for the cluster to maintain operational and data integrity.
+- Gitaly nodes should be selected with network bottleneck avoidance as a primary consideration.
+- Gitaly nodes should be monitored for network saturation.
+- Not all networking issues can be solved through optimizing the node level networking:
+ - Gitaly cluster node replication depends on all networking between nodes.
+ - Gitaly networking performance to pull and push endpoints depends on all networking in between.
+
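+As an illustrative check (assuming the AWS CLI is installed and configured), you can confirm whether a candidate instance type supports or requires ENA:
+
+```shell
+# Returns "required" or "supported" for ENA-capable instance types.
+aws ec2 describe-instance-types \
+  --instance-types m5.2xlarge \
+  --query 'InstanceTypes[].NetworkInfo.EnaSupport'
+```
+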
+### AWS Gitaly backup
+
+Due to the nature of how Praefect tracks the replication metadata of Gitaly disk information, the best backup method is [the official backup and restore Rake tasks](../../../administration/backup_restore/index.md).
+
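+For a Linux package installation, that typically means creating backups with the standard command, for example:
+
+```shell
+# Create a full application backup using the official Rake-based tooling.
+sudo gitlab-backup create
+```
+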
+### AWS Gitaly recovery
+
+Gitaly Cluster does not support snapshot backups because these can cause issues where the Praefect database becomes out of sync with the disk storage. Due to the nature of how Praefect rebuilds the replication metadata of Gitaly disk information during a restore, the best recovery method is [the official backup and restore Rake tasks](../../../administration/backup_restore/index.md).
+
+### Gitaly long term management
+
+Gitaly node disk sizes must be monitored and increased to accommodate Git repository growth and Gitaly temporary and caching storage needs. The storage configuration on all nodes should be kept identical.
diff --git a/doc/solutions/cloud/aws/gitlab_aws_integration.md b/doc/solutions/cloud/aws/gitlab_aws_integration.md
new file mode 100644
index 00000000000..ba0b9717562
--- /dev/null
+++ b/doc/solutions/cloud/aws/gitlab_aws_integration.md
@@ -0,0 +1,103 @@
+---
+stage: Solutions Architecture
+group: Solutions Architecture
+info: This page is owned by the Solutions Architecture team.
+description: "Integrations Solutions Index for GitLab and AWS."
+---
+
+# Integrate with AWS
+
+Learn how to integrate GitLab and AWS.
+
+This content is intended for GitLab team members as well as members of the wider community.
+
+This page attempts to index the ways in which GitLab can integrate with AWS, whether the integration is the result of configuring general functionality, is built into AWS or GitLab, or is provided as a solution.
+
+| Text Tag | Configuration / Built / Solution | Support/Maintenance |
+| -------------------- | ------------------------------------------------------------ | ------------------- |
+| `[AWS Configuration]` | Integration via Configuring Existing AWS Functionality | AWS |
+| `[GitLab Configuration]` | Integration via Configuring Existing GitLab Functionality | GitLab |
+| `[AWS Built]` | Built into AWS by Product Team to Address AWS Integration | AWS |
+| `[GitLab Built]` | Built into GitLab by Product Team to Address AWS Integration | GitLab |
+| `[AWS Solution]` | Built as Solution Example by AWS or AWS Partners | Community/Example |
+| `[GitLab Solution]` | Built as Solution Example by GitLab or GitLab Partners | Community/Example |
+| `[CI Solution]` | Built, at least in part, using GitLab CI and therefore <br />more customer customizable. | Items tagged `[CI Solution]` will <br />also carry one of the other tags <br />that indicate the maintenance status. |
+
+## Integrations For Development Activities
+
+### SCM Integrations
+
+- **AWS CodeStar Connections** - enables SCM connections to multiple AWS Services. **Currently for GitLab.com SaaS only**. [Configure GitLab](https://docs.aws.amazon.com/dtconsole/latest/userguide/connections-create-gitlab.html). [Supported Providers](https://docs.aws.amazon.com/dtconsole/latest/userguide/supported-versions-connections.html). [Supported AWS Services](https://docs.aws.amazon.com/dtconsole/latest/userguide/integrations-connections.html) - each one may have to make updates to support GitLab, so here is the subset that currently supports GitLab. `[AWS Built]`
+ - [AWS CodePipeline Integration](https://docs.aws.amazon.com/codepipeline/latest/userguide/connections-gitlab.html) - use GitLab as source for CodePipeline. `[AWS Built]`
+ - **AWS CodeBuild Integration** - indirectly through CodePipeline support. `[AWS Built]`
+ - **Amazon CodeWhisperer Customization Capability** [can connect to a GitLab repo](https://aws.amazon.com/blogs/aws/new-customization-capability-in-amazon-codewhisperer-generates-even-better-suggestions-preview/). `[AWS Built]`
+  - **AWS Service Catalog** directly inherits CodeStar Connections. There is no GitLab-specific documentation because it uses any GitLab CodeStar Connection that has been created in the account. `[AWS Built]`
+  - **AWS Proton** directly inherits CodeStar Connections. There is no GitLab-specific documentation because it uses any GitLab CodeStar Connection that has been created in the account. `[AWS Built]`
+  - **AWS Glue Notebook Jobs** directly inherit CodeStar Connections. There is no GitLab-specific documentation because they use any GitLab CodeStar Connection that has been created in the account. `[AWS Built]`
+  - **Amazon SageMaker MLOps Projects** are done in CodePipeline and so directly inherit CodeStar Connections ([as noted here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-projects-walkthrough-3rdgit.html#sagemaker-proejcts-walkthrough-connect-3rdgit)). There is no GitLab-specific documentation because they use any GitLab CodeStar Connection that has been created in the account. `[AWS Built]`
+ - **Amazon SageMaker Notebooks** [allow Git repositories to be specified by the Git clone URL](https://docs.aws.amazon.com/sagemaker/latest/dg/nbi-git-resource.html) and configuration of a secret - so GitLab is configurable. `[AWS Configuration]`
+ - **AWS CloudFormation** publishing of public extensions - **not yet supported**. `[AWS Built]`
+ - **Amazon CodeGuru Reviewer Repositories** - **not yet supported**. `[AWS Built]`
+- [GitLab Push Mirroring to CodeCommit](../../../user/project/repository/mirror/push.md#set-up-a-push-mirror-from-gitlab-to-aws-codecommit): this workaround enables GitLab repositories to leverage CodePipeline SCM Triggers. GitLab can already leverage S3 and Container Triggers for CodePipeline. **Still required for Self-Managed and Dedicated for the time being.** `[GitLab Configuration]`
+
+### CI Integrations
+
+- **Direct CI Integrations That Use Keys, IAM or OIDC/JWT to Authenticate to AWS Services from GitLab Runners**
+ - **Amazon CodeGuru Reviewer CI workflows using GitLab CI** - can be done, not yet documented. `[AWS Solution]` `[CI Solution]`
+ - [Amazon CodeGuru Secure Scanning using GitLab CI](https://docs.aws.amazon.com/codeguru/latest/security-ug/get-started-gitlab.html) `[AWS Solution]` `[CI Solution]`
+
+### CD and Operations Integrations
+
+- **AWS CodeDeploy Integration** - indirectly through CodePipeline support. `[AWS Built]`
+- [Integrate EKS clusters for application deployment](../../../user/infrastructure/clusters/connect/new_eks_cluster.md). `[GitLab Built]`
+
+## Solutions For Specific Development Frameworks and Ecosystems
+
+Generally, solutions demonstrate end-to-end capabilities for the development framework, leveraging all relevant integration techniques to show how to get the maximum value out of using GitLab and AWS together.
+
+### Serverless Development
+
+- [Serverless Framework Deployment to AWS with GitLab Serverless SAST Scanning and Managed DevOps Environments](https://gitlab.com/guided-explorations/aws/serverless/serverless-framework-aws) - working example code and tutorials. `[GitLab Solution]` `[CI Solution]`
+ - [Tutorial: Serverless Framework Deployment to AWS with GitLab Serverless SAST Scanning](https://gitlab.com/guided-explorations/aws/serverless/serverless-framework-aws/-/blob/master/TUTORIAL.md) `[GitLab Solution]` `[CI Solution]`
+ - [Tutorial: Secure Serverless Framework Development with GitLab Security Policy Approval Rules and Managed DevOps Environments](https://gitlab.com/guided-explorations/aws/serverless/serverless-framework-aws/-/blob/master/TUTORIAL2-SecurityAndManagedEnvs.md) `[GitLab Solution]` `[CI Solution]`
+
+### Infrastructure as Code
+
+- [Terraform Deployment to AWS with GitLab MR Managed DevOps Environments](https://gitlab.com/guided-explorations/aws/terraform/terraform-web-server-cluster)
+ - [Tutorial: Terraform Deployment to AWS with GitLab IaC SAST Scanning](https://gitlab.com/guided-explorations/aws/terraform/terraform-web-server-cluster/-/blob/prod/TUTORIAL.md) `[GitLab Solution]` `[CI Solution]`
+ - [Terraform Deployment to AWS with GitLab Security Policy Approval Rules and Managed DevOps Environments](https://gitlab.com/guided-explorations/aws/terraform/terraform-web-server-cluster/-/blob/prod/TUTORIAL2-SecurityAndManagedEnvs.md) `[GitLab Solution]` `[CI Solution]`
+- [Tutorial: CloudFormation Deployment With GitLab MR Managed DevOps Environments](https://gitlab.com/guided-explorations/aws/cloudformation-deploy) `[GitLab Solution]` `[CI Solution]`
+
+### .Net on AWS
+
+- [Working Example Code for Scaling .NET Framework 4.x Runners on AWS](https://gitlab.com/guided-explorations/aws/dotnet-aws-toolkit) `[GitLab Solution]` `[CI Solution]`
+- [Video Walkthrough of Code and Building a .NET Framework 4.x Project](https://www.youtube.com/watch?v=_4r79ZLmDuo) `[GitLab Solution]` `[CI Solution]`
+
+## Authentication Integration
+
+- [Runner Job Authentication using Open ID & JWT Authentication](../../../ci/cloud_services/aws/index.md). `[GitLab Built]`
+ - [Configure OpenID Connect between GitLab and AWS](https://gitlab.com/guided-explorations/aws/configure-openid-connect-in-aws) `[GitLab Solution]` `[CI Solution]`
+ - [OIDC and Multi-Account Deployment with GitLab and ECS](https://gitlab.com/guided-explorations/aws/oidc-and-multi-account-deployment-with-ecs) `[GitLab Solution]` `[CI Solution]`
+
+## GitLab Instance Compute & Operations Integration
+
+- Installing GitLab Self-Managed on AWS
+ - GitLab Single EC2 Instance. `[GitLab Built]`
+ - [Using 5 Seat AWS marketplace subscription](gitlab_single_box_on_aws.md#marketplace-subscription)
+ - [Using Prepared AMIs](gitlab_single_box_on_aws.md#official-gitlab-releases-as-amis) - Bring Your Own License for Enterprise Edition.
+
+  - GitLab Cloud Native Hybrid Scaled on AWS EKS and PaaS. `[GitLab Built]`
+ - Using GitLab Environment Toolkit (GET) - `[GitLab Solution]`
+
+ - GitLab Instance Scaled on AWS EC2 and PaaS. `[GitLab Built]`
+ - Using GitLab Environment Toolkit (GET) - `[GitLab Solution]`
+
+- [Amazon Managed Grafana](https://docs.aws.amazon.com/grafana/latest/userguide/gitlab-AMG-datasource.html) for GitLab self-managed Prometheus metrics. `[AWS Built]`
+
+## GitLab Runner on AWS Compute
+
+- [Autoscaling GitLab Runner on AWS EC2](https://docs.gitlab.com/runner/configuration/runner_autoscale_aws/). `[GitLab Built]`
+- [GitLab HA Scaling Runner Vending Machine for AWS EC2 ASG](https://gitlab.com/guided-explorations/aws/gitlab-runner-autoscaling-aws-asg/). `[GitLab Solution]`
+ - Runner vending machine training resources.
+
+- [GitLab EKS Fargate Runners](https://gitlab.com/guided-explorations/aws/eks-runner-configs/gitlab-runner-eks-fargate/-/blob/main/README.md). `[GitLab Solution]`
diff --git a/doc/solutions/cloud/aws/gitlab_aws_partner_designations.md b/doc/solutions/cloud/aws/gitlab_aws_partner_designations.md
new file mode 100644
index 00000000000..c48c3f95f9d
--- /dev/null
+++ b/doc/solutions/cloud/aws/gitlab_aws_partner_designations.md
@@ -0,0 +1,38 @@
+---
+stage: Solutions Architecture
+group: Solutions Architecture
+info: This page is owned by the Solutions Architecture team.
+description: GitLab partnership certifications and designations from AWS.
+---
+
+# GitLab partnership certifications and designations from AWS
+
+The certifications and designations outlined here can be validated on [GitLab's partner page at AWS](https://partners.amazonaws.com/partners/001E0000018YWFfIAO/GitLab,%20Inc.).
+
+All AWS partner qualifications require submission and validation of extensive checklists and backing evidence that AWS uses to determine whether to grant the qualification.
+
+## DevOps Software / ISV Competency
+
+This competency validates that GitLab delivers DevOps solutions that work with and on AWS. [AWS Program Information](https://aws.amazon.com/devops/partner-solutions/)
+
+## DevSecOps Specialty Category
+
+[AWS Program Information](https://aws.amazon.com/blogs/apn/aws-devops-competency-expands-to-include-devsecops-category/) [GitLab Announcement](https://about.gitlab.com/blog/2023/09/25/aws-devsecops-competency-partner/)
+
+## Public Sector Partner
+
+This designation indicates that GitLab has been deemed qualified to work with AWS Public Sector customers. In fact, we have an entire organization dedicated to this practice. [AWS Program Information](https://aws.amazon.com/partners/programs/public-sector/)
+
+## AWS Graviton
+
+GitLab Instances and Runners have been tested and work on AWS Graviton. For Amazon Linux we maintain YUM packages for ARM architecture. [AWS Program Information](https://aws.amazon.com/ec2/graviton/partners/)
+
+## Amazon Linux Ready
+
+GitLab Instances and Runners have been validated on Amazon Linux 2 and 2023. This includes YUM packages and package repositories for both, and over 2,300 CI tests for both before packaging. [AWS Program Information](https://aws.amazon.com/amazon-linux/partners/)
+
+## AWS Marketplace Seller
+
+GitLab is an AWS Marketplace seller, and you can purchase and deploy GitLab through AWS Marketplace. [AWS Program Information](https://aws.amazon.com/marketplace/partners/management-tour)
+
+![AWS Partner Designations Logo](img/all-aws-partner-designations.png){: .right}
diff --git a/doc/solutions/cloud/aws/gitlab_instance_on_aws.md b/doc/solutions/cloud/aws/gitlab_instance_on_aws.md
new file mode 100644
index 00000000000..320c317d446
--- /dev/null
+++ b/doc/solutions/cloud/aws/gitlab_instance_on_aws.md
@@ -0,0 +1,55 @@
+---
+stage: Solutions Architecture
+group: Solutions Architecture
+info: This page is owned by the Solutions Architecture team.
+---
+
+{::options parse_block_html="true" /}
+
+# Provision GitLab Instances on AWS EKS **(FREE SELF)**
+
+## Available Infrastructure as Code for GitLab Instance Installation on AWS
+
+The [GitLab Environment Toolkit (GET)](https://gitlab.com/gitlab-org/gitlab-environment-toolkit/-/blob/main/README.md) is a set of opinionated Terraform and Ansible scripts. These scripts help with the deployment of Linux package or Cloud Native Hybrid environments on selected cloud providers and are used by GitLab developers for [GitLab Dedicated](../../../subscriptions/gitlab_dedicated/index.md) (for example).
+
+You can use the GitLab Environment Toolkit to deploy a Cloud Native Hybrid environment on AWS. However, it's not required and may not support every valid permutation. That said, the scripts are presented as-is and you can adapt them accordingly.
+
+### Two and Three Zone High Availability
+
+While GitLab Reference Architectures generally encourage three zone redundancy, AWS Quick Starts and the AWS Well-Architected Framework consider two zone redundancy to be well architected. Individual implementations should weigh the costs of two and three zone configurations against their own high availability requirements for a final configuration.
+
+Gitaly Cluster uses a consistency voting system to implement strong consistency between synchronized nodes. Regardless of the number of availability zones implemented, there always needs to be a minimum of three Gitaly and three Praefect nodes in the cluster to avoid voting stalemates caused by an even number of nodes.
+
+## AWS PaaS qualified for all GitLab implementations
+
+For both Linux package and Cloud Native Hybrid implementations, the following GitLab service roles can be performed by AWS services (PaaS). Any PaaS solutions that require preconfigured sizing based on the scale of your instance are also listed in the per-instance size Bill of Materials (BOM) lists. PaaS that do not require specific sizing are not repeated in the BOM lists (for example, AWS Certificate Manager).
+
+These services have been tested with GitLab.
+
+Some services, such as log aggregation and outbound email, are not specified by GitLab, but are noted where they are provided.
+
+| GitLab Services | AWS PaaS (Tested) | Provided by AWS Cloud <br />Native Hybrid Quick Start |
+| ------------------------------------------------------------ | ------------------------------ | ------------------------------------------------------------ |
+| <u>Tested PaaS Mentioned in Reference Architectures</u> | | |
+| **PostgreSQL Database** | Amazon RDS PostgreSQL | Yes. |
+| **Redis Caching** | Redis ElastiCache | Yes. |
+| **Gitaly Cluster (Git Repository Storage)**<br />(Including Praefect and PostgreSQL) | ASG and Instances | Yes - ASG and Instances<br />**Note: Gitaly cannot be put into a Kubernetes Cluster.** |
+| **All GitLab storages besides Git Repository Storage**<br />(Includes Git-LFS which is S3 Compatible) | AWS S3 | Yes |
+| | | |
+| <u>Tested PaaS for Supplemental Services</u> | | |
+| **Front End Load Balancing** | AWS ELB | Yes |
+| **Internal Load Balancing** | AWS ELB | Yes |
+| **Outbound Email Services** | AWS Simple Email Service (SES) | Yes |
+| **Certificate Authority and Management** | AWS Certificate Manager (ACM) | Yes |
+| **DNS** | AWS Route53 (tested) | Yes |
+| **GitLab and Infrastructure Log Aggregation** | AWS CloudWatch Logs | Yes (ContainerInsights Agent for EKS) |
+| **Infrastructure Performance Metrics** | AWS CloudWatch Metrics | Yes |
+| | | |
+| <u>Supplemental Services and Configurations (Tested)</u> | | |
+| **Prometheus for GitLab** | AWS EKS (Cloud Native Only) | Yes |
+| **Grafana for GitLab** | AWS EKS (Cloud Native Only) | Yes |
+| **Administrative Access to GitLab Backend** | Bastion Host in VPC | Yes - HA - Preconfigured for Cluster Management. |
+| **Encryption (In Transit / At Rest)** | AWS KMS | Yes |
+| **Secrets Storage for Provisioning** | AWS Secrets Manager | Yes |
+| **Configuration Data for Provisioning** | AWS Parameter Store | Yes |
+| **AutoScaling Kubernetes** | EKS AutoScaling Agent | Yes |
diff --git a/doc/solutions/cloud/aws/gitlab_single_box_on_aws.md b/doc/solutions/cloud/aws/gitlab_single_box_on_aws.md
new file mode 100644
index 00000000000..7a647f1d8d7
--- /dev/null
+++ b/doc/solutions/cloud/aws/gitlab_single_box_on_aws.md
@@ -0,0 +1,51 @@
+---
+stage: Solutions Architecture
+group: Solutions Architecture
+info: This page is owned by the Solutions Architecture team.
+---
+
+{::options parse_block_html="true" /}
+
+# Provision GitLab on a single EC2 instance in AWS **(FREE SELF)**
+
+If you want to provision a single GitLab instance on AWS, you have two options:
+
+- The marketplace subscription
+- The official GitLab AMIs
+
+## Marketplace subscription
+
+GitLab provides a 5 user subscription as an AWS Marketplace subscription to help teams of all sizes get started with an Ultimate licensed instance in record time. The Marketplace subscription can be easily upgraded to any GitLab licensing through an AWS Marketplace Private Offer, with the convenience of continued AWS billing. No migration is necessary to obtain a larger, non-time-based license from GitLab. Per-minute licensing is automatically removed when you accept the private offer.
+
+To provision a GitLab instance through a Marketplace subscription, [use this tutorial](https://gitlab.awsworkshop.io/040_partner_setup.html). The tutorial links to the [GitLab Ultimate Marketplace Listing](https://aws.amazon.com/marketplace/pp/prodview-g6ktjmpuc33zk), but you can also use the [GitLab Premium Marketplace Listing](https://aws.amazon.com/marketplace/pp/prodview-amk6tacbois2k) to provision an instance.
+
+## Official GitLab releases as AMIs
+
+GitLab produces Amazon Machine Images (AMI) during the regular release process. The AMIs can be used for single instance GitLab installation or, by configuring `/etc/gitlab/gitlab.rb`, can be specialized for specific GitLab service roles (for example a Gitaly server). Older releases remain available and can be used to migrate an older GitLab server to AWS.
+
+Initial licensing can either be the Free Enterprise License (EE) or the open source Community Edition (CE). The Enterprise Edition provides the easiest path forward to a licensed version if the need arises.
+
+Currently, the GitLab AMIs use the Amazon-prepared Ubuntu AMI (x86 and ARM are available) as their starting point.
+
+NOTE:
+When deploying a GitLab instance using the official AMI, the root password to the instance is the EC2 **Instance** ID (not the AMI ID). This way of setting the root account password is specific to official GitLab published AMIs ONLY.
+
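+For example, you can read the instance ID (and therefore the initial root password) from the EC2 instance metadata service. The IMDSv2 flow shown here is illustrative:
+
+```shell
+# Fetch the instance ID from instance metadata (IMDSv2).
+TOKEN=$(curl --silent --request PUT "http://169.254.169.254/latest/api/token" \
+  --header "X-aws-ec2-metadata-token-ttl-seconds: 21600")
+curl --silent --header "X-aws-ec2-metadata-token: $TOKEN" \
+  http://169.254.169.254/latest/meta-data/instance-id
+```
+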
+Instances running on Community Edition (CE) require a migration to Enterprise Edition (EE) to subscribe to the GitLab Premium or Ultimate plan. If you want to pursue a subscription, using the Free-forever plan of Enterprise Edition is the least disruptive method.
+
+NOTE:
+Because any given GitLab upgrade might involve data disk updates or database schema upgrades, swapping out the AMI is not sufficient to upgrade GitLab.
+
+1. Log in to the AWS Web Console, so that selecting the links in the following step takes you directly to the AMI list.
+1. Pick the edition you want:
+
+ - [GitLab Enterprise Edition](https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Images:visibility=public-images;ownerAlias=782774275127;search=GitLab%20EE;sort=desc:name): If you want to unlock the enterprise features, a license is needed.
+ - [GitLab Community Edition](https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Images:visibility=public-images;ownerAlias=782774275127;search=GitLab%20CE;sort=desc:name): The open source version of GitLab.
+ - [GitLab Premium or Ultimate Marketplace (pre-licensed)](https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Images:visibility=public-images;source=Marketplace;search=GitLab%20EE;sort=desc:name): 5 user license built into per-minute billing.
+
+1. AMI IDs are unique per region. After you've loaded any of these editions, in the upper-right corner, select the desired target region of the console to see the appropriate AMIs.
+1. After the console is loaded, you can add additional search criteria to narrow further. For instance, type `13.` to find only 13.x versions.
+1. To launch an EC2 Machine with one of the listed AMIs, check the box at the start of the relevant row, and select **Launch** near the top left of the page.
+
+NOTE:
+If you are trying to restore from an older version of GitLab while moving to AWS, find the
+[Enterprise and Community Editions before GitLab 11.10.3](https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Images:visibility=public-images;ownerAlias=855262394183;sort=desc:name).
diff --git a/doc/solutions/cloud/aws/img/all-aws-partner-designations.png b/doc/solutions/cloud/aws/img/all-aws-partner-designations.png
new file mode 100644
index 00000000000..76925656fec
--- /dev/null
+++ b/doc/solutions/cloud/aws/img/all-aws-partner-designations.png
Binary files differ
diff --git a/doc/solutions/cloud/aws/index.md b/doc/solutions/cloud/aws/index.md
new file mode 100644
index 00000000000..7e9eed235ff
--- /dev/null
+++ b/doc/solutions/cloud/aws/index.md
@@ -0,0 +1,84 @@
+---
+stage: Solutions Architecture
+group: Solutions Architecture
+info: This page is owned by the Solutions Architecture team.
+---
+
+# AWS Solutions
+
+This documentation covers solutions relating to leveraging GitLab with and on Amazon Web Services (AWS).
+
+- [GitLab AWS Integration Index](gitlab_aws_integration.md)
+- [GitLab partnership certifications and designations from AWS](gitlab_aws_partner_designations.md)
+- [GitLab Instances on AWS EKS](gitlab_instance_on_aws.md)
+- [SRE Considerations for Gitaly on AWS](gitaly_sre_for_aws.md)
+- [Provision GitLab on a single EC2 instance in AWS](gitlab_single_box_on_aws.md)
+
+## Cloud platform well architected compliance
+
+Testing-backed architectural qualification is a fundamental concept behind implementation patterns:
+
+- Implementation patterns maintain GitLab Reference Architecture compliance and provide [GitLab Performance Tool](https://gitlab.com/gitlab-org/quality/performance) (GPT) reports to demonstrate adherence to them.
+- Implementation patterns may be qualified by and/or contributed to by the technology vendor. For instance, an implementation pattern for AWS may be officially reviewed by AWS.
+- Implementation patterns may specify and test Cloud Platform PaaS services for suitability for GitLab. This testing can be coordinated and help qualify these technologies for Reference Architectures. For instance, qualifying compatibility with and availability of runtime versions of top level PaaS such as those for PostgreSQL and Redis.
+- Implementation patterns can provide qualified testing for platform limitations, for example, ensuring Gitaly Cluster works correctly with specific cloud platform availability zone latency and throughput characteristics, or qualifying what levels of platform partner local disk performance are workable for Gitaly to operate with integrity.
+
+## AWS known issues list
+
+Known issues are gathered from within GitLab and from customer-reported issues. Customers successfully implement GitLab with a variety of “as a Service” components that GitLab has not specifically been designed for, nor has ongoing testing for. While GitLab takes partner technologies very seriously, the known issues highlighted here are a convenience for implementers; they do not imply that GitLab has targeted compatibility with, nor carries any type of guarantee of running on, the partner technology where the issues occur. Consult individual issues to understand the GitLab stance and plans on any given known issue.
+
+See the [GitLab AWS known issues list](https://gitlab.com/gitlab-com/alliances/aws/public-tracker/-/issues?label_name[]=AWS+Known+Issue) for a complete list.
+
+## Patterns with working code examples for using GitLab with AWS
+
+[The Guided Explorations' subgroup for AWS](https://gitlab.com/guided-explorations/aws) contains a variety of working example projects.
+
+## Platform partner specificity
+
+Implementation patterns enable platform-specific terminology, best practice architecture, and platform-specific build manifests:
+
+- Implementation patterns are more vendor specific. For instance, advising specific compute instances / VMs / nodes instead of vCPUs or other generalized measures.
+- Implementation patterns are oriented to implementing good architecture for the vendor in view.
+- Implementation patterns are written to an audience who is familiar with building on the infrastructure that the implementation pattern targets. For example, if the implementation pattern is for GCP, the specific terminology of GCP is used - including using the specific names for PaaS services.
+- Implementation patterns can test and qualify whether the available PaaS versions are compatible with GitLab (for example, PostgreSQL and Redis).
+
+## Platform as a Service (PaaS) specification and usage
+
+Platform as a Service options are a huge portion of the value provided by Cloud Platforms as they simplify operational complexity and reduce the SRE and security skilling required to operate advanced, highly available technology services. Implementation patterns can be pre-qualified against the partner PaaS options.
+
+- Implementation patterns help implementers understand what PaaS options are known to work and how to choose between PaaS solutions when a single platform has more than one PaaS option for the same GitLab role.
+- For instance, where reference architectures do not have a specific recommendation on what technology is leveraged for GitLab outbound email services or what the sizing should be, a reference implementation may advise using a cloud provider's Email as a Service (PaaS), possibly even with specific settings.
+
+## Cost optimizing engineering
+
+Cost engineering is a fundamental aspect of cloud architecture, and the savings capabilities available on a platform frequently exert a strong influence on how to build out scaled computing.
+
+- Implementation patterns may engineer specifically for the savings models available on a platform provider. An AWS example is maximizing use of a specific instance type to take advantage of reserved instances.
+- Implementation patterns may leverage ephemeral compute where appropriate and with appropriate customer guidelines. For instance, a Kubernetes node group dedicated to runners on ephemeral compute, with appropriate GitLab Runner tagging to indicate the compute type (see the sketch after this list).
+- Implementation patterns may include vendor-specific cost calculators.
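+
+As a minimal sketch of the runner-tagging approach, the following assumes the `POST /user/runners` REST endpoint and placeholder values for the access token, project ID, and tag names:
+
+```shell
+# Create a project runner whose tags mark it as running on ephemeral (spot) compute (placeholder values).
+curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
+  --data "runner_type=project_type" \
+  --data "project_id=<project_id>" \
+  --data "description=Ephemeral spot-instance Kubernetes runner" \
+  --data "tag_list=spot,ephemeral" \
+  "https://gitlab.example.com/api/v4/user/runners"
+```
+
+Jobs that tolerate interruption can then target these runners with matching `tags:` entries in `.gitlab-ci.yml`.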
+
+## Actionability and automatability orientation
+
+Implementation patterns are one step closer to specifics that can be used as a source for build instructions and automation code:
+
+- Implementation patterns enable builders to generate a list of vendor specific resources required to implement GitLab for a given Reference Architecture.
+- Implementation patterns enable builders to use manual instructions or to create automation to build out the reference implementation.
+
+## Intended audiences and contributors
+
+The primary audience for, and contributor to, this information is the GitLab **Implementation Ecosystem**, which consists of at least:
+
+GitLab Implementation Community:
+
+- Customers
+- GitLab Channel Partners (Integrators)
+- Platform Partners
+
+GitLab Internal Implementation Teams:
+
+- Quality / Distribution / Self-Managed
+- Alliances
+- Training
+- Support
+- Professional Services
+- Public Sector
diff --git a/doc/solutions/cloud/index.md b/doc/solutions/cloud/index.md
new file mode 100644
index 00000000000..27a90223382
--- /dev/null
+++ b/doc/solutions/cloud/index.md
@@ -0,0 +1,13 @@
+---
+stage: Solutions Architecture
+group: Solutions Architecture
+info: This page is owned by the Solutions Architecture team.
+---
+
+# Cloud solutions
+
+This documentation section covers a variety of Cloud Solutions.
+
+## Cloud solutions by provider
+
+[AWS Solutions](aws/index.md)
diff --git a/doc/solutions/index.md b/doc/solutions/index.md
new file mode 100644
index 00000000000..9d7fec1b549
--- /dev/null
+++ b/doc/solutions/index.md
@@ -0,0 +1,19 @@
+---
+stage: Solutions Architecture
+group: Solutions Architecture
+info: This page is owned by the Solutions Architecture team.
+---
+
+# Solutions architecture
+
+As with all extensible platforms, GitLab has many features that can be creatively combined with third-party functionality to create solutions that address the specific people, process, and technology challenges of the organizations that use it. Reference solutions and implementations can also be crafted at a more general level, so that customers with needs similar to the reference solution can adopt and customize them.
+
+This documentation is the home for solutions GitLab wishes to share with customers.
+
+## Relationship to documentation
+
+While information in this section gives valuable and qualified guidance on ways to solve problems by using the GitLab platform, the product documentation is the authoritative reference for product features and functions.
+
+## Solutions categories
+
+[Cloud Solutions](cloud/index.md)
diff --git a/doc/subscriptions/bronze_starter.md b/doc/subscriptions/bronze_starter.md
index 3b2ef601136..90e0e77cb9a 100644
--- a/doc/subscriptions/bronze_starter.md
+++ b/doc/subscriptions/bronze_starter.md
@@ -106,7 +106,7 @@ the tiers are no longer mentioned in GitLab documentation:
- [Filtering merge requests](../user/project/merge_requests/index.md#filter-the-list-of-merge-requests) by "approved by"
- [Advanced search (Elasticsearch)](../user/search/advanced_search.md)
- [Service Desk](../user/project/service_desk/index.md)
-- [Storage usage statistics](../user/usage_quotas.md#storage-usage-statistics)
+- [Storage usage statistics](../user/usage_quotas.md)
The following developer features continue to be available to Starter and
Bronze-level subscribers:
diff --git a/doc/subscriptions/gitlab_com/index.md b/doc/subscriptions/gitlab_com/index.md
index b4efc463910..317cdb1e1d5 100644
--- a/doc/subscriptions/gitlab_com/index.md
+++ b/doc/subscriptions/gitlab_com/index.md
@@ -327,8 +327,11 @@ For details on upgrading your subscription tier, see
### Automatic subscription renewal
-When a subscription is set to auto-renew, it renews automatically on the
-expiration date without a gap in available service. Subscriptions purchased through the Customers Portal or GitLab.com are set to auto-renew by default. The number of seats is adjusted to fit the [number of billable users in your group](#view-seat-usage) at the time of renewal, if that number is higher than the current subscription quantity. You can view and download your renewal invoice on the Customers Portal [View invoices](https://customers.gitlab.com/receipts) page. If your account has a [saved credit card](../customers_portal.md#change-your-payment-method), the card is charged for the invoice amount. If we are unable to process a payment, or the auto-renewal fails for any other reason, you have 14 days to renew your subscription, after which your access is downgraded.
+When a subscription is set to auto-renew, it renews automatically on the expiration date without a gap in available service. Subscriptions purchased through the Customers Portal or GitLab.com are set to auto-renew by default.
+
+The number of seats is adjusted to fit the [number of billable users in your group](#view-seat-usage) at the time of renewal, if that number is higher than the current subscription quantity.
+
+You can view and download your renewal invoice on the Customers Portal [View invoices](https://customers.gitlab.com/receipts) page. If your account has a [saved credit card](../customers_portal.md#change-your-payment-method), the card is charged for the invoice amount. If we are unable to process a payment, or the auto-renewal fails for any other reason, you have 14 days to renew your subscription, after which your access is downgraded.
#### Email notifications
@@ -412,7 +415,7 @@ You can [cancel the subscription](#enable-or-disable-automatic-subscription-rene
1. Sign in to GitLab SaaS.
1. From either your personal homepage or the group's page, go to **Settings > Usage Quotas**.
-1. For each locked project, total by how much its **Usage** exceeds the free quota and purchased
+1. For each read-only project, total by how much its **Usage** exceeds the free quota and purchased
storage. You must purchase the storage increment that exceeds this total.
1. Select **Purchase more storage** and you are taken to the Customers Portal.
1. Select **Add new subscription**.
@@ -425,8 +428,8 @@ You can [cancel the subscription](#enable-or-disable-automatic-subscription-rene
1. Sign out of the Customers Portal.
1. Switch back to the GitLab SaaS tab and refresh the page.
-The **Purchased storage available** total is incremented by the amount purchased. All locked
-projects are unlocked and their excess usage is deducted from the additional storage.
+The **Purchased storage available** total is incremented by the amount purchased. The read-only
+state for all projects is removed, and their excess usage is deducted from the additional storage.
#### For your group namespace
diff --git a/doc/subscriptions/gitlab_dedicated/index.md b/doc/subscriptions/gitlab_dedicated/index.md
index d07979cfda5..07abfb223ef 100644
--- a/doc/subscriptions/gitlab_dedicated/index.md
+++ b/doc/subscriptions/gitlab_dedicated/index.md
@@ -23,7 +23,10 @@ GitLab Dedicated allows you to select the cloud region where your data will be s
### Availability and scalability
-GitLab Dedicated leverages the GitLab [Cloud Native Hybrid reference architectures](../../administration/reference_architectures/index.md#cloud-native-hybrid) with high availability enabled. When [onboarding](../../administration/dedicated/index.md#onboarding-to-gitlab-dedicated-using-switchboard), GitLab will match you to the closest reference architecture size based on your number of users. Learn about the [current Service Level Objective](https://about.gitlab.com/handbook/engineering/infrastructure/team/gitlab-dedicated/slas/#current-service-level-objective).
+GitLab Dedicated leverages modified versions of the GitLab [Cloud Native Hybrid reference architectures](../../administration/reference_architectures/index.md#cloud-native-hybrid) with high availability enabled. When [onboarding](../../administration/dedicated/index.md#onboarding-to-gitlab-dedicated-using-switchboard), GitLab will match you to the closest reference architecture size based on your number of users. Learn about the [current Service Level Objective](https://about.gitlab.com/handbook/engineering/infrastructure/team/gitlab-dedicated/slas/#current-service-level-objective).
+
+NOTE:
+The published [reference architectures](../../administration/reference_architectures/index.md) act as a starting point in defining the cloud resources deployed inside GitLab Dedicated environments, but they are not comprehensive. GitLab Dedicated leverages additional Cloud Provider services beyond what's included in the standard reference architectures for enhanced security and stability of the environment. Therefore, GitLab Dedicated costs differ from standard reference architecture costs.
#### Disaster Recovery
diff --git a/doc/subscriptions/self_managed/index.md b/doc/subscriptions/self_managed/index.md
index 05d00323e2a..a1573132ab2 100644
--- a/doc/subscriptions/self_managed/index.md
+++ b/doc/subscriptions/self_managed/index.md
@@ -34,7 +34,8 @@ Prorated charges are not possible without a quarterly usage report.
## View user totals
-You can view users for your license and determine if you've gone over your subscription.
+View the number of users in your instance to determine if it exceeds the number
+paid for in your subscription.
1. On the left sidebar, select **Search or go to**.
1. Select **Admin Area**.
@@ -44,17 +45,19 @@ The lists of users are displayed.
### Billable users
-A _billable user_ counts against the number of subscription seats. Every user is considered a
-billable user, with the following exceptions:
-
-- [Deactivated users](../../administration/moderate_users.md#deactivate-a-user) and
- [blocked users](../../administration/moderate_users.md#block-a-user) don't count as billable users in the current subscription. When they are either deactivated or blocked they release a _billable user_ seat. However, they may
- count toward overages in the subscribed seat count.
-- Users who are [pending approval](../../administration/moderate_users.md#users-pending-approval).
-- Users with only the [Minimal Access role](../../user/permissions.md#users-with-minimal-access) on self-managed Ultimate subscriptions or any GitLab.com subscriptions.
-- Users with only the [Guest or Minimal Access roles on an Ultimate subscription](#free-guest-users).
-- Users without project or group memberships on an Ultimate subscription.
-- GitLab-created service accounts:
+Billable users count toward the number of subscription seats purchased in your subscription.
+
+A user is not counted as a billable user if:
+
+- They are [deactivated](../../administration/moderate_users.md#deactivate-a-user) or
+ [blocked](../../administration/moderate_users.md#block-a-user).
+ If the user occupied a seat prior to being deactivated or blocked,
+ the user is included in the number of [maximum users](#maximum-users).
+- They are [pending approval](../../administration/moderate_users.md#users-pending-approval).
+- They have only the [Minimal Access role](../../user/permissions.md#users-with-minimal-access) on self-managed Ultimate subscriptions or any GitLab.com subscriptions.
+- They have only the [Guest or Minimal Access roles on an Ultimate subscription](#free-guest-users).
+- They do not have project or group memberships on an Ultimate subscription.
+- The account is a GitLab-created service account:
- [Ghost User](../../user/profile/account/delete_account.md#associated-records).
- Bots such as:
- [Support Bot](../../user/project/service_desk/configure.md#support-bot-user).
@@ -62,7 +65,7 @@ billable user, with the following exceptions:
- [Bot users for groups](../../user/group/settings/group_access_tokens.md#bot-users-for-groups).
- Other [internal users](../../development/internal_users.md#internal-users).
-**Billable users** as reported in the `/admin` section is updated once per day.
+The number of **Billable users** is updated once a day in the Admin Area.
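+
+If you want a quick, unofficial check of the current figure from the command line, a sketch for Linux package installations follows, assuming the `User.billable` scope is available in the Rails runner:
+
+```shell
+# Print the current billable user count; the Admin Area figure is refreshed only once a day.
+sudo gitlab-rails runner 'puts User.billable.count'
+```
+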
### Maximum users
@@ -373,14 +376,12 @@ An invoice is generated for the renewal and available for viewing or download on
### Automatic subscription renewal
-When a subscription is set to auto-renew, it renews automatically on the
-expiration date (at midnight UTC) without a gap in available service. Subscriptions purchased through Customers Portal are set to auto-renew by default.
-The number of user licenses is adjusted to fit the [number of billable users in your instance](#view-user-totals) at the time of renewal, if that number is higher than the current subscription quantity.
-Before auto-renewal you should [prepare for the renewal](#prepare-for-renewal-by-reviewing-your-account) at least 2 days before the renewal date, so that your changes synchronize to GitLab in time for your renewal. To auto-renew your subscription,
+When a subscription is set to auto-renew, it renews automatically on the expiration date (at midnight UTC) without a gap in available service. Subscriptions purchased through Customers Portal are set to auto-renew by default.
+
+The number of user licenses is adjusted to fit the [number of billable users in your instance](#view-user-totals) at the time of renewal, if that number is higher than the current subscription quantity. Before auto-renewal you should [prepare for the renewal](#prepare-for-renewal-by-reviewing-your-account) at least 2 days before the renewal date, so that your changes synchronize to GitLab in time for your renewal. To auto-renew your subscription,
you must have enabled the [synchronization of subscription data](#subscription-data-synchronization).
-You can view and download your renewal invoice on the Customers Portal
-[View invoices](https://customers.gitlab.com/receipts) page. If your account has a [saved credit card](../customers_portal.md#change-your-payment-method), the card is charged for the invoice amount. If we are unable to process a payment or the auto-renewal fails for any other reason, you have 14 days to renew your subscription, after which your GitLab tier is downgraded.
+You can view and download your renewal invoice on the Customers Portal [View invoices](https://customers.gitlab.com/receipts) page. If your account has a [saved credit card](../customers_portal.md#change-your-payment-method), the card is charged for the invoice amount. If we are unable to process a payment or the auto-renewal fails for any other reason, you have 14 days to renew your subscription, after which your GitLab tier is downgraded.
#### Email notifications
diff --git a/doc/topics/autodevops/cicd_variables.md b/doc/topics/autodevops/cicd_variables.md
index 21d9dd0b3d3..4fa2ee10c75 100644
--- a/doc/topics/autodevops/cicd_variables.md
+++ b/doc/topics/autodevops/cicd_variables.md
@@ -31,6 +31,9 @@ Use these variables to customize and deploy your build.
| `AUTO_DEVOPS_CHART_REPOSITORY_USERNAME` | Used to set a username to connect to the Helm repository. Defaults to no credentials. Also set `AUTO_DEVOPS_CHART_REPOSITORY_PASSWORD`. |
| `AUTO_DEVOPS_CHART_REPOSITORY_PASSWORD` | Used to set a password to connect to the Helm repository. Defaults to no credentials. Also set `AUTO_DEVOPS_CHART_REPOSITORY_USERNAME`. |
| `AUTO_DEVOPS_CHART_REPOSITORY_PASS_CREDENTIALS` | From GitLab 14.2, set to a non-empty value to enable forwarding of the Helm repository credentials to the chart server when the chart artifacts are on a different host than repository. |
+| `AUTO_DEVOPS_CHART_REPOSITORY_INSECURE` | Set to a non-empty value to add a `--insecure-skip-tls-verify` argument to the Helm commands. By default, Helm uses TLS verification. |
+| `AUTO_DEVOPS_CHART_CUSTOM_ONLY` | Set to a non-empty value to use only a custom chart. By default, the latest chart is downloaded from GitLab. |
+| `AUTO_DEVOPS_CHART_VERSION` | Set the version of the deployment chart. Defaults to the latest available version. |
| `AUTO_DEVOPS_COMMON_NAME` | From GitLab 15.5, set to a valid domain name to customize the common name used for the TLS certificate. Defaults to `le-$CI_PROJECT_ID.$KUBE_INGRESS_BASE_DOMAIN`. Set to `false` to not set this alternative host on the Ingress. |
| `AUTO_DEVOPS_DEPLOY_DEBUG` | From GitLab 13.1, if this variable is present, Helm outputs debug logs. |
| `AUTO_DEVOPS_ALLOW_TO_FORCE_DEPLOY_V<N>` | From [auto-deploy-image](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image) v1.0.0, if this variable is present, a new major version of chart is forcibly deployed. For more information, see [Ignore warnings and continue deploying](upgrading_auto_deploy_dependencies.md#ignore-warnings-and-continue-deploying). |
diff --git a/doc/topics/autodevops/customize.md b/doc/topics/autodevops/customize.md
index e920ae5e5e1..2e6672e3ab0 100644
--- a/doc/topics/autodevops/customize.md
+++ b/doc/topics/autodevops/customize.md
@@ -208,11 +208,14 @@ repository or by specifying a project CI/CD variable:
file in it, Auto DevOps detects the chart and uses it instead of the
[default chart](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/tree/master/assets/auto-deploy-app).
- **Project variable** - Create a [project CI/CD variable](../../ci/variables/index.md)
- `AUTO_DEVOPS_CHART` with the URL of a custom chart. You can also create two project
+ `AUTO_DEVOPS_CHART` with the URL of a custom chart. You can also create five project
variables:
- `AUTO_DEVOPS_CHART_REPOSITORY` - The URL of a custom chart repository.
- `AUTO_DEVOPS_CHART` - The path to the chart.
+ - `AUTO_DEVOPS_CHART_REPOSITORY_INSECURE` - Set to a non-empty value to add a `--insecure-skip-tls-verify` argument to the Helm commands.
+ - `AUTO_DEVOPS_CHART_CUSTOM_ONLY` - Set to a non-empty value to use only a custom chart. By default, the latest chart is downloaded from GitLab.
+  - `AUTO_DEVOPS_CHART_VERSION` - The version of the deployment chart (see the example after this list).
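+
+  For example, a sketch of setting one of these variables with the project-level CI/CD variables API (`POST /projects/:id/variables`), using placeholder values for the access token, project ID, and chart version:
+
+  ```shell
+  # Set the deployment chart version as a project CI/CD variable (placeholder values).
+  curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
+    --form "key=AUTO_DEVOPS_CHART_VERSION" \
+    --form "value=<chart_version>" \
+    "https://gitlab.example.com/api/v4/projects/<project_id>/variables"
+  ```
+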
### Customize Helm chart values
diff --git a/doc/topics/offline/quick_start_guide.md b/doc/topics/offline/quick_start_guide.md
index 301f73a268d..4ff9975b317 100644
--- a/doc/topics/offline/quick_start_guide.md
+++ b/doc/topics/offline/quick_start_guide.md
@@ -204,7 +204,7 @@ Version Check and Service Ping improve the GitLab user experience and ensure tha
users are on the most up-to-date instances of GitLab. These two services can be turned off for offline
environments so that they do not attempt and fail to reach out to GitLab services.
-For more information, see [Enable or disable usage statistics](../../administration/settings/usage_statistics.md#enable-or-disable-usage-statistics).
+For more information, see [Enable or disable service ping](../../administration/settings/usage_statistics.md#enable-or-disable-service-ping).
### Configure NTP
diff --git a/doc/tutorials/build_application.md b/doc/tutorials/build_application.md
index 2b1f63874b1..5f4b9da2aa3 100644
--- a/doc/tutorials/build_application.md
+++ b/doc/tutorials/build_application.md
@@ -31,7 +31,7 @@ Set up runners to run jobs in a pipeline.
|-------|-------------|--------------------|
| [Create, register, and run your own project runner](create_register_first_runner/index.md) | Learn the basics of how to create and register a project runner that runs jobs for your project. | **{star}** |
| [Configure GitLab Runner to use the Google Kubernetes Engine](configure_gitlab_runner_to_use_gke/index.md) | Learn how to configure GitLab Runner to use the GKE to run jobs. | |
-| [Automate the creation of runners](https://about.gitlab.com/blog/2023/07/06/how-to-automate-creation-of-runners/) | Learn how to automate runner creation as an authenticated user to optimize your runner fleet. | |
+| [Automate runner creation and registration](automate_runner_creation/index.md) | Learn how to automate runner creation as an authenticated user to optimize your runner fleet. | |
## Publish a static website
diff --git a/doc/tutorials/left_sidebar/index.md b/doc/tutorials/left_sidebar/index.md
index 55e3b1dc30d..be631a20d50 100644
--- a/doc/tutorials/left_sidebar/index.md
+++ b/doc/tutorials/left_sidebar/index.md
@@ -12,12 +12,12 @@ Follow this tutorial to learn how to use the new left sidebar to navigate the UI
## Enable the new left sidebar
-To view the new sidebar:
+From 16.0 through 16.5, you can turn the new sidebar on and off:
1. On the left sidebar, select your avatar.
-1. Turn on the **New navigation** toggle.
+1. Turn the **New navigation** toggle on or off.
-To turn off this sidebar, return to your avatar and turn off the toggle.
+Return to your avatar to change the setting.
## Layout of the left sidebar
diff --git a/doc/tutorials/product_analytics_onboarding_website_project/index.md b/doc/tutorials/product_analytics_onboarding_website_project/index.md
new file mode 100644
index 00000000000..c0c3d7bb3d9
--- /dev/null
+++ b/doc/tutorials/product_analytics_onboarding_website_project/index.md
@@ -0,0 +1,139 @@
+---
+stage: Analyze
+group: Product Analytics
+info: For assistance with this tutorial, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-other-projects-and-subjects.
+---
+
+# Tutorial: Set up product analytics in a GitLab Pages website project **(ULTIMATE ALL EXPERIMENT)**
+
+Understanding how your users engage with your website or application is important for making data-driven decisions.
+By identifying the most and least used features by your users, your team can decide where and how to spend their time effectively.
+
+Follow along to learn how to set up an example website project, onboard product analytics for the project, instrument the website to start collecting events,
+and use project-level analytics dashboards to understand user behavior.
+
+Here's an overview of what we're going to do:
+
+1. Create a project from a template
+1. Onboard the project with product analytics
+1. Instrument the website with tracking snippet
+1. Collect usage data
+1. View dashboards
+
+## Before you begin
+
+To follow along with this tutorial, you must:
+
+- [Enable product analytics](../../user/product_analytics/index.md#enable-product-analytics) for your instance.
+- Have the Owner role for the group you create the project in.
+
+## Create a project from a template
+
+First of all, you need to create a project in your group.
+
+GitLab provides project templates,
+which make it easier to set up a project with all the necessary files for various use cases.
+Here, you'll create a project for a plain HTML website.
+
+To create a project:
+
+1. On the left sidebar, at the top, select **Create new** (**{plus}**) and **New project/repository**.
+1. Select **Create from template**.
+1. Select the **Pages/Plain HTML** template.
+1. In the **Project name** text box, enter a name (for example `My website`).
+1. From the **Project URL** dropdown list, select the group you want to create the project in.
+1. In the **Project slug** text box, enter a slug for your project (for example, `my-website`).
+1. Optional. In the **Project description** text box, enter a description of your project.
+ For example, `Plain HTML website with product analytics`. You can add or edit this description at any time.
+1. Under **Visibility Level**, select the desired level for the project.
+ If you create the project in a group, the visibility setting for a project must be at least as restrictive as the visibility of its parent group.
+1. Select **Create project**.
+
+Now you have a project with all the files you need for a plain HTML website.
+
+## Onboard the project with product analytics
+
+To collect events and view dashboards about your website usage, the project must have product analytics onboarded.
+
+To onboard your new project with product analytics:
+
+1. In the project, select **Analyze > Analytics dashboards**.
+1. Find the **Product analytics** item and select **Set up**.
+1. Select **Set up product analytics**.
+1. Wait for your instance to finish creating.
+1. Copy the **HTML script setup** snippet. You will need it in the next steps.
+
+Your project is now onboarded and ready for your application to start sending events.
+
+## Instrument your website
+
+To collect and send usage events to GitLab, you must include a code snippet in your website.
+You can choose from several platform and technology-specific tracking SDKs to integrate with your application.
+For this example website, we use the [Browser SDK](../../user/product_analytics/instrumentation/browser_sdk.md).
+
+To instrument your new website:
+
+1. In the project, select **Code > Repository**.
+1. Select **Edit > Web IDE**.
+1. In the left Web IDE toolbar, select **File Explorer** and open the `public/index.html` file.
+1. In the `public/index.html` file, before the closing `</body>` tag, paste the snippet you copied in the previous section.
+
+ The code in the `index.html` file should look like this (where `appId` and `host` have the values provided in the onboarding section):
+
+ ```html
+ <!DOCTYPE html>
+ <html>
+ <head>
+ <meta charset="utf-8">
+ <meta name="generator" content="GitLab Pages">
+ <title>Plain HTML site using GitLab Pages</title>
+ <link rel="stylesheet" href="style.css">
+ </head>
+ <body>
+ <div class="navbar">
+ <a href="https://pages.gitlab.io/plain-html/">Plain HTML Example</a>
+ <a href="https://gitlab.com/pages/plain-html/">Repository</a>
+ <a href="https://gitlab.com/pages/">Other Examples</a>
+ </div>
+
+ <h1>Hello World!</h1>
+
+ <p>
+ This is a simple plain-HTML website on GitLab Pages, without any fancy static site generator.
+ </p>
+ <script src="https://unpkg.com/@gitlab/application-sdk-browser/dist/gl-sdk.min.js"></script>
+ <script>
+ window.glClient = window.glSDK.glClientSDK({
+ appId: 'YOUR_APP_ID',
+ host: 'YOUR_HOST',
+ });
+ </script>
+ </body>
+ </html>
+ ```
+
+1. In the left Web IDE toolbar, select **Source Control**.
+1. Enter a commit message, such as `Add GitLab product analytics tracking snippet`.
+1. Select **Commit**, and if prompted to create a new branch or continue, select **Continue**. You can then close the Web IDE.
+1. In the project, select **Build > Pipelines**.
+ A pipeline is triggered from your recent commit. Wait for it to finish running and deploying your updated website.
+
+## Collect usage data
+
+After the instrumented website is deployed, events start being collected.
+
+1. In the project, select **Deploy > Pages**.
+1. To open the website, under **Access pages**, select your unique URL.
+1. To collect some page view events, refresh the page a few times.
+
+## View dashboards
+
+GitLab provides two product analytics dashboards by default: **Audience** and **Behavior**.
+These dashboards become available after your project has received some events.
+
+To view these dashboards:
+
+1. In the project, select **Analyze > Analytics dashboards**.
+1. From the list of available dashboards, select **Audience** or **Behavior**.
+
+That's it! You now have a website project with product analytics, which helps you collect and visualize data to understand your users' behavior and helps your team work more efficiently.
diff --git a/doc/update/deprecations.md b/doc/update/deprecations.md
index 3bb7f9816b4..333dad86086 100644
--- a/doc/update/deprecations.md
+++ b/doc/update/deprecations.md
@@ -102,6 +102,28 @@ This change is a breaking change. You should [create a runner in the UI](../ci/r
<div class="deprecation breaking-change" data-milestone="18.0">
+### Running a single database is deprecated
+
+<div class="deprecation-notes">
+- Announced in GitLab <span class="milestone">16.1</span>
+- Removal in GitLab <span class="milestone">18.0</span> ([breaking change](https://docs.gitlab.com/ee/update/terminology.html#breaking-change))
+- To discuss this change or learn more, see the [deprecation issue](https://gitlab.com/gitlab-org/gitlab/-/issues/411239).
+</div>
+
+From GitLab 18.0, we will require a [separate database for CI features](https://gitlab.com/groups/gitlab-org/-/epics/7509).
+For most deployments, we recommend running both databases on the same PostgreSQL instance or instances for ease of management.
+
+We are providing this advance notice for information only, and we do not recommend taking action yet.
+We will communicate another update (and update this deprecation note) when we recommend that administrators start the migration process.
+
+This change provides additional scalability for the largest of GitLab instances, like GitLab.com.
+This change applies to all installation methods: Omnibus GitLab, GitLab Helm chart, GitLab Operator, GitLab Docker images, and installation from source.
+Before upgrading to GitLab 18.0, ensure you have [migrated](https://docs.gitlab.com/ee/administration/postgresql/multiple_databases.html) to two databases.
+
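+A minimal sketch of the Linux package (Omnibus) setting involved follows; treat it as an illustration only and follow the linked migration documentation for the full, supported procedure:
+
+```shell
+# Sketch for Linux package installations (assumed setting; see the migration guide for the complete steps):
+# in /etc/gitlab/gitlab.rb, set:
+#   gitlab_rails['databases']['ci']['enable'] = true
+# then apply the change:
+sudo gitlab-ctl reconfigure
+```
+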
+</div>
+
+<div class="deprecation breaking-change" data-milestone="18.0">
+
### Support for REST API endpoints that reset runner registration tokens
<div class="deprecation-notes">
@@ -187,6 +209,26 @@ Because Cloud Native Buildpacks do not support automatic testing, the Auto Test
<div class="deprecation breaking-change" data-milestone="17.0">
+### Breaking change to the Maven repository group permissions
+
+<div class="deprecation-notes">
+- Announced in GitLab <span class="milestone">16.6</span>
+- Removal in GitLab <span class="milestone">17.0</span> ([breaking change](https://docs.gitlab.com/ee/update/terminology.html#breaking-change))
+- To discuss this change or learn more, see the [deprecation issue](https://gitlab.com/gitlab-org/gitlab/-/issues/393933).
+</div>
+
+The Maven repository exposes an API endpoint at the group level that allows Maven clients to download files from a specific package. The package finder first locates the package within the group, and then finds the file within the package.
+However, there is a limitation that affects duplicate package names hosted in different projects. The Maven package finder always returns the most recent package, but the "most recent" filter depends on user permissions. It is possible for a user with different permissions in different projects to download the wrong Maven package.
+
+In GitLab 17.0, the package finder logic will be fixed so that the "most recent" package is the last updated name and version of a package in a group. User permissions will be checked after the most recent package is found.
+After the change, download requests for users without the correct permissions will be rejected. If your workflow depends on the current, incorrect behavior, this fix will introduce a breaking change.
+
+The change will be introduced in GitLab 16.6 behind a feature flag. If you are interested in enabling the feature flag for your group, leave a comment in [issue 393933](https://gitlab.com/gitlab-org/gitlab/-/issues/393933).
+
+</div>
+
+<div class="deprecation breaking-change" data-milestone="17.0">
+
### CiRunner.projects default sort is changing to `id_desc`
<div class="deprecation-notes">
@@ -217,6 +259,24 @@ the aliasing for the `CiRunnerUpgradeStatusType` type will be removed.
<div class="deprecation breaking-change" data-milestone="17.0">
+### Container Registry support for the Swift and OSS storage drivers
+
+<div class="deprecation-notes">
+- Announced in GitLab <span class="milestone">16.6</span>
+- Removal in GitLab <span class="milestone">17.0</span> ([breaking change](https://docs.gitlab.com/ee/update/terminology.html#breaking-change))
+- To discuss this change or learn more, see the [deprecation issue](https://gitlab.com/gitlab-org/container-registry/-/issues/1141).
+</div>
+
+The container registry uses storage drivers to work with various object storage platforms. While each driver's code is relatively self-contained, there is a high maintenance burden for these drivers. Each driver implementation is unique and making changes to a driver requires a high level of domain expertise with that specific driver.
+
+As we look to reduce maintenance costs, we are deprecating support for OSS (Object Storage Service) and OpenStack Swift. Both have already been removed from the upstream Docker Distribution. This helps align the container registry with the broader GitLab product offering with regard to [object storage support](https://docs.gitlab.com/ee/administration/object_storage.html#supported-object-storage-providers).
+
+OSS has an [S3 compatibility mode](https://www.alibabacloud.com/help/en/oss/developer-reference/compatibility-with-amazon-s3), so consider using that if you can't migrate to a supported driver. Swift is also [compatible with the S3 API operations](https://docs.openstack.org/swift/latest/s3_compat.html) required by the S3 storage driver.
+
+</div>
+
+<div class="deprecation breaking-change" data-milestone="17.0">
+
### DAST ZAP advanced configuration variables deprecation
<div class="deprecation-notes">
@@ -387,6 +447,24 @@ major release, GitLab 17.0. This gem sees very little use and is better suited f
<div class="deprecation breaking-change" data-milestone="17.0">
+### File type variable expansion fixed in downstream pipelines
+
+<div class="deprecation-notes">
+- Announced in GitLab <span class="milestone">16.6</span>
+- Removal in GitLab <span class="milestone">17.0</span> ([breaking change](https://docs.gitlab.com/ee/update/terminology.html#breaking-change))
+- To discuss this change or learn more, see the [deprecation issue](https://gitlab.com/gitlab-org/gitlab/-/issues/419445).
+</div>
+
+Previously, if you tried to reference a [file type CI/CD variable](https://docs.gitlab.com/ee/ci/variables/#use-file-type-cicd-variables) in another CI/CD variable, the CI/CD variable would expand to contain the contents of the file. This behavior was incorrect because it did not comply with typical shell variable expansion rules. The CI/CD variable reference should expand to contain only the path to the file, not the contents of the file itself. This was [fixed for most use cases in GitLab 15.7](https://gitlab.com/gitlab-org/gitlab/-/issues/29407). Unfortunately, passing CI/CD variables to downstream pipelines was an edge case that was not yet fixed, and that will now be fixed in GitLab 17.0.
+
+With this change, a variable configured in the `.gitlab-ci.yml` file can reference a file variable and be passed to a downstream pipeline, and the file variable will be passed to the downstream pipeline as well. The downstream pipeline will expand the variable reference to the file path, not the file contents.
+
+This breaking change could disrupt user workflows that depend on expanding a file variable in a downstream pipeline.
+
+</div>
+
+<div class="deprecation breaking-change" data-milestone="17.0">
+
### Filepath field in Releases and Release Links APIs
<div class="deprecation-notes">
@@ -560,6 +638,24 @@ In GitLab 17.0, the `DISABLED_WITH_OVERRIDE` value of the `SharedRunnersSetting`
<div class="deprecation breaking-change" data-milestone="17.0">
+### GraphQL: deprecate support for `canDestroy` and `canDelete`
+
+<div class="deprecation-notes">
+- Announced in GitLab <span class="milestone">16.6</span>
+- Removal in GitLab <span class="milestone">17.0</span> ([breaking change](https://docs.gitlab.com/ee/update/terminology.html#breaking-change))
+- To discuss this change or learn more, see the [deprecation issue](https://gitlab.com/gitlab-org/gitlab/-/issues/390754).
+</div>
+
+The Package Registry user interface relies on the GitLab GraphQL API. To make it easy for everyone to contribute, it's important that the frontend is coded consistently across all GitLab product areas. Before GitLab 16.6, however, the Package Registry UI handled permissions differently from other areas of the product.
+
+In 16.6, we added a new `UserPermissions` field under the `Types::PermissionTypes::Package` type to align the Package Registry with the rest of GitLab. This new field replaces the `canDestroy` field under the `Package`, `PackageBase`, and `PackageDetailsType` types. It also replaces the field `canDelete` for `ContainerRepository`, `ContainerRepositoryDetails`, and `ContainerRepositoryTag`. In GitLab 17.0, the `canDestroy` and `canDelete` fields will be removed.
+
+This is a breaking change that will be completed in 17.0.
+
+</div>
+
+<div class="deprecation breaking-change" data-milestone="17.0">
+
### HashiCorp Vault integration will no longer use CI_JOB_JWT by default
<div class="deprecation-notes">
@@ -615,6 +711,33 @@ If you do access the internal container registry API and use the original tag de
<div class="deprecation breaking-change" data-milestone="17.0">
+### Legacy Geo Prometheus metrics
+
+<div class="deprecation-notes">
+- Announced in GitLab <span class="milestone">16.6</span>
+- Removal in GitLab <span class="milestone">17.0</span> ([breaking change](https://docs.gitlab.com/ee/update/terminology.html#breaking-change))
+- To discuss this change or learn more, see the [deprecation issue](https://gitlab.com/gitlab-org/gitlab/-/issues/430192).
+</div>
+
+Following the migration of projects to the [Geo self-service framework](https://docs.gitlab.com/ee/development/geo/framework.html), we have deprecated a number of [Prometheus](https://docs.gitlab.com/ee/administration/monitoring/prometheus/) metrics.
+The following Geo-related Prometheus metrics are deprecated and will be removed in 17.0.
+The table below lists the deprecated metrics and their respective replacements. The replacements are available in GitLab 16.3.0 and later, and an example query against a replacement metric follows the table.
+
+| Deprecated metric | Replacement metric |
+| ---------------------------------------- | ---------------------------------------------- |
+| `geo_repositories_synced` | `geo_project_repositories_synced` |
+| `geo_repositories_failed` | `geo_project_repositories_failed` |
+| `geo_repositories_checksummed` | `geo_project_repositories_checksummed` |
+| `geo_repositories_checksum_failed` | `geo_project_repositories_checksum_failed` |
+| `geo_repositories_verified` | `geo_project_repositories_verified` |
+| `geo_repositories_verification_failed` | `geo_project_repositories_verification_failed` |
+| `geo_repositories_checksum_mismatch` | None available |
+| `geo_repositories_retrying_verification` | None available |
+
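+As a sketch, you can confirm that a replacement metric is being scraped by querying your Prometheus server through its standard HTTP API (the server address and metric name below are examples):
+
+```shell
+# Query a replacement metric through the Prometheus HTTP API (placeholder server address).
+curl --silent --get "http://prometheus.example.com:9090/api/v1/query" \
+  --data-urlencode "query=geo_project_repositories_synced"
+```
+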
+</div>
+
+<div class="deprecation breaking-change" data-milestone="17.0">
+
### Maintainer role providing the ability to change Package settings using GraphQL API
<div class="deprecation-notes">
@@ -757,6 +880,20 @@ PostgreSQL 14 will also be supported for instances that want to upgrade prior to
<div class="deprecation breaking-change" data-milestone="17.0">
+### Proxy-based DAST deprecated
+
+<div class="deprecation-notes">
+- Announced in GitLab <span class="milestone">16.6</span>
+- Removal in GitLab <span class="milestone">17.0</span> ([breaking change](https://docs.gitlab.com/ee/update/terminology.html#breaking-change))
+- To discuss this change or learn more, see the [deprecation issue](https://gitlab.com/gitlab-org/gitlab/-/issues/430966).
+</div>
+
+As of GitLab 17.0, Proxy-based DAST will no longer be supported. Migrate to Browser-based DAST to continue analyzing your projects for security findings through dynamic analysis.
+
+</div>
+
+<div class="deprecation breaking-change" data-milestone="17.0">
+
### Queue selector for running Sidekiq is deprecated
<div class="deprecation-notes">
@@ -822,28 +959,6 @@ that is available now. We recommend this alternative solution because it provide
<div class="deprecation breaking-change" data-milestone="17.0">
-### Running a single database is deprecated
-
-<div class="deprecation-notes">
-- Announced in GitLab <span class="milestone">16.1</span>
-- Removal in GitLab <span class="milestone">17.0</span> ([breaking change](https://docs.gitlab.com/ee/update/terminology.html#breaking-change))
-- To discuss this change or learn more, see the [deprecation issue](https://gitlab.com/gitlab-org/gitlab/-/issues/411239).
-</div>
-
-From GitLab 17.0, we will require a [separate database for CI features](https://gitlab.com/groups/gitlab-org/-/epics/7509).
-We recommend running both databases on the same Postgres instance(s) due to ease of management for most deployments.
-
-We are providing this as an informational advance notice but we do not recommend taking action yet.
-We will have another update communicated (as well as the deprecation note) when we recommend admins to start the migration process.
-
-This change provides additional scalability for the largest of GitLab instances, like GitLab.com.
-This change applies to all installation methods: Omnibus GitLab, GitLab Helm chart, GitLab Operator, GitLab Docker images, and installation from source.
-Before upgrading to GitLab 17.0, please ensure you have [migrated](https://docs.gitlab.com/ee/administration/postgresql/multiple_databases.html) to two databases.
-
-</div>
-
-<div class="deprecation breaking-change" data-milestone="17.0">
-
### Security policy field `newly_detected` is deprecated
<div class="deprecation-notes">
@@ -892,8 +1007,6 @@ For updates and details about this deprecation, follow [this epic](https://gitla
- To discuss this change or learn more, see the [deprecation issue](https://gitlab.com/gitlab-org/gitlab/-/issues/387898).
</div>
-This deprecation is now superseded by another [deprecation notice](#running-a-single-database-is-deprecated).
-
Previously, [GitLab's database](https://docs.gitlab.com/omnibus/settings/database.html)
configuration had a single `main:` section. This is being deprecated. The new
configuration has both a `main:` and a `ci:` section.
@@ -926,6 +1039,24 @@ we'll be introducing support in [this epic](https://gitlab.com/groups/gitlab-org
<div class="deprecation breaking-change" data-milestone="17.0">
+### The GitHub importer Rake task
+
+<div class="deprecation-notes">
+- Announced in GitLab <span class="milestone">16.6</span>
+- Removal in GitLab <span class="milestone">17.0</span> ([breaking change](https://docs.gitlab.com/ee/update/terminology.html#breaking-change))
+- To discuss this change or learn more, see the [deprecation issue](https://gitlab.com/gitlab-org/gitlab/-/issues/428225).
+</div>
+
+In GitLab 16.6, the [GitHub importer Rake task](https://docs.gitlab.com/ee/administration/raketasks/github_import.html) is deprecated. The Rake task lacks several features that are supported by the API and is not actively maintained.
+
+In GitLab 17.0, the Rake task will be removed.
+
+Instead, GitHub repositories can be imported by using the [API](https://docs.gitlab.com/ee/api/import.html#import-repository-from-github) or the [UI](https://docs.gitlab.com/ee/user/project/import/github.html).
+
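+For example, a sketch of importing a repository with the import API, using placeholder values for the tokens, repository ID, and namespace:
+
+```shell
+# Import a GitHub repository by using the GitLab import API (placeholder values).
+curl --request POST \
+  --header "PRIVATE-TOKEN: <your_gitlab_access_token>" \
+  --header "Content-Type: application/json" \
+  --data '{
+    "personal_access_token": "<your_github_access_token>",
+    "repo_id": 123456,
+    "target_namespace": "my-group",
+    "new_name": "imported-repository"
+  }' \
+  "https://gitlab.example.com/api/v4/import/github"
+```
+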
+</div>
+
+<div class="deprecation breaking-change" data-milestone="17.0">
+
### The GitLab legacy requirement IID is deprecated in favor of work item IID
<div class="deprecation-notes">
@@ -1142,6 +1273,29 @@ Previous work helped [align the vulnerabilities calls for pipeline security tabs
</div>
</div>
+<div class="milestone-wrapper" data-milestone="16.9">
+
+## GitLab 16.9
+
+<div class="deprecation " data-milestone="16.9">
+
+### Deprecation of `lfs_check` feature flag
+
+<div class="deprecation-notes">
+- Announced in GitLab <span class="milestone">16.6</span>
+- Removal in GitLab <span class="milestone">16.9</span>
+- To discuss this change or learn more, see the [deprecation issue](https://gitlab.com/gitlab-org/gitlab/-/issues/233550).
+</div>
+
+In GitLab 16.9, we will remove the `lfs_check` feature flag. This feature flag was [introduced 4 years ago](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/60588) and controls whether the LFS integrity check is enabled. The feature flag is enabled by default, but some customers experienced performance issues with the LFS integrity check and explicitly disabled it.
+
+After [dramatically improving the performance](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/61355) of the LFS integrity check, we are ready to remove the feature flag. After the flag is removed, the feature will automatically be turned on for any environment in which it is currently disabled.
+
+If this feature flag is disabled for your environment and you are concerned about performance issues, enable it and monitor the performance before the flag is removed in 16.9. If you see any performance issues after enabling it, let us know in [this feedback issue](https://gitlab.com/gitlab-org/gitlab/-/issues/233550).
+
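+A sketch of enabling the flag from the Rails runner on a Linux package installation (the same `Feature` interface used elsewhere in this documentation to disable flags):
+
+```shell
+# Enable the LFS integrity check feature flag so you can observe performance before GitLab 16.9.
+sudo gitlab-rails runner 'Feature.enable(:lfs_check)'
+```
+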
+</div>
+</div>
+
<div class="milestone-wrapper" data-milestone="16.8">
## GitLab 16.8
@@ -1206,6 +1360,33 @@ If you have [public or internal](https://docs.gitlab.com/ee/user/public_access.h
Enabling the `ldap_settings_unlock_groups_by_owners` feature flag allowed non-LDAP synced users to be added to a locked LDAP group. This [feature](https://gitlab.com/gitlab-org/gitlab/-/issues/1793) has always been disabled by default and behind a feature flag. We are removing this feature to keep continuity with our SAML integration, and because allowing non-synced group members defeats the "single source of truth" principle of using a directory service. Once this feature is removed, any LDAP group members that are not synced with LDAP will lose access to that group.
</div>
+
+<div class="deprecation breaking-change" data-milestone="16.5">
+
+### Geo: Housekeeping Rake tasks
+
+<div class="deprecation-notes">
+- Announced in GitLab <span class="milestone">16.3</span>
+- Removal in GitLab <span class="milestone">16.5</span> ([breaking change](https://docs.gitlab.com/ee/update/terminology.html#breaking-change))
+- To discuss this change or learn more, see the [deprecation issue](https://gitlab.com/gitlab-org/gitlab/-/issues/416384).
+</div>
+
+As part of the migration of the replication and verification to the
+[Geo self-service framework (SSF)](https://docs.gitlab.com/ee/development/geo/framework.html),
+the legacy replication for project repositories has been
+[removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/130565).
+As a result, the following Rake tasks that relied on legacy code have also been removed. The work invoked by these Rake tasks is now triggered automatically, either periodically or based on trigger events.
+
+| Rake task | Replacement |
+| --------- | ----------- |
+| `geo:git:housekeeping:full_repack` | [Moved to UI](https://docs.gitlab.com/ee/administration/housekeeping.html#heuristical-housekeeping). No equivalent Rake task in the SSF. |
+| `geo:git:housekeeping:gc` | Always executed for new repositories, and then when it's needed. No equivalent Rake task in the SSF. |
+| `geo:git:housekeeping:incremental_repack` | Executed when needed. No equivalent Rake task in the SSF. |
+| `geo:run_orphaned_project_registry_cleaner` | Executed regularly by a registry [consistency worker](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/workers/geo/secondary/registry_consistency_worker.rb) which removes orphaned registries. No equivalent Rake task in the SSF. |
+| `geo:verification:repository:reset` | Moved to UI. No equivalent Rake task in the SSF. |
+| `geo:verification:wiki:reset` | Moved to UI. No equivalent Rake task in the SSF. |
+
+</div>
</div>
<div class="milestone-wrapper" data-milestone="16.3">
diff --git a/doc/update/versions/gitlab_15_changes.md b/doc/update/versions/gitlab_15_changes.md
index 019b8929a45..bd5efef8f1b 100644
--- a/doc/update/versions/gitlab_15_changes.md
+++ b/doc/update/versions/gitlab_15_changes.md
@@ -136,6 +136,7 @@ if you can't upgrade to 15.11.12 and later.
- `pg_upgrade` fails to upgrade the bundled PostregSQL database to version 13. See
[the details and workaround](#pg_upgrade-fails-to-upgrade-the-bundled-postregsql-database-to-version-13).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
## 15.9.0
@@ -181,6 +182,7 @@ if you can't upgrade to 15.11.12 and later.
- `pg_upgrade` fails to upgrade the bundled PostregSQL database to version 13. See
[the details and workaround](#pg_upgrade-fails-to-upgrade-the-bundled-postregsql-database-to-version-13).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
## 15.8.2
@@ -212,6 +214,7 @@ if you can't upgrade to 15.11.12 and later.
- We discovered an issue where [replication and verification of projects and wikis was not keeping up](https://gitlab.com/gitlab-org/gitlab/-/issues/387980) on small number of Geo installations. Your installation may be affected if you see some projects and/or wikis persistently in the "Queued" state for verification. This can lead to data loss after a failover.
- Affected versions: GitLab versions 15.6.x, 15.7.x, and 15.8.0 - 15.8.2.
- Versions containing fix: GitLab 15.8.3 and later.
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
## 15.7.6
@@ -324,6 +327,7 @@ if you can't upgrade to 15.11.12 and later.
contents printed. For example, if they were printed in an echo output. For more information,
see [Understanding the file type variable expansion change in GitLab 15.7](https://about.gitlab.com/blog/2023/02/13/impact-of-the-file-type-variable-change-15-7/).
- Due to [a bug introduced in GitLab 15.4](https://gitlab.com/gitlab-org/gitlab/-/issues/390155), if one or more Git repositories in Gitaly Cluster is [unavailable](../../administration/gitaly/recovery.md#unavailable-repositories), then [Repository checks](../../administration/repository_checks.md#repository-checks) and [Geo replication and verification](../../administration/geo/index.md) stop running for all project or project wiki repositories in the affected Gitaly Cluster. The bug was fixed by [reverting the change in GitLab 15.9.0](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/110823). Before upgrading to this version, check if you have any "unavailable" repositories. See [the bug issue](https://gitlab.com/gitlab-org/gitlab/-/issues/390155) for more information.
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
### Geo installations **(PREMIUM SELF)**
@@ -441,6 +445,7 @@ potentially cause downtime.
- Affected versions: GitLab versions 15.6.x, 15.7.x, and 15.8.0 - 15.8.2.
- Versions containing fix: GitLab 15.8.3 and later.
- Due to [a bug introduced in GitLab 15.4](https://gitlab.com/gitlab-org/gitlab/-/issues/390155), if one or more Git repositories in Gitaly Cluster is [unavailable](../../administration/gitaly/recovery.md#unavailable-repositories), then [Repository checks](../../administration/repository_checks.md#repository-checks) and [Geo replication and verification](../../administration/geo/index.md) stop running for all project or project wiki repositories in the affected Gitaly Cluster. The bug was fixed by [reverting the change in GitLab 15.9.0](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/110823). Before upgrading to this version, check if you have any "unavailable" repositories. See [the bug issue](https://gitlab.com/gitlab-org/gitlab/-/issues/390155) for more information.
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
## 15.5.5
@@ -502,6 +507,7 @@ potentially cause downtime.
- `pg_upgrade` fails to upgrade the bundled PostregSQL database to version 13. See
[the details and workaround](#pg_upgrade-fails-to-upgrade-the-bundled-postregsql-database-to-version-13).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
## 15.4.6
@@ -576,6 +582,7 @@ potentially cause downtime.
- `pg_upgrade` fails to upgrade the bundled PostregSQL database to version 13. See
[the details and workaround](#pg_upgrade-fails-to-upgrade-the-bundled-postregsql-database-to-version-13).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
## 15.3.4
@@ -666,6 +673,7 @@ This issue is resolved in GitLab 15.3.3, so customers with the following configu
- LFS is enabled.
- LFS objects are being replicated across Geo sites.
- Repositories are being pulled by using a Geo secondary site.
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
#### Incorrect object storage LFS file deletion on secondary sites
@@ -722,6 +730,7 @@ A [license caching issue](https://gitlab.com/gitlab-org/gitlab/-/issues/376706)
[the details and workaround](#lfs-transfers-redirect-to-primary-from-secondary-site-mid-session).
- Incorrect object storage LFS files deletion on Geo secondary sites. See
[the details and workaround](#incorrect-object-storage-lfs-file-deletion-on-secondary-sites).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
## 15.1.0
@@ -760,6 +769,7 @@ A [license caching issue](https://gitlab.com/gitlab-org/gitlab/-/issues/376706)
[the details and workaround](#lfs-transfers-redirect-to-primary-from-secondary-site-mid-session).
- Incorrect object storage LFS files deletion on Geo secondary sites. See
[the details and workaround](#incorrect-object-storage-lfs-file-deletion-on-secondary-sites).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
## 15.0.0
diff --git a/doc/update/versions/gitlab_16_changes.md b/doc/update/versions/gitlab_16_changes.md
index 7c5dd8ae6ae..836f5d188c5 100644
--- a/doc/update/versions/gitlab_16_changes.md
+++ b/doc/update/versions/gitlab_16_changes.md
@@ -30,6 +30,52 @@ For more information about upgrading GitLab Helm Chart, see [the release notes f
- [Praefect configuration structure change](#praefect-configuration-structure-change).
- [Gitaly configuration structure change](#gitaly-configuration-structure-change).
+## 16.5.0
+
+- Git 2.42.0 and later is required by Gitaly. For self-compiled installations, you should use the [Git version provided by Gitaly](../../install/installation.md#git).
+
+### Geo installations
+
+Specific information applies to installations using Geo:
+
+- A number of Prometheus metrics were incorrectly removed in 16.3.0, which can break dashboards and alerting:
+
+ | Affected metric | Metric restored in 16.5.2 and later | Replacement available in 16.3+ |
+ | ---------------------------------------- | ------------------------------------ | ---------------------------------------------- |
+ | `geo_repositories_synced` | Yes | `geo_project_repositories_synced` |
+ | `geo_repositories_failed` | Yes | `geo_project_repositories_failed` |
+ | `geo_repositories_checksummed` | Yes | `geo_project_repositories_checksummed` |
+ | `geo_repositories_checksum_failed` | Yes | `geo_project_repositories_checksum_failed` |
+ | `geo_repositories_verified` | Yes | `geo_project_repositories_verified` |
+ | `geo_repositories_verification_failed` | Yes | `geo_project_repositories_verification_failed` |
+ | `geo_repositories_checksum_mismatch` | No | None available |
+ | `geo_repositories_retrying_verification` | No | None available |
+
+ - Impacted versions:
+ - 16.3.0 to 16.5.1
+ - Versions containing fix:
+ - 16.5.2 and later
+
+ For more information, see [issue 429617](https://gitlab.com/gitlab-org/gitlab/-/issues/429617).
+
+- [Object storage verification](https://about.gitlab.com/releases/2023/09/22/gitlab-16-4-released/#geo-verifies-object-storage) was added in GitLab 16.4. Due to an [issue](https://gitlab.com/gitlab-org/gitlab/-/issues/429242), some Geo installations report high memory usage, which can lead to the GitLab application on the primary site becoming unresponsive.
+
+  Your installation may be impacted if you have configured it to use [object storage](../../administration/object_storage.md) and have enabled [GitLab-managed object storage replication](../../administration/geo/replication/object_storage.md#enabling-gitlab-managed-object-storage-replication).
+
+ Until this is fixed, the workaround is to disable object storage verification.
+ Run the following command on one of the Rails nodes on the primary site:
+
+ ```shell
+ sudo gitlab-rails runner 'Feature.disable(:geo_object_storage_verification)'
+ ```
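+
+  If a later release contains a fix and you upgrade to it, a possible way to re-enable verification is to turn the same feature flag back on (a sketch, assuming the flag name is unchanged). Run the command on one of the Rails nodes on the primary site:
+
+  ```shell
+  # Assumes the :geo_object_storage_verification flag name is unchanged in the fixed release.
+  sudo gitlab-rails runner 'Feature.enable(:geo_object_storage_verification)'
+  ```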
+
+ **Affected releases**:
+
+ | Affected minor releases | Affected patch releases | Fixed in |
+ | ------ | ------ | ------ |
+ | 16.4 | All | None |
+ | 16.5 | All | None |
+
## 16.4.0
- Updating a group path [received a bug fix](https://gitlab.com/gitlab-org/gitlab/-/issues/419289) that uses a database index introduced in 16.3.
@@ -71,9 +117,33 @@ For more information about upgrading GitLab Helm Chart, see [the release notes f
SELECT id FROM push_rules WHERE LENGTH(delete_branch_regex) > 511;
```
+ To find out if a push rule belongs to a project, group, or instance, run this script
+ in the [Rails console](../../administration/operations/rails_console.md#starting-a-rails-console-session):
+
+ ```ruby
+  # Replace `delete_branch_regex` with the name of the field used in the constraint.
+  long_rules = PushRule.where("length(delete_branch_regex) > 511")
+
+  array = long_rules.map do |lr|
+    if lr.project
+      "Push rule with ID #{lr.id} is configured in project #{lr.project.full_name}"
+    elsif lr.group
+      "Push rule with ID #{lr.id} is configured in group #{lr.group.full_name}"
+    else
+      "Push rule with ID #{lr.id} is configured at the instance level"
+    end
+  end
+
+  puts "Total long rules: #{array.count}"
+  puts array.join("\n")
+ ```
+
Reduce the value length of the regex field for affected push rules records, then
retry the migration.
+  If you have a large number of affected push rules and can't update them through the GitLab UI,
+  contact [GitLab support](https://about.gitlab.com/support/).
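+
+  For example, a minimal Rails console sketch for shortening one affected rule. The ID and replacement pattern below are hypothetical; review each regex manually so that its behavior is preserved:
+
+  ```ruby
+  # Hypothetical ID and pattern: adjust both for your instance.
+  rule = PushRule.find(123)
+  puts rule.delete_branch_regex.length
+  rule.update!(delete_branch_regex: 'shorter-pattern-under-511-characters')
+  ```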
+
### Self-compiled installations
- A new method of configuring paths for the GitLab secret and custom hooks is preferred in GitLab 16.4 and later:
@@ -82,6 +152,57 @@ For more information about upgrading GitLab Helm Chart, see [the release notes f
server-side custom hooks.
1. Remove the `[gitlab-shell] dir` configuration.
+### Geo installations
+
+Specific information applies to installations using Geo:
+
+- A number of Prometheus metrics were incorrectly removed in 16.3.0, which can break dashboards and alerting:
+
+ | Affected metric | Metric restored in 16.5.2 and later | Replacement available in 16.3+ |
+ | ---------------------------------------- | ------------------------------------ | ---------------------------------------------- |
+ | `geo_repositories_synced` | Yes | `geo_project_repositories_synced` |
+ | `geo_repositories_failed` | Yes | `geo_project_repositories_failed` |
+ | `geo_repositories_checksummed` | Yes | `geo_project_repositories_checksummed` |
+ | `geo_repositories_checksum_failed` | Yes | `geo_project_repositories_checksum_failed` |
+ | `geo_repositories_verified` | Yes | `geo_project_repositories_verified` |
+ | `geo_repositories_verification_failed` | Yes | `geo_project_repositories_verification_failed` |
+ | `geo_repositories_checksum_mismatch` | No | None available |
+ | `geo_repositories_retrying_verification` | No | None available |
+
+ - Impacted versions:
+ - 16.3.0 to 16.5.1
+ - Versions containing fix:
+ - 16.5.2 and later
+
+ For more information, see [issue 429617](https://gitlab.com/gitlab-org/gitlab/-/issues/429617).
+
+- [Object storage verification](https://about.gitlab.com/releases/2023/09/22/gitlab-16-4-released/#geo-verifies-object-storage) was added in GitLab 16.4. Due to an [issue](https://gitlab.com/gitlab-org/gitlab/-/issues/429242), some Geo installations report high memory usage, which can lead to the GitLab application on the primary site becoming unresponsive.
+
+  Your installation may be impacted if you have configured it to use [object storage](../../administration/object_storage.md) and have enabled [GitLab-managed object storage replication](../../administration/geo/replication/object_storage.md#enabling-gitlab-managed-object-storage-replication).
+
+ Until this is fixed, the workaround is to disable object storage verification.
+ Run the following command on one of the Rails nodes on the primary site:
+
+ ```shell
+ sudo gitlab-rails runner 'Feature.disable(:geo_object_storage_verification)'
+ ```
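+
+  If a later release contains a fix and you upgrade to it, a possible way to re-enable verification is to turn the same feature flag back on (a sketch, assuming the flag name is unchanged). Run the command on one of the Rails nodes on the primary site:
+
+  ```shell
+  # Assumes the :geo_object_storage_verification flag name is unchanged in the fixed release.
+  sudo gitlab-rails runner 'Feature.enable(:geo_object_storage_verification)'
+  ```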
+
+ **Affected releases**:
+
+ | Affected minor releases | Affected patch releases | Fixed in |
+ | ------ | ------ | ------ |
+ | 16.4 | All | None |
+ | 16.5 | All | None |
+
+- An [issue](https://gitlab.com/gitlab-org/gitlab/-/issues/419370) with sync states getting stuck in the pending state results in replication being stuck indefinitely for impacted items, leading to a risk of data loss in the event of a failover. This mostly impacts repository syncs but can also affect container registry syncs. You are advised to upgrade to a fixed version to avoid the risk of data loss.
+
+ **Affected releases**:
+
+ | Affected minor releases | Affected patch releases | Fixed in |
+ | ------ | ------ | ------ |
+ | 16.3 | 16.3.0 - 16.3.5 | 16.3.6 |
+ | 16.4 | 16.4.0 - 16.4.1 | 16.4.2 |
+
## 16.3.0
- **Update to GitLab 16.3.5 or later**. This avoids [issue 425971](https://gitlab.com/gitlab-org/gitlab/-/issues/425971) that causes an excessive use of database disk space for GitLab 16.3.3 and 16.3.4.
@@ -149,6 +270,35 @@ Specific information applies to installations using Geo:
For more information, see [issue 425224](https://gitlab.com/gitlab-org/gitlab/-/issues/425224).
+- A number of Prometheus metrics were incorrectly removed in 16.3.0, which can break dashboards and alerting:
+
+ | Affected metric | Metric restored in 16.5.2 and later | Replacement available in 16.3+ |
+ | ---------------------------------------- | ------------------------------------ | ---------------------------------------------- |
+ | `geo_repositories_synced` | Yes | `geo_project_repositories_synced` |
+ | `geo_repositories_failed` | Yes | `geo_project_repositories_failed` |
+ | `geo_repositories_checksummed` | Yes | `geo_project_repositories_checksummed` |
+ | `geo_repositories_checksum_failed` | Yes | `geo_project_repositories_checksum_failed` |
+ | `geo_repositories_verified` | Yes | `geo_project_repositories_verified` |
+ | `geo_repositories_verification_failed` | Yes | `geo_project_repositories_verification_failed` |
+ | `geo_repositories_checksum_mismatch` | No | None available |
+ | `geo_repositories_retrying_verification` | No | None available |
+
+ - Impacted versions:
+ - 16.3.0 to 16.5.1
+ - Versions containing fix:
+ - 16.5.2 and later
+
+ For more information, see [issue 429617](https://gitlab.com/gitlab-org/gitlab/-/issues/429617).
+
+- An [issue](https://gitlab.com/gitlab-org/gitlab/-/issues/419370) with sync states getting stuck in the pending state results in replication being stuck indefinitely for impacted items, leading to a risk of data loss in the event of a failover. This mostly impacts repository syncs but can also affect container registry syncs. You are advised to upgrade to a fixed version to avoid the risk of data loss.
+
+ **Affected releases**:
+
+ | Affected minor releases | Affected patch releases | Fixed in |
+ | ------ | ------ | ------ |
+ | 16.3 | 16.3.0 - 16.3.5 | 16.3.6 |
+ | 16.4 | 16.4.0 - 16.4.1 | 16.4.2 |
+
## 16.2.0
- Legacy LDAP configuration settings may cause
@@ -227,6 +377,24 @@ Specific information applies to installations using Geo:
Affected artifacts are automatically resynced upon upgrade to 16.1.5, 16.2.5, 16.3.1, 16.4.0, or later.
You can [manually resync affected job artifacts](https://gitlab.com/gitlab-org/gitlab/-/issues/419742#to-fix-data) if needed.
+#### Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced
+
+A [bug](https://gitlab.com/gitlab-org/gitlab/-/issues/410413) in the Geo proxying logic for LFS objects means that all LFS clone requests against a secondary site are proxied to the primary site, even if the secondary site is up to date. This can result in increased load on the primary site and longer access times for LFS objects for users cloning from the secondary site.
+
+In GitLab 15.1, proxying was enabled by default.
+
+You are not impacted if any of the following apply:
+
+- Your installation is not configured to use LFS objects.
+- You do not use Geo to accelerate remote users.
+- You use Geo to accelerate remote users but have disabled proxying.
+
+| Affected minor releases | Affected patch releases | Fixed in |
+|-------------------------|-------------------------|----------|
+| 15.1 - 16.2 | All | 16.3 and later |
+
+A possible workaround is to [disable proxying](../../administration/geo/secondary_proxy/index.md#disable-geo-proxying). With proxying disabled, the secondary site fails to serve LFS files that have not been replicated at the time of cloning.
+
## 16.1.0
- A `BackfillPreparedAtMergeRequests` background migration is finalized with
@@ -273,6 +441,7 @@ Specific information applies to installations using Geo:
- While running an affected version, artifacts which appeared to become synced may actually be missing on the secondary site.
Affected artifacts are automatically resynced upon upgrade to 16.1.5, 16.2.5, 16.3.1, 16.4.0, or later.
You can [manually resync affected job artifacts](https://gitlab.com/gitlab-org/gitlab/-/issues/419742#to-fix-data) if needed.
+ - Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
#### Wiki repositories not initialized on project creation
@@ -302,6 +471,7 @@ by this issue.
[throw errors on startup](../../install/docker.md#threaderror-cant-create-thread-operation-not-permitted).
- Starting with 16.0, GitLab self-managed installations now have two database connections by default, instead of one. This change doubles the number of PostgreSQL connections. It makes self-managed versions of GitLab behave similarly to GitLab.com, and is a step toward enabling a separate database for CI features for self-managed versions of GitLab. Before upgrading to 16.0, determine if you need to [increase max connections for PostgreSQL](https://docs.gitlab.com/omnibus/settings/database.html#configuring-multiple-database-connections).
- This change applies to installation methods with Linux packages (Omnibus), GitLab Helm chart, GitLab Operator, GitLab Docker images, and self-compiled installations.
+- The container registry might appear empty, with zero tags, when using Azure storage. You can fix this by following the [breaking change instructions](../deprecations.md#azure-storage-driver-defaults-to-the-correct-root-prefix).
### Linux package installations
@@ -334,6 +504,7 @@ Specific information applies to installations using Geo:
- Some project imports do not initialize wiki repositories on project creation. See
[the details and workaround](#wiki-repositories-not-initialized-on-project-creation).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
### Gitaly configuration structure change
diff --git a/doc/user/ai_features.md b/doc/user/ai_features.md
index e24d50efee1..222752a4561 100644
--- a/doc/user/ai_features.md
+++ b/doc/user/ai_features.md
@@ -7,43 +7,37 @@ type: index, reference
# GitLab Duo
+> - [First GitLab Duo features introduced](https://about.gitlab.com/blog/2023/05/03/gitlab-ai-assisted-features/) in GitLab 16.0.
+> - [Removed third-party AI setting](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/136144) in GitLab 16.6.
+> - [Removed support for OpenAI from all GitLab Duo features](https://gitlab.com/groups/gitlab-org/-/epics/10964) in GitLab 16.6.
+
GitLab is creating AI-assisted features across our DevSecOps platform. These features aim to help increase velocity and solve key pain points across the software development lifecycle.
| Feature | Purpose | Large Language Model | Current availability | Maturity |
|-|-|-|-|-|
| [Suggested Reviewers](project/merge_requests/reviews/index.md#gitlab-duo-suggested-reviewers) | Assists in creating faster and higher-quality reviews by automatically suggesting reviewers for your merge request. | GitLab creates a machine learning model for each project, which is used to generate reviewers <br><br> [View the issue](https://gitlab.com/gitlab-org/modelops/applied-ml/applied-ml-updates/-/issues/10) | SaaS only <br><br> Ultimate tier | [Generally Available (GA)](../policy/experiment-beta-support.md#generally-available-ga) |
-| [Code Suggestions](project/repository/code_suggestions/index.md) | Helps you write code more efficiently by viewing code suggestions as you type. | [`code-gecko`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/code-completion) and [`code-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/code-generation) <br><br> [Anthropic's Claude](https://www.anthropic.com/product) model | SaaS <br> Self-managed <br><br> All tiers | [Beta](../policy/experiment-beta-support.md#beta) |
-| [Vulnerability summary](application_security/vulnerabilities/index.md#explaining-a-vulnerability) | Helps you remediate vulnerabilities more efficiently, uplevel your skills, and write more secure code. | [`text-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text) <br><br> Anthropic's claude model if degraded performance | SaaS only <br><br> Ultimate tier | [Beta](../policy/experiment-beta-support.md#beta) |
-| [Code explanation](#explain-code-in-the-web-ui-with-code-explanation) | Helps you understand code by explaining it in English language. | [`codechat-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/code-chat) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
-| [GitLab Duo Chat](#answer-questions-with-gitlab-duo-chat) | Process and generate text and code in a conversational manner. Helps you quickly identify useful information in large volumes of text in issues, epics, code, and GitLab documentation. | Anthropic's claude model <br><br> OpenAI Embeddings | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
+| [Code Suggestions](project/repository/code_suggestions/index.md) | Helps you write code more efficiently by viewing code suggestions as you type. | For Code Completion: Vertex AI Codey [`code-gecko`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/code-completion) <br><br> For Code Generation: Anthropic [`Claude-2`](https://docs.anthropic.com/claude/reference/selecting-a-model)| [SaaS: All tiers](project/repository/code_suggestions/saas.md) <br><br> [Self-managed: Premium and Ultimate with Cloud Licensing](project/repository/code_suggestions/self_managed.md) | [Beta](../policy/experiment-beta-support.md#beta) |
+| [Vulnerability summary](application_security/vulnerabilities/index.md#explaining-a-vulnerability) | Helps you remediate vulnerabilities more efficiently, boost your skills, and write more secure code. | Vertex AI Codey [`text-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text) <br><br> Anthropic [`Claude-2`](https://docs.anthropic.com/claude/reference/selecting-a-model) if degraded performance | SaaS only <br><br> Ultimate tier | [Beta](../policy/experiment-beta-support.md#beta) |
+| [Code explanation](#explain-code-in-the-web-ui-with-code-explanation) | Helps you understand code by explaining it in English language. | Vertex AI Codey [`codechat-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/code-chat) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
+| [GitLab Duo Chat](gitlab_duo_chat.md) | Process and generate text and code in a conversational manner. Helps you quickly identify useful information in large volumes of text in issues, epics, code, and GitLab documentation. | Anthropic [`Claude-2`](https://docs.anthropic.com/claude/reference/selecting-a-model) <br><br> Vertex AI Codey [`textembedding-gecko`](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
| [Value stream forecasting](#forecast-deployment-frequency-with-value-stream-forecasting) | Assists you with predicting productivity metrics and identifying anomalies across your software development lifecycle. | Statistical forecasting | SaaS only <br> Self-managed <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
-| [Discussion summary](#summarize-issue-discussions-with-discussion-summary) | Assists with quickly getting everyone up to speed on lengthy conversations to help ensure you are all on the same page. | OpenAI's GPT-3 | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
-| [Merge request summary](project/merge_requests/ai_in_merge_requests.md#summarize-merge-request-changes) | Efficiently communicate the impact of your merge request changes. | [`text-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
-| [Code review summary](project/merge_requests/ai_in_merge_requests.md#summarize-my-merge-request-review) | Helps ease merge request handoff between authors and reviewers and help reviewers efficiently understand suggestions. | [`text-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
-| [Merge request template population](project/merge_requests/ai_in_merge_requests.md#fill-in-merge-request-templates) | Generate a description for the merge request based on the contents of the template. | [`text-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
-| [Test generation](project/merge_requests/ai_in_merge_requests.md#generate-suggested-tests-in-merge-requests) | Automates repetitive tasks and helps catch bugs early. | [`text-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
-| [Git suggestions](https://gitlab.com/gitlab-org/gitlab/-/issues/409636) | Helps you discover or recall Git commands when and where you need them. | [Google Vertex Codey APIs](https://cloud.google.com/vertex-ai/docs/generative-ai/code/code-models-overview) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
-| [Root cause analysis](#root-cause-analysis) | Assists you in determining the root cause for a pipeline failure and failed CI/CD build. | [Google Vertex Codey APIs](https://cloud.google.com/vertex-ai/docs/generative-ai/code/code-models-overview) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
-| [Issue description generation](#summarize-an-issue-with-issue-description-generation) | Generate issue descriptions. | OpenAI's GPT-3 | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
+| [Discussion summary](#summarize-issue-discussions-with-discussion-summary) | Assists with quickly getting everyone up to speed on lengthy conversations to help ensure you are all on the same page. | Vertex AI Codey [`text-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
+| [Merge request summary](project/merge_requests/ai_in_merge_requests.md#summarize-merge-request-changes) | Efficiently communicate the impact of your merge request changes. | Vertex AI Codey [`text-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
+| [Code review summary](project/merge_requests/ai_in_merge_requests.md#summarize-my-merge-request-review) | Helps ease merge request handoff between authors and reviewers and help reviewers efficiently understand suggestions. | Vertex AI Codey [`text-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
+| [Merge request template population](project/merge_requests/ai_in_merge_requests.md#fill-in-merge-request-templates) | Generate a description for the merge request based on the contents of the template. | Vertex AI Codey [`text-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
+| [Test generation](project/merge_requests/ai_in_merge_requests.md#generate-suggested-tests-in-merge-requests) | Automates repetitive tasks and helps catch bugs early. | Vertex AI Codey [`text-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
+| [Git suggestions](https://gitlab.com/gitlab-org/gitlab/-/issues/409636) | Helps you discover or recall Git commands when and where you need them. | Vertex AI Codey [`codechat-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/code-chat) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
+| [Root cause analysis](#root-cause-analysis) | Assists you in determining the root cause for a pipeline failure and failed CI/CD build. | Vertex AI Codey [`text-bison`](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
+| [Issue description generation](#summarize-an-issue-with-issue-description-generation) | Generate issue descriptions. | Anthropic [`Claude-2`](https://docs.anthropic.com/claude/reference/selecting-a-model) | SaaS only <br><br> Ultimate tier | [Experiment](../policy/experiment-beta-support.md#experiment) |
## Enable AI/ML features
-- Third-party AI features
- - All features built on large language models (LLM) from Google,
- Anthropic or OpenAI (besides Code Suggestions) require that this setting is
- enabled at the group level.
- - [Generally Available](../policy/experiment-beta-support.md#generally-available-ga)
- features are available when third-party AI features are enabled.
- - Third-party AI features are enabled by default.
- - This setting is available to Ultimate groups on SaaS and can be
- set by a user who has the Owner role in the group.
- - View [how to enable this setting](group/manage.md#enable-third-party-ai-features).
- Experiment and Beta features
- All features categorized as
[Experiment features](../policy/experiment-beta-support.md#experiment) or
[Beta features](../policy/experiment-beta-support.md#beta)
(besides Code Suggestions) require that this setting is enabled at the group
- level. This is in addition to the Third-party AI features setting.
+ level.
- Their usage is subject to the
[Testing Terms of Use](https://about.gitlab.com/handbook/legal/testing-agreement/).
- Experiment and Beta features are disabled by default.
@@ -65,7 +59,6 @@ The following subsections describe the experimental AI features in more detail.
To use this feature:
- The parent group of the project must:
- - Enable the [third-party AI features setting](group/manage.md#enable-third-party-ai-features).
- Enable the [experiment and beta features setting](group/manage.md#enable-experiment-and-beta-features).
- You must be a member of the project with sufficient permissions to view the repository.
@@ -104,52 +97,6 @@ code in a merge request:
We cannot guarantee that the large language model produces results that are correct. Use the explanation with caution.
-### Answer questions with GitLab Duo Chat **(ULTIMATE SAAS EXPERIMENT)**
-
-> Introduced in GitLab 16.0 as an [Experiment](../policy/experiment-beta-support.md#experiment).
-
-To use this feature, at least one group you're a member of must:
-
-- Have the [third-party AI features setting](group/manage.md#enable-third-party-ai-features) enabled.
-- Have the [experiment and beta features setting](group/manage.md#enable-experiment-and-beta-features) enabled.
-
-You can get AI generated support from GitLab Duo Chat about the following topics:
-
-- How to use GitLab.
-- Questions about an issue.
-- Summarizing an issue.
-
-Example questions you might ask:
-
-- `What is a fork?`
-- `How to reset my password`
-- `Summarize the issue <link to your issue>`
-- `Summarize the description of the current issue`
-
-The examples above all use data from either the issue or the GitLab documentation. However, you can also ask to generate code, CI/CD configurations, or to explain code. For example:
-
-- `Write a hello world function in Ruby`
-- `Write a tic tac toe game in JavaScript`
-- `Write a .gitlab-ci.yml file to test and build a rails application`
-- `Explain the following code: def sum(a, b) a + b end`
-
-You can also ask follow-up questions.
-
-This is an experimental feature and we're continuously extending the capabilities and reliability of the chat.
-
-1. In the lower-left corner, select the Help icon.
- The [new left sidebar must be enabled](../tutorials/left_sidebar/index.md#enable-the-new-left-sidebar).
-1. Select **Ask in GitLab Duo Chat**. A drawer opens on the right side of your screen.
-1. Enter your question in the chat input box and press **Enter** or select **Send**. It may take a few seconds for the interactive AI chat to produce an answer.
-1. You can ask a follow-up question.
-1. If you want to ask a new question unrelated to the previous conversation, you may receive better answers if you clear the context by typing `/reset` into the input box and selecting **Send**.
-
-To give feedback about a specific response, use the feedback buttons in the response message.
-Or, you can add a comment in the [feedback issue](https://gitlab.com/gitlab-org/gitlab/-/issues/415591).
-
-NOTE:
-Only the last 50 messages are retained in the chat history. The chat history expires 3 days after last use.
-
### Summarize issue discussions with Discussion summary **(ULTIMATE SAAS EXPERIMENT)**
> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/10344) in GitLab 16.0 as an [Experiment](../policy/experiment-beta-support.md#experiment).
@@ -157,7 +104,6 @@ Only the last 50 messages are retained in the chat history. The chat history exp
To use this feature:
- The parent group of the issue must:
- - Enable the [third-party AI features setting](group/manage.md#enable-third-party-ai-features).
- Enable the [experiment and beta features setting](group/manage.md#enable-experiment-and-beta-features).
- You must be a member of the project with sufficient permissions to view the issue.
@@ -181,7 +127,6 @@ language model referenced above.
To use this feature:
- The parent group of the project must:
- - Enable the [third-party AI features setting](group/manage.md#enable-third-party-ai-features).
- Enable the [experiment and beta features setting](group/manage.md#enable-experiment-and-beta-features).
- You must be a member of the project with sufficient permissions to view the CI/CD analytics.
@@ -207,7 +152,6 @@ Provide feedback on this experimental feature in [issue 416833](https://gitlab.c
To use this feature:
- The parent group of the project must:
- - Enable the [third-party AI features setting](group/manage.md#enable-third-party-ai-features).
- Enable the [experiment and beta features setting](group/manage.md#enable-experiment-and-beta-features).
- You must be a member of the project with sufficient permissions to view the CI/CD job.
@@ -222,7 +166,6 @@ reason for the failure.
To use this feature:
- The parent group of the project must:
- - Enable the [third-party AI features setting](group/manage.md#enable-third-party-ai-features).
- Enable the [experiment and beta features setting](group/manage.md#enable-experiment-and-beta-features).
- You must be a member of the project with sufficient permissions to view the issue.
@@ -239,9 +182,13 @@ Provide feedback on this experimental feature in [issue 409844](https://gitlab.c
**Data usage**: When you use this feature, the text you enter is sent to the large
language model referenced above.
+### GitLab Duo Chat **(ULTIMATE SAAS EXPERIMENT)**
+
+For details about this Experimental feature, see [GitLab Duo Chat](gitlab_duo_chat.md).
+
## Data usage
-GitLab AI features leverage generative AI to help increase velocity and aim to help make you more productive. Each feature operates independently of other features and is not required for other features to function.
+GitLab AI features leverage generative AI to help increase velocity and aim to make you more productive. Each feature operates independently of other features and is not required for other features to function. GitLab selects best-in-class large language models for specific tasks. We use [Google Vertex AI Models](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/overview#genai-models) and [Anthropic Claude](https://www.anthropic.com/product).
### Progressive enhancement
@@ -251,13 +198,38 @@ These features are designed as a progressive enhancement to existing GitLab feat
These features are in a variety of [feature support levels](../policy/experiment-beta-support.md#beta). Due to the nature of these features, there may be high demand for usage which may cause degraded performance or unexpected downtime of the feature. We have built these features to gracefully degrade and have controls in place to allow us to mitigate abuse or misuse. GitLab may disable **beta and experimental** features for any or all customers at any time at our discretion.
-## Third party services
-
### Data privacy
-Some AI features require the use of third-party AI services models and APIs from: Google AI and OpenAI. The processing of any personal data is in accordance with our [Privacy Statement](https://about.gitlab.com/privacy/). You may also visit the [Sub-Processors page](https://about.gitlab.com/privacy/subprocessors/#third-party-sub-processors) to see the list of our Sub-Processors that we use to provide these features.
+GitLab Duo AI features are powered by generative AI models. The processing of any personal data is in accordance with our [Privacy Statement](https://about.gitlab.com/privacy/). You may also visit the [Sub-Processors page](https://about.gitlab.com/privacy/subprocessors/#third-party-sub-processors) to see the list of our Sub-Processors that we use to provide these features.
+
+### Data retention
+
+The following reflects the current retention periods of the GitLab AI model [Sub-Processors](https://about.gitlab.com/privacy/subprocessors/#third-party-sub-processors):
+
+- Anthropic retains input and output data for 30 days.
+- Google discards input and output data immediately after the output is provided. Google currently does not store data for abuse monitoring.
+
+All of these AI providers are under data protection agreements with GitLab that prohibit the use of Customer Content for their own purposes, except to perform their independent legal obligations.
+
+### Telemetry
+
+GitLab Duo collects aggregated or de-identified first-party usage data through our [Snowplow collector](https://about.gitlab.com/handbook/business-technology/data-team/platform/snowplow/). This usage data includes the following metrics:
+
+- Number of unique users
+- Number of unique instances
+- Prompt lengths
+- Model used
+- Status code responses
+- API response times
+
+### Training data
+
+GitLab does not train generative AI models based on private (non-public) data. The vendors we work with also do not train models based on private data.
+
+For more information on our AI [sub-processors](https://about.gitlab.com/privacy/subprocessors/#third-party-sub-processors), see:
-Group owners can control which top-level groups have access to third-party AI features by using the [group level third-party AI features setting](group/manage.md#enable-third-party-ai-features).
+- Google Vertex AI Models APIs [data governance](https://cloud.google.com/vertex-ai/docs/generative-ai/data-governance) and [responsible AI](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/responsible-ai).
+- Anthropic Claude's [constitution](https://www.anthropic.com/index/claudes-constitution).
### Model accuracy and quality
diff --git a/doc/user/analytics/analytics_dashboards.md b/doc/user/analytics/analytics_dashboards.md
index 448a46fdc26..8bed8018eb8 100644
--- a/doc/user/analytics/analytics_dashboards.md
+++ b/doc/user/analytics/analytics_dashboards.md
@@ -39,6 +39,12 @@ When [product analytics](../product_analytics/index.md) is enabled and onboarded
- **Audience** displays metrics related to traffic, such as the number of users and sessions.
- **Behavior** displays metrics related to user activity, such as the number of page views and events.
+For more information about the development of product analytics, see the [group direction page](https://about.gitlab.com/direction/analytics/product-analytics/). To leave feedback about bugs or functionality:
+
+- Comment on issue [391970](https://gitlab.com/gitlab-org/gitlab/-/issues/391970).
+- Create an issue with the `group::product analytics` label.
+- [Schedule a call](https://calendly.com/jheimbuck/30-minute-call) with the team.
+
### Value Stream Management
- **Value Streams Dashboard** displays metrics related to [DevOps performance, security exposure, and workstream optimization](../analytics/value_streams_dashboard.md#devsecops-metrics-comparison-panel).
diff --git a/doc/user/analytics/dora_metrics.md b/doc/user/analytics/dora_metrics.md
index 391a1c7965f..e90bfd690ca 100644
--- a/doc/user/analytics/dora_metrics.md
+++ b/doc/user/analytics/dora_metrics.md
@@ -65,9 +65,14 @@ For software leaders, Lead time for changes reflects the efficiency of CI/CD pip
Over time, the lead time for changes should decrease, while your team's performance should increase. Low lead time for changes means more efficient CI/CD pipelines.
In GitLab, Lead time for changes is measured by the `Median time it takes for a merge request to get merged into production (from master)`.
+By default, Lead time for changes measures only one-branch operations with multiple deployment jobs (for example, jobs moving from development to staging to production on the main branch).
+When a merge request gets merged to staging and then merged to production, GitLab processes it as two deployed merge requests, not one.
+
### How lead time for changes is calculated
-GitLab calculates Lead time for changes base on the number of seconds to successfully deliver a commit into production - **from** code committed **to** code successfully running in production, without adding the `coding_time` to the calculation.
+GitLab calculates Lead time for changes based on the number of seconds to successfully deliver a commit into production - **from** code committed **to** code successfully running in production, without adding the `coding_time` to the calculation.
+
+By default, Lead time for changes supports measuring only one-branch operations with multiple deployment jobs (for example, from development to staging to production on the default branch). When a merge request gets merged to staging and then to production, GitLab interprets it as two deployed merge requests, not one.
### How to improve lead time for changes
@@ -127,41 +132,37 @@ To improve this metric, you should consider:
- Improving the efficacy of code review processes.
- Adding more automated testing.
-## DORA metrics in GitLab
+## DORA custom calculation rules **(ULTIMATE ALL EXPERIMENT)**
-The DORA metrics are displayed on the following charts:
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/96561) in GitLab 15.4 [with a flag](../../administration/feature_flags.md) named `dora_configuration`. Disabled by default. This feature is an [Experiment](../../policy/experiment-beta-support.md).
-- [Value Streams Dashboard](value_streams_dashboard.md), which helps you identify trends, patterns, and opportunities for improvement. DORA metrics are displayed in the [metrics comparison panel](value_streams_dashboard.md#devsecops-metrics-comparison-panel) and the [DORA Performers score panel](value_streams_dashboard.md#dora-performers-score-panel).
-- [CI/CD analytics charts](ci_cd_analytics.md), which show pipeline success rates and duration, and the history of DORA metrics over time.
-- Insights reports for [groups](../group/insights/index.md) and [projects](../group/value_stream_analytics/index.md), where you can also use [DORA query parameters](../../user/project/insights/index.md#dora-query-parameters) to create custom charts.
+FLAG:
+On self-managed GitLab, by default this feature is not available. To make it available per project or for your entire instance, an administrator can [enable the feature flag](../../administration/feature_flags.md) named `dora_configuration`.
+On GitLab.com, this feature is not available.
-The table below provides an overview of the DORA metrics' data aggregation in different charts.
+This feature is an [Experiment](../../policy/experiment-beta-support.md).
+To join the list of users testing this feature, follow the [suggested test flow](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/96561#steps-to-check-on-localhost).
+If you find a bug, [open an issue here](https://gitlab.com/groups/gitlab-org/-/epics/11490).
+To share your use cases and feedback, comment in [epic 11490](https://gitlab.com/groups/gitlab-org/-/epics/11490).
-| Metric name | Measured values | Data aggregation in the [Value Streams Dashboard](value_streams_dashboard.md) | Data aggregation in [CI/CD analytics charts](ci_cd_analytics.md) | Data aggregation in [Custom insights reporting](../../user/project/insights/index.md#dora-query-parameters) |
-|---------------------------|-------------------|-----------------------------------------------------|------------------------|----------|
-| Deployment frequency | Number of successful deployments | daily average per month | daily average | `day` (default) or `month` |
-| Lead time for changes | Number of seconds to successfully deliver a commit into production | daily median per month | median time | `day` (default) or `month` |
-| Time to restore service | Number of seconds an incident was open for | daily median per month | daily median | `day` (default) or `month` |
-| Change failure rate | percentage of deployments that cause an incident in production | daily median per month | percentage of failed deployments | `day` (default) or `month` |
+### DORA Lead Time For Changes - multi-branch rule
-## Configure DORA metrics calculation **(ULTIMATE ALL BETA)**
+Unlike the default [calculation of Lead time for changes](#how-lead-time-for-changes-is-calculated), this calculation rule allows measuring multi-branch operations with a single deployment job for each operation.
+For example, from a development job on the development branch, to a staging job on the staging branch, to a production job on the production branch.
-> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/96561) in GitLab 15.4 [with a flag](../../administration/feature_flags.md) named `dora_configuration`. Disabled by default. This feature is in [Beta](../../policy/experiment-beta-support.md).
+This calculation rule has been implemented by updating the `dora_configurations` table with the target branches that are part of the development flow.
+This way, GitLab can recognize the branches as one and filter out other merge requests.
-FLAG:
-On self-managed GitLab, by default this feature is not available. To make it available per project or for your entire instance, an administrator can [enable the feature flag](../../administration/feature_flags.md) named `dora_configuration`.
-On GitLab.com, this feature is not available.
-This feature is not ready for production use.
+This configuration changes how daily DORA metrics are calculated for the selected project, but doesn't affect other projects, groups, or users.
+
+This feature supports only project-level propagation.
-You can configure the behavior of DORA metrics calculations.
To do this, in the Rails console run the following command:
```ruby
Dora::Configuration.create!(project: my_project, ltfc_target_branches: ['master', 'main'])
```
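+
+A minimal sketch for checking whether a configuration already exists for the project, assuming a standard ActiveRecord lookup and that `my_project` is the same `Project` record as above:
+
+```ruby
+# Returns the existing configuration, or nil if none has been created yet.
+Dora::Configuration.find_by(project: my_project)
+```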
-This feature is in [Beta](../../policy/experiment-beta-support.md).
-
## Retrieve DORA metrics data
To retrieve DORA data, use the [GraphQL](../../api/graphql/reference/index.md) or the [REST](../../api/dora/metrics.md) APIs.
@@ -193,7 +194,9 @@ and use it to automatically:
1. [Create an incident when an alert is triggered](../../operations/incident_management/manage_incidents.md#automatically-when-an-alert-is-triggered).
1. [Close incidents via recovery alerts](../../operations/incident_management/manage_incidents.md#automatically-close-incidents-via-recovery-alerts).
-### Supported DORA metrics in GitLab
+## DORA metrics in GitLab
+
+GitLab supports the following DORA metrics:
| Metric | Level | API | UI chart | Comments |
|---------------------------|-------------------|-----------------------------------------------------|------------------------|----------|
@@ -203,3 +206,22 @@ and use it to automatically:
| `lead_time_for_changes` | Group | [GitLab 13.10 and later](../../api/dora/metrics.md) | GitLab 14.0 and later | Unit in seconds. Aggregation method is median. |
| `time_to_restore_service` | Project and group | [GitLab 14.9 and later](../../api/dora/metrics.md) | GitLab 15.1 and later | Unit in days. Aggregation method is median. |
| `change_failure_rate` | Project and group | [GitLab 14.10 and later](../../api/dora/metrics.md) | GitLab 15.2 and later | Percentage of deployments. |
+
+### DORA metrics charts
+
+The DORA metrics are displayed on the following charts:
+
+- [Value Streams Dashboard](value_streams_dashboard.md), which helps you identify trends, patterns, and opportunities for improvement. DORA metrics are displayed in the [metrics comparison panel](value_streams_dashboard.md#devsecops-metrics-comparison-panel) and the [DORA Performers score panel](value_streams_dashboard.md#dora-performers-score-panel).
+- [CI/CD analytics charts](ci_cd_analytics.md), which show pipeline success rates and duration, and the history of DORA metrics over time.
+- Insights reports for [groups](../group/insights/index.md) and [projects](../group/value_stream_analytics/index.md), where you can also use [DORA query parameters](../../user/project/insights/index.md#dora-query-parameters) to create custom charts.
+
+### DORA metrics data aggregation
+
+The table below provides an overview of the DORA metrics' data aggregation in different charts.
+
+| Metric name | Measured values | Data aggregation in the [Value Streams Dashboard](value_streams_dashboard.md) | Data aggregation in [CI/CD analytics charts](ci_cd_analytics.md) | Data aggregation in [Custom insights reporting](../../user/project/insights/index.md#dora-query-parameters) |
+|---------------------------|-------------------|-----------------------------------------------------|------------------------|----------|
+| Deployment frequency | Number of successful deployments | daily average per month | daily average | `day` (default) or `month` |
+| Lead time for changes | Number of seconds to successfully deliver a commit into production | daily median per month | median time | `day` (default) or `month` |
+| Time to restore service | Number of seconds an incident was open for | daily median per month | daily median | `day` (default) or `month` |
+| Change failure rate | percentage of deployments that cause an incident in production | daily median per month | percentage of failed deployments | `day` (default) or `month` |
diff --git a/doc/user/analytics/value_streams_dashboard.md b/doc/user/analytics/value_streams_dashboard.md
index 45be6f5aa25..b5358cc81c8 100644
--- a/doc/user/analytics/value_streams_dashboard.md
+++ b/doc/user/analytics/value_streams_dashboard.md
@@ -214,8 +214,8 @@ Label filters are appended as query parameters to the URL of the drill-down repo
| Change failure rate | Percentage of deployments that cause an incident in production. | [Change failure rate tab](https://gitlab.com/groups/gitlab-org/-/analytics/ci_cd?tab=change-failure-rate) | [Change failure rate](dora_metrics.md#change-failure-rate) | `change_failure_rate` |
| Lead time | Median time from issue created to issue closed. | [Value Stream Analytics](https://gitlab.com/groups/gitlab-org/-/analytics/value_stream_analytics) | [View the lead time and cycle time for issues](../group/value_stream_analytics/index.md#lifecycle-metrics) | `lead_time` |
| Cycle time | Median time from the earliest commit of a linked issue's merge request to when that issue is closed. | [VSA overview](https://gitlab.com/groups/gitlab-org/-/analytics/value_stream_analytics) | [View the lead time and cycle time for issues](../group/value_stream_analytics/index.md#lifecycle-metrics) | `cycle_time` |
-| New issues | Number of new issues created. | [Issue Analytics](https://gitlab.com/groups/gitlab-org/-/issues_analytics) | Issue analytics [for projects](issue_analytics.md) and [for groups](../../user/group/issues_analytics/index.md) | `issues` |
-| Closed issues | Number of issues closed by month. | [Value Stream Analytics](https://gitlab.com/groups/gitlab-org/-/analytics/value_stream_analytics) | [Value Stream Analytics](../group/value_stream_analytics/index.md) | `issues_completed` |
+| Issues created | Number of new issues created. | [Issue Analytics](https://gitlab.com/groups/gitlab-org/-/issues_analytics) | Issue analytics [for projects](issue_analytics.md) and [for groups](../../user/group/issues_analytics/index.md) | `issues` |
+| Issues closed | Number of issues closed by month. | [Value Stream Analytics](https://gitlab.com/groups/gitlab-org/-/analytics/value_stream_analytics) | [Value Stream Analytics](../group/value_stream_analytics/index.md) | `issues_completed` |
| Number of deploys | Total number of deploys to production. | [Merge Request Analytics](https://gitlab.com/gitlab-org/gitlab/-/analytics/merge_request_analytics) | [Merge request analytics](merge_request_analytics.md) | `deploys` |
| Merge request throughput | The number of merge requests merged by month. | [Groups Productivity analytics](productivity_analytics.md), [Projects Merge Request Analytics](https://gitlab.com/gitlab-org/gitlab/-/analytics/merge_request_analytics) | [Groups Productivity analytics](productivity_analytics.md) [Projects Merge request analytics](merge_request_analytics.md) | `merge_request_throughput` |
| Critical vulnerabilities over time | Critical vulnerabilities over time in project or group | [Vulnerability report](https://gitlab.com/gitlab-org/gitlab/-/security/vulnerability_report) | [Vulnerability report](../application_security/vulnerability_report/index.md) | `vulnerability_critical` |
diff --git a/doc/user/application_security/container_scanning/index.md b/doc/user/application_security/container_scanning/index.md
index 6ee8be822da..ac03f08e23b 100644
--- a/doc/user/application_security/container_scanning/index.md
+++ b/doc/user/application_security/container_scanning/index.md
@@ -7,11 +7,6 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Container Scanning **(FREE ALL)**
-> - Improved support for FIPS [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/263482) in GitLab 13.6 by upgrading `CS_MAJOR_VERSION` from `2` to `3`.
-> - Integration with Trivy [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/322656) in GitLab 13.9 by upgrading `CS_MAJOR_VERSION` from `3` to `4`.
-> - Integration with Clair [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/321451) in GitLab 13.9.
-> - Default container scanning with Trivy [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/61850) in GitLab 14.0.
-> - Integration with Grype as an alternative scanner [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/326279) in GitLab 14.0.
> - [Changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/86092) the major analyzer version from `4` to `5` in GitLab 15.0.
> - [Moved](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/86783) from GitLab Ultimate to GitLab Free in 15.0.
> - Container Scanning variables that reference Docker [renamed](https://gitlab.com/gitlab-org/gitlab/-/issues/357264) in GitLab 15.4.
@@ -22,8 +17,9 @@ vulnerabilities. By including an extra Container Scanning job in your pipeline t
vulnerabilities and displays them in a merge request, you can use GitLab to audit your Docker-based
apps.
-<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
+- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [Container Scanning](https://www.youtube.com/watch?v=C0jn2eN5MAs).
+- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For a video walkthrough, see [How to set up Container Scanning using GitLab](https://youtu.be/h__mcXpil_4?si=w_BVG68qnkL9x4l1).
Container Scanning is often considered part of Software Composition Analysis (SCA). SCA can contain
aspects of inspecting the items your code uses. These items typically include application and system
@@ -58,23 +54,23 @@ information directly in the merge request.
### Capabilities
-| Capability | In Free | In Ultimate |
+| Capability | In Free and Premium | In Ultimate |
| --- | ------ | ------ |
-| [Configure Scanners](#configuration) | Yes | Yes |
-| Customize Settings ([Variables](#available-cicd-variables), [Overriding](#overriding-the-container-scanning-template), [offline environment support](#running-container-scanning-in-an-offline-environment), etc) | Yes | Yes |
-| [View JSON Report](#reports-json-format) as a CI job artifact | Yes | Yes |
-| Generation of a JSON report of [dependencies](#dependency-list) as a CI job artifact | Yes | Yes |
-| Ability to enable container scanning via an MR in the GitLab UI | Yes | Yes |
-| [UBI Image Support](#fips-enabled-images) | Yes | Yes |
-| Support for Trivy | Yes | Yes |
-| Support for Grype | Yes | Yes |
+| [Configure Scanners](#configuration) | **{check-circle}** Yes | **{check-circle}** Yes |
+| Customize Settings ([Variables](#available-cicd-variables), [Overriding](#overriding-the-container-scanning-template), [offline environment support](#running-container-scanning-in-an-offline-environment), etc) | **{check-circle}** Yes | **{check-circle}** Yes |
+| [View JSON Report](#reports-json-format) as a CI job artifact | **{check-circle}** Yes | **{check-circle}** Yes |
+| Generation of a JSON report of [dependencies](#dependency-list) as a CI job artifact | **{check-circle}** Yes | **{check-circle}** Yes |
+| Ability to enable container scanning via an MR in the GitLab UI | **{check-circle}** Yes | **{check-circle}** Yes |
+| [UBI Image Support](#fips-enabled-images) | **{check-circle}** Yes | **{check-circle}** Yes |
+| Support for Trivy | **{check-circle}** Yes | **{check-circle}** Yes |
+| Support for Grype | **{check-circle}** Yes | **{check-circle}** Yes |
| Inclusion of GitLab Advisory Database | Limited to the time-delayed content from GitLab [advisories-communities](https://gitlab.com/gitlab-org/advisories-community/) project | Yes - all the latest content from [Gemnasium DB](https://gitlab.com/gitlab-org/security-products/gemnasium-db) |
-| Presentation of Report data in Merge Request and Security tab of the CI pipeline job | No | Yes |
-| [Interaction with Vulnerabilities](#interacting-with-the-vulnerabilities) such as merge request approvals | No | Yes |
-| [Solutions for vulnerabilities (auto-remediation)](#solutions-for-vulnerabilities-auto-remediation) | No | Yes |
-| Support for the [vulnerability allow list](#vulnerability-allowlisting) | No | Yes |
-| [Access to Security Dashboard page](#security-dashboard) | No | Yes |
-| [Access to Dependency List page](../dependency_list/index.md) | No | Yes |
+| Presentation of Report data in Merge Request and Security tab of the CI pipeline job | **{dotted-circle}** No | **{check-circle}** Yes |
+| [Interaction with Vulnerabilities](#interacting-with-the-vulnerabilities) such as merge request approvals | **{dotted-circle}** No | **{check-circle}** Yes |
+| [Solutions for vulnerabilities (auto-remediation)](#solutions-for-vulnerabilities-auto-remediation) | **{dotted-circle}** No | **{check-circle}** Yes |
+| Support for the [vulnerability allow list](#vulnerability-allowlisting) | **{dotted-circle}** No | **{check-circle}** Yes |
+| [Access to Security Dashboard page](#security-dashboard) | **{dotted-circle}** No | **{check-circle}** Yes |
+| [Access to Dependency List page](../dependency_list/index.md) | **{dotted-circle}** No | **{check-circle}** Yes |
## Prerequisites
@@ -133,6 +129,10 @@ Setting `CS_DEFAULT_BRANCH_IMAGE` avoids duplicate vulnerability findings when a
The value of `CS_DEFAULT_BRANCH_IMAGE` indicates the name of the scanned image as it appears on the default branch.
For more details on how this deduplication is achieved, see [Setting the default branch image](#setting-the-default-branch-image).
+## Running jobs in merge request pipelines
+
+See [Use security scanning tools with merge request pipelines](../index.md#use-security-scanning-tools-with-merge-request-pipelines).
+
### Customizing the container scanning settings
There may be cases where you want to customize how GitLab scans your containers. For example, you
@@ -272,28 +272,30 @@ including a large number of false positives.
| `CS_REGISTRY_USER` | `$CI_REGISTRY_USER` | Username for accessing a Docker registry requiring authentication. The default is only set if `$CS_IMAGE` resides at [`$CI_REGISTRY`](../../../ci/variables/predefined_variables.md). Not supported when [FIPS mode](../../../development/fips_compliance.md#enable-fips-mode) is enabled. | All |
| `CS_DOCKERFILE_PATH` | `Dockerfile` | The path to the `Dockerfile` to use for generating remediations. By default, the scanner looks for a file named `Dockerfile` in the root directory of the project. You should configure this variable only if your `Dockerfile` is in a non-standard location, such as a subdirectory. See [Solutions for vulnerabilities](#solutions-for-vulnerabilities-auto-remediation) for more details. | All |
| `CS_QUIET` | `""` | If set, this variable disables output of the [vulnerabilities table](#container-scanning-job-log-format) in the job log. [Introduced](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning/-/merge_requests/50) in GitLab 15.1. | All |
-| `SECURE_LOG_LEVEL` | `info` | Set the minimum logging level. Messages of this logging level or higher are output. From highest to lowest severity, the logging levels are: `fatal`, `error`, `warn`, `info`, `debug`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/10880) in GitLab 13.1. | All |
+| `CS_TRIVY_JAVA_DB` | `"ghcr.io/aquasecurity/trivy-java-db"` | Specify an alternate location for the [trivy-java-db](https://github.com/aquasecurity/trivy-java-db) vulnerability database. | Trivy |
+| `CS_IGNORE_STATUSES` | `""` | Force the analyzer to ignore vulnerability findings with specified statuses in a comma-delimited list. For `trivy`, the following values are allowed: `unknown,not_affected,affected,fixed,under_investigation,will_not_fix,fix_deferred,end_of_life`. For `grype`, the following values are allowed: `fixed,not-fixed,unknown,wont-fix`. | All |
+| `SECURE_LOG_LEVEL` | `info` | Set the minimum logging level. Messages of this logging level or higher are output. From highest to lowest severity, the logging levels are: `fatal`, `error`, `warn`, `info`, `debug`. | All |
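+
+For example, a minimal sketch that uses the stable Container Scanning template and sets `CS_IGNORE_STATUSES` so findings Trivy reports with a `will_not_fix` or `end_of_life` status are ignored:
+
+```yaml
+include:
+  - template: Security/Container-Scanning.gitlab-ci.yml
+
+container_scanning:
+  variables:
+    # Trivy status values; see the CS_IGNORE_STATUSES row above for the full list.
+    CS_IGNORE_STATUSES: "will_not_fix,end_of_life"
+```
+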
### Supported distributions
Support depends on which scanner is used:
-| Distribution | Grype | Trivy |
-| -------------- | ----- | ----- |
-| Alma Linux | | ✅ |
-| Alpine Linux | ✅ | ✅ |
-| Amazon Linux | ✅ | ✅ |
-| BusyBox | ✅ | |
-| CentOS | ✅ | ✅ |
-| CBL-Mariner | | ✅ |
-| Debian | ✅ | ✅ |
-| Distroless | ✅ | ✅ |
-| Oracle Linux | ✅ | ✅ |
-| Photon OS | | ✅ |
-| Red Hat (RHEL) | ✅ | ✅ |
-| Rocky Linux | | ✅ |
-| SUSE | | ✅ |
-| Ubuntu | ✅ | ✅ |
+| Distribution | Grype | Trivy |
+|----------------|------------------------|------------------------|
+| Alma Linux | **{dotted-circle}** No | **{check-circle}** Yes |
+| Alpine Linux | **{check-circle}** Yes | **{check-circle}** Yes |
+| Amazon Linux | **{check-circle}** Yes | **{check-circle}** Yes |
+| BusyBox | **{check-circle}** Yes | **{dotted-circle}** No |
+| CentOS | **{check-circle}** Yes | **{check-circle}** Yes |
+| CBL-Mariner | **{dotted-circle}** No | **{check-circle}** Yes |
+| Debian | **{check-circle}** Yes | **{check-circle}** Yes |
+| Distroless | **{check-circle}** Yes | **{check-circle}** Yes |
+| Oracle Linux | **{check-circle}** Yes | **{check-circle}** Yes |
+| Photon OS | **{dotted-circle}** No | **{check-circle}** Yes |
+| Red Hat (RHEL) | **{check-circle}** Yes | **{check-circle}** Yes |
+| Rocky Linux | **{dotted-circle}** No | **{check-circle}** Yes |
+| SUSE | **{dotted-circle}** No | **{check-circle}** Yes |
+| Ubuntu | **{check-circle}** Yes | **{check-circle}** Yes |
#### FIPS-enabled images
@@ -654,6 +656,32 @@ Also:
Scanning images in external private registries is not supported when [FIPS mode](../../../development/fips_compliance.md#enable-fips-mode) is enabled.
+#### Create and use a Trivy Java database mirror
+
+When the `trivy` scanner is used and a `jar` file is encountered in a container image being scanned, `trivy` downloads an additional `trivy-java-db` vulnerability database. By default, the `trivy-java-db` database is hosted as an [OCI artifact](https://oras.land/docs/quickstart) at `ghcr.io/aquasecurity/trivy-java-db:1`. If this registry is not accessible, for example in a network-isolated offline GitLab instance, one solution is to mirror the `trivy-java-db` to a container registry that can be accessed in the offline instance:
+
+```yaml
+mirror trivy java db:
+ image:
+ name: ghcr.io/oras-project/oras:v1.1.0
+ entrypoint: [""]
+ script:
+ - oras login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
+ - oras pull ghcr.io/aquasecurity/trivy-java-db:1
+ - oras push $CI_REGISTRY_IMAGE:1 --config /dev/null:application/vnd.aquasec.trivy.config.v1+json javadb.tar.gz:application/vnd.aquasec.trivy.javadb.layer.v1.tar+gzip
+```
+
+If the above container registry is `gitlab.example.com/trivy-java-db-mirror`, then the container scanning job should be configured in the following way:
+
+```yaml
+include:
+ - template: Security/Container-Scanning.gitlab-ci.yml
+
+container_scanning:
+ variables:
+ CS_TRIVY_JAVA_DB: gitlab.example.com/trivy-java-db-mirror:1
+```
+
## Running the standalone container scanning tool
It's possible to run the [GitLab container scanning tool](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning)
@@ -715,24 +743,24 @@ All analyzer images are [updated daily](https://gitlab.com/gitlab-org/security-p
The images use data from upstream advisory databases depending on which scanner is used:
-| Data Source | Trivy | Grype |
-| ------------------------------ | ----- | ----- |
-| AlmaLinux Security Advisory | ✅ | ✅ |
-| Amazon Linux Security Center | ✅ | ✅ |
-| Arch Linux Security Tracker | ✅ | |
-| SUSE CVRF | ✅ | ✅ |
-| CWE Advisories | ✅ | |
-| Debian Security Bug Tracker | ✅ | ✅ |
-| GitHub Security Advisory | ✅ | ✅ |
-| Go Vulnerability Database | ✅ | |
-| CBL-Mariner Vulnerability Data | ✅ | |
-| NVD | ✅ | ✅ |
-| OSV | ✅ | |
-| Red Hat OVAL v2 | ✅ | ✅ |
-| Red Hat Security Data API | ✅ | ✅ |
-| Photon Security Advisories | ✅ | |
-| Rocky Linux UpdateInfo | ✅ | |
-| Ubuntu CVE Tracker (only data sources from mid 2021 and later) | ✅ | ✅ |
+| Data Source | Trivy | Grype |
+|----------------------------------------------------------------|------------------------|------------------------|
+| AlmaLinux Security Advisory | **{check-circle}** Yes | **{check-circle}** Yes |
+| Amazon Linux Security Center | **{check-circle}** Yes | **{check-circle}** Yes |
+| Arch Linux Security Tracker | **{check-circle}** Yes | **{dotted-circle}** No |
+| SUSE CVRF | **{check-circle}** Yes | **{check-circle}** Yes |
+| CWE Advisories | **{check-circle}** Yes | **{dotted-circle}** No |
+| Debian Security Bug Tracker | **{check-circle}** Yes | **{check-circle}** Yes |
+| GitHub Security Advisory | **{check-circle}** Yes | **{check-circle}** Yes |
+| Go Vulnerability Database | **{check-circle}** Yes | **{dotted-circle}** No |
+| CBL-Mariner Vulnerability Data | **{check-circle}** Yes | **{dotted-circle}** No |
+| NVD | **{check-circle}** Yes | **{check-circle}** Yes |
+| OSV | **{check-circle}** Yes | **{dotted-circle}** No |
+| Red Hat OVAL v2 | **{check-circle}** Yes | **{check-circle}** Yes |
+| Red Hat Security Data API | **{check-circle}** Yes | **{check-circle}** Yes |
+| Photon Security Advisories | **{check-circle}** Yes | **{dotted-circle}** No |
+| Rocky Linux UpdateInfo | **{check-circle}** Yes | **{dotted-circle}** No |
+| Ubuntu CVE Tracker (only data sources from mid 2021 and later) | **{check-circle}** Yes | **{check-circle}** Yes |
In addition to the sources provided by these scanners, GitLab maintains the following vulnerability databases:
diff --git a/doc/user/application_security/continuous_vulnerability_scanning/index.md b/doc/user/application_security/continuous_vulnerability_scanning/index.md
index 4094a0add28..e31fc5f7eb0 100644
--- a/doc/user/application_security/continuous_vulnerability_scanning/index.md
+++ b/doc/user/application_security/continuous_vulnerability_scanning/index.md
@@ -29,10 +29,9 @@ To enable Continuous Vulnerability Scanning:
- Enable the Continuous Vulnerability Scanning setting in the project's [security configuration](../configuration/index.md).
- Enable [Dependency Scanning](../dependency_scanning/index.md#configuration) and ensure that its prerequisites are met.
+- On GitLab self-managed only, you can [choose package registry metadata to synchronize](../../../administration/settings/security_and_compliance.md#choose-package-registry-metadata-to-sync) in the Admin Area for the GitLab instance. For this data synchronization to work, you must allow outbound network traffic from your GitLab instance to the domain `storage.googleapis.com`. If you have limited or no network connectivity, see [Running in an offline environment](#running-in-an-offline-environment) for further guidance.
-On GitLab self-managed only, you can [choose package registry metadata to sync](../../../administration/settings/security_and_compliance.md#choose-package-registry-metadata-to-sync) in the Admin Area for the GitLab instance.
-
-### Requirements for offline environments
+### Running in an offline environment
For self-managed GitLab instances in an environment with limited, restricted, or intermittent access to external resources through the internet,
some adjustments are required to successfully scan CycloneDX reports for vulnerabilities.
diff --git a/doc/user/application_security/dast/browser_based.md b/doc/user/application_security/dast/browser_based.md
index 26782c319b1..207db52ed71 100644
--- a/doc/user/application_security/dast/browser_based.md
+++ b/doc/user/application_security/dast/browser_based.md
@@ -66,6 +66,8 @@ See [checks](checks/index.md) for more information about individual checks.
Active scans check for vulnerabilities by injecting attack payloads into HTTP requests recorded during the crawl phase of the scan.
Active scans are disabled by default due to the nature of their probing attacks.
+#### How active scans work
+
DAST analyzes each recorded HTTP request for injection locations, such as query values, header values, cookie values, form posts, and JSON string values.
Attack payloads are injected into the injection location, forming a new request.
DAST sends the request to the target application and uses the HTTP response to determine attack success.
@@ -84,6 +86,12 @@ A simplified timing attack works as follows:
1. The target application is vulnerable if it executes the query parameter value as a system command without validation, for example, `system(params[:search])`
1. DAST creates a finding if the response time takes longer than 10 seconds.
+#### Known issues
+
+To minimize scan time, active scans do not use a browser to send HTTP requests.
+
+Anti-CSRF tokens are not regenerated for attacks that submit forms. You should disable anti-CSRF tokens when running an active scan.
+
## Getting started
To run a DAST scan:
@@ -167,7 +175,7 @@ For authentication CI/CD variables, see [Authentication](authentication.md).
| CI/CD variable | Type | Example | Description |
|:--------------------------------------------|:---------------------------------------------------------|----------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `DAST_ADVERTISE_SCAN` | boolean | `true` | Set to `true` to add a `Via` header to every request sent, advertising that the request was sent as part of a GitLab DAST scan. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/334947) in GitLab 14.1. |
+| `DAST_ADVERTISE_SCAN` | boolean | `true` | Set to `true` to add a `Via` header to every request sent, advertising that the request was sent as part of a GitLab DAST scan. The header value starts with `GitLab DAST`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/334947) in GitLab 14.1. |
| `DAST_BROWSER_ACTION_STABILITY_TIMEOUT` | [Duration string](https://pkg.go.dev/time#ParseDuration) | `800ms` | The maximum amount of time to wait for a browser to consider a page loaded and ready for analysis after completing an action. |
| `DAST_BROWSER_ACTION_TIMEOUT` | [Duration string](https://pkg.go.dev/time#ParseDuration) | `7s` | The maximum amount of time to wait for a browser to complete an action. |
| `DAST_BROWSER_ALLOWED_HOSTS` | List of strings | `site.com,another.com` | Hostnames included in this variable are considered in scope when crawled. By default the `DAST_WEBSITE` hostname is included in the allowed hosts list. Headers set using `DAST_REQUEST_HEADERS` are added to every request made to these hostnames. |
@@ -209,7 +217,7 @@ For authentication CI/CD variables, see [Authentication](authentication.md).
| `DAST_REQUEST_HEADERS` | string | `Cache-control:no-cache` | Set to a comma-separated list of request header names and values. |
| `DAST_SKIP_TARGET_CHECK` | boolean | `true` | Set to `true` to prevent DAST from checking that the target is available before scanning. Default: `false`. |
| `DAST_TARGET_AVAILABILITY_TIMEOUT` | number | `60` | Time limit in seconds to wait for target availability. |
-| `DAST_WEBSITE` | URL | `https://example.com` | The URL of the website to scan. |
+| `DAST_WEBSITE` | URL | `https://example.com` | The URL of the target application to scan. |
| `SECURE_ANALYZERS_PREFIX` | URL | `registry.organization.com` | Set the Docker registry base address from which to download the analyzer. |
## Managing scope
@@ -281,22 +289,17 @@ dast:
DAST_EXCLUDE_URLS: "https://my.site.com/user/logout" # don't visit this URL
```
-## Vulnerability detection
+## Vulnerability check migration
+
+A migration is underway to change the browser-based analyzer from using the active vulnerability checks of the proxy-based analyzer, Zed Attack Proxy (ZAP), to using GitLab-built active vulnerability checks.
+
+Until the migration is complete, the browser-based analyzer uses a combination of proxy-based and GitLab-built vulnerability checks. See [browser-based vulnerability checks](checks/index.md) for details of which checks have been migrated.
-Vulnerability detection is gradually being migrated from the default Zed Attack Proxy (ZAP) solution
-to the browser-based analyzer. For details of the vulnerability detection already migrated, see
-[browser-based vulnerability checks](checks/index.md).
+### Why browser-based scans produce different results to proxy-based scans
-The crawler runs the target website in a browser with DAST/ZAP configured as the proxy server. This
-ensures that all requests and responses made by the browser are passively scanned by DAST/ZAP. When
-running a full scan, active vulnerability checks executed by DAST/ZAP do not use a browser. This
-difference in how vulnerabilities are checked can cause issues that require certain features of the
-target website to be disabled to ensure the scan works as intended.
+Browser-based and proxy-based scans do not produce the same results because they use a different set of vulnerability checks.
-For example, for a target website that contains forms with Anti-CSRF tokens, a passive scan works as
-intended because the browser displays pages and forms as if a user is viewing the page. However,
-active vulnerability checks that run in a full scan cannot submit forms containing Anti-CSRF tokens.
-In such cases, we recommend you disable Anti-CSRF tokens when running a full scan.
+The browser-based analyzer does not have an equivalent for proxy-based checks that create too many false positives, that are not worth running because modern browsers don't allow the vulnerability to be exploited, or that are no longer considered relevant. The browser-based analyzer also includes checks that the proxy-based analyzer does not.
## Managing scan time
diff --git a/doc/user/application_security/dast/checks/89.1.md b/doc/user/application_security/dast/checks/89.1.md
new file mode 100644
index 00000000000..ca7ff5e4593
--- /dev/null
+++ b/doc/user/application_security/dast/checks/89.1.md
@@ -0,0 +1,37 @@
+---
+stage: Secure
+group: Dynamic Analysis
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# SQL Injection
+
+## Description
+
+It is possible to execute arbitrary SQL commands on the target application server's
+backend database.
+SQL Injection is a critical vulnerability that can lead to a data or system
+compromise.
+
+## Remediation
+
+Always use parameterized queries when issuing requests to backend database systems. In
+situations where dynamic queries must be created, never use direct user input, but
+instead use a map or dictionary of valid values and resolve them using a user-supplied key.
+
+For example, some database drivers do not allow parameterized queries for `>` or `<` comparison
+operators. In these cases, do not use a user-supplied `>` or `<` value, but rather have the user
+supply a `gt` or `lt` value. The alphabetical values are then used to look up the `>` and `<`
+values to be used in the construction of the dynamic query. The same applies to other queries where
+column or table names are required but cannot be parameterized.
+
+## Details
+
+| ID | Aggregated | CWE | Type | Risk |
+|:---|:--------|:--------|:--------|:--------|
+| 89.1 | false | 89 | Active | high |
+
+## Links
+
+- [OWASP](https://owasp.org/www-community/attacks/SQL_Injection)
+- [CWE](https://cwe.mitre.org/data/definitions/89.html)
diff --git a/doc/user/application_security/dast/checks/917.1.md b/doc/user/application_security/dast/checks/917.1.md
new file mode 100644
index 00000000000..68b9665e393
--- /dev/null
+++ b/doc/user/application_security/dast/checks/917.1.md
@@ -0,0 +1,33 @@
+---
+stage: Secure
+group: Dynamic Analysis
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Expression Language Injection
+
+## Description
+
+It is possible to execute arbitrary Expression Language (EL) statements on the target
+application server. EL injection is a critical severity vulnerability that can lead to
+full system compromise. EL injection can occur when attacker-controlled data is used to construct
+EL statements without neutralizing special characters. These special characters could modify the
+intended EL statement prior to it being executed by an interpreter.
+
+## Remediation
+
+Always neutralize special elements in user-controlled data before using it to construct
+Expression Language statements. Consult the documentation for the EL interpreter in use
+for how to properly neutralize user-controlled data.
+
+## Details
+
+| ID | Aggregated | CWE | Type | Risk |
+|:---|:--------|:--------|:--------|:--------|
+| 917.1 | false | 917 | Active | high |
+
+## Links
+
+- [CWE](https://cwe.mitre.org/data/definitions/917.html)
+- [OWASP](https://owasp.org/www-community/vulnerabilities/Expression_Language_Injection)
+- [Expression Language Injection [PDF]](https://mindedsecurity.com/wp-content/uploads/2020/10/ExpressionLanguageInjection.pdf)
diff --git a/doc/user/application_security/dast/checks/94.1.md b/doc/user/application_security/dast/checks/94.1.md
new file mode 100644
index 00000000000..ec30b41c5e8
--- /dev/null
+++ b/doc/user/application_security/dast/checks/94.1.md
@@ -0,0 +1,53 @@
+---
+stage: Secure
+group: Dynamic Analysis
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Server-side code injection (PHP)
+
+## Description
+
+The target application was found vulnerable to code injection. A malicious actor could inject arbitrary
+PHP code to be executed on the server. This could lead to a full system compromise by accessing
+stored secrets, injecting code to take over accounts, or executing OS commands.
+
+## Remediation
+
+Never pass user input directly into functions that evaluate string data as code, such as `eval`.
+There is almost no benefit to passing string values to `eval`, so the best recommendation is
+to replace the current logic with a safer implementation for dynamically evaluating logic with
+user input. One alternative is to use an `array()`, storing expected user inputs as array
+keys, and use the key as a lookup to execute functions:
+
+```php
+$func_to_run = function()
+{
+ print('hello world');
+};
+
+$function_map = array();
+$function_map["fn"] = $func_to_run; // store additional input to function mappings here
+
+$input = "fn";
+
+// lookup "fn" as the key
+if (array_key_exists($input, $function_map)) {
+ // run the $func_to_run that was stored in the "fn" array hash value.
+ $func = $function_map[$input];
+ $func();
+} else {
+ print('invalid input');
+}
+```
+
+## Details
+
+| ID | Aggregated | CWE | Type | Risk |
+|:---|:--------|:--------|:--------|:--------|
+| 94.1 | false | 94 | Active | high |
+
+## Links
+
+- [CWE](https://cwe.mitre.org/data/definitions/94.html)
+- [OWASP](https://owasp.org/www-community/attacks/Code_Injection)
diff --git a/doc/user/application_security/dast/checks/94.2.md b/doc/user/application_security/dast/checks/94.2.md
new file mode 100644
index 00000000000..666052807b5
--- /dev/null
+++ b/doc/user/application_security/dast/checks/94.2.md
@@ -0,0 +1,51 @@
+---
+stage: Secure
+group: Dynamic Analysis
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Server-side code injection (Ruby)
+
+## Description
+
+The target application was found vulnerable to code injection. A malicious actor could inject arbitrary
+Ruby code to be executed on the server. This could lead to a full system compromise by accessing
+stored secrets, injecting code to take over accounts, or executing OS commands.
+
+## Remediation
+
+Never pass user input directly into methods that evaluate string data as code, such as `eval`,
+`send`, `public_send`, `instance_eval`, or `class_eval`. There is almost no benefit to passing string
+values to these methods, so the best recommendation is to replace the current logic with a safer
+implementation for dynamically evaluating logic with user input. If you use `send` or `public_send`, ensure
+the first argument is a known, hardcoded method or symbol and does not come from user input.
+
+Never send user input directly to `eval`, `instance_eval`, or `class_eval`.
+One alternative is to store functions or methods in a Hash that can be looked up using a key. If the key
+exists, the function can be executed.
+
+```ruby
+def func_to_run
+ puts 'hello world'
+end
+
+input = 'fn'
+
+function_map = { fn: method(:func_to_run) }
+
+if function_map.key?(input.to_sym)
+ function_map[input.to_sym].call
+else
+ puts 'invalid input'
+end
+```
+
+## Details
+
+| ID | Aggregated | CWE | Type | Risk |
+|:---|:--------|:--------|:--------|:--------|
+| 94.2 | false | 94 | Active | high |
+
+## Links
+
+- [CWE](https://cwe.mitre.org/data/definitions/94.html)
diff --git a/doc/user/application_security/dast/checks/94.3.md b/doc/user/application_security/dast/checks/94.3.md
new file mode 100644
index 00000000000..772cdb1d3ea
--- /dev/null
+++ b/doc/user/application_security/dast/checks/94.3.md
@@ -0,0 +1,45 @@
+---
+stage: Secure
+group: Dynamic Analysis
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Server-side code injection (Python)
+
+## Description
+
+The target application was found vulnerable to code injection. A malicious actor could inject arbitrary
+Python code to be executed on the server. This could lead to a full system compromise by accessing
+stored secrets, injecting code to take over accounts, or executing OS commands.
+
+## Remediation
+
+Never pass user input directly into functions that evaluate string data as code, such as `eval`
+or `exec`. There is almost no benefit to passing string values to these functions, so the best
+recommendation is to replace the current logic with a safer implementation for dynamically evaluating
+logic with user input. One alternative is to store functions in a dictionary that can be looked
+up using a key. If the key exists, the function can be executed.
+
+```python
+def func_to_run():
+ print('hello world')
+
+function_map = {'fn': func_to_run}
+
+input = 'fn'
+
+if input in function_map:
+ function_map[input]()
+else:
+ print('invalid input')
+```
+
+## Details
+
+| ID | Aggregated | CWE | Type | Risk |
+|:---|:--------|:--------|:--------|:--------|
+| 94.3 | false | 94 | Active | high |
+
+## Links
+
+- [CWE](https://cwe.mitre.org/data/definitions/94.html)
diff --git a/doc/user/application_security/dast/checks/943.1.md b/doc/user/application_security/dast/checks/943.1.md
new file mode 100644
index 00000000000..debae65669a
--- /dev/null
+++ b/doc/user/application_security/dast/checks/943.1.md
@@ -0,0 +1,30 @@
+---
+stage: Secure
+group: Dynamic Analysis
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Improper neutralization of special elements in data query logic
+
+## Description
+
+The application generates a query intended to interact with MongoDB,
+but it does not neutralize, or incorrectly neutralizes, special elements
+that can modify the intended logic of the query.
+
+## Remediation
+
+Refactor find or search queries to use standard
+filtering operators such as `$gt` or `$in` instead of broad operators such
+as `$where`. If possible, disable the MongoDB JavaScript interface entirely.
+
+## Details
+
+| ID | Aggregated | CWE | Type | Risk |
+|:---|:--------|:--------|:--------|:--------|
+| 943.1 | false | 943 | Active | high |
+
+## Links
+
+- [CWE](https://cwe.mitre.org/data/definitions/943.html)
+- [Disabling MongoDB Server Side JS](https://www.mongodb.com/docs/manual/core/server-side-javascript/#std-label-disable-server-side-js)
diff --git a/doc/user/application_security/dast/checks/index.md b/doc/user/application_security/dast/checks/index.md
index 4d41f08672e..c239fdb5e74 100644
--- a/doc/user/application_security/dast/checks/index.md
+++ b/doc/user/application_security/dast/checks/index.md
@@ -170,4 +170,10 @@ The [DAST browser-based crawler](../browser_based.md) provides a number of vulne
| [113.1](113.1.md) | Improper Neutralization of CRLF Sequences in HTTP Headers | High | Active |
| [22.1](22.1.md) | Improper limitation of a pathname to a restricted directory (Path traversal) | High | Active |
| [611.1](611.1.md) | External XML Entity Injection (XXE) | High | Active |
+| [89.1](89.1.md) | SQL Injection | High | Active |
+| [917.1](917.1.md) | Expression Language Injection | High | Active |
+| [94.1](94.1.md) | Server-side code injection (PHP) | High | Active |
+| [94.2](94.2.md) | Server-side code injection (Ruby) | High | Active |
+| [94.3](94.3.md) | Server-side code injection (Python) | High | Active |
| [94.4](94.4.md) | Server-side code injection (NodeJS) | High | Active |
+| [943.1](943.1.md) | Improper neutralization of special elements in data query logic | High | Active |
diff --git a/doc/user/application_security/dast/proxy-based.md b/doc/user/application_security/dast/proxy-based.md
index 230d8ef5ca3..9e59ecc64d9 100644
--- a/doc/user/application_security/dast/proxy-based.md
+++ b/doc/user/application_security/dast/proxy-based.md
@@ -11,11 +11,14 @@ The DAST proxy-based analyzer can be added to your [GitLab CI/CD](../../../ci/in
This helps you discover vulnerabilities in web applications that do not use JavaScript heavily. For applications that do,
see the [DAST browser-based analyzer](browser_based.md).
+<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
+For a video walkthrough, see [How to set up Dynamic Application Security Testing (DAST) with GitLab](https://youtu.be/EiFE1QrUQfk?si=6rpgwgUpalw3ByiV).
+
WARNING:
Do not run DAST scans against a production server. Not only can it perform *any* function that
a user can, such as clicking buttons or submitting forms, but it may also trigger bugs, leading to modification or loss of production data. Only run DAST scans against a test server.
-The analyzer uses the [OWASP Zed Attack Proxy](https://www.zaproxy.org/) (ZAP) to scan in two different ways:
+The analyzer uses the [Software Security Project Zed Attack Proxy](https://www.zaproxy.org/) (ZAP) to scan in two different ways:
- Passive scan only (default). DAST executes
[ZAP's Baseline Scan](https://www.zaproxy.org/docs/docker/baseline-scan/) and doesn't
@@ -382,7 +385,7 @@ including a large number of false positives.
| `DAST_REQUEST_HEADERS` <sup>1</sup> | string | Set to a comma-separated list of request header names and values. Headers are added to every request made by DAST. For example, `Cache-control: no-cache,User-Agent: DAST/1.0` |
| `DAST_SKIP_TARGET_CHECK` | boolean | Set to `true` to prevent DAST from checking that the target is available before scanning. Default: `false`. |
| `DAST_SPIDER_MINS` <sup>1</sup> | number | The maximum duration of the spider scan in minutes. Set to `0` for unlimited. Default: One minute, or unlimited when the scan is a full scan. |
-| `DAST_SPIDER_START_AT_HOST` | boolean | Set to `false` to prevent DAST from resetting the target to its host before scanning. When `true`, non-host targets `http://test.site/some_path` is reset to `http://test.site` before scan. Default: `true`. |
+| `DAST_SPIDER_START_AT_HOST` | boolean | Set to `false` to prevent DAST from resetting the target to its host before scanning. When `true`, non-host targets `http://test.site/some_path` is reset to `http://test.site` before scan. Default: `false`. |
| `DAST_TARGET_AVAILABILITY_TIMEOUT` <sup>1</sup> | number | Time limit in seconds to wait for target availability. |
| `DAST_USE_AJAX_SPIDER` <sup>1</sup> | boolean | Set to `true` to use the AJAX spider in addition to the traditional spider, useful for crawling sites that require JavaScript. Default: `false`. |
| `DAST_XML_REPORT` | string | **{warning}** **[Deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/384340)** in GitLab 15.7. The filename of the XML report written at the end of a scan. |
diff --git a/doc/user/application_security/dependency_scanning/index.md b/doc/user/application_security/dependency_scanning/index.md
index c04134de2b2..683ba6ad19b 100644
--- a/doc/user/application_security/dependency_scanning/index.md
+++ b/doc/user/application_security/dependency_scanning/index.md
@@ -6,11 +6,6 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Dependency Scanning **(ULTIMATE ALL)**
-<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
-For an interactive reading and how-to demo of this Dependency Scanning doc, see [How to use dependency scanning tutorial hands-on GitLab Application Security part 3](https://youtu.be/ii05cMbJ4xQ?feature=shared)
-<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
-For an interactive reading and how-to demo playlist, see [Get Started With GitLab Application Security Playlist](https://www.youtube.com/playlist?list=PL05JrBw4t0KrUrjDoefSkgZLx5aJYFaF9)
-
Dependency Scanning analyzes your application's dependencies for known vulnerabilities. All
dependencies are scanned, including transitive dependencies, also known as nested dependencies.
@@ -33,12 +28,16 @@ we encourage you to use all of our security scanners. For a comparison of these
![Dependency scanning Widget](img/dependency_scanning_v13_2.png)
-<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
-For an overview, see [Dependency Scanning](https://www.youtube.com/watch?v=TBnfbGk4c4o).
-
WARNING:
Dependency Scanning does not support runtime installation of compilers and interpreters.
+- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
+For an overview, see [Dependency Scanning](https://www.youtube.com/watch?v=TBnfbGk4c4o)
+- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
+For an interactive reading and how-to demo of this Dependency Scanning documentation, see [How to use dependency scanning tutorial hands-on GitLab Application Security part 3](https://youtu.be/ii05cMbJ4xQ?feature=shared)
+- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
+For other interactive reading and how-to demos, see [Get Started With GitLab Application Security Playlist](https://www.youtube.com/playlist?list=PL05JrBw4t0KrUrjDoefSkgZLx5aJYFaF9)
+
## Supported languages and package managers
The following languages and dependency managers are supported:
@@ -230,7 +229,8 @@ table.supported-languages ul {
<li>
<a id="notes-regarding-supported-languages-and-package-managers-2"></a>
<p>
- Java 21 LTS is only available when using <a href="https://maven.apache.org/">Maven</a> or <a href="https://gradle.org/">Gradle</a>. Java 21 LTS for <a href="https://www.scala-sbt.org/">sbt</a> is not yet available and tracked in <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/421174">issue 421174</a>. It is not supported when <a href="https://docs.gitlab.com/ee/development/fips_compliance.html#enable-fips-mode">FIPS mode</a> is enabled.
+ Java 21 LTS for <a href="https://www.scala-sbt.org/">sbt</a> is limited to version 1.9.7. Support for more <a href="https://www.scala-sbt.org/">sbt</a> versions can be tracked in <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/430335">issue 430335</a>.
+ It is not supported when <a href="https://docs.gitlab.com/ee/development/fips_compliance.html#enable-fips-mode">FIPS mode</a> is enabled.
</p>
</li>
<li>
@@ -599,6 +599,10 @@ To enable dependency scanning:
Pipelines now include a dependency scanning job.
+### Running jobs in merge request pipelines
+
+See [Use security scanning tools with merge request pipelines](../index.md#use-security-scanning-tools-with-merge-request-pipelines)
+
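+For example, a minimal sketch that switches dependency scanning to the `latest` template so the jobs run in merge request pipelines. The `Jobs/Dependency-Scanning.latest.gitlab-ci.yml` path is an assumption here; confirm the exact `latest` template name for your GitLab version on the linked page:
+
+```yaml
+include:
+  # The stable template (Security/Dependency-Scanning.gitlab-ci.yml) targets branch pipelines;
+  # the `latest` template adds merge request pipeline support, but may contain breaking changes.
+  - template: Jobs/Dependency-Scanning.latest.gitlab-ci.yml
+```
+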
### Customizing analyzer behavior
You can use CI/CD variables to customize dependency scanning behavior.
@@ -1093,6 +1097,17 @@ variables:
GRADLE_CLI_OPTS: "-Dhttps.proxyHost=squid-proxy -Dhttps.proxyPort=3128 -Dhttp.proxyHost=squid-proxy -Dhttp.proxyPort=3128 -Dhttp.nonProxyHosts=localhost"
```
+## Using a proxy with Maven projects
+
+Maven does not read the `HTTP(S)_PROXY` environment variables.
+
+To make the Maven dependency scanner use a proxy, you can specify the options using the `MAVEN_CLI_OPTS` CI/CD variable:
+
+```yaml
+variables:
+  MAVEN_CLI_OPTS: "-DproxySet=true -Dhttps.proxyHost=squid-proxy -Dhttps.proxyPort=3128 -Dhttp.proxyHost=squid-proxy -Dhttp.proxyPort=3128"
+```
+
## Specific settings for languages and package managers
See the following sections for configuring specific languages and package managers.
diff --git a/doc/user/application_security/get-started-security.md b/doc/user/application_security/get-started-security.md
index 3e73fbc5955..6143dd59373 100644
--- a/doc/user/application_security/get-started-security.md
+++ b/doc/user/application_security/get-started-security.md
@@ -11,32 +11,42 @@ For an overview, see [Adopting GitLab application security](https://www.youtube.
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an interactive reading and how-to demo playlist, see [Get Started With GitLab Application Security Playlist](https://www.youtube.com/playlist?list=PL05JrBw4t0KrUrjDoefSkgZLx5aJYFaF9)
-The following steps help you get the most from GitLab application security tools. These steps are a recommended order of operations. You can choose to implement capabilities in a different order or omit features that do not apply to your specific needs.
-
-1. Enable [Secret Detection](secret_detection/index.md) and [Dependency Scanning](dependency_scanning/index.md)
- to identify any leaked secrets and vulnerable packages in your codebase.
-
- - For all security scanners, enable them by updating your [`.gitlab-ci.yml`](../../ci/yaml/gitlab_ci_yaml.md) directly on your `default` branch. This creates a baseline scan of your `default` branch, which is necessary for
- feature branch scans to be compared against. This allows [merge requests](../project/merge_requests/index.md)
- to display only newly-introduced vulnerabilities. Otherwise, merge requests display every
- vulnerability in the branch, regardless of whether it was introduced by a change in the branch.
- - If you are after simplicity, enable only Secret Detection first. It only has one analyzer,
- no build requirements, and relatively simple findings: is this a secret or not?
- - It is good practice to enable Dependency Scanning early so you can start identifying existing
- vulnerable packages in your codebase.
-1. Let your team get comfortable with [vulnerability reports](vulnerability_report/index.md) and
- establish a vulnerability triage workflow.
-1. Consider creating [labels](../project/labels.md) and [issue boards](../project/issue_board.md) to
+The following steps help introduce you to GitLab application security tools incrementally.
+You can choose to enable features in a different order, or skip features that don't apply to your specific needs.
+You should start with:
+
+- [Secret Detection](secret_detection/index.md), which works with all programming languages and creates understandable results.
+- [Dependency Scanning](dependency_scanning/index.md), which finds known vulnerabilities in the dependencies your code uses.
+
+If it's your first time setting up GitLab security scanning, you should start with a single project.
+After you've gotten familiar with how scanning works, you can then choose to:
+
+- Follow [the same steps](#recommended-steps) to enable scanning in more projects.
+- [Enforce scanning](index.md#enforce-scan-execution) across more of your projects at once.
+
+## Recommended steps
+
+1. Choose a project to enable and test security features. Consider choosing a project:
+ - That uses your organization's typical programming languages and technologies, because some scanning features work differently across languages.
+ - Where you can try out new settings, like required approvals, without interrupting your team's daily work.
+ You could create a copy of a higher-traffic project for testing, or select a project that's not as busy.
+1. Create a merge request to [enable Secret Detection](secret_detection/index.md#enable-secret-detection) and [enable Dependency Scanning](dependency_scanning/index.md#configuration)
+ to identify any leaked secrets and vulnerable packages in that project.
+ - Security scanners run in your project's [CI/CD pipelines](../../ci/pipelines/index.md). Creating a merge request to update your [`.gitlab-ci.yml`](../../ci/index.md#the-gitlab-ciyml-file) helps you check how the scanners work with your project before they start running in every pipeline. In the merge request, you can change relevant [Secret Detection settings](secret_detection/index.md#configure-scan-settings) or [Dependency Scanning settings](dependency_scanning/index.md#available-cicd-variables) to accommodate your project's layout or configuration. For example, you might choose to exclude a directory of third-party code from scanning.
+ - After you merge this MR to your [default branch](../project/repository/branches/default.md), the system creates a baseline scan. This scan identifies which vulnerabilities already exist on the default branch so [merge requests](../project/merge_requests/index.md) can highlight only newly-introduced problems. Without a baseline scan, merge requests display every
+ vulnerability in the branch, even if the vulnerability already exists on the default branch.
+1. Let your team get comfortable with [viewing security findings in merge requests](index.md#view-security-scan-information) and the [vulnerability report](vulnerability_report/index.md).
+1. Establish a vulnerability triage workflow.
+ - Consider creating [labels](../project/labels.md) and [issue boards](../project/issue_board.md) to
help manage issues created from vulnerabilities. Issue boards allow all stakeholders to have a
common view of all issues and track remediation progress.
+1. Monitor the [Security Dashboard](security_dashboard/index.md) trends to gauge success in remediating existing vulnerabilities and preventing the introduction of new ones.
1. Enforce scheduled security scanning jobs by using a [scan execution policy](policies/scan-execution-policies.md).
- These scheduled jobs run independently from any other security scans you may have defined in a compliance framework pipeline or in the project's `.gitlab-ci.yml` file.
- Running regular dependency and [container scans](container_scanning/index.md) surface newly-discovered vulnerabilities that already exist in your repository.
- Scheduled scans are most useful for projects or important branches with low development activity where pipeline scans are infrequent.
1. Create a [scan result policy](policies/index.md) to limit new vulnerabilities from being merged
- into your `default` branch.
-1. Monitor the [Security Dashboard](security_dashboard/index.md) trends to gauge success in
- remediating existing vulnerabilities and preventing the introduction of new ones.
+ into your [default branch](../project/repository/branches/default.md).
1. Enable other scan types such as [SAST](sast/index.md), [DAST](dast/index.md),
[Fuzz testing](coverage_fuzzing/index.md), or [Container Scanning](container_scanning/index.md).
1. Use [Compliance Pipelines](../group/compliance_frameworks.md#compliance-pipelines)
diff --git a/doc/user/application_security/index.md b/doc/user/application_security/index.md
index 62155e07fbc..25fa1f5cbaf 100644
--- a/doc/user/application_security/index.md
+++ b/doc/user/application_security/index.md
@@ -177,6 +177,9 @@ By default, the application security jobs are configured to run for branch pipel
To use them with [merge request pipelines](../../ci/pipelines/merge_request_pipelines.md),
you must reference the [`latest` templates](../../development/cicd/templates.md).
+The latest version of the template may include breaking changes. Use the stable template unless you
+need a feature provided only in the latest template.
+
All `latest` security templates support merge request pipelines.
For example, to run both SAST and Dependency Scanning, the following template is used:
@@ -193,6 +196,9 @@ Mixing `latest` and `stable` security templates can cause both MR and branch pip
NOTE:
Latest templates can receive breaking changes in any release.
+For more information about template versioning, see the
+[CI/CD documentation](../../development/cicd/templates.md#latest-version).
+
## Default behavior of GitLab security scanning tools
### Secure jobs in your pipeline
@@ -264,7 +270,7 @@ In the Free tier, the reports above aren't parsed by GitLab. As a result, the wi
A merge request contains a security widget which displays a summary of the _new_ results. New results are determined by comparing the findings of the merge request against the findings of the most recent completed pipeline (`success`, `failed`, `canceled` or `skipped`) for the commit when the feature branch was created from the target branch.
-If security scans have not run for the completed pipeline in the target branch when the feature branch was created, there is no base for comparison. The vulnerabilities from the merge request findings are listed as new in the merge request security widget. We recommend you run a scan of the `default` (target) branch before enabling feature branch scans for your developers.
+GitLab checks the last 10 pipelines for the commit when the feature branch was created from the target branch, to find a pipeline with security reports to use in the comparison logic. If security scans have not run for the last 10 completed pipelines in the target branch when the feature branch was created, there is no base for comparison. The vulnerabilities from the merge request findings are listed as new in the merge request security widget. We recommend you run a scan of the `default` (target) branch before enabling feature branch scans for your developers.
The merge request security widget displays only a subset of the vulnerabilities in the generated JSON artifact because it contains both new and existing findings.
@@ -472,9 +478,9 @@ You can always find supported and deprecated schema versions in the [source code
You can interact with the results of the security scanning tools in several locations:
- [Scan information in merge requests](#merge-request)
-- [Project Security Dashboard](security_dashboard/index.md#view-vulnerabilities-over-time-for-a-project)
+- [Project Security Dashboard](security_dashboard/index.md#project-security-dashboard)
- [Security pipeline tab](security_dashboard/index.md)
-- [Group Security Dashboard](security_dashboard/index.md#view-vulnerabilities-over-time-for-a-group)
+- [Group Security Dashboard](security_dashboard/index.md#group-security-dashboard)
- [Security Center](security_dashboard/index.md#security-center)
- [Vulnerability Report](vulnerability_report/index.md)
- [Vulnerability Pages](vulnerabilities/index.md)
diff --git a/doc/user/application_security/policies/scan-execution-policies.md b/doc/user/application_security/policies/scan-execution-policies.md
index 0eb2355beb7..f6ef8a2c49e 100644
--- a/doc/user/application_security/policies/scan-execution-policies.md
+++ b/doc/user/application_security/policies/scan-execution-policies.md
@@ -28,9 +28,6 @@ implicitly so that the policies can be enforced. This ensures policies enabling
secret detection, static analysis, or other scanners that do not require a build in the
project, are still able to execute and be enforced.
-<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
-For an overview, see [Enforcing scan execution policies on projects with no GitLab CI/CD configuration](https://www.youtube.com/watch?v=sUfwQQ4-qHs).
-
In the event of a job name collision, GitLab appends a hyphen and a number to the job name. GitLab
increments the number until the name no longer conflicts with existing job names. If you create a
policy at the group level, it applies to every child project or subgroup. You cannot edit a
@@ -46,6 +43,9 @@ Policy jobs for scans other than DAST scans are created in the `test` stage of t
[`stages`](../../../ci/yaml/index.md#stages),
to remove the `test` stage, jobs will run in the `scan-policies` stage instead. This stage is injected into the CI pipeline at evaluation time if it doesn't exist. If the `build` stage exists, it is injected just after the `build` stage. If the `build` stage does not exist, it is injected at the beginning of the pipeline. DAST scans always run in the `dast` stage. If this stage does not exist, then a `dast` stage is injected at the end of the pipeline.
+- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For a video walkthrough, see [How to set up Security Scan Policies in GitLab](https://youtu.be/ZBcqGmEwORA?si=aeT4EXtmHjosgjBY).
+- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an overview, see [Enforcing scan execution policies on projects with no GitLab CI/CD configuration](https://www.youtube.com/watch?v=sUfwQQ4-qHs).
+
## Scan execution policy editor
NOTE:
diff --git a/doc/user/application_security/policies/scan-result-policies.md b/doc/user/application_security/policies/scan-result-policies.md
index d892012c365..d0d3cb2ca03 100644
--- a/doc/user/application_security/policies/scan-result-policies.md
+++ b/doc/user/application_security/policies/scan-result-policies.md
@@ -27,13 +27,18 @@ The following video gives you an overview of GitLab scan result policies:
<iframe src="https://www.youtube-nocookie.com/embed/w5I9gcUgr9U" frameborder="0" allowfullscreen> </iframe>
</figure>
+## Requirements and limitations
+
+- You must add the respective [security scanning tools](../index.md#application-coverage).
+ Otherwise, scan result policies do not have any effect.
+- The maximum number of policies is five.
+- Each policy can have a maximum of five rules.
+- All configured scanners must be present in the merge request's latest pipeline. If not, approvals are required even if some vulnerability criteria have not been met.
+
## Merge request with multiple pipelines
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/379108) in GitLab 16.2 [with a flag](../../../administration/feature_flags.md) named `multi_pipeline_scan_result_policies`. Disabled by default.
-> - [Enabled on GitLab.com and self-managed](https://gitlab.com/gitlab-org/gitlab/-/issues/409482) in GitLab 16.3.
-
-FLAG:
-On self-managed GitLab, by default this feature is available. To hide the feature, an administrator can [disable the feature flag](../../../administration/feature_flags.md) named `multi_pipeline_scan_result_policies`. On GitLab.com, this feature is available.
+> - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/409482) in GitLab 16.3. Feature flag `multi_pipeline_scan_result_policies` removed.
A project can have multiple pipeline types configured. A single commit can initiate multiple
pipelines, each of which may contain a security scan.
@@ -78,36 +83,31 @@ When you save a new policy, GitLab validates its contents against [this JSON sch
If you're not familiar with how to read [JSON schemas](https://json-schema.org/),
the following sections and tables provide an alternative.
-| Field | Type | Required | Possible values | Description |
-|-------|------|----------|-----------------|-------------|
-| `scan_result_policy` | `array` of Scan Result Policy | true | | List of scan result policies (maximum 5). |
+| Field | Type | Required | Possible values | Description |
+|----------------------|-------------------------------|----------|-----------------|------------------------------------------|
+| `scan_result_policy` | `array` of Scan Result Policy | true | | List of scan result policies (maximum 5). |
## Scan result policy schema
-> The `approval_settings` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418752) in GitLab 16.4 [with a flag](../../../administration/feature_flags.md) named `scan_result_policy_settings`. Disabled by default.
+> - The `approval_settings` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418752) in GitLab 16.4 [with flags](../../../administration/feature_flags.md) named `scan_result_policies_block_unprotecting_branches`, `scan_result_any_merge_request`, or `scan_result_policies_block_force_push`. All are disabled by default.
FLAG:
-On self-managed GitLab, by default this feature is not available. To make it available, ask an administrator to [enable the feature flag](../../../administration/feature_flags.md) named `scan_result_policy_settings`.
-On GitLab.com, this feature is not available.
+On self-managed GitLab, by default the `approval_settings` field is unavailable. To show the feature, an administrator can [enable the feature flags](../../../administration/feature_flags.md) named `scan_result_policies_block_unprotecting_branches`, `scan_result_any_merge_request`, or `scan_result_policies_block_force_push`. See the `approval_settings` section below for more information.
| Field | Type | Required |Possible values | Description |
-|-------|------|----------|----------------|-------------|
+|--------|------|----------|----------------|-------------|
| `name` | `string` | true | | Name of the policy. Maximum of 255 characters.|
-| `description` (optional) | `string` | true | | Description of the policy. |
+| `description` | `string` | false | | Description of the policy. |
| `enabled` | `boolean` | true | `true`, `false` | Flag to enable (`true`) or disable (`false`) the policy. |
| `rules` | `array` of rules | true | | List of rules that the policy applies. |
-| `actions` | `array` of actions | false| | List of actions that the policy enforces. |
-| `approval_settings` | `object` | false | `{prevent_approval_by_author: boolean, prevent_approval_by_commit_author: boolean, remove_approvals_with_new_commit: boolean, require_password_to_approve: boolean}` | Project settings that the policy overrides. |
+| `actions` | `array` of actions | false | | List of actions that the policy enforces. |
+| `approval_settings` | `object` | false | | Project settings that the policy overrides. |
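+
+For orientation, a minimal sketch of a single policy that uses these top-level fields. The rule and action fields used here (`scan_finding` and `require_approval`) are described in the following sections:
+
+```yaml
+scan_result_policy:
+  - name: Require approval for new critical findings
+    description: Require one maintainer approval when a scan reports new critical vulnerabilities.
+    enabled: true
+    rules:
+      - type: scan_finding
+        branches: []              # an empty list applies the rule to all protected branches
+        scanners:
+          - container_scanning
+        vulnerabilities_allowed: 0
+        severity_levels:
+          - critical
+        vulnerability_states:
+          - newly_detected
+    actions:
+      - type: require_approval
+        approvals_required: 1
+        role_approvers:
+          - maintainer
+```
+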
## `scan_finding` rule type
-> - The scan result policy field `vulnerability_attributes` was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123052) in GitLab 16.2 [with a flag](../../../administration/feature_flags.md) named `enforce_vulnerability_attributes_rules`. [Enabled on GitLab.com and self-managed](https://gitlab.com/gitlab-org/gitlab/-/issues/418784) in GitLab 16.3. Feature flag `enforce_vulnerability_attributes_rules` removed in GitLab 16.5.
+> - The scan result policy field `vulnerability_attributes` was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123052) in GitLab 16.2 [with a flag](../../../administration/feature_flags.md) named `enforce_vulnerability_attributes_rules`. [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/418784) in GitLab 16.3. Feature flag removed.
> - The scan result policy field `vulnerability_age` was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123956) in GitLab 16.2.
-> - The `branch_exceptions` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) in GitLab 16.3 [with a flag](../../../administration/feature_flags.md) named `security_policies_branch_exceptions`. Generally available in GitLab 16.5. Feature flag removed.
-
-FLAG:
-On self-managed GitLab, by default the `branch_exceptions` field is available. To hide the feature, an administrator can [disable the feature flag](../../../administration/feature_flags.md) named `security_policies_branch_exceptions`.
-On GitLab.com, this feature is available.
+> - The `branch_exceptions` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) in GitLab 16.3 [with a flag](../../../administration/feature_flags.md) named `security_policies_branch_exceptions`. [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133753) in GitLab 16.5. Feature flag removed.
This rule enforces the defined actions based on security scan findings.
@@ -128,11 +128,7 @@ This rule enforces the defined actions based on security scan findings.
> - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/8092) in GitLab 15.9 [with a flag](../../../administration/feature_flags.md) named `license_scanning_policies`.
> - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/397644) in GitLab 15.11. Feature flag `license_scanning_policies` removed.
-> - The `branch_exceptions` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) in GitLab 16.3 [with a flag](../../../administration/feature_flags.md) named `security_policies_branch_exceptions`. Enabled by default.
-
-FLAG:
-On self-managed GitLab, by default the `branch_exceptions` field is available. To hide the feature, an administrator can [disable the feature flag](../../../administration/feature_flags.md) named `security_policies_branch_exceptions`.
-On GitLab.com, this feature is available.
+> - The `branch_exceptions` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) in GitLab 16.3 [with a flag](../../../administration/feature_flags.md) named `security_policies_branch_exceptions`. Enabled by default. [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133753) in GitLab 16.5. Feature flag removed.
This rule enforces the defined actions based on license findings.
@@ -148,12 +144,11 @@ This rule enforces the defined actions based on license findings.
## `any_merge_request` rule type
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418752) in GitLab 16.4.
-> - The `branch_exceptions` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) in GitLab 16.3 [with a flag](../../../administration/feature_flags.md) named `security_policies_branch_exceptions`. Enabled by default.
+> - The `branch_exceptions` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) in GitLab 16.3 [with a flag](../../../administration/feature_flags.md) named `security_policies_branch_exceptions`. Enabled by default. [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133753) in GitLab 16.5. Feature flag removed.
+> - The `any_merge_request` rule type was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418752) in GitLab 16.4. Disabled by default.
FLAG:
-On self-managed GitLab, by default the `branch_exceptions` field is available. To hide the feature, an administrator can [disable the feature flag](../../../administration/feature_flags.md) named `security_policies_branch_exceptions`.
-On GitLab.com, this feature is available.
+On self-managed GitLab, by default the `any_merge_request` rule type is not available. To show the feature, an administrator can [enable the feature flag](../../../administration/feature_flags.md) named `scan_result_any_merge_request`.
+This rule enforces the defined actions for any merge request based on the commits' signatures.
@@ -180,13 +175,28 @@ the defined policy.
| `group_approvers_ids` | `array` of `integer` | false | ID of one of more groups | The IDs of groups to consider as approvers. Users with [direct membership in the group](../../project/merge_requests/approvals/rules.md#group-approvers) are eligible to approve. |
| `role_approvers` | `array` of `string` | false | One or more [roles](../../../user/permissions.md#roles) (for example: `owner`, `maintainer`) | The roles to consider as approvers that are eligible to approve. |
-Requirements and limitations:
+## `approval_settings`
-- You must add the respective [security scanning tools](../index.md#application-coverage).
- Otherwise, scan result policies do not have any effect.
-- The maximum number of policies is five.
-- Each policy can have a maximum of five rules.
-- All configured scanners must be present in the merge request's latest pipeline. If not, approvals are required even if some vulnerability criteria have not been met.
+> - The `block_unprotecting_branches` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/423101) in GitLab 16.4 [with a flag](../../../administration/feature_flags.md) named `scan_result_policy_settings`. Disabled by default.
+> - The `scan_result_policy_settings` feature flag was replaced by the `scan_result_policies_block_unprotecting_branches` feature flag in GitLab 16.4.
+> - The `prevent_approval_by_author`, `prevent_approval_by_commit_author`, `remove_approvals_with_new_commit`, and `require_password_to_approve` fields were [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418752) in GitLab 16.4 [with a flag](../../../administration/feature_flags.md) named `scan_result_any_merge_request`. Disabled by default.
+> - The `prevent_force_pushing` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/420629) in GitLab 16.4 [with a flag](../../../administration/feature_flags.md) named `scan_result_policies_block_force_push`. Disabled by default.
+
+FLAG:
+On self-managed GitLab, by default the `block_unprotecting_branches` field is unavailable. To show the feature, an administrator can [enable the feature flag](../../../administration/feature_flags.md) named `scan_result_policies_block_unprotecting_branches`. On GitLab.com, this feature is unavailable.
+On self-managed GitLab, by default the `prevent_approval_by_author`, `prevent_approval_by_commit_author`, `remove_approvals_with_new_commit`, and `require_password_to_approve` fields are unavailable. To show the feature, an administrator can [enable the feature flag](../../../administration/feature_flags.md) named `scan_result_any_merge_request`. On GitLab.com, this feature is available.
+On self-managed GitLab, by default the `prevent_force_pushing` field is unavailable. To show the feature, an administrator can [enable the feature flag](../../../administration/feature_flags.md) named `scan_result_policies_block_force_push`. On GitLab.com, this feature is unavailable.
+
+The settings defined in the policy override the corresponding settings in the project.
+
+| Field | Type | Required | Possible values | Description |
+|-------|------|----------|-----------------|-------------|
+| `block_unprotecting_branches` | `boolean` | false | `true`, `false` | Prevent a user from removing a branch from the protected branches list, deleting a protected branch, or changing the default branch if that branch is included in the security policy. |
+| `prevent_approval_by_author` | `boolean` | false | `true`, `false` | When enabled, two-person approval is required on all MRs, because merge request authors cannot approve their own MRs and merge them unilaterally. |
+| `prevent_approval_by_commit_author` | `boolean` | false | `true`, `false` | When enabled, users who have contributed code to the MR are ineligible for approval, ensuring that users who commit code cannot also approve it for merge. |
+| `remove_approvals_with_new_commit` | `boolean` | false | `true`, `false` | If an MR receives all necessary approvals to merge, but then a new commit is added, new approvals are required. This ensures that new commits, which may include vulnerabilities, cannot be merged without fresh approval. |
+| `require_password_to_approve` | `boolean` | false | `true`, `false` | Password confirmation on approvals provides an additional level of security. Enabling this enforces the setting on all projects targeted by this policy. |
+| `prevent_force_pushing` | `boolean` | false | `true`, `false` | Prevent pushing and force pushing to a protected branch. |
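+
+A minimal sketch of how these settings might appear in a scan result policy, using only the fields from the table above (the policy's `name`, `rules`, and `actions` are omitted here for brevity):
+
+```yaml
+approval_settings:
+  block_unprotecting_branches: true
+  prevent_approval_by_author: true
+  prevent_approval_by_commit_author: true
+  remove_approvals_with_new_commit: true
+  require_password_to_approve: false
+  prevent_force_pushing: true
+```
+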
## Example security scan result policies project
@@ -257,28 +267,47 @@ You can use this example in the YAML mode of the [Scan Result Policy editor](#sc
It corresponds to a single object from the previous example:
```yaml
-- name: critical vulnerability CS approvals
- description: critical severity level only for container scanning
- enabled: true
- rules:
- - type: scan_finding
- branches:
- - main
- scanners:
- - container_scanning
- vulnerabilities_allowed: 1
- severity_levels:
- - critical
- vulnerability_states:
- - newly_detected
- actions:
- - type: require_approval
- approvals_required: 1
- user_approvers:
- - adalberto.dare
+type: scan_result_policy
+name: critical vulnerability CS approvals
+description: critical severity level only for container scanning
+enabled: true
+rules:
+- type: scan_finding
+ branches:
+ - main
+ scanners:
+ - container_scanning
+ vulnerabilities_allowed: 1
+ severity_levels:
+ - critical
+ vulnerability_states:
+ - newly_detected
+actions:
+- type: require_approval
+ approvals_required: 1
+ user_approvers:
+ - adalberto.dare
```
-## Example situations where scan result policies require additional approval
+## Understanding scan result policy approvals
+
+### Scope of scan result policy comparison
+
+- To determine when approval is required on a merge request, we compare the latest completed pipelines from each supported pipeline source on the source and target branches (for example, `feature`/`main`). This ensures the most comprehensive evaluation of scan results.
+- We compare findings from the latest completed pipelines that ran on `HEAD` of the source and target branches.
+- Scan result policies consider all supported pipeline sources (based on the [`CI_PIPELINE_SOURCE` variable](../../../ci/variables/predefined_variables.md)) when comparing results from the source and target branches to determine whether a merge request requires approval. Pipeline sources `webide` and `parent_pipeline` are not supported.
+
+### Accepting risk and ignoring vulnerabilities in future merge requests
+
+For scan result policies that are scoped to `newly_detected` findings, it's important to understand the implications of this vulnerability state. A finding is considered `newly_detected` if it exists on the merge request's branch but not on the default branch. When a merge request whose branch contains `newly_detected` findings is approved and merged, approvers are "accepting the risk" of those vulnerabilities. If one or more of the same vulnerabilities were detected after this time, their status would be `previously_detected` and therefore out of scope of a policy aimed at `newly_detected` findings. For example:
+
+- A scan result policy is created to block critical SAST findings. If a SAST finding for CVE-1234 is approved, future merge requests with the same violation will not require approval in the project.
+
+When using license approval policies, the combination of project, component (dependency), and license is considered in the evaluation. If a license is approved as an exception, future merge requests don't require approval for the same combination of project, component (dependency), and license. The component's version is not considered in this case: if a previously approved package is updated to a new version, approvers do not need to re-approve. For example:
+
+- A license approval policy is created to block merge requests with newly detected licenses matching `AGPL-1.0`. A change is made in project `demo` for component `osframework` that violates the policy. If approved and merged, future merge requests to `osframework` in project `demo` with the license `AGPL-1.0` don't require approval.
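+
+A sketch of a license approval policy along the lines of the example above. The `license_finding` rule fields shown here (`match_on_inclusion`, `license_types`, `license_states`) are assumptions and should be checked against the rule fields documented on this page:
+
+```yaml
+- name: Require approval for newly detected AGPL-1.0 licenses
+  description: Block merge requests that newly introduce AGPL-1.0 licensed components
+  enabled: true
+  rules:
+  - type: license_finding
+    branches:
+    - main
+    match_on_inclusion: true
+    license_types:
+    - AGPL-1.0
+    license_states:
+    - newly_detected
+  actions:
+  - type: require_approval
+    approvals_required: 1
+    role_approvers:
+    - maintainer
+```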
+
+### Multiple approvals
There are several situations where the scan result policy requires an additional approval step. For example:
@@ -295,3 +324,43 @@ There are several situations where the scan result policy requires an additional
- Someone stops a pipeline security job, and users can't skip the security scan.
- A job in a merge request fails and is configured with `allow_failure: false`. As a result, the pipeline is in a blocked state.
- A pipeline has a manual job that must run successfully for the entire pipeline to pass.
+
+### Known issues
+
+In [epic 11020](https://gitlab.com/groups/gitlab-org/-/epics/11020), we have identified common areas of confusion in scan result findings that need to be addressed. A few of the known issues:
+
+- When using `newly_detected`, some findings may require approval even though they are not introduced by the merge request (such as a new CVE on a related dependency). We currently use the tip of the target branch for comparison. In the future, we plan to use the `merge base` for `newly_detected` policies (see [issue 428518](https://gitlab.com/gitlab-org/gitlab/-/issues/428518)).
+- Findings or errors that cause a scan result policy to require approval may not be evident in the Security MR Widget. Using the `merge base` ([issue 428518](https://gitlab.com/gitlab-org/gitlab/-/issues/428518)) will address some of these cases. We will additionally be [displaying more granular details](https://gitlab.com/groups/gitlab-org/-/epics/11185) about what caused security policy violations.
+- Security policy violations are distinct from the findings displayed in the MR widgets, so some violations may not be present in the MR widget. We are working to harmonize our features in [epic 11020](https://gitlab.com/groups/gitlab-org/-/epics/11020) and to display policy violations explicitly in merge requests in [epic 11185](https://gitlab.com/groups/gitlab-org/-/epics/11185).
+
+## Troubleshooting
+
+### Merge request rules widget shows a scan result policy is invalid or duplicated **(ULTIMATE SELF)**
+
+On GitLab self-managed from 15.0 to 16.4, the most likely cause is that the project had scan result
+policy rules and was exported from one group and imported into another. These rules are stored in a
+separate project from the one that was exported. As a result, the project contains policy rules that
+reference entities that don't exist in the imported project's group, making the rules invalid,
+duplicated, or both.
+
+To remove all invalid scan result policy rules from a GitLab instance, an administrator can run
+the following script in the [Rails console](../../../administration/operations/rails_console.md).
+
+```ruby
+Project.joins(:approval_rules).where(approval_rules: { report_type: %i[scan_finding license_scanning] }).where.not(approval_rules: { security_orchestration_policy_configuration_id: nil }).find_in_batches.flat_map do |batch|
+ batch.map do |project|
+ # Get projects and their configuration_ids for applicable project rules
+ [project, project.approval_rules.where(report_type: %i[scan_finding license_scanning]).pluck(:security_orchestration_policy_configuration_id).uniq]
+ end.uniq.map do |project, configuration_ids| # We take only unique combinations of project + configuration_ids
+ # If we find more configurations than what is available for the project, we take records with the extra configurations
+ [project, configuration_ids - project.all_security_orchestration_policy_configurations.pluck(:id)]
+ end.select { |_project, configuration_ids| configuration_ids.any? }
+end.each do |project, configuration_ids|
+ # For each found pair project + ghost configuration, we remove these rules for a given project
+ Security::OrchestrationPolicyConfiguration.where(id: configuration_ids).each do |configuration|
+ configuration.delete_scan_finding_rules_for_project(project.id)
+ end
+ # Ensure we sync any potential rules from new group's policy
+ Security::ScanResultPolicies::SyncProjectWorker.perform_async(project.id)
+end
+```
diff --git a/doc/user/application_security/sast/customize_rulesets.md b/doc/user/application_security/sast/customize_rulesets.md
index 90731114303..ed3b33fc35b 100644
--- a/doc/user/application_security/sast/customize_rulesets.md
+++ b/doc/user/application_security/sast/customize_rulesets.md
@@ -597,7 +597,7 @@ rules:
The following example [enables SAST](index.md#configure-sast-in-your-cicd-yaml) and uses a shared ruleset customization file. The file is:
-- Downloaded from a private project that requires authentication, by using a [Group Access Token](../../group/settings/group_access_tokens.md).
+- Downloaded from a private project that requires authentication, by using a [Group Access Token](../../group/settings/group_access_tokens.md) securely stored within a CI variable.
- Checked out at a specific Git commit SHA instead of the default branch.
See [group access tokens](../../group/settings/group_access_tokens.md#bot-users-for-groups) for how to find the username associated with a group token.
@@ -607,5 +607,5 @@ include:
- template: Security/SAST.gitlab-ci.yml
variables:
- SAST_RULESET_GIT_REFERENCE: "group_2504721_bot_7c9311ffb83f2850e794d478ccee36f5:glpat-1234567@gitlab.com/example-group/example-ruleset-project@c8ea7e3ff126987fb4819cc35f2310755511c2ab"
+ SAST_RULESET_GIT_REFERENCE: "group_2504721_bot_7c9311ffb83f2850e794d478ccee36f5:$PERSONAL_ACCESS_TOKEN@gitlab.com/example-group/example-ruleset-project@c8ea7e3ff126987fb4819cc35f2310755511c2ab"
```
diff --git a/doc/user/application_security/sast/index.md b/doc/user/application_security/sast/index.md
index acc7e9d9e84..770e24d87ca 100644
--- a/doc/user/application_security/sast/index.md
+++ b/doc/user/application_security/sast/index.md
@@ -273,6 +273,10 @@ When downloading, you always receive the most recent SAST artifact available.
You can enable and configure SAST by using the UI, either with the default settings or with customizations.
The method you can use depends on your GitLab license tier.
+### Running jobs in merge request pipelines
+
+See [Use security scanning tools with merge request pipelines](../index.md#use-security-scanning-tools-with-merge-request-pipelines).
+
#### Configure SAST with customizations **(ULTIMATE ALL)**
> [Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/410013) individual SAST analyzers configuration options from the UI in GitLab 16.2.
diff --git a/doc/user/application_security/sast/rules.md b/doc/user/application_security/sast/rules.md
index e4054764e1f..3fb24bcd66b 100644
--- a/doc/user/application_security/sast/rules.md
+++ b/doc/user/application_security/sast/rules.md
@@ -102,7 +102,7 @@ More details are available in release announcements and in the CHANGELOG links p
Key changes to the GitLab-managed ruleset for Semgrep-based scanning include:
-- Beginning in GitLab 16.3, the GitLab Static Analysis and Vulnerability Research teams are working to remove rules that tend to produce too many false positive results or not enough actionable true positive results. Existing findings from these removed rules are [automatically resolved](index.md#automatic-vulnerability-resolution); they no longer appear in the [Security Dashboard](../security_dashboard/index.md#view-vulnerabilities-over-time-for-a-project) or in the default view of the [Vulnerability Report](../vulnerability_report/index.md). This work is tracked in [epic 10907](https://gitlab.com/groups/gitlab-org/-/epics/10907).
+- Beginning in GitLab 16.3, the GitLab Static Analysis and Vulnerability Research teams are working to remove rules that tend to produce too many false positive results or not enough actionable true positive results. Existing findings from these removed rules are [automatically resolved](index.md#automatic-vulnerability-resolution); they no longer appear in the [Security Dashboard](../security_dashboard/index.md#project-security-dashboard) or in the default view of the [Vulnerability Report](../vulnerability_report/index.md). This work is tracked in [epic 10907](https://gitlab.com/groups/gitlab-org/-/epics/10907).
- In GitLab 16.0 through 16.2, the GitLab Vulnerability Research team updated the guidance that's included in each result.
- In GitLab 15.10, the `detect-object-injection` rule was [removed by default](https://gitlab.com/gitlab-org/gitlab/-/issues/373920) and its findings were [automatically resolved](index.md#automatic-vulnerability-resolution).
diff --git a/doc/user/application_security/sast/troubleshooting.md b/doc/user/application_security/sast/troubleshooting.md
index 34a2a3d01af..77a2f20c934 100644
--- a/doc/user/application_security/sast/troubleshooting.md
+++ b/doc/user/application_security/sast/troubleshooting.md
@@ -56,14 +56,14 @@ For information on this, see the [general Application Security troubleshooting s
For information on this, see the [GitLab Secure troubleshooting section](../index.md#error-job-is-used-for-configuration-only-and-its-script-should-not-be-executed).
-## Limitation when using rules:exists
+## SAST jobs are running unexpectedly
The [SAST CI template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/SAST.gitlab-ci.yml)
-uses the `rules:exists` parameter. For performance reasons, a maximum number of matches are made
-against the given glob pattern. If the number of matches exceeds the maximum, the `rules:exists`
+uses the `rules:exists` parameter. For performance reasons, a maximum of 10,000 matches are
+made against the given glob pattern. If the number of matches exceeds the maximum, the `rules:exists`
parameter returns `true`. Depending on the number of files in your repository, a SAST job might be
-triggered even if the scanner doesn't support your project. For more details about this issue, see
-the [`rules:exists` documentation](../../../ci/yaml/index.md#rulesexists).
+triggered even if the scanner doesn't support your project. For more details about this limitation,
+see the [`rules:exists` documentation](../../../ci/yaml/index.md#rulesexists).
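+
+As a generic illustration of this behavior (this job is not part of the SAST template; the job name and glob are only examples), the following rule matches when Python files exist, but also evaluates to `true` in any repository with more than 10,000 files because of the match limit:
+
+```yaml
+check_python_files:
+  script: echo "Runs when Python files exist, or when the repository exceeds the rules:exists match limit."
+  rules:
+    - exists:
+        - "**/*.py"
+```
+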
## SpotBugs UTF-8 unmappable character errors
diff --git a/doc/user/application_security/secret_detection/index.md b/doc/user/application_security/secret_detection/index.md
index 18016f6f342..4332b91c0f9 100644
--- a/doc/user/application_security/secret_detection/index.md
+++ b/doc/user/application_security/secret_detection/index.md
@@ -6,19 +6,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Secret Detection **(FREE ALL)**
-> - In GitLab 13.1, Secret Detection was split from the [SAST configuration](../sast/index.md#configuration)
-> into its own CI/CD template. If you're using GitLab 13.0 or earlier and SAST is enabled, then
-> Secret Detection is already enabled.
-> - [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/222788) from GitLab Ultimate to GitLab
-> Free in 13.3.
-> - [In GitLab 14.0](https://gitlab.com/gitlab-org/gitlab/-/issues/297269), Secret Detection jobs
-> `secret_detection_default_branch` and `secret_detection` were consolidated into one job,
-> `secret_detection`.
-
-<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
-For an interactive reading and how-to demo of this Secret Detection doc, see [How to enable secret detection in GitLab Application Security Part 1/2](https://youtu.be/dbMxeO6nJCE?feature=shared) and [How to enable secret detection in GitLab Application Security Part 2/2](https://youtu.be/VL-_hdiTazo?feature=shared)
-<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
-For an interactive reading and how-to demo playlist, see [Get Started With GitLab Application Security Playlist](https://www.youtube.com/playlist?list=PL05JrBw4t0KrUrjDoefSkgZLx5aJYFaF9)
+> [In GitLab 14.0](https://gitlab.com/gitlab-org/gitlab/-/issues/297269), Secret Detection jobs `secret_detection_default_branch` and `secret_detection` were consolidated into one job, `secret_detection`.
People sometimes accidentally commit secrets like keys or API tokens to Git repositories.
After a sensitive value is pushed to a remote repository, anyone with access to the repository can impersonate the authorized user of the secret for malicious purposes.
@@ -37,6 +25,13 @@ With GitLab Ultimate, Secret Detection results are also processed so you can:
- Review them in the security dashboard.
- [Automatically respond](automatic_response.md) to leaks in public repositories.
+<i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an interactive reading and how-to demo of this Secret Detection documentation see:
+
+- [How to enable secret detection in GitLab Application Security Part 1/2](https://youtu.be/dbMxeO6nJCE?feature=shared)
+- [How to enable secret detection in GitLab Application Security Part 2/2](https://youtu.be/VL-_hdiTazo?feature=shared)
+
+<i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For other interactive reading and how-to demos, see the [Get Started With GitLab Application Security Playlist](https://www.youtube.com/playlist?list=PL05JrBw4t0KrUrjDoefSkgZLx5aJYFaF9).
+
## Detected secrets
GitLab maintains the detection rules used in Secret Detection.
@@ -111,26 +106,13 @@ Secret Detection can detect if a secret was added in one commit and removed in a
- Merge request
In a merge request, Secret Detection scans every commit made on the source branch. To use this
- feature, you must use the [`latest` Secret Detection template](#templates), as it supports
+ feature, you must use the [`latest` Secret Detection template](../index.md#use-security-scanning-tools-with-merge-request-pipelines), as it supports
[merge request pipelines](../../../ci/pipelines/merge_request_pipelines.md). Secret Detection's
results are only available after the pipeline is completed.
-## Templates
+## Running jobs in merge request pipelines
-Secret Detection default configuration is defined in CI/CD templates. Updates to the template are
-provided with GitLab upgrades, allowing you to benefit from any improvements and additions.
-
-Available templates:
-
-- [`Secret-Detection.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Secret-Detection.gitlab-ci.yml): Stable, default version of the Secret Detection CI/CD template.
-- [`Secret-Detection.latest.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Secret-Detection.latest.gitlab-ci.yml): Latest version of the Secret Detection template.
-
-WARNING:
-The latest version of the template may include breaking changes. Use the stable template unless you
-need a feature provided only in the latest template.
-
-For more information about template versioning, see the
-[CI/CD documentation](../../../development/cicd/templates.md#latest-version).
+See [Use security scanning tools with merge request pipelines](../index.md#use-security-scanning-tools-with-merge-request-pipelines).
## Enable Secret Detection
@@ -166,7 +148,7 @@ your GitLab CI/CD configuration file is complex.
```yaml
include:
- - template: Security/Secret-Detection.gitlab-ci.yml
+ - template: Jobs/Secret-Detection.gitlab-ci.yml
```
1. Select the **Validate** tab, then select **Validate pipeline**.
@@ -232,7 +214,7 @@ This example uses a specific minor version of the analyzer:
```yaml
include:
- - template: Security/Secret-Detection.gitlab-ci.yml
+ - template: Jobs/Secret-Detection.gitlab-ci.yml
secret_detection:
variables:
@@ -262,7 +244,7 @@ In the following example _extract_ of a `.gitlab-ci.yml` file:
```yaml
include:
- - template: Security/Secret-Detection.gitlab-ci.yml
+ - template: Jobs/Secret-Detection.gitlab-ci.yml
secret_detection:
variables:
@@ -322,7 +304,7 @@ variables:
SECRET_DETECTION_IMAGE_SUFFIX: '-fips'
include:
- - template: Security/Secret-Detection.gitlab-ci.yml
+ - template: Jobs/Secret-Detection.gitlab-ci.yml
```
## Full history Secret Detection
@@ -576,7 +558,7 @@ Prerequisites:
```yaml
include:
- - template: Security/Secret-Detection.gitlab-ci.yml
+ - template: Jobs/Secret-Detection.gitlab-ci.yml
variables:
SECURE_ANALYZERS_PREFIX: "localhost:5000/analyzers"
diff --git a/doc/user/application_security/security_dashboard/img/group_security_dashboard.png b/doc/user/application_security/security_dashboard/img/group_security_dashboard.png
new file mode 100644
index 00000000000..1d324b8207a
--- /dev/null
+++ b/doc/user/application_security/security_dashboard/img/group_security_dashboard.png
Binary files differ
diff --git a/doc/user/application_security/security_dashboard/img/project_security_dashboard.png b/doc/user/application_security/security_dashboard/img/project_security_dashboard.png
new file mode 100644
index 00000000000..46fdebca9cd
--- /dev/null
+++ b/doc/user/application_security/security_dashboard/img/project_security_dashboard.png
Binary files differ
diff --git a/doc/user/application_security/security_dashboard/img/security_center_dashboard_v15_10.png b/doc/user/application_security/security_dashboard/img/security_center_dashboard_v15_10.png
deleted file mode 100644
index c2780fce787..00000000000
--- a/doc/user/application_security/security_dashboard/img/security_center_dashboard_v15_10.png
+++ /dev/null
Binary files differ
diff --git a/doc/user/application_security/security_dashboard/index.md b/doc/user/application_security/security_dashboard/index.md
index 53a6dfe6d0a..89c950f2473 100644
--- a/doc/user/application_security/security_dashboard/index.md
+++ b/doc/user/application_security/security_dashboard/index.md
@@ -7,64 +7,42 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# GitLab Security Dashboards and Security Center **(ULTIMATE ALL)**
-You can use Security Dashboards to view trends about vulnerabilities
-detected by [security scanners](../index.md#application-coverage).
-These trends are shown in projects, groups, and the Security Center.
+## Security Dashboards
-To use the Security Dashboards, you must:
+Security Dashboards are used to assess the security posture of your applications. GitLab provides
+you with a collection of metrics, ratings, and charts for the vulnerabilities detected by the [security scanners](../index.md#application-coverage) run on your project. The Security Dashboard provides data such as:
-- Configure at least one [security scanner](../index.md#application-coverage) in a project.
-- Configure jobs to use the [`reports` syntax](../../../ci/yaml/index.md#artifactsreports).
-- Use [GitLab Runner](https://docs.gitlab.com/runner/) 11.5 or later. If you use the
- shared runners on GitLab.com, you are using the correct version.
-- Have the [correct role](../../permissions.md) for the project or group.
+- Vulnerability trends over a 30, 60, or 90-day time frame for all projects in a group.
+- A letter grade rating for each project, based on vulnerability severity.
+- The total number of vulnerabilities detected within the last 365 days, including their severity.
+
+The data provided by the Security Dashboards can be used to inform decisions on how to improve your security posture. For example, using the 365-day trend view, you can see on which days a significant number of vulnerabilities were introduced. Then you can examine the code changes made on those particular days, perform a root-cause analysis, and create better policies to prevent vulnerabilities from being introduced in the future.
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [Security Dashboard](https://www.youtube.com/watch?v=QHQHN4luNpc).
-## When Security Dashboards are updated
-
-The Security Dashboards show results of scans from the most recent completed pipeline on the
-[default branch](../../project/repository/branches/default.md).
-Dashboards are updated with the result of completed pipelines run on the default branch; they do not include vulnerabilities discovered in pipelines from other un-merged branches.
-
-If you use manual jobs, for example gate deployments, in the default branch's pipeline,
-the results of any scans are only updated when the job has been successfully run.
-If manual jobs are skipped regularly, you should to define the job as optional,
-using the [`allow_failure`](../../../ci/jobs/job_control.md#types-of-manual-jobs) attribute.
-
-To ensure regular security scans (even on infrequently developed projects),
-you should use [scan execution policies](../../../user/application_security/policies/scan-execution-policies.md).
-Alternatively, you can
-[configure a scheduled pipeline](../../../ci/pipelines/schedules.md).
+## Prerequisites
-## Reduce false negatives in dependency scans
+To view the Security Dashboards, the following is required:
-WARNING:
-False negatives occur when you resolve dependency versions during a scan, which differ from those
-resolved when your project built and released in a previous pipeline.
+- The [Maintainer role](../../permissions.md#roles) for the project or group.
+- At least one [security scanner](../index.md#application-coverage) configured in your project.
+- A successful security scan run on the [default branch](../../project/repository/branches/default.md) of your project.
-To reduce false negatives in [dependency scans](../../../user/application_security/dependency_scanning/index.md) in scheduled pipelines, ensure you:
-
-- Include a lock file in your project. A lock file lists all transient dependencies and tracks their versions.
- - Java projects can't have lock files.
- - Python projects can have lock files, but GitLab Secure tools don't support them.
-- Configure your project for [Continuous Delivery](../../../ci/introduction/index.md).
+NOTE:
+The Security Dashboards show results of scans from the most recent completed pipeline on the
+[default branch](../../project/repository/branches/default.md). Dashboards are updated with the result of completed pipelines run on the default branch; they do not include vulnerabilities discovered in pipelines from other un-merged branches.
-## View vulnerabilities over time for a project
+## Viewing the Security Dashboard
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/235558) in GitLab 13.6.
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/285476) in GitLab 13.10, options to zoom in on a date range, and download the vulnerabilities chart.
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/285477) in GitLab 13.11, date range slider to visualize data between given dates.
+The Security Dashboard can be viewed at the project, group, and Security Center levels.
+Each dashboard provides a unique viewpoint of your security posture.
-The project Security Dashboard shows the total number of vulnerabilities
-over time, with up to 365 days of historical data. Data refresh begins daily at 01:15 UTC via a scheduled job.
-Each refresh captures a snapshot of open vulnerabilities. Data is not backported to prior days
-so vulnerabilities opened after the job has already run for the day cannot be reflected in the
-counts until the following day's refresh job.
-Project Security Dashboards show statistics for all vulnerabilities with a current status of `Needs triage` or `Confirmed` .
+### Project Security Dashboard
-To view total number of vulnerabilities over time:
+The Project Security Dashboard shows the total number of vulnerabilities detected over time,
+with up to 365 days of historical data for a given project. To view the Project Security
+Dashboard:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Security dashboard**.
@@ -75,70 +53,63 @@ To view total number of vulnerabilities over time:
across the chart.
- To reset to the original range, select **Remove Selection** (**{redo}**).
-### Download the vulnerabilities chart
+![Project Security Dashboard](img/project_security_dashboard.png)
-To download an SVG image of the vulnerabilities chart:
+#### Downloading the vulnerability chart
+
+You can download an image of the vulnerability chart from the Project Security Dashboard
+to use in documentation, presentations, and so on. To download the image of the vulnerability
+chart:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Security dashboard**.
1. Select **Save chart as an image** (**{download}**).
-## View vulnerabilities over time for a group
-
-The group Security Dashboard gives an overview of vulnerabilities found in the default
-branches of projects in a group and its subgroups.
-
-To view vulnerabilities over time for a group:
-
-1. On the left sidebar, select **Search or go to** and find your group.
-1. Select **Security > Security dashboard**.
-1. Hover over the chart to get more details about vulnerabilities.
- - You can display the vulnerability trends over a 30, 60, or 90-day time frame (the default is 90 days).
- - To view aggregated data beyond a 90-day time frame, use the
- [VulnerabilitiesCountByDay GraphQL API](../../../api/graphql/reference/index.md#vulnerabilitiescountbyday).
- GitLab retains the data for 365 days.
-
-## View project security status for a group
-
-Use the group Security Dashboard to view the security status of projects.
-
-To view project security status for a group:
+You are then prompted to download the image in SVG format.
-1. On the left sidebar, select **Search or go to** and find your group.
-1. Select **Secure > Security dashboard**.
-
-Each project is assigned a letter [grade](#project-vulnerability-grades) according to the highest-severity open vulnerability.
-Dismissed or resolved vulnerabilities are excluded. Each project can receive only one letter grade and appears only once
-in the Project security status report.
+### Group Security Dashboard
-To view vulnerabilities, go to the group's [vulnerability report](../vulnerability_report/index.md).
+The group Security Dashboard provides an overview of vulnerabilities found in the default
+branches of all projects in a group and its subgroups. The Group Security Dashboard
+supplies the following:
-### Project vulnerability grades
+- Vulnerability trends over a 30, 60, or 90-day time frame
+- A letter grade for each project in the group according to its highest-severity open vulnerability. The letter grades are assigned using the following criteria:
| Grade | Description |
-| --- | --- |
+| ----- | ----------- |
| **F** | One or more `critical` vulnerabilities |
| **D** | One or more `high` or `unknown` vulnerabilities |
| **C** | One or more `medium` vulnerabilities |
| **B** | One or more `low` vulnerabilities |
| **A** | Zero vulnerabilities |
-## Security Center
+To view the Group Security Dashboard:
-> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/3426) in GitLab 13.4.
+1. On the left sidebar, select **Search or go to** and find your group.
+1. Select **Security > Security dashboard**.
+1. Hover over the **Vulnerabilities over time** chart to get more details about vulnerabilities.
+ - You can display the vulnerability trends over a 30, 60, or 90-day time frame (the default is 90 days).
+ - To view aggregated data beyond a 90-day time frame, use the [VulnerabilitiesCountByDay GraphQL API](../../../api/graphql/reference/index.md#vulnerabilitiescountbyday). GitLab retains the data for 365 days.
-The Security Center is a personal space where you view vulnerabilities across all your projects. It
-shows the vulnerabilities present in the default branches of the projects.
+1. Select the arrows under the **Project security status** section to see which projects fall under a particular letter-grade rating:
+   - You can see how many vulnerabilities of a particular severity are found in a project.
+   - You can select a project's name to go directly to its Project Security Dashboard.
-The Security Center includes:
+![Group Security Dashboard](img/group_security_dashboard.png)
-- The group Security Dashboard.
-- A [vulnerability report](../vulnerability_report/index.md).
-- A settings area to configure which projects to display.
+## Security Center
-![Security Center Dashboard with projects](img/security_center_dashboard_v15_10.png)
+> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/3426) in GitLab 13.4.
+
+The Security Center is a configurable personal space where you can view vulnerabilities across all the
+projects you belong to. The Security Center includes:
-### View the Security Center
+- The group Security Dashboard
+- A [vulnerability report](../vulnerability_report/index.md)
+- A settings area to configure which projects to display
+
+### Viewing the Security Center
To view the Security Center:
@@ -146,7 +117,9 @@ To view the Security Center:
1. Select **Your work**.
1. Select **Security > Security dashboard**.
-### Add projects to the Security Center
+The Security Center is blank by default. You must add a project that has been configured with at least one security scanner.
+
+### Adding projects to the Security Center
To add projects to the Security Center:
@@ -157,26 +130,9 @@ To add projects to the Security Center:
1. Use the **Search your projects** text box to search for and select projects.
1. Select **Add projects**.
-After you add projects, the security dashboard and vulnerability report show the vulnerabilities
-found in those projects' default branches.
-
-You can add a maximum of 1,000 projects, however the **Project** filter in the **Vulnerability
-Report** is limited to 100 projects.
-
-<!-- ## Troubleshooting
-
-Include any troubleshooting steps that you can foresee. If you know beforehand what issues
-one might have when setting this up, or when something is changed, or on upgrading, it's
-important to describe those, too. Think of things that may go wrong and include them here.
-This is important to minimize requests for support, and to avoid doc comments with
-questions that you know someone might ask.
-
-Each scenario can be a third-level heading, for example `### Getting error message X`.
-If you have none to add when creating a doc, leave this section in place
-but commented out to help encourage others to add to it in the future. -->
+After you add projects, the security dashboard and vulnerability report show the vulnerabilities found in those projects' default branches. You can add a maximum of 1,000 projects; however, the **Project** filter in the **Vulnerability Report** is limited to 100 projects.
## Related topics
-- [Address the vulnerabilities](../vulnerabilities/index.md)
- [Vulnerability reports](../vulnerability_report/index.md)
- [Vulnerability Page](../vulnerabilities/index.md)
diff --git a/doc/user/application_security/terminology/index.md b/doc/user/application_security/terminology/index.md
index 0f0a61a2b02..f09672685de 100644
--- a/doc/user/application_security/terminology/index.md
+++ b/doc/user/application_security/terminology/index.md
@@ -259,7 +259,7 @@ A finding's primary identifier is a value that is unique to each finding. The ex
of the finding's [first identifier](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/v2.4.0-rc1/dist/sast-report-format.json#L228)
combine to create the value.
-Examples of primary identifiers include `PluginID` for OWASP Zed Attack Proxy (ZAP), or `CVE` for
+Examples of primary identifiers include `PluginID` for Zed Attack Proxy (ZAP), or `CVE` for
Trivy. The identifier must be stable. Subsequent scans must return the same value for the
same finding, even if the location has slightly changed.
diff --git a/doc/user/application_security/vulnerabilities/img/create_mr_from_vulnerability_v13_4.png b/doc/user/application_security/vulnerabilities/img/create_mr_from_vulnerability_v13_4.png
deleted file mode 100644
index 55694fc7926..00000000000
--- a/doc/user/application_security/vulnerabilities/img/create_mr_from_vulnerability_v13_4.png
+++ /dev/null
Binary files differ
diff --git a/doc/user/application_security/vulnerabilities/img/create_mr_from_vulnerability_v13_4_updated.png b/doc/user/application_security/vulnerabilities/img/create_mr_from_vulnerability_v13_4_updated.png
new file mode 100644
index 00000000000..7c1a5d4e298
--- /dev/null
+++ b/doc/user/application_security/vulnerabilities/img/create_mr_from_vulnerability_v13_4_updated.png
Binary files differ
diff --git a/doc/user/application_security/vulnerabilities/index.md b/doc/user/application_security/vulnerabilities/index.md
index 34c57292767..476b2411621 100644
--- a/doc/user/application_security/vulnerabilities/index.md
+++ b/doc/user/application_security/vulnerabilities/index.md
@@ -104,7 +104,13 @@ When dismissing a vulnerability, one of the following reasons must be chosen to
- **Used in tests**: The finding is not a vulnerability because it is part of a test or is test data.
- **Not applicable**: The vulnerability is known, and has not been remediated or mitigated, but is considered to be in a part of the application that will not be updated.
-## Change status of a vulnerability
+## Change the status of a vulnerability
+
+> In GitLab 16.4, the ability for users with the Developer role to change the status of a vulnerability (`admin_vulnerability`) was [deprecated](../../../update/deprecations.md#deprecate-change-vulnerability-status-from-the-developer-role). The `admin_vulnerability` permission will be removed, by default, from the Developer role in GitLab 17.0.
+
+Prerequisites:
+
+- You must have at least the Developer role for the project.
To change a vulnerability's status from its Vulnerability Page:
@@ -146,8 +152,9 @@ The issue is then opened so you can take further action.
Prerequisites:
-- [Enable Jira integration](../../../integration/jira/index.md). The **Enable Jira issue creation
- from vulnerabilities** option must be selected as part of the configuration.
+- [Enable Jira integration](../../../integration/jira/configure.md). The
+ **Enable Jira issue creation from vulnerabilities** option must be selected as part
+ of the configuration.
- Each user must have a personal Jira user account with permission to create issues in the target
project.
@@ -242,7 +249,7 @@ To resolve a vulnerability, you can either:
- [Resolve a vulnerability with a merge request](#resolve-a-vulnerability-with-a-merge-request).
- [Resolve a vulnerability manually](#resolve-a-vulnerability-manually).
-![Create merge request from vulnerability](img/create_mr_from_vulnerability_v13_4.png)
+![Create merge request from vulnerability](img/create_mr_from_vulnerability_v13_4_updated.png)
### Resolve a vulnerability with a merge request
diff --git a/doc/user/application_security/vulnerability_report/index.md b/doc/user/application_security/vulnerability_report/index.md
index 24ed318e688..e71aab5839e 100644
--- a/doc/user/application_security/vulnerability_report/index.md
+++ b/doc/user/application_security/vulnerability_report/index.md
@@ -11,7 +11,8 @@ The Vulnerability Report provides information about vulnerabilities from scans o
cumulative results of all successful jobs, regardless of whether the pipeline was successful. The scan results from a
pipeline are only ingested after all the jobs in the pipeline complete.
-The report is available for users with the [correct role](../../permissions.md) on projects, groups, and the Security Center.
+<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
+For an overview, see [Vulnerability Management](https://www.youtube.com/watch?v=8SJHz6BCgXM).
At all levels, the Vulnerability Report contains:
@@ -19,8 +20,11 @@ At all levels, the Vulnerability Report contains:
- Filters for common vulnerability attributes.
- Details of each vulnerability, presented in tabular layout.
-<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
-For an overview, see [Vulnerability Management](https://www.youtube.com/watch?v=8SJHz6BCgXM).
+At the project level, the Vulnerability Report also contains:
+
+- A time stamp showing when it was updated, including a link to the latest pipeline.
+- The number of failures that occurred in the most recent pipeline. Select the failure
+ notification to view the **Failed jobs** tab of the pipeline's page.
The **Activity** column contains icons to indicate the activity, if any, taken on the vulnerability
in that row:
@@ -38,56 +42,38 @@ status of a Jira issue is not shown in the GitLab UI.
![Example project-level Vulnerability Report](img/project_level_vulnerability_report_v14_5.png)
-## Project-level Vulnerability Report
-
-At the project level, the Vulnerability Report also contains:
-
-- A time stamp showing when it was updated, including a link to the latest pipeline.
-- The number of failures that occurred in the most recent pipeline. Select the failure
- notification to view the **Failed jobs** tab of the pipeline's page.
-
When vulnerabilities originate from a multi-project pipeline setup,
this page displays the vulnerabilities that originate from the selected project.
-### View the project-level vulnerability report
+## View the vulnerability report
-To view the project-level vulnerability report:
+View the vulnerability report to list all vulnerabilities in the project or group.
-1. On the left sidebar, select **Search or go to** and find your project.
-1. Select **Secure > Vulnerability report**.
+Prerequisites:
-## Vulnerability Report actions
+- You must have at least the Developer role for the project or group.
-From the Vulnerability Report you can:
+To view the vulnerability report:
-- [Filter the list of vulnerabilities](#filter-the-list-of-vulnerabilities).
-- [View more details about a vulnerability](#view-details-of-a-vulnerability).
-- [View vulnerable source location](#view-vulnerable-source-location) (if available).
-- [Change the status of vulnerabilities](#change-status-of-vulnerabilities).
-- [Export details of vulnerabilities](#export-vulnerability-details).
-- [Sort vulnerabilities by date](#sort-vulnerabilities-by-date-detected).
-- [Manually add a vulnerability finding](#manually-add-a-vulnerability-finding).
-- [Grouping vulnerability report](#group-vulnerabilities)
+1. On the left sidebar, select **Search or go to** and find your project or group.
+1. Select **Secure > Vulnerability report**.
## Vulnerability Report filters
You can filter the Vulnerability Report to narrow focus on only vulnerabilities matching specific
criteria.
-The available filters are:
+The filters available at all levels are:
<!-- vale gitlab.SubstitutionWarning = NO -->
-- **Status**: Detected, Confirmed, Dismissed, Resolved. For details on what each status means, see
+- **Status**: Detected, confirmed, dismissed, resolved. For details on what each status means, see
[vulnerability status values](../vulnerabilities/index.md#vulnerability-status-values).
-- **Severity**: Critical, High, Medium, Low, Info, Unknown.
+- **Severity**: Critical, high, medium, low, info, unknown.
- **Tool**: For more details, see [Tool filter](#tool-filter).
-- **Project**: For more details, see [Project filter](#project-filter).
- **Activity**: For more details, see [Activity filter](#activity-filter).
-The filters' criteria are combined to show only vulnerabilities matching all criteria.
-An exception to this behavior is the Activity filter. For more details about how it works, see
-[Activity filter](#activity-filter).
+Additionally, the [project filter](#project-filter) is available at the group level.
<!-- vale gitlab.SubstitutionWarning = YES -->
@@ -106,8 +92,6 @@ After each filter is selected:
### Tool filter
-> The third-party tool filter was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/229661) in GitLab 13.12.
-
The tool filter allows you to focus on vulnerabilities detected by selected tools.
When using the tool filter, you can choose:
@@ -122,23 +106,28 @@ For details of each of the available tools, see [Security scanning tools](../ind
The content of the Project filter depends on the current level:
-- **Security Center**: Only projects you've [added to your personal Security Center](../security_dashboard/index.md#add-projects-to-the-security-center).
+- **Security Center**: Only projects you've [added to your personal Security Center](../security_dashboard/index.md#adding-projects-to-the-security-center).
- **Group level**: All projects in the group.
- **Project level**: Not applicable.
### Activity filter
-The Activity filter behaves differently from the other filters. The selected values form mutually
-exclusive sets to allow for precisely locating the desired vulnerability records. Additionally, not
-all options can be selected in combination.
+The activity filter behaves differently from the other filters. You can select only one value in
+each category.
-Selection behavior when using the Activity filter:
+Selection behavior when using the activity filter:
-- **All**: Vulnerabilities with any Activity status (same as ignoring this filter). Selecting this deselects any other Activity filter options.
-- **No activity**: Only vulnerabilities without either an associated issue or that are no longer detected. Selecting this deselects any other Activity filter options.
-- **With issues**: Only vulnerabilities with one or more associated issues. Does not include vulnerabilities that also are no longer detected.
-- **No longer detected**: Only vulnerabilities that are no longer detected in the latest pipeline scan of the `default` branch. Does not include vulnerabilities with one or more associated issues.
-- **With issues** and **No longer detected**: Only vulnerabilities that have one or more associated issues and also are no longer detected in the latest pipeline scan of the `default` branch.
+- **Activity**
+ - **All activity**: Vulnerabilities with any activity status (same as ignoring this filter). Selecting this deselects all other activity filter options.
+- **Detection**
+ - **Still detected**: Vulnerabilities that are still detected in the latest pipeline scan of the `default` branch.
+ - **No longer detected**: Vulnerabilities that are no longer detected in the latest pipeline scan of the `default` branch.
+- **Issue**
+ - **Has issues**: Vulnerabilities with one or more associated issues.
+ - **Does not have issue**: Vulnerabilities without an associated issue.
+- **Merge request**
+ - **Has merge request**: Vulnerabilities with one or more associated merge requests.
+ - **Does not have merge request**: Vulnerabilities without an associated merge request.
## View details of a vulnerability
@@ -186,7 +175,7 @@ Fields included are:
- Group name
- Project name
-- Scanner type
+- Tool
- Scanner name
- Status
- Vulnerability
@@ -200,6 +189,8 @@ Fields included are:
- Location
- Activity: Returns `true` if the vulnerability is resolved on the default branch, and `false` if not.
- Comments
+- Full Path
+- CVSS Vectors
NOTE:
Full details are available through our
@@ -259,8 +250,8 @@ Group, Project, and Security Center Vulnerability Reports. To filter them, use t
## Group vulnerabilities
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/420055) in GitLab 16.4. Disabled by default.
-> - [Enabled on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/422509) in GitLab 16.5.
+> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/420055) in GitLab 16.4 [with a flag](../../../administration/feature_flags.md) named `vulnerability_report_grouping`. Disabled by default.
+> - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/422509) in GitLab 16.6. Feature flag `vulnerability_report_grouping` removed.
To group the Vulnerability Report:
diff --git a/doc/user/clusters/agent/gitops/example_repository_structure.md b/doc/user/clusters/agent/gitops/example_repository_structure.md
index 02eea3300af..52855b9731c 100644
--- a/doc/user/clusters/agent/gitops/example_repository_structure.md
+++ b/doc/user/clusters/agent/gitops/example_repository_structure.md
@@ -96,7 +96,7 @@ You've successfully created a repository with a protected deployment branch!
Next, you'll configure CI/CD to merge changes from the default branch to your deployment branch.
-In the root of `web-app-manifests`, create and push a [`.gitlab-ci.yml`](../../../../ci/yaml/gitlab_ci_yaml.md) file with the following contents:
+In the root of `web-app-manifests`, create and push a [`.gitlab-ci.yml`](../../../../ci/index.md#the-gitlab-ciyml-file) file with the following contents:
```yaml
deploy:
diff --git a/doc/user/clusters/agent/gitops/flux_oci_tutorial.md b/doc/user/clusters/agent/gitops/flux_oci_tutorial.md
index b970c818a72..2c4796adf2b 100644
--- a/doc/user/clusters/agent/gitops/flux_oci_tutorial.md
+++ b/doc/user/clusters/agent/gitops/flux_oci_tutorial.md
@@ -65,7 +65,7 @@ First, create a repository for your Kubernetes manifests:
Next, configure [GitLab CI/CD](../../../../ci/index.md) to package your manifests into an OCI artifact,
and push the artifact to the [GitLab Container Registry](../../../packages/container_registry/index.md):
-1. In the root of `web-app-manifests`, create and push a [`.gitlab-ci.yml`](../../../../ci/yaml/gitlab_ci_yaml.md) file with the following contents:
+1. In the root of `web-app-manifests`, create and push a [`.gitlab-ci.yml`](../../../../ci/index.md#the-gitlab-ciyml-file) file with the following contents:
```yaml
package:
diff --git a/doc/user/clusters/agent/gitops/flux_tutorial.md b/doc/user/clusters/agent/gitops/flux_tutorial.md
index 27724a95291..832f91691e8 100644
--- a/doc/user/clusters/agent/gitops/flux_tutorial.md
+++ b/doc/user/clusters/agent/gitops/flux_tutorial.md
@@ -121,6 +121,7 @@ To install `agentk`:
kind: Secret
metadata:
name: gitlab-agent-token
+ namespace: gitlab
type: Opaque
stringData:
token: "<your-token-here>"
diff --git a/doc/user/clusters/agent/install/index.md b/doc/user/clusters/agent/install/index.md
index d620a9f658c..588be3a1223 100644
--- a/doc/user/clusters/agent/install/index.md
+++ b/doc/user/clusters/agent/install/index.md
@@ -76,7 +76,7 @@ In GitLab 14.10, a [flag](../../../../administration/feature_flags.md) named `ce
Prerequisites:
- For a [GitLab CI/CD workflow](../ci_cd_workflow.md), ensure that
- [GitLab CI/CD is not disabled](../../../../ci/enable_or_disable_ci.md#disable-cicd-in-a-project).
+ [GitLab CI/CD is not disabled](../../../../ci/pipelines/settings.md#disable-gitlab-cicd-pipelines).
You must register an agent before you can install the agent in your cluster. To register an agent:
@@ -220,7 +220,7 @@ The following example projects can help you get started with the agent.
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/340882) in GitLab 14.8, GitLab warns you on the agent's list page to update the agent version installed on your cluster.
-For the best experience, the version of the agent installed in your cluster should match the GitLab major and minor version. The previous minor version is also supported. For example, if your GitLab version is v14.9.4 (major version 14, minor version 9), then versions v14.9.0 and v14.9.1 of the agent are ideal, but any v14.8.x version of the agent is also supported. See [the release page](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/releases) of the GitLab agent.
+For the best experience, the version of the agent installed in your cluster should match the GitLab major and minor version. The previous and next minor versions are also supported. For example, if your GitLab version is v14.9.4 (major version 14, minor version 9), then versions v14.9.0 and v14.9.1 of the agent are ideal, but any v14.8.x or v14.10.x version of the agent is also supported. See [the release page](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/releases) of the GitLab agent.
### Update the agent version
diff --git a/doc/user/clusters/agent/user_access.md b/doc/user/clusters/agent/user_access.md
index 21dc249b1d1..b3735770a97 100644
--- a/doc/user/clusters/agent/user_access.md
+++ b/doc/user/clusters/agent/user_access.md
@@ -151,15 +151,66 @@ Prerequisite:
- You have an agent configured with the `user_access` entry.
-To grant Kubernetes API access:
+### Configure local access with the GitLab CLI (recommended)
+
+You can use the [GitLab CLI `glab`](../../../editor_extensions/gitlab_cli/index.md) to create or update
+a Kubernetes configuration file to access the agent Kubernetes API.
+
+Use `glab cluster agent` commands to manage cluster connections:
+
+1. View a list of all the agents associated with your project:
+
+   ```shell
+   glab cluster agent list --repo '<group>/<project>'
+
+   # If your current working directory is the Git repository of the project with the agent, you can omit the --repo option:
+   glab cluster agent list
+   ```
+
+1. Use the numerical agent ID presented in the first column of the output to update your `kubeconfig`:
+
+   ```shell
+   glab cluster agent update-kubeconfig --repo '<group>/<project>' --agent '<agent-id>' --use-context
+   ```
+
+1. Verify the update with `kubectl` or your preferred Kubernetes tooling:
+
+   ```shell
+   kubectl get nodes
+   ```
+
+The `update-kubeconfig` command sets `glab cluster agent get-token` as a
+[credential plugin](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins)
+for Kubernetes tools to retrieve a token. The `get-token` command creates and
+returns a personal access token that is valid until the end of the current day.
+Kubernetes tools cache the token until it expires, the API returns an authorization error, or the process exits. Expect all subsequent calls to your Kubernetes tooling to create a new token.
+
+The `glab cluster agent update-kubeconfig` command supports several command-line flags. To view all supported flags, run `glab cluster agent update-kubeconfig --help`.
+
+Some examples:
+
+```shell
+# When the current working directory is the Git repository where the agent is registered, the --repo / -R flag can be omitted
+glab cluster agent update-kubeconfig --agent '<agent-id>'
+
+# When the --use-context option is specified, the `current-context` of the kubeconfig file is changed to the agent context
+glab cluster agent update-kubeconfig --agent '<agent-id>' --use-context
+
+# The --kubeconfig flag can be used to specify an alternative kubeconfig path
+glab cluster agent update-kubeconfig --agent '<agent-id>' --kubeconfig ~/gitlab.kubeconfig
+```
+
+### Configure local access manually using a personal access token
+
+You can configure access to a Kubernetes cluster using a long-lived personal access token:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Operate > Kubernetes clusters** and retrieve the numerical ID of the agent you want to access. You need the ID to construct the full API token.
1. Create a [personal access token](../../profile/personal_access_tokens.md) with the `k8s_proxy` scope. You need the access token to construct the full API token.
-1. Construct `kube config` entries to access the cluster:
- 1. Make sure that the proper `kube config` is selected.
+1. Construct `kubeconfig` entries to access the cluster:
+ 1. Make sure that the proper `kubeconfig` is selected.
For example, you can set the `KUBECONFIG` environment variable.
- 1. Add the GitLab KAS proxy cluster to the `kube config`:
+ 1. Add the GitLab KAS proxy cluster to the `kubeconfig`:
```shell
kubectl config set-cluster <cluster_name> --server "https://kas.gitlab.com/k8s-proxy"
diff --git a/doc/user/clusters/agent/vulnerabilities.md b/doc/user/clusters/agent/vulnerabilities.md
index a2dc50e43d7..e57551fc8c1 100644
--- a/doc/user/clusters/agent/vulnerabilities.md
+++ b/doc/user/clusters/agent/vulnerabilities.md
@@ -20,7 +20,7 @@ If both `agent config` and `scan execution policies` are configured, the configu
### Enable via agent configuration
-To enable scanning of all images within your Kubernetes cluster via the agent configuration, add a `container_scanning` configuration block to your agent
+To enable scanning of images within your Kubernetes cluster via the agent configuration, add a `container_scanning` configuration block to your agent
configuration with a `cadence` field containing a [CRON expression](https://en.wikipedia.org/wiki/Cron) for when the scans are run.
```yaml
@@ -39,9 +39,9 @@ Other elements of the [CRON syntax](https://docs.oracle.com/cd/E12058_01/doc/doc
NOTE:
The CRON expression is evaluated in [UTC](https://www.timeanddate.com/worldclock/timezone/utc) using the system-time of the Kubernetes-agent pod.
-By default, operational container scanning attempts to scan the workloads in all
-namespaces for vulnerabilities. You can set the `vulnerability_report` block with the `namespaces`
-field which can be used to restrict which namespaces are scanned. For example,
+By default, operational container scanning does not scan any workloads for vulnerabilities.
+To select which namespaces are scanned, set the `namespaces` field in the
+`vulnerability_report` block. For example,
if you want to scan only the `default` and `kube-system` namespaces, you can use this configuration:
```yaml
@@ -112,13 +112,15 @@ You can customize it with a `resource_requirements` field.
container_scanning:
resource_requirements:
requests:
- cpu: 200m
+ cpu: '0.2'
memory: 200Mi
limits:
- cpu: 700m
+ cpu: '0.7'
memory: 700Mi
```
+When using a fractional value for CPU, format the value as a string.
+
NOTE:
Resource requirements can only be set up using the agent configuration. If you enabled `Operational Container Scanning` through `scan execution policies`, you would need to define the resource requirements within the agent configuration file.
@@ -143,3 +145,10 @@ You must have at least the Developer role.
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/415451) in GitLab 16.4.
To scan private images, the scanner relies on the image pull secrets (direct references and from the service account) to pull the image.
+
+## Troubleshooting
+
+### `Error running Trivy scan. Container terminated reason: OOMKilled`
+
+Operational container scanning (OCS) might fail with an out-of-memory (OOM) error if there are too many resources to scan or if the scanned images are large.
+To resolve this, [configure the resource requirement](#configure-scanner-resource-requirements) to increase the amount of memory available.
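+
+For illustration, a quick way to confirm that the scanner was terminated for this reason is to inspect the pods. This is a sketch that assumes the agent and scanner run in a namespace named `gitlab-agent`; adjust it to your installation.
+
+```shell
+# Inspect the pods in the assumed `gitlab-agent` namespace; the `Last State` / `Reason`
+# fields in the output show `OOMKilled` when a container ran out of memory.
+kubectl describe pods --namespace gitlab-agent
+```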
diff --git a/doc/user/compliance/compliance_center/index.md b/doc/user/compliance/compliance_center/index.md
index 0e205a29920..4a42a70a7e7 100644
--- a/doc/user/compliance/compliance_center/index.md
+++ b/doc/user/compliance/compliance_center/index.md
@@ -111,9 +111,9 @@ You can sort the compliance report on:
You can filter the compliance violations report on:
-- Project.
-- Date range of merge.
-- Target branch.
+- The project in which the violation was found.
+- The date range in which the violation occurred.
+- The target branch of the violation.
Select a row to see details of the compliance violation.
@@ -393,6 +393,7 @@ On self-managed GitLab, by default this feature is not available. To make it ava
With compliance frameworks report, you can see all the compliance frameworks in a group. Each row of the report shows:
- Framework name.
+- Associated projects.
The default framework for the group has a **default** badge.
diff --git a/doc/user/compliance/license_list.md b/doc/user/compliance/license_list.md
index f315f319b71..7ad19775509 100644
--- a/doc/user/compliance/license_list.md
+++ b/doc/user/compliance/license_list.md
@@ -16,7 +16,7 @@ For the licenses to appear under the license list, the following
requirements must be met:
1. You must be generating an SBOM file with components from [one of our supported languages](license_scanning_of_cyclonedx_files/index.md#supported-languages-and-package-managers).
-1. If using our [`Dependency-Scanning.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/License-Scanning.gitlab-ci.yml) to generate the SBOM file, then your project must use at least one of the [supported languages and package managers](license_scanning_of_cyclonedx_files/index.md#supported-languages-and-package-managers).
+1. If using our [`Dependency-Scanning.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Dependency-Scanning.gitlab-ci.yml) to generate the SBOM file, then your project must use at least one of the [supported languages and package managers](license_scanning_of_cyclonedx_files/index.md#supported-languages-and-package-managers).
Alternatively, licenses will also appear under the license list when using our deprecated [`License-Scanning.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/License-Scanning.gitlab-ci.yml) as long as the following requirements are met:
diff --git a/doc/user/compliance/license_scanning_of_cyclonedx_files/index.md b/doc/user/compliance/license_scanning_of_cyclonedx_files/index.md
index 81f7cc61782..5d7a689e610 100644
--- a/doc/user/compliance/license_scanning_of_cyclonedx_files/index.md
+++ b/doc/user/compliance/license_scanning_of_cyclonedx_files/index.md
@@ -22,16 +22,11 @@ Licenses not in the SPDX list are reported as "Unknown". License information can
## Configuration
-Prerequisites:
+To enable license scanning of CycloneDX files:
-- On GitLab self-managed only, enable [Synchronization with the GitLab License Database](../../../administration/settings/security_and_compliance.md#choose-package-registry-metadata-to-sync) in the Admin Area for the GitLab instance. On GitLab SaaS this step has already been completed.
- Enable [Dependency Scanning](../../application_security/dependency_scanning/index.md#enabling-the-analyzer)
and ensure that its prerequisites are met.
-
-From the `.gitlab-ci.yml` file, remove the deprecated line `Jobs/License-Scanning.gitlab-ci.yml`, if
-it's present.
-
-On GitLab self-managed only, you can [choose package registry metadata to sync](../../../administration/settings/security_and_compliance.md#choose-package-registry-metadata-to-sync) in the Admin Area for the GitLab instance.
+- On GitLab self-managed only, you can [choose package registry metadata to synchronize](../../../administration/settings/security_and_compliance.md#choose-package-registry-metadata-to-sync) in the Admin Area for the GitLab instance. For this data synchronization to work, you must allow outbound network traffic from your GitLab instance to the domain `storage.googleapis.com` (see the connectivity check after this list). If you have limited or no network connectivity, see [running in an offline environment](#running-in-an-offline-environment) for further guidance.
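+
+A minimal way to verify that this outbound access works from the GitLab instance host (an illustrative check only, not part of the official setup):
+
+```shell
+# Any HTTP response (even 4xx) confirms that storage.googleapis.com is reachable;
+# a connection timeout suggests that egress to the domain is blocked.
+curl --silent --show-error --head https://storage.googleapis.com
+```
+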
## Supported languages and package managers
diff --git a/doc/user/custom_roles.md b/doc/user/custom_roles.md
index a13c45306ad..bbb48724078 100644
--- a/doc/user/custom_roles.md
+++ b/doc/user/custom_roles.md
@@ -13,35 +13,18 @@ info: To determine the technical writer assigned to the Stage/Group associated w
> - Ability to view a vulnerability report [enabled by default](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123835) in GitLab 16.1.
> - [Feature flag `custom_roles_vulnerability` removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/124049) in GitLab 16.2.
> - Ability to create and remove a custom role with the UI [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/393235) in GitLab 16.4.
-> - Ability to manage group members [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/17364) in GitLab 16.5 under `admin_group_member` Feature flag.
-> - Ability to manage project access tokens [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/421778) in GitLab 16.5 under `manage_project_access_tokens` Feature flag.
+> - Ability to manage group members [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/17364) in GitLab 16.5.
+> - Ability to manage project access tokens [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/421778) in GitLab 16.5 [with a flag](../administration/feature_flags.md) named `manage_project_access_tokens`.
+> - Ability to archive projects [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/425957) in GitLab 16.6 [with a flag](../administration/feature_flags.md) named `archive_project`. Disabled by default.
-Custom roles allow group members who are assigned the Owner role to create roles
+Custom roles allow group Owners or instance administrators to create roles
specific to the needs of their organization.
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For a demo of the custom roles feature, see [[Demo] Ultimate Guest can view code on private repositories via custom role](https://www.youtube.com/watch?v=46cp_-Rtxps).
-The following granular permissions are available. You can add these permissions to any base role, and add them in combination with each other to create a customized role:
-
-- The Guest+1 role, which allows users with the Guest role to view code.
-- In GitLab 16.1 and later, you can create a custom role that can view vulnerability reports and change the status of the vulnerabilities.
-- In GitLab 16.3 and later, you can create a custom role that can view the dependency list.
-- In GitLab 16.4 and later, you can create a custom role that can approve merge requests.
-- In GitLab 16.5 and later, you can create a custom role that can manage group members.
-
You can discuss individual custom role and permission requests in [issue 391760](https://gitlab.com/gitlab-org/gitlab/-/issues/391760).
-When you enable a custom role for a user with the Guest role, that user has
-access to elevated permissions, and therefore:
-
-- Is considered a [billable user](../subscriptions/self_managed/index.md#billable-users) on self-managed GitLab.
-- [Uses a seat](../subscriptions/gitlab_com/index.md#how-seat-usage-is-determined) on GitLab.com.
-
-This does not apply to Guest+1, a Guest custom role that only enables the `read_code`
-permission. Users with that specific custom role are not considered billable users
-and do not use a seat.
-
## Create a custom role
Prerequisites:
@@ -51,9 +34,19 @@ Prerequisites:
- The group must be in the Ultimate tier.
- You must have:
- At least one private project so that you can see the effect of giving a
- user with the Guest role a custom role. The project can be in the group itself
+ user a custom role. The project can be in the group itself
or one of that group's subgroups.
- - A [personal access token with the API scope](profile/personal_access_tokens.md#create-a-personal-access-token).
+ - If you are using the API to create the custom role, a [personal access token with the API scope](profile/personal_access_tokens.md#create-a-personal-access-token).
+
+You create a custom role by selecting [permissions](#available-permissions) to add
+to a base role.
+
+You can select any number of permissions. For example, you can create a custom role
+with the ability to:
+
+- View vulnerability reports.
+- Change the status of vulnerabilities.
+- Approve merge requests.
### GitLab SaaS
@@ -64,7 +57,7 @@ Prerequisite:
1. On the left sidebar, select **Search or go to** and find your group.
1. Select **Settings > Roles and Permissions**.
1. Select **Add new role**.
-1. In **Base role to use as template**, select **Guest**.
+1. In **Base role to use as template**, select an existing non-custom role.
1. In **Role name**, enter the custom role's title.
1. Select the **Permissions** for the new custom role.
1. Select **Create new role**.
@@ -80,30 +73,44 @@ Prerequisite:
1. Select **Settings > Roles and Permissions**.
1. From the top dropdown list, select the group you want to create a custom role in.
1. Select **Add new role**.
-1. In **Base role to use as template**, select **Guest**.
+1. In **Base role to use as template**, select an existing non-custom role.
1. In **Role name**, enter the custom role's title.
1. Select the **Permissions** for the new custom role.
1. Select **Create new role**.
To create a custom role, you can also [use the API](../api/member_roles.md#add-a-member-role-to-a-group).
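+
+For illustration, the following is a sketch of an API call that creates a custom role. The parameter names mirror the permissions listed in the table below, and `<group_id>`, the token, and the chosen base access level (`30`, Developer) are placeholders; check the member roles API documentation for the exact request format.
+
+```shell
+# Hypothetical sketch: create a custom role based on the Developer role (access level 30)
+# that can view vulnerability reports and change vulnerability status.
+curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" \
+  --data "name=Vulnerability manager" \
+  --data "base_access_level=30" \
+  --data "read_vulnerability=true" \
+  --data "admin_vulnerability=true" \
+  "https://gitlab.example.com/api/v4/groups/<group_id>/member_roles"
+```
+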
-### Custom role requirements
+### Available permissions
+
+The following permissions are available. You can add these permissions in any combination
+to a base role to create a custom role.
+
+Some permissions require having other permissions enabled first. For example, administration of vulnerabilities (`admin_vulnerability`) can only be enabled if reading vulnerabilities (`read_vulnerability`) is also enabled.
+
+These requirements are documented in the `Required permission` column in the following table.
-For every ability, a minimal access level is defined. To be able to create a custom role which enables a certain ability, the `member_roles` table record has to have the associated minimal access level. For all abilities, the minimal access level is Guest. Only users who have at least the Guest role can be assigned to a custom role.
+| Permission | Version | Required permission | Description |
+| ------------------------------- | -----------------------| -------------------- | ----------- |
+| `read_code` | GitLab 15.7 and later | Not applicable | View project code. Does not include the ability to pull code. |
+| `read_vulnerability` | GitLab 16.1 and later | Not applicable | View [vulnerability reports](application_security/vulnerability_report/index.md). |
+| `admin_vulnerability` | GitLab 16.1 and later | `read_vulnerability` | Change the [status of vulnerabilities](application_security/vulnerabilities/index.md#vulnerability-status-values). |
+| `read_dependency` | GitLab 16.3 and later | Not applicable | View [project dependencies](application_security/dependency_list/index.md). |
+| `admin_merge_request` | GitLab 16.4 and later | Not applicable | View and approve [merge requests](project/merge_requests/index.md), and view the associated merge request code. <br> Does not allow users to view or change merge request approval rules. |
+| `manage_project_access_tokens` | GitLab 16.5 and later | Not applicable | Create, delete, and list [project access tokens](project/settings/project_access_tokens.md). |
+| `admin_group_member` | GitLab 16.5 and later | Not applicable | Add or remove [group members](group/manage.md). |
+| `archive_project` | GitLab 16.6 and later | Not applicable | Archive and unarchive [projects](project/settings/index.md#archive-a-project). |
-Some roles and abilities require having other abilities enabled. For example, a custom role can only have administration of vulnerabilities (`admin_vulnerability`) enabled if reading vulnerabilities (`read_vulnerability`) is also enabled.
+## Billing and seat usage
-You can see the abilities requirements in the following table.
+When you enable a custom role for a user with the Guest role, that user has
+access to elevated permissions over the base role, and therefore:
-| Ability | Required ability |
-| -- | -- |
-| `read_code` | - |
-| `read_dependency` | - |
-| `read_vulnerability` | - |
-| `admin_merge_request` | - |
-| `admin_vulnerability` | `read_vulnerability` |
-| `admin_group_member` | - |
-| `manage_project_access_tokens` | - |
+- Is considered a [billable user](../subscriptions/self_managed/index.md#billable-users) on self-managed GitLab.
+- [Uses a seat](../subscriptions/gitlab_com/index.md#how-seat-usage-is-determined) on GitLab.com.
+
+This does not apply when the user's custom role has only the `read_code` permission
+enabled. Guest users with only that specific permission are not considered billable users
+and do not use a seat.
## Associate a custom role with an existing group member
@@ -147,14 +154,14 @@ To do this, you can either remove the custom role from all group members with th
### Remove a custom role from a group member
To remove a custom role from a group member, use the [Group and Project Members API endpoint](../api/members.md#edit-a-member-of-a-group-or-project)
-and pass an empty `member_role_id` value.
+and pass a null `member_role_id` value:
```shell
# to update a project membership
-curl --request PUT --header "Content-Type: application/json" --header "Authorization: Bearer <your_access_token>" --data '{"member_role_id": "", "access_level": 10}' "https://gitlab.example.com/api/v4/projects/<project_id>/members/<user_id>"
+curl --request PUT --header "Content-Type: application/json" --header "Authorization: Bearer <your_access_token>" --data '{"member_role_id": null, "access_level": 10}' "https://gitlab.example.com/api/v4/projects/<project_id>/members/<user_id>"
# to update a group membership
-curl --request PUT --header "Content-Type: application/json" --header "Authorization: Bearer <your_access_token>" --data '{"member_role_id": "", "access_level": 10}' "https://gitlab.example.com/api/v4/groups/<group_id>/members/<user_id>"
+curl --request PUT --header "Content-Type: application/json" --header "Authorization: Bearer <your_access_token>" --data '{"member_role_id": null, "access_level": 10}' "https://gitlab.example.com/api/v4/groups/<group_id>/members/<user_id>"
```
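+
+To confirm that the custom role was removed, you can fetch the membership again (an illustrative check; `<group_id>` and `<user_id>` are placeholders) and verify that the response no longer references the custom role:
+
+```shell
+curl --header "Authorization: Bearer <your_access_token>" \
+  "https://gitlab.example.com/api/v4/groups/<group_id>/members/<user_id>"
+```
+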
### Remove a group member with a custom role from the group
diff --git a/doc/user/discussions/img/add_internal_note_v15_0.png b/doc/user/discussions/img/add_internal_note_v15_0.png
deleted file mode 100644
index cf052edd5e7..00000000000
--- a/doc/user/discussions/img/add_internal_note_v15_0.png
+++ /dev/null
Binary files differ
diff --git a/doc/user/discussions/img/add_internal_note_v16_6.png b/doc/user/discussions/img/add_internal_note_v16_6.png
new file mode 100644
index 00000000000..0d6b4c05160
--- /dev/null
+++ b/doc/user/discussions/img/add_internal_note_v16_6.png
Binary files differ
diff --git a/doc/user/discussions/img/create_thread_v16_6.png b/doc/user/discussions/img/create_thread_v16_6.png
new file mode 100644
index 00000000000..3e0abb3d589
--- /dev/null
+++ b/doc/user/discussions/img/create_thread_v16_6.png
Binary files differ
diff --git a/doc/user/discussions/img/discussion_comment.png b/doc/user/discussions/img/discussion_comment.png
deleted file mode 100644
index 3fec5962363..00000000000
--- a/doc/user/discussions/img/discussion_comment.png
+++ /dev/null
Binary files differ
diff --git a/doc/user/discussions/img/quickly_assign_commenter_v13_1.png b/doc/user/discussions/img/quickly_assign_commenter_v13_1.png
deleted file mode 100644
index aa8f65ef6c4..00000000000
--- a/doc/user/discussions/img/quickly_assign_commenter_v13_1.png
+++ /dev/null
Binary files differ
diff --git a/doc/user/discussions/img/quickly_assign_commenter_v16_6.png b/doc/user/discussions/img/quickly_assign_commenter_v16_6.png
new file mode 100644
index 00000000000..7d6e54fdfa2
--- /dev/null
+++ b/doc/user/discussions/img/quickly_assign_commenter_v16_6.png
Binary files differ
diff --git a/doc/user/discussions/index.md b/doc/user/discussions/index.md
index ae74b534e02..a3ed888ed53 100644
--- a/doc/user/discussions/index.md
+++ b/doc/user/discussions/index.md
@@ -156,12 +156,12 @@ Prerequisite:
To lock an issue or merge request:
-1. On the right sidebar, next to **Lock issue** or **Lock merge request**, select **Edit**.
+1. On the right sidebar, next to **Lock discussion**, select **Edit**.
1. On the confirmation dialog, select **Lock**.
Notes are added to the page details.
-If an issue or merge request is locked and closed, you cannot reopen it.
+If an issue or merge request is closed with a locked discussion, you cannot reopen it until the discussion is unlocked.
<!-- Delete when the `moved_mr_sidebar` feature flag is removed -->
If you don't see this action on the right sidebar, your project or instance might have [moved sidebar actions](../project/merge_requests/index.md#move-sidebar-actions) enabled.
@@ -192,7 +192,7 @@ To add an internal note:
1. Below the comment, select the **Make this an internal note** checkbox.
1. Select **Add internal note**.
-![Internal notes](img/add_internal_note_v15_0.png)
+![Internal notes](img/add_internal_note_v16_6.png)
You can also mark an [issue as confidential](../project/issues/confidential_issues.md).
@@ -233,7 +233,7 @@ You can assign an issue to a user who made a comment.
1. In the comment, select the **More Actions** (**{ellipsis_v}**) menu.
1. Select **Assign to commenting user**:
- ![Assign to commenting user](img/quickly_assign_commenter_v13_1.png)
+ ![Assign to commenting user](img/quickly_assign_commenter_v16_6.png)
1. To unassign the commenter, select the button again.
## Create a thread by replying to a standard comment
@@ -272,9 +272,9 @@ To create a thread:
1. From the list, select **Start thread**.
1. Select **Start thread** again.
-A threaded comment is created.
+![Create a thread](img/create_thread_v16_6.png)
-![Thread comment](img/discussion_comment.png)
+A threaded comment is created.
## Resolve a thread
diff --git a/doc/user/feature_flags.md b/doc/user/feature_flags.md
index f665395b103..88928ab6d47 100644
--- a/doc/user/feature_flags.md
+++ b/doc/user/feature_flags.md
@@ -1,6 +1,6 @@
---
stage: none
-group: Development
+group: unassigned
info: "See the Technical Writers assigned to Development Guidelines: https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-development-guidelines"
description: "View a list of all the flags available in the GitLab application."
layout: 'feature_flags'
diff --git a/doc/user/free_push_limit.md b/doc/user/free_push_limit.md
index c0b23720ab1..c1be8287eb1 100644
--- a/doc/user/free_push_limit.md
+++ b/doc/user/free_push_limit.md
@@ -6,9 +6,9 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Free push limit **(FREE SAAS)**
-A 100 MB per-file limit applies when pushing new files to any project in the Free tier.
+A 100 MiB per-file limit applies when pushing new files to any project in the Free tier.
-If a new file that is 100 MB or large is pushed to a project in the Free tier, an error is displayed. For example:
+If a new file that is 100 MiB or larger is pushed to a project in the Free tier, an error is displayed. For example:
```shell
Enumerating objects: 3, done.
diff --git a/doc/user/gitlab_duo_chat.md b/doc/user/gitlab_duo_chat.md
new file mode 100644
index 00000000000..ba6cd9b8f21
--- /dev/null
+++ b/doc/user/gitlab_duo_chat.md
@@ -0,0 +1,67 @@
+---
+stage: AI-powered
+group: Duo Chat
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+type: index, reference
+---
+
+# Answer questions with GitLab Duo Chat **(ULTIMATE SAAS EXPERIMENT)**
+
+> Introduced in GitLab 16.0 as an [Experiment](../policy/experiment-beta-support.md#experiment).
+
+You can get AI-generated support from GitLab Duo Chat about the following topics:
+
+- How to use GitLab.
+- Questions about an issue.
+- Questions about an epic.
+- Questions about a code file.
+- Follow-up questions to answers from the chat.
+
+Example questions you might ask:
+
+- `Explain the concept of a 'fork' in a concise manner.`
+- `Provide step-by-step instructions on how to reset a user's password.`
+- `Generate a summary for the issue identified via this link: <link to your issue>`
+- `Generate a concise summary of the description of the current issue.`
+
+The examples above all use data from either the issue or the GitLab documentation. However, you can also ask to generate code, CI/CD configurations, or to explain code. For example:
+
+- `Write a Ruby function that prints 'Hello, World!' when called.`
+- `Develop a JavaScript program that simulates a two-player Tic-Tac-Toe game. Provide both game logic and user interface, if applicable.`
+- `Create a .gitlab-ci.yml configuration file for testing and building a Ruby on Rails application in a GitLab CI/CD pipeline.`
+- `Provide a clear explanation of the given Ruby code: def sum(a, b) a + b end. Describe what this code does and how it works.`
+
+In addition to these prompts, you can ask follow-up questions to go deeper into a topic or task. Follow-up questions help you get more detailed and precise responses tailored to your needs, whether you want further clarification, elaboration, or additional assistance.
+
+- A follow-up to the question `Write a Ruby function that prints 'Hello, World!' when called.` could be:
+ - `Could you also explain how I can call and execute this Ruby function in a typical Ruby environment, such as the command line?`
+
+This is an experimental feature and we're continuously extending the capabilities and reliability of the chat.
+
+## Enable GitLab Duo Chat
+
+To use this feature, at least one group you're a member of must:
+
+- Have the [experiment and beta features setting](group/manage.md#enable-experiment-and-beta-features) enabled.
+
+## Use GitLab Duo Chat
+
+1. In the lower-left corner, select the **Help** icon.
+ The [new left sidebar must be enabled](../tutorials/left_sidebar/index.md).
+1. Select **GitLab Duo Chat**. A drawer opens on the right side of your screen.
+1. Enter your question in the chat input box and press **Enter** or select **Send**. It may take a few seconds for the interactive AI chat to produce an answer.
+1. You can ask a follow-up question.
+1. If you want to ask a new question unrelated to the previous conversation, you may receive better answers if you clear the context by typing `/reset` into the input box and selecting **Send**.
+
+NOTE:
+Only the last 50 messages are retained in the chat history. The chat history expires 3 days after last use.
+
+## Give feedback
+
+Your feedback is important to us as we continually enhance your GitLab Duo Chat experience:
+
+- **Enhance Your Experience**: Leaving feedback helps us customize the Chat for your needs and improve its performance for everyone.
+- **Privacy Assurance**: Rest assured, we don't collect your prompts. Your privacy is respected, and your interactions remain private.
+
+To give feedback about a specific response, use the feedback buttons in the response message.
+Or, you can add a comment in the [feedback issue](https://gitlab.com/gitlab-org/gitlab/-/issues/415591).
diff --git a/doc/user/group/access_and_permissions.md b/doc/user/group/access_and_permissions.md
index 966945b6b12..53a62a60157 100644
--- a/doc/user/group/access_and_permissions.md
+++ b/doc/user/group/access_and_permissions.md
@@ -118,7 +118,7 @@ To allow runner downloading, add the [outbound runner CIDR ranges](../gitlab_com
> - Support for restricting access to projects in the group [added](https://gitlab.com/gitlab-org/gitlab/-/issues/14004) in GitLab 14.1.2.
> - Support for restricting group memberships to groups with a subset of the allowed email domains [added](https://gitlab.com/gitlab-org/gitlab/-/issues/354791) in GitLab 15.1.1
-You can prevent users with email addresses in specific domains from being added to a group and its projects.
+You can prevent users with email addresses in specific domains from being added to a group and its projects. You can define an email domain allowlist only at the top-level namespace. Subgroups cannot define an alternative allowlist.
To restrict group access by domain:
@@ -260,6 +260,13 @@ Group syncing allows LDAP groups to be mapped to GitLab groups. This provides mo
Group links can be created by using either a CN or a filter. To create these group links, go to the group's **Settings > LDAP Synchronization** page. After configuring the link, it may take more than an hour for the users to sync with the GitLab group.
+If a user is a member of two configured LDAP groups for the same GitLab group, they are granted the higher of the roles associated with the two LDAP groups.
+For example:
+
+- The user is a member of the LDAP groups `Owner` and `Dev`.
+- The GitLab group is configured with these two LDAP groups.
+- When group sync completes, the user is granted the Owner role because it is the higher of the two roles.
+
For more information on the administration of LDAP and group sync, refer to the [main LDAP documentation](../../administration/auth/ldap/ldap_synchronization.md#group-sync).
NOTE:
diff --git a/doc/user/group/epics/manage_epics.md b/doc/user/group/epics/manage_epics.md
index 5675393441e..a5cc3ad9070 100644
--- a/doc/user/group/epics/manage_epics.md
+++ b/doc/user/group/epics/manage_epics.md
@@ -206,7 +206,7 @@ To view epics in a group:
Whether you can view an epic depends on the [group visibility level](../../public_access.md) and
the epic's [confidentiality status](#make-an-epic-confidential):
-- Public group and a non-confidential epic: You don't have to be a member of the group.
+- Public group and a non-confidential epic: Anyone can view the epic.
- Private group and non-confidential epic: You must have at least the Guest role for the group.
- Confidential epic (regardless of group visibility): You must have at least the Reporter
role for the group.
diff --git a/doc/user/group/import/index.md b/doc/user/group/import/index.md
index e1d5c8e5f0a..24d5ca5b214 100644
--- a/doc/user/group/import/index.md
+++ b/doc/user/group/import/index.md
@@ -240,7 +240,16 @@ To view group import history:
1. On the left sidebar, at the top, select **Create new** (**{plus}**) and **New group**.
1. Select **Import group**.
1. In the upper-right corner, select **History**.
-1. If there are any errors for a particular import, you can see them by selecting **Details**.
+1. If there are any errors for a particular import, select **See failures** to see their details.
+
+### Review results of the import
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/429109) in GitLab 16.6 [with a flag](../../feature_flags.md) named `bulk_import_details_page`. Enabled by default.
+
+To review the results of an import:
+
+1. Go to the [Group import history page](#group-import-history).
+1. To see the details of a failed import, select the **See failures** link on any import with a **Failed** status.
### Migrated group items
@@ -337,7 +346,7 @@ Project items that are migrated to the destination GitLab instance include:
| Projects | [GitLab 14.4](https://gitlab.com/gitlab-org/gitlab/-/issues/267945) |
| Auto DevOps | [GitLab 14.6](https://gitlab.com/gitlab-org/gitlab/-/issues/339410) |
| Badges | [GitLab 14.6](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/75029) |
-| Branches (including protected branches) | [GitLab 14.7](https://gitlab.com/gitlab-org/gitlab/-/issues/339414) |
+| Branches (including protected branches) <sup>1</sup> | [GitLab 14.7](https://gitlab.com/gitlab-org/gitlab/-/issues/339414) |
| CI Pipelines | [GitLab 14.6](https://gitlab.com/gitlab-org/gitlab/-/issues/339407) |
| Commit comments | [GitLab 15.10](https://gitlab.com/gitlab-org/gitlab/-/issues/391601) |
| Designs | [GitLab 15.1](https://gitlab.com/gitlab-org/gitlab/-/issues/339421) |
@@ -361,6 +370,14 @@ Project items that are migrated to the destination GitLab instance include:
| Uploads | [GitLab 14.5](https://gitlab.com/gitlab-org/gitlab/-/issues/339401) |
| Wikis | [GitLab 14.6](https://gitlab.com/gitlab-org/gitlab/-/issues/345923) |
+<html>
+<small>Footnotes:
+ <ol>
+ <li>Imported branches respect the [default branch protection settings](../../project/protected_branches.md) of the destination group, which can cause an unprotected branch to be imported as protected.</li>
+ </ol>
+</small>
+</html>
+
#### Issue-related items
Issue-related project items that are migrated to the destination GitLab instance include:
diff --git a/doc/user/group/index.md b/doc/user/group/index.md
index 484fd8c533b..1a4fa9df305 100644
--- a/doc/user/group/index.md
+++ b/doc/user/group/index.md
@@ -202,7 +202,7 @@ A table displays the member's:
NOTE:
The display of group members' **Source** might be inconsistent.
-For more information, see [issue 414557](https://gitlab.com/gitlab-org/gitlab/-/issues/414557).
+For more information, see [issue 23020](https://gitlab.com/gitlab-org/gitlab/-/issues/23020).
## Filter and sort members in a group
@@ -219,7 +219,7 @@ Filter a group to find members. By default, all members in the group and subgrou
In lists of group members, entries can display the following badges:
- **SAML**, to indicate the member has a [SAML account](saml_sso/index.md) connected to them.
-- **Enterprise**, to indicate that the member is an [enterprise user](../enterprise_user/index.md).
+- **Enterprise**, to indicate that the member of the top-level group is an [enterprise user](../enterprise_user/index.md).
1. On the left sidebar, select **Search or go to** and find your group.
1. Select **Manage > Members**.
@@ -227,7 +227,7 @@ In lists of group members, entries can display the following badges:
- To view members in the group only, select **Membership = Direct**.
- To view members of the group and its subgroups, select **Membership = Inherited**.
- To view members with two-factor authentication enabled or disabled, select **2FA = Enabled** or **Disabled**.
- - [In GitLab 14.0 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/349887), to view GitLab users created by [SAML SSO](saml_sso/index.md) or [SCIM provisioning](saml_sso/scim_setup.md) select **Enterprise = true**.
+ - To view members of the top-level group who are [enterprise users](../enterprise_user/index.md), select **Enterprise = true**.
### Search a group
diff --git a/doc/user/group/manage.md b/doc/user/group/manage.md
index d671b0434b6..48f86ee4f0e 100644
--- a/doc/user/group/manage.md
+++ b/doc/user/group/manage.md
@@ -130,6 +130,11 @@ After sharing the `Frontend` group with the `Engineering` group:
- The **Groups** tab lists the `Engineering` group.
- The **Groups** tab lists a group regardless of whether it is a public or private group.
+- From [GitLab 16.6](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134623),
+  the invited group's name and membership source are masked unless one of the following is true:
+  - The invited group is public.
+  - The current user is a member of the invited group.
+  - The current user is a member of the current group.
- All direct members of the `Engineering` group have access to the `Frontend` group. The least access is granted between the access in the `Engineering` group and the access in the `Frontend` group.
- If `Member1` has the Maintainer role in `Engineering` and `Engineering` is added to `Frontend` with the Developer role, `Member1` has the Developer role in `Frontend`.
- If `Member2` has the Guest role in `Engineering` and `Engineering` is added to `Frontend` with the Developer role, `Member2` has the Guest role in `Frontend`.
@@ -487,29 +492,6 @@ To enable Experiment features for a top-level group:
1. Under **Experiment and Beta features**, select the **Use Experiment and Beta features** checkbox.
1. Select **Save changes**.
-## Enable third-party AI features **(ULTIMATE SAAS)**
-
-> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/118222) in GitLab 16.0.
-
-WARNING:
-These AI features use [third-party services](../ai_features.md#data-usage)
-and require transmission of data, including personal data.
-
-All users in the group have third-party AI features enabled by default.
-This setting [cascades to all projects](../project/merge_requests/approvals/settings.md#settings-cascading)
-that belong to the group.
-
-To disable third-party AI features for a group:
-
-1. On the left sidebar, select **Search or go to** and find your group.
-1. Select **Settings > General**.
-1. Expand **Permissions and group features**.
-1. Under **Third-party AI services**, uncheck the **Use third-party AI services** checkbox.
-1. Select **Save changes**.
-
-When Code Suggestions are enabled and disabled, an
-[audit event](../../administration/audit_events.md#view-audit-events) is created.
-
## Group activity analytics **(PREMIUM ALL)**
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/207164) in GitLab 12.10 as a [Beta feature](../../policy/experiment-beta-support.md#beta).
diff --git a/doc/user/group/reporting/git_abuse_rate_limit.md b/doc/user/group/reporting/git_abuse_rate_limit.md
index 1b14edb04d9..d32524b8f5f 100644
--- a/doc/user/group/reporting/git_abuse_rate_limit.md
+++ b/doc/user/group/reporting/git_abuse_rate_limit.md
@@ -13,7 +13,7 @@ On self-managed GitLab, by default this feature is not available. To make it ava
This is the group-level documentation. For self-managed instances, see the [administration documentation](../../admin_area/reporting/git_abuse_rate_limit.md).
-Git abuse rate limiting is a feature to automatically ban users who download, clone, pull, fetch, or fork more than a specified number of repositories of a group in a given time frame. Banned users cannot access the top-level group or any of its non-public subgroups via HTTP or SSH. The rate limit also applies to users who authenticate with a [personal](../../../user/profile/personal_access_tokens.md) or [group access token](../../../user/group/settings/group_access_tokens.md). Access to unrelated groups is unaffected.
+Git abuse rate limiting is a feature to automatically ban users who download, clone, pull, fetch, or fork more than a specified number of repositories of a group in a given time frame. Banned users cannot access the top-level group or any of its non-public subgroups via HTTP or SSH. The rate limit also applies to users who authenticate with [personal](../../../user/profile/personal_access_tokens.md) or [group access tokens](../../../user/group/settings/group_access_tokens.md), as well as [CI/CD job tokens](../../../ci/jobs/ci_job_token.md). Access to unrelated groups is unaffected.
Git abuse rate limiting does not apply to top-level group owners, [deploy tokens](../../../user/project/deploy_tokens/index.md), or [deploy keys](../../../user/project/deploy_keys/index.md).
diff --git a/doc/user/group/saml_sso/group_sync.md b/doc/user/group/saml_sso/group_sync.md
index c18ccaf9c20..7b10da016b9 100644
--- a/doc/user/group/saml_sso/group_sync.md
+++ b/doc/user/group/saml_sso/group_sync.md
@@ -81,6 +81,8 @@ When SAML is enabled, users with the Maintainer or Owner role
see a new menu item in group **Settings > SAML Group Links**. You can configure one or more **SAML Group Links** to map
a SAML identity provider group name to a GitLab role. This can be done for a top-level group or any subgroup.
+SAML Group Sync only manages a group if that group has one or more SAML group links. If a SAML group link is created then removed, the user remains in the group until they are removed from the group in the identity provider.
+
To link the SAML groups:
1. In **SAML Group Name**, enter the value of the relevant `saml:AttributeValue`. The value entered here must exactly match the value sent in the SAML response. For some IdPs, this may be a group ID or object ID (Azure AD) instead of a friendly group name.
diff --git a/doc/user/group/saml_sso/index.md b/doc/user/group/saml_sso/index.md
index 444afd3442b..70af800b180 100644
--- a/doc/user/group/saml_sso/index.md
+++ b/doc/user/group/saml_sso/index.md
@@ -54,7 +54,8 @@ To set up SSO with Azure as your identity provider:
1. You should set the following attributes:
- **Unique User Identifier (Name identifier)** to `user.objectID`.
- **nameid-format** to `persistent`. For more information, see how to [manage user SAML identity](#manage-user-saml-identity).
- - **Additional claims** to [supported attributes](#user-attributes).
+ - **email** to `user.mail` or similar.
+ - **Additional claims** to [supported attributes](#configure-assertions).
1. Make sure the identity provider is set to have provider-initiated calls
to link existing GitLab accounts.
@@ -98,7 +99,7 @@ To set up Google Workspace as your identity provider:
- For **Last name**: `last_name`.
- For **Name ID format**: `EMAIL`.
- For **NameID**: `Basic Information > Primary email`.
- For more information, see [manage user SAML identity](#manage-user-saml-identity).
+ For more information, see [supported attributes](#configure-assertions).
1. Make sure the identity provider is set to have provider-initiated calls
to link existing GitLab accounts.
@@ -134,6 +135,8 @@ To set up SSO with Okta as your identity provider:
1. Set these values:
- For **Application username (NameID)**: **Custom** `user.getInternalProperty("id")`.
- For **Name ID Format**: `Persistent`. For more information, see [manage user SAML identity](#manage-user-saml-identity).
+ - For **email**: `user.email` or similar.
+ - For additional **Attribute Statements**, see [supported attributes](#configure-assertions).
1. Make sure the identity provider is set to have provider-initiated calls
to link existing GitLab accounts.
@@ -170,10 +173,28 @@ To set up OneLogin as your identity provider:
| **Identity provider single sign-on URL** | **SAML 2.0 Endpoint** |
1. For **NameID**, use `OneLogin ID`. For more information, see [manage user SAML identity](#manage-user-saml-identity).
-
+1. Configure [required and supported attributes](#configure-assertions).
1. Make sure the identity provider is set to have provider-initiated calls
to link existing GitLab accounts.
+### Configure assertions
+
+At minimum, you must configure the following assertions:
+
+1. [NameID](#manage-user-saml-identity).
+1. Email.
+
+Optionally, you can pass user information to GitLab as attributes in the SAML assertion.
+
+- The user's email address can be an **email** or **mail** attribute.
+- The username can be either a **username** or **nickname** attribute. You should specify only
+ one of these.
+
+For more information, see the [attributes available for self-managed GitLab instances](../../../integration/saml.md#configure-assertions).
+
+NOTE:
+Attribute names starting with phrases such as `http://schemas.microsoft.com/ws/2008/06/identity/claims/` are not supported. For more information on configuring required attribute names in the SAML identity provider's settings, see [example group SAML and SCIM configurations](../../../user/group/saml_sso/example_saml_config.md).
+
### Use metadata
To configure some identity providers, you need a GitLab metadata URL.
@@ -253,19 +274,6 @@ When a user tries to sign in with Group SSO, GitLab attempts to find or create a
- Create a new account with another email address.
- Sign-in to their existing account to link the SAML identity.
-### User attributes
-
-You can pass user information to GitLab as attributes in the SAML assertion.
-
-- The user's email address can be an **email** or **mail** attribute.
-- The username can be either a **username** or **nickname** attribute. You should specify only
- one of these.
-
-For more information, see the [attributes available for self-managed GitLab instances](../../../integration/saml.md#configure-assertions).
-
-NOTE:
-Attribute names starting with phrases such as `http://schemas.microsoft.com/ws/2008/06/identity/claims/` are not supported. For more information on configuring required attribute names in the SAML identity provider's settings, see [example group SAML and SCIM configurations](../../../user/group/saml_sso/example_saml_config.md).
-
### Link SAML to your existing GitLab.com account
> **Remember me** checkbox [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/121569) in GitLab 15.7.
diff --git a/doc/user/group/saml_sso/troubleshooting.md b/doc/user/group/saml_sso/troubleshooting.md
index 9d3cc0bef50..527d710058a 100644
--- a/doc/user/group/saml_sso/troubleshooting.md
+++ b/doc/user/group/saml_sso/troubleshooting.md
@@ -222,7 +222,7 @@ to [reset their password](https://gitlab.com/users/password/new) if both:
Users might get an error that states "SAML Name ID and email address do not match your user account. Contact an administrator."
This means:
-- The NameID value sent by SAML does not match the existing SAML identity `extern_uid` value.
+- The NameID value sent by SAML does not match the existing SAML identity `extern_uid` value. Both the NameID and the `extern_uid` are case sensitive. For more information, see [manage user SAML identity](index.md#manage-user-saml-identity).
- Either the SAML response did not include an email address or the email address did not match the user's GitLab email address.
The workaround is that a GitLab group Owner uses the [SAML API](../../../api/saml.md) to update the user's SAML `extern_uid`.
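+
+For illustration only, such an update could look like the following sketch. The endpoint and parameter shown here are assumptions based on the linked SAML API documentation; verify them there before use.
+
+```shell
+# Hypothetical sketch: a group Owner updates the SAML extern_uid for a user.
+# <group_id>, <current_extern_uid>, and <new_extern_uid> are placeholders.
+curl --request PATCH --header "PRIVATE-TOKEN: <your_access_token>" \
+  --data "extern_uid=<new_extern_uid>" \
+  "https://gitlab.com/api/v4/groups/<group_id>/saml/<current_extern_uid>"
+```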
@@ -356,3 +356,21 @@ If you see this message after trying to invite a user to a group:
1. Ensure the user is a [member of the top-level group](../index.md#search-a-group).
Additionally, see [troubleshooting users receiving a 404 after sign in](#users-receive-a-404).
+
+## Message: The SAML response did not contain an email address. Either the SAML identity provider is not configured to send the attribute, or the identity provider directory does not have an email address value for your user
+
+This error appears when the SAML response does not contain the user's email address in an **email** or **mail** attribute like the one shown in the following example:
+
+```xml
+<Attribute Name="email">
+  <AttributeValue>user@domain.com</AttributeValue>
+</Attribute>
+```
+
+Attribute names starting with phrases such as `http://schemas.microsoft.com/ws/2008/06/identity/claims/`, as in the following example, are not supported. Remove this type of attribute name from the SAML response on the identity provider side.
+
+```xml
+<Attribute Name="http://schemas.microsoft.com/ws/2008/06/identity/claims/email">
+  <AttributeValue>user@domain.com</AttributeValue>
+</Attribute>
+```
diff --git a/doc/user/group/saml_sso/troubleshooting_scim.md b/doc/user/group/saml_sso/troubleshooting_scim.md
index 703dff16fd5..b31c2eed9df 100644
--- a/doc/user/group/saml_sso/troubleshooting_scim.md
+++ b/doc/user/group/saml_sso/troubleshooting_scim.md
@@ -4,7 +4,7 @@ group: Authentication
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
-# Troubleshooting SCIM **(PREMIUM SAAS)**
+# Troubleshooting SCIM **(FREE ALL)**
This section contains possible solutions for problems you might encounter.
@@ -31,6 +31,8 @@ To solve this problem:
1. Have the user sign in directly to GitLab.
1. [Manually link](scim_setup.md#link-scim-and-saml-identities) their account.
+Alternatively, self-managed administrators can [add a user identity](../../../administration/admin_area.md#user-identities).
+
## User cannot sign in
The following are possible solutions for problems where users cannot sign in:
@@ -38,10 +40,11 @@ The following are possible solutions for problems where users cannot sign in:
- Ensure that the user was added to the SCIM app.
- If you receive the `User is not linked to a SAML account` error, the user probably already exists in GitLab. Have the
user follow the [Link SCIM and SAML identities](scim_setup.md#link-scim-and-saml-identities) instructions.
+ Alternatively, self-managed administrators can [add a user identity](../../../administration/admin_area.md#user-identities).
- The **Identity** (`extern_uid`) value stored by GitLab is updated by SCIM whenever `id` or `externalId` changes. Users
- cannot sign in unless the GitLab Identity (`extern_uid`) value matches the `NameId` sent by SAML. This value is also
- used by SCIM to match users on the `id`, and is updated by SCIM whenever the `id` or `externalId` values change.
-- The SCIM `id` and SCIM `externalId` must be configured to the same value as the SAML `NameId`. You can trace SAML responses
+ cannot sign in unless the GitLab identifier (`extern_uid`) of the sign-in method matches the ID sent by the provider, such as
+ the `NameId` sent by SAML. This value is also used by SCIM to match users on the `id`, and is updated by SCIM whenever the `id` or `externalId` values change.
+- On GitLab.com, the SCIM `id` and SCIM `externalId` must be configured to the same value as the SAML `NameId`. You can trace SAML responses
using [debugging tools](troubleshooting.md#saml-debugging-tools), and check any errors against the
[SAML troubleshooting](troubleshooting.md) information.
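+
+For illustration, you can check which `extern_uid` GitLab.com has stored for a user before comparing it with the `NameId` in the SAML response. This is a sketch only: the endpoint path, filter syntax, and use of the SCIM token in the `Authorization` header are assumptions to verify against the SCIM API documentation.
+
+```shell
+# Hypothetical lookup of a SCIM identity by external UID for a group on GitLab.com.
+# <group_path>, <scim_token>, and <extern_uid> are placeholders.
+curl --header "Authorization: Bearer <scim_token>" \
+  "https://gitlab.com/api/scim/v2/groups/<group_path>/Users?filter=id%20eq%20%22<extern_uid>%22"
+```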
@@ -94,10 +97,12 @@ When the SCIM app changes:
- Users can follow the instructions in the [Change the SAML app](index.md#change-the-identity-provider) section.
- Administrators of the identity provider can:
- 1. Remove users from the SCIM app, which unlinks all removed users.
+ 1. Remove users from the SCIM app, which:
+ - In GitLab.com, removes all removed users from the group.
+ - In GitLab self-managed, blocks users.
1. Turn on sync for the new SCIM app to [link existing users](scim_setup.md#link-scim-and-saml-identities).
-## SCIM app returns `"User has already been taken","status":409` error
+## SCIM app returns `"User has already been taken","status":409` error **(PREMIUM SAAS)**
Changing the SAML or SCIM configuration or provider can cause the following problems:
@@ -109,7 +114,7 @@ Changing the SAML or SCIM configuration or provider can cause the following prob
the SCIM app.
1. Use the same SCIM API to update the SCIM `extern_uid` for the user on GitLab.com.
-## Search Rails logs for SCIM requests
+## Search Rails logs for SCIM requests **(PREMIUM SAAS)**
GitLab.com administrators can search for SCIM requests in the `api_json.log` using the `pubsub-rails-inf-gprd-*` index in
[Kibana](https://about.gitlab.com/handbook/support/workflows/kibana.html#using-kibana). Use the following filters based
diff --git a/doc/user/group/value_stream_analytics/index.md b/doc/user/group/value_stream_analytics/index.md
index df9986e32e7..2ed01a0ec05 100644
--- a/doc/user/group/value_stream_analytics/index.md
+++ b/doc/user/group/value_stream_analytics/index.md
@@ -125,14 +125,17 @@ To view when the data was most recently updated, in the right corner next to **E
### How value stream analytics measures stages
Value stream analytics measures each stage from its start event to its end event.
+Only items that have reached their end event are included in the stage time calculation.
-For example, a stage might start when a user adds a label to an issue, and ends when they add another label.
-Items aren't included in the stage time calculation if they have not reached the end event.
+By default, blocked issues are not included in the life cycle overview.
+However, you can use custom labels (for example `workflow::blocked`) to track them.
-Value stream analytics allows you to customize your stages based on pre-defined events. To make the
-configuration easier, GitLab provides a pre-defined list of stages that can be used as a template
+You can customize stages in value stream analytics based on pre-defined events.
+To help you with the configuration, GitLab provides a pre-defined list of stages that you can use as a template.
+For example, you can define a stage that starts when you add a label to an issue,
+and ends when you add another label.
-Each pre-defined stages of value stream analytics is further described in the table below.
+The following table gives an overview of the pre-defined stages in value stream analytics.
| Stage | Measurement method |
| ------- | -------------------- |
@@ -156,7 +159,7 @@ If a stage does not include a start and a stop time, its data is not included in
In this example, milestones have been created and CI/CD for testing and setting environments is configured.
- 09:00: Create issue. **Issue** stage starts.
-- 11:00: Add issue to a milestone, start work on the issue, and create a branch locally.
+- 11:00: Add issue to a milestone (or backlog), start work on the issue, and create a branch locally.
**Issue** stage stops and **Plan** stage starts.
- 12:00: Make the first commit.
- 12:30: Make the second commit to the branch that mentions the issue number.
diff --git a/doc/user/img/snippet_clone_button_v13_0.png b/doc/user/img/snippet_clone_button_v13_0.png
deleted file mode 100644
index bf681e7349b..00000000000
--- a/doc/user/img/snippet_clone_button_v13_0.png
+++ /dev/null
Binary files differ
diff --git a/doc/user/img/snippet_intro_v13_11.png b/doc/user/img/snippet_intro_v13_11.png
deleted file mode 100644
index 4b6818341b7..00000000000
--- a/doc/user/img/snippet_intro_v13_11.png
+++ /dev/null
Binary files differ
diff --git a/doc/user/img/snippet_sample_v16_6.png b/doc/user/img/snippet_sample_v16_6.png
new file mode 100644
index 00000000000..035947a2b82
--- /dev/null
+++ b/doc/user/img/snippet_sample_v16_6.png
Binary files differ
diff --git a/doc/user/infrastructure/clusters/connect/new_gke_cluster.md b/doc/user/infrastructure/clusters/connect/new_gke_cluster.md
index 96819860a2f..5412ced3e6d 100644
--- a/doc/user/infrastructure/clusters/connect/new_gke_cluster.md
+++ b/doc/user/infrastructure/clusters/connect/new_gke_cluster.md
@@ -95,7 +95,7 @@ Use CI/CD environment variables to configure your project.
1. On the left sidebar, select **Settings > CI/CD**.
1. Expand **Variables**.
1. Set the variable `BASE64_GOOGLE_CREDENTIALS` to the `base64` encoded JSON file you just created.
-1. Set the variable `TF_VAR_gcp_project` to your GCP `project` name.
+1. Set the variable `TF_VAR_gcp_project` to your GCP `project` ID.
1. Set the variable `TF_VAR_agent_token` to the agent token displayed in the previous task.
1. Set the variable `TF_VAR_kas_address` to the agent server address displayed in the previous task.
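+
+As an alternative to setting these variables in the UI, you can set them with the GitLab CLI. This is a sketch that assumes the `glab variable set` command available in recent `glab` releases; the values shown are placeholders.
+
+```shell
+# Hypothetical sketch: set the project CI/CD variables from the command line.
+glab variable set TF_VAR_gcp_project '<your-gcp-project-id>' --repo '<group>/<project>'
+glab variable set TF_VAR_agent_token '<your-agent-token>' --repo '<group>/<project>'
+glab variable set TF_VAR_kas_address '<your-kas-address>' --repo '<group>/<project>'
+```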
@@ -113,6 +113,10 @@ contains other variables that you can override according to your needs:
Refer to the [Google Terraform provider](https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference) and the [Kubernetes Terraform provider](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs) documentation for further resource options.
+## Enable Kubernetes Engine API
+
+From the Google Cloud console, enable the [Kubernetes Engine API](https://console.cloud.google.com/apis/library/container.googleapis.com).
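+
+If you prefer the command line, you can typically enable the same API with the `gcloud` CLI. This is an illustrative alternative to the console step; it assumes the Google Cloud SDK is installed and authenticated against the project referenced by `TF_VAR_gcp_project`.
+
+```shell
+# Enable the Kubernetes Engine API for the active Google Cloud project.
+gcloud services enable container.googleapis.com
+```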
+
## Provision your cluster
After configuring your project, manually trigger the provisioning of your cluster. In GitLab:
diff --git a/doc/user/infrastructure/iac/index.md b/doc/user/infrastructure/iac/index.md
index 1e6c59c2253..65ec84652ef 100644
--- a/doc/user/infrastructure/iac/index.md
+++ b/doc/user/infrastructure/iac/index.md
@@ -85,7 +85,6 @@ To use a Terraform template:
```yaml
variables:
TF_STATE_NAME: default
- TF_CACHE_KEY: default
# If your terraform files are in a subdirectory, set TF_ROOT accordingly. For example:
# TF_ROOT: terraform/production
```
diff --git a/doc/user/infrastructure/iac/mr_integration.md b/doc/user/infrastructure/iac/mr_integration.md
index 24ae3c998f8..8fe639bb453 100644
--- a/doc/user/infrastructure/iac/mr_integration.md
+++ b/doc/user/infrastructure/iac/mr_integration.md
@@ -16,10 +16,13 @@ enabling you to see statistics about the resources that Terraform creates,
modifies, or destroys.
WARNING:
-Like any other job artifact, Terraform Plan data is viewable by anyone with the Guest role for the repository.
-Neither Terraform nor GitLab encrypts the plan file by default. If your Terraform Plan
-includes sensitive data such as passwords, access tokens, or certificates, we strongly
-recommend encrypting plan output or modifying the project visibility settings.
+Like any other job artifact, Terraform plan data is viewable by anyone with the Guest role on the repository.
+Neither Terraform nor GitLab encrypts the plan file by default. If your Terraform `plan.json` or `plan.cache`
+files include sensitive data like passwords, access tokens, or certificates, you should
+encrypt the plan output or modify the project visibility settings. You should also **disable**
+[public pipelines](../../../ci/pipelines/settings.md#change-pipeline-visibility-for-non-project-members-in-public-projects)
+and set the [artifact's public flag to false](../../../ci/yaml/index.md#artifactspublic) (`public: false`).
+This setting ensures artifacts are accessible only to GitLab administrators and project members with at least the Reporter role.
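+
+For example, a minimal sketch of a `plan` job that keeps its artifact private. The job name, the
+`gitlab-terraform` helper (available in the GitLab Terraform images), and the file paths are
+illustrative and depend on your pipeline:
+
+```yaml
+plan:
+  script:
+    - gitlab-terraform plan        # produces the plan cache file
+    - gitlab-terraform plan-json   # produces plan.json for the merge request widget
+  artifacts:
+    public: false                  # hide the artifact from non-members and guests
+    paths:
+      - plan.cache
+    reports:
+      terraform: plan.json
+```
+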
## Configure Terraform report artifacts
diff --git a/doc/user/infrastructure/iac/terraform_state.md b/doc/user/infrastructure/iac/terraform_state.md
index 081e20b158e..876300a7794 100644
--- a/doc/user/infrastructure/iac/terraform_state.md
+++ b/doc/user/infrastructure/iac/terraform_state.md
@@ -54,12 +54,12 @@ Prerequisites:
WARNING:
Like any other job artifact, Terraform plan data is viewable by anyone with the Guest role on the repository.
-Neither Terraform nor GitLab encrypts the plan file by default. If your Terraform plan
-includes sensitive data, like passwords, access tokens, or certificates, you should
-encrypt plan output or modify the project visibility settings. We also strongly recommend that you **disable**
+Neither Terraform nor GitLab encrypts the plan file by default. If your Terraform `plan.json` or `plan.cache`
+files include sensitive data like passwords, access tokens, or certificates, you should
+encrypt the plan output or modify the project visibility settings. You should also **disable**
[public pipelines](../../../ci/pipelines/settings.md#change-pipeline-visibility-for-non-project-members-in-public-projects)
-by setting the artifact's public flag to false (`public: false`). This setting ensures artifacts are
-accessible only to GitLab Administrators and project members with the Reporter role and above.
+and set the [artifact's public flag to false](../../../ci/yaml/index.md#artifactspublic) (`public: false`).
+This setting ensures artifacts are accessible only to GitLab administrators and project members with at least the Reporter role.
To configure GitLab CI/CD as a backend:
diff --git a/doc/user/markdown.md b/doc/user/markdown.md
index 7f097891e92..a06e26c3e82 100644
--- a/doc/user/markdown.md
+++ b/doc/user/markdown.md
@@ -379,7 +379,8 @@ the [Asciidoctor user manual](https://asciidoctor.org/docs/user-manual/#activati
To prevent malicious activity, GitLab renders only the first 50 inline math instances.
The number of math blocks is also limited based on render time. If the limit is exceeded,
-GitLab renders the excess math instances as text.
+GitLab renders the excess math instances as text. Wiki and repository files do not have
+these limits.
Math written between dollar signs with backticks (``$`...`$``) or single dollar signs (`$...$`)
is rendered inline with the text.
diff --git a/doc/user/okrs.md b/doc/user/okrs.md
index 46390cd0275..ca5882da22a 100644
--- a/doc/user/okrs.md
+++ b/doc/user/okrs.md
@@ -399,6 +399,24 @@ To turn off a check-in reminder, enter:
/checkin_reminder never
```
+## Set an objective as a parent
+
+> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/11198) in GitLab 16.6.
+
+Prerequisites:
+
+- You must have at least the Reporter role for the project.
+- The parent objective and child OKR must belong to the same project.
+
+To set an objective as a parent of an OKR:
+
+1. [Open the objective](#view-an-objective) or [key result](#view-a-key-result) that you want to edit.
+1. Next to **Parent**, from the dropdown list, select the parent to add.
+1. Select any area outside the dropdown list.
+
+To remove the parent of the objective or key result,
+next to **Parent**, select the dropdown list and then select **Unassign**.
+
## Confidential OKRs
> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/8410) in GitLab 15.3.
diff --git a/doc/user/organization/index.md b/doc/user/organization/index.md
index 2a33543fea5..5a08307cc11 100644
--- a/doc/user/organization/index.md
+++ b/doc/user/organization/index.md
@@ -6,6 +6,13 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Organization
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/409913) in GitLab 16.1 [with a flag](../../administration/feature_flags.md) named `ui_for_organizations`. Disabled by default.
+
+FLAG:
+This feature is not ready for production use.
+On self-managed GitLab, by default this feature is not available. To make it available, an administrator can [enable the feature flag](../../administration/feature_flags.md) named `ui_for_organizations`.
+On GitLab.com, this feature is not available.
+
DISCLAIMER:
This page contains information related to upcoming products, features, and functionality.
It is important to note that the information presented is for informational purposes only.
@@ -37,6 +44,37 @@ see [epic 9265](https://gitlab.com/groups/gitlab-org/-/epics/9265).
For a video introduction to the new hierarchy concept for groups and projects for epics, see
[Consolidating groups and projects update (August 2021)](https://www.youtube.com/watch?v=fE74lsG_8yM).
+## View organizations
+
+To view the organizations you have access to:
+
+- On the left sidebar, select **Organizations** (**{organization}**).
+
+## Create an organization
+
+1. On the left sidebar, at the top, select **Create new** (**{plus}**) and **New organization**.
+1. In the **Organization name** field, enter a name for the organization.
+1. In the **Organization URL** field, enter a path for the organization.
+1. Select **Create organization**.
+
+## Edit an organization's name
+
+1. On the left sidebar, select **Organizations** (**{organization}**) and find the organization you want to edit.
+1. Select **Settings > General**.
+1. Update the **Organization name** field.
+1. Select **Save changes**.
+
+## Manage groups and projects
+
+1. On the left sidebar, select **Organizations** (**{organization}**) and find the organization you want to manage.
+1. Select **Manage > Groups and projects**.
+1. To switch between groups and projects, use the **Display** filter next to the search box.
+
+## Manage users
+
+1. On the left sidebar, select **Organizations** (**{organization}**) and find the organization you want to manage.
+1. Select **Manage > Users**.
+
## Related topics
- [Organization developer documentation](../../development/organization/index.md)
diff --git a/doc/user/packages/composer_repository/index.md b/doc/user/packages/composer_repository/index.md
index d8662ef6512..6eac299e71f 100644
--- a/doc/user/packages/composer_repository/index.md
+++ b/doc/user/packages/composer_repository/index.md
@@ -225,7 +225,7 @@ To install a package:
Using a CI/CD job token:
```shell
- composer config gitlab-token.<DOMAIN-NAME> gitlab-ci-token ${CI_JOB_TOKEN}
+ composer config -- gitlab-token.<DOMAIN-NAME> gitlab-ci-token "${CI_JOB_TOKEN}"
```
Result in the `auth.json` file:
diff --git a/doc/user/packages/container_registry/index.md b/doc/user/packages/container_registry/index.md
index 1f95d2f9403..786fd0ca658 100644
--- a/doc/user/packages/container_registry/index.md
+++ b/doc/user/packages/container_registry/index.md
@@ -79,7 +79,7 @@ For more information on running container images, see the [Docker documentation]
Your container images must follow this naming convention:
```plaintext
-<registry URL>/<namespace>/<project>/<image>
+<registry server>/<namespace>/<project>[/<optional path>]
```
For example, if your project is `gitlab.example.com/mynamespace/myproject`,
diff --git a/doc/user/packages/container_registry/reduce_container_registry_storage.md b/doc/user/packages/container_registry/reduce_container_registry_storage.md
index 2af16dcc85a..8c4f25af2e1 100644
--- a/doc/user/packages/container_registry/reduce_container_registry_storage.md
+++ b/doc/user/packages/container_registry/reduce_container_registry_storage.md
@@ -15,14 +15,61 @@ if you add a large number of images or tags:
You should delete unnecessary images and tags and set up a [cleanup policy](#cleanup-policy)
to automatically manage your container registry usage.
-## Check Container Registry storage use
+## Check Container Registry storage use **(FREE SAAS)**
The Usage Quotas page (**Settings > Usage Quotas > Storage**) displays storage usage for Packages.
-This page includes the [Container Registry usage](../../usage_quotas.md#container-registry-usage), which is only available on GitLab.com.
Measuring usage is only possible on the new version of the GitLab Container Registry backed by a
metadata database, which is [available on GitLab.com](https://gitlab.com/groups/gitlab-org/-/epics/5523) since GitLab 15.7.
For information on the planned availability for self-managed instances, see [epic 5521](https://gitlab.com/groups/gitlab-org/-/epics/5521).
+## How container registry usage is calculated
+
+Image layers stored in the Container Registry are deduplicated at the root namespace level.
+
+An image is only counted once if:
+
+- You tag the same image more than once in the same repository.
+- You tag the same image across distinct repositories under the same root namespace.
+
+An image layer is only counted once if:
+
+- You share the image layer across multiple images in the same container repository, project, or group.
+- You share the image layer across different repositories.
+
+Only layers that are referenced by tagged images are accounted for. Untagged images and any layers
+referenced exclusively by them are subject to [online garbage collection](../container_registry/delete_container_registry_images.md#garbage-collection).
+Untagged image layers are automatically deleted after 24 hours if they remain unreferenced during that period.
+
+Image layers are stored on the storage backend in the original (usually compressed) format. This
+means that the measured size for any given image layer should match the size displayed on the
+corresponding [image manifest](https://github.com/opencontainers/image-spec/blob/main/manifest.md#example-image-manifest).
+
+Namespace usage is refreshed a few minutes after a tag is pushed or deleted from any container repository under the namespace.
+
+### Delayed refresh
+
+It is not possible to calculate container registry usage
+with maximum precision in real time for extremely large namespaces (about 1% of namespaces).
+To enable maintainers of these namespaces to see their usage, there is a delayed fallback mechanism.
+See [epic 9413](https://gitlab.com/groups/gitlab-org/-/epics/9413) for more details.
+
+If the usage for a namespace cannot be calculated with precision, GitLab falls back to the delayed method.
+In the delayed method, the displayed usage size is the sum of **all** unique image layers
+in the namespace. Untagged image layers are not ignored. As a result,
+the displayed usage size might not change significantly after deleting tags. Instead,
+the size value only changes when:
+
+- An automated [garbage collection process](../container_registry/delete_container_registry_images.md#garbage-collection)
+ runs and deletes untagged image layers. After a user deletes a tag, a garbage collection run
+ is scheduled to start 24 hours later. During that run, images that were previously tagged
+ are analyzed and their layers deleted if not referenced by any other tagged image.
+ If any layers are deleted, the namespace usage is updated.
+- The namespace's registry usage shrinks enough that GitLab can measure it with maximum precision.
+ As usage for namespaces shrinks to be under the [limits](../../../user/usage_quotas.md#namespace-storage-limit),
+ the measurement switches automatically from delayed to precise usage measurement.
+ There is no place in the UI to determine which measurement method is being used,
+ but [issue 386468](https://gitlab.com/gitlab-org/gitlab/-/issues/386468) proposes to improve this.
+
## Cleanup policy
> - [Renamed](https://gitlab.com/gitlab-org/gitlab/-/issues/218737) from "expiration policy" to "cleanup policy" in GitLab 13.2.
diff --git a/doc/user/packages/container_registry/troubleshoot_container_registry.md b/doc/user/packages/container_registry/troubleshoot_container_registry.md
index 13e14dfdeb4..3fb2754eb9c 100644
--- a/doc/user/packages/container_registry/troubleshoot_container_registry.md
+++ b/doc/user/packages/container_registry/troubleshoot_container_registry.md
@@ -128,6 +128,12 @@ time is set to 15 minutes.
If you are using self-managed GitLab, an administrator can
[increase the token duration](../../../administration/packages/container_registry.md#increase-token-duration).
+## `Failed to pull image` messages
+
+You might receive a [`Failed to pull image`](../../../ci/debugging.md#failed-to-pull-image-messages)
+error message when a CI/CD job is unable to pull a container image from a project with a limited
+[CI/CD job token scope](../../../ci/jobs/ci_job_token.md#limit-job-token-scope-for-public-or-internal-projects).
+
## Slow uploads when using `kaniko` to push large images
When you push large images with `kaniko`, you might experience uncharacteristically long delays.
@@ -136,3 +142,24 @@ This is typically a result of [a performance issue with `kaniko` and HTTP/2](htt
The current workaround is to use HTTP/1.1 when pushing with `kaniko`.
To use HTTP/1.1, set the `GODEBUG` environment variable to `"http2client=0"`.
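+
+For example, a minimal sketch of a CI/CD job that applies this workaround. The kaniko image tag and
+build flags follow the common kaniko example and are illustrative; adjust them for your pipeline:
+
+```yaml
+build:
+  image:
+    name: gcr.io/kaniko-project/executor:debug
+    entrypoint: [""]
+  variables:
+    GODEBUG: "http2client=0"   # force HTTP/1.1 for registry pushes
+  script:
+    - /kaniko/executor
+      --context "${CI_PROJECT_DIR}"
+      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
+      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}"
+```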
+
+## `docker login` command fails with `access forbidden`
+
+The container registry [returns the GitLab API URL to the Docker client](../../../administration/packages/container_registry.md#architecture-of-gitlab-container-registry)
+to validate credentials. The Docker client uses basic auth, so the request contains
+the `Authorization` header. If the `Authorization` header is missing in the request to the
+`/jwt/auth` endpoint configured in the `token_realm` for the registry configuration,
+you receive an `access forbidden` error message.
+
+For example:
+
+```plaintext
+> docker login gitlab.example.com:4567
+
+Username: user
+Password:
+Error response from daemon: Get "https://gitlab.example.com:4567/v2/": denied: access forbidden
+```
+
+To avoid this error, ensure the `Authorization` header is not stripped from the request.
+For example, a proxy in front of GitLab that redirects requests to the `/jwt/auth` endpoint can cause the header to be dropped.
diff --git a/doc/user/packages/generic_packages/index.md b/doc/user/packages/generic_packages/index.md
index 938093f2a27..1416dcde14f 100644
--- a/doc/user/packages/generic_packages/index.md
+++ b/doc/user/packages/generic_packages/index.md
@@ -33,7 +33,7 @@ Prerequisites:
- You must [authenticate with the API](../../../api/rest/index.md#authentication).
If authenticating with a deploy token, it must be configured with the `write_package_registry`
scope. If authenticating with a personal access token or project access token, it must be
- configured with the `api` scope.
+ configured with the `api` scope. Project access tokens must have at least the Developer role.
- You must call this API endpoint serially when attempting to upload multiple files under the
same package name and version. Attempts to concurrently upload multiple files into
a new package name and version may face partial failures with
@@ -142,7 +142,9 @@ If multiple packages have the same name, version, and filename, then the most re
Prerequisites:
-- You need to [authenticate with the API](../../../api/rest/index.md#authentication). If authenticating with a deploy token, it must be configured with the `read_package_registry` and/or `write_package_registry` scope.
+- You need to [authenticate with the API](../../../api/rest/index.md#authentication).
+ - If authenticating with a deploy token, it must be configured with the `read_package_registry` and/or `write_package_registry` scope.
+  - Project access tokens require the `read_api` scope and at least the Reporter role.
```plaintext
GET /projects/:id/packages/generic/:package_name/:package_version/:file_name
diff --git a/doc/user/packages/maven_repository/index.md b/doc/user/packages/maven_repository/index.md
index 6765aa2cbb1..c8730c42022 100644
--- a/doc/user/packages/maven_repository/index.md
+++ b/doc/user/packages/maven_repository/index.md
@@ -24,7 +24,7 @@ Supported clients:
### Authenticate to the Package Registry
-You need an token to publish a package. There are different tokens available depending on what you're trying to achieve. For more information, review the [guidance on tokens](../package_registry/index.md#authenticate-with-the-registry).
+You need a token to publish a package. There are different tokens available depending on what you're trying to achieve. For more information, review the [guidance on tokens](../package_registry/index.md#authenticate-with-the-registry).
Create a token and save it to use later in the process.
@@ -32,6 +32,10 @@ Do not use authentication methods other than the methods documented here. Undocu
#### Edit the client configuration
+Update your configuration to authenticate to the Maven repository with HTTP.
+
+##### Custom HTTP header
+
You must add the authentication details to the configuration file
for your client.
@@ -127,6 +131,97 @@ file:
}
```
+::EndTabs
+
+##### Basic HTTP authentication
+
+You can also use basic HTTP authentication to authenticate to the Maven Package Registry.
+
+::Tabs
+
+:::TabTitle `mvn`
+
+| Token type | Name must be | Token |
+| --------------------- | ---------------------------- | ---------------------------------------------------------------------- |
+| Personal access token | The username of the user | Paste token as-is, or define an environment variable to hold the token |
+| Deploy token          | The username of the deploy token | Paste token as-is, or define an environment variable to hold the token |
+| CI Job token | `gitlab-ci-token` | `${CI_JOB_TOKEN}` |
+
+Add the following section to your
+[`settings.xml`](https://maven.apache.org/settings.html) file.
+
+```xml
+<settings>
+ <servers>
+ <server>
+ <id>gitlab-maven</id>
+ <username>REPLACE_WITH_NAME</username>
+ <password>REPLACE_WITH_TOKEN</password>
+ <configuration>
+ <authenticationInfo>
+ <userName>REPLACE_WITH_NAME</userName>
+ <password>REPLACE_WITH_TOKEN</password>
+ </authenticationInfo>
+ </configuration>
+ </server>
+ </servers>
+</settings>
+```
+
+:::TabTitle `gradle`
+
+| Token type | Name must be | Token |
+| --------------------- | ---------------------------- | ---------------------------------------------------------------------- |
+| Personal access token | The username of the user | Paste token as-is, or define an environment variable to hold the token |
+| Deploy token          | The username of the deploy token | Paste token as-is, or define an environment variable to hold the token |
+| CI Job token | `gitlab-ci-token` | `System.getenv("CI_JOB_TOKEN")` |
+
+In [your `GRADLE_USER_HOME` directory](https://docs.gradle.org/current/userguide/directory_layout.html#dir:gradle_user_home),
+create a file `gradle.properties` with the following content:
+
+```properties
+gitLabPrivateToken=REPLACE_WITH_YOUR_TOKEN
+```
+
+Add a `repositories` section to your
+[`build.gradle`](https://docs.gradle.org/current/userguide/tutorial_using_tasks.html).
+
+- In Groovy DSL:
+
+ ```groovy
+ repositories {
+ maven {
+ url "https://gitlab.example.com/api/v4/groups/<group>/-/packages/maven"
+ name "GitLab"
+ credentials(PasswordCredentials) {
+ username = 'REPLACE_WITH_NAME'
+ password = gitLabPrivateToken
+ }
+ authentication {
+ basic(BasicAuthentication)
+ }
+ }
+ }
+ ```
+
+- In Kotlin DSL:
+
+ ```kotlin
+ repositories {
+ maven {
+ url = uri("https://gitlab.example.com/api/v4/groups/<group>/-/packages/maven")
+ name = "GitLab"
+      credentials(PasswordCredentials::class) {
+ username = "REPLACE_WITH_NAME"
+ password = findProperty("gitLabPrivateToken") as String?
+ }
+ authentication {
+ create("basic", BasicAuthentication::class)
+ }
+ }
+ }
+ ```
+
:::TabTitle `sbt`
| Token type | Name must be | Token |
diff --git a/doc/user/packages/npm_registry/index.md b/doc/user/packages/npm_registry/index.md
index 9d789c27d1f..43defb29fd5 100644
--- a/doc/user/packages/npm_registry/index.md
+++ b/doc/user/packages/npm_registry/index.md
@@ -87,6 +87,10 @@ Your package should now publish to the Package Registry.
When publishing by using a CI/CD pipeline, you can use the [predefined variables](../../../ci/variables/predefined_variables.md) `${CI_PROJECT_ID}` and `${CI_JOB_TOKEN}` to authenticate with your project's Package Registry. We use these variables to create a `.npmrc` file [for authentication](#authenticating-via-the-npmrc) during execution of your CI/CD job.
+WARNING:
+When generating the `.npmrc` file, do not specify the port after `${CI_SERVER_HOST}` if it is a default port,
+such as `80` for a URL starting with `http` or `443` for a URL starting with `https`.
+
In the GitLab project containing your `package.json`, edit or create a `.gitlab-ci.yml` file. For example:
```yaml
@@ -98,8 +102,8 @@ stages:
publish-npm:
stage: deploy
script:
- - echo "@scope:registry=https://${CI_SERVER_HOST}:${CI_SERVER_PORT}/api/v4/projects/${CI_PROJECT_ID}/packages/npm/" > .npmrc
- - echo "//${CI_SERVER_HOST}:${CI_SERVER_PORT}/api/v4/projects/${CI_PROJECT_ID}/packages/npm/:_authToken=${CI_JOB_TOKEN}" >> .npmrc
+ - echo "@scope:registry=https://${CI_SERVER_HOST}/api/v4/projects/${CI_PROJECT_ID}/packages/npm/" > .npmrc
+ - echo "//${CI_SERVER_HOST}/api/v4/projects/${CI_PROJECT_ID}/packages/npm/:_authToken=${CI_JOB_TOKEN}" >> .npmrc
- npm publish
```
@@ -265,7 +269,7 @@ npm deprecate @scope/package ""
### Package forwarding to npmjs.com
-When an npm package is not found in the Package Registry, the request is forwarded to [npmjs.com](https://www.npmjs.com/).
+When an npm package is not found in the Package Registry, the request is forwarded to [npmjs.com](https://www.npmjs.com/). The forward is performed by sending an HTTP redirect back to the requesting client.
Administrators can disable this behavior in the [Continuous Integration settings](../../admin_area/settings/continuous_integration.md).
diff --git a/doc/user/packages/nuget_repository/index.md b/doc/user/packages/nuget_repository/index.md
index f5430c5328c..8db79dc6c5f 100644
--- a/doc/user/packages/nuget_repository/index.md
+++ b/doc/user/packages/nuget_repository/index.md
@@ -434,14 +434,19 @@ the existing package is overwritten.
### Do not allow duplicate NuGet packages
-> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/293748) in GitLab 16.3 [with a flag](../../../administration/feature_flags.md) named `nuget_duplicates_option`. Disabled by default.
+> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/293748) in GitLab 16.3 [with a flag](../../../administration/feature_flags.md) named `nuget_duplicates_option`. Disabled by default.
+> - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/419078) in GitLab 16.6. Feature flag `nuget_duplicates_option` removed.
-FLAG:
-On self-managed GitLab, by default this feature is not available. To make it available,
-an administrator can [enable the feature flag](../../../administration/feature_flags.md) named `nuget_duplicates_option`.
-The feature is not ready for production use.
+To prevent users from publishing duplicate NuGet packages, you can use the [GraphQL API](../../../api/graphql/reference/index.md#packagesettings) or the UI.
-To prevent users from publishing duplicate NuGet packages, you can use the [GraphQl API](../../../api/graphql/reference/index.md#packagesettings).
+In the UI:
+
+1. On the left sidebar, select **Search or go to** and find your group.
+1. Select **Settings > Packages and registries**.
+1. In the **NuGet** row of the **Duplicate packages** table, turn off the **Allow duplicates** toggle.
+1. Optional. In the **Exceptions** text box, enter a regular expression that matches the names and versions of packages to allow.
+
+Your changes are automatically saved.
WARNING:
If the .nuspec file isn't located in the root of the package, the package might
diff --git a/doc/user/packages/package_registry/supported_functionality.md b/doc/user/packages/package_registry/supported_functionality.md
index 3e8852da808..eb6b415ee06 100644
--- a/doc/user/packages/package_registry/supported_functionality.md
+++ b/doc/user/packages/package_registry/supported_functionality.md
@@ -160,9 +160,9 @@ The following authentication protocols are supported:
| Package type | Supported auth protocols |
|-------------------------------------------------------|-------------------------------------------------------------|
-| [Maven (with `mvn`)](../maven_repository/index.md) | Headers, Basic auth ([pulling](#pulling-packages) only) (1) |
-| [Maven (with `gradle`)](../maven_repository/index.md) | Headers, Basic auth ([pulling](#pulling-packages) only) (1) |
-| [Maven (with `sbt`)](../maven_repository/index.md) | Basic auth (1) |
+| [Maven (with `mvn`)](../maven_repository/index.md) | Headers, Basic auth |
+| [Maven (with `gradle`)](../maven_repository/index.md) | Headers, Basic auth |
+| [Maven (with `sbt`)](../maven_repository/index.md) | Basic auth ([pulling](#pulling-packages) only) (1) |
| [npm](../npm_registry/index.md) | OAuth |
| [NuGet](../nuget_repository/index.md) | Basic auth |
| [PyPI](../pypi_repository/index.md) | Basic auth |
diff --git a/doc/user/permissions.md b/doc/user/permissions.md
index a83ce6a56c6..ab26e490f51 100644
--- a/doc/user/permissions.md
+++ b/doc/user/permissions.md
@@ -195,16 +195,16 @@ The following table lists project permissions available for each role:
| [Repository](project/repository/index.md):<br>Turn on or off protected branch push for developers | | | | ✓ | ✓ |
| [Repository](project/repository/index.md):<br>Remove fork relationship | | | | | ✓ |
| [Repository](project/repository/index.md):<br>Force push to protected branches (3) | | | | | |
-| [Repository](project/repository/index.md):<br>Remove protected branches (3) | | | | | |
+| [Repository](project/repository/index.md):<br>Remove protected branches by using the UI or API | | | | ✓ | ✓ |
| [Requirements Management](project/requirements/index.md):<br>Archive / reopen | | ✓ | ✓ | ✓ | ✓ |
| [Requirements Management](project/requirements/index.md):<br>Create / edit | | ✓ | ✓ | ✓ | ✓ |
| [Requirements Management](project/requirements/index.md):<br>Import / export | | ✓ | ✓ | ✓ | ✓ |
| [Security dashboard](application_security/security_dashboard/index.md):<br>Create issue from vulnerability finding | | | ✓ | ✓ | ✓ |
| [Security dashboard](application_security/security_dashboard/index.md):<br>Create vulnerability from vulnerability finding | | | ✓ | ✓ | ✓ |
-| [Security dashboard](application_security/security_dashboard/index.md):<br>Dismiss vulnerability | | | ✓ | ✓ | ✓ |
-| [Security dashboard](application_security/security_dashboard/index.md):<br>Dismiss vulnerability finding | | | ✓ | ✓ | ✓ |
-| [Security dashboard](application_security/security_dashboard/index.md):<br>Resolve vulnerability | | | ✓ | ✓ | ✓ |
-| [Security dashboard](application_security/security_dashboard/index.md):<br>Revert vulnerability to detected state | | | ✓ | ✓ | ✓ |
+| [Security dashboard](application_security/security_dashboard/index.md):<br>Dismiss vulnerability | | | ✓ (24) | ✓ | ✓ |
+| [Security dashboard](application_security/security_dashboard/index.md):<br>Dismiss vulnerability finding | | | ✓ | ✓ (24) | ✓ |
+| [Security dashboard](application_security/security_dashboard/index.md):<br>Resolve vulnerability | | | ✓ (24) | ✓ | ✓ |
+| [Security dashboard](application_security/security_dashboard/index.md):<br>Revert vulnerability to detected state | | | ✓ (24) | ✓ | ✓ |
| [Security dashboard](application_security/security_dashboard/index.md):<br>Use security dashboard | | | ✓ | ✓ | ✓ |
| [Security dashboard](application_security/security_dashboard/index.md):<br>View vulnerability | | | ✓ | ✓ | ✓ |
| [Security dashboard](application_security/security_dashboard/index.md):<br>View vulnerability findings in [dependency list](application_security/dependency_list/index.md) | | | ✓ | ✓ | ✓ |
@@ -249,6 +249,7 @@ The following table lists project permissions available for each role:
21. Authors of tasks can delete them even if they don't have the Owner role, but they have to have at least the Guest role for the project.
22. You must have permission to [view the epic](group/epics/manage_epics.md#who-can-view-an-epic).
23. In GitLab 15.9 and later, users with the Guest role and an Ultimate license can view private repository content if an administrator (on self-managed) or group owner (on GitLab.com) gives those users permission. The administrator or group owner can create a [custom role](custom_roles.md) through the API and assign that role to the users.
+24. In GitLab 16.4, the ability for Developers to change the status of a vulnerability (`admin_vulnerability`) was [deprecated](../update/deprecations.md#deprecate-change-vulnerability-status-from-the-developer-role). The `admin_vulnerability` permission will be removed from the Developer role by default in GitLab 17.0.
<!-- markdownlint-enable MD029 -->
diff --git a/doc/user/product_analytics/index.md b/doc/user/product_analytics/index.md
index ca55ab758da..94217f985cf 100644
--- a/doc/user/product_analytics/index.md
+++ b/doc/user/product_analytics/index.md
@@ -32,7 +32,7 @@ Product analytics uses several tools:
- [**Snowplow**](https://docs.snowplow.io/docs) - A developer-first engine for collecting behavioral data, and passing it through to ClickHouse.
- [**ClickHouse**](https://clickhouse.com/docs) - A database suited to store, query, and retrieve analytical data.
-- [**Cube**](https://cube.dev/docs/) - An analytical graphing library that provides an API to run queries against the data stored in Clickhouse.
+- [**Cube**](https://cube.dev/docs/) - An analytical graphing library that provides an API to run queries against the data stored in ClickHouse.
The following diagram illustrates the product analytics flow:
@@ -46,7 +46,7 @@ flowchart TB
B --Pass data through--> C[Snowplow Enricher]
end
subgraph Data warehouse
- C --Transform and enrich data--> D([Clickhouse])
+ C --Transform and enrich data--> D([ClickHouse])
end
subgraph Data visualization with dashboards
E([Dashboards]) --Generated from the YAML definition--> F[Panels/Visualizations]
@@ -101,11 +101,35 @@ Prerequisites:
1. Expand **Configure** and enter the configuration values.
1. Select **Save changes**.
-## Instrument a GitLab project
+## Onboard a GitLab project
+
+Onboarding a GitLab project means preparing it to receive events that are used for product analytics.
+
+To onboard a project:
+
+1. On the left sidebar, select **Search or go to** and find your project.
+1. Select **Analyze > Analytics dashboards**.
+1. Under **Product analytics**, select **Set up**.
+1. Select **Set up product analytics**.
+
+Your instance is being created, and the project is being onboarded.
+
+### Onboard an internal project
+
+GitLab team members can enable Product Analytics on their internal projects on GitLab.com (Ultimate) during the experiment phase.
+
+1. Send a message to the Product Analytics team (`#g_analyze_product_analytics`) informing them of the repository to be enabled.
+1. Using ChatOps, enable both the `product_analytics_dashboards` and `combined_analytics_dashboards` feature flags:
+
+ ```plaintext
+ /chatops run feature set product_analytics_dashboards true --project=FULLPATH_TO_PROJECT
+ /chatops run feature set combined_analytics_dashboards true --project=FULLPATH_TO_PROJECT
+ ```
+
+## Instrument your application
To instrument code to collect data, use one or more of the existing SDKs:
-- [Browser SDK](https://gitlab.com/gitlab-org/analytics-section/product-analytics/gl-application-sdk-browser)
+- [Browser SDK](instrumentation/browser_sdk.md)
- [Ruby SDK](https://gitlab.com/gitlab-org/analytics-section/product-analytics/gl-application-sdk-rb)
- [Python SDK](https://gitlab.com/gitlab-org/analytics-section/product-analytics/gl-application-sdk-python)
- [Node SDK](https://gitlab.com/gitlab-org/analytics-section/product-analytics/gl-application-sdk-node)
@@ -273,18 +297,24 @@ POST /api/v4/projects/PROJECT_ID/product_analytics/request/load?queryType=multi
If the request is successful, the returned JSON includes an array of rows of results.
-## Onboarding GitLab internal projects
+## View product analytics usage quota
-GitLab team members can enable Product Analytics on their own internal projects on GitLab.com during the experiment phase.
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/424153) in GitLab 16.6 with a [flag](../../administration/feature_flags.md) named `product_analytics_usage_quota`. Disabled by default.
-1. Send a message to the Product Analytics team (`#g_analyze_product_analytics`) informing them of the repository to be enabled.
-1. Ensure that the project is within an Ultimate namespace.
-1. Using ChatOps, enable both the `product_analytics_dashboards` and `combined_analytics_dashboards`
+FLAG:
+On self-managed GitLab, by default this feature is not available. To make it available per project or for your entire instance, an administrator can [enable the feature flag](../../administration/feature_flags.md) named `product_analytics_usage_quota`.
+On GitLab.com, this feature is not available.
+This feature is not ready for production use.
- ```plaintext
- /chatops run feature set product_analytics_dashboards true --project=FULLPATH_TO_PROJECT
- /chatops run feature set combined_analytics_dashboards true --project=FULLPATH_TO_PROJECT
- ```
+Product analytics usage quota is calculated from the number of events received from instrumented applications.
+The tab displays the monthly totals for the group and a breakdown of usage per project. For the current month, the total shows the events counted so far.
+
+To view product analytics usage quota:
+
+1. On the left sidebar, select **Search or go to** and find your group.
+1. Select **Settings > Usage Quotas** and select the **Product analytics** tab.
+
+The usage quota excludes projects that are not onboarded with product analytics.
## Troubleshooting
diff --git a/doc/user/product_analytics/instrumentation/browser_sdk.md b/doc/user/product_analytics/instrumentation/browser_sdk.md
new file mode 100644
index 00000000000..f2beafab8e0
--- /dev/null
+++ b/doc/user/product_analytics/instrumentation/browser_sdk.md
@@ -0,0 +1,282 @@
+---
+stage: Analyze
+group: Analytics Instrumentation
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Browser SDK
+
+This SDK is for instrumenting websites and applications to send data for the GitLab [product analytics functionality](../index.md).
+
+## How to use the Browser SDK
+
+### Using the NPM package
+
+Add the NPM package to your `package.json` file using your preferred package manager:
+
+::Tabs
+
+:::TabTitle yarn
+
+```shell
+yarn add @gitlab/application-sdk-browser
+```
+
+:::TabTitle npm
+
+```shell
+npm i @gitlab/application-sdk-browser
+```
+
+::EndTabs
+
+Then, for browser usage, import the client SDK:
+
+```javascript
+import { glClientSDK } from '@gitlab/application-sdk-browser';
+
+this.glClient = glClientSDK({ appId, host });
+```
+
+### Using the script directly
+
+Add the script to the page and assign the client SDK to `window`:
+
+```html
+<script src="https://unpkg.com/@gitlab/application-sdk-browser/dist/gl-sdk.min.js"></script>
+<script>
+ window.glClient = window.glSDK.glClientSDK({
+ appId: 'YOUR_APP_ID',
+ host: 'YOUR_HOST',
+ });
+</script>
+```
+
+You can use a specific version of the SDK like this:
+
+```html
+<script src="https://unpkg.com/@gitlab/application-sdk-browser@0.2.5/dist/gl-sdk.min.js"></script>
+```
+
+## Browser SDK initialization options
+
+Apart from `appId` and `host`, you can configure the Browser SDK with the following options:
+
+```typescript
+interface GitLabClientSDKOptions {
+ appId: string;
+ host: string;
+ hasCookieConsent?: boolean;
+ respectGlobalPrivacyControl?: boolean;
+ trackerId?: string;
+ pagePingTracking?:
+ | boolean
+ | {
+ minimumVisitLength?: number;
+ heartbeatDelay?: number;
+ };
+ plugins?: AllowedPlugins;
+}
+```
+
+| Option | Description |
+| :---------------------------- | :---------- |
+| `appId` | The ID provided by the GitLab Project Analytics setup guide. This ID ensures your data is sent to your analytics instance. |
+| `host` | The GitLab Project Analytics instance provided by the setup guide. |
+| `hasCookieConsent` | Whether to use cookies to identify unique users and record their full IP address. Set to `false` by default. When `false`, users are considered anonymous users. No cookies or other storage mechanisms are used to identify users. |
+| `respectGlobalPrivacyControl` | Whether to respect the user's [GPC](https://globalprivacycontrol.org/) configuration to permit or refuse tracking. Set to `true` by default. When `false`, events are emitted regardless of user configuration. |
+| `trackerId` | Used to differentiate between multiple trackers running on the same page or application, because each tracker instance can be configured differently to capture different sets of data. This identifier helps ensure that the data sent to the collector is associated with the correct tracker configuration. Default value is `gitlab`. |
+| `pagePingTracking` | Option to track user engagement on your website or application by sending periodic events while a user is actively browsing a page. Page pings provide valuable insight into how users interact with your content, such as how long they spend on a page, which sections they are viewing, and whether they are scrolling. `pagePingTracking` can be boolean or an object. As a boolean, set to `true` it enables page ping with default options, and set to `false` it disables page ping tracking. As an object, it has two options: `minimumVisitLength` (the minimum time that must have elapsed before the first heartbeat) and `heartbeatDelay` (the interval at which the callback is fired). |
+| `plugins` | Specify which plugins to enable or disable. By default all plugins are enabled. |
+
+### Plugins
+
+- `Client Hints`: An alternative to tracking the User Agent, which is particularly useful in browsers that are freezing the User Agent string.
+Enabling this plugin automatically captures the following context:
+
+ For example,
+ [iglu:org.ietf/http_client_hints/jsonschema/1-0-0](https://github.com/snowplow/iglu-central/blob/master/schemas/org.ietf/http_client_hints/jsonschema/1-0-0)
+ has the following configuration:
+
+ ```json
+ {
+ "isMobile":false,
+ "brands":[
+ {
+ "brand":"Google Chrome",
+ "version":"89"
+ },
+ {
+ "brand":"Chromium",
+ "version":"89"
+ }
+ ]
+ }
+ ```
+
+- `Link Click Tracking`: With this plugin, the tracker adds click event listeners to all link elements. Link clicks are tracked as self-describing events. Each link-click event captures the link's `href` attribute. The event also has fields for the link's ID, classes, and target (where the linked document is opened, such as a new tab or new window).
+
+- `Performance Timing`: It collects performance-related data from a user's browser using the `Navigation Timing API`. This API provides detailed information about the various stages of loading a web page, such as domain lookup, connection time, content download, and rendering times. This plugin helps to gather insights into how well a website performs for users, identify potential performance bottlenecks, and improve the overall user experience.
+
+- `Error Tracking`: It helps to capture and track errors that occur on a website or application. By monitoring these errors, you can gain insights into potential issues with code or third-party libraries, which can help to improve the overall user experience, and maintain the quality of the website or application.
+
+By default all plugins are enabled. You can disable or enable these plugins through the `plugins` object:
+
+```typescript
+const tracker = glClientSDK({
+ ...options,
+ plugins: {
+ clientHints: true,
+ linkTracking: true,
+ performanceTiming: true,
+ errorTracking: true,
+ },
+});
+```
+
+## Methods
+
+### `identify`
+
+Used to associate a user and their attributes with the session and tracking events.
+
+```javascript
+glClient.identify(userId, userAttributes);
+```
+
+| Property | Type | Description |
+| :--------------- | :-------------------------- | :---------------------------------------------------------------------------- |
+| `userId` | `String` | The user identifier your application uses to identify individual users. |
+| `userAttributes` | `Object`/`Null`/`undefined` | The user attributes that need to be added to the session and tracking events. |
+
+### `page`
+
+Used to trigger a pageview event.
+
+```javascript
+glClient.page(eventAttributes);
+```
+
+| Property | Type | Description |
+| :---------------- | :-------------------------- | :---------------------------------------------------------------- |
+| `eventAttributes` | `Object`/`Null`/`undefined` | The event attributes that need to be added to the pageview event. |
+
+The `eventAttributes` object supports the following optional properties:
+
+| Property | Type | Description |
+| :--------------- | :-------------------------- | :---------------------------------------------------------------------------- |
+| `title` | `String` | Override the default page title. |
+| `contextCallback` | `Function` | A callback that fires on the page view. |
+| `context` | `Object` | Add context (additional information) on the page view. |
+| `timestamp` | `timestamp` | Set the true timestamp or overwrite the device-sent timestamp on an event. |
+
+### `track`
+
+Used to trigger a custom event.
+
+```javascript
+glClient.track(eventName, eventAttributes);
+```
+
+| Property | Type | Description |
+| :---------------- | :-------------------------- | :--------------------------------------------------------------- |
+| `eventName` | `String` | The name of the custom event. |
+| `eventAttributes` | `Object`/`Null`/`undefined` | The event attributes that need to be added to the tracked event. |
+
+### `refreshLinkClickTracking`
+
+`enableLinkClickTracking` tracks only clicks on links that exist when the page has loaded. To track new links added to the page after it has been loaded, use `refreshLinkClickTracking`.
+
+```javascript
+glClient.refreshLinkClickTracking();
+```
+
+### `trackError`
+
+NOTE:
+`trackError` is supported on the Browser SDK, but the resulting events are not used or available.
+
+Used to capture errors. This works only when the `errorTracking` plugin is enabled. The [plugin](#plugins) is enabled by default.
+
+```javascript
+glClient.trackError(eventAttributes);
+```
+
+For example, you can use `trackError` in a `try...catch` block:
+
+```javascript
+try {
+ // Call the function that throws an error
+ throwError();
+} catch (error) {
+ glClient.trackError({
+ message: error.message, // "This is a custom error"
+ filename: error.fileName || 'unknown', // The file in which the error occurred (e.g., "index.html")
+ lineno: error.lineNumber || 0, // The line number where the error occurred (e.g., 2)
+ colno: error.columnNumber || 0, // The column number where the error occurred (e.g., 6)
+ error: error, // The Error object itself
+ });
+}
+```
+
+| Property | Type | Description |
+| :---------------- | :------- | :------------------------------------------------------------------------------------------------------------------- |
+| `eventAttributes` | `Object` | The event attributes that need to be added to the tracked event. `message` is a mandatory key in `eventAttributes`. |
+
+### `addCookieConsent`
+
+`addCookieConsent` is used to allow tracking of user identifiers via cookies. By default, `hasCookieConsent` is `false`, and no user identifiers are passed. To enable tracking of user identifiers, call the `addCookieConsent` method. This step is not needed if you initialized the Browser SDK with `hasCookieConsent` set to `true`.
+
+```javascript
+glClient.addCookieConsent();
+```
+
+### `setCustomUrl`
+
+Used to set a custom URL for tracking.
+
+```javascript
+glClient.setCustomUrl(url);
+```
+
+| Property | Type | Description |
+| :------- | :------- | :------------------------------------------------ |
+| `url` | `String` | The custom URL that you want to set for tracking. |
+
+### `setReferrerUrl`
+
+Used to set a referrer URL for tracking.
+
+```javascript
+glClient.setReferrerUrl(url);
+```
+
+| Property | Type | Description |
+| :------- | :------- | :-------------------------------------------------- |
+| `url` | `String` | The referrer URL that you want to set for tracking. |
+
+### `setDocumentTitle`
+
+Used to override the document title.
+
+```javascript
+glClient.setDocumentTitle(title);
+```
+
+| Property | Type | Description |
+| :------- | :------- | :--------------------------------- |
+| `title` | `String` | The document title you want to set. |
+
+## Contribute
+
+If you would like to contribute to the Browser SDK, follow the [contributing guide](https://gitlab.com/gitlab-org/analytics-section/product-analytics/gl-application-sdk-js/-/blob/main/docs/Contributing.md).
+
+## Troubleshooting
+
+If the Browser SDK is not sending events or is behaving in an unexpected way, take the following actions:
+
+1. Verify that the `appId` and `host` values in the options object are correct.
+1. Check if any browser privacy settings, extensions, or ad blockers are interfering with the Browser SDK.
+
+For more information and assistance, see the [Snowplow documentation](https://docs.snowplow.io/docs/collecting-data/collecting-from-own-applications/javascript-trackers/browser-tracker/browser-tracker-v3-reference/)
+or contact the [Analytics Instrumentation team](https://about.gitlab.com/handbook/engineering/development/analytics/analytics-instrumentation/#team-members).
diff --git a/doc/user/product_analytics/instrumentation/index.md b/doc/user/product_analytics/instrumentation/index.md
new file mode 100644
index 00000000000..f909a01ff59
--- /dev/null
+++ b/doc/user/product_analytics/instrumentation/index.md
@@ -0,0 +1,15 @@
+---
+stage: Analyze
+group: Analytics Instrumentation
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Instrumentation
+
+To instrument an application to send events to GitLab product analytics, you can use one of the following language- and platform-specific tracking SDKs:
+
+- [Browser SDK](browser_sdk.md)
+- [Ruby SDK](https://gitlab.com/gitlab-org/analytics-section/product-analytics/gl-application-sdk-rb)
+- [Python SDK](https://gitlab.com/gitlab-org/analytics-section/product-analytics/gl-application-sdk-python)
+- [Node SDK](https://gitlab.com/gitlab-org/analytics-section/product-analytics/gl-application-sdk-node)
+- [.NET SDK](https://gitlab.com/gitlab-org/analytics-section/product-analytics/gl-application-sdk-dotnet)
diff --git a/doc/user/profile/account/delete_account.md b/doc/user/profile/account/delete_account.md
index d41eee911f9..70c12cbcf00 100644
--- a/doc/user/profile/account/delete_account.md
+++ b/doc/user/profile/account/delete_account.md
@@ -54,10 +54,9 @@ Using the **Delete user and contributions** option may result in removing more d
When deleting users, you can either:
-- Delete just the user. Not all associated records are deleted with the user. Instead of being deleted, these records
- are moved to a system-wide user with the username Ghost User. The Ghost User's purpose is to act as a container for
- such records. Any commits made by a deleted user still display the username of the original user.
- The user's personal projects are deleted, not moved to the Ghost User.
+- Delete just the user, but move contributions to a system-wide "Ghost User":
+  - The `@ghost` user acts as a container for all deleted users' contributions.
+  - The user's profile and personal projects are deleted instead of being moved to the Ghost User.
- Delete the user and their contributions, including:
- Abuse reports.
- Emoji reactions.
@@ -74,6 +73,9 @@ When deleting users, you can either:
[merge requests](../../project/merge_requests/index.md)
and [snippets](../../snippets.md).
+In both cases, commits retain the original [user information](https://git-scm.com/book/en/v2/Git-Internals-Git-Objects#_git_commit_objects),
+which preserves data integrity within the [Git repository](../../project/repository/index.md).
+
An alternative to deleting is [blocking a user](../../../administration/moderate_users.md#block-a-user).
When a user is deleted from an [abuse report](../../../administration/review_abuse_reports.md) or spam log, these associated
diff --git a/doc/user/profile/account/two_factor_authentication.md b/doc/user/profile/account/two_factor_authentication.md
index d1f1d28663e..d26f2193124 100644
--- a/doc/user/profile/account/two_factor_authentication.md
+++ b/doc/user/profile/account/two_factor_authentication.md
@@ -544,3 +544,9 @@ generates the codes. For example:
1. Select General.
1. Select Date & Time.
1. Enable Set Automatically. If it's already enabled, disable it, wait a few seconds, and re-enable.
+
+### Error: "Permission denied (publickey)" when regenerating recovery codes
+
+If you receive a `Permission denied (publickey)` error when attempting to [generate new recovery codes using an SSH key](#generate-new-recovery-codes-using-ssh)
+and you are using a non-default SSH key pair file path,
+you might need to [manually register your private SSH key](../../ssh.md#configure-ssh-to-point-to-a-different-directory) using `ssh-agent`.
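+
+For example, a sketch that assumes your key pair is stored at the non-default path `~/.ssh/custom/id_ed25519`:
+
+```shell
+# Start the agent if it is not already running, then register the private key
+eval "$(ssh-agent -s)"
+ssh-add ~/.ssh/custom/id_ed25519
+```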
diff --git a/doc/user/profile/comment_templates.md b/doc/user/profile/comment_templates.md
index 50df5f8fdb4..98fabdb0a35 100644
--- a/doc/user/profile/comment_templates.md
+++ b/doc/user/profile/comment_templates.md
@@ -10,10 +10,7 @@ type: howto
> - GraphQL support [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/352956) in GitLab 14.9 [with a flag](../../administration/feature_flags.md) named `saved_replies`. Disabled by default.
> - User interface [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/113232) in GitLab 15.10 [with a flag](../../administration/feature_flags.md) named `saved_replies`. Disabled by default. Enabled for GitLab team members only.
> - [Enabled on GitLab.com and self-managed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/119468) in GitLab 16.0.
-
-FLAG:
-On self-managed GitLab, by default this feature is available. To hide the feature, an administrator can [disable the feature flag](../../administration/feature_flags.md) named `saved_replies`.
-On GitLab.com, this feature is available.
+> - [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123363) in GitLab 16.6.
With comment templates, create and reuse text for any text area in:
@@ -25,7 +22,7 @@ With comment templates, create and reuse text for any text area in:
Comment templates can be small, like approving a merge request and unassigning yourself from it,
or large, like chunks of boilerplate text you use frequently:
-![Comment templates dropdown list](img/saved_replies_dropdown_v16_0.png)
+![Comment templates dropdown list](img/comment_template_v16_6.png)
## Use comment templates in a text area
@@ -65,4 +62,4 @@ To edit or delete a previously comment template:
1. On the left sidebar, select **Comment templates** (**{comment-lines}**).
1. Scroll to **My comment templates**, and identify the comment template you want to edit.
1. To edit, select **Edit** (**{pencil}**).
-1. To delete, select **Delete** (**{remove}**), then select **Delete** again from the modal window.
+1. To delete, select **Delete** (**{remove}**), then select **Delete** again on the dialog.
diff --git a/doc/user/profile/img/comment_template_v16_6.png b/doc/user/profile/img/comment_template_v16_6.png
new file mode 100644
index 00000000000..7990ca604ce
--- /dev/null
+++ b/doc/user/profile/img/comment_template_v16_6.png
Binary files differ
diff --git a/doc/user/profile/img/saved_replies_dropdown_v16_0.png b/doc/user/profile/img/saved_replies_dropdown_v16_0.png
deleted file mode 100644
index 4608484a496..00000000000
--- a/doc/user/profile/img/saved_replies_dropdown_v16_0.png
+++ /dev/null
Binary files differ
diff --git a/doc/user/profile/index.md b/doc/user/profile/index.md
index 6536a992292..64fa5d7b448 100644
--- a/doc/user/profile/index.md
+++ b/doc/user/profile/index.md
@@ -62,6 +62,10 @@ To add new email to your account:
1. Select **Add email address**.
1. Verify your email address with the verification email received.
+NOTE:
+[Making your email non-public](#set-your-public-email) does not prevent it from being used for commit matching,
+[project imports](../project/import/index.md), or [group migrations](../group/import/index.md).
+
## Make your user profile page private
You can make your user profile visible to only you and GitLab administrators.
@@ -128,6 +132,8 @@ to match your username.
## Add external accounts to your user profile page
+> Mastodon user account [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132892) in GitLab 16.6 [with a flag](../feature_flags.md) named `mastodon_social_ui`. Disabled by default. This feature is in [Beta](../../policy/experiment-beta-support.md#beta).
+
You can add links to certain other external accounts you might have, like Skype and Twitter.
They can help other users connect with you on other platforms.
@@ -138,6 +144,7 @@ To add links to other accounts:
1. In the **Main settings** section, add your:
- Discord [user ID](https://support.discord.com/hc/en-us/articles/206346498-Where-can-I-find-my-User-Server-Message-ID-).
- LinkedIn profile name.
+ - Mastodon username.
- Skype username.
- Twitter @username.
diff --git a/doc/user/profile/notifications.md b/doc/user/profile/notifications.md
index 706065d4693..8d34055d42c 100644
--- a/doc/user/profile/notifications.md
+++ b/doc/user/profile/notifications.md
@@ -9,6 +9,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
> - Enhanced email styling [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/78604) in GitLab 14.9 [with a feature flag](../../administration/feature_flags.md) named `enhanced_notify_css`. Disabled by default.
> - Enhanced email styling [enabled on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/355907) in GitLab 14.9.
> - Enhanced email styling [enabled on self-managed](https://gitlab.com/gitlab-org/gitlab/-/issues/355907) in GitLab 15.0.
+> - Product marketing emails [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/418137) in GitLab 16.6.
Stay informed about what's happening in GitLab with email notifications.
You can receive updates about activity in issues, merge requests, epics, and designs.
@@ -84,8 +85,6 @@ different values for a project or a group.
- **Notification email**: the email address your notifications are sent to.
Defaults to your primary email address.
-- **Receive product marketing emails**: select this checkbox to receive
- [periodic emails](#opt-out-of-product-marketing-emails) about GitLab features.
- **Global notification level**: the default [notification level](#notification-levels)
which applies to all your notifications.
- **Receive notifications about your own activity**: select this checkbox to receive
@@ -145,32 +144,6 @@ Or:
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
To learn how to be notified when a new release is available, watch [Notification for releases](https://www.youtube.com/watch?v=qyeNkGgqmH4).
-### Opt out of product marketing emails
-
-You can receive emails that teach you about various GitLab features.
-These emails are enabled by default.
-
-To opt out:
-
-1. On the left sidebar, select your avatar.
-1. Select **Preferences**.
-1. On the left sidebar, select **Notifications**.
-1. Clear the **Receive product marketing emails** checkbox.
- Edited settings are automatically saved and enabled.
-
-Disabling these emails does not disable all emails.
-Learn how to [opt out of all emails from GitLab](#opt-out-of-all-gitlab-emails).
-
-#### Self-managed product marketing emails **(FREE SELF)**
-
-The self-managed installation generates and automatically sends these emails based on user actions.
-Turning this on does not cause your GitLab instance or your company to send any personal information to
-GitLab Inc.
-
-An instance administrator can configure this setting for all users. If you choose to opt out, your
-setting overrides the instance-wide setting, even when an administrator later enables these emails
-for all users.
-
## Notification events
Users are notified of the following events:
@@ -348,7 +321,6 @@ If you no longer wish to receive any email notifications:
1. On the left sidebar, select your avatar.
1. Select **Preferences**.
1. On the left sidebar, select **Notifications**.
-1. Clear the **Receive product marketing emails** checkbox.
1. Set your **Global notification level** to **Disabled**.
1. Clear the **Receive notifications about your own activity** checkbox.
1. If you belong to any groups or projects, set their notification setting to **Global** or
diff --git a/doc/user/profile/personal_access_tokens.md b/doc/user/profile/personal_access_tokens.md
index 9135a142612..a953a878cc9 100644
--- a/doc/user/profile/personal_access_tokens.md
+++ b/doc/user/profile/personal_access_tokens.md
@@ -137,6 +137,42 @@ Personal access tokens expire on the date you define, at midnight, 00:00 AM UTC.
- In GitLab Ultimate, administrators can
[limit the allowable lifetime of access tokens](../../administration/settings/account_and_limit_settings.md#limit-the-lifetime-of-access-tokens). If not set, the maximum allowable lifetime of a personal access token is 365 days.
- In GitLab Free and Premium, the maximum allowable lifetime of a personal access token is 365 days.
+- If you do not set an expiry date when creating a personal access token, the expiry date is set to the
+ [maximum allowed lifetime for the token](../../administration/settings/account_and_limit_settings.md#limit-the-lifetime-of-access-tokens).
+ If the maximum allowed lifetime is not set, the default expiry date is 365 days from the date of creation.
+
+### Service accounts
+
+You can [create a personal access token for a service account](../../api/groups.md#create-personal-access-token-for-service-account-user) with no expiry date.
+
+NOTE:
+Allowing personal access tokens for service accounts to be created with no expiry date only affects tokens created after you change this setting. It does not affect existing tokens.
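+
+For reference, the request below is a minimal sketch of that API call. The group ID, service
+account user ID, token name, and scopes are illustrative placeholders; see the linked API
+documentation for the exact parameters. Omitting `expires_at` relies on the expiry behavior
+described above.
+
+```shell
+# Create a personal access token for a service account user without an explicit expiry date.
+# Replace 42 (group ID), 199 (service account user ID), and the token values with your own.
+curl --request POST \
+  --header "PRIVATE-TOKEN: <your_access_token>" \
+  --data "name=service-account-token" \
+  --data "scopes[]=api" \
+  "https://gitlab.example.com/api/v4/groups/42/service_accounts/199/personal_access_tokens"
+```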
+
+#### GitLab.com
+
+Prerequisite:
+
+- You must have the Owner role in the top-level group.
+
+1. On the left sidebar, select **Search or go to** and find your group.
+1. Select **Settings > Permissions and group features**.
+1. Clear the **Service account token expiration** checkbox.
+
+You can now create personal access tokens for a service account user with no expiry date.
+
+#### Self-managed GitLab
+
+Prerequisite:
+
+- You must be an administrator for your self-managed instance.
+
+1. On the left sidebar, select **Search or go to**.
+1. Select **Admin Area**.
+1. Select **Settings > General**.
+1. Expand **Account and limit**.
+1. Clear the **Service account token expiration** checkbox.
+
+You can now create personal access tokens for a service account user with no expiry date.
## Create a personal access token programmatically **(FREE SELF)**
diff --git a/doc/user/profile/preferences.md b/doc/user/profile/preferences.md
index 170545d851f..34f083e0b48 100644
--- a/doc/user/profile/preferences.md
+++ b/doc/user/profile/preferences.md
@@ -268,6 +268,22 @@ To use exact times on the GitLab UI:
1. Clear the **Use relative times** checkbox.
1. Select **Save changes**.
+### Customize time format
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/15206) in GitLab 16.6.
+
+You can customize the format used to display times of activities on your group and project overview pages and user profiles. You can display times as:
+
+- 12-hour format. For example: `2:34 PM`.
+- 24-hour format. For example: `14:34`.
+
+To customize the time format:
+
+1. On the left sidebar, select your avatar.
+1. Select **Preferences** > **Time preferences**.
+1. In **Time format**, select either the **12-hour** or **24-hour** option.
+1. Select **Save changes**.
+
## User identities in CI job JSON web tokens
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/387537) in GitLab 16.0.
diff --git a/doc/user/profile/service_accounts.md b/doc/user/profile/service_accounts.md
index 6bb96b9c552..8fa0067f150 100644
--- a/doc/user/profile/service_accounts.md
+++ b/doc/user/profile/service_accounts.md
@@ -53,6 +53,8 @@ Prerequisite:
You define the scopes for the service account by [setting the scopes for the personal access token](personal_access_tokens.md#personal-access-token-scopes).
+ Optional. You can [create a personal access token with no expiry date](personal_access_tokens.md#when-personal-access-tokens-expire).
+
The response includes the personal access token value.
1. Make this service account a group or project member by [manually adding the service account user to the group or project](#add-a-service-account-to-subgroup-or-project).
@@ -74,6 +76,8 @@ Prerequisite:
You define the scopes for the service account by [setting the scopes for the personal access token](personal_access_tokens.md#personal-access-token-scopes).
+ Optional. You can [create a personal access token with no expiry date](personal_access_tokens.md#when-personal-access-tokens-expire).
+
The response includes the personal access token value.
1. Make this service account a group or project member by
diff --git a/doc/user/project/codeowners/index.md b/doc/user/project/codeowners/index.md
index d783471f0da..0fa9983e93b 100644
--- a/doc/user/project/codeowners/index.md
+++ b/doc/user/project/codeowners/index.md
@@ -54,6 +54,10 @@ GitLab shows the Code Owners at the top of the page.
## Set up Code Owners
+Prerequisites:
+
+- You must be able to either push to the default branch or create a merge request.
+
1. Create a `CODEOWNERS` file in your [preferred location](#codeowners-file).
1. Define some rules in the file following the [Code Owners syntax reference](reference.md).
Some suggestions:
@@ -145,7 +149,7 @@ of the merge request becomes optional.
Inviting **Subgroup Y** to a parent group of **Project A**
[is not supported](https://gitlab.com/gitlab-org/gitlab/-/issues/288851). To set **Subgroup Y** as
-Code Owners [invite this group directly to the project](#inviting-subgroups-to-projects-in-parent-groups) itself.
+Code Owners, [invite this group directly to the project](#inviting-subgroups-to-projects-in-parent-groups) itself.
NOTE:
For approval to be required, groups as Code Owners must have a direct membership
@@ -196,7 +200,7 @@ You can organize Code Owners by putting them into named sections.
You can use sections for shared directories, so that multiple
teams can be reviewers.
-To add a section to the `CODEOWNERS` file, enter a section name in brackets,
+To add a section to the `CODEOWNERS` file, enter a section name in square brackets,
followed by the files or directories, and users, groups, or subgroups:
```plaintext
@@ -206,7 +210,7 @@ internal/README.md @user2
```
Each Code Owner in the merge request widget is listed under a label.
-The following image shows a **Groups** and **Documentation** section:
+The following image shows **Groups** and **Documentation** sections:
![MR widget - Sectional Code Owners](../img/sectional_code_owners_v13.2.png)
@@ -221,7 +225,9 @@ All paths in that section inherit this default, unless you override the section
default on a specific line.
Default owners are applied when specific owners are not specified for file paths.
-Specific owners defined beside the file path override default owners:
+Specific owners defined beside the file path override default owners.
+
+For example:
```plaintext
[Documentation] @docs-team
@@ -259,8 +265,8 @@ config/db/database-setup.md @docs-team
#### Use regular entries and sections together
-If you set a default Code Owner for a path outside a section, their approval is always required, and
-the entry isn't overridden.
+If you set a default Code Owner for a path **outside a section**, their approval is always required.
+Such entries aren't overridden by sections.
Entries without sections are treated as if they were another, unnamed section:
```plaintext
@@ -287,7 +293,7 @@ In this example:
of the `@general-approvers`,`@docs-team`, and `@database-team` groups.
Compare this behavior to when you use only [default owners for sections](#set-default-owner-for-a-section),
-when specific entries within a section override the section default.
+when specific entries in a section override the section default.
#### Sections with duplicate names
@@ -313,13 +319,14 @@ entries under **Database**. The entries defined under the sections **Documentati
#### Make a Code Owners section optional
-You can designate optional sections in your Code Owners file. Prepend the
-section name with the caret `^` character to treat the entire section as optional.
+You can designate optional sections in your Code Owners file.
Optional sections enable you to designate responsible parties for various parts
of your codebase, but not require approval from them. This approach provides
a more relaxed policy for parts of your project that are frequently updated,
but don't require stringent reviews.
+To treat the entire section as optional, prepend the section name with the caret `^` character.
+
In this example, the `[Go]` section is optional:
```plaintext
@@ -333,7 +340,7 @@ In this example, the `[Go]` section is optional:
*.go @root
```
-The optional Code Owners section displays in merge requests under the **Approval Rules** area:
+The optional Code Owners section displays in merge requests under the description:
![MR widget - Optional Code Owners sections](../img/optional_code_owners_sections_v13_8.png)
@@ -348,18 +355,25 @@ section is marked as optional.
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/335451) in GitLab 15.9.
-You can require multiple approvals for the Code Owners sections under the Approval Rules area in merge requests.
-Append the section name with a number `n` in brackets. This requires `n` approvals from the Code Owners in this section.
+You can require multiple approvals for the Code Owners sections in the Approvals area in merge requests.
+After the section name, add a number `n` in brackets, for example, `[2]` or `[3]`.
+This requires `n` approvals from the Code Owners in this section.
Valid entries for `n` are integers `≥ 1`. `[1]` is optional because it is the default. Invalid values for `n` are treated as `1`.
WARNING:
-[Issue #384881](https://gitlab.com/gitlab-org/gitlab/-/issues/385881) proposes changes
+[Issue 384881](https://gitlab.com/gitlab-org/gitlab/-/issues/385881) proposes changes
to the behavior of this setting. Do not intentionally set invalid values. They may
-become valid in the future, and cause unexpected behavior.
+become valid in the future and cause unexpected behavior.
+
+To require multiple approvals from Code Owners:
-Make sure you enabled `Require approval from code owners` in `Settings > Repository > Protected branches`, otherwise the Code Owner approvals are optional.
+1. On the left sidebar, select **Search or go to** and find your project.
+1. Select **Settings > Repository**.
+1. Expand **Protected branches**.
+1. Next to the default branch, turn on the toggle under **Code owner approval**.
+1. Edit the `CODEOWNERS` file to add a rule for multiple approvals.
-In this example, the `[Documentation]` section requires 2 approvals:
+For example, to require two approvals for the `[Documentation]` section:
```plaintext
[Documentation][2]
@@ -369,7 +383,7 @@ In this example, the `[Documentation]` section requires 2 approvals:
*.rb @dev-team
```
-The `Documentation` Code Owners section under the **Approval Rules** area displays 2 approvals are required:
+The `Documentation` Code Owners section in the Approvals area shows that two approvals are required:
![MR widget - Multiple Approval Code Owners sections](../img/multi_approvals_code_owners_sections_v15_9.png)
@@ -377,7 +391,7 @@ The `Documentation` Code Owners section under the **Approval Rules** area displa
Users who are **Allowed to push** can choose to create a merge request
for their changes, or push the changes directly to a branch. If the user
-skips the merge request process, the protected-branch features
+skips the merge request process, the protected branch features
and Code Owner approvals built into merge requests are also skipped.
This permission is often granted to accounts associated with
diff --git a/doc/user/project/deploy_tokens/index.md b/doc/user/project/deploy_tokens/index.md
index 8b7e185508b..351762228fb 100644
--- a/doc/user/project/deploy_tokens/index.md
+++ b/doc/user/project/deploy_tokens/index.md
@@ -88,7 +88,8 @@ Create a deploy token to automate deployment tasks that can run independently of
Prerequisites:
-- You must have at least the Maintainer role for the project or group.
+- To create a group deploy token, you must have the Owner role for the group.
+- To create a project deploy token, you must have at least the Maintainer role for the project.
1. On the left sidebar, select **Search or go to** and find your project or group.
1. Select **Settings > Repository**.
@@ -106,7 +107,8 @@ Revoke a token when it's no longer required.
Prerequisites:
-- You must have at least the Maintainer role for the project or group.
+- To revoke a group deploy token, you must have the Owner role for the group.
+- To revoke a project deploy token, you must have at least the Maintainer role for the project.
To revoke a deploy token:
diff --git a/doc/user/project/import/github.md b/doc/user/project/import/github.md
index 4da756b05ea..f9b94774809 100644
--- a/doc/user/project/import/github.md
+++ b/doc/user/project/import/github.md
@@ -17,11 +17,10 @@ The namespace is a user or group in GitLab, such as `gitlab.com/sidney-jones` or
`gitlab.com/customer-success`. You can use bulk actions in the rails console to move projects to
different namespaces.
-- If you are importing to a self-managed GitLab instance, you can use the [GitHub Rake task](../../../administration/raketasks/github_import.md) instead. The
- Rake task imports projects without the constraints of a [Sidekiq](../../../development/sidekiq/index.md) worker.
-- If you are importing from GitHub Enterprise to GitLab.com, use the
- [GitLab Import API](../../../api/import.md#import-repository-from-github) GitHub endpoint instead. This allows you to provide a different domain to import the project from.
- Using the UI, the GitHub importer always imports from the `github.com` domain.
+If you are importing from GitHub Enterprise to GitLab.com, use the
+[GitLab Import API](../../../api/import.md#import-repository-from-github) GitHub endpoint instead. The API allows you to
+provide a different domain to import the project from. Using the UI, the GitHub importer always imports from the
+`github.com` domain.
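+
+For reference, a hedged sketch of that API call. The repository ID, namespace, token values, and
+`github_hostname` are illustrative placeholders, and the exact parameter names (in particular
+`github_hostname`) should be verified against the linked Import API documentation for your GitLab
+version.
+
+```shell
+# Import a repository from a GitHub Enterprise instance into GitLab.com.
+# All IDs, tokens, and hostnames below are placeholders.
+curl --request POST \
+  --header "PRIVATE-TOKEN: <your_gitlab_token>" \
+  --header "Content-Type: application/json" \
+  --data '{
+    "personal_access_token": "<your_github_token>",
+    "repo_id": 12345,
+    "target_namespace": "customer-success",
+    "new_name": "imported-project",
+    "github_hostname": "https://github.example.com"
+  }' \
+  "https://gitlab.com/api/v4/import/github"
+```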
When importing projects:
@@ -123,9 +122,10 @@ The [GitHub integration method (above)](#use-the-github-integration) is recommen
If you are not using the GitHub integration, you can still perform an authorization with GitHub to grant GitLab access your repositories:
-1. Go to <https://github.com/settings/tokens/new>
+1. Go to `https://github.com/settings/tokens/new`.
1. Enter a token description.
-1. Select the repository scope.
+1. Select the `repo` scope.
+1. Optional. To [import collaborators](#select-additional-items-to-import), select the `read:org` scope.
1. Select **Generate token**.
1. Copy the token hash.
1. Go back to GitLab and provide the token to the GitHub importer.
diff --git a/doc/user/project/import/jira.md b/doc/user/project/import/jira.md
index b2092082bf8..921669e4b70 100644
--- a/doc/user/project/import/jira.md
+++ b/doc/user/project/import/jira.md
@@ -23,8 +23,8 @@ GitLab imports the following information directly:
Other Jira issue metadata that is not formally mapped to GitLab issue fields is
imported into the GitLab issue's description as plain text.
-Our parser for converting text in Jira issues to GitLab Flavored Markdown is only compatible with
-Jira V3 REST API.
+Text in Jira issues is not parsed into GitLab Flavored Markdown, which can result in broken text formatting.
+For more information, see [issue 379104](https://gitlab.com/gitlab-org/gitlab/-/issues/379104).
There is an [epic](https://gitlab.com/groups/gitlab-org/-/epics/2738) tracking the addition of issue assignees, comments, and much more in the future
iterations of the GitLab Jira importer.
diff --git a/doc/user/project/index.md b/doc/user/project/index.md
index b60d87adbd3..9ee1e33ecdd 100644
--- a/doc/user/project/index.md
+++ b/doc/user/project/index.md
@@ -143,19 +143,24 @@ To push your repository and create a project:
1. Push with SSH or HTTPS:
- To push with SSH:
- ```shell
- git push --set-upstream git@gitlab.example.com:namespace/myproject.git master
- ```
+ ```shell
+ # Use this version if your project uses the standard port 22
+      git push --set-upstream git@gitlab.example.com:namespace/myproject.git main
+
+      # Use this version if your project requires a non-standard port number
+      git push --set-upstream ssh://git@gitlab.example.com:00/namespace/myproject.git main
+ ```
- To push with HTTPS:
- ```shell
- git push --set-upstream https://gitlab.example.com/namespace/myproject.git master
- ```
+ ```shell
+      git push --set-upstream https://gitlab.example.com/namespace/myproject.git main
+ ```
- For `gitlab.example.com`, use the domain name of the machine that hosts your Git repository.
- For `namespace`, use the name of your [namespace](../namespace/index.md).
- For `myproject`, use the name of your project.
+ - If specifying a port, change `00` to your project's required port number.
- Optional. To export existing repository tags, append the `--tags` flag to your `git push` command.
1. Optional. To configure the remote:
diff --git a/doc/user/project/integrations/aws_codepipeline.md b/doc/user/project/integrations/aws_codepipeline.md
index b081544199e..5404101b4f6 100644
--- a/doc/user/project/integrations/aws_codepipeline.md
+++ b/doc/user/project/integrations/aws_codepipeline.md
@@ -1,6 +1,6 @@
---
-stage: Manage
-group: Import and Integrate
+stage: none
+group: unassigned
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
diff --git a/doc/user/project/integrations/gitlab_slack_application.md b/doc/user/project/integrations/gitlab_slack_application.md
index 6f70305ce8b..abfd4243e07 100644
--- a/doc/user/project/integrations/gitlab_slack_application.md
+++ b/doc/user/project/integrations/gitlab_slack_application.md
@@ -74,7 +74,8 @@ You can use slash commands to run common GitLab operations. Replace `<project>`
- You must authorize your Slack user on GitLab.com when you run your first slash command.
- You can [create a shorter project alias](#create-a-project-alias-for-slash-commands) for slash commands.
-**For [Slack slash commands](slack_slash_commands.md) on self-managed GitLab, [Mattermost slash commands](mattermost_slash_commands.md), and [ChatOps](../../../ci/chatops/index.md)**, replace `/gitlab` with the slash command trigger name configured for your integration.
+**For [Slack slash commands](slack_slash_commands.md) on self-managed GitLab and [Mattermost slash commands](mattermost_slash_commands.md)**,
+replace `/gitlab` with the slash command trigger name configured for your integration.
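+
+For example, if your integration's slash command trigger is `/gitlab-example` (an illustrative
+name), the equivalent of `/gitlab <project> issue show <id>` is:
+
+```plaintext
+/gitlab-example <project> issue show <id>
+```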
The following slash commands are available:
@@ -172,7 +173,11 @@ The following events are available for Slack notifications:
## Troubleshooting
-### GitLab for Slack app does not appear in the list of integrations
+When configuring the GitLab for Slack app on GitLab.com, you might encounter the following issues.
+
+For self-managed GitLab, see [GitLab for Slack app administration](../../../administration/settings/slack_app.md#troubleshooting).
+
+### The app does not appear in the list of integrations
The GitLab for Slack app might not appear in the list of integrations. To have the GitLab for Slack app on your self-managed instance, an administrator must [enable the integration](../../../administration/settings/slack_app.md). On GitLab.com, the GitLab for Slack app is available by default.
@@ -193,9 +198,10 @@ As a workaround, ensure:
- If using a [project alias](#create-a-project-alias-for-slash-commands), the alias is correct.
- The GitLab for Slack app is [enabled for the project](#from-project-integration-settings).
-### Slash commands return `/gitlab failed with the error "dispatch_failed"` in Slack
+### Slash commands return an error in Slack
-Slash commands might return `/gitlab failed with the error "dispatch_failed"` in Slack. To resolve this issue, ensure an administrator has properly configured the [GitLab for Slack app settings](../../../administration/settings/slack_app.md) on your self-managed instance.
+Slash commands might return `/gitlab failed with the error "dispatch_failed"` in Slack.
+To resolve this issue, ensure an administrator has properly configured the [GitLab for Slack app settings](../../../administration/settings/slack_app.md) on your self-managed instance.
### Notifications are not received to a channel
diff --git a/doc/user/project/issues/associate_zoom_meeting.md b/doc/user/project/issues/associate_zoom_meeting.md
index bb8f0ccd186..e112c5ebd0d 100644
--- a/doc/user/project/issues/associate_zoom_meeting.md
+++ b/doc/user/project/issues/associate_zoom_meeting.md
@@ -30,7 +30,7 @@ a system alert notifies you of its successful addition.
The issue's description is automatically edited to include the Zoom link, and a button
appears right under the issue's title.
-![Link Zoom Call in Issue](img/zoom-quickaction-button.png)
+![Link Zoom Call in Issue](img/zoom_quickaction_button_v16_6.png)
You are only allowed to attach a single Zoom meeting to an issue. If you attempt
to add a second Zoom meeting using the `/zoom` quick action, it doesn't work. You
diff --git a/doc/user/project/issues/img/zoom-quickaction-button.png b/doc/user/project/issues/img/zoom-quickaction-button.png
deleted file mode 100644
index 3be4f36f88f..00000000000
--- a/doc/user/project/issues/img/zoom-quickaction-button.png
+++ /dev/null
Binary files differ
diff --git a/doc/user/project/issues/img/zoom_quickaction_button_v16_6.png b/doc/user/project/issues/img/zoom_quickaction_button_v16_6.png
new file mode 100644
index 00000000000..cf869b59714
--- /dev/null
+++ b/doc/user/project/issues/img/zoom_quickaction_button_v16_6.png
Binary files differ
diff --git a/doc/user/project/issues/issue_weight.md b/doc/user/project/issues/issue_weight.md
index b1a1390d3d2..ddd08ee1de0 100644
--- a/doc/user/project/issues/issue_weight.md
+++ b/doc/user/project/issues/issue_weight.md
@@ -10,7 +10,8 @@ info: To determine the technical writer assigned to the Stage/Group associated w
When you have a lot of issues, it can be hard to get an overview.
With weighted issues, you can get a better idea of how much time,
-value, or complexity a given issue has or costs.
+value, or complexity a given issue has or costs. You can also [sort by weight](sorting_issue_lists.md#sorting-by-weight)
+to see which issues need to be prioritized.
## View the issue weight
diff --git a/doc/user/project/members/index.md b/doc/user/project/members/index.md
index 901a8fe9850..6df33a4fb06 100644
--- a/doc/user/project/members/index.md
+++ b/doc/user/project/members/index.md
@@ -190,6 +190,7 @@ To add a group to a project:
1. Select **Invite**.
The members of the group are not displayed on the **Members** tab.
+Private groups are masked from unauthorized users.
The **Members** tab shows:
- Members who are directly assigned to the project.
diff --git a/doc/user/project/members/share_project_with_groups.md b/doc/user/project/members/share_project_with_groups.md
index deefe9040fa..94dbb922c0b 100644
--- a/doc/user/project/members/share_project_with_groups.md
+++ b/doc/user/project/members/share_project_with_groups.md
@@ -76,6 +76,11 @@ In addition:
- On the group's page, the project is listed on the **Shared projects** tab.
- On the project's **Members** page, the group is listed on the **Groups** tab.
+- From [GitLab 16.6](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134623),
+  the invited group's name and membership source are masked unless one of the following is true:
+  - The group is public.
+  - The current user is a member of the group.
+  - The current user is a member of the project.
- Each user is assigned a maximum role.
- Members who have the **Project Invite** badge next to their profile on the usage quota page count towards the billable members of the shared project's top-level group.
diff --git a/doc/user/project/merge_requests/ai_in_merge_requests.md b/doc/user/project/merge_requests/ai_in_merge_requests.md
index c29060bf44b..2b4b28dafa2 100644
--- a/doc/user/project/merge_requests/ai_in_merge_requests.md
+++ b/doc/user/project/merge_requests/ai_in_merge_requests.md
@@ -14,7 +14,7 @@ Additional information on enabling these features and maturity can be found in o
> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/10591) in GitLab 16.3 as an [Experiment](../../../policy/experiment-beta-support.md#experiment).
-This feature is an [Experiment](../../../policy/experiment-beta-support.md) on GitLab.com that is using Google's Vertex service and the `text-bison` model. It requires the [group-level third-party AI features setting](../../group/manage.md#enable-third-party-ai-features) to be enabled.
+This feature is an [Experiment](../../../policy/experiment-beta-support.md) on GitLab.com.
Merge requests in projects often have [templates](../description_templates.md#create-a-merge-request-template) defined that need to be filled out. This helps reviewers and other users understand the purpose and changes a merge request might propose.
@@ -40,7 +40,7 @@ Provide feedback on this experimental feature in [issue 416537](https://gitlab.c
> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/10401) in GitLab 16.2 as an [Experiment](../../../policy/experiment-beta-support.md#experiment).
-This feature is an [Experiment](../../../policy/experiment-beta-support.md) on GitLab.com that is using Google's Vertex service and the `text-bison` model. It requires the [group-level third-party AI features setting](../../group/manage.md#enable-third-party-ai-features) to be enabled.
+This feature is an [Experiment](../../../policy/experiment-beta-support.md) on GitLab.com.
GitLab Duo Merge request summaries are available on the merge request page in:
@@ -56,7 +56,7 @@ Provide feedback on this experimental feature in [issue 408726](https://gitlab.c
> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/10466) in GitLab 16.0 as an [Experiment](../../../policy/experiment-beta-support.md#experiment).
-This feature is an [Experiment](../../../policy/experiment-beta-support.md) on GitLab.com that is using Google's Vertex service and the `text-bison` model. It requires the [group-level third-party AI features setting](../../group/manage.md#enable-third-party-ai-features) to be enabled.
+This feature is an [Experiment](../../../policy/experiment-beta-support.md) on GitLab.com.
When you've completed your review of a merge request and are ready to [submit your review](reviews/index.md#submit-a-review), generate a GitLab Duo Code review summary:
@@ -78,7 +78,7 @@ Provide feedback on this experimental feature in [issue 408991](https://gitlab.c
> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/10453) in GitLab 16.2 as an [Experiment](../../../policy/experiment-beta-support.md#experiment).
-This feature is an [Experiment](../../../policy/experiment-beta-support.md) on GitLab.com that is using Google's Vertex service and the `text-bison` model. It requires the [group-level third-party AI features setting](../../group/manage.md#enable-third-party-ai-features) to be enabled.
+This feature is an [Experiment](../../../policy/experiment-beta-support.md) on GitLab.com.
When preparing to merge your merge request you may wish to edit the proposed squash or merge commit message.
@@ -99,7 +99,7 @@ Provide feedback on this experimental feature in [issue 408994](https://gitlab.c
> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/10366) in GitLab 16.0 as an [Experiment](../../../policy/experiment-beta-support.md#experiment).
-This feature is an [Experiment](../../../policy/experiment-beta-support.md) on GitLab.com that is using Google's Vertex service and the `code-bison` model. It requires the [group-level third-party AI features setting](../../group/manage.md#enable-third-party-ai-features) to be enabled.
+This feature is an [Experiment](../../../policy/experiment-beta-support.md) on GitLab.com.
Use GitLab Duo Test generation in a merge request to see a list of suggested tests for the file you are reviewing. This functionality can help determine if appropriate test coverage has been provided, or if you need more coverage for your project.
diff --git a/doc/user/project/merge_requests/approvals/settings.md b/doc/user/project/merge_requests/approvals/settings.md
index ae16eb2a790..3be546faabe 100644
--- a/doc/user/project/merge_requests/approvals/settings.md
+++ b/doc/user/project/merge_requests/approvals/settings.md
@@ -29,8 +29,8 @@ These settings limit who can approve merge requests:
Prevents users who add commits to a merge request from also approving it.
- [**Prevent editing approval rules in merge requests**](#prevent-editing-approval-rules-in-merge-requests):
Prevents users from overriding project level approval rules on merge requests.
-- [**Require user password to approve**](#require-user-password-to-approve):
- Force potential approvers to first authenticate with a password.
+- [**Require user re-authentication (password or SAML) to approve**](#require-user-re-authentication-to-approve):
+ Force potential approvers to first authenticate with either a password or with SAML.
- Code Owner approval removals: Define what happens to existing approvals when
commits are added to the merge request.
- **Keep approvals**: Do not remove any approvals.
@@ -104,20 +104,29 @@ on merge requests, you can disable this setting:
This change affects all open merge requests.
-## Require user password to approve
+## Require user re-authentication to approve
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/5981) in GitLab 12.0.
> - Moved to GitLab Premium in 13.9.
+> - SAML authentication for GitLab.com groups [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/5981) in GitLab 16.6.
-You can force potential approvers to first authenticate with a password. This
+You can force potential approvers to first authenticate with either:
+
+- A password.
+- SAML. Available on GitLab.com groups only.
+
+This
permission enables an electronic signature for approvals, such as the one defined by
[Code of Federal Regulations (CFR) Part 11](https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?CFRPart=11&showFR=1&subpartNode=21:1.0.1.1.8.3)):
-1. Enable password authentication for the web interface, as described in the
- [sign-in restrictions documentation](../../../../administration/settings/sign_in_restrictions.md#password-authentication-enabled).
+1. Enable password authentication or, for GitLab.com groups, SAML authentication. For more information, see:
+   - [Sign-in restrictions](../../../../administration/settings/sign_in_restrictions.md#password-authentication-enabled)
+     for password authentication.
+   - [SAML SSO for GitLab.com groups](../../../../user/group/saml_sso/index.md) for SAML authentication.
1. On the left sidebar, select **Settings > Merge requests**.
1. In the **Merge request approvals** section, scroll to **Approval settings** and
- select **Require user password to approve**.
+ select **Require user re-authentication (password or SAML) to approve**.
1. Select **Save changes**.
## Remove all approvals when commits are added to the source branch
diff --git a/doc/user/project/merge_requests/cherry_pick_changes.md b/doc/user/project/merge_requests/cherry_pick_changes.md
index ef1554f3b86..af76aa100c1 100644
--- a/doc/user/project/merge_requests/cherry_pick_changes.md
+++ b/doc/user/project/merge_requests/cherry_pick_changes.md
@@ -50,7 +50,18 @@ Commit `G` is added after the cherry-pick.
## Cherry-pick all changes from a merge request
After a merge request is merged, you can cherry-pick all changes introduced
-by the merge request:
+by the merge request.
+
+Prerequisites:
+
+- You must have a role in the project that allows you to edit merge requests, and add
+ code to the repository.
+- Your project must use the [merge method](methods/index.md#fast-forward-merge) **Merge Commit**,
+ which is set in the project's **Settings > Merge requests**. Fast-forwarded commits
+ can't be cherry-picked from the GitLab UI, but the individual commits can
+ [still be cherry-picked](#cherry-pick-a-single-commit).
+
+To do this:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Code > Merge requests**, and find your merge request.
diff --git a/doc/user/project/merge_requests/dependencies.md b/doc/user/project/merge_requests/dependencies.md
index 89305e65dfb..8fb5230c497 100644
--- a/doc/user/project/merge_requests/dependencies.md
+++ b/doc/user/project/merge_requests/dependencies.md
@@ -145,6 +145,12 @@ information, read [issue #12549](https://gitlab.com/gitlab-org/gitlab/-/issues/1
### Complex merge order dependencies are unsupported
+> - Support [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/11393) in GitLab 16.6 [with a flag](../../../administration/feature_flags.md) named `remove_mr_blocking_constraints`. Disabled by default.
+
+FLAG:
+On self-managed GitLab, by default this feature is not available. To make it available, an administrator can [enable the feature flag](../../../administration/feature_flags.md) named `remove_mr_blocking_constraints`.
+On GitLab.com, this feature is available.
+
If you attempt to create an indirect, nested dependency, GitLab shows the error message:
- Dependencies failed to save: Dependency chains are not supported
diff --git a/doc/user/project/merge_requests/drafts.md b/doc/user/project/merge_requests/drafts.md
index 85ebc75e61f..a3b1920e375 100644
--- a/doc/user/project/merge_requests/drafts.md
+++ b/doc/user/project/merge_requests/drafts.md
@@ -7,22 +7,19 @@ type: reference, concepts
# Draft merge requests **(FREE ALL)**
-If a merge request isn't ready to merge, potentially because of continued development
-or open threads, you can prevent it from being accepted before you
-[mark it as ready](#mark-merge-requests-as-ready). Flag it as a draft to disable
-the **Merge** button until you remove the **Draft** flag:
+If a merge request isn't ready to merge, you can block it from merging until you
+[mark it as ready](#mark-merge-requests-as-ready). Merge requests marked as **Draft**
+cannot merge until the **Draft** flag is removed, even if all other merge criteria are met:
-![Blocked Merge Button](img/merge_request_draft_blocked_v16_0.png)
+![merge is blocked](img/merge_request_draft_blocked_v16_0.png)
## Mark merge requests as drafts
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/32692) in GitLab 13.2, Work-In-Progress (WIP) merge requests were renamed to **Draft**.
-> - [Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/228685) all support for using **WIP** in GitLab 14.8.
-> - **Mark as draft** and **Mark as ready** buttons [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/227421) in GitLab 13.5.
+> - [Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/228685) all support for the term **WIP** in GitLab 14.8.
> `/draft` quick action as a toggle [deprecated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/92654) in GitLab 15.4.
> - [Changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/108073) the draft status to use a checkbox in GitLab 15.8.
-There are several ways to flag a merge request as a draft:
+You can flag a merge request as a draft in several ways:
- **Viewing a merge request**: In the upper-right corner of the merge request, select **Mark as draft**.
- **Creating or editing a merge request**: Add `[Draft]`, `Draft:` or `(Draft)` to
@@ -33,12 +30,12 @@ There are several ways to flag a merge request as a draft:
in a comment. To mark a merge request as ready, use `/ready`.
- **Creating a commit**: Add `draft:`, `Draft:`, `fixup!`, or `Fixup!` to the
beginning of a commit message targeting the merge request's source branch. This
- is not a toggle, and adding this text again in a later commit doesn't mark the
+ method is not a toggle. Adding this text again in a later commit doesn't mark the
merge request as ready.
## Mark merge requests as ready
-When a merge request is ready to be merged, you can remove the `Draft` flag in several ways:
+When a merge request is ready to merge, you can remove the `Draft` flag in several ways:
- **Viewing a merge request**: In the upper-right corner of the merge request, select **Mark as ready**.
Users with at least the Developer role
@@ -50,18 +47,18 @@ When a merge request is ready to be merged, you can remove the `Draft` flag in s
[quick action](../quick_actions.md#issues-merge-requests-and-epics)
in a comment in the merge request.
-In [GitLab 13.10 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/15332),
-when you mark a merge request as ready, notifications are triggered to
-[merge request participants and watchers](../../profile/notifications.md#notifications-on-issues-merge-requests-and-epics).
+When you mark a merge request as ready,
+[merge request participants and watchers](../../profile/notifications.md#notifications-on-issues-merge-requests-and-epics)
+are notified.
## Include or exclude drafts when searching
-When viewing or searching in your project's merge requests list, you can include or exclude
+When you view or search in your project's merge requests list, to include or exclude
draft merge requests:
1. Go to your project and select **Code > Merge requests**.
-1. In the navigation bar, select **Open**, **Merged**, **Closed**, or **All** to
- filter by merge request status.
+1. To filter by merge request status, select **Open**, **Merged**, **Closed**,
+ or **All** in the navigation bar.
1. Select the search box to display a list of filters and select **Draft**, or
enter the word `draft`.
1. Select `=`.
@@ -72,9 +69,9 @@ draft merge requests:
## Pipelines for drafts
-Draft merge requests run the same pipelines as merge request that are marked as ready.
+Draft merge requests run the same pipelines as merge requests marked as ready.
-In GitLab 15.0 and older, you must [mark the merge request as ready](#mark-merge-requests-as-ready)
+In GitLab 15.0 and earlier, you must [mark the merge request as ready](#mark-merge-requests-as-ready)
if you want to run [merged results pipelines](../../../ci/pipelines/merged_results_pipelines.md).
<!-- ## Troubleshooting
diff --git a/doc/user/project/merge_requests/index.md b/doc/user/project/merge_requests/index.md
index 22cd8f9b89e..63e5cc93e7d 100644
--- a/doc/user/project/merge_requests/index.md
+++ b/doc/user/project/merge_requests/index.md
@@ -82,6 +82,7 @@ or:
> - Filtering by `reviewer` [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/47605) in GitLab 13.7.
> - Filtering by potential approvers was moved to GitLab Premium in 13.9.
> - Filtering by `approved-by` moved to GitLab Premium in 13.9.
+> - Filtering by `source-branch` [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134555) in GitLab 16.6.
To filter the list of merge requests:
@@ -489,3 +490,31 @@ p = Project.find_by_full_path('<namespace/project>')
m = p.merge_requests.find_by(iid: <iid>)
Issuable::DestroyService.new(container: m.project, current_user: u).execute(m)
```
+
+### Merge request pre-receive hook failed
+
+If a merge request times out, you might see messages that indicate a Puma worker
+timeout problem:
+
+- In the GitLab UI:
+
+ ```plaintext
+ Something went wrong during merge pre-receive hook.
+ 500 Internal Server Error. Try again.
+ ```
+
+- In the `gitlab-rails/api_json.log` log file:
+
+ ```plaintext
+ Rack::Timeout::RequestTimeoutException
+ Request ran for longer than 60000ms
+ ```
+
+This error can happen if your merge request:
+
+- Contains many diffs.
+- Is many commits behind the target branch.
+
+Users of self-managed instances can ask an administrator to review the server logs
+to determine the cause of the error. GitLab SaaS users should
+[contact Support](https://about.gitlab.com/support/#contact-support) for help.
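+
+For self-managed instances, a minimal sketch of how an administrator might confirm these
+timeouts in the API log. The log path assumes a Linux package installation; adjust it for
+other installation types.
+
+```shell
+# Show the most recent request timeout entries recorded in the Rails API log.
+sudo grep "Rack::Timeout::RequestTimeoutException" /var/log/gitlab/gitlab-rails/api_json.log | tail -n 5
+```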
diff --git a/doc/user/project/merge_requests/merge_when_pipeline_succeeds.md b/doc/user/project/merge_requests/merge_when_pipeline_succeeds.md
index 699c79806f0..c4c38ef9eaf 100644
--- a/doc/user/project/merge_requests/merge_when_pipeline_succeeds.md
+++ b/doc/user/project/merge_requests/merge_when_pipeline_succeeds.md
@@ -79,7 +79,7 @@ merge. This configuration works for both:
- GitLab CI/CD pipelines.
- Pipelines run from an [external CI integration](../integrations/index.md#available-integrations).
-As a result, [disabling GitLab CI/CD pipelines](../../../ci/enable_or_disable_ci.md#disable-cicd-in-a-project)
+As a result, [disabling GitLab CI/CD pipelines](../../../ci/pipelines/settings.md#disable-gitlab-cicd-pipelines)
does not disable this feature, but you can use pipelines from external
CI providers with it.
diff --git a/doc/user/project/merge_requests/revert_changes.md b/doc/user/project/merge_requests/revert_changes.md
index 7e6bf606f10..4476ec8c670 100644
--- a/doc/user/project/merge_requests/revert_changes.md
+++ b/doc/user/project/merge_requests/revert_changes.md
@@ -25,7 +25,7 @@ Prerequisites:
- You must have a role in the project that allows you to edit merge requests, and add
code to the repository.
- Your project must use the [merge method](methods/index.md#fast-forward-merge) **Merge Commit**,
- which is set in the project's **Settings > General > Merge request**. You can't revert
+ which is set in the project's **Settings > Merge requests**. You can't revert
fast-forwarded commits from the GitLab UI.
To do this:
diff --git a/doc/user/project/merge_requests/reviews/data_usage.md b/doc/user/project/merge_requests/reviews/data_usage.md
index b4b9b19c932..b32c527ab75 100644
--- a/doc/user/project/merge_requests/reviews/data_usage.md
+++ b/doc/user/project/merge_requests/reviews/data_usage.md
@@ -13,7 +13,7 @@ GitLab Duo Suggested Reviewers is the first user-facing GitLab machine learning
### Enabling the feature
-When a Project Maintainer or Owner enables Suggested Reviewers in project settings GitLab kicks off a data extraction job for the project which leverages the Merge Request API to understand pattern of review including recency, domain experience, and frequency to suggest an appropriate reviewer.
+When a Project Maintainer or Owner enables Suggested Reviewers in project settings, GitLab starts a data extraction job for the project. The job uses the Merge Request API to understand patterns of review, including recency, domain experience, and frequency, to suggest an appropriate reviewer. If projects do not use the [merge request approval process](../approvals/index.md) or do not have any historical merge request data, Suggested Reviewers cannot suggest reviewers.
This data extraction job can take a few hours to complete (possibly up to a day), which is largely dependent on the size of the project. The process is automated and no action is needed during this process. Once data extraction is complete, you start getting suggestions in merge requests.
diff --git a/doc/user/project/merge_requests/reviews/img/comment-on-any-diff-line_v13_10.png b/doc/user/project/merge_requests/reviews/img/comment-on-any-diff-line_v13_10.png
deleted file mode 100644
index a31fea85be9..00000000000
--- a/doc/user/project/merge_requests/reviews/img/comment-on-any-diff-line_v13_10.png
+++ /dev/null
Binary files differ
diff --git a/doc/user/project/merge_requests/reviews/img/comment_on_any_diff_line_v16_6.png b/doc/user/project/merge_requests/reviews/img/comment_on_any_diff_line_v16_6.png
new file mode 100644
index 00000000000..5ed210ad8bb
--- /dev/null
+++ b/doc/user/project/merge_requests/reviews/img/comment_on_any_diff_line_v16_6.png
Binary files differ
diff --git a/doc/user/project/merge_requests/reviews/img/mr_review_new_comment_v15_3.png b/doc/user/project/merge_requests/reviews/img/mr_review_new_comment_v15_3.png
deleted file mode 100644
index b73dbb50cd2..00000000000
--- a/doc/user/project/merge_requests/reviews/img/mr_review_new_comment_v15_3.png
+++ /dev/null
Binary files differ
diff --git a/doc/user/project/merge_requests/reviews/img/mr_review_new_comment_v16_6.png b/doc/user/project/merge_requests/reviews/img/mr_review_new_comment_v16_6.png
new file mode 100644
index 00000000000..3e11440a71b
--- /dev/null
+++ b/doc/user/project/merge_requests/reviews/img/mr_review_new_comment_v16_6.png
Binary files differ
diff --git a/doc/user/project/merge_requests/reviews/img/mr_summary_comment_v15_4.png b/doc/user/project/merge_requests/reviews/img/mr_summary_comment_v15_4.png
deleted file mode 100644
index 47b7be3886d..00000000000
--- a/doc/user/project/merge_requests/reviews/img/mr_summary_comment_v15_4.png
+++ /dev/null
Binary files differ
diff --git a/doc/user/project/merge_requests/reviews/img/mr_summary_comment_v16_6.png b/doc/user/project/merge_requests/reviews/img/mr_summary_comment_v16_6.png
new file mode 100644
index 00000000000..965ce84a70f
--- /dev/null
+++ b/doc/user/project/merge_requests/reviews/img/mr_summary_comment_v16_6.png
Binary files differ
diff --git a/doc/user/project/merge_requests/reviews/index.md b/doc/user/project/merge_requests/reviews/index.md
index 0a3efa38440..d3124b716da 100644
--- a/doc/user/project/merge_requests/reviews/index.md
+++ b/doc/user/project/merge_requests/reviews/index.md
@@ -26,9 +26,13 @@ For an overview, see [Merge request review](https://www.youtube.com/watch?v=2May
> - [Introduced](https://gitlab.com/groups/gitlab-org/modelops/applied-ml/review-recommender/-/epics/3) in GitLab 15.4 as a [Beta](../../../../policy/experiment-beta-support.md#beta) feature [with a flag](../../../../administration/feature_flags.md) named `suggested_reviewers_control`. Disabled by default.
> - [Enabled on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/368356) in GitLab 15.6.
> - Beta designation [removed from the UI](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/113058) in GitLab 15.10.
+> - Feature flag [removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134728) in GitLab 16.6.
GitLab uses machine learning to suggest reviewers for your merge request.
+<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
+For an overview, see [GitLab Duo Suggested Reviewers](https://www.youtube.com/embed/ivwZQgh4Rxw).
+
To suggest reviewers, GitLab uses:
- The changes in the merge request
@@ -164,7 +168,7 @@ You can submit your completed review in multiple ways:
In the modal window, you can supply a **Summary comment**, approve the merge request, and
include quick actions:
- ![Finish review with comment](img/mr_summary_comment_v15_4.png)
+ ![Finish review with comment](img/mr_summary_comment_v16_6.png)
When you submit your review, GitLab:
@@ -193,7 +197,7 @@ Pending comments display information about the action to be taken when the comme
If you have a review in progress, you can also add a comment from the **Overview** tab by selecting
**Add to review**:
-![New thread](img/mr_review_new_comment_v15_3.png)
+![New thread](img/mr_review_new_comment_v16_6.png)
### Approval Rule information for Reviewers **(PREMIUM ALL)**
@@ -227,8 +231,6 @@ them a notification email.
When commenting on a diff, you can select which lines of code your comment refers
to by either:
-![Comment on any diff file line](img/comment-on-any-diff-line_v13_10.png)
-
- Dragging **Add a comment to this line** (**{comment}**) in the gutter to highlight
lines in the diff. GitLab expands the diff lines and displays a comment box.
- After starting a comment by selecting **Add a comment to this line** (**{comment}**) in the
@@ -236,6 +238,8 @@ to by either:
select box. New comments default to single-line comments, unless you select
a different starting line.
+![Comment on any diff file line](img/comment_on_any_diff_line_v16_6.png)
+
Multiline comments display the comment's line numbers above the body of the comment:
![Multiline comment selection displayed above comment](img/multiline-comment-saved.png)
@@ -340,6 +344,9 @@ from the command line by running `git checkout <branch-name>`.
### Checkout merge requests locally through the `head` ref
+> - Deleting `head` refs 14 days after a merge request closes or merges [enabled on self-managed and GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/130098) in GitLab 16.4.
+> - Deleting `head` refs 14 days after a merge request closes or merges [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/336070) in GitLab 16.6. Feature flag `merge_request_refs_cleanup` removed.
+
A merge request contains all the history from a repository, plus the additional
commits added to the branch associated with the merge request. Here's a few
ways to check out a merge request locally.
@@ -351,9 +358,8 @@ This relies on the merge request `head` ref (`refs/merge-requests/:iid/head`)
that is available for each merge request. It allows checking out a merge
request by using its ID instead of its branch.
-[Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/223156) in GitLab
-13.4, 14 days after a merge request gets closed or merged, the merge request
-`head` ref is deleted. This means that the merge request isn't available
+In GitLab 16.6 and later, the merge request `head` ref is deleted 14 days after
+a merge request is closed or merged. After that, the merge request isn't available
for local checkout from the merge request `head` ref anymore. The merge request
can still be re-opened. If the merge request's branch
exists, you can still check out the branch, as it isn't affected.
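+
+For example, a quick way to check out merge request 123 by its `head` ref while the ref still
+exists. The IID and the local branch name are placeholders:
+
+```shell
+# Fetch the merge request head ref into a local branch and switch to it.
+git fetch origin refs/merge-requests/123/head:mr-123
+git checkout mr-123
+```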
diff --git a/doc/user/project/merge_requests/status_checks.md b/doc/user/project/merge_requests/status_checks.md
index 698078351e2..c330af0fc9b 100644
--- a/doc/user/project/merge_requests/status_checks.md
+++ b/doc/user/project/merge_requests/status_checks.md
@@ -10,6 +10,8 @@ type: reference, concepts
> - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/3869) in GitLab 14.0, disabled behind the `:ff_external_status_checks` feature flag.
> - [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/320783) in GitLab 14.1.
> - `failed` status [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/329636) in GitLab 14.9.
+> - `pending` status [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/413723) in GitLab 16.5.
+> - Timeout interval of two minutes for `pending` status checks [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/388725) in GitLab 16.6.
Status checks are API calls to external systems that request the status of an external requirement.
@@ -25,6 +27,8 @@ at the merge request level itself.
You can configure merge request status checks for each individual project. These are not shared between projects.
+Status checks fail if they stay in the pending state for more than two minutes.
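+
+For reference, a hedged sketch of how an external service can report a result through the
+external status checks API before the two-minute window elapses. The project ID, merge request
+IID, check ID, and commit SHA are placeholders; verify the endpoint and parameters against the
+status checks API documentation for your GitLab version.
+
+```shell
+# Report a passing result for a pending external status check.
+curl --request POST \
+  --header "PRIVATE-TOKEN: <your_access_token>" \
+  --data "sha=<head_commit_sha>" \
+  --data "external_status_check_id=1" \
+  --data "status=passed" \
+  "https://gitlab.example.com/api/v4/projects/5/merge_requests/42/status_check_responses"
+```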
+
For more information about use cases, feature discovery, and development timelines,
see [epic 3869](https://gitlab.com/groups/gitlab-org/-/epics/3869).
diff --git a/doc/user/project/pages/public_folder.md b/doc/user/project/pages/public_folder.md
index 8471a4ec55a..39d80517bc7 100644
--- a/doc/user/project/pages/public_folder.md
+++ b/doc/user/project/pages/public_folder.md
@@ -126,6 +126,15 @@ pages:
NOTE:
GitLab Pages supports only static sites.
+By default, Nuxt uses the `public` folder to store static assets. For GitLab
+Pages, rename the `public` folder to a collision-free alternative first:
+
+1. In your project directory, run:
+
+ ```shell
+ mv public static
+ ```
+
1. Add the following to your `nuxt.config.js`:
```javascript
@@ -133,6 +142,12 @@ GitLab Pages supports only static sites.
target: 'static',
generate: {
dir: 'public'
+ },
+ dir: {
+ // The folder name Nuxt uses for static files (`public`) is already
+        // reserved for the build output, so this configuration uses a folder
+        // named `static` instead.
+ public: 'static'
}
}
```
diff --git a/doc/user/project/protected_branches.md b/doc/user/project/protected_branches.md
index fac07a1313a..f8f44d344d1 100644
--- a/doc/user/project/protected_branches.md
+++ b/doc/user/project/protected_branches.md
@@ -17,6 +17,7 @@ A protected branch controls:
- If users can force push to the branch.
- If changes to files listed in the CODEOWNERS file can be pushed directly to the branch.
- Which users can unprotect the branch.
+- Which users can modify the branch through the [Commits API](../../api/commits.md).
The [default branch](repository/branches/default.md) for your repository is protected by default.
@@ -26,12 +27,12 @@ The [default branch](repository/branches/default.md) for your repository is prot
When a branch is protected, the default behavior enforces these restrictions on the branch.
-| Action | Who can do it |
-|:-------------------------|:------------------------------------------------------------------|
-| Protect a branch | At least the Maintainer role. |
+| Action | Who can do it |
+|:-------------------------|:----------------------------------------|
+| Protect a branch | At least the Maintainer role. |
| Push to the branch | Anyone with **Allowed** permission. (1) |
-| Force push to the branch | No one. (3) |
-| Delete the branch | No one. (2) |
+| Force push to the branch | No one. (3) |
+| Delete the branch | No one. (2) |
1. Users with the Developer role can create a project in a group, but might not be allowed to
initially push to the [default branch](repository/branches/default.md).
@@ -49,12 +50,12 @@ level of protection for the branch. For example, consider these rules, which inc
[wildcards](#protect-multiple-branches-with-wildcard-rules):
| Branch name pattern | Allowed to merge | Allowed to push and merge |
-|---------------------|------------------------|-----------------|
-| `v1.x` | Maintainer | Maintainer |
-| `v1.*` | Maintainer + Developer | Maintainer |
-| `v*` | No one | No one |
+|---------------------|------------------------|---------------------------|
+| `v1.x` | Maintainer | Maintainer |
+| `v1.*` | Maintainer + Developer | Maintainer |
+| `v*` | No one | No one |
-A branch named `v1.x` matches all three branch name patterns: `v1.x`, `v1.*`, and `v*`.
+A branch named `v1.x` is a case-sensitive match for all three branch name patterns: `v1.x`, `v1.*`, and `v*`.
As the most permissive option determines the behavior, the resulting permissions for branch `v1.x` are:
- **Allowed to merge:** Of the three settings, `Maintainer + Developer` is most permissive,
@@ -71,10 +72,10 @@ If you want to ensure that `No one` is allowed to push to branch `v1.x`, every p
that matches `v1.x` must set `Allowed to push and merge` to `No one`, like this:
| Branch name pattern | Allowed to merge | Allowed to push and merge |
-|---------------------|------------------------|-----------------|
-| `v1.x` | Maintainer | No one |
-| `v1.*` | Maintainer + Developer | No one |
-| `v*` | No one | No one |
+|---------------------|------------------------|---------------------------|
+| `v1.x` | Maintainer | No one |
+| `v1.*` | Maintainer + Developer | No one |
+| `v*` | No one | No one |
### Set the default branch protection level
@@ -138,6 +139,7 @@ To protect a branch for all the projects in a group:
1. Expand **Protected branches**.
1. Select **Add protected branch**.
1. In the **Branch** text box, type the branch name or a wildcard.
+ Branch names and wildcards [are case-sensitive](repository/branches/index.md#name-your-branch).
1. From the **Allowed to merge** list, select a role that can merge into this branch.
1. From the **Allowed to push and merge** list, select a role that can push to this branch.
1. Select **Protect**.
@@ -162,7 +164,7 @@ To protect multiple branches at the same time:
1. Expand **Protected branches**.
1. Select **Add protected branch**.
1. From the **Branch** dropdown list, type the branch name and a wildcard.
- For example:
+ Branch names and wildcards [are case-sensitive](repository/branches/index.md#name-your-branch). For example:
| Wildcard protected branch | Matching branches |
|---------------------------|--------------------------------------------------------|
@@ -370,6 +372,7 @@ branches by using the GitLab web interface:
1. Select **Code > Branches**.
1. Next to the branch you want to delete, select **Delete** (**{remove}**).
1. On the confirmation dialog, enter the branch name and select **Yes, delete protected branch**.
+ Branch names [are case-sensitive](repository/branches/index.md#name-your-branch).
Protected branches can only be deleted by using GitLab either from the UI or API.
This prevents accidentally deleting a branch through local Git commands or
@@ -381,14 +384,10 @@ third-party Git clients.
- [Branches](repository/branches/index.md)
- [Branches API](../../api/branches.md)
-<!-- ## Troubleshooting
+## Troubleshooting
-Include any troubleshooting steps that you can foresee. If you know beforehand what issues
-one might have when setting this up, or when something is changed, or on upgrading, it's
-important to describe those, too. Think of things that may go wrong and include them here.
-This is important to minimize requests for support, and to avoid doc comments with
-questions that you know someone might ask.
+### Branch names are case-sensitive
-Each scenario can be a third-level heading, for example `### Getting error message X`.
-If you have none to add when creating a doc, leave this section in place
-but commented out to help encourage others to add to it in the future. -->
+Branch names in Git are case-sensitive. When configuring your protected branch
+or [target branch rule](repository/branches/index.md#configure-rules-for-target-branches),
+`dev` is not the same as `DEV` or `Dev`.
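+
+For example, a protection rule or target branch rule configured for `dev` does not apply
+to a branch pushed as `DEV`. The following command is a hypothetical illustration:
+
+```shell
+# Creates (or updates) a remote branch named DEV, which is not covered
+# by a rule configured for the lowercase name dev.
+git push origin HEAD:DEV
+```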
diff --git a/doc/user/project/push_options.md b/doc/user/project/push_options.md
index e8451e3049d..6c89e09bd47 100644
--- a/doc/user/project/push_options.md
+++ b/doc/user/project/push_options.md
@@ -45,7 +45,8 @@ Git push options can perform actions for merge requests while pushing changes:
| Push option | Description |
|----------------------------------------------|-------------|
| `merge_request.create` | Create a new merge request for the pushed branch. |
-| `merge_request.target=<branch_name>` | Set the target of the merge request to a particular branch or upstream project, such as: `git push -o merge_request.target=project_path/branch`. |
+| `merge_request.target=<branch_name>` | Set the target of the merge request to a particular branch, such as: `git push -o merge_request.target=branch_name`. |
+| `merge_request.target_project=<project>` | Set the target of the merge request to a particular upstream project, such as: `git push -o merge_request.target_project=path/to/project`. Introduced in [GitLab 16.6](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132475). |
| `merge_request.merge_when_pipeline_succeeds` | Set the merge request to [merge when its pipeline succeeds](merge_requests/merge_when_pipeline_succeeds.md). |
| `merge_request.remove_source_branch` | Set the merge request to remove the source branch when it's merged. |
| `merge_request.title="<title>"` | Set the title of the merge request. For example: `git push -o merge_request.title="The title I want"`. |
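+
+For illustration, the target options can be combined with other push options in a single push.
+A hypothetical sketch, where the project path and branch name are placeholders:
+
+```shell
+# Replace the project path and branch name with your own values.
+git push -o merge_request.create \
+  -o merge_request.target_project=my-group/upstream-project \
+  -o merge_request.target=develop
+```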
diff --git a/doc/user/project/repository/branches/index.md b/doc/user/project/repository/branches/index.md
index 30ddf8d3230..3640beebdfb 100644
--- a/doc/user/project/repository/branches/index.md
+++ b/doc/user/project/repository/branches/index.md
@@ -176,6 +176,7 @@ GitLab enforces these additional rules on all branches:
- No spaces are allowed in branch names.
- Branch names with 40 hexadecimal characters are prohibited, because they are similar to Git commit hashes.
+- Branch names are case-sensitive.
Common software packages, like Docker, can enforce
[additional branch naming restrictions](../../../../administration/packages/container_registry.md#docker-connection-error).
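+
+To check whether a name satisfies Git's own branch naming rules (not the additional
+GitLab rules listed above), you can use `git check-ref-format`. A small sketch with
+hypothetical branch names:
+
+```shell
+# Exits with status 0 and prints the name if it is a valid Git branch name.
+git check-ref-format --branch "feature/add-login"
+
+# Exits with a non-zero status because branch names cannot contain spaces.
+git check-ref-format --branch "feature add login"
+```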
@@ -313,6 +314,27 @@ To create a target branch rule:
1. Select the **Target branch** to use when the branch name matches the **Rule name**.
1. Select **Save**.
+### Example
+
+You could configure your project to have the following target branch rules:
+
+| Rule name | Target branch |
+|-------------|---------------|
+| `feature/*` | `develop` |
+| `bug/*` | `develop` |
+| `release/*` | `main` |
+
+These rules simplify the process of creating merge requests for a project that:
+
+- Uses `main` to represent the deployed state of your application.
+- Tracks current, unreleased development work in another long-running branch, like `develop`.
+
+If your workflow initially places new features in `develop` instead of `main`, these rules
+ensure all branches matching either `feature/*` or `bug/*` do not target `main` by mistake.
+
+When you're ready to release to `main`, create a branch that matches `release/*`, and the rules
+ensure this branch targets `main`.
+
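+A minimal sketch of this workflow, with hypothetical branch names:
+
+```shell
+# The branch name matches the feature/* rule, so a merge request created
+# from this branch targets develop by default.
+git switch -c feature/login-form
+git push -u origin feature/login-form
+
+# This branch matches the release/* rule, so its merge request targets main by default.
+git switch -c release/1.2.0
+git push -u origin release/1.2.0
+```
+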
## Delete a target branch rule
When you remove a target branch rule, existing merge requests remain unchanged.
@@ -389,3 +411,18 @@ To fix this problem:
Git versions [2.16.0 and later](https://github.com/git/git/commit/a625b092cc59940521789fe8a3ff69c8d6b14eb2),
prevent you from creating a branch with this name.
+
+### Find all branches you've authored
+
+To find all branches you've authored in a project, run this command in a Git repository:
+
+```shell
+git for-each-ref --format='%(refname:short) %(authoremail)' | grep $(git config --get user.email)
+```
+
+To count the branches in a project for each author, sorted by the number of branches,
+run this command in a Git repository:
+
+```shell
+git for-each-ref --format='%(authoremail)' | sort | uniq -c | sort -g
+```
diff --git a/doc/user/project/repository/code_suggestions/index.md b/doc/user/project/repository/code_suggestions/index.md
index 151792089ce..b44e26f8daf 100644
--- a/doc/user/project/repository/code_suggestions/index.md
+++ b/doc/user/project/repository/code_suggestions/index.md
@@ -17,6 +17,11 @@ Beta users should read about the [known limitations](#known-limitations). We loo
Write code more efficiently by using generative AI to suggest code while you're developing.
+Code Suggestions supports two distinct types of interactions:
+
+- Code Completion, which suggests completions for the current line you are typing. These suggestions usually have low latency.
+- Code Generation, which generates code based on a natural language code comment block. Generating code can take multiple seconds.
+
GitLab Duo Code Suggestions are available:
- On [self-managed](self_managed.md) and [SaaS](saas.md).
@@ -31,7 +36,7 @@ GitLab Duo Code Suggestions are available:
</figure>
During Beta, usage of Code Suggestions is governed by the [GitLab Testing Agreement](https://about.gitlab.com/handbook/legal/testing-agreement/).
-Learn about [data usage when using Code Suggestions](#code-suggestions-data-usage).
+Learn about [data usage when using Code Suggestions](#code-suggestions-data-usage). As Code Suggestions matures to General Availability, it will be governed by our [AI Functionality Terms](https://about.gitlab.com/handbook/legal/ai-functionality-terms/).
## Use Code Suggestions
@@ -62,22 +67,13 @@ Code Suggestions do not prevent you from writing code in your IDE.
## Supported languages
-The best results from Code Suggestions are expected for languages that [Anthropic Claude](https://www.anthropic.com/product) and the [Google Vertex AI Codey APIs](https://cloud.google.com/vertex-ai/docs/generative-ai/code/code-models-overview#supported_coding_languages) directly support:
-
-- C++
-- C#
-- Go
-- Google SQL
-- Java
-- JavaScript
-- Kotlin
-- PHP
-- Python
-- Ruby
-- Rust
-- Scala
-- Swift
-- TypeScript
+Code Suggestions support is a function of the:
+
+- Underlying large language model.
+- IDE used.
+- Extension or plug-in support in the IDE.
+
+For languages not listed in the following table, Code Suggestions might not function as expected.
### Supported languages in IDEs
@@ -129,10 +125,12 @@ This improvement should result in:
Code Suggestions is powered by a generative AI model.
Your personal access token enables a secure API connection to GitLab.com or to your GitLab instance.
-This API connection securely transmits a context window from your IDE/editor to the [GitLab AI Gateway](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist), a GitLab hosted service. The gateway calls the large language model APIs, and then the generated suggestion is transmitted back to your IDE/editor.
+This API connection securely transmits a context window from your IDE/editor to the [GitLab AI Gateway](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist), a GitLab hosted service. The [gateway](../../../../development/ai_architecture.md) calls the large language model APIs, and then the generated suggestion is transmitted back to your IDE/editor.
GitLab selects the best-in-class large-language models for specific tasks. We use [Google Vertex AI Code Models](https://cloud.google.com/vertex-ai/docs/generative-ai/code/code-models-overview) and [Anthropic Claude](https://www.anthropic.com/product) for Code Suggestions.
+[View data retention policies](../../../ai_features.md#data-retention).
+
### Telemetry
For self-managed instances that have enabled Code Suggestions and SaaS accounts, we collect aggregated or de-identified first-party usage data through our [Snowplow collector](https://about.gitlab.com/handbook/business-technology/data-team/platform/snowplow/). This usage data includes the following metrics:
diff --git a/doc/user/project/repository/code_suggestions/self_managed.md b/doc/user/project/repository/code_suggestions/self_managed.md
index ee501212027..fd363e56021 100644
--- a/doc/user/project/repository/code_suggestions/self_managed.md
+++ b/doc/user/project/repository/code_suggestions/self_managed.md
@@ -164,7 +164,7 @@ A self-managed GitLab instance does not generate the code suggestion. After succ
authentication to the self-managed instance, a token is generated.
The IDE/editor then uses this token to securely transmit data directly to
-GitLab.com's Code Suggestions service for processing.
+GitLab.com's Code Suggestions service through the [Cloud Connector gateway service](../../../../architecture/blueprints/cloud_connector/index.md) for processing.
The Code Suggestions service then securely returns an AI-generated code suggestion.
diff --git a/doc/user/project/repository/code_suggestions/troubleshooting.md b/doc/user/project/repository/code_suggestions/troubleshooting.md
index 2faf20b3035..86400ea8860 100644
--- a/doc/user/project/repository/code_suggestions/troubleshooting.md
+++ b/doc/user/project/repository/code_suggestions/troubleshooting.md
@@ -18,9 +18,6 @@ In GitLab, ensure Code Suggestions is enabled:
- [For your user account](../../../profile/preferences.md#enable-code-suggestions).
- [For *all* top-level groups your account belongs to](../../../group/manage.md#enable-code-suggestions). If you don't have a role that lets you view the top-level group's settings, contact a group owner.
-To confirm that your account is enabled, go to [https://gitlab.com/api/v4/ml/ai-assist](https://gitlab.com/api/v4/ml/ai-assist). The `user_is_allowed` key should have should have a value of `true`.
-A `404 Not Found` result is returned if either of the previous conditions is not met.
-
### Code Suggestions not displayed in VS Code or GitLab WebIDE
Check all the steps in [Code Suggestions aren't displayed](#code-suggestions-arent-displayed) first.
diff --git a/doc/user/project/repository/forking_workflow.md b/doc/user/project/repository/forking_workflow.md
index ddc650c3924..c71c89b68c3 100644
--- a/doc/user/project/repository/forking_workflow.md
+++ b/doc/user/project/repository/forking_workflow.md
@@ -24,17 +24,24 @@ can access the object pool connected to the source project.
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/15013) a new form in GitLab 13.11 [with a flag](../../../user/feature_flags.md) named `fork_project_form`. Disabled by default.
> - [Enabled on GitLab.com and self-managed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/77181) in GitLab 14.8. Feature flag `fork_project_form` removed.
+> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/24894) in GitLab 16.6.
To fork an existing project in GitLab:
1. On the project's homepage, in the upper-right corner, select **Fork** (**{fork}**):
+
![Fork this project](img/forking_workflow_fork_button_v13_10.png)
+
1. Optional. Edit the **Project name**.
1. For **Project URL**, select the [namespace](../../namespace/index.md)
your fork should belong to.
1. Add a **Project slug**. This value becomes part of the URL to your fork.
It must be unique in the namespace.
1. Optional. Add a **Project description**.
+1. Select one of the **Branches to include** options:
+ - **All branches** (default).
+ - **Only the default branch**. Uses the `--single-branch` and `--no-tags`
+ [Git options](https://git-scm.com/docs/git-clone).
1. Select the **Visibility level** for your fork. For more information about
visibility levels, read [Project and group visibility](../../public_access.md).
1. Select **Fork project**.
diff --git a/doc/user/project/repository/reducing_the_repo_size_using_git.md b/doc/user/project/repository/reducing_the_repo_size_using_git.md
index ff9ef5b78f8..ca7f2ae2043 100644
--- a/doc/user/project/repository/reducing_the_repo_size_using_git.md
+++ b/doc/user/project/repository/reducing_the_repo_size_using_git.md
@@ -325,12 +325,15 @@ are accurate.
To expedite this process, see the
['Prune Unreachable Objects' housekeeping task](../../../administration/housekeeping.md).
-### Sidekiq process fails to export a project
+### Sidekiq process fails to export a project **(FREE SELF)**
Occasionally the Sidekiq process can fail to export a project, for example if
it is terminated during execution.
-To bypass the Sidekiq process, use the Rails console to manually trigger the project export:
+GitLab.com users should [contact Support](https://about.gitlab.com/support/#contact-support) to resolve this issue.
+
+Self-managed users can use the Rails console to bypass the Sidekiq process and
+manually trigger the project export:
```ruby
project = Project.find(1)
diff --git a/doc/user/project/service_desk/configure.md b/doc/user/project/service_desk/configure.md
index 172a105cc28..8d0fbd81ebd 100644
--- a/doc/user/project/service_desk/configure.md
+++ b/doc/user/project/service_desk/configure.md
@@ -191,6 +191,8 @@ The custom email address you want to use must meet all of the following requirem
by any text to the local part. Given the email address `support@example.com`, check whether sub-addressing is supported by
sending an email to `support+1@example.com`. This email should appear in your mailbox.
- You have SMTP credentials (ideally, you should use an app password).
+ The username and password are stored in the database using the Advanced Encryption Standard (AES)
+ with a 256-bit key.
- You must have at least the Maintainer role for the project.
- Service Desk must be configured for the project.
diff --git a/doc/user/project/service_desk/using_service_desk.md b/doc/user/project/service_desk/using_service_desk.md
index ad97a36bbb0..5f3c725b83b 100644
--- a/doc/user/project/service_desk/using_service_desk.md
+++ b/doc/user/project/service_desk/using_service_desk.md
@@ -138,10 +138,7 @@ HTML emails show HTML formatting, such as:
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/11733) in GitLab 15.8 [with a flag](../../../administration/feature_flags.md) named `service_desk_new_note_email_native_attachments`. Disabled by default.
> - [Enabled on GitLab.com and self-managed](https://gitlab.com/gitlab-org/gitlab/-/issues/386860) in GitLab 15.10.
-
-FLAG:
-On self-managed GitLab, by default this feature is available. To hide the feature per project or for your entire instance, an administrator can [disable the feature flag](../../../administration/feature_flags.md) named `service_desk_new_note_email_native_attachments`.
-On GitLab.com, this feature is available.
+> - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/11733) in GitLab 16.6. Feature flag `service_desk_new_note_email_native_attachments` removed.
If a comment contains any attachments and their total size is less than or equal to 10 MB, these
attachments are sent as part of the email. In other cases, the email contains links to the attachments.
diff --git a/doc/user/project/settings/project_access_tokens.md b/doc/user/project/settings/project_access_tokens.md
index 7de8a7beab5..3526425c912 100644
--- a/doc/user/project/settings/project_access_tokens.md
+++ b/doc/user/project/settings/project_access_tokens.md
@@ -60,7 +60,7 @@ To create a project access token:
1. Enter a name. The token name is visible to any user with permissions to view the project.
1. Enter an expiry date for the token.
- The token expires on that date at midnight UTC.
- - If you do not enter an expiry date, the expiry date is automatically set to 365 days later than the current date.
+ - If you do not enter an expiry date, the expiry date is automatically set to 30 days later than the current date.
- By default, this date can be a maximum of 365 days later than the current date.
- An instance-wide [maximum lifetime](../../../administration/settings/account_and_limit_settings.md#limit-the-lifetime-of-access-tokens) setting can limit the maximum allowable lifetime in self-managed instances.
1. Select a role for the token.
diff --git a/doc/user/project/system_notes.md b/doc/user/project/system_notes.md
index 73509846990..546b3250180 100644
--- a/doc/user/project/system_notes.md
+++ b/doc/user/project/system_notes.md
@@ -23,12 +23,14 @@ in system notes. System notes use the format `<Author> <action> <time ago>`.
By default, system notes do not display. When displayed, they are shown oldest first.
If you change the filter or sort options, your selection is remembered across sections.
-The filtering options are:
+For all item types except merge requests, the filtering options are:
- **Show all activity** displays both comments and history.
- **Show comments only** hides system notes.
- **Show history only** hides user comments.
+Merge requests provide more granular filtering options.
+
### On an epic
1. On the left sidebar, select **Search or go to** and find your project.
@@ -49,7 +51,19 @@ The filtering options are:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Code > Merge requests** and find your merge request.
1. Go to **Activity**.
-1. For **Sort or filter**, select **Show all activity**.
+1. For **Sort or filter**, select **Show all activity** to see all system notes.
+ To narrow the types of system notes returned, select one or more of:
+
+ - **Approvals**
+ - **Assignees & Reviewers**
+ - **Comments**
+ - **Commits & branches**
+ - **Edits**
+ - **Labels**
+ - **Lock status**
+ - **Mentions**
+ - **Merge request status**
+ - **Tracking**
## Privacy considerations
diff --git a/doc/user/project/wiki/index.md b/doc/user/project/wiki/index.md
index a80c699eab7..fd543263ebd 100644
--- a/doc/user/project/wiki/index.md
+++ b/doc/user/project/wiki/index.md
@@ -181,11 +181,7 @@ You need at least the Developer role to move a wiki page:
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/414691) in GitLab 16.3 [with a flag](../../../administration/feature_flags.md) named `print_wiki`. Disabled by default.
> - [Enabled on GitLab.com and self-managed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134251/) in GitLab 16.5.
-
-FLAG:
-On self-managed GitLab, by default this feature is available.
-To hide the feature, an administrator can [disable the feature flag](../../../administration/feature_flags.md) named `print_wiki`.
-On GitLab.com, this feature is available.
+> - Feature flag `print_wiki` removed in GitLab 16.6.
You can export a wiki page as a PDF file:
diff --git a/doc/user/read_only_namespaces.md b/doc/user/read_only_namespaces.md
index 5b302d976dd..d5697ec5a94 100644
--- a/doc/user/read_only_namespaces.md
+++ b/doc/user/read_only_namespaces.md
@@ -27,7 +27,7 @@ To restore a namespace to its standard state, you can:
- [Purchase a paid tier](https://about.gitlab.com/pricing/).
- For exceeded storage quota:
- [Purchase more storage for the namespace](../subscriptions/gitlab_com/index.md#purchase-more-storage-and-transfer).
- - [Manage your storage usage](usage_quotas.md#manage-your-storage-usage).
+ - [Manage your storage usage](usage_quotas.md#manage-storage-usage).
## Restricted actions
diff --git a/doc/user/report_abuse.md b/doc/user/report_abuse.md
index 45113562e87..9e13d1fe263 100644
--- a/doc/user/report_abuse.md
+++ b/doc/user/report_abuse.md
@@ -26,17 +26,12 @@ You can report a user through their:
> - Report abuse from overflow menu [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/414773) in GitLab 16.4 [with a flag](../administration/feature_flags.md) named `user_profile_overflow_menu_vue`. Disabled by default.
> - [Enabled on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/414773) in GitLab 16.4.
-
-FLAG:
-On self-managed GitLab, by default this feature is not available. To make it available, an administrator can [enable the feature flag](../administration/feature_flags.md) named `user_profile_overflow_menu_vue`.
-On GitLab.com, this feature is available.
+> - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/414773) in GitLab 16.6. Feature flag `user_profile_overflow_menu_vue` removed.
To report abuse from a user's profile page:
1. Anywhere in GitLab, select the name of the user.
-1. In the upper-right corner of the user's profile, if the `user_profile_overflow_menu_vue` feature flag is:
- - Enabled, select the vertical ellipsis (**{ellipsis_v}**), then **Report abuse to administrator**.
- - Disabled, select **Report abuse to administrator** (**{information-o}**).
+1. In the upper-right corner of the user's profile, select the vertical ellipsis (**{ellipsis_v}**), then **Report abuse to administrator**.
1. Select a reason for reporting the user.
1. Complete an abuse report.
1. Select **Send report**.
diff --git a/doc/user/reserved_names.md b/doc/user/reserved_names.md
index b9c64739de0..697f5711396 100644
--- a/doc/user/reserved_names.md
+++ b/doc/user/reserved_names.md
@@ -6,31 +6,30 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Reserved project and group names **(FREE ALL)**
-Not all project & group names are allowed because they would conflict with
-existing routes used by GitLab.
+To avoid conflicts with existing routes used by GitLab, some words cannot be used as project or group names.
+These words are listed in the
+[`path_regex.rb` file](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/path_regex.rb),
+where:
-For a list of words that are not allowed to be used as group or project names, see the
-[`path_regex.rb` file](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/path_regex.rb)
-under the `TOP_LEVEL_ROUTES`, `PROJECT_WILDCARD_ROUTES` and `GROUP_ROUTES` lists:
-
-- `TOP_LEVEL_ROUTES`: are names that are reserved as usernames or top level groups
-- `PROJECT_WILDCARD_ROUTES`: are names that are reserved for child groups or projects.
-- `GROUP_ROUTES`: are names that are reserved for all groups or projects.
+- `TOP_LEVEL_ROUTES` are names reserved as usernames or top-level groups.
+- `PROJECT_WILDCARD_ROUTES` are names reserved for child groups or projects.
+- `GROUP_ROUTES` are names reserved for all groups or projects.
## Limitations on project and group names
-- Project or group names must start with a letter, digit, emoji, or "_".
-- Project names can only contain letters, digits, emoji, "_", ".", "+", dashes, or spaces.
-- Group names can only contain letters, digits, emoji, "_", ".", parenthesis, dashes, or spaces.
-- Project or group slugs must start with a letter or digit.
-- Project or group slugs can only contain letters, digits, '_', '.', or dashes.
-- Project or group slugs must not contain consecutive special characters.
-- Project or group slugs cannot start or end with a special character.
-- Project or group slugs cannot end in `.git` or `.atom`.
+- Project or group names must start with a letter (`a-zA-Z`), digit (`0-9`), emoji, or underscore (`_`). Additionally:
+ - Project names can contain only letters (`a-zA-Z`), digits (`0-9`), emoji, underscores (`_`), dots (`.`), pluses (`+`), dashes (`-`), or spaces.
+ - Group names can contain only letters (`a-zA-Z`), digits (`0-9`), emoji, underscores (`_`), dots (`.`), parentheses (`()`), dashes (`-`), or spaces.
+- Project or group slugs:
+ - Must start with a letter (`a-zA-Z`) or digit (`0-9`).
+ - Must not contain consecutive special characters.
+ - Cannot start or end with a special character.
+ - Cannot end in `.git` or `.atom`.
+ - Can contain only letters (`a-zA-Z`), digits (`0-9`), underscores (`_`), dots (`.`), or dashes (`-`).
## Reserved project names
-It is not possible to create a project with the following names:
+You cannot create projects with the following names:
- `\-`
- `badges`
@@ -56,7 +55,7 @@ It is not possible to create a project with the following names:
## Reserved group names
-The following names are reserved as top level groups:
+You cannot create groups with the following names, because they are reserved for top-level groups:
- `\-`
- `.well-known`
@@ -98,6 +97,6 @@ The following names are reserved as top level groups:
- `users`
- `v2`
-These group names are unavailable as subgroup names:
+You cannot create subgroups with the following names:
- `\-`
diff --git a/doc/user/search/index.md b/doc/user/search/index.md
index e8dfbfa675a..79782b1c880 100644
--- a/doc/user/search/index.md
+++ b/doc/user/search/index.md
@@ -103,29 +103,15 @@ For example:
## Include archived projects in search results
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121981) in GitLab 16.1 [with a flag](../../administration/feature_flags.md) named `search_projects_hide_archived`. Disabled by default.
-> - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/413821) in GitLab 16.3. Feature flag `search_projects_hide_archived` removed.
+> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121981) in GitLab 16.1 [with a flag](../../administration/feature_flags.md) named `search_projects_hide_archived` for project search. Disabled by default.
+> - [Generally available](https://gitlab.com/groups/gitlab-org/-/epics/10957) in GitLab 16.6 for all search scopes.
By default, archived projects are excluded from search results.
-To include archived projects:
+To include archived projects in search results:
-1. On the project search page, on the left sidebar, select the **Include archived** checkbox.
+1. On the search page, on the left sidebar, select the **Include archived** checkbox.
1. On the left sidebar, select **Apply**.
-## Exclude issues in archived projects from search results
-
-> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/124846) in GitLab 16.2 [with a flag](../../administration/feature_flags.md) named `search_issues_hide_archived_projects`. Disabled by default.
-
-FLAG:
-On self-managed GitLab, by default this feature is not available. To make it available,
-an administrator can [enable the feature flag](../../administration/feature_flags.md) named `search_issues_hide_archived_projects`. On GitLab.com, this feature is not available.
-
-By default, issues in archived projects are included in search results.
-To exclude issues in archived projects, ensure the `search_issues_hide_archived_projects` flag is enabled.
-
-To include issues in archived projects with `search_issues_hide_archived_projects` enabled,
-you must add the parameter `include_archived=true` to the URL.
-
## Search for code
To search for code in a project:
diff --git a/doc/user/shortcuts.md b/doc/user/shortcuts.md
index fa03cb54ba3..e504ee90821 100644
--- a/doc/user/shortcuts.md
+++ b/doc/user/shortcuts.md
@@ -1,6 +1,6 @@
---
-stage: none
-group: unassigned
+stage: Manage
+group: Foundations
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
type: reference
---
@@ -135,6 +135,7 @@ These shortcuts are available when browsing the files in a project (go to
| <kbd>Enter</kbd> | Open selection. |
| <kbd>Escape</kbd> | Go back to file list screen (only while searching for files, **Code > Repository**, then select **Find File**). |
| <kbd>y</kbd> | Go to file permalink (only while viewing a file). |
+| <kbd>Shift</kbd> + <kbd>c</kbd> | Go to compare branches view. |
| <kbd>.</kbd> | Open the [Web IDE](project/web_ide/index.md). |
### Web IDE
diff --git a/doc/user/snippets.md b/doc/user/snippets.md
index dbcc90c26df..fe782227701 100644
--- a/doc/user/snippets.md
+++ b/doc/user/snippets.md
@@ -17,7 +17,7 @@ and you can maintain your snippets with the [snippets API](../api/snippets.md).
You can create and manage your snippets through the GitLab user interface, or by
using the [GitLab Workflow VS Code extension](project/repository/vscode.md).
-![Example of snippet](img/snippet_intro_v13_11.png)
+![Example of a snippet](img/snippet_sample_v16_6.png)
GitLab provides two types of snippets:
@@ -168,10 +168,11 @@ To delete a file from your snippet through the GitLab UI:
## Clone snippets
To ensure you receive updates, clone the snippet instead of copying it locally. Cloning
-maintains the snippet's connection with the repository. Select **Clone** on a snippet
-to display the URLs to clone with SSH or HTTPS:
+maintains the snippet's connection with the repository.
-![Clone snippet](img/snippet_clone_button_v13_0.png)
+To clone a snippet:
+
+- Select **Clone**, then copy the URL to clone with SSH or HTTPS.
You can commit changes to a cloned snippet, and push the changes to GitLab.
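+
+For example, after copying the URL (shown here as a placeholder), the clone and push steps
+might look like this:
+
+```shell
+# Replace <snippet-clone-url> with the SSH or HTTPS URL copied from the Clone button.
+git clone <snippet-clone-url> my-snippet
+cd my-snippet
+
+# Edit the snippet files, then commit and push the changes back to GitLab.
+git commit -am "Update snippet"
+git push
+```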
diff --git a/doc/user/storage_management_automation.md b/doc/user/storage_management_automation.md
index 96f9ecd11a8..a83af4ab6c6 100644
--- a/doc/user/storage_management_automation.md
+++ b/doc/user/storage_management_automation.md
@@ -14,6 +14,10 @@ You can also manage your storage usage by improving [pipeline efficiency](../ci/
For more help with API automation, you can also use the [GitLab community forum and Discord](https://about.gitlab.com/community/).
+WARNING:
+The script examples on this page are for demonstration purposes only and should not
+be used in production. You can use the examples to design and test your own scripts for storage automation.
+
## API requirements
To automate storage management, your GitLab.com SaaS or self-managed instance must have access to the [GitLab REST API](../api/api_resources.md).
@@ -567,11 +571,17 @@ Support for creating a retention policy for job logs is proposed in [issue 37471
### Delete old pipelines
-Pipelines do not add to the overall storage consumption, but if required you can delete them with the following methods.
+Pipelines do not add to the overall storage usage, but if required you can automate their deletion.
-Automatic deletion of old pipelines is proposed in [issue 338480](https://gitlab.com/gitlab-org/gitlab/-/issues/338480).
+To delete pipelines based on a specific date, specify the `created_at` key.
+You can use the date to calculate the difference between the current date and
+when the pipeline was created. If the age is larger than the threshold, the pipeline is deleted.
-Example with the GitLab CLI:
+NOTE:
+The `created_at` key must be converted from a timestamp to Unix epoch time,
+for example with `date -d '2023-08-08T18:59:47.581Z' +%s`.
+
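+For example, a minimal sketch of that age calculation using GNU `date` (the threshold value
+here is arbitrary):
+
+```shell
+created_at=$(date -d '2023-08-08T18:59:47.581Z' +%s)
+now=$(date +%s)
+age_days=$(( (now - created_at) / 86400 ))
+
+# Hypothetical threshold: consider pipelines older than 90 days for deletion.
+if [ "$age_days" -gt 90 ]; then
+  echo "Pipeline is older than 90 days and can be deleted."
+fi
+```
+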
+Example with GitLab CLI:
```shell
export GL_PROJECT_ID=48349590
@@ -589,12 +599,10 @@ glab api --method GET projects/$GL_PROJECT_ID/pipelines | jq --compact-output '.
"2023-08-08T18:59:47.581Z"
```
-The `created_at` key must be converted from a timestamp to Unix epoch time,
-for example with `date -d '2023-08-08T18:59:47.581Z' +%s`. In the next step, the
-age can be calculated with the difference between now, and the pipeline creation
-date. If the age is larger than the threshold, the pipeline should be deleted.
+The following example uses a Bash script and assumes that:
-The following example uses a Bash script that expects `jq` and the GitLab CLI installed, and authorized, and the exported environment variable `GL_PROJECT_ID`.
+- `jq` and the GitLab CLI are installed and authorized.
+- The environment variable `GL_PROJECT_ID` is exported.
The full script `get_cicd_pipelines_compare_age_threshold_example.sh` is located in the [GitLab API with Linux Shell](https://gitlab.com/gitlab-de/use-cases/gitlab-api/gitlab-api-linux-shell) project.
@@ -624,7 +632,7 @@ do
done
```
-You can use the [`python-gitlab` API library](https://python-gitlab.readthedocs.io/en/stable/gl_objects/pipelines_and_jobs.html#project-pipelines) and
+You can also use the [`python-gitlab` API library](https://python-gitlab.readthedocs.io/en/stable/gl_objects/pipelines_and_jobs.html#project-pipelines) and
the `created_at` attribute to implement a similar algorithm that compares the job artifact age:
```python
@@ -645,6 +653,8 @@ the `created_at` attribute to implement a similar algorithm that compares the jo
pipeline_obj.delete()
```
+Automatic deletion of old pipelines is proposed in [issue 338480](https://gitlab.com/gitlab-org/gitlab/-/issues/338480).
+
### List expiry settings for job artifacts
To manage artifact storage, you can update or configure when an artifact expires.
@@ -770,7 +780,7 @@ default:
## Manage Container Registries storage
-Container registries are available [in a project](../api/container_registry.md#within-a-project) or [in a group](../api/container_registry.md#within-a-group). You can analyze both locations to implement a cleanup strategy.
+Container registries are available [for projects](../api/container_registry.md#within-a-project) or [for groups](../api/container_registry.md#within-a-group). You can analyze both locations to implement a cleanup strategy.
### List container registries
@@ -818,8 +828,6 @@ glab api --method GET projects/$GL_PROJECT_ID/registry/repositories/4435617/tags
::EndTabs
-A similar automation shell script is created in the [delete old pipelines](#delete-old-pipelines) section.
-
### Delete container images in bulk
When you [delete container image tags in bulk](../api/container_registry.md#delete-registry-repository-tags-in-bulk),
@@ -886,7 +894,7 @@ You can optimize container images to reduce the image size and overall storage c
## Manage Package Registry storage
-Package registries are available [in a project](../api/packages.md#within-a-project) or [in a group](../api/packages.md#within-a-group).
+Package registries are available [for projects](../api/packages.md#for-a-project) or [for groups](../api/packages.md#for-a-group).
### List packages and files
diff --git a/doc/user/tasks.md b/doc/user/tasks.md
index 347aedd6e74..173d2e44cf1 100644
--- a/doc/user/tasks.md
+++ b/doc/user/tasks.md
@@ -360,6 +360,25 @@ To copy the task's email address:
1. Select **Plan > Issues**, then select your issue to view it.
1. In the top right corner, select the vertical ellipsis (**{ellipsis_v}**), then select **Copy task email address**.
+## Set an issue as a parent
+
+> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/11198) in GitLab 16.5.
+
+Prerequisites:
+
+- You must have at least the Reporter role for the project.
+- The issue and task must belong to the same project.
+
+To set an issue as a parent of a task:
+
+1. In the issue description, in the **Tasks** section, select the title of the task you want to edit.
+ The task window opens.
+1. Next to **Parent**, from the dropdown list, select the parent to add.
+1. Select any area outside the dropdown list.
+
+To remove the parent item of the task,
+next to **Parent**, select the dropdown list and then select **Unassign**.
+
## Confidential tasks
> [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/8410) in GitLab 15.3.
diff --git a/doc/user/usage_quotas.md b/doc/user/usage_quotas.md
index 305a46e1f15..7dea2b97249 100644
--- a/doc/user/usage_quotas.md
+++ b/doc/user/usage_quotas.md
@@ -5,7 +5,7 @@ group: Utilization
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
-# Storage usage quota **(FREE ALL)**
+# Storage **(FREE ALL)**
Storage usage statistics are available for projects and namespaces. You can use that information to
manage storage usage within the applicable quotas.
@@ -13,8 +13,8 @@ manage storage usage within the applicable quotas.
Statistics include:
- Storage usage across projects in a namespace.
-- Storage usage that exceeds the storage quota.
-- Available purchased storage.
+- Storage usage that exceeds the GitLab SaaS storage limit or the [self-managed storage quota](../administration/settings/account_and_limit_settings.md#repository-size-limit).
+- Available purchased storage for SaaS.
Storage and network usage are calculated with the binary measurement system (1024 unit multiples).
Storage usage is displayed in kibibytes (KiB), mebibytes (MiB),
@@ -30,87 +30,33 @@ you might see references to `KB`, `MB`, and `GB` in the UI and documentation.
Prerequisites:
- To view storage usage for a project, you must have at least the Maintainer role for the project or Owner role for the namespace.
-- To view storage usage for a namespace, you must have the Owner role for the namespace.
+- To view storage usage for a group namespace, you must have the Owner role for the namespace.
1. On the left sidebar, select **Search or go to** and find your project or group.
1. On the left sidebar, select **Settings > Usage Quotas**.
-1. Select the **Storage** tab.
+1. Select the **Storage** tab to see namespace storage usage.
+1. To view storage usage for a project, select one of the projects from the table at the bottom of the **Storage** tab of the **Usage Quotas** page.
-Select any title to view details. The information on this page
-is updated every 90 minutes.
+The information on the **Usage Quotas** page is updated every 90 minutes.
If your namespace shows `'Not applicable.'`, push a commit to any project in the
namespace to recalculate the storage.
-### Container Registry usage **(FREE SAAS)**
+### View project fork storage usage **(FREE SAAS)**
-Container Registry usage is available only for GitLab.com. This feature requires a
-[new version](https://about.gitlab.com/blog/2022/04/12/next-generation-container-registry/)
-of the GitLab Container Registry. To learn about the proposed release for self-managed
-installations, see [epic 5521](https://gitlab.com/groups/gitlab-org/-/epics/5521).
-
-#### How container registry usage is calculated
-
-Image layers stored in the Container Registry are deduplicated at the root namespace level.
-
-An image is only counted once if:
-
-- You tag the same image more than once in the same repository.
-- You tag the same image across distinct repositories under the same root namespace.
-
-An image layer is only counted once if:
-
-- You share the image layer across multiple images in the same container repository, project, or group.
-- You share the image layer across different repositories.
-
-Only layers that are referenced by tagged images are accounted for. Untagged images and any layers
-referenced exclusively by them are subject to [online garbage collection](packages/container_registry/delete_container_registry_images.md#garbage-collection).
-Untagged image layers are automatically deleted after 24 hours if they remain unreferenced during that period.
-
-Image layers are stored on the storage backend in the original (usually compressed) format. This
-means that the measured size for any given image layer should match the size displayed on the
-corresponding [image manifest](https://github.com/opencontainers/image-spec/blob/main/manifest.md#example-image-manifest).
-
-Namespace usage is refreshed a few minutes after a tag is pushed or deleted from any container repository under the namespace.
-
-#### Delayed refresh
-
-It is not possible to calculate [container registry usage](#container-registry-usage)
-with maximum precision in real time for extremely large namespaces (about 1% of namespaces).
-To enable maintainers of these namespaces to see their usage, there is a delayed fallback mechanism.
-See [epic 9413](https://gitlab.com/groups/gitlab-org/-/epics/9413) for more details.
-
-If the usage for a namespace cannot be calculated with precision, GitLab falls back to the delayed method.
-In the delayed method, the displayed usage size is the sum of **all** unique image layers
-in the namespace. Untagged image layers are not ignored. As a result,
-the displayed usage size might not change significantly after deleting tags. Instead,
-the size value only changes when:
-
-- An automated [garbage collection process](packages/container_registry/delete_container_registry_images.md#garbage-collection)
- runs and deletes untagged image layers. After a user deletes a tag, a garbage collection run
- is scheduled to start 24 hours later. During that run, images that were previously tagged
- are analyzed and their layers deleted if not referenced by any other tagged image.
- If any layers are deleted, the namespace usage is updated.
-- The namespace's registry usage shrinks enough that GitLab can measure it with maximum precision.
- As usage for namespaces shrinks to be under the [limits](#namespace-storage-limit),
- the measurement switches automatically from delayed to precise usage measurement.
- There is no place in the UI to determine which measurement method is being used,
- but [issue 386468](https://gitlab.com/gitlab-org/gitlab/-/issues/386468) proposes to improve this.
+A cost factor is applied to the storage consumed by project forks so that forks consume less namespace storage than their actual size.
-### Storage usage statistics
+To view the amount of namespace storage the fork has used:
-> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/68898) project-level graph in GitLab 14.4 [with a flag](../administration/feature_flags.md) named `project_storage_ui`. Disabled by default.
-> - Enabled on GitLab.com in GitLab 14.4.
-> - Enabled on self-managed in GitLab 14.5.
-> - [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/71270) in GitLab 14.5.
+1. On the left sidebar, select **Search or go to** and find your project or group.
+1. On the left sidebar, select **Settings > Usage Quotas**.
+1. Select the **Storage** tab. The **Total** column displays the amount of namespace storage used by the fork as a portion of the actual size of the fork on disk.
-The following storage usage statistics are available to a maintainer:
+The cost factor applies to the project repository, LFS objects, job artifacts, packages, snippets, and the wiki.
-- Total namespace storage used: Total amount of storage used across projects in this namespace.
-- Total excess storage used: Total amount of storage used that exceeds their allocated storage.
-- Purchased storage available: Total storage that has been purchased but is not yet used.
+The cost factor does not apply to private forks in namespaces on the Free plan.
-## Manage your storage usage
+## Manage storage usage
To manage your storage, if you are a namespace Owner you can [purchase more storage for the namespace](../subscriptions/gitlab_com/index.md#purchase-more-storage-and-transfer).
@@ -126,14 +72,16 @@ Depending on your role, you can also use the following methods to manage or redu
To automate storage usage analysis and management, see the [storage management automation](storage_management_automation.md) documentation.
-## Manage your transfer usage
+## Set usage quotas **(FREE SELF)**
+
+There are no application limits on the amount of storage and transfer for self-managed instances, and administrators are responsible for the underlying infrastructure costs. To manage repository size, administrators can set [repository size limits](../administration/settings/account_and_limit_settings.md#repository-size-limit).
-Depending on your role, to manage your transfer usage you can [reduce Container Registry data transfers](packages/container_registry/reduce_container_registry_data_transfer.md).
+## Storage limits **(FREE SAAS)**
-## Project storage limit
+### Project storage limit
-Projects on GitLab SaaS have a 10 GiB storage limit on their Git repository and LFS storage.
-After namespace-level storage limits are applied, the project limit is removed. A namespace has either a namespace-level storage limit or a project-level storage limit, but not both.
+Projects on GitLab SaaS have a 10 GiB storage limit on their Git repository and LFS storage. Project storage
+limits will be removed before namespace storage limits are applied to GitLab SaaS.
When a project's repository and LFS reaches the quota, the project is set to a read-only state.
You cannot push changes to a read-only project. To monitor the size of each
@@ -141,7 +89,7 @@ repository in a namespace, including a breakdown for each project,
[view storage usage](#view-storage-usage). To allow a project's repository and LFS to exceed the free quota
you must purchase additional storage. For more details, see [Excess storage usage](#excess-storage-usage).
-### Excess storage usage
+#### Excess storage usage
Excess storage usage is the amount that a project's repository and LFS exceeds the [project storage limit](#project-storage-limit). If no
purchased storage is available the project is set to a read-only state. You cannot push changes to a read-only project.
@@ -185,12 +133,19 @@ available decreases. All projects no longer have the read-only status because 40
| Yellow | 5 GiB | 0 GiB | 10 GiB | Not read-only |
| **Totals** | **45 GiB** | **10 GiB** | - | - |
-## Namespace storage limit
+### Namespace storage limit **(FREE SAAS)**
-Namespaces on GitLab SaaS have a storage limit. For more information, see our [pricing page](https://about.gitlab.com/pricing/).
+GitLab plans to enforce a storage limit for namespaces on GitLab SaaS. For more information, see
+the FAQs for the following tiers:
-After namespace storage limits are enforced, view them in the **Usage quotas** page.
-For more information about the namespace storage limit enforcement, see the FAQ pages for the [Free](https://about.gitlab.com/pricing/faq-efficient-free-tier/#storage-limits-on-gitlab-saas-free-tier) and [Paid](https://about.gitlab.com/pricing/faq-paid-storage-transfer/) tiers.
+- [Free tier](https://about.gitlab.com/pricing/faq-efficient-free-tier/#storage-limits-on-gitlab-saas-free-tier).
+- [Premium and Ultimate](https://about.gitlab.com/pricing/faq-paid-storage-transfer/).
+
+Namespaces on GitLab SaaS have a [10 GiB project limit](#project-storage-limit) and a soft limit on
+namespace storage. Soft storage limits are not yet enforced by GitLab, but they become
+hard limits when namespace storage limits apply. To prevent your namespace from becoming
+[read-only](../user/read_only_namespaces.md) when that happens,
+ensure that your namespace storage stays within the soft storage limit.
Namespace storage limits do not apply to self-managed deployments, but administrators can [manage the repository size](../administration/settings/account_and_limit_settings.md#repository-size-limit).
@@ -209,13 +164,13 @@ If your total namespace storage exceeds the available namespace storage quota, a
To notify you that you have nearly exceeded your namespace storage quota:
-- In the command-line interface, a notification displays after each `git push` action when you've reached 95% and 100% of your namespace storage quota.
-- In the GitLab UI, a notification displays when you've reached 75%, 95%, and 100% of your namespace storage quota.
+- In the command-line interface, a notification displays after each `git push` action when your namespace has reached between 95% and 100%+ of your namespace storage quota.
+- In the GitLab UI, a notification displays when your namespace has reached between 75% and 100%+ of your namespace storage quota.
- GitLab sends an email to members with the Owner role to notify them when namespace storage usage is at 70%, 85%, 95%, and 100%.
To prevent exceeding the namespace storage limit, you can:
-- [Manage your storage usage](#manage-your-storage-usage).
+- [Manage your storage usage](#manage-storage-usage).
- If you meet the eligibility requirements, you can apply for:
- [GitLab for Education](https://about.gitlab.com/solutions/education/join/)
- [GitLab for Open Source](https://about.gitlab.com/solutions/open-source/join/)
@@ -225,16 +180,8 @@ To prevent exceeding the namespace storage limit, you can:
- [Start a trial](https://about.gitlab.com/free-trial/) or [upgrade to GitLab Premium or Ultimate](https://about.gitlab.com/pricing/), which include higher limits and features to enable growing teams to ship faster without sacrificing on quality.
- [Talk to an expert](https://page.gitlab.com/usage_limits_help.html) for more information about your options.
-### View project fork storage usage
-
-A cost factor is applied to the storage consumed by project forks so that forks consume less namespace storage than their actual size.
-
-To view the amount of namespace storage the fork has used:
-
-1. On the left sidebar, select **Search or go to** and find your project or group.
-1. On the left sidebar, select **Settings > Usage Quotas**.
-1. Select the **Storage** tab. The **Total** column displays the amount of namespace storage used by the fork as a portion of the actual size of the fork on disk.
-
-The cost factor applies to the project repository, LFS objects, job artifacts, packages, snippets, and the wiki.
+## Related topics
-The cost factor does not apply to private forks in namespaces on the Free plan.
+- [Automate storage management](storage_management_automation.md)
+- [Purchase storage and transfer](../subscriptions/gitlab_com/index.md#purchase-more-storage-and-transfer)
+- [Transfer usage](packages/container_registry/reduce_container_registry_data_transfer.md)
diff --git a/doc/user/workspace/index.md b/doc/user/workspace/index.md
index 1284067a391..21905381577 100644
--- a/doc/user/workspace/index.md
+++ b/doc/user/workspace/index.md
@@ -95,18 +95,28 @@ Only these properties are relevant to the GitLab implementation of the `containe
| `endpoints` | Port mappings to expose from the container. |
| `volumeMounts` | Storage volume to mount in the container. |
+### Using variables in a devfile
+
+You can define variables to use in your devfile.
+The `variables` object is a map of name-value pairs that you can use for string replacement in the devfile.
+
+Variables cannot have names that start with `gl-`, `gl_`, `GL-`, or `GL_`.
+For more information about how and where to use variables, see the [devfile documentation](https://devfile.io/docs/2.2.0/defining-variables).
+
### Example configurations
The following is an example devfile configuration:
```yaml
schemaVersion: 2.2.0
+variables:
+ registry-root: registry.gitlab.com
components:
- name: tooling-container
attributes:
gl/inject-editor: true
container:
- image: registry.gitlab.com/gitlab-org/remote-development/gitlab-remote-development-docs/debian-bullseye-ruby-3.2-node-18.12:rubygems-3.4-git-2.33-lfs-2.9-yarn-1.22-graphicsmagick-1.3.36-gitlab-workspaces
+ image: "{{registry-root}}/gitlab-org/remote-development/gitlab-remote-development-docs/debian-bullseye-ruby-3.2-node-18.12:rubygems-3.4-git-2.33-lfs-2.9-yarn-1.22-graphicsmagick-1.3.36-gitlab-workspaces"
env:
- name: KEY
value: VALUE