gitlab.com/gitlab-org/gitlab-foss.git

author    GitLab Bot <gitlab-bot@gitlab.com>  2022-08-18 11:17:02 +0300
committer GitLab Bot <gitlab-bot@gitlab.com>  2022-08-18 11:17:02 +0300
commit    b39512ed755239198a9c294b6a45e65c05900235 (patch)
tree      d234a3efade1de67c46b9e5a38ce813627726aa7 /doc/integration
parent    d31474cf3b17ece37939d20082b07f6657cc79a9 (diff)
Add latest changes from gitlab-org/gitlab@15-3-stable-ee (v15.3.0-rc42)
Diffstat (limited to 'doc/integration')
-rw-r--r-- | doc/integration/advanced_search/elasticsearch.md | 33
-rw-r--r-- | doc/integration/advanced_search/elasticsearch_troubleshooting.md | 256
-rw-r--r-- | doc/integration/azure.md | 43
-rw-r--r-- | doc/integration/cas.md | 6
-rw-r--r-- | doc/integration/datadog.md | 3
-rw-r--r-- | doc/integration/github.md | 22
-rw-r--r-- | doc/integration/gitlab.md | 8
-rw-r--r-- | doc/integration/img/jenkins_gitlab_service.png | bin 19235 -> 0 bytes
-rw-r--r-- | doc/integration/img/jenkins_project.png | bin 42275 -> 0 bytes
-rw-r--r-- | doc/integration/img/omniauth_providers_v_14_6.png | bin 12165 -> 0 bytes
-rw-r--r-- | doc/integration/index.md | 1
-rw-r--r-- | doc/integration/jenkins_deprecated.md | 62
-rw-r--r-- | doc/integration/jira/dvcs.md | 2
-rw-r--r-- | doc/integration/mattermost/index.md | 88
-rw-r--r-- | doc/integration/omniauth.md | 72
-rw-r--r-- | doc/integration/saml.md | 13
-rw-r--r-- | doc/integration/security_partners/index.md | 4
-rw-r--r-- | doc/integration/twitter.md | 2
18 files changed, 406 insertions(+), 209 deletions(-)
diff --git a/doc/integration/advanced_search/elasticsearch.md b/doc/integration/advanced_search/elasticsearch.md
index 5eac6ab7c84..dc3dc4d2012 100644
--- a/doc/integration/advanced_search/elasticsearch.md
+++ b/doc/integration/advanced_search/elasticsearch.md
@@ -35,6 +35,10 @@ before we remove them.
|-----------------------|--------------------------|
| GitLab 15.0 or later | OpenSearch 1.x or later |
+If your version of Elasticsearch or OpenSearch is incompatible, to prevent data loss, indexing pauses and
+a message is logged in the
+[`elasticsearch.log`](../../administration/logs/index.md#elasticsearchlog) file.
+
If you are using a compatible version and after connecting to OpenSearch, you get the message `Elasticsearch version not compatible`, [unpause indexing](#unpause-indexing).
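+
+To confirm which version GitLab is connecting to, and whether the pause message has been
+logged, you can query the search server and check the log (a sketch; the host, port, and
+Omnibus log path are placeholders, and the exact message text may differ):
+
+```shell
+# Report the Elasticsearch or OpenSearch version that GitLab connects to
+curl "http://<elasticsearch_host>:9200"
+
+# Look for the incompatibility message written when indexing is paused
+sudo grep -i "compatible" /var/log/gitlab/gitlab-rails/elasticsearch.log
+```
+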
## System requirements
@@ -53,7 +57,7 @@ each node should have:
## Install Elasticsearch
Elasticsearch is *not* included in the Omnibus packages or when you install from
-source. You must [install it separately](https://www.elastic.co/guide/en/elasticsearch/reference/7.x/install-elasticsearch.html "Elasticsearch 7.x installation documentation") and ensure you select your version. Detailed information on how to install Elasticsearch is out of the scope of this page.
+source. You must [install it separately](https://www.elastic.co/guide/en/elasticsearch/reference/7.16/install-elasticsearch.html "Elasticsearch 7.x installation documentation") and ensure you select your version. Detailed information on how to install Elasticsearch is out of the scope of this page.
You can install Elasticsearch yourself, or use a cloud hosted offering such as [Elasticsearch Service](https://www.elastic.co/elasticsearch/service) (available on AWS, GCP, or Azure) or the [Amazon OpenSearch](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/gsg.html)
service.
@@ -159,6 +163,12 @@ If you see an error such as `Permission denied - /home/git/gitlab-elasticsearch-
may need to set the `production -> elasticsearch -> indexer_path` setting in your `gitlab.yml` file to
`/usr/local/bin/gitlab-elasticsearch-indexer`, which is where the binary is installed.
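+
+A quick way to confirm where the indexer binary actually lives (a sketch; output depends on
+your installation method):
+
+```shell
+# Print the resolved path of the indexer binary on PATH, if any
+command -v gitlab-elasticsearch-indexer
+
+# Check the location referenced by the setting above
+ls -l /usr/local/bin/gitlab-elasticsearch-indexer
+```
+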
+### View indexing errors
+
+Errors from the [GitLab Elasticsearch Indexer](https://gitlab.com/gitlab-org/gitlab-elasticsearch-indexer) are reported in
+the [`sidekiq.log`](../../administration/logs/index.md#sidekiqlog) file with a `json.exception.class` of `Gitlab::Elastic::Indexer::Error`.
+These errors may occur when indexing Git repository data.
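+
+For example, to pull those entries out of the Sidekiq log (a sketch; the Omnibus log
+location is shown, adjust it for installations from source):
+
+```shell
+# Show recent Sidekiq log entries raised by the indexer
+sudo grep "Gitlab::Elastic::Indexer::Error" /var/log/gitlab/sidekiq/current | tail -n 5
+```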
+
## Enable Advanced Search
For GitLab instances with more than 50GB repository data you can follow the instructions for [how to index large instances efficiently](#how-to-index-large-instances-efficiently) below.
@@ -212,7 +222,7 @@ The following Elasticsearch settings are available:
| `Password` | The password of your Elasticsearch instance. |
| `Number of Elasticsearch shards` | Elasticsearch indices are split into multiple shards for performance reasons. In general, you should use at least 5 shards, and indices with tens of millions of documents need to have more shards ([see below](#guidance-on-choosing-optimal-cluster-configuration)). Changes to this value do not take effect until the index is recreated. You can read more about tradeoffs in the [Elasticsearch documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/scalability.html). |
| `Number of Elasticsearch replicas` | Each Elasticsearch shard can have a number of replicas. These are a complete copy of the shard, and can provide increased query performance or resilience against hardware failure. Increasing this value increases total disk space required by the index. |
-| `Limit the number of namespaces and projects that can be indexed` | Enabling this allows you to select namespaces and projects to index. All other namespaces and projects use database search instead. If you enable this option but do not select any namespaces or projects, none are indexed. [Read more below](#limit-the-number-of-namespaces-and-projects-that-can-be-indexed).
+| `Limit the number of namespaces and projects that can be indexed` | Enabling this allows you to select namespaces and projects to index. All other namespaces and projects use database search instead. If you enable this option but do not select any namespaces or projects, none are indexed. [Read more below](#limit-the-number-of-namespaces-and-projects-that-can-be-indexed).|
| `Using AWS hosted Elasticsearch with IAM credentials` | Sign your Elasticsearch requests using [AWS IAM authorization](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html), [AWS EC2 Instance Profile Credentials](https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-iam-instance-profile.html#getting-started-create-iam-instance-profile-cli), or [AWS ECS Tasks Credentials](https://docs.aws.amazon.com/AmazonECS/latest/userguide/task-iam-roles.html). Please refer to [Identity and Access Management in Amazon OpenSearch Service](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ac.html) for details of AWS hosted OpenSearch domain access policy configuration. |
| `AWS Region` | The AWS region in which your OpenSearch Service is located. |
| `AWS Access Key` | The AWS access key. |
@@ -438,13 +448,13 @@ This should return something similar to:
}
```
-In order to debug issues with the migrations you can check the [`elasticsearch.log` file](../../administration/logs.md#elasticsearchlog).
+In order to debug issues with the migrations you can check the [`elasticsearch.log` file](../../administration/logs/index.md#elasticsearchlog).
### Retry a halted migration
Some migrations are built with a retry limit. If the migration cannot finish within the retry limit,
it is halted and a notification is displayed in the Advanced Search integration settings.
-It is recommended to check the [`elasticsearch.log` file](../../administration/logs.md#elasticsearchlog) to
+It is recommended to check the [`elasticsearch.log` file](../../administration/logs/index.md#elasticsearchlog) to
debug why the migration was halted and make any changes before retrying the migration. Once you believe you've
fixed the cause of the failure, select "Retry migration", and the migration is scheduled to be retried
in the background.
@@ -462,8 +472,7 @@ Before doing a major version GitLab upgrade, you should have completed all
migrations that exist up until the latest minor version before that major
version. If you have halted migrations, these need to be resolved and
[retried](#retry-a-halted-migration) before proceeding with a major version
-upgrade. Read more about [upgrading to a new major
-version](../../update/index.md#upgrading-to-a-new-major-version).
+upgrade. Read more about [upgrading to a new major version](../../update/index.md#upgrading-to-a-new-major-version).
## GitLab Advanced Search Rake tasks
@@ -573,9 +582,9 @@ due to large volumes of data being indexed.
WARNING:
Indexing a large instance generates a lot of Sidekiq jobs.
-Make sure to prepare for this task by having a [Scalable and Highly Available
-Setup](../../administration/reference_architectures/index.md) or creating [extra
-Sidekiq processes](../../administration/operations/extra_sidekiq_processes.md).
+Make sure to prepare for this task by having a
+[scalable setup](../../administration/reference_architectures/index.md) or creating
+[extra Sidekiq processes](../../administration/sidekiq/extra_sidekiq_processes.md).
1. [Configure your Elasticsearch host and port](#enable-advanced-search).
1. Create empty indices:
@@ -774,8 +783,8 @@ additional process dedicated to indexing a set of queues (or queue group). This
ensure that indexing queues always have a dedicated worker, while the rest of the queues have
another dedicated worker to avoid contention.
-For this purpose, use the [queue selector](../../administration/operations/extra_sidekiq_processes.md#queue-selector)
-option that allows a more general selection of queue groups using a [worker matching query](../../administration/operations/extra_sidekiq_routing.md#worker-matching-query).
+For this purpose, use the [queue selector](../../administration/sidekiq/extra_sidekiq_processes.md#queue-selector)
+option that allows a more general selection of queue groups using a [worker matching query](../../administration/sidekiq/extra_sidekiq_routing.md#worker-matching-query).
To handle these two queue groups, we generally recommend one of the following two options. You can either:
@@ -809,7 +818,7 @@ WARNING:
When starting multiple processes, the number of processes cannot exceed the number of CPU
cores you want to dedicate to Sidekiq. Each Sidekiq process can use only one CPU core, subject
to the available workload and concurrency settings. For more details, see how to
-[run multiple Sidekiq processes](../../administration/operations/extra_sidekiq_processes.md).
+[run multiple Sidekiq processes](../../administration/sidekiq/extra_sidekiq_processes.md).
### Two nodes, one process for each
diff --git a/doc/integration/advanced_search/elasticsearch_troubleshooting.md b/doc/integration/advanced_search/elasticsearch_troubleshooting.md
index 97abf456baa..fb558441d6a 100644
--- a/doc/integration/advanced_search/elasticsearch_troubleshooting.md
+++ b/doc/integration/advanced_search/elasticsearch_troubleshooting.md
@@ -14,16 +14,30 @@ Use the following information to troubleshoot Elasticsearch issues.
One of the most valuable tools for identifying issues with the Elasticsearch
integration are logs. The most relevant logs for this integration are:
-1. [`sidekiq.log`](../../administration/logs.md#sidekiqlog) - All of the
+1. [`sidekiq.log`](../../administration/logs/index.md#sidekiqlog) - All of the
indexing happens in Sidekiq, so much of the relevant logs for the
Elasticsearch integration can be found in this file.
-1. [`elasticsearch.log`](../../administration/logs.md#elasticsearchlog) - There
+1. [`elasticsearch.log`](../../administration/logs/index.md#elasticsearchlog) - There
are additional logs specific to Elasticsearch that are sent to this file
that may contain useful diagnostic information about searching,
indexing or migrations.
Here are some common pitfalls and how to overcome them.
+## Common terminology
+
+- **Lucene**: A full-text search library written in Java.
+- **Near real time (NRT)**: Refers to the slight latency from the time to index a
+ document to the time when it becomes searchable.
+- **Cluster**: A collection of one or more nodes that work together to hold all
+ the data, providing indexing and search capabilities.
+- **Node**: A single server that works as part of a cluster.
+- **Index**: A collection of documents that have somewhat similar characteristics.
+- **Document**: A basic unit of information that can be indexed.
+- **Shards**: Fully-functional and independent subdivisions of indices. Each shard is actually
+ a Lucene index.
+- **Replicas**: Failover mechanisms that duplicate indices.
+
## How can I verify that my GitLab instance is using Elasticsearch?
There are a couple of ways to achieve that:
@@ -44,6 +58,20 @@ There are a couple of ways to achieve that:
::Gitlab::CurrentSettings.elasticsearch_limit_indexing? # Whether or not Elasticsearch is limited only to certain projects/namespaces
```
+- Confirm searches use Elasticsearch by accessing the
+  [rails console](../../administration/operations/rails_console.md) and running the following commands:
+
+ ```rails
+ u = User.find_by_email('email_of_user_doing_search')
+ s = SearchService.new(u, {:search => 'search_term'})
+ pp s.search_objects.class
+ ```
+
+ The output from the last command is the key here. If it shows:
+
+ - `ActiveRecord::Relation`, **it is not** using Elasticsearch.
+ - `Kaminari::PaginatableArray`, **it is** using Elasticsearch.
+
- If Elasticsearch is limited to specific namespaces and you need to know if
Elasticsearch is being used for a specific project or namespace, you can use
the Rails console:
@@ -53,13 +81,57 @@ There are a couple of ways to achieve that:
::Gitlab::CurrentSettings.search_using_elasticsearch?(scope: Project.find_by_full_path("/my-namespace/my-project"))
```
-## I updated GitLab and now I can't find anything
+## Troubleshooting indexing
+
+Troubleshooting indexing issues can be tricky. It can quickly become a matter for either GitLab
+support or your Elasticsearch administrator.
+
+The best place to start is to determine if the issue is with creating an empty index.
+If it is, check on the Elasticsearch side to determine if the `gitlab-production` index
+(the name of the GitLab index) exists. If it exists, manually delete it on the Elasticsearch
+side and attempt to recreate it with the
+[`recreate_index`](../../integration/advanced_search/elasticsearch.md#gitlab-advanced-search-rake-tasks)
+Rake task.
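+
+For example (a sketch; the host and port are placeholders, and the index name assumes the
+default `gitlab-production`):
+
+```shell
+# Check whether the GitLab index exists on the Elasticsearch side
+curl "http://<elasticsearch_host>:9200/_cat/indices/gitlab-production?v"
+
+# If it exists but is broken, delete it, then recreate it from GitLab
+curl --request DELETE "http://<elasticsearch_host>:9200/gitlab-production"
+sudo gitlab-rake gitlab:elastic:recreate_index
+```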
+
+If you still encounter issues, try creating an index manually on the Elasticsearch
+instance (see the sketch after this list). The details of the index aren't important here,
+as we want to test whether indices can be made. If the indices:
+
+- Cannot be made, speak with your Elasticsearch administrator.
+- Can be made, escalate this to GitLab support.
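+
+A minimal manual test (a sketch; the host, port, and throwaway index name are placeholders):
+
+```shell
+# Try to create, then remove, a disposable test index
+curl --request PUT "http://<elasticsearch_host>:9200/gitlab-connectivity-test"
+curl --request DELETE "http://<elasticsearch_host>:9200/gitlab-connectivity-test"
+```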
+
+If the issue is not with creating an empty index, the next step is to check for errors
+during the indexing of projects. If errors do occur, they stem from the indexing either:
+
+- On the GitLab side. You need to rectify those. If they are not
+ something you are familiar with, contact GitLab support for guidance.
+- Within the Elasticsearch instance itself. See if the error is [documented and has a fix](../../integration/advanced_search/elasticsearch_troubleshooting.md). If not, speak with your Elasticsearch administrator.
+
+If the indexing process does not present errors, check the status of the indexed projects. You can do this via the following Rake tasks:
+
+- [`sudo gitlab-rake gitlab:elastic:index_projects_status`](../../integration/advanced_search/elasticsearch.md#gitlab-advanced-search-rake-tasks) (shows the overall status)
+- [`sudo gitlab-rake gitlab:elastic:projects_not_indexed`](../../integration/advanced_search/elasticsearch.md#gitlab-advanced-search-rake-tasks) (shows specific projects that are not indexed)
+
+If:
+
+- Everything is showing at 100%, escalate to GitLab support. This could be a bug.
+- Anything is not at 100%, attempt to reindex that project. To do this,
+ run `sudo gitlab-rake gitlab:elastic:index_projects ID_FROM=<project ID> ID_TO=<project ID>`.
+
+If reindexing the project shows:
+
+- Errors on the GitLab side, escalate those to GitLab support.
+- Elasticsearch errors, or no errors at all, reach out to your
+ Elasticsearch administrator to check the instance.
+
+### I updated GitLab and now I can't find anything
We continuously make updates to our indexing strategies and aim to support
newer versions of Elasticsearch. When indexing changes are made, it may
be necessary for you to [reindex](elasticsearch.md#zero-downtime-reindexing) after updating GitLab.
-## I indexed all the repositories but I can't get any hits for my search term in the UI
+### I indexed all the repositories but I can't get any hits for my search term in the UI
Make sure you [indexed all the database data](elasticsearch.md#enable-advanced-search).
@@ -79,26 +151,29 @@ curl --request GET <elasticsearch_server_ip>:9200/gitlab-production/_search?q=<s
More [complex Elasticsearch API calls](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-filter-context.html) are also possible.
-It is important to understand at which level the problem is manifesting (UI, Rails code, Elasticsearch side) to be able to [troubleshoot further](../../administration/troubleshooting/elasticsearch.md#search-results-workflow).
+If the results:
+
+- Sync up, please check that you are using [supported syntax](../../user/search/global_search/advanced_search_syntax.md). Note that Advanced Search does not support [exact substring matching](https://gitlab.com/gitlab-org/gitlab/-/issues/325234).
+- Do not match up, this indicates a problem with the documents generated from the project. It is best to [re-index that project](../advanced_search/elasticsearch.md#indexing-a-range-of-projects-or-a-specific-project).
NOTE:
The above instructions are not to be used for scenarios that only index a [subset of namespaces](elasticsearch.md#limit-the-number-of-namespaces-and-projects-that-can-be-indexed).
See [Elasticsearch Index Scopes](elasticsearch.md#advanced-search-index-scopes) for more information on searching for specific types of data.
-## I indexed all the repositories but then switched Elasticsearch servers and now I can't find anything
+### I indexed all the repositories but then switched Elasticsearch servers and now I can't find anything
You must re-run all the Rake tasks to reindex the database, repositories, and wikis.
-## The indexing process is taking a very long time
+### The indexing process is taking a very long time
The more data present in your GitLab instance, the longer the indexing process takes.
-## There are some projects that weren't indexed, but I don't know which ones
+### There are some projects that weren't indexed, but I don't know which ones
You can run `sudo gitlab-rake gitlab:elastic:projects_not_indexed` to display projects that aren't indexed.
-## No new data is added to the Elasticsearch index when I push code
+### No new data is added to the Elasticsearch index when I push code
NOTE:
This was [fixed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/35936) in GitLab 13.2 and the Rake task is not available for versions greater than that.
@@ -109,6 +184,116 @@ When performing the initial indexing of blobs, we lock all projects until the pr
sudo gitlab-rake gitlab:elastic:clear_locked_projects
```
+### Indexing fails with `error: elastic: Error 429 (Too Many Requests)`
+
+If `ElasticCommitIndexerWorker` Sidekiq workers are failing with this error during indexing, it usually means that Elasticsearch is unable to keep up with the concurrency of indexing requests (a log check is sketched after the list below). To address this, change the following settings:
+
+- To decrease the indexing throughput you can decrease `Bulk request concurrency` (see [Advanced Search settings](elasticsearch.md#advanced-search-configuration)). This is set to `10` by default, but you can change it to as low as `1` to reduce the number of concurrent indexing operations.
+- If changing `Bulk request concurrency` didn't help, you can use the [queue selector](../../administration/sidekiq/extra_sidekiq_processes.md#queue-selector) option to [limit indexing jobs only to specific Sidekiq nodes](elasticsearch.md#index-large-instances-with-dedicated-sidekiq-nodes-or-processes), which should reduce the number of indexing requests.
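+
+To confirm that these rejections are what is failing your indexing jobs (a sketch; the
+Omnibus Sidekiq log location is shown, adjust it for installations from source):
+
+```shell
+# Show recent indexer failures caused by Elasticsearch returning HTTP 429
+sudo grep "Error 429 (Too Many Requests)" /var/log/gitlab/sidekiq/current | tail -n 5
+```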
+
+### Indexing is very slow or fails with `rejected execution of coordinating operation` messages
+
+Bulk requests getting rejected by the Elasticsearch nodes are likely due to load and lack of available memory.
+Ensure that your Elasticsearch cluster meets the [system requirements](elasticsearch.md#system-requirements) and has enough resources
+to perform bulk operations. See also the error ["429 (Too Many Requests)"](#indexing-fails-with-error-elastic-error-429-too-many-requests).
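+
+To get a quick view of memory and CPU pressure across the cluster (a sketch; the host and
+port are placeholders):
+
+```shell
+# Show heap, RAM, and CPU usage per Elasticsearch node
+curl "http://<elasticsearch_host>:9200/_cat/nodes?v&h=name,heap.percent,ram.percent,cpu,load_1m"
+```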
+
+### Last resort to recreate an index
+
+There may be cases where somehow data never got indexed and it's not in the
+queue, or the index is somehow in a state where migrations just cannot
+proceed. It is always best to try to troubleshoot the root cause of the problem
+by [viewing the logs](#view-logs).
+
+If there are no other options, then you always have the option of recreating the
+entire index from scratch. If you have a small GitLab installation, this can
+sometimes be a quick way to resolve a problem, but if you have a large GitLab
+installation, then this might take a very long time to complete. Until the
+index is fully recreated, your index does not serve correct search results,
+so you may want to disable **Search with Elasticsearch** while it is running.
+
+If you are sure you've read the above caveats and want to proceed, then you
+should run the following Rake task to recreate the entire index from scratch:
+
+**For Omnibus installations**
+
+```shell
+# WARNING: DO NOT RUN THIS UNTIL YOU READ THE DESCRIPTION ABOVE
+sudo gitlab-rake gitlab:elastic:index
+```
+
+**For installations from source**
+
+```shell
+# WARNING: DO NOT RUN THIS UNTIL YOU READ THE DESCRIPTION ABOVE
+cd /home/git/gitlab
+sudo -u git -H bundle exec rake gitlab:elastic:index
+```
+
+### Troubleshooting performance
+
+Troubleshooting performance can be difficult on Elasticsearch. There is a lot of tuning
+that *can* be done, but the majority of this falls on the shoulders of a skilled
+Elasticsearch administrator.
+
+Generally speaking, ensure:
+
+- The Elasticsearch server **is not** running on the same node as GitLab.
+- The Elasticsearch server has enough RAM and CPU cores.
+- That sharding **is** being used.
+
+Going into some more detail here, if Elasticsearch is running on the same server as GitLab, resource contention is **very** likely to occur. Ideally, Elasticsearch, which requires ample resources, should be running on its own server (maybe coupled with Logstash and Kibana).
+
+When it comes to Elasticsearch, RAM is the key resource. Elastic recommends:
+
+- **At least** 8 GB of RAM for a non-production instance.
+- **At least** 16 GB of RAM for a production instance.
+- Ideally, 64 GB of RAM.
+
+For CPU, Elastic recommends at least 2 CPU cores, but notes that common
+setups use up to 8 cores. For more details on server specs, check out
+[Elasticsearch's hardware guide](https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html).
+
+Beyond the obvious, sharding comes into play. Sharding is a core part of Elasticsearch.
+It allows for horizontal scaling of indices, which is helpful when you are dealing with
+a large amount of data.
+
+With the way GitLab does indexing, there is a **huge** number of documents being
+indexed. By utilizing sharding, you can speed up Elasticsearch's ability to locate
+data, since each shard is a Lucene index.
+
+If you are not using sharding, you are likely to hit issues when you start using
+Elasticsearch in a production environment.
+
+Keep in mind that an index with only one shard has **no scale factor** and will
+likely encounter issues when called upon with some frequency.
+
+If you need to know how many shards to use, read
+[Elasticsearch's documentation on capacity planning](https://www.elastic.co/guide/en/elasticsearch/guide/2.x/capacity-planning.html),
+as the answer is not straightforward.
+
+The easiest way to determine if sharding is in use is to check the output of the
+[Elasticsearch Health API](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html):
+
+- Red means the cluster is down.
+- Yellow means it is up, but replica shards are not allocated.
+- Green means it is healthy (up, sharding, replicating).
+
+For production use, it should always be green.
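+
+For example (a sketch; the host and port are placeholders):
+
+```shell
+# Report overall cluster health, including active shards and any unassigned replicas
+curl "http://<elasticsearch_host>:9200/_cluster/health?pretty"
+```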
+
+Beyond these steps, you get into some of the more complicated things to check,
+such as merges and caching. These take some time to
+learn, so it is best to escalate to or pair with an Elasticsearch expert if you need to
+dig further into these.
+
+Feel free to reach out to GitLab support, but this is likely to be something a skilled
+Elasticsearch administrator has more experience with.
+
+## Issues with migrations
+
+Please ensure you've read about [Elasticsearch Migrations](../advanced_search/elasticsearch.md#advanced-search-migrations).
+
+If there is a halted migration and your [`elasticsearch.log`](../../administration/logs/index.md#elasticsearchlog) file contains errors, this could be a bug. Escalate to GitLab support if retrying migrations does not succeed.
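+
+To look for migration-related entries in the log (a sketch; the Omnibus log path is shown,
+adjust it for installations from source):
+
+```shell
+# Show the most recent migration messages and any errors recorded alongside them
+sudo grep -i "migration" /var/log/gitlab/gitlab-rails/elasticsearch.log | tail -n 20
+```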
+
## `Can't specify parent if no parent field has been configured` error
If you enabled Elasticsearch before GitLab 8.12 and have not rebuilt indices, you get
@@ -146,6 +331,10 @@ This exception is seen when your Elasticsearch cluster is configured to reject r
AWS has [fixed limits](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/limits.html#network-limits) for this setting ("Maximum size of HTTP request payloads"), based on the size of the underlying instance.
+## `Faraday::TimeoutError (execution expired)` error when using a proxy
+
+Set a custom `gitlab_rails['env']` environment variable called [`no_proxy`](https://docs.gitlab.com/omnibus/settings/environment-variables.html), with the IP address of your Elasticsearch host.
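+
+A minimal sketch for Omnibus installations (the Elasticsearch address `10.0.0.5` is a
+placeholder; adjust the exemption list to your environment):
+
+```shell
+# After adding the exemption to /etc/gitlab/gitlab.rb, for example:
+#   gitlab_rails['env'] = { 'no_proxy' => "localhost,127.0.0.1,10.0.0.5" }
+# apply it and confirm the Rails environment sees the variable:
+sudo gitlab-ctl reconfigure
+sudo gitlab-rails runner "puts ENV['no_proxy']"
+```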
+
## My single node Elasticsearch cluster status never goes from `yellow` to `green` even though everything seems to be running properly
**For a single node Elasticsearch cluster the functional cluster health status is yellow** (never green) because the primary shard is allocated but replicas cannot be as there is no other node to which Elasticsearch can assign a replica. This also applies if you are using the [Amazon OpenSearch](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/aes-handling-errors.html#aes-handling-errors-yellow-cluster-status) service.
@@ -182,10 +371,6 @@ reason may be incompatible with our integration. You should try disabling
plugins so you can rule out the possibility that the plugin is causing the
problem.
-## Low-level troubleshooting
-
-There is a [more structured, lower-level troubleshooting document](../../administration/troubleshooting/elasticsearch.md) for when you experience other issues, including poor performance.
-
## Elasticsearch `code_analyzer` doesn't account for all code cases
The `code_analyzer` pattern and filter configuration is being evaluated for improvement. We have fixed [most edge cases](https://gitlab.com/groups/gitlab-org/-/epics/3621#note_363429094) that were not returning expected search results due to our pattern and filter configuration.
@@ -196,38 +381,6 @@ Improvements to the `code_analyzer` pattern and filters are being discussed in [
In GitLab 13.9, a change was made where [binary file names are being indexed](https://gitlab.com/gitlab-org/gitlab/-/issues/301083). However, without indexing all projects' data from scratch, only binary files that are added or updated after the GitLab 13.9 release are searchable.
-## Last resort to recreate an index
-
-There may be cases where somehow data never got indexed and it's not in the
-queue, or the index is somehow in a state where migrations just cannot
-proceed. It is always best to try to troubleshoot the root cause of the problem
-by [viewing the logs](#view-logs).
-
-If there are no other options, then you always have the option of recreating the
-entire index from scratch. If you have a small GitLab installation, this can
-sometimes be a quick way to resolve a problem, but if you have a large GitLab
-installation, then this might take a very long time to complete. Until the
-index is fully recreated, your index does not serve correct search results,
-so you may want to disable **Search with Elasticsearch** while it is running.
-
-If you are sure you've read the above caveats and want to proceed, then you
-should run the following Rake task to recreate the entire index from scratch:
-
-**For Omnibus installations**
-
-```shell
-# WARNING: DO NOT RUN THIS UNTIL YOU READ THE DESCRIPTION ABOVE
-sudo gitlab-rake gitlab:elastic:index
-```
-
-**For installations from source**
-
-```shell
-# WARNING: DO NOT RUN THIS UNTIL YOU READ THE DESCRIPTION ABOVE
-cd /home/git/gitlab
-sudo -u git -H bundle exec rake gitlab:elastic:index
-```
-
## How does Advanced Search handle private projects?
Advanced Search stores all the projects in the same Elasticsearch indices,
@@ -235,19 +388,6 @@ however, searches only surface results that can be viewed by the user.
Advanced Search honors all permission checks in the application by
filtering out projects that a user does not have access to at search time.
-## Indexing fails with `error: elastic: Error 429 (Too Many Requests)`
-
-If `ElasticCommitIndexerWorker` Sidekiq workers are failing with this error during indexing, it usually means that Elasticsearch is unable to keep up with the concurrency of indexing request. To address change the following settings:
-
-- To decrease the indexing throughput you can decrease `Bulk request concurrency` (see [Advanced Search settings](elasticsearch.md#advanced-search-configuration)). This is set to `10` by default, but you change it to as low as 1 to reduce the number of concurrent indexing operations.
-- If changing `Bulk request concurrency` didn't help, you can use the [queue selector](../../administration/operations/extra_sidekiq_processes.md#queue-selector) option to [limit indexing jobs only to specific Sidekiq nodes](elasticsearch.md#index-large-instances-with-dedicated-sidekiq-nodes-or-processes), which should reduce the number of indexing requests.
-
-## Indexing is very slow or fails with `rejected execution of coordinating operation` messages
-
-Bulk requests getting rejected by the Elasticsearch nodes are likely due to load and lack of available memory.
-Ensure that your Elasticsearch cluster meets the [system requirements](elasticsearch.md#system-requirements) and has enough resources
-to perform bulk operations. See also the error ["429 (Too Many Requests)"](#indexing-fails-with-error-elastic-error-429-too-many-requests).
-
## Access requirements for the self-managed AWS OpenSearch Service
To use the self-managed AWS OpenSearch Service with GitLab, configure your instance's domain access policies
diff --git a/doc/integration/azure.md b/doc/integration/azure.md
index 515e7406545..da1aa574bd6 100644
--- a/doc/integration/azure.md
+++ b/doc/integration/azure.md
@@ -107,6 +107,24 @@ Alternatively, add the `User.Read.All` application permission.
]
```
+ For [alternative Azure clouds](https://docs.microsoft.com/en-us/azure/active-directory/develop/authentication-national-cloud),
+ configure `base_azure_url` under the `args` section. For example, for Azure Government Community Cloud (GCC):
+
+ ```ruby
+ gitlab_rails['omniauth_providers'] = [
+ {
+ "name" => "azure_activedirectory_v2",
+ "label" => "Provider name", # optional label for login button, defaults to "Azure AD v2"
+ "args" => {
+ "client_id" => "CLIENT ID",
+ "client_secret" => "CLIENT SECRET",
+ "tenant_id" => "TENANT ID",
+ "base_azure_url" => "https://login.microsoftonline.us"
+ }
+ }
+ ]
+ ```
+
- **For installations from source**
For the v1.0 endpoint:
@@ -115,8 +133,8 @@ Alternatively, add the `User.Read.All` application permission.
- { name: 'azure_oauth2',
# label: 'Provider name', # optional label for login button, defaults to "Azure AD"
args: { client_id: 'CLIENT ID',
- client_secret: 'CLIENT SECRET',
- tenant_id: 'TENANT ID' } }
+ client_secret: 'CLIENT SECRET',
+ tenant_id: 'TENANT ID' } }
```
For the v2.0 endpoint:
@@ -125,14 +143,25 @@ Alternatively, add the `User.Read.All` application permission.
- { name: 'azure_activedirectory_v2',
label: 'Provider name', # optional label for login button, defaults to "Azure AD v2"
args: { client_id: "CLIENT ID",
- client_secret: "CLIENT SECRET",
- tenant_id: "TENANT ID" } }
+ client_secret: "CLIENT SECRET",
+ tenant_id: "TENANT ID" } }
+ ```
+
+ For [alternative Azure clouds](https://docs.microsoft.com/en-us/azure/active-directory/develop/authentication-national-cloud),
+ configure `base_azure_url` under the `args` section. For example, for Azure Government Community Cloud (GCC):
+
+ ```yaml
+ - { name: 'azure_activedirectory_v2',
+ label: 'Provider name', # optional label for login button, defaults to "Azure AD v2"
+ args: { client_id: "CLIENT ID",
+ client_secret: "CLIENT SECRET",
+ tenant_id: "TENANT ID",
+ base_azure_url: "https://login.microsoftonline.us" } }
```
- You can optionally add the following parameters:
+   In addition, you can optionally add the following parameter to the `args` section:
- - `base_azure_url` for different locales. For example, `base_azure_url: "https://login.microsoftonline.de"`.
- - `scope`, which you add to `args`. The default is `openid profile email`.
+ - `scope` for [OAuth2 scopes](https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-auth-code-flow). The default is `openid profile email`.
1. Save the configuration file.
diff --git a/doc/integration/cas.md b/doc/integration/cas.md
index a0cb6bd98cd..38305967246 100644
--- a/doc/integration/cas.md
+++ b/doc/integration/cas.md
@@ -4,7 +4,11 @@ group: Authentication and Authorization
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
-# CAS OmniAuth Provider **(FREE SELF)**
+# CAS OmniAuth provider (deprecated) **(FREE SELF)**
+
+WARNING:
+This feature was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/369127) in GitLab 15.3 and is planned for
+removal in 16.0.
To enable the CAS OmniAuth provider you must register your application with your
CAS instance. This requires the service URL GitLab supplies to CAS. It should be
diff --git a/doc/integration/datadog.md b/doc/integration/datadog.md
index a9be7754cb9..b8624545c41 100644
--- a/doc/integration/datadog.md
+++ b/doc/integration/datadog.md
@@ -33,8 +33,7 @@ project, group, or instance level:
1. Select **Active** to enable the integration.
1. Specify the [**Datadog site**](https://docs.datadoghq.com/getting_started/site/) to send data to.
1. Provide your Datadog **API key**.
-<!-- 1. Optional. Select **Enable logs collection** to enable logs collection for the output of jobs. ([Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/346339) in GitLab 14.8.) -->
-<!-- TODO: uncomment the archive_trace_events field once :datadog_integration_logs_collection is rolled out. Rollout issue: https://gitlab.com/gitlab-org/gitlab/-/issues/346339 -->
+1. Optional. Select **Enable logs collection** to enable logs collection for the output of jobs. ([Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/346339) in GitLab 15.3.)
1. Optional. To override the API URL used to send data directly, provide an **API URL**.
Used only in advanced scenarios.
1. Optional. If you use more than one GitLab instance, provide a unique **Service** name
diff --git a/doc/integration/github.md b/doc/integration/github.md
index 3011155f825..ad90c714dac 100644
--- a/doc/integration/github.md
+++ b/doc/integration/github.md
@@ -157,13 +157,29 @@ To fix this issue, you must disable SSL verification:
1. Change the global Git `sslVerify` option to `false` on the GitLab server.
- - **For Omnibus installations**
+ - **For Omnibus installations in [GitLab 15.3](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/6800) and later**:
+
+ ```ruby
+ gitaly['gitconfig'] = [
+ {key: "http.sslVerify", value: "false"},
+ ]
+ ```
+
+ - **For Omnibus installations in GitLab 15.2 and earlier (legacy method)**:
```ruby
omnibus_gitconfig['system'] = { "http" => ["sslVerify = false"] }
```
- - **For installations from source**
+ - **For installations from source in [GitLab 15.3](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/6800) and later**, edit the Gitaly configuration (`gitaly.toml`):
+
+ ```toml
+ [[git.config]]
+ key = "http.sslVerify"
+ value = "false"
+ ```
+
+ - **For installations from source in GitLab 15.2 and earlier (legacy method)**:
```shell
git config --global http.sslVerify false
@@ -180,7 +196,7 @@ GitLab instance and GitHub Enterprise.
To check for a connectivity issue:
-1. Go to the [`production.log`](../administration/logs.md#productionlog)
+1. Go to the [`production.log`](../administration/logs/index.md#productionlog)
on your GitLab server and look for the following error:
``` plaintext
diff --git a/doc/integration/gitlab.md b/doc/integration/gitlab.md
index 02705d9dec3..fee1e573384 100644
--- a/doc/integration/gitlab.md
+++ b/doc/integration/gitlab.md
@@ -77,7 +77,7 @@ GitLab.com generates an application ID and secret key for you to use.
app_id: "YOUR_APP_ID",
app_secret: "YOUR_APP_SECRET",
args: { scope: "read_user" # optional: defaults to the scopes of the application
- , client_options: { site: "https://gitlab.example.com/api/v4" } }
+ , client_options: { site: "https://gitlab.example.com" } }
}
]
```
@@ -98,9 +98,13 @@ GitLab.com generates an application ID and secret key for you to use.
label: 'Provider name', # optional label for login button, defaults to "GitLab.com"
app_id: 'YOUR_APP_ID',
app_secret: 'YOUR_APP_SECRET',
- args: { "client_options": { "site": 'https://gitlab.example.com/api/v4' } }
+ args: { "client_options": { "site": 'https://gitlab.example.com' } }
```
+ NOTE:
+ In GitLab 15.1 and earlier, the `site` parameter requires an `/api/v4` suffix.
+ We recommend you drop this suffix after you upgrade to GitLab 15.2 or later.
+
1. Change `'YOUR_APP_ID'` to the Application ID from the GitLab.com application page.
1. Change `'YOUR_APP_SECRET'` to the secret from the GitLab.com application page.
1. Save the configuration file.
diff --git a/doc/integration/img/jenkins_gitlab_service.png b/doc/integration/img/jenkins_gitlab_service.png
deleted file mode 100644
index 682a5ae8ee2..00000000000
--- a/doc/integration/img/jenkins_gitlab_service.png
+++ /dev/null
Binary files differ
diff --git a/doc/integration/img/jenkins_project.png b/doc/integration/img/jenkins_project.png
deleted file mode 100644
index 126b05c8879..00000000000
--- a/doc/integration/img/jenkins_project.png
+++ /dev/null
Binary files differ
diff --git a/doc/integration/img/omniauth_providers_v_14_6.png b/doc/integration/img/omniauth_providers_v_14_6.png
deleted file mode 100644
index b434e9a210b..00000000000
--- a/doc/integration/img/omniauth_providers_v_14_6.png
+++ /dev/null
Binary files differ
diff --git a/doc/integration/index.md b/doc/integration/index.md
index 85ebac5b40c..f5b088b47f7 100644
--- a/doc/integration/index.md
+++ b/doc/integration/index.md
@@ -20,7 +20,6 @@ GitLab can be configured to authenticate access requests with the following auth
- Enable the [Auth0 OmniAuth](auth0.md) provider.
- Enable sign in with [Bitbucket](bitbucket.md) accounts.
-- Configure GitLab to sign in using [CAS](cas.md).
- Integrate with [Kerberos](kerberos.md).
- Enable sign in via [LDAP](../administration/auth/ldap/index.md).
- Enable [OAuth2 provider](oauth_provider.md) application creation.
diff --git a/doc/integration/jenkins_deprecated.md b/doc/integration/jenkins_deprecated.md
index 57219b18047..5010545b73a 100644
--- a/doc/integration/jenkins_deprecated.md
+++ b/doc/integration/jenkins_deprecated.md
@@ -2,62 +2,12 @@
stage: Ecosystem
group: Integrations
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
+remove_date: '2022-10-29'
+redirect_to: 'jenkins.md'
---
-# Jenkins CI (deprecated) service **(FREE)**
+# Jenkins CI service (removed) **(FREE)**
-NOTE:
-In GitLab 8.3, Jenkins integration using the
-[GitLab Hook Plugin](https://wiki.jenkins.io/display/JENKINS/GitLab+Hook+Plugin)
-was deprecated in favor of the
-[GitLab Plugin](https://wiki.jenkins.io/display/JENKINS/GitLab+Plugin).
-Please use documentation for the new [Jenkins CI service](jenkins.md).
-
-NOTE:
-This service was [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/1600) in GitLab 13.0
-
-Integration includes:
-
-- Trigger Jenkins build after push to repository
-- Show build status on Merge request page
-
-Requirements:
-
-- [Jenkins GitLab Hook plugin](https://wiki.jenkins.io/display/JENKINS/GitLab+Hook+Plugin)
-- Git clone access for Jenkins from GitLab repository (via SSH key)
-
-## Jenkins
-
-1. Install [GitLab Hook plugin](https://wiki.jenkins.io/display/JENKINS/GitLab+Hook+Plugin)
-1. Set up Jenkins project
-
-![screen](img/jenkins_project.png)
-
-## GitLab
-
-In GitLab, perform the following steps.
-
-### Read access to repository
-
-Jenkins needs read access to the GitLab repository. We already specified a
-private key to use in Jenkins, now we must add a public one to the GitLab
-project. For that case we need a Deploy key. Read the documentation on
-[how to set up a Deploy key](../user/project/deploy_keys/index.md).
-
-### Jenkins service
-
-Now navigate to GitLab services page and activate Jenkins
-
-![screen](img/jenkins_gitlab_service.png)
-
-Done! When you push to GitLab, it creates a build for Jenkins. You can view the
-merge request build status with a link to the Jenkins build.
-
-### Multi-project Configuration
-
-The GitLab Hook plugin in Jenkins supports the automatic creation of a project
-for each feature branch. After configuration GitLab triggers feature branch
-builds and a corresponding project is created in Jenkins.
-
-Configure the GitLab Hook plugin in Jenkins. Go to 'Manage Jenkins' and then
-'Configure System'. Find the 'GitLab Web Hook' section and configure as shown below.
+This feature was [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/1600)
+in GitLab 13.0.
+Use the [Jenkins integration](jenkins.md) instead.
diff --git a/doc/integration/jira/dvcs.md b/doc/integration/jira/dvcs.md
index d35c21f6187..43a5349e0e5 100644
--- a/doc/integration/jira/dvcs.md
+++ b/doc/integration/jira/dvcs.md
@@ -261,7 +261,7 @@ resynchronize the information:
- To complete a *full sync*, press `Shift` and select the sync icon.
For more information, read
-[Atlassian's documentation](https://support.atlassian.com/jira-cloud-administration/docs/synchronize-jira-cloud-to-bitbucket/).
+[Atlassian's documentation](https://support.atlassian.com/jira-cloud-administration/docs/integrate-with-development-tools/).
### `Sync Failed` error when refreshing repository data
diff --git a/doc/integration/mattermost/index.md b/doc/integration/mattermost/index.md
index 1a60ca3a5fe..3293732b59b 100644
--- a/doc/integration/mattermost/index.md
+++ b/doc/integration/mattermost/index.md
@@ -229,53 +229,60 @@ sudo gitlab-ctl start mattermost
### Mattermost Command Line Tools (CLI)
-NOTE:
-This CLI will be replaced in a future release with the new [`mmctl` Command Line Tool](https://docs.mattermost.com/manage/mmctl-command-line-tool.html).
+[`mmctl`](https://docs.mattermost.com/manage/mmctl-command-line-tool.html) is a CLI tool for the Mattermost server. It is installed locally and uses the Mattermost API, but may also be used against a remote server. You must either configure Mattermost for local connections or authenticate as an administrator with local login credentials (not through GitLab SSO). The executable is located at `/opt/gitlab/embedded/bin/mmctl`.
-To use the [Mattermost Command Line Tools (CLI)](https://docs.mattermost.com/administration/command-line-tools.html), ensure that you are in the `/opt/gitlab/embedded/service/mattermost` directory when you run the CLI commands and that you specify the location of the configuration file. The executable is `/opt/gitlab/embedded/bin/mattermost`.
+#### Use `mmctl` through a local connection
-```shell
-cd /opt/gitlab/embedded/service/mattermost
+For local connections, the `mmctl` binary and Mattermost must be run from the same server. To enable the local socket:
-sudo /opt/gitlab/embedded/bin/chpst -e /opt/gitlab/etc/mattermost/env -P -U mattermost:mattermost -u mattermost:mattermost /opt/gitlab/embedded/bin/mattermost --config=/var/opt/gitlab/mattermost/config.json version
-```
+1. Edit `/var/opt/gitlab/mattermost/config.json`, and add the following lines:
+
+ ```json
+ {
+ "ServiceSettings": {
+ ...
+ "EnableLocalMode": true,
+ "LocalModeSocketLocation": "/var/tmp/mattermost_local.socket",
+ ...
+ }
+ }
+ ```
+
+1. Restart Mattermost:
+
+ ```shell
+ sudo gitlab-ctl restart mattermost
+ ```
-Until [#4745](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/4745) has been implemented, the command requires quite of bit typing and is hard to remember, so let's make a bash or Zsh alias to make it a bit easier to remember. Add the following to your `~/.bashrc` or `~/.zshrc` file:
+You can then use `/opt/gitlab/embedded/bin/mmctl --local` to run `mmctl` commands
+on your Mattermost instance.
+
+For example, to show the list of users:
```shell
-alias mattermost-cli="cd /opt/gitlab/embedded/service/mattermost && sudo /opt/gitlab/embedded/bin/chpst -e /opt/gitlab/etc/mattermost/env -P -U mattermost:mattermost -u mattermost:mattermost /opt/gitlab/embedded/bin/mattermost --config=/var/opt/gitlab/mattermost/config.json $1"
+$ /opt/gitlab/embedded/bin/mmctl --local user list
+
+13dzo5bmg7fu8rdox347hbfxde: appsbot (appsbot@localhost)
+tbnkwjdug3dejcoddboo4yuomr: boards (boards@localhost)
+wd3g5zpepjgbfjgpdjaas7yj6a: feedbackbot (feedbackbot@localhost)
+8d3zzgpurp85zgf1q88pef73eo: playbooks (playbooks@localhost)
+There are 4 users on local instance
```
-Then source `~/.zshrc` or `~/.bashrc` with `source ~/.zshrc` or `source ~/.bashrc`.
+#### Use `mmctl` through a remote connection
-If successful, you can now run any Mattermost CLI command with your new shell alias `mattermost-cli`:
+For remote connections or local connections where the socket cannot be used,
+create a non-SSO user and give that user admin privileges. Those credentials
+can then be used to authenticate `mmctl`:
```shell
-$ mattermost-cli version
-
-[sudo] password for username:
-{"level":"info","ts":1569614421.9058893,"caller":"utils/i18n.go:83","msg":"Loaded system translations for 'en' from '/opt/gitlab/embedded/service/mattermost/i18n/en.json'"}
-{"level":"info","ts":1569614421.9062793,"caller":"app/server_app_adapters.go:58","msg":"Server is initializing..."}
-{"level":"info","ts":1569614421.90976,"caller":"sqlstore/supplier.go:223","msg":"Pinging SQL master database"}
-{"level":"info","ts":1569614422.0515099,"caller":"mlog/log.go:165","msg":"Starting up plugins"}
-{"level":"info","ts":1569614422.0515954,"caller":"app/plugin.go:193","msg":"Syncing plugins from the file store"}
-{"level":"info","ts":1569614422.086005,"caller":"app/plugin.go:228","msg":"Found no files in plugins file store"}
-{"level":"info","ts":1569614423.9337213,"caller":"sqlstore/post_store.go:1301","msg":"Post.Message supports at most 16383 characters (65535 bytes)"}
-{"level":"error","ts":1569614425.6317747,"caller":"go-plugin/stream.go:15","msg":" call to OnConfigurationChange failed, error: Must have a GitLab oauth client id","plugin_id":"com.github.manland.mattermost-plugin-gitlab","source":"plugin_stderr"}
-{"level":"info","ts":1569614425.6875598,"caller":"mlog/sugar.go:19","msg":"Ensuring Surveybot exists","plugin_id":"com.mattermost.nps"}
-{"level":"info","ts":1569614425.6953356,"caller":"app/server.go:216","msg":"Current version is 5.14.0 (5.14.2/Fri Aug 30 20:20:48 UTC 2019/817ee89711bf26d33f840ce7f59fba14da1ed168/none)"}
-{"level":"info","ts":1569614425.6953766,"caller":"app/server.go:217","msg":"Enterprise Enabled: false"}
-{"level":"info","ts":1569614425.6954057,"caller":"app/server.go:219","msg":"Current working directory is /opt/gitlab/embedded/service/mattermost/i18n"}
-{"level":"info","ts":1569614425.6954265,"caller":"app/server.go:220","msg":"Loaded config","source":"file:///var/opt/gitlab/mattermost/config.json"}
-Version: 5.14.0
-Build Number: 5.14.2
-Build Date: Fri Aug 30 20:20:48 UTC 2019
-Build Hash: 817ee89711bf26d33f840ce7f59fba14da1ed168
-Build Enterprise Ready: false
-DB Version: 5.14.0
-```
+$ /opt/gitlab/embedded/bin/mmctl auth login http://mattermost.example.com
-For more details see [Mattermost Command Line Tools (CLI)](https://docs.mattermost.com/administration/command-line-tools.html) and the [Troubleshooting Mattermost CLI](#troubleshooting-the-mattermost-cli) below.
+Connection name: test
+Username: local-user
+Password:
+ credentials for "test": "local-user@http://mattermost.example.com" stored
+```
## Configuring GitLab and Mattermost integrations
@@ -440,8 +447,7 @@ mattermost['env'] = {
}
```
-Refer to the [Mattermost Configuration Settings
-documentation](https://docs.mattermost.com/administration/config-settings.html)
+Refer to the [Mattermost Configuration Settings documentation](https://docs.mattermost.com/administration/config-settings.html)
for details about categories and configuration values.
There are a few exceptions to this rule:
@@ -512,14 +518,6 @@ sequenceDiagram
Mattermost->>User: Mattermost/GitLab user ready
```
-## Troubleshooting the Mattermost CLI
-
-### Failed to ping DB retrying in 10 seconds err=dial tcp: lookup dockerhost: no such host
-
-As of version 11.0, majority of the Mattermost settings are now configured via environmental variables. The error is mainly due to the database connection string being commented out in `gitlab.rb` and the database connection settings being set in environmental variables. Additionally, the connection string in the `gitlab.rb` is for MySQL which is no longer supported as of 12.1.
-
-You can fix this by setting up a `mattermost-cli` [shell alias](#mattermost-command-line-tools-cli).
-
## Community support resources
For help and support around your GitLab Mattermost deployment please see:
diff --git a/doc/integration/omniauth.md b/doc/integration/omniauth.md
index aac2820a69e..e297c13a2da 100644
--- a/doc/integration/omniauth.md
+++ b/doc/integration/omniauth.md
@@ -7,13 +7,9 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# OmniAuth **(FREE SELF)**
Users can sign in to GitLab by using their credentials from Twitter, GitHub, and other popular services.
-[OmniAuth](https://rubygems.org/gems/omniauth/) is the Rack
-framework that GitLab uses to provide this authentication.
+[OmniAuth](https://rubygems.org/gems/omniauth/) is the Rack framework that GitLab uses to provide this authentication.
-![OmniAuth providers on sign-in page](img/omniauth_providers_v_14_6.png)
-
-If you configure OmniAuth, users can continue to sign in using other
-mechanisms, including standard GitLab authentication or LDAP (if configured).
+When configured, additional sign-in options are displayed on the sign-in page.
## Supported providers
@@ -22,7 +18,6 @@ GitLab supports the following OmniAuth providers.
| Provider documentation | OmniAuth provider name |
|---------------------------------------------------------------------|----------------------------|
| [AliCloud](alicloud.md) | `alicloud` |
-| [Atlassian Crowd](../administration/auth/crowd.md) | `crowd` |
| [Atlassian](../administration/auth/atlassian.md) | `atlassian_oauth2` |
| [Auth0](auth0.md) | `auth0` |
| [Authentiq](../administration/auth/authentiq.md) | `authentiq` |
@@ -30,7 +25,6 @@ GitLab supports the following OmniAuth providers.
| [Azure v2](azure.md) | `azure_activedirectory_v2` |
| [Azure v1](azure.md) | `azure_oauth2` |
| [Bitbucket Cloud](bitbucket.md) | `bitbucket` |
-| [CAS](cas.md) | `cas3` |
| [DingTalk](ding_talk.md) | `dingtalk` |
| [Facebook](facebook.md) | `facebook` |
| [Generic OAuth 2.0](oauth2_generic.md) | `oauth2_generic` |
@@ -53,7 +47,7 @@ Setting | Description | Default value
---------------------------|-------------|--------------
`allow_single_sign_on` | Enables you to list the providers that automatically create a GitLab account. The provider names are available in the **OmniAuth provider name** column in the [supported providers table](#supported-providers). | The default is `false`. If `false`, users must be created manually, or they can't sign in using OmniAuth.
`auto_link_ldap_user` | If enabled, creates an LDAP identity in GitLab for users that are created through an OmniAuth provider. You can enable this setting if you have the [LDAP (ActiveDirectory)](../administration/auth/ldap/index.md) integration enabled. Requires the `uid` of the user to be the same in both LDAP and the OmniAuth provider. | The default is `false`.
-`block_auto_created_users` | If enabled, blocks users that are automatically created from signing in until they are approved by an administrator. | The default is `true`. If you set the value to `false`, make sure you only define providers for `allow_single_sign_on` that you can control, like SAML, Crowd, or Google. Otherwise, any user on the internet can sign in to GitLab without an administrator's approval.
+`block_auto_created_users` | If enabled, blocks users that are automatically created from signing in until they are approved by an administrator. | The default is `true`. If you set the value to `false`, make sure you only define providers for `allow_single_sign_on` that you can control, like SAML or Google. Otherwise, any user on the internet can sign in to GitLab without an administrator's approval.
To change these settings:
@@ -111,6 +105,65 @@ To change these settings:
After configuring these settings, you can configure
your chosen [provider](#supported-providers).
+### Per-provider configuration
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/89379) in GitLab 15.3.
+
+If `allow_single_sign_on` is set, GitLab uses one of the following fields returned in the OmniAuth `auth_hash` to establish a username in GitLab for the user signing in,
+choosing the first that exists:
+
+- `username`.
+- `nickname`.
+- `email`.
+
+You can create GitLab configuration on a per-provider basis, which is supplied to the [provider](#supported-providers) using `args`. If you set the `gitlab_username_claim`
+variable in `args` for a provider, you can select another claim to use for the GitLab username. The chosen claim must be unique to avoid collisions.
+
+- **For Omnibus installations**
+
+ ```ruby
+ gitlab_rails['omniauth_providers'] = [
+
+ # The generic pattern for configuring a provider with name PROVIDER_NAME
+
+    {
+ name: "PROVIDER_NAME"
+ ...
+ args: { gitlab_username_claim: 'sub' } # For users signing in with the provider you configure, the GitLab username will be set to the "sub" received from the provider
+ },
+
+ # Here are examples using GitHub and Kerberos
+
+    {
+ name: "github"
+ ...
+ args: { gitlab_username_claim: 'name' } # For users signing in with GitHub, the GitLab username will be set to the "name" received from GitHub
+ },
+ {
+ name: "kerberos"
+ ...
+ args: { gitlab_username_claim: 'uid' } # For users signing in with Kerberos, the GitLab username will be set to the "uid" received from Kerberos
+ },
+ ]
+ ```
+
+- **For installations from source**
+
+ ```yaml
+ - { name: 'PROVIDER_NAME',
+ ...
+ args: { gitlab_username_claim: 'sub' }
+ }
+ - { name: 'github',
+ ...
+ args: { gitlab_username_claim: 'name' }
+ }
+ - { name: 'kerberos',
+ ...
+ args: { gitlab_username_claim: 'uid' }
+ }
+ ```
+
### Passwords for users created via OmniAuth
The [Generated passwords for users created through integrated authentication](../security/passwords_for_integrated_authentication_methods.md)
@@ -387,5 +440,4 @@ then override the icon in one of two ways:
## Limitations
Most supported OmniAuth providers don't support Git over HTTP password authentication.
-The only exception is [Atlassian Crowd](../administration/auth/crowd.md) (since GitLab [13.7](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/46935)).
As a workaround, you can authenticate using a [personal access token](../user/profile/personal_access_tokens.md).
diff --git a/doc/integration/saml.md b/doc/integration/saml.md
index 9f707ba9bc6..0c517d07f41 100644
--- a/doc/integration/saml.md
+++ b/doc/integration/saml.md
@@ -709,7 +709,6 @@ args: {
security: {
authn_requests_signed: true, # enable signature on AuthNRequest
want_assertions_signed: true, # enable the requirement of signed assertion
- embed_sign: true, # embedded signature or HTTP GET parameter signature
metadata_signed: false, # enable signature on Metadata
signature_method: 'http://www.w3.org/2001/04/xmldsig-more#rsa-sha256',
digest_method: 'http://www.w3.org/2001/04/xmlenc#sha256',
@@ -756,7 +755,7 @@ Group SAML on a self-managed instance is limited when compared to the recommende
[instance-wide SAML](../user/group/saml_sso/index.md). The recommended solution allows you to take advantage of:
- [LDAP compatibility](../administration/auth/ldap/index.md).
-- [LDAP Group Sync](../user/group/index.md#manage-group-memberships-via-ldap).
+- [LDAP Group Sync](../user/group/access_and_permissions.md#manage-group-memberships-via-ldap).
- [Required groups](#required-groups).
- [Administrator groups](#administrator-groups).
- [Auditor groups](#auditor-groups).
@@ -801,8 +800,6 @@ If you have any questions on configuring the SAML app, please contact your provi
### Okta setup notes
-The following guidance is based on this Okta article, on adding a [SAML Application with an Okta Developer account](https://support.okta.com/help/s/article/Why-can-t-I-add-a-SAML-Application-with-an-Okta-Developer-account?language=en_US):
-
1. In the Okta administrator section, make sure to select Classic UI view in the top left corner. From there, choose to **Add an App**.
1. When the app screen comes up you see another button to **Create an App** and
choose SAML 2.0 on the next screen.
@@ -864,7 +861,7 @@ connect to the Google Workspace SAML app.
### SAML Response
-You can find the base64-encoded SAML Response in the [`production_json.log`](../administration/logs.md#production_jsonlog). This response is sent from the IdP, and contains user information that is consumed by GitLab. Many errors in the SAML integration can be solved by decoding this response and comparing it to the SAML settings in the GitLab configuration file.
+You can find the base64-encoded SAML Response in the [`production_json.log`](../administration/logs/index.md#production_jsonlog). This response is sent from the IdP, and contains user information that is consumed by GitLab. Many errors in the SAML integration can be solved by decoding this response and comparing it to the SAML settings in the GitLab configuration file.
### GitLab+SAML Testing Environments
@@ -907,7 +904,7 @@ the SAML request, but in GitLab 11.7 and earlier this error never reaches GitLab
the CSRF check.
To bypass this you can add `skip_before_action :verify_authenticity_token` to the
-`omniauth_callbacks_controller.rb` file immediately after the `class` line and
+`omniauth_callbacks_controller.rb` file immediately before the `after_action :verify_known_sign_in` line and
comment out the `protect_from_forgery` line using a `#`. Restart Puma for this
change to take effect. This allows the error to hit GitLab, where it can then
be seen in the usual logs, or as a flash message on the login screen.
@@ -941,8 +938,8 @@ Make sure this information is provided.
Another issue that can result in this error is when the correct information is being sent by
the IdP, but the attributes don't match the names in the OmniAuth `info` hash. In this case,
-you must set `attribute_statements` in the SAML configuration to [map the attribute names in
-your SAML Response to the corresponding OmniAuth `info` hash names](#attribute_statements).
+you must set `attribute_statements` in the SAML configuration to
+[map the attribute names in your SAML Response to the corresponding OmniAuth `info` hash names](#attribute_statements).
### Key validation error, Digest mismatch or Fingerprint mismatch
diff --git a/doc/integration/security_partners/index.md b/doc/integration/security_partners/index.md
index 50a7b3b717b..507157f9326 100644
--- a/doc/integration/security_partners/index.md
+++ b/doc/integration/security_partners/index.md
@@ -12,18 +12,18 @@ each security partner:
<!-- vale gitlab.Spelling = NO -->
-- [Anchore](https://docs.anchore.com/current/docs/using/integration/ci_cd/gitlab/)
+- [Anchore](https://docs.anchore.com/current/docs/configuration/integration/ci_cd/gitlab/)
- [Bridgecrew](https://docs.bridgecrew.io/docs/integrate-with-gitlab-self-managed)
- [Checkmarx](https://checkmarx.atlassian.net/wiki/spaces/SD/pages/1929937052/GitLab+Integration)
- [Deepfactor](https://docs.deepfactor.io/hc/en-us/articles/1500008981941)
- [GrammaTech](https://www.grammatech.com/codesonar-gitlab-integration)
- [Indeni](https://docs.cloudrail.app/#/integrations/gitlab)
- [JScrambler](https://docs.jscrambler.com/code-integrity/documentation/gitlab-ci-integration)
+- [Mend](https://www.mend.io/gitlab/)
- [Semgrep](https://semgrep.dev/for/gitlab)
- [StackHawk](https://docs.stackhawk.com/continuous-integration/gitlab.html)
- [Tenable](https://docs.tenable.com/tenableio/Content/ContainerSecurity/GetStarted.htm)
- [Venafi](https://marketplace.venafi.com/details/gitlab-ci-cd/)
- [Veracode](https://community.veracode.com/s/knowledgeitem/gitlab-ci-MCEKSYPRWL35BRTGOVI55SK5RI4A)
-- [WhiteSource](https://www.whitesourcesoftware.com/gitlab/)
<!-- vale gitlab.Spelling = YES -->
diff --git a/doc/integration/twitter.md b/doc/integration/twitter.md
index f3b66f8c12d..2218529a729 100644
--- a/doc/integration/twitter.md
+++ b/doc/integration/twitter.md
@@ -12,7 +12,7 @@ Twitter OAuth 2.0 support is [not yet supported](https://gitlab.com/gitlab-org/g
To enable the Twitter OmniAuth provider you must register your application with
Twitter. Twitter generates a client ID and secret key for you to use.
-1. Sign in to [Twitter Application Management](https://apps.twitter.com).
+1. Sign in to [Twitter Application Management](https://developer.twitter.com/apps).
1. Select "Create new app".