
gitlab.com/gitlab-org/gitlab-foss.git
author    GitLab Bot <gitlab-bot@gitlab.com> 2020-02-04 21:08:50 +0300
committer GitLab Bot <gitlab-bot@gitlab.com> 2020-02-04 21:08:50 +0300
commit    ca05512007cea51e05d3431b2c8bd7228c754370 (patch)
tree      5202d429acd68c071445aff9e352379173ec9c0b /doc
parent    6b833f1e0340e00fdee074da9c42c0d4e07a46d2 (diff)

Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc')
-rw-r--r--  doc/development/code_review.md                                            |   6
-rw-r--r--  doc/development/migration_style_guide.md                                  |   4
-rw-r--r--  doc/development/reactive_caching.md                                       | 276
-rw-r--r--  doc/development/utilities.md                                              | 275
-rw-r--r--  doc/user/analytics/cycle_analytics.md                                     |  17
-rw-r--r--  doc/user/project/merge_requests/img/approvals_premium_mr_widget.png       | bin 22175 -> 0 bytes
-rw-r--r--  doc/user/project/merge_requests/img/approvals_premium_mr_widget_v12_7.png | bin 0 -> 198351 bytes
-rw-r--r--  doc/user/project/merge_requests/img/mr_approvals_by_code_owners_v12_4.png | bin 26902 -> 0 bytes
-rw-r--r--  doc/user/project/merge_requests/img/mr_approvals_by_code_owners_v12_7.png | bin 0 -> 88749 bytes
-rw-r--r--  doc/user/project/merge_requests/merge_request_approvals.md                |   6
10 files changed, 298 insertions, 286 deletions
diff --git a/doc/development/code_review.md b/doc/development/code_review.md
index 9b82cf5ff99..3d41ff64380 100644
--- a/doc/development/code_review.md
+++ b/doc/development/code_review.md
@@ -347,6 +347,12 @@ reviewee.
of the contributed code. It's usually a good idea to ask another maintainer or
reviewer before doing it, but have the courage to do it when you believe it is
important.
+- In the interest of [Iteration](https://about.gitlab.com/handbook/values/#iteration),
+ if, as a reviewer, your suggestions are non-blocking changes or personal preference
+ (not a documented or agreed requirement), consider approving the merge request
+ before passing it back to the author. This allows them to implement your suggestions
+ if they agree, or to pass the merge request on to the
+ maintainer for review straight away. This can help reduce our overall time-to-merge.
- There is a difference in doing things right and doing things right now.
Ideally, we should do the former, but in the real world we need the latter as
well. A good example is a security fix which should be released as soon as
diff --git a/doc/development/migration_style_guide.md b/doc/development/migration_style_guide.md
index cccea4ee9f4..209fa5d610e 100644
--- a/doc/development/migration_style_guide.md
+++ b/doc/development/migration_style_guide.md
@@ -379,10 +379,6 @@ Rails migration example:
```ruby
add_column_with_default(:projects, :foo, :integer, default: 10, limit: 8)
-
-# or
-
-add_column(:projects, :foo, :integer, default: 10, limit: 8)
```
## Timestamp column type
diff --git a/doc/development/reactive_caching.md b/doc/development/reactive_caching.md
new file mode 100644
index 00000000000..de93a5aa1d0
--- /dev/null
+++ b/doc/development/reactive_caching.md
@@ -0,0 +1,276 @@
+# `ReactiveCaching`
+
+> This doc refers to <https://gitlab.com/gitlab-org/gitlab/blob/master/app/models/concerns/reactive_caching.rb>.
+
+The `ReactiveCaching` concern is used to fetch data in the background and store it
+in the Rails cache, keeping it up-to-date for as long as it is being requested. If the
+data hasn't been requested for `reactive_cache_lifetime`, it will stop being refreshed,
+and then be removed.
+
+## Examples
+
+```ruby
+class Foo < ApplicationRecord
+ include ReactiveCaching
+
+ after_save :clear_reactive_cache!
+
+ def calculate_reactive_cache(param1, param2)
+ # Expensive operation here. The return value of this method is cached
+ end
+
+ def result
+ # Any arguments can be passed to `with_reactive_cache`. `calculate_reactive_cache`
+ # will be called with the same arguments.
+ with_reactive_cache(param1, param2) do |data|
+ # ...
+ end
+ end
+end
+```
+
+In this example, the first time `#result` is called, it will return `nil`. However,
+it will enqueue a background worker to call `#calculate_reactive_cache` and set an
+initial cache lifetime of 10 minutes.
+
+## How it works
+
+The first time `#with_reactive_cache` is called, a background job is enqueued and
+`with_reactive_cache` returns `nil`. The background job calls `#calculate_reactive_cache`
+and stores its return value. It also re-enqueues the background job to run again after
+`reactive_cache_refresh_interval`. Therefore, it will keep the stored value up to date.
+Calculations never run concurrently.
+
+Calling `#with_reactive_cache` while a value is cached will call the block given to
+`#with_reactive_cache`, yielding the cached value. It will also extend the lifetime
+of the cache by the `reactive_cache_lifetime` value.
+
+Once the lifetime has expired, no more background jobs will be enqueued and calling
+`#with_reactive_cache` will again return `nil` - starting the process all over again.
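
The lifecycle described above can be sketched in plain Ruby. This is a simplified, synchronous simulation for illustration only: the real concern enqueues a Sidekiq worker and stores values in the Rails cache, and the class and helper names below (other than `with_reactive_cache` and `calculate_reactive_cache`) are invented for the sketch.

```ruby
# Simplified, synchronous simulation of the ReactiveCaching lifecycle.
# A Hash stands in for the Rails cache and an explicit
# `run_background_jobs` call stands in for the Sidekiq worker;
# lifetime and refresh-interval handling are omitted.
class FakeReactiveCache
  def initialize
    @cache = {}
    @pending = []
  end

  # Cache hit: yield the stored value. Cache miss: "enqueue" a job and
  # return nil immediately, mirroring the real #with_reactive_cache.
  def with_reactive_cache(*args)
    key = args.join(':')
    if @cache.key?(key)
      yield @cache[key]
    else
      @pending << args
      nil
    end
  end

  # Stands in for the background worker processing its queue.
  def run_background_jobs
    @pending.each { |args| @cache[args.join(':')] = calculate_reactive_cache(*args) }
    @pending.clear
  end

  def calculate_reactive_cache(a, b)
    "#{a}-#{b}-expensive" # placeholder for the expensive operation
  end
end

cache = FakeReactiveCache.new
cache.with_reactive_cache('x', 'y') { |d| d } # => nil, job "enqueued"
cache.run_background_jobs
cache.with_reactive_cache('x', 'y') { |d| d } # => "x-y-expensive"
```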
+
+## When to use
+
+- If we need to make a request to an external API (for example, requests to the k8s API).
+It is not advisable to keep the application server worker blocked for the duration of
+the external request.
+- If a model needs to perform a lot of database calls or other time-consuming
+calculations.
+
+## How to use
+
+### In models and services
+
+The `ReactiveCaching` concern can be used in models as well as `project_services`
+(`app/models/project_services`).
+
+1. Include the concern in your model or service.
+
+ When including in a model:
+
+ ```ruby
+ include ReactiveCaching
+ ```
+
+ or when including in a `project_service`:
+
+ ```ruby
+ include ReactiveService
+ ```
+
+1. Implement the `calculate_reactive_cache` method in your model/service.
+1. Call `with_reactive_cache` in your model/service where the cached value is needed.
+
+### In controllers
+
+Controller endpoints that call a model or service method that uses `ReactiveCaching` should
+not wait until the background worker completes.
+
+- An API that calls a model or service method that uses `ReactiveCaching` should return
+`202 Accepted` when the cache is being calculated (when `#with_reactive_cache` returns `nil`).
+- It should also
+[set the polling interval header](fe_guide/performance.md#realtime-components) with
+`Gitlab::PollingInterval.set_header`.
+- The consumer of the API is expected to poll the API.
+- You can also consider implementing [ETag caching](polling.md) to reduce the server
+load caused by polling.
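
The controller-side decision described above can be sketched as follows. This is a hedged illustration, not GitLab's actual controller code: `response_for` is an invented helper, and real endpoints would also set the polling interval header via `Gitlab::PollingInterval.set_header`.

```ruby
# Hypothetical sketch: return 202 Accepted while the cache is still
# being calculated (the model method returned nil), and the data with
# 200 OK once it is present.
def response_for(cached)
  if cached.nil?
    { status: 202 }               # cache still calculating; client should poll
  else
    { status: 200, body: cached } # cache ready; return the data
  end
end
```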
+
+### Methods to implement in a model or service
+
+These are methods that should be implemented in the model/service that includes `ReactiveCaching`.
+
+#### `#calculate_reactive_cache` (required)
+
+- This method must be implemented. Its return value will be cached.
+- It will be called by `ReactiveCaching` when it needs to populate the cache.
+- Any arguments passed to `with_reactive_cache` will also be passed to `calculate_reactive_cache`.
+
+#### `#reactive_cache_updated` (optional)
+
+- This method can be implemented if needed.
+- It is called by the `ReactiveCaching` concern whenever the cache is updated.
+If the cache is being refreshed and the new cache value is the same as the old cache
+value, this method will not be called. It is only called if a new value is stored in
+the cache.
+- It can be used to perform an action whenever the cache is updated.
+
+### Methods called by a model or service
+
+These are methods provided by `ReactiveCaching` and should be called in
+the model/service.
+
+#### `#with_reactive_cache` (required)
+
+- `with_reactive_cache` must be called where the result of `calculate_reactive_cache`
+is required.
+- A block can be given to `with_reactive_cache`. `with_reactive_cache` can also take
+any number of arguments. Any arguments passed to `with_reactive_cache` will be
+passed to `calculate_reactive_cache`. The arguments passed to `with_reactive_cache`
+will be appended to the cache key name.
+- If `with_reactive_cache` is called when the result has already been cached, the
+block will be called, yielding the cached value and the return value of the block
+will be returned by `with_reactive_cache`. It will also reset the timeout of the
+cache to the `reactive_cache_lifetime` value.
+- If the result has not yet been cached, `with_reactive_cache` will return `nil`.
+It will also enqueue a background job, which will call `calculate_reactive_cache`
+and cache the result.
+- Once the background job has completed and the result is cached, the next call
+to `with_reactive_cache` will pick up the cached value.
+- In the example below, `data` is the cached value which is yielded to the block
+given to `with_reactive_cache`.
+
+ ```ruby
+ class Foo < ApplicationRecord
+ include ReactiveCaching
+
+ def calculate_reactive_cache(param1, param2)
+ # Expensive operation here. The return value of this method is cached
+ end
+
+ def result
+ with_reactive_cache(param1, param2) do |data|
+ # ...
+ end
+ end
+ end
+ ```
+
+#### `#clear_reactive_cache!` (optional)
+
+- This method can be called when the cache needs to be expired/cleared. For example,
+it can be called in an `after_save` callback in a model so that the cache is
+cleared after the model is modified.
+- This method should be called with the same parameters that are passed to
+`with_reactive_cache` because the parameters are part of the cache key.
+
+#### `#without_reactive_cache` (optional)
+
+- This is a convenience method that can be used for debugging purposes.
+- This method calls `calculate_reactive_cache` in the current process instead of
+in a background worker.
+
+### Configurable options
+
+There are some `class_attribute` options which can be tweaked.
+
+#### `self.reactive_cache_key`
+
+- The value of this attribute is the prefix to the `data` and `alive` cache key names.
+The parameters passed to `with_reactive_cache` form the rest of the cache key names.
+- By default, this key uses the model's name and the ID of the record.
+
+ ```ruby
+ self.reactive_cache_key = -> (record) { [model_name.singular, record.id] }
+ ```
+
+- The `data` and `alive` cache keys in this case will be `"ExampleModel:1:arg1:arg2"`
+and `"ExampleModel:1:arg1:arg2:alive"` respectively, where `ExampleModel` is the
+name of the model, `1` is the ID of the record, `arg1` and `arg2` are parameters
+passed to `with_reactive_cache`.
+- If you're including this concern in a service instead, you will need to override
+the default by adding the following to your service:
+
+ ```ruby
+ self.reactive_cache_key = ->(service) { [service.class.model_name.singular, service.project_id] }
+ ```
+
+ If your `reactive_cache_key` is exactly like the above, you can use the existing
+ `ReactiveService` concern instead.
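
To make the key layout concrete, here is a small sketch of how the parts compose, assuming they are joined with `:` as the rendered `"ExampleModel:1:arg1:arg2"` keys above indicate. The variable names are invented for the sketch; the real composition lives in `reactive_caching.rb`.

```ruby
# Illustrative composition of the `data` and `alive` cache key names.
prefix = ['ExampleModel', 1]   # from self.reactive_cache_key
args   = ['arg1', 'arg2']      # arguments passed to with_reactive_cache
data_key  = (prefix + args).join(':')
alive_key = "#{data_key}:alive"
data_key  # => "ExampleModel:1:arg1:arg2"
alive_key # => "ExampleModel:1:arg1:arg2:alive"
```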
+
+#### `self.reactive_cache_lease_timeout`
+
+- `ReactiveCaching` uses `Gitlab::ExclusiveLease` to ensure that the cache calculation
+is never run concurrently by multiple workers.
+- This attribute is the timeout for the `Gitlab::ExclusiveLease`.
+- It defaults to 2 minutes, but can be overridden if a different timeout is required.
+
+```ruby
+self.reactive_cache_lease_timeout = 2.minutes
+```
+
+#### `self.reactive_cache_refresh_interval`
+
+- This is the interval at which the cache is refreshed.
+- It defaults to 1 minute.
+
+```ruby
+self.reactive_cache_refresh_interval = 1.minute
+```
+
+#### `self.reactive_cache_lifetime`
+
+- This is the duration after which the cache will be cleared if there are no requests.
+- The default is 10 minutes. If there are no requests for this cache value for 10 minutes,
+the cache will expire.
+- If the cache value is requested before it expires, the timeout of the cache will
+be reset to `reactive_cache_lifetime`.
+
+```ruby
+self.reactive_cache_lifetime = 10.minutes
+```
+
+#### `self.reactive_cache_worker_finder`
+
+- This is the method used by the background worker to find or generate the object on
+which `calculate_reactive_cache` can be called.
+- By default it uses the model primary key to find the object:
+
+ ```ruby
+ self.reactive_cache_worker_finder = ->(id, *_args) do
+ find_by(primary_key => id)
+ end
+ ```
+
+- The default behaviour can be overridden by defining a custom `reactive_cache_worker_finder`.
+
+ ```ruby
+ class Foo < ApplicationRecord
+ include ReactiveCaching
+
+ self.reactive_cache_worker_finder = ->(_id, *args) { from_cache(*args) }
+
+ def self.from_cache(var1, var2)
+ # This method will be called by the background worker with "bar1" and
+ # "bar2" as arguments.
+ new(var1, var2)
+ end
+
+ def initialize(var1, var2)
+ # ...
+ end
+
+ def calculate_reactive_cache(var1, var2)
+ # Expensive operation here. The return value of this method is cached
+ end
+
+ def result
+ with_reactive_cache("bar1", "bar2") do |data|
+ # ...
+ end
+ end
+ end
+ ```
+
+ - In this example, the primary key ID will be passed to `reactive_cache_worker_finder`
+ along with the parameters passed to `with_reactive_cache`.
+ - The custom `reactive_cache_worker_finder` calls `.from_cache` with the parameters
+ passed to `with_reactive_cache`.
diff --git a/doc/development/utilities.md b/doc/development/utilities.md
index 561d5efc696..dfdc5c66114 100644
--- a/doc/development/utilities.md
+++ b/doc/development/utilities.md
@@ -196,277 +196,4 @@ end
## `ReactiveCaching`
-> This doc refers to <https://gitlab.com/gitlab-org/gitlab/blob/master/app/models/concerns/reactive_caching.rb>.
-
-The `ReactiveCaching` concern is used for fetching some data in the background and store it
-in the Rails cache, keeping it up-to-date for as long as it is being requested. If the
-data hasn't been requested for `reactive_cache_lifetime`, it will stop being refreshed,
-and then be removed.
-
-### Examples
-
-```ruby
-class Foo < ApplicationRecord
- include ReactiveCaching
-
- after_save :clear_reactive_cache!
-
- def calculate_reactive_cache(param1, param2)
- # Expensive operation here. The return value of this method is cached
- end
-
- def result
- # Any arguments can be passed to `with_reactive_cache`. `calculate_reactive_cache`
- # will be called with the same arguments.
- with_reactive_cache(param1, param2) do |data|
- # ...
- end
- end
-end
-```
-
-In this example, the first time `#result` is called, it will return `nil`. However,
-it will enqueue a background worker to call `#calculate_reactive_cache` and set an
-initial cache lifetime of 10 min.
-
-### How it works
-
-The first time `#with_reactive_cache` is called, a background job is enqueued and
-`with_reactive_cache` returns `nil`. The background job calls `#calculate_reactive_cache`
-and stores its return value. It also re-enqueues the background job to run again after
-`reactive_cache_refresh_interval`. Therefore, it will keep the stored value up to date.
-Calculations never run concurrently.
-
-Calling `#with_reactive_cache` while a value is cached will call the block given to
-`#with_reactive_cache`, yielding the cached value. It will also extend the lifetime
-of the cache by the `reactive_cache_lifetime` value.
-
-Once the lifetime has expired, no more background jobs will be enqueued and calling
-`#with_reactive_cache` will again return `nil` - starting the process all over again.
-
-### When to use
-
-- If we need to make a request to an external API (for example, requests to the k8s API).
-It is not advisable to keep the application server worker blocked for the duration of
-the external request.
-- If a model needs to perform a lot of database calls or other time consuming
-calculations.
-
-### How to use
-
-#### In models and services
-
-The ReactiveCaching concern can be used in models as well as `project_services`
-(`app/models/project_services`).
-
-1. Include the concern in your model or service.
-
- When including in a model:
-
- ```ruby
- include ReactiveCaching
- ```
-
- or when including in a `project_service`:
-
- ```ruby
- include ReactiveService
- ```
-
-1. Implement the `calculate_reactive_cache` method in your model/service.
-1. Call `with_reactive_cache` in your model/service where the cached value is needed.
-
-#### In controllers
-
-Controller endpoints that call a model or service method that uses `ReactiveCaching` should
-not wait until the background worker completes.
-
-- An API that calls a model or service method that uses `ReactiveCaching` should return
-`202 accepted` when the cache is being calculated (when `#with_reactive_cache` returns `nil`).
-- It should also
-[set the polling interval header](fe_guide/performance.md#realtime-components) with
-`Gitlab::PollingInterval.set_header`.
-- The consumer of the API is expected to poll the API.
-- You can also consider implementing [ETag caching](polling.md) to reduce the server
-load caused by polling.
-
-#### Methods to implement in a model or service
-
-These are methods that should be implemented in the model/service that includes `ReactiveCaching`.
-
-##### `#calculate_reactive_cache` (required)
-
-- This method must be implemented. Its return value will be cached.
-- It will be called by `ReactiveCaching` when it needs to populate the cache.
-- Any arguments passed to `with_reactive_cache` will also be passed to `calculate_reactive_cache`.
-
-##### `#reactive_cache_updated` (optional)
-
-- This method can be implemented if needed.
-- It is called by the `ReactiveCaching` concern whenever the cache is updated.
-If the cache is being refreshed and the new cache value is the same as the old cache
-value, this method will not be called. It is only called if a new value is stored in
-the cache.
-- It can be used to perform an action whenever the cache is updated.
-
-#### Methods called by a model or service
-
-These are methods provided by `ReactiveCaching` and should be called in
-the model/service.
-
-##### `#with_reactive_cache` (required)
-
-- `with_reactive_cache` must be called where the result of `calculate_reactive_cache`
-is required.
-- A block can be given to `with_reactive_cache`. `with_reactive_cache` can also take
-any number of arguments. Any arguments passed to `with_reactive_cache` will be
-passed to `calculate_reactive_cache`. The arguments passed to `with_reactive_cache`
-will be appended to the cache key name.
-- If `with_reactive_cache` is called when the result has already been cached, the
-block will be called, yielding the cached value and the return value of the block
-will be returned by `with_reactive_cache`. It will also reset the timeout of the
-cache to the `reactive_cache_lifetime` value.
-- If the result has not been cached as yet, `with_reactive_cache` will return nil.
-It will also enqueue a background job, which will call `calculate_reactive_cache`
-and cache the result.
-- Once the background job has completed and the result is cached, the next call
-to `with_reactive_cache` will pick up the cached value.
-- In the example below, `data` is the cached value which is yielded to the block
-given to `with_reactive_cache`.
-
- ```ruby
- class Foo < ApplicationRecord
- include ReactiveCaching
-
- def calculate_reactive_cache(param1, param2)
- # Expensive operation here. The return value of this method is cached
- end
-
- def result
- with_reactive_cache(param1, param2) do |data|
- # ...
- end
- end
- end
- ```
-
-##### `#clear_reactive_cache!` (optional)
-
-- This method can be called when the cache needs to be expired/cleared. For example,
-it can be called in an `after_save` callback in a model so that the cache is
-cleared after the model is modified.
-- This method should be called with the same parameters that are passed to
-`with_reactive_cache` because the parameters are part of the cache key.
-
-##### `#without_reactive_cache` (optional)
-
-- This is a convenience method that can be used for debugging purposes.
-- This method calls `calculate_reactive_cache` in the current process instead of
-in a background worker.
-
-#### Configurable options
-
-There are some `class_attribute` options which can be tweaked.
-
-##### `self.reactive_cache_key`
-
-- The value of this attribute is the prefix to the `data` and `alive` cache key names.
-The parameters passed to `with_reactive_cache` form the rest of the cache key names.
-- By default, this key uses the model's name and the ID of the record.
-
- ```ruby
- self.reactive_cache_key = -> (record) { [model_name.singular, record.id] }
- ```
-
-- The `data` and `alive` cache keys in this case will be `"ExampleModel:1:arg1:arg2"`
-and `"ExampleModel:1:arg1:arg2:alive"` respectively, where `ExampleModel` is the
-name of the model, `1` is the ID of the record, `arg1` and `arg2` are parameters
-passed to `with_reactive_cache`.
-- If you're including this concern in a service instead, you will need to override
-the default by adding the following to your service:
-
- ```ruby
- self.reactive_cache_key = ->(service) { [service.class.model_name.singular, service.project_id] }
- ```
-
- If your reactive_cache_key is exactly like the above, you can use the existing
- `ReactiveService` concern instead.
-
-##### `self.reactive_cache_lease_timeout`
-
-- `ReactiveCaching` uses `Gitlab::ExclusiveLease` to ensure that the cache calculation
-is never run concurrently by multiple workers.
-- This attribute is the timeout for the `Gitlab::ExclusiveLease`.
-- It defaults to 2 minutes, but can be overriden if a different timeout is required.
-
-```ruby
-self.reactive_cache_lease_timeout = 2.minutes
-```
-
-##### `self.reactive_cache_refresh_interval`
-
-- This is the interval at which the cache is refreshed.
-- It defaults to 1 minute.
-
-```ruby
-self.reactive_cache_lease_timeout = 1.minute
-```
-
-##### `self.reactive_cache_lifetime`
-
-- This is the duration after which the cache will be cleared if there are no requests.
-- The default is 10 minutes. If there are no requests for this cache value for 10 minutes,
-the cache will expire.
-- If the cache value is requested before it expires, the timeout of the cache will
-be reset to `reactive_cache_lifetime`.
-
-```ruby
-self.reactive_cache_lifetime = 10.minutes
-```
-
-##### `self.reactive_cache_worker_finder`
-
-- This is the method used by the background worker to find or generate the object on
-which `calculate_reactive_cache` can be called.
-- By default it uses the model primary key to find the object:
-
- ```ruby
- self.reactive_cache_worker_finder = ->(id, *_args) do
- find_by(primary_key => id)
- end
- ```
-
-- The default behaviour can be overridden by defining a custom `reactive_cache_worker_finder`.
-
- ```ruby
- class Foo < ApplicationRecord
- include ReactiveCaching
-
- self.reactive_cache_worker_finder = ->(_id, *args) { from_cache(*args) }
-
- def self.from_cache(var1, var2)
- # This method will be called by the background worker with "bar1" and
- # "bar2" as arguments.
- new(var1, var2)
- end
-
- def initialize(var1, var2)
- # ...
- end
-
- def calculate_reactive_cache(var1, var2)
- # Expensive operation here. The return value of this method is cached
- end
-
- def result
- with_reactive_cache("bar1", "bar2") do |data|
- # ...
- end
- end
- end
- ```
-
- - In this example, the primary key ID will be passed to `reactive_cache_worker_finder`
- along with the parameters passed to `with_reactive_cache`.
- - The custom `reactive_cache_worker_finder` calls `.from_cache` with the parameters
- passed to `with_reactive_cache`.
+Read the documentation on [`ReactiveCaching`](reactive_caching.md).
diff --git a/doc/user/analytics/cycle_analytics.md b/doc/user/analytics/cycle_analytics.md
index e0bb4e03c41..8d3eaade759 100644
--- a/doc/user/analytics/cycle_analytics.md
+++ b/doc/user/analytics/cycle_analytics.md
@@ -172,16 +172,23 @@ For example, if 30 days worth of data has been selected (for example, 2019-12-16
median line will represent the previous 30 days worth of data (2019-11-16 to 2019-12-16)
as a metric to compare against.
-### Enabling chart
+### Disabling chart
-By default, this chart is disabled for self-managed instances. To enable it, ask an
-administrator with Rails console access to run the following:
+This chart is enabled by default. If you have a self-managed instance, an
+administrator can open a Rails console and disable it with the following command:
```ruby
-Feature.enable(:cycle_analytics_scatterplot_enabled)
+Feature.disable(:cycle_analytics_scatterplot_enabled)
```
-This chart is enabled by default on GitLab.com.
+### Disabling chart median line
+
+The chart's median line is enabled by default. If you have a self-managed instance, an
+administrator can open a Rails console and disable it with the following command:
+
+```ruby
+Feature.disable(:cycle_analytics_scatterplot_median_enabled)
+```
## Permissions
diff --git a/doc/user/project/merge_requests/img/approvals_premium_mr_widget.png b/doc/user/project/merge_requests/img/approvals_premium_mr_widget.png
deleted file mode 100644
index 2598cc71c33..00000000000
--- a/doc/user/project/merge_requests/img/approvals_premium_mr_widget.png
+++ /dev/null
Binary files differ
diff --git a/doc/user/project/merge_requests/img/approvals_premium_mr_widget_v12_7.png b/doc/user/project/merge_requests/img/approvals_premium_mr_widget_v12_7.png
new file mode 100644
index 00000000000..f9348b0eefc
--- /dev/null
+++ b/doc/user/project/merge_requests/img/approvals_premium_mr_widget_v12_7.png
Binary files differ
diff --git a/doc/user/project/merge_requests/img/mr_approvals_by_code_owners_v12_4.png b/doc/user/project/merge_requests/img/mr_approvals_by_code_owners_v12_4.png
deleted file mode 100644
index c704129685f..00000000000
--- a/doc/user/project/merge_requests/img/mr_approvals_by_code_owners_v12_4.png
+++ /dev/null
Binary files differ
diff --git a/doc/user/project/merge_requests/img/mr_approvals_by_code_owners_v12_7.png b/doc/user/project/merge_requests/img/mr_approvals_by_code_owners_v12_7.png
new file mode 100644
index 00000000000..c2e5714e78d
--- /dev/null
+++ b/doc/user/project/merge_requests/img/mr_approvals_by_code_owners_v12_7.png
Binary files differ
diff --git a/doc/user/project/merge_requests/merge_request_approvals.md b/doc/user/project/merge_requests/merge_request_approvals.md
index fa30f4e2eb4..d7bb2dd16f9 100644
--- a/doc/user/project/merge_requests/merge_request_approvals.md
+++ b/doc/user/project/merge_requests/merge_request_approvals.md
@@ -74,9 +74,9 @@ To enable this merge request approval rule:
1. Navigate to your project's **Settings > General** and expand
**Merge request approvals**.
-1. Locate **All members with Developer role or higher and code owners (if any)** and click **Edit** to choose the number of approvals required.
+1. Locate **Any eligible user** and choose the number of approvals required.
-![MR approvals by Code Owners](img/mr_approvals_by_code_owners_v12_4.png)
+![MR approvals by Code Owners](img/mr_approvals_by_code_owners_v12_7.png)
Once set, merge requests can only be merged once approved by the
number of approvals you've set. GitLab will accept approvals from
@@ -145,7 +145,7 @@ a rule is already defined.
When an [eligible approver](#eligible-approvers) approves a merge request, it will
reduce the number of approvals left for all rules that the approver belongs to.
-![Approvals premium merge request widget](img/approvals_premium_mr_widget.png)
+![Approvals premium merge request widget](img/approvals_premium_mr_widget_v12_7.png)
## Adding or removing an approval