gitlab.com/gitlab-org/gitlab-foss.git
author     GitLab Bot <gitlab-bot@gitlab.com>  2020-12-17 14:59:07 +0300
committer  GitLab Bot <gitlab-bot@gitlab.com>  2020-12-17 14:59:07 +0300
commit     8b573c94895dc0ac0e1d9d59cf3e8745e8b539ca (patch)
tree       544930fb309b30317ae9797a9683768705d664c4 /doc/development/reactive_caching.md
parent     4b1de649d0168371549608993deac953eb692019 (diff)

Add latest changes from gitlab-org/gitlab@13-7-stable-ee (tag: v13.7.0-rc42)

Diffstat (limited to 'doc/development/reactive_caching.md')

-rw-r--r--  doc/development/reactive_caching.md  126
1 file changed, 64 insertions, 62 deletions
diff --git a/doc/development/reactive_caching.md b/doc/development/reactive_caching.md
index 106a9b5de31..5d514ffbfc9 100644
--- a/doc/development/reactive_caching.md
+++ b/doc/development/reactive_caching.md
@@ -1,17 +1,17 @@
---
stage: none
group: unassigned
-info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments
---
# `ReactiveCaching`
> This doc refers to <https://gitlab.com/gitlab-org/gitlab/blob/master/app/models/concerns/reactive_caching.rb>.
-The `ReactiveCaching` concern is used for fetching some data in the background and store it
+The `ReactiveCaching` concern is used for fetching some data in the background and storing it
in the Rails cache, keeping it up-to-date for as long as it is being requested. If the
-data hasn't been requested for `reactive_cache_lifetime`, it will stop being refreshed,
-and then be removed.
+data hasn't been requested for `reactive_cache_lifetime`, it stops being refreshed,
+and is removed.
## Examples
@@ -35,38 +35,40 @@ class Foo < ApplicationRecord
end
```
-In this example, the first time `#result` is called, it will return `nil`. However,
-it will enqueue a background worker to call `#calculate_reactive_cache` and set an
-initial cache lifetime of 10 min.
+In this example, the first time `#result` is called, it returns `nil`. However,
+it enqueues a background worker to call `#calculate_reactive_cache` and set an
+initial cache lifetime of 10 minutes.
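
A minimal usage sketch (assuming the `Foo` class and `#result` method from the
example above, and a hypothetical record with ID `1`):

```ruby
foo = Foo.find(1)

foo.result # => nil; a background worker is enqueued to call #calculate_reactive_cache

# ... once the worker has run and populated the cache ...

foo.result # => the cached return value of #calculate_reactive_cache
```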
## How it works
The first time `#with_reactive_cache` is called, a background job is enqueued and
`with_reactive_cache` returns `nil`. The background job calls `#calculate_reactive_cache`
and stores its return value. It also re-enqueues the background job to run again after
-`reactive_cache_refresh_interval`. Therefore, it will keep the stored value up to date.
+`reactive_cache_refresh_interval`. Therefore, it keeps the stored value up to date.
Calculations never run concurrently.
-Calling `#with_reactive_cache` while a value is cached will call the block given to
-`#with_reactive_cache`, yielding the cached value. It will also extend the lifetime
+Calling `#with_reactive_cache` while a value is cached calls the block given to
+`#with_reactive_cache`, yielding the cached value. It also extends the lifetime
of the cache by the `reactive_cache_lifetime` value.
-Once the lifetime has expired, no more background jobs will be enqueued and calling
-`#with_reactive_cache` will again return `nil` - starting the process all over again.
+After the lifetime has expired, no more background jobs are enqueued and calling
+`#with_reactive_cache` again returns `nil`, starting the process all over again.
-### 1 MB hard limit
+### Set a hard limit for ReactiveCaching
-`ReactiveCaching` has a 1 megabyte default limit. [This value is configurable](#selfreactive_cache_worker_finder).
+To preserve performance, you should set a hard caching limit in the class that includes
+`ReactiveCaching`. See the example of [how to set it up](#selfreactive_cache_hard_limit).
-If the data we're trying to cache has over 1 megabyte, it will not be cached and a handled `ReactiveCaching::ExceededReactiveCacheLimit` will be notified on Sentry.
+For more information, read the internal issue
+[Redis (or ReactiveCache) soft and hard limits](https://gitlab.com/gitlab-org/gitlab/-/issues/14015).
## When to use
- If we need to make a request to an external API (for example, requests to the k8s API).
-It is not advisable to keep the application server worker blocked for the duration of
-the external request.
+ It is not advisable to keep the application server worker blocked for the duration of
+ the external request.
- If a model needs to perform a lot of database calls or other time consuming
-calculations.
+ calculations.
## How to use
@@ -99,13 +101,13 @@ Controller endpoints that call a model or service method that uses `ReactiveCach
not wait until the background worker completes.
- An API that calls a model or service method that uses `ReactiveCaching` should return
-`202 accepted` when the cache is being calculated (when `#with_reactive_cache` returns `nil`).
+ `202 accepted` when the cache is being calculated (when `#with_reactive_cache` returns `nil`).
- It should also
-[set the polling interval header](fe_guide/performance.md#realtime-components) with
-`Gitlab::PollingInterval.set_header`.
+ [set the polling interval header](fe_guide/performance.md#realtime-components) with
+ `Gitlab::PollingInterval.set_header`, as in the controller sketch after this list.
- The consumer of the API is expected to poll the API.
- You can also consider implementing [ETag caching](polling.md) to reduce the server
-load caused by polling.
+ load caused by polling.
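
For example, a controller action built on the `Foo` example above might look like
the following minimal sketch. `FoosController` is hypothetical and the 10-second
polling interval is an arbitrary choice:

```ruby
class FoosController < ApplicationController
  def show
    foo = Foo.find(params[:id])
    result = foo.result

    if result.nil?
      # The cache is still being calculated by the background worker:
      # tell the client how often to poll, and return 202 Accepted.
      Gitlab::PollingInterval.set_header(response, interval: 10_000)
      head :accepted
    else
      render json: result
    end
  end
end
```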
### Methods to implement in a model or service
@@ -113,17 +115,17 @@ These are methods that should be implemented in the model/service that includes
#### `#calculate_reactive_cache` (required)
-- This method must be implemented. Its return value will be cached.
-- It will be called by `ReactiveCaching` when it needs to populate the cache.
-- Any arguments passed to `with_reactive_cache` will also be passed to `calculate_reactive_cache`.
+- This method must be implemented. Its return value is cached.
+- It is called by `ReactiveCaching` when it needs to populate the cache.
+- Any arguments passed to `with_reactive_cache` are also passed to `calculate_reactive_cache`.
#### `#reactive_cache_updated` (optional)
- This method can be implemented if needed.
- It is called by the `ReactiveCaching` concern whenever the cache is updated.
-If the cache is being refreshed and the new cache value is the same as the old cache
-value, this method will not be called. It is only called if a new value is stored in
-the cache.
+ If the cache is being refreshed and the new cache value is the same as the old cache
+ value, this method is not called. It is only called if a new value is stored in
+ the cache.
- It can be used to perform an action whenever the cache is updated.
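
As an illustration, an implementation of this optional callback might look like
the following sketch (the logging side effect is only an example):

```ruby
class Foo < ApplicationRecord
  include ReactiveCaching

  def calculate_reactive_cache(param1, param2)
    # Expensive operation here.
  end

  # Called only when a new value is written to the cache, not when a refresh
  # produces the same value as before.
  def reactive_cache_updated(*args)
    Rails.logger.info("Reactive cache for Foo##{id} was refreshed")
  end
end
```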
### Methods called by a model or service
@@ -134,22 +136,22 @@ the model/service.
#### `#with_reactive_cache` (required)
- `with_reactive_cache` must be called where the result of `calculate_reactive_cache`
-is required.
+ is required.
- A block can be given to `with_reactive_cache`. `with_reactive_cache` can also take
-any number of arguments. Any arguments passed to `with_reactive_cache` will be
-passed to `calculate_reactive_cache`. The arguments passed to `with_reactive_cache`
-will be appended to the cache key name.
+ any number of arguments. Any arguments passed to `with_reactive_cache` are
+ passed to `calculate_reactive_cache`. The arguments passed to `with_reactive_cache`
+ are appended to the cache key name.
- If `with_reactive_cache` is called when the result has already been cached, the
-block will be called, yielding the cached value and the return value of the block
-will be returned by `with_reactive_cache`. It will also reset the timeout of the
-cache to the `reactive_cache_lifetime` value.
-- If the result has not been cached as yet, `with_reactive_cache` will return nil.
-It will also enqueue a background job, which will call `calculate_reactive_cache`
-and cache the result.
-- Once the background job has completed and the result is cached, the next call
-to `with_reactive_cache` will pick up the cached value.
+ block is called, yielding the cached value and the return value of the block
+ is returned by `with_reactive_cache`. It also resets the timeout of the
+ cache to the `reactive_cache_lifetime` value.
+- If the result has not yet been cached, `with_reactive_cache` returns `nil`.
+ It also enqueues a background job, which calls `calculate_reactive_cache`
+ and caches the result.
+- After the background job has completed and the result is cached, the next call
+ to `with_reactive_cache` picks up the cached value.
- In the example below, `data` is the cached value which is yielded to the block
-given to `with_reactive_cache`.
+ given to `with_reactive_cache`.
```ruby
class Foo < ApplicationRecord
@@ -170,16 +172,16 @@ given to `with_reactive_cache`.
#### `#clear_reactive_cache!` (optional)
- This method can be called when the cache needs to be expired/cleared. For example,
-it can be called in an `after_save` callback in a model so that the cache is
-cleared after the model is modified.
+ it can be called in an `after_save` callback in a model so that the cache is
+ cleared after the model is modified (see the sketch after this list).
- This method should be called with the same parameters that are passed to
-`with_reactive_cache` because the parameters are part of the cache key.
+ `with_reactive_cache` because the parameters are part of the cache key.
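
A minimal sketch, assuming a cache that takes no extra arguments:

```ruby
class Foo < ApplicationRecord
  include ReactiveCaching

  # Expire the cached value whenever the record changes, so the next call to
  # #with_reactive_cache triggers a recalculation.
  after_save :clear_reactive_cache!

  def calculate_reactive_cache
    # Expensive operation here.
  end
end
```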
#### `#without_reactive_cache` (optional)
- This is a convenience method that can be used for debugging purposes.
- This method calls `calculate_reactive_cache` in the current process instead of
-in a background worker.
+ in a background worker (see the console sketch below).
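
For example, in a Rails console (a sketch, where `param1` and `param2` stand for
whatever arguments your `calculate_reactive_cache` takes):

```ruby
foo = Foo.find(1)

foo.without_reactive_cache(param1, param2) do |data|
  # `data` is calculated synchronously in this process; no background job is
  # enqueued.
  pp data
end
```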
### Configurable options
@@ -188,19 +190,19 @@ There are some `class_attribute` options which can be tweaked.
#### `self.reactive_cache_key`
- The value of this attribute is the prefix to the `data` and `alive` cache key names.
-The parameters passed to `with_reactive_cache` form the rest of the cache key names.
+ The parameters passed to `with_reactive_cache` form the rest of the cache key names.
- By default, this key uses the model's name and the ID of the record.
```ruby
self.reactive_cache_key = -> (record) { [model_name.singular, record.id] }
```
-- The `data` and `alive` cache keys in this case will be `"ExampleModel:1:arg1:arg2"`
-and `"ExampleModel:1:arg1:arg2:alive"` respectively, where `ExampleModel` is the
-name of the model, `1` is the ID of the record, `arg1` and `arg2` are parameters
-passed to `with_reactive_cache`.
-- If you're including this concern in a service instead, you will need to override
-the default by adding the following to your service:
+- The `data` and `alive` cache keys in this case are `"ExampleModel:1:arg1:arg2"`
+ and `"ExampleModel:1:arg1:arg2:alive"` respectively, where `ExampleModel` is the
+ name of the model, `1` is the ID of the record, `arg1` and `arg2` are parameters
+ passed to `with_reactive_cache`.
+- If you're including this concern in a service instead, you must override
+ the default by adding the following to your service:
```ruby
self.reactive_cache_key = ->(service) { [service.class.model_name.singular, service.project_id] }
@@ -212,7 +214,7 @@ the default by adding the following to your service:
#### `self.reactive_cache_lease_timeout`
- `ReactiveCaching` uses `Gitlab::ExclusiveLease` to ensure that the cache calculation
-is never run concurrently by multiple workers.
+ is never run concurrently by multiple workers.
- This attribute is the timeout for the `Gitlab::ExclusiveLease`.
- It defaults to 2 minutes, but can be overridden if a different timeout is required.
@@ -231,11 +233,11 @@ self.reactive_cache_lease_timeout = 1.minute
#### `self.reactive_cache_lifetime`
-- This is the duration after which the cache will be cleared if there are no requests.
+- This is the duration after which the cache is cleared if there are no requests.
- The default is 10 minutes. If there are no requests for this cache value for 10 minutes,
-the cache will expire.
-- If the cache value is requested before it expires, the timeout of the cache will
-be reset to `reactive_cache_lifetime`.
+ the cache expires.
+- If the cache value is requested before it expires, the timeout of the cache is
+ reset to `reactive_cache_lifetime`.
```ruby
self.reactive_cache_lifetime = 10.minutes
@@ -244,8 +246,8 @@ self.reactive_cache_lifetime = 10.minutes
#### `self.reactive_cache_hard_limit`
- This is the maximum data size that `ReactiveCaching` allows to be cached.
-- The default is 1 megabyte. Data that goes over this value will not be cached
-and will silently raise `ReactiveCaching::ExceededReactiveCacheLimit` on Sentry.
+- The default is 1 megabyte. Data that exceeds this limit is not cached,
+  and a handled `ReactiveCaching::ExceededReactiveCacheLimit` error is reported to Sentry.
```ruby
self.reactive_cache_hard_limit = 5.megabytes
@@ -300,7 +302,7 @@ which `calculate_reactive_cache` can be called.
end
```
- - In this example, the primary key ID will be passed to `reactive_cache_worker_finder`
- along with the parameters passed to `with_reactive_cache`.
+ - In this example, the primary key ID is passed to `reactive_cache_worker_finder`
+ along with the parameters passed to `with_reactive_cache`.
- The custom `reactive_cache_worker_finder` calls `.from_cache` with the parameters
- passed to `with_reactive_cache`.
+ passed to `with_reactive_cache`, as in the service sketch below.
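
Putting this together for a service, a custom finder might be wired up as in the
following minimal sketch. `FooService`, its constructor, and the simplified cache
key are illustrative assumptions rather than the upstream example:

```ruby
class FooService
  include ReactiveCaching

  # Simplified key; see `self.reactive_cache_key` above for the documented
  # service override.
  self.reactive_cache_key = ->(service) { [service.class.name, service.project_id] }

  # The worker receives the `id` plus the arguments given to
  # `with_reactive_cache`; this finder ignores the `id` and rebuilds the
  # service from the arguments alone.
  self.reactive_cache_worker_finder = ->(_id, *args) { from_cache(*args) }

  attr_reader :project_id

  def self.from_cache(project_id)
    new(project_id)
  end

  def initialize(project_id)
    @project_id = project_id
  end

  # The background worker is enqueued with this object's `id`.
  def id
    project_id
  end

  def calculate_reactive_cache(project_id)
    # Expensive operation here.
  end

  def result
    with_reactive_cache(project_id) { |data| data }
  end
end
```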