gitlab.com/gitlab-org/gitlab-foss.git
author    GitLab Bot <gitlab-bot@gitlab.com>  2020-01-30 18:09:15 +0300
committer GitLab Bot <gitlab-bot@gitlab.com>  2020-01-30 18:09:15 +0300
commit    536aa3a1f4b96abc4ca34489bf2cbe503afcded7 (patch)
tree      88d08f7dfa29a32d6526773c4fe0fefd9f2bc7d1 /doc/development
parent    50ae4065530c4eafbeb7c5ff2c462c48c02947ca (diff)
Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc/development')
-rw-r--r--  doc/development/README.md                                            5
-rw-r--r--  doc/development/architecture.md                                      2
-rw-r--r--  doc/development/chaos_endpoints.md                                  12
-rw-r--r--  doc/development/cicd/index.md                                       59
-rw-r--r--  doc/development/database_debugging.md                                6
-rw-r--r--  doc/development/database_helpers.md                                 63
-rw-r--r--  doc/development/distributed_tracing.md                              10
-rw-r--r--  doc/development/documentation/index.md                               8
-rw-r--r--  doc/development/documentation/site_architecture/release_process.md  16
-rw-r--r--  doc/development/documentation/styleguide.md                         18
-rw-r--r--  doc/development/emails.md                                            4
-rw-r--r--  doc/development/fe_guide/frontend_faq.md                             2
-rw-r--r--  doc/development/fe_guide/graphql.md                                 15
-rw-r--r--  doc/development/feature_flags/controls.md                            4
-rw-r--r--  doc/development/geo.md                                               2
-rw-r--r--  doc/development/gitaly.md                                           10
-rw-r--r--  doc/development/gotchas.md                                           2
-rw-r--r--  doc/development/i18n/externalization.md                             10
-rw-r--r--  doc/development/import_export.md                                    10
-rw-r--r--  doc/development/import_project.md                                    8
-rw-r--r--  doc/development/internal_api.md                                     16
-rw-r--r--  doc/development/kubernetes.md                                        2
-rw-r--r--  doc/development/new_fe_guide/dependencies.md                         4
-rw-r--r--  doc/development/performance.md                                      14
-rw-r--r--  doc/development/post_deployment_migrations.md                        8
-rw-r--r--  doc/development/profiling.md                                         4
-rw-r--r--  doc/development/python_guide/index.md                               12
-rw-r--r--  doc/development/shell_commands.md                                    2
-rw-r--r--  doc/development/shell_scripting_guide/index.md                       2
-rw-r--r--  doc/development/testing_guide/best_practices.md                      4
-rw-r--r--  doc/development/testing_guide/frontend_testing.md                    8
-rw-r--r--  doc/development/utilities.md                                       287
32 files changed, 420 insertions(+), 209 deletions(-)
diff --git a/doc/development/README.md b/doc/development/README.md
index 84d4fb5519f..f94da66085b 100644
--- a/doc/development/README.md
+++ b/doc/development/README.md
@@ -134,7 +134,6 @@ Complementary reads:
- [Verifying database capabilities](verifying_database_capabilities.md)
- [Database Debugging and Troubleshooting](database_debugging.md)
- [Query Count Limits](query_count_limits.md)
-- [Database helper modules](database_helpers.md)
- [Code comments](code_comments.md)
- [Creating enums](creating_enums.md)
- [Renaming features](renaming_features.md)
@@ -191,6 +190,10 @@ Complementary reads:
- [Shell scripting standards and style guidelines](shell_scripting_guide/index.md)
+## Domain-specific guides
+
+- [CI/CD development documentation](cicd/index.md)
+
## Other Development guides
- [Defining relations between files using projections](projections.md)
diff --git a/doc/development/architecture.md b/doc/development/architecture.md
index 778cc1aa1d7..cd41eb28e4d 100644
--- a/doc/development/architecture.md
+++ b/doc/development/architecture.md
@@ -504,7 +504,7 @@ To summarize here's the [directory structure of the `git` user home directory](.
### Processes
-```sh
+```shell
ps aux | grep '^git'
```
diff --git a/doc/development/chaos_endpoints.md b/doc/development/chaos_endpoints.md
index 961520db7d8..2e55f19cd91 100644
--- a/doc/development/chaos_endpoints.md
+++ b/doc/development/chaos_endpoints.md
@@ -26,7 +26,7 @@ A secret token can be set through the `GITLAB_CHAOS_SECRET` environment variable
For example, when using the [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit)
this can be done with the following command:
-```bash
+```shell
GITLAB_CHAOS_SECRET=secret gdk run
```
@@ -60,7 +60,7 @@ GET /-/chaos/leakmem?memory_mb=1024&duration_s=50&async=true
| `duration_s` | integer | no | Minimum duration_s, in seconds, that the memory should be retained. Defaults to 30s. |
| `async` | boolean | no | Set to true to leak memory in a Sidekiq background worker process |
-```bash
+```shell
curl http://localhost:3000/-/chaos/leakmem?memory_mb=1024&duration_s=10 --header 'X-Chaos-Secret: secret'
curl http://localhost:3000/-/chaos/leakmem?memory_mb=1024&duration_s=10&token=secret
```
@@ -83,7 +83,7 @@ GET /-/chaos/cpu_spin?duration_s=50&async=true
| `duration_s` | integer | no | Duration, in seconds, that the core will be utilized. Defaults to 30s |
| `async` | boolean | no | Set to true to consume CPU in a Sidekiq background worker process |
-```bash
+```shell
curl http://localhost:3000/-/chaos/cpu_spin?duration_s=60 --header 'X-Chaos-Secret: secret'
curl http://localhost:3000/-/chaos/cpu_spin?duration_s=60&token=secret
```
@@ -108,7 +108,7 @@ GET /-/chaos/db_spin?duration_s=50&async=true
| `duration_s` | integer | no | Duration, in seconds, that the core will be utilized. Defaults to 30s |
| `async` | boolean | no | Set to true to perform the operation in a Sidekiq background worker process |
-```bash
+```shell
curl http://localhost:3000/-/chaos/db_spin?interval_s=1&duration_s=60 --header 'X-Chaos-Secret: secret'
curl http://localhost:3000/-/chaos/db_spin?interval_s=1&duration_s=60&token=secret
```
@@ -130,7 +130,7 @@ GET /-/chaos/sleep?duration_s=50&async=true
| `duration_s` | integer | no | Duration, in seconds, that the request will sleep for. Defaults to 30s |
| `async` | boolean | no | Set to true to sleep in a Sidekiq background worker process |
-```bash
+```shell
curl http://localhost:3000/-/chaos/sleep?duration_s=60 --header 'X-Chaos-Secret: secret'
curl http://localhost:3000/-/chaos/sleep?duration_s=60&token=secret
```
@@ -151,7 +151,7 @@ GET /-/chaos/kill?async=true
| ------------ | ------- | -------- | ---------------------------------------------------------------------- |
| `async` | boolean | no | Set to true to kill a Sidekiq background worker process |
-```bash
+```shell
curl http://localhost:3000/-/chaos/kill --header 'X-Chaos-Secret: secret'
curl http://localhost:3000/-/chaos/kill?token=secret
```
diff --git a/doc/development/cicd/index.md b/doc/development/cicd/index.md
new file mode 100644
index 00000000000..ed33eb01308
--- /dev/null
+++ b/doc/development/cicd/index.md
@@ -0,0 +1,59 @@
+# CI/CD development documentation
+
+Development guides that are specific to CI/CD are listed here.
+
+## Job scheduling
+
+When a pipeline is created, all its jobs are created at once for all stages, with an initial state of `created`. This makes it possible to visualize the full content of a pipeline.
+
+A job with the `created` state won't be seen by the Runner yet. To make it possible to assign a job to a Runner, the job must first transition into the `pending` state, which can happen if:
+
+1. The job is created in the very first stage of the pipeline.
+1. The job requires a manual start and has been triggered.
+1. All jobs from the previous stage have completed successfully. In this case we transition all jobs from the next stage to `pending`.
+1. The job specifies DAG dependencies using `needs:` and all the dependent jobs are completed.
+
+When the Runner is connected, it requests the next `pending` job to run by polling the server continuously.
+
+NOTE: **Note:** API endpoints used by the Runner to interact with GitLab are defined in [`lib/api/runner.rb`](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/api/runner.rb).
+
+After the server receives the request, it selects a `pending` job based on the [`Ci::RegisterJobService` algorithm](#ciregisterjobservice), then assigns and sends the job to the Runner.
+
+Once all jobs are completed for the current stage, the server "unlocks" all the jobs from the next stage by changing their state to `pending`. These can now be picked by the scheduling algorithm when the Runner requests new jobs, and the process continues like this until all stages are completed.
+
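The stage-unlocking rule above can be sketched as follows. This is a hypothetical, simplified illustration; the `Job` struct and the `unlock_next_stage!` helper are invented for this sketch and are not GitLab's actual code:

```ruby
# Sketch of the rule: when every job in the current stage succeeds,
# jobs in the next stage move from `created` to `pending`.
Job = Struct.new(:name, :stage, :state)

def unlock_next_stage!(jobs, current_stage)
  # Only unlock if all jobs in the current stage completed successfully.
  return unless jobs.select { |j| j.stage == current_stage }
                    .all? { |j| j.state == :success }

  # Transition the next stage's `created` jobs to `pending`.
  jobs.select { |j| j.stage == current_stage + 1 && j.state == :created }
      .each { |j| j.state = :pending }
end

jobs = [
  Job.new('build', 0, :success),
  Job.new('test',  1, :created),
  Job.new('lint',  1, :created),
]

unlock_next_stage!(jobs, 0)
# 'test' and 'lint' are now :pending and visible to the scheduling algorithm.
```

After the call, both second-stage jobs are `pending`; had any first-stage job not succeeded, they would have stayed `created`.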
+### Communication between Runner and GitLab server
+
+Once the Runner is [registered](../../ci/runners/README.md#registering-a-shared-runner) using the registration token, the server knows what type of jobs it can execute. This depends on:
+
+- The type of runner it is registered as:
+ - a shared runner
+ - a group runner
+ - a project specific runner
+- Any associated tags.
+
+The Runner initiates the communication by requesting jobs to execute with `POST /api/v4/jobs/request`. Although this polling generally happens every few seconds, we leverage HTTP caching headers to reduce the server-side workload when the job queue doesn't change.
+
+This API endpoint runs [`Ci::RegisterJobService`](https://gitlab.com/gitlab-org/gitlab/blob/master/app/services/ci/register_job_service.rb), which:
+
+1. Picks the next job to run from the pool of `pending` jobs
+1. Assigns it to the Runner
+1. Presents it to the Runner via the API response
+
+### `Ci::RegisterJobService`
+
+This service uses three top-level queries to gather the majority of the jobs, selected based on the level at which the Runner is registered:
+
+- Select jobs for shared Runner (instance level)
+- Select jobs for group level Runner
+- Select jobs for project Runner
+
+This list of jobs is then filtered further by matching tags between job and Runner tags.
+
+NOTE: **Note:** If a job has tags, the Runner will not pick the job unless it matches **all** of the tags.
+The Runner may have more tags than the job requires, but not vice versa.
+
+Finally, if the Runner is configured to pick only tagged jobs, all untagged jobs are filtered out.
+
+At this point we loop through the remaining `pending` jobs and try to assign the first job that the Runner "can pick" based on additional policies. For example, Runners marked as `protected` can only pick jobs that run against protected branches (such as production deployments).
+
+As the number of Runners in the pool increases, so does the chance of conflicts arising from assigning the same job to different Runners. To prevent that, we gracefully rescue conflict errors and assign the next job in the list.
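The tag-matching and untagged-job filtering described above can be sketched like this. The `Job` struct and the `can_pick?` helper are hypothetical simplifications for illustration, not the actual `Ci::RegisterJobService` implementation:

```ruby
# Sketch of the rule: a Runner picks a job only when the Runner's tags
# cover *all* of the job's tags, and a Runner that cannot run untagged
# jobs skips jobs with no tags.
Job = Struct.new(:name, :tags)

def can_pick?(runner_tags, run_untagged, job)
  # Untagged jobs are pickable only if the Runner accepts untagged jobs.
  return run_untagged if job.tags.empty?

  # Every job tag must be present on the Runner (superset check).
  (job.tags - runner_tags).empty?
end

jobs = [
  Job.new('deploy',  %w[docker production]),
  Job.new('compile', %w[docker]),
  Job.new('notify',  []),
]

runner_tags = %w[docker linux]
# A Runner tagged [docker, linux] that rejects untagged jobs:
pickable = jobs.select { |job| can_pick?(runner_tags, false, job) }
# Only 'compile' qualifies: 'deploy' needs the missing 'production' tag,
# and 'notify' is untagged.
```

The same Runner with `run_untagged` enabled would additionally pick `notify`, but never `deploy`, since the Runner may have more tags than the job requires but not fewer.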
diff --git a/doc/development/database_debugging.md b/doc/development/database_debugging.md
index 65a3e518585..234d6b2518d 100644
--- a/doc/development/database_debugging.md
+++ b/doc/development/database_debugging.md
@@ -60,7 +60,7 @@ When running specs with the [Spring preloader](rake_tasks.md#speed-up-tests-rake
the test database can get into a corrupted state. Trying to run the migration or
dropping/resetting the test database has no effect.
-```sh
+```shell
$ bundle exec spring rspec some_spec.rb
...
Failure/Error: ActiveRecord::Migration.maintain_test_schema!
@@ -78,7 +78,7 @@ ActiveRecord::PendingMigrationError:
To resolve, you can kill the spring server and app that lives between spec runs.
-```sh
+```shell
$ ps aux | grep spring
eric 87304 1.3 2.9 3080836 482596 ?? Ss 10:12AM 4:08.36 spring app | gitlab | started 6 hours ago | test mode
eric 37709 0.0 0.0 2518640 7524 s006 S Wed11AM 0:00.79 spring server | gitlab | started 29 hours ago
@@ -100,6 +100,6 @@ of GitLab schema later than the `MIN_SCHEMA_VERSION`, and then rolled back the
to an older migration, from before. In this case, in order to migrate forward again,
you should set the `SKIP_SCHEMA_VERSION_CHECK` environment variable.
-```sh
+```shell
bundle exec rake db:migrate SKIP_SCHEMA_VERSION_CHECK=true
```
diff --git a/doc/development/database_helpers.md b/doc/development/database_helpers.md
deleted file mode 100644
index 21e4e725de6..00000000000
--- a/doc/development/database_helpers.md
+++ /dev/null
@@ -1,63 +0,0 @@
-# Database helpers
-
-There are a number of useful helper modules defined in `/lib/gitlab/database/`.
-
-## Subquery
-
-In some cases it is not possible to perform an operation on a query.
-For example:
-
-```ruby
-Geo::EventLog.where('id < 100').limit(10).delete_all
-```
-
-Will give this error:
-
-> ActiveRecord::ActiveRecordError: delete_all doesn't support limit
-
-One solution would be to wrap it in another `where`:
-
-```ruby
-Geo::EventLog.where(id: Geo::EventLog.where('id < 100').limit(10)).delete_all
-```
-
-This works with PostgreSQL, but with MySQL it gives this error:
-
-> ActiveRecord::StatementInvalid: Mysql2::Error: This version of MySQL
-> doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'
-
-Also, that query doesn't have very good performance. Using a
-`INNER JOIN` with itself is better.
-
-So instead of this query:
-
-```sql
-SELECT geo_event_log.*
-FROM geo_event_log
-WHERE geo_event_log.id IN
- (SELECT geo_event_log.id
- FROM geo_event_log
- WHERE (id < 100)
- LIMIT 10)
-```
-
-It's better to write:
-
-```sql
-SELECT geo_event_log.*
-FROM geo_event_log
-INNER JOIN
- (SELECT geo_event_log.*
- FROM geo_event_log
- WHERE (id < 100)
- LIMIT 10) t2 ON geo_event_log.id = t2.id
-```
-
-And this is where `Gitlab::Database::Subquery.self_join` can help
-you. So you can rewrite the above statement as:
-
-```ruby
-Gitlab::Database::Subquery.self_join(Geo::EventLog.where('id < 100').limit(10)).delete_all
-```
-
-And this also works with MySQL, so you don't need to worry about that.
diff --git a/doc/development/distributed_tracing.md b/doc/development/distributed_tracing.md
index d2810fe89f0..948139f4aea 100644
--- a/doc/development/distributed_tracing.md
+++ b/doc/development/distributed_tracing.md
@@ -27,7 +27,7 @@ no overhead at all.
To enable `GITLAB_TRACING`, a valid _"configuration-string"_ value should be set, with a URL-like
form:
-```sh
+```shell
GITLAB_TRACING=opentracing://<driver>?<param_name>=<param_value>&<param_name_2>=<param_value_2>
```
@@ -90,7 +90,7 @@ documentation](https://www.jaegertracing.io/docs/1.9/getting-started/).
If you have Docker available, the easier approach to running the Jaeger all-in-one is through
Docker, using the following command:
-```sh
+```shell
$ docker run \
--rm \
-e COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
@@ -121,7 +121,7 @@ appropriate configuration string.
**TL;DR:** If you are running everything on the same host, use the following value:
-```sh
+```shell
export GITLAB_TRACING="opentracing://jaeger?http_endpoint=http%3A%2F%2Flocalhost%3A14268%2Fapi%2Ftraces&sampler=const&sampler_param=1"
```
@@ -152,7 +152,7 @@ application.
When `GITLAB_TRACING` is configured properly, the application will log this on startup:
-```sh
+```shell
13:41:53 gitlab-workhorse.1 | 2019/02/12 13:41:53 Tracing enabled
...
13:41:54 gitaly.1 | 2019/02/12 13:41:54 Tracing enabled
@@ -161,7 +161,7 @@ When `GITLAB_TRACING` is configured properly, the application will log this on s
If `GITLAB_TRACING` is not configured correctly, this will also be logged:
-```sh
+```shell
13:43:45 gitaly.1 | 2019/02/12 13:43:45 skipping tracing configuration step: tracer: unable to load driver mytracer
```
diff --git a/doc/development/documentation/index.md b/doc/development/documentation/index.md
index 34cf50f61d9..ffb8178326b 100644
--- a/doc/development/documentation/index.md
+++ b/doc/development/documentation/index.md
@@ -98,7 +98,7 @@ For example, if you move `doc/workflow/lfs/lfs_administration.md` to
A quick way to find them is to use `git grep`. First go to the root directory
where you cloned the `gitlab` repository and then do:
- ```sh
+ ```shell
git grep -n "workflow/lfs/lfs_administration"
git grep -n "lfs/lfs_administration"
```
@@ -435,7 +435,7 @@ This list does not limit what other linters you can add to your local documentat
documentation in the [`gitlab` project](https://gitlab.com/gitlab-org/gitlab), run the
following commands from within the `gitlab` project:
-```sh
+```shell
cd doc
proselint **/*.md
```
@@ -480,13 +480,13 @@ run the following commands from within your `gitlab` project root directory, whi
automatically detect the [`.markdownlint.json`](#markdownlint-configuration) config
file in the root of the project, and test all files in `/doc` and its subdirectories:
-```sh
+```shell
markdownlint 'doc/**/*.md'
```
If you wish to use a different config file, use the `-c` flag:
-```sh
+```shell
markdownlint -c <config-file-name> 'doc/**/*.md'
```
diff --git a/doc/development/documentation/site_architecture/release_process.md b/doc/development/documentation/site_architecture/release_process.md
index 51a02528758..de014c121a9 100644
--- a/doc/development/documentation/site_architecture/release_process.md
+++ b/doc/development/documentation/site_architecture/release_process.md
@@ -23,7 +23,7 @@ and tag all tooling images locally:
1. Make sure you're on the `dockerfiles/` directory of the `gitlab-docs` repo.
1. Build the images:
- ```sh
+ ```shell
docker build -t registry.gitlab.com/gitlab-org/gitlab-docs:bootstrap -f Dockerfile.bootstrap ../
docker build -t registry.gitlab.com/gitlab-org/gitlab-docs:builder-onbuild -f Dockerfile.builder.onbuild ../
docker build -t registry.gitlab.com/gitlab-org/gitlab-docs:nginx-onbuild -f Dockerfile.nginx.onbuild ../
@@ -64,13 +64,13 @@ this needs to happen when the stable branches for all products have been created
1. Make sure you're on the root path of the `gitlab-docs` repo.
1. Make sure your `master` is updated:
- ```sh
+ ```shell
git pull origin master
```
1. Run the raketask to create the single version:
- ```sh
+ ```shell
./bin/rake "release:single[12.0]"
```
@@ -96,7 +96,7 @@ this needs to happen when the stable branches for all products have been created
Optionally, you can test locally by building the image and running it:
-```sh
+```shell
docker build -t docs:12.0 -f Dockerfile.12.0 .
docker run -it --rm -p 4000:4000 docs:12.0
```
@@ -111,7 +111,7 @@ version and rotates the old one:
1. Make sure you're on the root path of the `gitlab-docs` repo.
1. Create a branch `release-X-Y`:
- ```sh
+ ```shell
git checkout master
git checkout -b release-12-0
```
@@ -150,7 +150,7 @@ version and rotates the old one:
1. In the end, there should be four files in total that have changed.
Commit and push to create the merge request using the "Release" template:
- ```sh
+ ```shell
git add content/ Dockerfile.master dockerfiles/Dockerfile.archives
git commit -m "Release 12.0"
git push origin release-12-0
@@ -172,7 +172,7 @@ versions:
pipelines succeed. The `release-X-Y` branch needs to be present locally,
otherwise the raketask will fail:
- ```sh
+ ```shell
./bin/rake release:dropdowns
```
@@ -218,7 +218,7 @@ from time to time.
If this is not possible or there are many changes, merge master into them:
-```sh
+```shell
git branch 12.0
git fetch origin master
git merge origin/master
diff --git a/doc/development/documentation/styleguide.md b/doc/development/documentation/styleguide.md
index b361648b2f0..225e3a65eab 100644
--- a/doc/development/documentation/styleguide.md
+++ b/doc/development/documentation/styleguide.md
@@ -329,7 +329,7 @@ where a reader must replace text with their own value.
For example:
-```sh
+```shell
cp <your_source_directory> <your_destination_directory>
```
@@ -1277,7 +1277,7 @@ METHOD /endpoint
Example request:
-```sh
+```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" 'https://gitlab.example.com/api/v4/endpoint?parameters'
```
@@ -1355,7 +1355,7 @@ Below is a set of [cURL](https://curl.haxx.se) examples that you can use in the
Get the details of a group:
-```bash
+```shell
curl --header "PRIVATE-TOKEN: <your_access_token>" https://gitlab.example.com/api/v4/groups/gitlab-org
```
@@ -1363,7 +1363,7 @@ curl --header "PRIVATE-TOKEN: <your_access_token>" https://gitlab.example.com/ap
Create a new project under the authenticated user's namespace:
-```bash
+```shell
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects?name=foo"
```
@@ -1373,7 +1373,7 @@ Instead of using `--request POST` and appending the parameters to the URI, you c
cURL's `--data` option. The example below will create a new project `foo` under
the authenticated user's namespace.
-```bash
+```shell
curl --data "name=foo" --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects"
```
@@ -1382,7 +1382,7 @@ curl --data "name=foo" --header "PRIVATE-TOKEN: <your_access_token>" "https://gi
> **Note:** In this example we create a new group. Watch carefully the single
and double quotes.
-```bash
+```shell
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" --header "Content-Type: application/json" --data '{"path": "my-group", "name": "My group"}' https://gitlab.example.com/api/v4/groups
```
@@ -1391,7 +1391,7 @@ curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" --header "Cont
Instead of using JSON or urlencode you can use multipart/form-data which
properly handles data encoding:
-```bash
+```shell
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" --form "title=ssh-key" --form "key=ssh-rsa AAAAB3NzaC1yc2EA..." https://gitlab.example.com/api/v4/users/25/keys
```
@@ -1405,7 +1405,7 @@ to escape them when possible. In the example below we create a new issue which
contains spaces in its title. Observe how spaces are escaped using the `%20`
ASCII code.
-```bash
+```shell
curl --request POST --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/42/issues?title=Hello%20Dude"
```
@@ -1417,7 +1417,7 @@ The GitLab API sometimes accepts arrays of strings or integers. For example, to
restrict the sign-up e-mail domains of a GitLab instance to `*.example.com` and
`example.net`, you would do something like this:
-```bash
+```shell
curl --request PUT --header "PRIVATE-TOKEN: <your_access_token>" --data "domain_whitelist[]=*.example.com" --data "domain_whitelist[]=example.net" https://gitlab.example.com/api/v4/application/settings
```
diff --git a/doc/development/emails.md b/doc/development/emails.md
index 2c5f3be45d8..5617d9d43f2 100644
--- a/doc/development/emails.md
+++ b/doc/development/emails.md
@@ -65,13 +65,13 @@ See the [Rails guides](https://guides.rubyonrails.org/action_mailer_basics.html#
1. Run this command in the GitLab root directory to launch `mail_room`:
- ```sh
+ ```shell
bundle exec mail_room -q -c config/mail_room.yml
```
1. Verify that everything is configured correctly:
- ```sh
+ ```shell
bundle exec rake gitlab:incoming_email:check RAILS_ENV=development
```
diff --git a/doc/development/fe_guide/frontend_faq.md b/doc/development/fe_guide/frontend_faq.md
index 01ed07f8736..ad52919a9af 100644
--- a/doc/development/fe_guide/frontend_faq.md
+++ b/doc/development/fe_guide/frontend_faq.md
@@ -33,7 +33,7 @@ Find here the [source code setting the attribute](https://gitlab.com/gitlab-org/
The `rake routes` command can be used to list all the routes available in the application. By piping the output into `grep`, we can search through the list of available routes.
The output includes the request types available, route parameters and the relevant controller.
-```sh
+```shell
bundle exec rake routes | grep "issues"
```
diff --git a/doc/development/fe_guide/graphql.md b/doc/development/fe_guide/graphql.md
index 8c284ae955d..336a808b793 100644
--- a/doc/development/fe_guide/graphql.md
+++ b/doc/development/fe_guide/graphql.md
@@ -208,6 +208,21 @@ Now every single time on attempt to fetch a version, our client will fetch `id`
Read more about local state management with Apollo in the [Vue Apollo documentation](https://vue-apollo.netlify.com/guide/local-state.html#local-state).
+### Using with Vuex
+
+When Apollo Client is used within Vuex and fetched data is stored in the Vuex store, there is no need to keep the Apollo Client cache enabled. Otherwise we would have the API data stored in two places - the Vuex store and the Apollo Client cache. Moreover, with Apollo's default settings, a subsequent fetch from the GraphQL API could return data from the Apollo cache (when the query and variables are the same). To prevent this behavior, we need to disable the Apollo Client cache by passing a valid `fetchPolicy` option to its constructor:
+
+```js
+import fetchPolicies from '~/graphql_shared/fetch_policy_constants';
+
+export const gqClient = createGqClient(
+ {},
+ {
+ fetchPolicy: fetchPolicies.NO_CACHE,
+ },
+);
+```
+
### Feature flags in queries
Sometimes it may be useful to have an entity in the GraphQL query behind a feature flag.
diff --git a/doc/development/feature_flags/controls.md b/doc/development/feature_flags/controls.md
index 731fd7171f0..d711e49ee8b 100644
--- a/doc/development/feature_flags/controls.md
+++ b/doc/development/feature_flags/controls.md
@@ -153,13 +153,13 @@ When a feature gate has been removed from the code base, the feature
record still exists in the database that the flag was deployed to.
The record can be deleted once the MR is deployed to each environment:
-```sh
+```shell
/chatops run feature delete some_feature --dev
/chatops run feature delete some_feature --staging
```
Then, you can delete it from production after the MR is deployed to prod:
-```sh
+```shell
/chatops run feature delete some_feature
```
diff --git a/doc/development/geo.md b/doc/development/geo.md
index 5010e44e826..38a64cbadca 100644
--- a/doc/development/geo.md
+++ b/doc/development/geo.md
@@ -244,7 +244,7 @@ Whenever a new Geo node is configured or the database schema changes on the
**primary** node, you must refresh the foreign tables on the **secondary** node
by running the following:
-```sh
+```shell
bundle exec rake geo:db:refresh_foreign_tables
```
diff --git a/doc/development/gitaly.md b/doc/development/gitaly.md
index 3c18b3587c6..10da59ee9e0 100644
--- a/doc/development/gitaly.md
+++ b/doc/development/gitaly.md
@@ -98,13 +98,13 @@ most commonly-used RPCs can be enabled via feature flags:
A convenience Rake task can be used to enable or disable these flags
all together. To enable:
-```sh
+```shell
bundle exec rake gitlab:features:enable_rugged
```
To disable:
-```sh
+```shell
bundle exec rake gitlab:features:disable_rugged
```
@@ -343,7 +343,7 @@ the integration by using GDK:
submitting commit, observing history, etc.).
1. Check that the list of current metrics has the new counter for the feature flag:
- ```sh
+ ```shell
curl --silent http://localhost:9236/metrics | grep go_find_all_tags
```
@@ -352,7 +352,7 @@ the integration by using GDK:
1. Navigate to GDK's root directory.
1. Start a Rails console:
- ```sh
+ ```shell
bundle install && bundle exec rails console
```
@@ -373,6 +373,6 @@ the integration by using GDK:
your changes (project creation, submitting commit, observing history, etc.).
1. Verify the feature is on by observing the metrics for it:
- ```sh
+ ```shell
curl --silent http://localhost:9236/metrics | grep go_find_all_tags
```
diff --git a/doc/development/gotchas.md b/doc/development/gotchas.md
index 09d0d71b3d7..5e9722d9586 100644
--- a/doc/development/gotchas.md
+++ b/doc/development/gotchas.md
@@ -43,7 +43,7 @@ end
When run, this spec doesn't do what we might expect:
-```sh
+```shell
1) API::API reproduce sequence issue creates a second label
Failure/Error: expect(json_response.first['name']).to eq('label1')
diff --git a/doc/development/i18n/externalization.md b/doc/development/i18n/externalization.md
index b9ab5f4e8ff..09908ed2fed 100644
--- a/doc/development/i18n/externalization.md
+++ b/doc/development/i18n/externalization.md
@@ -452,7 +452,7 @@ For more information, see the [`gl-sprintf`](https://gitlab-org.gitlab.io/gitlab
Now that the new content is marked for translation, we need to update
`locale/gitlab.pot` files with the following command:
-```sh
+```shell
bin/rake gettext:regenerate
```
@@ -526,14 +526,14 @@ Let's suppose you want to add translations for a new language, let's say French.
1. Next, you need to add the language:
- ```sh
+ ```shell
bin/rake gettext:add_language[fr]
```
If you want to add a new language for a specific region, the command is similar,
you just need to separate the region with an underscore (`_`). For example:
- ```sh
+ ```shell
bin/rake gettext:add_language[en_GB]
```
@@ -547,7 +547,7 @@ Let's suppose you want to add translations for a new language, let's say French.
in order to generate the binary MO files and finally update the JSON files
containing the translations:
- ```sh
+ ```shell
bin/rake gettext:compile
```
@@ -557,7 +557,7 @@ Let's suppose you want to add translations for a new language, let's say French.
1. After checking that the changes are ok, you can proceed to commit the new files.
For example:
- ```sh
+ ```shell
git add locale/fr/ app/assets/javascripts/locale/fr/
git commit -m "Add French translations for Cycle Analytics page"
```
diff --git a/doc/development/import_export.md b/doc/development/import_export.md
index 3ee723bc897..323ed48aaf9 100644
--- a/doc/development/import_export.md
+++ b/doc/development/import_export.md
@@ -14,7 +14,7 @@ Project.find_by_full_path('group/project').import_state.slice(:jid, :status, :la
> {"jid"=>"414dec93f941a593ea1a6894", "status"=>"finished", "last_error"=>nil}
```
-```bash
+```shell
# Logs
grep JID /var/log/gitlab/sidekiq/current
grep "Import/Export error" /var/log/gitlab/sidekiq/current
@@ -30,7 +30,7 @@ Read through the current performance problems using the Import/Export below.
Out of memory (OOM) errors are normally caused by the [Sidekiq Memory Killer](../administration/operations/sidekiq_memory_killer.md):
-```bash
+```shell
SIDEKIQ_MEMORY_KILLER_MAX_RSS = 2000000
SIDEKIQ_MEMORY_KILLER_HARD_LIMIT_RSS = 3000000
SIDEKIQ_MEMORY_KILLER_GRACE_TIME = 900
@@ -38,7 +38,7 @@ SIDEKIQ_MEMORY_KILLER_GRACE_TIME = 900
An import status `started`, and the following Sidekiq logs will signal a memory issue:
-```bash
+```shell
WARN: Work still in progress <struct with JID>
```
@@ -59,7 +59,7 @@ class StuckImportJobsWorker
...
```
-```bash
+```shell
Marked stuck import jobs as failed. JIDs: xyz
```
@@ -219,7 +219,7 @@ We do not need to bump the version up in any of the following cases:
Every time we bump the version, the integration specs will fail and can be fixed with:
-```bash
+```shell
bundle exec rake gitlab:import_export:bump_version
```
diff --git a/doc/development/import_project.md b/doc/development/import_project.md
index 06c0bd02262..37cf07ff702 100644
--- a/doc/development/import_project.md
+++ b/doc/development/import_project.md
@@ -30,7 +30,7 @@ Note that to use the script, it will require some preparation if you haven't don
For details how to use `bin/import-project`, run:
-```sh
+```shell
bin/import-project --help
```
@@ -53,7 +53,7 @@ As part of this script we also disable direct and background upload to avoid sit
We can simply run this script from the terminal:
-```sh
+```shell
bundle exec rake "gitlab:import_export:import[root, root, testingprojectimport, /path/to/file.tar.gz]"
```
@@ -63,7 +63,7 @@ The last option is to import a project using a Rails console:
1. Start a Ruby on Rails console:
- ```sh
+ ```shell
# Omnibus GitLab
gitlab-rails console
@@ -126,7 +126,7 @@ You can use this [snippet](https://gitlab.com/gitlab-org/gitlab/snippets/1924954
You can execute the script from the `gdk/gitlab` directory like this:
-```sh
+```shell
bundle exec rails r /path_to_script/script.rb project_name /path_to_extracted_project request_store_enabled
```
diff --git a/doc/development/internal_api.md b/doc/development/internal_api.md
index dbb721b6018..0bdc1d07591 100644
--- a/doc/development/internal_api.md
+++ b/doc/development/internal_api.md
@@ -51,7 +51,7 @@ POST /internal/allowed
Example request:
-```sh
+```shell
curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded token>" --data "key_id=11&project=gnuwget/wget2&action=git-upload-pack&protocol=ssh" http://localhost:3001/api/v4/internal/allowed
```
@@ -99,7 +99,7 @@ information for LFS clients when the repository is accessed over SSH.
Example request:
-```sh
+```shell
curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded token>" --data "key_id=11&project=gnuwget/wget2" http://localhost:3001/api/v4/internal/lfs_authenticate
```
@@ -132,7 +132,7 @@ GET /internal/authorized_keys
Example request:
-```sh
+```shell
curl --request GET --header "Gitlab-Shared-Secret: <Base64 encoded secret>" "http://localhost:3001/api/v4/internal/authorized_keys?key=<key as passed by OpenSSH>"
```
@@ -167,7 +167,7 @@ GET /internal/discover
Example request:
-```sh
+```shell
curl --request GET --header "Gitlab-Shared-Secret: <Base64 encoded secret>" "http://localhost:3001/api/v4/internal/discover?key_id=7"
```
@@ -196,7 +196,7 @@ GET /internal/check
Example request:
-```sh
+```shell
curl --request GET --header "Gitlab-Shared-Secret: <Base64 encoded secret>" "http://localhost:3001/api/v4/internal/check"
```
@@ -232,7 +232,7 @@ GET /internal/two_factor_recovery_codes
Example request:
-```sh
+```shell
curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded secret>" --data "key_id=7" http://localhost:3001/api/v4/internal/two_factor_recovery_codes
```
@@ -275,7 +275,7 @@ POST /internal/pre_receive
Example request:
-```sh
+```shell
curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded secret>" --data "gl_repository=project-7" http://localhost:3001/api/v4/internal/pre_receive
```
@@ -307,7 +307,7 @@ POST /internal/post_receive
Example Request:
-```sh
+```shell
curl --request POST --header "Gitlab-Shared-Secret: <Base64 encoded secret>" --data "gl_repository=project-7" --data "identifier=user-1" --data "changes=0000000000000000000000000000000000000000 fd9e76b9136bdd9fe217061b497745792fe5a5ee gh-pages\n" http://localhost:3001/api/v4/internal/post_receive
```
diff --git a/doc/development/kubernetes.md b/doc/development/kubernetes.md
index db06ddf352f..1a8aa7647af 100644
--- a/doc/development/kubernetes.md
+++ b/doc/development/kubernetes.md
@@ -164,7 +164,7 @@ installation. Once the installation/upgrade is underway, wait for the
pod to be created. Then run the following to obtain the pods logs as
they are written:
-```bash
+```shell
kubectl logs <pod_name> --follow -n gitlab-managed-apps
```
diff --git a/doc/development/new_fe_guide/dependencies.md b/doc/development/new_fe_guide/dependencies.md
index 161ffb1fb57..afdf6e27b37 100644
--- a/doc/development/new_fe_guide/dependencies.md
+++ b/doc/development/new_fe_guide/dependencies.md
@@ -17,13 +17,13 @@ production assets post-compile.
To add or upgrade a dependency, run:
-```sh
+```shell
yarn add <your dependency here>
```
This may introduce duplicate dependencies. To de-duplicate `yarn.lock`, run:
-```sh
+```shell
node_modules/.bin/yarn-deduplicate --list --strategy fewer yarn.lock && yarn install
```
diff --git a/doc/development/performance.md b/doc/development/performance.md
index 94285efdf1e..a211fddc141 100644
--- a/doc/development/performance.md
+++ b/doc/development/performance.md
@@ -123,7 +123,7 @@ Keeping that in mind, to create a profile, identify (or create) a spec that
exercises the troublesome code path, then run it using the `bin/rspec-stackprof`
helper, e.g.:
-```sh
+```shell
$ LIMIT=10 bin/rspec-stackprof spec/policies/project_policy_spec.rb
8/8 |====== 100 ======>| Time: 00:00:18
@@ -157,14 +157,14 @@ it calls, were being executed.
To create a graphical view of the call stack:
-```sh
+```shell
stackprof tmp/project_policy_spec.rb.dump --graphviz > project_policy_spec.dot
dot -Tsvg project_policy_spec.dot > project_policy_spec.svg
```
To load the profile in [kcachegrind](https://kcachegrind.github.io/):
-```sh
+```shell
stackprof tmp/project_policy_spec.dump --callgrind > project_policy_spec.callgrind
kcachegrind project_policy_spec.callgrind # Linux
qcachegrind project_policy_spec.callgrind # Mac
@@ -172,7 +172,7 @@ qcachegrind project_policy_spec.callgrind # Mac
It may be useful to zoom in on a specific method, e.g.:
-```sh
+```shell
$ stackprof tmp/project_policy_spec.rb.dump --method warm_asset_cache
TestEnv#warm_asset_cache (/Users/lupine/dev/gitlab.com/gitlab-org/gitlab-development-kit/gitlab/spec/support/test_env.rb:164)
samples: 0 self (0.0%) / 6288 total (36.9%)
@@ -225,7 +225,7 @@ may have changed over time.
To activate profiling in your local environment, run the following:
-```sh
+```shell
export RSPEC_PROFILING=yes
rake rspec_profiling:install
```
@@ -237,7 +237,7 @@ variable set.
Ad-hoc investigation of the collected results can be performed in an interactive
shell:
-```sh
+```shell
$ rake rspec_profiling:console
irb(main):001:0> results.count
=> 231
@@ -271,7 +271,7 @@ bundle exec rbtrace -p <PID> -e 'File.open("heap.json", "wb") { |t| ObjectSpace.
Having the JSON, you finally could render a picture using the script [provided by Aaron](https://gist.github.com/tenderlove/f28373d56fdd03d8b514af7191611b88) or similar:
-```sh
+```shell
ruby heapviz.rb heap.json
```
diff --git a/doc/development/post_deployment_migrations.md b/doc/development/post_deployment_migrations.md
index a41096aa3eb..4d523178a21 100644
--- a/doc/development/post_deployment_migrations.md
+++ b/doc/development/post_deployment_migrations.md
@@ -9,13 +9,13 @@ when running `rake db:migrate`.
For example, this would run all migrations including any post deployment
migrations:
-```bash
+```shell
bundle exec rake db:migrate
```
This however will skip post deployment migrations:
-```bash
+```shell
SKIP_POST_DEPLOYMENT_MIGRATIONS=true bundle exec rake db:migrate
```
@@ -26,7 +26,7 @@ post deployment migrations after deploying a new version. Let's assume you
normally use the command `chef-client` to do so. To make use of this feature
you'd have to run this command as follows:
-```bash
+```shell
SKIP_POST_DEPLOYMENT_MIGRATIONS=true sudo chef-client
```
@@ -41,7 +41,7 @@ server but with the variable _unset_.
To create a post deployment migration you can use the following Rails generator:
-```bash
+```shell
bundle exec rails g post_deployment_migration migration_name_here
```
diff --git a/doc/development/profiling.md b/doc/development/profiling.md
index 18683fa10f8..316273f37b8 100644
--- a/doc/development/profiling.md
+++ b/doc/development/profiling.md
@@ -99,7 +99,7 @@ Sherlock is a custom profiling tool built into GitLab. Sherlock is _only_
available when running GitLab in development mode _and_ when setting the
environment variable `ENABLE_SHERLOCK` to a non empty value. For example:
-```sh
+```shell
ENABLE_SHERLOCK=1 bundle exec rails s
```
@@ -112,7 +112,7 @@ Bullet adds quite a bit of logging noise it's disabled by default. To enable
Bullet, set the environment variable `ENABLE_BULLET` to a non-empty value before
starting GitLab. For example:
-```sh
+```shell
ENABLE_BULLET=true bundle exec rails s
```
diff --git a/doc/development/python_guide/index.md b/doc/development/python_guide/index.md
index d898351345d..af1ec44bf3d 100644
--- a/doc/development/python_guide/index.md
+++ b/doc/development/python_guide/index.md
@@ -15,7 +15,7 @@ Ruby world: [rbenv](https://github.com/rbenv/rbenv).
To install `pyenv` on macOS, you can use [Homebrew](https://brew.sh/) with:
-```bash
+```shell
brew install pyenv
```
@@ -23,7 +23,7 @@ brew install pyenv
To install `pyenv` on Linux, you can run the command below:
-```bash
+```shell
curl https://pyenv.run | bash
```
@@ -38,13 +38,13 @@ check for any additional steps required for it.
For Fish, you can install a plugin for [Fisher](https://github.com/jorgebucaran/fisher):
-```bash
+```shell
fisher add fisherman/pyenv
```
Or for [Oh My Fish](https://github.com/oh-my-fish/oh-my-fish):
-```bash
+```shell
omf install pyenv
```
@@ -59,7 +59,7 @@ Recently, an equivalent to the `Gemfile` and the [Bundler](https://bundler.io/)
You will now find a `Pipfile` with the dependencies in the root folder. To install them, run:
-```bash
+```shell
pipenv install
```
@@ -70,7 +70,7 @@ Running this command will install both the required Python version as well as re
To run any Python code under the Pipenv environment, you need to first start a `virtualenv` based on the dependencies
of the application. With Pipenv, this is as simple as running:
-```bash
+```shell
pipenv shell
```
diff --git a/doc/development/shell_commands.md b/doc/development/shell_commands.md
index aa326cbdd34..1f97d433223 100644
--- a/doc/development/shell_commands.md
+++ b/doc/development/shell_commands.md
@@ -211,7 +211,7 @@ Since there are no anchors in the used regular expression, the `git:/tmp/lol` in
When importing, GitLab would execute the following command, passing the `import_url` as an argument:
-```sh
+```shell
git clone file://git:/tmp/lol
```
diff --git a/doc/development/shell_scripting_guide/index.md b/doc/development/shell_scripting_guide/index.md
index a501e3def10..387ef01bdcf 100644
--- a/doc/development/shell_scripting_guide/index.md
+++ b/doc/development/shell_scripting_guide/index.md
@@ -79,7 +79,7 @@ It's recommended to use the [shfmt](https://github.com/mvdan/sh#shfmt) tool to m
We format shell scripts according to the [Google Shell Style Guide](https://google.github.io/styleguide/shell.xml),
so the following `shfmt` invocation should be applied to the project's script files:
-```bash
+```shell
shfmt -i 2 -ci scripts/**/*.sh
```
diff --git a/doc/development/testing_guide/best_practices.md b/doc/development/testing_guide/best_practices.md
index 4fc9c35b2d2..f4844cb14d1 100644
--- a/doc/development/testing_guide/best_practices.md
+++ b/doc/development/testing_guide/best_practices.md
@@ -36,7 +36,7 @@ Here are some things to keep in mind regarding test performance:
To run rspec tests:
-```sh
+```shell
# run all tests
bundle exec rspec
@@ -46,7 +46,7 @@ bundle exec rspec spec/[path]/[to]/[spec].rb
Use [guard](https://github.com/guard/guard) to continuously monitor for changes and only run matching tests:
-```sh
+```shell
bundle exec guard
```
diff --git a/doc/development/testing_guide/frontend_testing.md b/doc/development/testing_guide/frontend_testing.md
index e2b29136524..626d4147e6a 100644
--- a/doc/development/testing_guide/frontend_testing.md
+++ b/doc/development/testing_guide/frontend_testing.md
@@ -571,7 +571,7 @@ As long as the fixtures don't change, `yarn test` is sufficient (and saves you s
While you work on a testsuite, you may want to run these specs in watch mode, so they rerun automatically on every save.
-```bash
+```shell
# Watch and rerun all specs matching the name icon
yarn jest --watch icon
@@ -581,7 +581,7 @@ yarn jest --watch path/to/spec/file.spec.js
You can also run some focused tests without the `--watch` flag
-```bash
+```shell
# Run specific jest file
yarn jest ./path/to/local_spec.js
# Run specific jest folder
@@ -609,7 +609,7 @@ remove these directives when you commit your code.
It is also possible to only run Karma on specific folders or files by filtering
the run tests via the argument `--filter-spec` or short `-f`:
-```bash
+```shell
# Run all files
yarn karma-start
# Run specific spec files
@@ -623,7 +623,7 @@ yarn karma-start -f vue_shared -f vue_mr_widget
You can also use glob syntax to match files. Remember to put quotes around the
glob otherwise your shell may split it into multiple arguments:
-```bash
+```shell
# Run all specs named `file_spec` within the IDE subdirectory
yarn karma -f 'spec/javascripts/ide/**/file_spec.js'
```
diff --git a/doc/development/utilities.md b/doc/development/utilities.md
index 68851f4d550..561d5efc696 100644
--- a/doc/development/utilities.md
+++ b/doc/development/utilities.md
@@ -196,12 +196,14 @@ end
## `ReactiveCaching`
-The `ReactiveCaching` concern is used to fetch some data in the background and
-store it in the Rails cache, keeping it up-to-date for as long as it is being
-requested. If the data hasn't been requested for `reactive_cache_lifetime`,
-it will stop being refreshed, and then be removed.
+> This doc refers to <https://gitlab.com/gitlab-org/gitlab/blob/master/app/models/concerns/reactive_caching.rb>.
-Example of use:
+The `ReactiveCaching` concern is used for fetching some data in the background and storing it
+in the Rails cache, keeping it up-to-date for as long as it is being requested. If the
+data hasn't been requested for `reactive_cache_lifetime`, it will stop being refreshed,
+and then be removed.
+
+### Examples
```ruby
class Foo < ApplicationRecord
@@ -209,67 +211,262 @@ class Foo < ApplicationRecord
after_save :clear_reactive_cache!
- def calculate_reactive_cache
+ def calculate_reactive_cache(param1, param2)
# Expensive operation here. The return value of this method is cached
end
def result
- with_reactive_cache do |data|
+ # Any arguments can be passed to `with_reactive_cache`. `calculate_reactive_cache`
+ # will be called with the same arguments.
+ with_reactive_cache(param1, param2) do |data|
# ...
end
end
end
```
-In this example, the first time `#result` is called, it will return `nil`.
-However, it will enqueue a background worker to call `#calculate_reactive_cache`
-and set an initial cache lifetime of ten minutes.
+In this example, the first time `#result` is called, it will return `nil`. However,
+it will enqueue a background worker to call `#calculate_reactive_cache` and set an
+initial cache lifetime of 10 minutes.
-The background worker needs to find or generate the object on which
-`with_reactive_cache` was called.
-The default behaviour can be overridden by defining a custom
-`reactive_cache_worker_finder`.
-Otherwise, the background worker will use the class name and primary key to get
-the object using the ActiveRecord `find_by` method.
+### How it works
-```ruby
-class Bar
- include ReactiveCaching
+The first time `#with_reactive_cache` is called, a background job is enqueued and
+`with_reactive_cache` returns `nil`. The background job calls `#calculate_reactive_cache`
+and stores its return value. It also re-enqueues the background job to run again after
+`reactive_cache_refresh_interval`. Therefore, it will keep the stored value up to date.
+Calculations never run concurrently.
- self.reactive_cache_key = ->() { ["bar", "thing"] }
- self.reactive_cache_worker_finder = ->(_id, *args) { from_cache(*args) }
+Calling `#with_reactive_cache` while a value is cached will call the block given to
+`#with_reactive_cache`, yielding the cached value. It will also extend the lifetime
+of the cache by the `reactive_cache_lifetime` value.
- def self.from_cache(var1, var2)
- # This method will be called by the background worker with "bar1" and
- # "bar2" as arguments.
- new(var1, var2)
- end
+Once the lifetime has expired, no more background jobs will be enqueued and calling
+`#with_reactive_cache` will again return `nil` - starting the process all over again.
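The lifecycle above can be sketched as a plain-Ruby simulation. This is purely illustrative (the class and method names here are invented); the real concern uses `Rails.cache` and Sidekiq workers rather than the in-memory state shown:

```ruby
# Illustrative simulation of the ReactiveCaching lifecycle: the first call
# returns nil and "enqueues" a background job; once the job has run, later
# calls yield the cached value.
class FakeReactiveCache
  def initialize(&calculation)
    @calculation = calculation
    @cached = nil
    @job_enqueued = false
  end

  def with_reactive_cache
    if @cached.nil?
      # No value yet: enqueue the background calculation and return nil.
      @job_enqueued = true
      nil
    else
      yield @cached
    end
  end

  # Stands in for the background worker running calculate_reactive_cache.
  def run_background_job!
    @cached = @calculation.call if @job_enqueued
  end
end
```

For example, `FakeReactiveCache.new { expensive_call }.with_reactive_cache { |d| d }` returns `nil` until `run_background_job!` has stored a value.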
- def initialize(var1, var2)
- # ...
+### When to use
+
+- If we need to make a request to an external API (for example, requests to the
+Kubernetes API), as it is not advisable to keep the application server worker blocked
+for the duration of the external request.
+- If a model needs to perform a lot of database calls or other time consuming
+calculations.
+
+### How to use
+
+#### In models and services
+
+The `ReactiveCaching` concern can be used in models as well as `project_services`
+(`app/models/project_services`).
+
+1. Include the concern in your model or service.
+
+ When including in a model:
+
+ ```ruby
+ include ReactiveCaching
+ ```
+
+ or when including in a `project_service`:
+
+ ```ruby
+ include ReactiveService
+ ```
+
+1. Implement the `calculate_reactive_cache` method in your model/service.
+1. Call `with_reactive_cache` in your model/service where the cached value is needed.
+
+#### In controllers
+
+Controller endpoints that call a model or service method that uses `ReactiveCaching` should
+not wait until the background worker completes.
+
+- An API that calls a model or service method that uses `ReactiveCaching` should return
+`202 Accepted` when the cache is being calculated (when `#with_reactive_cache` returns `nil`).
+- It should also
+[set the polling interval header](fe_guide/performance.md#realtime-components) with
+`Gitlab::PollingInterval.set_header`.
+- The consumer of the API is expected to poll the API.
+- You can also consider implementing [ETag caching](polling.md) to reduce the server
+load caused by polling.
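The response logic above can be sketched as a small helper. This is a hypothetical sketch, not GitLab's actual API code; the method and header names are illustrative (a real controller would use `Gitlab::PollingInterval.set_header` on the response object):

```ruby
# Hypothetical: build the response for an endpoint backed by ReactiveCaching.
# `cached_value` is what #with_reactive_cache returned (nil while the
# background worker is still calculating).
def reactive_cache_response(cached_value)
  if cached_value.nil?
    # Cache is still being calculated: 202 Accepted, and ask the client
    # to poll again (interval in milliseconds).
    { status: 202, headers: { 'Poll-Interval' => '10000' }, body: nil }
  else
    { status: 200, headers: {}, body: cached_value }
  end
end
```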
+
+#### Methods to implement in a model or service
+
+These are methods that should be implemented in the model/service that includes `ReactiveCaching`.
+
+##### `#calculate_reactive_cache` (required)
+
+- This method must be implemented. Its return value will be cached.
+- It will be called by `ReactiveCaching` when it needs to populate the cache.
+- Any arguments passed to `with_reactive_cache` will also be passed to `calculate_reactive_cache`.
+
+##### `#reactive_cache_updated` (optional)
+
+- This method can be implemented if needed.
+- It is called by the `ReactiveCaching` concern whenever the cache is updated.
+If the cache is being refreshed and the new cache value is the same as the old cache
+value, this method will not be called. It is only called if a new value is stored in
+the cache.
+- It can be used to perform an action whenever the cache is updated.
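The "only on change" semantics can be modeled in isolation. All names below are assumed for illustration; the point is simply that the hook is skipped when a refresh produces the same value as before:

```ruby
# Simplified model of when #reactive_cache_updated fires: only when the
# newly calculated value differs from the currently cached one.
class UpdateTracker
  attr_reader :updated_count

  def initialize
    @store = {}
    @updated_count = 0
  end

  def store_result(key, new_value)
    old_value = @store[key]
    @store[key] = new_value
    # Skip the hook when a refresh produced an identical value.
    reactive_cache_updated unless old_value == new_value
  end

  def reactive_cache_updated
    @updated_count += 1
  end
end
```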
+
+#### Methods called by a model or service
+
+These are methods provided by `ReactiveCaching` and should be called in
+the model/service.
+
+##### `#with_reactive_cache` (required)
+
+- `with_reactive_cache` must be called where the result of `calculate_reactive_cache`
+is required.
+- A block can be given to `with_reactive_cache`. `with_reactive_cache` can also take
+any number of arguments. Any arguments passed to `with_reactive_cache` will be
+passed to `calculate_reactive_cache`. The arguments passed to `with_reactive_cache`
+will be appended to the cache key name.
+- If `with_reactive_cache` is called when the result has already been cached, the
+block will be called, yielding the cached value and the return value of the block
+will be returned by `with_reactive_cache`. It will also reset the timeout of the
+cache to the `reactive_cache_lifetime` value.
+- If the result has not yet been cached, `with_reactive_cache` will return `nil`.
+It will also enqueue a background job, which will call `calculate_reactive_cache`
+and cache the result.
+- Once the background job has completed and the result is cached, the next call
+to `with_reactive_cache` will pick up the cached value.
+- In the example below, `data` is the cached value which is yielded to the block
+given to `with_reactive_cache`.
+
+ ```ruby
+ class Foo < ApplicationRecord
+ include ReactiveCaching
+
+ def calculate_reactive_cache(param1, param2)
+ # Expensive operation here. The return value of this method is cached
+ end
+
+ def result
+ with_reactive_cache(param1, param2) do |data|
+ # ...
+ end
+ end
end
+ ```
- def calculate_reactive_cache
- # Expensive operation here. The return value of this method is cached
+##### `#clear_reactive_cache!` (optional)
+
+- This method can be called when the cache needs to be expired/cleared. For example,
+it can be called in an `after_save` callback in a model so that the cache is
+cleared after the model is modified.
+- This method should be called with the same parameters that are passed to
+`with_reactive_cache` because the parameters are part of the cache key.
+
+##### `#without_reactive_cache` (optional)
+
+- This is a convenience method that can be used for debugging purposes.
+- This method calls `calculate_reactive_cache` in the current process instead of
+in a background worker.
+
+#### Configurable options
+
+There are some `class_attribute` options which can be tweaked.
+
+##### `self.reactive_cache_key`
+
+- The value of this attribute is the prefix to the `data` and `alive` cache key names.
+The parameters passed to `with_reactive_cache` form the rest of the cache key names.
+- By default, this key uses the model's name and the ID of the record.
+
+ ```ruby
+ self.reactive_cache_key = -> (record) { [model_name.singular, record.id] }
+ ```
+
+- The `data` and `alive` cache keys in this case will be `"ExampleModel:1:arg1:arg2"`
+and `"ExampleModel:1:arg1:arg2:alive"` respectively, where `ExampleModel` is the
+name of the model, `1` is the ID of the record, `arg1` and `arg2` are parameters
+passed to `with_reactive_cache`.
+- If you're including this concern in a service instead, you will need to override
+the default by adding the following to your service:
+
+ ```ruby
+ self.reactive_cache_key = ->(service) { [service.class.model_name.singular, service.project_id] }
+ ```
+
+ If your `reactive_cache_key` is exactly like the above, you can use the existing
+ `ReactiveService` concern instead.
+
+##### `self.reactive_cache_lease_timeout`
+
+- `ReactiveCaching` uses `Gitlab::ExclusiveLease` to ensure that the cache calculation
+is never run concurrently by multiple workers.
+- This attribute is the timeout for the `Gitlab::ExclusiveLease`.
+It defaults to 2 minutes, but can be overridden if a different timeout is required.
+
+```ruby
+self.reactive_cache_lease_timeout = 2.minutes
+```
+
+##### `self.reactive_cache_refresh_interval`
+
+- This is the interval at which the cache is refreshed.
+- It defaults to 1 minute.
+
+```ruby
+self.reactive_cache_refresh_interval = 1.minute
+```
+
+##### `self.reactive_cache_lifetime`
+
+- This is the duration after which the cache will be cleared if there are no requests.
+- The default is 10 minutes. If there are no requests for this cache value for 10 minutes,
+the cache will expire.
+- If the cache value is requested before it expires, the timeout of the cache will
+be reset to `reactive_cache_lifetime`.
+
+```ruby
+self.reactive_cache_lifetime = 10.minutes
+```
+
+##### `self.reactive_cache_worker_finder`
+
+- This is the method used by the background worker to find or generate the object on
+which `calculate_reactive_cache` can be called.
+- By default it uses the model primary key to find the object:
+
+ ```ruby
+ self.reactive_cache_worker_finder = ->(id, *_args) do
+ find_by(primary_key => id)
end
+ ```
- def result
- with_reactive_cache("bar1", "bar2") do |data|
+- The default behaviour can be overridden by defining a custom `reactive_cache_worker_finder`.
+
+ ```ruby
+ class Foo < ApplicationRecord
+ include ReactiveCaching
+
+ self.reactive_cache_worker_finder = ->(_id, *args) { from_cache(*args) }
+
+ def self.from_cache(var1, var2)
+ # This method will be called by the background worker with "bar1" and
+ # "bar2" as arguments.
+ new(var1, var2)
+ end
+
+ def initialize(var1, var2)
# ...
end
- end
-end
-```
-Each time the background job completes, it stores the return value of
-`#calculate_reactive_cache`. It is also re-enqueued to run again after
-`reactive_cache_refresh_interval`, therefore, it will keep the stored value up to date.
-Calculations are never run concurrently.
+ def calculate_reactive_cache(var1, var2)
+ # Expensive operation here. The return value of this method is cached
+ end
-Calling `#result` while a value is cached will call the block given to
-`#with_reactive_cache`, yielding the cached value. It will also extend the
-lifetime by the `reactive_cache_lifetime` value.
+ def result
+ with_reactive_cache("bar1", "bar2") do |data|
+ # ...
+ end
+ end
+ end
+ ```
-Once the lifetime has expired, no more background jobs will be enqueued and
-calling `#result` will again return `nil` - starting the process all over
-again.
+ - In this example, the primary key ID will be passed to `reactive_cache_worker_finder`
+ along with the parameters passed to `with_reactive_cache`.
+ - The custom `reactive_cache_worker_finder` calls `.from_cache` with the parameters
+ passed to `with_reactive_cache`.