Diffstat (limited to 'doc/development/testing_guide')

 doc/development/testing_guide/best_practices.md                                      |  75
 doc/development/testing_guide/contract/index.md                                      |   4
 doc/development/testing_guide/end_to_end/beginners_guide.md                          |  14
 doc/development/testing_guide/end_to_end/best_practices.md                           |   6
 doc/development/testing_guide/end_to_end/execution_context_selection.md              |  12
 doc/development/testing_guide/end_to_end/resources.md                                |   2
 doc/development/testing_guide/end_to_end/rspec_metadata_tests.md                     |   4
 doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md |  95
 doc/development/testing_guide/flaky_tests.md                                         |  54
 doc/development/testing_guide/frontend_testing.md                                    | 311
 doc/development/testing_guide/review_apps.md                                         |   8
 doc/development/testing_guide/testing_levels.md                                      |  12
 doc/development/testing_guide/testing_migrations_guide.md                            |   2

 13 files changed, 474 insertions(+), 125 deletions(-)
diff --git a/doc/development/testing_guide/best_practices.md b/doc/development/testing_guide/best_practices.md
index 54d3f368bbd..e257bddb900 100644
--- a/doc/development/testing_guide/best_practices.md
+++ b/doc/development/testing_guide/best_practices.md
@@ -313,7 +313,28 @@ NOTE:
`stub_method` does not support method existence and method arity checks.
WARNING:
-`stub_method` is supposed to be used in factories only. It's strongly discouraged to be used elsewhere. Please consider using [RSpec's mocks](https://relishapp.com/rspec/rspec-mocks/v/3-10/docs/basics) if available.
+`stub_method` is supposed to be used in factories only. Using it elsewhere is strongly discouraged. Please consider using [RSpec mocks](https://rspec.info/features/3-12/rspec-mocks/) if available.
+
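Outside of factories, plain RSpec stubs usually cover the same need. A minimal sketch (the factory and stubbed method are illustrative):

```ruby
it 'stubs a method with RSpec mocks instead of stub_method' do
  project = build_stubbed(:project)

  # Stub a single method on the object under test.
  allow(project).to receive(:default_branch).and_return('main')

  expect(project.default_branch).to eq('main')
end
```
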
+#### Stubbing member access level
+
+To stub [member access level](../../user/permissions.md#roles) for factory stubs like `Project` or `Group` use
+[`stub_member_access_level`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/support/stub_member_access_level.rb):
+
+```ruby
+let(:project) { build_stubbed(:project) }
+let(:maintainer) { build_stubbed(:user) }
+let(:policy) { ProjectPolicy.new(maintainer, project) }
+
+it 'allows admin_project ability' do
+ stub_member_access_level(project, maintainer: maintainer)
+
+ expect(policy).to be_allowed(:admin_project)
+end
+```
+
+NOTE:
+Refrain from using this stub helper if the test code relies on persisting
+`project_authorizations` or `Member` records. Use `Project#add_member` or `Group#add_member` instead.
#### Identify slow tests
@@ -434,7 +455,7 @@ results are available, and not just the first failure.
- When using `evaluate_script("$('.js-foo').testSomething()")` (or `execute_script`) which acts on a given element,
use a Capybara matcher beforehand (such as `find('.js-foo')`) to ensure the element actually exists.
- Use `focus: true` to isolate parts of the specs you want to run.
-- Use [`:aggregate_failures`](https://relishapp.com/rspec/rspec-core/docs/expectation-framework-integration/aggregating-failures) when there is more than one expectation in a test.
+- Use [`:aggregate_failures`](https://rspec.info/features/3-12/rspec-core/expectation-framework-integration/aggregating-failures/) when there is more than one expectation in a test.
- For [empty test description blocks](https://github.com/rubocop-hq/rspec-style-guide#it-and-specify), use `specify` rather than `it do` if the test is self-explanatory.
- Use `non_existing_record_id`/`non_existing_record_iid`/`non_existing_record_access_level`
when you need an ID/IID/access level that doesn't actually exist. Using 123, 1234,
@@ -451,6 +472,10 @@ You can use `if: Gitlab.ee?` or `unless: Gitlab.ee?` on context/spec blocks to e
Example: [SchemaValidator reads a different path depending on the license](https://gitlab.com/gitlab-org/gitlab/-/blob/7cdcf9819cfa02c701d6fa9f18c1e7a8972884ed/spec/lib/gitlab/ci/parsers/security/validators/schema_validator_spec.rb#L571)
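
For instance, a spec block gated on the license could look like this (a sketch; the described behavior is illustrative):

```ruby
describe 'licensed behavior', if: Gitlab.ee? do
  it 'runs only when the EE code is loaded' do
    # EE-specific expectations go here.
  end
end

describe 'unlicensed behavior', unless: Gitlab.ee? do
  it 'runs only on FOSS installations' do
    # FOSS-specific expectations go here.
  end
end
```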
+### Tests depending on SaaS
+
+You can use the `:saas` RSpec metadata tag helper on context/spec blocks to test code that only runs on GitLab.com. This helper sets `Gitlab.config.gitlab['url']` to `Gitlab::Saas.com_url`.
+
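For example (a sketch; the example name and expectation are illustrative):

```ruby
RSpec.describe 'GitLab.com-only behavior', :saas do
  it 'runs against the SaaS URL' do
    expect(Gitlab.config.gitlab['url']).to eq(Gitlab::Saas.com_url)
  end
end
```
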
### Coverage
[`simplecov`](https://github.com/colszowka/simplecov) is used to generate code test coverage reports.
@@ -507,6 +532,8 @@ If needed, you can scope interactions within a specific area of the page by usin
As you will likely be scoping to an element such as a `div`, which typically does not have a label,
you may use a `data-testid` selector in this case.
+You can use the `be_axe_clean` matcher to run [axe automated accessibility testing](../fe_guide/accessibility.md#automated-accessibility-testing-with-axe) in feature tests.
+
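A minimal sketch of such a feature spec (the visited path is illustrative, and the usual feature-spec helpers such as `wait_for_requests` are assumed to be available):

```ruby
it 'passes axe automated accessibility testing', :js do
  visit project_issues_path(project)

  wait_for_requests

  expect(page).to be_axe_clean
end
```
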
##### Externalized contents
Test expectations against externalized contents should call the same
@@ -865,8 +892,6 @@ it 'is overdue' do
travel_to(3.days.from_now) do
expect(issue).to be_overdue
end
-
- travel_back # Returns the current time back to its original state
end
```
@@ -1015,7 +1040,7 @@ a single test triggers the rate limit, the `:disable_rate_limit` can be used ins
#### Stubbing File methods
In the situations where you need to
-[stub](https://relishapp.com/rspec/rspec-mocks/v/3-9/docs/basics/allowing-messages)
+[stub](https://rspec.info/features/3-12/rspec-mocks/basics/allowing-messages/)
methods such as `File.read`, make sure to:
1. Stub `File.read` for only the file path you are interested in.
@@ -1191,11 +1216,17 @@ When you want to ensure that no event got called, you can use `expect_no_snowplo
it 'does not track any snowplow events' do
get :show
- expect_no_snowplow_event
+ expect_no_snowplow_event(category: described_class.name, action: 'some_action')
end
end
```
+Even though `category` and `action` can be omitted, you should at least
+specify a `category` to avoid flaky tests. For example,
+`Users::ActivityService` may track a Snowplow event after an API
+request, and `expect_no_snowplow_event` will fail if that happens to run
+when no arguments are specified.
+
#### Test Snowplow context against the schema
The [Snowplow schema matcher](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/60480)
@@ -1279,9 +1310,29 @@ they apply to multiple type of specs.
#### `be_like_time`
Time returned from a database can differ in precision from time objects
-in Ruby, so we need flexible tolerances when comparing in specs. We can
-use `be_like_time` to compare that times are within one second of each
-other.
+in Ruby, so we need flexible tolerances when comparing in specs.
+
+The PostgreSQL time and timestamp types
+have [the resolution of 1 microsecond](https://www.postgresql.org/docs/current/datatype-datetime.html).
+However, the precision of Ruby `Time` can vary [depending on the OS](https://blog.paulswartz.net/post/142749676062/ruby-time-precision-os-x-vs-linux).
+
+Consider the following snippet:
+
+```ruby
+project = create(:project)
+
+expect(project.created_at).to eq(Project.find(project.id).created_at)
+```
+
+On Linux, `Time` can have nanosecond precision (up to nine fractional digits), so
+`project.created_at` holds a value like `2023-04-28 05:53:30.808033064`.
+However, the value stored to and loaded from the database (like `2023-04-28 05:53:30.808033`)
+is truncated to microseconds, so the comparison fails.
+On macOS, the precision of `Time` matches that of the PostgreSQL timestamp type,
+and the comparison can succeed.
+
+To avoid the issue, we can use `be_like_time` or `be_within` to compare
+that times are within one second of each other.
Example:
@@ -1289,6 +1340,12 @@ Example:
expect(metrics.merged_at).to be_like_time(time)
```
+Example for `be_within`:
+
+```ruby
+expect(violation.reload.merged_at).to be_within(0.00001.seconds).of(merge_request.merged_at)
+```
+
#### `have_gitlab_http_status`
Prefer `have_gitlab_http_status` over `have_http_status` and
diff --git a/doc/development/testing_guide/contract/index.md b/doc/development/testing_guide/contract/index.md
index 31d68bb9f4f..cf23792e239 100644
--- a/doc/development/testing_guide/contract/index.md
+++ b/doc/development/testing_guide/contract/index.md
@@ -42,11 +42,11 @@ rake contracts:merge_requests:test:merge_requests[contract_merge_requests]
#### Verify the contracts in Pact Broker
-By default, the Rake tasks will verify the locally stored contracts. In order to verify the contracts published in the Pact Broker, we need to set the `PACT_BROKER` environment variable to `true`. It is important to point out here that the file path and file name of the provider test is what is used to find the contract in the Pact Broker which is why it is important to make sure the [provider test naming conventions](#provider-naming) are followed.
+By default, the Rake tasks verify the locally stored contracts. To verify the contracts published in the Pact Broker, set the `PACT_BROKER` environment variable to `true` and `QA_PACT_BROKER_HOST` to the URL of the Pact Broker. Note that the file path and file name of the provider test are used to find the contract in the Pact Broker, so make sure the [provider test naming conventions](#provider-naming) are followed.
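
For example, verifying against a broker could look like this (a sketch; the broker URL is a placeholder):

```shell
PACT_BROKER=true \
QA_PACT_BROKER_HOST="http://localhost:9292" \
rake contracts:merge_requests:test:merge_requests[contract_merge_requests]
```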
## Publish contracts to Pact Broker
-The contracts generated by the consumer test can be published to a hosted Pact Broker by going to `spec/contracts` and running the `publish-contracts.sh` script.
+The contracts generated by the consumer test can be published to a hosted Pact Broker by setting the `QA_PACT_BROKER_HOST` environment variable and running the [`publish-contracts.sh`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/spec/contracts/publish-contracts.sh) script.
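
For example (the broker URL is a placeholder):

```shell
cd spec/contracts
QA_PACT_BROKER_HOST="http://localhost:9292" ./publish-contracts.sh
```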
## Test suite folder structure and naming conventions
diff --git a/doc/development/testing_guide/end_to_end/beginners_guide.md b/doc/development/testing_guide/end_to_end/beginners_guide.md
index b81379d89b2..78e6af4a347 100644
--- a/doc/development/testing_guide/end_to_end/beginners_guide.md
+++ b/doc/development/testing_guide/end_to_end/beginners_guide.md
@@ -34,9 +34,8 @@ For more information, see [End-to-end testing Best Practices](best_practices.md)
## Determine if end-to-end tests are needed
-Check the code coverage of a specific feature before writing end-to-end tests,
-for both [GitLab Community Edition](https://gitlab-org.gitlab.io/gitlab-foss/coverage-ruby/#_AllFiles)
-and [GitLab Enterprise Edition](https://gitlab-org.gitlab.io/gitlab/coverage-ruby/#_AllFiles) projects.
+Check the code coverage of a specific feature before writing end-to-end tests
+for the [GitLab](https://gitlab-org.gitlab.io/gitlab/coverage-ruby/#_AllFiles) project.
Does sufficient test coverage exist at the unit, feature, or integration levels?
If you answered *yes*, then you *don't* need an end-to-end test.
@@ -53,9 +52,8 @@ For information about the distribution of tests per level in GitLab, see
the feature and the lower-level tests.
WARNING:
-Check both [GitLab Community Edition](https://gitlab-org.gitlab.io/gitlab-foss/coverage-ruby/#_AllFiles) and
-[GitLab Enterprise Edition](https://gitlab-org.gitlab.io/gitlab/coverage-ruby/#_AllFiles) coverage projects
-for previously-written tests for this feature. For analyzing the code coverage,
+Check the [GitLab](https://gitlab-org.gitlab.io/gitlab/coverage-ruby/#_AllFiles) coverage project
+for previously written tests for this feature. To analyze code coverage,
you must understand which application files implement specific features.
In this tutorial we're writing a login end-to-end test, even though it has been
@@ -87,7 +85,7 @@ file `basic_login_spec.rb`.
See the [`RSpec.describe` outer block](#the-outer-rspecdescribe-block)
WARNING:
-The outer `context` [was deprecated](https://gitlab.com/gitlab-org/quality/team-tasks/-/issues/550) in `13.2`
+The outer `context` [was deprecated](https://gitlab.com/gitlab-org/quality/quality-engineering/team-tasks/-/issues/550) in `13.2`
in adherence to RSpec 4.0 specifications. Use `RSpec.describe` instead.
### The outer `RSpec.describe` block
@@ -225,7 +223,7 @@ end
1. Check if the user avatar *does not* appear in the top navigation.
Behind the scenes, `be_signed_in` is a
-[predicate matcher](https://relishapp.com/rspec/rspec-expectations/v/3-8/docs/built-in-matchers/predicate-matchers)
+[predicate matcher](https://rspec.info/features/3-12/rspec-expectations/built-in-matchers/predicates/)
that [implements checking the user avatar](https://gitlab.com/gitlab-org/gitlab/-/blob/master/qa/qa/page/main/menu.rb#L92).
## De-duplicate your code
diff --git a/doc/development/testing_guide/end_to_end/best_practices.md b/doc/development/testing_guide/end_to_end/best_practices.md
index 777a9f47339..c1f06bb9a66 100644
--- a/doc/development/testing_guide/end_to_end/best_practices.md
+++ b/doc/development/testing_guide/end_to_end/best_practices.md
@@ -16,9 +16,9 @@ In case custom inflection logic is needed, custom inflectors are added in the [q
## Link a test to its test case
-Every test should have a corresponding test case in the [GitLab project Test Cases](https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases) as well as a results issue in the [Quality Test Cases project](https://gitlab.com/gitlab-org/quality/testcases/-/issues).
+Every test should have a corresponding test case in the [GitLab project test cases](https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases) as well as a results issue in the [Quality Test Cases project](https://gitlab.com/gitlab-org/quality/testcases/-/issues).
If a test case issue does not yet exist, any GitLab team member can create a new test case in
-the [Test Cases](https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases) GitLab project
+the **[CI/CD > Test cases](https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases)** page of the GitLab project
with a placeholder title. After the test case URL is linked to a test in the code, when the test is
run in a pipeline that has reporting enabled, the `report-results` script automatically updates the
test case and the results issue.
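
In the test code, the link is added through the `testcase` metadata, roughly like this (the test case URL is a placeholder):

```ruby
it 'creates a merge request',
  testcase: 'https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases/12345' do
  # ...
end
```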
@@ -413,7 +413,7 @@ except(page).to have_no_text('hidden')
```
Unfortunately, that's not automatically the case for the predicate methods that we add to our
-[page objects](page_objects.md). We need to [create our own negatable matchers](https://relishapp.com/rspec/rspec-expectations/v/3-9/docs/custom-matchers/define-a-custom-matcher#matcher-with-separate-logic-for-expect().to-and-expect().not-to).
+[page objects](page_objects.md). We need to [create our own negatable matchers](https://rspec.info/features/3-12/rspec-expectations/custom-matchers/define-matcher/).
The initial example uses the `have_job` matcher which is derived from the
[`has_job?` predicate method of the `Page::Project::Pipeline::Show` page object](https://gitlab.com/gitlab-org/gitlab/-/blob/87864b3047c23b4308f59c27a3757045944af447/qa/qa/page/project/pipeline/show.rb#L53).
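
A negatable matcher can be defined along these lines (a minimal sketch using the RSpec matcher DSL; the predicate names mirror the `has_job?`/`has_no_job?` pair mentioned above):

```ruby
RSpec::Matchers.define :have_job do |job_name|
  match do |page_object|
    page_object.has_job?(job_name)
  end

  match_when_negated do |page_object|
    page_object.has_no_job?(job_name)
  end
end
```

With `match_when_negated` defined, a negative expectation calls the waiting `has_no_job?` predicate and returns as soon as the job is absent, instead of letting the positive check run until it times out.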
diff --git a/doc/development/testing_guide/end_to_end/execution_context_selection.md b/doc/development/testing_guide/end_to_end/execution_context_selection.md
index 23b8ab7287b..4d83884f4d0 100644
--- a/doc/development/testing_guide/end_to_end/execution_context_selection.md
+++ b/doc/development/testing_guide/end_to_end/execution_context_selection.md
@@ -34,7 +34,8 @@ Matches use:
- Regex for environments.
- String matching for pipelines.
-- Regex or string matching for jobs.
+- Regex or string matching for jobs
+- Lambda or truthy/falsey value for generic condition
| Test execution context | Key | Matches |
| ---------------- | --- | --------------- |
@@ -47,6 +48,7 @@ Matches use:
| The `nightly` and `canary` pipelines | `only: { pipeline: [:nightly, :canary] }` | ["nightly"](https://gitlab.com/gitlab-org/quality/nightly) and ["canary"](https://gitlab.com/gitlab-org/quality/canary) |
| The `ee:instance` job | `only: { job: 'ee:instance' }` | The `ee:instance` job in any pipeline |
| Any `quarantine` job | `only: { job: '.*quarantine' }` | Any job ending in `quarantine` in any pipeline |
+| Any run where condition evaluates to a truthy value | `only: { condition: -> { ENV['TEST_ENV'] == 'true' } }` | Any run where `TEST_ENV` is set to true |
```ruby
RSpec.describe 'Area' do
@@ -62,6 +64,8 @@ RSpec.describe 'Area' do
it 'runs only in nightly pipeline', only: { pipeline: :nightly } do; end
it 'runs in nightly and canary pipelines', only: { pipeline: [:nightly, :canary] } do; end
+
+ it 'runs in specific environment matching condition', only: { condition: -> { ENV['TEST_ENV'] == 'true' } } do; end
end
```
@@ -73,7 +77,8 @@ Matches use:
- Regex for environments.
- String matching for pipelines.
-- Regex or string matching for jobs.
+- Regex or string matching for jobs
+- Lambda or truthy/falsey value for generic condition
| Test execution context | Key | Matches |
| ---------------- | --- | --------------- |
@@ -86,6 +91,7 @@ Matches use:
| The `nightly` and `canary` pipelines | `except: { pipeline: [:nightly, :canary] }` | ["nightly"](https://gitlab.com/gitlab-org/quality/nightly) and ["canary"](https://gitlab.com/gitlab-org/quality/canary) |
| The `ee:instance` job | `except: { job: 'ee:instance' }` | The `ee:instance` job in any pipeline |
| Any `quarantine` job | `except: { job: '.*quarantine' }` | Any job ending in `quarantine` in any pipeline |
+| Any run except where condition evaluates to a truthy value | `except: { condition: -> { ENV['TEST_ENV'] == 'true' } }` | Any run where `TEST_ENV` is not set to true |
```ruby
RSpec.describe 'Area' do
@@ -96,6 +102,8 @@ RSpec.describe 'Area' do
it 'runs in any execution context except the nightly pipeline', except: { pipeline: :nightly } do; end
it 'runs in any execution context except the ee:instance job', except: { job: 'ee:instance' } do; end
+
+ it 'runs in specific environment not matching condition', except: { condition: -> { ENV['TEST_ENV'] == 'true' } } do; end
end
```
diff --git a/doc/development/testing_guide/end_to_end/resources.md b/doc/development/testing_guide/end_to_end/resources.md
index f3e072ffa21..f9e136a86df 100644
--- a/doc/development/testing_guide/end_to_end/resources.md
+++ b/doc/development/testing_guide/end_to_end/resources.md
@@ -249,7 +249,7 @@ end
```
The `populate` method iterates through its arguments and calls each
-attribute respectively. Here `populate(:brand)` has the same effect as
+attribute. Here `populate(:brand)` has the same effect as
just `brand`. Using the populate method makes the intention clearer.
With this, it ensures we construct the data right after we create the
diff --git a/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md b/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
index c1389b3ac0e..a42d5e3df1d 100644
--- a/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
+++ b/doc/development/testing_guide/end_to_end/rspec_metadata_tests.md
@@ -6,7 +6,7 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# RSpec metadata for end-to-end tests
-This is a partial list of the [RSpec metadata](https://relishapp.com/rspec/rspec-core/docs/metadata/user-defined-metadata)
+This is a partial list of the [RSpec metadata](https://rspec.info/features/3-12/rspec-core/metadata/user-defined/)
(a.k.a. tags) that are used in our end-to-end tests.
<!-- Please keep the tags in alphabetical order -->
@@ -52,5 +52,5 @@ This is a partial list of the [RSpec metadata](https://relishapp.com/rspec/rspec
| `:skip_signup_disabled` | The test uses UI to sign up a new user and is skipped in any environment that does not allow new user registration via the UI. |
| `:smoke` | The test belongs to the test suite which verifies basic functionality of a GitLab instance. |
| `:smtp` | The test requires a GitLab instance to be configured to use an SMTP server. Tests SMTP notification email delivery from GitLab by using MailHog. |
-| `:testcase` | The link to the test case issue in the [GitLab Project Test Cases](https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases). |
+| `:testcase` | The link to the test case issue in the [GitLab Project test cases](https://gitlab.com/gitlab-org/gitlab/-/quality/test_cases). |
| `:transient` | The test tests transient bugs. It is excluded by default. |
diff --git a/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md b/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md
index 4a947e59d5f..adf7bccb7fb 100644
--- a/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md
+++ b/doc/development/testing_guide/end_to_end/running_tests_that_require_special_setup.md
@@ -219,7 +219,7 @@ Geo requires an EE license. To visit the Geo sites in your browser, you need a r
# Using a full image address
GITLAB_QA_ACCESS_TOKEN=your-token-here gitlab-qa Test::Integration::Geo registry.gitlab.com/gitlab-org/build/omnibus-gitlab-mirror/gitlab-ee:examplesha123456789 --no-teardown
- ```
+ ```
You can use the `--no-tests` option to build the containers only, and then run the [`EE::Scenario::Test::Geo` scenario](https://gitlab.com/gitlab-org/gitlab/-/blob/f7272b77e80215c39d1ffeaed27794c220dbe03f/qa/qa/ee/scenario/test/geo.rb) from your GDK to complete setup and run tests. However, there might be configuration issues if your GDK and the containers are based on different GitLab versions. With the `--no-teardown` option, GitLab-QA uses the same GitLab version for the GitLab containers and the GitLab QA container used to configure the Geo setup.
@@ -369,15 +369,15 @@ To run the LDAP tests on your local with TLS disabled, follow these steps:
1. Run the GitLab container:
- ```shell
- sudo docker run \
- --hostname localhost \
- --net test \
- --publish 443:443 --publish 80:80 --publish 22:22 \
- --name gitlab \
- --env GITLAB_OMNIBUS_CONFIG="gitlab_rails['ldap_enabled'] = true; gitlab_rails['ldap_servers'] = {\"main\"=>{\"label\"=>\"LDAP\", \"host\"=>\"ldap-server.test\", \"port\"=>389, \"uid\"=>\"uid\", \"bind_dn\"=>\"cn=admin,dc=example,dc=org\", \"password\"=>\"admin\", \"encryption\"=>\"plain\", \"verify_certificates\"=>false, \"base\"=>\"dc=example,dc=org\", \"user_filter\"=>\"\", \"group_base\"=>\"ou=Global Groups,dc=example,dc=org\", \"admin_group\"=>\"AdminGroup\", \"external_groups\"=>\"\", \"sync_ssh_keys\"=>false}}; gitlab_rails['ldap_sync_worker_cron'] = '* * * * *'; gitlab_rails['ldap_group_sync_worker_cron'] = '* * * * *'; " \
- gitlab/gitlab-ee:latest
- ```
+ ```shell
+ sudo docker run \
+ --hostname localhost \
+ --net test \
+ --publish 443:443 --publish 80:80 --publish 22:22 \
+ --name gitlab \
+ --env GITLAB_OMNIBUS_CONFIG="gitlab_rails['ldap_enabled'] = true; gitlab_rails['ldap_servers'] = {\"main\"=>{\"label\"=>\"LDAP\", \"host\"=>\"ldap-server.test\", \"port\"=>389, \"uid\"=>\"uid\", \"bind_dn\"=>\"cn=admin,dc=example,dc=org\", \"password\"=>\"admin\", \"encryption\"=>\"plain\", \"verify_certificates\"=>false, \"base\"=>\"dc=example,dc=org\", \"user_filter\"=>\"\", \"group_base\"=>\"ou=Global Groups,dc=example,dc=org\", \"admin_group\"=>\"AdminGroup\", \"external_groups\"=>\"\", \"sync_ssh_keys\"=>false}}; gitlab_rails['ldap_sync_worker_cron'] = '* * * * *'; gitlab_rails['ldap_group_sync_worker_cron'] = '* * * * *'; " \
+ gitlab/gitlab-ee:latest
+ ```
1. Run an LDAP test from [`gitlab/qa`](https://gitlab.com/gitlab-org/gitlab/-/tree/d5447ebb5f99d4c72780681ddf4dc25b0738acba/qa) directory:
@@ -428,6 +428,8 @@ For instructions on how to run these tests using the `gitlab-qa` gem, please ref
Tests that are tagged with `:mobile` can be run against specified mobile devices using cloud emulator/simulator services.
+These tests run in the [nightly pipeline](https://gitlab.com/gitlab-org/quality/nightly/-/pipelines) in the `ce:remote_mobile_safari` job.
+
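Tagging uses standard RSpec metadata, for example (a sketch; the test subject is illustrative):

```ruby
RSpec.describe 'Create', :mobile do
  describe 'Merge request creation' do
    it 'creates a merge request from the mobile layout' do
      # ...
    end
  end
end
```
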
### How to run mobile tests with Sauce Labs
Running directly against an environment like staging is not recommended because Sauce Labs test logs expose credentials. Therefore, it is best practice and the default to use a tunnel.
@@ -527,3 +529,76 @@ end
```
When running mobile tests for phone layouts, both `remote_mobile_device_name` and `mobile_layout?` are `true`, but when using a tablet layout, only `remote_mobile_device_name` is true. This is because phone layouts have more menus closed by default: both tablets and phones have the left nav closed, but unlike phone layouts, tablets keep the regular top navigation bar rather than the mobile one. So when the navigation being edited also needs to work in tablet layouts, use `remote_mobile_device_name` instead of `mobile_layout?` when prepending, so the change applies to tablet layouts as well.
+
+## Targeting canary vs non-canary components in live environments
+
+Use the `QA_COOKIES` ENV variable to have the entire test target a `canary` (`staging-canary` or `canary`) or `non-canary` (`staging` or `production`) environment.
+
+Locally, that would mean prepending the ENV variable to your call to bin/qa. To target the `canary` version of that environment:
+
+```shell
+QA_COOKIES="gitlab_canary=true" WEBDRIVER_HEADLESS=false bin/qa Test::Instance::Staging <YOUR SPECIFIC TAGS OR TESTS>
+```
+
+Alternatively, you may set the cookie to `false` to ensure the `non-canary` version is targeted.
+
+You can also export the cookie for your current session to avoid prepending it each time:
+
+```shell
+export QA_COOKIES="gitlab_canary=true"
+```
+
+### Updating the cookie within a running spec
+
+Within a specific test, you can target either the `canary` or `non-canary` nodes within live environments, such as `staging` and `production`.
+
+For example, to switch back and forth between the two environments, you could utilize the `target_canary` method:
+
+```ruby
+it 'tests toggling between canary and non-canary nodes' do
+ Runtime::Browser.visit(:gitlab, Page::Main::Login)
+
+ # After starting the browser session, use the target_canary method ...
+
+ Runtime::Browser::Session.target_canary(true)
+ Flow::Login.sign_in
+
+ verify_session_on_canary(true)
+
+ Runtime::Browser::Session.target_canary(false)
+
+ # Refresh the page ...
+
+ verify_session_on_canary(false)
+
+ # Log out and clean up ...
+end
+
+def verify_session_on_canary(enable_canary)
+ Page::Main::Menu.perform do |menu|
+ aggregate_failures 'testing session log in' do
+ expect(menu.canary?).to be(enable_canary)
+ end
+ end
+end
+```
+
+You can verify whether GitLab is appropriately redirecting your session to the `canary` or `non-canary` nodes with the `menu.canary?` method.
+
+The above spec is verbose, written specifically this way to ensure the idea behind the implementation is clear. We recommend following the practices detailed within our [Beginner's guide to writing end-to-end tests](beginners_guide.md).
+
+## OpenID Connect (OIDC) tests
+
+To run the [`login_via_oidc_with_gitlab_as_idp_spec`](https://gitlab.com/gitlab-org/gitlab/-/blob/188e2c876a17a097448d7f3ed35bdf264fed0d3b/qa/qa/specs/features/browser_ui/1_manage/login/login_via_oidc_with_gitlab_as_idp_spec.rb) on your local machine:
+
+1. Make sure your GDK is set to run on a non-localhost address such as `gdk.test:3000`.
+1. Configure a [loopback interface to 172.16.123.1](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/6fe7b46403229f12ab6d903f99b024e0b82cb94a/doc/howto/local_network.md#create-loopback-interface).
+1. Make sure Docker Desktop or Rancher Desktop is running.
+1. Add an entry to your `/etc/hosts` file for `gitlab-oidc-consumer.bridge` pointing to `127.0.0.1`.
+1. From the `qa` directory, run the following command. To set the GitLab image you want to use, update the `RELEASE` variable. For example, to use the latest EE image, set `RELEASE` to `gitlab/gitlab-ee:latest`:
+
+ ```shell
+ bundle install
+
+ RELEASE_REGISTRY_URL='registry.gitlab.com' RELEASE_REGISTRY_USERNAME='<your_gitlab_username>' RELEASE_REGISTRY_PASSWORD='<your_gitlab_personal_access_token>' RELEASE='registry.gitlab.com/gitlab-org/build/omnibus-gitlab-mirror/gitlab-ee:1d5a644145dfe901ea7648d825f8f9f3006d0acf' GITLAB_QA_ADMIN_ACCESS_TOKEN="<your_gdk_admin_personal_access_token>" QA_DEBUG=true CHROME_HEADLESS=false bundle exec bin/qa Test::Instance::All http://gdk.test:3000 qa/specs/features/browser_ui/1_manage/login/login_via_oidc_with_gitlab_as_idp_spec.rb
+ ```
diff --git a/doc/development/testing_guide/flaky_tests.md b/doc/development/testing_guide/flaky_tests.md
index f476815bf32..40be7a7d094 100644
--- a/doc/development/testing_guide/flaky_tests.md
+++ b/doc/development/testing_guide/flaky_tests.md
@@ -81,6 +81,11 @@ difficult to achieve locally.
suite, it might pass as not enough records were created before it, but as soon as it would run
later in the suite, there could be a record that actually has the ID `42`, hence the test would
start to fail.
+- [Example 3](https://gitlab.com/gitlab-org/gitlab/-/issues/402915): State leakage can result from
+  data records created with `let_it_be` being shared between test examples, while some test modifies the model,
+  either deliberately or unintentionally, causing out-of-sync data in later examples. This can result in
+  `PG::QueryCanceled: ERROR` in subsequent test examples or retries (see the sketch below).
+  For more information about state leakage and resolution options,
+  see [GitLab testing best practices](best_practices.md#lets-talk-about-let).
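
A minimal sketch of this kind of leakage (the model and attributes are illustrative):

```ruby
RSpec.describe Issue do
  let_it_be(:issue) { create(:issue, title: 'original') }

  it 'mutates the shared record' do
    issue.update!(title: 'changed') # the in-memory object leaks into the next example
  end

  it 'still expects the original title' do
    expect(issue.title).to eq('original') # flaky: depends on execution order
  end
end
```

Depending on the setup, `let_it_be_with_reload` (or `let_it_be(:issue, reload: true)`) re-reads the record before each example and can avoid carrying the mutated instance over.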
### Random input
@@ -116,6 +121,8 @@ Adding a delay in API or controller could help reproducing the issue.
time before throwing an `element not found` error.
- [Example 2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/101728/diffs): A CSS selector
only appears after a GraphQL requests has finished, and the UI has updated.
+- [Example 3](https://gitlab.com/gitlab-org/gitlab/-/issues/408215): A false-positive test, where Capybara immediately returns true after the
+  page visit while the page is not fully loaded, or where the element is not detectable by WebDriver (such as being rendered outside the viewport or behind other elements).
### Datetime-sensitive
@@ -130,6 +137,7 @@ Adding a delay in API or controller could help reproducing the issue.
**Examples:**
- [Example 1](https://gitlab.com/gitlab-org/gitlab/-/issues/118612): A test that breaks after some time passed.
+- [Example 2](https://gitlab.com/gitlab-org/gitlab/-/issues/403332): A test that breaks in the last day of the month.
### Unstable infrastructure
@@ -150,15 +158,28 @@ usually a good idea.
## Quarantined tests
-When a test frequently fails in `master`,
-create [a ~"failure::flaky-test" issue](https://about.gitlab.com/handbook/engineering/workflow/#broken-master).
+When we have a flaky test in `master`:
-If the test cannot be fixed in a timely fashion, there is an impact on the
-productivity of all the developers, so it should be quarantined. There are two ways to quarantine tests, depending on the test framework being used: RSpec and Jest.
+1. Create [a ~"failure::flaky-test" issue](https://about.gitlab.com/handbook/engineering/workflow/#broken-master) with the relevant group label.
+1. Quarantine the test after the first failure.
+ If the test cannot be fixed in a timely fashion, there is an impact on the
+ productivity of all the developers, so it should be quarantined.
### RSpec
-For RSpec tests, you can use the `:quarantine` metadata with the issue URL.
+#### Fast quarantine
+
+To quickly quarantine a test without having to open a merge request and wait for pipelines,
+you can follow [the fast quarantining process](https://gitlab.com/gitlab-org/quality/engineering-productivity/fast-quarantine/-/tree/main/#fast-quarantine-a-test).
+
+#### Long-term quarantine
+
+Once a test is fast-quarantined, you can proceed with the long-term quarantining process. This can be done by opening a merge request.
+
+First, ensure the test file has a [`feature_category` metadata](../feature_categorization/index.md#rspec-examples), to ensure correct attribution of the test file.
+
+Then, you can use the `quarantine: '<issue url>'` metadata with the URL of the
+~"failure::flaky-test" issue you created previously.
```ruby
it 'succeeds', quarantine: 'https://gitlab.com/gitlab-org/gitlab/-/issues/12345' do
@@ -172,6 +193,8 @@ This means it is skipped unless run with `--tag quarantine`:
bin/rspec --tag quarantine
```
+After the long-term quarantining MR has reached production, you should revert the fast-quarantine MR you created earlier.
+
### Jest
For Jest specs, you can use the `.skip` method along with the `eslint-disable-next-line` comment to disable the `jest/no-disabled-tests` ESLint rule and include the issue URL. Here's an example:
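
A sketch of what that looks like (the issue URL is a placeholder):

```javascript
// https://gitlab.com/gitlab-org/gitlab/-/issues/56789
// eslint-disable-next-line jest/no-disabled-tests
it.skip('renders the quarantined component', () => {
  // ...
});
```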
@@ -220,11 +243,10 @@ For example, `FLAKY_RSPEC_GENERATE_REPORT=1 bin/rspec ...`.
### Usage of the `rspec/flaky/report-suite.json` report
-The `rspec/flaky/report-suite.json` report is:
-
-- Used for [automatically skipping known flaky tests](../pipelines/index.md#automatic-skipping-of-flaky-tests).
-- [Imported into Snowflake](https://gitlab.com/gitlab-data/analytics/-/blob/master/extract/gitlab_flaky_tests/upload.py)
- once per day, for monitoring with the [internal dashboard](https://app.periscopedata.com/app/gitlab/888968/EP---Flaky-tests).
+The `rspec/flaky/report-suite.json` report is
+[imported into Snowflake](https://gitlab.com/gitlab-data/analytics/-/blob/7085bea51bb2f8f823e073393934ba5f97259459/extract/gitlab_flaky_tests/upload.py#L19)
+once per day, for monitoring with the
+[internal dashboard](https://app.periscopedata.com/app/gitlab/888968/EP---Flaky-tests).
## Problems we had in the past at GitLab
@@ -251,9 +273,9 @@ which would give us the minimal test combination to reproduce the failure:
for the list under `Knapsack node specs:` in the CI job output log.
1. Save the list of specs as a file, and run:
- ```shell
- cat knapsack_specs.txt | xargs scripts/rspec_bisect_flaky
- ```
+ ```shell
+ cat knapsack_specs.txt | xargs scripts/rspec_bisect_flaky
+ ```
If there is an order-dependency issue, the script above will print the minimal
reproduction.
@@ -300,6 +322,12 @@ If a spec hangs, it might be caused by a [bug in Rails](https://github.com/rails
- <https://gitlab.com/gitlab-org/gitlab/-/merge_requests/81112>
- <https://gitlab.com/gitlab-org/gitlab/-/issues/337039>
+## Suggestions
+
+### Split the test file
+
+It can help to split large RSpec files into multiple files to narrow down the context and identify the problematic tests.
+
## Resources
- [Flaky Tests: Are You Sure You Want to Rerun Them?](https://semaphoreci.com/blog/2017/04/20/flaky-tests.html)
diff --git a/doc/development/testing_guide/frontend_testing.md b/doc/development/testing_guide/frontend_testing.md
index 85d807eceb1..d596c21bd5c 100644
--- a/doc/development/testing_guide/frontend_testing.md
+++ b/doc/development/testing_guide/frontend_testing.md
@@ -520,6 +520,25 @@ it('waits for an event', () => {
});
```
+### Manipulate `gon` object
+
+`gon` (or `window.gon`) is a global object used to pass data from the backend. If your test depends
+on its value you can directly modify it:
+
+```javascript
+describe('when logged in', () => {
+ beforeEach(() => {
+ gon.current_user_id = 1;
+ });
+
+ it('shows message', () => {
+ expect(wrapper.text()).toBe('Logged in!');
+ });
+})
+```
+
+`gon` is reset in every test to ensure tests are isolated.
+
### Ensuring that tests are isolated
Tests are normally architected in a pattern which requires a recurring setup of the component under test. This is often achieved by making use of the `beforeEach` hook.
@@ -548,6 +567,53 @@ Example
});
```
+### Testing local-only Apollo queries and mutations
+
+To add a new query or mutation before it is added to the backend, we can use the `@client` directive. For example:
+
+```graphql
+mutation setActiveBoardItemEE($boardItem: LocalBoardItem, $isIssue: Boolean = true) {
+ setActiveBoardItem(boardItem: $boardItem) @client {
+ ...Issue @include(if: $isIssue)
+ ...EpicDetailed @skip(if: $isIssue)
+ }
+}
+```
+
+When writing test cases for such calls, we can use resolvers to make sure they are called with the correct parameters.
+
+For example, when creating the wrapper, we should make sure the resolver is mapped to the query or mutation.
+The mutation we are mocking here is `setActiveBoardItem`:
+
+```javascript
+const mockSetActiveBoardItemResolver = jest.fn();
+const mockApollo = createMockApollo([], {
+ Mutation: {
+ setActiveBoardItem: mockSetActiveBoardItemResolver,
+ },
+});
+```
+
+In the following assertion, we must pass four arguments. The second one must be the collection of input variables of the mocked query or mutation.
+To test that the mutation is called with the correct parameters:
+
+```javascript
+it('calls setActiveBoardItemMutation on close', async () => {
+ wrapper.findComponent(GlDrawer).vm.$emit('close');
+
+ await waitForPromises();
+
+ expect(mockSetActiveBoardItemResolver).toHaveBeenCalledWith(
+ {},
+ {
+ boardItem: null,
+ },
+ expect.anything(),
+ expect.anything(),
+ );
+});
+```
+
### Jest best practices
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/34209) in GitLab 13.2.
@@ -800,10 +866,10 @@ often using fixtures to validate correct integration with the backend code.
### Use fixtures
-To import a JSON fixture, `import` it using the `test_fixtures` alias.
+To import a JSON or HTML fixture, `import` it using the `test_fixtures` alias.
```javascript
-import responseBody from 'test_fixtures/some/fixture.json' // loads spec/frontend/fixtures/some/fixture.json
+import responseBody from 'test_fixtures/some/fixture.json' // loads tmp/tests/frontend/fixtures-ee/some/fixture.json
it('makes a request', () => {
axiosMock.onGet(endpoint).reply(200, responseBody);
@@ -814,23 +880,6 @@ it('makes a request', () => {
});
```
-For other fixtures, Jest uses `spec/frontend/__helpers__/fixtures.js` to import them in tests.
-
-The following are examples of tests that work for Jest:
-
-```javascript
-it('uses some HTML element', () => {
- loadHTMLFixture('some/page.html'); // loads spec/frontend/fixtures/some/page.html and adds it to the DOM
-
- const element = document.getElementById('#my-id');
-
- // ...
-
- // Jest does not clean up the DOM automatically
- resetHTMLFixture();
-});
-```
-
### Generate fixtures
You can find code to generate test fixtures in:
@@ -845,6 +894,26 @@ You can generate fixtures by running:
You can find the generated fixtures in `tmp/tests/frontend/fixtures-ee`.
+### Download fixtures
+
+We generate fixtures in GitLab CI, and store them in the package registry.
+
+The `scripts/frontend/download_fixtures.sh` script is meant to download and extract those fixtures for local use:
+
+```shell
+# Checks if a frontend fixture package exists in the gitlab-org/gitlab
+# package registry by looking at the commits on a local branch.
+#
+# The package is downloaded and extracted if it exists
+$ scripts/frontend/download_fixtures.sh
+
+# Same as above, but only looks at the last 10 commits of the currently checked-out branch
+$ scripts/frontend/download_fixtures.sh --max-commits=10
+
+# Looks at the commits on the local master branch instead of the currently checked-out branch
+$ scripts/frontend/download_fixtures.sh --branch master
+```
+
#### Creating new fixtures
For each fixture, you can find the content of the `response` variable in the output file.
@@ -1161,15 +1230,14 @@ You can download any older version of Firefox from the releases FTP server, <htt
## Snapshots
-By now you've probably heard of [Jest snapshot tests](https://jestjs.io/docs/snapshot-testing) and why they are useful for various reasons.
-To use them within GitLab, there are a few guidelines that should be highlighted:
+[Jest snapshot tests](https://jestjs.io/docs/snapshot-testing) are a useful way to prevent unexpected changes to the HTML output of a given component. They should **only** be used when other testing methods (such as asserting elements with `vue-test-utils`) do not cover the required use case. To use them within GitLab, there are a few guidelines that should be highlighted:
- Treat snapshots as code
- Don't think of a snapshot file as a black box
- Care for the output of the snapshot, otherwise, it's not providing any real value. This will usually involve reading the generated snapshot file as you would read any other piece of code
Think of a snapshot test as a simple way to store a raw `String` representation of what you've put into the item being tested. This can be used to evaluate changes in a component, a store, a complex piece of generated output, etc. You can see more in the list below for some recommended `Do's and Don'ts`.
-While snapshot tests can be a very powerful tool. They are meant to supplement, not to replace unit tests.
+While snapshot tests can be a very powerful tool, they are meant to supplement, not to replace unit tests.
Jest provides a great set of docs on [best practices](https://jestjs.io/docs/snapshot-testing#best-practices) that we should keep in mind when creating snapshots.
@@ -1181,6 +1249,161 @@ Should the outcome of your spec be different from what is in the generated snaps
Find all the details in Jests official documentation [https://jestjs.io/docs/snapshot-testing](https://jestjs.io/docs/snapshot-testing)
+### Pros and Cons
+
+**Pros**
+
+- Provides a good warning against accidental changes of important HTML structures
+- Ease of setup
+
+**Cons**
+
+- Lacks the clarity or guard rails that `vue-test-utils` provides by finding elements and asserting their presence directly
+- Creates unnecessary noise when updating components purposefully
+- High risk of taking a snapshot of bugs, which then turns the tests against us since the test will now fail when fixing the issue
+- No meaningful assertions or expectations within snapshots makes them harder to reason about or replace
+- When used with dependencies like [GitLab UI](https://gitlab.com/gitlab-org/gitlab-ui), it creates fragility in tests when the underlying library changes the HTML of a component we are testing
+
+### When to use
+
+**Use snapshots when**
+
+- Protecting critical HTML structures so it doesn't change by accident
+- Asserting JS object or JSON outputs of complex utility functions
+
+### When not to use
+
+**Don't use snapshots when**
+
+- Tests could be written using `vue-test-utils` instead
+- Asserting the logic of a component
+- Predicting data structure(s) outputs
+- There are UI elements outside of the repository (think of GitLab UI version updates)
+
+### Examples
+
+As you can see, the cons of snapshot tests far outweigh the pros in general. To illustrate this better, this section will show a few examples of when you might be tempted to
+use snapshot testing and why they are not good patterns.
+
+#### Example #1 - Element visibility
+
+When testing element visibility, favor using `vue-test-utils` (VTU) to find a given component and then a basic `.exists()` method call on the VTU wrapper. This provides better readability and more resilient testing. If you look at the examples below, notice how the assertions on the snapshots do not tell you what you are expecting to see. We are relying entirely on the `it` description to give us context and on the assumption that the snapshot has captured the desired behavior.
+
+```vue
+<template>
+ <my-component v-if="isVisible" />
+</template>
+```
+
+Bad:
+
+```javascript
+it('hides the component', () => {
+ createComponent({ props: { isVisible: false }})
+
+ expect(wrapper.element).toMatchSnapshot()
+})
+
+it('shows the component', () => {
+ createComponent({ props: { isVisible: true }})
+
+ expect(wrapper.element).toMatchSnapshot()
+})
+```
+
+Good:
+
+```javascript
+it('hides the component', () => {
+ createComponent({ props: { isVisible: false }})
+
+ expect(findMyComponent().exists()).toBe(false)
+})
+
+it('shows the component', () => {
+ createComponent({ props: { isVisible: true }})
+
+ expect(findMyComponent().exists()).toBe(true)
+})
+```
+
+Not only that, but imagine having passed the wrong prop to your component and having the wrong visibility: the snapshot test would still pass because you would have captured the HTML **with the issue** and so unless you double-checked the output of the snapshot, you would never know that your test is broken.
+
+#### Example #2 - Presence of text
+
+Finding text within a component is very easy by using the `vue-test-utils` method `wrapper.text()`. However, there are some cases where it might be tempting to use snapshots when the value returned has a lot of inconsistent spacing due to formatting or HTML nesting.
+
+In these instances, it is better to assert each string individually and make multiple assertions rather than to use a snapshot to ignore the spaces. This is because any change to the DOM layout will fail the snapshot test even if the text is still perfectly formatted.
+
+```vue
+<template>
+ <gl-sprintf :message="my-message">
+ <template #code="{ content }">
+ <code>{{ content }}</code>
+ </template>
+ </gl-sprintf>
+ <p> My second message </p>
+</template>
+```
+
+Bad:
+
+```javascript
+it('renders the text as I expect', () => {
+ expect(wrapper.text()).toMatchSnapshot()
+})
+```
+
+Good:
+
+```javascript
+it('renders the code snippet', () => {
+ expect(findCodeTag().text()).toContain("myFunction()")
+})
+
+it('renders the paragraph text', () => {
+ expect(findOtherText().text()).toBe("My second message")
+})
+```
+
+#### Example #3 - Complex HTML
+
+When we have very complex HTML, we should focus on asserting specific sensitive and meaningful points rather than capturing it as a whole. The value in a snapshot test is to **warn developers** that they might have accidentally changed an HTML structure that they did not intend to change. If the output of the change is hard to read, which is often the case with complex HTML output, then **is the signal itself that something changed** sufficient? And if it is, can it be accomplished without snapshots?
+
+A good example of a complex HTML output is `GlTable`. Snapshot testing might feel like a good option since you can capture rows and columns structure, but we should instead try to assert text we expect or count the number of rows and columns manually.
+
+```vue
+<template>
+ <gl-table ...all-them-props />
+</template>
+```
+
+Bad:
+
+```javascript
+it('renders GlTable as I expect', () => {
+ expect(findGlTable().element).toMatchSnapshot()
+})
+```
+
+Good:
+
+```javascript
+it('renders the right number of rows', () => {
+ expect(findGlTable().findAllRows()).toHaveLength(expectedLength)
+})
+
+it('renders the special icon that only appears on a full moon', () => {
+ expect(findGlTable().findMoonIcon().exists()).toBe(true)
+})
+
+it('renders the correct email format', () => {
+ expect(findGlTable().text()).toContain('my_strange_email@shaddyprovide.com')
+})
+```
+
+Although more verbose, this now means that our tests are not going to break if `GlTable` changes its internal implementation, and we communicate to other developers (or ourselves in 6 months) what is important to preserve when refactoring or adding to our table.
+
### How to take a snapshot
```javascript
@@ -1211,43 +1434,6 @@ it('renders the component correctly', () => {
The above test will create two snapshots. It's important to decide which of the snapshots provide more value for codebase safety. That is, if one of these snapshots changes, does that highlight a possible break in the codebase? This can help catch unexpected changes if something in an underlying dependency changes without our knowledge.
-### Pros and Cons
-
-**Pros**
-
-- Speed up the creation of unit tests
-- Easy to maintain
-- Provides a good safety net to protect against accidental breakage of important HTML structures
-
-**Cons**
-
-- Is not a catch-all solution that replaces the work of integration or unit tests
-- No meaningful assertions or expectations within snapshots
-- When carelessly used with [GitLab UI](https://gitlab.com/gitlab-org/gitlab-ui) it can create fragility in tests when the underlying library changes the HTML of a component we are testing
-
-A good guideline to follow: the more complex the component you may want to steer away from just snapshot testing. But that's not to say you can't still snapshot test and test your component as normal.
-
-### When to use
-
-**Use snapshots when**
-
-- to capture a components rendered output
-- to fully or partially match templates
-- to match readable data structures
-- to verify correctly composed native HTML elements
-- as a safety net for critical structures so others don't break it by accident
-- Template heavy component
-- Not a lot of logic in the component
-- Composed of native HTML elements
-
-### When not to use
-
-**Don't use snapshots when**
-
-- To capture large data structures just to have something
-- To just have some kind of test written
-- To capture highly volatile UI elements without stubbing them (Think of GitLab UI version updates)
-
## Get started with feature tests
### What is a feature test
@@ -1567,11 +1753,8 @@ Inside the terminal, where capybara is running, you can also execute `next` whic
### Updating ChromeDriver
-On MacOS, if you get a ChromeDriver error, make sure to update it by running
-
-```shell
- brew upgrade chromedriver
-```
+Starting from `Selenium` 4.6, ChromeDriver can be automatically managed by `Selenium Manager`, which comes with the `selenium-webdriver` gem.
+You no longer need to manually keep ChromeDriver in sync.
---
diff --git a/doc/development/testing_guide/review_apps.md b/doc/development/testing_guide/review_apps.md
index b074a9e34f3..1e5ee9f3003 100644
--- a/doc/development/testing_guide/review_apps.md
+++ b/doc/development/testing_guide/review_apps.md
@@ -47,8 +47,8 @@ Maintainers can elect to use the [process for merging during broken `master`](ht
## Performance Metrics
-On every [pipeline](https://gitlab.com/gitlab-org/gitlab/pipelines/125315730) in the `qa` stage, the
-`review-performance` job is automatically started: this job does basic
+On every Review App child pipeline in the `qa` stage, the
+`browser_performance` job is automatically started: this job does basic
browser performance testing using a
[Sitespeed.io Container](../../ci/testing/browser_performance_testing.md).
@@ -227,7 +227,7 @@ Review apps are automatically stopped 2 days after the last deployment thanks to
the [Environment auto-stop](../../ci/environments/index.md#stop-an-environment-after-a-certain-time-period) feature.
If you need your review app to stay up for a longer time, you can
-[pin its environment](../../ci/environments/index.md#override-a-deployments-scheduled-stop-time) or retry the
+[pin its environment](../../ci/environments/index.md#override-a-environments-scheduled-stop-date-and-time) or retry the
`review-deploy` job to update the "latest deployed at" time.
The `review-cleanup` job that automatically runs in scheduled
@@ -276,7 +276,7 @@ find a way to limit it to only us.**
## Other resources
- [Review apps integration for CE/EE (presentation)](https://docs.google.com/presentation/d/1QPLr6FO4LduROU8pQIPkX1yfGvD13GEJIBOenqoKxR8/edit?usp=sharing)
-- [Stability issues](https://gitlab.com/gitlab-org/quality/team-tasks/-/issues/212)
+- [Stability issues](https://gitlab.com/gitlab-org/quality/quality-engineering/team-tasks/-/issues/212)
### Helpful command line tools
diff --git a/doc/development/testing_guide/testing_levels.md b/doc/development/testing_guide/testing_levels.md
index c93a5fd539b..480a53bbefe 100644
--- a/doc/development/testing_guide/testing_levels.md
+++ b/doc/development/testing_guide/testing_levels.md
@@ -55,6 +55,7 @@ records should use stubs/doubles as much as possible.
| `lib/` | `spec/lib/` | RSpec | |
| `lib/tasks/` | `spec/tasks/` | RSpec | |
| `rubocop/` | `spec/rubocop/` | RSpec | |
+| `spec/support/` | `spec/support_specs/` | RSpec | |
### Frontend unit tests
@@ -65,7 +66,7 @@ that is not directly perceivable by a user.
graph RL
plain[Plain JavaScript];
Vue[Vue Components];
- feature-flags[Feature Flags];
+ feature-flags[Feature flags];
license-checks[License Checks];
plain---Vuex;
@@ -149,7 +150,7 @@ Component tests cover the state of a single component that is perceivable by a u
graph RL
plain[Plain JavaScript];
Vue[Vue Components];
- feature-flags[Feature Flags];
+ feature-flags[Feature flags];
license-checks[License Checks];
plain---Vuex;
@@ -243,7 +244,7 @@ Their abstraction level is comparable to how a user would interact with the UI.
graph RL
plain[Plain JavaScript];
Vue[Vue Components];
- feature-flags[Feature Flags];
+ feature-flags[Feature flags];
license-checks[License Checks];
plain---Vuex;
@@ -300,7 +301,7 @@ graph RL
- **DOM**:
Testing on the real DOM ensures your components work in the intended environment.
- Part of DOM testing is delegated to [cross-browser testing](https://gitlab.com/gitlab-org/quality/team-tasks/-/issues/45).
+ Part of DOM testing is delegated to [cross-browser testing](https://gitlab.com/gitlab-org/quality/quality-engineering/team-tasks/-/issues/45).
- **Properties or state of components**:
On this level, all tests can only perform actions a user would do.
For example: to change the state of a component, a click event would be fired.
@@ -366,13 +367,12 @@ See also:
- The [RSpec testing guidelines](../testing_guide/best_practices.md#rspec).
- System / Feature tests in the [Testing Best Practices](best_practices.md#system--feature-tests).
-- [Issue #26159](https://gitlab.com/gitlab-org/gitlab/-/issues/26159) which aims at combining those guidelines with this page.
```mermaid
graph RL
plain[Plain JavaScript];
Vue[Vue Components];
- feature-flags[Feature Flags];
+ feature-flags[Feature flags];
license-checks[License Checks];
plain---Vuex;
diff --git a/doc/development/testing_guide/testing_migrations_guide.md b/doc/development/testing_guide/testing_migrations_guide.md
index 1b1fdcca003..f69f0d1febb 100644
--- a/doc/development/testing_guide/testing_migrations_guide.md
+++ b/doc/development/testing_guide/testing_migrations_guide.md
@@ -290,7 +290,7 @@ RSpec.describe MigrateIncidentIssuesToIncidentType do
.from(issue_type)
.to(incident_type)
- expect(other_issue.reload.issue_type).to eql(issue_type)
+ expect(other_issue.reload.issue_type).to eq(issue_type)
end
end