
gitlab.com/gitlab-org/gitlab-foss.git
author    GitLab Bot <gitlab-bot@gitlab.com>  2023-04-10 21:08:34 +0300
committer GitLab Bot <gitlab-bot@gitlab.com>  2023-04-10 21:08:34 +0300
commit    e108710cd11b7f9d6d7a1fc617896f53e45e2a87 (patch)
tree      e9f21dc68df359e7ee42036fff5862d7b2682e1e /doc
parent    f93ec4cb3933e2fe25b90844a6671f2bf312c5a3 (diff)
Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc')
-rw-r--r--  doc/administration/configure.md                        71
-rw-r--r--  doc/administration/gitaly/gitaly_geo_capabilities.md   41
-rw-r--r--  doc/api/notes.md                                        2
-rw-r--r--  doc/development/testing_guide/testing_levels.md         1
4 files changed, 76 insertions, 39 deletions
diff --git a/doc/administration/configure.md b/doc/administration/configure.md
index bf618345a94..d68e81ebf69 100644
--- a/doc/administration/configure.md
+++ b/doc/administration/configure.md
@@ -7,44 +7,39 @@ type: reference
# Configure your GitLab installation **(FREE SELF)**
-Customize and configure your self-managed GitLab installation. Here are some quick links to get you started:
+Customize and configure your self-managed GitLab installation.
- [Authentication](auth/index.md)
+- [CI/CD](../administration/cicd.md)
- [Configuration](../user/admin_area/index.md)
-- [Repository storage](repository_storage_paths.md)
-- [Geo](geo/index.md)
-- [Packages](packages/index.md)
-
-The following tables are intended to guide you to choose the right combination of capabilities based on your requirements. It is common to want the most
-available, quickly recoverable, highly performant, and fully data resilient solution. However, there are tradeoffs.
-
-The tables lists features on the left and provides their capabilities to the right along with known trade-offs.
-
-## Gitaly Capabilities
-
-| | Availability | Recoverability | Data Resiliency | Performance | Risks/Trade-offs|
-|-|--------------|----------------|-----------------|-------------|-----------------|
-|Gitaly Cluster | Very high - tolerant of node failures | RTO for a single node of 10 s with no manual intervention | Data is stored on multiple nodes | Good - While writes may take slightly longer due to voting, read distribution improves read speeds | **Trade-off** - Slight decrease in write speed for redundant, strongly-consistent storage solution. **Risks** - [Does not support snapshot backups](gitaly/index.md#snapshot-backup-and-recovery-limitations), GitLab backup task can be slow for large data sets |
-|Gitaly Shards | Single storage location is a single point of failure | Would need to restore only shards which failed | Single point of failure | Good - can allocate repositories to shards to spread load | **Trade-off** - Need to manually configure repositories into different shards to balance loads / storage space **Risks** - Single point of failure relies on recovery process when single-node failure occurs |
-|Gitaly + NFS | Single storage location is a single point of failure | Single node failure requires restoration from backup | Single point of failure | Average - NFS is not ideally suited to large quantities of small reads / writes which can have a detrimental impact on performance | **Trade-off** - Familiar administration though NFS is not ideally suited to Git demands **Risks** - Many instances of NFS compatibility issues which provide very poor customer experiences |
-
-## Geo Capabilities
-
-If your availability needs to span multiple zones or multiple locations, read about [Geo](geo/index.md).
-
-| | Availability | Recoverability | Data Resiliency | Performance | Risks/Trade-offs|
-|-|--------------|----------------|-----------------|-------------|-----------------|
-|Geo| Depends on the architecture of the Geo site. It is possible to deploy secondaries in single and multiple node configurations. | Eventually consistent. Recovery point depends on replication lag, which depends on a number of factors such as network speeds. Geo supports failover from a primary to secondary site using manual commands that are scriptable. | Geo replicates 100% of planned data types and verifies 50%. See [limitations table](geo/replication/datatypes.md#limitations-on-replicationverification) for more detail. | Improves read/clone times for users of a secondary. | Geo is not intended to replace other backup/restore solutions. Because of replication lag and the possibility of replicating bad data from a primary, customers should also take regular backups of their primary site and test the restore process. |
-
-## Scenarios for failure modes and available mitigation paths
-
-The following table outlines failure modes and mitigation paths for the product offerings detailed in the tables above. Note - Gitaly Cluster install assumes an odd number replication factor of 3 or greater
-
-| Gitaly Mode | Loss of Single Gitaly Node | Application / Data Corruption | Regional Outage (Loss of Instance) | Notes |
-| ----------- | -------------------------- | ----------------------------- | ---------------------------------- | ----- |
-| Single Gitaly Node | Downtime - Must restore from backup | Downtime - Must restore from Backup | Downtime - Must wait for outage to end | |
-| Single Gitaly Node + Geo Secondary | Downtime - Must restore from backup, can perform a manual failover to secondary | Downtime - Must restore from Backup, errors could have propagated to secondary | Manual intervention - failover to Geo secondary | |
-| Sharded Gitaly Install | Partial Downtime - Only repositories on impacted node affected, must restore from backup | Partial Downtime - Only repositories on impacted node affected, must restore from backup | Downtime - Must wait for outage to end | |
-| Sharded Gitaly Install + Geo Secondary | Partial Downtime - Only repositories on impacted node affected, must restore from backup, could perform manual failover to secondary for impacted repositories | Partial Downtime - Only repositories on impacted node affected, must restore from backup, errors could have propagated to secondary | Manual intervention - failover to Geo secondary | |
-| Gitaly Cluster Install* | No Downtime - swaps repository primary to another node after 10 seconds | Not applicable; All writes are voted on by multiple Gitaly Cluster nodes | Downtime - Must wait for outage to end | Snapshot backups for Gitaly Cluster nodes not supported at this time |
-| Gitaly Cluster Install* + Geo Secondary | No Downtime - swaps repository primary to another node after 10 seconds | Not applicable; All writes are voted on by multiple Gitaly Cluster nodes | Manual intervention - failover to Geo secondary | Snapshot backups for Gitaly Cluster nodes not supported at this time |
+- [Consul](../administration/consul.md)
+- [Environment variables](../administration/environment_variables.md)
+- [File hooks](../administration/file_hooks.md)
+- [Git protocol v2](../administration/git_protocol.md)
+- [Incoming email](../administration/incoming_email.md)
+- [Instance limits](../administration/instance_limits.md)
+- [Instance Review](../administration/instance_review.md)
+- [PostgreSQL](../administration/postgresql/index.md)
+- [Load balancer](../administration/load_balancer.md)
+- [NFS](../administration/nfs.md)
+- [Postfix](../administration/reply_by_email_postfix_setup.md)
+- [Redis](../administration/redis/index.md)
+- [Sidekiq](../administration/sidekiq/index.md)
+- [S/MIME signing](../administration/smime_signing_email.md)
+- [Repository storage](../administration/repository_storage_paths.md)
+- [Object storage](../administration/object_storage.md)
+- [Merge request diffs storage](../administration/merge_request_diffs.md)
+- [Static objects external storage](../administration/static_objects_external_storage.md)
+- [Geo](../administration/geo/index.md)
+- [Disaster recovery (Geo)](../administration/geo/disaster_recovery/index.md)
+- [Agent server for Kubernetes](../administration/clusters/kas.md)
+- [Server hooks](../administration/server_hooks.md)
+- [Terraform state](../administration/terraform_state.md)
+- [Terraform limits](../user/admin_area/settings/terraform_limits.md)
+- [Packages](../administration/packages/index.md)
+- [Web terminals](../administration/integration/terminal.md)
+- [Wikis](../administration/wikis/index.md)
+- [Invalidate Markdown cache](../administration/invalidate_markdown_cache.md)
+- [Issue closing pattern](../administration/issue_closing_pattern.md)
+- [Snippets](../administration/snippets/index.md)
+- [Host the product documentation](../administration/docs_self_host.md)
diff --git a/doc/administration/gitaly/gitaly_geo_capabilities.md b/doc/administration/gitaly/gitaly_geo_capabilities.md
new file mode 100644
index 00000000000..e4147eec162
--- /dev/null
+++ b/doc/administration/gitaly/gitaly_geo_capabilities.md
@@ -0,0 +1,41 @@
+---
+stage: Systems
+group: Gitaly
+info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
+---
+
+# Gitaly and Geo capabilities
+
+It is common to want the most available, quickly recoverable, highly performant,
+and fully resilient solution for your data. However, there are tradeoffs.
+
+The following tables are intended to guide you to choose the right combination of capabilities based on your requirements.
+
+## Gitaly capabilities
+
+| Capability | Availability | Recoverability | Data Resiliency | Performance | Risks/Trade-offs|
+|------------|--------------|----------------|-----------------|-------------|-----------------|
+|Gitaly Cluster | Very high - tolerant of node failures | RTO for a single node of 10 s with no manual intervention | Data is stored on multiple nodes | Good - While writes may take slightly longer due to voting, read distribution improves read speeds | **Trade-off** - Slight decrease in write speed for redundant, strongly-consistent storage solution. **Risks** - [Does not support snapshot backups](../gitaly/index.md#snapshot-backup-and-recovery-limitations), GitLab backup task can be slow for large data sets |
+|Gitaly Shards | Single storage location is a single point of failure | Would need to restore only shards which failed | Single point of failure | Good - can allocate repositories to shards to spread load | **Trade-off** - Need to manually configure repositories into different shards to balance loads / storage space **Risks** - Single point of failure relies on recovery process when single-node failure occurs |
+|Gitaly + NFS | Single storage location is a single point of failure | Single node failure requires restoration from backup | Single point of failure | Average - NFS is not ideally suited to large quantities of small reads / writes which can have a detrimental impact on performance | **Trade-off** - Familiar administration though NFS is not ideally suited to Git demands **Risks** - Many instances of NFS compatibility issues which provide very poor customer experiences |
+
+## Geo capabilities
+
+If your availability needs to span multiple zones or multiple locations, read about [Geo](../geo/index.md).
+
+| Capability | Availability | Recoverability | Data Resiliency | Performance | Risks/Trade-offs|
+|------------|--------------|----------------|-----------------|-------------|-----------------|
+|Geo| Depends on the architecture of the Geo site. It is possible to deploy secondaries in single and multiple node configurations. | Eventually consistent. Recovery point depends on replication lag, which depends on a number of factors such as network speeds. Geo supports failover from a primary to secondary site using manual commands that are scriptable. | Geo replicates 100% of planned data types and verifies 50%. See [limitations table](../geo/replication/datatypes.md#limitations-on-replicationverification) for more detail. | Improves read/clone times for users of a secondary. | Geo is not intended to replace other backup/restore solutions. Because of replication lag and the possibility of replicating bad data from a primary, customers should also take regular backups of their primary site and test the restore process. |
+
+## Scenarios for failure modes and available mitigation paths
+
+The following table outlines failure modes and mitigation paths for the product offerings detailed in the tables above. Note - Gitaly Cluster install assumes an odd number replication factor of 3 or greater
+
+| Gitaly Mode | Loss of Single Gitaly Node | Application / Data Corruption | Regional Outage (Loss of Instance) | Notes |
+| ----------- | -------------------------- | ----------------------------- | ---------------------------------- | ----- |
+| Single Gitaly Node | Downtime - Must restore from backup | Downtime - Must restore from Backup | Downtime - Must wait for outage to end | |
+| Single Gitaly Node + Geo Secondary | Downtime - Must restore from backup, can perform a manual failover to secondary | Downtime - Must restore from Backup, errors could have propagated to secondary | Manual intervention - failover to Geo secondary | |
+| Sharded Gitaly Install | Partial Downtime - Only repositories on impacted node affected, must restore from backup | Partial Downtime - Only repositories on impacted node affected, must restore from backup | Downtime - Must wait for outage to end | |
+| Sharded Gitaly Install + Geo Secondary | Partial Downtime - Only repositories on impacted node affected, must restore from backup, could perform manual failover to secondary for impacted repositories | Partial Downtime - Only repositories on impacted node affected, must restore from backup, errors could have propagated to secondary | Manual intervention - failover to Geo secondary | |
+| Gitaly Cluster Install* | No Downtime - swaps repository primary to another node after 10 seconds | Not applicable; All writes are voted on by multiple Gitaly Cluster nodes | Downtime - Must wait for outage to end | Snapshot backups for Gitaly Cluster nodes not supported at this time |
+| Gitaly Cluster Install* + Geo Secondary | No Downtime - swaps repository primary to another node after 10 seconds | Not applicable; All writes are voted on by multiple Gitaly Cluster nodes | Manual intervention - failover to Geo secondary | Snapshot backups for Gitaly Cluster nodes not supported at this time |
diff --git a/doc/api/notes.md b/doc/api/notes.md
index 305bdd294c5..48d343267a1 100644
--- a/doc/api/notes.md
+++ b/doc/api/notes.md
@@ -422,7 +422,7 @@ Parameters:
| `merge_request_iid` | integer | yes | The IID of a project merge request |
| `body` | string | yes | The content of a note. Limited to 1,000,000 characters. |
| `created_at` | string | no | Date time string, ISO 8601 formatted. Example: `2016-03-11T03:45:40Z` (requires administrator or project/group owner rights) |
-| `merge_request_diff_sha`| string | no | Required for the `/merge` [quick action](../user/project/quick_actions.md). The SHA of the head commit, which ensures the merge request wasn't updated after the API request was sent. |
+| `merge_request_diff_head_sha`| string | no | Required for the `/merge` [quick action](../user/project/quick_actions.md). The SHA of the head commit, which ensures the merge request wasn't updated after the API request was sent. |
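
The renamed parameter is sent alongside the note body when posting a `/merge` quick action through the merge request notes endpoint. The sketch below is illustrative only and assumes placeholder values that are not part of this diff: an instance at `gitlab.example.com`, project ID `42`, merge request IID `7`, and `GITLAB_TOKEN`/`MR_HEAD_SHA` environment variables.

```ruby
require "net/http"
require "uri"

# Hypothetical placeholders: project 42, merge request IID 7, token and head SHA from the environment.
uri = URI("https://gitlab.example.com/api/v4/projects/42/merge_requests/7/notes")

request = Net::HTTP::Post.new(uri)
request["PRIVATE-TOKEN"] = ENV.fetch("GITLAB_TOKEN")
request.set_form_data(
  "body" => "/merge",
  # The head SHA ensures the merge request wasn't updated after this request was composed.
  "merge_request_diff_head_sha" => ENV.fetch("MR_HEAD_SHA")
)

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
puts "#{response.code}: #{response.body}"
```

If the merge request received new commits after `MR_HEAD_SHA` was read, the `/merge` quick action is rejected rather than merging unreviewed changes.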
### Modify existing merge request note
diff --git a/doc/development/testing_guide/testing_levels.md b/doc/development/testing_guide/testing_levels.md
index c349e53b5c8..480a53bbefe 100644
--- a/doc/development/testing_guide/testing_levels.md
+++ b/doc/development/testing_guide/testing_levels.md
@@ -55,6 +55,7 @@ records should use stubs/doubles as much as possible.
| `lib/` | `spec/lib/` | RSpec | |
| `lib/tasks/` | `spec/tasks/` | RSpec | |
| `rubocop/` | `spec/rubocop/` | RSpec | |
+| `spec/support/` | `spec/support_specs/` | RSpec | |
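
The new row maps shared RSpec support code in `spec/support/` to specs under `spec/support_specs/`. As a hedged illustration only, such a spec might look like the following; `ExampleHelper` and its method are invented for the sketch and defined inline, whereas a real helper would be loaded from `spec/support/helpers/` instead.

```ruby
# frozen_string_literal: true

# Hypothetical file: spec/support_specs/helpers/example_helper_spec.rb
# Stand-in module; in practice it would live in spec/support/helpers/example_helper.rb.
module ExampleHelper
  def normalize_branch_name(name)
    name.strip.tr(" ", "-")
  end
end

RSpec.describe ExampleHelper do
  subject(:helper) { Class.new { include ExampleHelper }.new }

  it "replaces spaces with hyphens and trims whitespace" do
    expect(helper.normalize_branch_name(" my branch ")).to eq("my-branch")
  end
end
```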
### Frontend unit tests