gitlab.com/gitlab-org/gitlab-foss.git
author    GitLab Bot <gitlab-bot@gitlab.com>  2020-03-06 06:08:08 +0300
committer GitLab Bot <gitlab-bot@gitlab.com>  2020-03-06 06:08:08 +0300
commit    a6011c3d70e0e8ac318ba6629183c44f8614c4df (patch)
tree      a3d21394d63c47448998c89f01eb88e57c0ed8ce /doc/development
parent    ffc757a7a92535559c20eb706593f7358d9bf589 (diff)
Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc/development')
-rw-r--r--  doc/development/api_graphql_styleguide.md | 5
-rw-r--r--  doc/development/application_limits.md | 22
-rw-r--r--  doc/development/architecture.md | 12
-rw-r--r--  doc/development/chaos_endpoints.md | 10
-rw-r--r--  doc/development/database_debugging.md | 2
-rw-r--r--  doc/development/db_dump.md | 8
-rw-r--r--  doc/development/documentation/site_architecture/release_process.md | 6
-rw-r--r--  doc/development/import_project.md | 10
8 files changed, 37 insertions, 38 deletions
diff --git a/doc/development/api_graphql_styleguide.md b/doc/development/api_graphql_styleguide.md
index 6d9d375166b..c0eb9c83e92 100644
--- a/doc/development/api_graphql_styleguide.md
+++ b/doc/development/api_graphql_styleguide.md
@@ -123,7 +123,7 @@ pagination models.
To expose a collection of resources, we can use a connection type. This wraps the array with default pagination fields. For example, a query for project pipelines could look like this:
-```
+```graphql
query($project_path: ID!) {
project(fullPath: $project_path) {
pipelines(first: 2) {
@@ -181,7 +181,7 @@ look like this:
To get the next page, the cursor of the last known element could be
passed:
-```
+```graphql
query($project_path: ID!) {
project(fullPath: $project_path) {
pipelines(first: 2, after: "Njc=") {
@@ -319,7 +319,6 @@ module Types
value 'CLOSED', value: 'closed', description: 'An closed Epic'
end
end
-
```
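
For reference, the enum snippet in the hunk above is only a fragment. Written out in full with GraphQL-Ruby it might look like the sketch below; the class name, `graphql_name`, and description wording are illustrative assumptions, and GitLab itself derives such types from `Types::BaseEnum` rather than using `GraphQL::Schema::Enum` directly.

```ruby
require 'graphql'

# Illustrative sketch only: a complete enum type following the pattern shown
# in the hunk above. Names and descriptions are assumptions for the example.
module Types
  class EpicStateEnum < GraphQL::Schema::Enum
    graphql_name 'EpicState'
    description 'State of a GitLab epic'

    value 'OPENED', value: 'opened', description: 'An open Epic'
    value 'CLOSED', value: 'closed', description: 'A closed Epic'
  end
end
```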
## Descriptions
diff --git a/doc/development/application_limits.md b/doc/development/application_limits.md
index dd07a9cbfb7..f89b238cd79 100644
--- a/doc/development/application_limits.md
+++ b/doc/development/application_limits.md
@@ -22,22 +22,22 @@ limit values. It's recommended to create separate migration script files.
1. Add new column to the `plan_limits` table with a non-null default value of 0, e.g.:
- ```ruby
- add_column(:plan_limits, :project_hooks, :integer, default: 0, null: false)
- ```
+ ```ruby
+ add_column(:plan_limits, :project_hooks, :integer, default: 0, null: false)
+ ```
- NOTE: **Note:** Plan limits entries set to `0` mean that limits are not
- enabled.
+ NOTE: **Note:** Plan limits entries set to `0` mean that limits are not
+ enabled.
1. Insert plan limits values into the database using
   `create_or_update_plan_limit` migration helper, e.g.:
- ```ruby
- create_or_update_plan_limit('project_hooks', 'free', 10)
- create_or_update_plan_limit('project_hooks', 'bronze', 20)
- create_or_update_plan_limit('project_hooks', 'silver', 30)
- create_or_update_plan_limit('project_hooks', 'gold', 100)
- ```
+ ```ruby
+ create_or_update_plan_limit('project_hooks', 'free', 10)
+ create_or_update_plan_limit('project_hooks', 'bronze', 20)
+ create_or_update_plan_limit('project_hooks', 'silver', 30)
+ create_or_update_plan_limit('project_hooks', 'gold', 100)
+ ```
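
For orientation, the second step above might live in its own migration file along the following lines. This is a minimal sketch, not the canonical GitLab migration: the class name is invented, `create_or_update_plan_limit` is assumed to be available via `Gitlab::Database::MigrationHelpers`, and the `down` step leans on the note above that a value of `0` means the limit is not enabled.

```ruby
# frozen_string_literal: true

# Illustrative sketch of a migration inserting the plan limit values above.
class InsertProjectHooksPlanLimits < ActiveRecord::Migration[6.0]
  include Gitlab::Database::MigrationHelpers

  def up
    create_or_update_plan_limit('project_hooks', 'free', 10)
    create_or_update_plan_limit('project_hooks', 'bronze', 20)
    create_or_update_plan_limit('project_hooks', 'silver', 30)
    create_or_update_plan_limit('project_hooks', 'gold', 100)
  end

  def down
    # Reset to 0, which the note above defines as "limit not enabled".
    %w[free bronze silver gold].each do |plan|
      create_or_update_plan_limit('project_hooks', plan, 0)
    end
  end
end
```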
### Plan limits validation
diff --git a/doc/development/architecture.md b/doc/development/architecture.md
index 5a1b53bc2fb..c75de8e8970 100644
--- a/doc/development/architecture.md
+++ b/doc/development/architecture.md
@@ -542,28 +542,28 @@ See the README for more information.
The GitLab init script starts and stops Unicorn and Sidekiq:
-```
+```plaintext
/etc/init.d/gitlab
Usage: service gitlab {start|stop|restart|reload|status}
```
Redis (key-value store/non-persistent database):
-```
+```plaintext
/etc/init.d/redis
Usage: /etc/init.d/redis {start|stop|status|restart|condrestart|try-restart}
```
SSH daemon:
-```
+```plaintext
/etc/init.d/sshd
Usage: /etc/init.d/sshd {start|stop|restart|reload|force-reload|condrestart|try-restart|status}
```
Web server (one of the following):
-```
+```plaintext
/etc/init.d/httpd
Usage: httpd {start|stop|restart|condrestart|try-restart|force-reload|reload|status|fullstatus|graceful|help|configtest}
@@ -573,7 +573,7 @@ Usage: nginx {start|stop|restart|reload|force-reload|status|configtest}
Persistent database:
-```
+```plaintext
$ /etc/init.d/postgresql
Usage: /etc/init.d/postgresql {start|stop|restart|reload|force-reload|status} [version ..]
```
@@ -626,7 +626,7 @@ GitLab Shell has a configuration file at `/home/git/gitlab-shell/config.yml`.
[GitLab](https://gitlab.com/gitlab-org/gitlab/tree/master) provides rake tasks with which you can see version information and run a quick check on your configuration to ensure it is configured properly within the application. See [maintenance rake tasks](https://gitlab.com/gitlab-org/gitlab/blob/master/doc/raketasks/maintenance.md).
In a nutshell, do the following:
-```
+```shell
sudo -i -u git
cd gitlab
bundle exec rake gitlab:env:info RAILS_ENV=production
diff --git a/doc/development/chaos_endpoints.md b/doc/development/chaos_endpoints.md
index 2e55f19cd91..26ff3d2def7 100644
--- a/doc/development/chaos_endpoints.md
+++ b/doc/development/chaos_endpoints.md
@@ -47,7 +47,7 @@ To simulate a memory leak in your application, use the `/-/chaos/leakmem` endpoi
NOTE: **Note:**
The memory is not retained after the request finishes. Once the request has completed, the Ruby garbage collector will attempt to recover the memory.
-```
+```plaintext
GET /-/chaos/leakmem
GET /-/chaos/leakmem?memory_mb=1024
GET /-/chaos/leakmem?memory_mb=1024&duration_s=50
@@ -72,7 +72,7 @@ This endpoint attempts to fully utilise a single core, at 100%, for the given pe
Depending on your Rack server setup, your request may time out after a predetermined period (normally 60 seconds).
If you're using Unicorn, this is done by killing the worker process.
-```
+```plaintext
GET /-/chaos/cpu_spin
GET /-/chaos/cpu_spin?duration_s=50
GET /-/chaos/cpu_spin?duration_s=50&async=true
@@ -96,7 +96,7 @@ This endpoint can be used to model yielding execution to another threads when ru
Depending on your Rack server setup, your request may time out after a predetermined period (normally 60 seconds).
If you're using Unicorn, this is done by killing the worker process.
-```
+```plaintext
GET /-/chaos/db_spin
GET /-/chaos/db_spin?duration_s=50
GET /-/chaos/db_spin?duration_s=50&async=true
@@ -119,7 +119,7 @@ This endpoint is similar to the CPU Spin endpoint but simulates off-processor ac
As with the CPU Spin endpoint, this may lead to your request timing out if duration_s exceeds the configured limit.
-```
+```plaintext
GET /-/chaos/sleep
GET /-/chaos/sleep?duration_s=50
GET /-/chaos/sleep?duration_s=50&async=true
@@ -142,7 +142,7 @@ This endpoint will simulate the unexpected death of a worker process using a `ki
NOTE: **Note:**
Since this endpoint uses the `KILL` signal, the worker is not given a chance to clean up or shut down.
-```
+```plaintext
GET /-/chaos/kill
GET /-/chaos/kill?async=true
```
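
All of the chaos endpoints above are plain GET endpoints, so they can be exercised from a script as well as a browser. A minimal Ruby sketch, assuming a local development instance on `localhost:3000` with no chaos secret token configured; the path and query parameters are the documented ones:

```ruby
require 'net/http'
require 'uri'

# Illustrative only: leak roughly 256 MB for 10 seconds on a local instance.
uri = URI.parse('http://localhost:3000/-/chaos/leakmem?memory_mb=256&duration_s=10')
response = Net::HTTP.get_response(uri)
puts "#{response.code} #{response.message}"
```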
diff --git a/doc/development/database_debugging.md b/doc/development/database_debugging.md
index d91edba92db..e577ba6ec8f 100644
--- a/doc/development/database_debugging.md
+++ b/doc/development/database_debugging.md
@@ -39,7 +39,7 @@ If your test DB is giving you problems, it is safe to nuke it because it doesn't
Access the database via one of these commands (they all get you to the same place):
-```
+```ruby
gdk psql -d gitlabhq_development
bundle exec rails dbconsole RAILS_ENV=development
bundle exec rails db RAILS_ENV=development
diff --git a/doc/development/db_dump.md b/doc/development/db_dump.md
index 97762a62a80..bb740d12f7b 100644
--- a/doc/development/db_dump.md
+++ b/doc/development/db_dump.md
@@ -10,7 +10,7 @@ data leaks.
On the staging VM, add the following line to `/etc/gitlab/gitlab.rb` to speed up
large database imports.
-```
+```shell
# On STAGING
echo "postgresql['checkpoint_segments'] = 64" | sudo tee -a /etc/gitlab/gitlab.rb
sudo touch /etc/gitlab/skip-auto-reconfigure
@@ -23,7 +23,7 @@ Next, we let the production environment stream a compressed SQL dump to our
local machine via SSH, and redirect this stream to a psql client on the staging
VM.
-```
+```shell
# On LOCAL MACHINE
ssh -C gitlab.example.com sudo -u gitlab-psql /opt/gitlab/embedded/bin/pg_dump -Cc gitlabhq_production |\
ssh -C staging-vm sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -d template1
@@ -37,14 +37,14 @@ use this procedure.
First, on the production server, create a list of directories you want to
re-create.
-```
+```shell
# On PRODUCTION
(umask 077; sudo find /var/opt/gitlab/git-data/repositories -maxdepth 1 -type d -print0 > directories.txt)
```
Copy `directories.txt` to the staging server and create the directories there.
-```
+```shell
# On STAGING
sudo -u git xargs -0 mkdir -p < directories.txt
```
diff --git a/doc/development/documentation/site_architecture/release_process.md b/doc/development/documentation/site_architecture/release_process.md
index 61df572a6d2..59a8d3cff01 100644
--- a/doc/development/documentation/site_architecture/release_process.md
+++ b/doc/development/documentation/site_architecture/release_process.md
@@ -70,9 +70,9 @@ this needs to happen when the stable branches for all products have been created
1. Run the raketask to create the single version:
- ```shell
- ./bin/rake "release:single[12.0]"
- ```
+ ```shell
+ ./bin/rake "release:single[12.0]"
+ ```
A new `Dockerfile.12.0` should have been created and committed to a new branch.
diff --git a/doc/development/import_project.md b/doc/development/import_project.md
index e92d18b7ace..3cdf2b8977a 100644
--- a/doc/development/import_project.md
+++ b/doc/development/import_project.md
@@ -115,13 +115,13 @@ The last option is to import a project using a Rails console:
project: project).restore
```
- We are storing all import failures in the `import_failures` data table.
+ We are storing all import failures in the `import_failures` data table.
- To make sure that the project import finished without any issues, check:
+ To make sure that the project import finished without any issues, check:
- ```ruby
- project.import_failures.all
- ```
+ ```ruby
+ project.import_failures.all
+ ```
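
Beyond listing `project.import_failures.all`, individual failure records can be inspected from the same console session. A short sketch, assuming the `import_failures` rows expose `relation_key` and `exception_message` attributes:

```ruby
# Illustrative: print one line per recorded import failure.
project.import_failures.find_each do |failure|
  puts "#{failure.relation_key}: #{failure.exception_message}"
end
```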
## Performance testing