gitlab.com/gitlab-org/gitlab-foss.git
author    GitLab Bot <gitlab-bot@gitlab.com>  2020-07-20 15:26:25 +0300
committer GitLab Bot <gitlab-bot@gitlab.com>  2020-07-20 15:26:25 +0300
commit    a09983ae35713f5a2bbb100981116d31ce99826e (patch)
tree      2ee2af7bd104d57086db360a7e6d8c9d5d43667a /doc/user/clusters
parent    18c5ab32b738c0b6ecb4d0df3994000482f34bd8 (diff)
Add latest changes from gitlab-org/gitlab@13-2-stable-ee
Diffstat (limited to 'doc/user/clusters')
-rw-r--r--  doc/user/clusters/applications.md        333
-rw-r--r--  doc/user/clusters/crossplane.md          400
-rw-r--r--  doc/user/clusters/management_project.md    2
3 files changed, 499 insertions, 236 deletions
diff --git a/doc/user/clusters/applications.md b/doc/user/clusters/applications.md
index 86624d12bcf..507ba25850d 100644
--- a/doc/user/clusters/applications.md
+++ b/doc/user/clusters/applications.md
@@ -32,8 +32,6 @@ To see a list of available applications to install. For a:
- [Group-level cluster](../group/clusters/index.md), navigate to your group's
**{cloud-gear}** **Kubernetes** page.
-Install Helm first as it's used to install other applications.
-
NOTE: **Note:**
As of GitLab 11.6, Helm will be upgraded to the latest version supported
by GitLab before installing any of the applications.
@@ -71,17 +69,47 @@ can lead to confusion during deployments.
> - Introduced in GitLab 10.2 for project-level clusters.
> - Introduced in GitLab 11.6 for group-level clusters.
+> - A local Tiller option was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/209736) in GitLab 13.2 behind a feature flag, enabled by default.
+> - The feature flag for local Tiller is enabled on GitLab.com.
[Helm](https://helm.sh/docs/) is a package manager for Kubernetes and is
-required to install all the other applications. It is installed in its
-own pod inside the cluster which can run the `helm` CLI in a safe
-environment.
+used to install the GitLab-managed apps. GitLab runs each `helm` command
+in a pod within the `gitlab-managed-apps` namespace inside the cluster.
+
+As of GitLab 13.2, the integration uses a local
+[Tiller](https://v2.helm.sh/docs/glossary/#tiller) by default. When using a
+local Tiller, the Helm application does not need to be installed and will not
+be shown in the list of applications.
NOTE: **Note:**
-Installing Helm as a GitLab-managed App behind a proxy is not supported,
-but a [workaround](../../topics/autodevops/index.md#installing-helm-behind-a-proxy)
+GitLab's Helm integration does not support installing applications behind a proxy,
+but a [workaround](../../topics/autodevops/index.md#install-applications-behind-a-proxy)
is available.
+### Enable or disable local Tiller **(CORE ONLY)**
+
+> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/209736) in GitLab 13.2
+> - The option to disable local Tiller is [planned for removal](https://gitlab.com/gitlab-org/gitlab/-/issues/209736) in GitLab 13.3
+
+Local Tiller is under development, but is ready for production use. It is
+deployed behind a feature flag that is **enabled by default**.
+[GitLab administrators with access to the GitLab Rails console](../../administration/feature_flags.md)
+can enable or disable it for your instance.
+
+To enable it:
+
+```ruby
+# Instance-wide
+Feature.enable(:managed_apps_local_tiller)
+```
+
+To disable it:
+
+```ruby
+# Instance-wide
+Feature.disable(:managed_apps_local_tiller)
+```
+
### cert-manager
> Introduced in GitLab 11.6 for project- and group-level clusters.
@@ -609,6 +637,7 @@ Supported applications:
- [Sentry](#install-sentry-using-gitlab-cicd)
- [GitLab Runner](#install-gitlab-runner-using-gitlab-cicd)
- [Cilium](#install-cilium-using-gitlab-cicd)
+- [Falco](#install-falco-using-gitlab-cicd)
- [Vault](#install-vault-using-gitlab-cicd)
- [JupyterHub](#install-jupyterhub-using-gitlab-cicd)
- [Elastic Stack](#install-elastic-stack-using-gitlab-cicd)
@@ -634,6 +663,11 @@ To install applications using GitLab CI/CD:
- template: Managed-Cluster-Applications.gitlab-ci.yml
```
+ NOTE: **Note:**
+ The job provided by this template connects to the cluster using tools provided
+ in a custom Docker image. It requires that you have a runner registered with the Docker,
+ Kubernetes, or Docker Machine executor.
+
1. Add a `.gitlab/managed-apps/config.yaml` file to define which
applications you would like to install. Define the `installed` key as
`true` to install the application and `false` to uninstall the
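   For example, a minimal sketch of such a `config.yaml`, assuming you want
   Ingress installed and Cilium removed (each application's exact key appears
   in its section below):

   ```yaml
   # .gitlab/managed-apps/config.yaml — a minimal sketch
   ingress:
     installed: true    # install the Ingress managed app
   cilium:
     installed: false   # uninstall Cilium if it was previously installed
   ```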
@@ -683,6 +717,10 @@ management project. Refer to the
[chart](https://github.com/helm/charts/tree/master/stable/nginx-ingress)
for the available configuration options.
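For example, a hypothetical `.gitlab/managed-apps/ingress/values.yaml` that
scales out the controller might look like this (the `controller.replicaCount`
key comes from the upstream `nginx-ingress` chart):

```yaml
controller:
  replicaCount: 2   # run two NGINX Ingress controller pods
```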
+NOTE: **Note:**
+Support for installing the Ingress managed application is provided by the GitLab Configure group.
+If you run into unknown issues, please [open a new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) and ping at least 2 people from the [Configure group](https://about.gitlab.com/handbook/product/categories/#configure-group).
+
### Install cert-manager using GitLab CI/CD
cert-manager is installed using GitLab CI/CD by defining configuration in
@@ -720,6 +758,10 @@ management project. Refer to the
[chart](https://hub.helm.sh/charts/jetstack/cert-manager) for the
available configuration options.
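As an illustration, a `.gitlab/managed-apps/cert-manager/values.yaml` pointing
the chart's ingress-shim at a hypothetical `ClusterIssuer` could look like this
(`ingressShim.*` are documented chart keys; the issuer name is an assumption):

```yaml
ingressShim:
  defaultIssuerName: letsencrypt-prod   # hypothetical issuer
  defaultIssuerKind: ClusterIssuer
```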
+NOTE: **Note:**
+Support for installing the Cert Manager managed application is provided by the GitLab Configure group.
+If you run into unknown issues, please [open a new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) and ping at least 2 people from the [Configure group](https://about.gitlab.com/handbook/product/categories/#configure-group).
+
### Install Sentry using GitLab CI/CD
NOTE: **Note:**
@@ -781,6 +823,10 @@ postgresql:
postgresqlPassword: example-postgresql-password
```
+NOTE: **Note:**
+Support for installing the Sentry managed application is provided by the GitLab Health group.
+If you run into unknown issues, please [open a new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) and ping at least 2 people from the [Health group](https://about.gitlab.com/handbook/product/product-categories/#health-group).
+
### Install PostHog using GitLab CI/CD
[PostHog](https://www.posthog.com) 🦔 is a developer-friendly, open-source product analytics platform.
@@ -852,6 +898,10 @@ project. Refer to the
[Configuration section of the Prometheus chart's README](https://github.com/helm/charts/tree/master/stable/prometheus#configuration)
for the available configuration options.
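For example, a sketch of a `.gitlab/managed-apps/prometheus/values.yaml` that
disables Alertmanager and extends retention (both keys are documented in the
`stable/prometheus` chart):

```yaml
alertmanager:
  enabled: false   # skip deploying Alertmanager
server:
  retention: 30d   # keep metrics for 30 days
```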
+NOTE: **Note:**
+Support for installing the Prometheus managed application is provided by the GitLab APM group.
+If you run into unknown issues, please [open a new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) and ping at least 2 people from the [APM group](https://about.gitlab.com/handbook/product/product-categories/#apm-group).
+
### Install GitLab Runner using GitLab CI/CD
GitLab Runner is installed using GitLab CI/CD by defining configuration in
@@ -883,6 +933,10 @@ management project. Refer to the
[chart](https://gitlab.com/gitlab-org/charts/gitlab-runner) for the
available configuration options.
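For example, assuming the same `values.yaml` pattern as the other applications,
a `.gitlab/managed-apps/gitlab-runner/values.yaml` sketch using documented
chart keys:

```yaml
concurrent: 4        # number of jobs the runner can execute concurrently
runners:
  privileged: true   # needed for Docker-in-Docker workflows
```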
+NOTE: **Note:**
+Support for installing the Runner managed application is provided by the GitLab Runner group.
+If you run into unknown issues, please [open a new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) and ping at least 2 people from the [Runner group](https://about.gitlab.com/handbook/product/product-categories/#runner-group).
+
### Install Cilium using GitLab CI/CD
> [Introduced](https://gitlab.com/gitlab-org/cluster-integration/cluster-applications/-/merge_requests/22) in GitLab 12.8.
@@ -909,8 +963,8 @@ a corresponding cluster type. The default value is blank. You can
check the recommended variables for each cluster type in the official
documentation:
-- [Google GKE](https://cilium.readthedocs.io/en/stable/gettingstarted/k8s-install-gke/#prepare-deploy-cilium)
-- [AWS EKS](https://cilium.readthedocs.io/en/stable/gettingstarted/k8s-install-eks/#prepare-deploy-cilium)
+- [Google GKE](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-gke/#deploy-cilium)
+- [AWS EKS](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-eks/#deploy-cilium)
You can customize Cilium's Helm variables by defining the
`.gitlab/managed-apps/cilium/values.yaml` file in your cluster
@@ -920,9 +974,9 @@ for the available configuration options.
CAUTION: **Caution:**
Installation and removal of Cilium requires a **manual**
-[restart](https://cilium.readthedocs.io/en/stable/gettingstarted/k8s-install-gke/#restart-remaining-pods)
+[restart](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-gke/#restart-unmanaged-pods)
of all affected pods in all namespaces to ensure that they are
-[managed](https://cilium.readthedocs.io/en/stable/troubleshooting/#ensure-pod-is-managed-by-cilium)
+[managed](https://docs.cilium.io/en/stable/troubleshooting/#ensure-pod-is-managed-by-cilium)
by the correct networking plugin.
NOTE: **Note:**
@@ -930,23 +984,20 @@ Major upgrades might require additional setup steps, please consult
the official [upgrade guide](https://docs.cilium.io/en/stable/install/upgrade/) for more
information.
-By default, Cilium will drop all disallowed packets upon policy
-deployment. The audit mode is scheduled for release in
-[Cilium 1.8](https://github.com/cilium/cilium/pull/9970). In the audit
-mode, disallowed packets will not be dropped, and audit
-notifications will be generated instead. GitLab provides alternative Docker
-images for Cilium with the audit patch included. You can switch to the
-custom build and enable the audit mode by adding the following to
+By default, Cilium's [audit
+mode](https://docs.cilium.io/en/v1.8/gettingstarted/policy-creation/?highlight=policy-audit#enable-policy-audit-mode)
+is enabled. In audit mode, Cilium doesn't drop disallowed packets. You
+can use the `policy-verdict` log to observe policy-related decisions. You
+can disable audit mode by adding the following to
`.gitlab/managed-apps/cilium/values.yaml`:
```yaml
-global:
- registry: registry.gitlab.com/gitlab-org/defend/cilium
- policyAuditMode: true
+config:
+ policyAuditMode: false
agent:
monitor:
- eventTypes: ["drop", "audit"]
+ eventTypes: ["drop"]
```
The Cilium monitor log for traffic is logged out by the
@@ -968,24 +1019,121 @@ The [Hubble](https://github.com/cilium/hubble) monitoring daemon is
enabled by default and set to collect per-namespace flow
metrics. These metrics are accessible on the [Threat Monitoring](../application_security/threat_monitoring/index.md)
dashboard. You can disable Hubble by adding the following to
-`.gitlab/managed-apps/config.yaml`:
+`.gitlab/managed-apps/cilium/values.yaml`:
```yaml
-cilium:
- installed: true
+global:
hubble:
- installed: false
+ enabled: false
```
You can also adjust Helm values for Hubble via
-`.gitlab/managed-apps/cilium/hubble-values.yaml`:
+`.gitlab/managed-apps/cilium/values.yaml`:
```yaml
-metrics:
- enabled:
- - 'flow:sourceContext=namespace;destinationContext=namespace'
+global:
+ hubble:
+ enabled: true
+ metrics:
+ enabled:
+ - 'flow:sourceContext=namespace;destinationContext=namespace'
+```
+
+NOTE: **Note:**
+Support for installing the Cilium managed application is provided by the GitLab Container Security group.
+If you run into unknown issues, please [open a new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) and ping at least 2 people from the [Container Security group](https://about.gitlab.com/handbook/product/product-categories/#container-security-group).
+
+### Install Falco using GitLab CI/CD
+
+> [Introduced](https://gitlab.com/gitlab-org/cluster-integration/cluster-applications/-/merge_requests/91) in GitLab 13.1.
+
+GitLab Container Host Security Monitoring uses [Falco](https://falco.org/)
+as a runtime security tool that listens to the Linux kernel using eBPF. Falco parses system calls
+and asserts the stream against a configurable rules engine in real-time. For more information, see
+[Falco's Documentation](https://falco.org/docs/).
+
+You can enable Falco in the
+`.gitlab/managed-apps/config.yaml` file:
+
+```yaml
+falco:
+ installed: true
+```
+
+You can customize Falco's Helm variables by defining the
+`.gitlab/managed-apps/falco/values.yaml` file in your cluster
+management project. Refer to the
+[Falco chart](https://github.com/falcosecurity/charts/tree/master/falco)
+for the available configuration options.
+
+CAUTION: **Caution:**
+By default, eBPF support is enabled and Falco uses an [eBPF probe](https://falco.org/docs/event-sources/drivers/#using-the-ebpf-probe) to pass system calls to userspace.
+If your cluster doesn't support this, you can configure it to use the Falco kernel module instead by adding the following to `.gitlab/managed-apps/falco/values.yaml`:
+
+```yaml
+ebpf:
+ enabled: false
```
+In rare cases, where automatic probe installation on your cluster isn't possible and no precompiled
+kernel module or eBPF probe is available for your kernel, you may need to prepare one manually with
+[driverkit](https://github.com/falcosecurity/driverkit#against-a-kubernetes-cluster)
+and install it on each cluster node.
+
+By default, Falco is deployed with a limited set of rules. To add more rules, add the following to
+`.gitlab/managed-apps/falco/values.yaml` (you can get examples from
+[Cloud Native Security Hub](https://securityhub.dev/)):
+
+```yaml
+customRules:
+ file-integrity.yaml: |-
+ - rule: Detect New File
+ desc: detect new file created
+ condition: >
+ evt.type = chmod or evt.type = fchmod
+ output: >
+ File below a known directory opened for writing (user=%user.name
+ command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline gparent=%proc.aname[2])
+ priority: ERROR
+ tags: [filesystem]
+ - rule: Detect New Directory
+ desc: detect new directory created
+ condition: >
+ mkdir
+ output: >
+ File below a known directory opened for writing (user=%user.name
+ command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline gparent=%proc.aname[2])
+ priority: ERROR
+ tags: [filesystem]
+```
+
+By default, Falco only outputs security events to logs as JSON objects. To set it to output to an
+[external API](https://falco.org/docs/alerts/#https-output-send-alerts-to-an-https-end-point)
+or [application](https://falco.org/docs/alerts/#program-output),
+add the following to `.gitlab/managed-apps/falco/values.yaml`:
+
+```yaml
+falco:
+ programOutput:
+ enabled: true
+ keepAlive: false
+ program: mail -s "Falco Notification" someone@example.com
+
+ httpOutput:
+ enabled: true
+ url: http://some.url
+```
+
+You can check these logs with the following command:
+
+```shell
+kubectl logs -l app=falco -n gitlab-managed-apps
+```
+
+NOTE: **Note:**
+Support for installing the Falco managed application is provided by the GitLab Container Security group.
+If you run into unknown issues, please [open a new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) and ping at least 2 people from the [Container Security group](https://about.gitlab.com/handbook/product/product-categories/#container-security-group).
+
### Install Vault using GitLab CI/CD
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/9982) in GitLab 12.9.
@@ -1035,7 +1183,7 @@ below are examples and should be replaced with settings specific to your environ
ui:
enabled: true
server:
- # Disable the built in data storage volume as it's not safe for Hight Availablity mode
+ # Disable the built-in data storage volume as it's not safe for High Availability mode
dataStorage:
enabled: false
# Enable High Availability Mode
@@ -1075,6 +1223,10 @@ kubectl -n gitlab-managed-apps exec -it vault-0 sh
This should give you your unseal keys and initial root token. Make sure to note these down
and keep these safe as you will need them to unseal the Vault throughout its lifecycle.
+NOTE: **Note:**
+Support for installing the Vault managed application is provided by the GitLab Release Management group.
+If you run into unknown issues, please [open a new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) and ping at least 2 people from the [Release Management group](https://about.gitlab.com/handbook/product/product-categories/#release-management-group).
+
### Install JupyterHub using GitLab CI/CD
> [Introduced](https://gitlab.com/gitlab-org/cluster-integration/cluster-applications/-/merge_requests/40) in GitLab 12.8.
@@ -1124,6 +1276,10 @@ Refer to the
[chart reference](https://zero-to-jupyterhub.readthedocs.io/en/stable/reference/reference.html) for the
available configuration options.
+NOTE: **Note:**
+Support for installing the JupyterHub managed application is provided by the GitLab Configure group.
+If you run into unknown issues, please [open a new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) and ping at least 2 people from the [Configure group](https://about.gitlab.com/handbook/product/categories/#configure-group).
+
### Install Elastic Stack using GitLab CI/CD
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/25138) in GitLab 12.8.
@@ -1151,6 +1307,10 @@ available configuration options.
NOTE: **Note:**
In this alpha implementation of installing Elastic Stack through CI, reading the environment logs through Elasticsearch is unsupported. This is supported if [installed via the UI](#elastic-stack).
+NOTE: **Note:**
+Support for installing the Elastic Stack managed application is provided by the GitLab APM group.
+If you run into unknown issues, please [open a new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) and ping at least 2 people from the [APM group](https://about.gitlab.com/handbook/product/product-categories/#apm-group).
+
### Install Crossplane using GitLab CI/CD
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/35675) in GitLab 12.9.
@@ -1177,6 +1337,10 @@ management project. Refer to the
[chart](https://github.com/crossplane/crossplane/tree/master/cluster/charts/crossplane#configuration) for the
available configuration options. Note that this link points to the documentation for the current development release, which may differ from the version you have installed.
+NOTE: **Note:**
+Support for the Crossplane managed application is provided by the Crossplane team.
+If you run into issues, please [open a support ticket](https://github.com/crossplane/crossplane/issues/new/choose) directly.
+
### Install Fluentd using GitLab CI/CD
> [Introduced](https://gitlab.com/gitlab-org/cluster-integration/cluster-applications/-/merge_requests/76) in GitLab 12.10.
@@ -1201,6 +1365,10 @@ The configuration chart link points to the current development release, which
may differ from the version you have installed. To ensure compatibility, switch
to the specific branch or tag you are using.
+NOTE: **Note:**
+Support for installing the Fluentd managed application is provided by the GitLab Container Security group.
+If you run into unknown issues, please [open a new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) and ping at least 2 people from the [Container Security group](https://about.gitlab.com/handbook/product/product-categories/#container-security-group).
+
### Install Knative using GitLab CI/CD
To install Knative, define the `.gitlab/managed-apps/config.yaml` file
@@ -1223,6 +1391,10 @@ domain: 'my.wildcard.A.record.dns'
If you plan to use GitLab Serverless capabilities, be sure to set an A record wildcard domain on your custom configuration.
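A sketch of the pieces referenced above: the `installed` key belongs in
`.gitlab/managed-apps/config.yaml`, while, assuming the `values.yaml` pattern
used by the other applications, the wildcard domain lives in
`.gitlab/managed-apps/knative/values.yaml`:

```yaml
# .gitlab/managed-apps/config.yaml
knative:
  installed: true
---
# .gitlab/managed-apps/knative/values.yaml (separate file, assumed location)
domain: 'my.wildcard.A.record.dns'
```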
+NOTE: **Note:**
+Support for installing the Knative managed application is provided by the GitLab Configure group.
+If you run into unknown issues, please [open a new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) and ping at least 2 people from the [Configure group](https://about.gitlab.com/handbook/product/categories/#configure-group).
+
#### Knative Metrics
GitLab provides [Invocation Metrics](../project/clusters/serverless/index.md#invocation-metrics) for your functions. To collect these metrics, you must have:
@@ -1243,6 +1415,92 @@ by running the following command:
kubectl delete -f https://gitlab.com/gitlab-org/cluster-integration/cluster-applications/-/raw/02c8231e30ef5b6725e6ba368bc63863ceb3c07d/src/default-data/knative/istio-metrics.yaml
```
+### Install AppArmor using GitLab CI/CD
+
+> [Introduced](https://gitlab.com/gitlab-org/cluster-integration/cluster-applications/-/merge_requests/100) in GitLab 13.1.
+
+To install AppArmor into the `gitlab-managed-apps` namespace of your cluster using GitLab CI/CD, define the following configuration in `.gitlab/managed-apps/config.yaml`:
+
+```yaml
+apparmor:
+ installed: true
+```
+
+You can define one or more AppArmor profiles by adding them to `.gitlab/managed-apps/apparmor/values.yaml` as follows:
+
+```yaml
+profiles:
+ profile-one: |-
+ profile profile-one {
+ file,
+ }
+```
+
+Refer to the [AppArmor chart](https://gitlab.com/gitlab-org/charts/apparmor) for more information on this chart.
+
+#### Using AppArmor profiles in your deployments
+
+After installing AppArmor, you can use profiles by adding pod annotations. If you're using Auto
+DevOps, you can [customize `auto-deploy-values.yaml`](../../topics/autodevops/customize.md#customize-values-for-helm-chart)
+to annotate your pods. Although it's helpful to be aware of the [list of custom attributes](https://gitlab.com/gitlab-org/charts/auto-deploy-app#gitlabs-auto-deploy-helm-chart), you're only required to set
+`podAnnotations` as follows:
+
+```yaml
+podAnnotations:
+ container.apparmor.security.beta.kubernetes.io/auto-deploy-app: localhost/profile-one
+```
+
+The only information to change here is the profile name, which is `profile-one` in this example. Refer to the [AppArmor tutorial](https://kubernetes.io/docs/tutorials/clusters/apparmor/#securing-a-pod) for more information on how AppArmor is integrated in Kubernetes.
+
+#### Using PodSecurityPolicy in your deployments
+
+NOTE: **Note:**
+To enable AppArmor annotations on a Pod Security Policy you must first
+load the corresponding AppArmor profile.
+
+[Pod Security Policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) are
+resources at the cluster level that control security-related
+properties of deployed pods. You can use such a policy to enable
+loaded AppArmor profiles and apply necessary pod restrictions across a
+cluster. You can deploy a new policy by adding the following
+to `.gitlab/managed-apps/apparmor/values.yaml`:
+
+```yaml
+securityPolicies:
+ example:
+ defaultProfile: profile-one
+ allowedProfiles:
+ - profile-one
+ - profile-two
+ spec:
+ privileged: false
+ seLinux:
+ rule: RunAsAny
+ supplementalGroups:
+ rule: RunAsAny
+ runAsUser:
+ rule: RunAsAny
+ fsGroup:
+ rule: RunAsAny
+ volumes:
+ - '*'
+```
+
+This example creates a single policy named `example` with the provided
+specification, and enables [AppArmor
+annotations](https://kubernetes.io/docs/tutorials/clusters/apparmor/#podsecuritypolicy-annotations) on
+it.
+
+NOTE: **Note:**
+Support for installing the AppArmor managed application is provided by the GitLab Container Security group.
+If you run into unknown issues, please [open a new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) and ping at least 2 people from the [Container Security group](https://about.gitlab.com/handbook/product/product-categories/#container-security-group).
+
+## Browse application logs
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/36769) in GitLab 13.2.
+
+Logs produced by pods running **GitLab Managed Apps** can be browsed using [**Log Explorer**](../project/clusters/kubernetes_pod_logs.md).
+
## Upgrading applications
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/24789) in GitLab 11.8.
@@ -1342,3 +1600,16 @@ The number and size of nodes might not have enough IP addresses to run or instal
For reference, all the AWS instance IP limits are found
[in this AWS repository on GitHub](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/pkg/awsutils/vpc_ip_resource_limit.go) (search for `InstanceENIsAvailable`).
+
+### Unable to install Prometheus
+
+Installing Prometheus can fail with the following error:
+
+```shell
+# kubectl -n gitlab-managed-apps logs install-prometheus
+...
+Error: Could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: admission.certmanager.k8s.io/v1beta1: the server is currently unable to handle the request
+```
+
+This is a bug that was introduced in Helm `2.15` and fixed in `3.0.2`. As a workaround, you'll need
+to make sure that [`cert-manager`](#cert-manager) is installed successfully prior to installing Prometheus.
diff --git a/doc/user/clusters/crossplane.md b/doc/user/clusters/crossplane.md
index 3a430ad55bd..b30ebc57338 100644
--- a/doc/user/clusters/crossplane.md
+++ b/doc/user/clusters/crossplane.md
@@ -6,17 +6,15 @@ info: To determine the technical writer assigned to the Stage/Group associated w
# Crossplane configuration
-Once Crossplane [is installed](applications.md#crossplane), it must be configured for
-use.
-
+After [installing](applications.md#crossplane) Crossplane, you must configure it for use.
The process of configuring Crossplane includes:
-1. Configuring RBAC permissions.
-1. Configuring Crossplane with a cloud provider.
-1. Configure managed service access.
-1. Setting up Resource classes.
-1. Using Auto DevOps configuration options.
-1. Connect to the PostgreSQL instance.
+1. [Configure RBAC permissions](#configure-rbac-permissions).
+1. [Configure Crossplane with a cloud provider](#configure-crossplane-with-a-cloud-provider).
+1. [Configure managed service access](#configure-managed-service-access).
+1. [Set up Resource classes](#setting-up-resource-classes).
+1. Use [Auto DevOps configuration options](#auto-devops-configuration-options).
+1. [Connect to the PostgreSQL instance](#connect-to-the-postgresql-instance).
To allow Crossplane to provision cloud services such as PostgreSQL, the cloud provider
stack must be configured with a user account. For example:
@@ -24,14 +22,13 @@ stack must be configured with a user account. For example:
- A service account for GCP.
- An IAM user for AWS.
-Important notes:
+Some important notes:
-- This guide uses GCP as an example. However, the process for AWS and Azure will be
-similar.
-- Crossplane requires the Kubernetes cluster to be VPC native with Alias IPs enabled so
-that the IP address of the pods are routable within the GCP network.
+- This guide uses GCP as an example, but the processes for AWS and Azure are similar.
+- Crossplane requires the Kubernetes cluster to be VPC native with Alias IPs enabled,
+ so the IP addresses of the pods can be routed within the GCP network.
-First, we need to declare some environment variables with configuration that will be used throughout this guide:
+First, declare some environment variables with configuration for use in this guide:
```shell
export PROJECT_ID=crossplane-playground # the GCP project where all resources reside.
@@ -41,228 +38,223 @@ export REGION=us-central1 # the GCP region where the GKE cluster is provisioned.
## Configure RBAC permissions
-- For GitLab-managed clusters, RBAC is configured automatically.
-
-- For non-GitLab managed clusters, ensure that the service account for the token provided can manage resources in the `database.crossplane.io` API group:
-
- 1. Save the following YAML as `crossplane-database-role.yaml`:
-
- ```yaml
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRole
- metadata:
- name: crossplane-database-role
- labels:
- rbac.authorization.k8s.io/aggregate-to-edit: "true"
- rules:
- - apiGroups:
- - database.crossplane.io
- resources:
- - postgresqlinstances
- verbs:
- - get
- - list
- - create
- - update
- - delete
- - patch
- - watch
- ```
-
- 1. Apply the cluster role to the cluster:
-
- ```shell
- kubectl apply -f crossplane-database-role.yaml
- ```
+For GitLab-managed clusters, role-based access control (RBAC) is configured automatically.
+
+For non-GitLab managed clusters, ensure that the service account for the token
+provided can manage resources in the `database.crossplane.io` API group:
+
+1. Save the following YAML as `crossplane-database-role.yaml`:
+
+ ```yaml
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: ClusterRole
+ metadata:
+ name: crossplane-database-role
+ labels:
+ rbac.authorization.k8s.io/aggregate-to-edit: "true"
+ rules:
+ - apiGroups:
+ - database.crossplane.io
+ resources:
+ - postgresqlinstances
+ verbs:
+ - get
+ - list
+ - create
+ - update
+ - delete
+ - patch
+ - watch
+ ```
+
+1. Apply the cluster role to the cluster:
+
+ ```shell
+ kubectl apply -f crossplane-database-role.yaml
+ ```
## Configure Crossplane with a cloud provider
See [Configure Your Cloud Provider Account](https://crossplane.github.io/docs/v0.4/cloud-providers.html)
to configure the installed cloud provider stack with a user account.
-Note that the Secret and the Provider resource referencing the Secret needs to be
+NOTE: **Note:**
+The Secret, and the Provider resource referencing the Secret, must be
applied to the `gitlab-managed-apps` namespace, rather than the namespace used in that guide. Make sure
to change it while following the process.
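As an illustration only, the Secret and the Provider referencing it might look
roughly like the following for GCP. The exact `apiVersion` and field names
depend on the stack version installed, so treat this as a sketch, not a
canonical manifest:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gcp-account-creds          # hypothetical Secret name
  namespace: gitlab-managed-apps   # must be this namespace, not the guide's default
type: Opaque
data:
  credentials: BASE64_ENCODED_GCP_SERVICE_ACCOUNT_JSON
---
apiVersion: gcp.crossplane.io/v1alpha3
kind: Provider
metadata:
  name: gcp-provider               # referenced by providerRef in later steps
spec:
  projectID: $PROJECT_ID
  credentialsSecretRef:
    namespace: gitlab-managed-apps
    name: gcp-account-creds
    key: credentials
```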
-[Configure Providers](https://crossplane.github.io/docs/v0.4/cloud-providers.html)
-
## Configure Managed Service Access
-We need to configure connectivity between the PostgreSQL database and the GKE cluster.
-This can done by either:
+Next, configure connectivity between the PostgreSQL database and the GKE cluster
+by either:
- Using Crossplane as demonstrated below.
- Directly in the GCP console by
-[configuring private services access](https://cloud.google.com/vpc/docs/configure-private-services-access).
-Create a GlobalAddress and Connection resources:
-
-```shell
-cat > network.yaml <<EOF
----
-# gitlab-ad-globaladdress defines the IP range that will be allocated for cloud services connecting to the instances in the given Network.
-
-apiVersion: compute.gcp.crossplane.io/v1alpha3
-kind: GlobalAddress
-metadata:
- name: gitlab-ad-globaladdress
-spec:
- providerRef:
- name: gcp-provider
- reclaimPolicy: Delete
- name: gitlab-ad-globaladdress
- purpose: VPC_PEERING
- addressType: INTERNAL
- prefixLength: 16
- network: projects/$PROJECT_ID/global/networks/$NETWORK_NAME
----
-# gitlab-ad-connection is what allows cloud services to use the allocated GlobalAddress for communication. Behind
-# the scenes, it creates a VPC peering to the network that those service instances actually live.
-
-apiVersion: servicenetworking.gcp.crossplane.io/v1alpha3
-kind: Connection
-metadata:
- name: gitlab-ad-connection
-spec:
- providerRef:
- name: gcp-provider
- reclaimPolicy: Delete
- parent: services/servicenetworking.googleapis.com
- network: projects/$PROJECT_ID/global/networks/$NETWORK_NAME
- reservedPeeringRangeRefs:
- - name: gitlab-ad-globaladdress
-EOF
-```
-
-Apply the settings specified in the file with the following command:
-
-```shell
-kubectl apply -f network.yaml
-```
-
-You can verify creation of the network resources with the following commands.
-Verify that the status of both of these resources is ready and is synced.
-
-```shell
-kubectl describe connection.servicenetworking.gcp.crossplane.io gitlab-ad-connection
-kubectl describe globaladdress.compute.gcp.crossplane.io gitlab-ad-globaladdress
-```
+ [configuring private services access](https://cloud.google.com/vpc/docs/configure-private-services-access).
+
+1. Run the following command, which creates a `network.yaml` file defining
+ `GlobalAddress` and `Connection` resources:
+
+ ```plaintext
+ cat > network.yaml <<EOF
+ ---
+ # gitlab-ad-globaladdress defines the IP range that will be allocated
+ # for cloud services connecting to the instances in the given Network.
+
+ apiVersion: compute.gcp.crossplane.io/v1alpha3
+ kind: GlobalAddress
+ metadata:
+ name: gitlab-ad-globaladdress
+ spec:
+ providerRef:
+ name: gcp-provider
+ reclaimPolicy: Delete
+ name: gitlab-ad-globaladdress
+ purpose: VPC_PEERING
+ addressType: INTERNAL
+ prefixLength: 16
+ network: projects/$PROJECT_ID/global/networks/$NETWORK_NAME
+ ---
+ # gitlab-ad-connection is what allows cloud services to use the allocated
+ # GlobalAddress for communication. Behind the scenes, it creates a VPC peering
+ # to the network that those service instances actually live.
+
+ apiVersion: servicenetworking.gcp.crossplane.io/v1alpha3
+ kind: Connection
+ metadata:
+ name: gitlab-ad-connection
+ spec:
+ providerRef:
+ name: gcp-provider
+ reclaimPolicy: Delete
+ parent: services/servicenetworking.googleapis.com
+ network: projects/$PROJECT_ID/global/networks/$NETWORK_NAME
+ reservedPeeringRangeRefs:
+ - name: gitlab-ad-globaladdress
+ EOF
+ ```
+
+1. Apply the settings specified in the file with the following command:
+
+ ```shell
+ kubectl apply -f network.yaml
+ ```
+
+1. Verify the creation of the network resources, and that both resources are ready and synced.
+
+ ```shell
+ kubectl describe connection.servicenetworking.gcp.crossplane.io gitlab-ad-connection
+ kubectl describe globaladdress.compute.gcp.crossplane.io gitlab-ad-globaladdress
+ ```
## Setting up Resource classes
-Resource classes are a way of defining a configuration for the required managed service. We will define the PostgreSQL Resource class
-
-- Define a `gcp-postgres-standard.yaml` resource class which contains
-
-1. A default CloudSQLInstanceClass.
-1. A CloudSQLInstanceClass with labels.
-
-```shell
-cat > gcp-postgres-standard.yaml <<EOF
-apiVersion: database.gcp.crossplane.io/v1beta1
-kind: CloudSQLInstanceClass
-metadata:
- name: cloudsqlinstancepostgresql-standard
- labels:
- gitlab-ad-demo: "true"
-specTemplate:
- writeConnectionSecretsToNamespace: gitlab-managed-apps
- forProvider:
- databaseVersion: POSTGRES_11_7
- region: $REGION
- settings:
- tier: db-custom-1-3840
- dataDiskType: PD_SSD
- dataDiskSizeGb: 10
- ipConfiguration:
- privateNetwork: projects/$PROJECT_ID/global/networks/$NETWORK_NAME
- # this should match the name of the provider created in the above step
- providerRef:
- name: gcp-provider
- reclaimPolicy: Delete
----
-apiVersion: database.gcp.crossplane.io/v1beta1
-kind: CloudSQLInstanceClass
-metadata:
- name: cloudsqlinstancepostgresql-standard-default
- annotations:
- resourceclass.crossplane.io/is-default-class: "true"
-specTemplate:
- writeConnectionSecretsToNamespace: gitlab-managed-apps
- forProvider:
- databaseVersion: POSTGRES_11_7
- region: $REGION
- settings:
- tier: db-custom-1-3840
- dataDiskType: PD_SSD
- dataDiskSizeGb: 10
- ipConfiguration:
- privateNetwork: projects/$PROJECT_ID/global/networks/$NETWORK_NAME
- # this should match the name of the provider created in the above step
- providerRef:
- name: gcp-provider
- reclaimPolicy: Delete
-EOF
-```
-
-Apply the resource class configuration with the following command:
-
-```shell
-kubectl apply -f gcp-postgres-standard.yaml
-```
-
-Verify creation of the Resource class with the following command:
-
-```shell
-kubectl get cloudsqlinstanceclasses
-```
-
-The Resource Classes allow you to define classes of service for a managed service. We could create another `CloudSQLInstanceClass` which requests for a larger or a faster disk. It could also request for a specific version of the database.
+Use resource classes to define a configuration for the required managed service.
+This example defines the PostgreSQL Resource class:
+
+1. Run the following command, which defines a `gcp-postgres-standard.yaml` resource
+ class containing a labeled `CloudSQLInstanceClass` and a default `CloudSQLInstanceClass`:
+
+ ```plaintext
+ cat > gcp-postgres-standard.yaml <<EOF
+ apiVersion: database.gcp.crossplane.io/v1beta1
+ kind: CloudSQLInstanceClass
+ metadata:
+ name: cloudsqlinstancepostgresql-standard
+ labels:
+ gitlab-ad-demo: "true"
+ specTemplate:
+ writeConnectionSecretsToNamespace: gitlab-managed-apps
+ forProvider:
+ databaseVersion: POSTGRES_11_7
+ region: $REGION
+ settings:
+ tier: db-custom-1-3840
+ dataDiskType: PD_SSD
+ dataDiskSizeGb: 10
+ ipConfiguration:
+ privateNetwork: projects/$PROJECT_ID/global/networks/$NETWORK_NAME
+ # this should match the name of the provider created in the above step
+ providerRef:
+ name: gcp-provider
+ reclaimPolicy: Delete
+ ---
+ apiVersion: database.gcp.crossplane.io/v1beta1
+ kind: CloudSQLInstanceClass
+ metadata:
+ name: cloudsqlinstancepostgresql-standard-default
+ annotations:
+ resourceclass.crossplane.io/is-default-class: "true"
+ specTemplate:
+ writeConnectionSecretsToNamespace: gitlab-managed-apps
+ forProvider:
+ databaseVersion: POSTGRES_11_7
+ region: $REGION
+ settings:
+ tier: db-custom-1-3840
+ dataDiskType: PD_SSD
+ dataDiskSizeGb: 10
+ ipConfiguration:
+ privateNetwork: projects/$PROJECT_ID/global/networks/$NETWORK_NAME
+ # this should match the name of the provider created in the above step
+ providerRef:
+ name: gcp-provider
+ reclaimPolicy: Delete
+ EOF
+ ```
+
+1. Apply the resource class configuration with the following command:
+
+ ```shell
+ kubectl apply -f gcp-postgres-standard.yaml
+ ```
+
+1. Verify creation of the Resource class with the following command:
+
+ ```shell
+ kubectl get cloudsqlinstanceclasses
+ ```
+
+Resource classes allow you to define classes of service for a managed service.
+You could create another `CloudSQLInstanceClass` that requests a larger or
+faster disk, or a specific database version, as in the sketch below.
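+For example, a hypothetical variant of the labeled class above, requesting a
+larger disk, could be added to the same file:
+
+```yaml
+apiVersion: database.gcp.crossplane.io/v1beta1
+kind: CloudSQLInstanceClass
+metadata:
+  name: cloudsqlinstancepostgresql-large   # hypothetical class name
+  labels:
+    gitlab-ad-demo-large: "true"
+specTemplate:
+  writeConnectionSecretsToNamespace: gitlab-managed-apps
+  forProvider:
+    databaseVersion: POSTGRES_11_7
+    region: $REGION
+    settings:
+      tier: db-custom-1-3840
+      dataDiskType: PD_SSD
+      dataDiskSizeGb: 100   # larger than the 10 GB standard class
+      ipConfiguration:
+        privateNetwork: projects/$PROJECT_ID/global/networks/$NETWORK_NAME
+  providerRef:
+    name: gcp-provider
+  reclaimPolicy: Delete
+```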
## Auto DevOps Configuration Options
-The Auto DevOps pipeline can be run with the following options:
-
-The Environment variables, `AUTO_DEVOPS_POSTGRES_MANAGED` and `AUTO_DEVOPS_POSTGRES_MANAGED_CLASS_SELECTOR` need to be set to provision PostgreSQL using Crossplane
-
-Alternatively, the following options can be overridden from the values for the Helm chart.
-
-- `postgres.managed` set to true which will select a default resource class.
- The resource class needs to be marked with the annotation
- `resourceclass.crossplane.io/is-default-class: "true"`. The CloudSQLInstanceClass
- `cloudsqlinstancepostgresql-standard-default` will be used to satisfy the claim.
-
-- `postgres.managed` set to `true` with `postgres.managedClassSelector`
- providing the resource class to choose based on labels. In this case, the
- value of `postgres.managedClassSelector.matchLabels.gitlab-ad-demo="true"`
- will select the CloudSQLInstance class `cloudsqlinstancepostgresql-standard`
- to satisfy the claim request.
+You can run the Auto DevOps pipeline with either of the following options:
+
+- Setting the environment variables `AUTO_DEVOPS_POSTGRES_MANAGED` and
+ `AUTO_DEVOPS_POSTGRES_MANAGED_CLASS_SELECTOR` to provision PostgreSQL using Crossplane.
+- Overriding values for the Helm chart:
+ - Set `postgres.managed` to `true`, which selects a default resource class.
+ Mark the resource class with the annotation
+ `resourceclass.crossplane.io/is-default-class: "true"`. The CloudSQLInstanceClass
+ `cloudsqlinstancepostgresql-standard-default` is used to satisfy the claim.
+ - Set `postgres.managed` to `true` with `postgres.managedClassSelector`
+ providing the resource class to choose, based on labels. In this case, the
+ value of `postgres.managedClassSelector.matchLabels.gitlab-ad-demo="true"`
+ selects the CloudSQLInstance class `cloudsqlinstancepostgresql-standard`
+ to satisfy the claim request.
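+For example, the second option corresponds to a Helm values override along
+these lines, reusing the `gitlab-ad-demo` label defined earlier:
+
+```yaml
+postgres:
+  managed: true
+  managedClassSelector:
+    matchLabels:
+      gitlab-ad-demo: "true"
+```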
The Auto DevOps pipeline should provision a PostgresqlInstance when it runs successfully.
-Verify creation of the PostgreSQL Instance.
+To verify the PostgreSQL instance was created, run the following command. When the `STATUS`
+field of the PostgresqlInstance changes to `Bound`, it has been successfully provisioned:
```shell
-kubectl get postgresqlinstance
-```
+$ kubectl get postgresqlinstance
-Sample Output: The `STATUS` field of the PostgresqlInstance transitions to `BOUND` when it is successfully provisioned.
-
-```plaintext
NAME STATUS CLASS-KIND CLASS-NAME RESOURCE-KIND RESOURCE-NAME AGE
staging-test8 Bound CloudSQLInstanceClass cloudsqlinstancepostgresql-standard CloudSQLInstance xp-ad-demo-24-staging-staging-test8-jj55c 9m
```
-The endpoint of the PostgreSQL instance, and the user credentials, are present in a secret called `app-postgres` within the same project namespace.
-
-Verify the secret with the database information is created with the following command:
+The endpoint of the PostgreSQL instance, and the user credentials, are present in
+a secret called `app-postgres` within the same project namespace. You can verify the
+secret with the following command:
```shell
-kubectl describe secret app-postgres
-```
-
-Sample Output:
+$ kubectl describe secret app-postgres
-```plaintext
Name: app-postgres
Namespace: xp-ad-demo-24-staging
Labels: <none>
diff --git a/doc/user/clusters/management_project.md b/doc/user/clusters/management_project.md
index c8755af29a3..892d2bce184 100644
--- a/doc/user/clusters/management_project.md
+++ b/doc/user/clusters/management_project.md
@@ -97,7 +97,7 @@ Development, Staging, and Production cluster respectively.
```yaml
stages:
-- deploy
+ - deploy
configure development cluster:
stage: deploy