github.com/zabbix/zabbix.git
author      Vyacheslav Khaliev <vyacheslav.khaliev@zabbix.com>  2022-10-18 20:01:50 +0300
committer   Vyacheslav Khaliev <vyacheslav.khaliev@zabbix.com>  2022-10-18 20:01:50 +0300
commit      82b532eefca9c99c88832833844e2e79e93919a8 (patch)
tree        c28aab43b8ee3c5c0f7ea37a4b1c190b0c382c7a /templates
parent      ddeeb8a1057bcf861d2452bf54116bb341b14a6b (diff)
[ZBX-21673] fixed descriptions in Ceph by Zabbix agent 2 template
Diffstat (limited to 'templates')
-rw-r--r--  templates/app/ceph_agent2/README.md                      | 132
-rw-r--r--  templates/app/ceph_agent2/template_app_ceph_agent2.yaml  | 118
2 files changed, 125 insertions, 125 deletions
diff --git a/templates/app/ceph_agent2/README.md b/templates/app/ceph_agent2/README.md
index 7fcab02e8b9..31d6718c06c 100644
--- a/templates/app/ceph_agent2/README.md
+++ b/templates/app/ceph_agent2/README.md
@@ -3,11 +3,11 @@
## Overview
-For Zabbix version: 6.4 and higher
-The template to monitor Ceph cluster by Zabbix that work without any external scripts.
+For Zabbix version: 6.4 and higher.
+The template is designed to monitor a Ceph cluster by Zabbix and works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
-Template `Ceph by Zabbix agent 2` — collects metrics by polling zabbix-agent2.
+The template `Ceph by Zabbix agent 2` collects metrics by polling *zabbix-agent2*.
@@ -19,9 +19,9 @@ This template was tested on:
> See [Zabbix template operation](https://www.zabbix.com/documentation/6.4/manual/config/templates_out_of_the_box/zabbix_agent2) for basic instructions.
-1. Setup and configure zabbix-agent2 compiled with the Ceph monitoring plugin.
-2. Set the {$CEPH.CONNSTRING} such as <protocol(host:port)> or named session.
-3. Set the user name and password in host macros ({$CEPH.USER}, {$CEPH.API.KEY}) if you want to override parameters from the Zabbix agent configuration file.
+1. Set up and configure *zabbix-agent2* compiled with the *Ceph* monitoring plugin.
+2. Set the {$CEPH.CONNSTRING}, such as <protocol(host:port)>, or a named session.
+3. Set the user name and password in the host macros ({$CEPH.USER}, {$CEPH.API.KEY}) if you want to override the parameters from the Zabbix agent configuration file.
Test availability: `zabbix_get -s ceph-host -k ceph.ping["{$CEPH.CONNSTRING}","{$CEPH.USER}","{$CEPH.API.KEY}"]`
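
For steps 2 and 3 above, the connection string can be either a full URI or a named session defined in the agent configuration. A minimal sketch of such a session, assuming the agent 2 Ceph plugin's `Plugins.Ceph.Sessions.<name>.*` parameters and the default RESTful module endpoint (all values below are placeholders):

```
# zabbix_agent2.conf (sketch; endpoint and credentials are placeholders)
# Named session "ceph1", referenced by setting {$CEPH.CONNSTRING}=ceph1.
Plugins.Ceph.Sessions.ceph1.Uri=https://localhost:8003
Plugins.Ceph.Sessions.ceph1.User=zabbix
Plugins.Ceph.Sessions.ceph1.ApiKey=<api_key>
```

With such a session in place, {$CEPH.CONNSTRING} can simply be set to `ceph1` on the host, and the `zabbix_get` test above should then succeed.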
@@ -54,67 +54,67 @@ There are no template links in this template.
|Group|Name|Description|Type|Key and additional info|
|-----|----|-----------|----|---------------------|
|Ceph |Ceph: Ping | |ZABBIX_PASSIVE |ceph.ping["{$CEPH.CONNSTRING}","{$CEPH.USER}","{$CEPH.API.KEY}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `30m`</p> |
-|Ceph |Ceph: Number of Monitors |<p>Number of Monitors configured in Ceph cluster</p> |DEPENDENT |ceph.num_mon<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_mon`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `30m`</p> |
-|Ceph |Ceph: Overall cluster status |<p>Overall Ceph cluster status, eg 0 - HEALTH_OK, 1 - HEALTH_WARN or 2 - HEALTH_ERR</p> |DEPENDENT |ceph.overall_status<p>**Preprocessing**:</p><p>- JSONPATH: `$.overall_status`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+|Ceph |Ceph: Number of Monitors |<p>The number of Monitors configured in a Ceph cluster.</p> |DEPENDENT |ceph.num_mon<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_mon`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `30m`</p> |
+|Ceph |Ceph: Overall cluster status |<p>The overall Ceph cluster status, e.g., 0 - HEALTH_OK, 1 - HEALTH_WARN or 2 - HEALTH_ERR.</p> |DEPENDENT |ceph.overall_status<p>**Preprocessing**:</p><p>- JSONPATH: `$.overall_status`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
|Ceph |Ceph: Minimum Mon release version |<p>min_mon_release_name</p> |DEPENDENT |ceph.min_mon_release_name<p>**Preprocessing**:</p><p>- JSONPATH: `$.min_mon_release_name`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Ceph |Ceph: Ceph Read bandwidth |<p>Global read Bytes per second</p> |DEPENDENT |ceph.rd_bytes.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.rd_bytes`</p><p>- CHANGE_PER_SECOND</p> |
-|Ceph |Ceph: Ceph Write bandwidth |<p>Global write Bytes per second</p> |DEPENDENT |ceph.wr_bytes.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.wr_bytes`</p><p>- CHANGE_PER_SECOND</p> |
-|Ceph |Ceph: Ceph Read operations per sec |<p>Global read operations per second</p> |DEPENDENT |ceph.rd_ops.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.rd_ops`</p><p>- CHANGE_PER_SECOND</p> |
-|Ceph |Ceph: Ceph Write operations per sec |<p>Global write operations per second</p> |DEPENDENT |ceph.wr_ops.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.wr_ops`</p><p>- CHANGE_PER_SECOND</p> |
-|Ceph |Ceph: Total bytes available |<p>Total bytes available in Ceph cluster</p> |DEPENDENT |ceph.total_avail_bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.total_avail_bytes`</p> |
-|Ceph |Ceph: Total bytes |<p>Total (RAW) capacity of Ceph cluster in bytes</p> |DEPENDENT |ceph.total_bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.total_bytes`</p> |
-|Ceph |Ceph: Total bytes used |<p>Total bytes used in Ceph cluster</p> |DEPENDENT |ceph.total_used_bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.total_used_bytes`</p> |
-|Ceph |Ceph: Total number of objects |<p>Total number of objects in Ceph cluster</p> |DEPENDENT |ceph.total_objects<p>**Preprocessing**:</p><p>- JSONPATH: `$.total_objects`</p> |
-|Ceph |Ceph: Number of Placement Groups |<p>Total number of Placement Groups in Ceph cluster</p> |DEPENDENT |ceph.num_pg<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_pg`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Ceph |Ceph: Number of Placement Groups in Temporary state |<p>Total number of Placement Groups in pg_temp state</p> |DEPENDENT |ceph.num_pg_temp<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_pg_temp`</p> |
-|Ceph |Ceph: Number of Placement Groups in Active state |<p>Total number of Placement Groups in active state</p> |DEPENDENT |ceph.pg_states.active<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.active`</p> |
-|Ceph |Ceph: Number of Placement Groups in Clean state |<p>Total number of Placement Groups in clean state</p> |DEPENDENT |ceph.pg_states.clean<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.clean`</p> |
-|Ceph |Ceph: Number of Placement Groups in Peering state |<p>Total number of Placement Groups in peering state</p> |DEPENDENT |ceph.pg_states.peering<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.peering`</p> |
-|Ceph |Ceph: Number of Placement Groups in Scrubbing state |<p>Total number of Placement Groups in scrubbing state</p> |DEPENDENT |ceph.pg_states.scrubbing<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.scrubbing`</p> |
-|Ceph |Ceph: Number of Placement Groups in Undersized state |<p>Total number of Placement Groups in undersized state</p> |DEPENDENT |ceph.pg_states.undersized<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.undersized`</p> |
-|Ceph |Ceph: Number of Placement Groups in Backfilling state |<p>Total number of Placement Groups in backfilling state</p> |DEPENDENT |ceph.pg_states.backfilling<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.backfilling`</p> |
-|Ceph |Ceph: Number of Placement Groups in degraded state |<p>Total number of Placement Groups in degraded state</p> |DEPENDENT |ceph.pg_states.degraded<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.degraded`</p> |
-|Ceph |Ceph: Number of Placement Groups in inconsistent state |<p>Total number of Placement Groups in inconsistent state</p> |DEPENDENT |ceph.pg_states.inconsistent<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.inconsistent`</p> |
-|Ceph |Ceph: Number of Placement Groups in Unknown state |<p>Total number of Placement Groups in unknown state</p> |DEPENDENT |ceph.pg_states.unknown<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.unknown`</p> |
-|Ceph |Ceph: Number of Placement Groups in remapped state |<p>Total number of Placement Groups in remapped state</p> |DEPENDENT |ceph.pg_states.remapped<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.remapped`</p> |
-|Ceph |Ceph: Number of Placement Groups in recovering state |<p>Total number of Placement Groups in recovering state</p> |DEPENDENT |ceph.pg_states.recovering<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.recovering`</p> |
-|Ceph |Ceph: Number of Placement Groups in backfill_toofull state |<p>Total number of Placement Groups in backfill_toofull state</p> |DEPENDENT |ceph.pg_states.backfill_toofull<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.backfill_toofull`</p> |
-|Ceph |Ceph: Number of Placement Groups in backfill_wait state |<p>Total number of Placement Groups in backfill_wait state</p> |DEPENDENT |ceph.pg_states.backfill_wait<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.backfill_wait`</p> |
-|Ceph |Ceph: Number of Placement Groups in recovery_wait state |<p>Total number of Placement Groups in recovery_wait state</p> |DEPENDENT |ceph.pg_states.recovery_wait<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.recovery_wait`</p> |
-|Ceph |Ceph: Number of Pools |<p>Total number of pools in Ceph cluster</p> |DEPENDENT |ceph.num_pools<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_pools`</p> |
-|Ceph |Ceph: Number of OSDs |<p>Number of known storage daemons in Ceph cluster</p> |DEPENDENT |ceph.num_osd<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_osd`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Ceph |Ceph: Number of OSDs in state: UP |<p>Total number of online storage daemons in Ceph cluster</p> |DEPENDENT |ceph.num_osd_up<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_osd_up`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Ceph |Ceph: Number of OSDs in state: IN |<p>Total number of participating storage daemons in Ceph cluster</p> |DEPENDENT |ceph.num_osd_in<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_osd_in`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Ceph |Ceph: Ceph OSD avg fill |<p>Average fill of OSDs</p> |DEPENDENT |ceph.osd_fill.avg<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_fill.avg`</p> |
-|Ceph |Ceph: Ceph OSD max fill |<p>Percentage fill of maximum filled OSD</p> |DEPENDENT |ceph.osd_fill.max<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_fill.max`</p> |
-|Ceph |Ceph: Ceph OSD min fill |<p>Percentage fill of minimum filled OSD</p> |DEPENDENT |ceph.osd_fill.min<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_fill.min`</p> |
-|Ceph |Ceph: Ceph OSD max PGs |<p>Maximum amount of PGs on OSDs</p> |DEPENDENT |ceph.osd_pgs.max<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_pgs.max`</p> |
-|Ceph |Ceph: Ceph OSD min PGs |<p>Minimum amount of PGs on OSDs</p> |DEPENDENT |ceph.osd_pgs.min<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_pgs.min`</p> |
-|Ceph |Ceph: Ceph OSD avg PGs |<p>Average amount of PGs on OSDs</p> |DEPENDENT |ceph.osd_pgs.avg<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_pgs.avg`</p> |
-|Ceph |Ceph: Ceph OSD Apply latency Avg |<p>Average apply latency of OSDs</p> |DEPENDENT |ceph.osd_latency_apply.avg<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_latency_apply.avg`</p> |
-|Ceph |Ceph: Ceph OSD Apply latency Max |<p>Maximum apply latency of OSDs</p> |DEPENDENT |ceph.osd_latency_apply.max<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_latency_apply.max`</p> |
-|Ceph |Ceph: Ceph OSD Apply latency Min |<p>Minimum apply latency of OSDs</p> |DEPENDENT |ceph.osd_latency_apply.min<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_latency_apply.min`</p> |
-|Ceph |Ceph: Ceph OSD Commit latency Avg |<p>Average commit latency of OSDs</p> |DEPENDENT |ceph.osd_latency_commit.avg<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_latency_commit.avg`</p> |
-|Ceph |Ceph: Ceph OSD Commit latency Max |<p>Maximum commit latency of OSDs</p> |DEPENDENT |ceph.osd_latency_commit.max<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_latency_commit.max`</p> |
-|Ceph |Ceph: Ceph OSD Commit latency Min |<p>Minimum commit latency of OSDs</p> |DEPENDENT |ceph.osd_latency_commit.min<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_latency_commit.min`</p> |
-|Ceph |Ceph: Ceph backfill full ratio |<p>Backfill full ratio setting of Ceph cluster as configured on OSDMap</p> |DEPENDENT |ceph.osd_backfillfull_ratio<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_backfillfull_ratio`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Ceph |Ceph: Ceph full ratio |<p>Full ratio setting of Ceph cluster as configured on OSDMap</p> |DEPENDENT |ceph.osd_full_ratio<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_full_ratio`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Ceph |Ceph: Ceph nearfull ratio |<p>Near full ratio setting of Ceph cluster as configured on OSDMap</p> |DEPENDENT |ceph.osd_nearfull_ratio<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_nearfull_ratio`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+|Ceph |Ceph: Ceph Read bandwidth |<p>The global read bytes per second.</p> |DEPENDENT |ceph.rd_bytes.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.rd_bytes`</p><p>- CHANGE_PER_SECOND</p> |
+|Ceph |Ceph: Ceph Write bandwidth |<p>The global write bytes per second.</p> |DEPENDENT |ceph.wr_bytes.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.wr_bytes`</p><p>- CHANGE_PER_SECOND</p> |
+|Ceph |Ceph: Ceph Read operations per sec |<p>The global read operations per second.</p> |DEPENDENT |ceph.rd_ops.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.rd_ops`</p><p>- CHANGE_PER_SECOND</p> |
+|Ceph |Ceph: Ceph Write operations per sec |<p>The global write operations per second.</p> |DEPENDENT |ceph.wr_ops.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.wr_ops`</p><p>- CHANGE_PER_SECOND</p> |
+|Ceph |Ceph: Total bytes available |<p>The total bytes available in a Ceph cluster.</p> |DEPENDENT |ceph.total_avail_bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.total_avail_bytes`</p> |
+|Ceph |Ceph: Total bytes |<p>The total (RAW) capacity of a Ceph cluster in bytes.</p> |DEPENDENT |ceph.total_bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.total_bytes`</p> |
+|Ceph |Ceph: Total bytes used |<p>The total bytes used in a Ceph cluster.</p> |DEPENDENT |ceph.total_used_bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.total_used_bytes`</p> |
+|Ceph |Ceph: Total number of objects |<p>The total number of objects in a Ceph cluster.</p> |DEPENDENT |ceph.total_objects<p>**Preprocessing**:</p><p>- JSONPATH: `$.total_objects`</p> |
+|Ceph |Ceph: Number of Placement Groups |<p>The total number of Placement Groups in a Ceph cluster.</p> |DEPENDENT |ceph.num_pg<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_pg`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+|Ceph |Ceph: Number of Placement Groups in Temporary state |<p>The total number of Placement Groups in a *pg_temp* state.</p> |DEPENDENT |ceph.num_pg_temp<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_pg_temp`</p> |
+|Ceph |Ceph: Number of Placement Groups in Active state |<p>The total number of Placement Groups in an active state.</p> |DEPENDENT |ceph.pg_states.active<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.active`</p> |
+|Ceph |Ceph: Number of Placement Groups in Clean state |<p>The total number of Placement Groups in a clean state.</p> |DEPENDENT |ceph.pg_states.clean<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.clean`</p> |
+|Ceph |Ceph: Number of Placement Groups in Peering state |<p>The total number of Placement Groups in a peering state.</p> |DEPENDENT |ceph.pg_states.peering<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.peering`</p> |
+|Ceph |Ceph: Number of Placement Groups in Scrubbing state |<p>The total number of Placement Groups in a scrubbing state.</p> |DEPENDENT |ceph.pg_states.scrubbing<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.scrubbing`</p> |
+|Ceph |Ceph: Number of Placement Groups in Undersized state |<p>The total number of Placement Groups in an undersized state.</p> |DEPENDENT |ceph.pg_states.undersized<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.undersized`</p> |
+|Ceph |Ceph: Number of Placement Groups in Backfilling state |<p>The total number of Placement Groups in a backfilling state.</p> |DEPENDENT |ceph.pg_states.backfilling<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.backfilling`</p> |
+|Ceph |Ceph: Number of Placement Groups in degraded state |<p>The total number of Placement Groups in a degraded state.</p> |DEPENDENT |ceph.pg_states.degraded<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.degraded`</p> |
+|Ceph |Ceph: Number of Placement Groups in inconsistent state |<p>The total number of Placement Groups in an inconsistent state.</p> |DEPENDENT |ceph.pg_states.inconsistent<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.inconsistent`</p> |
+|Ceph |Ceph: Number of Placement Groups in Unknown state |<p>The total number of Placement Groups in an unknown state.</p> |DEPENDENT |ceph.pg_states.unknown<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.unknown`</p> |
+|Ceph |Ceph: Number of Placement Groups in remapped state |<p>The total number of Placement Groups in a remapped state.</p> |DEPENDENT |ceph.pg_states.remapped<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.remapped`</p> |
+|Ceph |Ceph: Number of Placement Groups in recovering state |<p>The total number of Placement Groups in a recovering state.</p> |DEPENDENT |ceph.pg_states.recovering<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.recovering`</p> |
+|Ceph |Ceph: Number of Placement Groups in backfill_toofull state |<p>The total number of Placement Groups in a *backfill_toofull* state.</p> |DEPENDENT |ceph.pg_states.backfill_toofull<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.backfill_toofull`</p> |
+|Ceph |Ceph: Number of Placement Groups in backfill_wait state |<p>The total number of Placement Groups in a *backfill_wait* state.</p> |DEPENDENT |ceph.pg_states.backfill_wait<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.backfill_wait`</p> |
+|Ceph |Ceph: Number of Placement Groups in recovery_wait state |<p>The total number of Placement Groups in a *recovery_wait* state.</p> |DEPENDENT |ceph.pg_states.recovery_wait<p>**Preprocessing**:</p><p>- JSONPATH: `$.pg_states.recovery_wait`</p> |
+|Ceph |Ceph: Number of Pools |<p>The total number of pools in a Ceph cluster.</p> |DEPENDENT |ceph.num_pools<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_pools`</p> |
+|Ceph |Ceph: Number of OSDs |<p>The number of the known storage daemons in a Ceph cluster.</p> |DEPENDENT |ceph.num_osd<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_osd`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+|Ceph |Ceph: Number of OSDs in state: UP |<p>The total number of the online storage daemons in a Ceph cluster.</p> |DEPENDENT |ceph.num_osd_up<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_osd_up`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+|Ceph |Ceph: Number of OSDs in state: IN |<p>The total number of the participating storage daemons in a Ceph cluster.</p> |DEPENDENT |ceph.num_osd_in<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_osd_in`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+|Ceph |Ceph: Ceph OSD avg fill |<p>The average fill of OSDs.</p> |DEPENDENT |ceph.osd_fill.avg<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_fill.avg`</p> |
+|Ceph |Ceph: Ceph OSD max fill |<p>The percentage fill of the most filled OSD.</p> |DEPENDENT |ceph.osd_fill.max<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_fill.max`</p> |
+|Ceph |Ceph: Ceph OSD min fill |<p>The percentage fill of the minimum filled OSD.</p> |DEPENDENT |ceph.osd_fill.min<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_fill.min`</p> |
+|Ceph |Ceph: Ceph OSD max PGs |<p>The maximum amount of Placement Groups on OSDs.</p> |DEPENDENT |ceph.osd_pgs.max<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_pgs.max`</p> |
+|Ceph |Ceph: Ceph OSD min PGs |<p>The minimum amount of Placement Groups on OSDs.</p> |DEPENDENT |ceph.osd_pgs.min<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_pgs.min`</p> |
+|Ceph |Ceph: Ceph OSD avg PGs |<p>The average amount of Placement Groups on OSDs.</p> |DEPENDENT |ceph.osd_pgs.avg<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_pgs.avg`</p> |
+|Ceph |Ceph: Ceph OSD Apply latency Avg |<p>The average apply latency of OSDs.</p> |DEPENDENT |ceph.osd_latency_apply.avg<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_latency_apply.avg`</p> |
+|Ceph |Ceph: Ceph OSD Apply latency Max |<p>The maximum apply latency of OSDs.</p> |DEPENDENT |ceph.osd_latency_apply.max<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_latency_apply.max`</p> |
+|Ceph |Ceph: Ceph OSD Apply latency Min |<p>The minimum apply latency of OSDs.</p> |DEPENDENT |ceph.osd_latency_apply.min<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_latency_apply.min`</p> |
+|Ceph |Ceph: Ceph OSD Commit latency Avg |<p>The average commit latency of OSDs.</p> |DEPENDENT |ceph.osd_latency_commit.avg<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_latency_commit.avg`</p> |
+|Ceph |Ceph: Ceph OSD Commit latency Max |<p>The maximum commit latency of OSDs.</p> |DEPENDENT |ceph.osd_latency_commit.max<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_latency_commit.max`</p> |
+|Ceph |Ceph: Ceph OSD Commit latency Min |<p>The minimum commit latency of OSDs.</p> |DEPENDENT |ceph.osd_latency_commit.min<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_latency_commit.min`</p> |
+|Ceph |Ceph: Ceph backfill full ratio |<p>The backfill full ratio setting of the Ceph cluster as configured on OSDMap.</p> |DEPENDENT |ceph.osd_backfillfull_ratio<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_backfillfull_ratio`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+|Ceph |Ceph: Ceph full ratio |<p>The full ratio setting of the Ceph cluster as configured on OSDMap.</p> |DEPENDENT |ceph.osd_full_ratio<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_full_ratio`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+|Ceph |Ceph: Ceph nearfull ratio |<p>The near full ratio setting of the Ceph cluster as configured on OSDMap.</p> |DEPENDENT |ceph.osd_nearfull_ratio<p>**Preprocessing**:</p><p>- JSONPATH: `$.osd_nearfull_ratio`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
|Ceph |Ceph: [osd.{#OSDNAME}] OSD in | |DEPENDENT |ceph.osd[{#OSDNAME},in]<p>**Preprocessing**:</p><p>- JSONPATH: `$.osds.{#OSDNAME}.in`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
|Ceph |Ceph: [osd.{#OSDNAME}] OSD up | |DEPENDENT |ceph.osd[{#OSDNAME},up]<p>**Preprocessing**:</p><p>- JSONPATH: `$.osds.{#OSDNAME}.up`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
|Ceph |Ceph: [osd.{#OSDNAME}] OSD PGs | |DEPENDENT |ceph.osd[{#OSDNAME},num_pgs]<p>**Preprocessing**:</p><p>- JSONPATH: `$.osds.{#OSDNAME}.num_pgs`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p> |
|Ceph |Ceph: [osd.{#OSDNAME}] OSD fill | |DEPENDENT |ceph.osd[{#OSDNAME},fill]<p>**Preprocessing**:</p><p>- JSONPATH: `$.osds.{#OSDNAME}.osd_fill`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p> |
-|Ceph |Ceph: [osd.{#OSDNAME}] OSD latency apply |<p>Time taken to flush an update to disks.</p> |DEPENDENT |ceph.osd[{#OSDNAME},latency_apply]<p>**Preprocessing**:</p><p>- JSONPATH: `$.osds.{#OSDNAME}.osd_latency_apply`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p> |
-|Ceph |Ceph: [osd.{#OSDNAME}] OSD latency commit |<p>Time taken to commit an operation to the journal.</p> |DEPENDENT |ceph.osd[{#OSDNAME},latency_commit]<p>**Preprocessing**:</p><p>- JSONPATH: `$.osds.{#OSDNAME}.osd_latency_commit`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p> |
-|Ceph |Ceph: [{#POOLNAME}] Pool Used |<p>Total bytes used in pool.</p> |DEPENDENT |ceph.pool["{#POOLNAME}",bytes_used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].bytes_used`</p> |
+|Ceph |Ceph: [osd.{#OSDNAME}] OSD latency apply |<p>The time taken to flush an update to disks.</p> |DEPENDENT |ceph.osd[{#OSDNAME},latency_apply]<p>**Preprocessing**:</p><p>- JSONPATH: `$.osds.{#OSDNAME}.osd_latency_apply`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p> |
+|Ceph |Ceph: [osd.{#OSDNAME}] OSD latency commit |<p>The time taken to commit an operation to the journal.</p> |DEPENDENT |ceph.osd[{#OSDNAME},latency_commit]<p>**Preprocessing**:</p><p>- JSONPATH: `$.osds.{#OSDNAME}.osd_latency_commit`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p> |
+|Ceph |Ceph: [{#POOLNAME}] Pool Used |<p>The total bytes used in a pool.</p> |DEPENDENT |ceph.pool["{#POOLNAME}",bytes_used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].bytes_used`</p> |
|Ceph |Ceph: [{#POOLNAME}] Max available |<p>The maximum available space in the given pool.</p> |DEPENDENT |ceph.pool["{#POOLNAME}",max_avail]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].max_avail`</p> |
-|Ceph |Ceph: [{#POOLNAME}] Pool RAW Used |<p>Bytes used in pool including copies made.</p> |DEPENDENT |ceph.pool["{#POOLNAME}",stored_raw]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].stored_raw`</p> |
-|Ceph |Ceph: [{#POOLNAME}] Pool Percent Used |<p>Percentage of storage used per pool</p> |DEPENDENT |ceph.pool["{#POOLNAME}",percent_used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].percent_used`</p> |
-|Ceph |Ceph: [{#POOLNAME}] Pool objects |<p>Number of objects in the pool.</p> |DEPENDENT |ceph.pool["{#POOLNAME}",objects]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].objects`</p> |
-|Ceph |Ceph: [{#POOLNAME}] Pool Read bandwidth |<p>Per-pool read Bytes/second</p> |DEPENDENT |ceph.pool["{#POOLNAME}",rd_bytes.rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].rd_bytes`</p><p>- CHANGE_PER_SECOND</p> |
-|Ceph |Ceph: [{#POOLNAME}] Pool Write bandwidth |<p>Per-pool write Bytes/second</p> |DEPENDENT |ceph.pool["{#POOLNAME}",wr_bytes.rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].wr_bytes`</p><p>- CHANGE_PER_SECOND</p> |
-|Ceph |Ceph: [{#POOLNAME}] Pool Read operations |<p>Per-pool read operations/second</p> |DEPENDENT |ceph.pool["{#POOLNAME}",rd_ops.rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].rd_ops`</p><p>- CHANGE_PER_SECOND</p> |
-|Ceph |Ceph: [{#POOLNAME}] Pool Write operations |<p>Per-pool write operations/second</p> |DEPENDENT |ceph.pool["{#POOLNAME}",wr_ops.rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].wr_ops`</p><p>- CHANGE_PER_SECOND</p> |
+|Ceph |Ceph: [{#POOLNAME}] Pool RAW Used |<p>The bytes used in a pool, including the copies made.</p> |DEPENDENT |ceph.pool["{#POOLNAME}",stored_raw]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].stored_raw`</p> |
+|Ceph |Ceph: [{#POOLNAME}] Pool Percent Used |<p>The percentage of the storage used per pool.</p> |DEPENDENT |ceph.pool["{#POOLNAME}",percent_used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].percent_used`</p> |
+|Ceph |Ceph: [{#POOLNAME}] Pool objects |<p>The number of objects in the pool.</p> |DEPENDENT |ceph.pool["{#POOLNAME}",objects]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].objects`</p> |
+|Ceph |Ceph: [{#POOLNAME}] Pool Read bandwidth |<p>The read rate per pool (bytes per second).</p> |DEPENDENT |ceph.pool["{#POOLNAME}",rd_bytes.rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].rd_bytes`</p><p>- CHANGE_PER_SECOND</p> |
+|Ceph |Ceph: [{#POOLNAME}] Pool Write bandwidth |<p>The write rate per pool (bytes per second).</p> |DEPENDENT |ceph.pool["{#POOLNAME}",wr_bytes.rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].wr_bytes`</p><p>- CHANGE_PER_SECOND</p> |
+|Ceph |Ceph: [{#POOLNAME}] Pool Read operations |<p>The read rate per pool (operations per second).</p> |DEPENDENT |ceph.pool["{#POOLNAME}",rd_ops.rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].rd_ops`</p><p>- CHANGE_PER_SECOND</p> |
+|Ceph |Ceph: [{#POOLNAME}] Pool Write operations |<p>The write rate per pool (operations per second).</p> |DEPENDENT |ceph.pool["{#POOLNAME}",wr_ops.rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pools["{#POOLNAME}"].wr_ops`</p><p>- CHANGE_PER_SECOND</p> |
|Zabbix raw items |Ceph: Get overall cluster status | |ZABBIX_PASSIVE |ceph.status["{$CEPH.CONNSTRING}","{$CEPH.USER}","{$CEPH.API.KEY}"] |
|Zabbix raw items |Ceph: Get OSD stats | |ZABBIX_PASSIVE |ceph.osd.stats["{$CEPH.CONNSTRING}","{$CEPH.USER}","{$CEPH.API.KEY}"] |
|Zabbix raw items |Ceph: Get OSD dump | |ZABBIX_PASSIVE |ceph.osd.dump["{$CEPH.CONNSTRING}","{$CEPH.USER}","{$CEPH.API.KEY}"] |
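
Most of the per-second items above are dependent items that illustrate the bulk collection mentioned in the overview: the raw master items (`ceph.status[...]`, `ceph.osd.stats[...]`, `ceph.osd.dump[...]`) return JSON once, each dependent item extracts a single field with its JSONPATH step, and CHANGE_PER_SECOND converts growing counters such as `rd_bytes` into rates. A minimal sketch of that preprocessing chain (plain Python; the sample values are made up):

```python
import json

def extract(raw_json: str, field: str) -> float:
    # JSONPATH step: pull one top-level field (e.g. "rd_bytes") out of the
    # JSON document returned by the master item.
    return float(json.loads(raw_json)[field])

def change_per_second(prev_value, prev_clock, value, clock):
    # CHANGE_PER_SECOND step: delta of a growing counter divided by the
    # elapsed time between two collections.
    return (value - prev_value) / (clock - prev_clock)

# Two consecutive master-item values, 30 seconds apart (made-up numbers).
sample_t0 = '{"rd_bytes": 1000000}'
sample_t1 = '{"rd_bytes": 4000000}'
rate = change_per_second(extract(sample_t0, "rd_bytes"), 0,
                         extract(sample_t1, "rd_bytes"), 30)
print(rate)  # 100000.0 bytes per second -> value of ceph.rd_bytes.rate
```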
@@ -124,17 +124,17 @@ There are no template links in this template.
|Name|Description|Expression|Severity|Dependencies and additional info|
|----|-----------|----|----|----|
-|Ceph: Can not connect to cluster |<p>Connection to Ceph RESTful module is broken (if there is any error presented including AUTH and configuration issues).</p> |`last(/Ceph by Zabbix agent 2/ceph.ping["{$CEPH.CONNSTRING}","{$CEPH.USER}","{$CEPH.API.KEY}"])=0` |AVERAGE | |
+|Ceph: Can not connect to cluster |<p>The connection to the Ceph RESTful module is broken (if there is any error presented, including *AUTH* and configuration issues).</p> |`last(/Ceph by Zabbix agent 2/ceph.ping["{$CEPH.CONNSTRING}","{$CEPH.USER}","{$CEPH.API.KEY}"])=0` |AVERAGE | |
|Ceph: Cluster in ERROR state |<p>-</p> |`last(/Ceph by Zabbix agent 2/ceph.overall_status)=2` |AVERAGE |<p>Manual close: YES</p> |
|Ceph: Cluster in WARNING state |<p>-</p> |`last(/Ceph by Zabbix agent 2/ceph.overall_status)=1`<p>Recovery expression:</p>`last(/Ceph by Zabbix agent 2/ceph.overall_status)=0` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Ceph: Cluster in ERROR state</p> |
-|Ceph: Minimum monitor release version has changed |<p>Ceph version has changed. Ack to close.</p> |`last(/Ceph by Zabbix agent 2/ceph.min_mon_release_name,#1)<>last(/Ceph by Zabbix agent 2/ceph.min_mon_release_name,#2) and length(last(/Ceph by Zabbix agent 2/ceph.min_mon_release_name))>0` |INFO |<p>Manual close: YES</p> |
-|Ceph: OSD osd.{#OSDNAME} is down |<p>OSD osd.{#OSDNAME} is marked "down" in the osdmap.</p><p>The OSD daemon may have been stopped, or peer OSDs may be unable to reach the OSD over the network.</p> |`last(/Ceph by Zabbix agent 2/ceph.osd[{#OSDNAME},up]) = 0` |AVERAGE | |
+|Ceph: Minimum monitor release version has changed |<p>The Ceph version has changed. Acknowledge the event to close it manually.</p> |`last(/Ceph by Zabbix agent 2/ceph.min_mon_release_name,#1)<>last(/Ceph by Zabbix agent 2/ceph.min_mon_release_name,#2) and length(last(/Ceph by Zabbix agent 2/ceph.min_mon_release_name))>0` |INFO |<p>Manual close: YES</p> |
+|Ceph: OSD osd.{#OSDNAME} is down |<p>OSD osd.{#OSDNAME} is marked "down" in the *osdmap*.</p><p>The OSD daemon may have been stopped, or peer OSDs may be unable to reach the OSD over the network.</p> |`last(/Ceph by Zabbix agent 2/ceph.osd[{#OSDNAME},up]) = 0` |AVERAGE | |
|Ceph: OSD osd.{#OSDNAME} is full |<p>-</p> |`min(/Ceph by Zabbix agent 2/ceph.osd[{#OSDNAME},fill],15m) > last(/Ceph by Zabbix agent 2/ceph.osd_full_ratio)*100` |AVERAGE | |
|Ceph: Ceph OSD osd.{#OSDNAME} is near full |<p>-</p> |`min(/Ceph by Zabbix agent 2/ceph.osd[{#OSDNAME},fill],15m) > last(/Ceph by Zabbix agent 2/ceph.osd_nearfull_ratio)*100` |WARNING |<p>**Depends on**:</p><p>- Ceph: OSD osd.{#OSDNAME} is full</p> |
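
The two OSD capacity triggers above compare the 15-minute minimum of an OSD's fill percentage against the OSDMap ratios scaled to percent. A small sketch of the same comparison (Python; sample values are made up):

```python
def osd_capacity_state(fill_pct_15m_min: float,
                       nearfull_ratio: float,
                       full_ratio: float) -> str:
    # Mirrors the trigger logic: the OSDMap ratios are fractions (e.g. 0.85),
    # while the fill metric is a percentage, hence the *100.
    if fill_pct_15m_min > full_ratio * 100:
        return "full"       # "Ceph: OSD osd.{#OSDNAME} is full"
    if fill_pct_15m_min > nearfull_ratio * 100:
        return "near full"  # "Ceph: Ceph OSD osd.{#OSDNAME} is near full"
    return "ok"

print(osd_capacity_state(88.0, nearfull_ratio=0.85, full_ratio=0.95))  # near full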
## Feedback
-Please report any issues with the template at https://support.zabbix.com
+Please report any issues with the template at https://support.zabbix.com.
-You can also provide feedback, discuss the template or ask for help with it at [ZABBIX forums](https://www.zabbix.com/forum/zabbix-suggestions-and-feedback/410059-discussion-thread-for-official-zabbix-template-ceph).
+You can also provide feedback, discuss the template or ask for help at [ZABBIX forums](https://www.zabbix.com/forum/zabbix-suggestions-and-feedback/410059-discussion-thread-for-official-zabbix-template-ceph).
diff --git a/templates/app/ceph_agent2/template_app_ceph_agent2.yaml b/templates/app/ceph_agent2/template_app_ceph_agent2.yaml
index 72afd6abae9..e61dfd362cc 100644
--- a/templates/app/ceph_agent2/template_app_ceph_agent2.yaml
+++ b/templates/app/ceph_agent2/template_app_ceph_agent2.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '6.4'
- date: '2022-09-20T13:48:25Z'
+ date: '2022-10-18T16:57:55Z'
template_groups:
-
uuid: a571c0d144b14fd4a87a9d9b2aa9fcd6
@@ -64,7 +64,7 @@ zabbix_export:
name: 'Ceph: Minimum monitor release version has changed'
event_name: 'Ceph: Minimum monitor release version has changed (new version: {ITEM.VALUE})'
priority: INFO
- description: 'Ceph version has changed. Ack to close.'
+ description: 'The Ceph version has changed. Acknowledge the event to close it manually.'
manual_close: 'YES'
tags:
-
@@ -77,7 +77,7 @@ zabbix_export:
key: ceph.num_mon
delay: '0'
history: 7d
- description: 'Number of Monitors configured in Ceph cluster'
+ description: 'The number of Monitors configured in a Ceph cluster.'
preprocessing:
-
type: JSONPATH
@@ -100,7 +100,7 @@ zabbix_export:
key: ceph.num_osd
delay: '0'
history: 7d
- description: 'Number of known storage daemons in Ceph cluster'
+ description: 'The number of the known storage daemons in a Ceph cluster.'
preprocessing:
-
type: JSONPATH
@@ -126,7 +126,7 @@ zabbix_export:
key: ceph.num_osd_in
delay: '0'
history: 7d
- description: 'Total number of participating storage daemons in Ceph cluster'
+ description: 'The total number of the participating storage daemons in a Ceph cluster.'
preprocessing:
-
type: JSONPATH
@@ -152,7 +152,7 @@ zabbix_export:
key: ceph.num_osd_up
delay: '0'
history: 7d
- description: 'Total number of online storage daemons in Ceph cluster'
+ description: 'The total number of the online storage daemons in a Ceph cluster.'
preprocessing:
-
type: JSONPATH
@@ -178,7 +178,7 @@ zabbix_export:
key: ceph.num_pg
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in Ceph cluster'
+ description: 'The total number of Placement Groups in a Ceph cluster.'
preprocessing:
-
type: JSONPATH
@@ -204,7 +204,7 @@ zabbix_export:
key: ceph.num_pg_temp
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in pg_temp state'
+ description: 'The total number of Placement Groups in a *pg_temp* state.'
preprocessing:
-
type: JSONPATH
@@ -223,7 +223,7 @@ zabbix_export:
key: ceph.num_pools
delay: '0'
history: 7d
- description: 'Total number of pools in Ceph cluster'
+ description: 'The total number of pools in a Ceph cluster.'
preprocessing:
-
type: JSONPATH
@@ -274,7 +274,7 @@ zabbix_export:
delay: '0'
history: 7d
value_type: FLOAT
- description: 'Backfill full ratio setting of Ceph cluster as configured on OSDMap'
+ description: 'The backfill full ratio setting of the Ceph cluster as configured on OSDMap.'
preprocessing:
-
type: JSONPATH
@@ -299,7 +299,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '%'
- description: 'Average fill of OSDs'
+ description: 'The average fill of OSDs.'
preprocessing:
-
type: JSONPATH
@@ -323,7 +323,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '%'
- description: 'Percentage fill of maximum filled OSD'
+ description: 'The percentage fill of the most filled OSD.'
preprocessing:
-
type: JSONPATH
@@ -347,7 +347,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '%'
- description: 'Percentage fill of minimum filled OSD'
+ description: 'The percentage fill of the minimum filled OSD.'
preprocessing:
-
type: JSONPATH
@@ -370,7 +370,7 @@ zabbix_export:
delay: '0'
history: 7d
value_type: FLOAT
- description: 'Full ratio setting of Ceph cluster as configured on OSDMap'
+ description: 'The full ratio setting of the Ceph cluster as configured on OSDMap.'
preprocessing:
-
type: JSONPATH
@@ -395,7 +395,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: ms
- description: 'Average apply latency of OSDs'
+ description: 'The average apply latency of OSDs.'
preprocessing:
-
type: JSONPATH
@@ -419,7 +419,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: ms
- description: 'Maximum apply latency of OSDs'
+ description: 'The maximum apply latency of OSDs.'
preprocessing:
-
type: JSONPATH
@@ -443,7 +443,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: ms
- description: 'Minimum apply latency of OSDs'
+ description: 'The minimum apply latency of OSDs.'
preprocessing:
-
type: JSONPATH
@@ -467,7 +467,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: ms
- description: 'Average commit latency of OSDs'
+ description: 'The average commit latency of OSDs.'
preprocessing:
-
type: JSONPATH
@@ -491,7 +491,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: ms
- description: 'Maximum commit latency of OSDs'
+ description: 'The maximum commit latency of OSDs.'
preprocessing:
-
type: JSONPATH
@@ -515,7 +515,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: ms
- description: 'Minimum commit latency of OSDs'
+ description: 'The minimum commit latency of OSDs.'
preprocessing:
-
type: JSONPATH
@@ -538,7 +538,7 @@ zabbix_export:
delay: '0'
history: 7d
value_type: FLOAT
- description: 'Near full ratio setting of Ceph cluster as configured on OSDMap'
+ description: 'The near full ratio setting of the Ceph cluster as configured on OSDMap.'
preprocessing:
-
type: JSONPATH
@@ -562,7 +562,7 @@ zabbix_export:
delay: '0'
history: 7d
value_type: FLOAT
- description: 'Average amount of PGs on OSDs'
+ description: 'The average amount of Placement Groups on OSDs.'
preprocessing:
-
type: JSONPATH
@@ -588,7 +588,7 @@ zabbix_export:
delay: '0'
history: 7d
value_type: FLOAT
- description: 'Maximum amount of PGs on OSDs'
+ description: 'The maximum amount of Placement Groups on OSDs.'
preprocessing:
-
type: JSONPATH
@@ -614,7 +614,7 @@ zabbix_export:
delay: '0'
history: 7d
value_type: FLOAT
- description: 'Minimum amount of PGs on OSDs'
+ description: 'The minimum amount of Placement Groups on OSDs.'
preprocessing:
-
type: JSONPATH
@@ -639,7 +639,7 @@ zabbix_export:
key: ceph.overall_status
delay: '0'
history: 7d
- description: 'Overall Ceph cluster status, eg 0 - HEALTH_OK, 1 - HEALTH_WARN or 2 - HEALTH_ERR'
+ description: 'The overall Ceph cluster status, e.g., 0 - HEALTH_OK, 1 - HEALTH_WARN or 2 - HEALTH_ERR.'
valuemap:
name: 'Ceph cluster status'
preprocessing:
@@ -691,7 +691,7 @@ zabbix_export:
key: ceph.pg_states.active
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in active state'
+ description: 'The total number of Placement Groups in an active state.'
preprocessing:
-
type: JSONPATH
@@ -710,7 +710,7 @@ zabbix_export:
key: ceph.pg_states.backfilling
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in backfilling state'
+ description: 'The total number of Placement Groups in a backfilling state.'
preprocessing:
-
type: JSONPATH
@@ -729,7 +729,7 @@ zabbix_export:
key: ceph.pg_states.backfill_toofull
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in backfill_toofull state'
+ description: 'The total number of Placement Groups in a *backfill_toofull* state.'
preprocessing:
-
type: JSONPATH
@@ -748,7 +748,7 @@ zabbix_export:
key: ceph.pg_states.backfill_wait
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in backfill_wait state'
+ description: 'The total number of Placement Groups in a *backfill_wait* state.'
preprocessing:
-
type: JSONPATH
@@ -767,7 +767,7 @@ zabbix_export:
key: ceph.pg_states.clean
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in clean state'
+ description: 'The total number of Placement Groups in a clean state.'
preprocessing:
-
type: JSONPATH
@@ -786,7 +786,7 @@ zabbix_export:
key: ceph.pg_states.degraded
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in degraded state'
+ description: 'The total number of Placement Groups in a degraded state.'
preprocessing:
-
type: JSONPATH
@@ -805,7 +805,7 @@ zabbix_export:
key: ceph.pg_states.inconsistent
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in inconsistent state'
+ description: 'The total number of Placement Groups in an inconsistent state.'
preprocessing:
-
type: JSONPATH
@@ -824,7 +824,7 @@ zabbix_export:
key: ceph.pg_states.peering
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in peering state'
+ description: 'The total number of Placement Groups in a peering state.'
preprocessing:
-
type: JSONPATH
@@ -843,7 +843,7 @@ zabbix_export:
key: ceph.pg_states.recovering
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in recovering state'
+ description: 'The total number of Placement Groups in a recovering state.'
preprocessing:
-
type: JSONPATH
@@ -862,7 +862,7 @@ zabbix_export:
key: ceph.pg_states.recovery_wait
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in recovery_wait state'
+ description: 'The total number of Placement Groups in a *recovery_wait* state.'
preprocessing:
-
type: JSONPATH
@@ -881,7 +881,7 @@ zabbix_export:
key: ceph.pg_states.remapped
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in remapped state'
+ description: 'The total number of Placement Groups in a remapped state.'
preprocessing:
-
type: JSONPATH
@@ -900,7 +900,7 @@ zabbix_export:
key: ceph.pg_states.scrubbing
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in scrubbing state'
+ description: 'The total number of Placement Groups in a scrubbing state.'
preprocessing:
-
type: JSONPATH
@@ -919,7 +919,7 @@ zabbix_export:
key: ceph.pg_states.undersized
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in undersized state'
+ description: 'The total number of Placement Groups in an undersized state.'
preprocessing:
-
type: JSONPATH
@@ -938,7 +938,7 @@ zabbix_export:
key: ceph.pg_states.unknown
delay: '0'
history: 7d
- description: 'Total number of Placement Groups in unknown state'
+ description: 'The total number of Placement Groups in an unknown state.'
preprocessing:
-
type: JSONPATH
@@ -975,7 +975,7 @@ zabbix_export:
expression: 'last(/Ceph by Zabbix agent 2/ceph.ping["{$CEPH.CONNSTRING}","{$CEPH.USER}","{$CEPH.API.KEY}"])=0'
name: 'Ceph: Can not connect to cluster'
priority: AVERAGE
- description: 'Connection to Ceph RESTful module is broken (if there is any error presented including AUTH and configuration issues).'
+ description: 'The connection to the Ceph RESTful module is broken (if there is any error presented, including *AUTH* and configuration issues).'
tags:
-
tag: scope
@@ -989,7 +989,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: Bps
- description: 'Global read Bytes per second'
+ description: 'The global read bytes per second.'
preprocessing:
-
type: JSONPATH
@@ -1017,7 +1017,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: ops
- description: 'Global read operations per second'
+ description: 'The global read operations per second.'
preprocessing:
-
type: JSONPATH
@@ -1055,7 +1055,7 @@ zabbix_export:
delay: '0'
history: 7d
units: B
- description: 'Total bytes available in Ceph cluster'
+ description: 'The total bytes available in a Ceph cluster.'
preprocessing:
-
type: JSONPATH
@@ -1078,7 +1078,7 @@ zabbix_export:
delay: '0'
history: 7d
units: B
- description: 'Total (RAW) capacity of Ceph cluster in bytes'
+ description: 'The total (RAW) capacity of a Ceph cluster in bytes.'
preprocessing:
-
type: JSONPATH
@@ -1100,7 +1100,7 @@ zabbix_export:
key: ceph.total_objects
delay: '0'
history: 7d
- description: 'Total number of objects in Ceph cluster'
+ description: 'The total number of objects in a Ceph cluster.'
preprocessing:
-
type: JSONPATH
@@ -1120,7 +1120,7 @@ zabbix_export:
delay: '0'
history: 7d
units: B
- description: 'Total bytes used in Ceph cluster'
+ description: 'The total bytes used in a Ceph cluster.'
preprocessing:
-
type: JSONPATH
@@ -1144,7 +1144,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: Bps
- description: 'Global write Bytes per second'
+ description: 'The global write bytes per second.'
preprocessing:
-
type: JSONPATH
@@ -1172,7 +1172,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: ops
- description: 'Global write operations per second'
+ description: 'The global write operations per second.'
preprocessing:
-
type: JSONPATH
@@ -1268,7 +1268,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: ms
- description: 'Time taken to flush an update to disks.'
+ description: 'The time taken to flush an update to disks.'
preprocessing:
-
type: JSONPATH
@@ -1299,7 +1299,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: ms
- description: 'Time taken to commit an operation to the journal.'
+ description: 'The time taken to commit an operation to the journal.'
preprocessing:
-
type: JSONPATH
@@ -1387,7 +1387,7 @@ zabbix_export:
name: 'Ceph: OSD osd.{#OSDNAME} is down'
priority: AVERAGE
description: |
- OSD osd.{#OSDNAME} is marked "down" in the osdmap.
+ OSD osd.{#OSDNAME} is marked "down" in the *osdmap*.
The OSD daemon may have been stopped, or peer OSDs may be unable to reach the OSD over the network.
tags:
-
@@ -1446,7 +1446,7 @@ zabbix_export:
delay: '0'
history: 7d
units: B
- description: 'Total bytes used in pool.'
+ description: 'The total bytes used in a pool.'
preprocessing:
-
type: JSONPATH
@@ -1497,7 +1497,7 @@ zabbix_export:
key: 'ceph.pool["{#POOLNAME}",objects]'
delay: '0'
history: 7d
- description: 'Number of objects in the pool.'
+ description: 'The number of objects in the pool.'
preprocessing:
-
type: JSONPATH
@@ -1523,7 +1523,7 @@ zabbix_export:
delay: '0'
history: 7d
units: '%'
- description: 'Percentage of storage used per pool'
+ description: 'The percentage of the storage used per pool.'
preprocessing:
-
type: JSONPATH
@@ -1550,7 +1550,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: Bps
- description: 'Per-pool read Bytes/second'
+ description: 'The read rate per pool (bytes per second).'
preprocessing:
-
type: JSONPATH
@@ -1581,7 +1581,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: ops
- description: 'Per-pool read operations/second'
+ description: 'The read rate per pool (operations per second).'
preprocessing:
-
type: JSONPATH
@@ -1611,7 +1611,7 @@ zabbix_export:
delay: '0'
history: 7d
units: B
- description: 'Bytes used in pool including copies made.'
+ description: 'The bytes used in a pool, including the copies made.'
preprocessing:
-
type: JSONPATH
@@ -1638,7 +1638,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: Bps
- description: 'Per-pool write Bytes/second'
+ description: 'The write rate per pool (bytes per second).'
preprocessing:
-
type: JSONPATH
@@ -1669,7 +1669,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: ops
- description: 'Per-pool write operations/second'
+ description: 'The write rate per pool (operations per second).'
preprocessing:
-
type: JSONPATH