github.com/zabbix/zabbix.git
author    Denis Rasihov <denis.rasihov@zabbix.com> 2022-06-16 11:09:29 +0300
committer Denis Rasihov <denis.rasihov@zabbix.com> 2022-06-16 11:09:29 +0300
commit    09537fa8f5bcf38eae5262f95bdc9b45f3c26b44 (patch)
tree      2c1e5ef5d5f98cc76f553bb4d424106621381706
parent    a30355a659273969ee48c4a33ebaa725d580af5d (diff)
parent    fcc77aef72f063d81d79566aba07dc335fcbbfa0 (diff)
.........T [ZBX-21199] fixed space utilization items in HPE MSA 2040 and 2060 templates
* commit 'fcc77aef72f063d81d79566aba07dc335fcbbfa0':
  .........T [ZBX-21199] updated documentation in HPE MSA 2040 and 2060 templates
  .........T [ZBX-21199] fixed space utilization items in HPE MSA 2040 and 2060 templates
-rw-r--r-- ChangeLog.d/bugfix/ZBX-21199                                       |  1
-rw-r--r-- templates/san/hpe_msa2040_http/README.md                           | 22
-rw-r--r-- templates/san/hpe_msa2040_http/template_san_hpe_msa2040_http.yaml  | 24
-rw-r--r-- templates/san/hpe_msa2060_http/README.md                           | 22
-rw-r--r-- templates/san/hpe_msa2060_http/template_san_hpe_msa2060_http.yaml  | 24
5 files changed, 47 insertions(+), 46 deletions(-)
diff --git a/ChangeLog.d/bugfix/ZBX-21199 b/ChangeLog.d/bugfix/ZBX-21199
new file mode 100644
index 00000000000..77db1297f09
--- /dev/null
+++ b/ChangeLog.d/bugfix/ZBX-21199
@@ -0,0 +1 @@
+.........T [ZBX-21199] fixed space utilization items in HPE MSA 2040 and 2060 templates (drasihov)
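The bugfix recorded above swaps the calculated item's expression from `last(free)/last(total)*100` to `100-last(free)/last(total)*100`. A minimal Python sketch, using made-up byte figures (a disk group with 2 TiB free of 8 TiB total), shows why the original expression reported the *free*-space percentage rather than utilization:

```python
# Hypothetical values for illustration only; the real items receive these
# from the MSA API (block counts multiplied by 512 in preprocessing).
free = 512 * 2**31   # free space, bytes
total = 512 * 2**33  # total capacity, bytes

# Old expression: last(free)/last(total)*100 -- this is the FREE percentage.
old_result = free / total * 100

# Fixed expression: 100 - last(free)/last(total)*100 -- the USED percentage.
new_result = 100 - free / total * 100

print(old_result, new_result)  # 25.0 75.0
```

With 25% of the space free, the old item reported 25% "utilization", so the `PUSED.MAX.WARN`/`PUSED.MAX.CRIT` triggers compared thresholds against the wrong quantity.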
diff --git a/templates/san/hpe_msa2040_http/README.md b/templates/san/hpe_msa2040_http/README.md
index 76c91230462..e76f83048c6 100644
--- a/templates/san/hpe_msa2040_http/README.md
+++ b/templates/san/hpe_msa2040_http/README.md
@@ -5,7 +5,7 @@
For Zabbix version: 6.0 and higher
The template to monitor HPE MSA 2040 by HTTP.
-It works without any external scripts and uses the script items.
+It works without any external scripts and uses the script item.
This template was tested on:
@@ -16,9 +16,9 @@ This template was tested on:
> See [Zabbix template operation](https://www.zabbix.com/documentation/6.0/manual/config/templates_out_of_the_box/http) for basic instructions.
-1. Create user "zabbix" on the storage with monitor role.
-2. Link template to the host.
-3. Configure {$HPE.MSA.API.PASSWORD} and an interface with address through which one API is accessible if not specified.
+1. Create user "zabbix" with monitor role on the storage.
+2. Link the template to a host.
+3. Configure {$HPE.MSA.API.PASSWORD} and an interface with address through which API is accessible.
4. Change {$HPE.MSA.API.SCHEME} and {$HPE.MSA.API.PORT} macros if needed.
@@ -32,14 +32,14 @@ No specific Zabbix configuration is required.
|----|-----------|-------|
|{$HPE.MSA.API.PASSWORD} |<p>Specify password for API.</p> |`` |
|{$HPE.MSA.API.PORT} |<p>Connection port for API.</p> |`443` |
-|{$HPE.MSA.API.SCHEME} |<p>Connection scheme timeout for API.</p> |`https` |
+|{$HPE.MSA.API.SCHEME} |<p>Connection scheme for API.</p> |`https` |
|{$HPE.MSA.API.USERNAME} |<p>Specify user name for API.</p> |`zabbix` |
|{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT} |<p>The critical threshold of the CPU utilization in %.</p> |`90` |
|{$HPE.MSA.DATA.TIMEOUT} |<p>Response timeout for API.</p> |`30s` |
-|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT} |<p>The critical threshold of the disk group space utilization in percent.</p> |`90` |
-|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN} |<p>The warning threshold of the disk group space utilization in percent.</p> |`80` |
-|{$HPE.MSA.POOL.PUSED.MAX.CRIT} |<p>The critical threshold of the pool space utilization in percent.</p> |`90` |
-|{$HPE.MSA.POOL.PUSED.MAX.WARN} |<p>The warning threshold of the pool space utilization in percent.</p> |`80` |
+|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT} |<p>The critical threshold of the disk group space utilization in %.</p> |`90` |
+|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN} |<p>The warning threshold of the disk group space utilization in %.</p> |`80` |
+|{$HPE.MSA.POOL.PUSED.MAX.CRIT} |<p>The critical threshold of the pool space utilization in %.</p> |`90` |
+|{$HPE.MSA.POOL.PUSED.MAX.WARN} |<p>The warning threshold of the pool space utilization in %.</p> |`80` |
## Template links
@@ -101,7 +101,7 @@ There are no template links in this template.
|HPE |Disk group [{#NAME}]: Health |<p>Disk group health.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Disk group [{#NAME}]: Space free |<p>The free space in the disk group.</p> |DEPENDENT |hpe.msa.disks.groups.space["{#NAME}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['freespace-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
|HPE |Disk group [{#NAME}]: Space total |<p>The capacity of the disk group.</p> |DEPENDENT |hpe.msa.disks.groups.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
-|HPE |Disk group [{#NAME}]: Space utilization |<p>The space utilization percentage in the disk group.</p> |CALCULATED |hpe.msa.disks.groups.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100` |
+|HPE |Disk group [{#NAME}]: Space utilization |<p>The space utilization percentage in the disk group.</p> |CALCULATED |hpe.msa.disks.groups.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`100-last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100` |
|HPE |Disk group [{#NAME}]: RAID type |<p>The RAID level of the disk group.</p> |DEPENDENT |hpe.msa.disks.groups.raid["{#NAME}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['raidtype-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Disk group [{#NAME}]: Status |<p>The status of the disk group:</p><p>- CRIT: Critical. The disk group is online but isn't fault tolerant because some of its disks are down.</p><p>- DMGD: Damaged. The disk group is online and fault tolerant, but some of its disks are damaged.</p><p>- FTDN: Fault tolerant with a down disk. The disk group is online and fault tolerant, but some of its disks are down.</p><p>- FTOL: Fault tolerant.</p><p>- MSNG: Missing. The disk group is online and fault tolerant, but some of its disks are missing.</p><p>- OFFL: Offline. Either the disk group is using offline initialization, or its disks are down and data may be lost.</p><p>- QTCR: Quarantined critical. The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online, or if 60 seconds after being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p><p>- QTDN: Quarantined with a down disk. The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online, or if 60 seconds after being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p><p>- QTOF: Quarantined offline. The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group.</p><p>- QTUN: Quarantined unsupported. The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups.</p><p>- STOP: The disk group is stopped.</p><p>- UNKN: Unknown.</p><p>- UP: Up. The disk group is online and does not have fault-tolerant attributes.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Disk group [{#NAME}]: IOPS, total rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.disks.groups.iops.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['iops'].first()`</p> |
@@ -116,7 +116,7 @@ There are no template links in this template.
|HPE |Pool [{#NAME}]: Health |<p>Pool health.</p> |DEPENDENT |hpe.msa.pools["{#NAME}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools'][?(@['name'] == "{#NAME}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Pool [{#NAME}]: Space free |<p>The free space in the pool.</p> |DEPENDENT |hpe.msa.pools.space["{#NAME}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools'][?(@['name'] == "{#NAME}")].['total-avail-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
|HPE |Pool [{#NAME}]: Space total |<p>The capacity of the pool.</p> |DEPENDENT |hpe.msa.pools.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools'][?(@['name'] == "{#NAME}")].['total-size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
-|HPE |Pool [{#NAME}]: Space utilization |<p>The space utilization percentage in the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100` |
+|HPE |Pool [{#NAME}]: Space utilization |<p>The space utilization percentage in the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`100-last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100` |
|HPE |Volume [{#NAME}]: Space allocated |<p>The amount of space currently allocated to the volume.</p> |DEPENDENT |hpe.msa.volumes.space["{#NAME}",allocated]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes'][?(@['volume-name'] == "{#NAME}")].['allocated-size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
|HPE |Volume [{#NAME}]: Space total |<p>The capacity of the volume.</p> |DEPENDENT |hpe.msa.volumes.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes'][?(@['volume-name'] == "{#NAME}")].['size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
|HPE |Volume [{#NAME}]: IOPS, total rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.volumes.iops.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['iops'].first()`</p> |
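The dependent items in the table above all follow the same preprocessing chain: a JSONPath filter picks one disk group's field out of the API response, then a `MULTIPLIER: 512` step converts the block count to bytes. A sketch of that chain in plain Python, against a simplified payload (the real MSA response carries many more fields):

```python
# Simplified stand-in for the MSA API response consumed by the script item.
payload = {
    "disk-groups": [
        {"name": "dgA01", "freespace-numeric": 1000000},
        {"name": "dgA02", "freespace-numeric": 2000000},
    ]
}

def free_space_bytes(data, name):
    # JSONPATH: $.['disk-groups'][?(@['name'] == "{#NAME}")].['freespace-numeric'].first()
    value = next(g["freespace-numeric"]
                 for g in data["disk-groups"] if g["name"] == name)
    # MULTIPLIER: 512 -- the API reports space in 512-byte blocks.
    return value * 512

print(free_space_bytes(payload, "dgA01"))  # 512000000
```

The `{#NAME}` LLD macro plays the role of the `name` argument here; each discovered disk group gets its own item with the macro substituted into the JSONPath.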
diff --git a/templates/san/hpe_msa2040_http/template_san_hpe_msa2040_http.yaml b/templates/san/hpe_msa2040_http/template_san_hpe_msa2040_http.yaml
index 4a25843299f..e28b8ae6fd9 100644
--- a/templates/san/hpe_msa2040_http/template_san_hpe_msa2040_http.yaml
+++ b/templates/san/hpe_msa2040_http/template_san_hpe_msa2040_http.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '6.0'
- date: '2022-06-01T12:48:50Z'
+ date: '2022-06-16T07:39:49Z'
groups:
-
uuid: 7c2cb727f85b492d88cd56e17127c64d
@@ -12,12 +12,12 @@ zabbix_export:
name: 'HPE MSA 2040 Storage by HTTP'
description: |
The template to monitor HPE MSA 2040 by HTTP.
- It works without any external scripts and uses the script items.
+ It works without any external scripts and uses the script item.
Setup:
- 1. Create user zabbix on the storage with monitor role.
- 2. Link template to the host.
- 3. Configure {$HPE.MSA.API.PASSWORD} and an interface with address through which one API is accessible if not specified.
+ 1. Create user "zabbix" with monitor role on the storage.
+ 2. Link the template to a host.
+ 3. Configure {$HPE.MSA.API.PASSWORD} and an interface with address through which API is accessible.
4. Change {$HPE.MSA.API.SCHEME} and {$HPE.MSA.API.PORT} macros if needed.
You can discuss this template or leave feedback on our forum https://www.zabbix.com/forum/zabbix-suggestions-and-feedback
@@ -2070,7 +2070,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '%'
- params: 'last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100'
+ params: '100-last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100'
description: 'The space utilization percentage in the disk group.'
preprocessing:
-
@@ -3144,7 +3144,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '%'
- params: 'last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100'
+ params: '100-last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100'
description: 'The space utilization percentage in the pool.'
preprocessing:
-
@@ -4149,7 +4149,7 @@ zabbix_export:
-
macro: '{$HPE.MSA.API.SCHEME}'
value: https
- description: 'Connection scheme timeout for API.'
+ description: 'Connection scheme for API.'
-
macro: '{$HPE.MSA.API.USERNAME}'
value: zabbix
@@ -4165,19 +4165,19 @@ zabbix_export:
-
macro: '{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT}'
value: '90'
- description: 'The critical threshold of the disk group space utilization in percent.'
+ description: 'The critical threshold of the disk group space utilization in %.'
-
macro: '{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN}'
value: '80'
- description: 'The warning threshold of the disk group space utilization in percent.'
+ description: 'The warning threshold of the disk group space utilization in %.'
-
macro: '{$HPE.MSA.POOL.PUSED.MAX.CRIT}'
value: '90'
- description: 'The critical threshold of the pool space utilization in percent.'
+ description: 'The critical threshold of the pool space utilization in %.'
-
macro: '{$HPE.MSA.POOL.PUSED.MAX.WARN}'
value: '80'
- description: 'The warning threshold of the pool space utilization in percent.'
+ description: 'The warning threshold of the pool space utilization in %.'
valuemaps:
-
uuid: 3bb065172c93464c9f5e2e569f523a05
diff --git a/templates/san/hpe_msa2060_http/README.md b/templates/san/hpe_msa2060_http/README.md
index bf077ec9437..4484b0e5b96 100644
--- a/templates/san/hpe_msa2060_http/README.md
+++ b/templates/san/hpe_msa2060_http/README.md
@@ -5,7 +5,7 @@
For Zabbix version: 6.0 and higher
The template to monitor HPE MSA 2060 by HTTP.
-It works without any external scripts and uses the script items.
+It works without any external scripts and uses the script item.
This template was tested on:
@@ -16,9 +16,9 @@ This template was tested on:
> See [Zabbix template operation](https://www.zabbix.com/documentation/6.0/manual/config/templates_out_of_the_box/http) for basic instructions.
-1. Create user "zabbix" on the storage with monitor role.
-2. Link template to the host.
-3. Configure {$HPE.MSA.API.PASSWORD} and an interface with address through which one API is accessible if not specified.
+1. Create user "zabbix" with monitor role on the storage.
+2. Link the template to a host.
+3. Configure {$HPE.MSA.API.PASSWORD} and an interface with address through which API is accessible.
4. Change {$HPE.MSA.API.SCHEME} and {$HPE.MSA.API.PORT} macros if needed.
@@ -32,14 +32,14 @@ No specific Zabbix configuration is required.
|----|-----------|-------|
|{$HPE.MSA.API.PASSWORD} |<p>Specify password for API.</p> |`` |
|{$HPE.MSA.API.PORT} |<p>Connection port for API.</p> |`443` |
-|{$HPE.MSA.API.SCHEME} |<p>Connection scheme timeout for API.</p> |`https` |
+|{$HPE.MSA.API.SCHEME} |<p>Connection scheme for API.</p> |`https` |
|{$HPE.MSA.API.USERNAME} |<p>Specify user name for API.</p> |`zabbix` |
|{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT} |<p>The critical threshold of the CPU utilization in %.</p> |`90` |
|{$HPE.MSA.DATA.TIMEOUT} |<p>Response timeout for API.</p> |`30s` |
-|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT} |<p>The critical threshold of the disk group space utilization in percent.</p> |`90` |
-|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN} |<p>The warning threshold of the disk group space utilization in percent.</p> |`80` |
-|{$HPE.MSA.POOL.PUSED.MAX.CRIT} |<p>The critical threshold of the pool space utilization in percent.</p> |`90` |
-|{$HPE.MSA.POOL.PUSED.MAX.WARN} |<p>The warning threshold of the pool space utilization in percent.</p> |`80` |
+|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT} |<p>The critical threshold of the disk group space utilization in %.</p> |`90` |
+|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN} |<p>The warning threshold of the disk group space utilization in %.</p> |`80` |
+|{$HPE.MSA.POOL.PUSED.MAX.CRIT} |<p>The critical threshold of the pool space utilization in %.</p> |`90` |
+|{$HPE.MSA.POOL.PUSED.MAX.WARN} |<p>The warning threshold of the pool space utilization in %.</p> |`80` |
## Template links
@@ -104,7 +104,7 @@ There are no template links in this template.
|HPE |Disk group [{#NAME}]: Blocks total |<p>Total space in blocks.</p> |DEPENDENT |hpe.msa.disks.groups.blocks["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['blocks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Disk group [{#NAME}]: Space free |<p>The free space in the disk group.</p> |CALCULATED |hpe.msa.disks.groups.space["{#NAME}",free]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.disks.groups.blocks["{#NAME}",size])*last(//hpe.msa.disks.groups.blocks["{#NAME}",free])` |
|HPE |Disk group [{#NAME}]: Space total |<p>The capacity of the disk group.</p> |CALCULATED |hpe.msa.disks.groups.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.disks.groups.blocks["{#NAME}",size])*last(//hpe.msa.disks.groups.blocks["{#NAME}",total])` |
-|HPE |Disk group [{#NAME}]: Space utilization |<p>The space utilization percentage in the disk group.</p> |CALCULATED |hpe.msa.disks.groups.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100` |
+|HPE |Disk group [{#NAME}]: Space utilization |<p>The space utilization percentage in the disk group.</p> |CALCULATED |hpe.msa.disks.groups.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`100-last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100` |
|HPE |Disk group [{#NAME}]: RAID type |<p>The RAID level of the disk group.</p> |DEPENDENT |hpe.msa.disks.groups.raid["{#NAME}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['raidtype-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Disk group [{#NAME}]: Status |<p>The status of the disk group:</p><p>- CRIT: Critical. The disk group is online but isn't fault tolerant because some of its disks are down.</p><p>- DMGD: Damaged. The disk group is online and fault tolerant, but some of its disks are damaged.</p><p>- FTDN: Fault tolerant with a down disk. The disk group is online and fault tolerant, but some of its disks are down.</p><p>- FTOL: Fault tolerant.</p><p>- MSNG: Missing. The disk group is online and fault tolerant, but some of its disks are missing.</p><p>- OFFL: Offline. Either the disk group is using offline initialization, or its disks are down and data may be lost.</p><p>- QTCR: Quarantined critical. The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online, or if 60 seconds after being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p><p>- QTDN: Quarantined with a down disk. The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online, or if 60 seconds after being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p><p>- QTOF: Quarantined offline. The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group.</p><p>- QTUN: Quarantined unsupported. The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups.</p><p>- STOP: The disk group is stopped.</p><p>- UNKN: Unknown.</p><p>- UP: Up. The disk group is online and does not have fault-tolerant attributes.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Disk group [{#NAME}]: IOPS, total rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.disks.groups.iops.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['iops'].first()`</p> |
@@ -122,7 +122,7 @@ There are no template links in this template.
|HPE |Pool [{#NAME}]: Blocks total |<p>Total space in blocks.</p> |DEPENDENT |hpe.msa.pools.blocks["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools'][?(@['name'] == "{#NAME}")].['total-size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Pool [{#NAME}]: Space free |<p>The free space in the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",free]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.pools.blocks["{#NAME}",size])*last(//hpe.msa.pools.blocks["{#NAME}",available])` |
|HPE |Pool [{#NAME}]: Space total |<p>The capacity of the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.pools.blocks["{#NAME}",size])*last(//hpe.msa.pools.blocks["{#NAME}",total])` |
-|HPE |Pool [{#NAME}]: Space utilization |<p>The space utilization percentage in the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100` |
+|HPE |Pool [{#NAME}]: Space utilization |<p>The space utilization percentage in the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`100-last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100` |
|HPE |Volume [{#NAME}]: Blocks size |<p>The size of a block, in bytes.</p> |DEPENDENT |hpe.msa.volumes.blocks["{#NAME}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes'][?(@['volume-name'] == "{#NAME}")].['blocksize'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Volume [{#NAME}]: Blocks allocated |<p>The amount of blocks currently allocated to the volume.</p> |DEPENDENT |hpe.msa.volumes.blocks["{#NAME}",allocated]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes'][?(@['volume-name'] == "{#NAME}")].['allocated-size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Volume [{#NAME}]: Blocks total |<p>Total space in blocks.</p> |DEPENDENT |hpe.msa.volumes.blocks["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes'][?(@['volume-name'] == "{#NAME}")].['blocks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
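Unlike the 2040 template, the 2060 items derive space figures from block counts: "Space free" and "Space total" are themselves CALCULATED items (block size × block count), and the fixed utilization expression is then applied on top. A sketch of that arithmetic with hypothetical block counts (not taken from a real array):

```python
# Hypothetical inputs standing in for the discovered block items.
block_size = 512                 # hpe.msa.pools.blocks["{#NAME}",size]
blocks_total = 8_000_000_000     # hpe.msa.pools.blocks["{#NAME}",total]
blocks_free = 2_000_000_000     # hpe.msa.pools.blocks["{#NAME}",available]

# Space free / Space total, as in the CALCULATED item expressions above.
space_free = block_size * blocks_free
space_total = block_size * blocks_total

# Fixed Space utilization: 100 - last(free)/last(total)*100
util = 100 - space_free / space_total * 100

print(util)  # 75.0
```

With 25% of the blocks available, utilization comes out at 75%, which is the value the `PUSED.MAX.WARN`/`PUSED.MAX.CRIT` macros are meant to threshold.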
diff --git a/templates/san/hpe_msa2060_http/template_san_hpe_msa2060_http.yaml b/templates/san/hpe_msa2060_http/template_san_hpe_msa2060_http.yaml
index 1fc86d7826a..69702938fc4 100644
--- a/templates/san/hpe_msa2060_http/template_san_hpe_msa2060_http.yaml
+++ b/templates/san/hpe_msa2060_http/template_san_hpe_msa2060_http.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '6.0'
- date: '2022-06-01T12:48:57Z'
+ date: '2022-06-16T07:39:55Z'
groups:
-
uuid: 7c2cb727f85b492d88cd56e17127c64d
@@ -12,12 +12,12 @@ zabbix_export:
name: 'HPE MSA 2060 Storage by HTTP'
description: |
The template to monitor HPE MSA 2060 by HTTP.
- It works without any external scripts and uses the script items.
+ It works without any external scripts and uses the script item.
Setup:
- 1. Create user zabbix on the storage with monitor role.
- 2. Link template to the host.
- 3. Configure {$HPE.MSA.API.PASSWORD} and an interface with address through which one API is accessible if not specified.
+ 1. Create user "zabbix" with monitor role on the storage.
+ 2. Link the template to a host.
+ 3. Configure {$HPE.MSA.API.PASSWORD} and an interface with address through which API is accessible.
4. Change {$HPE.MSA.API.SCHEME} and {$HPE.MSA.API.PORT} macros if needed.
You can discuss this template or leave feedback on our forum https://www.zabbix.com/forum/zabbix-suggestions-and-feedback
@@ -2162,7 +2162,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '%'
- params: 'last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100'
+ params: '100-last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100'
description: 'The space utilization percentage in the disk group.'
preprocessing:
-
@@ -3273,7 +3273,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '%'
- params: 'last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100'
+ params: '100-last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100'
description: 'The space utilization percentage in the pool.'
preprocessing:
-
@@ -4300,7 +4300,7 @@ zabbix_export:
-
macro: '{$HPE.MSA.API.SCHEME}'
value: https
- description: 'Connection scheme timeout for API.'
+ description: 'Connection scheme for API.'
-
macro: '{$HPE.MSA.API.USERNAME}'
value: zabbix
@@ -4316,19 +4316,19 @@ zabbix_export:
-
macro: '{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT}'
value: '90'
- description: 'The critical threshold of the disk group space utilization in percent.'
+ description: 'The critical threshold of the disk group space utilization in %.'
-
macro: '{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN}'
value: '80'
- description: 'The warning threshold of the disk group space utilization in percent.'
+ description: 'The warning threshold of the disk group space utilization in %.'
-
macro: '{$HPE.MSA.POOL.PUSED.MAX.CRIT}'
value: '90'
- description: 'The critical threshold of the pool space utilization in percent.'
+ description: 'The critical threshold of the pool space utilization in %.'
-
macro: '{$HPE.MSA.POOL.PUSED.MAX.WARN}'
value: '80'
- description: 'The warning threshold of the pool space utilization in percent.'
+ description: 'The warning threshold of the pool space utilization in %.'
valuemaps:
-
uuid: f7af1259f3c54a5faa040c743d386d1d