github.com/zabbix/zabbix.git
author    Denis Rasihov <denis.rasihov@zabbix.com>  2022-05-13 14:10:42 +0300
committer Denis Rasihov <denis.rasihov@zabbix.com>  2022-05-13 14:10:42 +0300
commit    c4466286aed2a0a5c13bd59c2a5953c05b9ffab2 (patch)
tree      af995957698d7d40603ac1a245588cfa07656e8e /templates
parent    efb8223372133e69023d2c481273f7d6c04cab54 (diff)

    [ZBXNEXT-7630] fixed after review
Diffstat (limited to 'templates')

 templates/san/hpe_msa2040_http/README.md                          |  220
 templates/san/hpe_msa2040_http/template_san_hpe_msa2040_http.yaml | 1990
 templates/san/hpe_msa2060_http/README.md                          |  226
 templates/san/hpe_msa2060_http/template_san_hpe_msa2060_http.yaml | 1966

 4 files changed, 3037 insertions, 1365 deletions
diff --git a/templates/san/hpe_msa2040_http/README.md b/templates/san/hpe_msa2040_http/README.md
index f714ca52d4c..50b25fd3d29 100644
--- a/templates/san/hpe_msa2040_http/README.md
+++ b/templates/san/hpe_msa2040_http/README.md
@@ -10,19 +10,16 @@ It works without any external scripts and uses the script items.
This template was tested on:
-- MSA 2040, version 21.2.8
+- HPE MSA 2040 Storage
## Setup
> See [Zabbix template operation](https://www.zabbix.com/documentation/6.0/manual/config/templates_out_of_the_box/http) for basic instructions.
-1. Create user zabbix on the storage with browse role and enable it for all domains.
-2. The WSAPI server does not start automatically. To enable it:
-- log in to the CLI as Super, Service, or any role granted the wsapi_set right;
-- start the WSAPI server by command: 'startwsapi';
-- to check WSAPI state use command: 'showwsapi'.
-3. Link template to the host.
-4. Configure {$HPE.MSA.API.PASSWORD} and {$HPE.PRIMERA.API.PASSWORD}.
+1. Create user "zabbix" on the storage with monitor role.
+2. Link template to the host.
+3. Configure {$HPE.MSA.API.PASSWORD} and, if not already specified, an interface with the address through which the API is accessible.
+4. Change {$HPE.MSA.API.SCHEME} and {$HPE.MSA.API.PORT} macros if needed.
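A quick way to sanity-check the macro values before linking the template is to build the API login URL they imply. A minimal sketch, assuming the MSA REST login endpoint has the form `/api/login/<sha256("username_password")>` (the endpoint layout and hash input are assumptions; verify against the array's CLI/API reference guide):

```python
import hashlib

def msa_login_url(scheme, host, port, user, password):
    """Build the assumed MSA login URL: /api/login/<sha256 of 'user_password'>."""
    auth_hash = hashlib.sha256(f"{user}_{password}".encode()).hexdigest()
    return f"{scheme}://{host}:{port}/api/login/{auth_hash}"

# Hypothetical values mirroring the template macros:
# {$HPE.MSA.API.SCHEME}, {HOST.CONN}, {$HPE.MSA.API.PORT},
# {$HPE.MSA.API.USERNAME}, {$HPE.MSA.API.PASSWORD}
url = msa_login_url("https", "msa2040.example.com", 443, "zabbix", "secret")
print(url)
```

Fetching that URL (for example with curl) should return a session key on a correctly configured "zabbix" user; a connection failure points at the interface, scheme, or port macros rather than the template itself.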
## Zabbix configuration
@@ -33,15 +30,16 @@ No specific Zabbix configuration is required.
|Name|Description|Default|
|----|-----------|-------|
-|{$HPE.MSA.API.PASSWORD} |<p>Specify password for WSAPI.</p> |`` |
-|{$HPE.MSA.API.PORT} |<p>Connection port for WSAPI.</p> |`443` |
-|{$HPE.MSA.API.SCHEME} |<p>Connection scheme timeout for WSAPI.</p> |`https` |
-|{$HPE.MSA.API.USERNAME} |<p>Specify user name for WSAPI.</p> |`zabbix` |
-|{$HPE.MSA.DATA.TIMEOUT} |<p>Response timeout for WSAPI.</p> |`5s` |
-|{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT} |<p>The critical threshold of the disk group space utilization in percent.</p> |`90` |
-|{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN} |<p>The warning threshold of the disk group space utilization in percent.</p> |`80` |
-|{$HPE.PRIMERA.POOL.PUSED.MAX.CRIT} |<p>The critical threshold of the pool space utilization in percent.</p> |`90` |
-|{$HPE.PRIMERA.POOL.PUSED.MAX.WARN} |<p>The warning threshold of the pool space utilization in percent.</p> |`80` |
+|{$HPE.MSA.API.PASSWORD} |<p>Specify password for API.</p> |`` |
+|{$HPE.MSA.API.PORT} |<p>Connection port for API.</p> |`443` |
+|{$HPE.MSA.API.SCHEME} |<p>Connection scheme for API.</p> |`https` |
+|{$HPE.MSA.API.USERNAME} |<p>Specify user name for API.</p> |`zabbix` |
+|{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT} |<p>The critical threshold of the CPU utilization in %.</p> |`90` |
+|{$HPE.MSA.DATA.TIMEOUT} |<p>Response timeout for API.</p> |`5s` |
+|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT} |<p>The critical threshold of the disk group space utilization in percent.</p> |`90` |
+|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN} |<p>The warning threshold of the disk group space utilization in percent.</p> |`80` |
+|{$HPE.MSA.POOL.PUSED.MAX.CRIT} |<p>The critical threshold of the pool space utilization in percent.</p> |`90` |
+|{$HPE.MSA.POOL.PUSED.MAX.WARN} |<p>The warning threshold of the pool space utilization in percent.</p> |`80` |
## Template links
@@ -52,17 +50,15 @@ There are no template links in this template.
|Name|Description|Type|Key and additional info|
|----|-----------|----|----|
|Controllers discovery |<p>Discover controllers.</p> |DEPENDENT |hpe.msa.controllers.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Controller statistics discovery |<p>Discover controller statistics.</p> |DEPENDENT |hpe.msa.controllers.statistics.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|Disk groups discovery |<p>Discover disk groups.</p> |DEPENDENT |hpe.msa.disks.groups.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Disk group statistics discovery |<p>Discover disk group statistics.</p> |DEPENDENT |hpe.msa.disks.groups.statistics.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|Disks discovery |<p>Discover disks.</p> |DEPENDENT |hpe.msa.disks.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Overrides:**</p><p>SSD life left<br> - {#TYPE} MATCHES_REGEX `8`<br> - ITEM_PROTOTYPE REGEXP `SSD life left` - DISCOVER</p> |
|Enclosures discovery |<p>Discover enclosures.</p> |DEPENDENT |hpe.msa.enclosures.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|Fans discovery |<p>Discover fans.</p> |DEPENDENT |hpe.msa.fans.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|I/O modules discovery |<p>Discover I/O modules.</p> |DEPENDENT |hpe.msa.io_modules.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|Pools discovery |<p>Discover pools.</p> |DEPENDENT |hpe.msa.pools.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|Ports discovery |<p>Discover ports.</p> |DEPENDENT |hpe.msa.ports.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|Power supplies discovery |<p>Discover power supplies.</p> |DEPENDENT |hpe.msa.power_supplies.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|Volumes discovery |<p>Discover volumes.</p> |DEPENDENT |hpe.msa.volumes.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Volume statistics discovery |<p>Discover volume statistics.</p> |DEPENDENT |hpe.msa.volumes.statistics.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
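Every discovery rule above applies `DISCARD_UNCHANGED_HEARTBEAT: 6h`: a repeated identical value is dropped unless the heartbeat interval has elapsed since the last stored value. A minimal sketch of that preprocessing logic (illustrative only, not Zabbix's actual implementation):

```python
import time

class DiscardUnchangedHeartbeat:
    """Keep a value only if it changed, or if the heartbeat interval elapsed."""

    def __init__(self, heartbeat_s):
        self.heartbeat_s = heartbeat_s
        self.last_value = None
        self.last_kept_at = None

    def process(self, value, now=None):
        now = time.time() if now is None else now
        if (self.last_value == value
                and self.last_kept_at is not None
                and now - self.last_kept_at < self.heartbeat_s):
            return None  # discard: unchanged and heartbeat not yet due
        self.last_value, self.last_kept_at = value, now
        return value

pp = DiscardUnchangedHeartbeat(6 * 3600)
print(pp.process("A", now=0))            # "A"  (first value is kept)
print(pp.process("A", now=100))          # None (unchanged, discarded)
print(pp.process("A", now=6 * 3600))     # "A"  (heartbeat elapsed)
print(pp.process("B", now=6 * 3600 + 1)) # "B"  (value changed)
```

This is why a stable LLD payload produces at most one stored discovery value per six hours.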
## Items collected
@@ -76,27 +72,43 @@ There are no template links in this template.
|HPE |Vendor name |<p>The vendor name.</p> |DEPENDENT |hpe.msa.system.vendor_name<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['vendor-name']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |System health |<p>System health status.</p> |DEPENDENT |hpe.msa.system.health<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['health-numeric']`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p> |
|HPE |HPE MSA: Service ping |<p>Check if HTTP/HTTPS service accepts TCP connections.</p> |SIMPLE |net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Controller [{#DURABLE.ID}]: Firmware version |<p>Storage controller firmware version.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",firmware]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['sc-fw'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Controller [{#DURABLE.ID}]: Part number |<p>Part number of the controller.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Controller [{#DURABLE.ID}]: Serial number |<p>Storage controller serial number.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Controller [{#DURABLE.ID}]: Health |<p>Controller health status.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Controller [{#DURABLE.ID}]: Status |<p>Storage controller status.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Controller [{#DURABLE.ID}]: CPU utilization |<p>Percentage of time the CPU is busy, from 0 to 100.</p> |DEPENDENT |hpe.msa.controllers.cpu["{#DURABLE.ID}",util]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['cpu-load'].first()`</p> |
-|HPE |Controller [{#DURABLE.ID}]: IOPS, rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.controllers.iops["{#DURABLE.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['iops'].first()`</p> |
-|HPE |Controller [{#DURABLE.ID}]: Uptime |<p>Number of seconds since the controller was restarted.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",uptime]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['power-on-time'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Firmware version |<p>Storage controller firmware version.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",firmware]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['sc-fw'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Part number |<p>Part number of the controller.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Serial number |<p>Storage controller serial number.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Health |<p>Controller health status.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Status |<p>Storage controller status.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Disks |<p>Number of disks in the storage system.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",disks]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['disks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Pools |<p>Number of pools in the storage system.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",pools]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['number-of-storage-pools'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Disk groups |<p>Number of disk groups in the storage system.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",disk_groups]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['virtual-disks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IP address |<p>Controller network port IP address.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",ip_address]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['ip-address'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache memory size |<p>Controller cache memory size.</p> |DEPENDENT |hpe.msa.controllers.cache["{#CONTROLLER.ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['cache-memory-size'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Write utilization |<p>Percentage of write cache in use, from 0 to 100.</p> |DEPENDENT |hpe.msa.controllers.cache.write["{#CONTROLLER.ID}",util]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['write-cache-used'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Read hits, rate |<p>For the controller that owns the volume, the number of times the block to be read is found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.read.hits["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['read-cache-hits'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Read misses, rate |<p>For the controller that owns the volume, the number of times the block to be read is not found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.read.misses["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['read-cache-misses'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Write hits, rate |<p>For the controller that owns the volume, the number of times the block written to is found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.write.hits["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['write-cache-hits'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Write misses, rate |<p>For the controller that owns the volume, the number of times the block written to is not found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.write.misses["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['write-cache-misses'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: CPU utilization |<p>Percentage of time the CPU is busy, from 0 to 100.</p> |DEPENDENT |hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['cpu-load'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IOPS, total rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.controllers.iops.total["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['iops'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IOPS, read rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.controllers.iops.read["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IOPS, write rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.controllers.iops.write["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Data transfer rate: Total |<p>The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.controllers.data_transfer.total["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['bytes-per-second-numeric'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Data transfer rate: Reads |<p>The data read rate, in bytes per second.</p> |DEPENDENT |hpe.msa.controllers.data_transfer.reads["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['data-read-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Data transfer rate: Writes |<p>The data write rate, in bytes per second.</p> |DEPENDENT |hpe.msa.controllers.data_transfer.writes["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['data-written-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Uptime |<p>Number of seconds since the controller was restarted.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",uptime]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['power-on-time'].first()`</p> |
|HPE |Disk group [{#NAME}]: Disks count |<p>Number of disks in the disk group.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",disk_count]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['diskcount'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Pool space used |<p>The percentage of pool capacity that the disk group occupies.</p> |DEPENDENT |hpe.msa.disks.groups.space["{#NAME}",pool_util]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['pool-percentage'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Disk group [{#NAME}]: Health |<p>Disk group health.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Disk group [{#NAME}]: Space free |<p>The free space in the disk group.</p> |DEPENDENT |hpe.msa.disks.groups.space["{#NAME}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['freespace-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
|HPE |Disk group [{#NAME}]: Space total |<p>The capacity of the disk group.</p> |DEPENDENT |hpe.msa.disks.groups.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
|HPE |Disk group [{#NAME}]: Space utilization |<p>The space utilization percentage in the disk group.</p> |CALCULATED |hpe.msa.disks.groups.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100` |
|HPE |Disk group [{#NAME}]: RAID type |<p>The RAID level of the disk group.</p> |DEPENDENT |hpe.msa.disks.groups.raid["{#NAME}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['raidtype-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Disk group [{#NAME}]: Status |<p>The status of the disk group:</p><p>- CRIT: Critical. The disk group is online but isn't fault tolerant because some of its disks are down.</p><p>- DMGD: Damaged. The disk group is online and fault tolerant, but some of its disks are damaged.</p><p>- FTDN: Fault tolerant with a down disk. The disk group is online and fault tolerant, but some of its disks are down.</p><p>- FTOL: Fault tolerant.</p><p>- MSNG: Missing. The disk group is online and fault tolerant, but some of its disks are missing.</p><p>- OFFL: Offline. Either the disk group is using offline initialization, or its disks are down and data may be lost.</p><p>- QTCR: Quarantined critical. The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online, or if 60 seconds after being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p><p>- QTDN: Quarantined with a down disk. The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online, or if 60 seconds after being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p><p>- QTOF: Quarantined offline. The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group.</p><p>- QTUN: Quarantined unsupported. The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups.</p><p>- STOP: The disk group is stopped.</p><p>- UNKN: Unknown.</p><p>- UP: Up. The disk group is online and does not have fault-tolerant attributes.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Disk group [{#NAME}]: IOPS, rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.disks.groups.iops["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['iops'].first()`</p> |
+|HPE |Disk group [{#NAME}]: IOPS, total rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.disks.groups.iops.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['iops'].first()`</p> |
|HPE |Disk group [{#NAME}]: Average response time: Total |<p>Average response time for read and write operations, calculated over the interval since these statistics were last requested or reset.</p> |DEPENDENT |hpe.msa.disks.groups.avg_rsp_time["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['avg-rsp-time'].first()`</p><p>- MULTIPLIER: `0.000001`</p> |
|HPE |Disk group [{#NAME}]: Average response time: Read |<p>Average response time for all read operations, calculated over the interval since these statistics were last requested or reset.</p> |DEPENDENT |hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['avg-read-rsp-time'].first()`</p><p>- MULTIPLIER: `0.000001`</p> |
|HPE |Disk group [{#NAME}]: Average response time: Write |<p>Average response time for all write operations, calculated over the interval since these statistics were last requested or reset.</p> |DEPENDENT |hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['avg-write-rsp-time'].first()`</p><p>- MULTIPLIER: `0.000001`</p> |
-|HPE |Disk group [{#NAME}]: Reads, rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.disks.groups.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
-|HPE |Disk group [{#NAME}]: Writes, rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.disks.groups.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Disk group [{#NAME}]: IOPS, read rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.disks.groups.iops.read["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Disk group [{#NAME}]: IOPS, write rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.disks.groups.iops.write["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
|HPE |Disk group [{#NAME}]: Data transfer rate: Total |<p>The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.disks.groups.data_transfer.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['bytes-per-second-numeric'].first()`</p> |
|HPE |Disk group [{#NAME}]: Data transfer rate: Reads |<p>The data read rate, in bytes per second.</p> |DEPENDENT |hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['data-read-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
|HPE |Disk group [{#NAME}]: Data transfer rate: Writes |<p>The data write rate, in bytes per second.</p> |DEPENDENT |hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['data-written-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
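Most items above are DEPENDENT items: one script item fetches a bulk JSON array, a JSONPath filter such as `$.[?(@['name'] == "{#NAME}")].['number-of-reads'].first()` picks one object's field out of it, and the rate items then apply CHANGE_PER_SECOND to the raw counter. A rough Python equivalent of those two steps (the sample data is hypothetical):

```python
def jsonpath_first(objects, key, match, field):
    """Rough equivalent of $.[?(@[key] == match)].[field].first()."""
    return next(o[field] for o in objects if o[key] == match)

def change_per_second(prev_value, prev_ts, value, ts):
    """Zabbix CHANGE_PER_SECOND: counter delta divided by time delta."""
    return (value - prev_value) / (ts - prev_ts)

# Hypothetical bulk response for disk-group statistics:
stats = [
    {"name": "dgA01", "number-of-reads": 1_000},
    {"name": "dgB01", "number-of-reads": 5_000},
]

reads = jsonpath_first(stats, "name", "dgA01", "number-of-reads")
print(reads)                                   # 1000
print(change_per_second(1_000, 0, 1_300, 60))  # 5.0 reads per second
```

The same pattern applies with `durable-id` for controllers and `volume-name` for volumes.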
@@ -104,11 +116,11 @@ There are no template links in this template.
|HPE |Pool [{#NAME}]: Space free |<p>The free space in the pool.</p> |DEPENDENT |hpe.msa.pools.space["{#NAME}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['total-avail-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
|HPE |Pool [{#NAME}]: Space total |<p>The capacity of the pool.</p> |DEPENDENT |hpe.msa.pools.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['total-size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
|HPE |Pool [{#NAME}]: Space utilization |<p>The space utilization percentage in the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100` |
-|HPE |Volume [{#NAME}]: Space allocated |<p>The amount of space currently allocated to the volume.</p> |DEPENDENT |hpe.msa.volumes.space["{#NAME}",allocated]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['allocated-size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
-|HPE |Volume [{#NAME}]: Space total |<p>The capacity of the volume.</p> |DEPENDENT |hpe.msa.volumes.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
-|HPE |Volume [{#NAME}]: IOPS, rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.volumes.iops["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['iops'].first()`</p> |
-|HPE |Volume [{#NAME}]: Reads, rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.volumes.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
-|HPE |Volume [{#NAME}]: Writes, rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.volumes.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Space allocated |<p>The amount of space currently allocated to the volume.</p> |DEPENDENT |hpe.msa.volumes.space["{#NAME}",allocated]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['allocated-size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
+|HPE |Volume [{#NAME}]: Space total |<p>The capacity of the volume.</p> |DEPENDENT |hpe.msa.volumes.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
+|HPE |Volume [{#NAME}]: IOPS, total rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.volumes.iops.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['iops'].first()`</p> |
+|HPE |Volume [{#NAME}]: IOPS, read rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.volumes.iops.read["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: IOPS, write rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.volumes.iops.write["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
|HPE |Volume [{#NAME}]: Data transfer rate: Total |<p>The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.volumes.data_transfer.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['bytes-per-second-numeric'].first()`</p> |
|HPE |Volume [{#NAME}]: Data transfer rate: Reads |<p>The data read rate, in bytes per second.</p> |DEPENDENT |hpe.msa.volumes.data_transfer.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['data-read-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
|HPE |Volume [{#NAME}]: Data transfer rate: Writes |<p>The data write rate, in bytes per second.</p> |DEPENDENT |hpe.msa.volumes.data_transfer.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['data-written-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
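The MULTIPLIER preprocessing steps above normalize the API's native units: space counters such as `size-numeric` arrive in 512-byte blocks (`MULTIPLIER: 512`), and average response times arrive in microseconds (`MULTIPLIER: 0.000001`). With hypothetical sample values:

```python
BLOCK_SIZE = 512    # 'size-numeric' style counters count 512-byte blocks
US_TO_S = 0.000001  # 'avg-rsp-time' style counters are in microseconds

size_numeric = 2_097_152  # hypothetical volume size, in blocks
avg_rsp_time_us = 1_250   # hypothetical average response time, in µs

print(size_numeric * BLOCK_SIZE)  # 1073741824 bytes (1 GiB)
print(avg_rsp_time_us * US_TO_S)  # ~0.00125 seconds
```

This is why the stored item values can use byte and second units directly in triggers and graphs.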
@@ -119,42 +131,48 @@ There are no template links in this template.
|HPE |Enclosure [{#DURABLE.ID}]: Health |<p>Enclosure health.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Enclosure [{#DURABLE.ID}]: Status |<p>Enclosure status.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 6`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Enclosure [{#DURABLE.ID}]: Midplane serial number |<p>Midplane serial number.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",midplane_serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['midplane-serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Enclosure [{#DURABLE.ID}]: Part number. |<p>Enclosure part number.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Part number |<p>Enclosure part number.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Enclosure [{#DURABLE.ID}]: Model |<p>Enclosure model.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['model'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Enclosure [{#DURABLE.ID}]: Power |<p>Enclosure power in watts.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",power]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['enclosure-power'].first()`</p> |
-|HPE |Power supply [{#LOCATION}]: Health |<p>Power supply health status.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Power supply [{#LOCATION}]: Status |<p>Power supply status.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Power supply [{#LOCATION}]: Part number. |<p>Power supply part number.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Power supply [{#LOCATION}]: Serial number. |<p>Power supply serial number.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Health |<p>Power supply health status.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Status |<p>Power supply status.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Part number |<p>Power supply part number.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Serial number |<p>Power supply serial number.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Port [{#NAME}]: Health |<p>Port health status.</p> |DEPENDENT |hpe.msa.ports["{#NAME}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['port'] == "{#NAME}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Port [{#NAME}]: Status |<p>Port status.</p> |DEPENDENT |hpe.msa.ports["{#NAME}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['port'] == "{#NAME}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |{#NAME} [{#LOCATION}]: Health |<p>Fan health status.</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |{#NAME} [{#LOCATION}]: Status |<p>Fan status.</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |{#NAME} [{#LOCATION}]: Speed |<p>Fan speed (revolutions per minute).</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",speed]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['speed'].first()`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Health |<p>Disk health status.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Temperature status |<p>Disk temperature status.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",temperature_status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['temperature-status-numeric'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- IN_RANGE: `1 3`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Temperature |<p>Temperature of the disk.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",temperature]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['temperature-numeric'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Type |<p>Disk type:</p><p>SAS: Enterprise SAS spinning disk.</p><p>SAS MDL: Midline SAS spinning disk.</p><p>SSD SAS: SAS solit-state disk.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['description-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk group |<p>If the disk is in a disk group, the disk group name.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",group]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['disk-group'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Storage pool |<p>If the disk is in a pool, the pool name.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",pool]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['storage-pool-name'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Vendor |<p>Disk vendor.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",vendor]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['vendor'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Model |<p>Disk model.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['model'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Serial number |<p>Disk serial number.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Space total |<p>Total size of the disk.</p> |DEPENDENT |hpe.msa.disks.space["{#DURABLE.ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: SSD life left |<p>The percantage of disk life remaining.</p> |DEPENDENT |hpe.msa.disks.ssd["{#DURABLE.ID}",life_left]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['ssd-life-left-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Zabbix raw items |HPE MSA: Get system |<p>-</p> |SCRIPT |hpe.msa.raw.system<p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get controllers |<p>-</p> |SCRIPT |hpe.msa.raw.controllers<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get controller statistics |<p>-</p> |SCRIPT |hpe.msa.raw.controllers.statistics<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get disk groups |<p>-</p> |SCRIPT |hpe.msa.raw.disks.groups<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get disk group statistics |<p>-</p> |SCRIPT |hpe.msa.raw.disks.groups.statistics<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get disks |<p>-</p> |SCRIPT |hpe.msa.raw.disks<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get enclosures |<p>-</p> |SCRIPT |hpe.msa.raw.enclosures<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get fans |<p>-</p> |SCRIPT |hpe.msa.raw.fans<p>**Preprocessing**:</p><p>- JSONPATH: `$.['fans']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get pools |<p>-</p> |SCRIPT |hpe.msa.raw.pools<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get ports |<p>-</p> |SCRIPT |hpe.msa.raw.ports<p>**Preprocessing**:</p><p>- JSONPATH: `$.['ports']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get power supplies |<p>-</p> |SCRIPT |hpe.msa.raw.power_supplies<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get volumes |<p>-</p> |SCRIPT |hpe.msa.raw.volumes<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get volume statistics |<p>-</p> |SCRIPT |hpe.msa.raw.volumes.statistics<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|HPE |Port [{#NAME}]: Type |<p>Port type.</p> |DEPENDENT |hpe.msa.ports["{#NAME}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['port'] == "{#NAME}")].['port-type-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Fan [{#DURABLE.ID}]: Health |<p>Fan health status.</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Fan [{#DURABLE.ID}]: Status |<p>Fan status.</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Fan [{#DURABLE.ID}]: Speed |<p>Fan speed (revolutions per minute).</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",speed]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['speed'].first()`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Health |<p>Disk health status.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Temperature status |<p>Disk temperature status.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",temperature_status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['temperature-status-numeric'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- IN_RANGE: `1 3`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Temperature |<p>Temperature of the disk.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",temperature]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['temperature-numeric'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Type |<p>Disk type:</p><p>SAS: Enterprise SAS spinning disk.</p><p>SAS MDL: Midline SAS spinning disk.</p><p>SSD SAS: SAS solid-state disk.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['description-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Disk group |<p>If the disk is in a disk group, the disk group name.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",group]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['disk-group'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Storage pool |<p>If the disk is in a pool, the pool name.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",pool]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['storage-pool-name'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Vendor |<p>Disk vendor.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",vendor]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['vendor'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Model |<p>Disk model.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['model'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Serial number |<p>Disk serial number.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Space total |<p>Total size of the disk.</p> |DEPENDENT |hpe.msa.disks.space["{#DURABLE.ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
+|HPE |Disk [{#DURABLE.ID}]: SSD life left |<p>The percentage of disk life remaining.</p> |DEPENDENT |hpe.msa.disks.ssd["{#DURABLE.ID}",life_left]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['ssd-life-left-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |I/O module [{#DURABLE.ID}]: Health |<p>I/O module health status.</p> |DEPENDENT |hpe.msa.io_modules["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |I/O module [{#DURABLE.ID}]: Status |<p>I/O module status.</p> |DEPENDENT |hpe.msa.io_modules["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 3`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |I/O module [{#DURABLE.ID}]: Part number |<p>I/O module part number.</p> |DEPENDENT |hpe.msa.io_modules["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |I/O module [{#DURABLE.ID}]: Serial number |<p>I/O module serial number.</p> |DEPENDENT |hpe.msa.io_modules["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|Zabbix raw items |HPE MSA: Get system |<p>General system information.</p> |SCRIPT |hpe.msa.raw.system<p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get controllers |<p>The list of controllers.</p> |SCRIPT |hpe.msa.raw.controllers<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get controller statistics |<p>The list of controller statistics.</p> |SCRIPT |hpe.msa.raw.controllers.statistics<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics']`</p><p>- JAVASCRIPT: `The text is too long. Please see the template.`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get I/O modules |<p>The list of I/O modules.</p> |SCRIPT |hpe.msa.raw.io_modules<p>**Preprocessing**:</p><p>- JSONPATH: `$.['io-modules']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get disk groups |<p>The list of disk groups.</p> |SCRIPT |hpe.msa.raw.disks.groups<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get disk group statistics |<p>The list of disk group statistics.</p> |SCRIPT |hpe.msa.raw.disks.groups.statistics<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get disks |<p>The list of disks.</p> |SCRIPT |hpe.msa.raw.disks<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get enclosures |<p>The list of enclosures.</p> |SCRIPT |hpe.msa.raw.enclosures<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get fans |<p>The list of fans.</p> |SCRIPT |hpe.msa.raw.fans<p>**Preprocessing**:</p><p>- JSONPATH: `$.['fans']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get pools |<p>The list of pools.</p> |SCRIPT |hpe.msa.raw.pools<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get ports |<p>The list of ports.</p> |SCRIPT |hpe.msa.raw.ports<p>**Preprocessing**:</p><p>- JSONPATH: `$.['ports']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get power supplies |<p>The list of power supplies.</p> |SCRIPT |hpe.msa.raw.power_supplies<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get volumes |<p>The list of volumes.</p> |SCRIPT |hpe.msa.raw.volumes<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get volume statistics |<p>The list of volume statistics.</p> |SCRIPT |hpe.msa.raw.volumes.statistics<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
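Most dependent items above share one pattern: filter the raw JSON array by `durable-id`, take the first match's numeric field, and fall back to a custom value (e.g. `4` = unknown) when the lookup fails. A hedged Python equivalent of that JSONPath-plus-`ON_FAIL` logic (the helper itself is hypothetical; field names are taken from the tables above):

```python
def pick_metric(items, durable_id, field, on_fail=4):
    """Mimic JSONPATH $.[?(@['durable-id'] == id)].[field].first()
    with ON_FAIL: CUSTOM_VALUE -> on_fail."""
    for item in items:
        if item.get("durable-id") == durable_id:
            value = item.get(field)
            if value is not None:
                return value
    # ON_FAIL branch: component missing or field absent in the response
    return on_fail

raw = [{"durable-id": "psu_1.1", "health-numeric": 0}]
pick_metric(raw, "psu_1.1", "health-numeric")  # first match found
pick_metric(raw, "psu_9.9", "health-numeric")  # falls back to the custom value
```

The fallback is what keeps health triggers meaningful when a component disappears from the API response instead of reporting a fault.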
## Triggers
@@ -163,18 +181,19 @@ There are no template links in this template.
|System health is in degraded state |<p>System health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.system.health)=1` |WARNING | |
|System health is in fault state |<p>System health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.system.health)=2` |AVERAGE | |
|System health is in unknown state |<p>System health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.system.health)=3` |INFO | |
-|Failed to fetch API data |<p>Zabbix has not received data for items for the last 5 minutes.</p> |`nodata(/HPE MSA 2040 Storage by HTTP/hpe.msa.system.health,5m)=1` |WARNING |<p>**Depends on**:</p><p>- Service is down</p> |
-|Service is down |<p>-</p> |`max(/HPE MSA 2040 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}"],5m)=0` |WARNING | |
-|Controller [{#DURABLE.ID}]: Controller health is in degraded state |<p>Controller health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",health])=1` |WARNING |<p>**Depends on**:</p><p>- Controller [{#DURABLE.ID}]: Controller is down</p> |
-|Controller [{#DURABLE.ID}]: Controller health is in fault state |<p>Controller health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",health])=2` |AVERAGE |<p>**Depends on**:</p><p>- Controller [{#DURABLE.ID}]: Controller is down</p> |
-|Controller [{#DURABLE.ID}]: Controller health is in unknown state |<p>Controller health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",health])=3` |INFO |<p>**Depends on**:</p><p>- Controller [{#DURABLE.ID}]: Controller is down</p> |
-|Controller [{#DURABLE.ID}]: Controller is down |<p>-</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",status])=1` |HIGH | |
-|Controller [{#DURABLE.ID}]: Controller has been restarted |<p>-</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",uptime])<10m` |INFO | |
+|Failed to fetch API data |<p>Zabbix has not received data for items for the last 5 minutes.</p> |`nodata(/HPE MSA 2040 Storage by HTTP/hpe.msa.system.health,5m)=1` |AVERAGE |<p>**Depends on**:</p><p>- Service is down or unavailable</p> |
+|Service is down or unavailable |<p>The HTTP/HTTPS service is down, or a TCP connection cannot be established.</p> |`max(/HPE MSA 2040 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}"],5m)=0` |HIGH | |
+|Controller [{#CONTROLLER.ID}]: Controller health is in degraded state |<p>Controller health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=1` |WARNING |<p>**Depends on**:</p><p>- Controller [{#CONTROLLER.ID}]: Controller is down</p> |
+|Controller [{#CONTROLLER.ID}]: Controller health is in fault state |<p>Controller health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=2` |AVERAGE |<p>**Depends on**:</p><p>- Controller [{#CONTROLLER.ID}]: Controller is down</p> |
+|Controller [{#CONTROLLER.ID}]: Controller health is in unknown state |<p>Controller health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=3` |INFO |<p>**Depends on**:</p><p>- Controller [{#CONTROLLER.ID}]: Controller is down</p> |
+|Controller [{#CONTROLLER.ID}]: Controller is down |<p>The controller is down.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1` |HIGH | |
+|Controller [{#CONTROLLER.ID}]: High CPU utilization |<p>Controller CPU utilization is too high. The system might be slow to respond.</p> |`min(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util],5m)>{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}` |WARNING | |
+|Controller [{#CONTROLLER.ID}]: Controller has been restarted |<p>The controller uptime is less than 10 minutes.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",uptime])<10m` |WARNING | |
|Disk group [{#NAME}]: Disk group health is in degraded state |<p>Disk group health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=1` |WARNING | |
|Disk group [{#NAME}]: Disk group health is in fault state |<p>Disk group health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=2` |AVERAGE | |
|Disk group [{#NAME}]: Disk group health is in unknown state |<p>Disk group health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=3` |INFO | |
-|Disk group [{#NAME}]: Disk group space is low |<p>Disk group is running low on free space (less than {$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}% available).</p> |`min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}` |WARNING |<p>**Depends on**:</p><p>- Disk group [{#NAME}]: Disk group space is critically low</p> |
-|Disk group [{#NAME}]: Disk group space is critically low |<p>Disk group is running low on free space (less than {$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}% available).</p> |`min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group space is low |<p>Disk group is running low on free space (less than {$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}% available).</p> |`min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}` |WARNING |<p>**Depends on**:</p><p>- Disk group [{#NAME}]: Disk group space is critically low</p> |
+|Disk group [{#NAME}]: Disk group space is critically low |<p>Disk group is running low on free space (less than {$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}% available).</p> |`min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}` |AVERAGE | |
|Disk group [{#NAME}]: Disk group is fault tolerant with a down disk |<p>The disk group is online and fault tolerant, but some of its disks are down.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=1` |AVERAGE | |
|Disk group [{#NAME}]: Disk group has damaged disks |<p>The disk group is online and fault tolerant, but some of its disks are damaged.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=9` |AVERAGE | |
|Disk group [{#NAME}]: Disk group has missing disks |<p>The disk group is online and fault tolerant, but some of its disks are missing.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=8` |AVERAGE | |
@@ -188,8 +207,8 @@ There are no template links in this template.
|Pool [{#NAME}]: Pool health is in degraded state |<p>Pool health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=1` |WARNING | |
|Pool [{#NAME}]: Pool health is in fault state |<p>Pool health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=2` |AVERAGE | |
|Pool [{#NAME}]: Pool health is in unknown state |<p>Pool [{#NAME}] health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=3` |INFO | |
-|Pool [{#NAME}]: Pool space is low |<p>Pool is running low on free space (less than {$HPE.PRIMERA.POOL.PUSED.MAX.WARN:"{#NAME}"}% available).</p> |`min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.PRIMERA.POOL.PUSED.MAX.WARN:"{#NAME}"}` |WARNING |<p>**Depends on**:</p><p>- Pool [{#NAME}]: Pool space is critically low</p> |
-|Pool [{#NAME}]: Pool space is critically low |<p>Pool is running low on free space (less than {$HPE.PRIMERA.POOL.PUSED.MAX.CRIT:"{#NAME}"}% available).</p> |`min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.PRIMERA.POOL.PUSED.MAX.CRIT:"{#NAME}"}` |AVERAGE | |
+|Pool [{#NAME}]: Pool space is low |<p>Pool is running low on free space (less than {$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}% available).</p> |`min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}` |WARNING |<p>**Depends on**:</p><p>- Pool [{#NAME}]: Pool space is critically low</p> |
+|Pool [{#NAME}]: Pool space is critically low |<p>Pool is running low on free space (less than {$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}% available).</p> |`min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}` |AVERAGE | |
|Enclosure [{#DURABLE.ID}]: Enclosure health is in degraded state |<p>Enclosure health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=1` |WARNING | |
|Enclosure [{#DURABLE.ID}]: Enclosure health is in fault state |<p>Enclosure health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=2` |AVERAGE | |
|Enclosure [{#DURABLE.ID}]: Enclosure health is in unknown state |<p>Enclosure health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=3` |INFO | |
@@ -197,31 +216,36 @@ There are no template links in this template.
|Enclosure [{#DURABLE.ID}]: Enclosure has warning status |<p>Enclosure has warning status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=3` |WARNING | |
|Enclosure [{#DURABLE.ID}]: Enclosure is unavailable |<p>Enclosure is unavailable.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=7` |HIGH | |
|Enclosure [{#DURABLE.ID}]: Enclosure is unrecoverable |<p>Enclosure is unrecoverable.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=4` |HIGH | |
-|Enclosure [{#DURABLE.ID}]: Enclosure has unknown status |<p>Enclosure has unknown status</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=6` |INFO | |
-|Power supply [{#LOCATION}]: Power supply health is in degraded state |<p>Power supply health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=1` |WARNING | |
-|Power supply [{#LOCATION}]: Power supply health is in fault state |<p>Power supply health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=2` |AVERAGE | |
-|Power supply [{#LOCATION}]: Power supply health is in unknown state |<p>Power supply health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=3` |INFO | |
-|Power supply [{#LOCATION}]: Power supply has error status |<p>Power supply has error status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=2` |AVERAGE | |
-|Power supply [{#LOCATION}]: Power supply has warning status |<p>Power supply has warning status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=1` |WARNING | |
-|Power supply [{#LOCATION}]: Power supply has unknown status |<p>Power supply has unknown status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=4` |INFO | |
+|Enclosure [{#DURABLE.ID}]: Enclosure has unknown status |<p>Enclosure has unknown status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=6` |INFO | |
+|Power supply [{#DURABLE.ID}]: Power supply health is in degraded state |<p>Power supply health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=1` |WARNING | |
+|Power supply [{#DURABLE.ID}]: Power supply health is in fault state |<p>Power supply health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Power supply [{#DURABLE.ID}]: Power supply health is in unknown state |<p>Power supply health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=3` |INFO | |
+|Power supply [{#DURABLE.ID}]: Power supply has error status |<p>Power supply has error status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=2` |AVERAGE | |
+|Power supply [{#DURABLE.ID}]: Power supply has warning status |<p>Power supply has warning status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=1` |WARNING | |
+|Power supply [{#DURABLE.ID}]: Power supply has unknown status |<p>Power supply has unknown status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=4` |INFO | |
|Port [{#NAME}]: Port health is in degraded state |<p>Port health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=1` |WARNING | |
|Port [{#NAME}]: Port health is in fault state |<p>Port health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=2` |AVERAGE | |
|Port [{#NAME}]: Port health is in unknown state |<p>Port health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=3` |INFO | |
|Port [{#NAME}]: Port has error status |<p>Port has error status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=2` |AVERAGE | |
|Port [{#NAME}]: Port has warning status |<p>Port has warning status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=1` |WARNING | |
|Port [{#NAME}]: Port has unknown status |<p>Port has unknown status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=4` |INFO | |
-|{#NAME} [{#LOCATION}]: Fan health is in degraded state |<p>Fan health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=1` |WARNING | |
-|{#NAME} [{#LOCATION}]: Fan health is in fault state |<p>Fan health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=2` |AVERAGE | |
-|{#NAME} [{#LOCATION}]: Fan health is in unknown state |<p>Fan health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=3` |INFO | |
-|{#NAME} [{#LOCATION}]: Fan has error status |<p>Fan has error status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=1` |AVERAGE | |
-|{#NAME} [{#LOCATION}]: Fan is missing |<p>Fan is missing.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=3` |INFO | |
-|{#NAME} [{#LOCATION}]: Fan is off |<p>Fan is off.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=2` |WARNING | |
-|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk health is in degraded state |<p>Disk health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=1` |WARNING | |
-|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk health is in fault state |<p>Disk health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=2` |AVERAGE | |
-|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk health is in unknown state |<p>Disk health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=3` |INFO | |
-|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk temperature is high |<p>Disk temperature is high.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=3` |WARNING | |
-|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk temperature is critically high |<p>Disk temperature is critically high.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=2` |AVERAGE | |
-|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk temperature is unknown |<p>Disk temperature is unknown.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=4` |INFO | |
+|Fan [{#DURABLE.ID}]: Fan health is in degraded state |<p>Fan health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=1` |WARNING | |
+|Fan [{#DURABLE.ID}]: Fan health is in fault state |<p>Fan health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Fan [{#DURABLE.ID}]: Fan health is in unknown state |<p>Fan health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=3` |INFO | |
+|Fan [{#DURABLE.ID}]: Fan has error status |<p>Fan has error status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=1` |AVERAGE | |
+|Fan [{#DURABLE.ID}]: Fan is missing |<p>Fan is missing.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=3` |INFO | |
+|Fan [{#DURABLE.ID}]: Fan is off |<p>Fan is off.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=2` |WARNING | |
+|Disk [{#DURABLE.ID}]: Disk health is in degraded state |<p>Disk health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=1` |WARNING | |
+|Disk [{#DURABLE.ID}]: Disk health is in fault state |<p>Disk health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Disk [{#DURABLE.ID}]: Disk health is in unknown state |<p>Disk health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=3` |INFO | |
+|Disk [{#DURABLE.ID}]: Disk temperature is high |<p>Disk temperature is high.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=3` |WARNING | |
+|Disk [{#DURABLE.ID}]: Disk temperature is critically high |<p>Disk temperature is critically high.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=2` |AVERAGE | |
+|Disk [{#DURABLE.ID}]: Disk temperature is unknown |<p>Disk temperature is unknown.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=4` |INFO | |
+|I/O module [{#DURABLE.ID}]: I/O module health is in degraded state |<p>I/O module health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",health])=1` |WARNING | |
+|I/O module [{#DURABLE.ID}]: I/O module health is in fault state |<p>I/O module health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|I/O module [{#DURABLE.ID}]: I/O module health is in unknown state |<p>I/O module health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",health])=3` |INFO | |
+|I/O module [{#DURABLE.ID}]: I/O module is down |<p>I/O module is down.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",status])=1` |AVERAGE | |
+|I/O module [{#DURABLE.ID}]: I/O module has unknown status |<p>I/O module has unknown status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",status])=3` |INFO | |
## Feedback
diff --git a/templates/san/hpe_msa2040_http/template_san_hpe_msa2040_http.yaml b/templates/san/hpe_msa2040_http/template_san_hpe_msa2040_http.yaml
index 25c3071ada2..5800b981af4 100644
--- a/templates/san/hpe_msa2040_http/template_san_hpe_msa2040_http.yaml
+++ b/templates/san/hpe_msa2040_http/template_san_hpe_msa2040_http.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '6.0'
- date: '2022-05-11T05:26:40Z'
+ date: '2022-05-13T11:03:57Z'
groups:
-
uuid: 7c2cb727f85b492d88cd56e17127c64d
@@ -15,13 +15,10 @@ zabbix_export:
It works without any external scripts and uses the script items.
Setup:
- 1. Create user zabbix on the storage with browse role and enable it for all domains.
- 2. The WSAPI server does not start automatically. To enable it:
- - log in to the CLI as Super, Service, or any role granted the wsapi_set right;
- - start the WSAPI server by command: 'startwsapi';
- - to check WSAPI state use command: 'showwsapi'.
- 3. Link template to the host.
- 4. Configure {$HPE.MSA.API.PASSWORD} and {$HPE.PRIMERA.API.PASSWORD}.
+ 1. Create user zabbix on the storage with monitor role.
+ 2. Link template to the host.
+ 3. Configure {$HPE.MSA.API.PASSWORD} and, if not already specified, an interface with the address through which the API is accessible.
+ 4. Change {$HPE.MSA.API.SCHEME} and {$HPE.MSA.API.PORT} macros if needed.
You can discuss this template or leave feedback on our forum https://www.zabbix.com/forum/zabbix-suggestions-and-feedback
@@ -135,6 +132,7 @@ zabbix_export:
}
return JSON.stringify(result);
+ description: 'The list of controllers.'
preprocessing:
-
type: JSONPATH
@@ -263,11 +261,22 @@ zabbix_export:
}
return JSON.stringify(result);
+ description: 'The list of controllers statistics.'
preprocessing:
-
type: JSONPATH
parameters:
- '$.[''controller-statistics'']'
+ -
+ type: JAVASCRIPT
+ parameters:
+ - |
+ var result = [];
+ JSON.parse(value).forEach(function (key) {
+ key["durable-id"] = key["durable-id"].toLowerCase();
+ result.push(key);
+ });
+ return JSON.stringify(result);
timeout: '{$HPE.MSA.DATA.TIMEOUT}'
parameters:
-
@@ -391,6 +400,7 @@ zabbix_export:
}
return JSON.stringify(result);
+ description: 'The list of disks.'
preprocessing:
-
type: JSONPATH
@@ -519,6 +529,7 @@ zabbix_export:
}
return JSON.stringify(result);
+ description: 'The list of disk groups.'
preprocessing:
-
type: JSONPATH
@@ -647,6 +658,7 @@ zabbix_export:
}
return JSON.stringify(result);
+ description: 'The list of disk groups statistics.'
preprocessing:
-
type: JSONPATH
@@ -775,6 +787,7 @@ zabbix_export:
}
return JSON.stringify(result);
+ description: 'The list of enclosures.'
preprocessing:
-
type: JSONPATH
@@ -903,6 +916,7 @@ zabbix_export:
}
return JSON.stringify(result);
+ description: 'The list of fans.'
preprocessing:
-
type: JSONPATH
@@ -927,6 +941,96 @@ zabbix_export:
tag: component
value: raw
-
+ uuid: c6e8ad1dbc7f442eb003c738c07819cf
+ name: 'HPE MSA: Get I/O modules'
+ type: SCRIPT
+ key: hpe.msa.raw.io_modules
+ history: '0'
+ trends: '0'
+ value_type: TEXT
+ params: |
+ var params = JSON.parse(value),
+ fields = ['username', 'password', 'method', 'base_url'],
+ result = {};
+
+ fields.forEach(function (field) {
+ if (typeof params !== 'object' || typeof params[field] === 'undefined' || params[field] === '' ) {
+ throw 'Required param is not set: "' + field + '".';
+ }
+ });
+
+ if (!params.base_url.endsWith('/')) {
+ params.base_url += '/';
+ }
+
+ var response, request = new HttpRequest();
+ request.addHeader('datatype: json');
+
+ var auth_string = sha256(params.username + '_' + params.password);
+
+ response = request.get(params.base_url + 'api/login/' + auth_string);
+
+ if (request.getStatus() < 200 || request.getStatus() >= 300) {
+ throw 'Request failed with status code ' + request.getStatus() + ': ' + response;
+ }
+
+ if (response !== null) {
+ try {
+ auth_data = JSON.parse(response);
+ }
+ catch (error) {
+ throw 'Failed to parse auth response received from device API. Check debug log for more information.';
+ }
+ }
+
+ sessionKey = auth_data['status'][0]['response'];
+
+ request = new HttpRequest();
+ request.addHeader('sessionKey: ' + sessionKey);
+ request.addHeader('datatype: json');
+
+ response = request.get(params.base_url + 'api/show/' + params.method);
+
+
+ if (request.getStatus() < 200 || request.getStatus() >= 300) {
+ throw 'Request failed with status code ' + request.getStatus() + ': ' + response;
+ }
+
+ if (response !== null) {
+ try {
+ result = JSON.parse(response);
+ }
+ catch (error) {
+ throw 'Failed to parse response received from device API. Check debug log for more information.';
+ }
+ }
+
+ return response;
+ description: 'The list of I/O modules.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''io-modules'']'
+ timeout: '{$HPE.MSA.DATA.TIMEOUT}'
+ parameters:
+ -
+ name: base_url
+ value: '{$HPE.MSA.API.SCHEME}://{HOST.CONN}:{$HPE.MSA.API.PORT}/'
+ -
+ name: method
+ value: io-modules
+ -
+ name: username
+ value: '{$HPE.MSA.API.USERNAME}'
+ -
+ name: password
+ value: '{$HPE.MSA.API.PASSWORD}'
+ tags:
+ -
+ tag: component
+ value: raw
+ -
uuid: 0f86482556334c7aa1988d39853b9873
name: 'HPE MSA: Get pools'
type: SCRIPT
@@ -1031,6 +1135,7 @@ zabbix_export:
}
return JSON.stringify(result);
+ description: 'The list of pools.'
preprocessing:
-
type: JSONPATH
@@ -1159,6 +1264,7 @@ zabbix_export:
}
return JSON.stringify(result);
+ description: 'The list of ports.'
preprocessing:
-
type: JSONPATH
@@ -1287,6 +1393,7 @@ zabbix_export:
}
return JSON.stringify(result);
+ description: 'The list of power supplies.'
preprocessing:
-
type: JSONPATH
@@ -1415,6 +1522,7 @@ zabbix_export:
}
return JSON.stringify(result);
+ description: 'General system information.'
timeout: '{$HPE.MSA.DATA.TIMEOUT}'
parameters:
-
@@ -1538,6 +1646,7 @@ zabbix_export:
}
return JSON.stringify(result);
+ description: 'The list of volumes.'
preprocessing:
-
type: JSONPATH
@@ -1666,6 +1775,7 @@ zabbix_export:
}
return JSON.stringify(result);
+ description: 'The list of volumes statistics.'
preprocessing:
-
type: JSONPATH
@@ -1747,11 +1857,11 @@ zabbix_export:
expression: 'nodata(/HPE MSA 2040 Storage by HTTP/hpe.msa.system.health,5m)=1'
name: 'Failed to fetch API data'
event_name: 'Failed to fetch API data (or no data for 5m)'
- priority: WARNING
+ priority: AVERAGE
description: 'Zabbix has not received data for items for the last 5 minutes.'
dependencies:
-
- name: 'Service is down'
+ name: 'Service is down or unavailable'
expression: 'max(/HPE MSA 2040 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}"],5m)=0'
tags:
-
@@ -1940,8 +2050,9 @@ zabbix_export:
-
uuid: b8d07373a0fb4051a0534891b255994a
expression: 'max(/HPE MSA 2040 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}"],5m)=0'
- name: 'Service is down'
- priority: WARNING
+ name: 'Service is down or unavailable'
+ priority: HIGH
+ description: 'HTTP/HTTPS service is down or unable to establish TCP connection.'
tags:
-
tag: scope
@@ -1956,10 +2067,413 @@ zabbix_export:
description: 'Discover controllers.'
item_prototypes:
-
+ uuid: 53b0ea51add74c629814c881ac824d1b
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Read hits, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.read.hits["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block to be read is found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''read-cache-hits''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
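Each dependent item prototype above picks its field out of the raw statistics array with a JSONPath filter on `durable-id` (the trailing `.first()` is Zabbix's extension for taking the first match). A plain-JS equivalent of that selection, with made-up sample data, for readers tracing the preprocessing:

```javascript
// Emulates the Zabbix JSONPath filter used by the dependent items:
//   $.[?(@['durable-id'] == "<id>")].['<field>'].first()
// The stats array below is hypothetical sample data.
function pluck(stats, durableId, field) {
    const matches = stats.filter(function (entry) {
        return entry['durable-id'] === durableId;
    });
    return matches.length ? matches[0][field] : undefined;
}

const stats = [
    { 'durable-id': 'controller_a', 'read-cache-hits': '12345' },
    { 'durable-id': 'controller_b', 'read-cache-hits': '678' }
];

console.log(pluck(stats, 'controller_a', 'read-cache-hits')); // '12345'
```

Counter fields selected this way (e.g. `read-cache-hits`) are cumulative, which is why the items then apply a CHANGE_PER_SECOND step to turn them into rates.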
+ -
+ uuid: 23ed270bc823484cb514600bf23b2aa5
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Read misses, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.read.misses["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block to be read is not found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''read-cache-misses''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 71a92c76ae7740cd9e58ea337f4a75e3
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write hits, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.write.hits["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block written to is found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''write-cache-hits''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: bafcf98cee9c4a8da0aea7b39a5242d4
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write misses, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.write.misses["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block written to is not found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''write-cache-misses''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: fa9400f2dcba40f4b57dfcef6f7856a0
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write utilization'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.write["{#CONTROLLER.ID}",util]'
+ delay: '0'
+ history: 7d
+ units: '%'
+ description: 'Percentage of write cache in use, from 0 to 100.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''write-cache-used''].first()'
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 38a6ca0447d548c593d08acf377250cb
+ name: 'Controller [{#CONTROLLER.ID}]: Cache memory size'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache["{#CONTROLLER.ID}",total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Controller cache memory size.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''cache-memory-size''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.msa.raw.controllers
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: cfff8c77d99440d18794e1c6dbf738ad
+ name: 'Controller [{#CONTROLLER.ID}]: CPU utilization'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util]'
+ delay: '0'
+ history: 7d
+ units: '%'
+ description: 'Percentage of time the CPU is busy, from 0 to 100.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''cpu-load''].first()'
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ trigger_prototypes:
+ -
+ uuid: b94f1cfd6e6a48f8a18c644532b7a9c8
+ expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util],5m)>{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}'
+ name: 'Controller [{#CONTROLLER.ID}]: High CPU utilization'
+ event_name: 'Controller [{#CONTROLLER.ID}]: High CPU utilization (over {$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}% for 5m)'
+ priority: WARNING
+ description: 'Controller CPU utilization is too high. The system might be slow to respond.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: c87dc81f4a3447f3962a69a8b0d79769
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate: Reads'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.data_transfer.reads["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data read rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''data-read-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 7c34d1c4fd784fb695d9fc7c5a686329
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate: Total'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.data_transfer.total["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ units: Bps
+ description: 'The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''bytes-per-second-numeric''].first()'
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 93b508f92de04dfbbfe7099bf37796ce
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate: Writes'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.data_transfer.writes["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data write rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''data-written-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 3d7f1a97cd8249efbabc2402006c1cc2
+ name: 'Controller [{#CONTROLLER.ID}]: IOPS, read rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.iops.read["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!r/s'
+ description: 'Number of read operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''number-of-reads''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 8bf0601293a64628be08d16391d1e11b
+ name: 'Controller [{#CONTROLLER.ID}]: IOPS, total rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.iops.total["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ units: '!iops'
+ description: 'Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''iops''].first()'
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 6444038b72294992ab17c126ccbe7251
+ name: 'Controller [{#CONTROLLER.ID}]: IOPS, write rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.iops.write["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!w/s'
+ description: 'Number of write operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''number-of-writes''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 5940d26205924a13ba351f5d56192fcb
+ name: 'Controller [{#CONTROLLER.ID}]: Disks'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",disks]'
+ delay: '0'
+ history: 7d
+ description: 'Number of disks in the storage system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''disks''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.raw.controllers
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 94c2c9bfd2414875a53fbe94f6230666
+ name: 'Controller [{#CONTROLLER.ID}]: Disk groups'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",disk_groups]'
+ delay: '0'
+ history: 7d
+ description: 'Number of disk groups in the storage system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''virtual-disks''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.raw.controllers
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
uuid: 5a987843b14c4d25a1fde4429015f773
- name: 'Controller [{#DURABLE.ID}]: Firmware version'
+ name: 'Controller [{#CONTROLLER.ID}]: Firmware version'
type: DEPENDENT
- key: 'hpe.msa.controllers["{#DURABLE.ID}",firmware]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",firmware]'
delay: '0'
history: 7d
trends: '0'
@@ -1982,12 +2496,12 @@ zabbix_export:
value: controller
-
tag: controller
- value: '{#DURABLE.ID}'
+ value: '{#CONTROLLER.ID}'
-
uuid: 6d2a84b6b1804082ab4ef3451a52b552
- name: 'Controller [{#DURABLE.ID}]: Health'
+ name: 'Controller [{#CONTROLLER.ID}]: Health'
type: DEPENDENT
- key: 'hpe.msa.controllers["{#DURABLE.ID}",health]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",health]'
delay: '0'
history: 7d
description: 'Controller health status.'
@@ -2015,65 +2529,65 @@ zabbix_export:
value: health
-
tag: controller
- value: '{#DURABLE.ID}'
+ value: '{#CONTROLLER.ID}'
trigger_prototypes:
-
uuid: 381a5fe2adfd4f4ea15763cdf0a1bd0d
- expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",health])=1'
- name: 'Controller [{#DURABLE.ID}]: Controller health is in degraded state'
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=1'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller health is in degraded state'
priority: WARNING
description: 'Controller health is in degraded state.'
dependencies:
-
- name: 'Controller [{#DURABLE.ID}]: Controller is down'
- expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",status])=1'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
tags:
-
tag: scope
value: performance
-
uuid: 2082d12ff9c54a5ea709dba05c14ae00
- expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",health])=2'
- name: 'Controller [{#DURABLE.ID}]: Controller health is in fault state'
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=2'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller health is in fault state'
priority: AVERAGE
description: 'Controller health is in fault state.'
dependencies:
-
- name: 'Controller [{#DURABLE.ID}]: Controller is down'
- expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",status])=1'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
tags:
-
tag: scope
value: availability
-
uuid: 0b2ed99c47a64210b198cc0a3a6b84b5
- expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",health])=3'
- name: 'Controller [{#DURABLE.ID}]: Controller health is in unknown state'
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=3'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller health is in unknown state'
priority: INFO
description: 'Controller health is in unknown state.'
dependencies:
-
- name: 'Controller [{#DURABLE.ID}]: Controller is down'
- expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",status])=1'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
tags:
-
tag: scope
value: notice
-
- uuid: 33e754d5acb84b7c86b2e23b122e6eed
- name: 'Controller [{#DURABLE.ID}]: Part number'
+ uuid: 5f00490ddd22458b93add06ed24a9f96
+ name: 'Controller [{#CONTROLLER.ID}]: IP address'
type: DEPENDENT
- key: 'hpe.msa.controllers["{#DURABLE.ID}",part_number]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",ip_address]'
delay: '0'
history: 7d
trends: '0'
value_type: CHAR
- description: 'Part number of the controller.'
+ description: 'Controller network port IP address.'
preprocessing:
-
type: JSONPATH
parameters:
- - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''part-number''].first()'
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''ip-address''].first()'
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
@@ -2086,22 +2600,22 @@ zabbix_export:
value: controller
-
tag: controller
- value: '{#DURABLE.ID}'
+ value: '{#CONTROLLER.ID}'
-
- uuid: c073adb77eb84cf79e1e1693d9378d47
- name: 'Controller [{#DURABLE.ID}]: Serial number'
+ uuid: 33e754d5acb84b7c86b2e23b122e6eed
+ name: 'Controller [{#CONTROLLER.ID}]: Part number'
type: DEPENDENT
- key: 'hpe.msa.controllers["{#DURABLE.ID}",serial_number]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",part_number]'
delay: '0'
history: 7d
trends: '0'
value_type: CHAR
- description: 'Storage controller serial number.'
+ description: 'Part number of the controller.'
preprocessing:
-
type: JSONPATH
parameters:
- - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''serial-number''].first()'
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''part-number''].first()'
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
@@ -2114,22 +2628,20 @@ zabbix_export:
value: controller
-
tag: controller
- value: '{#DURABLE.ID}'
+ value: '{#CONTROLLER.ID}'
-
- uuid: a2be1b4b814d45b18bb4e313818511d6
- name: 'Controller [{#DURABLE.ID}]: Status'
+ uuid: e4930566c3844f9487e343c203f3eb96
+ name: 'Controller [{#CONTROLLER.ID}]: Pools'
type: DEPENDENT
- key: 'hpe.msa.controllers["{#DURABLE.ID}",status]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",pools]'
delay: '0'
history: 7d
- description: 'Storage controller status.'
- valuemap:
- name: 'Controller status'
+ description: 'Number of pools in the storage system.'
preprocessing:
-
type: JSONPATH
parameters:
- - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''status-numeric''].first()'
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''number-of-storage-pools''].first()'
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
@@ -2141,90 +2653,83 @@ zabbix_export:
tag: component
value: controller
-
- tag: component
- value: health
- -
tag: controller
- value: '{#DURABLE.ID}'
- trigger_prototypes:
- -
- uuid: 1524e80a37cb4b64a7360488e132a433
- expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",status])=1'
- name: 'Controller [{#DURABLE.ID}]: Controller is down'
- priority: HIGH
- tags:
- -
- tag: scope
- value: availability
- master_item:
- key: hpe.msa.raw.controllers
- lld_macro_paths:
- -
- lld_macro: '{#DURABLE.ID}'
- path: '$.[''durable-id'']'
- preprocessing:
- -
- type: DISCARD_UNCHANGED_HEARTBEAT
- parameters:
- - 6h
- -
- uuid: ec4fba7c51fa4d94a480d829b1a9b06a
- name: 'Controller statistics discovery'
- type: DEPENDENT
- key: hpe.msa.controllers.statistics.discovery
- delay: '0'
- description: 'Discover controller statistics.'
- item_prototypes:
+ value: '{#CONTROLLER.ID}'
-
- uuid: cfff8c77d99440d18794e1c6dbf738ad
- name: 'Controller [{#DURABLE.ID}]: CPU utilization'
+ uuid: c073adb77eb84cf79e1e1693d9378d47
+ name: 'Controller [{#CONTROLLER.ID}]: Serial number'
type: DEPENDENT
- key: 'hpe.msa.controllers.cpu["{#DURABLE.ID}",util]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",serial_number]'
delay: '0'
history: 7d
- units: '%'
- description: 'Percentage of time the CPU is busy, from 0 to 100.'
+ trends: '0'
+ value_type: CHAR
+ description: 'Storage controller serial number.'
preprocessing:
-
type: JSONPATH
parameters:
- - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''cpu-load''].first()'
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''serial-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
master_item:
- key: hpe.msa.raw.controllers.statistics
+ key: hpe.msa.raw.controllers
tags:
-
tag: component
value: controller
-
tag: controller
- value: '{#DURABLE.ID}'
+ value: '{#CONTROLLER.ID}'
-
- uuid: 8bf0601293a64628be08d16391d1e11b
- name: 'Controller [{#DURABLE.ID}]: IOPS, rate'
+ uuid: a2be1b4b814d45b18bb4e313818511d6
+ name: 'Controller [{#CONTROLLER.ID}]: Status'
type: DEPENDENT
- key: 'hpe.msa.controllers.iops["{#DURABLE.ID}",rate]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",status]'
delay: '0'
history: 7d
- description: 'Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ description: 'Storage controller status.'
+ valuemap:
+ name: 'Controller status'
preprocessing:
-
type: JSONPATH
parameters:
- - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''iops''].first()'
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''status-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
master_item:
- key: hpe.msa.raw.controllers.statistics
+ key: hpe.msa.raw.controllers
tags:
-
tag: component
value: controller
-
+ tag: component
+ value: health
+ -
tag: controller
- value: '{#DURABLE.ID}'
+ value: '{#CONTROLLER.ID}'
+ trigger_prototypes:
+ -
+ uuid: 1524e80a37cb4b64a7360488e132a433
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ priority: HIGH
+ description: 'The controller is down.'
+ tags:
+ -
+ tag: scope
+ value: availability
-
uuid: df2bede9ea85483581a35a45a15d4de4
- name: 'Controller [{#DURABLE.ID}]: Uptime'
+ name: 'Controller [{#CONTROLLER.ID}]: Uptime'
type: DEPENDENT
- key: 'hpe.msa.controllers["{#DURABLE.ID}",uptime]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",uptime]'
delay: '0'
history: 7d
units: uptime
@@ -2242,32 +2747,102 @@ zabbix_export:
value: controller
-
tag: controller
- value: '{#DURABLE.ID}'
+ value: '{#CONTROLLER.ID}'
trigger_prototypes:
-
uuid: 136bb1ccd4114a529a99ddbf803fd974
- expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",uptime])<10m'
- name: 'Controller [{#DURABLE.ID}]: Controller has been restarted'
- event_name: 'Controller [{#DURABLE.ID}]: Controller [{#DURABLE.ID}] has been restarted (uptime < 10m)'
- priority: INFO
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",uptime])<10m'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller has been restarted'
+ event_name: 'Controller [{#CONTROLLER.ID}]: Controller has been restarted (uptime < 10m)'
+ priority: WARNING
+ description: 'The controller uptime is less than 10 minutes.'
tags:
-
tag: scope
- value: notice
+ value: availability
graph_prototypes:
-
+ uuid: 93aeac1a193e43d3a93a3892bd26b0ff
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write utilization'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.write["{#CONTROLLER.ID}",util]'
+ -
+ uuid: a7432b24cd834aa0be9dec3935641dfb
+ name: 'Controller [{#CONTROLLER.ID}]: Cache usage'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.read.hits["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.read.misses["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.write.hits["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '3'
+ color: A54F10
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.write.misses["{#CONTROLLER.ID}",rate]'
+ -
uuid: fca4007d4dd1491dbceba1644b50e1b5
- name: 'Controller [{#DURABLE.ID}]: Controller CPU utilization'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller CPU utilization'
graph_items:
-
color: 1A7C11
item:
host: 'HPE MSA 2040 Storage by HTTP'
- key: 'hpe.msa.controllers.cpu["{#DURABLE.ID}",util]'
+ key: 'hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util]'
+ -
+ uuid: 0b2598db582546308d092c9e7889e698
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.data_transfer.reads["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.data_transfer.writes["{#CONTROLLER.ID}",rate]'
+ -
+ uuid: 0793bb861e874a2c8e7e60a4c40bc34e
+ name: 'Controller [{#CONTROLLER.ID}]: Disk operations rate'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.iops.read["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.iops.write["{#CONTROLLER.ID}",rate]'
master_item:
- key: hpe.msa.raw.controllers.statistics
+ key: hpe.msa.raw.controllers
lld_macro_paths:
-
+ lld_macro: '{#CONTROLLER.ID}'
+ path: '$.[''controller-id'']'
+ -
lld_macro: '{#DURABLE.ID}'
path: '$.[''durable-id'']'
preprocessing:
@@ -2285,7 +2860,7 @@ zabbix_export:
item_prototypes:
-
uuid: 60418ff95d2b4ac698fe041647656005
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Space total'
+ name: 'Disk [{#DURABLE.ID}]: Space total'
type: DEPENDENT
key: 'hpe.msa.disks.space["{#DURABLE.ID}",total]'
delay: '0'
@@ -2313,10 +2888,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: 579f29536b0740b9887cbb0863bd3e45
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: SSD life left'
+ name: 'Disk [{#DURABLE.ID}]: SSD life left'
type: DEPENDENT
key: 'hpe.msa.disks.ssd["{#DURABLE.ID}",life_left]'
delay: '0'
@@ -2341,10 +2916,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: a430bd06d24447649687dc9b9c3dee2c
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk group'
+ name: 'Disk [{#DURABLE.ID}]: Disk group'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",group]'
delay: '0'
@@ -2370,10 +2945,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: 17f4069e731b45c7a9d9bfc5786a07fc
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Health'
+ name: 'Disk [{#DURABLE.ID}]: Health'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",health]'
delay: '0'
@@ -2403,12 +2978,12 @@ zabbix_export:
value: health
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
trigger_prototypes:
-
uuid: 58d2da30bfe74d05ad05e0b286fe0fae
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=1'
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk health is in degraded state'
+ name: 'Disk [{#DURABLE.ID}]: Disk health is in degraded state'
priority: WARNING
description: 'Disk health is in degraded state.'
tags:
@@ -2418,7 +2993,7 @@ zabbix_export:
-
uuid: 1f0e81d23e1e423ba885425f33773f5b
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=2'
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk health is in fault state'
+ name: 'Disk [{#DURABLE.ID}]: Disk health is in fault state'
priority: AVERAGE
description: 'Disk health is in fault state.'
tags:
@@ -2428,7 +3003,7 @@ zabbix_export:
-
uuid: dc75dd0456a145b3ab0646c9403caeb6
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=3'
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk health is in unknown state'
+ name: 'Disk [{#DURABLE.ID}]: Disk health is in unknown state'
priority: INFO
description: 'Disk health is in unknown state.'
tags:
@@ -2437,7 +3012,7 @@ zabbix_export:
value: notice
-
uuid: 689e29b31fd0490fb26920c04d094136
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Model'
+ name: 'Disk [{#DURABLE.ID}]: Model'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",model]'
delay: '0'
@@ -2462,10 +3037,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: 20d37295acce41acac8ba77962130774
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Storage pool'
+ name: 'Disk [{#DURABLE.ID}]: Storage pool'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",pool]'
delay: '0'
@@ -2491,10 +3066,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: 7c4da69f28824444960e6783fe090526
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Serial number'
+ name: 'Disk [{#DURABLE.ID}]: Serial number'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",serial_number]'
delay: '0'
@@ -2519,10 +3094,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: 770749eafc79429185e7127d95b1ff74
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Temperature'
+ name: 'Disk [{#DURABLE.ID}]: Temperature'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",temperature]'
delay: '0'
@@ -2547,10 +3122,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: 5ba57b2f4d014b2a81c546e8f74a133e
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Temperature status'
+ name: 'Disk [{#DURABLE.ID}]: Temperature status'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",temperature_status]'
delay: '0'
@@ -2586,12 +3161,12 @@ zabbix_export:
value: health
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
trigger_prototypes:
-
uuid: b194f7b133274552823b66e44c88bd02
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=2'
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk temperature is critically high'
+ name: 'Disk [{#DURABLE.ID}]: Disk temperature is critically high'
priority: AVERAGE
description: 'Disk temperature is critically high.'
tags:
@@ -2601,7 +3176,7 @@ zabbix_export:
-
uuid: aaabacd5f5194378b6c8388e2ef90abe
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=3'
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk temperature is high'
+ name: 'Disk [{#DURABLE.ID}]: Disk temperature is high'
priority: WARNING
description: 'Disk temperature is high.'
tags:
@@ -2611,7 +3186,7 @@ zabbix_export:
-
uuid: 60d0fc661aa140798f937a63fdd6e5f9
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=4'
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk temperature is unknown'
+ name: 'Disk [{#DURABLE.ID}]: Disk temperature is unknown'
priority: INFO
description: 'Disk temperature is unknown.'
tags:
@@ -2620,7 +3195,7 @@ zabbix_export:
value: notice
-
uuid: d781943c08d24556a083a16cca34ad58
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Type'
+ name: 'Disk [{#DURABLE.ID}]: Type'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",type]'
delay: '0'
@@ -2649,10 +3224,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: 86ce9f4d139e46908750d158b004b517
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Vendor'
+ name: 'Disk [{#DURABLE.ID}]: Vendor'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",vendor]'
delay: '0'
@@ -2677,7 +3252,7 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
master_item:
key: hpe.msa.raw.disks
lld_macro_paths:
@@ -2685,12 +3260,6 @@ zabbix_export:
lld_macro: '{#DURABLE.ID}'
path: '$.[''durable-id'']'
-
- lld_macro: '{#ENCLOSURE.ID}'
- path: '$.[''enclosure-id'']'
- -
- lld_macro: '{#SLOT}'
- path: '$.[''slot'']'
- -
lld_macro: '{#TYPE}'
path: '$.[''description-numeric'']'
preprocessing:
@@ -2724,6 +3293,248 @@ zabbix_export:
description: 'Discover disk groups.'
item_prototypes:
-
+ uuid: 5b0b3db4bdff429996111d566b6d0386
+ name: 'Disk group [{#NAME}]: Average response time: Read'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: s
+ description: 'Average response time for all read operations, calculated over the interval since these statistics were last requested or reset.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''avg-read-rsp-time''].first()'
+ -
+ type: MULTIPLIER
+ parameters:
+ - '0.000001'
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 4a4fb1ae86df4607882de9c9d40f51f4
+ name: 'Disk group [{#NAME}]: Average response time: Total'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",total]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: s
+ description: 'Average response time for read and write operations, calculated over the interval since these statistics were last requested or reset.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''avg-rsp-time''].first()'
+ -
+ type: MULTIPLIER
+ parameters:
+ - '0.000001'
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: a93c1e1b1eee496d861464128aaefa57
+ name: 'Disk group [{#NAME}]: Average response time: Write'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: s
+ description: 'Average response time for all write operations, calculated over the interval since these statistics were last requested or reset.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''avg-write-rsp-time''].first()'
+ -
+ type: MULTIPLIER
+ parameters:
+ - '0.000001'
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 46ba55c8ec2e4811b254441f22ead159
+ name: 'Disk group [{#NAME}]: Data transfer rate: Reads'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data read rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''data-read-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: b1e2347ea10b4e84bb227668f5560b14
+ name: 'Disk group [{#NAME}]: Data transfer rate: Total'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.data_transfer.total["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ units: Bps
+ description: 'The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''bytes-per-second-numeric''].first()'
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: a3df11b895fa425799c34516050000bd
+ name: 'Disk group [{#NAME}]: Data transfer rate: Writes'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data write rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''data-written-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 18cd4383127548b68313184a2b94750f
+ name: 'Disk group [{#NAME}]: IOPS, read rate'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.iops.read["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!r/s'
+ description: 'Number of read operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''number-of-reads''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 044e291ab66d48dcb8b66ee18f638702
+ name: 'Disk group [{#NAME}]: IOPS, total rate'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.iops.total["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ units: '!iops'
+ description: 'Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''iops''].first()'
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 66ec5badb1d2491d9e07b5ce45486d72
+ name: 'Disk group [{#NAME}]: IOPS, write rate'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.iops.write["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!w/s'
+ description: 'Number of write operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''number-of-writes''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
uuid: 5356a1f819a54c59bb3765d99a965537
name: 'Disk group [{#NAME}]: RAID type'
type: DEPENDENT
@@ -2783,6 +3594,33 @@ zabbix_export:
tag: disk-group
value: '{#NAME}'
-
+ uuid: bfe1a64952754488898798f5f07e24b1
+ name: 'Disk group [{#NAME}]: Pool space used'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.space["{#NAME}",pool_util]'
+ delay: '0'
+ history: 7d
+ units: '%'
+ description: 'The percentage of pool capacity that the disk group occupies.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''pool-percentage''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.raw.disks.groups
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
uuid: 29eae883b9fc4e2191daa870bd9d58ad
name: 'Disk group [{#NAME}]: Space total'
type: DEPENDENT
@@ -2838,26 +3676,26 @@ zabbix_export:
trigger_prototypes:
-
uuid: d6494d79dae94aeda2b78169f8960224
- expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}'
+ expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}'
name: 'Disk group [{#NAME}]: Disk group space is critically low'
- event_name: 'Disk group [{#NAME}]: Disk group space is critically low (used > {$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}%)'
+ event_name: 'Disk group [{#NAME}]: Disk group space is critically low (used > {$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}%)'
priority: AVERAGE
- description: 'Disk group is running low on free space (less than {$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}% available).'
+ description: 'Disk group is running low on free space (less than {$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}% available).'
tags:
-
tag: scope
value: performance
-
uuid: ea04be93082640709ec6e58ae640575c
- expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}'
+ expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}'
name: 'Disk group [{#NAME}]: Disk group space is low'
- event_name: 'Disk group [{#NAME}]: Disk group space is low (used > {$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}%)'
+ event_name: 'Disk group [{#NAME}]: Disk group space is low (used > {$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}%)'
priority: WARNING
- description: 'Disk group is running low on free space (less than {$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}% available).'
+ description: 'Disk group is running low on free space (less than {$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}% available).'
dependencies:
-
name: 'Disk group [{#NAME}]: Disk group space is critically low'
- expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}'
+ expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}'
tags:
-
tag: scope
@@ -3102,326 +3940,67 @@ zabbix_export:
value: performance
graph_prototypes:
-
- uuid: 234be7ebf50e42f6a098662f1fffba03
- name: 'Disk group [{#NAME}]: Space utilization'
+ uuid: 1d5b8a7246a845678a938da75b7e32cc
+ name: 'Disk group [{#NAME}]: Average response time'
graph_items:
-
color: 1A7C11
item:
host: 'HPE MSA 2040 Storage by HTTP'
- key: 'hpe.msa.disks.groups.space["{#NAME}",free]'
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]'
-
sortorder: '1'
color: 2774A4
item:
host: 'HPE MSA 2040 Storage by HTTP'
- key: 'hpe.msa.disks.groups.space["{#NAME}",total]'
- master_item:
- key: hpe.msa.raw.disks.groups
- lld_macro_paths:
- -
- lld_macro: '{#NAME}'
- path: '$.[''name'']'
- preprocessing:
- -
- type: DISCARD_UNCHANGED_HEARTBEAT
- parameters:
- - 6h
- -
- uuid: ec2f8888805e42318e5eb0d3fe738091
- name: 'Disk group statistics discovery'
- type: DEPENDENT
- key: hpe.msa.disks.groups.statistics.discovery
- delay: '0'
- description: 'Discover disk group statistics.'
- item_prototypes:
- -
- uuid: 5b0b3db4bdff429996111d566b6d0386
- name: 'Disk group [{#NAME}]: Average response time: Read'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]'
- delay: '0'
- history: 7d
- value_type: FLOAT
- units: s
- description: 'Average response time for all read operations, calculated over the interval since these statistics were last requested or reset.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''avg-read-rsp-time''].first()'
- -
- type: MULTIPLIER
- parameters:
- - '0.000001'
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: 4a4fb1ae86df4607882de9c9d40f51f4
- name: 'Disk group [{#NAME}]: Average response time: Total'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",total]'
- delay: '0'
- history: 7d
- value_type: FLOAT
- units: s
- description: 'Average response time for read and write operations, calculated over the interval since these statistics were last requested or reset.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''avg-rsp-time''].first()'
- -
- type: MULTIPLIER
- parameters:
- - '0.000001'
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: a93c1e1b1eee496d861464128aaefa57
- name: 'Disk group [{#NAME}]: Average response time: Write'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]'
- delay: '0'
- history: 7d
- value_type: FLOAT
- units: s
- description: 'Average response time for all write operations, calculated over the interval since these statistics were last requested or reset.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''avg-write-rsp-time''].first()'
- -
- type: MULTIPLIER
- parameters:
- - '0.000001'
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: 46ba55c8ec2e4811b254441f22ead159
- name: 'Disk group [{#NAME}]: Data transfer rate: Reads'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]'
- delay: '0'
- history: 7d
- value_type: FLOAT
- units: Bps
- description: 'The data read rate, in bytes per second.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''data-read-numeric''].first()'
- -
- type: CHANGE_PER_SECOND
- parameters:
- - ''
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: b1e2347ea10b4e84bb227668f5560b14
- name: 'Disk group [{#NAME}]: Data transfer rate: Total'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.data_transfer.total["{#NAME}",rate]'
- delay: '0'
- history: 7d
- units: Bps
- description: 'The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''bytes-per-second-numeric''].first()'
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: a3df11b895fa425799c34516050000bd
- name: 'Disk group [{#NAME}]: Data transfer rate: Writes'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]'
- delay: '0'
- history: 7d
- value_type: FLOAT
- units: Bps
- description: 'The data write rate, in bytes per second.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''data-written-numeric''].first()'
- -
- type: CHANGE_PER_SECOND
- parameters:
- - ''
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: 044e291ab66d48dcb8b66ee18f638702
- name: 'Disk group [{#NAME}]: IOPS, rate'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.iops["{#NAME}",rate]'
- delay: '0'
- history: 7d
- description: 'Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''iops''].first()'
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: 18cd4383127548b68313184a2b94750f
- name: 'Disk group [{#NAME}]: Reads, rate'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.reads["{#NAME}",rate]'
- delay: '0'
- history: 7d
- value_type: FLOAT
- description: 'Number of read operations per second.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''number-of-reads''].first()'
- -
- type: CHANGE_PER_SECOND
- parameters:
- - ''
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: 66ec5badb1d2491d9e07b5ce45486d72
- name: 'Disk group [{#NAME}]: Writes, rate'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.writes["{#NAME}",rate]'
- delay: '0'
- history: 7d
- value_type: FLOAT
- description: 'Number of write operations per second.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''number-of-writes''].first()'
- -
- type: CHANGE_PER_SECOND
- parameters:
- - ''
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- graph_prototypes:
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]'
-
- uuid: 1d5b8a7246a845678a938da75b7e32cc
- name: 'Disk group [{#NAME}]: Average response time'
+ uuid: b718bd4950f64abb892ba3bfe738ad49
+ name: 'Disk group [{#NAME}]: Data transfer rate'
graph_items:
-
color: 1A7C11
item:
host: 'HPE MSA 2040 Storage by HTTP'
- key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]'
+ key: 'hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]'
-
sortorder: '1'
color: 2774A4
item:
host: 'HPE MSA 2040 Storage by HTTP'
- key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]'
+ key: 'hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]'
-
- uuid: b718bd4950f64abb892ba3bfe738ad49
- name: 'Disk group [{#NAME}]: Data transfer rate'
+ uuid: 55d7871c891446b086860f8c861fc3f7
+ name: 'Disk group [{#NAME}]: Disk operations rate'
graph_items:
-
color: 1A7C11
item:
host: 'HPE MSA 2040 Storage by HTTP'
- key: 'hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]'
+ key: 'hpe.msa.disks.groups.iops.read["{#NAME}",rate]'
-
sortorder: '1'
color: 2774A4
item:
host: 'HPE MSA 2040 Storage by HTTP'
- key: 'hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]'
+ key: 'hpe.msa.disks.groups.iops.write["{#NAME}",rate]'
-
- uuid: 55d7871c891446b086860f8c861fc3f7
- name: 'Disk group [{#NAME}]: Disk operations rate'
+ uuid: 234be7ebf50e42f6a098662f1fffba03
+ name: 'Disk group [{#NAME}]: Space utilization'
graph_items:
-
color: 1A7C11
item:
host: 'HPE MSA 2040 Storage by HTTP'
- key: 'hpe.msa.disks.groups.reads["{#NAME}",rate]'
+ key: 'hpe.msa.disks.groups.space["{#NAME}",free]'
-
sortorder: '1'
color: 2774A4
item:
host: 'HPE MSA 2040 Storage by HTTP'
- key: 'hpe.msa.disks.groups.writes["{#NAME}",rate]'
+ key: 'hpe.msa.disks.groups.space["{#NAME}",total]'
master_item:
- key: hpe.msa.raw.disks.groups.statistics
+ key: hpe.msa.raw.disks.groups
lld_macro_paths:
-
lld_macro: '{#NAME}'
@@ -3561,7 +4140,7 @@ zabbix_export:
value: '{#DURABLE.ID}'
-
uuid: f9279641e2cb4c95a07d43ef1f1caba5
- name: 'Enclosure [{#DURABLE.ID}]: Part number.'
+ name: 'Enclosure [{#DURABLE.ID}]: Part number'
type: DEPENDENT
key: 'hpe.msa.enclosures["{#DURABLE.ID}",part_number]'
delay: '0'
@@ -3660,7 +4239,7 @@ zabbix_export:
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=6'
name: 'Enclosure [{#DURABLE.ID}]: Enclosure has unknown status'
priority: INFO
- description: 'Enclosure has unknown status'
+ description: 'Enclosure has unknown status.'
tags:
-
tag: scope
@@ -3716,7 +4295,7 @@ zabbix_export:
item_prototypes:
-
uuid: b4732ef73f0e4fcc9458797b28e2b829
- name: '{#NAME} [{#LOCATION}]: Health'
+ name: 'Fan [{#DURABLE.ID}]: Health'
type: DEPENDENT
key: 'hpe.msa.fans["{#DURABLE.ID}",health]'
delay: '0'
@@ -3751,7 +4330,7 @@ zabbix_export:
-
uuid: 377a9c494a5443c0ba694ab78683da17
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=1'
- name: '{#NAME} [{#LOCATION}]: Fan health is in degraded state'
+ name: 'Fan [{#DURABLE.ID}]: Fan health is in degraded state'
priority: WARNING
description: 'Fan health is in degraded state.'
tags:
@@ -3761,7 +4340,7 @@ zabbix_export:
-
uuid: 4446cef7b06140e3a29018944201ebd7
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=2'
- name: '{#NAME} [{#LOCATION}]: Fan health is in fault state'
+ name: 'Fan [{#DURABLE.ID}]: Fan health is in fault state'
priority: AVERAGE
description: 'Fan health is in fault state.'
tags:
@@ -3771,7 +4350,7 @@ zabbix_export:
-
uuid: 3273a1f3595046e69ef6c74ac6f56eeb
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=3'
- name: '{#NAME} [{#LOCATION}]: Fan health is in unknown state'
+ name: 'Fan [{#DURABLE.ID}]: Fan health is in unknown state'
priority: INFO
description: 'Fan health is in unknown state.'
tags:
@@ -3780,7 +4359,7 @@ zabbix_export:
value: notice
-
uuid: eb7057d0b65e40138899753b06abfb68
- name: '{#NAME} [{#LOCATION}]: Speed'
+ name: 'Fan [{#DURABLE.ID}]: Speed'
type: DEPENDENT
key: 'hpe.msa.fans["{#DURABLE.ID}",speed]'
delay: '0'
@@ -3803,7 +4382,7 @@ zabbix_export:
value: '{#DURABLE.ID}'
-
uuid: 45f948cb8f484367a7a5735beb796a1b
- name: '{#NAME} [{#LOCATION}]: Status'
+ name: 'Fan [{#DURABLE.ID}]: Status'
type: DEPENDENT
key: 'hpe.msa.fans["{#DURABLE.ID}",status]'
delay: '0'
@@ -3836,7 +4415,7 @@ zabbix_export:
-
uuid: f8afe70029aa4cdfb1f68452eea27986
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=1'
- name: '{#NAME} [{#LOCATION}]: Fan has error status'
+ name: 'Fan [{#DURABLE.ID}]: Fan has error status'
priority: AVERAGE
description: 'Fan has error status.'
tags:
@@ -3846,7 +4425,7 @@ zabbix_export:
-
uuid: 8ad445006c51474fbee30a70971a97a5
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=3'
- name: '{#NAME} [{#LOCATION}]: Fan is missing'
+ name: 'Fan [{#DURABLE.ID}]: Fan is missing'
priority: INFO
description: 'Fan is missing.'
tags:
@@ -3856,7 +4435,7 @@ zabbix_export:
-
uuid: fabe4e0bde194675a089db45125428b6
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=2'
- name: '{#NAME} [{#LOCATION}]: Fan is off'
+ name: 'Fan [{#DURABLE.ID}]: Fan is off'
priority: WARNING
description: 'Fan is off.'
tags:
@@ -3866,7 +4445,7 @@ zabbix_export:
graph_prototypes:
-
uuid: 44c2c9cdec6247cf8f4d0e2bd7e0e372
- name: '{#NAME} [{#LOCATION}]: Speed'
+ name: 'Fan [{#DURABLE.ID}]: Speed'
graph_items:
-
color: 1A7C11
@@ -3880,9 +4459,6 @@ zabbix_export:
lld_macro: '{#DURABLE.ID}'
path: '$.[''durable-id'']'
-
- lld_macro: '{#LOCATION}'
- path: '$.[''location'']'
- -
lld_macro: '{#NAME}'
path: '$.[''name'']'
preprocessing:
@@ -3891,6 +4467,199 @@ zabbix_export:
parameters:
- 6h
-
+ uuid: 472d12b2436845f1baadde23d614d005
+ name: 'I/O modules discovery'
+ type: DEPENDENT
+ key: hpe.msa.io_modules.discovery
+ delay: '0'
+ description: 'Discover I/O modules.'
+ item_prototypes:
+ -
+ uuid: ed9ca320d18e4c1fb5f084a9632b5a00
+ name: 'I/O module [{#DURABLE.ID}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.io_modules["{#DURABLE.ID}",health]'
+ delay: '0'
+ history: 7d
+ description: 'I/O module health status.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.raw.io_modules
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: io-module
+ -
+ tag: io-module
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: c5b4a405c4af4ef6a9c8d49c70d1ce39
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",health])=1'
+ name: 'I/O module [{#DURABLE.ID}]: I/O module health is in degraded state'
+ priority: WARNING
+ description: 'I/O module health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 9f85f8da9f7648abaa92572f6cb401aa
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",health])=2'
+ name: 'I/O module [{#DURABLE.ID}]: I/O module health is in fault state'
+ priority: AVERAGE
+ description: 'I/O module health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: ebb4a6c3a45e42c2a61a55f5f82aff5a
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",health])=3'
+ name: 'I/O module [{#DURABLE.ID}]: I/O module health is in unknown state'
+ priority: INFO
+ description: 'I/O module health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: db5b0e5c525d4ed2a4e6658680e4f352
+ name: 'I/O module [{#DURABLE.ID}]: Part number'
+ type: DEPENDENT
+ key: 'hpe.msa.io_modules["{#DURABLE.ID}",part_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Part number of the I/O module.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''part-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.raw.io_modules
+ tags:
+ -
+ tag: component
+ value: io-module
+ -
+ tag: io-module
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 85a7c6949e42427c98701777ea846489
+ name: 'I/O module [{#DURABLE.ID}]: Serial number'
+ type: DEPENDENT
+ key: 'hpe.msa.io_modules["{#DURABLE.ID}",serial_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'I/O module serial number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''serial-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.raw.io_modules
+ tags:
+ -
+ tag: component
+ value: io-module
+ -
+ tag: io-module
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 1dc6e1b00561421bbdcf590316b111bc
+ name: 'I/O module [{#DURABLE.ID}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.io_modules["{#DURABLE.ID}",status]'
+ delay: '0'
+ history: 7d
+ description: 'I/O module status.'
+ valuemap:
+ name: 'I/O module status'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''status-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '3'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.raw.io_modules
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: io-module
+ -
+ tag: io-module
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: 7c2faab01158442284b85acdf22bdc43
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",status])=3'
+ name: 'I/O module [{#DURABLE.ID}]: I/O module has unknown status'
+ priority: INFO
+ description: 'I/O module has unknown status.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: 5dde3710cdb1422db37d05962a97a264
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",status])=1'
+ name: 'I/O module [{#DURABLE.ID}]: I/O module is down'
+ priority: AVERAGE
+ description: 'I/O module is down.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ master_item:
+ key: hpe.msa.raw.io_modules
+ lld_macro_paths:
+ -
+ lld_macro: '{#DURABLE.ID}'
+ path: '$.[''durable-id'']'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
uuid: 082c1cfb851548928911b9ab69f6f75e
name: 'Pools discovery'
type: DEPENDENT
@@ -3984,26 +4753,26 @@ zabbix_export:
trigger_prototypes:
-
uuid: 042ac4fedb00485c8c6f48016182b9dd
- expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.PRIMERA.POOL.PUSED.MAX.CRIT:"{#NAME}"}'
+ expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}'
name: 'Pool [{#NAME}]: Pool space is critically low'
- event_name: 'Pool [{#NAME}]: Pool space is critically low (used > {$HPE.PRIMERA.POOL.PUSED.MAX.CRIT:"{#NAME}"}%)'
+ event_name: 'Pool [{#NAME}]: Pool space is critically low (used > {$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}%)'
priority: AVERAGE
- description: 'Pool is running low on free space (less than {$HPE.PRIMERA.POOL.PUSED.MAX.CRIT:"{#NAME}"}% available).'
+ description: 'Pool is running low on free space (more than {$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}% used).'
tags:
-
tag: scope
value: performance
-
uuid: f4c7a9ed832d4668be64acf9da3c9814
- expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.PRIMERA.POOL.PUSED.MAX.WARN:"{#NAME}"}'
+ expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}'
name: 'Pool [{#NAME}]: Pool space is low'
- event_name: 'Pool [{#NAME}]: Pool space is low (used > {$HPE.PRIMERA.POOL.PUSED.MAX.WARN:"{#NAME}"}%)'
+ event_name: 'Pool [{#NAME}]: Pool space is low (used > {$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}%)'
priority: WARNING
- description: 'Pool is running low on free space (less than {$HPE.PRIMERA.POOL.PUSED.MAX.WARN:"{#NAME}"}% available).'
+ description: 'Pool is running low on free space (more than {$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}% used).'
dependencies:
-
name: 'Pool [{#NAME}]: Pool space is critically low'
- expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.PRIMERA.POOL.PUSED.MAX.CRIT:"{#NAME}"}'
+ expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}'
tags:
-
tag: scope
@@ -4235,6 +5004,34 @@ zabbix_export:
-
tag: scope
value: performance
+ -
+ uuid: b1240a5950a3466b9d0725729bef3a03
+ name: 'Port [{#NAME}]: Type'
+ type: DEPENDENT
+ key: 'hpe.msa.ports["{#NAME}",type]'
+ delay: '0'
+ history: 7d
+ description: 'Port type.'
+ valuemap:
+ name: 'Port type'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''port''] == "{#NAME}")].[''port-type-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.raw.ports
+ tags:
+ -
+ tag: component
+ value: port
+ -
+ tag: port
+ value: '{#NAME}'
master_item:
key: hpe.msa.raw.ports
lld_macro_paths:
@@ -4256,7 +5053,7 @@ zabbix_export:
item_prototypes:
-
uuid: 4e4f593738fb451cbfd1589a3054387e
- name: 'Power supply [{#LOCATION}]: Health'
+ name: 'Power supply [{#DURABLE.ID}]: Health'
type: DEPENDENT
key: 'hpe.msa.power_supplies["{#DURABLE.ID}",health]'
delay: '0'
@@ -4286,12 +5083,12 @@ zabbix_export:
value: power-supply
-
tag: power-supply
- value: '{#LOCATION}'
+ value: '{#DURABLE.ID}'
trigger_prototypes:
-
uuid: 2394f69a635a4072bd96494b8df8ae3e
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=1'
- name: 'Power supply [{#LOCATION}]: Power supply health is in degraded state'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply health is in degraded state'
priority: WARNING
description: 'Power supply health is in degraded state.'
tags:
@@ -4301,7 +5098,7 @@ zabbix_export:
-
uuid: f390553cfe4646e0ab9a4fd9cab20886
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=2'
- name: 'Power supply [{#LOCATION}]: Power supply health is in fault state'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply health is in fault state'
priority: AVERAGE
description: 'Power supply health is in fault state.'
tags:
@@ -4311,7 +5108,7 @@ zabbix_export:
-
uuid: 9499fbdcc6a946138fb6cd69d8be9a00
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=3'
- name: 'Power supply [{#LOCATION}]: Power supply health is in unknown state'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply health is in unknown state'
priority: INFO
description: 'Power supply health is in unknown state.'
tags:
@@ -4320,7 +5117,7 @@ zabbix_export:
value: notice
-
uuid: 1b72c54bff3a4b129e959db43e895839
- name: 'Power supply [{#LOCATION}]: Part number.'
+ name: 'Power supply [{#DURABLE.ID}]: Part number'
type: DEPENDENT
key: 'hpe.msa.power_supplies["{#DURABLE.ID}",part_number]'
delay: '0'
@@ -4345,10 +5142,10 @@ zabbix_export:
value: power-supply
-
tag: power-supply
- value: '{#LOCATION}'
+ value: '{#DURABLE.ID}'
-
uuid: bdbf30f2e70d427bb9237b941fed5941
- name: 'Power supply [{#LOCATION}]: Serial number.'
+ name: 'Power supply [{#DURABLE.ID}]: Serial number'
type: DEPENDENT
key: 'hpe.msa.power_supplies["{#DURABLE.ID}",serial_number]'
delay: '0'
@@ -4373,10 +5170,10 @@ zabbix_export:
value: power-supply
-
tag: power-supply
- value: '{#LOCATION}'
+ value: '{#DURABLE.ID}'
-
uuid: 110fa50ee1d64ecdb064d3bd7b34dc90
- name: 'Power supply [{#LOCATION}]: Status'
+ name: 'Power supply [{#DURABLE.ID}]: Status'
type: DEPENDENT
key: 'hpe.msa.power_supplies["{#DURABLE.ID}",status]'
delay: '0'
@@ -4406,12 +5203,12 @@ zabbix_export:
value: power-supply
-
tag: power-supply
- value: '{#LOCATION}'
+ value: '{#DURABLE.ID}'
trigger_prototypes:
-
uuid: 28896e70b14f463aae8c8af4786e52ff
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=2'
- name: 'Power supply [{#LOCATION}]: Power supply has error status'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply has error status'
priority: AVERAGE
description: 'Power supply has error status.'
tags:
@@ -4421,7 +5218,7 @@ zabbix_export:
-
uuid: ac6b0d55fbac4f338261f6a90b68e5b0
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=4'
- name: 'Power supply [{#LOCATION}]: Power supply has unknown status'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply has unknown status'
priority: INFO
description: 'Power supply has unknown status.'
tags:
@@ -4431,7 +5228,7 @@ zabbix_export:
-
uuid: c9cddccdeed34aa4a533f0ad07aab5ae
expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=1'
- name: 'Power supply [{#LOCATION}]: Power supply has warning status'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply has warning status'
priority: WARNING
description: 'Power supply has warning status.'
tags:
@@ -4444,9 +5241,6 @@ zabbix_export:
-
lld_macro: '{#DURABLE.ID}'
path: '$.[''durable-id'']'
- -
- lld_macro: '{#LOCATION}'
- path: '$.[''location'']'
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
@@ -4461,102 +5255,6 @@ zabbix_export:
description: 'Discover volumes.'
item_prototypes:
-
- uuid: b47d7b03e19f4e25803b1d639a0ecf43
- name: 'Volume [{#NAME}]: Space allocated'
- type: DEPENDENT
- key: 'hpe.msa.volumes.space["{#NAME}",allocated]'
- delay: '0'
- history: 7d
- description: 'The amount of space currently allocated to the volume.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''allocated-size-numeric''].first()'
- -
- type: DISCARD_UNCHANGED_HEARTBEAT
- parameters:
- - 1h
- -
- type: MULTIPLIER
- parameters:
- - '512'
- master_item:
- key: hpe.msa.raw.volumes
- tags:
- -
- tag: component
- value: volume
- -
- tag: volume
- value: '{#NAME}'
- -
- uuid: b6aaba39f7c74dcf95947626852855c8
- name: 'Volume [{#NAME}]: Space total'
- type: DEPENDENT
- key: 'hpe.msa.volumes.space["{#NAME}",total]'
- delay: '0'
- history: 7d
- units: B
- description: 'The capacity of the volume.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''size-numeric''].first()'
- -
- type: DISCARD_UNCHANGED_HEARTBEAT
- parameters:
- - 1h
- -
- type: MULTIPLIER
- parameters:
- - '512'
- master_item:
- key: hpe.msa.raw.volumes
- tags:
- -
- tag: component
- value: volume
- -
- tag: volume
- value: '{#NAME}'
- graph_prototypes:
- -
- uuid: f8c4f07925404bc0b1e3ada45358580a
- name: 'Volume [{#NAME}]: Space utilization'
- graph_items:
- -
- color: 1A7C11
- item:
- host: 'HPE MSA 2040 Storage by HTTP'
- key: 'hpe.msa.volumes.space["{#NAME}",allocated]'
- -
- sortorder: '1'
- color: 2774A4
- item:
- host: 'HPE MSA 2040 Storage by HTTP'
- key: 'hpe.msa.volumes.space["{#NAME}",total]'
- master_item:
- key: hpe.msa.raw.volumes
- lld_macro_paths:
- -
- lld_macro: '{#NAME}'
- path: '$.[''name'']'
- preprocessing:
- -
- type: DISCARD_UNCHANGED_HEARTBEAT
- parameters:
- - 6h
- -
- uuid: 6e89241aea1e439d99243cdd887a6f2d
- name: 'Volume statistics discovery'
- type: DEPENDENT
- key: hpe.msa.volumes.statistics.discovery
- delay: '0'
- description: 'Discover volume statistics.'
- item_prototypes:
- -
uuid: f9818ae47544417bb270af4f8f014c0a
name: 'Volume [{#NAME}]: Cache: Read hits, rate'
type: DEPENDENT
@@ -4740,12 +5438,41 @@ zabbix_export:
tag: volume
value: '{#NAME}'
-
+ uuid: 0e2831ed17ec4fe0a56b800086b47901
+ name: 'Volume [{#NAME}]: IOPS, read rate'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.iops.read["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!r/s'
+ description: 'Number of read operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''volume-name''] == "{#NAME}")].[''number-of-reads''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.volumes.statistics
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
uuid: 9d14e4239f5941a7bfb07b6645b9e698
- name: 'Volume [{#NAME}]: IOPS, rate'
+ name: 'Volume [{#NAME}]: IOPS, total rate'
type: DEPENDENT
- key: 'hpe.msa.volumes.iops["{#NAME}",rate]'
+ key: 'hpe.msa.volumes.iops.total["{#NAME}",rate]'
delay: '0'
history: 7d
+ units: '!iops'
description: 'Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
preprocessing:
-
@@ -4762,19 +5489,20 @@ zabbix_export:
tag: volume
value: '{#NAME}'
-
- uuid: 0e2831ed17ec4fe0a56b800086b47901
- name: 'Volume [{#NAME}]: Reads, rate'
+ uuid: e1a6b6cc609c4cf789978f01b18af31f
+ name: 'Volume [{#NAME}]: IOPS, write rate'
type: DEPENDENT
- key: 'hpe.msa.volumes.reads["{#NAME}",rate]'
+ key: 'hpe.msa.volumes.iops.write["{#NAME}",rate]'
delay: '0'
history: 7d
value_type: FLOAT
- description: 'Number of read operations per second.'
+ units: '!w/s'
+ description: 'Number of write operations per second.'
preprocessing:
-
type: JSONPATH
parameters:
- - '$.[?(@[''volume-name''] == "{#NAME}")].[''number-of-reads''].first()'
+ - '$.[?(@[''volume-name''] == "{#NAME}")].[''number-of-writes''].first()'
-
type: CHANGE_PER_SECOND
parameters:
@@ -4789,25 +5517,60 @@ zabbix_export:
tag: volume
value: '{#NAME}'
-
- uuid: e1a6b6cc609c4cf789978f01b18af31f
- name: 'Volume [{#NAME}]: Writes, rate'
+ uuid: b47d7b03e19f4e25803b1d639a0ecf43
+ name: 'Volume [{#NAME}]: Space allocated'
type: DEPENDENT
- key: 'hpe.msa.volumes.writes["{#NAME}",rate]'
+ key: 'hpe.msa.volumes.space["{#NAME}",allocated]'
delay: '0'
history: 7d
- value_type: FLOAT
- description: 'Number of write operations per second.'
+ units: B
+ description: 'The amount of space currently allocated to the volume.'
preprocessing:
-
type: JSONPATH
parameters:
- - '$.[?(@[''volume-name''] == "{#NAME}")].[''number-of-writes''].first()'
+ - '$.[?(@[''volume-name''] == "{#NAME}")].[''allocated-size-numeric''].first()'
-
- type: CHANGE_PER_SECOND
+ type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- - ''
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '512'
master_item:
- key: hpe.msa.raw.volumes.statistics
+ key: hpe.msa.raw.volumes
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: b6aaba39f7c74dcf95947626852855c8
+ name: 'Volume [{#NAME}]: Space total'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.space["{#NAME}",total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'The capacity of the volume.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''volume-name''] == "{#NAME}")].[''size-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '512'
+ master_item:
+ key: hpe.msa.raw.volumes
tags:
-
tag: component
@@ -4866,15 +5629,30 @@ zabbix_export:
color: 1A7C11
item:
host: 'HPE MSA 2040 Storage by HTTP'
- key: 'hpe.msa.volumes.reads["{#NAME}",rate]'
+ key: 'hpe.msa.volumes.iops.read["{#NAME}",rate]'
-
sortorder: '1'
color: 2774A4
item:
host: 'HPE MSA 2040 Storage by HTTP'
- key: 'hpe.msa.volumes.writes["{#NAME}",rate]'
+ key: 'hpe.msa.volumes.iops.write["{#NAME}",rate]'
+ -
+ uuid: f8c4f07925404bc0b1e3ada45358580a
+ name: 'Volume [{#NAME}]: Space utilization'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.volumes.space["{#NAME}",allocated]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.volumes.space["{#NAME}",total]'
master_item:
- key: hpe.msa.raw.volumes.statistics
+ key: hpe.msa.raw.volumes
lld_macro_paths:
-
lld_macro: '{#NAME}'
@@ -4898,37 +5676,41 @@ zabbix_export:
-
macro: '{$HPE.MSA.API.PASSWORD}'
type: SECRET_TEXT
- description: 'Specify password for WSAPI.'
+ description: 'Specify password for API.'
-
macro: '{$HPE.MSA.API.PORT}'
value: '443'
- description: 'Connection port for WSAPI.'
+ description: 'Connection port for API.'
-
macro: '{$HPE.MSA.API.SCHEME}'
value: https
+ description: 'Connection scheme for API.'
+ description: 'Connection scheme timeout for API.'
-
macro: '{$HPE.MSA.API.USERNAME}'
value: zabbix
- description: 'Specify user name for WSAPI.'
+ description: 'Specify user name for API.'
+ -
+ macro: '{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}'
+ value: '90'
+ description: 'The critical threshold of the CPU utilization in percent.'
-
macro: '{$HPE.MSA.DATA.TIMEOUT}'
value: 5s
- description: 'Response timeout for WSAPI.'
+ description: 'Response timeout for API.'
-
- macro: '{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT}'
+ macro: '{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT}'
value: '90'
description: 'The critical threshold of the disk group space utilization in percent.'
-
- macro: '{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN}'
+ macro: '{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN}'
value: '80'
description: 'The warning threshold of the disk group space utilization in percent.'
-
- macro: '{$HPE.PRIMERA.POOL.PUSED.MAX.CRIT}'
+ macro: '{$HPE.MSA.POOL.PUSED.MAX.CRIT}'
value: '90'
description: 'The critical threshold of the pool space utilization in percent.'
-
- macro: '{$HPE.PRIMERA.POOL.PUSED.MAX.WARN}'
+ macro: '{$HPE.MSA.POOL.PUSED.MAX.WARN}'
value: '80'
description: 'The warning threshold of the pool space utilization in percent.'
valuemaps:
@@ -5081,6 +5863,38 @@ zabbix_export:
value: '4'
newvalue: N/A
-
+ uuid: 8792604f2c914a5c8e810dffbfa0ebfd
+ name: 'I/O module status'
+ mappings:
+ -
+ value: '0'
+ newvalue: Operational
+ -
+ value: '1'
+ newvalue: Down
+ -
+ value: '2'
+ newvalue: 'Not installed'
+ -
+ value: '3'
+ newvalue: Unknown
+ -
+ uuid: 66a23d01db744677a1878143ccf102c7
+ name: 'Port type'
+ mappings:
+ -
+ value: '0'
+ newvalue: Unknown
+ -
+ value: '6'
+ newvalue: FC
+ -
+ value: '8'
+ newvalue: SAS
+ -
+ value: '9'
+ newvalue: iSCSI
+ -
uuid: 996bbe1c4e2841d6ac35efd9b5236fef
name: 'RAID type'
mappings:
diff --git a/templates/san/hpe_msa2060_http/README.md b/templates/san/hpe_msa2060_http/README.md
index 66de4349d5b..0b4e6b57e72 100644
--- a/templates/san/hpe_msa2060_http/README.md
+++ b/templates/san/hpe_msa2060_http/README.md
@@ -10,19 +10,16 @@ It works without any external scripts and uses the script items.
This template was tested on:
-- MSA 2060, version 21.2.8
+- HPE MSA 2060 Storage
## Setup
> See [Zabbix template operation](https://www.zabbix.com/documentation/6.0/manual/config/templates_out_of_the_box/http) for basic instructions.
-1. Create user zabbix on the storage with browse role and enable it for all domains.
-2. The WSAPI server does not start automatically. To enable it:
-- log in to the CLI as Super, Service, or any role granted the wsapi_set right;
-- start the WSAPI server by command: 'startwsapi';
-- to check WSAPI state use command: 'showwsapi'.
-3. Link template to the host.
-4. Configure {$HPE.MSA.API.PASSWORD} and {$HPE.PRIMERA.API.PASSWORD}.
+1. Create user "zabbix" on the storage with monitor role.
+2. Link template to the host.
+3. Configure {$HPE.MSA.API.PASSWORD} and, if not specified yet, a host interface with the address through which the API is accessible.
+4. Change {$HPE.MSA.API.SCHEME} and {$HPE.MSA.API.PORT} macros if needed.
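
For reference, the template's script item logs in to the storage API by embedding a hash of the credentials in the login URL. A minimal sketch in Python, assuming the SHA-256 digest of `"username_password"` as the hashing scheme (the credentials below are hypothetical, for illustration only):

```python
import hashlib

def msa_login_path(username: str, password: str) -> str:
    """Build the API login path from credentials.

    Assumed scheme: SHA-256 hex digest of "username_password".
    """
    digest = hashlib.sha256(f"{username}_{password}".encode()).hexdigest()
    return f"/api/login/{digest}"

# The session key returned by an HTTPS GET of this path (on the host
# defined by {$HPE.MSA.API.SCHEME}/{$HPE.MSA.API.PORT}) is then sent
# with subsequent API requests.
print(msa_login_path("zabbix", "example-password"))
```

This is only a sketch of the authentication step; the template itself performs the request inside its script item, so no external tooling is required.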
## Zabbix configuration
@@ -33,15 +30,16 @@ No specific Zabbix configuration is required.
|Name|Description|Default|
|----|-----------|-------|
-|{$HPE.MSA.API.PASSWORD} |<p>Specify password for WSAPI.</p> |`` |
-|{$HPE.MSA.API.PORT} |<p>Connection port for WSAPI.</p> |`443` |
-|{$HPE.MSA.API.SCHEME} |<p>Connection scheme timeout for WSAPI.</p> |`https` |
-|{$HPE.MSA.API.USERNAME} |<p>Specify user name for WSAPI.</p> |`zabbix` |
-|{$HPE.MSA.DATA.TIMEOUT} |<p>Response timeout for WSAPI.</p> |`5s` |
-|{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT} |<p>The critical threshold of the disk group space utilization in percent.</p> |`90` |
-|{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN} |<p>The warning threshold of the disk group space utilization in percent.</p> |`80` |
-|{$HPE.PRIMERA.POOL.PUSED.MAX.CRIT} |<p>The critical threshold of the pool space utilization in percent.</p> |`90` |
-|{$HPE.PRIMERA.POOL.PUSED.MAX.WARN} |<p>The warning threshold of the pool space utilization in percent.</p> |`80` |
+|{$HPE.MSA.API.PASSWORD} |<p>Specify password for API.</p> |`` |
+|{$HPE.MSA.API.PORT} |<p>Connection port for API.</p> |`443` |
+|{$HPE.MSA.API.SCHEME} |<p>Connection scheme for API.</p> |`https` |
+|{$HPE.MSA.API.USERNAME} |<p>Specify user name for API.</p> |`zabbix` |
+|{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT} |<p>The critical threshold of the CPU utilization in percent.</p> |`90` |
+|{$HPE.MSA.DATA.TIMEOUT} |<p>Response timeout for API.</p> |`5s` |
+|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT} |<p>The critical threshold of the disk group space utilization in percent.</p> |`90` |
+|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN} |<p>The warning threshold of the disk group space utilization in percent.</p> |`80` |
+|{$HPE.MSA.POOL.PUSED.MAX.CRIT} |<p>The critical threshold of the pool space utilization in percent.</p> |`90` |
+|{$HPE.MSA.POOL.PUSED.MAX.WARN} |<p>The warning threshold of the pool space utilization in percent.</p> |`80` |
## Template links
@@ -52,17 +50,15 @@ There are no template links in this template.
|Name|Description|Type|Key and additional info|
|----|-----------|----|----|
|Controllers discovery |<p>Discover controllers.</p> |DEPENDENT |hpe.msa.controllers.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Controller statistics discovery |<p>Discover controller statistics.</p> |DEPENDENT |hpe.msa.controllers.statistics.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|Disk groups discovery |<p>Discover disk groups.</p> |DEPENDENT |hpe.msa.disks.groups.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Disk group statistics discovery |<p>Discover disk group statistics.</p> |DEPENDENT |hpe.msa.disks.groups.statistics.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|Disks discovery |<p>Discover disks.</p> |DEPENDENT |hpe.msa.disks.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Overrides:**</p><p>SSD life left<br> - {#TYPE} MATCHES_REGEX `8`<br> - ITEM_PROTOTYPE REGEXP `SSD life left` - DISCOVER</p> |
|Enclosures discovery |<p>Discover enclosures.</p> |DEPENDENT |hpe.msa.enclosures.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|Fans discovery |<p>Discover fans.</p> |DEPENDENT |hpe.msa.fans.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|I/O modules discovery |<p>Discover I/O modules.</p> |DEPENDENT |hpe.msa.io_modules.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|Pools discovery |<p>Discover pools.</p> |DEPENDENT |hpe.msa.pools.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|Ports discovery |<p>Discover ports.</p> |DEPENDENT |hpe.msa.ports.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|Power supplies discovery |<p>Discover power supplies.</p> |DEPENDENT |hpe.msa.power_supplies.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|Volumes discovery |<p>Discover volumes.</p> |DEPENDENT |hpe.msa.volumes.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Volume statistics discovery |<p>Discover volume statistics.</p> |DEPENDENT |hpe.msa.volumes.statistics.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
## Items collected
@@ -76,15 +72,31 @@ There are no template links in this template.
|HPE |Vendor name |<p>The vendor name.</p> |DEPENDENT |hpe.msa.system.vendor_name<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['vendor-name']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |System health |<p>System health status.</p> |DEPENDENT |hpe.msa.system.health<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['health-numeric']`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p> |
|HPE |HPE MSA: Service ping |<p>Check if HTTP/HTTPS service accepts TCP connections.</p> |SIMPLE |net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Controller [{#DURABLE.ID}]: Firmware version |<p>Storage controller firmware version.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",firmware]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['sc-fw'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Controller [{#DURABLE.ID}]: Part number |<p>Part number of the controller.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Controller [{#DURABLE.ID}]: Serial number |<p>Storage controller serial number.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Controller [{#DURABLE.ID}]: Health |<p>Controller health status.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Controller [{#DURABLE.ID}]: Status |<p>Storage controller status.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Controller [{#DURABLE.ID}]: CPU utilization |<p>Percentage of time the CPU is busy, from 0 to 100.</p> |DEPENDENT |hpe.msa.controllers.cpu["{#DURABLE.ID}",util]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['cpu-load'].first()`</p> |
-|HPE |Controller [{#DURABLE.ID}]: IOPS, rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.controllers.iops["{#DURABLE.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['iops'].first()`</p> |
-|HPE |Controller [{#DURABLE.ID}]: Uptime |<p>Number of seconds since the controller was restarted.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",uptime]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['power-on-time'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Firmware version |<p>Storage controller firmware version.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",firmware]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['sc-fw'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Part number |<p>Part number of the controller.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Serial number |<p>Storage controller serial number.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Health |<p>Controller health status.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Status |<p>Storage controller status.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Disks |<p>Number of disks in the storage system.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",disks]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['disks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Pools |<p>Number of pools in the storage system.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",pools]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['number-of-storage-pools'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Disk groups |<p>Number of disk groups in the storage system.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",disk_groups]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['virtual-disks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IP address |<p>Controller network port IP address.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",ip_address]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['ip-address'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache memory size |<p>Controller cache memory size.</p> |DEPENDENT |hpe.msa.controllers.cache["{#CONTROLLER.ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['cache-memory-size'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Write utilization |<p>Percentage of write cache in use, from 0 to 100.</p> |DEPENDENT |hpe.msa.controllers.cache.write["{#CONTROLLER.ID}",util]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['write-cache-used'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Read hits, rate |<p>For the controller that owns the volume, the number of times the block to be read is found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.read.hits["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['read-cache-hits'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Read misses, rate |<p>For the controller that owns the volume, the number of times the block to be read is not found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.read.misses["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['read-cache-misses'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Write hits, rate |<p>For the controller that owns the volume, the number of times the block written to is found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.write.hits["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['write-cache-hits'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Write misses, rate |<p>For the controller that owns the volume, the number of times the block written to is not found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.write.misses["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['write-cache-misses'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: CPU utilization |<p>Percentage of time the CPU is busy, from 0 to 100.</p> |DEPENDENT |hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['cpu-load'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IOPS, total rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.controllers.iops.total["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['iops'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IOPS, read rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.controllers.iops.read["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IOPS, write rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.controllers.iops.write["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Data transfer rate: Total |<p>The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.controllers.data_transfer.total["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['bytes-per-second-numeric'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Data transfer rate: Reads |<p>The data read rate, in bytes per second.</p> |DEPENDENT |hpe.msa.controllers.data_transfer.reads["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['data-read-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Data transfer rate: Writes |<p>The data write rate, in bytes per second.</p> |DEPENDENT |hpe.msa.controllers.data_transfer.writes["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['data-written-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Uptime |<p>Number of seconds since the controller was restarted.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",uptime]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['power-on-time'].first()`</p> |
|HPE |Disk group [{#NAME}]: Disks count |<p>Number of disks in the disk group.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",disk_count]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['diskcount'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Pool space used |<p>The percentage of pool capacity that the disk group occupies.</p> |DEPENDENT |hpe.msa.disks.groups.space["{#NAME}",pool_util]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['pool-percentage'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Disk group [{#NAME}]: Health |<p>Disk group health.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Disk group [{#NAME}]: Blocks size |<p>The size of a block, in bytes.</p> |DEPENDENT |hpe.msa.disks.groups.blocks["{#NAME}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['blocksize'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Disk group [{#NAME}]: Blocks free |<p>Free space in blocks.</p> |DEPENDENT |hpe.msa.disks.groups.blocks["{#NAME}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['freespace-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
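The dependent items above all follow the same pattern: a JSONPath step selects the object whose `durable-id` (or `name`) matches the discovery macro and takes the first matching field. A minimal stdlib-only sketch of what that `$.[?(@['durable-id'] == "...")].['<field>'].first()` selection does (the sample payload and values are illustrative, not captured from a real array):

```python
# Stand-in for the JSONPath pattern used by the dependent items:
#   $.[?(@['durable-id'] == "<ID>")].['<field>'].first()
# The payload shape mirrors the controller-statistics list; values are made up.
controllers = [
    {"durable-id": "controller_a", "cpu-load": 12, "iops": 340},
    {"durable-id": "controller_b", "cpu-load": 7, "iops": 120},
]

def first_field(objects, durable_id, field):
    """Return the named field of the first object with a matching durable-id."""
    for obj in objects:
        if obj.get("durable-id") == durable_id:
            return obj[field]
    # JSONPath would fail here; the item's ON_FAIL action (e.g. CUSTOM_VALUE)
    # decides what happens next.
    return None

print(first_field(controllers, "controller_a", "iops"))  # 340
```

When the filter matches nothing, the preprocessing step fails, which is why several items pair the JSONPath with an `⛔️ON_FAIL` action such as `CUSTOM_VALUE -> 4` or `DISCARD_VALUE`.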
@@ -94,12 +106,12 @@ There are no template links in this template.
|HPE |Disk group [{#NAME}]: Space utilization |<p>The space utilization percentage in the disk group.</p> |CALCULATED |hpe.msa.disks.groups.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100` |
|HPE |Disk group [{#NAME}]: RAID type |<p>The RAID level of the disk group.</p> |DEPENDENT |hpe.msa.disks.groups.raid["{#NAME}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['raidtype-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Disk group [{#NAME}]: Status |<p>The status of the disk group:</p><p>- CRIT: Critical. The disk group is online but isn't fault tolerant because some of its disks are down.</p><p>- DMGD: Damaged. The disk group is online and fault tolerant, but some of its disks are damaged.</p><p>- FTDN: Fault tolerant with a down disk. The disk group is online and fault tolerant, but some of its disks are down.</p><p>- FTOL: Fault tolerant.</p><p>- MSNG: Missing. The disk group is online and fault tolerant, but some of its disks are missing.</p><p>- OFFL: Offline. Either the disk group is using offline initialization, or its disks are down and data may be lost.</p><p>- QTCR: Quarantined critical. The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online or if, 60 seconds after being quarantined, the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p><p>- QTDN: Quarantined with a down disk. The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online or if, 60 seconds after being quarantined, the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p><p>- QTOF: Quarantined offline. The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group.</p><p>- QTUN: Quarantined unsupported. The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups.</p><p>- STOP: The disk group is stopped.</p><p>- UNKN: Unknown.</p><p>- UP: Up. The disk group is online and does not have fault-tolerant attributes.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Disk group [{#NAME}]: IOPS, rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.disks.groups.iops["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['iops'].first()`</p> |
+|HPE |Disk group [{#NAME}]: IOPS, total rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.disks.groups.iops.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['iops'].first()`</p> |
|HPE |Disk group [{#NAME}]: Average response time: Total |<p>Average response time for read and write operations, calculated over the interval since these statistics were last requested or reset.</p> |DEPENDENT |hpe.msa.disks.groups.avg_rsp_time["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['avg-rsp-time'].first()`</p><p>- MULTIPLIER: `0.000001`</p> |
|HPE |Disk group [{#NAME}]: Average response time: Read |<p>Average response time for all read operations, calculated over the interval since these statistics were last requested or reset.</p> |DEPENDENT |hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['avg-read-rsp-time'].first()`</p><p>- MULTIPLIER: `0.000001`</p> |
|HPE |Disk group [{#NAME}]: Average response time: Write |<p>Average response time for all write operations, calculated over the interval since these statistics were last requested or reset.</p> |DEPENDENT |hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['avg-write-rsp-time'].first()`</p><p>- MULTIPLIER: `0.000001`</p> |
-|HPE |Disk group [{#NAME}]: Reads, rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.disks.groups.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
-|HPE |Disk group [{#NAME}]: Writes, rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.disks.groups.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Disk group [{#NAME}]: IOPS, read rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.disks.groups.iops.read["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Disk group [{#NAME}]: IOPS, write rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.disks.groups.iops.write["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
|HPE |Disk group [{#NAME}]: Data transfer rate: Total |<p>The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.disks.groups.data_transfer.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['bytes-per-second-numeric'].first()`</p> |
|HPE |Disk group [{#NAME}]: Data transfer rate: Reads |<p>The data read rate, in bytes per second.</p> |DEPENDENT |hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['data-read-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
|HPE |Disk group [{#NAME}]: Data transfer rate: Writes |<p>The data write rate, in bytes per second.</p> |DEPENDENT |hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['data-written-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
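The read/write counters above (`number-of-reads`, `data-read-numeric`, and so on) are monotonically growing totals that the `CHANGE_PER_SECOND` step turns into per-second rates. The calculation is simply the value delta divided by the time delta between two consecutive samples; a sketch with illustrative numbers:

```python
def change_per_second(prev_value, prev_ts, value, ts):
    """Per-second rate between two samples of a growing counter,
    as the CHANGE_PER_SECOND preprocessing step computes it."""
    if ts <= prev_ts:
        raise ValueError("samples must be in chronological order")
    return (value - prev_value) / (ts - prev_ts)

# e.g. 'number-of-reads' grew from 1000 to 1600 over 60 seconds -> 10 reads/s
print(change_per_second(1000, 0, 1600, 60))  # 10.0
```

The first sample after a restart produces no rate, since there is no previous value to diff against.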
@@ -110,14 +122,14 @@ There are no template links in this template.
|HPE |Pool [{#NAME}]: Space free |<p>The free space in the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",free]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.pools.blocks["{#NAME}",size])*last(//hpe.msa.pools.blocks["{#NAME}",available])` |
|HPE |Pool [{#NAME}]: Space total |<p>The capacity of the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.pools.blocks["{#NAME}",size])*last(//hpe.msa.pools.blocks["{#NAME}",total])` |
|HPE |Pool [{#NAME}]: Space utilization |<p>The space utilization percentage in the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100` |
-|HPE |Volume [{#NAME}]: Blocks size |<p>The size of a block, in bytes.</p> |DEPENDENT |hpe.msa.volumes.blocks["{#NAME}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['blocksize'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Volume [{#NAME}]: Blocks allocated |<p>The amount of blocks currently allocated to the volume.</p> |DEPENDENT |hpe.msa.volumes.blocks["{#NAME}",allocated]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['allocated-size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Volume [{#NAME}]: Blocks total |<p>Total space in blocks.</p> |DEPENDENT |hpe.msa.volumes.blocks["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['blocks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Volume [{#NAME}]: Blocks size |<p>The size of a block, in bytes.</p> |DEPENDENT |hpe.msa.volumes.blocks["{#NAME}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['blocksize'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Volume [{#NAME}]: Blocks allocated |<p>The amount of blocks currently allocated to the volume.</p> |DEPENDENT |hpe.msa.volumes.blocks["{#NAME}",allocated]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['allocated-size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Volume [{#NAME}]: Blocks total |<p>Total space in blocks.</p> |DEPENDENT |hpe.msa.volumes.blocks["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['blocks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Volume [{#NAME}]: Space allocated |<p>The amount of space currently allocated to the volume.</p> |CALCULATED |hpe.msa.volumes.space["{#NAME}",allocated]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.volumes.blocks["{#NAME}",size])*last(//hpe.msa.volumes.blocks["{#NAME}",allocated])` |
|HPE |Volume [{#NAME}]: Space total |<p>The capacity of the volume.</p> |CALCULATED |hpe.msa.volumes.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.volumes.blocks["{#NAME}",size])*last(//hpe.msa.volumes.blocks["{#NAME}",total])` |
-|HPE |Volume [{#NAME}]: IOPS, rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.volumes.iops["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['iops'].first()`</p> |
-|HPE |Volume [{#NAME}]: Reads, rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.volumes.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
-|HPE |Volume [{#NAME}]: Writes, rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.volumes.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: IOPS, total rate |<p>Total input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.volumes.iops.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['iops'].first()`</p> |
+|HPE |Volume [{#NAME}]: IOPS, read rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.volumes.iops.read["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: IOPS, write rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.volumes.iops.write["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
|HPE |Volume [{#NAME}]: Data transfer rate: Total |<p>The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.volumes.data_transfer.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['bytes-per-second-numeric'].first()`</p> |
|HPE |Volume [{#NAME}]: Data transfer rate: Reads |<p>The data read rate, in bytes per second.</p> |DEPENDENT |hpe.msa.volumes.data_transfer.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['data-read-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
|HPE |Volume [{#NAME}]: Data transfer rate: Writes |<p>The data write rate, in bytes per second.</p> |DEPENDENT |hpe.msa.volumes.data_transfer.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['data-written-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
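The CALCULATED space items above combine the dependent block items: a byte figure is the block size multiplied by a block count, and the utilization items then apply a percentage expression of the form `last(//...part)/last(//...total)*100` over two such results. A stdlib-only sketch with illustrative values:

```python
def blocks_to_bytes(block_size, block_count):
    """Space expressions above: block size (bytes) times a block count."""
    return block_size * block_count

def pct(part, whole):
    """Percentage expression used by the utilization items."""
    return part / whole * 100

# e.g. 512-byte blocks, 2_097_152 total blocks -> 1 GiB capacity
total = blocks_to_bytes(512, 2_097_152)
print(total)                                       # 1073741824
print(pct(blocks_to_bytes(512, 1_048_576), total)) # 50.0
```

Because the inputs refresh on the discard-heartbeat schedule of the underlying block items, the calculated values update at most as often as those items do.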
@@ -128,44 +140,50 @@ There are no template links in this template.
|HPE |Enclosure [{#DURABLE.ID}]: Health |<p>Enclosure health.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Enclosure [{#DURABLE.ID}]: Status |<p>Enclosure status.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 6`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Enclosure [{#DURABLE.ID}]: Midplane serial number |<p>Midplane serial number.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",midplane_serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['midplane-serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Enclosure [{#DURABLE.ID}]: Part number. |<p>Enclosure part number.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Part number |<p>Enclosure part number.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Enclosure [{#DURABLE.ID}]: Model |<p>Enclosure model.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['model'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Enclosure [{#DURABLE.ID}]: Power |<p>Enclosure power in watts.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",power]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['enclosure-power'].first()`</p> |
-|HPE |Power supply [{#LOCATION}]: Health |<p>Power supply health status.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Power supply [{#LOCATION}]: Status |<p>Power supply status.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Power supply [{#LOCATION}]: Part number. |<p>Power supply part number.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Power supply [{#LOCATION}]: Serial number. |<p>Power supply serial number.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Health |<p>Power supply health status.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Status |<p>Power supply status.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Part number |<p>Power supply part number.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Serial number |<p>Power supply serial number.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Port [{#NAME}]: Health |<p>Port health status.</p> |DEPENDENT |hpe.msa.ports["{#NAME}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['port'] == "{#NAME}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Port [{#NAME}]: Status |<p>Port status.</p> |DEPENDENT |hpe.msa.ports["{#NAME}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['port'] == "{#NAME}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |{#NAME} [{#LOCATION}]: Health |<p>Fan health status.</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |{#NAME} [{#LOCATION}]: Status |<p>Fan status.</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |{#NAME} [{#LOCATION}]: Speed |<p>Fan speed (revolutions per minute).</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",speed]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['speed'].first()`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Health |<p>Disk health status.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Temperature status |<p>Disk temperature status.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",temperature_status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['temperature-status-numeric'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- IN_RANGE: `1 3`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Temperature |<p>Temperature of the disk.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",temperature]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['temperature-numeric'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Type |<p>Disk type:</p><p>SAS: Enterprise SAS spinning disk.</p><p>SAS MDL: Midline SAS spinning disk.</p><p>SSD SAS: SAS solid-state disk.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['description-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk group |<p>If the disk is in a disk group, the disk group name.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",group]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['disk-group'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Storage pool |<p>If the disk is in a pool, the pool name.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",pool]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['storage-pool-name'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Vendor |<p>Disk vendor.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",vendor]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['vendor'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Model |<p>Disk model.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['model'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Serial number |<p>Disk serial number.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Blocks size |<p>The size of a block, in bytes.</p> |DEPENDENT |hpe.msa.disks.blocks["{#DURABLE.ID}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['blocksize'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Blocks total |<p>Total space in blocks.</p> |DEPENDENT |hpe.msa.disks.blocks["{#DURABLE.ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['blocks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Space total |<p>Total size of the disk.</p> |CALCULATED |hpe.msa.disks.space["{#DURABLE.ID}",total]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p><p>**Expression**:</p>`last(//hpe.msa.disks.blocks["{#DURABLE.ID}",size])*last(//hpe.msa.disks.blocks["{#DURABLE.ID}",total])` |
-|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: SSD life left |<p>The percantage of disk life remaining.</p> |DEPENDENT |hpe.msa.disks.ssd["{#DURABLE.ID}",life_left]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['ssd-life-left-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Zabbix raw items |HPE MSA: Get system |<p>-</p> |SCRIPT |hpe.msa.raw.system<p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get controllers |<p>-</p> |SCRIPT |hpe.msa.raw.controllers<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get controller statistics |<p>-</p> |SCRIPT |hpe.msa.raw.controllers.statistics<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get disk groups |<p>-</p> |SCRIPT |hpe.msa.raw.disks.groups<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get disk group statistics |<p>-</p> |SCRIPT |hpe.msa.raw.disks.groups.statistics<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get disks |<p>-</p> |SCRIPT |hpe.msa.raw.disks<p>**Preprocessing**:</p><p>- JSONPATH: `$.['drives']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get enclosures |<p>-</p> |SCRIPT |hpe.msa.raw.enclosures<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get fans |<p>-</p> |SCRIPT |hpe.msa.raw.fans<p>**Preprocessing**:</p><p>- JSONPATH: `$.['fan']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get pools |<p>-</p> |SCRIPT |hpe.msa.raw.pools<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get ports |<p>-</p> |SCRIPT |hpe.msa.raw.ports<p>**Preprocessing**:</p><p>- JSONPATH: `$.['port']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get power supplies |<p>-</p> |SCRIPT |hpe.msa.raw.power_supplies<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get volumes |<p>-</p> |SCRIPT |hpe.msa.raw.volumes<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
-|Zabbix raw items |HPE MSA: Get volume statistics |<p>-</p> |SCRIPT |hpe.msa.raw.volumes.statistics<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|HPE |Port [{#NAME}]: Type |<p>Port type.</p> |DEPENDENT |hpe.msa.ports["{#NAME}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['port'] == "{#NAME}")].['port-type-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Fan [{#DURABLE.ID}]: Health |<p>Fan health status.</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Fan [{#DURABLE.ID}]: Status |<p>Fan status.</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Fan [{#DURABLE.ID}]: Speed |<p>Fan speed (revolutions per minute).</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",speed]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['speed'].first()`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Health |<p>Disk health status.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Temperature status |<p>Disk temperature status.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",temperature_status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['temperature-status-numeric'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- IN_RANGE: `1 3`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Temperature |<p>Temperature of the disk.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",temperature]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['temperature-numeric'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Type |<p>Disk type:</p><p>SAS: Enterprise SAS spinning disk.</p><p>SAS MDL: Midline SAS spinning disk.</p><p>SSD SAS: SAS solid-state disk.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['description-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Disk group |<p>If the disk is in a disk group, the disk group name.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",group]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['disk-group'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Storage pool |<p>If the disk is in a pool, the pool name.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",pool]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['storage-pool-name'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Vendor |<p>Disk vendor.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",vendor]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['vendor'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Model |<p>Disk model.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['model'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Serial number |<p>Disk serial number.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Blocks size |<p>The size of a block, in bytes.</p> |DEPENDENT |hpe.msa.disks.blocks["{#DURABLE.ID}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['blocksize'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Blocks total |<p>Total space in blocks.</p> |DEPENDENT |hpe.msa.disks.blocks["{#DURABLE.ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['blocks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Space total |<p>Total size of the disk.</p> |CALCULATED |hpe.msa.disks.space["{#DURABLE.ID}",total]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p><p>**Expression**:</p>`last(//hpe.msa.disks.blocks["{#DURABLE.ID}",size])*last(//hpe.msa.disks.blocks["{#DURABLE.ID}",total])` |
+|HPE |Disk [{#DURABLE.ID}]: SSD life left |<p>The percentage of disk life remaining.</p> |DEPENDENT |hpe.msa.disks.ssd["{#DURABLE.ID}",life_left]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['ssd-life-left-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |I/O module [{#DURABLE.ID}]: Health |<p>I/O module health status.</p> |DEPENDENT |hpe.msa.io_modules["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |I/O module [{#DURABLE.ID}]: Status |<p>I/O module status.</p> |DEPENDENT |hpe.msa.io_modules["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 3`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |I/O module [{#DURABLE.ID}]: Part number |<p>Part number of the I/O module.</p> |DEPENDENT |hpe.msa.io_modules["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |I/O module [{#DURABLE.ID}]: Serial number |<p>I/O module serial number.</p> |DEPENDENT |hpe.msa.io_modules["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|Zabbix raw items |HPE MSA: Get system |<p>General system information.</p> |SCRIPT |hpe.msa.raw.system<p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get controllers |<p>The list of controllers.</p> |SCRIPT |hpe.msa.raw.controllers<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get controller statistics |<p>The list of controller statistics.</p> |SCRIPT |hpe.msa.raw.controllers.statistics<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics']`</p><p>- JAVASCRIPT: `The text is too long. Please see the template.`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get I/O modules |<p>The list of I/O modules.</p> |SCRIPT |hpe.msa.raw.io_modules<p>**Preprocessing**:</p><p>- JSONPATH: `$.['io-modules']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get disk groups |<p>The list of disk groups.</p> |SCRIPT |hpe.msa.raw.disks.groups<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get disk group statistics |<p>The list of disk group statistics.</p> |SCRIPT |hpe.msa.raw.disks.groups.statistics<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get disks |<p>The list of disks.</p> |SCRIPT |hpe.msa.raw.disks<p>**Preprocessing**:</p><p>- JSONPATH: `$.['drives']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get enclosures |<p>The list of enclosures.</p> |SCRIPT |hpe.msa.raw.enclosures<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get fans |<p>The list of fans.</p> |SCRIPT |hpe.msa.raw.fans<p>**Preprocessing**:</p><p>- JSONPATH: `$.['fan']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get pools |<p>The list of pools.</p> |SCRIPT |hpe.msa.raw.pools<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get ports |<p>The list of ports.</p> |SCRIPT |hpe.msa.raw.ports<p>**Preprocessing**:</p><p>- JSONPATH: `$.['port']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get power supplies |<p>The list of power supplies.</p> |SCRIPT |hpe.msa.raw.power_supplies<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get volumes |<p>The list of volumes.</p> |SCRIPT |hpe.msa.raw.volumes<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get volume statistics |<p>The list of volume statistics.</p> |SCRIPT |hpe.msa.raw.volumes.statistics<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
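
The dependent items in the table above all share the same preprocessing pattern: filter the raw JSON array by `durable-id`, take the first match, then extract one field, while the "Space total" item multiplies block size by block count in a calculated expression. A minimal Python sketch of that pattern (the sample payload is hypothetical):

```python
# Emulates the JSONPath preprocessing used by the dependent items:
# $.[?(@['durable-id'] == "...")].['<field>'].first()
def first_by_durable_id(items, durable_id):
    """Return the first element whose durable-id matches, or None."""
    return next((i for i in items if i.get("durable-id") == durable_id), None)

# Hypothetical raw payload, shaped like the "Get disks" item output.
drives = [
    {"durable-id": "disk_01.01", "blocksize": 512, "blocks": 23437770752},
    {"durable-id": "disk_01.02", "blocksize": 4096, "blocks": 2929721344},
]

disk = first_by_durable_id(drives, "disk_01.01")
# Same arithmetic as the calculated item:
# last(//hpe.msa.disks.blocks[...,size]) * last(//hpe.msa.disks.blocks[...,total])
space_total = disk["blocksize"] * disk["blocks"]
print(space_total)  # total disk size in bytes
```

This is a sketch of the preprocessing semantics only; in the template the filtering is performed by Zabbix JSONPath preprocessing, not by user code.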
## Triggers
@@ -174,18 +192,19 @@ There are no template links in this template.
|System health is in degraded state |<p>System health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health)=1` |WARNING | |
|System health is in fault state |<p>System health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health)=2` |AVERAGE | |
|System health is in unknown state |<p>System health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health)=3` |INFO | |
-|Failed to fetch API data |<p>Zabbix has not received data for items for the last 5 minutes.</p> |`nodata(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health,5m)=1` |WARNING |<p>**Depends on**:</p><p>- Service is down</p> |
-|Service is down |<p>-</p> |`max(/HPE MSA 2060 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}"],5m)=0` |WARNING | |
-|Controller [{#DURABLE.ID}]: Controller health is in degraded state |<p>Controller health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",health])=1` |WARNING |<p>**Depends on**:</p><p>- Controller [{#DURABLE.ID}]: Controller is down</p> |
-|Controller [{#DURABLE.ID}]: Controller health is in fault state |<p>Controller health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",health])=2` |AVERAGE |<p>**Depends on**:</p><p>- Controller [{#DURABLE.ID}]: Controller is down</p> |
-|Controller [{#DURABLE.ID}]: Controller health is in unknown state |<p>Controller health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",health])=3` |INFO |<p>**Depends on**:</p><p>- Controller [{#DURABLE.ID}]: Controller is down</p> |
-|Controller [{#DURABLE.ID}]: Controller is down |<p>-</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",status])=1` |HIGH | |
-|Controller [{#DURABLE.ID}]: Controller has been restarted |<p>-</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",uptime])<10m` |INFO | |
+|Failed to fetch API data |<p>Zabbix has not received data for items for the last 5 minutes.</p> |`nodata(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health,5m)=1` |AVERAGE |<p>**Depends on**:</p><p>- Service is down or unavailable</p> |
+|Service is down or unavailable |<p>HTTP/HTTPS service is down or a TCP connection cannot be established.</p> |`max(/HPE MSA 2060 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}"],5m)=0` |HIGH | |
+|Controller [{#CONTROLLER.ID}]: Controller health is in degraded state |<p>Controller health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=1` |WARNING |<p>**Depends on**:</p><p>- Controller [{#CONTROLLER.ID}]: Controller is down</p> |
+|Controller [{#CONTROLLER.ID}]: Controller health is in fault state |<p>Controller health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=2` |AVERAGE |<p>**Depends on**:</p><p>- Controller [{#CONTROLLER.ID}]: Controller is down</p> |
+|Controller [{#CONTROLLER.ID}]: Controller health is in unknown state |<p>Controller health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=3` |INFO |<p>**Depends on**:</p><p>- Controller [{#CONTROLLER.ID}]: Controller is down</p> |
+|Controller [{#CONTROLLER.ID}]: Controller is down |<p>The controller is down.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1` |HIGH | |
+|Controller [{#CONTROLLER.ID}]: High CPU utilization |<p>Controller CPU utilization is too high. The system might be slow to respond.</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util],5m)>{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}` |WARNING | |
+|Controller [{#CONTROLLER.ID}]: Controller has been restarted |<p>The controller uptime is less than 10 minutes.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",uptime])<10m` |WARNING | |
|Disk group [{#NAME}]: Disk group health is in degraded state |<p>Disk group health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=1` |WARNING | |
|Disk group [{#NAME}]: Disk group health is in fault state |<p>Disk group health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=2` |AVERAGE | |
|Disk group [{#NAME}]: Disk group health is in unknown state |<p>Disk group health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=3` |INFO | |
-|Disk group [{#NAME}]: Disk group space is low |<p>Disk group is running low on free space (less than {$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}% available).</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}` |WARNING |<p>**Depends on**:</p><p>- Disk group [{#NAME}]: Disk group space is critically low</p> |
-|Disk group [{#NAME}]: Disk group space is critically low |<p>Disk group is running low on free space (less than {$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}% available).</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group space is low |<p>Disk group is running low on free space (less than {$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}% available).</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}` |WARNING |<p>**Depends on**:</p><p>- Disk group [{#NAME}]: Disk group space is critically low</p> |
+|Disk group [{#NAME}]: Disk group space is critically low |<p>Disk group is running low on free space (less than {$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}% available).</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}` |AVERAGE | |
|Disk group [{#NAME}]: Disk group is fault tolerant with a down disk |<p>The disk group is online and fault tolerant, but some of its disks are down.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=1` |AVERAGE | |
|Disk group [{#NAME}]: Disk group has damaged disks |<p>The disk group is online and fault tolerant, but some of its disks are damaged.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=9` |AVERAGE | |
|Disk group [{#NAME}]: Disk group has missing disks |<p>The disk group is online and fault tolerant, but some of its disks are missing.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=8` |AVERAGE | |
@@ -199,8 +218,8 @@ There are no template links in this template.
|Pool [{#NAME}]: Pool health is in degraded state |<p>Pool health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=1` |WARNING | |
|Pool [{#NAME}]: Pool health is in fault state |<p>Pool health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=2` |AVERAGE | |
|Pool [{#NAME}]: Pool health is in unknown state |<p>Pool [{#NAME}] health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=3` |INFO | |
-|Pool [{#NAME}]: Pool space is low |<p>Pool is running low on free space (less than {$HPE.PRIMERA.POOL.PUSED.MAX.WARN:"{#NAME}"}% available).</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.PRIMERA.POOL.PUSED.MAX.WARN:"{#NAME}"}` |WARNING |<p>**Depends on**:</p><p>- Pool [{#NAME}]: Pool space is critically low</p> |
-|Pool [{#NAME}]: Pool space is critically low |<p>Pool is running low on free space (less than {$HPE.PRIMERA.POOL.PUSED.MAX.CRIT:"{#NAME}"}% available).</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.PRIMERA.POOL.PUSED.MAX.CRIT:"{#NAME}"}` |AVERAGE | |
+|Pool [{#NAME}]: Pool space is low |<p>Pool is running low on free space (less than {$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}% available).</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}` |WARNING |<p>**Depends on**:</p><p>- Pool [{#NAME}]: Pool space is critically low</p> |
+|Pool [{#NAME}]: Pool space is critically low |<p>Pool is running low on free space (less than {$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}% available).</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}` |AVERAGE | |
|Enclosure [{#DURABLE.ID}]: Enclosure health is in degraded state |<p>Enclosure health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=1` |WARNING | |
|Enclosure [{#DURABLE.ID}]: Enclosure health is in fault state |<p>Enclosure health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=2` |AVERAGE | |
|Enclosure [{#DURABLE.ID}]: Enclosure health is in unknown state |<p>Enclosure health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=3` |INFO | |
@@ -208,31 +227,36 @@ There are no template links in this template.
|Enclosure [{#DURABLE.ID}]: Enclosure has warning status |<p>Enclosure has warning status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=3` |WARNING | |
|Enclosure [{#DURABLE.ID}]: Enclosure is unavailable |<p>Enclosure is unavailable.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=7` |HIGH | |
|Enclosure [{#DURABLE.ID}]: Enclosure is unrecoverable |<p>Enclosure is unrecoverable.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=4` |HIGH | |
-|Enclosure [{#DURABLE.ID}]: Enclosure has unknown status |<p>Enclosure has unknown status</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=6` |INFO | |
-|Power supply [{#LOCATION}]: Power supply health is in degraded state |<p>Power supply health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=1` |WARNING | |
-|Power supply [{#LOCATION}]: Power supply health is in fault state |<p>Power supply health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=2` |AVERAGE | |
-|Power supply [{#LOCATION}]: Power supply health is in unknown state |<p>Power supply health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=3` |INFO | |
-|Power supply [{#LOCATION}]: Power supply has error status |<p>Power supply has error status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=2` |AVERAGE | |
-|Power supply [{#LOCATION}]: Power supply has warning status |<p>Power supply has warning status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=1` |WARNING | |
-|Power supply [{#LOCATION}]: Power supply has unknown status |<p>Power supply has unknown status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=4` |INFO | |
+|Enclosure [{#DURABLE.ID}]: Enclosure has unknown status |<p>Enclosure has unknown status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=6` |INFO | |
+|Power supply [{#DURABLE.ID}]: Power supply health is in degraded state |<p>Power supply health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=1` |WARNING | |
+|Power supply [{#DURABLE.ID}]: Power supply health is in fault state |<p>Power supply health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Power supply [{#DURABLE.ID}]: Power supply health is in unknown state |<p>Power supply health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=3` |INFO | |
+|Power supply [{#DURABLE.ID}]: Power supply has error status |<p>Power supply has error status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=2` |AVERAGE | |
+|Power supply [{#DURABLE.ID}]: Power supply has warning status |<p>Power supply has warning status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=1` |WARNING | |
+|Power supply [{#DURABLE.ID}]: Power supply has unknown status |<p>Power supply has unknown status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=4` |INFO | |
|Port [{#NAME}]: Port health is in degraded state |<p>Port health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=1` |WARNING | |
|Port [{#NAME}]: Port health is in fault state |<p>Port health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=2` |AVERAGE | |
|Port [{#NAME}]: Port health is in unknown state |<p>Port health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=3` |INFO | |
|Port [{#NAME}]: Port has error status |<p>Port has error status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=2` |AVERAGE | |
|Port [{#NAME}]: Port has warning status |<p>Port has warning status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=1` |WARNING | |
|Port [{#NAME}]: Port has unknown status |<p>Port has unknown status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=4` |INFO | |
-|{#NAME} [{#LOCATION}]: Fan health is in degraded state |<p>Fan health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=1` |WARNING | |
-|{#NAME} [{#LOCATION}]: Fan health is in fault state |<p>Fan health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=2` |AVERAGE | |
-|{#NAME} [{#LOCATION}]: Fan health is in unknown state |<p>Fan health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=3` |INFO | |
-|{#NAME} [{#LOCATION}]: Fan has error status |<p>Fan has error status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=1` |AVERAGE | |
-|{#NAME} [{#LOCATION}]: Fan is missing |<p>Fan is missing.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=3` |INFO | |
-|{#NAME} [{#LOCATION}]: Fan is off |<p>Fan is off.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=2` |WARNING | |
-|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk health is in degraded state |<p>Disk health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=1` |WARNING | |
-|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk health is in fault state |<p>Disk health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=2` |AVERAGE | |
-|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk health is in unknown state |<p>Disk health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=3` |INFO | |
-|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk temperature is high |<p>Disk temperature is high.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=3` |WARNING | |
-|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk temperature is critically high |<p>Disk temperature is critically high.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=2` |AVERAGE | |
-|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk temperature is unknown |<p>Disk temperature is unknown.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=4` |INFO | |
+|Fan [{#DURABLE.ID}]: Fan health is in degraded state |<p>Fan health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=1` |WARNING | |
+|Fan [{#DURABLE.ID}]: Fan health is in fault state |<p>Fan health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Fan [{#DURABLE.ID}]: Fan health is in unknown state |<p>Fan health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=3` |INFO | |
+|Fan [{#DURABLE.ID}]: Fan has error status |<p>Fan has error status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=1` |AVERAGE | |
+|Fan [{#DURABLE.ID}]: Fan is missing |<p>Fan is missing.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=3` |INFO | |
+|Fan [{#DURABLE.ID}]: Fan is off |<p>Fan is off.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=2` |WARNING | |
+|Disk [{#DURABLE.ID}]: Disk health is in degraded state |<p>Disk health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=1` |WARNING | |
+|Disk [{#DURABLE.ID}]: Disk health is in fault state |<p>Disk health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Disk [{#DURABLE.ID}]: Disk health is in unknown state |<p>Disk health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=3` |INFO | |
+|Disk [{#DURABLE.ID}]: Disk temperature is high |<p>Disk temperature is high.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=3` |WARNING | |
+|Disk [{#DURABLE.ID}]: Disk temperature is critically high |<p>Disk temperature is critically high.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=2` |AVERAGE | |
+|Disk [{#DURABLE.ID}]: Disk temperature is unknown |<p>Disk temperature is unknown.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=4` |INFO | |
+|I/O module [{#DURABLE.ID}]: I/O module health is in degraded state |<p>I/O module health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",health])=1` |WARNING | |
+|I/O module [{#DURABLE.ID}]: I/O module health is in fault state |<p>I/O module health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|I/O module [{#DURABLE.ID}]: I/O module health is in unknown state |<p>I/O module health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",health])=3` |INFO | |
+|I/O module [{#DURABLE.ID}]: I/O module is down |<p>I/O module is down.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",status])=1` |AVERAGE | |
+|I/O module [{#DURABLE.ID}]: I/O module has unknown status |<p>I/O module has unknown status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",status])=3` |INFO | |
## Feedback
diff --git a/templates/san/hpe_msa2060_http/template_san_hpe_msa2060_http.yaml b/templates/san/hpe_msa2060_http/template_san_hpe_msa2060_http.yaml
index b568dde4f53..6f4d81fbc91 100644
--- a/templates/san/hpe_msa2060_http/template_san_hpe_msa2060_http.yaml
+++ b/templates/san/hpe_msa2060_http/template_san_hpe_msa2060_http.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '6.0'
- date: '2022-05-11T05:26:46Z'
+ date: '2022-05-13T11:04:04Z'
groups:
-
uuid: 7c2cb727f85b492d88cd56e17127c64d
@@ -15,13 +15,10 @@ zabbix_export:
It works without any external scripts and uses the script items.
Setup:
- 1. Create user zabbix on the storage with browse role and enable it for all domains.
- 2. The WSAPI server does not start automatically. To enable it:
- - log in to the CLI as Super, Service, or any role granted the wsapi_set right;
- - start the WSAPI server by command: 'startwsapi';
- - to check WSAPI state use command: 'showwsapi'.
- 3. Link template to the host.
- 4. Configure {$HPE.MSA.API.PASSWORD} and {$HPE.PRIMERA.API.PASSWORD}.
+ 1. Create a user "zabbix" on the storage with the monitor role.
+ 2. Link the template to the host.
+ 3. Configure {$HPE.MSA.API.PASSWORD} and, if not already set, an interface with the address through which the API is accessible.
+ 4. Change the {$HPE.MSA.API.SCHEME} and {$HPE.MSA.API.PORT} macros if needed.
You can discuss this template or leave feedback on our forum https://www.zabbix.com/forum/zabbix-suggestions-and-feedback
@@ -96,6 +93,7 @@ zabbix_export:
}
return response;
+ description: 'The list of controllers.'
preprocessing:
-
type: JSONPATH
@@ -185,11 +183,22 @@ zabbix_export:
}
return response;
+ description: 'The list of controllers statistics.'
preprocessing:
-
type: JSONPATH
parameters:
- '$.[''controller-statistics'']'
+ -
+ type: JAVASCRIPT
+ parameters:
+ - |
+ var result = [];
+ JSON.parse(value).forEach(function (key) {
+ key["durable-id"] = key["durable-id"].toLowerCase();
+ result.push(key);
+ });
+ return JSON.stringify(result);
timeout: '{$HPE.MSA.DATA.TIMEOUT}'
parameters:
-
@@ -274,6 +283,7 @@ zabbix_export:
}
return response;
+ description: 'The list of disks.'
preprocessing:
-
type: JSONPATH
@@ -363,6 +373,7 @@ zabbix_export:
}
return response;
+ description: 'The list of disk groups.'
preprocessing:
-
type: JSONPATH
@@ -452,6 +463,7 @@ zabbix_export:
}
return response;
+ description: 'The list of disk groups statistics.'
preprocessing:
-
type: JSONPATH
@@ -541,6 +553,7 @@ zabbix_export:
}
return response;
+ description: 'The list of enclosures.'
preprocessing:
-
type: JSONPATH
@@ -630,6 +643,7 @@ zabbix_export:
}
return response;
+ description: 'The list of fans.'
preprocessing:
-
type: JSONPATH
@@ -654,6 +668,96 @@ zabbix_export:
tag: component
value: raw
-
+ uuid: c4edf201793e4a8abd827b849631e79b
+ name: 'HPE MSA: Get I/O modules'
+ type: SCRIPT
+ key: hpe.msa.raw.io_modules
+ history: '0'
+ trends: '0'
+ value_type: TEXT
+ params: |
+ var params = JSON.parse(value),
+ fields = ['username', 'password', 'method', 'base_url'],
+ result = {};
+
+ fields.forEach(function (field) {
+ if (typeof params !== 'object' || typeof params[field] === 'undefined' || params[field] === '' ) {
+ throw 'Required param is not set: "' + field + '".';
+ }
+ });
+
+ if (!params.base_url.endsWith('/')) {
+ params.base_url += '/';
+ }
+
+ var response, request = new HttpRequest();
+ request.addHeader('datatype: json');
+
+    var auth_string = sha256(params.username + '_' + params.password);
+
+ response = request.get(params.base_url + 'api/login/' + auth_string);
+
+ if (request.getStatus() < 200 || request.getStatus() >= 300) {
+ throw 'Request failed with status code ' + request.getStatus() + ': ' + response;
+ }
+
+ if (response !== null) {
+ try {
+            var auth_data = JSON.parse(response);
+ }
+ catch (error) {
+ throw 'Failed to parse auth response received from device API. Check debug log for more information.';
+ }
+ }
+
+    var sessionKey = auth_data['status'][0]['response'];
+
+ request = new HttpRequest();
+ request.addHeader('sessionKey: ' + sessionKey);
+ request.addHeader('datatype: json');
+
+ response = request.get(params.base_url + 'api/show/' + params.method);
+
+ if (request.getStatus() < 200 || request.getStatus() >= 300) {
+ throw 'Request failed with status code ' + request.getStatus() + ': ' + response;
+ }
+
+ if (response !== null) {
+ try {
+            result = JSON.parse(response);
+ }
+ catch (error) {
+ throw 'Failed to parse response received from device API. Check debug log for more information.';
+ }
+ }
+
+ return response;
+ description: 'The list of I/O modules.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''io-modules'']'
+ timeout: '{$HPE.MSA.DATA.TIMEOUT}'
+ parameters:
+ -
+ name: base_url
+ value: '{$HPE.MSA.API.SCHEME}://{HOST.CONN}:{$HPE.MSA.API.PORT}/'
+ -
+ name: method
+ value: io-modules
+ -
+ name: username
+ value: '{$HPE.MSA.API.USERNAME}'
+ -
+ name: password
+ value: '{$HPE.MSA.API.PASSWORD}'
+ tags:
+ -
+ tag: component
+ value: raw
+ -
uuid: e1eb13f74cd04797a6c1f3a5bc5e1b0d
name: 'HPE MSA: Get pools'
type: SCRIPT
@@ -719,6 +823,7 @@ zabbix_export:
}
return response;
+ description: 'The list of pools.'
preprocessing:
-
type: JSONPATH
@@ -808,6 +913,7 @@ zabbix_export:
}
return response;
+ description: 'The list of ports.'
preprocessing:
-
type: JSONPATH
@@ -897,6 +1003,7 @@ zabbix_export:
}
return response;
+ description: 'The list of power supplies.'
preprocessing:
-
type: JSONPATH
@@ -986,6 +1093,7 @@ zabbix_export:
}
return response;
+ description: 'General system information.'
timeout: '{$HPE.MSA.DATA.TIMEOUT}'
parameters:
-
@@ -1070,6 +1178,7 @@ zabbix_export:
}
return response;
+ description: 'The list of volumes.'
preprocessing:
-
type: JSONPATH
@@ -1159,6 +1268,7 @@ zabbix_export:
}
return response;
+ description: 'The list of volumes statistics.'
preprocessing:
-
type: JSONPATH
@@ -1240,11 +1350,11 @@ zabbix_export:
expression: 'nodata(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health,5m)=1'
name: 'Failed to fetch API data'
event_name: 'Failed to fetch API data (or no data for 5m)'
- priority: WARNING
+ priority: AVERAGE
description: 'Zabbix has not received data for items for the last 5 minutes.'
dependencies:
-
- name: 'Service is down'
+ name: 'Service is down or unavailable'
expression: 'max(/HPE MSA 2060 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}"],5m)=0'
tags:
-
@@ -1433,8 +1543,9 @@ zabbix_export:
-
uuid: 9c1bf26f95d946f386bbf613d3d55779
expression: 'max(/HPE MSA 2060 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}"],5m)=0'
- name: 'Service is down'
- priority: WARNING
+ name: 'Service is down or unavailable'
+ priority: HIGH
+ description: 'HTTP/HTTPS service is down or unable to establish TCP connection.'
tags:
-
tag: scope
@@ -1449,10 +1560,413 @@ zabbix_export:
description: 'Discover controllers.'
item_prototypes:
-
+ uuid: 73bc16fc631f4386abbc78897db07e13
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Read hits, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.read.hits["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block to be read is found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''read-cache-hits''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 04e14fe4d8ba4693b954ebcac1671649
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Read misses, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.read.misses["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block to be read is not found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''read-cache-misses''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 5cb9f7eb42d2413a90161ac192629073
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write hits, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.write.hits["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block written to is found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''write-cache-hits''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 61aa7235c6c44cfababd1b2390cc0443
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write misses, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.write.misses["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block written to is not found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''write-cache-misses''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 0d754544c18143ff98114e1ed316ad1e
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write utilization'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.write["{#CONTROLLER.ID}",util]'
+ delay: '0'
+ history: 7d
+ units: '%'
+ description: 'Percentage of write cache in use, from 0 to 100.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''write-cache-used''].first()'
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 482c5af99fe740278c4663ba300dee04
+ name: 'Controller [{#CONTROLLER.ID}]: Cache memory size'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache["{#CONTROLLER.ID}",total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Controller cache memory size.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''cache-memory-size''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.msa.raw.controllers
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 80d6ae014e354f6c844c3b88ea66c530
+ name: 'Controller [{#CONTROLLER.ID}]: CPU utilization'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util]'
+ delay: '0'
+ history: 7d
+ units: '%'
+ description: 'Percentage of time the CPU is busy, from 0 to 100.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''cpu-load''].first()'
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ trigger_prototypes:
+ -
+ uuid: 0bf68b46b7644ad5ad0123df49c1da35
+ expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util],5m)>{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}'
+ name: 'Controller [{#CONTROLLER.ID}]: High CPU utilization'
+ event_name: 'Controller [{#CONTROLLER.ID}]: High CPU utilization (over {$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}% for 5m)'
+ priority: WARNING
+ description: 'Controller CPU utilization is too high. The system might be slow to respond.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: c8fbfd459fce4149b1459e366b61981a
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate: Reads'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.data_transfer.reads["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data read rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''data-read-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 9c5c23273f5b43ad9e300d2c7b90bc3f
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate: Total'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.data_transfer.total["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ units: Bps
+ description: 'The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''bytes-per-second-numeric''].first()'
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 94f0b7f7d397453f9227c1b473a77a4e
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate: Writes'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.data_transfer.writes["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data write rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''data-written-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 8b0f014d1ed5470d919357f204b704ca
+ name: 'Controller [{#CONTROLLER.ID}]: IOPS, read rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.iops.read["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!r/s'
+ description: 'Number of read operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''number-of-reads''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 16f2fd5bd9d244daa09aef3f79a5d450
+ name: 'Controller [{#CONTROLLER.ID}]: IOPS, total rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.iops.total["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ units: '!iops'
+ description: 'Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''iops''].first()'
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 9b8366ac60304c3c98dedc278ad18418
+ name: 'Controller [{#CONTROLLER.ID}]: IOPS, write rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.iops.write["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!w/s'
+ description: 'Number of write operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''number-of-writes''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.controllers.statistics
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 5f6c124f1aef41499ee52616ede02de9
+ name: 'Controller [{#CONTROLLER.ID}]: Disks'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",disks]'
+ delay: '0'
+ history: 7d
+ description: 'Number of disks in the storage system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''disks''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.raw.controllers
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: c70f280c9c494b769b442f3a22a3c173
+ name: 'Controller [{#CONTROLLER.ID}]: Disk groups'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",disk_groups]'
+ delay: '0'
+ history: 7d
+ description: 'Number of disk groups in the storage system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''virtual-disks''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.raw.controllers
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
uuid: ba1bb9818a9a487c8742d619316b087e
- name: 'Controller [{#DURABLE.ID}]: Firmware version'
+ name: 'Controller [{#CONTROLLER.ID}]: Firmware version'
type: DEPENDENT
- key: 'hpe.msa.controllers["{#DURABLE.ID}",firmware]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",firmware]'
delay: '0'
history: 7d
trends: '0'
@@ -1475,12 +1989,12 @@ zabbix_export:
value: controller
-
tag: controller
- value: '{#DURABLE.ID}'
+ value: '{#CONTROLLER.ID}'
-
uuid: 5f5307f2904a4792af1906a2b03a2a9b
- name: 'Controller [{#DURABLE.ID}]: Health'
+ name: 'Controller [{#CONTROLLER.ID}]: Health'
type: DEPENDENT
- key: 'hpe.msa.controllers["{#DURABLE.ID}",health]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",health]'
delay: '0'
history: 7d
description: 'Controller health status.'
@@ -1508,65 +2022,65 @@ zabbix_export:
value: health
-
tag: controller
- value: '{#DURABLE.ID}'
+ value: '{#CONTROLLER.ID}'
trigger_prototypes:
-
uuid: 3988a5b897a34c84952fa573d7019879
- expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",health])=1'
- name: 'Controller [{#DURABLE.ID}]: Controller health is in degraded state'
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=1'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller health is in degraded state'
priority: WARNING
description: 'Controller health is in degraded state.'
dependencies:
-
- name: 'Controller [{#DURABLE.ID}]: Controller is down'
- expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",status])=1'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
tags:
-
tag: scope
value: performance
-
uuid: 7256e023ac82427bb6ee923d4ff07786
- expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",health])=2'
- name: 'Controller [{#DURABLE.ID}]: Controller health is in fault state'
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=2'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller health is in fault state'
priority: AVERAGE
description: 'Controller health is in fault state.'
dependencies:
-
- name: 'Controller [{#DURABLE.ID}]: Controller is down'
- expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",status])=1'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
tags:
-
tag: scope
value: availability
-
uuid: 15bc89e6c61549caaf5a66c85446ea9d
- expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",health])=3'
- name: 'Controller [{#DURABLE.ID}]: Controller health is in unknown state'
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=3'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller health is in unknown state'
priority: INFO
description: 'Controller health is in unknown state.'
dependencies:
-
- name: 'Controller [{#DURABLE.ID}]: Controller is down'
- expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",status])=1'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
tags:
-
tag: scope
value: notice
-
- uuid: 3405ef21e2cb40729e16c5b8aaf35996
- name: 'Controller [{#DURABLE.ID}]: Part number'
+ uuid: 2c9c2636aeb543ec8e70102c555fe776
+ name: 'Controller [{#CONTROLLER.ID}]: IP address'
type: DEPENDENT
- key: 'hpe.msa.controllers["{#DURABLE.ID}",part_number]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",ip_address]'
delay: '0'
history: 7d
trends: '0'
value_type: CHAR
- description: 'Part number of the controller.'
+ description: 'Controller network port IP address.'
preprocessing:
-
type: JSONPATH
parameters:
- - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''part-number''].first()'
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''ip-address''].first()'
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
@@ -1579,22 +2093,22 @@ zabbix_export:
value: controller
-
tag: controller
- value: '{#DURABLE.ID}'
+ value: '{#CONTROLLER.ID}'
-
- uuid: 6980d1841bc04c79868d6f05bf59921e
- name: 'Controller [{#DURABLE.ID}]: Serial number'
+ uuid: 3405ef21e2cb40729e16c5b8aaf35996
+ name: 'Controller [{#CONTROLLER.ID}]: Part number'
type: DEPENDENT
- key: 'hpe.msa.controllers["{#DURABLE.ID}",serial_number]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",part_number]'
delay: '0'
history: 7d
trends: '0'
value_type: CHAR
- description: 'Storage controller serial number.'
+ description: 'Part number of the controller.'
preprocessing:
-
type: JSONPATH
parameters:
- - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''serial-number''].first()'
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''part-number''].first()'
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
@@ -1607,22 +2121,20 @@ zabbix_export:
value: controller
-
tag: controller
- value: '{#DURABLE.ID}'
+ value: '{#CONTROLLER.ID}'
-
- uuid: c0c2034fc848400c9b1f09f0c54790b3
- name: 'Controller [{#DURABLE.ID}]: Status'
+ uuid: 9b4ee1a634c3462f8fb48eb0e79984df
+ name: 'Controller [{#CONTROLLER.ID}]: Pools'
type: DEPENDENT
- key: 'hpe.msa.controllers["{#DURABLE.ID}",status]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",pools]'
delay: '0'
history: 7d
- description: 'Storage controller status.'
- valuemap:
- name: 'Controller status'
+ description: 'Number of pools in the storage system.'
preprocessing:
-
type: JSONPATH
parameters:
- - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''status-numeric''].first()'
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''number-of-storage-pools''].first()'
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
@@ -1634,90 +2146,83 @@ zabbix_export:
tag: component
value: controller
-
- tag: component
- value: health
- -
tag: controller
- value: '{#DURABLE.ID}'
- trigger_prototypes:
- -
- uuid: 99de4f8de416485db5c3844d1c8d654b
- expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",status])=1'
- name: 'Controller [{#DURABLE.ID}]: Controller is down'
- priority: HIGH
- tags:
- -
- tag: scope
- value: availability
- master_item:
- key: hpe.msa.raw.controllers
- lld_macro_paths:
- -
- lld_macro: '{#DURABLE.ID}'
- path: '$.[''durable-id'']'
- preprocessing:
- -
- type: DISCARD_UNCHANGED_HEARTBEAT
- parameters:
- - 6h
- -
- uuid: 0d2220a2c825447eb7b636a914302213
- name: 'Controller statistics discovery'
- type: DEPENDENT
- key: hpe.msa.controllers.statistics.discovery
- delay: '0'
- description: 'Discover controller statistics.'
- item_prototypes:
+ value: '{#CONTROLLER.ID}'
-
- uuid: 80d6ae014e354f6c844c3b88ea66c530
- name: 'Controller [{#DURABLE.ID}]: CPU utilization'
+ uuid: 6980d1841bc04c79868d6f05bf59921e
+ name: 'Controller [{#CONTROLLER.ID}]: Serial number'
type: DEPENDENT
- key: 'hpe.msa.controllers.cpu["{#DURABLE.ID}",util]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",serial_number]'
delay: '0'
history: 7d
- units: '%'
- description: 'Percentage of time the CPU is busy, from 0 to 100.'
+ trends: '0'
+ value_type: CHAR
+ description: 'Storage controller serial number.'
preprocessing:
-
type: JSONPATH
parameters:
- - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''cpu-load''].first()'
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''serial-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
master_item:
- key: hpe.msa.raw.controllers.statistics
+ key: hpe.msa.raw.controllers
tags:
-
tag: component
value: controller
-
tag: controller
- value: '{#DURABLE.ID}'
+ value: '{#CONTROLLER.ID}'
-
- uuid: 16f2fd5bd9d244daa09aef3f79a5d450
- name: 'Controller [{#DURABLE.ID}]: IOPS, rate'
+ uuid: c0c2034fc848400c9b1f09f0c54790b3
+ name: 'Controller [{#CONTROLLER.ID}]: Status'
type: DEPENDENT
- key: 'hpe.msa.controllers.iops["{#DURABLE.ID}",rate]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",status]'
delay: '0'
history: 7d
- description: 'Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ description: 'Storage controller status.'
+ valuemap:
+ name: 'Controller status'
preprocessing:
-
type: JSONPATH
parameters:
- - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''iops''].first()'
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''status-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
master_item:
- key: hpe.msa.raw.controllers.statistics
+ key: hpe.msa.raw.controllers
tags:
-
tag: component
value: controller
-
+ tag: component
+ value: health
+ -
tag: controller
- value: '{#DURABLE.ID}'
+ value: '{#CONTROLLER.ID}'
+ trigger_prototypes:
+ -
+ uuid: 99de4f8de416485db5c3844d1c8d654b
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ priority: HIGH
+ description: 'The controller is down.'
+ tags:
+ -
+ tag: scope
+ value: availability
-
uuid: 7a9b3ba8dd5446d0961a6eea595c2b49
- name: 'Controller [{#DURABLE.ID}]: Uptime'
+ name: 'Controller [{#CONTROLLER.ID}]: Uptime'
type: DEPENDENT
- key: 'hpe.msa.controllers["{#DURABLE.ID}",uptime]'
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",uptime]'
delay: '0'
history: 7d
units: uptime
@@ -1735,32 +2240,102 @@ zabbix_export:
value: controller
-
tag: controller
- value: '{#DURABLE.ID}'
+ value: '{#CONTROLLER.ID}'
trigger_prototypes:
-
uuid: 255250aa4b75465a989bf8f3fd805667
- expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",uptime])<10m'
- name: 'Controller [{#DURABLE.ID}]: Controller has been restarted'
- event_name: 'Controller [{#DURABLE.ID}]: Controller [{#DURABLE.ID}] has been restarted (uptime < 10m)'
- priority: INFO
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",uptime])<10m'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller has been restarted'
+ event_name: 'Controller [{#CONTROLLER.ID}]: Controller has been restarted (uptime < 10m)'
+ priority: WARNING
+ description: 'The controller uptime is less than 10 minutes.'
tags:
-
tag: scope
- value: notice
+ value: availability
graph_prototypes:
-
+ uuid: a0bac1256ecf42fb9e980a49e52f008e
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write utilization'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.write["{#CONTROLLER.ID}",util]'
+ -
+ uuid: 2b3343a641304872a82c84e1b918f8b3
+ name: 'Controller [{#CONTROLLER.ID}]: Cache usage'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.read.hits["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.read.misses["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.write.hits["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '3'
+ color: A54F10
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.write.misses["{#CONTROLLER.ID}",rate]'
+ -
uuid: ed2117af47d94be9bed0632a0b662a25
- name: 'Controller [{#DURABLE.ID}]: Controller CPU utilization'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller CPU utilization'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util]'
+ -
+ uuid: 27b53c540cae45da9b2e13cbbb1ab821
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate'
graph_items:
-
color: 1A7C11
item:
host: 'HPE MSA 2060 Storage by HTTP'
- key: 'hpe.msa.controllers.cpu["{#DURABLE.ID}",util]'
+ key: 'hpe.msa.controllers.data_transfer.reads["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.data_transfer.writes["{#CONTROLLER.ID}",rate]'
+ -
+ uuid: ce3c794ac9424be5a104b812680cc77b
+ name: 'Controller [{#CONTROLLER.ID}]: Disk operations rate'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.iops.read["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.iops.write["{#CONTROLLER.ID}",rate]'
master_item:
- key: hpe.msa.raw.controllers.statistics
+ key: hpe.msa.raw.controllers
lld_macro_paths:
-
+ lld_macro: '{#CONTROLLER.ID}'
+ path: '$.[''controller-id'']'
+ -
lld_macro: '{#DURABLE.ID}'
path: '$.[''durable-id'']'
preprocessing:
@@ -1778,7 +2353,7 @@ zabbix_export:
item_prototypes:
-
uuid: 4fedb88c1bb74c2cb5a0f72fdfcff104
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Blocks size'
+ name: 'Disk [{#DURABLE.ID}]: Blocks size'
type: DEPENDENT
key: 'hpe.msa.disks.blocks["{#DURABLE.ID}",size]'
delay: '0'
@@ -1802,10 +2377,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: a491cb03df9c4e3ead70e0a74d9337b2
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Blocks total'
+ name: 'Disk [{#DURABLE.ID}]: Blocks total'
type: DEPENDENT
key: 'hpe.msa.disks.blocks["{#DURABLE.ID}",total]'
delay: '0'
@@ -1828,10 +2403,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: 6c20cf4e84b0427fbe797fc209d78785
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Space total'
+ name: 'Disk [{#DURABLE.ID}]: Space total'
type: CALCULATED
key: 'hpe.msa.disks.space["{#DURABLE.ID}",total]'
delay: 1h
@@ -1850,10 +2425,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: 80ea0929a1bf43f4bdeba80e675c52bd
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: SSD life left'
+ name: 'Disk [{#DURABLE.ID}]: SSD life left'
type: DEPENDENT
key: 'hpe.msa.disks.ssd["{#DURABLE.ID}",life_left]'
delay: '0'
@@ -1878,10 +2453,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: f5bb9b7f437f434d83ca0542e41b2673
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk group'
+ name: 'Disk [{#DURABLE.ID}]: Disk group'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",group]'
delay: '0'
@@ -1907,10 +2482,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: 86fca5ad02af49c8a1d48f4a260a0dbf
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Health'
+ name: 'Disk [{#DURABLE.ID}]: Health'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",health]'
delay: '0'
@@ -1940,12 +2515,12 @@ zabbix_export:
value: health
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
trigger_prototypes:
-
uuid: f76f8eec05a94e2db9d4cd3bcbb43aa4
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=1'
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk health is in degraded state'
+ name: 'Disk [{#DURABLE.ID}]: Disk health is in degraded state'
priority: WARNING
description: 'Disk health is in degraded state.'
tags:
@@ -1955,7 +2530,7 @@ zabbix_export:
-
uuid: 383181e44a114334ab28ff09f49b2d51
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=2'
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk health is in fault state'
+ name: 'Disk [{#DURABLE.ID}]: Disk health is in fault state'
priority: AVERAGE
description: 'Disk health is in fault state.'
tags:
@@ -1965,7 +2540,7 @@ zabbix_export:
-
uuid: 2b2d78c6c29f4bd58eff632809dee978
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=3'
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk health is in unknown state'
+ name: 'Disk [{#DURABLE.ID}]: Disk health is in unknown state'
priority: INFO
description: 'Disk health is in unknown state.'
tags:
@@ -1974,7 +2549,7 @@ zabbix_export:
value: notice
-
uuid: 8f8ad679881c4693acfed363e5498b34
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Model'
+ name: 'Disk [{#DURABLE.ID}]: Model'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",model]'
delay: '0'
@@ -1999,10 +2574,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: 7fffecbf1ede4a5e9da5efc4311fc62e
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Storage pool'
+ name: 'Disk [{#DURABLE.ID}]: Storage pool'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",pool]'
delay: '0'
@@ -2028,10 +2603,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: 9a43a148ad4742e1a1df0038b36a171f
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Serial number'
+ name: 'Disk [{#DURABLE.ID}]: Serial number'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",serial_number]'
delay: '0'
@@ -2056,10 +2631,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: 119dc5c43fb741028ccd599d25ad032c
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Temperature'
+ name: 'Disk [{#DURABLE.ID}]: Temperature'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",temperature]'
delay: '0'
@@ -2084,10 +2659,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: 0a0cf4600214443aa504d5c55d1f4015
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Temperature status'
+ name: 'Disk [{#DURABLE.ID}]: Temperature status'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",temperature_status]'
delay: '0'
@@ -2123,12 +2698,12 @@ zabbix_export:
value: health
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
trigger_prototypes:
-
uuid: d4b8f77421d744918e087f696b3f0fff
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=2'
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk temperature is critically high'
+ name: 'Disk [{#DURABLE.ID}]: Disk temperature is critically high'
priority: AVERAGE
description: 'Disk temperature is critically high.'
tags:
@@ -2138,7 +2713,7 @@ zabbix_export:
-
uuid: fbbac4048fda477a99f00566624b6bdb
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=3'
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk temperature is high'
+ name: 'Disk [{#DURABLE.ID}]: Disk temperature is high'
priority: WARNING
description: 'Disk temperature is high.'
tags:
@@ -2148,7 +2723,7 @@ zabbix_export:
-
uuid: 41e4f00446304206804da350a88ce3b9
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=4'
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk temperature is unknown'
+ name: 'Disk [{#DURABLE.ID}]: Disk temperature is unknown'
priority: INFO
description: 'Disk temperature is unknown.'
tags:
@@ -2157,7 +2732,7 @@ zabbix_export:
value: notice
-
uuid: 1a23ef68bb484fd5baeba2b352b970db
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Type'
+ name: 'Disk [{#DURABLE.ID}]: Type'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",type]'
delay: '0'
@@ -2186,10 +2761,10 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
-
uuid: d8e35779834640c8afdc5874f72fe8af
- name: 'Disk [{#ENCLOSURE.ID}.{#SLOT}]: Vendor'
+ name: 'Disk [{#DURABLE.ID}]: Vendor'
type: DEPENDENT
key: 'hpe.msa.disks["{#DURABLE.ID}",vendor]'
delay: '0'
@@ -2214,7 +2789,7 @@ zabbix_export:
value: disk
-
tag: disk
- value: '{#ENCLOSURE.ID}.{#SLOT}'
+ value: '{#DURABLE.ID}'
master_item:
key: hpe.msa.raw.disks
lld_macro_paths:
@@ -2222,12 +2797,6 @@ zabbix_export:
lld_macro: '{#DURABLE.ID}'
path: '$.[''durable-id'']'
-
- lld_macro: '{#ENCLOSURE.ID}'
- path: '$.[''enclosure-id'']'
- -
- lld_macro: '{#SLOT}'
- path: '$.[''slot'']'
- -
lld_macro: '{#TYPE}'
path: '$.[''description-numeric'']'
preprocessing:
@@ -2261,6 +2830,90 @@ zabbix_export:
description: 'Discover disk groups.'
item_prototypes:
-
+ uuid: 8f68ad1b814d4287a6fd72d5bd03f7da
+ name: 'Disk group [{#NAME}]: Average response time: Read'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: s
+ description: 'Average response time for all read operations, calculated over the interval since these statistics were last requested or reset.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''avg-read-rsp-time''].first()'
+ -
+ type: MULTIPLIER
+ parameters:
+ - '0.000001'
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 2ae8acbcd0b9442c9adc8086fa36fa40
+ name: 'Disk group [{#NAME}]: Average response time: Total'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",total]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: s
+ description: 'Average response time for read and write operations, calculated over the interval since these statistics were last requested or reset.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''avg-rsp-time''].first()'
+ -
+ type: MULTIPLIER
+ parameters:
+ - '0.000001'
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: f99ce5e6e31140c298ee447d3a2b8c4d
+ name: 'Disk group [{#NAME}]: Average response time: Write'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: s
+ description: 'Average response time for all write operations, calculated over the interval since these statistics were last requested or reset.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''avg-write-rsp-time''].first()'
+ -
+ type: MULTIPLIER
+ parameters:
+ - '0.000001'
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
uuid: 705fce660a944a47ad7ff0e9c9b1d37e
name: 'Disk group [{#NAME}]: Blocks free'
type: DEPENDENT
@@ -2340,6 +2993,164 @@ zabbix_export:
tag: disk-group
value: '{#NAME}'
-
+ uuid: ecd3de6d32e94d2ab50111659147c97e
+ name: 'Disk group [{#NAME}]: Data transfer rate: Reads'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data read rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''data-read-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 28b236ea619f4130a3271459e9fce06b
+ name: 'Disk group [{#NAME}]: Data transfer rate: Total'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.data_transfer.total["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ units: Bps
+ description: 'The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''bytes-per-second-numeric''].first()'
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 51ef802067c149bea1d5d976df6e3a6f
+ name: 'Disk group [{#NAME}]: Data transfer rate: Writes'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data write rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''data-written-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 95925d6d4af94964b388208ff185642d
+ name: 'Disk group [{#NAME}]: IOPS, read rate'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.iops.read["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!r/s'
+ description: 'Number of read operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''number-of-reads''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: c9fdf59576554063b404d190ad90db18
+ name: 'Disk group [{#NAME}]: IOPS, total rate'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.iops.total["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ units: '!iops'
+ description: 'Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''iops''].first()'
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 31f5b13a56704e438b600df70c37a1fd
+ name: 'Disk group [{#NAME}]: IOPS, write rate'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.iops.write["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!w/s'
+ description: 'Number of write operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''number-of-writes''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.raw.disks.groups.statistics
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
uuid: 7359b1d550734d30bb83612538b36e95
name: 'Disk group [{#NAME}]: RAID type'
type: DEPENDENT
@@ -2389,6 +3200,33 @@ zabbix_export:
tag: disk-group
value: '{#NAME}'
-
+ uuid: bc8e6e0fb286466593186708cddf3b2a
+ name: 'Disk group [{#NAME}]: Pool space used'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.space["{#NAME}",pool_util]'
+ delay: '0'
+ history: 7d
+ units: '%'
+ description: 'The percentage of pool capacity that the disk group occupies.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''name''] == "{#NAME}")].[''pool-percentage''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.raw.disks.groups
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
uuid: fb3dd5308c97446693932206be17ace3
name: 'Disk group [{#NAME}]: Space total'
type: CALCULATED
@@ -2434,26 +3272,26 @@ zabbix_export:
trigger_prototypes:
-
uuid: df1af9dad6444821a86a26158469d0cb
- expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}'
+ expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}'
name: 'Disk group [{#NAME}]: Disk group space is critically low'
- event_name: 'Disk group [{#NAME}]: Disk group space is critically low (used > {$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}%)'
+ event_name: 'Disk group [{#NAME}]: Disk group space is critically low (used > {$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}%)'
priority: AVERAGE
- description: 'Disk group is running low on free space (less than {$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}% available).'
+ description: 'Disk group is running low on free space (less than {$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}% available).'
tags:
-
tag: scope
value: performance
-
uuid: 713960711c324dc780998f8f263344a2
- expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}'
+ expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}'
name: 'Disk group [{#NAME}]: Disk group space is low'
- event_name: 'Disk group [{#NAME}]: Disk group space is low (used > {$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}%)'
+ event_name: 'Disk group [{#NAME}]: Disk group space is low (used > {$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}%)'
priority: WARNING
- description: 'Disk group is running low on free space (less than {$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}% available).'
+ description: 'Disk group is running low on free space (less than {$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}% available).'
dependencies:
-
name: 'Disk group [{#NAME}]: Disk group space is critically low'
- expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}'
+ expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}'
tags:
-
tag: scope
@@ -2698,326 +3536,67 @@ zabbix_export:
value: performance
graph_prototypes:
-
- uuid: 495a941dc4ef45e8b60d6a94bb1fbdcd
- name: 'Disk group [{#NAME}]: Space utilization'
+ uuid: e1f7331965524670b8c44c0b0d8eb99b
+ name: 'Disk group [{#NAME}]: Average response time'
graph_items:
-
color: 1A7C11
item:
host: 'HPE MSA 2060 Storage by HTTP'
- key: 'hpe.msa.disks.groups.space["{#NAME}",free]'
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]'
-
sortorder: '1'
color: 2774A4
item:
host: 'HPE MSA 2060 Storage by HTTP'
- key: 'hpe.msa.disks.groups.space["{#NAME}",total]'
- master_item:
- key: hpe.msa.raw.disks.groups
- lld_macro_paths:
- -
- lld_macro: '{#NAME}'
- path: '$.[''name'']'
- preprocessing:
- -
- type: DISCARD_UNCHANGED_HEARTBEAT
- parameters:
- - 6h
- -
- uuid: f7364188cd1e49a09051c99b5d724b13
- name: 'Disk group statistics discovery'
- type: DEPENDENT
- key: hpe.msa.disks.groups.statistics.discovery
- delay: '0'
- description: 'Discover disk group statistics.'
- item_prototypes:
- -
- uuid: 8f68ad1b814d4287a6fd72d5bd03f7da
- name: 'Disk group [{#NAME}]: Average response time: Read'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]'
- delay: '0'
- history: 7d
- value_type: FLOAT
- units: s
- description: 'Average response time for all read operations, calculated over the interval since these statistics were last requested or reset.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''avg-read-rsp-time''].first()'
- -
- type: MULTIPLIER
- parameters:
- - '0.000001'
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: 2ae8acbcd0b9442c9adc8086fa36fa40
- name: 'Disk group [{#NAME}]: Average response time: Total'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",total]'
- delay: '0'
- history: 7d
- value_type: FLOAT
- units: s
- description: 'Average response time for read and write operations, calculated over the interval since these statistics were last requested or reset.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''avg-rsp-time''].first()'
- -
- type: MULTIPLIER
- parameters:
- - '0.000001'
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: f99ce5e6e31140c298ee447d3a2b8c4d
- name: 'Disk group [{#NAME}]: Average response time: Write'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]'
- delay: '0'
- history: 7d
- value_type: FLOAT
- units: s
- description: 'Average response time for all write operations, calculated over the interval since these statistics were last requested or reset.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''avg-write-rsp-time''].first()'
- -
- type: MULTIPLIER
- parameters:
- - '0.000001'
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: ecd3de6d32e94d2ab50111659147c97e
- name: 'Disk group [{#NAME}]: Data transfer rate: Reads'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]'
- delay: '0'
- history: 7d
- value_type: FLOAT
- units: Bps
- description: 'The data read rate, in bytes per second.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''data-read-numeric''].first()'
- -
- type: CHANGE_PER_SECOND
- parameters:
- - ''
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: 28b236ea619f4130a3271459e9fce06b
- name: 'Disk group [{#NAME}]: Data transfer rate: Total'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.data_transfer.total["{#NAME}",rate]'
- delay: '0'
- history: 7d
- units: Bps
- description: 'The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''bytes-per-second-numeric''].first()'
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: 51ef802067c149bea1d5d976df6e3a6f
- name: 'Disk group [{#NAME}]: Data transfer rate: Writes'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]'
- delay: '0'
- history: 7d
- value_type: FLOAT
- units: Bps
- description: 'The data write rate, in bytes per second.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''data-written-numeric''].first()'
- -
- type: CHANGE_PER_SECOND
- parameters:
- - ''
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: c9fdf59576554063b404d190ad90db18
- name: 'Disk group [{#NAME}]: IOPS, rate'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.iops["{#NAME}",rate]'
- delay: '0'
- history: 7d
- description: 'Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''iops''].first()'
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: 95925d6d4af94964b388208ff185642d
- name: 'Disk group [{#NAME}]: Reads, rate'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.reads["{#NAME}",rate]'
- delay: '0'
- history: 7d
- value_type: FLOAT
- description: 'Number of read operations per second.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''number-of-reads''].first()'
- -
- type: CHANGE_PER_SECOND
- parameters:
- - ''
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- -
- uuid: 31f5b13a56704e438b600df70c37a1fd
- name: 'Disk group [{#NAME}]: Writes, rate'
- type: DEPENDENT
- key: 'hpe.msa.disks.groups.writes["{#NAME}",rate]'
- delay: '0'
- history: 7d
- value_type: FLOAT
- description: 'Number of write operations per second.'
- preprocessing:
- -
- type: JSONPATH
- parameters:
- - '$.[?(@[''name''] == "{#NAME}")].[''number-of-writes''].first()'
- -
- type: CHANGE_PER_SECOND
- parameters:
- - ''
- master_item:
- key: hpe.msa.raw.disks.groups.statistics
- tags:
- -
- tag: component
- value: disk-group
- -
- tag: disk-group
- value: '{#NAME}'
- graph_prototypes:
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]'
-
- uuid: e1f7331965524670b8c44c0b0d8eb99b
- name: 'Disk group [{#NAME}]: Average response time'
+ uuid: 1354b947316a46be8dc696c29f408a6b
+ name: 'Disk group [{#NAME}]: Data transfer rate'
graph_items:
-
color: 1A7C11
item:
host: 'HPE MSA 2060 Storage by HTTP'
- key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]'
+ key: 'hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]'
-
sortorder: '1'
color: 2774A4
item:
host: 'HPE MSA 2060 Storage by HTTP'
- key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]'
+ key: 'hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]'
-
- uuid: 1354b947316a46be8dc696c29f408a6b
- name: 'Disk group [{#NAME}]: Data transfer rate'
+ uuid: f7f556011add4cd6b0fe8e4545c607a0
+ name: 'Disk group [{#NAME}]: Disk operations rate'
graph_items:
-
color: 1A7C11
item:
host: 'HPE MSA 2060 Storage by HTTP'
- key: 'hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]'
+ key: 'hpe.msa.disks.groups.iops.read["{#NAME}",rate]'
-
sortorder: '1'
color: 2774A4
item:
host: 'HPE MSA 2060 Storage by HTTP'
- key: 'hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]'
+ key: 'hpe.msa.disks.groups.iops.write["{#NAME}",rate]'
-
- uuid: f7f556011add4cd6b0fe8e4545c607a0
- name: 'Disk group [{#NAME}]: Disk operations rate'
+ uuid: 495a941dc4ef45e8b60d6a94bb1fbdcd
+ name: 'Disk group [{#NAME}]: Space utilization'
graph_items:
-
color: 1A7C11
item:
host: 'HPE MSA 2060 Storage by HTTP'
- key: 'hpe.msa.disks.groups.reads["{#NAME}",rate]'
+ key: 'hpe.msa.disks.groups.space["{#NAME}",free]'
-
sortorder: '1'
color: 2774A4
item:
host: 'HPE MSA 2060 Storage by HTTP'
- key: 'hpe.msa.disks.groups.writes["{#NAME}",rate]'
+ key: 'hpe.msa.disks.groups.space["{#NAME}",total]'
master_item:
- key: hpe.msa.raw.disks.groups.statistics
+ key: hpe.msa.raw.disks.groups
lld_macro_paths:
-
lld_macro: '{#NAME}'
@@ -3157,7 +3736,7 @@ zabbix_export:
value: '{#DURABLE.ID}'
-
uuid: 89f11d7bf0e24a92bf4d4b4b1d86af58
- name: 'Enclosure [{#DURABLE.ID}]: Part number.'
+ name: 'Enclosure [{#DURABLE.ID}]: Part number'
type: DEPENDENT
key: 'hpe.msa.enclosures["{#DURABLE.ID}",part_number]'
delay: '0'
@@ -3256,7 +3835,7 @@ zabbix_export:
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=6'
name: 'Enclosure [{#DURABLE.ID}]: Enclosure has unknown status'
priority: INFO
- description: 'Enclosure has unknown status'
+ description: 'Enclosure has unknown status.'
tags:
-
tag: scope
@@ -3312,7 +3891,7 @@ zabbix_export:
item_prototypes:
-
uuid: f9be9af4ff9047f1af946313df3e7165
- name: '{#NAME} [{#LOCATION}]: Health'
+ name: 'Fan [{#DURABLE.ID}]: Health'
type: DEPENDENT
key: 'hpe.msa.fans["{#DURABLE.ID}",health]'
delay: '0'
@@ -3347,7 +3926,7 @@ zabbix_export:
-
uuid: 3ee1b1d0d6b34c8eba02480e9e4d5be2
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=1'
- name: '{#NAME} [{#LOCATION}]: Fan health is in degraded state'
+ name: 'Fan [{#DURABLE.ID}]: Fan health is in degraded state'
priority: WARNING
description: 'Fan health is in degraded state.'
tags:
@@ -3357,7 +3936,7 @@ zabbix_export:
-
uuid: 3e3785f9915d46068ebe2eff21bac813
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=2'
- name: '{#NAME} [{#LOCATION}]: Fan health is in fault state'
+ name: 'Fan [{#DURABLE.ID}]: Fan health is in fault state'
priority: AVERAGE
description: 'Fan health is in fault state.'
tags:
@@ -3367,7 +3946,7 @@ zabbix_export:
-
uuid: 4bf2e519b5484d338f997ea5dac462e0
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=3'
- name: '{#NAME} [{#LOCATION}]: Fan health is in unknown state'
+ name: 'Fan [{#DURABLE.ID}]: Fan health is in unknown state'
priority: INFO
description: 'Fan health is in unknown state.'
tags:
@@ -3376,7 +3955,7 @@ zabbix_export:
value: notice
-
uuid: f028a919d56b45129f9ead200519adaa
- name: '{#NAME} [{#LOCATION}]: Speed'
+ name: 'Fan [{#DURABLE.ID}]: Speed'
type: DEPENDENT
key: 'hpe.msa.fans["{#DURABLE.ID}",speed]'
delay: '0'
@@ -3399,7 +3978,7 @@ zabbix_export:
value: '{#DURABLE.ID}'
-
uuid: df1d8af5df104afc829b403aec6efc96
- name: '{#NAME} [{#LOCATION}]: Status'
+ name: 'Fan [{#DURABLE.ID}]: Status'
type: DEPENDENT
key: 'hpe.msa.fans["{#DURABLE.ID}",status]'
delay: '0'
@@ -3432,7 +4011,7 @@ zabbix_export:
-
uuid: 183a1e1c4d444c9a8189035a2af22dc1
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=1'
- name: '{#NAME} [{#LOCATION}]: Fan has error status'
+ name: 'Fan [{#DURABLE.ID}]: Fan has error status'
priority: AVERAGE
description: 'Fan has error status.'
tags:
@@ -3442,7 +4021,7 @@ zabbix_export:
-
uuid: 4d9e3d1bb22444f981295df07f0d9c24
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=3'
- name: '{#NAME} [{#LOCATION}]: Fan is missing'
+ name: 'Fan [{#DURABLE.ID}]: Fan is missing'
priority: INFO
description: 'Fan is missing.'
tags:
@@ -3452,7 +4031,7 @@ zabbix_export:
-
uuid: a6e4ea796b98432284a9fd9fff1d82f9
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=2'
- name: '{#NAME} [{#LOCATION}]: Fan is off'
+ name: 'Fan [{#DURABLE.ID}]: Fan is off'
priority: WARNING
description: 'Fan is off.'
tags:
@@ -3462,7 +4041,7 @@ zabbix_export:
graph_prototypes:
-
uuid: 1def9fd4627d4552bf34e8ce35f3cd46
- name: '{#NAME} [{#LOCATION}]: Speed'
+ name: 'Fan [{#DURABLE.ID}]: Speed'
graph_items:
-
color: 1A7C11
@@ -3475,12 +4054,199 @@ zabbix_export:
-
lld_macro: '{#DURABLE.ID}'
path: '$.[''durable-id'']'
+ preprocessing:
-
- lld_macro: '{#LOCATION}'
- path: '$.[''location'']'
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: eaf913faf35b41eebb0f6177bc3a457a
+ name: 'I/O modules discovery'
+ type: DEPENDENT
+ key: hpe.msa.io_modules.discovery
+ delay: '0'
+ description: 'Discover I/O modules.'
+ item_prototypes:
-
- lld_macro: '{#NAME}'
- path: '$.[''name'']'
+ uuid: 5d43d0b89d714df4896be2bab5aa0eb5
+ name: 'I/O module [{#DURABLE.ID}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.io_modules["{#DURABLE.ID}",health]'
+ delay: '0'
+ history: 7d
+ description: 'I/O module health status.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.raw.io_modules
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: io-module
+ -
+ tag: io-module
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: 4b2e0511c4e44191bc6721d15474dd8e
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",health])=1'
+ name: 'I/O module [{#DURABLE.ID}]: I/O module health is in degraded state'
+ priority: WARNING
+ description: 'I/O module health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 39ebf5366155403592dac6e66e96cd51
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",health])=2'
+ name: 'I/O module [{#DURABLE.ID}]: I/O module health is in fault state'
+ priority: AVERAGE
+ description: 'I/O module health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 6695d1d9fc5649c79fe504ecc1cb5332
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",health])=3'
+ name: 'I/O module [{#DURABLE.ID}]: I/O module health is in unknown state'
+ priority: INFO
+ description: 'I/O module health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: 07c0c7e1822c41028804fd2c873a861b
+ name: 'I/O module [{#DURABLE.ID}]: Part number'
+ type: DEPENDENT
+ key: 'hpe.msa.io_modules["{#DURABLE.ID}",part_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Part number of the I/O module.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''part-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.raw.io_modules
+ tags:
+ -
+ tag: component
+ value: io-module
+ -
+ tag: io-module
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 67810330da554f5ea0fe8d0951cd763c
+ name: 'I/O module [{#DURABLE.ID}]: Serial number'
+ type: DEPENDENT
+ key: 'hpe.msa.io_modules["{#DURABLE.ID}",serial_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'I/O module serial number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''serial-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.raw.io_modules
+ tags:
+ -
+ tag: component
+ value: io-module
+ -
+ tag: io-module
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 6d196abdf7574556bdd050fc08daa5ff
+ name: 'I/O module [{#DURABLE.ID}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.io_modules["{#DURABLE.ID}",status]'
+ delay: '0'
+ history: 7d
+ description: 'I/O module status.'
+ valuemap:
+ name: 'I/O module status'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''status-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '3'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.raw.io_modules
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: io-module
+ -
+ tag: io-module
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: bcbd4f7d545c4614a8fed9b852f219ab
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",status])=3'
+ name: 'I/O module [{#DURABLE.ID}]: I/O module has unknown status'
+ priority: INFO
+ description: 'I/O module has unknown status.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: 6d2ea6ce4da442aeac0d388c978e611f
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.io_modules["{#DURABLE.ID}",status])=1'
+ name: 'I/O module [{#DURABLE.ID}]: I/O module is down'
+ priority: AVERAGE
+ description: 'I/O module is down.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ master_item:
+ key: hpe.msa.raw.io_modules
+ lld_macro_paths:
+ -
+ lld_macro: '{#DURABLE.ID}'
+ path: '$.[''durable-id'']'
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
@@ -3640,26 +4406,26 @@ zabbix_export:
trigger_prototypes:
-
uuid: c73b4a77e94a43f5951f6a541d65637e
- expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.PRIMERA.POOL.PUSED.MAX.CRIT:"{#NAME}"}'
+ expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}'
name: 'Pool [{#NAME}]: Pool space is critically low'
- event_name: 'Pool [{#NAME}]: Pool space is critically low (used > {$HPE.PRIMERA.POOL.PUSED.MAX.CRIT:"{#NAME}"}%)'
+ event_name: 'Pool [{#NAME}]: Pool space is critically low (used > {$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}%)'
priority: AVERAGE
- description: 'Pool is running low on free space (less than {$HPE.PRIMERA.POOL.PUSED.MAX.CRIT:"{#NAME}"}% available).'
+ description: 'Pool is running low on free space (less than {$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}% available).'
tags:
-
tag: scope
value: performance
-
uuid: c7644beb62bc40e99d6045af6d4bc16f
- expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.PRIMERA.POOL.PUSED.MAX.WARN:"{#NAME}"}'
+ expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}'
name: 'Pool [{#NAME}]: Pool space is low'
- event_name: 'Pool [{#NAME}]: Pool space is low (used > {$HPE.PRIMERA.POOL.PUSED.MAX.WARN:"{#NAME}"}%)'
+ event_name: 'Pool [{#NAME}]: Pool space is low (used > {$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}%)'
priority: WARNING
- description: 'Pool is running low on free space (less than {$HPE.PRIMERA.POOL.PUSED.MAX.WARN:"{#NAME}"}% available).'
+ description: 'Pool is running low on free space (less than {$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}% available).'
dependencies:
-
name: 'Pool [{#NAME}]: Pool space is critically low'
- expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.PRIMERA.POOL.PUSED.MAX.CRIT:"{#NAME}"}'
+ expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}'
tags:
-
tag: scope
@@ -3891,6 +4657,34 @@ zabbix_export:
-
tag: scope
value: performance
+ -
+ uuid: 32ad6655625e408a9dd577624afbfa6a
+ name: 'Port [{#NAME}]: Type'
+ type: DEPENDENT
+ key: 'hpe.msa.ports["{#NAME}",type]'
+ delay: '0'
+ history: 7d
+ description: 'Port type.'
+ valuemap:
+ name: 'Port type'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[?(@[''port''] == "{#NAME}")].[''port-type-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.raw.ports
+ tags:
+ -
+ tag: component
+ value: port
+ -
+ tag: port
+ value: '{#NAME}'
master_item:
key: hpe.msa.raw.ports
lld_macro_paths:
@@ -3912,7 +4706,7 @@ zabbix_export:
item_prototypes:
-
uuid: 993bc2db3b444dc5bc37794985e63ea9
- name: 'Power supply [{#LOCATION}]: Health'
+ name: 'Power supply [{#DURABLE.ID}]: Health'
type: DEPENDENT
key: 'hpe.msa.power_supplies["{#DURABLE.ID}",health]'
delay: '0'
@@ -3942,12 +4736,12 @@ zabbix_export:
value: power-supply
-
tag: power-supply
- value: '{#LOCATION}'
+ value: '{#DURABLE.ID}'
trigger_prototypes:
-
uuid: 1b512fda735440b5839a63fd26c19535
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=1'
- name: 'Power supply [{#LOCATION}]: Power supply health is in degraded state'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply health is in degraded state'
priority: WARNING
description: 'Power supply health is in degraded state.'
tags:
@@ -3957,7 +4751,7 @@ zabbix_export:
-
uuid: b75fb541ae0e43cc9cdb86e07dc3e394
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=2'
- name: 'Power supply [{#LOCATION}]: Power supply health is in fault state'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply health is in fault state'
priority: AVERAGE
description: 'Power supply health is in fault state.'
tags:
@@ -3967,7 +4761,7 @@ zabbix_export:
-
uuid: 555ee9ef33b54d029df2f17d5f899539
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=3'
- name: 'Power supply [{#LOCATION}]: Power supply health is in unknown state'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply health is in unknown state'
priority: INFO
description: 'Power supply health is in unknown state.'
tags:
@@ -3976,7 +4770,7 @@ zabbix_export:
value: notice
-
uuid: efae55cfdd1e4021a623e2128f988611
- name: 'Power supply [{#LOCATION}]: Part number.'
+ name: 'Power supply [{#DURABLE.ID}]: Part number'
type: DEPENDENT
key: 'hpe.msa.power_supplies["{#DURABLE.ID}",part_number]'
delay: '0'
@@ -4001,10 +4795,10 @@ zabbix_export:
value: power-supply
-
tag: power-supply
- value: '{#LOCATION}'
+ value: '{#DURABLE.ID}'
-
uuid: 6716c3d0177247fe8a35fa1eb206a54f
- name: 'Power supply [{#LOCATION}]: Serial number.'
+ name: 'Power supply [{#DURABLE.ID}]: Serial number'
type: DEPENDENT
key: 'hpe.msa.power_supplies["{#DURABLE.ID}",serial_number]'
delay: '0'
@@ -4029,10 +4823,10 @@ zabbix_export:
value: power-supply
-
tag: power-supply
- value: '{#LOCATION}'
+ value: '{#DURABLE.ID}'
-
uuid: a3ff6ab5576246fe9e794e01df4fe1b9
- name: 'Power supply [{#LOCATION}]: Status'
+ name: 'Power supply [{#DURABLE.ID}]: Status'
type: DEPENDENT
key: 'hpe.msa.power_supplies["{#DURABLE.ID}",status]'
delay: '0'
@@ -4062,12 +4856,12 @@ zabbix_export:
value: power-supply
-
tag: power-supply
- value: '{#LOCATION}'
+ value: '{#DURABLE.ID}'
trigger_prototypes:
-
uuid: 49c9d2d61c45476da5564299b2eebdee
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=2'
- name: 'Power supply [{#LOCATION}]: Power supply has error status'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply has error status'
priority: AVERAGE
description: 'Power supply has error status.'
tags:
@@ -4077,7 +4871,7 @@ zabbix_export:
-
uuid: d6cbaeb5aab84e5eb487af4bf319d640
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=4'
- name: 'Power supply [{#LOCATION}]: Power supply has unknown status'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply has unknown status'
priority: INFO
description: 'Power supply has unknown status.'
tags:
@@ -4087,7 +4881,7 @@ zabbix_export:
-
uuid: b7e85e7a6c254aba930d7704c58adf47
expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=1'
- name: 'Power supply [{#LOCATION}]: Power supply has warning status'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply has warning status'
priority: WARNING
description: 'Power supply has warning status.'
tags:
@@ -4100,9 +4894,6 @@ zabbix_export:
-
lld_macro: '{#DURABLE.ID}'
path: '$.[''durable-id'']'
- -
- lld_macro: '{#LOCATION}'
- path: '$.[''location'']'
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
@@ -4128,7 +4919,7 @@ zabbix_export:
-
type: JSONPATH
parameters:
- - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''allocated-size-numeric''].first()'
+ - '$.[?(@[''volume-name''] == "{#NAME}")].[''allocated-size-numeric''].first()'
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
@@ -4155,7 +4946,7 @@ zabbix_export:
-
type: JSONPATH
parameters:
- - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''blocksize''].first()'
+ - '$.[?(@[''volume-name''] == "{#NAME}")].[''blocksize''].first()'
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
@@ -4181,7 +4972,7 @@ zabbix_export:
-
type: JSONPATH
parameters:
- - '$.[?(@[''durable-id''] == "{#DURABLE.ID}")].[''blocks''].first()'
+ - '$.[?(@[''volume-name''] == "{#NAME}")].[''blocks''].first()'
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
@@ -4196,83 +4987,6 @@ zabbix_export:
tag: volume
value: '{#NAME}'
-
- uuid: 860855a80c554e0685d4d4125342b547
- name: 'Volume [{#NAME}]: Space allocated'
- type: CALCULATED
- key: 'hpe.msa.volumes.space["{#NAME}",allocated]'
- history: 7d
- units: B
- params: 'last(//hpe.msa.volumes.blocks["{#NAME}",size])*last(//hpe.msa.volumes.blocks["{#NAME}",allocated])'
- description: 'The amount of space currently allocated to the volume.'
- preprocessing:
- -
- type: DISCARD_UNCHANGED_HEARTBEAT
- parameters:
- - 1h
- tags:
- -
- tag: component
- value: volume
- -
- tag: volume
- value: '{#NAME}'
- -
- uuid: eb09d8791bb84c8aadf5cdcac3d76413
- name: 'Volume [{#NAME}]: Space total'
- type: CALCULATED
- key: 'hpe.msa.volumes.space["{#NAME}",total]'
- history: 7d
- units: B
- params: 'last(//hpe.msa.volumes.blocks["{#NAME}",size])*last(//hpe.msa.volumes.blocks["{#NAME}",total])'
- description: 'The capacity of the volume.'
- preprocessing:
- -
- type: DISCARD_UNCHANGED_HEARTBEAT
- parameters:
- - 1h
- tags:
- -
- tag: component
- value: volume
- -
- tag: volume
- value: '{#NAME}'
- graph_prototypes:
- -
- uuid: 5a316cdf8c6f42acb3cb7a158861145a
- name: 'Volume [{#NAME}]: Space utilization'
- graph_items:
- -
- color: 1A7C11
- item:
- host: 'HPE MSA 2060 Storage by HTTP'
- key: 'hpe.msa.volumes.space["{#NAME}",allocated]'
- -
- sortorder: '1'
- color: 2774A4
- item:
- host: 'HPE MSA 2060 Storage by HTTP'
- key: 'hpe.msa.volumes.space["{#NAME}",total]'
- master_item:
- key: hpe.msa.raw.volumes
- lld_macro_paths:
- -
- lld_macro: '{#NAME}'
- path: '$.[''name'']'
- preprocessing:
- -
- type: DISCARD_UNCHANGED_HEARTBEAT
- parameters:
- - 6h
- -
- uuid: 326d40b629954b6c81b8294e2fc761df
- name: 'Volume statistics discovery'
- type: DEPENDENT
- key: hpe.msa.volumes.statistics.discovery
- delay: '0'
- description: 'Discover volume statistics.'
- item_prototypes:
- -
uuid: b7615bb6a3434303a2bb4751e7aed458
name: 'Volume [{#NAME}]: Cache: Read hits, rate'
type: DEPENDENT
@@ -4456,18 +5170,24 @@ zabbix_export:
tag: volume
value: '{#NAME}'
-
- uuid: b925122eda0c4c1380b843bc764ed122
- name: 'Volume [{#NAME}]: IOPS, rate'
+ uuid: 00f5c3f9d19d450e999c389ba297fb41
+ name: 'Volume [{#NAME}]: IOPS, read rate'
type: DEPENDENT
- key: 'hpe.msa.volumes.iops["{#NAME}",rate]'
+ key: 'hpe.msa.volumes.iops.read["{#NAME}",rate]'
delay: '0'
history: 7d
- description: 'Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ value_type: FLOAT
+ units: '!r/s'
+ description: 'Number of read operations per second.'
preprocessing:
-
type: JSONPATH
parameters:
- - '$.[?(@[''volume-name''] == "{#NAME}")].[''iops''].first()'
+ - '$.[?(@[''volume-name''] == "{#NAME}")].[''number-of-reads''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
master_item:
key: hpe.msa.raw.volumes.statistics
tags:
@@ -4478,23 +5198,19 @@ zabbix_export:
tag: volume
value: '{#NAME}'
-
- uuid: 00f5c3f9d19d450e999c389ba297fb41
- name: 'Volume [{#NAME}]: Reads, rate'
+ uuid: b925122eda0c4c1380b843bc764ed122
+ name: 'Volume [{#NAME}]: IOPS, total rate'
type: DEPENDENT
- key: 'hpe.msa.volumes.reads["{#NAME}",rate]'
+ key: 'hpe.msa.volumes.iops.total["{#NAME}",rate]'
delay: '0'
history: 7d
- value_type: FLOAT
- description: 'Number of read operations per second.'
+ units: '!iops'
+ description: 'Total input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
preprocessing:
-
type: JSONPATH
parameters:
- - '$.[?(@[''volume-name''] == "{#NAME}")].[''number-of-reads''].first()'
- -
- type: CHANGE_PER_SECOND
- parameters:
- - ''
+ - '$.[?(@[''volume-name''] == "{#NAME}")].[''iops''].first()'
master_item:
key: hpe.msa.raw.volumes.statistics
tags:
@@ -4506,12 +5222,13 @@ zabbix_export:
value: '{#NAME}'
-
uuid: a9fcc1525204489cad52cf4e88518064
- name: 'Volume [{#NAME}]: Writes, rate'
+ name: 'Volume [{#NAME}]: IOPS, write rate'
type: DEPENDENT
- key: 'hpe.msa.volumes.writes["{#NAME}",rate]'
+ key: 'hpe.msa.volumes.iops.write["{#NAME}",rate]'
delay: '0'
history: 7d
value_type: FLOAT
+ units: '!w/s'
description: 'Number of write operations per second.'
preprocessing:
-
@@ -4531,6 +5248,48 @@ zabbix_export:
-
tag: volume
value: '{#NAME}'
+ -
+ uuid: 860855a80c554e0685d4d4125342b547
+ name: 'Volume [{#NAME}]: Space allocated'
+ type: CALCULATED
+ key: 'hpe.msa.volumes.space["{#NAME}",allocated]'
+ history: 7d
+ units: B
+ params: 'last(//hpe.msa.volumes.blocks["{#NAME}",size])*last(//hpe.msa.volumes.blocks["{#NAME}",allocated])'
+ description: 'The amount of space currently allocated to the volume.'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: eb09d8791bb84c8aadf5cdcac3d76413
+ name: 'Volume [{#NAME}]: Space total'
+ type: CALCULATED
+ key: 'hpe.msa.volumes.space["{#NAME}",total]'
+ history: 7d
+ units: B
+ params: 'last(//hpe.msa.volumes.blocks["{#NAME}",size])*last(//hpe.msa.volumes.blocks["{#NAME}",total])'
+ description: 'The capacity of the volume.'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
graph_prototypes:
-
uuid: 8905b826b774473991f74b927716322e
@@ -4582,15 +5341,30 @@ zabbix_export:
color: 1A7C11
item:
host: 'HPE MSA 2060 Storage by HTTP'
- key: 'hpe.msa.volumes.reads["{#NAME}",rate]'
+ key: 'hpe.msa.volumes.iops.read["{#NAME}",rate]'
-
sortorder: '1'
color: 2774A4
item:
host: 'HPE MSA 2060 Storage by HTTP'
- key: 'hpe.msa.volumes.writes["{#NAME}",rate]'
+ key: 'hpe.msa.volumes.iops.write["{#NAME}",rate]'
+ -
+ uuid: 5a316cdf8c6f42acb3cb7a158861145a
+ name: 'Volume [{#NAME}]: Space utilization'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.volumes.space["{#NAME}",allocated]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.volumes.space["{#NAME}",total]'
master_item:
- key: hpe.msa.raw.volumes.statistics
+ key: hpe.msa.raw.volumes
lld_macro_paths:
-
lld_macro: '{#NAME}'
@@ -4614,37 +5388,41 @@ zabbix_export:
-
macro: '{$HPE.MSA.API.PASSWORD}'
type: SECRET_TEXT
- description: 'Specify password for WSAPI.'
+ description: 'Specify password for API.'
-
macro: '{$HPE.MSA.API.PORT}'
value: '443'
- description: 'Connection port for WSAPI.'
+ description: 'Connection port for API.'
-
macro: '{$HPE.MSA.API.SCHEME}'
value: https
- description: 'Connection scheme timeout for WSAPI.'
+ description: 'Connection scheme timeout for API.'
-
macro: '{$HPE.MSA.API.USERNAME}'
value: zabbix
- description: 'Specify user name for WSAPI.'
+ description: 'Specify user name for API.'
+ -
+ macro: '{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}'
+ value: '90'
+ description: 'The critical threshold of the CPU utilization in %.'
-
macro: '{$HPE.MSA.DATA.TIMEOUT}'
value: 5s
- description: 'Response timeout for WSAPI.'
+ description: 'Response timeout for API.'
-
- macro: '{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT}'
+ macro: '{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT}'
value: '90'
description: 'The critical threshold of the disk group space utilization in percent.'
-
- macro: '{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN}'
+ macro: '{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN}'
value: '80'
description: 'The warning threshold of the disk group space utilization in percent.'
-
- macro: '{$HPE.PRIMERA.POOL.PUSED.MAX.CRIT}'
+ macro: '{$HPE.MSA.POOL.PUSED.MAX.CRIT}'
value: '90'
description: 'The critical threshold of the pool space utilization in percent.'
-
- macro: '{$HPE.PRIMERA.POOL.PUSED.MAX.WARN}'
+ macro: '{$HPE.MSA.POOL.PUSED.MAX.WARN}'
value: '80'
description: 'The warning threshold of the pool space utilization in percent.'
valuemaps:
@@ -4794,6 +5572,38 @@ zabbix_export:
value: '4'
newvalue: N/A
-
+ uuid: 604220d02ff84f0da63e9032a261e006
+ name: 'I/O module status'
+ mappings:
+ -
+ value: '0'
+ newvalue: Operational
+ -
+ value: '1'
+ newvalue: Down
+ -
+ value: '2'
+ newvalue: 'Not installed'
+ -
+ value: '3'
+ newvalue: Unknown
+ -
+ uuid: ec101e7d212747779ed56ef9dbf72e2b
+ name: 'Port type'
+ mappings:
+ -
+ value: '0'
+ newvalue: Unknown
+ -
+ value: '6'
+ newvalue: FC
+ -
+ value: '8'
+ newvalue: SAS
+ -
+ value: '9'
+ newvalue: iSCSI
+ -
uuid: 171c9abf20514b0fb78d532bd987881b
name: 'RAID type'
mappings: