github.com/zabbix/zabbix.git
Diffstat (limited to 'templates/san')
-rw-r--r--  templates/san/hpe_msa2040_http/README.md                            240
-rw-r--r--  templates/san/hpe_msa2040_http/template_san_hpe_msa2040_http.yaml  4417
-rw-r--r--  templates/san/hpe_msa2060_http/README.md                            250
-rw-r--r--  templates/san/hpe_msa2060_http/template_san_hpe_msa2060_http.yaml  4559
-rw-r--r--  templates/san/hpe_primera_http/README.md                            189
-rw-r--r--  templates/san/hpe_primera_http/template_san_hpe_primera_http.yaml  4681
6 files changed, 14336 insertions, 0 deletions
diff --git a/templates/san/hpe_msa2040_http/README.md b/templates/san/hpe_msa2040_http/README.md
new file mode 100644
index 00000000000..e76f83048c6
--- /dev/null
+++ b/templates/san/hpe_msa2040_http/README.md
@@ -0,0 +1,240 @@
+
+# HPE MSA 2040 Storage by HTTP
+
+## Overview
+
+For Zabbix version: 6.0 and higher
+The template to monitor HPE MSA 2040 by HTTP.
+It works without any external scripts and uses the script item.
+
+
+This template was tested on:
+
+- HPE MSA 2040 Storage
+
+## Setup
+
+> See [Zabbix template operation](https://www.zabbix.com/documentation/6.0/manual/config/templates_out_of_the_box/http) for basic instructions.
+
+1. Create user "zabbix" with monitor role on the storage.
+2. Link the template to a host.
+3. Set the {$HPE.MSA.API.PASSWORD} macro and configure a host interface with the address through which the API is accessible.
+4. Change {$HPE.MSA.API.SCHEME} and {$HPE.MSA.API.PORT} macros if needed.
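+
+Before linking the template, you can check credentials and reachability by reproducing the login step of the "HPE MSA: Get data" script item by hand. A minimal sketch (not part of the template), assuming Node.js 18+ with global `fetch`; the address and credentials are placeholders:
+
+```js
+// The MSA API authenticates with md5("<username>_<password>") passed to /api/login/.
+const crypto = require('crypto');
+
+const base = 'https://msa2040.example.com:443';  // placeholder address
+const hash = crypto.createHash('md5').update('zabbix' + '_' + 'password').digest('hex');
+
+// NOTE: appliances with self-signed certificates will fail TLS verification
+// unless it is relaxed (for lab testing only).
+fetch(base + '/api/login/' + hash, { headers: { datatype: 'xml' } })
+    .then(function (r) { return r.text(); })
+    .then(function (xml) {
+        // The script item reads PROPERTY[@name="return-code"]; '1' means success.
+        var m = /name="return-code"[^>]*>([^<]+)</.exec(xml);
+        console.log(m && m[1] === '1' ? 'Authentication OK' : 'Authentication failed:\n' + xml);
+    });
+```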
+
+
+## Zabbix configuration
+
+No specific Zabbix configuration is required.
+
+### Macros used
+
+|Name|Description|Default|
+|----|-----------|-------|
+|{$HPE.MSA.API.PASSWORD} |<p>Specify password for API.</p> |`` |
+|{$HPE.MSA.API.PORT} |<p>Connection port for API.</p> |`443` |
+|{$HPE.MSA.API.SCHEME} |<p>Connection scheme for API.</p> |`https` |
+|{$HPE.MSA.API.USERNAME} |<p>Specify user name for API.</p> |`zabbix` |
+|{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT} |<p>The critical threshold of the CPU utilization in %.</p> |`90` |
+|{$HPE.MSA.DATA.TIMEOUT} |<p>Response timeout for API.</p> |`30s` |
+|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT} |<p>The critical threshold of the disk group space utilization in %.</p> |`90` |
+|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN} |<p>The warning threshold of the disk group space utilization in %.</p> |`80` |
+|{$HPE.MSA.POOL.PUSED.MAX.CRIT} |<p>The critical threshold of the pool space utilization in %.</p> |`90` |
+|{$HPE.MSA.POOL.PUSED.MAX.WARN} |<p>The warning threshold of the pool space utilization in %.</p> |`80` |
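+
+The threshold macros also accept context (for example, `{$HPE.MSA.POOL.PUSED.MAX.WARN:"A"}`), so individual disk groups or pools can get their own limits; see the trigger expressions below. A sketch of setting the password macro on a host through the Zabbix 6.0 JSON-RPC API (the frontend URL, host ID, and credentials are placeholders):
+
+```js
+// user.login + usermacro.create against the Zabbix API (Node.js 18+, global fetch).
+const url = 'https://zabbix.example.com/api_jsonrpc.php';  // placeholder frontend URL
+
+async function rpc(method, params, auth) {
+    const res = await fetch(url, {
+        method: 'POST',
+        headers: { 'Content-Type': 'application/json' },
+        body: JSON.stringify({ jsonrpc: '2.0', method: method, params: params, auth: auth || null, id: 1 }),
+    });
+    return (await res.json()).result;
+}
+
+(async function () {
+    const auth = await rpc('user.login', { user: 'Admin', password: 'zabbix' });
+    await rpc('usermacro.create', {
+        hostid: '10105',                     // placeholder host ID
+        macro: '{$HPE.MSA.API.PASSWORD}',
+        value: 'monitor-user-password',      // placeholder
+    }, auth);
+})();
+```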
+
+## Template links
+
+There are no template links in this template.
+
+## Discovery rules
+
+|Name|Description|Type|Key and additional info|
+|----|-----------|----|----|
+|Controllers discovery |<p>Discover controllers.</p> |DEPENDENT |hpe.msa.controllers.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Disk groups discovery |<p>Discover disk groups.</p> |DEPENDENT |hpe.msa.disks.groups.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Disks discovery |<p>Discover disks.</p> |DEPENDENT |hpe.msa.disks.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Overrides:**</p><p>SSD life left<br> - {#TYPE} MATCHES_REGEX `8`<br> - ITEM_PROTOTYPE REGEXP `SSD life left` - DISCOVER</p> |
+|Enclosures discovery |<p>Discover enclosures.</p> |DEPENDENT |hpe.msa.enclosures.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Fans discovery |<p>Discover fans.</p> |DEPENDENT |hpe.msa.fans.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['fans']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|FRU discovery |<p>Discover FRU.</p> |DEPENDENT |hpe.msa.frus.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['frus']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Filter**:</p> <p>- {#TYPE} NOT_MATCHES_REGEX `^(POWER_SUPPLY|RAID_IOM|CHASSIS_MIDPLANE)$`</p> |
+|Pools discovery |<p>Discover pools.</p> |DEPENDENT |hpe.msa.pools.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Ports discovery |<p>Discover ports.</p> |DEPENDENT |hpe.msa.ports.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['ports']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Power supplies discovery |<p>Discover power supplies.</p> |DEPENDENT |hpe.msa.power_supplies.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Volumes discovery |<p>Discover volumes.</p> |DEPENDENT |hpe.msa.volumes.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+
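+Each discovery rule is a dependent item that slices one array out of the master item JSON. A sketch of what, for example, "Disks discovery" effectively does (the payload is made up; the field names follow the template's JSONPath expressions):
+
+```js
+// The master item hpe.msa.data.get returns a single JSON document; the LLD rule
+// applies JSONPATH $.['disks'] and Zabbix builds LLD macros from the result.
+const raw = '{"disks":[{"durable-id":"disk_01.01","vendor":"HPE"}],"errors":""}';
+const disks = JSON.parse(raw)['disks'];
+
+console.log(disks.map(function (d) {
+    return { '{#DURABLE.ID}': d['durable-id'] };  // macro used by the disk item prototypes
+}));
+```
+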
+## Items collected
+
+|Group|Name|Description|Type|Key and additional info|
+|-----|----|-----------|----|---------------------|
+|HPE |Get method errors |<p>A list of method errors from API requests.</p> |DEPENDENT |hpe.msa.data.errors<p>**Preprocessing**:</p><p>- JSONPATH: `$.['errors']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Product ID |<p>The product model identifier.</p> |DEPENDENT |hpe.msa.system.product_id<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['product-id']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |System contact |<p>The name of the person who administers the system.</p> |DEPENDENT |hpe.msa.system.contact<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['system-contact']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |System information |<p>A brief description of what the system is used for or how it is configured.</p> |DEPENDENT |hpe.msa.system.info<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['system-information']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |System location |<p>The location of the system.</p> |DEPENDENT |hpe.msa.system.location<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['system-location']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |System name |<p>The name of the storage system.</p> |DEPENDENT |hpe.msa.system.name<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['system-name']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Vendor name |<p>The vendor name.</p> |DEPENDENT |hpe.msa.system.vendor_name<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['vendor-name']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |System health |<p>System health status.</p> |DEPENDENT |hpe.msa.system.health<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['health-numeric']`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p> |
+|HPE |HPE MSA: Service ping |<p>Check if HTTP/HTTPS service accepts TCP connections.</p> |SIMPLE |net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}","{$HPE.MSA.API.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Firmware version |<p>Storage controller firmware version.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",firmware]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['sc-fw'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Part number |<p>Part number of the controller.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Serial number |<p>Storage controller serial number.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Health |<p>Controller health status.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Status |<p>Storage controller status.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Disks |<p>Number of disks in the storage system.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",disks]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['disks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Pools |<p>Number of pools in the storage system.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",pools]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['number-of-storage-pools'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Disk groups |<p>Number of disk groups in the storage system.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",disk_groups]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['virtual-disks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IP address |<p>Controller network port IP address.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",ip_address]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['ip-address'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache memory size |<p>Controller cache memory size.</p> |DEPENDENT |hpe.msa.controllers.cache["{#CONTROLLER.ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['cache-memory-size'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Write utilization |<p>Percentage of write cache in use, from 0 to 100.</p> |DEPENDENT |hpe.msa.controllers.cache.write["{#CONTROLLER.ID}",util]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['write-cache-used'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Read hits, rate |<p>For the controller that owns the volume, the number of times the block to be read is found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.read.hits["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['read-cache-hits'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Read misses, rate |<p>For the controller that owns the volume, the number of times the block to be read is not found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.read.misses["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['read-cache-misses'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Write hits, rate |<p>For the controller that owns the volume, the number of times the block written to is found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.write.hits["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['write-cache-hits'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Write misses, rate |<p>For the controller that owns the volume, the number of times the block written to is not found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.write.misses["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['write-cache-misses'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: CPU utilization |<p>Percentage of time the CPU is busy, from 0 to 100.</p> |DEPENDENT |hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['cpu-load'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IOPS, total rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.controllers.iops.total["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['iops'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IOPS, read rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.controllers.iops.read["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IOPS, write rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.controllers.iops.write["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Data transfer rate: Total |<p>The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.controllers.data_transfer.total["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['bytes-per-second-numeric'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Data transfer rate: Reads |<p>The data read rate, in bytes per second.</p> |DEPENDENT |hpe.msa.controllers.data_transfer.reads["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['data-read-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Data transfer rate: Writes |<p>The data write rate, in bytes per second.</p> |DEPENDENT |hpe.msa.controllers.data_transfer.writes["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['data-written-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Uptime |<p>Number of seconds since the controller was restarted.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",uptime]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['power-on-time'].first()`</p> |
+|HPE |Disk group [{#NAME}]: Disks count |<p>Number of disks in the disk group.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",disk_count]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['diskcount'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Pool space used |<p>The percentage of pool capacity that the disk group occupies.</p> |DEPENDENT |hpe.msa.disks.groups.space["{#NAME}",pool_util]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['pool-percentage'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Health |<p>Disk group health.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Space free |<p>The free space in the disk group.</p> |DEPENDENT |hpe.msa.disks.groups.space["{#NAME}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['freespace-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
+|HPE |Disk group [{#NAME}]: Space total |<p>The capacity of the disk group.</p> |DEPENDENT |hpe.msa.disks.groups.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
+|HPE |Disk group [{#NAME}]: Space utilization |<p>The space utilization percentage in the disk group.</p> |CALCULATED |hpe.msa.disks.groups.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`100-last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100` |
+|HPE |Disk group [{#NAME}]: RAID type |<p>The RAID level of the disk group.</p> |DEPENDENT |hpe.msa.disks.groups.raid["{#NAME}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['raidtype-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk group [{#NAME}]: Status |<p>The status of the disk group:</p><p>- CRIT: Critical. The disk group is online but isn't fault tolerant because some of its disks are down.</p><p>- DMGD: Damaged. The disk group is online and fault tolerant, but some of its disks are damaged.</p><p>- FTDN: Fault tolerant with a down disk. The disk group is online and fault tolerant, but some of its disks are down.</p><p>- FTOL: Fault tolerant.</p><p>- MSNG: Missing. The disk group is online and fault tolerant, but some of its disks are missing.</p><p>- OFFL: Offline. Either the disk group is using offline initialization, or its disks are down and data may be lost.</p><p>- QTCR: Quarantined critical. The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online or if after 60 seconds from being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p><p>- QTDN: Quarantined with a down disk. The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online or if after 60 seconds from being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p><p>- QTOF: Quarantined offline. The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group.</p><p>- QTUN: Quarantined unsupported. The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups.</p><p>- STOP: The disk group is stopped.</p><p>- UNKN: Unknown.</p><p>- UP: Up. The disk group is online and does not have fault-tolerant attributes.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: IOPS, total rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.disks.groups.iops.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['iops'].first()`</p> |
+|HPE |Disk group [{#NAME}]: Average response time: Total |<p>Average response time for read and write operations, calculated over the interval since these statistics were last requested or reset.</p> |DEPENDENT |hpe.msa.disks.groups.avg_rsp_time["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['avg-rsp-time'].first()`</p><p>- MULTIPLIER: `0.000001`</p> |
+|HPE |Disk group [{#NAME}]: Average response time: Read |<p>Average response time for all read operations, calculated over the interval since these statistics were last requested or reset.</p> |DEPENDENT |hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['avg-read-rsp-time'].first()`</p><p>- MULTIPLIER: `0.000001`</p> |
+|HPE |Disk group [{#NAME}]: Average response time: Write |<p>Average response time for all write operations, calculated over the interval since these statistics were last requested or reset.</p> |DEPENDENT |hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['avg-write-rsp-time'].first()`</p><p>- MULTIPLIER: `0.000001`</p> |
+|HPE |Disk group [{#NAME}]: IOPS, read rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.disks.groups.iops.read["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Disk group [{#NAME}]: IOPS, write rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.disks.groups.iops.write["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Disk group [{#NAME}]: Data transfer rate: Total |<p>The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.disks.groups.data_transfer.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['bytes-per-second-numeric'].first()`</p> |
+|HPE |Disk group [{#NAME}]: Data transfer rate: Reads |<p>The data read rate, in bytes per second.</p> |DEPENDENT |hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['data-read-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Disk group [{#NAME}]: Data transfer rate: Writes |<p>The data write rate, in bytes per second.</p> |DEPENDENT |hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['data-written-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Pool [{#NAME}]: Health |<p>Pool health.</p> |DEPENDENT |hpe.msa.pools["{#NAME}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools'][?(@['name'] == "{#NAME}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Pool [{#NAME}]: Space free |<p>The free space in the pool.</p> |DEPENDENT |hpe.msa.pools.space["{#NAME}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools'][?(@['name'] == "{#NAME}")].['total-avail-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
+|HPE |Pool [{#NAME}]: Space total |<p>The capacity of the pool.</p> |DEPENDENT |hpe.msa.pools.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools'][?(@['name'] == "{#NAME}")].['total-size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
+|HPE |Pool [{#NAME}]: Space utilization |<p>The space utilization percentage in the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`100-last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100` |
+|HPE |Volume [{#NAME}]: Space allocated |<p>The amount of space currently allocated to the volume.</p> |DEPENDENT |hpe.msa.volumes.space["{#NAME}",allocated]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes'][?(@['volume-name'] == "{#NAME}")].['allocated-size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
+|HPE |Volume [{#NAME}]: Space total |<p>The capacity of the volume.</p> |DEPENDENT |hpe.msa.volumes.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes'][?(@['volume-name'] == "{#NAME}")].['size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
+|HPE |Volume [{#NAME}]: IOPS, total rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.volumes.iops.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['iops'].first()`</p> |
+|HPE |Volume [{#NAME}]: IOPS, read rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.volumes.iops.read["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: IOPS, write rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.volumes.iops.write["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Data transfer rate: Total |<p>The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.volumes.data_transfer.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['bytes-per-second-numeric'].first()`</p> |
+|HPE |Volume [{#NAME}]: Data transfer rate: Reads |<p>The data read rate, in bytes per second.</p> |DEPENDENT |hpe.msa.volumes.data_transfer.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['data-read-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Data transfer rate: Writes |<p>The data write rate, in bytes per second.</p> |DEPENDENT |hpe.msa.volumes.data_transfer.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['data-written-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Cache: Read hits, rate |<p>For the controller that owns the volume, the number of times the block to be read is found in cache per second.</p> |DEPENDENT |hpe.msa.volumes.cache.read.hits["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['read-cache-hits'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Cache: Read misses, rate |<p>For the controller that owns the volume, the number of times the block to be read is not found in cache per second.</p> |DEPENDENT |hpe.msa.volumes.cache.read.misses["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['read-cache-misses'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Cache: Write hits, rate |<p>For the controller that owns the volume, the number of times the block written to is found in cache per second.</p> |DEPENDENT |hpe.msa.volumes.cache.write.hits["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['write-cache-hits'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Cache: Write misses, rate |<p>For the controller that owns the volume, the number of times the block written to is not found in cache per second.</p> |DEPENDENT |hpe.msa.volumes.cache.write.misses["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['write-cache-misses'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Health |<p>Enclosure health.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures'][?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Status |<p>Enclosure status.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures'][?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 6`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Midplane serial number |<p>Midplane serial number.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",midplane_serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures'][?(@['durable-id'] == "{#DURABLE.ID}")].['midplane-serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Part number |<p>Enclosure part number.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures'][?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Model |<p>Enclosure model.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures'][?(@['durable-id'] == "{#DURABLE.ID}")].['model'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Power |<p>Enclosure power in watts.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",power]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures'][?(@['durable-id'] == "{#DURABLE.ID}")].['enclosure-power'].first()`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Health |<p>Power supply health status.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies'][?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Status |<p>Power supply status.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies'][?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Part number |<p>Power supply part number.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies'][?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Serial number |<p>Power supply serial number.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies'][?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Temperature |<p>Power supply temperature.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",temperature]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies'][?(@['durable-id'] == "{#DURABLE.ID}")].['dctemp'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Port [{#NAME}]: Health |<p>Port health status.</p> |DEPENDENT |hpe.msa.ports["{#NAME}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['ports'][?(@['port'] == "{#NAME}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Port [{#NAME}]: Status |<p>Port status.</p> |DEPENDENT |hpe.msa.ports["{#NAME}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['ports'][?(@['port'] == "{#NAME}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Port [{#NAME}]: Type |<p>Port type.</p> |DEPENDENT |hpe.msa.ports["{#NAME}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['ports'][?(@['port'] == "{#NAME}")].['port-type-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Fan [{#DURABLE.ID}]: Health |<p>Fan health status.</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['fans'][?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Fan [{#DURABLE.ID}]: Status |<p>Fan status.</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['fans'][?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Fan [{#DURABLE.ID}]: Speed |<p>Fan speed (revolutions per minute).</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",speed]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['fans'][?(@['durable-id'] == "{#DURABLE.ID}")].['speed'].first()`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Health |<p>Disk health status.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Temperature status |<p>Disk temperature status.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",temperature_status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['temperature-status-numeric'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- IN_RANGE: `1 3`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Temperature |<p>Temperature of the disk.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",temperature]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['temperature-numeric'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Type |<p>Disk type:</p><p>SAS: Enterprise SAS spinning disk.</p><p>SAS MDL: Midline SAS spinning disk.</p><p>SSD SAS: SAS solid-state disk.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['description-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Disk group |<p>If the disk is in a disk group, the disk group name.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",group]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['disk-group'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Storage pool |<p>If the disk is in a pool, the pool name.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",pool]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['storage-pool-name'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Vendor |<p>Disk vendor.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",vendor]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['vendor'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Model |<p>Disk model.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['model'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Serial number |<p>Disk serial number.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Space total |<p>Total size of the disk.</p> |DEPENDENT |hpe.msa.disks.space["{#DURABLE.ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `512`</p> |
+|HPE |Disk [{#DURABLE.ID}]: SSD life left |<p>The percentage of disk life remaining.</p> |DEPENDENT |hpe.msa.disks.ssd["{#DURABLE.ID}",life_left]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['ssd-life-left-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Status |<p>{#DESCRIPTION}. FRU status:</p><p>Absent: Component is not present.</p><p>Fault: At least one subcomponent has a fault.</p><p>Invalid data: For a power supply module, the EEPROM is improperly programmed.</p><p>OK: All subcomponents are operating normally.</p><p>Not available: Status is not available.</p> |DEPENDENT |hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['frus'][?(@['name'] == "{#TYPE}" && @['fru-location'] == "{#LOCATION}")].['fru-status'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `The text is too long. Please see the template.`</p> |
+|HPE |FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Part number |<p>{#DESCRIPTION}. Part number of the FRU.</p> |DEPENDENT |hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['frus'][?(@['name'] == "{#TYPE}" && @['fru-location'] == "{#LOCATION}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Serial number |<p>{#DESCRIPTION}. FRU serial number.</p> |DEPENDENT |hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['frus'][?(@['name'] == "{#TYPE}" && @['fru-location'] == "{#LOCATION}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|Zabbix raw items |HPE MSA: Get data |<p>The JSON with result of API requests.</p> |SCRIPT |hpe.msa.data.get<p>**Expression**:</p>`The text is too long. Please see the template.` |
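+
+The two CALCULATED items derive utilization from the free and total dependent items with the expression `100-last(free)/last(total)*100`. A quick numeric check with made-up values:
+
+```js
+// Worked example of the calculated space utilization expression.
+const free = 2.0e12;   // e.g. last(//hpe.msa.pools.space["A",free]), bytes
+const total = 8.0e12;  // e.g. last(//hpe.msa.pools.space["A",total]), bytes
+
+console.log(100 - free / total * 100);  // 75, compared against the PUSED.MAX macros
+```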
+
+## Triggers
+
+|Name|Description|Expression|Severity|Dependencies and additional info|
+|----|-----------|----|----|----|
+|There are errors in method requests to API |<p>There are errors in method requests to the API.</p> |`length(last(/HPE MSA 2040 Storage by HTTP/hpe.msa.data.errors))>0` |AVERAGE |<p>**Depends on**:</p><p>- Service is down or unavailable</p> |
+|System health is in degraded state |<p>System health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.system.health)=1` |WARNING | |
+|System health is in fault state |<p>System health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.system.health)=2` |AVERAGE | |
+|System health is in unknown state |<p>System health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.system.health)=3` |INFO | |
+|Service is down or unavailable |<p>HTTP/HTTPS service is down or unable to establish TCP connection.</p> |`max(/HPE MSA 2040 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}","{$HPE.MSA.API.PORT}"],5m)=0` |HIGH | |
+|Controller [{#CONTROLLER.ID}]: Controller health is in degraded state |<p>Controller health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=1` |WARNING |<p>**Depends on**:</p><p>- Controller [{#CONTROLLER.ID}]: Controller is down</p> |
+|Controller [{#CONTROLLER.ID}]: Controller health is in fault state |<p>Controller health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=2` |AVERAGE |<p>**Depends on**:</p><p>- Controller [{#CONTROLLER.ID}]: Controller is down</p> |
+|Controller [{#CONTROLLER.ID}]: Controller health is in unknown state |<p>Controller health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=3` |INFO |<p>**Depends on**:</p><p>- Controller [{#CONTROLLER.ID}]: Controller is down</p> |
+|Controller [{#CONTROLLER.ID}]: Controller is down |<p>The controller is down.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1` |HIGH | |
+|Controller [{#CONTROLLER.ID}]: High CPU utilization |<p>Controller CPU utilization is too high. The system might be slow to respond.</p> |`min(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util],5m)>{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}` |WARNING | |
+|Controller [{#CONTROLLER.ID}]: Controller has been restarted |<p>The controller uptime is less than 10 minutes.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",uptime])<10m` |WARNING | |
+|Disk group [{#NAME}]: Disk group health is in degraded state |<p>Disk group health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=1` |WARNING | |
+|Disk group [{#NAME}]: Disk group health is in fault state |<p>Disk group health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=2` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group health is in unknown state |<p>Disk group health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=3` |INFO | |
+|Disk group [{#NAME}]: Disk group space is low |<p>Disk group is running low on free space (used space exceeds {$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}%).</p> |`min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}` |WARNING |<p>**Depends on**:</p><p>- Disk group [{#NAME}]: Disk group space is critically low</p> |
+|Disk group [{#NAME}]: Disk group space is critically low |<p>Disk group is running critically low on free space (used space exceeds {$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}%).</p> |`min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is fault tolerant with a down disk |<p>The disk group is online and fault tolerant, but some of its disks are down.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=1` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group has damaged disks |<p>The disk group is online and fault tolerant, but some of its disks are damaged.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=9` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group has missing disks |<p>The disk group is online and fault tolerant, but some of its disks are missing.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=8` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is offline |<p>Either the disk group is using offline initialization, or its disks are down and data may be lost.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=3` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is quarantined critical |<p>The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online or if after 60 seconds from being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=4` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is quarantined offline |<p>The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=5` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is quarantined unsupported |<p>The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=5` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is quarantined with an inaccessible disk |<p>The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online or if after 60 seconds from being quarantined the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=6` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is stopped |<p>The disk group is stopped.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=7` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group status is critical |<p>The disk group is online but isn't fault tolerant because some of its disks are down.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=2` |AVERAGE | |
+|Pool [{#NAME}]: Pool health is in degraded state |<p>Pool health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=1` |WARNING | |
+|Pool [{#NAME}]: Pool health is in fault state |<p>Pool health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=2` |AVERAGE | |
+|Pool [{#NAME}]: Pool health is in unknown state |<p>Pool health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=3` |INFO | |
+|Pool [{#NAME}]: Pool space is low |<p>Pool is running low on free space (used space exceeds {$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}%).</p> |`min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}` |WARNING |<p>**Depends on**:</p><p>- Pool [{#NAME}]: Pool space is critically low</p> |
+|Pool [{#NAME}]: Pool space is critically low |<p>Pool is running critically low on free space (used space exceeds {$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}%).</p> |`min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}` |AVERAGE | |
+|Enclosure [{#DURABLE.ID}]: Enclosure health is in degraded state |<p>Enclosure health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=1` |WARNING | |
+|Enclosure [{#DURABLE.ID}]: Enclosure health is in fault state |<p>Enclosure health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Enclosure [{#DURABLE.ID}]: Enclosure health is in unknown state |<p>Enclosure health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=3` |INFO | |
+|Enclosure [{#DURABLE.ID}]: Enclosure has critical status |<p>Enclosure has critical status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=2` |HIGH | |
+|Enclosure [{#DURABLE.ID}]: Enclosure has warning status |<p>Enclosure has warning status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=3` |WARNING | |
+|Enclosure [{#DURABLE.ID}]: Enclosure is unavailable |<p>Enclosure is unavailable.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=7` |HIGH | |
+|Enclosure [{#DURABLE.ID}]: Enclosure is unrecoverable |<p>Enclosure is unrecoverable.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=4` |HIGH | |
+|Enclosure [{#DURABLE.ID}]: Enclosure has unknown status |<p>Enclosure has unknown status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=6` |INFO | |
+|Power supply [{#DURABLE.ID}]: Power supply health is in degraded state |<p>Power supply health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=1` |WARNING | |
+|Power supply [{#DURABLE.ID}]: Power supply health is in fault state |<p>Power supply health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Power supply [{#DURABLE.ID}]: Power supply health is in unknown state |<p>Power supply health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=3` |INFO | |
+|Power supply [{#DURABLE.ID}]: Power supply has error status |<p>Power supply has error status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=2` |AVERAGE | |
+|Power supply [{#DURABLE.ID}]: Power supply has warning status |<p>Power supply has warning status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=1` |WARNING | |
+|Power supply [{#DURABLE.ID}]: Power supply has unknown status |<p>Power supply has unknown status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=4` |INFO | |
+|Port [{#NAME}]: Port health is in degraded state |<p>Port health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=1` |WARNING | |
+|Port [{#NAME}]: Port health is in fault state |<p>Port health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=2` |AVERAGE | |
+|Port [{#NAME}]: Port health is in unknown state |<p>Port health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=3` |INFO | |
+|Port [{#NAME}]: Port has error status |<p>Port has error status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=2` |AVERAGE | |
+|Port [{#NAME}]: Port has warning status |<p>Port has warning status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=1` |WARNING | |
+|Port [{#NAME}]: Port has unknown status |<p>Port has unknown status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=4` |INFO | |
+|Fan [{#DURABLE.ID}]: Fan health is in degraded state |<p>Fan health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=1` |WARNING | |
+|Fan [{#DURABLE.ID}]: Fan health is in fault state |<p>Fan health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Fan [{#DURABLE.ID}]: Fan health is in unknown state |<p>Fan health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=3` |INFO | |
+|Fan [{#DURABLE.ID}]: Fan has error status |<p>Fan has error status.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=1` |AVERAGE | |
+|Fan [{#DURABLE.ID}]: Fan is missing |<p>Fan is missing.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=3` |INFO | |
+|Fan [{#DURABLE.ID}]: Fan is off |<p>Fan is off.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=2` |WARNING | |
+|Disk [{#DURABLE.ID}]: Disk health is in degraded state |<p>Disk health is in degraded state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=1` |WARNING | |
+|Disk [{#DURABLE.ID}]: Disk health is in fault state |<p>Disk health is in fault state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Disk [{#DURABLE.ID}]: Disk health is in unknown state |<p>Disk health is in unknown state.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=3` |INFO | |
+|Disk [{#DURABLE.ID}]: Disk temperature is high |<p>Disk temperature is high.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=3` |WARNING | |
+|Disk [{#DURABLE.ID}]: Disk temperature is critically high |<p>Disk temperature is critically high.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=2` |AVERAGE | |
+|Disk [{#DURABLE.ID}]: Disk temperature is unknown |<p>Disk temperature is unknown.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=4` |INFO | |
+|FRU [{#ENCLOSURE.ID}: {#LOCATION}]: FRU status is Degraded or Fault |<p>FRU status is Degraded or Fault.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status])=1` |AVERAGE | |
+|FRU [{#ENCLOSURE.ID}: {#LOCATION}]: FRU ID data is invalid |<p>The FRU ID data is invalid. The FRU's EEPROM is improperly programmed.</p> |`last(/HPE MSA 2040 Storage by HTTP/hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status])=0` |WARNING | |
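+
+Note that the utilization triggers evaluate `min(...,5m)` rather than `last(...)`, so every sample within the five-minute window must exceed the threshold before the trigger fires. A sketch with hypothetical samples:
+
+```js
+// How min(/host/key,5m)>threshold behaves for the space triggers.
+const samples = [78.4, 81.2, 83.0, 82.5];  // hypothetical utilization values over 5m
+const warn = 80;                           // {$HPE.MSA.POOL.PUSED.MAX.WARN}
+
+console.log(Math.min.apply(null, samples) > warn);  // false: 78.4 is still below 80
+```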
+
+## Feedback
+
+Please report any issues with the template at https://support.zabbix.com
+
+You can also provide feedback, discuss the template or ask for help with it at [ZABBIX forums](https://www.zabbix.com/forum/zabbix-suggestions-and-feedback).
+
diff --git a/templates/san/hpe_msa2040_http/template_san_hpe_msa2040_http.yaml b/templates/san/hpe_msa2040_http/template_san_hpe_msa2040_http.yaml
new file mode 100644
index 00000000000..e28b8ae6fd9
--- /dev/null
+++ b/templates/san/hpe_msa2040_http/template_san_hpe_msa2040_http.yaml
@@ -0,0 +1,4417 @@
+zabbix_export:
+ version: '6.0'
+ date: '2022-06-16T07:39:49Z'
+ groups:
+ -
+ uuid: 7c2cb727f85b492d88cd56e17127c64d
+ name: Templates/SAN
+ templates:
+ -
+ uuid: be10b1140fce4cc08247260b71bcd037
+ template: 'HPE MSA 2040 Storage by HTTP'
+ name: 'HPE MSA 2040 Storage by HTTP'
+ description: |
+ The template to monitor HPE MSA 2040 by HTTP.
+ It works without any external scripts and uses the script item.
+
+ Setup:
+ 1. Create user "zabbix" with monitor role on the storage.
+ 2. Link the template to a host.
+ 3. Configure {$HPE.MSA.API.PASSWORD} and an interface with address through which API is accessible.
+ 4. Change {$HPE.MSA.API.SCHEME} and {$HPE.MSA.API.PORT} macros if needed.
+
+ You can discuss this template or leave feedback on our forum https://www.zabbix.com/forum/zabbix-suggestions-and-feedback
+
+ Template tooling version used: 0.41
+ groups:
+ -
+ name: Templates/SAN
+ items:
+ -
+ uuid: 51d0ae1b4663471d868c27ccd2fb4fed
+ name: 'Get method errors'
+ type: DEPENDENT
+ key: hpe.msa.data.errors
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: TEXT
+ description: 'A list of method errors from API requests.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''errors'']'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: errors
+ triggers:
+ -
+ uuid: 7f80562a0b4f4329be454c418de3f517
+ expression: 'length(last(/HPE MSA 2040 Storage by HTTP/hpe.msa.data.errors))>0'
+ name: 'There are errors in method requests to API'
+ priority: AVERAGE
+ description: 'There are errors in method requests to API.'
+ dependencies:
+ -
+ name: 'Service is down or unavailable'
+ expression: 'max(/HPE MSA 2040 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}","{$HPE.MSA.API.PORT}"],5m)=0'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: e07e09dbcdd44f509a06343c9a53a455
+ name: 'HPE MSA: Get data'
+ type: SCRIPT
+ key: hpe.msa.data.get
+ history: '0'
+ trends: '0'
+ value_type: TEXT
+ params: |
+ var params = JSON.parse(value),
+ fields = ['username', 'password', 'base_url'],
+ methods = [
+ 'system',
+ 'controllers',
+ 'controller-statistics',
+ 'frus',
+ 'disk-groups',
+ 'disk-group-statistics',
+ 'disks',
+ 'enclosures',
+ 'fans',
+ 'pools',
+ 'ports',
+ 'power-supplies',
+ 'volumes',
+ 'volume-statistics'
+ ],
+ data_tmp = {},
+ result_tmp = {},
+ session_key,
+ data = {};
+
+ fields.forEach(function (field) {
+ if (typeof params !== 'object' || typeof params[field] === 'undefined' || params[field] === '' ) {
+ throw 'Required param is not set: "' + field + '".';
+ }
+ });
+
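+                      // Make sure the base URL ends with a slash so endpoint paths can be appended directly.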
+ if (!params.base_url.endsWith('/')) {
+ params.base_url += '/';
+ }
+
+ var response, request = new HttpRequest();
+ request.addHeader('datatype: xml');
+
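+                      // Log in: the API expects GET api/login/<md5("username_password")> and returns a session key in the XML response.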
+                      var auth_string = md5(params.username + '_' + params.password);
+ response = request.get(params.base_url + 'api/login/' + auth_string);
+
+ if (request.getStatus() < 200 || request.getStatus() >= 300) {
+ throw 'Authentication request has failed with status code ' + request.getStatus() + ': ' + response;
+ }
+
+ if (response !== null) {
+ try {
+ session_key = XML.query(response, '/RESPONSE/OBJECT/PROPERTY[@name="response"]/text()');
+ return_code = XML.query(response, '/RESPONSE/OBJECT/PROPERTY[@name="return-code"]/text()');
+ }
+ catch (error) {
+ throw 'Failed to parse authentication response received from device API.';
+ }
+ }
+
+ if (return_code != '1') {
+                          throw 'Authentication failed.';
+ }
+ else if (session_key === '') {
+ throw 'Failed to retrieve session key from authentication response.';
+ }
+
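+                      // Authenticate all further requests with the session key; the 'api-embed' datatype returns XML with related objects embedded.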
+ request.clearHeader();
+ request.addHeader('sessionKey: ' + session_key);
+ request.addHeader('datatype: api-embed');
+
+ data.errors = [];
+
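+                      // Query each 'show' method, collecting per-method errors so a single failing endpoint does not abort the whole collection.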
+ methods.forEach(function (method) {
+ response = request.get(params.base_url + 'api/show/' + method);
+                          var method_error = '';
+
+ if (request.getStatus() < 200 || request.getStatus() >= 300) {
+ method_error = 'Method: ' + method + '. Request has failed with status code ' + request.getStatus() + ': ' + response;
+ data.errors.push(method_error);
+ return;
+ }
+
+ if (response !== null) {
+ try {
+ result_tmp = JSON.parse(XML.toJson(response));
+ data[method] = [];
+
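+                                  // Flatten each returned OBJECT's PROPERTY elements into a key/value map; 'status' and 'enclosure-sku' wrapper objects are skipped.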
+ result_tmp.RESPONSE.OBJECT.forEach(function (object) {
+ var data_tmp = {};
+
+ if (object['@basetype'] !== 'status' && object['@basetype'] !== 'enclosure-sku') {
+ object.PROPERTY.forEach(function (property) {
+                                          var name = property['@name'];
+                                          var value = property['#text'] || '';
+ data_tmp[name] = value;
+ });
+
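+                                      // Normalize the durable ID to lower case so it matches the IDs from the 'controllers' method, which discovery filters on.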
+ if (method == 'controller-statistics') {
+ data_tmp['durable-id'] = data_tmp['durable-id'].toLowerCase();
+ }
+
+ data[method].push(data_tmp);
+ }
+ });
+ }
+ catch (error) {
+ method_error = 'Method: ' + method + '. Failed to parse response received from device API.';
+ }
+ }
+ else {
+ method_error = 'Method: ' + method + '. No data received by request.';
+ }
+
+ if (method_error.length > 0) {
+ data.errors.push(method_error);
+ }
+ });
+
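+                      // Collapse the error list to an empty string so the dependent 'Get method errors' item and its trigger stay quiet when nothing failed.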
+ if (data.errors.length == 0) {
+ data.errors = '';
+ }
+
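+                      // Log out to invalidate the session key.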
+ response = request.get(params.base_url + 'api/exit');
+
+ if (request.getStatus() < 200 || request.getStatus() >= 300) {
+ throw 'Logout request failed with status code ' + request.getStatus() + ': ' + response;
+ }
+
+ return JSON.stringify(data);
+ description: 'The JSON with result of API requests.'
+ timeout: '{$HPE.MSA.DATA.TIMEOUT}'
+ parameters:
+ -
+ name: base_url
+ value: '{$HPE.MSA.API.SCHEME}://{HOST.CONN}:{$HPE.MSA.API.PORT}/'
+ -
+ name: username
+ value: '{$HPE.MSA.API.USERNAME}'
+ -
+ name: password
+ value: '{$HPE.MSA.API.PASSWORD}'
+ tags:
+ -
+ tag: component
+ value: raw
+ -
+ uuid: 802692ec1429407a8bbb55e338959c0b
+ name: 'System contact'
+ type: DEPENDENT
+ key: hpe.msa.system.contact
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'The name of the person who administers the system.'
+ inventory_link: CONTACT
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.system[0].[''system-contact'']'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: 4516edee03084515bcf139c22abc4c7c
+ name: 'System health'
+ type: DEPENDENT
+ key: hpe.msa.system.health
+ delay: '0'
+ history: 7d
+ description: 'System health status.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.system[0].[''health-numeric'']'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: system
+ triggers:
+ -
+ uuid: ee37a443b22a4161a88014a0c32dfdfa
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.system.health)=1'
+ name: 'System health is in degraded state'
+ priority: WARNING
+ description: 'System health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 54472b6cdf84418baf10b4a7d5e16e5c
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.system.health)=2'
+ name: 'System health is in fault state'
+ priority: AVERAGE
+ description: 'System health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: ccb821dafad1404dbc1873561a69b7cc
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.system.health)=3'
+ name: 'System health is in unknown state'
+ priority: INFO
+ description: 'System health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: 6b82f7545a334f9cad752bd18f8886bc
+ name: 'System information'
+ type: DEPENDENT
+ key: hpe.msa.system.info
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'A brief description of what the system is used for or how it is configured.'
+ inventory_link: NOTES
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.system[0].[''system-information'']'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: c5f082947e844adbbcf2982ad9c0c76e
+ name: 'System location'
+ type: DEPENDENT
+ key: hpe.msa.system.location
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'The location of the system.'
+ inventory_link: LOCATION
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.system[0].[''system-location'']'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: 419165bfe80f46f7af1c5d6ab46c1f14
+ name: 'System name'
+ type: DEPENDENT
+ key: hpe.msa.system.name
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'The name of the storage system.'
+ inventory_link: NAME
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.system[0].[''system-name'']'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: 79c87a81895f46658f2e902cf7166860
+ name: 'Product ID'
+ type: DEPENDENT
+ key: hpe.msa.system.product_id
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'The product model identifier.'
+ inventory_link: MODEL
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.system[0].[''product-id'']'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: 947bb21483e747c9ad13b995b79289c0
+ name: 'Vendor name'
+ type: DEPENDENT
+ key: hpe.msa.system.vendor_name
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'The vendor name.'
+ inventory_link: VENDOR
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.system[0].[''vendor-name'']'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: d1242f5aede14008ae6896123bb944a5
+ name: 'HPE MSA: Service ping'
+ type: SIMPLE
+ key: 'net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}","{$HPE.MSA.API.PORT}"]'
+ history: 7d
+ description: 'Check if HTTP/HTTPS service accepts TCP connections.'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: network
+ triggers:
+ -
+ uuid: b8d07373a0fb4051a0534891b255994a
+ expression: 'max(/HPE MSA 2040 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}","{$HPE.MSA.API.PORT}"],5m)=0'
+ name: 'Service is down or unavailable'
+ priority: HIGH
+ description: 'HTTP/HTTPS service is down or unable to establish TCP connection.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ discovery_rules:
+ -
+ uuid: 66eabcbe564644dea3427afcbf76b87c
+ name: 'Controllers discovery'
+ type: DEPENDENT
+ key: hpe.msa.controllers.discovery
+ delay: '0'
+ description: 'Discover controllers.'
+ item_prototypes:
+ -
+ uuid: 53b0ea51add74c629814c881ac824d1b
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Read hits, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.read.hits["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block to be read is found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''read-cache-hits''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 23ed270bc823484cb514600bf23b2aa5
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Read misses, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.read.misses["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block to be read is not found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''read-cache-misses''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 71a92c76ae7740cd9e58ea337f4a75e3
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write hits, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.write.hits["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block written to is found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''write-cache-hits''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: bafcf98cee9c4a8da0aea7b39a5242d4
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write misses, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.write.misses["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block written to is not found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''write-cache-misses''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: fa9400f2dcba40f4b57dfcef6f7856a0
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write utilization'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.write["{#CONTROLLER.ID}",util]'
+ delay: '0'
+ history: 7d
+ units: '%'
+ description: 'Percentage of write cache in use, from 0 to 100.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''write-cache-used''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 38a6ca0447d548c593d08acf377250cb
+ name: 'Controller [{#CONTROLLER.ID}]: Cache memory size'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache["{#CONTROLLER.ID}",total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Controller cache memory size.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''cache-memory-size''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: cfff8c77d99440d18794e1c6dbf738ad
+ name: 'Controller [{#CONTROLLER.ID}]: CPU utilization'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util]'
+ delay: '0'
+ history: 7d
+ units: '%'
+ description: 'Percentage of time the CPU is busy, from 0 to 100.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''cpu-load''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ trigger_prototypes:
+ -
+ uuid: b94f1cfd6e6a48f8a18c644532b7a9c8
+ expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util],5m)>{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}'
+ name: 'Controller [{#CONTROLLER.ID}]: High CPU utilization'
+ event_name: 'Controller [{#CONTROLLER.ID}]: High CPU utilization (over {$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}% for 5m)'
+ priority: WARNING
+ description: 'Controller CPU utilization is too high. The system might be slow to respond.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: c87dc81f4a3447f3962a69a8b0d79769
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate: Reads'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.data_transfer.reads["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data read rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''data-read-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 7c34d1c4fd784fb695d9fc7c5a686329
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate: Total'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.data_transfer.total["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ units: Bps
+ description: 'The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''bytes-per-second-numeric''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 93b508f92de04dfbbfe7099bf37796ce
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate: Writes'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.data_transfer.writes["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data write rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''data-written-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 3d7f1a97cd8249efbabc2402006c1cc2
+ name: 'Controller [{#CONTROLLER.ID}]: IOPS, read rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.iops.read["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!r/s'
+ description: 'Number of read operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''number-of-reads''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 8bf0601293a64628be08d16391d1e11b
+ name: 'Controller [{#CONTROLLER.ID}]: IOPS, total rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.iops.total["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ units: '!iops'
+ description: 'Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''iops''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 6444038b72294992ab17c126ccbe7251
+ name: 'Controller [{#CONTROLLER.ID}]: IOPS, write rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.iops.write["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!w/s'
+ description: 'Number of write operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''number-of-writes''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 5940d26205924a13ba351f5d56192fcb
+ name: 'Controller [{#CONTROLLER.ID}]: Disks'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",disks]'
+ delay: '0'
+ history: 7d
+ description: 'Number of disks in the storage system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''disks''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 94c2c9bfd2414875a53fbe94f6230666
+ name: 'Controller [{#CONTROLLER.ID}]: Disk groups'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",disk_groups]'
+ delay: '0'
+ history: 7d
+ description: 'Number of disk groups in the storage system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''virtual-disks''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 5a987843b14c4d25a1fde4429015f773
+ name: 'Controller [{#CONTROLLER.ID}]: Firmware version'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",firmware]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Storage controller firmware version.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''sc-fw''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 6d2a84b6b1804082ab4ef3451a52b552
+ name: 'Controller [{#CONTROLLER.ID}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Controller health status.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: component
+ value: health
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ trigger_prototypes:
+ -
+ uuid: 381a5fe2adfd4f4ea15763cdf0a1bd0d
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=1'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller health is in degraded state'
+ priority: WARNING
+ description: 'Controller health is in degraded state.'
+ dependencies:
+ -
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 2082d12ff9c54a5ea709dba05c14ae00
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=2'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller health is in fault state'
+ priority: AVERAGE
+ description: 'Controller health is in fault state.'
+ dependencies:
+ -
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 0b2ed99c47a64210b198cc0a3a6b84b5
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=3'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller health is in unknown state'
+ priority: INFO
+ description: 'Controller health is in unknown state.'
+ dependencies:
+ -
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: 5f00490ddd22458b93add06ed24a9f96
+ name: 'Controller [{#CONTROLLER.ID}]: IP address'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",ip_address]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Controller network port IP address.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''ip-address''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 33e754d5acb84b7c86b2e23b122e6eed
+ name: 'Controller [{#CONTROLLER.ID}]: Part number'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",part_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Part number of the controller.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''part-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: e4930566c3844f9487e343c203f3eb96
+ name: 'Controller [{#CONTROLLER.ID}]: Pools'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",pools]'
+ delay: '0'
+ history: 7d
+ description: 'Number of pools in the storage system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''number-of-storage-pools''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: c073adb77eb84cf79e1e1693d9378d47
+ name: 'Controller [{#CONTROLLER.ID}]: Serial number'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",serial_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Storage controller serial number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''serial-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: a2be1b4b814d45b18bb4e313818511d6
+ name: 'Controller [{#CONTROLLER.ID}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",status]'
+ delay: '0'
+ history: 7d
+ description: 'Storage controller status.'
+ valuemap:
+ name: 'Controller status'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''status-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: component
+ value: health
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ trigger_prototypes:
+ -
+ uuid: 1524e80a37cb4b64a7360488e132a433
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ priority: HIGH
+ description: 'The controller is down.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: df2bede9ea85483581a35a45a15d4de4
+ name: 'Controller [{#CONTROLLER.ID}]: Uptime'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",uptime]'
+ delay: '0'
+ history: 7d
+ units: uptime
+ description: 'Number of seconds since the controller was restarted.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''power-on-time''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ trigger_prototypes:
+ -
+ uuid: 136bb1ccd4114a529a99ddbf803fd974
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",uptime])<10m'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller has been restarted'
+ event_name: 'Controller [{#CONTROLLER.ID}]: Controller has been restarted (uptime < 10m)'
+ priority: WARNING
+ description: 'The controller uptime is less than 10 minutes.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ graph_prototypes:
+ -
+ uuid: 93aeac1a193e43d3a93a3892bd26b0ff
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write utilization'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.write["{#CONTROLLER.ID}",util]'
+ -
+ uuid: a7432b24cd834aa0be9dec3935641dfb
+ name: 'Controller [{#CONTROLLER.ID}]: Cache usage'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.read.hits["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.read.misses["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.write.hits["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '3'
+ color: A54F10
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.write.misses["{#CONTROLLER.ID}",rate]'
+ -
+ uuid: fca4007d4dd1491dbceba1644b50e1b5
+ name: 'Controller [{#CONTROLLER.ID}]: Controller CPU utilization'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util]'
+ -
+ uuid: 0b2598db582546308d092c9e7889e698
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.data_transfer.reads["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.data_transfer.writes["{#CONTROLLER.ID}",rate]'
+ -
+ uuid: 0793bb861e874a2c8e7e60a4c40bc34e
+ name: 'Controller [{#CONTROLLER.ID}]: Disk operations rate'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.iops.read["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.controllers.iops.write["{#CONTROLLER.ID}",rate]'
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#CONTROLLER.ID}'
+ path: '$.[''controller-id'']'
+ -
+ lld_macro: '{#DURABLE.ID}'
+ path: '$.[''durable-id'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: 16b9a9b6da11463d865cb2b59f77f376
+ name: 'Disks discovery'
+ type: DEPENDENT
+ key: hpe.msa.disks.discovery
+ delay: '0'
+ description: 'Discover disks.'
+ item_prototypes:
+ -
+ uuid: 60418ff95d2b4ac698fe041647656005
+ name: 'Disk [{#DURABLE.ID}]: Space total'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.space["{#DURABLE.ID}",total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Total size of the disk.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''size-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '512'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 579f29536b0740b9887cbb0863bd3e45
+ name: 'Disk [{#DURABLE.ID}]: SSD life left'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.ssd["{#DURABLE.ID}",life_left]'
+ delay: '0'
+ history: 7d
+ discover: NO_DISCOVER
+ units: '%'
+                          description: 'The percentage of disk life remaining.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''ssd-life-left-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: a430bd06d24447649687dc9b9c3dee2c
+ name: 'Disk [{#DURABLE.ID}]: Disk group'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",group]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'If the disk is in a disk group, the disk group name.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''disk-group''].first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 17f4069e731b45c7a9d9bfc5786a07fc
+ name: 'Disk [{#DURABLE.ID}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Disk health status.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: health
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: 58d2da30bfe74d05ad05e0b286fe0fae
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=1'
+ name: 'Disk [{#DURABLE.ID}]: Disk health is in degraded state'
+ priority: WARNING
+ description: 'Disk health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 1f0e81d23e1e423ba885425f33773f5b
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=2'
+ name: 'Disk [{#DURABLE.ID}]: Disk health is in fault state'
+ priority: AVERAGE
+ description: 'Disk health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: dc75dd0456a145b3ab0646c9403caeb6
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=3'
+ name: 'Disk [{#DURABLE.ID}]: Disk health is in unknown state'
+ priority: INFO
+ description: 'Disk health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: 689e29b31fd0490fb26920c04d094136
+ name: 'Disk [{#DURABLE.ID}]: Model'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",model]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Disk model.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''model''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 20d37295acce41acac8ba77962130774
+ name: 'Disk [{#DURABLE.ID}]: Storage pool'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",pool]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'If the disk is in a pool, the pool name.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''storage-pool-name''].first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 7c4da69f28824444960e6783fe090526
+ name: 'Disk [{#DURABLE.ID}]: Serial number'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",serial_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Disk serial number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''serial-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 770749eafc79429185e7127d95b1ff74
+ name: 'Disk [{#DURABLE.ID}]: Temperature'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",temperature]'
+ delay: '0'
+ history: 7d
+ units: '!°C'
+ description: 'Temperature of the disk.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''temperature-numeric''].first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 5ba57b2f4d014b2a81c546e8f74a133e
+ name: 'Disk [{#DURABLE.ID}]: Temperature status'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",temperature_status]'
+ delay: '0'
+ history: 7d
+ description: 'Disk temperature status.'
+ valuemap:
+ name: 'Disk temperature status'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''temperature-status-numeric''].first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: IN_RANGE
+ parameters:
+ - '1'
+ - '3'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: health
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: b194f7b133274552823b66e44c88bd02
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=2'
+ name: 'Disk [{#DURABLE.ID}]: Disk temperature is critically high'
+ priority: AVERAGE
+ description: 'Disk temperature is critically high.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: aaabacd5f5194378b6c8388e2ef90abe
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=3'
+ name: 'Disk [{#DURABLE.ID}]: Disk temperature is high'
+ priority: WARNING
+ description: 'Disk temperature is high.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 60d0fc661aa140798f937a63fdd6e5f9
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=4'
+ name: 'Disk [{#DURABLE.ID}]: Disk temperature is unknown'
+ priority: INFO
+ description: 'Disk temperature is unknown.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: d781943c08d24556a083a16cca34ad58
+ name: 'Disk [{#DURABLE.ID}]: Type'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",type]'
+ delay: '0'
+ history: 7d
+ description: |
+ Disk type:
+ SAS: Enterprise SAS spinning disk.
+ SAS MDL: Midline SAS spinning disk.
+                            SSD SAS: SAS solid-state disk.
+ valuemap:
+ name: 'Disk type'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''description-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 86ce9f4d139e46908750d158b004b517
+ name: 'Disk [{#DURABLE.ID}]: Vendor'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",vendor]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Disk vendor.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''vendor''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#DURABLE.ID}'
+ path: '$.[''durable-id'']'
+ -
+ lld_macro: '{#TYPE}'
+ path: '$.[''description-numeric'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ overrides:
+ -
+ name: 'SSD life left'
+ step: '1'
+ filter:
+ conditions:
+ -
+ macro: '{#TYPE}'
+ value: '8'
+ formulaid: A
+ operations:
+ -
+ operationobject: ITEM_PROTOTYPE
+ operator: REGEXP
+ value: 'SSD life left'
+ status: ENABLED
+ discover: DISCOVER
+ -
+ uuid: dd952ff876134376baef061dc260884c
+ name: 'Disk groups discovery'
+ type: DEPENDENT
+ key: hpe.msa.disks.groups.discovery
+ delay: '0'
+ description: 'Discover disk groups.'
+ item_prototypes:
+ -
+ uuid: 5b0b3db4bdff429996111d566b6d0386
+ name: 'Disk group [{#NAME}]: Average response time: Read'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: s
+ description: 'Average response time for all read operations, calculated over the interval since these statistics were last requested or reset.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''avg-read-rsp-time''].first()'
+ -
+ type: MULTIPLIER
+ parameters:
+ - '0.000001'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 4a4fb1ae86df4607882de9c9d40f51f4
+ name: 'Disk group [{#NAME}]: Average response time: Total'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",total]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: s
+ description: 'Average response time for read and write operations, calculated over the interval since these statistics were last requested or reset.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''avg-rsp-time''].first()'
+ -
+ type: MULTIPLIER
+ parameters:
+ - '0.000001'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: a93c1e1b1eee496d861464128aaefa57
+ name: 'Disk group [{#NAME}]: Average response time: Write'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: s
+ description: 'Average response time for all write operations, calculated over the interval since these statistics were last requested or reset.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''avg-write-rsp-time''].first()'
+ -
+ type: MULTIPLIER
+ parameters:
+ - '0.000001'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 46ba55c8ec2e4811b254441f22ead159
+ name: 'Disk group [{#NAME}]: Data transfer rate: Reads'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data read rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''data-read-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: b1e2347ea10b4e84bb227668f5560b14
+ name: 'Disk group [{#NAME}]: Data transfer rate: Total'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.data_transfer.total["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ units: Bps
+ description: 'The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''bytes-per-second-numeric''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: a3df11b895fa425799c34516050000bd
+ name: 'Disk group [{#NAME}]: Data transfer rate: Writes'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data write rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''data-written-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 18cd4383127548b68313184a2b94750f
+ name: 'Disk group [{#NAME}]: IOPS, read rate'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.iops.read["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!r/s'
+ description: 'Number of read operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''number-of-reads''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 044e291ab66d48dcb8b66ee18f638702
+ name: 'Disk group [{#NAME}]: IOPS, total rate'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.iops.total["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ units: '!iops'
+ description: 'Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''iops''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 66ec5badb1d2491d9e07b5ce45486d72
+ name: 'Disk group [{#NAME}]: IOPS, write rate'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.iops.write["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!w/s'
+ description: 'Number of write operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''number-of-writes''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 5356a1f819a54c59bb3765d99a965537
+ name: 'Disk group [{#NAME}]: RAID type'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.raid["{#NAME}",type]'
+ delay: '0'
+ history: 7d
+ description: 'The RAID level of the disk group.'
+ valuemap:
+ name: 'RAID type'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups''][?(@[''name''] == "{#NAME}")].[''raidtype-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: b1c95904002b4c17a1c007c664fa4ff8
+ name: 'Disk group [{#NAME}]: Space free'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.space["{#NAME}",free]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'The free space in the disk group.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups''][?(@[''name''] == "{#NAME}")].[''freespace-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '512'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: bfe1a64952754488898798f5f07e24b1
+ name: 'Disk group [{#NAME}]: Pool space used'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.space["{#NAME}",pool_util]'
+ delay: '0'
+ history: 7d
+ units: '%'
+ description: 'The percentage of pool capacity that the disk group occupies.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups''][?(@[''name''] == "{#NAME}")].[''pool-percentage''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 29eae883b9fc4e2191daa870bd9d58ad
+ name: 'Disk group [{#NAME}]: Space total'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.space["{#NAME}",total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'The capacity of the disk group.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups''][?(@[''name''] == "{#NAME}")].[''size-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '512'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 760b63c8140544dd8af0de8fd873c8cb
+ name: 'Disk group [{#NAME}]: Space utilization'
+ type: CALCULATED
+ key: 'hpe.msa.disks.groups.space["{#NAME}",util]'
+ history: 7d
+ value_type: FLOAT
+ units: '%'
+ params: '100-last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100'
+ description: 'The space utilization percentage in the disk group.'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: d6494d79dae94aeda2b78169f8960224
+ expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}'
+ name: 'Disk group [{#NAME}]: Disk group space is critically low'
+ event_name: 'Disk group [{#NAME}]: Disk group space is critically low (used > {$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}%)'
+ priority: AVERAGE
+                                  description: 'The disk group is running low on free space (used space is above {$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}%).'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: ea04be93082640709ec6e58ae640575c
+ expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}'
+ name: 'Disk group [{#NAME}]: Disk group space is low'
+ event_name: 'Disk group [{#NAME}]: Disk group space is low (used > {$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}%)'
+ priority: WARNING
+                                  description: 'The disk group is running low on free space (used space is above {$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}%).'
+ dependencies:
+ -
+ name: 'Disk group [{#NAME}]: Disk group space is critically low'
+ expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 085fae4f87444b62ae5c52703176a533
+ name: 'Disk group [{#NAME}]: Disks count'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups["{#NAME}",disk_count]'
+ delay: '0'
+ history: 7d
+ description: 'Number of disks in the disk group.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups''][?(@[''name''] == "{#NAME}")].[''diskcount''].first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 1c714d46a3ae4e77b4a2e155c047e630
+ name: 'Disk group [{#NAME}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups["{#NAME}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Disk group health.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups''][?(@[''name''] == "{#NAME}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: component
+ value: health
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: ad99b0f4a6b14b1d9819ab63376e11e7
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=1'
+ name: 'Disk group [{#NAME}]: Disk group health is in degraded state'
+ priority: WARNING
+ description: 'Disk group health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 28f69da63b024079b8953165da6cbfdc
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=2'
+ name: 'Disk group [{#NAME}]: Disk group health is in fault state'
+ priority: AVERAGE
+ description: 'Disk group health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 94695c4222c94bd1b12d9ecb4b21e628
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=3'
+ name: 'Disk group [{#NAME}]: Disk group health is in unknown state'
+ priority: INFO
+ description: 'Disk group health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: 27ad0ae81baa43528cf94d3ccc5c3ec3
+ name: 'Disk group [{#NAME}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups["{#NAME}",status]'
+ delay: '0'
+ history: 7d
+ description: |
+ The status of the disk group:
+
+                            - CRIT: Critical. The disk group is online but isn't fault tolerant because some of its disks are down.
+                            - DMGD: Damaged. The disk group is online and fault tolerant, but some of its disks are damaged.
+                            - FTDN: Fault tolerant with a down disk. The disk group is online and fault tolerant, but some of its disks are down.
+                            - FTOL: Fault tolerant.
+                            - MSNG: Missing. The disk group is online and fault tolerant, but some of its disks are missing.
+                            - OFFL: Offline. Either the disk group is using offline initialization, or its disks are down and data may be lost.
+                            - QTCR: Quarantined critical. The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online or if, 60 seconds after being quarantined, the disk group is QTCR or QTDN, the disk group is automatically dequarantined.
+                            - QTDN: Quarantined with a down disk. The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online or if, 60 seconds after being quarantined, the disk group is QTCR or QTDN, the disk group is automatically dequarantined.
+                            - QTOF: Quarantined offline. The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group.
+                            - QTUN: Quarantined unsupported. The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups.
+                            - STOP: The disk group is stopped.
+                            - UNKN: Unknown.
+                            - UP: Up. The disk group is online and does not have fault-tolerant attributes.
+ valuemap:
+ name: 'Disk group status'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups''][?(@[''name''] == "{#NAME}")].[''status-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: component
+ value: health
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: 9bbf1f8a67564b769db5921a2023defd
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=9'
+ name: 'Disk group [{#NAME}]: Disk group has damaged disks'
+ priority: AVERAGE
+                                  description: 'The disk group is online and fault tolerant, but some of its disks are damaged.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: d4d5a63b514d4f1aaff9e8c68db9026e
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=8'
+ name: 'Disk group [{#NAME}]: Disk group has missing disks'
+ priority: AVERAGE
+                                  description: 'The disk group is online and fault tolerant, but some of its disks are missing.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 26b5b53b33c940d5a642ea13d670bf55
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=1'
+ name: 'Disk group [{#NAME}]: Disk group is fault tolerant with a down disk'
+ priority: AVERAGE
+                                  description: 'The disk group is online and fault tolerant, but some of its disks are down.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: b1e7e080f7264ae0be323a500abc211f
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=3'
+ name: 'Disk group [{#NAME}]: Disk group is offline'
+ priority: AVERAGE
+                                  description: 'Either the disk group is using offline initialization, or its disks are down and data may be lost.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: b8b5b248c275453d91c214c19d01f5d9
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=4'
+ name: 'Disk group [{#NAME}]: Disk group is quarantined critical'
+ priority: AVERAGE
+                                  description: 'The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online or if, 60 seconds after being quarantined, the disk group is QTCR or QTDN, the disk group is automatically dequarantined.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: bc1c2bbfffd541998099e695f9c98386
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=5'
+ name: 'Disk group [{#NAME}]: Disk group is quarantined offline'
+ priority: AVERAGE
+ description: 'The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 7c4981f0b0fb4a3891b8a410501224d0
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=5'
+ name: 'Disk group [{#NAME}]: Disk group is quarantined unsupported'
+ priority: AVERAGE
+ description: 'The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 9e762711ecf54f8691e6be32a3e92738
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=6'
+ name: 'Disk group [{#NAME}]: Disk group is quarantined with an inaccessible disk'
+ priority: AVERAGE
+ description: 'The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disk comes online, or if 60 seconds after being quarantined the disk group status is QTCR or QTDN, the disk group is automatically dequarantined.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 191dbf4bdd294add8ed0815c21f6eadb
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=7'
+ name: 'Disk group [{#NAME}]: Disk group is stopped'
+ priority: AVERAGE
+ description: 'The disk group is stopped.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 05480e1bc3ff4e7a8c5a20286d6f306c
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=2'
+ name: 'Disk group [{#NAME}]: Disk group status is critical'
+ priority: AVERAGE
+ description: 'The disk group is online but isn''t fault tolerant because some of its disks are down.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ graph_prototypes:
+ -
+ uuid: 1d5b8a7246a845678a938da75b7e32cc
+ name: 'Disk group [{#NAME}]: Average response time'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]'
+ -
+ uuid: b718bd4950f64abb892ba3bfe738ad49
+ name: 'Disk group [{#NAME}]: Data transfer rate'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]'
+ -
+ uuid: 55d7871c891446b086860f8c861fc3f7
+ name: 'Disk group [{#NAME}]: Disk operations rate'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.iops.read["{#NAME}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.iops.write["{#NAME}",rate]'
+ -
+ uuid: 234be7ebf50e42f6a098662f1fffba03
+ name: 'Disk group [{#NAME}]: Space utilization'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.space["{#NAME}",free]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.space["{#NAME}",total]'
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#NAME}'
+ path: '$.[''name'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: c6713507122242988dc9fae6e77bdff6
+ name: 'Enclosures discovery'
+ type: DEPENDENT
+ key: hpe.msa.enclosures.discovery
+ delay: '0'
+ description: 'Discover enclosures.'
+ item_prototypes:
+ -
+ uuid: 806b44d4f2dd44eea6db7e982c5fea16
+ name: 'Enclosure [{#DURABLE.ID}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.enclosures["{#DURABLE.ID}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Enclosure health.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''enclosures''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: enclosure
+ -
+ tag: component
+ value: health
+ -
+ tag: enclosure
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: 934c5f9e2d19499fab1d88ff9a36c9c9
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=1'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure health is in degraded state'
+ priority: WARNING
+ description: 'Enclosure health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 3d06d5ce761c42e983a5eec029bb671e
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=2'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure health is in fault state'
+ priority: AVERAGE
+ description: 'Enclosure health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: df1275bd16434b1ca77749930e1af3f8
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=3'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure health is in unknown state'
+ priority: INFO
+ description: 'Enclosure health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: 42987ecd83d74ffa91a8da7d72aacdb0
+ name: 'Enclosure [{#DURABLE.ID}]: Midplane serial number'
+ type: DEPENDENT
+ key: 'hpe.msa.enclosures["{#DURABLE.ID}",midplane_serial_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Midplane serial number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''enclosures''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''midplane-serial-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: enclosure
+ -
+ tag: enclosure
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 10fff6e5bc2143348c3b0c6a3eb87631
+ name: 'Enclosure [{#DURABLE.ID}]: Model'
+ type: DEPENDENT
+ key: 'hpe.msa.enclosures["{#DURABLE.ID}",model]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Enclosure model.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''enclosures''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''model''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: enclosure
+ -
+ tag: enclosure
+ value: '{#DURABLE.ID}'
+ -
+ uuid: f9279641e2cb4c95a07d43ef1f1caba5
+ name: 'Enclosure [{#DURABLE.ID}]: Part number'
+ type: DEPENDENT
+ key: 'hpe.msa.enclosures["{#DURABLE.ID}",part_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Enclosure part number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''enclosures''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''part-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: enclosure
+ -
+ tag: enclosure
+ value: '{#DURABLE.ID}'
+ -
+ uuid: cd0ec35c114b41579d0dfcebdc5e7211
+ name: 'Enclosure [{#DURABLE.ID}]: Power'
+ type: DEPENDENT
+ key: 'hpe.msa.enclosures["{#DURABLE.ID}",power]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: W
+ description: 'Enclosure power in watts.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''enclosures''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''enclosure-power''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: enclosure
+ -
+ tag: enclosure
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 98205e12a4c44a35a59879da5cc9f39c
+ name: 'Enclosure [{#DURABLE.ID}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.enclosures["{#DURABLE.ID}",status]'
+ delay: '0'
+ history: 7d
+ description: 'Enclosure status.'
+ valuemap:
+ name: 'Enclosure status'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''enclosures''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''status-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '6'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: enclosure
+ -
+ tag: component
+ value: health
+ -
+ tag: enclosure
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: db8329f956d94e74bb6379b29a000bf0
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=2'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure has critical status'
+ priority: HIGH
+ description: 'Enclosure has critical status.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 6a32b4a08bfb49939633b42a16041c7f
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=6'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure has unknown status'
+ priority: INFO
+ description: 'Enclosure has unknown status.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: 2fd78acd77804a1f8a474c973bf5c93e
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=3'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure has warning status'
+ priority: WARNING
+ description: 'Enclosure has warning status.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 8cecd9a3ecf14931b8b3ccffff4a4615
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=7'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure is unavailable'
+ priority: HIGH
+ description: 'Enclosure is unavailable.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 458cfb2a9dfb476dae940b66342b12bf
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=4'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure is unrecoverable'
+ priority: HIGH
+ description: 'Enclosure is unrecoverable.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#DURABLE.ID}'
+ path: '$.[''durable-id'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''enclosures'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: 6900c1efa2b3456ead4ae5e5a033700e
+ name: 'Fans discovery'
+ type: DEPENDENT
+ key: hpe.msa.fans.discovery
+ delay: '0'
+ description: 'Discover fans.'
+ item_prototypes:
+ -
+ uuid: b4732ef73f0e4fcc9458797b28e2b829
+ name: 'Fan [{#DURABLE.ID}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.fans["{#DURABLE.ID}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Fan health status.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''fans''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: fan
+ -
+ tag: component
+ value: health
+ -
+ tag: fan
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: 377a9c494a5443c0ba694ab78683da17
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=1'
+ name: 'Fan [{#DURABLE.ID}]: Fan health is in degraded state'
+ priority: WARNING
+ description: 'Fan health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 4446cef7b06140e3a29018944201ebd7
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=2'
+ name: 'Fan [{#DURABLE.ID}]: Fan health is in fault state'
+ priority: AVERAGE
+ description: 'Fan health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 3273a1f3595046e69ef6c74ac6f56eeb
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=3'
+ name: 'Fan [{#DURABLE.ID}]: Fan health is in unknown state'
+ priority: INFO
+ description: 'Fan health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: eb7057d0b65e40138899753b06abfb68
+ name: 'Fan [{#DURABLE.ID}]: Speed'
+ type: DEPENDENT
+ key: 'hpe.msa.fans["{#DURABLE.ID}",speed]'
+ delay: '0'
+ history: 7d
+ units: '!RPM'
+ description: 'Fan speed (revolutions per minute).'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''fans''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''speed''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: fan
+ -
+ tag: fan
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 45f948cb8f484367a7a5735beb796a1b
+ name: 'Fan [{#DURABLE.ID}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.fans["{#DURABLE.ID}",status]'
+ delay: '0'
+ history: 7d
+ description: 'Fan status.'
+ valuemap:
+ name: 'Fan status'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''fans''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''status-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: fan
+ -
+ tag: component
+ value: health
+ -
+ tag: fan
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: f8afe70029aa4cdfb1f68452eea27986
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=1'
+ name: 'Fan [{#DURABLE.ID}]: Fan has error status'
+ priority: AVERAGE
+ description: 'Fan has error status.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 8ad445006c51474fbee30a70971a97a5
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=3'
+ name: 'Fan [{#DURABLE.ID}]: Fan is missing'
+ priority: INFO
+ description: 'Fan is missing.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: fabe4e0bde194675a089db45125428b6
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=2'
+ name: 'Fan [{#DURABLE.ID}]: Fan is off'
+ priority: WARNING
+ description: 'Fan is off.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ graph_prototypes:
+ -
+ uuid: 44c2c9cdec6247cf8f4d0e2bd7e0e372
+ name: 'Fan [{#DURABLE.ID}]: Speed'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.fans["{#DURABLE.ID}",speed]'
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#DURABLE.ID}'
+ path: '$.[''durable-id'']'
+ -
+ lld_macro: '{#NAME}'
+ path: '$.[''name'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''fans'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: ec7d856fd690401888f93f8d9c135828
+ name: 'FRU discovery'
+ type: DEPENDENT
+ key: hpe.msa.frus.discovery
+ delay: '0'
+ filter:
+ conditions:
+ -
+ macro: '{#TYPE}'
+ value: ^(POWER_SUPPLY|RAID_IOM|CHASSIS_MIDPLANE)$
+ operator: NOT_MATCHES_REGEX
+ formulaid: A
+ description: 'Discover FRU.'
+ item_prototypes:
+ -
+ uuid: 77df1d8bfba9428e887025a05f02f306
+ name: 'FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Part number'
+ type: DEPENDENT
+ key: 'hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",part_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: '{#DESCRIPTION}. Part number of the FRU.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''frus''][?(@[''name''] == "{#TYPE}" && @[''fru-location''] == "{#LOCATION}")].[''part-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: fru
+ -
+ tag: fru
+ value: 'Enclosure {#ENCLOSURE.ID}: {#LOCATION}'
+ -
+ uuid: 04fc08de0c3947cba0c8f6c633ae3157
+ name: 'FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Serial number'
+ type: DEPENDENT
+ key: 'hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",serial_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: '{#DESCRIPTION}. FRU serial number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''frus''][?(@[''name''] == "{#TYPE}" && @[''fru-location''] == "{#LOCATION}")].[''serial-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: fru
+ -
+ tag: fru
+ value: 'Enclosure {#ENCLOSURE.ID}: {#LOCATION}'
+ -
+ uuid: ef3acb289f9c4a8e919b136dabf7b5c5
+ name: 'FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status]'
+ delay: '0'
+ history: 7d
+ description: |
+ {#DESCRIPTION}. FRU status:
+
+ Absent: Component is not present.
+ Fault: At least one subcomponent has a fault.
+ Invalid data: For a power supply module, the EEPROM is improperly programmed.
+ OK: All subcomponents are operating normally.
+ Not available: Status is not available.
+ valuemap:
+ name: 'FRU status'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''frus''][?(@[''name''] == "{#TYPE}" && @[''fru-location''] == "{#LOCATION}")].[''fru-status''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: JAVASCRIPT
+ parameters:
+ - |
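+ // Map textual fru-status values to the numeric codes of the 'FRU status' valuemap; unmatched values fall through to 6 (Unknown).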
+ if (value == 'Absent') {
+ return 2;
+ }
+ else if (value == 'Fault') {
+ return 1;
+ }
+ else if (value == 'Invalid Data') {
+ return 0;
+ }
+ else if (value == 'OK') {
+ return 4;
+ }
+ else if (value == 'Not Available') {
+ return 5;
+ }
+ return 6;
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: fru
+ -
+ tag: component
+ value: health
+ -
+ tag: fru
+ value: 'Enclosure {#ENCLOSURE.ID}: {#LOCATION}'
+ trigger_prototypes:
+ -
+ uuid: 8182ee0edeb94f4a845c7eda047718c8
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status])=0'
+ name: 'FRU [{#ENCLOSURE.ID}: {#LOCATION}]: FRU ID data is invalid'
+ priority: WARNING
+ description: 'The FRU ID data is invalid. The FRU''s EEPROM is improperly programmed.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 8bef225423a548c3a289c67c40ffd906
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status])=1'
+ name: 'FRU [{#ENCLOSURE.ID}: {#LOCATION}]: FRU status is Degraded or Fault'
+ priority: AVERAGE
+ description: 'FRU status is Degraded or Fault.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#DESCRIPTION}'
+ path: '$.[''description'']'
+ -
+ lld_macro: '{#ENCLOSURE.ID}'
+ path: '$.[''enclosure-id'']'
+ -
+ lld_macro: '{#LOCATION}'
+ path: '$.[''fru-location'']'
+ -
+ lld_macro: '{#TYPE}'
+ path: '$.[''name'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''frus'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: 082c1cfb851548928911b9ab69f6f75e
+ name: 'Pools discovery'
+ type: DEPENDENT
+ key: hpe.msa.pools.discovery
+ delay: '0'
+ description: 'Discover pools.'
+ item_prototypes:
+ -
+ uuid: 2a8b8ebd3bbb4e4e851602e1a84bb0da
+ name: 'Pool [{#NAME}]: Space free'
+ type: DEPENDENT
+ key: 'hpe.msa.pools.space["{#NAME}",free]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'The free space in the pool.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''pools''][?(@[''name''] == "{#NAME}")].[''total-avail-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '512'
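+ # The API reports space in 512-byte blocks; the multiplier converts the value to bytes.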
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: pool
+ -
+ tag: pool
+ value: '{#NAME}'
+ -
+ uuid: 0518c9f95bad4208ba33def89432975d
+ name: 'Pool [{#NAME}]: Space total'
+ type: DEPENDENT
+ key: 'hpe.msa.pools.space["{#NAME}",total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'The capacity of the pool.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''pools''][?(@[''name''] == "{#NAME}")].[''total-size-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '512'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: pool
+ -
+ tag: pool
+ value: '{#NAME}'
+ -
+ uuid: cc361d77ac8046fc833db41fbd5d2cd3
+ name: 'Pool [{#NAME}]: Space utilization'
+ type: CALCULATED
+ key: 'hpe.msa.pools.space["{#NAME}",util]'
+ history: 7d
+ value_type: FLOAT
+ units: '%'
+ params: '100-last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100'
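+ # Used space, in percent: 100 - (free / total) * 100, based on the two dependent space items above.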
+ description: 'The space utilization percentage in the pool.'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ tags:
+ -
+ tag: component
+ value: pool
+ -
+ tag: pool
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: 042ac4fedb00485c8c6f48016182b9dd
+ expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}'
+ name: 'Pool [{#NAME}]: Pool space is critically low'
+ event_name: 'Pool [{#NAME}]: Pool space is critically low (used > {$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}%)'
+ priority: AVERAGE
+ description: 'Pool is running low on free space (more than {$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}% used).'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: f4c7a9ed832d4668be64acf9da3c9814
+ expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}'
+ name: 'Pool [{#NAME}]: Pool space is low'
+ event_name: 'Pool [{#NAME}]: Pool space is low (used > {$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}%)'
+ priority: WARNING
+ description: 'Pool is running low on free space (more than {$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}% used).'
+ dependencies:
+ -
+ name: 'Pool [{#NAME}]: Pool space is critically low'
+ expression: 'min(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 4b79ed6e64cc484bb69f3677cd7932ef
+ name: 'Pool [{#NAME}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.pools["{#NAME}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Pool health.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''pools''][?(@[''name''] == "{#NAME}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: pool
+ -
+ tag: pool
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: 7af3ccbf497c44adb907b6d15ecebe33
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=1'
+ name: 'Pool [{#NAME}]: Pool health is in degraded state'
+ priority: WARNING
+ description: 'Pool health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 093fa03c7c3f4ac4adbd3234bf6007a0
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=2'
+ name: 'Pool [{#NAME}]: Pool health is in fault state'
+ priority: AVERAGE
+ description: 'Pool health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 2af3a0092d57420c95bf82adc39eae5f
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=3'
+ name: 'Pool [{#NAME}]: Pool health is in unknown state'
+ priority: INFO
+ description: 'Pool health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ graph_prototypes:
+ -
+ uuid: 001c0a805d3a40bf86632b498883519d
+ name: 'Pool [{#NAME}]: Space utilization'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.pools.space["{#NAME}",free]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.pools.space["{#NAME}",total]'
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#NAME}'
+ path: '$.[''name'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''pools'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: 09754bd16c674ff08fad52f060035961
+ name: 'Ports discovery'
+ type: DEPENDENT
+ key: hpe.msa.ports.discovery
+ delay: '0'
+ description: 'Discover ports.'
+ item_prototypes:
+ -
+ uuid: 27564169c2b04cba924162a5630bbd4b
+ name: 'Port [{#NAME}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.ports["{#NAME}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Port health status.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''ports''][?(@[''port''] == "{#NAME}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: port
+ -
+ tag: port
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: 266d310dc71e4c60977668e330eec8df
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=1'
+ name: 'Port [{#NAME}]: Port health is in degraded state'
+ priority: WARNING
+ description: 'Port health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 19a02ecfb5d242ff85e233961cc4a384
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=2'
+ name: 'Port [{#NAME}]: Port health is in fault state'
+ priority: AVERAGE
+ description: 'Port health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 8461e41fdd2944f08d3b95c63df0fa9f
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=3'
+ name: 'Port [{#NAME}]: Port health is in unknown state'
+ priority: INFO
+ description: 'Port health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: 57986481099a4bffb5b61816e1ba4110
+ name: 'Port [{#NAME}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.ports["{#NAME}",status]'
+ delay: '0'
+ history: 7d
+ description: 'Port status.'
+ valuemap:
+ name: Status
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''ports''][?(@[''port''] == "{#NAME}")].[''status-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: port
+ -
+ tag: port
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: 7ff86d50c241496d9bfa54359e17222e
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=2'
+ name: 'Port [{#NAME}]: Port has error status'
+ priority: AVERAGE
+ description: 'Port has error status.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: bdad67d08b92447e9964ea6362c0989c
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=4'
+ name: 'Port [{#NAME}]: Port has unknown status'
+ priority: INFO
+ description: 'Port has unknown status.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: 95ba19413bca495aba96f32fa91bc54b
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=1'
+ name: 'Port [{#NAME}]: Port has warning status'
+ priority: WARNING
+ description: 'Port has warning status.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: b1240a5950a3466b9d0725729bef3a03
+ name: 'Port [{#NAME}]: Type'
+ type: DEPENDENT
+ key: 'hpe.msa.ports["{#NAME}",type]'
+ delay: '0'
+ history: 7d
+ description: 'Port type.'
+ valuemap:
+ name: 'Port type'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''ports''][?(@[''port''] == "{#NAME}")].[''port-type-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: port
+ -
+ tag: port
+ value: '{#NAME}'
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#NAME}'
+ path: '$.[''port'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''ports'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: 2cf7945eea95414a88ce572f4c075bb1
+ name: 'Power supplies discovery'
+ type: DEPENDENT
+ key: hpe.msa.power_supplies.discovery
+ delay: '0'
+ description: 'Discover power supplies.'
+ item_prototypes:
+ -
+ uuid: 4e4f593738fb451cbfd1589a3054387e
+ name: 'Power supply [{#DURABLE.ID}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.power_supplies["{#DURABLE.ID}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Power supply health status.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''power-supplies''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: power-supply
+ -
+ tag: power-supply
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: 2394f69a635a4072bd96494b8df8ae3e
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=1'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply health is in degraded state'
+ priority: WARNING
+ description: 'Power supply health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: f390553cfe4646e0ab9a4fd9cab20886
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=2'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply health is in fault state'
+ priority: AVERAGE
+ description: 'Power supply health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 9499fbdcc6a946138fb6cd69d8be9a00
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=3'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply health is in unknown state'
+ priority: INFO
+ description: 'Power supply health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: 1b72c54bff3a4b129e959db43e895839
+ name: 'Power supply [{#DURABLE.ID}]: Part number'
+ type: DEPENDENT
+ key: 'hpe.msa.power_supplies["{#DURABLE.ID}",part_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Power supply part number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''power-supplies''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''part-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: power-supply
+ -
+ tag: power-supply
+ value: '{#DURABLE.ID}'
+ -
+ uuid: bdbf30f2e70d427bb9237b941fed5941
+ name: 'Power supply [{#DURABLE.ID}]: Serial number'
+ type: DEPENDENT
+ key: 'hpe.msa.power_supplies["{#DURABLE.ID}",serial_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Power supply serial number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''power-supplies''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''serial-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: power-supply
+ -
+ tag: power-supply
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 110fa50ee1d64ecdb064d3bd7b34dc90
+ name: 'Power supply [{#DURABLE.ID}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.power_supplies["{#DURABLE.ID}",status]'
+ delay: '0'
+ history: 7d
+ description: 'Power supply status.'
+ valuemap:
+ name: Status
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''power-supplies''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''status-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: power-supply
+ -
+ tag: power-supply
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: 28896e70b14f463aae8c8af4786e52ff
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=2'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply has error status'
+ priority: AVERAGE
+ description: 'Power supply has error status.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: ac6b0d55fbac4f338261f6a90b68e5b0
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=4'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply has unknown status'
+ priority: INFO
+ description: 'Power supply has unknown status.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: c9cddccdeed34aa4a533f0ad07aab5ae
+ expression: 'last(/HPE MSA 2040 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=1'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply has warning status'
+ priority: WARNING
+ description: 'Power supply has warning status.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 8b4399f3d9624239be2e6ac15971300b
+ name: 'Power supply [{#DURABLE.ID}]: Temperature'
+ type: DEPENDENT
+ key: 'hpe.msa.power_supplies["{#DURABLE.ID}",temperature]'
+ delay: '0'
+ history: 7d
+ units: '!°C'
+ description: 'Power supply temperature.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''power-supplies''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''dctemp''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: power-supply
+ -
+ tag: power-supply
+ value: '{#DURABLE.ID}'
+ graph_prototypes:
+ -
+ uuid: 538040f8853648058e10830ddc2cba70
+ name: 'Power supply [{#DURABLE.ID}]: Temperature'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.power_supplies["{#DURABLE.ID}",temperature]'
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#DURABLE.ID}'
+ path: '$.[''durable-id'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''power-supplies'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: faae0d9be7ea4531a584a52002317cc9
+ name: 'Volumes discovery'
+ type: DEPENDENT
+ key: hpe.msa.volumes.discovery
+ delay: '0'
+ description: 'Discover volumes.'
+ item_prototypes:
+ -
+ uuid: f9818ae47544417bb270af4f8f014c0a
+ name: 'Volume [{#NAME}]: Cache: Read hits, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.cache.read.hits["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block to be read is found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''read-cache-hits''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 877afc03787443129373d955067f8c6c
+ name: 'Volume [{#NAME}]: Cache: Read misses, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.cache.read.misses["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block to be read is not found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''read-cache-misses''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: e3a0b52f33e847c980ffe3f4dcda5ab4
+ name: 'Volume [{#NAME}]: Cache: Write hits, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.cache.write.hits["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block written to is found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''write-cache-hits''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: b2b0c3fd7ab74eb3a6013c3f3d65e356
+ name: 'Volume [{#NAME}]: Cache: Write misses, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.cache.write.misses["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times the block written to is not found in cache per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''write-cache-misses''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 6b12caedf23b4b768dbff01096d72c93
+ name: 'Volume [{#NAME}]: Data transfer rate: Reads'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.data_transfer.reads["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data read rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''data-read-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 705428d111dd49d19eb79b6a0de592c1
+ name: 'Volume [{#NAME}]: Data transfer rate: Total'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.data_transfer.total["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ units: Bps
+ description: 'The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''bytes-per-second-numeric''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 5f44581f011b46cf96ebd040de635976
+ name: 'Volume [{#NAME}]: Data transfer rate: Writes'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.data_transfer.writes["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data write rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''data-written-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 0e2831ed17ec4fe0a56b800086b47901
+ name: 'Volume [{#NAME}]: IOPS, read rate'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.iops.read["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!r/s'
+ description: 'Number of read operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''number-of-reads''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 9d14e4239f5941a7bfb07b6645b9e698
+ name: 'Volume [{#NAME}]: IOPS, total rate'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.iops.total["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ units: '!iops'
+ description: 'Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''iops''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: e1a6b6cc609c4cf789978f01b18af31f
+ name: 'Volume [{#NAME}]: IOPS, write rate'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.iops.write["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!w/s'
+ description: 'Number of write operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''number-of-writes''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: b47d7b03e19f4e25803b1d639a0ecf43
+ name: 'Volume [{#NAME}]: Space allocated'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.space["{#NAME}",allocated]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'The amount of space currently allocated to the volume.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volumes''][?(@[''volume-name''] == "{#NAME}")].[''allocated-size-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '512'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: b6aaba39f7c74dcf95947626852855c8
+ name: 'Volume [{#NAME}]: Space total'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.space["{#NAME}",total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'The capacity of the volume.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volumes''][?(@[''volume-name''] == "{#NAME}")].[''size-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '512'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ graph_prototypes:
+ -
+ uuid: 20d2047e3f024e5197362375601415eb
+ name: 'Volume [{#NAME}]: Cache usage'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.volumes.cache.read.hits["{#NAME}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.volumes.cache.read.misses["{#NAME}",rate]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.volumes.cache.write.hits["{#NAME}",rate]'
+ -
+ sortorder: '3'
+ color: A54F10
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.volumes.cache.write.misses["{#NAME}",rate]'
+ -
+ uuid: 0b11191f26464e79add18302e245a9cc
+ name: 'Volume [{#NAME}]: Data transfer rate'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.volumes.data_transfer.reads["{#NAME}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.volumes.data_transfer.writes["{#NAME}",rate]'
+ -
+ uuid: 133ef12b0cbc49a1a37c594f5c498643
+ name: 'Volume [{#NAME}]: Disk operations rate'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.volumes.iops.read["{#NAME}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.volumes.iops.write["{#NAME}",rate]'
+ -
+ uuid: f8c4f07925404bc0b1e3ada45358580a
+ name: 'Volume [{#NAME}]: Space utilization'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.volumes.space["{#NAME}",allocated]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2040 Storage by HTTP'
+ key: 'hpe.msa.volumes.space["{#NAME}",total]'
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#NAME}'
+ path: '$.[''volume-name'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volumes'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ tags:
+ -
+ tag: class
+ value: storage
+ -
+ tag: target
+ value: hpe
+ -
+ tag: target
+ value: msa-2040
+ macros:
+ -
+ macro: '{$HPE.MSA.API.PASSWORD}'
+ type: SECRET_TEXT
+ description: 'Specify password for API.'
+ -
+ macro: '{$HPE.MSA.API.PORT}'
+ value: '443'
+ description: 'Connection port for API.'
+ -
+ macro: '{$HPE.MSA.API.SCHEME}'
+ value: https
+ description: 'Connection scheme for API.'
+ -
+ macro: '{$HPE.MSA.API.USERNAME}'
+ value: zabbix
+ description: 'Specify user name for API.'
+ -
+ macro: '{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}'
+ value: '90'
+ description: 'The critical threshold of the CPU utilization in %.'
+ -
+ macro: '{$HPE.MSA.DATA.TIMEOUT}'
+ value: 30s
+ description: 'Response timeout for API.'
+ -
+ macro: '{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT}'
+ value: '90'
+ description: 'The critical threshold of the disk group space utilization in %.'
+ -
+ macro: '{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN}'
+ value: '80'
+ description: 'The warning threshold of the disk group space utilization in %.'
+ -
+ macro: '{$HPE.MSA.POOL.PUSED.MAX.CRIT}'
+ value: '90'
+ description: 'The critical threshold of the pool space utilization in %.'
+ -
+ macro: '{$HPE.MSA.POOL.PUSED.MAX.WARN}'
+ value: '80'
+ description: 'The warning threshold of the pool space utilization in %.'
+ valuemaps:
+ -
+ uuid: 3bb065172c93464c9f5e2e569f523a05
+ name: 'Controller status'
+ mappings:
+ -
+ value: '0'
+ newvalue: Operational
+ -
+ value: '1'
+ newvalue: Down
+ -
+ value: '2'
+ newvalue: 'Not Installed'
+ -
+ uuid: 78f22a3d82a64372abb3e3eeb08cf03e
+ name: 'Disk group status'
+ mappings:
+ -
+ value: '0'
+ newvalue: FTOL
+ -
+ value: '1'
+ newvalue: FTDN
+ -
+ value: '2'
+ newvalue: CRIT
+ -
+ value: '3'
+ newvalue: OFFL
+ -
+ value: '4'
+ newvalue: QTCR
+ -
+ value: '5'
+ newvalue: QTOF
+ -
+ value: '6'
+ newvalue: QTUN
+ -
+ value: '7'
+ newvalue: STOP
+ -
+ value: '8'
+ newvalue: MSNG
+ -
+ value: '9'
+ newvalue: DMGD
+ -
+ value: '11'
+ newvalue: QTDN
+ -
+ value: '250'
+ newvalue: UP
+ -
+ uuid: eb92d7812b8e4d2dbe4908fc3d42ade8
+ name: 'Disk temperature status'
+ mappings:
+ -
+ value: '1'
+ newvalue: OK
+ -
+ value: '2'
+ newvalue: Critical
+ -
+ value: '3'
+ newvalue: Warning
+ -
+ value: '4'
+ newvalue: Unknown
+ -
+ uuid: e6478ee0a41b49778f2a3dc130649838
+ name: 'Disk type'
+ mappings:
+ -
+ value: '4'
+ newvalue: SAS
+ -
+ value: '8'
+ newvalue: 'SSD SAS'
+ -
+ value: '11'
+ newvalue: 'SAS MDL'
+ -
+ uuid: 243d29502c1c416c85eb2ccc961a159c
+ name: 'Enclosure status'
+ mappings:
+ -
+ value: '0'
+ newvalue: Unsupported
+ -
+ value: '1'
+ newvalue: Up
+ -
+ value: '2'
+ newvalue: Error
+ -
+ value: '3'
+ newvalue: Warning
+ -
+ value: '4'
+ newvalue: Unrecoverable
+ -
+ value: '5'
+ newvalue: 'Not Present'
+ -
+ value: '6'
+ newvalue: Unknown
+ -
+ value: '7'
+ newvalue: Unavailable
+ -
+ value: '20'
+ newvalue: 'Spun Down'
+ -
+ uuid: 40916613dcf24dc2beb8634ec67c04bf
+ name: 'Fan status'
+ mappings:
+ -
+ value: '0'
+ newvalue: Up
+ -
+ value: '1'
+ newvalue: Error
+ -
+ value: '2'
+ newvalue: 'Off'
+ -
+ value: '3'
+ newvalue: Missing
+ -
+ uuid: f656acc354ab4593a1c1718668c02001
+ name: 'FRU status'
+ mappings:
+ -
+ value: '0'
+ newvalue: 'Invalid data'
+ -
+ value: '1'
+ newvalue: Fault
+ -
+ value: '2'
+ newvalue: Absent
+ -
+ value: '4'
+ newvalue: OK
+ -
+ value: '5'
+ newvalue: 'Not available'
+ -
+ value: '6'
+ newvalue: Unknown
+ -
+ uuid: 448c57be77694badb75dbdabe9b233df
+ name: Health
+ mappings:
+ -
+ value: '0'
+ newvalue: OK
+ -
+ value: '1'
+ newvalue: Degraded
+ -
+ value: '2'
+ newvalue: Fault
+ -
+ value: '3'
+ newvalue: Unknown
+ -
+ value: '4'
+ newvalue: N/A
+ -
+ uuid: 66a23d01db744677a1878143ccf102c7
+ name: 'Port type'
+ mappings:
+ -
+ value: '0'
+ newvalue: Unknown
+ -
+ value: '6'
+ newvalue: FC
+ -
+ value: '8'
+ newvalue: SAS
+ -
+ value: '9'
+ newvalue: iSCSI
+ -
+ uuid: 996bbe1c4e2841d6ac35efd9b5236fef
+ name: 'RAID type'
+ mappings:
+ -
+ value: '0'
+ newvalue: RAID0
+ -
+ value: '1'
+ newvalue: RAID1
+ -
+ value: '3'
+ newvalue: RAID3
+ -
+ value: '5'
+ newvalue: RAID5
+ -
+ value: '6'
+ newvalue: NRAID
+ -
+ value: '8'
+ newvalue: RAID50
+ -
+ value: '10'
+ newvalue: RAID10
+ -
+ value: '11'
+ newvalue: RAID6
+ -
+ uuid: 6c5d6649be2347ca83258f0ab1a63137
+ name: Status
+ mappings:
+ -
+ value: '0'
+ newvalue: Up
+ -
+ value: '1'
+ newvalue: Warning
+ -
+ value: '2'
+ newvalue: Error
+ -
+ value: '3'
+ newvalue: 'Not present'
+ -
+ value: '4'
+ newvalue: Unknown
+ -
+ value: '6'
+ newvalue: Disconnected
diff --git a/templates/san/hpe_msa2060_http/README.md b/templates/san/hpe_msa2060_http/README.md
new file mode 100644
index 00000000000..4484b0e5b96
--- /dev/null
+++ b/templates/san/hpe_msa2060_http/README.md
@@ -0,0 +1,250 @@
+
+# HPE MSA 2060 Storage by HTTP
+
+## Overview
+
+For Zabbix version: 6.0 and higher
+The template to monitor HPE MSA 2060 by HTTP.
+It works without any external scripts and uses the script item.
+
+
+This template was tested on:
+
+- HPE MSA 2060 Storage
+
+## Setup
+
+> See [Zabbix template operation](https://www.zabbix.com/documentation/6.0/manual/config/templates_out_of_the_box/http) for basic instructions.
+
+1. Create user "zabbix" with monitor role on the storage.
+2. Link the template to a host.
+3. Configure {$HPE.MSA.API.PASSWORD} and an interface with the address through which the API is accessible (a login sketch is shown below).
+4. Change {$HPE.MSA.API.SCHEME} and {$HPE.MSA.API.PORT} macros if needed.
+
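+To check connectivity before linking the template, you can replay the login handshake that the script item performs. The sketch below is written for the Zabbix 6.0 script-item JavaScript engine and assumes its `HttpRequest` object and `sha256()` helper are available; the host name and credentials are placeholders, and the response layout should be verified against your firmware's API documentation.
+
+```javascript
+// Minimal login sketch for the HPE MSA API (placeholders, not a definitive client).
+var user = 'zabbix',
+    password = 'example_password',                   // placeholder credentials
+    base = 'https://msa.example.com:443';            // placeholder address
+
+var request = new HttpRequest();
+request.addHeader('datatype: json');                 // ask the API for JSON responses
+
+// The API expects a hash of "<user>_<password>" in the login URL.
+var login = JSON.parse(request.get(base + '/api/login/' + sha256(user + '_' + password)));
+var sessionKey = login['status'][0]['response'];
+
+// Reuse the session key on subsequent data requests, e.g. the system overview.
+request.addHeader('sessionKey: ' + sessionKey);
+var system = request.get(base + '/api/show/system');
+```
+
+If the last call returns XML instead of JSON, the `datatype` header was not accepted; older firmware may also expect an MD5 hash instead of SHA-256.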
+
+## Zabbix configuration
+
+No specific Zabbix configuration is required.
+
+### Macros used
+
+|Name|Description|Default|
+|----|-----------|-------|
+|{$HPE.MSA.API.PASSWORD} |<p>Specify password for API.</p> |`` |
+|{$HPE.MSA.API.PORT} |<p>Connection port for API.</p> |`443` |
+|{$HPE.MSA.API.SCHEME} |<p>Connection scheme for API.</p> |`https` |
+|{$HPE.MSA.API.USERNAME} |<p>Specify user name for API.</p> |`zabbix` |
+|{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT} |<p>The critical threshold of the CPU utilization in %.</p> |`90` |
+|{$HPE.MSA.DATA.TIMEOUT} |<p>Response timeout for API.</p> |`30s` |
+|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT} |<p>The critical threshold of the disk group space utilization in %.</p> |`90` |
+|{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN} |<p>The warning threshold of the disk group space utilization in %.</p> |`80` |
+|{$HPE.MSA.POOL.PUSED.MAX.CRIT} |<p>The critical threshold of the pool space utilization in %.</p> |`90` |
+|{$HPE.MSA.POOL.PUSED.MAX.WARN} |<p>The warning threshold of the pool space utilization in %.</p> |`80` |
+
+## Template links
+
+There are no template links in this template.
+
+## Discovery rules
+
+|Name|Description|Type|Key and additional info|
+|----|-----------|----|----|
+|Controllers discovery |<p>Discover controllers.</p> |DEPENDENT |hpe.msa.controllers.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Disk groups discovery |<p>Discover disk groups.</p> |DEPENDENT |hpe.msa.disks.groups.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Disks discovery |<p>Discover disks.</p> |DEPENDENT |hpe.msa.disks.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Overrides:**</p><p>SSD life left<br> - {#TYPE} MATCHES_REGEX `8`<br> - ITEM_PROTOTYPE REGEXP `SSD life left` - DISCOVER</p> |
+|Enclosures discovery |<p>Discover enclosures.</p> |DEPENDENT |hpe.msa.enclosures.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Fans discovery |<p>Discover fans.</p> |DEPENDENT |hpe.msa.fans.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['fans']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|FRU discovery |<p>Discover FRU.</p> |DEPENDENT |hpe.msa.frus.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['frus']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Filter**:</p> <p>- {#TYPE} NOT_MATCHES_REGEX `^(POWER_SUPPLY|RAID_IOM|CHASSIS_MIDPLANE)$`</p> |
+|Pools discovery |<p>Discover pools.</p> |DEPENDENT |hpe.msa.pools.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Ports discovery |<p>Discover ports.</p> |DEPENDENT |hpe.msa.ports.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['ports']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Power supplies discovery |<p>Discover power supplies.</p> |DEPENDENT |hpe.msa.power_supplies.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Volumes discovery |<p>Discover volumes.</p> |DEPENDENT |hpe.msa.volumes.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+
+## Items collected
+
+|Group|Name|Description|Type|Key and additional info|
+|-----|----|-----------|----|---------------------|
+|HPE |Get method errors |<p>A list of method errors from API requests.</p> |DEPENDENT |hpe.msa.data.errors<p>**Preprocessing**:</p><p>- JSONPATH: `$.['errors']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Product ID |<p>The product model identifier.</p> |DEPENDENT |hpe.msa.system.product_id<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['product-id']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |System contact |<p>The name of the person who administers the system.</p> |DEPENDENT |hpe.msa.system.contact<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['system-contact']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |System information |<p>A brief description of what the system is used for or how it is configured.</p> |DEPENDENT |hpe.msa.system.info<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['system-information']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |System location |<p>The location of the system.</p> |DEPENDENT |hpe.msa.system.location<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['system-location']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |System name |<p>The name of the storage system.</p> |DEPENDENT |hpe.msa.system.name<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['system-name']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Vendor name |<p>The vendor name.</p> |DEPENDENT |hpe.msa.system.vendor_name<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['vendor-name']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |System health |<p>System health status.</p> |DEPENDENT |hpe.msa.system.health<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['health-numeric']`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p> |
+|HPE |HPE MSA: Service ping |<p>Check if HTTP/HTTPS service accepts TCP connections.</p> |SIMPLE |net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}","{$HPE.MSA.API.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Firmware version |<p>Storage controller firmware version.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",firmware]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['sc-fw'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Part number |<p>Part number of the controller.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Serial number |<p>Storage controller serial number.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Health |<p>Controller health status.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Status |<p>Storage controller status.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Disks |<p>Number of disks in the storage system.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",disks]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['disks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Pools |<p>Number of pools in the storage system.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",pools]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['number-of-storage-pools'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Disk groups |<p>Number of disk groups in the storage system.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",disk_groups]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['virtual-disks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IP address |<p>Controller network port IP address.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",ip_address]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['ip-address'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache memory size |<p>Controller cache memory size.</p> |DEPENDENT |hpe.msa.controllers.cache["{#CONTROLLER.ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers'][?(@['durable-id'] == "{#DURABLE.ID}")].['cache-memory-size'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Write utilization |<p>Percentage of write cache in use, from 0 to 100.</p> |DEPENDENT |hpe.msa.controllers.cache.write["{#CONTROLLER.ID}",util]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['write-cache-used'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Read hits, rate |<p>For the controller that owns the volume, the number of times the block to be read is found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.read.hits["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['read-cache-hits'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Read misses, rate |<p>For the controller that owns the volume, the number of times the block to be read is not found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.read.misses["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['read-cache-misses'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Write hits, rate |<p>For the controller that owns the volume, the number of times the block written to is found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.write.hits["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['write-cache-hits'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Cache: Write misses, rate |<p>For the controller that owns the volume, the number of times the block written to is not found in cache per second.</p> |DEPENDENT |hpe.msa.controllers.cache.write.misses["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['write-cache-misses'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: CPU utilization |<p>Percentage of time the CPU is busy, from 0 to 100.</p> |DEPENDENT |hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['cpu-load'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IOPS, total rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.controllers.iops.total["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['iops'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IOPS, read rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.controllers.iops.read["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: IOPS, write rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.controllers.iops.write["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Data transfer rate: Total |<p>The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.controllers.data_transfer.total["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['bytes-per-second-numeric'].first()`</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Data transfer rate: Reads |<p>The data read rate, in bytes per second.</p> |DEPENDENT |hpe.msa.controllers.data_transfer.reads["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['data-read-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Data transfer rate: Writes |<p>The data write rate, in bytes per second.</p> |DEPENDENT |hpe.msa.controllers.data_transfer.writes["{#CONTROLLER.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['data-written-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Controller [{#CONTROLLER.ID}]: Uptime |<p>Number of seconds since the controller was restarted.</p> |DEPENDENT |hpe.msa.controllers["{#CONTROLLER.ID}",uptime]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics'][?(@['durable-id'] == "{#DURABLE.ID}")].['power-on-time'].first()`</p> |
+|HPE |Disk group [{#NAME}]: Disks count |<p>Number of disks in the disk group.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",disk_count]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['diskcount'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Pool space used |<p>The percentage of pool capacity that the disk group occupies.</p> |DEPENDENT |hpe.msa.disks.groups.space["{#NAME}",pool_util]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['pool-percentage'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Health |<p>Disk group health.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Blocks size |<p>The size of a block, in bytes.</p> |DEPENDENT |hpe.msa.disks.groups.blocks["{#NAME}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['blocksize'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Blocks free |<p>Free space in blocks.</p> |DEPENDENT |hpe.msa.disks.groups.blocks["{#NAME}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['freespace-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Blocks total |<p>Total space in blocks.</p> |DEPENDENT |hpe.msa.disks.groups.blocks["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['blocks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Space free |<p>The free space in the disk group.</p> |CALCULATED |hpe.msa.disks.groups.space["{#NAME}",free]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.disks.groups.blocks["{#NAME}",size])*last(//hpe.msa.disks.groups.blocks["{#NAME}",free])` |
+|HPE |Disk group [{#NAME}]: Space total |<p>The capacity of the disk group.</p> |CALCULATED |hpe.msa.disks.groups.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.disks.groups.blocks["{#NAME}",size])*last(//hpe.msa.disks.groups.blocks["{#NAME}",total])` |
+|HPE |Disk group [{#NAME}]: Space utilization |<p>The space utilization percentage in the disk group.</p> |CALCULATED |hpe.msa.disks.groups.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`100-last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100` |
+|HPE |Disk group [{#NAME}]: RAID type |<p>The RAID level of the disk group.</p> |DEPENDENT |hpe.msa.disks.groups.raid["{#NAME}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['raidtype-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk group [{#NAME}]: Status |<p>The status of the disk group:</p><p>- CRIT: Critical. The disk group is online but isn't fault tolerant because some of its disks are down.</p><p>- DMGD: Damaged. The disk group is online and fault tolerant, but some of its disks are damaged.</p><p>- FTDN: Fault tolerant with a down disk. The disk group is online and fault tolerant, but some of its disks are down.</p><p>- FTOL: Fault tolerant.</p><p>- MSNG: Missing. The disk group is online and fault tolerant, but some of its disks are missing.</p><p>- OFFL: Offline. Either the disk group is using offline initialization, or its disks are down and data may be lost.</p><p>- QTCR: Quarantined critical. The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online, or if 60 seconds after being quarantined the status is QTCR or QTDN, the disk group is automatically dequarantined.</p><p>- QTDN: Quarantined with a down disk. The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online, or if 60 seconds after being quarantined the status is QTCR or QTDN, the disk group is automatically dequarantined.</p><p>- QTOF: Quarantined offline. The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group.</p><p>- QTUN: Quarantined unsupported. The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups.</p><p>- STOP: The disk group is stopped.</p><p>- UNKN: Unknown.</p><p>- UP: Up. The disk group is online and does not have fault-tolerant attributes.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups'][?(@['name'] == "{#NAME}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: IOPS, total rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.disks.groups.iops.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['iops'].first()`</p> |
+|HPE |Disk group [{#NAME}]: Average response time: Total |<p>Average response time for read and write operations, calculated over the interval since these statistics were last requested or reset.</p> |DEPENDENT |hpe.msa.disks.groups.avg_rsp_time["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['avg-rsp-time'].first()`</p><p>- MULTIPLIER: `0.000001`</p> |
+|HPE |Disk group [{#NAME}]: Average response time: Read |<p>Average response time for all read operations, calculated over the interval since these statistics were last requested or reset.</p> |DEPENDENT |hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['avg-read-rsp-time'].first()`</p><p>- MULTIPLIER: `0.000001`</p> |
+|HPE |Disk group [{#NAME}]: Average response time: Write |<p>Average response time for all write operations, calculated over the interval since these statistics were last requested or reset.</p> |DEPENDENT |hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['avg-write-rsp-time'].first()`</p><p>- MULTIPLIER: `0.000001`</p> |
+|HPE |Disk group [{#NAME}]: IOPS, read rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.disks.groups.iops.read["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Disk group [{#NAME}]: IOPS, write rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.disks.groups.iops.write["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Disk group [{#NAME}]: Data transfer rate: Total |<p>The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.disks.groups.data_transfer.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['bytes-per-second-numeric'].first()`</p> |
+|HPE |Disk group [{#NAME}]: Data transfer rate: Reads |<p>The data read rate, in bytes per second.</p> |DEPENDENT |hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['data-read-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Disk group [{#NAME}]: Data transfer rate: Writes |<p>The data write rate, in bytes per second.</p> |DEPENDENT |hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics'][?(@['name'] == "{#NAME}")].['data-written-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Pool [{#NAME}]: Health |<p>Pool health.</p> |DEPENDENT |hpe.msa.pools["{#NAME}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools'][?(@['name'] == "{#NAME}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Pool [{#NAME}]: Blocks size |<p>The size of a block, in bytes.</p> |DEPENDENT |hpe.msa.pools.blocks["{#NAME}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools'][?(@['name'] == "{#NAME}")].['blocksize'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Pool [{#NAME}]: Blocks available |<p>Available space in blocks.</p> |DEPENDENT |hpe.msa.pools.blocks["{#NAME}",available]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools'][?(@['name'] == "{#NAME}")].['total-avail-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Pool [{#NAME}]: Blocks total |<p>Total space in blocks.</p> |DEPENDENT |hpe.msa.pools.blocks["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools'][?(@['name'] == "{#NAME}")].['total-size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Pool [{#NAME}]: Space free |<p>The free space in the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",free]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.pools.blocks["{#NAME}",size])*last(//hpe.msa.pools.blocks["{#NAME}",available])` |
+|HPE |Pool [{#NAME}]: Space total |<p>The capacity of the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.pools.blocks["{#NAME}",size])*last(//hpe.msa.pools.blocks["{#NAME}",total])` |
+|HPE |Pool [{#NAME}]: Space utilization |<p>The space utilization percentage in the pool (see the calculation sketch after this table).</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`100-last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100` |
+|HPE |Volume [{#NAME}]: Blocks size |<p>The size of a block, in bytes.</p> |DEPENDENT |hpe.msa.volumes.blocks["{#NAME}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes'][?(@['volume-name'] == "{#NAME}")].['blocksize'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Volume [{#NAME}]: Blocks allocated |<p>The number of blocks currently allocated to the volume.</p> |DEPENDENT |hpe.msa.volumes.blocks["{#NAME}",allocated]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes'][?(@['volume-name'] == "{#NAME}")].['allocated-size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Volume [{#NAME}]: Blocks total |<p>Total space in blocks.</p> |DEPENDENT |hpe.msa.volumes.blocks["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes'][?(@['volume-name'] == "{#NAME}")].['blocks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Volume [{#NAME}]: Space allocated |<p>The amount of space currently allocated to the volume.</p> |CALCULATED |hpe.msa.volumes.space["{#NAME}",allocated]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.volumes.blocks["{#NAME}",size])*last(//hpe.msa.volumes.blocks["{#NAME}",allocated])` |
+|HPE |Volume [{#NAME}]: Space total |<p>The capacity of the volume.</p> |CALCULATED |hpe.msa.volumes.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.volumes.blocks["{#NAME}",size])*last(//hpe.msa.volumes.blocks["{#NAME}",total])` |
+|HPE |Volume [{#NAME}]: IOPS, total rate |<p>Total input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.volumes.iops.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['iops'].first()`</p> |
+|HPE |Volume [{#NAME}]: IOPS, read rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.volumes.iops.read["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: IOPS, write rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.volumes.iops.write["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Data transfer rate: Total |<p>The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.volumes.data_transfer.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['bytes-per-second-numeric'].first()`</p> |
+|HPE |Volume [{#NAME}]: Data transfer rate: Reads |<p>The data read rate, in bytes per second.</p> |DEPENDENT |hpe.msa.volumes.data_transfer.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['data-read-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Data transfer rate: Writes |<p>The data write rate, in bytes per second.</p> |DEPENDENT |hpe.msa.volumes.data_transfer.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['data-written-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Cache: Read hits, rate |<p>For the controller that owns the volume, the number of times the block to be read is found in cache per second.</p> |DEPENDENT |hpe.msa.volumes.cache.read.hits["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['read-cache-hits'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Cache: Read misses, rate |<p>For the controller that owns the volume, the number of times the block to be read is not found in cache per second.</p> |DEPENDENT |hpe.msa.volumes.cache.read.misses["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['read-cache-misses'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Cache: Write hits, rate |<p>For the controller that owns the volume, the number of times the block written to is found in cache per second.</p> |DEPENDENT |hpe.msa.volumes.cache.write.hits["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['write-cache-hits'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Cache: Write misses, rate |<p>For the controller that owns the volume, the number of times the block written to is not found in cache per second.</p> |DEPENDENT |hpe.msa.volumes.cache.write.misses["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics'][?(@['volume-name'] == "{#NAME}")].['write-cache-misses'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Health |<p>Enclosure health.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures'][?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Status |<p>Enclosure status.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures'][?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 6`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Midplane serial number |<p>Midplane serial number.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",midplane_serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures'][?(@['durable-id'] == "{#DURABLE.ID}")].['midplane-serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Part number |<p>Enclosure part number.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures'][?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Model |<p>Enclosure model.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures'][?(@['durable-id'] == "{#DURABLE.ID}")].['model'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Power |<p>Enclosure power in watts.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",power]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures'][?(@['durable-id'] == "{#DURABLE.ID}")].['enclosure-power'].first()`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Health |<p>Power supply health status.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies'][?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Status |<p>Power supply status.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies'][?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Part number |<p>Power supply part number.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies'][?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Power supply [{#DURABLE.ID}]: Serial number |<p>Power supply serial number.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies'][?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Port [{#NAME}]: Health |<p>Port health status.</p> |DEPENDENT |hpe.msa.ports["{#NAME}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['ports'][?(@['port'] == "{#NAME}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Port [{#NAME}]: Status |<p>Port status.</p> |DEPENDENT |hpe.msa.ports["{#NAME}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['ports'][?(@['port'] == "{#NAME}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Port [{#NAME}]: Type |<p>Port type.</p> |DEPENDENT |hpe.msa.ports["{#NAME}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['ports'][?(@['port'] == "{#NAME}")].['port-type-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Fan [{#DURABLE.ID}]: Health |<p>Fan health status.</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['fans'][?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Fan [{#DURABLE.ID}]: Status |<p>Fan status.</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['fans'][?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Fan [{#DURABLE.ID}]: Speed |<p>Fan speed (revolutions per minute).</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",speed]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['fans'][?(@['durable-id'] == "{#DURABLE.ID}")].['speed'].first()`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Health |<p>Disk health status.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Temperature status |<p>Disk temperature status.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",temperature_status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['temperature-status-numeric'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- IN_RANGE: `1 3`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Temperature |<p>Temperature of the disk.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",temperature]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['temperature-numeric'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Type |<p>Disk type:</p><p>SAS: Enterprise SAS spinning disk.</p><p>SAS MDL: Midline SAS spinning disk.</p><p>SSD SAS: SAS solid-state disk.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['description-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Disk group |<p>If the disk is in a disk group, the disk group name.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",group]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['disk-group'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Storage pool |<p>If the disk is in a pool, the pool name.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",pool]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['storage-pool-name'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Vendor |<p>Disk vendor.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",vendor]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['vendor'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Model |<p>Disk model.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['model'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Serial number |<p>Disk serial number.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Blocks size |<p>The size of a block, in bytes.</p> |DEPENDENT |hpe.msa.disks.blocks["{#DURABLE.ID}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['blocksize'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Blocks total |<p>Total space in blocks.</p> |DEPENDENT |hpe.msa.disks.blocks["{#DURABLE.ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['blocks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#DURABLE.ID}]: Space total |<p>Total size of the disk.</p> |CALCULATED |hpe.msa.disks.space["{#DURABLE.ID}",total]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p><p>**Expression**:</p>`last(//hpe.msa.disks.blocks["{#DURABLE.ID}",size])*last(//hpe.msa.disks.blocks["{#DURABLE.ID}",total])` |
+|HPE |Disk [{#DURABLE.ID}]: SSD life left |<p>The percentage of disk life remaining.</p> |DEPENDENT |hpe.msa.disks.ssd["{#DURABLE.ID}",life_left]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disks'][?(@['durable-id'] == "{#DURABLE.ID}")].['ssd-life-left-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Status |<p>{#DESCRIPTION}. FRU status:</p><p>Absent: The FRU is not present.</p><p>Fault: The FRU's health is Degraded or Fault.</p><p>Invalid data: The FRU ID data is invalid. The FRU's EEPROM is improperly programmed.</p><p>OK: The FRU is operating normally.</p><p>Power off: The FRU is powered off.</p> |DEPENDENT |hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['frus'][?(@['name'] == "{#TYPE}" && @['fru-location'] == "{#LOCATION}")].['fru-status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Part number |<p>{#DESCRIPTION}. Part number of the FRU.</p> |DEPENDENT |hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['frus'][?(@['name'] == "{#TYPE}" && @['fru-location'] == "{#LOCATION}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Serial number |<p>{#DESCRIPTION}. FRU serial number.</p> |DEPENDENT |hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.['frus'][?(@['name'] == "{#TYPE}" && @['fru-location'] == "{#LOCATION}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|Zabbix raw items |HPE MSA: Get data |<p>A JSON object with the results of the API requests.</p> |SCRIPT |hpe.msa.data.get<p>**Expression**:</p>`The text is too long. Please see the template.` |
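+
+The `CALCULATED` space items above derive byte values and a used-space percentage from the block-based items. Below is a minimal sketch of the same arithmetic, using hypothetical example values; the template itself performs these calculations with calculated-item expressions, not a script:
+
+```javascript
+// Hypothetical values as reported by the "pools" API method.
+var blockSize = 512;             // 'blocksize', bytes per block
+var blocksAvailable = 104857600; // 'total-avail-numeric'
+var blocksTotal = 209715200;     // 'total-size-numeric'
+
+var spaceFree = blockSize * blocksAvailable;          // bytes free
+var spaceTotal = blockSize * blocksTotal;             // bytes total
+var spaceUtil = 100 - (spaceFree / spaceTotal) * 100; // percent used; 50 with these values
+```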
+
+## Triggers
+
+|Name|Description|Expression|Severity|Dependencies and additional info|
+|----|-----------|----|----|----|
+|There are errors in method requests to API |<p>There are errors in method requests to API.</p> |`length(last(/HPE MSA 2060 Storage by HTTP/hpe.msa.data.errors))>0` |AVERAGE |<p>**Depends on**:</p><p>- Service is down or unavailable</p> |
+|System health is in degraded state |<p>System health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health)=1` |WARNING | |
+|System health is in fault state |<p>System health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health)=2` |AVERAGE | |
+|System health is in unknown state |<p>System health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health)=3` |INFO | |
+|Service is down or unavailable |<p>HTTP/HTTPS service is down or unable to establish TCP connection.</p> |`max(/HPE MSA 2060 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}","{$HPE.MSA.API.PORT}"],5m)=0` |HIGH | |
+|Controller [{#CONTROLLER.ID}]: Controller health is in degraded state |<p>Controller health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=1` |WARNING |<p>**Depends on**:</p><p>- Controller [{#CONTROLLER.ID}]: Controller is down</p> |
+|Controller [{#CONTROLLER.ID}]: Controller health is in fault state |<p>Controller health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=2` |AVERAGE |<p>**Depends on**:</p><p>- Controller [{#CONTROLLER.ID}]: Controller is down</p> |
+|Controller [{#CONTROLLER.ID}]: Controller health is in unknown state |<p>Controller health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=3` |INFO |<p>**Depends on**:</p><p>- Controller [{#CONTROLLER.ID}]: Controller is down</p> |
+|Controller [{#CONTROLLER.ID}]: Controller is down |<p>The controller is down.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1` |HIGH | |
+|Controller [{#CONTROLLER.ID}]: High CPU utilization |<p>Controller CPU utilization is too high. The system might be slow to respond.</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util],5m)>{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}` |WARNING | |
+|Controller [{#CONTROLLER.ID}]: Controller has been restarted |<p>The controller uptime is less than 10 minutes.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",uptime])<10m` |WARNING | |
+|Disk group [{#NAME}]: Disk group health is in degraded state |<p>Disk group health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=1` |WARNING | |
+|Disk group [{#NAME}]: Disk group health is in fault state |<p>Disk group health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=2` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group health is in unknown state |<p>Disk group health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=3` |INFO | |
+|Disk group [{#NAME}]: Disk group space is low |<p>Disk group is running low on free space (used space is above {$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}%).</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}` |WARNING |<p>**Depends on**:</p><p>- Disk group [{#NAME}]: Disk group space is critically low</p> |
+|Disk group [{#NAME}]: Disk group space is critically low |<p>Disk group is critically low on free space (used space is above {$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}%).</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is fault tolerant with a down disk |<p>The disk group is online and fault tolerant, but some of its disks are down.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=1` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group has damaged disks |<p>The disk group is online and fault tolerant, but some of its disks are damaged.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=9` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group has missing disks |<p>The disk group is online and fault tolerant, but some of its disks are missing.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=8` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is offline |<p>Either the disk group is using offline initialization, or its disks are down and data may be lost.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=3` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is quarantined critical |<p>The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online, or if 60 seconds after being quarantined the status is QTCR or QTDN, the disk group is automatically dequarantined.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=4` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is quarantined offline |<p>The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=5` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is quarantined unsupported |<p>The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=5` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is quarantined with an inaccessible disk |<p>The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online, or if 60 seconds after being quarantined the status is QTCR or QTDN, the disk group is automatically dequarantined.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=6` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is stopped |<p>The disk group is stopped.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=7` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group status is critical |<p>The disk group is online but isn't fault tolerant because some of its disks are down.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=2` |AVERAGE | |
+|Pool [{#NAME}]: Pool health is in degraded state |<p>Pool health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=1` |WARNING | |
+|Pool [{#NAME}]: Pool health is in fault state |<p>Pool health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=2` |AVERAGE | |
+|Pool [{#NAME}]: Pool health is in unknown state |<p>Pool health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=3` |INFO | |
+|Pool [{#NAME}]: Pool space is low |<p>Pool is running low on free space (used space is above {$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}%); see the sketch after this table.</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}` |WARNING |<p>**Depends on**:</p><p>- Pool [{#NAME}]: Pool space is critically low</p> |
+|Pool [{#NAME}]: Pool space is critically low |<p>Pool is critically low on free space (used space is above {$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}%).</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}` |AVERAGE | |
+|Enclosure [{#DURABLE.ID}]: Enclosure health is in degraded state |<p>Enclosure health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=1` |WARNING | |
+|Enclosure [{#DURABLE.ID}]: Enclosure health is in fault state |<p>Enclosure health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Enclosure [{#DURABLE.ID}]: Enclosure health is in unknown state |<p>Enclosure health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=3` |INFO | |
+|Enclosure [{#DURABLE.ID}]: Enclosure has critical status |<p>Enclosure has critical status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=2` |HIGH | |
+|Enclosure [{#DURABLE.ID}]: Enclosure has warning status |<p>Enclosure has warning status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=3` |WARNING | |
+|Enclosure [{#DURABLE.ID}]: Enclosure is unavailable |<p>Enclosure is unavailable.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=7` |HIGH | |
+|Enclosure [{#DURABLE.ID}]: Enclosure is unrecoverable |<p>Enclosure is unrecoverable.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=4` |HIGH | |
+|Enclosure [{#DURABLE.ID}]: Enclosure has unknown status |<p>Enclosure has unknown status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=6` |INFO | |
+|Power supply [{#DURABLE.ID}]: Power supply health is in degraded state |<p>Power supply health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=1` |WARNING | |
+|Power supply [{#DURABLE.ID}]: Power supply health is in fault state |<p>Power supply health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Power supply [{#DURABLE.ID}]: Power supply health is in unknown state |<p>Power supply health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=3` |INFO | |
+|Power supply [{#DURABLE.ID}]: Power supply has error status |<p>Power supply has error status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=2` |AVERAGE | |
+|Power supply [{#DURABLE.ID}]: Power supply has warning status |<p>Power supply has warning status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=1` |WARNING | |
+|Power supply [{#DURABLE.ID}]: Power supply has unknown status |<p>Power supply has unknown status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=4` |INFO | |
+|Port [{#NAME}]: Port health is in degraded state |<p>Port health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=1` |WARNING | |
+|Port [{#NAME}]: Port health is in fault state |<p>Port health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=2` |AVERAGE | |
+|Port [{#NAME}]: Port health is in unknown state |<p>Port health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=3` |INFO | |
+|Port [{#NAME}]: Port has error status |<p>Port has error status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=2` |AVERAGE | |
+|Port [{#NAME}]: Port has warning status |<p>Port has warning status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=1` |WARNING | |
+|Port [{#NAME}]: Port has unknown status |<p>Port has unknown status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=4` |INFO | |
+|Fan [{#DURABLE.ID}]: Fan health is in degraded state |<p>Fan health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=1` |WARNING | |
+|Fan [{#DURABLE.ID}]: Fan health is in fault state |<p>Fan health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Fan [{#DURABLE.ID}]: Fan health is in unknown state |<p>Fan health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=3` |INFO | |
+|Fan [{#DURABLE.ID}]: Fan has error status |<p>Fan has error status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=1` |AVERAGE | |
+|Fan [{#DURABLE.ID}]: Fan is missing |<p>Fan is missing.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=3` |INFO | |
+|Fan [{#DURABLE.ID}]: Fan is off |<p>Fan is off.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=2` |WARNING | |
+|Disk [{#DURABLE.ID}]: Disk health is in degraded state |<p>Disk health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=1` |WARNING | |
+|Disk [{#DURABLE.ID}]: Disk health is in fault state |<p>Disk health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Disk [{#DURABLE.ID}]: Disk health is in unknown state |<p>Disk health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=3` |INFO | |
+|Disk [{#DURABLE.ID}]: Disk temperature is high |<p>Disk temperature is high.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=3` |WARNING | |
+|Disk [{#DURABLE.ID}]: Disk temperature is critically high |<p>Disk temperature is critically high.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=2` |AVERAGE | |
+|Disk [{#DURABLE.ID}]: Disk temperature is unknown |<p>Disk temperature is unknown.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=4` |INFO | |
+|FRU [{#ENCLOSURE.ID}: {#LOCATION}]: FRU status is Degraded or Fault |<p>FRU status is Degraded or Fault.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status])=1` |AVERAGE | |
+|FRU [{#ENCLOSURE.ID}: {#LOCATION}]: FRU ID data is invalid |<p>The FRU ID data is invalid. The FRU's EEPROM is improperly programmed.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status])=0` |WARNING | |
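+
+The space triggers come in warning/critical pairs that use context-aware macros, so thresholds can be overridden per pool or disk group, and each warning trigger depends on its critical counterpart so only the more severe problem fires. A minimal sketch of how such a pair resolves, with a hypothetical helper function (illustrative only; Zabbix evaluates these as trigger expressions on the server, not as script code):
+
+```javascript
+// 'util' stands in for min() of the utilization item over the last 5 minutes.
+function spaceSeverity(util, warn, crit) {
+    if (util > crit) {
+        return 'AVERAGE'; // "space is critically low" fires; the dependent WARNING trigger is suppressed
+    }
+    if (util > warn) {
+        return 'WARNING'; // "space is low"
+    }
+    return 'OK';
+}
+
+spaceSeverity(85, 80, 90); // 'WARNING' with the default macro values
+```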
+
+## Feedback
+
+Please report any issues with the template at https://support.zabbix.com
+
+You can also provide feedback, discuss the template or ask for help with it at [ZABBIX forums](https://www.zabbix.com/forum/zabbix-suggestions-and-feedback).
+
diff --git a/templates/san/hpe_msa2060_http/template_san_hpe_msa2060_http.yaml b/templates/san/hpe_msa2060_http/template_san_hpe_msa2060_http.yaml
new file mode 100644
index 00000000000..69702938fc4
--- /dev/null
+++ b/templates/san/hpe_msa2060_http/template_san_hpe_msa2060_http.yaml
@@ -0,0 +1,4559 @@
+zabbix_export:
+ version: '6.0'
+ date: '2022-06-16T07:39:55Z'
+ groups:
+ -
+ uuid: 7c2cb727f85b492d88cd56e17127c64d
+ name: Templates/SAN
+ templates:
+ -
+ uuid: 10537641cfa3416ab0f1451cdb61d804
+ template: 'HPE MSA 2060 Storage by HTTP'
+ name: 'HPE MSA 2060 Storage by HTTP'
+ description: |
+ The template to monitor HPE MSA 2060 by HTTP.
+ It works without any external scripts and uses the script item.
+
+ Setup:
+ 1. Create user "zabbix" with monitor role on the storage.
+ 2. Link the template to a host.
+ 3. Configure {$HPE.MSA.API.PASSWORD} and an interface with address through which API is accessible.
+ 4. Change {$HPE.MSA.API.SCHEME} and {$HPE.MSA.API.PORT} macros if needed.
+
+ You can discuss this template or leave feedback on our forum https://www.zabbix.com/forum/zabbix-suggestions-and-feedback
+
+ Template tooling version used: 0.41
+ groups:
+ -
+ name: Templates/SAN
+ items:
+ -
+ uuid: 078dd015f25d4778af429f9b5e391bc5
+ name: 'Get method errors'
+ type: DEPENDENT
+ key: hpe.msa.data.errors
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: TEXT
+ description: 'A list of method errors from API requests.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''errors'']'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: errors
+ triggers:
+ -
+ uuid: 2133ddf10a3641d78e609948d6842687
+ expression: 'length(last(/HPE MSA 2060 Storage by HTTP/hpe.msa.data.errors))>0'
+ name: 'There are errors in method requests to API'
+ priority: AVERAGE
+ description: 'There are errors in method requests to API.'
+ dependencies:
+ -
+ name: 'Service is down or unavailable'
+ expression: 'max(/HPE MSA 2060 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}","{$HPE.MSA.API.PORT}"],5m)=0'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: bafec666b170480f941fe25cb3cf903d
+ name: 'HPE MSA: Get data'
+ type: SCRIPT
+ key: hpe.msa.data.get
+ history: '0'
+ trends: '0'
+ value_type: TEXT
+ params: |
+ var params = JSON.parse(value),
+ fields = ['username', 'password', 'base_url'],
+ methods = [
+ 'system',
+ 'controllers',
+ 'controller-statistics',
+ 'frus',
+ 'disk-groups',
+ 'disk-group-statistics',
+ 'disks',
+ 'enclosures',
+ 'fans',
+ 'pools',
+ 'ports',
+ 'power-supplies',
+ 'volumes',
+ 'volume-statistics'
+ ],
+ result = {},
+ data = {};
+
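+ // Fail fast if any required script parameter is missing or empty.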
+ fields.forEach(function (field) {
+ if (typeof params !== 'object' || typeof params[field] === 'undefined' || params[field] === '' ) {
+ throw 'Required param is not set: "' + field + '".';
+ }
+ });
+
+ if (!params.base_url.endsWith('/')) {
+ params.base_url += '/';
+ }
+
+ var response, request = new HttpRequest();
+ request.addHeader('datatype: json');
+
+ // The MSA API logs in with the SHA-256 hash of "<username>_<password>".
+ var auth_string = sha256(params.username + '_' + params.password);
+
+ response = request.get(params.base_url + 'api/login/' + auth_string);
+
+ if (request.getStatus() < 200 || request.getStatus() >= 300) {
+ throw 'Authentication request has failed with status code ' + request.getStatus() + ': ' + response;
+ }
+
+ if (response === null) {
+     throw 'No data received by the authentication request.';
+ }
+
+ var auth_data;
+
+ try {
+     auth_data = JSON.parse(response);
+ }
+ catch (error) {
+     throw 'Failed to parse authentication response received from device API.';
+ }
+
+ var session_key = auth_data['status'][0]['response'];
+
+ request.addHeader('sessionKey: ' + session_key);
+
+ data.errors = [];
+
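+ // Query each "show" method; collect per-method errors so one failing endpoint does not break the whole item.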
+ methods.forEach(function (method) {
+ response = request.get(params.base_url + 'api/show/' + method);
+ var method_error = '';
+
+ if (request.getStatus() < 200 || request.getStatus() >= 300) {
+ method_error = 'Method: ' + method + '. Request has failed with status code ' + request.getStatus() + ': ' + response;
+ data.errors.push(method_error);
+ return;
+ }
+
+ if (response !== null) {
+ try {
+ result = JSON.parse(response);
+ switch (method) {
+ case 'controller-statistics':
+     // Lowercase the durable IDs so they match the IDs discovered from the "controllers" method.
+     var stats_array = result['controller-statistics'] || [];
+     for (var i = 0; i < stats_array.length; i++) {
+         stats_array[i]['durable-id'] = stats_array[i]['durable-id'].toLowerCase();
+     }
+     data[method] = result[method];
+     break;
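+ // Several endpoints return their payload under a different key; remap them to the method name.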
+ case 'frus':
+ data[method] = result['enclosure-fru'];
+ break;
+ case 'disks':
+ data[method] = result['drives'];
+ break;
+ case 'fans':
+ data[method] = result['fan'];
+ break;
+ case 'ports':
+ data[method] = result['port'];
+ break;
+ default:
+ data[method] = result[method];
+ }
+ }
+ catch (error) {
+ method_error = 'Method: ' + method + '. Failed to parse response received from device API.';
+ }
+ }
+ else {
+ method_error = 'Method: ' + method + '. No data received by request.';
+ }
+
+ if (method_error.length > 0) {
+ data.errors.push(method_error);
+ }
+ });
+
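+ // The dependent "Get method errors" item expects an empty string when there are no errors;
+ // an empty JSON array would still have a non-zero length in the trigger expression.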
+ if (data.errors.length == 0) {
+ data.errors = '';
+ }
+
+ return JSON.stringify(data);
+ description: 'A JSON object with the results of the API requests.'
+ timeout: '{$HPE.MSA.DATA.TIMEOUT}'
+ parameters:
+ -
+ name: base_url
+ value: '{$HPE.MSA.API.SCHEME}://{HOST.CONN}:{$HPE.MSA.API.PORT}/'
+ -
+ name: username
+ value: '{$HPE.MSA.API.USERNAME}'
+ -
+ name: password
+ value: '{$HPE.MSA.API.PASSWORD}'
+ tags:
+ -
+ tag: component
+ value: raw
+ -
+ uuid: 4c8b2c72135a4af781c0f31730366abe
+ name: 'System contact'
+ type: DEPENDENT
+ key: hpe.msa.system.contact
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'The name of the person who administers the system.'
+ inventory_link: CONTACT
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.system[0].[''system-contact'']'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: dc310d8c55a74a00bed9c004ba33d1fa
+ name: 'System health'
+ type: DEPENDENT
+ key: hpe.msa.system.health
+ delay: '0'
+ history: 7d
+ description: 'System health status.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.system[0].[''health-numeric'']'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: system
+ triggers:
+ -
+ uuid: 49e8c1d8a14f40b5acb2723e370ccccb
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health)=1'
+ name: 'System health is in degraded state'
+ priority: WARNING
+ description: 'System health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 2709a971f2ce417d8e269a0e5ebdd964
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health)=2'
+ name: 'System health is in fault state'
+ priority: AVERAGE
+ description: 'System health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: fa35428a4f41453984bd0bfa566e0674
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health)=3'
+ name: 'System health is in unknown state'
+ priority: INFO
+ description: 'System health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: c4aae4a5f218472698751d9de8d1087d
+ name: 'System information'
+ type: DEPENDENT
+ key: hpe.msa.system.info
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'A brief description of what the system is used for or how it is configured.'
+ inventory_link: NOTES
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.system[0].[''system-information'']'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: 3768f170e5ef44bca39e89b1f8973e6d
+ name: 'System location'
+ type: DEPENDENT
+ key: hpe.msa.system.location
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'The location of the system.'
+ inventory_link: LOCATION
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.system[0].[''system-location'']'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: 00c58217d52e4cd5852bdd9c71c4375f
+ name: 'System name'
+ type: DEPENDENT
+ key: hpe.msa.system.name
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'The name of the storage system.'
+ inventory_link: NAME
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.system[0].[''system-name'']'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: 103e58d547284e68b079e92074950ff9
+ name: 'Product ID'
+ type: DEPENDENT
+ key: hpe.msa.system.product_id
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'The product model identifier.'
+ inventory_link: MODEL
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.system[0].[''product-id'']'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: 7865d8ae697c40c5b5855c47bb82ccc4
+ name: 'Vendor name'
+ type: DEPENDENT
+ key: hpe.msa.system.vendor_name
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'The vendor name.'
+ inventory_link: VENDOR
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.system[0].[''vendor-name'']'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: 3831060089ff497993088472e922df38
+ name: 'HPE MSA: Service ping'
+ type: SIMPLE
+ key: 'net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}","{$HPE.MSA.API.PORT}"]'
+ history: 7d
+ description: 'Check if HTTP/HTTPS service accepts TCP connections.'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: network
+ triggers:
+ -
+ uuid: 9c1bf26f95d946f386bbf613d3d55779
+ expression: 'max(/HPE MSA 2060 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}","{$HPE.MSA.API.PORT}"],5m)=0'
+ name: 'Service is down or unavailable'
+ priority: HIGH
+ description: 'HTTP/HTTPS service is down or unable to establish TCP connection.'
+ tags:
+ -
+ tag: scope
+ value: availability
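+      # net.tcp.service[] is a Zabbix simple check; with the default macros the key above resolves
+      # to something like net.tcp.service["https","192.0.2.10","443"] (the address shown is
+      # illustrative only) and returns 1 when a TCP connection can be established, 0 otherwise.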
+ discovery_rules:
+ -
+ uuid: 91c30dd0509843898601ce6d489fab03
+ name: 'Controllers discovery'
+ type: DEPENDENT
+ key: hpe.msa.controllers.discovery
+ delay: '0'
+ description: 'Discover controllers.'
+ item_prototypes:
+ -
+ uuid: 73bc16fc631f4386abbc78897db07e13
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Read hits, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.read.hits["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+              description: 'For the controller that owns the volume, the number of times per second that a block to be read is found in cache.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''read-cache-hits''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
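+            # The pattern above repeats for every controller statistics item in this rule: the
+            # JSONPATH step selects the statistics object whose durable-id matches this controller,
+            # e.g. for a hypothetical {#DURABLE.ID} of "controller_a" the path resolves to
+            #   $.['controller-statistics'][?(@['durable-id'] == "controller_a")].['read-cache-hits'].first()
+            # and the CHANGE_PER_SECOND step converts the raw monotonic counter into a rate:
+            #   rate = (value_now - value_prev) / (seconds between the two values)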
+ -
+ uuid: 04e14fe4d8ba4693b954ebcac1671649
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Read misses, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.read.misses["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+              description: 'For the controller that owns the volume, the number of times per second that a block to be read is not found in cache.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''read-cache-misses''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 5cb9f7eb42d2413a90161ac192629073
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write hits, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.write.hits["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+              description: 'For the controller that owns the volume, the number of times per second that a block to be written is found in cache.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''write-cache-hits''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 61aa7235c6c44cfababd1b2390cc0443
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write misses, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.write.misses["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+              description: 'For the controller that owns the volume, the number of times per second that a block to be written is not found in cache.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''write-cache-misses''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 0d754544c18143ff98114e1ed316ad1e
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write utilization'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache.write["{#CONTROLLER.ID}",util]'
+ delay: '0'
+ history: 7d
+ units: '%'
+ description: 'Percentage of write cache in use, from 0 to 100.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''write-cache-used''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 482c5af99fe740278c4663ba300dee04
+ name: 'Controller [{#CONTROLLER.ID}]: Cache memory size'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cache["{#CONTROLLER.ID}",total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Controller cache memory size.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''cache-memory-size''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
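+            # The cache-memory-size field appears to be reported in MiB (hence the 1048576
+            # multiplier): a hypothetical reported value of 4096 is stored as
+            # 4096 * 1048576 = 4294967296 B, displayed as 4 GiB with the B unit.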
+ -
+ uuid: 80d6ae014e354f6c844c3b88ea66c530
+ name: 'Controller [{#CONTROLLER.ID}]: CPU utilization'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util]'
+ delay: '0'
+ history: 7d
+ units: '%'
+ description: 'Percentage of time the CPU is busy, from 0 to 100.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''cpu-load''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ trigger_prototypes:
+ -
+ uuid: 0bf68b46b7644ad5ad0123df49c1da35
+ expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util],5m)>{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}'
+ name: 'Controller [{#CONTROLLER.ID}]: High CPU utilization'
+ event_name: 'Controller [{#CONTROLLER.ID}]: High CPU utilization (over {$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}% for 5m)'
+ priority: WARNING
+ description: 'Controller CPU utilization is too high. The system might be slow to respond.'
+ tags:
+ -
+ tag: scope
+ value: performance
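+            # min(/host/key,5m) takes the lowest value observed over the last 5 minutes, so the
+            # trigger above fires only when utilization stays above the threshold for the whole
+            # window, filtering out short spikes.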
+ -
+ uuid: c8fbfd459fce4149b1459e366b61981a
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate: Reads'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.data_transfer.reads["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data read rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''data-read-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 9c5c23273f5b43ad9e300d2c7b90bc3f
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate: Total'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.data_transfer.total["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ units: Bps
+ description: 'The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''bytes-per-second-numeric''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 94f0b7f7d397453f9227c1b473a77a4e
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate: Writes'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.data_transfer.writes["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data write rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''data-written-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 8b0f014d1ed5470d919357f204b704ca
+ name: 'Controller [{#CONTROLLER.ID}]: IOPS, read rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.iops.read["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!r/s'
+ description: 'Number of read operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''number-of-reads''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 16f2fd5bd9d244daa09aef3f79a5d450
+ name: 'Controller [{#CONTROLLER.ID}]: IOPS, total rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.iops.total["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ units: '!iops'
+ description: 'Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''iops''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 9b8366ac60304c3c98dedc278ad18418
+ name: 'Controller [{#CONTROLLER.ID}]: IOPS, write rate'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers.iops.write["{#CONTROLLER.ID}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!w/s'
+ description: 'Number of write operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''number-of-writes''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 5f6c124f1aef41499ee52616ede02de9
+ name: 'Controller [{#CONTROLLER.ID}]: Disks'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",disks]'
+ delay: '0'
+ history: 7d
+ description: 'Number of disks in the storage system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''disks''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: c70f280c9c494b769b442f3a22a3c173
+ name: 'Controller [{#CONTROLLER.ID}]: Disk groups'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",disk_groups]'
+ delay: '0'
+ history: 7d
+ description: 'Number of disk groups in the storage system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''virtual-disks''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: ba1bb9818a9a487c8742d619316b087e
+ name: 'Controller [{#CONTROLLER.ID}]: Firmware version'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",firmware]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Storage controller firmware version.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''sc-fw''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 5f5307f2904a4792af1906a2b03a2a9b
+ name: 'Controller [{#CONTROLLER.ID}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Controller health status.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: component
+ value: health
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ trigger_prototypes:
+ -
+ uuid: 3988a5b897a34c84952fa573d7019879
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=1'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller health is in degraded state'
+ priority: WARNING
+ description: 'Controller health is in degraded state.'
+ dependencies:
+ -
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 7256e023ac82427bb6ee923d4ff07786
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=2'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller health is in fault state'
+ priority: AVERAGE
+ description: 'Controller health is in fault state.'
+ dependencies:
+ -
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 15bc89e6c61549caaf5a66c85446ea9d
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",health])=3'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller health is in unknown state'
+ priority: INFO
+ description: 'Controller health is in unknown state.'
+ dependencies:
+ -
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
+ tags:
+ -
+ tag: scope
+ value: notice
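+            # The 'dependencies' entries above suppress these health triggers while the
+            # 'Controller is down' trigger (defined on the controller status item later in this
+            # rule) is in problem state, so a dead controller raises a single HIGH event instead
+            # of a cascade of health alerts.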
+ -
+ uuid: 2c9c2636aeb543ec8e70102c555fe776
+ name: 'Controller [{#CONTROLLER.ID}]: IP address'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",ip_address]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Controller network port IP address.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''ip-address''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 3405ef21e2cb40729e16c5b8aaf35996
+ name: 'Controller [{#CONTROLLER.ID}]: Part number'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",part_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Part number of the controller.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''part-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 9b4ee1a634c3462f8fb48eb0e79984df
+ name: 'Controller [{#CONTROLLER.ID}]: Pools'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",pools]'
+ delay: '0'
+ history: 7d
+ description: 'Number of pools in the storage system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''number-of-storage-pools''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: 6980d1841bc04c79868d6f05bf59921e
+ name: 'Controller [{#CONTROLLER.ID}]: Serial number'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",serial_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Storage controller serial number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''serial-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ -
+ uuid: c0c2034fc848400c9b1f09f0c54790b3
+ name: 'Controller [{#CONTROLLER.ID}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",status]'
+ delay: '0'
+ history: 7d
+ description: 'Storage controller status.'
+ valuemap:
+ name: 'Controller status'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''status-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: component
+ value: health
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ trigger_prototypes:
+ -
+ uuid: 99de4f8de416485db5c3844d1c8d654b
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",status])=1'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller is down'
+ priority: HIGH
+ description: 'The controller is down.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 7a9b3ba8dd5446d0961a6eea595c2b49
+ name: 'Controller [{#CONTROLLER.ID}]: Uptime'
+ type: DEPENDENT
+ key: 'hpe.msa.controllers["{#CONTROLLER.ID}",uptime]'
+ delay: '0'
+ history: 7d
+ units: uptime
+ description: 'Number of seconds since the controller was restarted.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controller-statistics''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''power-on-time''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: controller
+ -
+ tag: controller
+ value: '{#CONTROLLER.ID}'
+ trigger_prototypes:
+ -
+ uuid: 255250aa4b75465a989bf8f3fd805667
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#CONTROLLER.ID}",uptime])<10m'
+ name: 'Controller [{#CONTROLLER.ID}]: Controller has been restarted'
+ event_name: 'Controller [{#CONTROLLER.ID}]: Controller has been restarted (uptime < 10m)'
+ priority: WARNING
+ description: 'The controller uptime is less than 10 minutes.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ graph_prototypes:
+ -
+ uuid: a0bac1256ecf42fb9e980a49e52f008e
+ name: 'Controller [{#CONTROLLER.ID}]: Cache: Write utilization'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.write["{#CONTROLLER.ID}",util]'
+ -
+ uuid: 2b3343a641304872a82c84e1b918f8b3
+ name: 'Controller [{#CONTROLLER.ID}]: Cache usage'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.read.hits["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.read.misses["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.write.hits["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '3'
+ color: A54F10
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.cache.write.misses["{#CONTROLLER.ID}",rate]'
+ -
+ uuid: ed2117af47d94be9bed0632a0b662a25
+ name: 'Controller [{#CONTROLLER.ID}]: Controller CPU utilization'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.cpu["{#CONTROLLER.ID}",util]'
+ -
+ uuid: 27b53c540cae45da9b2e13cbbb1ab821
+ name: 'Controller [{#CONTROLLER.ID}]: Data transfer rate'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.data_transfer.reads["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.data_transfer.writes["{#CONTROLLER.ID}",rate]'
+ -
+ uuid: ce3c794ac9424be5a104b812680cc77b
+ name: 'Controller [{#CONTROLLER.ID}]: Disk operations rate'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.iops.read["{#CONTROLLER.ID}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.controllers.iops.write["{#CONTROLLER.ID}",rate]'
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#CONTROLLER.ID}'
+ path: '$.[''controller-id'']'
+ -
+ lld_macro: '{#DURABLE.ID}'
+ path: '$.[''durable-id'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''controllers'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
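+          # How a discovery row is built here: preprocessing reduces the bulk hpe.msa.data.get
+          # JSON to the 'controllers' array, and lld_macro_paths maps each element to macros;
+          # e.g. a hypothetical element {"durable-id": "controller_a", "controller-id": "A"}
+          # yields {#CONTROLLER.ID} = "A" and {#DURABLE.ID} = "controller_a" for the prototypes.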
+ -
+ uuid: 46478b42c76348d7824c715fd6d20f74
+ name: 'Disks discovery'
+ type: DEPENDENT
+ key: hpe.msa.disks.discovery
+ delay: '0'
+ description: 'Discover disks.'
+ item_prototypes:
+ -
+ uuid: 4fedb88c1bb74c2cb5a0f72fdfcff104
+ name: 'Disk [{#DURABLE.ID}]: Blocks size'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.blocks["{#DURABLE.ID}",size]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'The size of a block, in bytes.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''blocksize''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: a491cb03df9c4e3ead70e0a74d9337b2
+ name: 'Disk [{#DURABLE.ID}]: Blocks total'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.blocks["{#DURABLE.ID}",total]'
+ delay: '0'
+ history: 7d
+ description: 'Total space in blocks.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''blocks''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 6c20cf4e84b0427fbe797fc209d78785
+ name: 'Disk [{#DURABLE.ID}]: Space total'
+ type: CALCULATED
+ key: 'hpe.msa.disks.space["{#DURABLE.ID}",total]'
+ delay: 1h
+ history: 7d
+ units: B
+ params: 'last(//hpe.msa.disks.blocks["{#DURABLE.ID}",size])*last(//hpe.msa.disks.blocks["{#DURABLE.ID}",total])'
+ description: 'Total size of the disk.'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
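+            # This calculated item multiplies the last values of the two block items above; e.g.
+            # a hypothetical 512 B block size and 23437770752 blocks give
+            # 512 * 23437770752 = 12000138625024 B (about 12 TB).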
+ -
+ uuid: 80ea0929a1bf43f4bdeba80e675c52bd
+ name: 'Disk [{#DURABLE.ID}]: SSD life left'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.ssd["{#DURABLE.ID}",life_left]'
+ delay: '0'
+ history: 7d
+ discover: NO_DISCOVER
+ units: '%'
+              description: 'The percentage of disk life remaining.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''ssd-life-left-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: f5bb9b7f437f434d83ca0542e41b2673
+ name: 'Disk [{#DURABLE.ID}]: Disk group'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",group]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'If the disk is in a disk group, the disk group name.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''disk-group''].first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 86fca5ad02af49c8a1d48f4a260a0dbf
+ name: 'Disk [{#DURABLE.ID}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Disk health status.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: health
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: f76f8eec05a94e2db9d4cd3bcbb43aa4
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=1'
+ name: 'Disk [{#DURABLE.ID}]: Disk health is in degraded state'
+ priority: WARNING
+ description: 'Disk health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 383181e44a114334ab28ff09f49b2d51
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=2'
+ name: 'Disk [{#DURABLE.ID}]: Disk health is in fault state'
+ priority: AVERAGE
+ description: 'Disk health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 2b2d78c6c29f4bd58eff632809dee978
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=3'
+ name: 'Disk [{#DURABLE.ID}]: Disk health is in unknown state'
+ priority: INFO
+ description: 'Disk health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: 8f8ad679881c4693acfed363e5498b34
+ name: 'Disk [{#DURABLE.ID}]: Model'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",model]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Disk model.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''model''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 7fffecbf1ede4a5e9da5efc4311fc62e
+ name: 'Disk [{#DURABLE.ID}]: Storage pool'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",pool]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'If the disk is in a pool, the pool name.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''storage-pool-name''].first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 9a43a148ad4742e1a1df0038b36a171f
+ name: 'Disk [{#DURABLE.ID}]: Serial number'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",serial_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Disk serial number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''serial-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 119dc5c43fb741028ccd599d25ad032c
+ name: 'Disk [{#DURABLE.ID}]: Temperature'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",temperature]'
+ delay: '0'
+ history: 7d
+ units: '!°C'
+ description: 'Temperature of the disk.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''temperature-numeric''].first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 0a0cf4600214443aa504d5c55d1f4015
+ name: 'Disk [{#DURABLE.ID}]: Temperature status'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",temperature_status]'
+ delay: '0'
+ history: 7d
+ description: 'Disk temperature status.'
+ valuemap:
+ name: 'Disk temperature status'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''temperature-status-numeric''].first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: IN_RANGE
+ parameters:
+ - '1'
+ - '3'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: health
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: d4b8f77421d744918e087f696b3f0fff
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=2'
+ name: 'Disk [{#DURABLE.ID}]: Disk temperature is critically high'
+ priority: AVERAGE
+ description: 'Disk temperature is critically high.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: fbbac4048fda477a99f00566624b6bdb
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=3'
+ name: 'Disk [{#DURABLE.ID}]: Disk temperature is high'
+ priority: WARNING
+ description: 'Disk temperature is high.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 41e4f00446304206804da350a88ce3b9
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=4'
+ name: 'Disk [{#DURABLE.ID}]: Disk temperature is unknown'
+ priority: INFO
+ description: 'Disk temperature is unknown.'
+ tags:
+ -
+ tag: scope
+ value: notice
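+            # The IN_RANGE step above acts as input validation: a temperature status value from 1
+            # to 3 passes through unchanged, while anything outside that range is replaced with the
+            # custom value 4, which the 'Disk temperature is unknown' trigger then reports.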
+ -
+ uuid: 1a23ef68bb484fd5baeba2b352b970db
+ name: 'Disk [{#DURABLE.ID}]: Type'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",type]'
+ delay: '0'
+ history: 7d
+              description: |
+                Disk type:
+                SAS: Enterprise SAS spinning disk.
+                SAS MDL: Midline SAS spinning disk.
+                SSD SAS: SAS solid-state disk.
+ valuemap:
+ name: 'Disk type'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''description-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ -
+ uuid: d8e35779834640c8afdc5874f72fe8af
+ name: 'Disk [{#DURABLE.ID}]: Vendor'
+ type: DEPENDENT
+ key: 'hpe.msa.disks["{#DURABLE.ID}",vendor]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Disk vendor.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''vendor''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: disk
+ value: '{#DURABLE.ID}'
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#DURABLE.ID}'
+ path: '$.[''durable-id'']'
+ -
+ lld_macro: '{#TYPE}'
+ path: '$.[''description-numeric'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disks'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ overrides:
+ -
+ name: 'SSD life left'
+ step: '1'
+ filter:
+ conditions:
+ -
+ macro: '{#TYPE}'
+ value: '8'
+ formulaid: A
+ operations:
+ -
+ operationobject: ITEM_PROTOTYPE
+ operator: REGEXP
+ value: 'SSD life left'
+ status: ENABLED
+ discover: DISCOVER
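+          # The override above works together with the 'SSD life left' prototype, which is
+          # exported with discover: NO_DISCOVER: when {#TYPE} equals 8 (the description-numeric
+          # value this template treats as an SSD), discovery re-enables that prototype, so wear
+          # metrics are created for SSDs only.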
+ -
+ uuid: 88aaea8c16a247559c68783ad0cd5c4d
+ name: 'Disk groups discovery'
+ type: DEPENDENT
+ key: hpe.msa.disks.groups.discovery
+ delay: '0'
+ description: 'Discover disk groups.'
+ item_prototypes:
+ -
+ uuid: 8f68ad1b814d4287a6fd72d5bd03f7da
+ name: 'Disk group [{#NAME}]: Average response time: Read'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: s
+ description: 'Average response time for all read operations, calculated over the interval since these statistics were last requested or reset.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''avg-read-rsp-time''].first()'
+ -
+ type: MULTIPLIER
+ parameters:
+ - '0.000001'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
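+            # avg-read-rsp-time appears to be reported in microseconds (hence the 0.000001
+            # multiplier): a hypothetical reported value of 1500 becomes 0.0015 s, matching the
+            # s unit. The write and total response time items below follow the same pattern.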
+ -
+ uuid: 2ae8acbcd0b9442c9adc8086fa36fa40
+ name: 'Disk group [{#NAME}]: Average response time: Total'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",total]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: s
+ description: 'Average response time for read and write operations, calculated over the interval since these statistics were last requested or reset.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''avg-rsp-time''].first()'
+ -
+ type: MULTIPLIER
+ parameters:
+ - '0.000001'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: f99ce5e6e31140c298ee447d3a2b8c4d
+ name: 'Disk group [{#NAME}]: Average response time: Write'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: s
+ description: 'Average response time for all write operations, calculated over the interval since these statistics were last requested or reset.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''avg-write-rsp-time''].first()'
+ -
+ type: MULTIPLIER
+ parameters:
+ - '0.000001'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 705fce660a944a47ad7ff0e9c9b1d37e
+ name: 'Disk group [{#NAME}]: Blocks free'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.blocks["{#NAME}",free]'
+ delay: '0'
+ history: 7d
+ description: 'Free space in blocks.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups''][?(@[''name''] == "{#NAME}")].[''freespace-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 27e3fc79212e407ca1ae5fb06557440d
+ name: 'Disk group [{#NAME}]: Blocks size'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.blocks["{#NAME}",size]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'The size of a block, in bytes.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups''][?(@[''name''] == "{#NAME}")].[''blocksize''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: f14e651fd9dc4b03bb00e5db780f0114
+ name: 'Disk group [{#NAME}]: Blocks total'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.blocks["{#NAME}",total]'
+ delay: '0'
+ history: 7d
+ description: 'Total space in blocks.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups''][?(@[''name''] == "{#NAME}")].[''blocks''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: ecd3de6d32e94d2ab50111659147c97e
+ name: 'Disk group [{#NAME}]: Data transfer rate: Reads'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data read rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''data-read-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 28b236ea619f4130a3271459e9fce06b
+ name: 'Disk group [{#NAME}]: Data transfer rate: Total'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.data_transfer.total["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ units: Bps
+ description: 'The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''bytes-per-second-numeric''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 51ef802067c149bea1d5d976df6e3a6f
+ name: 'Disk group [{#NAME}]: Data transfer rate: Writes'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data write rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''data-written-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 95925d6d4af94964b388208ff185642d
+ name: 'Disk group [{#NAME}]: IOPS, read rate'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.iops.read["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!r/s'
+ description: 'Number of read operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''number-of-reads''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: c9fdf59576554063b404d190ad90db18
+ name: 'Disk group [{#NAME}]: IOPS, total rate'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.iops.total["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ units: '!iops'
+ description: 'Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''iops''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 31f5b13a56704e438b600df70c37a1fd
+ name: 'Disk group [{#NAME}]: IOPS, write rate'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.iops.write["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!w/s'
+ description: 'Number of write operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-group-statistics''][?(@[''name''] == "{#NAME}")].[''number-of-writes''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 7359b1d550734d30bb83612538b36e95
+ name: 'Disk group [{#NAME}]: RAID type'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.raid["{#NAME}",type]'
+ delay: '0'
+ history: 7d
+ description: 'The RAID level of the disk group.'
+ valuemap:
+ name: 'RAID type'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups''][?(@[''name''] == "{#NAME}")].[''raidtype-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: b0d49d3da6b14b9cb8eeb95a3665a26e
+ name: 'Disk group [{#NAME}]: Space free'
+ type: CALCULATED
+ key: 'hpe.msa.disks.groups.space["{#NAME}",free]'
+ history: 7d
+ units: B
+ params: 'last(//hpe.msa.disks.groups.blocks["{#NAME}",size])*last(//hpe.msa.disks.groups.blocks["{#NAME}",free])'
+ description: 'The free space in the disk group.'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: bc8e6e0fb286466593186708cddf3b2a
+ name: 'Disk group [{#NAME}]: Pool space used'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups.space["{#NAME}",pool_util]'
+ delay: '0'
+ history: 7d
+ units: '%'
+ description: 'The percentage of pool capacity that the disk group occupies.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups''][?(@[''name''] == "{#NAME}")].[''pool-percentage''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: fb3dd5308c97446693932206be17ace3
+ name: 'Disk group [{#NAME}]: Space total'
+ type: CALCULATED
+ key: 'hpe.msa.disks.groups.space["{#NAME}",total]'
+ history: 7d
+ units: B
+ params: 'last(//hpe.msa.disks.groups.blocks["{#NAME}",size])*last(//hpe.msa.disks.groups.blocks["{#NAME}",total])'
+ description: 'The capacity of the disk group.'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 2a9c7f901f494b8bb7d2c74cd7c3030c
+ name: 'Disk group [{#NAME}]: Space utilization'
+ type: CALCULATED
+ key: 'hpe.msa.disks.groups.space["{#NAME}",util]'
+ history: 7d
+ value_type: FLOAT
+ units: '%'
+ params: '100-last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100'
+ description: 'The space utilization percentage in the disk group.'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: df1af9dad6444821a86a26158469d0cb
+ expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}'
+ name: 'Disk group [{#NAME}]: Disk group space is critically low'
+ event_name: 'Disk group [{#NAME}]: Disk group space is critically low (used > {$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}%)'
+ priority: AVERAGE
+                  description: 'Disk group is running low on free space (used space is above {$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}%).'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 713960711c324dc780998f8f263344a2
+ expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}'
+ name: 'Disk group [{#NAME}]: Disk group space is low'
+ event_name: 'Disk group [{#NAME}]: Disk group space is low (used > {$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}%)'
+ priority: WARNING
+                  description: 'Disk group is running low on free space (used space is above {$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}%).'
+ dependencies:
+ -
+ name: 'Disk group [{#NAME}]: Disk group space is critically low'
+ expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}'
+ tags:
+ -
+ tag: scope
+ value: performance
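+            # The space utilization item computes used space as a percentage:
+            #   util = 100 - free / total * 100
+            # e.g. hypothetical values free = 2 TB and total = 10 TB give 100 - 20 = 80%, which the
+            # triggers above compare against the WARN/CRIT threshold macros.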
+ -
+ uuid: 4c6bbdcdb05d45e0af52548aef4e8716
+ name: 'Disk group [{#NAME}]: Disks count'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups["{#NAME}",disk_count]'
+ delay: '0'
+ history: 7d
+ description: 'Number of disks in the disk group.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups''][?(@[''name''] == "{#NAME}")].[''diskcount''].first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ -
+ uuid: 97b6e0e2ec844636be64931fca6e2c6c
+ name: 'Disk group [{#NAME}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups["{#NAME}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Disk group health.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups''][?(@[''name''] == "{#NAME}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: component
+ value: health
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: 7899b8a15b5042f3a4467a7cdee4c6ae
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=1'
+ name: 'Disk group [{#NAME}]: Disk group health is in degraded state'
+ priority: WARNING
+ description: 'Disk group health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: e7c6a3b20c424196854a5437aba4c3ec
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=2'
+ name: 'Disk group [{#NAME}]: Disk group health is in fault state'
+ priority: AVERAGE
+ description: 'Disk group health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 177dd9d1cfa54b3e8c9e6479cb96af03
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=3'
+ name: 'Disk group [{#NAME}]: Disk group health is in unknown state'
+ priority: INFO
+ description: 'Disk group health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: 6755c1253e83442780eeb31d67062980
+ name: 'Disk group [{#NAME}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.disks.groups["{#NAME}",status]'
+ delay: '0'
+ history: 7d
+              description: |
+                The status of the disk group:
+
+                - CRIT: Critical. The disk group is online but isn't fault tolerant because some of its disks are down.
+                - DMGD: Damaged. The disk group is online and fault tolerant, but some of its disks are damaged.
+                - FTDN: Fault tolerant with a down disk. The disk group is online and fault tolerant, but some of its disks are down.
+                - FTOL: Fault tolerant.
+                - MSNG: Missing. The disk group is online and fault tolerant, but some of its disks are missing.
+                - OFFL: Offline. Either the disk group is using offline initialization, or its disks are down and data may be lost.
+                - QTCR: Quarantined critical. The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online, or if the disk group is still QTCR or QTDN 60 seconds after being quarantined, it is automatically dequarantined.
+                - QTDN: Quarantined with a down disk. The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online, or if the disk group is still QTCR or QTDN 60 seconds after being quarantined, it is automatically dequarantined.
+                - QTOF: Quarantined offline. The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group.
+                - QTUN: Quarantined unsupported. The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups.
+                - STOP: The disk group is stopped.
+                - UNKN: Unknown.
+                - UP: Up. The disk group is online and does not have fault-tolerant attributes.
+ valuemap:
+ name: 'Disk group status'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups''][?(@[''name''] == "{#NAME}")].[''status-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: disk-group
+ -
+ tag: component
+ value: health
+ -
+ tag: disk-group
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: 8deee88d964846598d5574d197694b17
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=9'
+ name: 'Disk group [{#NAME}]: Disk group has damaged disks'
+ priority: AVERAGE
+                  description: 'The disk group is online and fault tolerant, but some of its disks are damaged.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: c615e1bb1c824e7ba109b8a6580eb9b9
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=8'
+ name: 'Disk group [{#NAME}]: Disk group has missing disks'
+ priority: AVERAGE
+                  description: 'The disk group is online and fault tolerant, but some of its disks are missing.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: bead3a0bb95342b3b3ceae7becff99b8
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=1'
+ name: 'Disk group [{#NAME}]: Disk group is fault tolerant with a down disk'
+ priority: AVERAGE
+                  description: 'The disk group is online and fault tolerant, but some of its disks are down.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: c89466e00c2b40c1933fde60332a428a
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=3'
+ name: 'Disk group [{#NAME}]: Disk group is offline'
+ priority: AVERAGE
+                  description: 'Either the disk group is using offline initialization, or its disks are down and data may be lost.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 3dc5b3bc1128451491217639cf4e5115
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=4'
+ name: 'Disk group [{#NAME}]: Disk group is quarantined critical'
+ priority: AVERAGE
+                  description: 'The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online, or if the disk group is still QTCR or QTDN 60 seconds after being quarantined, it is automatically dequarantined.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 6892c8c05331497ab37db2b2fe3673a1
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=5'
+ name: 'Disk group [{#NAME}]: Disk group is quarantined offline'
+ priority: AVERAGE
+ description: 'The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 8a8bda977e11462a906fd200f1b67a72
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=5'
+ name: 'Disk group [{#NAME}]: Disk group is quarantined unsupported'
+ priority: AVERAGE
+ description: 'The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 21f06dd8f8de49f58a64a638d24ff905
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=6'
+ name: 'Disk group [{#NAME}]: Disk group is quarantined with an inaccessible disk'
+ priority: AVERAGE
+                  description: 'The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online, or if the disk group is still QTCR or QTDN 60 seconds after being quarantined, it is automatically dequarantined.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 1914fede726744829b2e41392b957857
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=7'
+ name: 'Disk group [{#NAME}]: Disk group is stopped'
+ priority: AVERAGE
+ description: 'The disk group is stopped.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: a79a6cf86bd44f55a7859808f632bf48
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=2'
+ name: 'Disk group [{#NAME}]: Disk group status is critical'
+ priority: AVERAGE
+ description: 'The disk group is online but isn''t fault tolerant because some of its disks are down.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ graph_prototypes:
+ -
+ uuid: e1f7331965524670b8c44c0b0d8eb99b
+ name: 'Disk group [{#NAME}]: Average response time'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]'
+ -
+ uuid: 1354b947316a46be8dc696c29f408a6b
+ name: 'Disk group [{#NAME}]: Data transfer rate'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]'
+ -
+ uuid: f7f556011add4cd6b0fe8e4545c607a0
+ name: 'Disk group [{#NAME}]: Disk operations rate'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.iops.read["{#NAME}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.iops.write["{#NAME}",rate]'
+ -
+ uuid: 495a941dc4ef45e8b60d6a94bb1fbdcd
+ name: 'Disk group [{#NAME}]: Space utilization'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.space["{#NAME}",free]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.disks.groups.space["{#NAME}",total]'
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#NAME}'
+ path: '$.[''name'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''disk-groups'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: 5a97871f702348dca7a5378885087ea8
+ name: 'Enclosures discovery'
+ type: DEPENDENT
+ key: hpe.msa.enclosures.discovery
+ delay: '0'
+ description: 'Discover enclosures.'
+ item_prototypes:
+ -
+ uuid: 2e70432b3c324ecdb78ab77e5f9bbaf3
+ name: 'Enclosure [{#DURABLE.ID}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.enclosures["{#DURABLE.ID}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Enclosure health.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''enclosures''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: enclosure
+ -
+ tag: component
+ value: health
+ -
+ tag: enclosure
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: d15d460b8c924f609f5cdd055060f8ce
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=1'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure health is in degraded state'
+ priority: WARNING
+ description: 'Enclosure health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 7c2f6a7efbf245298c3ee0b137718dc8
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=2'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure health is in fault state'
+ priority: AVERAGE
+ description: 'Enclosure health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 6732ced099d748daa5cbdf6d97580efd
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=3'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure health is in unknown state'
+ priority: INFO
+ description: 'Enclosure health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: e3a1c5f6dee545a8a7d4b68768d060ab
+ name: 'Enclosure [{#DURABLE.ID}]: Midplane serial number'
+ type: DEPENDENT
+ key: 'hpe.msa.enclosures["{#DURABLE.ID}",midplane_serial_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Midplane serial number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''enclosures''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''midplane-serial-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: enclosure
+ -
+ tag: enclosure
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 1dcecf03b9814aac9749badf800e4717
+ name: 'Enclosure [{#DURABLE.ID}]: Model'
+ type: DEPENDENT
+ key: 'hpe.msa.enclosures["{#DURABLE.ID}",model]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Enclosure model.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''enclosures''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''model''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: enclosure
+ -
+ tag: enclosure
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 89f11d7bf0e24a92bf4d4b4b1d86af58
+ name: 'Enclosure [{#DURABLE.ID}]: Part number'
+ type: DEPENDENT
+ key: 'hpe.msa.enclosures["{#DURABLE.ID}",part_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Enclosure part number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''enclosures''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''part-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: enclosure
+ -
+ tag: enclosure
+ value: '{#DURABLE.ID}'
+ -
+ uuid: b426baf09f1445eda59abd0e2ee6dd2c
+ name: 'Enclosure [{#DURABLE.ID}]: Power'
+ type: DEPENDENT
+ key: 'hpe.msa.enclosures["{#DURABLE.ID}",power]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: W
+ description: 'Enclosure power in watts.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''enclosures''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''enclosure-power''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: enclosure
+ -
+ tag: enclosure
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 602b941548ab417bbe59f3f298bf6da9
+ name: 'Enclosure [{#DURABLE.ID}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.enclosures["{#DURABLE.ID}",status]'
+ delay: '0'
+ history: 7d
+ description: 'Enclosure status.'
+ valuemap:
+ name: 'Enclosure status'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''enclosures''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''status-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '6'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: enclosure
+ -
+ tag: component
+ value: health
+ -
+ tag: enclosure
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: ef763c350b2e4d20bdecbe50703ec8dd
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=2'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure has critical status'
+ priority: HIGH
+ description: 'Enclosure has critical status.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: e3a7198f287e4600a0abfa929ee183de
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=6'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure has unknown status'
+ priority: INFO
+ description: 'Enclosure has unknown status.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: 27ba4d2474604caaa2712222cf621294
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=3'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure has warning status'
+ priority: WARNING
+ description: 'Enclosure has warning status.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 53b61c7521d94161b063a5ea506b5466
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=7'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure is unavailable'
+ priority: HIGH
+ description: 'Enclosure is unavailable.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 2218d1bf55aa4db0968dab804c0687e3
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=4'
+ name: 'Enclosure [{#DURABLE.ID}]: Enclosure is unrecoverable'
+ priority: HIGH
+ description: 'Enclosure is unrecoverable.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#DURABLE.ID}'
+ path: '$.[''durable-id'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''enclosures'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: 9043169f17de44baa174459b560de4f5
+ name: 'Fans discovery'
+ type: DEPENDENT
+ key: hpe.msa.fans.discovery
+ delay: '0'
+ description: 'Discover fans.'
+ item_prototypes:
+ -
+ uuid: f9be9af4ff9047f1af946313df3e7165
+ name: 'Fan [{#DURABLE.ID}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.fans["{#DURABLE.ID}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Fan health status.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''fans''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: fan
+ -
+ tag: component
+ value: health
+ -
+ tag: fan
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: 3ee1b1d0d6b34c8eba02480e9e4d5be2
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=1'
+ name: 'Fan [{#DURABLE.ID}]: Fan health is in degraded state'
+ priority: WARNING
+ description: 'Fan health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 3e3785f9915d46068ebe2eff21bac813
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=2'
+ name: 'Fan [{#DURABLE.ID}]: Fan health is in fault state'
+ priority: AVERAGE
+ description: 'Fan health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 4bf2e519b5484d338f997ea5dac462e0
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=3'
+ name: 'Fan [{#DURABLE.ID}]: Fan health is in unknown state'
+ priority: INFO
+ description: 'Fan health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: f028a919d56b45129f9ead200519adaa
+ name: 'Fan [{#DURABLE.ID}]: Speed'
+ type: DEPENDENT
+ key: 'hpe.msa.fans["{#DURABLE.ID}",speed]'
+ delay: '0'
+ history: 7d
+ units: '!RPM'
+ description: 'Fan speed (revolutions per minute).'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''fans''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''speed''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: fan
+ -
+ tag: fan
+ value: '{#DURABLE.ID}'
+ -
+ uuid: df1d8af5df104afc829b403aec6efc96
+ name: 'Fan [{#DURABLE.ID}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.fans["{#DURABLE.ID}",status]'
+ delay: '0'
+ history: 7d
+ description: 'Fan status.'
+ valuemap:
+ name: 'Fan status'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''fans''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''status-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: fan
+ -
+ tag: component
+ value: health
+ -
+ tag: fan
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: 183a1e1c4d444c9a8189035a2af22dc1
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=1'
+ name: 'Fan [{#DURABLE.ID}]: Fan has error status'
+ priority: AVERAGE
+ description: 'Fan has error status.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 4d9e3d1bb22444f981295df07f0d9c24
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=3'
+ name: 'Fan [{#DURABLE.ID}]: Fan is missing'
+ priority: INFO
+ description: 'Fan is missing.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: a6e4ea796b98432284a9fd9fff1d82f9
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=2'
+ name: 'Fan [{#DURABLE.ID}]: Fan is off'
+ priority: WARNING
+ description: 'Fan is off.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ graph_prototypes:
+ -
+ uuid: 1def9fd4627d4552bf34e8ce35f3cd46
+ name: 'Fan [{#DURABLE.ID}]: Speed'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.fans["{#DURABLE.ID}",speed]'
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#DURABLE.ID}'
+ path: '$.[''durable-id'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''fans'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: 30f91e8f7fba489aa649759219efa67c
+ name: 'FRU discovery'
+ type: DEPENDENT
+ key: hpe.msa.frus.discovery
+ delay: '0'
+ filter:
+ conditions:
+ -
+ macro: '{#TYPE}'
+ value: ^(POWER_SUPPLY|RAID_IOM|CHASSIS_MIDPLANE)$
+ operator: NOT_MATCHES_REGEX
+ formulaid: A
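+ # NOT_MATCHES_REGEX keeps only FRU types outside this list; POWER_SUPPLY, RAID_IOM and CHASSIS_MIDPLANE are excluded from this discovery.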
+ description: 'Discover FRUs.'
+ item_prototypes:
+ -
+ uuid: 8cbf62d188084ea4a72eaa37987d8d8e
+ name: 'FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Part number'
+ type: DEPENDENT
+ key: 'hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",part_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: '{#DESCRIPTION}. Part number of the FRU.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''frus''][?(@[''name''] == "{#TYPE}" && @[''fru-location''] == "{#LOCATION}")].[''part-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: fru
+ -
+ tag: fru
+ value: 'Enclosure {#ENCLOSURE.ID}: {#LOCATION}'
+ -
+ uuid: 49c52c2c5b174c78a60756eb7a9e34f1
+ name: 'FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Serial number'
+ type: DEPENDENT
+ key: 'hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",serial_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: '{#DESCRIPTION}. FRU serial number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''frus''][?(@[''name''] == "{#TYPE}" && @[''fru-location''] == "{#LOCATION}")].[''serial-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: fru
+ -
+ tag: fru
+ value: 'Enclosure {#ENCLOSURE.ID}: {#LOCATION}'
+ -
+ uuid: d72f7be111ae4335b92d6a1d0ad9e3ee
+ name: 'FRU [{#ENCLOSURE.ID}: {#LOCATION}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status]'
+ delay: '0'
+ history: 7d
+ description: |
+ {#DESCRIPTION}. FRU status:
+
+ Absent: The FRU is not present.
+ Fault: The FRU's health is Degraded or Fault.
+ Invalid data: The FRU ID data is invalid. The FRU's EEPROM is improperly programmed.
+ OK: The FRU is operating normally.
+ Power off: The FRU is powered off.
+ valuemap:
+ name: 'FRU status'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''frus''][?(@[''name''] == "{#TYPE}" && @[''fru-location''] == "{#LOCATION}")].[''fru-status-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: fru
+ -
+ tag: component
+ value: health
+ -
+ tag: fru
+ value: 'Enclosure {#ENCLOSURE.ID}: {#LOCATION}'
+ trigger_prototypes:
+ -
+ uuid: 2533eb2e4344494d9ec72629dab7b1a8
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status])=0'
+ name: 'FRU [{#ENCLOSURE.ID}: {#LOCATION}]: FRU ID data is invalid'
+ priority: WARNING
+ description: 'The FRU ID data is invalid. The FRU''s EEPROM is improperly programmed.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 7a994469e45f467c8582c24258d0eb75
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.frus["{#ENCLOSURE.ID}:{#LOCATION}",status])=1'
+ name: 'FRU [{#ENCLOSURE.ID}: {#LOCATION}]: FRU status is Degraded or Fault'
+ priority: AVERAGE
+ description: 'FRU status is Degraded or Fault.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#DESCRIPTION}'
+ path: '$.[''description'']'
+ -
+ lld_macro: '{#ENCLOSURE.ID}'
+ path: '$.[''enclosure-id'']'
+ -
+ lld_macro: '{#LOCATION}'
+ path: '$.[''fru-location'']'
+ -
+ lld_macro: '{#TYPE}'
+ path: '$.[''name'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''frus'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: 178b94ddcab947ffb1614622c2b7e08e
+ name: 'Pools discovery'
+ type: DEPENDENT
+ key: hpe.msa.pools.discovery
+ delay: '0'
+ description: 'Discover pools.'
+ item_prototypes:
+ -
+ uuid: 09d67b3577af4e21a7bbd09078d705cd
+ name: 'Pool [{#NAME}]: Blocks available'
+ type: DEPENDENT
+ key: 'hpe.msa.pools.blocks["{#NAME}",available]'
+ delay: '0'
+ history: 7d
+ description: 'Available space in blocks.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''pools''][?(@[''name''] == "{#NAME}")].[''total-avail-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: pool
+ -
+ tag: pool
+ value: '{#NAME}'
+ -
+ uuid: 076921fcd93941b09b79c7d44873417d
+ name: 'Pool [{#NAME}]: Blocks size'
+ type: DEPENDENT
+ key: 'hpe.msa.pools.blocks["{#NAME}",size]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'The size of a block, in bytes.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''pools''][?(@[''name''] == "{#NAME}")].[''blocksize''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: pool
+ -
+ tag: pool
+ value: '{#NAME}'
+ -
+ uuid: fd29559e5bb3455b8b4cfe56f75f54b2
+ name: 'Pool [{#NAME}]: Blocks total'
+ type: DEPENDENT
+ key: 'hpe.msa.pools.blocks["{#NAME}",total]'
+ delay: '0'
+ history: 7d
+ description: 'Total space in blocks.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''pools''][?(@[''name''] == "{#NAME}")].[''total-size-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: pool
+ -
+ tag: pool
+ value: '{#NAME}'
+ -
+ uuid: d99eeba76b354e73b0118b46402d93bf
+ name: 'Pool [{#NAME}]: Space free'
+ type: CALCULATED
+ key: 'hpe.msa.pools.space["{#NAME}",free]'
+ history: 7d
+ units: B
+ params: 'last(//hpe.msa.pools.blocks["{#NAME}",size])*last(//hpe.msa.pools.blocks["{#NAME}",available])'
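+ # Calculated as: free bytes = block size (B) * number of available blocks.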
+ description: 'The free space in the pool.'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ tags:
+ -
+ tag: component
+ value: pool
+ -
+ tag: pool
+ value: '{#NAME}'
+ -
+ uuid: 9fce545fbe724da28a13b8ca8759c37d
+ name: 'Pool [{#NAME}]: Space total'
+ type: CALCULATED
+ key: 'hpe.msa.pools.space["{#NAME}",total]'
+ history: 7d
+ units: B
+ params: 'last(//hpe.msa.pools.blocks["{#NAME}",size])*last(//hpe.msa.pools.blocks["{#NAME}",total])'
+ description: 'The capacity of the pool.'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ tags:
+ -
+ tag: component
+ value: pool
+ -
+ tag: pool
+ value: '{#NAME}'
+ -
+ uuid: ad9bdb342a494d82a36e42e75a3bbf3e
+ name: 'Pool [{#NAME}]: Space utilization'
+ type: CALCULATED
+ key: 'hpe.msa.pools.space["{#NAME}",util]'
+ history: 7d
+ value_type: FLOAT
+ units: '%'
+ params: '100-last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100'
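+ # Calculated as: utilization (%) = 100 - free space / total space * 100, using the two calculated space items above.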
+ description: 'The space utilization percentage in the pool.'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ tags:
+ -
+ tag: component
+ value: pool
+ -
+ tag: pool
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: c73b4a77e94a43f5951f6a541d65637e
+ expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}'
+ name: 'Pool [{#NAME}]: Pool space is critically low'
+ event_name: 'Pool [{#NAME}]: Pool space is critically low (used > {$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}%)'
+ priority: AVERAGE
+ description: 'Pool is running low on free space (used > {$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}%).'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: c7644beb62bc40e99d6045af6d4bc16f
+ expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}'
+ name: 'Pool [{#NAME}]: Pool space is low'
+ event_name: 'Pool [{#NAME}]: Pool space is low (used > {$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}%)'
+ priority: WARNING
+ description: 'Pool is running low on free space (used > {$HPE.MSA.POOL.PUSED.MAX.WARN:"{#NAME}"}%).'
+ dependencies:
+ -
+ name: 'Pool [{#NAME}]: Pool space is critically low'
+ expression: 'min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.MSA.POOL.PUSED.MAX.CRIT:"{#NAME}"}'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 15096639cae947d383a506f0332ff6d3
+ name: 'Pool [{#NAME}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.pools["{#NAME}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Pool health.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''pools''][?(@[''name''] == "{#NAME}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: pool
+ -
+ tag: pool
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: 20723e93add44447a5cab3c8cc4849a6
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=1'
+ name: 'Pool [{#NAME}]: Pool health is in degraded state'
+ priority: WARNING
+ description: 'Pool health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 1881bd0efca04c58a56effb8e232e734
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=2'
+ name: 'Pool [{#NAME}]: Pool health is in fault state'
+ priority: AVERAGE
+ description: 'Pool health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 62db05047b5a4b8797eee5667bb3bdf4
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=3'
+ name: 'Pool [{#NAME}]: Pool health is in unknown state'
+ priority: INFO
+ description: 'Pool health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ graph_prototypes:
+ -
+ uuid: 93151c5760fb405498d1df049185ffe7
+ name: 'Pool [{#NAME}]: Space utilization'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.pools.space["{#NAME}",free]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.pools.space["{#NAME}",total]'
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#NAME}'
+ path: '$.[''name'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''pools'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: bed52618dbc6498f99ddeedc78c0cdad
+ name: 'Ports discovery'
+ type: DEPENDENT
+ key: hpe.msa.ports.discovery
+ delay: '0'
+ description: 'Discover ports.'
+ item_prototypes:
+ -
+ uuid: cf4f9aaf55e6435d949d3b5074b9f37f
+ name: 'Port [{#NAME}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.ports["{#NAME}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Port health status.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''ports''][?(@[''port''] == "{#NAME}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: port
+ -
+ tag: port
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: 9775011d59a846669087e6c90c4a011a
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=1'
+ name: 'Port [{#NAME}]: Port health is in degraded state'
+ priority: WARNING
+ description: 'Port health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: a5dec537528f42e0948ea15f1a290f26
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=2'
+ name: 'Port [{#NAME}]: Port health is in fault state'
+ priority: AVERAGE
+ description: 'Port health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 7025a0e6c93e4731be966c2a9e774581
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=3'
+ name: 'Port [{#NAME}]: Port health is in unknown state'
+ priority: INFO
+ description: 'Port health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: cab1cd26264d408998c5ea8737571ed4
+ name: 'Port [{#NAME}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.ports["{#NAME}",status]'
+ delay: '0'
+ history: 7d
+ description: 'Port status.'
+ valuemap:
+ name: Status
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''ports''][?(@[''port''] == "{#NAME}")].[''status-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: port
+ -
+ tag: port
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: c1d2f824a3d4470abb6817753b1d4047
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=2'
+ name: 'Port [{#NAME}]: Port has error status'
+ priority: AVERAGE
+ description: 'Port has error status.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 6083cdfcb59848a6b5249147155996c2
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=4'
+ name: 'Port [{#NAME}]: Port has unknown status'
+ priority: INFO
+ description: 'Port has unknown status.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: dd32b960ce1544d880d94b2da4dba03e
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=1'
+ name: 'Port [{#NAME}]: Port has warning status'
+ priority: WARNING
+ description: 'Port has warning status.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: 32ad6655625e408a9dd577624afbfa6a
+ name: 'Port [{#NAME}]: Type'
+ type: DEPENDENT
+ key: 'hpe.msa.ports["{#NAME}",type]'
+ delay: '0'
+ history: 7d
+ description: 'Port type.'
+ valuemap:
+ name: 'Port type'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''ports''][?(@[''port''] == "{#NAME}")].[''port-type-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: port
+ -
+ tag: port
+ value: '{#NAME}'
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#NAME}'
+ path: '$.[''port'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''ports'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: 1561695bd2174eada622a0d90ee1c3df
+ name: 'Power supplies discovery'
+ type: DEPENDENT
+ key: hpe.msa.power_supplies.discovery
+ delay: '0'
+ description: 'Discover power supplies.'
+ item_prototypes:
+ -
+ uuid: 993bc2db3b444dc5bc37794985e63ea9
+ name: 'Power supply [{#DURABLE.ID}]: Health'
+ type: DEPENDENT
+ key: 'hpe.msa.power_supplies["{#DURABLE.ID}",health]'
+ delay: '0'
+ history: 7d
+ description: 'Power supply health status.'
+ valuemap:
+ name: Health
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''power-supplies''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''health-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: power-supply
+ -
+ tag: power-supply
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: 1b512fda735440b5839a63fd26c19535
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=1'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply health is in degraded state'
+ priority: WARNING
+ description: 'Power supply health is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ -
+ uuid: b75fb541ae0e43cc9cdb86e07dc3e394
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=2'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply health is in fault state'
+ priority: AVERAGE
+ description: 'Power supply health is in fault state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 555ee9ef33b54d029df2f17d5f899539
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=3'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply health is in unknown state'
+ priority: INFO
+ description: 'Power supply health is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: efae55cfdd1e4021a623e2128f988611
+ name: 'Power supply [{#DURABLE.ID}]: Part number'
+ type: DEPENDENT
+ key: 'hpe.msa.power_supplies["{#DURABLE.ID}",part_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Power supply part number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''power-supplies''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''part-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: power-supply
+ -
+ tag: power-supply
+ value: '{#DURABLE.ID}'
+ -
+ uuid: 6716c3d0177247fe8a35fa1eb206a54f
+ name: 'Power supply [{#DURABLE.ID}]: Serial number'
+ type: DEPENDENT
+ key: 'hpe.msa.power_supplies["{#DURABLE.ID}",serial_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Power supply serial number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''power-supplies''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''serial-number''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: power-supply
+ -
+ tag: power-supply
+ value: '{#DURABLE.ID}'
+ -
+ uuid: a3ff6ab5576246fe9e794e01df4fe1b9
+ name: 'Power supply [{#DURABLE.ID}]: Status'
+ type: DEPENDENT
+ key: 'hpe.msa.power_supplies["{#DURABLE.ID}",status]'
+ delay: '0'
+ history: 7d
+ description: 'Power supply status.'
+ valuemap:
+ name: Status
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''power-supplies''][?(@[''durable-id''] == "{#DURABLE.ID}")].[''status-numeric''].first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '4'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: power-supply
+ -
+ tag: power-supply
+ value: '{#DURABLE.ID}'
+ trigger_prototypes:
+ -
+ uuid: 49c9d2d61c45476da5564299b2eebdee
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=2'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply has error status'
+ priority: AVERAGE
+ description: 'Power supply has error status.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: d6cbaeb5aab84e5eb487af4bf319d640
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=4'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply has unknown status'
+ priority: INFO
+ description: 'Power supply has unknown status.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: b7e85e7a6c254aba930d7704c58adf47
+ expression: 'last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=1'
+ name: 'Power supply [{#DURABLE.ID}]: Power supply has warning status'
+ priority: WARNING
+ description: 'Power supply has warning status.'
+ tags:
+ -
+ tag: scope
+ value: performance
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#DURABLE.ID}'
+ path: '$.[''durable-id'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''power-supplies'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: b132a010c8a84da79eee1ba725301be9
+ name: 'Volumes discovery'
+ type: DEPENDENT
+ key: hpe.msa.volumes.discovery
+ delay: '0'
+ description: 'Discover volumes.'
+ item_prototypes:
+ -
+ uuid: cc6c4bddc05243c7a90082a3450a76a7
+ name: 'Volume [{#NAME}]: Blocks allocated'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.blocks["{#NAME}",allocated]'
+ delay: '0'
+ history: 7d
+ description: 'The number of blocks currently allocated to the volume.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volumes''][?(@[''volume-name''] == "{#NAME}")].[''allocated-size-numeric''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 900d94185fa9480590915bbafb8ccda0
+ name: 'Volume [{#NAME}]: Blocks size'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.blocks["{#NAME}",size]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'The size of a block, in bytes.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volumes''][?(@[''volume-name''] == "{#NAME}")].[''blocksize''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 5cdae787c8b6485899f8f4e8c3cf6b71
+ name: 'Volume [{#NAME}]: Blocks total'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.blocks["{#NAME}",total]'
+ delay: '0'
+ history: 7d
+ description: 'Total space in blocks.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volumes''][?(@[''volume-name''] == "{#NAME}")].[''blocks''].first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: b7615bb6a3434303a2bb4751e7aed458
+ name: 'Volume [{#NAME}]: Cache: Read hits, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.cache.read.hits["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times per second that the block to be read is found in cache.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''read-cache-hits''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 655e319736804d8db4b6988f7205c5e3
+ name: 'Volume [{#NAME}]: Cache: Read misses, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.cache.read.misses["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times per second that the block to be read is not found in cache.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''read-cache-misses''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 849ef4370f4b46ea894d2a2e1e4a3ea4
+ name: 'Volume [{#NAME}]: Cache: Write hits, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.cache.write.hits["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times per second that the block written to is found in cache.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''write-cache-hits''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 593f4fce31f24e9b99c9bc69d2ead38b
+ name: 'Volume [{#NAME}]: Cache: Write misses, rate'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.cache.write.misses["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ description: 'For the controller that owns the volume, the number of times per second that the block written to is not found in cache.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''write-cache-misses''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 1a810fe32e464e8cbcdfc61769bc7869
+ name: 'Volume [{#NAME}]: Data transfer rate: Reads'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.data_transfer.reads["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data read rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''data-read-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 4dd1d47335a9425a94ffcee4c8ed2216
+ name: 'Volume [{#NAME}]: Data transfer rate: Total'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.data_transfer.total["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ units: Bps
+ description: 'The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''bytes-per-second-numeric''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: d5198b50ba8f4db1aa160d0208540a74
+ name: 'Volume [{#NAME}]: Data transfer rate: Writes'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.data_transfer.writes["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: Bps
+ description: 'The data write rate, in bytes per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''data-written-numeric''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 00f5c3f9d19d450e999c389ba297fb41
+ name: 'Volume [{#NAME}]: IOPS, read rate'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.iops.read["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!r/s'
+ description: 'Number of read operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''number-of-reads''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: b925122eda0c4c1380b843bc764ed122
+ name: 'Volume [{#NAME}]: IOPS, total rate'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.iops.total["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ units: '!iops'
+ description: 'Total input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''iops''].first()'
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: a9fcc1525204489cad52cf4e88518064
+ name: 'Volume [{#NAME}]: IOPS, write rate'
+ type: DEPENDENT
+ key: 'hpe.msa.volumes.iops.write["{#NAME}",rate]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ units: '!w/s'
+ description: 'Number of write operations per second.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volume-statistics''][?(@[''volume-name''] == "{#NAME}")].[''number-of-writes''].first()'
+ -
+ type: CHANGE_PER_SECOND
+ parameters:
+ - ''
+ master_item:
+ key: hpe.msa.data.get
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 860855a80c554e0685d4d4125342b547
+ name: 'Volume [{#NAME}]: Space allocated'
+ type: CALCULATED
+ key: 'hpe.msa.volumes.space["{#NAME}",allocated]'
+ history: 7d
+ units: B
+ params: 'last(//hpe.msa.volumes.blocks["{#NAME}",size])*last(//hpe.msa.volumes.blocks["{#NAME}",allocated])'
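+ # Calculated as: allocated bytes = block size (B) * number of allocated blocks.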
+ description: 'The amount of space currently allocated to the volume.'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: eb09d8791bb84c8aadf5cdcac3d76413
+ name: 'Volume [{#NAME}]: Space total'
+ type: CALCULATED
+ key: 'hpe.msa.volumes.space["{#NAME}",total]'
+ history: 7d
+ units: B
+ params: 'last(//hpe.msa.volumes.blocks["{#NAME}",size])*last(//hpe.msa.volumes.blocks["{#NAME}",total])'
+ description: 'The capacity of the volume.'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ tags:
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ graph_prototypes:
+ -
+ uuid: 8905b826b774473991f74b927716322e
+ name: 'Volume [{#NAME}]: Cache usage'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.volumes.cache.read.hits["{#NAME}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.volumes.cache.read.misses["{#NAME}",rate]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.volumes.cache.write.hits["{#NAME}",rate]'
+ -
+ sortorder: '3'
+ color: A54F10
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.volumes.cache.write.misses["{#NAME}",rate]'
+ -
+ uuid: 1bd9df7bab9c4f3a978810c82cc61f42
+ name: 'Volume [{#NAME}]: Data transfer rate'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.volumes.data_transfer.reads["{#NAME}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.volumes.data_transfer.writes["{#NAME}",rate]'
+ -
+ uuid: 24dfc70c5d724f13ac1cec6b229c7fe9
+ name: 'Volume [{#NAME}]: Disk operations rate'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.volumes.iops.read["{#NAME}",rate]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.volumes.iops.write["{#NAME}",rate]'
+ -
+ uuid: 5a316cdf8c6f42acb3cb7a158861145a
+ name: 'Volume [{#NAME}]: Space utilization'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.volumes.space["{#NAME}",allocated]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE MSA 2060 Storage by HTTP'
+ key: 'hpe.msa.volumes.space["{#NAME}",total]'
+ master_item:
+ key: hpe.msa.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#NAME}'
+ path: '$.[''volume-name'']'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.[''volumes'']'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ tags:
+ -
+ tag: class
+ value: storage
+ -
+ tag: target
+ value: hpe
+ -
+ tag: target
+ value: msa-2060
+ macros:
+ -
+ macro: '{$HPE.MSA.API.PASSWORD}'
+ type: SECRET_TEXT
+ description: 'Specify password for API.'
+ -
+ macro: '{$HPE.MSA.API.PORT}'
+ value: '443'
+ description: 'Connection port for API.'
+ -
+ macro: '{$HPE.MSA.API.SCHEME}'
+ value: https
+ description: 'Connection scheme for API.'
+ -
+ macro: '{$HPE.MSA.API.USERNAME}'
+ value: zabbix
+ description: 'Specify user name for API.'
+ -
+ macro: '{$HPE.MSA.CONTROLLER.CPU.UTIL.CRIT}'
+ value: '90'
+ description: 'The critical threshold of the CPU utilization in %.'
+ -
+ macro: '{$HPE.MSA.DATA.TIMEOUT}'
+ value: 30s
+ description: 'Response timeout for API.'
+ -
+ macro: '{$HPE.MSA.DISKS.GROUP.PUSED.MAX.CRIT}'
+ value: '90'
+ description: 'The critical threshold of the disk group space utilization in %.'
+ -
+ macro: '{$HPE.MSA.DISKS.GROUP.PUSED.MAX.WARN}'
+ value: '80'
+ description: 'The warning threshold of the disk group space utilization in %.'
+ -
+ macro: '{$HPE.MSA.POOL.PUSED.MAX.CRIT}'
+ value: '90'
+ description: 'The critical threshold of the pool space utilization in %.'
+ -
+ macro: '{$HPE.MSA.POOL.PUSED.MAX.WARN}'
+ value: '80'
+ description: 'The warning threshold of the pool space utilization in %.'
+ valuemaps:
+ -
+ uuid: f7af1259f3c54a5faa040c743d386d1d
+ name: 'Controller status'
+ mappings:
+ -
+ value: '0'
+ newvalue: Operational
+ -
+ value: '1'
+ newvalue: Down
+ -
+ value: '2'
+ newvalue: 'Not Installed'
+ -
+ uuid: 6bb0dfe12f4249ef9f4b804885c70c60
+ name: 'Disk group status'
+ mappings:
+ -
+ value: '0'
+ newvalue: FTOL
+ -
+ value: '1'
+ newvalue: FTDN
+ -
+ value: '2'
+ newvalue: CRIT
+ -
+ value: '3'
+ newvalue: OFFL
+ -
+ value: '4'
+ newvalue: QTCR
+ -
+ value: '5'
+ newvalue: QTOF
+ -
+ value: '6'
+ newvalue: QTDN
+ -
+ value: '7'
+ newvalue: STOP
+ -
+ value: '8'
+ newvalue: MSNG
+ -
+ value: '9'
+ newvalue: DMGD
+ -
+ value: '11'
+ newvalue: QTDN
+ -
+ value: '250'
+ newvalue: UP
+ -
+ uuid: de0e7d801a9b42cf80fe4c71c0eed982
+ name: 'Disk temperature status'
+ mappings:
+ -
+ value: '1'
+ newvalue: OK
+ -
+ value: '2'
+ newvalue: Critical
+ -
+ value: '3'
+ newvalue: Warning
+ -
+ value: '4'
+ newvalue: Unknown
+ -
+ uuid: 10547e62c7bb4581b347bc523ef03582
+ name: 'Disk type'
+ mappings:
+ -
+ value: '4'
+ newvalue: SAS
+ -
+ value: '8'
+ newvalue: 'SSD SAS'
+ -
+ value: '11'
+ newvalue: 'SAS MDL'
+ -
+ uuid: 37317f19f7d74b8fa61dd1b28e6f4d42
+ name: 'Enclosure status'
+ mappings:
+ -
+ value: '0'
+ newvalue: Unsupported
+ -
+ value: '1'
+ newvalue: OK
+ -
+ value: '2'
+ newvalue: Critical
+ -
+ value: '3'
+ newvalue: Warning
+ -
+ value: '4'
+ newvalue: Unrecoverable
+ -
+ value: '5'
+ newvalue: 'Not installed'
+ -
+ value: '6'
+ newvalue: Unknown
+ -
+ value: '7'
+ newvalue: Unavailable
+ -
+ uuid: 1acc14c82fba4c3daa207d0ce9b702f2
+ name: 'Fan status'
+ mappings:
+ -
+ value: '0'
+ newvalue: Up
+ -
+ value: '1'
+ newvalue: Error
+ -
+ value: '2'
+ newvalue: 'Off'
+ -
+ value: '3'
+ newvalue: Missing
+ -
+ uuid: 284ed898fb7c46ecb8d719646445264c
+ name: 'FRU status'
+ mappings:
+ -
+ value: '0'
+ newvalue: 'Invalid data'
+ -
+ value: '1'
+ newvalue: Fault
+ -
+ value: '2'
+ newvalue: Absent
+ -
+ value: '3'
+ newvalue: 'Power off'
+ -
+ value: '4'
+ newvalue: OK
+ -
+ uuid: cb8c3d00dfd4456181765b8b350ea4d2
+ name: Health
+ mappings:
+ -
+ value: '0'
+ newvalue: OK
+ -
+ value: '1'
+ newvalue: Degraded
+ -
+ value: '2'
+ newvalue: Fault
+ -
+ value: '3'
+ newvalue: Unknown
+ -
+ value: '4'
+ newvalue: N/A
+ -
+ uuid: ec101e7d212747779ed56ef9dbf72e2b
+ name: 'Port type'
+ mappings:
+ -
+ value: '0'
+ newvalue: Unknown
+ -
+ value: '6'
+ newvalue: FC
+ -
+ value: '8'
+ newvalue: SAS
+ -
+ value: '9'
+ newvalue: iSCSI
+ -
+ uuid: 171c9abf20514b0fb78d532bd987881b
+ name: 'RAID type'
+ mappings:
+ -
+ value: '0'
+ newvalue: RAID0
+ -
+ value: '1'
+ newvalue: RAID1
+ -
+ value: '2'
+ newvalue: MSA-DP+
+ -
+ value: '5'
+ newvalue: RAID5
+ -
+ value: '6'
+ newvalue: NRAID
+ -
+ value: '10'
+ newvalue: RAID10
+ -
+ value: '11'
+ newvalue: RAID6
+ -
+ uuid: 402b0dacf14a4436b0d3cfe237bf1e86
+ name: Status
+ mappings:
+ -
+ value: '0'
+ newvalue: Up
+ -
+ value: '1'
+ newvalue: Warning
+ -
+ value: '2'
+ newvalue: Error
+ -
+ value: '3'
+ newvalue: 'Not present'
+ -
+ value: '4'
+ newvalue: Unknown
+ -
+ value: '6'
+ newvalue: Disconnected
diff --git a/templates/san/hpe_primera_http/README.md b/templates/san/hpe_primera_http/README.md
new file mode 100644
index 00000000000..70db3114347
--- /dev/null
+++ b/templates/san/hpe_primera_http/README.md
@@ -0,0 +1,189 @@
+
+# HPE Primera by HTTP
+
+## Overview
+
+For Zabbix version: 6.0 and higher
+The template to monitor HPE Primera by HTTP.
+It works without any external scripts and uses the script item.
+
+This template was tested on:
+
+- HPE Primera, version 4.2.1.6
+
+## Setup
+
+> See [Zabbix template operation](https://www.zabbix.com/documentation/6.0/manual/config/templates_out_of_the_box/http) for basic instructions.
+
+1. Create user "zabbix" on the storage with the browse role and enable it for all domains.
+2. The WSAPI server does not start automatically.
+   Log in to the CLI as Super, Service, or any role granted the wsapi_set right.
+   Start the WSAPI server with the command `startwsapi`.
+   To check the WSAPI state, use the command `showwsapi` (see the example after this list).
+3. Link the template to the host.
+4. Configure the macros {$HPE.PRIMERA.API.USERNAME} and {$HPE.PRIMERA.API.PASSWORD}.
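+
+A minimal CLI session for step 2 (only the two commands named above; the `showwsapi` output is omitted here):
+
+```
+startwsapi
+showwsapi
+```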
+
+## Zabbix configuration
+
+No specific Zabbix configuration is required.
+
+### Macros used
+
+|Name|Description|Default|
+|----|-----------|-------|
+|{$HPE.PRIMERA.API.PASSWORD} |<p>Specify password for WSAPI.</p> |`` |
+|{$HPE.PRIMERA.API.PORT} |<p>The WSAPI port.</p> |`443` |
+|{$HPE.PRIMERA.API.SCHEME} |<p>The WSAPI scheme (http/https).</p> |`https` |
+|{$HPE.PRIMERA.API.USERNAME} |<p>Specify user name for WSAPI.</p> |`zabbix` |
+|{$HPE.PRIMERA.CPG.NAME.MATCHES} |<p>This macro is used in filters of CPGs discovery rule.</p> |`.*` |
+|{$HPE.PRIMERA.CPG.NAME.NOT_MATCHES} |<p>This macro is used in filters of CPGs discovery rule.</p> |`CHANGE_IF_NEEDED` |
+|{$HPE.PRIMERA.DATA.TIMEOUT} |<p>Response timeout for WSAPI.</p> |`15s` |
+|{$HPE.PRIMERA.LLD.FILTER.TASK.NAME.MATCHES} |<p>Filter of discoverable tasks by name.</p> |`CHANGE_IF_NEEDED` |
+|{$HPE.PRIMERA.LLD.FILTER.TASK.NAME.NOT_MATCHES} |<p>Filter to exclude discovered tasks by name.</p> |`.*` |
+|{$HPE.PRIMERA.LLD.FILTER.TASK.TYPE.MATCHES} |<p>Filter of discoverable tasks by type.</p> |`.*` |
+|{$HPE.PRIMERA.LLD.FILTER.TASK.TYPE.NOT_MATCHES} |<p>Filter to exclude discovered tasks by type.</p> |`CHANGE_IF_NEEDED` |
+|{$HPE.PRIMERA.VOLUME.NAME.MATCHES} |<p>This macro is used in filters of volume discovery rule.</p> |`.*` |
+|{$HPE.PRIMERA.VOLUME.NAME.NOT_MATCHES} |<p>This macro is used in filters of volume discovery rule.</p> |`^(admin|.srdata|.mgmtdata)$` |
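+
+For example (hypothetical values), to discover only volumes whose names start with `prod` while keeping the default exclusions, the volume filter macros could be set as follows:
+
+```
+{$HPE.PRIMERA.VOLUME.NAME.MATCHES}     = ^prod.*
+{$HPE.PRIMERA.VOLUME.NAME.NOT_MATCHES} = ^(admin|.srdata|.mgmtdata)$
+```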
+
+## Template links
+
+There are no template links in this template.
+
+## Discovery rules
+
+|Name|Description|Type|Key and additional info|
+|----|-----------|----|----|
+|Common provisioning groups discovery |<p>List of CPGs resources.</p> |DEPENDENT |hpe.primera.cpg.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Filter**:</p>AND <p>- {#NAME} MATCHES_REGEX `{$HPE.PRIMERA.CPG.NAME.MATCHES}`</p><p>- {#NAME} NOT_MATCHES_REGEX `{$HPE.PRIMERA.CPG.NAME.NOT_MATCHES}`</p> |
+|Disks discovery |<p>List of physical disk resources.</p> |DEPENDENT |hpe.primera.disks.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Hosts discovery |<p>List of host properties.</p> |DEPENDENT |hpe.primera.hosts.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.hosts.members`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Filter**:</p>AND <p>- {#NAME} EXISTS</p> |
+|Ports discovery |<p>List of ports.</p> |DEPENDENT |hpe.primera.ports.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.ports.members`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Filter**:</p>AND <p>- {#TYPE} NOT_MATCHES_REGEX `3`</p> |
+|Tasks discovery |<p>List of tasks started within last 24 hours.</p> |DEPENDENT |hpe.primera.tasks.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.tasks`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Filter**:</p>AND <p>- {#NAME} MATCHES_REGEX `{$HPE.PRIMERA.LLD.FILTER.TASK.NAME.MATCHES}`</p><p>- {#NAME} NOT_MATCHES_REGEX `{$HPE.PRIMERA.LLD.FILTER.TASK.NAME.NOT_MATCHES}`</p><p>- {#TYPE} MATCHES_REGEX `{$HPE.PRIMERA.LLD.FILTER.TASK.TYPE.MATCHES}`</p><p>- {#TYPE} NOT_MATCHES_REGEX `{$HPE.PRIMERA.LLD.FILTER.TASK.TYPE.NOT_MATCHES}`</p> |
+|Volumes discovery |<p>List of storage volume resources.</p> |DEPENDENT |hpe.primera.volumes.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Filter**:</p>AND <p>- {#NAME} MATCHES_REGEX `{$HPE.PRIMERA.VOLUME.NAME.MATCHES}`</p><p>- {#NAME} NOT_MATCHES_REGEX `{$HPE.PRIMERA.VOLUME.NAME.NOT_MATCHES}`</p> |
+
+## Items collected
+
+|Group|Name|Description|Type|Key and additional info|
+|-----|----|-----------|----|---------------------|
+|HPE |HPE Primera: Get data |<p>The JSON with the results of WSAPI requests.</p> |SCRIPT |hpe.primera.data.get<p>**Expression**:</p>`The text is too long. Please see the template.` |
+|HPE |HPE Primera: Get errors |<p>A list of errors from WSAPI requests.</p> |DEPENDENT |hpe.primera.data.errors<p>**Preprocessing**:</p><p>- JSONPATH: `$.errors`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |HPE Primera: Capacity allocated |<p>Allocated capacity in the system.</p> |DEPENDENT |hpe.primera.system.capacity.allocated<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.allocatedCapacityMiB`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |HPE Primera: Chunklet size |<p>Chunklet size.</p> |DEPENDENT |hpe.primera.system.chunklet.size<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.chunkletSizeMiB`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |HPE Primera: System contact |<p>Contact of the system.</p> |DEPENDENT |hpe.primera.system.contact<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.contact`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |HPE Primera: Capacity failed |<p>Failed capacity in the system.</p> |DEPENDENT |hpe.primera.system.capacity.failed<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.failedCapacityMiB`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |HPE Primera: Capacity free |<p>Free capacity in the system.</p> |DEPENDENT |hpe.primera.system.capacity.free<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.freeCapacityMiB`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |HPE Primera: System location |<p>Location of the system.</p> |DEPENDENT |hpe.primera.system.location<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.location`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |HPE Primera: Model |<p>System model.</p> |DEPENDENT |hpe.primera.system.model<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.model`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |HPE Primera: System name |<p>System name.</p> |DEPENDENT |hpe.primera.system.name<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.name`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|HPE |HPE Primera: Serial number |<p>System serial number.</p> |DEPENDENT |hpe.primera.system.serial_number<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.serialNumber`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |HPE Primera: Software version number |<p>Storage system software version number.</p> |DEPENDENT |hpe.primera.system.sw_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.systemVersion`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |HPE Primera: Capacity total |<p>Total capacity in the system.</p> |DEPENDENT |hpe.primera.system.capacity.total<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.totalCapacityMiB`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |HPE Primera: Nodes total |<p>Total number of nodes in the system.</p> |DEPENDENT |hpe.primera.system.nodes.total<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.totalNodes`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |HPE Primera: Nodes online |<p>Number of online nodes in the system.</p> |DEPENDENT |hpe.primera.system.nodes.online<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.onlineNodes.length()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |HPE Primera: Disks total |<p>Number of physical disks.</p> |DEPENDENT |hpe.primera.disks.total<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.total`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |HPE Primera: Service ping |<p>Checks if the service is running and accepting TCP connections.</p> |SIMPLE |net.tcp.service["{$HPE.PRIMERA.API.SCHEME}","{HOST.CONN}","{$HPE.PRIMERA.API.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+|HPE |CPG [{#NAME}]: Degraded state |<p>Detailed state of the CPG:</p><p>LDS_NOT_STARTED (1) - LDs not started.</p><p>NOT_STARTED (2) - VV not started.</p><p>NEEDS_CHECK (3) - check for consistency.</p><p>NEEDS_MAINT_CHECK (4) - maintenance check is required.</p><p>INTERNAL_CONSISTENCY_ERROR (5) - internal consistency error.</p><p>SNAPDATA_INVALID (6) - invalid snapshot data.</p><p>PRESERVED (7) - unavailable LD sets due to missing chunklets. Preserved remaining VV data.</p><p>STALE (8) - parts of the VV contain old data because of a copy-on-write operation.</p><p>COPY_FAILED (9) - a promote or copy operation to this volume failed.</p><p>DEGRADED_AVAIL (10) - degraded due to availability.</p><p>DEGRADED_PERF (11) - degraded due to performance.</p><p>PROMOTING (12) - volume is the current target of a promote operation.</p><p>COPY_TARGET (13) - volume is the current target of a physical copy operation.</p><p>RESYNC_TARGET (14) - volume is the current target of a resynchronized copy operation.</p><p>TUNING (15) - volume tuning is in progress.</p><p>CLOSING (16) - volume is closing.</p><p>REMOVING (17) - removing the volume.</p><p>REMOVING_RETRY (18) - retrying a volume removal operation.</p><p>CREATING (19) - creating a volume.</p><p>COPY_SOURCE (20) - copy source.</p><p>IMPORTING (21) - importing a volume.</p><p>CONVERTING (22) - converting a volume.</p><p>INVALID (23) - invalid.</p><p>EXCLUSIVE (24) - local storage system has exclusive access to the volume.</p><p>CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set.</p><p>STANDBY (26) - volume in standby mode.</p><p>SD_META_INCONSISTENT (27) - SD Meta Inconsistent.</p><p>SD_NEEDS_FIX (28) - SD needs fix.</p><p>SD_META_FIXING (29) - SD meta fix.</p><p>UNKNOWN (999) - unknown state.</p><p>NOT_SUPPORTED_BY_WSAPI (1000) - state not supported by WSAPI.</p> |DEPENDENT |hpe.primera.cpg.state["{#ID}",degraded]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].degradedStates.first()`</p> |
+|HPE |CPG [{#NAME}]: Failed state |<p>Detailed state of the CPG:</p><p>LDS_NOT_STARTED (1) - LDs not started.</p><p>NOT_STARTED (2) - VV not started.</p><p>NEEDS_CHECK (3) - check for consistency.</p><p>NEEDS_MAINT_CHECK (4) - maintenance check is required.</p><p>INTERNAL_CONSISTENCY_ERROR (5) - internal consistency error.</p><p>SNAPDATA_INVALID (6) - invalid snapshot data.</p><p>PRESERVED (7) - unavailable LD sets due to missing chunklets. Preserved remaining VV data.</p><p>STALE (8) - parts of the VV contain old data because of a copy-on-write operation.</p><p>COPY_FAILED (9) - a promote or copy operation to this volume failed.</p><p>DEGRADED_AVAIL (10) - degraded due to availability.</p><p>DEGRADED_PERF (11) - degraded due to performance.</p><p>PROMOTING (12) - volume is the current target of a promote operation.</p><p>COPY_TARGET (13) - volume is the current target of a physical copy operation.</p><p>RESYNC_TARGET (14) - volume is the current target of a resynchronized copy operation.</p><p>TUNING (15) - volume tuning is in progress.</p><p>CLOSING (16) - volume is closing.</p><p>REMOVING (17) - removing the volume.</p><p>REMOVING_RETRY (18) - retrying a volume removal operation.</p><p>CREATING (19) - creating a volume.</p><p>COPY_SOURCE (20) - copy source.</p><p>IMPORTING (21) - importing a volume.</p><p>CONVERTING (22) - converting a volume.</p><p>INVALID (23) - invalid.</p><p>EXCLUSIVE (24) - local storage system has exclusive access to the volume.</p><p>CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set.</p><p>STANDBY (26) - volume in standby mode.</p><p>SD_META_INCONSISTENT (27) - SD Meta Inconsistent.</p><p>SD_NEEDS_FIX (28) - SD needs fix.</p><p>SD_META_FIXING (29) - SD meta fix.</p><p>UNKNOWN (999) - unknown state.</p><p>NOT_SUPPORTED_BY_WSAPI (1000) - state not supported by WSAPI.</p> |DEPENDENT |hpe.primera.cpg.state["{#ID}",failed]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].failedStates.first()`</p><p>- JAVASCRIPT: `return JSON.stringify(JSON.parse(value));`</p> |
+|HPE |CPG [{#NAME}]: CPG space: Free |<p>Free CPG space.</p> |DEPENDENT |hpe.primera.cpg.space["{#ID}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].freeSpaceMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: Number of FPVVs |<p>Number of FPVVs (Fully Provisioned Virtual Volumes) allocated in the CPG.</p> |DEPENDENT |hpe.primera.cpg.fpvv["{#ID}",count]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].numFPVVs.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |CPG [{#NAME}]: Number of TPVVs |<p>Number of TPVVs (Thinly Provisioned Virtual Volumes) allocated in the CPG.</p> |DEPENDENT |hpe.primera.cpg.tpvv["{#ID}",count]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].numTPVVs.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |CPG [{#NAME}]: Number of TDVVs |<p>Number of TDVVs (Thinly Deduplicated Virtual Volume) created in the CPG.</p> |DEPENDENT |hpe.primera.cpg.tdvv["{#ID}",count]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].numTDVVs.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |CPG [{#NAME}]: Raw space: Free |<p>Raw free space.</p> |DEPENDENT |hpe.primera.cpg.space.raw["{#ID}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].rawFreeSpaceMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: Raw space: Shared |<p>Raw shared space.</p> |DEPENDENT |hpe.primera.cpg.space.raw["{#ID}",shared]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].rawSharedSpaceMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: Raw space: Total |<p>Raw total space.</p> |DEPENDENT |hpe.primera.cpg.space.raw["{#ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].rawTotalSpaceMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: CPG space: Shared |<p>Shared CPG space.</p> |DEPENDENT |hpe.primera.cpg.space["{#ID}",shared]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].sharedSpaceMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: State |<p>Overall state of the CPG:</p><p>NORMAL (1) - normal operation;</p><p>DEGRADED (2) - degraded state;</p><p>FAILED (3) - abnormal operation;</p><p>UNKNOWN (99) - unknown state.</p> |DEPENDENT |hpe.primera.cpg.state["{#ID}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].state.first()`</p> |
+|HPE |CPG [{#NAME}]: Logical disk space: Snapshot administration: Total (raw) |<p>Total physical (raw) logical disk space in snapshot administration.</p> |DEPENDENT |hpe.primera.cpg.space.sa["{#ID}",raw_total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SAUsage.rawTotalMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: Logical disk space: Snapshot data: Total (raw) |<p>Total physical (raw) logical disk space in snapshot data space.</p> |DEPENDENT |hpe.primera.cpg.space.sd["{#ID}",raw_total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SDUsage.rawTotalMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: Logical disk space: User space: Total (raw) |<p>Total physical (raw) logical disk space in user data space.</p> |DEPENDENT |hpe.primera.cpg.space.usr["{#ID}",raw_total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].UsrUsage.rawTotalMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: Logical disk space: Snapshot administration: Total |<p>Total logical disk space in snapshot administration.</p> |DEPENDENT |hpe.primera.cpg.space.sa["{#ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SAUsage.totalMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: Logical disk space: Snapshot data: Total |<p>Total logical disk space in snapshot data space.</p> |DEPENDENT |hpe.primera.cpg.space.sd["{#ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SDUsage.totalMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: Logical disk space: User space: Total |<p>Total logical disk space in user data space.</p> |DEPENDENT |hpe.primera.cpg.space.usr["{#ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].UsrUsage.totalMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: CPG space: Total |<p>Total CPG space.</p> |DEPENDENT |hpe.primera.cpg.space["{#ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].totalSpaceMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: Logical disk space: Snapshot administration: Used (raw) |<p>Amount of physical (raw) logical disk used in snapshot administration.</p> |DEPENDENT |hpe.primera.cpg.space.sa["{#ID}",raw_used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SAUsage.rawUsedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: Logical disk space: Snapshot data: Used (raw) |<p>Amount of physical (raw) logical disk used in snapshot data space.</p> |DEPENDENT |hpe.primera.cpg.space.sd["{#ID}",raw_used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SDUsage.rawUsedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: Logical disk space: User space: Used (raw) |<p>Amount of physical (raw) logical disk used in user data space.</p> |DEPENDENT |hpe.primera.cpg.space.usr["{#ID}",raw_used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].UsrUsage.rawUsedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: Logical disk space: Snapshot administration: Used |<p>Amount of logical disk used in snapshot administration.</p> |DEPENDENT |hpe.primera.cpg.space.sa["{#ID}",used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SAUsage.usedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: Logical disk space: Snapshot data: Used |<p>Amount of logical disk used in snapshot data space.</p> |DEPENDENT |hpe.primera.cpg.space.sd["{#ID}",used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SDUsage.usedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |CPG [{#NAME}]: Logical disk space: User space: Used |<p>Amount of logical disk used in user data space.</p> |DEPENDENT |hpe.primera.cpg.space.usr["{#ID}",used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].UsrUsage.usedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Disk [{#POSITION}]: Firmware version |<p>Physical disk firmware version.</p> |DEPENDENT |hpe.primera.disk["{#ID}",fw_version]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].fwVersion.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#POSITION}]: Free size |<p>Physical disk free size.</p> |DEPENDENT |hpe.primera.disk["{#ID}",free_size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].freeSizeMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Disk [{#POSITION}]: Manufacturer |<p>Physical disk manufacturer.</p> |DEPENDENT |hpe.primera.disk["{#ID}",manufacturer]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].manufacturer.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#POSITION}]: Model |<p>Manufacturer's device ID for disk.</p> |DEPENDENT |hpe.primera.disk["{#ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].model.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#POSITION}]: Path A0 degraded |<p>Indicates if this is a degraded path for the disk.</p> |DEPENDENT |hpe.primera.disk["{#ID}",loop_a0_degraded]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].loopA0.degraded.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- BOOL_TO_DECIMAL</p> |
+|HPE |Disk [{#POSITION}]: Path A1 degraded |<p>Indicates if this is a degraded path for the disk.</p> |DEPENDENT |hpe.primera.disk["{#ID}",loop_a1_degraded]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].loopA1.degraded.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- BOOL_TO_DECIMAL</p> |
+|HPE |Disk [{#POSITION}]: Path B0 degraded |<p>Indicates if this is a degraded path for the disk.</p> |DEPENDENT |hpe.primera.disk["{#ID}",loop_b0_degraded]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].loopB0.degraded.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- BOOL_TO_DECIMAL</p> |
+|HPE |Disk [{#POSITION}]: Path B1 degraded |<p>Indicates if this is a degraded path for the disk.</p> |DEPENDENT |hpe.primera.disk["{#ID}",loop_b1_degraded]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].loopB1.degraded.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- BOOL_TO_DECIMAL</p> |
+|HPE |Disk [{#POSITION}]: RPM |<p>RPM of the physical disk.</p> |DEPENDENT |hpe.primera.disk["{#ID}",rpm]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].RPM.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#POSITION}]: Serial number |<p>Disk drive serial number.</p> |DEPENDENT |hpe.primera.disk["{#ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].serialNumber.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#POSITION}]: State |<p>State of the physical disk:</p><p>Normal (1) - physical disk is in Normal state;</p><p>Degraded (2) - physical disk is not operating normally;</p><p>New (3) - physical disk is new, needs to be admitted;</p><p>Failed (4) - physical disk has failed;</p><p>Unknown (99) - physical disk state is unknown.</p> |DEPENDENT |hpe.primera.disk["{#ID}",state]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].state.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 99`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#POSITION}]: Total size |<p>Physical disk total size.</p> |DEPENDENT |hpe.primera.disk["{#ID}",total_size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].totalSizeMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Host [{#NAME}]: Comment |<p>Additional information for the host.</p> |DEPENDENT |hpe.primera.host["{#ID}",comment]<p>**Preprocessing**:</p><p>- JSONPATH: `$.hosts.members[?(@.id == "{#ID}")].descriptors.comment.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Host [{#NAME}]: Contact |<p>The host's owner and contact.</p> |DEPENDENT |hpe.primera.host["{#ID}",contact]<p>**Preprocessing**:</p><p>- JSONPATH: `$.hosts.members[?(@.id == "{#ID}")].descriptors.contact.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Host [{#NAME}]: IP address |<p>The host's IP address.</p> |DEPENDENT |hpe.primera.host["{#ID}",ipaddress]<p>**Preprocessing**:</p><p>- JSONPATH: `$.hosts.members[?(@.id == "{#ID}")].descriptors.IPAddr.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Host [{#NAME}]: Location |<p>The host's location.</p> |DEPENDENT |hpe.primera.host["{#ID}",location]<p>**Preprocessing**:</p><p>- JSONPATH: `$.hosts.members[?(@.id == "{#ID}")].descriptors.location.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Host [{#NAME}]: Model |<p>The host's model.</p> |DEPENDENT |hpe.primera.host["{#ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.hosts.members[?(@.id == "{#ID}")].descriptors.model.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Host [{#NAME}]: OS |<p>The operating system running on the host.</p> |DEPENDENT |hpe.primera.host["{#ID}",os]<p>**Preprocessing**:</p><p>- JSONPATH: `$.hosts.members[?(@.id == "{#ID}")].descriptors.os.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Failover state |<p>The state of the failover operation, shown for the two ports indicated in the N:S:P and Partner columns. The value can be one of the following:</p><p>none (1) - no failover in operation;</p><p>failover_pending (2) - in the process of failing over to partner;</p><p>failed_over (3) - failed over to partner;</p><p>active (4) - the partner port is failed over to this port;</p><p>active_down (5) - the partner port is failed over to this port, but this port is down;</p><p>active_failed (6) - the partner port is failed over to this port, but this port is down;</p><p>failback_pending (7) - in the process of failing back from partner.</p> |DEPENDENT |hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",failover_state]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ports.members[?(@.portPos.node == "{#NODE}" && @.portPos.slot == "{#SLOT}" && @.portPos.cardPort == "{#CARD.PORT}")].failoverState.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Link state |<p>Port link state:</p><p>CONFIG_WAIT (1) - configuration wait;</p><p>ALPA_WAIT (2) - ALPA wait;</p><p>LOGIN_WAIT (3) - login wait;</p><p>READY (4) - link is ready;</p><p>LOSS_SYNC (5) - link has lost sync;</p><p>ERROR_STATE (6) - in error state;</p><p>XXX (7) - xxx;</p><p>NONPARTICIPATE (8) - link did not participate;</p><p>COREDUMP (9) - taking coredump;</p><p>OFFLINE (10) - link is offline;</p><p>FWDEAD (11) - firmware is dead;</p><p>IDLE_FOR_RESET (12) - link is idle for reset;</p><p>DHCP_IN_PROGRESS (13) - DHCP is in progress;</p><p>PENDING_RESET (14) - link reset is pending;</p><p>NEW (15) - link is new. This value is applicable only to virtual ports;</p><p>DISABLED (16) - link is disabled. This value is applicable only to virtual ports;</p><p>DOWN (17) - link is down. This value is applicable only to virtual ports;</p><p>FAILED (18) - link has failed. This value is applicable only to virtual ports;</p><p>PURGING (19) - link is purging. This value is applicable only to virtual ports.</p> |DEPENDENT |hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ports.members[?(@.portPos.node == "{#NODE}" && @.portPos.slot == "{#SLOT}" && @.portPos.cardPort == "{#CARD.PORT}")].linkState.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Type |<p>Port connection type:</p><p>HOST (1) - FC port connected to hosts or fabric;</p><p>DISK (2) - FC port connected to disks;</p><p>FREE (3) - port is not connected to hosts or disks;</p><p>IPORT (4) - port is in iport mode;</p><p>RCFC (5) - FC port used for remote copy;</p><p>PEER (6) - FC port used for data migration;</p><p>RCIP (7) - IP (Ethernet) port used for remote copy;</p><p>ISCSI (8) - iSCSI (Ethernet) port connected to hosts;</p><p>CNA (9) - CNA port, which can be FCoE or iSCSI;</p><p>FS (10) - Ethernet File Persona ports.</p> |DEPENDENT |hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ports.members[?(@.portPos.node == "{#NODE}" && @.portPos.slot == "{#SLOT}" && @.portPos.cardPort == "{#CARD.PORT}")].type.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Hardware type |<p>Hardware type:</p><p>FC (1) - Fibre channel HBA;</p><p>ETH (2) - Ethernet NIC;</p><p>iSCSI (3) - iSCSI HBA;</p><p>CNA (4) - Converged network adapter;</p><p>SAS (5) - SAS HBA;</p><p>COMBO (6) - Combo card;</p><p>NVME (7) - NVMe drive;</p><p>UNKNOWN (99) - unknown hardware type.</p> |DEPENDENT |hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",hw_type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ports.members[?(@.portPos.node == "{#NODE}" && @.portPos.slot == "{#SLOT}" && @.portPos.cardPort == "{#CARD.PORT}")].hardwareType.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Task [{#NAME}]: Finish time |<p>Task finish time.</p> |DEPENDENT |hpe.primera.task["{#ID}",finish_time]<p>**Preprocessing**:</p><p>- JSONPATH: `$.tasks[?(@.id == "{#ID}")].finishTime.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>- NOT_MATCHES_REGEX: `^-$`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- JAVASCRIPT: `The text is too long. Please see the template.`</p> |
+|HPE |Task [{#NAME}]: Start time |<p>Task start time.</p> |DEPENDENT |hpe.primera.task["{#ID}",start_time]<p>**Preprocessing**:</p><p>- JSONPATH: `$.tasks[?(@.id == "{#ID}")].startTime.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>- JAVASCRIPT: `The text is too long. Please see the template.`</p> |
+|HPE |Task [{#NAME}]: Status |<p>Task status:</p><p>DONE (1) - task is finished;</p><p>ACTIVE (2) - task is in progress;</p><p>CANCELLED (3) - task is canceled;</p><p>FAILED (4) - task failed.</p> |DEPENDENT |hpe.primera.task["{#ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.tasks[?(@.id == "{#ID}")].status.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+|HPE |Task [{#NAME}]: Type |<p>Task type:</p><p>VV_COPY (1) - track the physical copy operations;</p><p>PHYS_COPY_RESYNC (2) - track physical copy resynchronization operations;</p><p>MOVE_REGIONS (3) - track region move operations;</p><p>PROMOTE_SV (4) - track virtual-copy promotions;</p><p>REMOTE_COPY_SYNC (5) - track remote copy group synchronizations;</p><p>REMOTE_COPY_REVERSE (6) - track the reversal of a remote copy group;</p><p>REMOTE_COPY_FAILOVER (7) - track the change-over of a secondary volume group to a primary volume group;</p><p>REMOTE_COPY_RECOVER (8) - track synchronization start after a failover operation from original secondary cluster to original primary cluster;</p><p>REMOTE_COPY_RESTORE (9) - tracks the restoration process for groups that have already been recovered;</p><p>COMPACT_CPG (10) - track space consolidation in CPGs;</p><p>COMPACT_IDS (11) - track space consolidation in logical disks;</p><p>SNAPSHOT_ACCOUNTING (12) - track progress of snapshot space usage accounting;</p><p>CHECK_VV (13) - track the progress of the check-volume operation;</p><p>SCHEDULED_TASK (14) - track tasks that have been executed by the system scheduler;</p><p>SYSTEM_TASK (15) - track tasks that are periodically run by the storage system;</p><p>BACKGROUND_TASK (16) - track commands started using the starttask command;</p><p>IMPORT_VV (17) - track tasks that migrate data to the local storage system;</p><p>ONLINE_COPY (18) - track physical copy of the volume while online (createvvcopy -online command);</p><p>CONVERT_VV (19) - track tasks that convert a volume from an FPVV to a TPVV, and the reverse;</p><p>BACKGROUND_COMMAND (20) - track background command tasks;</p><p>CLX_SYNC (21) - track CLX synchronization tasks;</p><p>CLX_RECOVERY (22) - track CLX recovery tasks;</p><p>TUNE_SD (23) - tune copy space;</p><p>TUNE_VV (24) - tune virtual volume;</p><p>TUNE_VV_ROLLBACK (25) - tune virtual volume rollback;</p><p>TUNE_VV_RESTART (26) - tune virtual volume restart;</p><p>SYSTEM_TUNING (27) - system tuning;</p><p>NODE_RESCUE (28) - node rescue;</p><p>REPAIR_SYNC (29) - remote copy repair sync;</p><p>REMOTE_COPY_SWOVER (30) - remote copy switchover;</p><p>DEFRAGMENTATION (31) - defragmentation;</p><p>ENCRYPTION_CHANGE (32) - encryption change;</p><p>REMOTE_COPY_FAILSAFE (33) - remote copy failsafe;</p><p>TUNE_TPVV (34) - tune thin virtual volume;</p><p>REMOTE_COPY_CHG_MODE (35) - remote copy change mode;</p><p>ONLINE_PROMOTE (37) - online promote snap;</p><p>RELOCATE_PD (38) - relocate PD;</p><p>PERIODIC_CSS (39) - remote copy periodic CSS;</p><p>TUNEVV_LARGE (40) - tune large virtual volume;</p><p>SD_META_FIXER (41) - compression SD meta fixer;</p><p>DEDUP_DRYRUN (42) - preview dedup ratio;</p><p>COMPR_DRYRUN (43) - compression estimation;</p><p>DEDUP_COMPR_DRYRUN (44) - compression and dedup estimation;</p><p>UNKNOWN (99) - unknown task type.</p> |DEPENDENT |hpe.primera.task["{#ID}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.tasks[?(@.id == "{#ID}")].type.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|HPE |Volume [{#NAME}]: Administrative space: Free |<p>Free administrative space.</p> |DEPENDENT |hpe.primera.volume.space.admin["{#ID}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].adminSpace.freeMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Volume [{#NAME}]: Administrative space: Raw reserved |<p>Raw reserved administrative space.</p> |DEPENDENT |hpe.primera.volume.space.admin["{#ID}",raw_reserved]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].adminSpace.rawReservedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Volume [{#NAME}]: Administrative space: Reserved |<p>Reserved administrative space.</p> |DEPENDENT |hpe.primera.volume.space.admin["{#ID}",reserved]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].adminSpace.reservedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Volume [{#NAME}]: Administrative space: Used |<p>Used administrative space.</p> |DEPENDENT |hpe.primera.volume.space.admin["{#ID}",used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].adminSpace.usedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Volume [{#NAME}]: Compaction ratio |<p>The compaction ratio indicates the overall amount of storage space saved with thin technology.</p> |DEPENDENT |hpe.primera.volume.capacity.efficiency["{#ID}",compaction]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].capacityEfficiency.compaction.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Volume [{#NAME}]: Compression state |<p>Volume compression state:</p><p>YES (1) - compression is enabled on the volume;</p><p>NO (2) - compression is disabled on the volume;</p><p>OFF (3) - compression is turned off;</p><p>NA (4) - compression is not available on the volume.</p> |DEPENDENT |hpe.primera.volume.state["{#ID}",compression]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].compressionState.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|HPE |Volume [{#NAME}]: Deduplication state |<p>Volume deduplication state:</p><p>YES (1) - deduplication is enabled on the volume;</p><p>NO (2) - deduplication is disabled on the volume;</p><p>NA (3) - deduplication is not available;</p><p>OFF (4) - deduplication is turned off.</p> |DEPENDENT |hpe.primera.volume.state["{#ID}",deduplication]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].deduplicationState.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|HPE |Volume [{#NAME}]: Degraded state |<p>Volume detailed state:</p><p>LDS_NOT_STARTED (1) - LDs not started.</p><p>NOT_STARTED (2) - VV not started.</p><p>NEEDS_CHECK (3) - check for consistency.</p><p>NEEDS_MAINT_CHECK (4) - maintenance check is required.</p><p>INTERNAL_CONSISTENCY_ERROR (5) - internal consistency error.</p><p>SNAPDATA_INVALID (6) - invalid snapshot data.</p><p>PRESERVED (7) - unavailable LD sets due to missing chunklets. Preserved remaining VV data.</p><p>STALE (8) - parts of the VV contain old data because of a copy-on-write operation.</p><p>COPY_FAILED (9) - a promote or copy operation to this volume failed.</p><p>DEGRADED_AVAIL (10) - degraded due to availability.</p><p>DEGRADED_PERF (11) - degraded due to performance.</p><p>PROMOTING (12) - volume is the current target of a promote operation.</p><p>COPY_TARGET (13) - volume is the current target of a physical copy operation.</p><p>RESYNC_TARGET (14) - volume is the current target of a resynchronized copy operation.</p><p>TUNING (15) - volume tuning is in progress.</p><p>CLOSING (16) - volume is closing.</p><p>REMOVING (17) - removing the volume.</p><p>REMOVING_RETRY (18) - retrying a volume removal operation.</p><p>CREATING (19) - creating a volume.</p><p>COPY_SOURCE (20) - copy source.</p><p>IMPORTING (21) - importing a volume.</p><p>CONVERTING (22) - converting a volume.</p><p>INVALID (23) - invalid.</p><p>EXCLUSIVE (24) - local storage system has exclusive access to the volume.</p><p>CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set.</p><p>STANDBY (26) - volume in standby mode.</p><p>SD_META_INCONSISTENT (27) - SD Meta Inconsistent.</p><p>SD_NEEDS_FIX (28) - SD needs fix.</p><p>SD_META_FIXING (29) - SD meta fix.</p><p>UNKNOWN (999) - unknown state.</p><p>NOT_SUPPORTED_BY_WSAPI (1000) - state not supported by WSAPI.</p> |DEPENDENT |hpe.primera.volume.state["{#ID}",degraded]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].degradedStates.first()`</p> |
+|HPE |Volume [{#NAME}]: Failed state |<p>Volume detailed state:</p><p>LDS_NOT_STARTED (1) - LDs not started.</p><p>NOT_STARTED (2) - VV not started.</p><p>NEEDS_CHECK (3) - check for consistency.</p><p>NEEDS_MAINT_CHECK (4) - maintenance check is required.</p><p>INTERNAL_CONSISTENCY_ERROR (5) - internal consistency error.</p><p>SNAPDATA_INVALID (6) - invalid snapshot data.</p><p>PRESERVED (7) - unavailable LD sets due to missing chunklets. Preserved remaining VV data.</p><p>STALE (8) - parts of the VV contain old data because of a copy-on-write operation.</p><p>COPY_FAILED (9) - a promote or copy operation to this volume failed.</p><p>DEGRADED_AVAIL (10) - degraded due to availability.</p><p>DEGRADED_PERF (11) - degraded due to performance.</p><p>PROMOTING (12) - volume is the current target of a promote operation.</p><p>COPY_TARGET (13) - volume is the current target of a physical copy operation.</p><p>RESYNC_TARGET (14) - volume is the current target of a resynchronized copy operation.</p><p>TUNING (15) - volume tuning is in progress.</p><p>CLOSING (16) - volume is closing.</p><p>REMOVING (17) - removing the volume.</p><p>REMOVING_RETRY (18) - retrying a volume removal operation.</p><p>CREATING (19) - creating a volume.</p><p>COPY_SOURCE (20) - copy source.</p><p>IMPORTING (21) - importing a volume.</p><p>CONVERTING (22) - converting a volume.</p><p>INVALID (23) - invalid.</p><p>EXCLUSIVE (24) - local storage system has exclusive access to the volume.</p><p>CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set.</p><p>STANDBY (26) - volume in standby mode.</p><p>SD_META_INCONSISTENT (27) - SD Meta Inconsistent.</p><p>SD_NEEDS_FIX (28) - SD needs fix.</p><p>SD_META_FIXING (29) - SD meta fix.</p><p>UNKNOWN (999) - unknown state.</p><p>NOT_SUPPORTED_BY_WSAPI (1000) - state not supported by WSAPI.</p> |DEPENDENT |hpe.primera.volume.state["{#ID}",failed]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].failedStates.first()`</p><p>- JAVASCRIPT: `return JSON.stringify(JSON.parse(value));`</p> |
+|HPE |Volume [{#NAME}]: Overprovisioning ratio |<p>Overprovisioning capacity efficiency ratio.</p> |DEPENDENT |hpe.primera.volume.capacity.efficiency["{#ID}",overprovisioning]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].capacityEfficiency.overProvisioning.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Volume [{#NAME}]: Remote copy status |<p>Remote copy status of the volume:</p><p>NONE (1) - volume is not associated with remote copy;</p><p>PRIMARY (2) - volume is the primary copy;</p><p>SECONDARY (3) - volume is the secondary copy;</p><p>SNAP (4) - volume is the remote copy snapshot;</p><p>SYNC (5) - volume is a remote copy snapshot being used for synchronization;</p><p>DELETE (6) - volume is a remote copy snapshot that is marked for deletion;</p><p>UNKNOWN (99) - remote copy status is unknown for this volume.</p> |DEPENDENT |hpe.primera.volume.status["{#ID}",rcopy]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].rcopyStatus.first()`</p> |
+|HPE |Volume [{#NAME}]: Snapshot space: Free |<p>Free snapshot space.</p> |DEPENDENT |hpe.primera.volume.space.snapshot["{#ID}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].snapshotSpace.freeMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Volume [{#NAME}]: Snapshot space: Raw reserved |<p>Raw reserved snapshot space.</p> |DEPENDENT |hpe.primera.volume.space.snapshot["{#ID}",raw_reserved]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].snapshotSpace.rawReservedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Volume [{#NAME}]: Snapshot space: Reserved |<p>Reserved snapshot space.</p> |DEPENDENT |hpe.primera.volume.space.snapshot["{#ID}",reserved]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].snapshotSpace.reservedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Volume [{#NAME}]: Snapshot space: Used |<p>Used snapshot space.</p> |DEPENDENT |hpe.primera.volume.space.snapshot["{#ID}",used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].snapshotSpace.usedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Volume [{#NAME}]: State |<p>State of the volume:</p><p>NORMAL (1) - normal operation;</p><p>DEGRADED (2) - degraded state;</p><p>FAILED (3) - abnormal operation;</p><p>UNKNOWN (99) - unknown state.</p> |DEPENDENT |hpe.primera.volume.state["{#ID}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].state.first()`</p> |
+|HPE |Volume [{#NAME}]: Storage space saved using compression |<p>Indicates the amount of storage space saved using compression.</p> |DEPENDENT |hpe.primera.volume.capacity.efficiency["{#ID}",compression]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].capacityEfficiency.compression.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Volume [{#NAME}]: Storage space saved using deduplication |<p>Indicates the amount of storage space saved using deduplication.</p> |DEPENDENT |hpe.primera.volume.capacity.efficiency["{#ID}",deduplication]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].capacityEfficiency.deduplication.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Volume [{#NAME}]: Storage space saved using deduplication and compression |<p>Indicates the amount of storage space saved using deduplication and compression together.</p> |DEPENDENT |hpe.primera.volume.capacity.efficiency["{#ID}",reduction]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].capacityEfficiency.dataReduction.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Volume [{#NAME}]: Total reserved space |<p>Total reserved space.</p> |DEPENDENT |hpe.primera.volume.space.total["{#ID}",reserved]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].totalReservedMiB.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Volume [{#NAME}]: Total space |<p>Virtual size of volume.</p> |DEPENDENT |hpe.primera.volume.space.total["{#ID}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].sizeMiB.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Volume [{#NAME}]: Total used space |<p>Total used space. Sum of used user space and used snapshot space.</p> |DEPENDENT |hpe.primera.volume.space.total["{#ID}",used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].totalUsedMiB.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Volume [{#NAME}]: User space: Free |<p>Free user space.</p> |DEPENDENT |hpe.primera.volume.space.user["{#ID}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].userSpace.freeMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Volume [{#NAME}]: User space: Raw reserved |<p>Raw reserved user space.</p> |DEPENDENT |hpe.primera.volume.space.user["{#ID}",raw_reserved]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].userSpace.rawReservedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Volume [{#NAME}]: User space: Reserved |<p>Reserved user space.</p> |DEPENDENT |hpe.primera.volume.space.user["{#ID}",reserved]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].userSpace.reservedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
+|HPE |Volume [{#NAME}]: User space: Used |<p>Used user space.</p> |DEPENDENT |hpe.primera.volume.space.user["{#ID}",used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].userSpace.usedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
+
+## Triggers
+
+|Name|Description|Expression|Severity|Dependencies and additional info|
+|----|-----------|----|----|----|
+|HPE Primera: There are errors in requests to WSAPI |<p>Zabbix has received errors in requests to WSAPI.</p> |`length(last(/HPE Primera by HTTP/hpe.primera.data.errors))>0` |AVERAGE |<p>**Depends on**:</p><p>- HPE Primera: Service is unavailable</p> |
+|HPE Primera: Service is unavailable |<p>-</p> |`max(/HPE Primera by HTTP/net.tcp.service["{$HPE.PRIMERA.API.SCHEME}","{HOST.CONN}","{$HPE.PRIMERA.API.PORT}"],5m)=0` |HIGH |<p>Manual close: YES</p> |
+|CPG [{#NAME}]: Degraded |<p>CPG [{#NAME}] is in degraded state.</p> |`last(/HPE Primera by HTTP/hpe.primera.cpg.state["{#ID}"])=2` |AVERAGE | |
+|CPG [{#NAME}]: Failed |<p>CPG [{#NAME}] is in failed state.</p> |`last(/HPE Primera by HTTP/hpe.primera.cpg.state["{#ID}"])=3` |HIGH | |
+|Disk [{#POSITION}]: Path A0 degraded |<p>Disk [{#POSITION}] path A0 is in degraded state.</p> |`last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_a0_degraded])=1` |AVERAGE | |
+|Disk [{#POSITION}]: Path A1 degraded |<p>Disk [{#POSITION}] path A1 is in degraded state.</p> |`last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_a1_degraded])=1` |AVERAGE | |
+|Disk [{#POSITION}]: Path B0 degraded |<p>Disk [{#POSITION}] path B0 is in degraded state.</p> |`last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_b0_degraded])=1` |AVERAGE | |
+|Disk [{#POSITION}]: Path B1 degraded |<p>Disk [{#POSITION}] path B1 is in degraded state.</p> |`last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_b1_degraded])=1` |AVERAGE | |
+|Disk [{#POSITION}]: Degraded |<p>Disk [{#POSITION}] is in degraded state.</p> |`last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",state])=2` |AVERAGE | |
+|Disk [{#POSITION}]: Failed |<p>Disk [{#POSITION}] is in failed state.</p> |`last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",state])=3` |HIGH | |
+|Disk [{#POSITION}]: Unknown issue |<p>Disk [{#POSITION}] is in unknown state.</p> |`last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",state])=99` |INFO | |
+|Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Failover state is {ITEM.VALUE1} |<p>Port [{#NODE}:{#SLOT}:{#CARD.PORT}] has a failover error.</p> |`last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",failover_state])<>1 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",failover_state])<>4` |AVERAGE | |
+|Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Link state is {ITEM.VALUE1} |<p>Port [{#NODE}:{#SLOT}:{#CARD.PORT}] is not in the ready state.</p> |`last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>4 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>1 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>3 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>13 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>15 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>16` |HIGH | |
+|Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Link state is {ITEM.VALUE1} |<p>Port [{#NODE}:{#SLOT}:{#CARD.PORT}] is not in the ready state.</p> |`last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=1 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=3 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=13 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=15 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=16` |AVERAGE | |
+|Task [{#NAME}]: Cancelled |<p>Task [{#NAME}] is cancelled.</p> |`last(/HPE Primera by HTTP/hpe.primera.task["{#ID}",status])=3` |INFO | |
+|Task [{#NAME}]: Failed |<p>Task [{#NAME}] has failed.</p> |`last(/HPE Primera by HTTP/hpe.primera.task["{#ID}",status])=4` |AVERAGE | |
+|Volume [{#NAME}]: Degraded |<p>Volume [{#NAME}] is in degraded state.</p> |`last(/HPE Primera by HTTP/hpe.primera.volume.state["{#ID}"])=2` |AVERAGE | |
+|Volume [{#NAME}]: Failed |<p>Volume [{#NAME}] is in failed state.</p> |`last(/HPE Primera by HTTP/hpe.primera.volume.state["{#ID}"])=3` |HIGH | |
+
+## Feedback
+
+Please report any issues with the template at https://support.zabbix.com
+
+You can also provide feedback, discuss the template or ask for help with it at [ZABBIX forums](https://www.zabbix.com/forum/zabbix-suggestions-and-feedback/).
+
diff --git a/templates/san/hpe_primera_http/template_san_hpe_primera_http.yaml b/templates/san/hpe_primera_http/template_san_hpe_primera_http.yaml
new file mode 100644
index 00000000000..7b92c4e1dd5
--- /dev/null
+++ b/templates/san/hpe_primera_http/template_san_hpe_primera_http.yaml
@@ -0,0 +1,4681 @@
+zabbix_export:
+ version: '6.0'
+ date: '2022-06-01T08:17:46Z'
+ groups:
+ -
+ uuid: 7c2cb727f85b492d88cd56e17127c64d
+ name: Templates/SAN
+ templates:
+ -
+ uuid: b8750c02b5624c6889979b129735bd56
+ template: 'HPE Primera by HTTP'
+ name: 'HPE Primera by HTTP'
+ description: |
+ The template to monitor HPE Primera by HTTP.
+ It works without any external scripts and uses the script item.
+
+ Setup:
+        1. Create the user "zabbix" on the storage with the browse role and enable it for all domains.
+ 2. The WSAPI server does not start automatically.
+ - Log in to the CLI as Super, Service, or any role granted the wsapi_set right.
+ - Start the WSAPI server by command: `startwsapi`.
+ - To check WSAPI state use command: `showwsapi`.
+        3. Link the template to the host.
+        4. Configure the macros {$HPE.PRIMERA.API.USERNAME} and {$HPE.PRIMERA.API.PASSWORD}.
+
+ You can discuss this template or leave feedback on our forum https://www.zabbix.com/forum/zabbix-suggestions-and-feedback/
+
+ Template tooling version used: 0.41
+ groups:
+ -
+ name: Templates/SAN
+ items:
+ -
+ uuid: 484a6b9568234bbca9b4bcae2833bbf1
+ name: 'HPE Primera: Get errors'
+ type: DEPENDENT
+ key: hpe.primera.data.errors
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: TEXT
+ description: 'A list of errors from WSAPI requests.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.errors
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: raw
+ triggers:
+ -
+ uuid: 570d440e7ec9445585003208eca06e63
+ expression: 'length(last(/HPE Primera by HTTP/hpe.primera.data.errors))>0'
+ name: 'HPE Primera: There are errors in requests to WSAPI'
+ opdata: '{ITEM.LASTVALUE1}'
+ priority: AVERAGE
+ description: 'Zabbix has received errors in requests to WSAPI.'
+ dependencies:
+ -
+ name: 'HPE Primera: Service is unavailable'
+ expression: 'max(/HPE Primera by HTTP/net.tcp.service["{$HPE.PRIMERA.API.SCHEME}","{HOST.CONN}","{$HPE.PRIMERA.API.PORT}"],5m)=0'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 530e20083da8423e9d30c8342f1b7da3
+ name: 'HPE Primera: Get data'
+ type: SCRIPT
+ key: hpe.primera.data.get
+ history: 0d
+ trends: '0'
+ value_type: TEXT
+ params: |
+ var Primera = {
+ params: {},
+ session_key: null,
+
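+          // Validate that username, password and base_url are set, and normalize base_url to end with '/'.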
+ setParams: function (params) {
+ ['username', 'password', 'base_url'].forEach(function (field) {
+ if (typeof params !== 'object' || typeof params[field] === 'undefined' || params[field] === '') {
+ throw 'Required param is not set: ' + field + '.';
+ }
+ });
+
+ Primera.params = params;
+ if (typeof Primera.params.base_url === 'string' && !Primera.params.base_url.endsWith('/')) {
+ Primera.params.base_url += '/';
+ }
+ },
+
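+          // Log in to the WSAPI: POST api/v1/credentials returns a session key, which is then
+          // sent in the 'X-HP3PAR-WSAPI-SessionKey' header of all subsequent requests.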
+ login: function () {
+ if (Primera.session_key !== null) {
+ return;
+ }
+
+ var response, request = new HttpRequest();
+ request.addHeader('Content-Type: application/json');
+
+ response = request.post(Primera.params.base_url + 'api/v1/credentials', JSON.stringify({
+ 'user': Primera.params.username,
+ 'password': Primera.params.password
+ }));
+
+ if (request.getStatus() < 200 || request.getStatus() >= 300) {
+ throw 'Auth request failed with status code ' + request.getStatus() + ': ' + response;
+ }
+
+ if (response !== null) {
+ try {
+                      var auth_data = JSON.parse(response);
+ }
+ catch (error) {
+ throw 'Failed to parse auth response received from device API.';
+ }
+ }
+ else {
+ throw 'No data received by auth request.'
+ }
+
+ if ('key' in auth_data) {
+ Primera.session_key = auth_data['key'];
+ } else {
+ throw 'Auth response does not contain session key.'
+ }
+ },
+
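+          // Release the session key so that unused WSAPI sessions do not accumulate on the array.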
+ logout: function () {
+ if (Primera.session_key !== null) {
+ (new HttpRequest()).delete(Primera.params.base_url + 'api/v1/credentials/' + Primera.session_key);
+ }
+ },
+
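+          // Request api/v1/<method> using the stored session key and return the parsed JSON body.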
+ requestData: function (method) {
+ if (Primera.session_key === null) {
+ return;
+ }
+
+              var request = new HttpRequest();
+              request.addHeader('X-HP3PAR-WSAPI-SessionKey: ' + Primera.session_key);
+
+              var raw_data = request.get(Primera.params.base_url + 'api/v1/' + method);
+
+ if (request.getStatus() < 200 || request.getStatus() >= 300) {
+                  throw 'Request failed with status code ' + request.getStatus() + ': ' + raw_data;
+ }
+
+              if (raw_data !== null) {
+                  try {
+                      return JSON.parse(raw_data);
+                  }
+                  catch (error) {
+                      throw 'Failed to parse response received from device API.';
+                  }
+              }
+              else {
+                  throw 'No data received by ' + method + ' request.';
+              }
+ };
+
+ var methods = ['disks', 'cpgs', 'hosts', 'ports', 'system', 'tasks', 'volumes'],
+ data = {};
+
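+      // Per-request errors are collected here and flattened into a single string at the end.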
+ data['errors'] = {};
+
+ try {
+ Primera.setParams(JSON.parse(value));
+
+ try {
+ Primera.login();
+ }
+ catch (error) {
+ data.errors.auth = error.toString();
+ }
+
+ if (!('auth' in data.errors)) {
+ for (var i in methods) {
+ try {
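+                      // The tasks endpoint can return several runs of the same task;
+                      // keep only the last entry per task name.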
+ if (methods[i] === 'tasks') {
+ var result = [],
+ tmp_tasks = {};
+
+                          var tasks = Primera.requestData(methods[i]);
+
+ tasks.members.forEach(function (task) {
+ tmp_tasks[task.name] = task;
+ });
+
+ for (var task in tmp_tasks) {
+ result.push(tmp_tasks[task]);
+ }
+
+ data[methods[i]] = result;
+ }
+ else {
+ data[methods[i]] = Primera.requestData(methods[i]);
+ }
+ }
+ catch (error) {
+ data.errors[methods[i]] = error.toString();
+ }
+ }
+ }
+ }
+ catch (error) {
+ data.errors.params = error.toString();
+ }
+
+ try {
+ Primera.logout();
+ }
+ catch (error) {
+ }
+
+ if (Object.keys(data.errors).length !== 0) {
+          var errors = 'Failed to receive data:';
+ for (var error in data.errors) {
+ errors += '\n' + error + ' : ' + data.errors[error];
+ }
+ data.errors = errors;
+ }
+ else {
+ data.errors = '';
+ }
+
+ return JSON.stringify(data);
+ description: 'The JSON with result of WSAPI requests.'
+ timeout: '{$HPE.PRIMERA.DATA.TIMEOUT}'
+ parameters:
+ -
+ name: base_url
+ value: '{$HPE.PRIMERA.API.SCHEME}://{HOST.CONN}'
+ -
+ name: password
+ value: '{$HPE.PRIMERA.API.PASSWORD}'
+ -
+ name: username
+ value: '{$HPE.PRIMERA.API.USERNAME}'
+ tags:
+ -
+ tag: component
+ value: raw
+ -
+ uuid: d5b8a74991d34652973a78d58203d5fd
+ name: 'HPE Primera: Disks total'
+ type: DEPENDENT
+ key: hpe.primera.disks.total
+ delay: '0'
+ history: 7d
+ description: 'Number of physical disks.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.disks.total
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: storage
+ -
+ uuid: efc450d0682c4c5d93df41d05c10eceb
+ name: 'HPE Primera: Capacity allocated'
+ type: DEPENDENT
+ key: hpe.primera.system.capacity.allocated
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Allocated capacity in the system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.system.allocatedCapacityMiB
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: system
+ -
+ uuid: e3842eec2e45443681670d3c1d194900
+ name: 'HPE Primera: Capacity failed'
+ type: DEPENDENT
+ key: hpe.primera.system.capacity.failed
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Failed capacity in the system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.system.failedCapacityMiB
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: system
+ -
+ uuid: d23e888299d344238468481689f55e2d
+ name: 'HPE Primera: Capacity free'
+ type: DEPENDENT
+ key: hpe.primera.system.capacity.free
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Free capacity in the system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.system.freeCapacityMiB
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: system
+ -
+ uuid: 1e6fc0d68d18474e84b4fe2e4d3374d1
+ name: 'HPE Primera: Capacity total'
+ type: DEPENDENT
+ key: hpe.primera.system.capacity.total
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Total capacity in the system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.system.totalCapacityMiB
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: system
+ -
+ uuid: 65bcf3fb456a45358795d2f9d8249e16
+ name: 'HPE Primera: Chunklet size'
+ type: DEPENDENT
+ key: hpe.primera.system.chunklet.size
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Chunklet size.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.system.chunkletSizeMiB
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: system
+ -
+ uuid: dd6fd61256cc4eeeb94f50d0c86fc51f
+ name: 'HPE Primera: System contact'
+ type: DEPENDENT
+ key: hpe.primera.system.contact
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Contact of the system.'
+ inventory_link: CONTACT
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.system.contact
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: dd61f3a680284893801c96bdbd445645
+ name: 'HPE Primera: System location'
+ type: DEPENDENT
+ key: hpe.primera.system.location
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Location of the system.'
+ inventory_link: LOCATION
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.system.location
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: 5f28ec66be1f43208139476af3653997
+ name: 'HPE Primera: Model'
+ type: DEPENDENT
+ key: hpe.primera.system.model
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'System model.'
+ inventory_link: MODEL
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.system.model
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: 45281453bf204365a8a8ac2ba7255e54
+ name: 'HPE Primera: System name'
+ type: DEPENDENT
+ key: hpe.primera.system.name
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'System name.'
+ inventory_link: NAME
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.system.name
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: acf6d37022884dc99a3b55c95f6b19c8
+ name: 'HPE Primera: Nodes online'
+ type: DEPENDENT
+ key: hpe.primera.system.nodes.online
+ delay: '0'
+ history: 7d
+ description: 'Number of online nodes in the system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.system.onlineNodes.length()
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: 65b22e04d7334aaf970a8961a46c22c9
+ name: 'HPE Primera: Nodes total'
+ type: DEPENDENT
+ key: hpe.primera.system.nodes.total
+ delay: '0'
+ history: 7d
+ description: 'Total number of nodes in the system.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.system.totalNodes
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: d194672ea7f64dd58296d7fb2537f35b
+ name: 'HPE Primera: Serial number'
+ type: DEPENDENT
+ key: hpe.primera.system.serial_number
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'System serial number.'
+ inventory_link: SERIALNO_A
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.system.serialNumber
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: e0f0ff7657784c8eab1a71a68ceefc19
+ name: 'HPE Primera: Software version number'
+ type: DEPENDENT
+ key: hpe.primera.system.sw_version
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Storage system software version number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.system.systemVersion
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: system
+ -
+ uuid: a0b4fdee38a64c5f82fd051ea74a7b2d
+ name: 'HPE Primera: Service ping'
+ type: SIMPLE
+ key: 'net.tcp.service["{$HPE.PRIMERA.API.SCHEME}","{HOST.CONN}","{$HPE.PRIMERA.API.PORT}"]'
+ history: 7d
+ description: 'Checks if the service is running and accepting TCP connections.'
+ valuemap:
+ name: 'Service state'
+ preprocessing:
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ tags:
+ -
+ tag: component
+ value: health
+ -
+ tag: component
+ value: network
+ triggers:
+ -
+ uuid: 8e7aa46322c643878e509461dbb9169d
+ expression: 'max(/HPE Primera by HTTP/net.tcp.service["{$HPE.PRIMERA.API.SCHEME}","{HOST.CONN}","{$HPE.PRIMERA.API.PORT}"],5m)=0'
+ name: 'HPE Primera: Service is unavailable'
+ priority: HIGH
+ manual_close: 'YES'
+ tags:
+ -
+ tag: scope
+ value: availability
+ discovery_rules:
+ -
+ uuid: b9132b095eb349c99e868ea40364596d
+ name: 'Common provisioning groups discovery'
+ type: DEPENDENT
+ key: hpe.primera.cpg.discovery
+ delay: '0'
+ filter:
+ evaltype: AND
+ conditions:
+ -
+ macro: '{#NAME}'
+ value: '{$HPE.PRIMERA.CPG.NAME.MATCHES}'
+ formulaid: A
+ -
+ macro: '{#NAME}'
+ value: '{$HPE.PRIMERA.CPG.NAME.NOT_MATCHES}'
+ operator: NOT_MATCHES_REGEX
+ formulaid: B
+ description: 'List of CPGs resources.'
+ item_prototypes:
+ -
+ uuid: 6d070a747a01498b94c56da721a63192
+ name: 'CPG [{#NAME}]: Number of FPVVs'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.fpvv["{#ID}",count]'
+ delay: '0'
+ history: 7d
+ description: 'Number of FPVVs (Fully Provisioned Virtual Volumes) allocated in the CPG.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].numFPVVs.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: a7fccd5afcf5469ca11a9436240eab5c
+ name: 'CPG [{#NAME}]: Raw space: Free'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space.raw["{#ID}",free]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Raw free space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].rawFreeSpaceMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: 8f26a54327f54e968f422081e6045217
+ name: 'CPG [{#NAME}]: Raw space: Shared'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space.raw["{#ID}",shared]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Raw shared space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].rawSharedSpaceMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: e3ac07e2707a44fd8c166f4618fd79a1
+ name: 'CPG [{#NAME}]: Raw space: Total'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space.raw["{#ID}",total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Raw total space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].rawTotalSpaceMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: 6dfd722ad85b481a9c2b04a4a5eb91fe
+ name: 'CPG [{#NAME}]: Logical disk space: Snapshot administration: Total (raw)'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space.sa["{#ID}",raw_total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Total physical (raw) logical disk space in snapshot administration.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].SAUsage.rawTotalMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: 8b2dc75fdcfa48908ece97768641f055
+ name: 'CPG [{#NAME}]: Logical disk space: Snapshot administration: Used (raw)'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space.sa["{#ID}",raw_used]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Amount of physical (raw) logical disk used in snapshot administration.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].SAUsage.rawUsedMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: d55f0eab811641fdbb9a8bc8c54815ee
+ name: 'CPG [{#NAME}]: Logical disk space: Snapshot administration: Total'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space.sa["{#ID}",total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Total logical disk space in snapshot administration.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].SAUsage.totalMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: cf467bc7d9ac45259f284eeab6ae7f6a
+ name: 'CPG [{#NAME}]: Logical disk space: Snapshot administration: Used'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space.sa["{#ID}",used]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Amount of logical disk used in snapshot administration.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].SAUsage.usedMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: 61dd9aa18c714863b606d18b2fff6c57
+ name: 'CPG [{#NAME}]: Logical disk space: Snapshot data: Total (raw)'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space.sd["{#ID}",raw_total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Total physical (raw) logical disk space in snapshot data space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].SDUsage.rawTotalMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: f95ee3c4c0c64d46a47dd68b346f2fa5
+ name: 'CPG [{#NAME}]: Logical disk space: Snapshot data: Used (raw)'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space.sd["{#ID}",raw_used]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Amount of physical (raw) logical disk used in snapshot data space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].SDUsage.rawUsedMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: 797f3335d8704d4bb8b53e34b3e6589e
+ name: 'CPG [{#NAME}]: Logical disk space: Snapshot data: Total'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space.sd["{#ID}",total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Total logical disk space in snapshot data space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].SDUsage.totalMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: 51e40f6a1eb249d58bd79948d403d4f7
+ name: 'CPG [{#NAME}]: Logical disk space: Snapshot data: Used'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space.sd["{#ID}",used]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Amount of logical disk used in snapshot data space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].SDUsage.usedMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: b7a8880bdafe4f0da4dd8cee6d4fdfa4
+ name: 'CPG [{#NAME}]: Logical disk space: User space: Total (raw)'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space.usr["{#ID}",raw_total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Total physical (raw) logical disk space in user data space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].UsrUsage.rawTotalMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: 1feabd57f12a48b98dab098435179725
+ name: 'CPG [{#NAME}]: Logical disk space: User space: Used (raw)'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space.usr["{#ID}",raw_used]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Amount of physical (raw) logical disk used in user data space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].UsrUsage.rawUsedMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: 6f85014bc639420aa409d97d42cb75b2
+ name: 'CPG [{#NAME}]: Logical disk space: User space: Total'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space.usr["{#ID}",total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Total logical disk space in user data space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].UsrUsage.totalMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: c21f77a45ab443099bf957fbb39478f3
+ name: 'CPG [{#NAME}]: Logical disk space: User space: Used'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space.usr["{#ID}",used]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Amount of logical disk used in user data space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].UsrUsage.usedMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: a6cd977f27a8463cb385715327e34955
+ name: 'CPG [{#NAME}]: CPG space: Free'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space["{#ID}",free]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Free CPG space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].freeSpaceMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: 17cf1cddafd444f8a5616a472c1a019b
+ name: 'CPG [{#NAME}]: CPG space: Shared'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space["{#ID}",shared]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Shared CPG space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].sharedSpaceMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: 3950d779a0394615b8ec311525ed4168
+ name: 'CPG [{#NAME}]: CPG space: Total'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.space["{#ID}",total]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Total CPG space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].totalSpaceMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: a7e2188d600a4715a58deba46f3b46ac
+ name: 'CPG [{#NAME}]: Degraded state'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.state["{#ID}",degraded]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: |
+ Detailed state of the CPG:
+
+ LDS_NOT_STARTED (1) - LDs not started.
+ NOT_STARTED (2) - VV not started.
+ NEEDS_CHECK (3) - check for consistency.
+ NEEDS_MAINT_CHECK (4) - maintenance check is required.
+ INTERNAL_CONSISTENCY_ERROR (5) - internal consistency error.
+ SNAPDATA_INVALID (6) - invalid snapshot data.
+ PRESERVED (7) - unavailable LD sets due to missing chunklets. Preserved remaining VV data.
+ STALE (8) - parts of the VV contain old data because of a copy-on-write operation.
+ COPY_FAILED (9) - a promote or copy operation to this volume failed.
+ DEGRADED_AVAIL (10) - degraded due to availability.
+ DEGRADED_PERF (11) - degraded due to performance.
+ PROMOTING (12) - volume is the current target of a promote operation.
+ COPY_TARGET (13) - volume is the current target of a physical copy operation.
+ RESYNC_TARGET (14) - volume is the current target of a resynchronized copy operation.
+ TUNING (15) - volume tuning is in progress.
+ CLOSING (16) - volume is closing.
+ REMOVING (17) - removing the volume.
+ REMOVING_RETRY (18) - retrying a volume removal operation.
+ CREATING (19) - creating a volume.
+ COPY_SOURCE (20) - copy source.
+ IMPORTING (21) - importing a volume.
+ CONVERTING (22) - converting a volume.
+ INVALID (23) - invalid.
+ EXCLUSIVE (24) - local storage system has exclusive access to the volume.
+ CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set.
+ STANDBY (26) - volume in standby mode.
+ SD_META_INCONSISTENT (27) - SD Meta Inconsistent.
+ SD_NEEDS_FIX (28) - SD needs fix.
+ SD_META_FIXING (29) - SD meta fix.
+ UNKNOWN (999) - unknown state.
+ NOT_SUPPORTED_BY_WSAPI (1000) - state not supported by WSAPI.
+ valuemap:
+ name: 'Volume detailed state enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].degradedStates.first()'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: 837b48053400487885bf051a78f2200a
+ name: 'CPG [{#NAME}]: Failed state'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.state["{#ID}",failed]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: |
+ Detailed state of the CPG:
+
+ LDS_NOT_STARTED (1) - LDs not started.
+ NOT_STARTED (2) - VV not started.
+ NEEDS_CHECK (3) - check for consistency.
+ NEEDS_MAINT_CHECK (4) - maintenance check is required.
+ INTERNAL_CONSISTENCY_ERROR (5) - internal consistency error.
+ SNAPDATA_INVALID (6) - invalid snapshot data.
+ PRESERVED (7) - unavailable LD sets due to missing chunklets. Preserved remaining VV data.
+ STALE (8) - parts of the VV contain old data because of a copy-on-write operation.
+ COPY_FAILED (9) - a promote or copy operation to this volume failed.
+ DEGRADED_AVAIL (10) - degraded due to availability.
+ DEGRADED_PERF (11) - degraded due to performance.
+ PROMOTING (12) - volume is the current target of a promote operation.
+ COPY_TARGET (13) - volume is the current target of a physical copy operation.
+ RESYNC_TARGET (14) - volume is the current target of a resynchronized copy operation.
+ TUNING (15) - volume tuning is in progress.
+ CLOSING (16) - volume is closing.
+ REMOVING (17) - removing the volume.
+ REMOVING_RETRY (18) - retrying a volume removal operation.
+ CREATING (19) - creating a volume.
+ COPY_SOURCE (20) - copy source.
+ IMPORTING (21) - importing a volume.
+ CONVERTING (22) - converting a volume.
+ INVALID (23) - invalid.
+ EXCLUSIVE (24) - local storage system has exclusive access to the volume.
+ CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set.
+ STANDBY (26) - volume in standby mode.
+ SD_META_INCONSISTENT (27) - SD Meta Inconsistent.
+ SD_NEEDS_FIX (28) - SD needs fix.
+ SD_META_FIXING (29) - SD meta fix.
+ UNKNOWN (999) - unknown state.
+ NOT_SUPPORTED_BY_WSAPI (1000) - state not supported by WSAPI.
+ valuemap:
+ name: 'Volume detailed state enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].failedStates.first()'
+ -
+ type: JAVASCRIPT
+ parameters:
+ - 'return JSON.stringify(JSON.parse(value));'
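+                            # The JSON round trip above re-serializes the failedStates array into a
+                            # canonical string (likely so that formatting differences in the raw
+                            # payload do not register as new values for this CHAR item).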
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: 3fe9b7c875c248e3b09c98162e30ebf8
+ name: 'CPG [{#NAME}]: State'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.state["{#ID}"]'
+ delay: '0'
+ history: 7d
+ description: |
+ Overall state of the CPG:
+
+ NORMAL (1) - normal operation;
+ DEGRADED (2) - degraded state;
+ FAILED (3) - abnormal operation;
+ UNKNOWN (99) - unknown state.
+ valuemap:
+ name: 'State enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].state.first()'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: 32a29b7a4bf340ef8ab07a8db3bef309
+ expression: 'last(/HPE Primera by HTTP/hpe.primera.cpg.state["{#ID}"])=2'
+ name: 'CPG [{#NAME}]: Degraded'
+ opdata: 'Current value: {ITEM.LASTVALUE1}'
+ priority: AVERAGE
+ description: 'CPG [{#NAME}] is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ tag: scope
+ value: capacity
+ -
+ uuid: 85c26e64c8074e8b9ab52f20394afeee
+ expression: 'last(/HPE Primera by HTTP/hpe.primera.cpg.state["{#ID}"])=3'
+ name: 'CPG [{#NAME}]: Failed'
+ opdata: 'Current value: {ITEM.LASTVALUE1}'
+ priority: HIGH
+ description: 'CPG [{#NAME}] is in failed state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ tag: scope
+ value: capacity
+ -
+ uuid: 18544a7742af4678bd8c37ad84a8d137
+ name: 'CPG [{#NAME}]: Number of TDVVs'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.tdvv["{#ID}",count]'
+ delay: '0'
+ history: 7d
+                          description: 'Number of TDVVs (Thinly Deduplicated Virtual Volumes) created in the CPG.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].numTDVVs.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ -
+ uuid: f93dc70fa63a47da9253a447a67df685
+ name: 'CPG [{#NAME}]: Number of TPVVs'
+ type: DEPENDENT
+ key: 'hpe.primera.cpg.tpvv["{#ID}",count]'
+ delay: '0'
+ history: 7d
+ description: 'Number of TPVVs (Thinly Provisioned Virtual Volumes) allocated in the CPG.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.cpgs.members[?(@.id == "{#ID}")].numTPVVs.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: cpg
+ -
+ tag: component
+ value: storage
+ -
+ tag: cpg
+ value: '{#NAME}'
+ graph_prototypes:
+ -
+ uuid: c5d1e864f752465eae8822c06d635aeb
+ name: 'CPG [{#NAME}]: CPG space'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space["{#ID}",free]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space["{#ID}",shared]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space["{#ID}",total]'
+ -
+ uuid: 1ef5d5b0090c4f168da6f842972af688
+ name: 'CPG [{#NAME}]: Number of virtual volumes'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.fpvv["{#ID}",count]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.tpvv["{#ID}",count]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.tdvv["{#ID}",count]'
+ -
+ uuid: 4c15694d488f42d6bc5b9caf4fe9e049
+ name: 'CPG [{#NAME}]: Raw space'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space.raw["{#ID}",free]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space.raw["{#ID}",shared]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space.raw["{#ID}",total]'
+ -
+ uuid: 0ffb02e0b9144e0583489ca1d1c8d2dd
+ name: 'CPG [{#NAME}]: Snapshot administration space'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space.sa["{#ID}",total]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space.sa["{#ID}",used]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space.sa["{#ID}",raw_total]'
+ -
+ sortorder: '3'
+ color: A54F10
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space.sa["{#ID}",raw_used]'
+ -
+ uuid: d64484828dda48cbbf7dc0f8a9c2f34d
+ name: 'CPG [{#NAME}]: Snapshot data space'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space.sd["{#ID}",total]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space.sd["{#ID}",used]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space.sd["{#ID}",raw_total]'
+ -
+ sortorder: '3'
+ color: A54F10
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space.sd["{#ID}",raw_used]'
+ -
+ uuid: dd590898a3644130b897f57e8837cb3a
+ name: 'CPG [{#NAME}]: User data space'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space.usr["{#ID}",total]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space.usr["{#ID}",used]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space.usr["{#ID}",raw_total]'
+ -
+ sortorder: '3'
+ color: A54F10
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.cpg.space.usr["{#ID}",raw_used]'
+ master_item:
+ key: hpe.primera.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#ID}'
+ path: $.id
+ -
+ lld_macro: '{#NAME}'
+ path: $.name
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.cpgs.members
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
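+                      # Dependent-LLD pattern used by every discovery rule in this template: the bulk
+                      # master item hpe.primera.data.get is polled once, the rule extracts one array
+                      # from it ($.cpgs.members here), and lld_macro_paths map member fields to the
+                      # LLD macros used by the prototypes above.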
+ -
+ uuid: a83ed573e6ab40e8b7306178ddd2658b
+ name: 'Disks discovery'
+ type: DEPENDENT
+ key: hpe.primera.disks.discovery
+ delay: '0'
+ description: 'List of physical disk resources.'
+ item_prototypes:
+ -
+ uuid: 40e074af5d7f44bb8691290971fc7c5c
+ name: 'Disk [{#POSITION}]: Free size'
+ type: DEPENDENT
+ key: 'hpe.primera.disk["{#ID}",free_size]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Physical disk free size.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.disks.members[?(@.id == "{#ID}")].freeSizeMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: storage
+ -
+ tag: disk
+ value: '{#POSITION}'
+ -
+ uuid: 9bb7a86118614d339b4dee3238b261ff
+ name: 'Disk [{#POSITION}]: Firmware version'
+ type: DEPENDENT
+ key: 'hpe.primera.disk["{#ID}",fw_version]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Physical disk firmware version.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.disks.members[?(@.id == "{#ID}")].fwVersion.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: storage
+ -
+ tag: disk
+ value: '{#POSITION}'
+ -
+ uuid: 288f7eef0a7c43afa7a3623471c92097
+ name: 'Disk [{#POSITION}]: Path A0 degraded'
+ type: DEPENDENT
+ key: 'hpe.primera.disk["{#ID}",loop_a0_degraded]'
+ delay: '0'
+ history: 7d
+ description: 'Indicates if this is a degraded path for the disk.'
+ valuemap:
+ name: Boolean
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.disks.members[?(@.id == "{#ID}")].loopA0.degraded.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: BOOL_TO_DECIMAL
+ parameters:
+ - ''
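+                            # loopA0.degraded is a JSON boolean: BOOL_TO_DECIMAL maps true/false to
+                            # 1/0 for the Boolean valuemap and the '=1' trigger below, while
+                            # DISCARD_VALUE on the JSONPath drops the sample entirely when this path
+                            # is absent for the disk.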
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: storage
+ -
+ tag: disk
+ value: '{#POSITION}'
+ trigger_prototypes:
+ -
+ uuid: f1672a33f9404216a1ffdbe3fcefd0bf
+ expression: 'last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_a0_degraded])=1'
+ name: 'Disk [{#POSITION}]: Path A0 degraded'
+ priority: AVERAGE
+                              description: 'Disk [{#POSITION}] path A0 is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 1e89322b49fb46bdacd22a562995f2fc
+ name: 'Disk [{#POSITION}]: Path A1 degraded'
+ type: DEPENDENT
+ key: 'hpe.primera.disk["{#ID}",loop_a1_degraded]'
+ delay: '0'
+ history: 7d
+ description: 'Indicates if this is a degraded path for the disk.'
+ valuemap:
+ name: Boolean
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.disks.members[?(@.id == "{#ID}")].loopA1.degraded.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: BOOL_TO_DECIMAL
+ parameters:
+ - ''
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: storage
+ -
+ tag: disk
+ value: '{#POSITION}'
+ trigger_prototypes:
+ -
+ uuid: a28b1b4cdc5d4cb4afd9b7dd5e5f4f46
+ expression: 'last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_a1_degraded])=1'
+ name: 'Disk [{#POSITION}]: Path A1 degraded'
+ priority: AVERAGE
+                              description: 'Disk [{#POSITION}] path A1 is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 2b56e5a6ffbd4e6189fff707d508f955
+ name: 'Disk [{#POSITION}]: Path B0 degraded'
+ type: DEPENDENT
+ key: 'hpe.primera.disk["{#ID}",loop_b0_degraded]'
+ delay: '0'
+ history: 7d
+ description: 'Indicates if this is a degraded path for the disk.'
+ valuemap:
+ name: Boolean
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.disks.members[?(@.id == "{#ID}")].loopB0.degraded.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: BOOL_TO_DECIMAL
+ parameters:
+ - ''
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: storage
+ -
+ tag: disk
+ value: '{#POSITION}'
+ trigger_prototypes:
+ -
+ uuid: 0ff32326e7784111842198ac6457c5cc
+ expression: 'last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_b0_degraded])=1'
+ name: 'Disk [{#POSITION}]: Path B0 degraded'
+ priority: AVERAGE
+                              description: 'Disk [{#POSITION}] path B0 is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: cfb88804564d4e0c914760daec53276f
+ name: 'Disk [{#POSITION}]: Path B1 degraded'
+ type: DEPENDENT
+ key: 'hpe.primera.disk["{#ID}",loop_b1_degraded]'
+ delay: '0'
+ history: 7d
+ description: 'Indicates if this is a degraded path for the disk.'
+ valuemap:
+ name: Boolean
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.disks.members[?(@.id == "{#ID}")].loopB1.degraded.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: BOOL_TO_DECIMAL
+ parameters:
+ - ''
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: storage
+ -
+ tag: disk
+ value: '{#POSITION}'
+ trigger_prototypes:
+ -
+ uuid: d55532408f3c40408dbd05671c66b5f3
+ expression: 'last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_b1_degraded])=1'
+ name: 'Disk [{#POSITION}]: Path B1 degraded'
+ priority: AVERAGE
+                              description: 'Disk [{#POSITION}] path B1 is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ uuid: 1387d1129e4a418e91fb0e99179116f5
+ name: 'Disk [{#POSITION}]: Manufacturer'
+ type: DEPENDENT
+ key: 'hpe.primera.disk["{#ID}",manufacturer]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Physical disk manufacturer.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.disks.members[?(@.id == "{#ID}")].manufacturer.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: storage
+ -
+ tag: disk
+ value: '{#POSITION}'
+ -
+ uuid: 1892d6230e244e1089a5eca8654ba2fa
+ name: 'Disk [{#POSITION}]: Model'
+ type: DEPENDENT
+ key: 'hpe.primera.disk["{#ID}",model]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+                          description: 'Manufacturer''s device ID for the disk.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.disks.members[?(@.id == "{#ID}")].model.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: storage
+ -
+ tag: disk
+ value: '{#POSITION}'
+ -
+ uuid: 495ceedbdf1644fdb56cf56123c1ec01
+ name: 'Disk [{#POSITION}]: RPM'
+ type: DEPENDENT
+ key: 'hpe.primera.disk["{#ID}",rpm]'
+ delay: '0'
+ history: 7d
+ units: '!rpm'
+ description: 'RPM of the physical disk.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.disks.members[?(@.id == "{#ID}")].RPM.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: storage
+ -
+ tag: disk
+ value: '{#POSITION}'
+ -
+ uuid: 07fa233e273d4d6e9813705d0afc82f5
+ name: 'Disk [{#POSITION}]: Serial number'
+ type: DEPENDENT
+ key: 'hpe.primera.disk["{#ID}",serial_number]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Disk drive serial number.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.disks.members[?(@.id == "{#ID}")].serialNumber.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: storage
+ -
+ tag: disk
+ value: '{#POSITION}'
+ -
+ uuid: acb23a0dc2674f57bada95dd12972662
+ name: 'Disk [{#POSITION}]: State'
+ type: DEPENDENT
+ key: 'hpe.primera.disk["{#ID}",state]'
+ delay: '0'
+ history: 7d
+ description: |
+ State of the physical disk:
+
+ Normal (1) - physical disk is in Normal state;
+ Degraded (2) - physical disk is not operating normally;
+ New (3) - physical disk is new, needs to be admitted;
+ Failed (4) - physical disk has failed;
+ Unknown (99) - physical disk state is unknown.
+ valuemap:
+ name: 'diskState enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.disks.members[?(@.id == "{#ID}")].state.first()'
+ error_handler: CUSTOM_VALUE
+ error_handler_params: '99'
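+                            # If the JSONPath fails (disk row or state field missing), CUSTOM_VALUE
+                            # substitutes 99 so the 'Unknown issue' trigger below can still react.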
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: storage
+ -
+ tag: disk
+ value: '{#POSITION}'
+ trigger_prototypes:
+ -
+ uuid: d8991103e26b4ffea5fb64dc3519eb63
+ expression: 'last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",state])=2'
+ name: 'Disk [{#POSITION}]: Degraded'
+ priority: AVERAGE
+                              description: 'Disk [{#POSITION}] is in degraded state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ tag: scope
+ value: capacity
+ -
+ uuid: bc8c8281c3ac4742ba8f570a56753dd3
+                              expression: 'last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",state])=4'
+ name: 'Disk [{#POSITION}]: Failed'
+ priority: HIGH
+                              description: 'Disk [{#POSITION}] is in failed state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ tag: scope
+ value: capacity
+ -
+ uuid: d4bda084df0b4a489fac08d1acae4e17
+ expression: 'last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",state])=99'
+ name: 'Disk [{#POSITION}]: Unknown issue'
+ priority: INFO
+                              description: 'Disk [{#POSITION}] is in unknown state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ tag: scope
+ value: capacity
+ -
+ uuid: 83395e3165c949e8997e93bfce0ac1d0
+ name: 'Disk [{#POSITION}]: Total size'
+ type: DEPENDENT
+ key: 'hpe.primera.disk["{#ID}",total_size]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Physical disk total size.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.disks.members[?(@.id == "{#ID}")].totalSizeMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: disk
+ -
+ tag: component
+ value: storage
+ -
+ tag: disk
+ value: '{#POSITION}'
+ master_item:
+ key: hpe.primera.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#ID}'
+ path: $.id
+ -
+ lld_macro: '{#POSITION}'
+ path: $.position
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.disks.members
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: 77ae172949044c148ac8f56f05d3af33
+ name: 'Hosts discovery'
+ type: DEPENDENT
+ key: hpe.primera.hosts.discovery
+ delay: '0'
+ description: 'List of host properties.'
+ filter:
+ evaltype: AND
+ conditions:
+ -
+ macro: '{#NAME}'
+ operator: EXISTS
+ formulaid: A
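+                      # The EXISTS condition keeps only rows where the {#NAME} macro resolves, so
+                      # unnamed host entries are not discovered.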
+ item_prototypes:
+ -
+ uuid: 142a03a36dbf477ebbcb99994efe4246
+ name: 'Host [{#NAME}]: Comment'
+ type: DEPENDENT
+ key: 'hpe.primera.host["{#ID}",comment]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'Additional information for the host.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.hosts.members[?(@.id == "{#ID}")].descriptors.comment.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: host
+ -
+ tag: host
+ value: '{#NAME}'
+ -
+ uuid: 44a06761b5174c67ace5487b7ec9f0e5
+ name: 'Host [{#NAME}]: Contact'
+ type: DEPENDENT
+ key: 'hpe.primera.host["{#ID}",contact]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'The host''s owner and contact.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.hosts.members[?(@.id == "{#ID}")].descriptors.contact.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: host
+ -
+ tag: host
+ value: '{#NAME}'
+ -
+ uuid: b3bd017e96d843248bbb9cb2240e861b
+ name: 'Host [{#NAME}]: IP address'
+ type: DEPENDENT
+ key: 'hpe.primera.host["{#ID}",ipaddress]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'The host''s IP address.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.hosts.members[?(@.id == "{#ID}")].descriptors.IPAddr.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: host
+ -
+ tag: host
+ value: '{#NAME}'
+ -
+ uuid: 367466a0f7084e579f3c11d820dc7f04
+ name: 'Host [{#NAME}]: Location'
+ type: DEPENDENT
+ key: 'hpe.primera.host["{#ID}",location]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'The host''s location.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.hosts.members[?(@.id == "{#ID}")].descriptors.location.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: host
+ -
+ tag: host
+ value: '{#NAME}'
+ -
+ uuid: 997e52f8f50e47738a1aefbcedaa5a82
+ name: 'Host [{#NAME}]: Model'
+ type: DEPENDENT
+ key: 'hpe.primera.host["{#ID}",model]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'The host''s model.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.hosts.members[?(@.id == "{#ID}")].descriptors.model.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: host
+ -
+ tag: host
+ value: '{#NAME}'
+ -
+ uuid: 4db5068c8aea4940adb5f8863d50ef47
+ name: 'Host [{#NAME}]: OS'
+ type: DEPENDENT
+ key: 'hpe.primera.host["{#ID}",os]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: 'The operating system running on the host.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.hosts.members[?(@.id == "{#ID}")].descriptors.os.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1d
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: host
+ -
+ tag: host
+ value: '{#NAME}'
+ master_item:
+ key: hpe.primera.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#ID}'
+ path: $.id
+ -
+ lld_macro: '{#NAME}'
+ path: $.name
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.hosts.members
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: 3c9222777f2649749df76cbf61601557
+ name: 'Ports discovery'
+ type: DEPENDENT
+ key: hpe.primera.ports.discovery
+ delay: '0'
+ filter:
+ evaltype: AND
+ conditions:
+ -
+ macro: '{#TYPE}'
+ value: '3'
+ operator: NOT_MATCHES_REGEX
+ formulaid: A
+ description: 'List of ports.'
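+                      # {#TYPE} = 3 is FREE in the portConnType enum (port not connected to hosts or
+                      # disks), so unconnected ports are excluded from discovery.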
+ item_prototypes:
+ -
+ uuid: 9241e0b26de74ea49f28e1c09e15a2cd
+ name: 'Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Failover state'
+ type: DEPENDENT
+ key: 'hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",failover_state]'
+ delay: '0'
+ history: 7d
+ description: |
+ The state of the failover operation, shown for the two ports indicated in the N:S:P and Partner columns. The value can be one of the following:
+
+ none (1) - no failover in operation;
+ failover_pending (2) - in the process of failing over to partner;
+ failed_over (3) - failed over to partner;
+ active (4) - the partner port is failed over to this port;
+ active_down (5) - the partner port is failed over to this port, but this port is down;
+                            active_failed (6) - the partner port is failed over to this port, but the failover operation failed;
+ failback_pending (7) - in the process of failing back from partner.
+ valuemap:
+ name: 'portFailOverState enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.ports.members[?(@.portPos.node == "{#NODE}" && @.portPos.slot == "{#SLOT}" && @.portPos.cardPort == "{#CARD.PORT}")].failoverState.first()'
+ error_handler: DISCARD_VALUE
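+                            # Ports have no single id field, so each row is matched by the
+                            # (node, slot, cardPort) triple from portPos; the same N:S:P triple is
+                            # used in the item and trigger names.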
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: port
+ -
+ tag: port
+ value: '{#NODE}:{#SLOT}:{#CARD.PORT}'
+ trigger_prototypes:
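+                          # The trigger below fires for any failoverState other than none (1) and
+                          # active (4), i.e. whenever a failover is pending, failed over, or failing back.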
+ -
+ uuid: 65f3f3b098984842b5246bfb5842bc78
+ expression: 'last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",failover_state])<>1 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",failover_state])<>4'
+ name: 'Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Failover state is {ITEM.VALUE1}'
+ priority: AVERAGE
+                              description: 'Port [{#NODE}:{#SLOT}:{#CARD.PORT}] has a failover error.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ tag: scope
+ value: capacity
+ -
+ uuid: 8abba6d6f6e749b0be277056421a1958
+ name: 'Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Hardware type'
+ type: DEPENDENT
+ key: 'hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",hw_type]'
+ delay: '0'
+ history: 7d
+ description: |
+ Hardware type:
+
+ FC (1) - Fibre channel HBA;
+ ETH (2) - Ethernet NIC;
+ iSCSI (3) - iSCSI HBA;
+ CNA (4) - Converged network adapter;
+ SAS (5) - SAS HBA;
+ COMBO (6) - Combo card;
+ NVME (7) - NVMe drive;
+ UNKNOWN (99) - unknown hardware type.
+ valuemap:
+ name: 'hardwareType enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.ports.members[?(@.portPos.node == "{#NODE}" && @.portPos.slot == "{#SLOT}" && @.portPos.cardPort == "{#CARD.PORT}")].hardwareType.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: port
+ -
+ tag: port
+ value: '{#NODE}:{#SLOT}:{#CARD.PORT}'
+ -
+ uuid: 55119ce474024203ac039f4aa797dd4c
+ name: 'Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Link state'
+ type: DEPENDENT
+ key: 'hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state]'
+ delay: '0'
+ history: 7d
+ description: |
+ Port link state:
+
+ CONFIG_WAIT (1) - configuration wait;
+ ALPA_WAIT (2) - ALPA wait;
+ LOGIN_WAIT (3) - login wait;
+ READY (4) - link is ready;
+                            LOSS_SYNC (5) - link has lost sync;
+ ERROR_STATE (6) - in error state;
+ XXX (7) - xxx;
+ NONPARTICIPATE (8) - link did not participate;
+ COREDUMP (9) - taking coredump;
+ OFFLINE (10) - link is offline;
+ FWDEAD (11) - firmware is dead;
+ IDLE_FOR_RESET (12) - link is idle for reset;
+ DHCP_IN_PROGRESS (13) - DHCP is in progress;
+ PENDING_RESET (14) - link reset is pending;
+                            NEW (15) - link is new; applicable only to virtual ports;
+                            DISABLED (16) - link is disabled; applicable only to virtual ports;
+                            DOWN (17) - link is down; applicable only to virtual ports;
+                            FAILED (18) - link has failed; applicable only to virtual ports;
+                            PURGING (19) - link is purging; applicable only to virtual ports.
+ valuemap:
+ name: 'portLinkState enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.ports.members[?(@.portPos.node == "{#NODE}" && @.portPos.slot == "{#SLOT}" && @.portPos.cardPort == "{#CARD.PORT}")].linkState.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: port
+ -
+ tag: port
+ value: '{#NODE}:{#SLOT}:{#CARD.PORT}'
+ trigger_prototypes:
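+                          # Two triggers split the non-READY states: transitional states (1, 3, 13,
+                          # 15, 16) raise AVERAGE severity, while all remaining abnormal states raise HIGH.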
+ -
+ uuid: c7ee19ea175d4c63a9ae67e0ab59253b
+ expression: 'last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>4 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>1 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>3 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>13 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>15 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>16'
+ name: 'Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Link state is {ITEM.VALUE1}'
+ priority: HIGH
+                              description: 'Port [{#NODE}:{#SLOT}:{#CARD.PORT}] is not in ready state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ tag: scope
+ value: capacity
+ -
+ uuid: f0c8851f843e41dcb6820c943efcbe2f
+ expression: 'last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=1 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=3 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=13 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=15 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=16'
+ name: 'Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Link state is {ITEM.VALUE1}'
+ priority: AVERAGE
+                              description: 'Port [{#NODE}:{#SLOT}:{#CARD.PORT}] is not in ready state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ tag: scope
+ value: capacity
+ -
+ uuid: c049e53b25bb4cb58cabbff1d91b3e88
+ name: 'Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Type'
+ type: DEPENDENT
+ key: 'hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",type]'
+ delay: '0'
+ history: 7d
+ description: |
+ Port connection type:
+
+ HOST (1) - FC port connected to hosts or fabric;
+ DISK (2) - FC port connected to disks;
+ FREE (3) - port is not connected to hosts or disks;
+ IPORT (4) - port is in iport mode;
+ RCFC (5) - FC port used for remote copy;
+ PEER (6) - FC port used for data migration;
+ RCIP (7) - IP (Ethernet) port used for remote copy;
+ ISCSI (8) - iSCSI (Ethernet) port connected to hosts;
+ CNA (9) - CNA port, which can be FCoE or iSCSI;
+ FS (10) - Ethernet File Persona ports.
+ valuemap:
+ name: 'portConnType enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.ports.members[?(@.portPos.node == "{#NODE}" && @.portPos.slot == "{#SLOT}" && @.portPos.cardPort == "{#CARD.PORT}")].type.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: port
+ -
+ tag: port
+ value: '{#NODE}:{#SLOT}:{#CARD.PORT}'
+ master_item:
+ key: hpe.primera.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#CARD.PORT}'
+ path: $.portPos.cardPort
+ -
+ lld_macro: '{#NODE}'
+ path: $.portPos.node
+ -
+ lld_macro: '{#SLOT}'
+ path: $.portPos.slot
+ -
+ lld_macro: '{#TYPE}'
+ path: $.type
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.ports.members
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: b47a6afafca6486ea4ffb12dd3322bab
+ name: 'Tasks discovery'
+ type: DEPENDENT
+ key: hpe.primera.tasks.discovery
+ delay: '0'
+ filter:
+ evaltype: AND
+ conditions:
+ -
+ macro: '{#NAME}'
+ value: '{$HPE.PRIMERA.LLD.FILTER.TASK.NAME.MATCHES}'
+ formulaid: A
+ -
+ macro: '{#NAME}'
+ value: '{$HPE.PRIMERA.LLD.FILTER.TASK.NAME.NOT_MATCHES}'
+ operator: NOT_MATCHES_REGEX
+ formulaid: B
+ -
+ macro: '{#TYPE}'
+ value: '{$HPE.PRIMERA.LLD.FILTER.TASK.TYPE.MATCHES}'
+ formulaid: C
+ -
+ macro: '{#TYPE}'
+ value: '{$HPE.PRIMERA.LLD.FILTER.TASK.TYPE.NOT_MATCHES}'
+ operator: NOT_MATCHES_REGEX
+ formulaid: D
+ lifetime: 1d
+ description: 'List of tasks started within last 24 hours.'
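+                      # All four regex conditions (evaltype AND) must pass for a task to be
+                      # discovered; with lifetime: 1d, resources for tasks that drop out of the API
+                      # response are removed one day after they were last discovered.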
+ item_prototypes:
+ -
+ uuid: cbcdf169dcf646cb959206bbb6cf3642
+ name: 'Task [{#NAME}]: Finish time'
+ type: DEPENDENT
+ key: 'hpe.primera.task["{#ID}",finish_time]'
+ delay: '0'
+ history: 7d
+ units: unixtime
+ description: 'Task finish time.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.tasks[?(@.id == "{#ID}")].finishTime.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ type: NOT_MATCHES_REGEX
+ parameters:
+ - ^-$
+ error_handler: DISCARD_VALUE
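+                            # Unfinished tasks report finishTime as '-'; the NOT_MATCHES_REGEX step
+                            # above discards such values before they reach the date-parsing script below.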
+ -
+ type: JAVASCRIPT
+ parameters:
+ - |
+                                var raw_date = value.split(' ');
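+                                // Assumed raw format (inferred from the concatenation below, not
+                                // documented here): '<YYYY-MM-DD> <HH:MM:SS> <+hh>'. The parts are
+                                // rebuilt as an ISO 8601 string for Date.parse(), and /1000 converts
+                                // milliseconds to Unix time in seconds. The 'Start time' item below
+                                // reuses this same script.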
+
+ return Date.parse(raw_date[0] + 'T' + raw_date[1] + raw_date[2] + ':00')/1000;
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: task
+ -
+ tag: task
+ value: '{#NAME}'
+ -
+ uuid: 66140b134d954319a96eb17750da6b7c
+ name: 'Task [{#NAME}]: Start time'
+ type: DEPENDENT
+ key: 'hpe.primera.task["{#ID}",start_time]'
+ delay: '0'
+ history: 7d
+ units: unixtime
+ description: 'Task start time.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.tasks[?(@.id == "{#ID}")].startTime.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ type: JAVASCRIPT
+ parameters:
+ - |
+                                var raw_date = value.split(' ');
+
+ return Date.parse(raw_date[0] + 'T' + raw_date[1] + raw_date[2] + ':00')/1000;
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: task
+ -
+ tag: task
+ value: '{#NAME}'
+ -
+ uuid: e01b3c84a6594e419c358c7ea297159b
+ name: 'Task [{#NAME}]: Status'
+ type: DEPENDENT
+ key: 'hpe.primera.task["{#ID}",status]'
+ delay: '0'
+ history: 7d
+ description: |
+ Task status:
+
+ DONE (1) - task is finished;
+ ACTIVE (2) - task is in progress;
+                            CANCELLED (3) - task is cancelled;
+ FAILED (4) - task failed.
+ valuemap:
+ name: 'taskStatus enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.tasks[?(@.id == "{#ID}")].status.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: task
+ -
+ tag: task
+ value: '{#NAME}'
+ trigger_prototypes:
+ -
+ uuid: 63340c376d86492198e00d7ae10f063c
+ expression: 'last(/HPE Primera by HTTP/hpe.primera.task["{#ID}",status])=3'
+ name: 'Task [{#NAME}]: Cancelled'
+ priority: INFO
+ description: 'Task [{#NAME}] is cancelled.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: af326c0f259144d28ebeb60e19bae903
+ expression: 'last(/HPE Primera by HTTP/hpe.primera.task["{#ID}",status])=4'
+ name: 'Task [{#NAME}]: Failed'
+ priority: AVERAGE
+                              description: 'Task [{#NAME}] has failed.'
+ tags:
+ -
+ tag: scope
+ value: notice
+ -
+ uuid: a67262cd7be642b9b56194d8bbb7e928
+ name: 'Task [{#NAME}]: Type'
+ type: DEPENDENT
+ key: 'hpe.primera.task["{#ID}",type]'
+ delay: '0'
+ history: 7d
+ description: |
+ Task type:
+
+ VV_COPY (1) - track the physical copy operations;
+ PHYS_COPY_RESYNC (2) - track physical copy resynchronization operations;
+ MOVE_REGIONS (3) - track region move operations;
+ PROMOTE_SV (4) - track virtual-copy promotions;
+ REMOTE_COPY_SYNC (5) - track remote copy group synchronizations;
+ REMOTE_COPY_REVERSE (6) - track the reversal of a remote copy group;
+                            REMOTE_COPY_FAILOVER (7) - track the change-over of a secondary volume group to a primary volume group;
+                            REMOTE_COPY_RECOVER (8) - track synchronization start after a failover operation from original secondary cluster to original primary cluster;
+                            REMOTE_COPY_RESTORE (9) - track the restoration process for groups that have already been recovered;
+ COMPACT_CPG (10) - track space consolidation in CPGs;
+ COMPACT_IDS (11) - track space consolidation in logical disks;
+ SNAPSHOT_ACCOUNTING (12) - track progress of snapshot space usage accounting;
+ CHECK_VV (13) - track the progress of the check-volume operation;
+ SCHEDULED_TASK (14) - track tasks that have been executed by the system scheduler;
+ SYSTEM_TASK (15) - track tasks that are periodically run by the storage system;
+ BACKGROUND_TASK (16) - track commands started using the starttask command;
+ IMPORT_VV (17) - track tasks that migrate data to the local storage system;
+                            ONLINE_COPY (18) - track physical copy of the volume while online (createvvcopy -online command);
+ CONVERT_VV (19) - track tasks that convert a volume from an FPVV to a TPVV, and the reverse;
+ BACKGROUND_COMMAND (20) - track background command tasks;
+ CLX_SYNC (21) - track CLX synchronization tasks;
+ CLX_RECOVERY (22) - track CLX recovery tasks;
+ TUNE_SD (23) - tune copy space;
+ TUNE_VV (24) - tune virtual volume;
+ TUNE_VV_ROLLBACK (25) - tune virtual volume rollback;
+ TUNE_VV_RESTART (26) - tune virtual volume restart;
+ SYSTEM_TUNING (27) - system tuning;
+ NODE_RESCUE (28) - node rescue;
+ REPAIR_SYNC (29) - remote copy repair sync;
+ REMOTE_COPY_SWOVER (30) - remote copy switchover;
+ DEFRAGMENTATION (31) - defragmentation;
+ ENCRYPTION_CHANGE (32) - encryption change;
+ REMOTE_COPY_FAILSAFE (33) - remote copy failsafe;
+ TUNE_TPVV (34) - tune thin virtual volume;
+ REMOTE_COPY_CHG_MODE (35) - remote copy change mode;
+ ONLINE_PROMOTE (37) - online promote snap;
+ RELOCATE_PD (38) - relocate PD;
+ PERIODIC_CSS (39) - remote copy periodic CSS;
+ TUNEVV_LARGE (40) - tune large virtual volume;
+ SD_META_FIXER (41) - compression SD meta fixer;
+ DEDUP_DRYRUN (42) - preview dedup ratio;
+ COMPR_DRYRUN (43) - compression estimation;
+ DEDUP_COMPR_DRYRUN (44) - compression and dedup estimation;
+ UNKNOWN (99) - unknown task type.
+ valuemap:
+ name: 'taskType enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.tasks[?(@.id == "{#ID}")].type.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: task
+ -
+ tag: task
+ value: '{#NAME}'
+ master_item:
+ key: hpe.primera.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#ID}'
+ path: $.id
+ -
+ lld_macro: '{#NAME}'
+ path: $.name
+ -
+ lld_macro: '{#TYPE}'
+ path: $.type
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.tasks
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ -
+ uuid: eb73fcc415c54ac18840d2655f048f6c
+ name: 'Volumes discovery'
+ type: DEPENDENT
+ key: hpe.primera.volumes.discovery
+ delay: '0'
+ filter:
+ evaltype: AND
+ conditions:
+ -
+ macro: '{#NAME}'
+ value: '{$HPE.PRIMERA.VOLUME.NAME.MATCHES}'
+ formulaid: A
+ -
+ macro: '{#NAME}'
+ value: '{$HPE.PRIMERA.VOLUME.NAME.NOT_MATCHES}'
+ operator: NOT_MATCHES_REGEX
+ formulaid: B
+ description: 'List of storage volume resources.'
+ item_prototypes:
+ -
+ uuid: 40db4c8f6d85414e843c97770225f93d
+ name: 'Volume [{#NAME}]: Compaction ratio'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.capacity.efficiency["{#ID}",compaction]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ description: 'The compaction ratio indicates the overall amount of storage space saved with thin technology.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].capacityEfficiency.compaction.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: ab2a583c4b4049a6ac8b7bbc02bda8f5
+ name: 'Volume [{#NAME}]: Storage space saved using compression'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.capacity.efficiency["{#ID}",compression]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ description: 'Indicates the amount of storage space saved using compression.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].capacityEfficiency.compression.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 1c4d78b2dcd64efbbf710ef602a94573
+ name: 'Volume [{#NAME}]: Storage space saved using deduplication'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.capacity.efficiency["{#ID}",deduplication]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ description: 'Indicates the amount of storage space saved using deduplication.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].capacityEfficiency.deduplication.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 3b88e49e88484fe4a77e2a96f6d48322
+ name: 'Volume [{#NAME}]: Overprovisioning ratio'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.capacity.efficiency["{#ID}",overprovisioning]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ description: 'Overprovisioning capacity efficiency ratio.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].capacityEfficiency.overProvisioning.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: f584938e60f94c46b4ed28cc614c797d
+ name: 'Volume [{#NAME}]: Storage space saved using deduplication and compression'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.capacity.efficiency["{#ID}",reduction]'
+ delay: '0'
+ history: 7d
+ value_type: FLOAT
+ description: 'Indicates the amount of storage space saved using deduplication and compression together.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].capacityEfficiency.dataReduction.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 1h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: bb78eda6a941407581f78cf29ef2b647
+ name: 'Volume [{#NAME}]: Administrative space: Free'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.space.admin["{#ID}",free]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Free administrative space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].adminSpace.freeMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: e4f9f8f5c1cd494896eba973b072fc57
+ name: 'Volume [{#NAME}]: Administrative space: Raw reserved'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.space.admin["{#ID}",raw_reserved]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Raw reserved administrative space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].adminSpace.rawReservedMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 12h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: e5a93a042a3b41bab4cf59dc71ec66bf
+ name: 'Volume [{#NAME}]: Administrative space: Reserved'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.space.admin["{#ID}",reserved]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Reserved administrative space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].adminSpace.reservedMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 12h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 67e182afc0124cf5913b0499317a7966
+ name: 'Volume [{#NAME}]: Administrative space: Used'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.space.admin["{#ID}",used]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Used administrative space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].adminSpace.usedMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 5d4659c72ed3492da143ad9c37e71360
+ name: 'Volume [{#NAME}]: Snapshot space: Free'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.space.snapshot["{#ID}",free]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Free snapshot space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].snapshotSpace.freeMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: a2c22eed1c004bcc9292b945b5038858
+ name: 'Volume [{#NAME}]: Snapshot space: Raw reserved'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.space.snapshot["{#ID}",raw_reserved]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Raw reserved snapshot space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].snapshotSpace.rawReservedMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 12h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: a37265c4598f4179bdcfd816769a1d9b
+ name: 'Volume [{#NAME}]: Snapshot space: Reserved'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.space.snapshot["{#ID}",reserved]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Reserved snapshot space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].snapshotSpace.reservedMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 12h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: a278f5ec08c747b085ec53a36357539c
+ name: 'Volume [{#NAME}]: Snapshot space: Used'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.space.snapshot["{#ID}",used]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Used snapshot space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].snapshotSpace.usedMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 64825e4092c6450a8ea7fb7bce2d85ce
+ name: 'Volume [{#NAME}]: Total reserved space'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.space.total["{#ID}",reserved]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Total reserved space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].totalReservedMiB.first()'
+ error_handler: DISCARD_VALUE
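+ # Note: DISCARD_VALUE drops the sample when totalReservedMiB is missing from the response,
+ # so an absent field does not make the item unsupported.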
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 12h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: af15ec5befd146afbfb2b9cc017d03be
+ name: 'Volume [{#NAME}]: Total space'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.space.total["{#ID}",size]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Virtual size of the volume.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].sizeMiB.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 12h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 6000ab524b394e65afe111b65f7b6fd8
+ name: 'Volume [{#NAME}]: Total used space'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.space.total["{#ID}",used]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Total used space. Sum of used user space and used snapshot space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].totalUsedMiB.first()'
+ error_handler: DISCARD_VALUE
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 43c278563b174005ac5302ee48e0cd30
+ name: 'Volume [{#NAME}]: User space: Free'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.space.user["{#ID}",free]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Free user space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].userSpace.freeMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 97aaefe8dffd4d2eb83f908ac8ad775b
+ name: 'Volume [{#NAME}]: User space: Raw reserved'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.space.user["{#ID}",raw_reserved]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Raw reserved user space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].userSpace.rawReservedMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 12h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 135ad5457781492db1cec36787151a71
+ name: 'Volume [{#NAME}]: User space: Reserved'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.space.user["{#ID}",reserved]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Reserved user space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].userSpace.reservedMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 12h
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 3adf03216c4f442693fccbb991c0de3d
+ name: 'Volume [{#NAME}]: User space: Used'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.space.user["{#ID}",used]'
+ delay: '0'
+ history: 7d
+ units: B
+ description: 'Used user space.'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].userSpace.usedMiB.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 10m
+ -
+ type: MULTIPLIER
+ parameters:
+ - '1048576'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: beb17415bd07492d83944da714c492e7
+ name: 'Volume [{#NAME}]: Compression state'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.state["{#ID}",compression]'
+ delay: '0'
+ history: 7d
+ description: |
+ Volume compression state:
+
+ YES (1) - compression is enabled on the volume;
+ NO (2) - compression is disabled on the volume;
+ OFF (3) - compression is turned off;
+ NA (4) - compression is not available on the volume.
+ valuemap:
+ name: 'Volume compressionState enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].compressionState.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 3962d07122a0460fa36c1b151a87717b
+ name: 'Volume [{#NAME}]: Deduplication state'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.state["{#ID}",deduplication]'
+ delay: '0'
+ history: 7d
+ description: |
+ Volume deduplication state:
+
+ YES (1) - deduplication is enabled on the volume;
+ NO (2) - deduplication is disabled on the volume;
+ NA (3) - deduplication is not available on the volume;
+ OFF (4) - deduplication is turned off.
+ valuemap:
+ name: 'Volume deduplicationState enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].deduplicationState.first()'
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 83222faf4e3e414789e028e0b17350c6
+ name: 'Volume [{#NAME}]: Degraded state'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.state["{#ID}",degraded]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
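+ # Note: degradedStates holds a list of detailed-state codes rather than a single number,
+ # hence the text value type (CHAR) and disabled trends.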
+ description: |
+ Volume detailed state:
+
+ LDS_NOT_STARTED (1) - LDs not started.
+ NOT_STARTED (2) - VV not started.
+ NEEDS_CHECK (3) - check for consistency.
+ NEEDS_MAINT_CHECK (4) - maintenance check is required.
+ INTERNAL_CONSISTENCY_ERROR (5) - internal consistency error.
+ SNAPDATA_INVALID (6) - invalid snapshot data.
+ PRESERVED (7) - LD sets are unavailable due to missing chunklets; the remaining VV data has been preserved.
+ STALE (8) - parts of the VV contain old data because of a copy-on-write operation.
+ COPY_FAILED (9) - a promote or copy operation to this volume failed.
+ DEGRADED_AVAIL (10) - degraded due to availability.
+ DEGRADED_PERF (11) - degraded due to performance.
+ PROMOTING (12) - volume is the current target of a promote operation.
+ COPY_TARGET (13) - volume is the current target of a physical copy operation.
+ RESYNC_TARGET (14) - volume is the current target of a resynchronized copy operation.
+ TUNING (15) - volume tuning is in progress.
+ CLOSING (16) - volume is closing.
+ REMOVING (17) - removing the volume.
+ REMOVING_RETRY (18) - retrying a volume removal operation.
+ CREATING (19) - creating a volume.
+ COPY_SOURCE (20) - copy source.
+ IMPORTING (21) - importing a volume.
+ CONVERTING (22) - converting a volume.
+ INVALID (23) - invalid.
+ EXCLUSIVE (24) - local storage system has exclusive access to the volume.
+ CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set.
+ STANDBY (26) - volume in standby mode.
+ SD_META_INCONSISTENT (27) - SD meta is inconsistent.
+ SD_NEEDS_FIX (28) - SD needs fix.
+ SD_META_FIXING (29) - SD meta fix is in progress.
+ UNKNOWN (999) - unknown state.
+ NOT_SUPPORTED_BY_WSAPI (1000) - state not supported by WSAPI.
+ valuemap:
+ name: 'Volume detailed state enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].degradedStates.first()'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 462e4b491dd94c78b299178af6d34ca0
+ name: 'Volume [{#NAME}]: Failed state'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.state["{#ID}",failed]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: |
+ Volume detailed state:
+
+ LDS_NOT_STARTED (1) - LDs not started.
+ NOT_STARTED (2) - VV not started.
+ NEEDS_CHECK (3) - check for consistency.
+ NEEDS_MAINT_CHECK (4) - maintenance check is required.
+ INTERNAL_CONSISTENCY_ERROR (5) - internal consistency error.
+ SNAPDATA_INVALID (6) - invalid snapshot data.
+ PRESERVED (7) - LD sets are unavailable due to missing chunklets; the remaining VV data has been preserved.
+ STALE (8) - parts of the VV contain old data because of a copy-on-write operation.
+ COPY_FAILED (9) - a promote or copy operation to this volume failed.
+ DEGRADED_AVAIL (10) - degraded due to availability.
+ DEGRADED_PERF (11) - degraded due to performance.
+ PROMOTING (12) - volume is the current target of a promote operation.
+ COPY_TARGET (13) - volume is the current target of a physical copy operation.
+ RESYNC_TARGET (14) - volume is the current target of a resynchronized copy operation.
+ TUNING (15) - volume tuning is in progress.
+ CLOSING (16) - volume is closing.
+ REMOVING (17) - removing the volume.
+ REMOVING_RETRY (18) - retrying a volume removal operation.
+ CREATING (19) - creating a volume.
+ COPY_SOURCE (20) - copy source.
+ IMPORTING (21) - importing a volume.
+ CONVERTING (22) - converting a volume.
+ INVALID (23) - invalid.
+ EXCLUSIVE (24) - local storage system has exclusive access to the volume.
+ CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set.
+ STANDBY (26) - volume in standby mode.
+ SD_META_INCONSISTENT (27) - SD meta is inconsistent.
+ SD_NEEDS_FIX (28) - SD needs fix.
+ SD_META_FIXING (29) - SD meta fix is in progress.
+ UNKNOWN (999) - unknown state.
+ NOT_SUPPORTED_BY_WSAPI (1000) - state not supported by WSAPI.
+ valuemap:
+ name: 'Volume detailed state enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].failedStates.first()'
+ -
+ type: JAVASCRIPT
+ parameters:
+ - 'return JSON.stringify(JSON.parse(value));'
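+ # Note: failedStates arrives as a JSON array; re-serializing it with JSON.parse/JSON.stringify
+ # normalizes the formatting to one canonical string before the value is stored.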
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ -
+ uuid: 4d4e34fdbac84cada109cbfe9b69812c
+ name: 'Volume [{#NAME}]: State'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.state["{#ID}"]'
+ delay: '0'
+ history: 7d
+ description: |
+ State of the volume:
+
+ NORMAL (1) - normal operation;
+ DEGRADED (2) - degraded state;
+ FAILED (3) - abnormal operation;
+ UNKNOWN (99) - unknown state.
+ valuemap:
+ name: 'State enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].state.first()'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
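+ # The trigger prototypes below react to 'State enum' values 2 (Degraded) and 3 (Failed).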
+ trigger_prototypes:
+ -
+ uuid: c91920ca3ceb457cb9e2db0bb70d7fe0
+ expression: 'last(/HPE Primera by HTTP/hpe.primera.volume.state["{#ID}"])=2'
+ name: 'Volume [{#NAME}]: Degraded'
+ opdata: 'Current value: {ITEM.LASTVALUE1}'
+ priority: AVERAGE
+ description: 'Volume [{#NAME}] is in a degraded state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ tag: scope
+ value: capacity
+ -
+ uuid: 394b5bd072ac41acb0702601c5d5f049
+ expression: 'last(/HPE Primera by HTTP/hpe.primera.volume.state["{#ID}"])=3'
+ name: 'Volume [{#NAME}]: Failed'
+ opdata: 'Current value: {ITEM.LASTVALUE1}'
+ priority: HIGH
+ description: 'Volume [{#NAME}] is in a failed state.'
+ tags:
+ -
+ tag: scope
+ value: availability
+ -
+ tag: scope
+ value: capacity
+ -
+ uuid: 582544eb48d04a35ab03a9d01901feb9
+ name: 'Volume [{#NAME}]: Remote copy status'
+ type: DEPENDENT
+ key: 'hpe.primera.volume.status["{#ID}",rcopy]'
+ delay: '0'
+ history: 7d
+ trends: '0'
+ value_type: CHAR
+ description: |
+ Remote copy status of the volume:
+
+ NONE (1) - volume is not associated with remote copy;
+ PRIMARY (2) - volume is the primary copy;
+ SECONDARY (3) - volume is the secondary copy;
+ SNAP (4) - volume is the remote copy snapshot;
+ SYNC (5) - volume is a remote copy snapshot being used for synchronization;
+ DELETE (6) - volume is a remote copy snapshot that is marked for deletion;
+ UNKNOWN (99) - remote copy status is unknown for this volume.
+ valuemap:
+ name: 'rcopyStatus enum'
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - '$.volumes.members[?(@.id == "{#ID}")].rcopyStatus.first()'
+ master_item:
+ key: hpe.primera.data.get
+ tags:
+ -
+ tag: component
+ value: storage
+ -
+ tag: component
+ value: volume
+ -
+ tag: volume
+ value: '{#NAME}'
+ graph_prototypes:
+ -
+ uuid: 8c7139d2b7d94773ad6ef813c7fa59c9
+ name: 'Volume [{#NAME}]: Administrative space'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.space.admin["{#ID}",free]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.space.admin["{#ID}",raw_reserved]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.space.admin["{#ID}",reserved]'
+ -
+ sortorder: '3'
+ color: A54F10
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.space.admin["{#ID}",used]'
+ -
+ uuid: c74b88d4e61d4264b804965854eb1da1
+ name: 'Volume [{#NAME}]: Capacity efficiency: Ratio'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.capacity.efficiency["{#ID}",compaction]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.capacity.efficiency["{#ID}",overprovisioning]'
+ -
+ uuid: d2a75ebb20c94eb4ab5e6a9b13d1d439
+ name: 'Volume [{#NAME}]: Capacity efficiency: Space saved'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.capacity.efficiency["{#ID}",compression]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.capacity.efficiency["{#ID}",deduplication]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.capacity.efficiency["{#ID}",reduction]'
+ -
+ uuid: 7214605009354fd382368ec8699d5474
+ name: 'Volume [{#NAME}]: Snapshot space'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.space.snapshot["{#ID}",free]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.space.snapshot["{#ID}",raw_reserved]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.space.snapshot["{#ID}",reserved]'
+ -
+ sortorder: '3'
+ color: A54F10
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.space.snapshot["{#ID}",used]'
+ -
+ uuid: 0ad9f609419340168bf7f49de98e0135
+ name: 'Volume [{#NAME}]: User space'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.space.user["{#ID}",free]'
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.space.user["{#ID}",raw_reserved]'
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.space.user["{#ID}",reserved]'
+ -
+ sortorder: '3'
+ color: A54F10
+ item:
+ host: 'HPE Primera by HTTP'
+ key: 'hpe.primera.volume.space.user["{#ID}",used]'
+ master_item:
+ key: hpe.primera.data.get
+ lld_macro_paths:
+ -
+ lld_macro: '{#ID}'
+ path: $.id
+ -
+ lld_macro: '{#NAME}'
+ path: $.name
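+ # {#ID} and {#NAME} are read from each element of the $.volumes.members array extracted
+ # below and substituted into the keys and names of the prototypes above.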
+ preprocessing:
+ -
+ type: JSONPATH
+ parameters:
+ - $.volumes.members
+ -
+ type: DISCARD_UNCHANGED_HEARTBEAT
+ parameters:
+ - 6h
+ tags:
+ -
+ tag: class
+ value: storage
+ -
+ tag: target
+ value: hpe
+ -
+ tag: target
+ value: primera
+ macros:
+ -
+ macro: '{$HPE.PRIMERA.API.PASSWORD}'
+ type: SECRET_TEXT
+ description: 'Specify password for WSAPI.'
+ -
+ macro: '{$HPE.PRIMERA.API.PORT}'
+ value: '443'
+ description: 'The WSAPI port.'
+ -
+ macro: '{$HPE.PRIMERA.API.SCHEME}'
+ value: https
+ description: 'The WSAPI scheme (http/https).'
+ -
+ macro: '{$HPE.PRIMERA.API.USERNAME}'
+ value: zabbix
+ description: 'Specify user name for WSAPI.'
+ -
+ macro: '{$HPE.PRIMERA.CPG.NAME.MATCHES}'
+ value: '.*'
+ description: 'This macro is used in the filters of the CPGs discovery rule.'
+ -
+ macro: '{$HPE.PRIMERA.CPG.NAME.NOT_MATCHES}'
+ value: CHANGE_IF_NEEDED
+ description: 'This macro is used in the filters of the CPGs discovery rule.'
+ -
+ macro: '{$HPE.PRIMERA.DATA.TIMEOUT}'
+ value: 15s
+ description: 'Response timeout for WSAPI.'
+ -
+ macro: '{$HPE.PRIMERA.LLD.FILTER.TASK.NAME.MATCHES}'
+ value: CHANGE_IF_NEEDED
+ description: 'Filter of discoverable tasks by name.'
+ -
+ macro: '{$HPE.PRIMERA.LLD.FILTER.TASK.NAME.NOT_MATCHES}'
+ value: '.*'
+ description: 'Filter to exclude discovered tasks by name.'
+ -
+ macro: '{$HPE.PRIMERA.LLD.FILTER.TASK.TYPE.MATCHES}'
+ value: '.*'
+ description: 'Filter of discoverable tasks by type.'
+ -
+ macro: '{$HPE.PRIMERA.LLD.FILTER.TASK.TYPE.NOT_MATCHES}'
+ value: CHANGE_IF_NEEDED
+ description: 'Filter to exclude discovered tasks by type.'
+ -
+ macro: '{$HPE.PRIMERA.VOLUME.NAME.MATCHES}'
+ value: '.*'
+ description: 'This macro is used in the filters of the volume discovery rule.'
+ -
+ macro: '{$HPE.PRIMERA.VOLUME.NAME.NOT_MATCHES}'
+ value: ^(admin|.srdata|.mgmtdata)$
+ description: 'This macro is used in the filters of the volume discovery rule.'
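+ # The default NOT_MATCHES pattern excludes the internal admin, .srdata and .mgmtdata volumes
+ # from discovery (the dots are unescaped, so each also matches any single character).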
+ valuemaps:
+ -
+ uuid: 79ba0611293541f29f8b43b34e64465d
+ name: Boolean
+ mappings:
+ -
+ value: '0'
+ newvalue: 'No'
+ -
+ value: '1'
+ newvalue: 'Yes'
+ -
+ uuid: fd9a3483b02f45c6836b9a126b669402
+ name: 'diskState enum'
+ mappings:
+ -
+ value: '1'
+ newvalue: Normal
+ -
+ value: '2'
+ newvalue: Degraded
+ -
+ value: '3'
+ newvalue: New
+ -
+ value: '4'
+ newvalue: Failed
+ -
+ value: '99'
+ newvalue: Unknown
+ -
+ uuid: 7513c4c923884ed4b8e35ee9cdf4f627
+ name: 'hardwareType enum'
+ mappings:
+ -
+ value: '1'
+ newvalue: FC
+ -
+ value: '2'
+ newvalue: Eth
+ -
+ value: '3'
+ newvalue: iSCSI
+ -
+ value: '4'
+ newvalue: CNA
+ -
+ value: '5'
+ newvalue: SAS
+ -
+ value: '6'
+ newvalue: Combo
+ -
+ value: '7'
+ newvalue: NVMe
+ -
+ value: '8'
+ newvalue: Unknown
+ -
+ uuid: 5d243add5b534ebfac8ef95d55c4c8c9
+ name: 'portConnType enum'
+ mappings:
+ -
+ value: '1'
+ newvalue: Host
+ -
+ value: '2'
+ newvalue: Disk
+ -
+ value: '3'
+ newvalue: Free
+ -
+ value: '4'
+ newvalue: Iport
+ -
+ value: '5'
+ newvalue: RCFC
+ -
+ value: '6'
+ newvalue: Peer
+ -
+ value: '7'
+ newvalue: RCIP
+ -
+ value: '8'
+ newvalue: ISCSI
+ -
+ value: '9'
+ newvalue: CNA
+ -
+ value: '10'
+ newvalue: FS
+ -
+ uuid: 5762248dd10143a3945c989f0fb73b47
+ name: 'portFailOverState enum'
+ mappings:
+ -
+ value: '1'
+ newvalue: None
+ -
+ value: '2'
+ newvalue: 'Failover pending'
+ -
+ value: '3'
+ newvalue: 'Failed over'
+ -
+ value: '4'
+ newvalue: Active
+ -
+ value: '5'
+ newvalue: 'Active down'
+ -
+ value: '6'
+ newvalue: 'Active failed'
+ -
+ value: '7'
+ newvalue: 'Failback pending'
+ -
+ uuid: 5a6cdc765c254f17b136c4267fc71349
+ name: 'portLinkState enum'
+ mappings:
+ -
+ value: '1'
+ newvalue: 'Config wait'
+ -
+ value: '2'
+ newvalue: 'ALPA wait'
+ -
+ value: '3'
+ newvalue: 'Login wait'
+ -
+ value: '4'
+ newvalue: 'Link is ready'
+ -
+ value: '5'
+ newvalue: 'Link lost sync'
+ -
+ value: '6'
+ newvalue: 'In error state'
+ -
+ value: '7'
+ newvalue: xxx
+ -
+ value: '8'
+ newvalue: 'Non participate'
+ -
+ value: '9'
+ newvalue: 'Core dump'
+ -
+ value: '10'
+ newvalue: Offline
+ -
+ value: '11'
+ newvalue: 'FW dead'
+ -
+ value: '12'
+ newvalue: 'Idle for reset'
+ -
+ value: '13'
+ newvalue: 'DHCP in progress'
+ -
+ value: '14'
+ newvalue: 'Pending reset'
+ -
+ value: '15'
+ newvalue: New
+ -
+ value: '16'
+ newvalue: Disabled
+ -
+ value: '17'
+ newvalue: Down
+ -
+ value: '18'
+ newvalue: Failed
+ -
+ value: '19'
+ newvalue: Purging
+ -
+ uuid: eb36574d642d4f9ea51688b4ff971c91
+ name: 'rcopyStatus enum'
+ mappings:
+ -
+ value: '1'
+ newvalue: None
+ -
+ value: '2'
+ newvalue: Primary
+ -
+ value: '3'
+ newvalue: Secondary
+ -
+ value: '4'
+ newvalue: Snap
+ -
+ value: '5'
+ newvalue: Sync
+ -
+ value: '6'
+ newvalue: Delete
+ -
+ value: '99'
+ newvalue: Unknown
+ -
+ uuid: aaa5a863e8524a7088e64a51f5976c98
+ name: 'Service state'
+ mappings:
+ -
+ value: '0'
+ newvalue: Down
+ -
+ value: '1'
+ newvalue: Up
+ -
+ uuid: 86e4337cbc86423fb9f605a9fb3b25b1
+ name: 'State enum'
+ mappings:
+ -
+ value: '1'
+ newvalue: Normal
+ -
+ value: '2'
+ newvalue: Degraded
+ -
+ value: '3'
+ newvalue: Failed
+ -
+ value: '99'
+ newvalue: Unknown
+ -
+ uuid: 5cfd4442b6244399b7aa8b57e9816a4e
+ name: 'taskStatus enum'
+ mappings:
+ -
+ value: '1'
+ newvalue: Done
+ -
+ value: '2'
+ newvalue: Active
+ -
+ value: '3'
+ newvalue: Cancelled
+ -
+ value: '4'
+ newvalue: Failed
+ -
+ uuid: 9a8e1dbbb8f7497c9492fc941fde7177
+ name: 'taskType enum'
+ mappings:
+ -
+ value: '1'
+ newvalue: 'VV copy'
+ -
+ value: '2'
+ newvalue: 'Phys copy resync'
+ -
+ value: '3'
+ newvalue: 'Move regions'
+ -
+ value: '4'
+ newvalue: 'Promote SV'
+ -
+ value: '5'
+ newvalue: 'Remote copy sync'
+ -
+ value: '6'
+ newvalue: 'Remote copy reverse'
+ -
+ value: '7'
+ newvalue: 'Remote copy failover'
+ -
+ value: '8'
+ newvalue: 'Remote copy recover'
+ -
+ value: '9'
+ newvalue: 'Remote copy restore'
+ -
+ value: '10'
+ newvalue: 'Compact CPG'
+ -
+ value: '11'
+ newvalue: 'Compact IDS'
+ -
+ value: '12'
+ newvalue: 'Snapshot accounting'
+ -
+ value: '13'
+ newvalue: 'Check VV'
+ -
+ value: '14'
+ newvalue: 'Scheduled task'
+ -
+ value: '15'
+ newvalue: 'System task'
+ -
+ value: '16'
+ newvalue: 'Background task'
+ -
+ value: '17'
+ newvalue: 'Import VV'
+ -
+ value: '18'
+ newvalue: 'Online copy'
+ -
+ value: '19'
+ newvalue: 'Convert VV'
+ -
+ value: '20'
+ newvalue: 'Background command'
+ -
+ value: '21'
+ newvalue: 'CLX sync'
+ -
+ value: '22'
+ newvalue: 'CLX recovery'
+ -
+ value: '23'
+ newvalue: 'Tune SD'
+ -
+ value: '24'
+ newvalue: 'Tune VV'
+ -
+ value: '25'
+ newvalue: 'Tune VV rollback'
+ -
+ value: '26'
+ newvalue: 'Tune VV restart'
+ -
+ value: '27'
+ newvalue: 'System tuning'
+ -
+ value: '28'
+ newvalue: 'Node rescue'
+ -
+ value: '29'
+ newvalue: 'Repair sync'
+ -
+ value: '30'
+ newvalue: 'Remote copy switchover'
+ -
+ value: '31'
+ newvalue: Defragmentation
+ -
+ value: '32'
+ newvalue: 'Encryption change'
+ -
+ value: '33'
+ newvalue: 'Remote copy failsafe'
+ -
+ value: '34'
+ newvalue: 'Tune TPVV'
+ -
+ value: '35'
+ newvalue: 'Remote copy change mode'
+ -
+ value: '37'
+ newvalue: 'Online promote'
+ -
+ value: '38'
+ newvalue: 'Relocate PD'
+ -
+ value: '39'
+ newvalue: 'Periodic CSS'
+ -
+ value: '40'
+ newvalue: 'Tune VV large'
+ -
+ value: '41'
+ newvalue: 'SD meta fixer'
+ -
+ value: '42'
+ newvalue: 'Dedup dryrun'
+ -
+ value: '43'
+ newvalue: 'Compr dryrun'
+ -
+ value: '44'
+ newvalue: 'Dedup compr dryrun'
+ -
+ value: '99'
+ newvalue: Unknown
+ -
+ uuid: f529111642364bb3a23637adc554e592
+ name: 'Volume compressionState enum'
+ mappings:
+ -
+ value: '1'
+ newvalue: 'Yes'
+ -
+ value: '2'
+ newvalue: 'No'
+ -
+ value: '3'
+ newvalue: 'Off'
+ -
+ value: '4'
+ newvalue: NA
+ -
+ uuid: 716912c7b7d94f8e97fbf911edc9578e
+ name: 'Volume deduplicationState enum'
+ mappings:
+ -
+ value: '1'
+ newvalue: 'Yes'
+ -
+ value: '2'
+ newvalue: 'No'
+ -
+ value: '3'
+ newvalue: NA
+ -
+ value: '4'
+ newvalue: 'Off'
+ -
+ uuid: 6a2355e32bf54483b252f1aaf170aa45
+ name: 'Volume detailed state enum'
+ mappings:
+ -
+ value: '1'
+ newvalue: 'LDS not started'
+ -
+ value: '2'
+ newvalue: 'VV not started'
+ -
+ value: '3'
+ newvalue: 'Needs check'
+ -
+ value: '4'
+ newvalue: 'Needs maint check'
+ -
+ value: '5'
+ newvalue: 'Internal consistency error'
+ -
+ value: '6'
+ newvalue: 'Snapdata invalid'
+ -
+ value: '7'
+ newvalue: Preserved
+ -
+ value: '8'
+ newvalue: Stale
+ -
+ value: '9'
+ newvalue: 'Copy failed'
+ -
+ value: '10'
+ newvalue: 'Degraded avail'
+ -
+ value: '11'
+ newvalue: 'Degraded perf'
+ -
+ value: '12'
+ newvalue: Promoting
+ -
+ value: '13'
+ newvalue: 'Copy target'
+ -
+ value: '14'
+ newvalue: 'Resync target'
+ -
+ value: '15'
+ newvalue: Tuning
+ -
+ value: '16'
+ newvalue: Closing
+ -
+ value: '17'
+ newvalue: Removing
+ -
+ value: '18'
+ newvalue: 'Removing retry'
+ -
+ value: '19'
+ newvalue: Creating
+ -
+ value: '20'
+ newvalue: 'Copy source'
+ -
+ value: '21'
+ newvalue: Importing
+ -
+ value: '22'
+ newvalue: Converting
+ -
+ value: '23'
+ newvalue: Invalid
+ -
+ value: '24'
+ newvalue: Exclusive
+ -
+ value: '25'
+ newvalue: Consistent
+ -
+ value: '26'
+ newvalue: Standby
+ -
+ value: '27'
+ newvalue: 'SD meta inconsistent'
+ -
+ value: '28'
+ newvalue: 'SD needs fix'
+ -
+ value: '29'
+ newvalue: 'SD meta fix'
+ -
+ value: '999'
+ newvalue: 'Unknown state'
+ -
+ value: '1000'
+ newvalue: 'State not supported by WSAPI'
+ graphs:
+ -
+ uuid: 79e292a64d9247c486812db7a62c0eda
+ name: 'HPE Primera: Capacity'
+ graph_items:
+ -
+ color: 1A7C11
+ item:
+ host: 'HPE Primera by HTTP'
+ key: hpe.primera.system.capacity.allocated
+ -
+ sortorder: '1'
+ color: 2774A4
+ item:
+ host: 'HPE Primera by HTTP'
+ key: hpe.primera.system.capacity.failed
+ -
+ sortorder: '2'
+ color: F63100
+ item:
+ host: 'HPE Primera by HTTP'
+ key: hpe.primera.system.capacity.free
+ -
+ sortorder: '3'
+ color: A54F10
+ item:
+ host: 'HPE Primera by HTTP'
+ key: hpe.primera.system.capacity.total