
github.com/zabbix/zabbix.git
author    Denis Rasihov <denis.rasihov@zabbix.com> 2022-05-11 08:29:51 +0300
committer Denis Rasihov <denis.rasihov@zabbix.com> 2022-05-11 08:35:55 +0300
commit    efb8223372133e69023d2c481273f7d6c04cab54 (patch)
tree      9fe724f0920ed52ff716fb5ab7ceff9da2591dd1 /templates/san/hpe_msa2060_http/README.md
parent    c8311ee7c9ecba61b6c64c27be44a953a8a2fbc6 (diff)

[ZBXNEXT-7630] added templates for HPE MSA 2060 and MSA 2040

Diffstat (limited to 'templates/san/hpe_msa2060_http/README.md')
-rw-r--r-- templates/san/hpe_msa2060_http/README.md | 242
1 file changed, 242 insertions, 0 deletions
diff --git a/templates/san/hpe_msa2060_http/README.md b/templates/san/hpe_msa2060_http/README.md
new file mode 100644
index 00000000000..66de4349d5b
--- /dev/null
+++ b/templates/san/hpe_msa2060_http/README.md
@@ -0,0 +1,242 @@
+
+# HPE MSA 2060 Storage by HTTP
+
+## Overview
+
+For Zabbix version: 6.0 and higher
+This template monitors the HPE MSA 2060 storage system by HTTP.
+It works without any external scripts and uses script items.
+
+
+This template was tested on:
+
+- MSA 2060, version 21.2.8
+
+## Setup
+
+> See [Zabbix template operation](https://www.zabbix.com/documentation/6.0/manual/config/templates_out_of_the_box/http) for basic instructions.
+
+1. Create the user `zabbix` on the storage with the browse role and enable it for all domains.
+2. The WSAPI server does not start automatically. To enable it:
+- log in to the CLI as Super, Service, or any role granted the wsapi_set right;
+- start the WSAPI server with the command `startwsapi`;
+- check the WSAPI state with the command `showwsapi`.
+3. Link the template to the host.
+4. Configure the {$HPE.MSA.API.USERNAME} and {$HPE.MSA.API.PASSWORD} macros (see the API sketch below).
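+
+A quick sketch of step 4 through the Zabbix API (Python with `requests`), assuming a Zabbix 6.0 frontend at `zabbix.example.com`, an API user allowed to edit the host, and a hypothetical host ID `10500`; none of these values come from the template itself:
+
+```python
+import requests
+
+ZABBIX_API = "https://zabbix.example.com/api_jsonrpc.php"  # assumed frontend URL
+
+def api_call(method, params, auth=None, req_id=1):
+    """Send one Zabbix JSON-RPC request and return its result."""
+    payload = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
+    if auth:
+        payload["auth"] = auth  # Zabbix 6.0 passes the session token in the body
+    resp = requests.post(ZABBIX_API, json=payload, timeout=10)
+    resp.raise_for_status()
+    body = resp.json()
+    if "error" in body:
+        raise RuntimeError(body["error"])
+    return body["result"]
+
+# Log in with an API user (replace the credentials with real ones).
+token = api_call("user.login", {"username": "Admin", "password": "zabbix"})
+
+host_id = "10500"  # hypothetical ID of the host the template is linked to
+
+# Create the WSAPI credential macros the template expects.
+for macro, value in [("{$HPE.MSA.API.USERNAME}", "zabbix"),
+                     ("{$HPE.MSA.API.PASSWORD}", "<storage user password>")]:
+    api_call("usermacro.create",
+             {"hostid": host_id, "macro": macro, "value": value},
+             auth=token)
+```
+
+If a macro already exists on the host, `usermacro.update` with its `hostmacroid` would be used instead of `usermacro.create`.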
+
+
+## Zabbix configuration
+
+No specific Zabbix configuration is required.
+
+### Macros used
+
+|Name|Description|Default|
+|----|-----------|-------|
+|{$HPE.MSA.API.PASSWORD} |<p>Specify password for WSAPI.</p> |`` |
+|{$HPE.MSA.API.PORT} |<p>Connection port for WSAPI.</p> |`443` |
+|{$HPE.MSA.API.SCHEME} |<p>Connection scheme for WSAPI.</p> |`https` |
+|{$HPE.MSA.API.USERNAME} |<p>Specify user name for WSAPI.</p> |`zabbix` |
+|{$HPE.MSA.DATA.TIMEOUT} |<p>Response timeout for WSAPI.</p> |`5s` |
+|{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT} |<p>The critical threshold of the disk group space utilization in percent.</p> |`90` |
+|{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN} |<p>The warning threshold of the disk group space utilization in percent.</p> |`80` |
+|{$HPE.PRIMERA.POOL.PUSED.MAX.CRIT} |<p>The critical threshold of the pool space utilization in percent.</p> |`90` |
+|{$HPE.PRIMERA.POOL.PUSED.MAX.WARN} |<p>The warning threshold of the pool space utilization in percent.</p> |`80` |
+
+## Template links
+
+There are no template links in this template.
+
+## Discovery rules
+
+|Name|Description|Type|Key and additional info|
+|----|-----------|----|----|
+|Controllers discovery |<p>Discover controllers.</p> |DEPENDENT |hpe.msa.controllers.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Controller statistics discovery |<p>Discover controller statistics.</p> |DEPENDENT |hpe.msa.controllers.statistics.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Disk groups discovery |<p>Discover disk groups.</p> |DEPENDENT |hpe.msa.disks.groups.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Disk group statistics discovery |<p>Discover disk group statistics.</p> |DEPENDENT |hpe.msa.disks.groups.statistics.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Disks discovery |<p>Discover disks.</p> |DEPENDENT |hpe.msa.disks.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Overrides:**</p><p>SSD life left<br> - {#TYPE} MATCHES_REGEX `8`<br> - ITEM_PROTOTYPE REGEXP `SSD life left` - DISCOVER</p> |
+|Enclosures discovery |<p>Discover enclosures.</p> |DEPENDENT |hpe.msa.enclosures.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Fans discovery |<p>Discover fans.</p> |DEPENDENT |hpe.msa.fans.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Pools discovery |<p>Discover pools.</p> |DEPENDENT |hpe.msa.pools.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Ports discovery |<p>Discover ports.</p> |DEPENDENT |hpe.msa.ports.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Power supplies discovery |<p>Discover power supplies.</p> |DEPENDENT |hpe.msa.power_supplies.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Volumes discovery |<p>Discover volumes.</p> |DEPENDENT |hpe.msa.volumes.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+|Volume statistics discovery |<p>Discover volume statistics.</p> |DEPENDENT |hpe.msa.volumes.statistics.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+
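+The discovery rules above are dependent items: their low-level discovery rows are derived from the raw `HPE MSA: Get ...` items listed in the next section. The Python fragment below is only a conceptual illustration of that mapping for controllers discovery, using an invented payload; the real template performs it with item preprocessing inside Zabbix, not with an external script.
+
+```python
+import json
+
+# Invented fragment shaped like the data from "HPE MSA: Get controllers".
+controllers_raw = json.loads("""
+[
+  {"durable-id": "controller_a", "sc-fw": "IN100R004", "health-numeric": 0},
+  {"durable-id": "controller_b", "sc-fw": "IN100R004", "health-numeric": 0}
+]
+""")
+
+# A dependent discovery rule effectively turns each element into a set of LLD macros.
+lld_rows = [{"{#DURABLE.ID}": ctrl["durable-id"]} for ctrl in controllers_raw]
+print(json.dumps(lld_rows, indent=2))
+
+# Item prototypes such as hpe.msa.controllers["{#DURABLE.ID}",health] are then
+# created once per discovered row, here for "controller_a" and "controller_b".
+```
+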
+## Items collected
+
+|Group|Name|Description|Type|Key and additional info|
+|-----|----|-----------|----|---------------------|
+|HPE |Product ID |<p>The product model identifier.</p> |DEPENDENT |hpe.msa.system.product_id<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['product-id']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |System contact |<p>The name of the person who administers the system.</p> |DEPENDENT |hpe.msa.system.contact<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['system-contact']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |System information |<p>A brief description of what the system is used for or how it is configured.</p> |DEPENDENT |hpe.msa.system.info<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['system-information']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |System location |<p>The location of the system.</p> |DEPENDENT |hpe.msa.system.location<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['system-location']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |System name |<p>The name of the storage system.</p> |DEPENDENT |hpe.msa.system.name<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['system-name']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Vendor name |<p>The vendor name.</p> |DEPENDENT |hpe.msa.system.vendor_name<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['vendor-name']`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |System health |<p>System health status.</p> |DEPENDENT |hpe.msa.system.health<p>**Preprocessing**:</p><p>- JSONPATH: `$.system[0].['health-numeric']`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p> |
+|HPE |HPE MSA: Service ping |<p>Check if HTTP/HTTPS service accepts TCP connections.</p> |SIMPLE |net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Controller [{#DURABLE.ID}]: Firmware version |<p>Storage controller firmware version.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",firmware]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['sc-fw'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#DURABLE.ID}]: Part number |<p>Part number of the controller.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#DURABLE.ID}]: Serial number |<p>Storage controller serial number.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#DURABLE.ID}]: Health |<p>Controller health status.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Controller [{#DURABLE.ID}]: Status |<p>Storage controller status.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Controller [{#DURABLE.ID}]: CPU utilization |<p>Percentage of time the CPU is busy, from 0 to 100.</p> |DEPENDENT |hpe.msa.controllers.cpu["{#DURABLE.ID}",util]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['cpu-load'].first()`</p> |
+|HPE |Controller [{#DURABLE.ID}]: IOPS, rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.controllers.iops["{#DURABLE.ID}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['iops'].first()`</p> |
+|HPE |Controller [{#DURABLE.ID}]: Uptime |<p>Number of seconds since the controller was restarted.</p> |DEPENDENT |hpe.msa.controllers["{#DURABLE.ID}",uptime]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['power-on-time'].first()`</p> |
+|HPE |Disk group [{#NAME}]: Disks count |<p>Number of disks in the disk group.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",disk_count]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['diskcount'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Health |<p>Disk group health.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Blocks size |<p>The size of a block, in bytes.</p> |DEPENDENT |hpe.msa.disks.groups.blocks["{#NAME}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['blocksize'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Blocks free |<p>Free space in blocks.</p> |DEPENDENT |hpe.msa.disks.groups.blocks["{#NAME}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['freespace-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Blocks total |<p>Total space in blocks.</p> |DEPENDENT |hpe.msa.disks.groups.blocks["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['blocks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: Space free |<p>The free space in the disk group.</p> |CALCULATED |hpe.msa.disks.groups.space["{#NAME}",free]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.disks.groups.blocks["{#NAME}",size])*last(//hpe.msa.disks.groups.blocks["{#NAME}",free])` |
+|HPE |Disk group [{#NAME}]: Space total |<p>The capacity of the disk group.</p> |CALCULATED |hpe.msa.disks.groups.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.disks.groups.blocks["{#NAME}",size])*last(//hpe.msa.disks.groups.blocks["{#NAME}",total])` |
+|HPE |Disk group [{#NAME}]: Space utilization |<p>The space utilization percentage in the disk group.</p> |CALCULATED |hpe.msa.disks.groups.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.disks.groups.space["{#NAME}",free])/last(//hpe.msa.disks.groups.space["{#NAME}",total])*100` |
+|HPE |Disk group [{#NAME}]: RAID type |<p>The RAID level of the disk group.</p> |DEPENDENT |hpe.msa.disks.groups.raid["{#NAME}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['raidtype-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk group [{#NAME}]: Status |<p>The status of the disk group:</p><p>- CRIT: Critical. The disk group is online but isn't fault tolerant because some of its disks are down.</p><p>- DMGD: Damaged. The disk group is online and fault tolerant, but some of its disks are damaged.</p><p>- FTDN: Fault tolerant with a down disk. The disk group is online and fault tolerant, but some of its disks are down.</p><p>- FTOL: Fault tolerant.</p><p>- MSNG: Missing. The disk group is online and fault tolerant, but some of its disks are missing.</p><p>- OFFL: Offline. Either the disk group is using offline initialization, or its disks are down and data may be lost.</p><p>- QTCR: Quarantined critical. The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online, or if, 60 seconds after being quarantined, the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p><p>- QTDN: Quarantined with a down disk. The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online, or if, 60 seconds after being quarantined, the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p><p>- QTOF: Quarantined offline. The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group.</p><p>- QTUN: Quarantined unsupported. The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups.</p><p>- STOP: The disk group is stopped.</p><p>- UNKN: Unknown.</p><p>- UP: Up. The disk group is online and does not have fault-tolerant attributes.</p> |DEPENDENT |hpe.msa.disks.groups["{#NAME}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk group [{#NAME}]: IOPS, rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.disks.groups.iops["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['iops'].first()`</p> |
+|HPE |Disk group [{#NAME}]: Average response time: Total |<p>Average response time for read and write operations, calculated over the interval since these statistics were last requested or reset.</p> |DEPENDENT |hpe.msa.disks.groups.avg_rsp_time["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['avg-rsp-time'].first()`</p><p>- MULTIPLIER: `0.000001`</p> |
+|HPE |Disk group [{#NAME}]: Average response time: Read |<p>Average response time for all read operations, calculated over the interval since these statistics were last requested or reset.</p> |DEPENDENT |hpe.msa.disks.groups.avg_rsp_time["{#NAME}",read]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['avg-read-rsp-time'].first()`</p><p>- MULTIPLIER: `0.000001`</p> |
+|HPE |Disk group [{#NAME}]: Average response time: Write |<p>Average response time for all write operations, calculated over the interval since these statistics were last requested or reset.</p> |DEPENDENT |hpe.msa.disks.groups.avg_rsp_time["{#NAME}",write]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['avg-write-rsp-time'].first()`</p><p>- MULTIPLIER: `0.000001`</p> |
+|HPE |Disk group [{#NAME}]: Reads, rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.disks.groups.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Disk group [{#NAME}]: Writes, rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.disks.groups.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Disk group [{#NAME}]: Data transfer rate: Total |<p>The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.disks.groups.data_transfer.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['bytes-per-second-numeric'].first()`</p> |
+|HPE |Disk group [{#NAME}]: Data transfer rate: Reads |<p>The data read rate, in bytes per second.</p> |DEPENDENT |hpe.msa.disks.groups.data_transfer.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['data-read-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Disk group [{#NAME}]: Data transfer rate: Writes |<p>The data write rate, in bytes per second.</p> |DEPENDENT |hpe.msa.disks.groups.data_transfer.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['data-written-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Pool [{#NAME}]: Health |<p>Pool health.</p> |DEPENDENT |hpe.msa.pools["{#NAME}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Pool [{#NAME}]: Blocks size |<p>The size of a block, in bytes.</p> |DEPENDENT |hpe.msa.pools.blocks["{#NAME}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['blocksize'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Pool [{#NAME}]: Blocks available |<p>Available space in blocks.</p> |DEPENDENT |hpe.msa.pools.blocks["{#NAME}",available]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['total-avail-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Pool [{#NAME}]: Blocks total |<p>Total space in blocks.</p> |DEPENDENT |hpe.msa.pools.blocks["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['name'] == "{#NAME}")].['total-size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Pool [{#NAME}]: Space free |<p>The free space in the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",free]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.pools.blocks["{#NAME}",size])*last(//hpe.msa.pools.blocks["{#NAME}",available])` |
+|HPE |Pool [{#NAME}]: Space total |<p>The capacity of the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.pools.blocks["{#NAME}",size])*last(//hpe.msa.pools.blocks["{#NAME}",total])` |
+|HPE |Pool [{#NAME}]: Space utilization |<p>The space utilization percentage in the pool.</p> |CALCULATED |hpe.msa.pools.space["{#NAME}",util]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.pools.space["{#NAME}",free])/last(//hpe.msa.pools.space["{#NAME}",total])*100` |
+|HPE |Volume [{#NAME}]: Blocks size |<p>The size of a block, in bytes.</p> |DEPENDENT |hpe.msa.volumes.blocks["{#NAME}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['blocksize'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Volume [{#NAME}]: Blocks allocated |<p>The amount of blocks currently allocated to the volume.</p> |DEPENDENT |hpe.msa.volumes.blocks["{#NAME}",allocated]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['allocated-size-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Volume [{#NAME}]: Blocks total |<p>Total space in blocks.</p> |DEPENDENT |hpe.msa.volumes.blocks["{#NAME}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['blocks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Volume [{#NAME}]: Space allocated |<p>The amount of space currently allocated to the volume.</p> |CALCULATED |hpe.msa.volumes.space["{#NAME}",allocated]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.volumes.blocks["{#NAME}",size])*last(//hpe.msa.volumes.blocks["{#NAME}",allocated])` |
+|HPE |Volume [{#NAME}]: Space total |<p>The capacity of the volume.</p> |CALCULATED |hpe.msa.volumes.space["{#NAME}",total]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Expression**:</p>`last(//hpe.msa.volumes.blocks["{#NAME}",size])*last(//hpe.msa.volumes.blocks["{#NAME}",total])` |
+|HPE |Volume [{#NAME}]: IOPS, rate |<p>Input/output operations per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.volumes.iops["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['iops'].first()`</p> |
+|HPE |Volume [{#NAME}]: Reads, rate |<p>Number of read operations per second.</p> |DEPENDENT |hpe.msa.volumes.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['number-of-reads'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Writes, rate |<p>Number of write operations per second.</p> |DEPENDENT |hpe.msa.volumes.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['number-of-writes'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Data transfer rate: Total |<p>The data transfer rate, in bytes per second, calculated over the interval since these statistics were last requested or reset. This value will be zero if it has not been requested or reset since a controller restart.</p> |DEPENDENT |hpe.msa.volumes.data_transfer.total["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['bytes-per-second-numeric'].first()`</p> |
+|HPE |Volume [{#NAME}]: Data transfer rate: Reads |<p>The data read rate, in bytes per second.</p> |DEPENDENT |hpe.msa.volumes.data_transfer.reads["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['data-read-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Data transfer rate: Writes |<p>The data write rate, in bytes per second.</p> |DEPENDENT |hpe.msa.volumes.data_transfer.writes["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['data-written-numeric'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Cache: Read hits, rate |<p>For the controller that owns the volume, the number of times the block to be read is found in cache per second.</p> |DEPENDENT |hpe.msa.volumes.cache.read.hits["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['read-cache-hits'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Cache: Read misses, rate |<p>For the controller that owns the volume, the number of times the block to be read is not found in cache per second.</p> |DEPENDENT |hpe.msa.volumes.cache.read.misses["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['read-cache-misses'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Cache: Write hits, rate |<p>For the controller that owns the volume, the number of times the block written to is found in cache per second.</p> |DEPENDENT |hpe.msa.volumes.cache.write.hits["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['write-cache-hits'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Volume [{#NAME}]: Cache: Write misses, rate |<p>For the controller that owns the volume, the number of times the block written to is not found in cache per second.</p> |DEPENDENT |hpe.msa.volumes.cache.write.misses["{#NAME}",rate]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['volume-name'] == "{#NAME}")].['write-cache-misses'].first()`</p><p>- CHANGE_PER_SECOND</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Health |<p>Enclosure health.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Status |<p>Enclosure status.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 6`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Midplane serial number |<p>Midplane serial number.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",midplane_serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['midplane-serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Part number. |<p>Enclosure part number.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Model |<p>Enclosure model.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['model'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Enclosure [{#DURABLE.ID}]: Power |<p>Enclosure power in watts.</p> |DEPENDENT |hpe.msa.enclosures["{#DURABLE.ID}",power]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['enclosure-power'].first()`</p> |
+|HPE |Power supply [{#LOCATION}]: Health |<p>Power supply health status.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Power supply [{#LOCATION}]: Status |<p>Power supply status.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Power supply [{#LOCATION}]: Part number. |<p>Power supply part number.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",part_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['part-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Power supply [{#LOCATION}]: Serial number. |<p>Power supply serial number.</p> |DEPENDENT |hpe.msa.power_supplies["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Port [{#NAME}]: Health |<p>Port health status.</p> |DEPENDENT |hpe.msa.ports["{#NAME}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['port'] == "{#NAME}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Port [{#NAME}]: Status |<p>Port status.</p> |DEPENDENT |hpe.msa.ports["{#NAME}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['port'] == "{#NAME}")].['status-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |{#NAME} [{#LOCATION}]: Health |<p>Fan health status.</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |{#NAME} [{#LOCATION}]: Status |<p>Fan status.</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['status-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |{#NAME} [{#LOCATION}]: Speed |<p>Fan speed (revolutions per minute).</p> |DEPENDENT |hpe.msa.fans["{#DURABLE.ID}",speed]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['speed'].first()`</p> |
+|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Health |<p>Disk health status.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",health]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['health-numeric'].first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Temperature status |<p>Disk temperature status.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",temperature_status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['temperature-status-numeric'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- IN_RANGE: `1 3`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 4`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Temperature |<p>Temperature of the disk.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",temperature]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['temperature-numeric'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Type |<p>Disk type:</p><p>SAS: Enterprise SAS spinning disk.</p><p>SAS MDL: Midline SAS spinning disk.</p><p>SSD SAS: SAS solid-state disk.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['description-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk group |<p>If the disk is in a disk group, the disk group name.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",group]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['disk-group'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Storage pool |<p>If the disk is in a pool, the pool name.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",pool]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['storage-pool-name'].first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Vendor |<p>Disk vendor.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",vendor]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['vendor'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Model |<p>Disk model.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['model'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Serial number |<p>Disk serial number.</p> |DEPENDENT |hpe.msa.disks["{#DURABLE.ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['serial-number'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Blocks size |<p>The size of a block, in bytes.</p> |DEPENDENT |hpe.msa.disks.blocks["{#DURABLE.ID}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['blocksize'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Blocks total |<p>Total space in blocks.</p> |DEPENDENT |hpe.msa.disks.blocks["{#DURABLE.ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['blocks'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: Space total |<p>Total size of the disk.</p> |CALCULATED |hpe.msa.disks.space["{#DURABLE.ID}",total]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p><p>**Expression**:</p>`last(//hpe.msa.disks.blocks["{#DURABLE.ID}",size])*last(//hpe.msa.disks.blocks["{#DURABLE.ID}",total])` |
+|HPE |Disk [{#ENCLOSURE.ID}.{#SLOT}]: SSD life left |<p>The percentage of disk life remaining.</p> |DEPENDENT |hpe.msa.disks.ssd["{#DURABLE.ID}",life_left]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@['durable-id'] == "{#DURABLE.ID}")].['ssd-life-left-numeric'].first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+|Zabbix raw items |HPE MSA: Get system |<p>-</p> |SCRIPT |hpe.msa.raw.system<p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get controllers |<p>-</p> |SCRIPT |hpe.msa.raw.controllers<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controllers']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get controller statistics |<p>-</p> |SCRIPT |hpe.msa.raw.controllers.statistics<p>**Preprocessing**:</p><p>- JSONPATH: `$.['controller-statistics']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get disk groups |<p>-</p> |SCRIPT |hpe.msa.raw.disks.groups<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-groups']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get disk group statistics |<p>-</p> |SCRIPT |hpe.msa.raw.disks.groups.statistics<p>**Preprocessing**:</p><p>- JSONPATH: `$.['disk-group-statistics']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get disks |<p>-</p> |SCRIPT |hpe.msa.raw.disks<p>**Preprocessing**:</p><p>- JSONPATH: `$.['drives']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get enclosures |<p>-</p> |SCRIPT |hpe.msa.raw.enclosures<p>**Preprocessing**:</p><p>- JSONPATH: `$.['enclosures']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get fans |<p>-</p> |SCRIPT |hpe.msa.raw.fans<p>**Preprocessing**:</p><p>- JSONPATH: `$.['fan']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get pools |<p>-</p> |SCRIPT |hpe.msa.raw.pools<p>**Preprocessing**:</p><p>- JSONPATH: `$.['pools']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get ports |<p>-</p> |SCRIPT |hpe.msa.raw.ports<p>**Preprocessing**:</p><p>- JSONPATH: `$.['port']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get power supplies |<p>-</p> |SCRIPT |hpe.msa.raw.power_supplies<p>**Preprocessing**:</p><p>- JSONPATH: `$.['power-supplies']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get volumes |<p>-</p> |SCRIPT |hpe.msa.raw.volumes<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volumes']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+|Zabbix raw items |HPE MSA: Get volume statistics |<p>-</p> |SCRIPT |hpe.msa.raw.volumes.statistics<p>**Preprocessing**:</p><p>- JSONPATH: `$.['volume-statistics']`</p><p>**Expression**:</p>`The text is too long. Please see the template.` |
+
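+Most of the items above are dependent items whose `JSONPATH` preprocessing selects one element of the raw array by `durable-id` or `name`, and the calculated `Space ...` items multiply block counts by the block size. The Python fragment below is only a rough equivalent of that logic, run against an invented disk-group record; it is not how Zabbix itself evaluates the JSONPath or the calculated expressions.
+
+```python
+# Invented record shaped like one element of "HPE MSA: Get disk groups".
+disk_groups = [
+    {"name": "dgA01", "blocksize": 512,
+     "freespace-numeric": 1_500_000_000, "blocks": 2_000_000_000},
+]
+
+name = "dgA01"  # plays the role of the {#NAME} LLD macro
+
+# JSONPATH `$.[?(@['name'] == "{#NAME}")].['freespace-numeric'].first()`
+group = next(g for g in disk_groups if g["name"] == name)
+block_size = group["blocksize"]
+blocks_free = group["freespace-numeric"]
+blocks_total = group["blocks"]
+
+# Calculated items: block counts multiplied by the block size, in bytes.
+space_free = block_size * blocks_free    # hpe.msa.disks.groups.space[...,free]
+space_total = block_size * blocks_total  # hpe.msa.disks.groups.space[...,total]
+
+# Space utilization as documented: last(free)/last(total)*100.
+space_util = space_free / space_total * 100
+print(space_free, space_total, round(space_util, 2))
+```
+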
+## Triggers
+
+|Name|Description|Expression|Severity|Dependencies and additional info|
+|----|-----------|----|----|----|
+|System health is in degraded state |<p>System health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health)=1` |WARNING | |
+|System health is in fault state |<p>System health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health)=2` |AVERAGE | |
+|System health is in unknown state |<p>System health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health)=3` |INFO | |
+|Failed to fetch API data |<p>Zabbix has not received data for items for the last 5 minutes.</p> |`nodata(/HPE MSA 2060 Storage by HTTP/hpe.msa.system.health,5m)=1` |WARNING |<p>**Depends on**:</p><p>- Service is down</p> |
+|Service is down |<p>-</p> |`max(/HPE MSA 2060 Storage by HTTP/net.tcp.service["{$HPE.MSA.API.SCHEME}","{HOST.CONN}"],5m)=0` |WARNING | |
+|Controller [{#DURABLE.ID}]: Controller health is in degraded state |<p>Controller health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",health])=1` |WARNING |<p>**Depends on**:</p><p>- Controller [{#DURABLE.ID}]: Controller is down</p> |
+|Controller [{#DURABLE.ID}]: Controller health is in fault state |<p>Controller health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",health])=2` |AVERAGE |<p>**Depends on**:</p><p>- Controller [{#DURABLE.ID}]: Controller is down</p> |
+|Controller [{#DURABLE.ID}]: Controller health is in unknown state |<p>Controller health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",health])=3` |INFO |<p>**Depends on**:</p><p>- Controller [{#DURABLE.ID}]: Controller is down</p> |
+|Controller [{#DURABLE.ID}]: Controller is down |<p>-</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",status])=1` |HIGH | |
+|Controller [{#DURABLE.ID}]: Controller has been restarted |<p>-</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.controllers["{#DURABLE.ID}",uptime])<10m` |INFO | |
+|Disk group [{#NAME}]: Disk group health is in degraded state |<p>Disk group health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=1` |WARNING | |
+|Disk group [{#NAME}]: Disk group health is in fault state |<p>Disk group health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=2` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group health is in unknown state |<p>Disk group health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",health])=3` |INFO | |
+|Disk group [{#NAME}]: Disk group space is low |<p>Disk group is running low on free space (less than {$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}% available).</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.WARN:"{#NAME}"}` |WARNING |<p>**Depends on**:</p><p>- Disk group [{#NAME}]: Disk group space is critically low</p> |
+|Disk group [{#NAME}]: Disk group space is critically low |<p>Disk group is running low on free space (less than {$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}% available).</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups.space["{#NAME}",util],5m)>{$HPE.PRIMERA.DISKS.GROUP.PUSED.MAX.CRIT:"{#NAME}"}` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is fault tolerant with a down disk |<p>The disk group is online and fault tolerant, but some of its disks are down.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=1` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group has damaged disks |<p>The disk group is online and fault tolerant, but some of its disks are damaged.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=9` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group has missing disks |<p>The disk group is online and fault tolerant, but some of its disks are missing.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=8` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is offline |<p>Either the disk group is using offline initialization, or its disks are down and data may be lost.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=3` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is quarantined critical |<p>The disk group is critical with at least one inaccessible disk. For example, two disks are inaccessible in a RAID 6 disk group or one disk is inaccessible for other fault-tolerant RAID levels. If the inaccessible disks come online, or if, 60 seconds after being quarantined, the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=4` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is quarantined offline |<p>The disk group is offline with multiple inaccessible disks causing user data to be incomplete, or is an NRAID or RAID 0 disk group.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=5` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is quarantined unsupported |<p>The disk group contains data in a format that is not supported by this system. For example, this system does not support linear disk groups.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=5` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is quarantined with an inaccessible disk |<p>The RAID 6 disk group has one inaccessible disk. The disk group is fault tolerant but degraded. If the inaccessible disks come online, or if, 60 seconds after being quarantined, the disk group is QTCR or QTDN, the disk group is automatically dequarantined.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=6` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group is stopped |<p>The disk group is stopped.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=7` |AVERAGE | |
+|Disk group [{#NAME}]: Disk group status is critical |<p>The disk group is online but isn't fault tolerant because some of its disks are down.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks.groups["{#NAME}",status])=2` |AVERAGE | |
+|Pool [{#NAME}]: Pool health is in degraded state |<p>Pool health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=1` |WARNING | |
+|Pool [{#NAME}]: Pool health is in fault state |<p>Pool health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=2` |AVERAGE | |
+|Pool [{#NAME}]: Pool health is in unknown state |<p>Pool [{#NAME}] health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools["{#NAME}",health])=3` |INFO | |
+|Pool [{#NAME}]: Pool space is low |<p>Pool is running low on free space (less than {$HPE.PRIMERA.POOL.PUSED.MAX.WARN:"{#NAME}"}% available).</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.PRIMERA.POOL.PUSED.MAX.WARN:"{#NAME}"}` |WARNING |<p>**Depends on**:</p><p>- Pool [{#NAME}]: Pool space is critically low</p> |
+|Pool [{#NAME}]: Pool space is critically low |<p>Pool is running low on free space (less than {$HPE.PRIMERA.POOL.PUSED.MAX.CRIT:"{#NAME}"}% available).</p> |`min(/HPE MSA 2060 Storage by HTTP/hpe.msa.pools.space["{#NAME}",util],5m)>{$HPE.PRIMERA.POOL.PUSED.MAX.CRIT:"{#NAME}"}` |AVERAGE | |
+|Enclosure [{#DURABLE.ID}]: Enclosure health is in degraded state |<p>Enclosure health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=1` |WARNING | |
+|Enclosure [{#DURABLE.ID}]: Enclosure health is in fault state |<p>Enclosure health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Enclosure [{#DURABLE.ID}]: Enclosure health is in unknown state |<p>Enclosure health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",health])=3` |INFO | |
+|Enclosure [{#DURABLE.ID}]: Enclosure has critical status |<p>Enclosure has critical status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=2` |HIGH | |
+|Enclosure [{#DURABLE.ID}]: Enclosure has warning status |<p>Enclosure has warning status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=3` |WARNING | |
+|Enclosure [{#DURABLE.ID}]: Enclosure is unavailable |<p>Enclosure is unavailable.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=7` |HIGH | |
+|Enclosure [{#DURABLE.ID}]: Enclosure is unrecoverable |<p>Enclosure is unrecoverable.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=4` |HIGH | |
+|Enclosure [{#DURABLE.ID}]: Enclosure has unknown status |<p>Enclosure has unknown status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.enclosures["{#DURABLE.ID}",status])=6` |INFO | |
+|Power supply [{#LOCATION}]: Power supply health is in degraded state |<p>Power supply health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=1` |WARNING | |
+|Power supply [{#LOCATION}]: Power supply health is in fault state |<p>Power supply health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Power supply [{#LOCATION}]: Power supply health is in unknown state |<p>Power supply health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",health])=3` |INFO | |
+|Power supply [{#LOCATION}]: Power supply has error status |<p>Power supply has error status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=2` |AVERAGE | |
+|Power supply [{#LOCATION}]: Power supply has warning status |<p>Power supply has warning status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=1` |WARNING | |
+|Power supply [{#LOCATION}]: Power supply has unknown status |<p>Power supply has unknown status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.power_supplies["{#DURABLE.ID}",status])=4` |INFO | |
+|Port [{#NAME}]: Port health is in degraded state |<p>Port health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=1` |WARNING | |
+|Port [{#NAME}]: Port health is in fault state |<p>Port health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=2` |AVERAGE | |
+|Port [{#NAME}]: Port health is in unknown state |<p>Port health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",health])=3` |INFO | |
+|Port [{#NAME}]: Port has error status |<p>Port has error status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=2` |AVERAGE | |
+|Port [{#NAME}]: Port has warning status |<p>Port has warning status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=1` |WARNING | |
+|Port [{#NAME}]: Port has unknown status |<p>Port has unknown status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.ports["{#NAME}",status])=4` |INFO | |
+|{#NAME} [{#LOCATION}]: Fan health is in degraded state |<p>Fan health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=1` |WARNING | |
+|{#NAME} [{#LOCATION}]: Fan health is in fault state |<p>Fan health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|{#NAME} [{#LOCATION}]: Fan health is in unknown state |<p>Fan health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",health])=3` |INFO | |
+|{#NAME} [{#LOCATION}]: Fan has error status |<p>Fan has error status.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=1` |AVERAGE | |
+|{#NAME} [{#LOCATION}]: Fan is missing |<p>Fan is missing.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=3` |INFO | |
+|{#NAME} [{#LOCATION}]: Fan is off |<p>Fan is off.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.fans["{#DURABLE.ID}",status])=2` |WARNING | |
+|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk health is in degraded state |<p>Disk health is in degraded state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=1` |WARNING | |
+|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk health is in fault state |<p>Disk health is in fault state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=2` |AVERAGE | |
+|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk health is in unknown state |<p>Disk health is in unknown state.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",health])=3` |INFO | |
+|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk temperature is high |<p>Disk temperature is high.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=3` |WARNING | |
+|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk temperature is critically high |<p>Disk temperature is critically high.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=2` |AVERAGE | |
+|Disk [{#ENCLOSURE.ID}.{#SLOT}]: Disk temperature is unknown |<p>Disk temperature is unknown.</p> |`last(/HPE MSA 2060 Storage by HTTP/hpe.msa.disks["{#DURABLE.ID}",temperature_status])=4` |INFO | |
+
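+The space triggers compare the 5-minute minimum of the utilization item against `{$HPE.PRIMERA.POOL.PUSED.MAX.WARN:"{#NAME}"}` / `{$HPE.PRIMERA.POOL.PUSED.MAX.CRIT:"{#NAME}"}`; when no per-pool context value is defined, Zabbix falls back to the plain macro default from the table above, and the warning trigger is suppressed while the critical one is active. The sketch below only mirrors that evaluation logic in Python with invented sample values; the real expressions are evaluated by the Zabbix server.
+
+```python
+# Invented last 5 minutes of hpe.msa.pools.space["A",util] values.
+util_history = [82.0, 83.5, 81.9, 84.2]
+
+# Template defaults plus a hypothetical per-pool override, mimicking context
+# macros such as {$HPE.PRIMERA.POOL.PUSED.MAX.WARN:"A"}.
+defaults = {"WARN": 80, "CRIT": 90}
+overrides = {("WARN", "A"): 85}
+
+def threshold(level, pool):
+    return overrides.get((level, pool), defaults[level])
+
+pool = "A"
+worst = min(util_history)  # min(/HPE MSA 2060 Storage by HTTP/...,5m)
+
+critical = worst > threshold("CRIT", pool)
+warning = not critical and worst > threshold("WARN", pool)  # WARN depends on CRIT
+print({"warning": warning, "critical": critical})
+```
+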
+## Feedback
+
+Please report any issues with the template at https://support.zabbix.com
+
+You can also provide feedback, discuss the template or ask for help with it at [ZABBIX forums](https://www.zabbix.com/forum/zabbix-suggestions-and-feedback).
+