# HPE Primera by HTTP

## Overview

For Zabbix version: 6.2 and higher  
This template monitors HPE Primera by HTTP.
It works without any external scripts and uses the Script item.

This template was tested on:

- HPE Primera, version 4.2.1.6

## Setup

> See [Zabbix template operation](https://www.zabbix.com/documentation/6.2/manual/config/templates_out_of_the_box/http) for basic instructions.

1. Create a user `zabbix` on the storage system with the `browse` role and enable it for all domains.
2. The WSAPI server does not start automatically.
   Log in to the CLI as `Super`, `Service`, or any role granted the `wsapi_set` right.
   Start the WSAPI server with the command `startwsapi`.
   To check the WSAPI state, use the command `showwsapi`.
3. Link the template to the host.
4. Configure the macros {$HPE.PRIMERA.API.USERNAME} and {$HPE.PRIMERA.API.PASSWORD}.

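Before linking the template, you can verify that WSAPI is reachable by requesting a session key yourself, which is essentially what the template's Script item does on each poll. A minimal sketch (the host name and credentials below are placeholders, and the endpoint path `/api/v1/credentials` is the standard WSAPI login route; adjust if your environment differs):

```python
import json
import ssl
import urllib.request

def build_login_request(scheme, host, port, user, password):
    """Build the WSAPI session-key request (POST /api/v1/credentials)."""
    url = f"{scheme}://{host}:{port}/api/v1/credentials"
    body = json.dumps({"user": user, "password": password}).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}, method="POST"
    )

def get_session_key(req, timeout=15):
    """Send the request and return the session key from the JSON response."""
    # Arrays frequently ship self-signed certificates, so skip verification
    # for this connectivity check only.
    ctx = ssl._create_unverified_context()
    with urllib.request.urlopen(req, timeout=timeout, context=ctx) as resp:
        return json.loads(resp.read())["key"]

if __name__ == "__main__":
    # Placeholder host and credentials; substitute your own.
    req = build_login_request("https", "primera.example.com", 443, "zabbix", "secret")
    print(get_session_key(req))
```

A successful call returns a session key, confirming that the user, WSAPI port, and scheme configured in the macros are correct.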
## Zabbix configuration

No specific Zabbix configuration is required.

### Macros used

|Name|Description|Default|
|----|-----------|-------|
|{$HPE.PRIMERA.API.PASSWORD} |<p>Password for WSAPI access.</p> |`` |
|{$HPE.PRIMERA.API.PORT} |<p>The WSAPI port.</p> |`443` |
|{$HPE.PRIMERA.API.SCHEME} |<p>The WSAPI scheme (http/https).</p> |`https` |
|{$HPE.PRIMERA.API.USERNAME} |<p>User name for WSAPI access.</p> |`zabbix` |
|{$HPE.PRIMERA.CPG.NAME.MATCHES} |<p>Filter to include discovered CPGs by name.</p> |`.*` |
|{$HPE.PRIMERA.CPG.NAME.NOT_MATCHES} |<p>Filter to exclude discovered CPGs by name.</p> |`CHANGE_IF_NEEDED` |
|{$HPE.PRIMERA.DATA.TIMEOUT} |<p>Response timeout for WSAPI.</p> |`15s` |
|{$HPE.PRIMERA.LLD.FILTER.TASK.NAME.MATCHES} |<p>Filter of discoverable tasks by name.</p> |`CHANGE_IF_NEEDED` |
|{$HPE.PRIMERA.LLD.FILTER.TASK.NAME.NOT_MATCHES} |<p>Filter to exclude discovered tasks by name.</p> |`.*` |
|{$HPE.PRIMERA.LLD.FILTER.TASK.TYPE.MATCHES} |<p>Filter of discoverable tasks by type.</p> |`.*` |
|{$HPE.PRIMERA.LLD.FILTER.TASK.TYPE.NOT_MATCHES} |<p>Filter to exclude discovered tasks by type.</p> |`CHANGE_IF_NEEDED` |
|{$HPE.PRIMERA.VOLUME.NAME.MATCHES} |<p>Filter to include discovered volumes by name.</p> |`.*` |
|{$HPE.PRIMERA.VOLUME.NAME.NOT_MATCHES} |<p>Filter to exclude discovered volumes by name.</p> |`^(admin|.srdata|.mgmtdata)$` |

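The MATCHES/NOT_MATCHES macro pairs feed the LLD filters below: an entity is discovered only when its name matches the MATCHES regex and does not match the NOT_MATCHES regex. Zabbix applies these as unanchored regular-expression matches, which the following sketch approximates with `re.search` using the default volume-name macros:

```python
import re

def keep(name, matches=".*", not_matches="^(admin|.srdata|.mgmtdata)$"):
    """Mimic a Zabbix LLD filter with AND logic: keep a volume only if its
    name matches the MATCHES regex and does not match NOT_MATCHES."""
    return bool(re.search(matches, name)) and not re.search(not_matches, name)

print(keep("datastore01"))  # True: discovered
print(keep("admin"))        # False: excluded by the default NOT_MATCHES
```

`CHANGE_IF_NEEDED` works as a placeholder: as a NOT_MATCHES value it excludes nothing in practice (no real name contains that literal), and as a MATCHES value it discovers nothing until you replace it.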
## Template links

There are no template links in this template.

## Discovery rules

|Name|Description|Type|Key and additional info|
|----|-----------|----|----|
|Common provisioning groups discovery |<p>List of CPG resources.</p> |DEPENDENT |hpe.primera.cpg.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Filter**:</p>AND <p>- {#NAME} MATCHES_REGEX `{$HPE.PRIMERA.CPG.NAME.MATCHES}`</p><p>- {#NAME} NOT_MATCHES_REGEX `{$HPE.PRIMERA.CPG.NAME.NOT_MATCHES}`</p> |
|Disks discovery |<p>List of physical disk resources.</p> |DEPENDENT |hpe.primera.disks.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|Hosts discovery |<p>List of host properties.</p> |DEPENDENT |hpe.primera.hosts.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.hosts.members`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Filter**:</p>AND <p>- {#NAME} EXISTS ``</p> |
|Ports discovery |<p>List of ports.</p> |DEPENDENT |hpe.primera.ports.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.ports.members`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Filter**:</p>AND <p>- {#TYPE} NOT_MATCHES_REGEX `3`</p> |
|Tasks discovery |<p>List of tasks started within last 24 hours.</p> |DEPENDENT |hpe.primera.tasks.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.tasks`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Filter**:</p>AND <p>- {#NAME} MATCHES_REGEX `{$HPE.PRIMERA.LLD.FILTER.TASK.NAME.MATCHES}`</p><p>- {#NAME} NOT_MATCHES_REGEX `{$HPE.PRIMERA.LLD.FILTER.TASK.NAME.NOT_MATCHES}`</p><p>- {#TYPE} MATCHES_REGEX `{$HPE.PRIMERA.LLD.FILTER.TASK.TYPE.MATCHES}`</p><p>- {#TYPE} NOT_MATCHES_REGEX `{$HPE.PRIMERA.LLD.FILTER.TASK.TYPE.NOT_MATCHES}`</p> |
|Volumes discovery |<p>List of storage volume resources.</p> |DEPENDENT |hpe.primera.volumes.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>**Filter**:</p>AND <p>- {#NAME} MATCHES_REGEX `{$HPE.PRIMERA.VOLUME.NAME.MATCHES}`</p><p>- {#NAME} NOT_MATCHES_REGEX `{$HPE.PRIMERA.VOLUME.NAME.NOT_MATCHES}`</p> |

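All discovery rules are dependent items: the master item fetches one JSON document, and each rule's JSONPATH preprocessing step cuts out the array it needs, whose object fields then populate LLD macros such as {#ID} and {#NAME}. A sketch of that reduction for the CPG rule, using a hypothetical fragment of the master item's output:

```python
import json

# Hypothetical fragment of the master item's output (hpe.primera.data.get).
raw = json.dumps({
    "cpgs": {"total": 2, "members": [
        {"id": 0, "name": "SSD_r6"},
        {"id": 1, "name": "FC_r6"},
    ]}
})

# The JSONPATH step `$.cpgs.members` reduces the payload to the list of
# CPG objects; each object's fields become LLD macros.
members = json.loads(raw)["cpgs"]["members"]
lld = [{"{#ID}": m["id"], "{#NAME}": m["name"]} for m in members]
print(lld)
```

The DISCARD_UNCHANGED_HEARTBEAT step then suppresses repeated identical discovery payloads, so the rule only re-evaluates prototypes when the member list actually changes (or at most every 6 hours).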
## Items collected

|Group|Name|Description|Type|Key and additional info|
|-----|----|-----------|----|---------------------|
|HPE |HPE Primera: Get data |<p>JSON with the results of WSAPI requests.</p> |SCRIPT |hpe.primera.data.get<p>**Expression**:</p>`The text is too long. Please see the template.` |
|HPE |HPE Primera: Get errors |<p>A list of errors from WSAPI requests.</p> |DEPENDENT |hpe.primera.data.errors<p>**Preprocessing**:</p><p>- JSONPATH: `$.errors`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |HPE Primera: Capacity allocated |<p>Allocated capacity in the system.</p> |DEPENDENT |hpe.primera.system.capacity.allocated<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.allocatedCapacityMiB`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |HPE Primera: Chunklet size |<p>Chunklet size.</p> |DEPENDENT |hpe.primera.system.chunklet.size<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.chunkletSizeMiB`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |HPE Primera: System contact |<p>Contact of the system.</p> |DEPENDENT |hpe.primera.system.contact<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.contact`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |HPE Primera: Capacity failed |<p>Failed capacity in the system.</p> |DEPENDENT |hpe.primera.system.capacity.failed<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.failedCapacityMiB`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |HPE Primera: Capacity free |<p>Free capacity in the system.</p> |DEPENDENT |hpe.primera.system.capacity.free<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.freeCapacityMiB`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |HPE Primera: System location |<p>Location of the system.</p> |DEPENDENT |hpe.primera.system.location<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.location`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |HPE Primera: Model |<p>System model.</p> |DEPENDENT |hpe.primera.system.model<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.model`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |HPE Primera: System name |<p>System name.</p> |DEPENDENT |hpe.primera.system.name<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.name`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|HPE |HPE Primera: Serial number |<p>System serial number.</p> |DEPENDENT |hpe.primera.system.serial_number<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.serialNumber`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |HPE Primera: Software version number |<p>Storage system software version number.</p> |DEPENDENT |hpe.primera.system.sw_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.systemVersion`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |HPE Primera: Capacity total |<p>Total capacity in the system.</p> |DEPENDENT |hpe.primera.system.capacity.total<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.totalCapacityMiB`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |HPE Primera: Nodes total |<p>Total number of nodes in the system.</p> |DEPENDENT |hpe.primera.system.nodes.total<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.totalNodes`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |HPE Primera: Nodes online |<p>Number of online nodes in the system.</p> |DEPENDENT |hpe.primera.system.nodes.online<p>**Preprocessing**:</p><p>- JSONPATH: `$.system.onlineNodes.length()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |HPE Primera: Disks total |<p>Number of physical disks.</p> |DEPENDENT |hpe.primera.disks.total<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.total`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |HPE Primera: Service ping |<p>Checks if the service is running and accepting TCP connections.</p> |SIMPLE |net.tcp.service["{$HPE.PRIMERA.API.SCHEME}","{HOST.CONN}","{$HPE.PRIMERA.API.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
|HPE |CPG [{#NAME}]: Degraded state |<p>Detailed state of the CPG:</p><p>LDS_NOT_STARTED (1) - LDs not started.</p><p>NOT_STARTED (2) - VV not started.</p><p>NEEDS_CHECK (3) - check for consistency.</p><p>NEEDS_MAINT_CHECK (4) - maintenance check is required.</p><p>INTERNAL_CONSISTENCY_ERROR (5) - internal consistency error.</p><p>SNAPDATA_INVALID (6) - invalid snapshot data.</p><p>PRESERVED (7) - unavailable LD sets due to missing chunklets. Preserved remaining VV data.</p><p>STALE (8) - parts of the VV contain old data because of a copy-on-write operation.</p><p>COPY_FAILED (9) - a promote or copy operation to this volume failed.</p><p>DEGRADED_AVAIL (10) - degraded due to availability.</p><p>DEGRADED_PERF (11) - degraded due to performance.</p><p>PROMOTING (12) - volume is the current target of a promote operation.</p><p>COPY_TARGET (13) - volume is the current target of a physical copy operation.</p><p>RESYNC_TARGET (14) - volume is the current target of a resynchronized copy operation.</p><p>TUNING (15) - volume tuning is in progress.</p><p>CLOSING (16) - volume is closing.</p><p>REMOVING (17) - removing the volume.</p><p>REMOVING_RETRY (18) - retrying a volume removal operation.</p><p>CREATING (19) - creating a volume.</p><p>COPY_SOURCE (20) - copy source.</p><p>IMPORTING (21) - importing a volume.</p><p>CONVERTING (22) - converting a volume.</p><p>INVALID (23) - invalid.</p><p>EXCLUSIVE (24) - local storage system has exclusive access to the volume.</p><p>CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set.</p><p>STANDBY (26) - volume in standby mode.</p><p>SD_META_INCONSISTENT (27) - SD Meta Inconsistent.</p><p>SD_NEEDS_FIX (28) - SD needs fix.</p><p>SD_META_FIXING (29) - SD meta fix.</p><p>UNKNOWN (999) - unknown state.</p><p>NOT_SUPPORTED_BY_WSAPI (1000) - state not supported by WSAPI.</p> |DEPENDENT |hpe.primera.cpg.state["{#ID}",degraded]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].degradedStates.first()`</p> |
|HPE |CPG [{#NAME}]: Failed state |<p>Detailed state of the CPG:</p><p>LDS_NOT_STARTED (1) - LDs not started.</p><p>NOT_STARTED (2) - VV not started.</p><p>NEEDS_CHECK (3) - check for consistency.</p><p>NEEDS_MAINT_CHECK (4) - maintenance check is required.</p><p>INTERNAL_CONSISTENCY_ERROR (5) - internal consistency error.</p><p>SNAPDATA_INVALID (6) - invalid snapshot data.</p><p>PRESERVED (7) - unavailable LD sets due to missing chunklets. Preserved remaining VV data.</p><p>STALE (8) - parts of the VV contain old data because of a copy-on-write operation.</p><p>COPY_FAILED (9) - a promote or copy operation to this volume failed.</p><p>DEGRADED_AVAIL (10) - degraded due to availability.</p><p>DEGRADED_PERF (11) - degraded due to performance.</p><p>PROMOTING (12) - volume is the current target of a promote operation.</p><p>COPY_TARGET (13) - volume is the current target of a physical copy operation.</p><p>RESYNC_TARGET (14) - volume is the current target of a resynchronized copy operation.</p><p>TUNING (15) - volume tuning is in progress.</p><p>CLOSING (16) - volume is closing.</p><p>REMOVING (17) - removing the volume.</p><p>REMOVING_RETRY (18) - retrying a volume removal operation.</p><p>CREATING (19) - creating a volume.</p><p>COPY_SOURCE (20) - copy source.</p><p>IMPORTING (21) - importing a volume.</p><p>CONVERTING (22) - converting a volume.</p><p>INVALID (23) - invalid.</p><p>EXCLUSIVE (24) - local storage system has exclusive access to the volume.</p><p>CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set.</p><p>STANDBY (26) - volume in standby mode.</p><p>SD_META_INCONSISTENT (27) - SD Meta Inconsistent.</p><p>SD_NEEDS_FIX (28) - SD needs fix.</p><p>SD_META_FIXING (29) - SD meta fix.</p><p>UNKNOWN (999) - unknown state.</p><p>NOT_SUPPORTED_BY_WSAPI (1000) - state not supported by WSAPI.</p> |DEPENDENT |hpe.primera.cpg.state["{#ID}",failed]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].failedStates.first()`</p><p>- JAVASCRIPT: `return JSON.stringify(JSON.parse(value));`</p> |
|HPE |CPG [{#NAME}]: CPG space: Free |<p>Free CPG space.</p> |DEPENDENT |hpe.primera.cpg.space["{#ID}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].freeSpaceMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: Number of FPVVs |<p>Number of FPVVs (Fully Provisioned Virtual Volumes) allocated in the CPG.</p> |DEPENDENT |hpe.primera.cpg.fpvv["{#ID}",count]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].numFPVVs.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |CPG [{#NAME}]: Number of TPVVs |<p>Number of TPVVs (Thinly Provisioned Virtual Volumes) allocated in the CPG.</p> |DEPENDENT |hpe.primera.cpg.tpvv["{#ID}",count]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].numTPVVs.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |CPG [{#NAME}]: Number of TDVVs |<p>Number of TDVVs (Thinly Deduplicated Virtual Volume) created in the CPG.</p> |DEPENDENT |hpe.primera.cpg.tdvv["{#ID}",count]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].numTDVVs.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |CPG [{#NAME}]: Raw space: Free |<p>Raw free space.</p> |DEPENDENT |hpe.primera.cpg.space.raw["{#ID}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].rawFreeSpaceMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: Raw space: Shared |<p>Raw shared space.</p> |DEPENDENT |hpe.primera.cpg.space.raw["{#ID}",shared]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].rawSharedSpaceMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: Raw space: Total |<p>Raw total space.</p> |DEPENDENT |hpe.primera.cpg.space.raw["{#ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].rawTotalSpaceMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: CPG space: Shared |<p>Shared CPG space.</p> |DEPENDENT |hpe.primera.cpg.space["{#ID}",shared]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].sharedSpaceMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: State |<p>Overall state of the CPG:</p><p>NORMAL (1) - normal operation;</p><p>DEGRADED (2) - degraded state;</p><p>FAILED (3) - abnormal operation;</p><p>UNKNOWN (99) - unknown state.</p> |DEPENDENT |hpe.primera.cpg.state["{#ID}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].state.first()`</p> |
|HPE |CPG [{#NAME}]: Logical disk space: Snapshot administration: Total (raw) |<p>Total physical (raw) logical disk space in snapshot administration.</p> |DEPENDENT |hpe.primera.cpg.space.sa["{#ID}",raw_total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SAUsage.rawTotalMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: Logical disk space: Snapshot data: Total (raw) |<p>Total physical (raw) logical disk space in snapshot data space.</p> |DEPENDENT |hpe.primera.cpg.space.sd["{#ID}",raw_total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SDUsage.rawTotalMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: Logical disk space: User space: Total (raw) |<p>Total physical (raw) logical disk space in user data space.</p> |DEPENDENT |hpe.primera.cpg.space.usr["{#ID}",raw_total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].UsrUsage.rawTotalMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: Logical disk space: Snapshot administration: Total |<p>Total logical disk space in snapshot administration.</p> |DEPENDENT |hpe.primera.cpg.space.sa["{#ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SAUsage.totalMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: Logical disk space: Snapshot data: Total |<p>Total logical disk space in snapshot data space.</p> |DEPENDENT |hpe.primera.cpg.space.sd["{#ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SDUsage.totalMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: Logical disk space: User space: Total |<p>Total logical disk space in user data space.</p> |DEPENDENT |hpe.primera.cpg.space.usr["{#ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].UsrUsage.totalMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: CPG space: Total |<p>Total CPG space.</p> |DEPENDENT |hpe.primera.cpg.space["{#ID}",total]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].totalSpaceMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: Logical disk space: Snapshot administration: Used (raw) |<p>Amount of physical (raw) logical disk used in snapshot administration.</p> |DEPENDENT |hpe.primera.cpg.space.sa["{#ID}",raw_used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SAUsage.rawUsedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: Logical disk space: Snapshot data: Used (raw) |<p>Amount of physical (raw) logical disk used in snapshot data space.</p> |DEPENDENT |hpe.primera.cpg.space.sd["{#ID}",raw_used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SDUsage.rawUsedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: Logical disk space: User space: Used (raw) |<p>Amount of physical (raw) logical disk used in user data space.</p> |DEPENDENT |hpe.primera.cpg.space.usr["{#ID}",raw_used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].UsrUsage.rawUsedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: Logical disk space: Snapshot administration: Used |<p>Amount of logical disk used in snapshot administration.</p> |DEPENDENT |hpe.primera.cpg.space.sa["{#ID}",used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SAUsage.usedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: Logical disk space: Snapshot data: Used |<p>Amount of logical disk used in snapshot data space.</p> |DEPENDENT |hpe.primera.cpg.space.sd["{#ID}",used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].SDUsage.usedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |CPG [{#NAME}]: Logical disk space: User space: Used |<p>Amount of logical disk used in user data space.</p> |DEPENDENT |hpe.primera.cpg.space.usr["{#ID}",used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpgs.members[?(@.id == "{#ID}")].UsrUsage.usedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Disk [{#POSITION}]: Firmware version |<p>Physical disk firmware version.</p> |DEPENDENT |hpe.primera.disk["{#ID}",fw_version]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].fwVersion.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Disk [{#POSITION}]: Free size |<p>Physical disk free size.</p> |DEPENDENT |hpe.primera.disk["{#ID}",free_size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].freeSizeMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Disk [{#POSITION}]: Manufacturer |<p>Physical disk manufacturer.</p> |DEPENDENT |hpe.primera.disk["{#ID}",manufacturer]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].manufacturer.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Disk [{#POSITION}]: Model |<p>Manufacturer's device ID for disk.</p> |DEPENDENT |hpe.primera.disk["{#ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].model.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Disk [{#POSITION}]: Path A0 degraded |<p>Indicates if this is a degraded path for the disk.</p> |DEPENDENT |hpe.primera.disk["{#ID}",loop_a0_degraded]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].loopA0.degraded.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- BOOL_TO_DECIMAL</p> |
|HPE |Disk [{#POSITION}]: Path A1 degraded |<p>Indicates if this is a degraded path for the disk.</p> |DEPENDENT |hpe.primera.disk["{#ID}",loop_a1_degraded]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].loopA1.degraded.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- BOOL_TO_DECIMAL</p> |
|HPE |Disk [{#POSITION}]: Path B0 degraded |<p>Indicates if this is a degraded path for the disk.</p> |DEPENDENT |hpe.primera.disk["{#ID}",loop_b0_degraded]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].loopB0.degraded.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- BOOL_TO_DECIMAL</p> |
|HPE |Disk [{#POSITION}]: Path B1 degraded |<p>Indicates if this is a degraded path for the disk.</p> |DEPENDENT |hpe.primera.disk["{#ID}",loop_b1_degraded]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].loopB1.degraded.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- BOOL_TO_DECIMAL</p> |
|HPE |Disk [{#POSITION}]: RPM |<p>RPM of the physical disk.</p> |DEPENDENT |hpe.primera.disk["{#ID}",rpm]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].RPM.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Disk [{#POSITION}]: Serial number |<p>Disk drive serial number.</p> |DEPENDENT |hpe.primera.disk["{#ID}",serial_number]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].serialNumber.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Disk [{#POSITION}]: State |<p>State of the physical disk:</p><p>Normal (1) - physical disk is in Normal state;</p><p>Degraded (2) - physical disk is not operating normally;</p><p>New (3) - physical disk is new, needs to be admitted;</p><p>Failed (4) - physical disk has failed;</p><p>Unknown (99) - physical disk state is unknown.</p> |DEPENDENT |hpe.primera.disk["{#ID}",state]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].state.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 99`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Disk [{#POSITION}]: Total size |<p>Physical disk total size.</p> |DEPENDENT |hpe.primera.disk["{#ID}",total_size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.disks.members[?(@.id == "{#ID}")].totalSizeMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Host [{#NAME}]: Comment |<p>Additional information for the host.</p> |DEPENDENT |hpe.primera.host["{#ID}",comment]<p>**Preprocessing**:</p><p>- JSONPATH: `$.hosts.members[?(@.id == "{#ID}")].descriptors.comment.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Host [{#NAME}]: Contact |<p>The host's owner and contact.</p> |DEPENDENT |hpe.primera.host["{#ID}",contact]<p>**Preprocessing**:</p><p>- JSONPATH: `$.hosts.members[?(@.id == "{#ID}")].descriptors.contact.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Host [{#NAME}]: IP address |<p>The host's IP address.</p> |DEPENDENT |hpe.primera.host["{#ID}",ipaddress]<p>**Preprocessing**:</p><p>- JSONPATH: `$.hosts.members[?(@.id == "{#ID}")].descriptors.IPAddr.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Host [{#NAME}]: Location |<p>The host's location.</p> |DEPENDENT |hpe.primera.host["{#ID}",location]<p>**Preprocessing**:</p><p>- JSONPATH: `$.hosts.members[?(@.id == "{#ID}")].descriptors.location.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Host [{#NAME}]: Model |<p>The host's model.</p> |DEPENDENT |hpe.primera.host["{#ID}",model]<p>**Preprocessing**:</p><p>- JSONPATH: `$.hosts.members[?(@.id == "{#ID}")].descriptors.model.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Host [{#NAME}]: OS |<p>The operating system running on the host.</p> |DEPENDENT |hpe.primera.host["{#ID}",os]<p>**Preprocessing**:</p><p>- JSONPATH: `$.hosts.members[?(@.id == "{#ID}")].descriptors.os.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|HPE |Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Failover state |<p>The state of the failover operation, shown for the two ports indicated in the N:S:P and Partner columns. The value can be one of the following:</p><p>none (1) - no failover in operation;</p><p>failover_pending (2) - in the process of failing over to partner;</p><p>failed_over (3) - failed over to partner;</p><p>active (4) - the partner port is failed over to this port;</p><p>active_down (5) - the partner port is failed over to this port, but this port is down;</p><p>active_failed (6) - the partner port is failed over to this port, but this port is down;</p><p>failback_pending (7) - in the process of failing back from partner.</p> |DEPENDENT |hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",failover_state]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ports.members[?(@.portPos.node == "{#NODE}" && @.portPos.slot == "{#SLOT}" && @.portPos.cardPort == "{#CARD.PORT}")].failoverState.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Link state |<p>Port link state:</p><p>CONFIG_WAIT (1) - configuration wait;</p><p>ALPA_WAIT (2) - ALPA wait;</p><p>LOGIN_WAIT (3) - login wait;</p><p>READY (4) - link is ready;</p><p>LOSS_SYNC (5) - link is loss sync;</p><p>ERROR_STATE (6) - in error state;</p><p>XXX (7) - xxx;</p><p>NONPARTICIPATE (8) - link did not participate;</p><p>COREDUMP (9) - taking coredump;</p><p>OFFLINE (10) - link is offline;</p><p>FWDEAD (11) - firmware is dead;</p><p>IDLE_FOR_RESET (12) - link is idle for reset;</p><p>DHCP_IN_PROGRESS (13) - DHCP is in progress;</p><p>PENDING_RESET (14) - link reset is pending;</p><p>NEW (15) - link in new. This value is applicable for only virtual ports;</p><p>DISABLED (16) - link in disabled. This value is applicable for only virtual ports;</p><p>DOWN (17) - link in down. This value is applicable for only virtual ports;</p><p>FAILED (18) - link in failed. This value is applicable for only virtual ports;</p><p>PURGING (19) - link in purging. This value is applicable for only virtual ports.</p> |DEPENDENT |hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ports.members[?(@.portPos.node == "{#NODE}" && @.portPos.slot == "{#SLOT}" && @.portPos.cardPort == "{#CARD.PORT}")].linkState.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Type |<p>Port connection type:</p><p>HOST (1) - FC port connected to hosts or fabric;</p><p>DISK (2) - FC port connected to disks;</p><p>FREE (3) - port is not connected to hosts or disks;</p><p>IPORT (4) - port is in iport mode;</p><p>RCFC (5) - FC port used for remote copy;</p><p>PEER (6) - FC port used for data migration;</p><p>RCIP (7) - IP (Ethernet) port used for remote copy;</p><p>ISCSI (8) - iSCSI (Ethernet) port connected to hosts;</p><p>CNA (9) - CNA port, which can be FCoE or iSCSI;</p><p>FS (10) - Ethernet File Persona ports.</p> |DEPENDENT |hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ports.members[?(@.portPos.node == "{#NODE}" && @.portPos.slot == "{#SLOT}" && @.portPos.cardPort == "{#CARD.PORT}")].type.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Hardware type |<p>Hardware type:</p><p>FC (1) - Fibre channel HBA;</p><p>ETH (2) - Ethernet NIC;</p><p>iSCSI (3) - iSCSI HBA;</p><p>CNA (4) - Converged network adapter;</p><p>SAS (5) - SAS HBA;</p><p>COMBO (6) - Combo card;</p><p>NVME (7) - NVMe drive;</p><p>UNKNOWN (99) - unknown hardware type.</p> |DEPENDENT |hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",hw_type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ports.members[?(@.portPos.node == "{#NODE}" && @.portPos.slot == "{#SLOT}" && @.portPos.cardPort == "{#CARD.PORT}")].hardwareType.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Task [{#NAME}]: Finish time |<p>Task finish time.</p> |DEPENDENT |hpe.primera.task["{#ID}",finish_time]<p>**Preprocessing**:</p><p>- JSONPATH: `$.tasks[?(@.id == "{#ID}")].finishTime.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>- NOT_MATCHES_REGEX: `^-$`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- JAVASCRIPT: `The text is too long. Please see the template.`</p> |
|HPE |Task [{#NAME}]: Start time |<p>Task start time.</p> |DEPENDENT |hpe.primera.task["{#ID}",start_time]<p>**Preprocessing**:</p><p>- JSONPATH: `$.tasks[?(@.id == "{#ID}")].startTime.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p><p>- JAVASCRIPT: `The text is too long. Please see the template.`</p> |
|HPE |Task [{#NAME}]: Status |<p>Task status:</p><p>DONE (1) - task is finished;</p><p>ACTIVE (2) - task is in progress;</p><p>CANCELLED (3) - task is canceled;</p><p>FAILED (4) - task failed.</p> |DEPENDENT |hpe.primera.task["{#ID}",status]<p>**Preprocessing**:</p><p>- JSONPATH: `$.tasks[?(@.id == "{#ID}")].status.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
|HPE |Task [{#NAME}]: Type |<p>Task type:</p><p>VV_COPY (1) - track the physical copy operations;</p><p>PHYS_COPY_RESYNC (2) - track physical copy resynchronization operations;</p><p>MOVE_REGIONS (3) - track region move operations;</p><p>PROMOTE_SV (4) - track virtual-copy promotions;</p><p>REMOTE_COPY_SYNC (5) - track remote copy group synchronizations;</p><p>REMOTE_COPY_REVERSE (6) - track the reversal of a remote copy group;</p><p>REMOTE_COPY_FAILOVER (7) - track the change-over of a secondary volume group to a primary volume group;</p><p>REMOTE_COPY_RECOVER (8) - track synchronization start after a failover operation from the original secondary cluster to the original primary cluster;</p><p>REMOTE_COPY_RESTORE (9) - tracks the restoration process for groups that have already been recovered;</p><p>COMPACT_CPG (10) - track space consolidation in CPGs;</p><p>COMPACT_IDS (11) - track space consolidation in logical disks;</p><p>SNAPSHOT_ACCOUNTING (12) - track progress of snapshot space usage accounting;</p><p>CHECK_VV (13) - track the progress of the check-volume operation;</p><p>SCHEDULED_TASK (14) - track tasks that have been executed by the system scheduler;</p><p>SYSTEM_TASK (15) - track tasks that are periodically run by the storage system;</p><p>BACKGROUND_TASK (16) - track commands started using the starttask command;</p><p>IMPORT_VV (17) - track tasks that migrate data to the local storage system;</p><p>ONLINE_COPY (18) - track physical copy of the volume while online (createvvcopy -online command);</p><p>CONVERT_VV (19) - track tasks that convert a volume from an FPVV to a TPVV, and the reverse;</p><p>BACKGROUND_COMMAND (20) - track background command tasks;</p><p>CLX_SYNC (21) - track CLX synchronization tasks;</p><p>CLX_RECOVERY (22) - track CLX recovery tasks;</p><p>TUNE_SD (23) - tune copy space;</p><p>TUNE_VV (24) - tune virtual volume;</p><p>TUNE_VV_ROLLBACK (25) - tune virtual volume rollback;</p><p>TUNE_VV_RESTART (26) - tune virtual volume restart;</p><p>SYSTEM_TUNING (27) - system tuning;</p><p>NODE_RESCUE (28) - node rescue;</p><p>REPAIR_SYNC (29) - remote copy repair sync;</p><p>REMOTE_COPY_SWOVER (30) - remote copy switchover;</p><p>DEFRAGMENTATION (31) - defragmentation;</p><p>ENCRYPTION_CHANGE (32) - encryption change;</p><p>REMOTE_COPY_FAILSAFE (33) - remote copy failsafe;</p><p>TUNE_TPVV (34) - tune thin virtual volume;</p><p>REMOTE_COPY_CHG_MODE (35) - remote copy change mode;</p><p>ONLINE_PROMOTE (37) - online promote snap;</p><p>RELOCATE_PD (38) - relocate PD;</p><p>PERIODIC_CSS (39) - remote copy periodic CSS;</p><p>TUNEVV_LARGE (40) - tune large virtual volume;</p><p>SD_META_FIXER (41) - compression SD meta fixer;</p><p>DEDUP_DRYRUN (42) - preview dedup ratio;</p><p>COMPR_DRYRUN (43) - compression estimation;</p><p>DEDUP_COMPR_DRYRUN (44) - compression and dedup estimation;</p><p>UNKNOWN (99) - unknown task type.</p> |DEPENDENT |hpe.primera.task["{#ID}",type]<p>**Preprocessing**:</p><p>- JSONPATH: `$.tasks[?(@.id == "{#ID}")].type.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|HPE |Volume [{#NAME}]: Administrative space: Free |<p>Free administrative space.</p> |DEPENDENT |hpe.primera.volume.space.admin["{#ID}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].adminSpace.freeMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Volume [{#NAME}]: Administrative space: Raw reserved |<p>Raw reserved administrative space.</p> |DEPENDENT |hpe.primera.volume.space.admin["{#ID}",raw_reserved]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].adminSpace.rawReservedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Volume [{#NAME}]: Administrative space: Reserved |<p>Reserved administrative space.</p> |DEPENDENT |hpe.primera.volume.space.admin["{#ID}",reserved]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].adminSpace.reservedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Volume [{#NAME}]: Administrative space: Used |<p>Used administrative space.</p> |DEPENDENT |hpe.primera.volume.space.admin["{#ID}",used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].adminSpace.usedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Volume [{#NAME}]: Compaction ratio |<p>The compaction ratio indicates the overall amount of storage space saved with thin technology.</p> |DEPENDENT |hpe.primera.volume.capacity.efficiency["{#ID}",compaction]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].capacityEfficiency.compaction.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Volume [{#NAME}]: Compression state |<p>Volume compression state:</p><p>YES (1) - compression is enabled on the volume;</p><p>NO (2) - compression is disabled on the volume;</p><p>OFF (3) - compression is turned off;</p><p>NA (4) - compression is not available on the volume.</p> |DEPENDENT |hpe.primera.volume.state["{#ID}",compression]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].compressionState.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|HPE |Volume [{#NAME}]: Deduplication state |<p>Volume deduplication state:</p><p>YES (1) - deduplication is enabled on the volume;</p><p>NO (2) - deduplication is disabled on the volume;</p><p>NA (3) - deduplication is not available;</p><p>OFF (4) - deduplication is turned off.</p> |DEPENDENT |hpe.primera.volume.state["{#ID}",deduplication]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].deduplicationState.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|HPE |Volume [{#NAME}]: Degraded state |<p>Volume detailed state:</p><p>LDS_NOT_STARTED (1) - LDs not started.</p><p>NOT_STARTED (2) - VV not started.</p><p>NEEDS_CHECK (3) - check for consistency.</p><p>NEEDS_MAINT_CHECK (4) - maintenance check is required.</p><p>INTERNAL_CONSISTENCY_ERROR (5) - internal consistency error.</p><p>SNAPDATA_INVALID (6) - invalid snapshot data.</p><p>PRESERVED (7) - unavailable LD sets due to missing chunklets. Preserved remaining VV data.</p><p>STALE (8) - parts of the VV contain old data because of a copy-on-write operation.</p><p>COPY_FAILED (9) - a promote or copy operation to this volume failed.</p><p>DEGRADED_AVAIL (10) - degraded due to availability.</p><p>DEGRADED_PERF (11) - degraded due to performance.</p><p>PROMOTING (12) - volume is the current target of a promote operation.</p><p>COPY_TARGET (13) - volume is the current target of a physical copy operation.</p><p>RESYNC_TARGET (14) - volume is the current target of a resynchronized copy operation.</p><p>TUNING (15) - volume tuning is in progress.</p><p>CLOSING (16) - volume is closing.</p><p>REMOVING (17) - removing the volume.</p><p>REMOVING_RETRY (18) - retrying a volume removal operation.</p><p>CREATING (19) - creating a volume.</p><p>COPY_SOURCE (20) - copy source.</p><p>IMPORTING (21) - importing a volume.</p><p>CONVERTING (22) - converting a volume.</p><p>INVALID (23) - invalid.</p><p>EXCLUSIVE (24) - local storage system has exclusive access to the volume.</p><p>CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set.</p><p>STANDBY (26) - volume in standby mode.</p><p>SD_META_INCONSISTENT (27) - SD Meta Inconsistent.</p><p>SD_NEEDS_FIX (28) - SD needs fix.</p><p>SD_META_FIXING (29) - SD meta fix.</p><p>UNKNOWN (999) - unknown state.</p><p>NOT_SUPPORTED_BY_WSAPI (1000) - state not supported by WSAPI.</p> |DEPENDENT |hpe.primera.volume.state["{#ID}",degraded]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].degradedStates.first()`</p> |
|HPE |Volume [{#NAME}]: Failed state |<p>Volume detailed state:</p><p>LDS_NOT_STARTED (1) - LDs not started.</p><p>NOT_STARTED (2) - VV not started.</p><p>NEEDS_CHECK (3) - check for consistency.</p><p>NEEDS_MAINT_CHECK (4) - maintenance check is required.</p><p>INTERNAL_CONSISTENCY_ERROR (5) - internal consistency error.</p><p>SNAPDATA_INVALID (6) - invalid snapshot data.</p><p>PRESERVED (7) - unavailable LD sets due to missing chunklets. Preserved remaining VV data.</p><p>STALE (8) - parts of the VV contain old data because of a copy-on-write operation.</p><p>COPY_FAILED (9) - a promote or copy operation to this volume failed.</p><p>DEGRADED_AVAIL (10) - degraded due to availability.</p><p>DEGRADED_PERF (11) - degraded due to performance.</p><p>PROMOTING (12) - volume is the current target of a promote operation.</p><p>COPY_TARGET (13) - volume is the current target of a physical copy operation.</p><p>RESYNC_TARGET (14) - volume is the current target of a resynchronized copy operation.</p><p>TUNING (15) - volume tuning is in progress.</p><p>CLOSING (16) - volume is closing.</p><p>REMOVING (17) - removing the volume.</p><p>REMOVING_RETRY (18) - retrying a volume removal operation.</p><p>CREATING (19) - creating a volume.</p><p>COPY_SOURCE (20) - copy source.</p><p>IMPORTING (21) - importing a volume.</p><p>CONVERTING (22) - converting a volume.</p><p>INVALID (23) - invalid.</p><p>EXCLUSIVE (24) - local storage system has exclusive access to the volume.</p><p>CONSISTENT (25) - volume is being imported consistently along with other volumes in the VV set.</p><p>STANDBY (26) - volume in standby mode.</p><p>SD_META_INCONSISTENT (27) - SD Meta Inconsistent.</p><p>SD_NEEDS_FIX (28) - SD needs fix.</p><p>SD_META_FIXING (29) - SD meta fix.</p><p>UNKNOWN (999) - unknown state.</p><p>NOT_SUPPORTED_BY_WSAPI (1000) - state not supported by WSAPI.</p> |DEPENDENT |hpe.primera.volume.state["{#ID}",failed]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].failedStates.first()`</p><p>- JAVASCRIPT: `return JSON.stringify(JSON.parse(value));`</p> |
|HPE |Volume [{#NAME}]: Overprovisioning ratio |<p>Overprovisioning capacity efficiency ratio.</p> |DEPENDENT |hpe.primera.volume.capacity.efficiency["{#ID}",overprovisioning]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].capacityEfficiency.overProvisioning.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Volume [{#NAME}]: Remote copy status |<p>Remote copy status of the volume:</p><p>NONE (1) - volume is not associated with remote copy;</p><p>PRIMARY (2) - volume is the primary copy;</p><p>SECONDARY (3) - volume is the secondary copy;</p><p>SNAP (4) - volume is the remote copy snapshot;</p><p>SYNC (5) - volume is a remote copy snapshot being used for synchronization;</p><p>DELETE (6) - volume is a remote copy snapshot that is marked for deletion;</p><p>UNKNOWN (99) - remote copy status is unknown for this volume.</p> |DEPENDENT |hpe.primera.volume.status["{#ID}",rcopy]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].rcopyStatus.first()`</p> |
|HPE |Volume [{#NAME}]: Snapshot space: Free |<p>Free snapshot space.</p> |DEPENDENT |hpe.primera.volume.space.snapshot["{#ID}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].snapshotSpace.freeMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Volume [{#NAME}]: Snapshot space: Raw reserved |<p>Raw reserved snapshot space.</p> |DEPENDENT |hpe.primera.volume.space.snapshot["{#ID}",raw_reserved]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].snapshotSpace.rawReservedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Volume [{#NAME}]: Snapshot space: Reserved |<p>Reserved snapshot space.</p> |DEPENDENT |hpe.primera.volume.space.snapshot["{#ID}",reserved]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].snapshotSpace.reservedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Volume [{#NAME}]: Snapshot space: Used |<p>Used snapshot space.</p> |DEPENDENT |hpe.primera.volume.space.snapshot["{#ID}",used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].snapshotSpace.usedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Volume [{#NAME}]: State |<p>State of the volume:</p><p>NORMAL (1) - normal operation;</p><p>DEGRADED (2) - degraded state;</p><p>FAILED (3) - abnormal operation;</p><p>UNKNOWN (99) - unknown state.</p> |DEPENDENT |hpe.primera.volume.state["{#ID}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].state.first()`</p> |
|HPE |Volume [{#NAME}]: Storage space saved using compression |<p>Indicates the amount of storage space saved using compression.</p> |DEPENDENT |hpe.primera.volume.capacity.efficiency["{#ID}",compression]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].capacityEfficiency.compression.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Volume [{#NAME}]: Storage space saved using deduplication |<p>Indicates the amount of storage space saved using deduplication.</p> |DEPENDENT |hpe.primera.volume.capacity.efficiency["{#ID}",deduplication]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].capacityEfficiency.deduplication.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Volume [{#NAME}]: Storage space saved using deduplication and compression |<p>Indicates the amount of storage space saved using deduplication and compression together.</p> |DEPENDENT |hpe.primera.volume.capacity.efficiency["{#ID}",reduction]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].capacityEfficiency.dataReduction.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|HPE |Volume [{#NAME}]: Total reserved space |<p>Total reserved space.</p> |DEPENDENT |hpe.primera.volume.space.total["{#ID}",reserved]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].totalReservedMiB.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Volume [{#NAME}]: Total space |<p>Virtual size of volume.</p> |DEPENDENT |hpe.primera.volume.space.total["{#ID}",size]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].sizeMiB.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Volume [{#NAME}]: Total used space |<p>Total used space. Sum of used user space and used snapshot space.</p> |DEPENDENT |hpe.primera.volume.space.total["{#ID}",used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].totalUsedMiB.first()`</p><p>⛔️ON_FAIL: `DISCARD_VALUE -> `</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Volume [{#NAME}]: User space: Free |<p>Free user space.</p> |DEPENDENT |hpe.primera.volume.space.user["{#ID}",free]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].userSpace.freeMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Volume [{#NAME}]: User space: Raw reserved |<p>Raw reserved user space.</p> |DEPENDENT |hpe.primera.volume.space.user["{#ID}",raw_reserved]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].userSpace.rawReservedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Volume [{#NAME}]: User space: Reserved |<p>Reserved user space.</p> |DEPENDENT |hpe.primera.volume.space.user["{#ID}",reserved]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].userSpace.reservedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `12h`</p><p>- MULTIPLIER: `1048576`</p> |
|HPE |Volume [{#NAME}]: User space: Used |<p>Used user space.</p> |DEPENDENT |hpe.primera.volume.space.user["{#ID}",used]<p>**Preprocessing**:</p><p>- JSONPATH: `$.volumes.members[?(@.id == "{#ID}")].userSpace.usedMiB.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p><p>- MULTIPLIER: `1048576`</p> |
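
Most of the volume-space items above share the same dependent-item preprocessing chain: a JSONPath filter selects the member whose `id` matches `{#ID}`, `first()` unwraps the single match, and a `MULTIPLIER` of `1048576` converts the value reported by WSAPI from MiB to bytes. A minimal JavaScript sketch of that chain follows; the sample payload and function name are illustrative, not part of the template or a real WSAPI response:

```javascript
// Illustrative sample payload shaped like the template's JSONPath expects.
var raw = JSON.stringify({
    volumes: {
        members: [
            { id: 1, name: "vol01", userSpace: { freeMiB: 512, usedMiB: 1536 } },
            { id: 2, name: "vol02", userSpace: { freeMiB: 1024, usedMiB: 0 } }
        ]
    }
});

// Rough equivalent of:
//   JSONPATH: $.volumes.members[?(@.id == "{#ID}")].userSpace.usedMiB.first()
//   MULTIPLIER: 1048576  (MiB -> bytes)
function volumeUsedBytes(payload, id) {
    var members = JSON.parse(payload).volumes.members;
    var match = members.filter(function (m) { return m.id === id; })[0];
    return match.userSpace.usedMiB * 1048576;
}

console.log(volumeUsedBytes(raw, 1)); // 1610612736 (1536 MiB in bytes)
```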

## Triggers

|Name|Description|Expression|Severity|Dependencies and additional info|
|----|-----------|----|----|----|
|HPE Primera: There are errors in requests to WSAPI |<p>Zabbix has received errors in requests to WSAPI.</p> |`length(last(/HPE Primera by HTTP/hpe.primera.data.errors))>0` |AVERAGE |<p>**Depends on**:</p><p>- HPE Primera: Service is unavailable</p> |
|HPE Primera: Service is unavailable |<p>-</p> |`max(/HPE Primera by HTTP/net.tcp.service["{$HPE.PRIMERA.API.SCHEME}","{HOST.CONN}","{$HPE.PRIMERA.API.PORT}"],5m)=0` |HIGH |<p>Manual close: YES</p> |
|CPG [{#NAME}]: Degraded |<p>CPG [{#NAME}] is in degraded state.</p> |`last(/HPE Primera by HTTP/hpe.primera.cpg.state["{#ID}"])=2` |AVERAGE | |
|CPG [{#NAME}]: Failed |<p>CPG [{#NAME}] is in failed state.</p> |`last(/HPE Primera by HTTP/hpe.primera.cpg.state["{#ID}"])=3` |HIGH | |
|Disk [{#POSITION}]: Path A0 degraded |<p>Disk [{#POSITION}] path A0 is in degraded state.</p> |`last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_a0_degraded])=1` |AVERAGE | |
|Disk [{#POSITION}]: Path A1 degraded |<p>Disk [{#POSITION}] path A1 is in degraded state.</p> |`last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_a1_degraded])=1` |AVERAGE | |
|Disk [{#POSITION}]: Path B0 degraded |<p>Disk [{#POSITION}] path B0 is in degraded state.</p> |`last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_b0_degraded])=1` |AVERAGE | |
|Disk [{#POSITION}]: Path B1 degraded |<p>Disk [{#POSITION}] path B1 is in degraded state.</p> |`last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",loop_b1_degraded])=1` |AVERAGE | |
|Disk [{#POSITION}]: Degraded |<p>Disk [{#POSITION}] is in degraded state.</p> |`last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",state])=2` |AVERAGE | |
|Disk [{#POSITION}]: Failed |<p>Disk [{#POSITION}] is in failed state.</p> |`last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",state])=3` |HIGH | |
|Disk [{#POSITION}]: Unknown issue |<p>Disk [{#POSITION}] is in unknown state.</p> |`last(/HPE Primera by HTTP/hpe.primera.disk["{#ID}",state])=99` |INFO | |
|Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Failover state is {ITEM.VALUE1} |<p>Port [{#NODE}:{#SLOT}:{#CARD.PORT}] has failover error.</p> |`last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",failover_state])<>1 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",failover_state])<>4` |AVERAGE | |
|Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Link state is {ITEM.VALUE1} |<p>Port [{#NODE}:{#SLOT}:{#CARD.PORT}] is not in ready state.</p> |`last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>4 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>1 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>3 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>13 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>15 and last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])<>16` |HIGH | |
|Port [{#NODE}:{#SLOT}:{#CARD.PORT}]: Link state is {ITEM.VALUE1} |<p>Port [{#NODE}:{#SLOT}:{#CARD.PORT}] is not in ready state.</p> |`last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=1 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=3 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=13 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=15 or last(/HPE Primera by HTTP/hpe.primera.port["{#NODE}:{#SLOT}:{#CARD.PORT}",link_state])=16` |AVERAGE | |
|Task [{#NAME}]: Cancelled |<p>Task [{#NAME}] is cancelled.</p> |`last(/HPE Primera by HTTP/hpe.primera.task["{#ID}",status])=3` |INFO | |
|Task [{#NAME}]: Failed |<p>Task [{#NAME}] has failed.</p> |`last(/HPE Primera by HTTP/hpe.primera.task["{#ID}",status])=4` |AVERAGE | |
|Volume [{#NAME}]: Degraded |<p>Volume [{#NAME}] is in degraded state.</p> |`last(/HPE Primera by HTTP/hpe.primera.volume.state["{#ID}"])=2` |AVERAGE | |
|Volume [{#NAME}]: Failed |<p>Volume [{#NAME}] is in failed state.</p> |`last(/HPE Primera by HTTP/hpe.primera.volume.state["{#ID}"])=3` |HIGH | |
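
The two "Link state" triggers above encode a three-way severity split: state `4` (ready) is healthy, states `1`, `3`, `13`, `15`, and `16` raise an AVERAGE problem, and any other state raises HIGH. A compact sketch of that logic, with illustrative function and constant names (the state values come from the trigger expressions, not from this sketch):

```javascript
// States that the AVERAGE-severity trigger matches explicitly.
var AVERAGE_STATES = [1, 3, 13, 15, 16];

// Map a port link-state code to the severity the triggers would raise.
function linkStateSeverity(state) {
    if (state === 4) { return "OK"; }          // ready: no trigger fires
    if (AVERAGE_STATES.indexOf(state) !== -1) { return "AVERAGE"; }
    return "HIGH";                             // any other state
}

console.log(linkStateSeverity(4));  // OK
console.log(linkStateSeverity(13)); // AVERAGE
console.log(linkStateSeverity(5));  // HIGH
```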

## Feedback

Please report any issues with the template at https://support.zabbix.com

You can also provide feedback, discuss the template or ask for help with it at [ZABBIX forums](https://www.zabbix.com/forum/zabbix-suggestions-and-feedback/).