github.com/zabbix/zabbix.git
author     Anton Fayantsev <anton.fayantsev@zabbix.com>  2021-04-22 16:09:07 +0300
committer  Anton Fayantsev <anton.fayantsev@zabbix.com>  2021-04-22 16:09:07 +0300
commit     a4eadde0af4f085e2ac4e5e06faf5edaebe7905f (patch)
tree       5252ec696d900799b483ff51643d36f6469a19f0 /templates
parent     71bbbfce8b3f38683a5a90d3341dbb75260fd5b8 (diff)

[ZBXNEXT-6327] added templates with multi-page dashboards
Diffstat (limited to 'templates')
-rw-r--r--  templates/app/apache_agent/README.md | 110
-rw-r--r--  templates/app/apache_agent/template_app_apache_agent.yaml | 190
-rw-r--r--  templates/app/apache_http/README.md | 96
-rw-r--r--  templates/app/apache_http/template_app_apache_http.yaml | 126
-rw-r--r--  templates/app/aranet/README.md | 4
-rw-r--r--  templates/app/docker/README.md | 206
-rw-r--r--  templates/app/docker/template_app_docker.yaml | 166
-rw-r--r--  templates/app/elasticsearch_http/README.md | 224
-rw-r--r--  templates/app/etcd_http/README.md | 164
-rw-r--r--  templates/app/generic_java_jmx/README.md | 178
-rw-r--r--  templates/app/hadoop_http/README.md | 192
-rw-r--r--  templates/app/haproxy_agent/README.md | 256
-rw-r--r--  templates/app/haproxy_agent/template_app_haproxy_agent.yaml | 220
-rw-r--r--  templates/app/haproxy_http/README.md | 260
-rw-r--r--  templates/app/haproxy_http/template_app_haproxy_http.yaml | 220
-rw-r--r--  templates/app/iis_agent/README.md | 134
-rw-r--r--  templates/app/iis_agent_active/README.md | 134
-rw-r--r--  templates/app/kafka_jmx/README.md | 194
-rw-r--r--  templates/app/memcached/README.md | 94
-rw-r--r--  templates/app/nginx_agent/README.md | 72
-rw-r--r--  templates/app/nginx_agent/template_app_nginx_agent.yaml | 92
-rw-r--r--  templates/app/nginx_http/README.md | 62
-rw-r--r--  templates/app/nginx_http/template_app_nginx_http.yaml | 92
-rw-r--r--  templates/app/rabbitmq_agent/README.md | 300
-rw-r--r--  templates/app/rabbitmq_agent/template_app_rabbitmq_agent.yaml | 370
-rw-r--r--  templates/app/rabbitmq_http/README.md | 288
-rw-r--r--  templates/app/rabbitmq_http/template_app_rabbitmq_http.yaml | 370
-rw-r--r--  templates/app/sharepoint_http/template_app_sharepoint_http.yaml | 72
-rw-r--r--  templates/app/squid_snmp/README.md | 146
-rw-r--r--  templates/app/vmware/template_app_vmware.yaml | 97
-rw-r--r--  templates/app/vmware_fqdn/template_app_vmware_fqdn.yaml | 102
-rw-r--r--  templates/app/zookeeper_http/README.md | 146
-rw-r--r--  templates/classic/template_app_remote_zabbix_proxy.yaml | 124
-rw-r--r--  templates/classic/template_app_remote_zabbix_server.yaml | 186
-rw-r--r--  templates/classic/template_app_zabbix_proxy.yaml | 124
-rw-r--r--  templates/classic/template_app_zabbix_server.yaml | 186
-rw-r--r--  templates/classic/template_os_aix.yaml | 124
-rw-r--r--  templates/classic/template_os_freebsd.yaml | 124
-rw-r--r--  templates/classic/template_os_hp-ux.yaml | 92
-rw-r--r--  templates/classic/template_os_mac_os_x.yaml | 62
-rw-r--r--  templates/classic/template_os_openbsd.yaml | 186
-rw-r--r--  templates/classic/template_os_solaris.yaml | 186
-rw-r--r--  templates/db/cassandra_jmx/README.md | 240
-rw-r--r--  templates/db/clickhouse_http/README.md | 240
-rw-r--r--  templates/db/mongodb/template_db_mongodb.yaml | 846
-rw-r--r--  templates/db/mongodb_cluster/template_db_mongodb_cluster.yaml | 364
-rw-r--r--  templates/db/mysql_agent/template_db_mysql_agent.yaml | 186
-rw-r--r--  templates/db/mysql_agent2/template_db_mysql_agent2.yaml | 186
-rw-r--r--  templates/db/mysql_odbc/template_db_mysql_odbc.yaml | 186
-rw-r--r--  templates/db/postgresql/template_db_postgresql.yaml | 499
-rw-r--r--  templates/db/postgresql_agent2/template_db_postgresql_agent2.yaml | 148
-rw-r--r--  templates/db/redis/README.md | 336
-rw-r--r--  templates/db/redis/template_db_redis.yaml | 442
-rw-r--r--  templates/db/tidb_http/tidb_pd_http/template_db_tidb_pd_http.yaml | 184
-rw-r--r--  templates/db/tidb_http/tidb_tidb_http/template_db_tidb_tidb_http.yaml | 280
-rw-r--r--  templates/db/tidb_http/tidb_tikv_http/template_db_tidb_tikv_http.yaml | 252
-rw-r--r--  templates/module/00icmp_ping/README.md | 30
-rw-r--r--  templates/module/ether_like_snmp/README.md | 20
-rw-r--r--  templates/module/generic_snmp_snmp/README.md | 30
-rw-r--r--  templates/module/host_resources_snmp/README.md | 112
-rw-r--r--  templates/module/host_resources_snmp/template_module_host_resources_snmp.yaml | 96
-rw-r--r--  templates/module/interfaces_simple_snmp/README.md | 72
-rw-r--r--  templates/module/interfaces_simple_snmp/template_module_interfaces_simple_snmp.yaml | 38
-rw-r--r--  templates/module/interfaces_snmp/README.md | 74
-rw-r--r--  templates/module/interfaces_snmp/template_module_interfaces_snmp.yaml | 38
-rw-r--r--  templates/module/interfaces_win_snmp/README.md | 74
-rw-r--r--  templates/module/interfaces_win_snmp/template_module_interfaces_win_snmp.yaml | 38
-rw-r--r--  templates/module/smart_agent2/template_module_smart_agent2.yaml | 73
-rw-r--r--  templates/module/smart_agent2_active/template_module_smart_agent2_active.yaml | 73
-rw-r--r--  templates/module/zabbix_agent/README.md | 54
-rw-r--r--  templates/net/alcatel_timetra_snmp/README.md | 94
-rw-r--r--  templates/net/arista_snmp/README.md | 98
-rw-r--r--  templates/net/brocade_fc_sw_snmp/README.md | 104
-rw-r--r--  templates/net/brocade_foundry_sw_snmp/README.md | 194
-rw-r--r--  templates/net/cisco_catalyst_3750/cisco_catalyst_3750_24fs_snmp/template_net_cisco_catalyst_3750_24fs_snmp.yaml | 238
-rw-r--r--  templates/net/cisco_catalyst_3750/cisco_catalyst_3750_24ps_snmp/template_net_cisco_catalyst_3750_24ps_snmp.yaml | 238
-rw-r--r--  templates/net/cisco_catalyst_3750/cisco_catalyst_3750_24ts_snmp/template_net_cisco_catalyst_3750_24ts_snmp.yaml | 238
-rw-r--r--  templates/net/cisco_catalyst_3750/cisco_catalyst_3750_48ps_snmp/template_net_cisco_catalyst_3750_48ps_snmp.yaml | 238
-rw-r--r--  templates/net/cisco_catalyst_3750/cisco_catalyst_3750_48ts_snmp/template_net_cisco_catalyst_3750_48ts_snmp.yaml | 238
-rw-r--r--  templates/net/cisco_snmp/README.md | 264
-rw-r--r--  templates/net/dell_force_s_series_snmp/README.md | 94
-rw-r--r--  templates/net/dlink_des7200_snmp/README.md | 84
-rw-r--r--  templates/net/dlink_des_snmp/README.md | 86
-rw-r--r--  templates/net/extreme_snmp/README.md | 98
-rw-r--r--  templates/net/generic_snmp/README.md | 12
-rw-r--r--  templates/net/hp_hh3c_snmp/README.md | 98
-rw-r--r--  templates/net/hp_hpn_snmp/README.md | 106
-rw-r--r--  templates/net/huawei_snmp/README.md | 78
-rw-r--r--  templates/net/intel_qlogic_infiniband_snmp/README.md | 80
-rw-r--r--  templates/net/juniper_snmp/README.md | 94
-rw-r--r--  templates/net/mellanox_snmp/template_net_mellanox_snmp.yaml | 168
-rw-r--r--  templates/net/mikrotik_snmp/README.md | 158
-rw-r--r--  templates/net/morningstar_snmp/prostar_mppt_snmp/README.md | 214
-rw-r--r--  templates/net/morningstar_snmp/prostar_pwm_snmp/README.md | 206
-rw-r--r--  templates/net/morningstar_snmp/sunsaver_mppt_snmp/README.md | 184
-rw-r--r--  templates/net/morningstar_snmp/suresine_snmp/README.md | 108
-rw-r--r--  templates/net/morningstar_snmp/tristar_mppt_600V_snmp/README.md | 226
-rw-r--r--  templates/net/morningstar_snmp/tristar_mppt_snmp/README.md | 170
-rw-r--r--  templates/net/morningstar_snmp/tristar_pwm_snmp/README.md | 184
-rw-r--r--  templates/net/netgear_snmp/README.md | 90
-rw-r--r--  templates/net/qtech_snmp/README.md | 90
-rw-r--r--  templates/net/tplink_snmp/README.md | 54
-rw-r--r--  templates/net/ubiquiti_airos_snmp/README.md | 48
-rw-r--r--  templates/os/linux/README.md | 330
-rw-r--r--  templates/os/linux/template_os_linux.yaml | 308
-rw-r--r--  templates/os/linux_active/README.md | 330
-rw-r--r--  templates/os/linux_active/template_os_linux_active.yaml | 308
-rw-r--r--  templates/os/linux_prom/README.md | 242
-rw-r--r--  templates/os/linux_prom/template_os_linux_prom.yaml | 306
-rw-r--r--  templates/os/linux_snmp_snmp/README.md | 204
-rw-r--r--  templates/os/linux_snmp_snmp/template_os_linux_snmp_snmp.yaml | 320
-rw-r--r--  templates/os/windows_agent/template_os_windows_agent.yaml | 352
-rw-r--r--  templates/os/windows_agent_active/template_os_windows_agent_active.yaml | 352
-rw-r--r--  templates/os/windows_snmp/README.md | 12
-rw-r--r--  templates/san/huawei_5300v5_snmp/template_san_huawei_5300v5_snmp.yaml | 296
-rw-r--r--  templates/san/netapp_aff_a700_http/template_san_netapp_aff_a700_http.yaml | 564
-rw-r--r--  templates/server/chassis_ipmi/README.md | 52
-rw-r--r--  templates/server/cisco_ucs_snmp/README.md | 196
-rw-r--r--  templates/server/dell_idrac_snmp/README.md | 222
-rw-r--r--  templates/server/ibm_imm_snmp/README.md | 112
-rw-r--r--  templates/server/supermicro_aten_snmp/README.md | 44
121 files changed, 10868 insertions, 10338 deletions
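
The substantive change behind ZBXNEXT-6327 shows up in every template YAML below: dashboard definitions no longer attach their `widgets` list directly to the dashboard but nest it under a `pages` list, one entry per dashboard page, which is what enables multi-page dashboards. A minimal before/after sketch of that schema change, condensed from the 'Apache performance' dashboard diff that follows (only the structural keys matter; the widget fields are trimmed copies from that diff, not an exhaustive export):

```yaml
# Before: widgets hang directly off the dashboard (one implicit page).
dashboards:
  -
    name: 'Apache performance'
    widgets:
      -
        type: GRAPH_CLASSIC
        width: '12'
        height: '5'
---
# After (multi-page format): each dashboard carries a list of pages,
# and every page owns its own widget list.
dashboards:
  -
    name: 'Apache performance'
    pages:
      -
        widgets:
          -
            type: GRAPH_CLASSIC
            width: '12'
            height: '5'
```

The rest of the churn in this commit is mechanical: the README links and the "For Zabbix version" line move from 5.2 to 5.4, and the markdown tables are re-emitted with padded, column-aligned cells.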
diff --git a/templates/app/apache_agent/README.md b/templates/app/apache_agent/README.md
index 618106cbcbd..bafcc08911a 100644
--- a/templates/app/apache_agent/README.md
+++ b/templates/app/apache_agent/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor Apache HTTPD by Zabbix that works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
Template `Apache by Zabbix agent` - collects metrics by polling [mod_status](https://httpd.apache.org/docs/current/mod/mod_status.html) locally with Zabbix agent:
@@ -59,7 +59,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
Setup [mod_status](https://httpd.apache.org/docs/current/mod/mod_status.html)
@@ -84,14 +84,14 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$APACHE.PROCESS_NAME} |<p>Apache server process name</p> |`httpd` |
-|{$APACHE.RESPONSE_TIME.MAX.WARN} |<p>Maximum Apache response time in seconds for trigger expression</p> |`10` |
-|{$APACHE.STATUS.HOST} |<p>Hostname or IP address of the Apache status page</p> |`127.0.0.1` |
-|{$APACHE.STATUS.PATH} |<p>The URL path</p> |`server-status?auto` |
-|{$APACHE.STATUS.PORT} |<p>The port of Apache status page</p> |`80` |
-|{$APACHE.STATUS.SCHEME} |<p>Request scheme which may be http or https</p> |`http` |
+| Name | Description | Default |
+|----------------------------------|-----------------------------------------------------------------------|----------------------|
+| {$APACHE.PROCESS_NAME} | <p>Apache server process name</p> | `httpd` |
+| {$APACHE.RESPONSE_TIME.MAX.WARN} | <p>Maximum Apache response time in seconds for trigger expression</p> | `10` |
+| {$APACHE.STATUS.HOST} | <p>Hostname or IP address of the Apache status page</p> | `127.0.0.1` |
+| {$APACHE.STATUS.PATH} | <p>The URL path</p> | `server-status?auto` |
+| {$APACHE.STATUS.PORT} | <p>The port of Apache status page</p> | `80` |
+| {$APACHE.STATUS.SCHEME} | <p>Request scheme which may be http or https</p> | `http` |
## Template links
@@ -99,57 +99,57 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Event MPM discovery |<p>Additional metrics if event MPM is used</p><p>https://httpd.apache.org/docs/current/mod/event.html</p> |DEPENDENT |apache.mpm.event.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.ServerMPM`</p><p>- JAVASCRIPT: `return JSON.stringify(value === 'event' ? [{'{#SINGLETON}': ''}] : []);`</p> |
+| Name | Description | Type | Key and additional info |
+|---------------------|-----------------------------------------------------------------------------------------------------------|-----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Event MPM discovery | <p>Additional metrics if event MPM is used</p><p>https://httpd.apache.org/docs/current/mod/event.html</p> | DEPENDENT | apache.mpm.event.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.ServerMPM`</p><p>- JAVASCRIPT: `return JSON.stringify(value === 'event' ? [{'{#SINGLETON}': ''}] : []);`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Apache |Apache: Service ping |<p>-</p> |ZABBIX_PASSIVE |net.tcp.service[http,"{$APACHE.STATUS.HOST}","{$APACHE.STATUS.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Apache |Apache: Service response time |<p>-</p> |ZABBIX_PASSIVE |net.tcp.service.perf[http,"{$APACHE.STATUS.HOST}","{$APACHE.STATUS.PORT}"] |
-|Apache |Apache: Total bytes |<p>Total bytes served</p> |DEPENDENT |apache.bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total kBytes"]`</p><p>- MULTIPLIER: `1024`</p> |
-|Apache |Apache: Bytes per second |<p>Calculated as change rate for 'Total bytes' stat.</p><p>BytesPerSec is not used, as it counts average since last Apache server start.</p> |DEPENDENT |apache.bytes.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total kBytes"]`</p><p>- MULTIPLIER: `1024`</p><p>- CHANGE_PER_SECOND |
-|Apache |Apache: Requests per second |<p>Calculated as change rate for 'Total requests' stat.</p><p>ReqPerSec is not used, as it counts average since last Apache server start.</p> |DEPENDENT |apache.requests.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total Accesses"]`</p><p>- CHANGE_PER_SECOND |
-|Apache |Apache: Total requests |<p>A total number of accesses</p> |DEPENDENT |apache.requests<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total Accesses"]`</p> |
-|Apache |Apache: Uptime |<p>Service uptime in seconds</p> |DEPENDENT |apache.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.ServerUptimeSeconds`</p> |
-|Apache |Apache: Version |<p>Service version</p> |DEPENDENT |apache.version<p>**Preprocessing**:</p><p>- JSONPATH: `$.ServerVersion`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Apache |Apache: Total workers busy |<p>Total number of busy worker threads/processes</p> |DEPENDENT |apache.workers_total.busy<p>**Preprocessing**:</p><p>- JSONPATH: `$.BusyWorkers`</p> |
-|Apache |Apache: Total workers idle |<p>Total number of idle worker threads/processes</p> |DEPENDENT |apache.workers_total.idle<p>**Preprocessing**:</p><p>- JSONPATH: `$.IdleWorkers`</p> |
-|Apache |Apache: Workers closing connection |<p>Number of workers in closing state</p> |DEPENDENT |apache.workers.closing<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.closing`</p> |
-|Apache |Apache: Workers DNS lookup |<p>Number of workers in dnslookup state</p> |DEPENDENT |apache.workers.dnslookup<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.dnslookup`</p> |
-|Apache |Apache: Workers finishing |<p>Number of workers in finishing state</p> |DEPENDENT |apache.workers.finishing<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.finishing`</p> |
-|Apache |Apache: Workers idle cleanup |<p>Number of workers in cleanup state</p> |DEPENDENT |apache.workers.cleanup<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.cleanup`</p> |
-|Apache |Apache: Workers keepalive (read) |<p>Number of workers in keepalive state</p> |DEPENDENT |apache.workers.keepalive<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.keepalive`</p> |
-|Apache |Apache: Workers logging |<p>Number of workers in logging state</p> |DEPENDENT |apache.workers.logging<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.logging`</p> |
-|Apache |Apache: Workers reading request |<p>Number of workers in reading state</p> |DEPENDENT |apache.workers.reading<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.reading`</p> |
-|Apache |Apache: Workers sending reply |<p>Number of workers in sending state</p> |DEPENDENT |apache.workers.sending<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.sending`</p> |
-|Apache |Apache: Workers slot with no current process |<p>Number of slots with no current process</p> |DEPENDENT |apache.workers.slot<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.slot`</p> |
-|Apache |Apache: Workers starting up |<p>Number of workers in starting state</p> |DEPENDENT |apache.workers.starting<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.starting`</p> |
-|Apache |Apache: Workers waiting for connection |<p>Number of workers in waiting state</p> |DEPENDENT |apache.workers.waiting<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.waiting`</p> |
-|Apache |Apache: Number of processes running |<p>-</p> |ZABBIX_PASSIVE |proc.num["{$APACHE.PROCESS_NAME}"] |
-|Apache |Apache: Memory usage (rss) |<p>Resident set size memory used by process in bytes.</p> |ZABBIX_PASSIVE |proc.mem["{$APACHE.PROCESS_NAME}",,,,rss] |
-|Apache |Apache: Memory usage (vsize) |<p>Virtual memory size used by process in bytes.</p> |ZABBIX_PASSIVE |proc.mem["{$APACHE.PROCESS_NAME}",,,,vsize] |
-|Apache |Apache: CPU utilization |<p>Process CPU utilization percentage.</p> |ZABBIX_PASSIVE |proc.cpu.util["{$APACHE.PROCESS_NAME}"] |
-|Apache |Apache: Connections async closing |<p>Number of async connections in closing state (only applicable to event MPM)</p> |DEPENDENT |apache.connections[async_closing{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsAsyncClosing`</p> |
-|Apache |Apache: Connections async keep alive |<p>Number of async connections in keep-alive state (only applicable to event MPM)</p> |DEPENDENT |apache.connections[async_keep_alive{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsAsyncKeepAlive`</p> |
-|Apache |Apache: Connections async writing |<p>Number of async connections in writing state (only applicable to event MPM)</p> |DEPENDENT |apache.connections[async_writing{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsAsyncWriting`</p> |
-|Apache |Apache: Connections total |<p>Number of total connections</p> |DEPENDENT |apache.connections[total{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsTotal`</p> |
-|Apache |Apache: Bytes per request |<p>Average number of client requests per second</p> |DEPENDENT |apache.bytes[per_request{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.BytesPerReq`</p> |
-|Apache |Apache: Number of async processes |<p>Number of async processes</p> |DEPENDENT |apache.process[num{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Processes`</p> |
-|Zabbix_raw_items |Apache: Get status |<p>Getting data from a machine-readable version of the Apache status page.</p><p>https://httpd.apache.org/docs/current/mod/mod_status.html</p> |ZABBIX_PASSIVE |web.page.get["{$APACHE.STATUS.SCHEME}://{$APACHE.STATUS.HOST}:{$APACHE.STATUS.PORT}/{$APACHE.STATUS.PATH}"]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|----------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Apache | Apache: Service ping | <p>-</p> | ZABBIX_PASSIVE | net.tcp.service[http,"{$APACHE.STATUS.HOST}","{$APACHE.STATUS.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Apache | Apache: Service response time | <p>-</p> | ZABBIX_PASSIVE | net.tcp.service.perf[http,"{$APACHE.STATUS.HOST}","{$APACHE.STATUS.PORT}"] |
+| Apache | Apache: Total bytes | <p>Total bytes served</p> | DEPENDENT | apache.bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total kBytes"]`</p><p>- MULTIPLIER: `1024`</p> |
+| Apache | Apache: Bytes per second | <p>Calculated as change rate for 'Total bytes' stat.</p><p>BytesPerSec is not used, as it counts average since last Apache server start.</p> | DEPENDENT | apache.bytes.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total kBytes"]`</p><p>- MULTIPLIER: `1024`</p><p>- CHANGE_PER_SECOND |
+| Apache | Apache: Requests per second | <p>Calculated as change rate for 'Total requests' stat.</p><p>ReqPerSec is not used, as it counts average since last Apache server start.</p> | DEPENDENT | apache.requests.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total Accesses"]`</p><p>- CHANGE_PER_SECOND |
+| Apache | Apache: Total requests | <p>A total number of accesses</p> | DEPENDENT | apache.requests<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total Accesses"]`</p> |
+| Apache | Apache: Uptime | <p>Service uptime in seconds</p> | DEPENDENT | apache.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.ServerUptimeSeconds`</p> |
+| Apache | Apache: Version | <p>Service version</p> | DEPENDENT | apache.version<p>**Preprocessing**:</p><p>- JSONPATH: `$.ServerVersion`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Apache | Apache: Total workers busy | <p>Total number of busy worker threads/processes</p> | DEPENDENT | apache.workers_total.busy<p>**Preprocessing**:</p><p>- JSONPATH: `$.BusyWorkers`</p> |
+| Apache | Apache: Total workers idle | <p>Total number of idle worker threads/processes</p> | DEPENDENT | apache.workers_total.idle<p>**Preprocessing**:</p><p>- JSONPATH: `$.IdleWorkers`</p> |
+| Apache | Apache: Workers closing connection | <p>Number of workers in closing state</p> | DEPENDENT | apache.workers.closing<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.closing`</p> |
+| Apache | Apache: Workers DNS lookup | <p>Number of workers in dnslookup state</p> | DEPENDENT | apache.workers.dnslookup<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.dnslookup`</p> |
+| Apache | Apache: Workers finishing | <p>Number of workers in finishing state</p> | DEPENDENT | apache.workers.finishing<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.finishing`</p> |
+| Apache | Apache: Workers idle cleanup | <p>Number of workers in cleanup state</p> | DEPENDENT | apache.workers.cleanup<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.cleanup`</p> |
+| Apache | Apache: Workers keepalive (read) | <p>Number of workers in keepalive state</p> | DEPENDENT | apache.workers.keepalive<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.keepalive`</p> |
+| Apache | Apache: Workers logging | <p>Number of workers in logging state</p> | DEPENDENT | apache.workers.logging<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.logging`</p> |
+| Apache | Apache: Workers reading request | <p>Number of workers in reading state</p> | DEPENDENT | apache.workers.reading<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.reading`</p> |
+| Apache | Apache: Workers sending reply | <p>Number of workers in sending state</p> | DEPENDENT | apache.workers.sending<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.sending`</p> |
+| Apache | Apache: Workers slot with no current process | <p>Number of slots with no current process</p> | DEPENDENT | apache.workers.slot<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.slot`</p> |
+| Apache | Apache: Workers starting up | <p>Number of workers in starting state</p> | DEPENDENT | apache.workers.starting<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.starting`</p> |
+| Apache | Apache: Workers waiting for connection | <p>Number of workers in waiting state</p> | DEPENDENT | apache.workers.waiting<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.waiting`</p> |
+| Apache | Apache: Number of processes running | <p>-</p> | ZABBIX_PASSIVE | proc.num["{$APACHE.PROCESS_NAME}"] |
+| Apache | Apache: Memory usage (rss) | <p>Resident set size memory used by process in bytes.</p> | ZABBIX_PASSIVE | proc.mem["{$APACHE.PROCESS_NAME}",,,,rss] |
+| Apache | Apache: Memory usage (vsize) | <p>Virtual memory size used by process in bytes.</p> | ZABBIX_PASSIVE | proc.mem["{$APACHE.PROCESS_NAME}",,,,vsize] |
+| Apache | Apache: CPU utilization | <p>Process CPU utilization percentage.</p> | ZABBIX_PASSIVE | proc.cpu.util["{$APACHE.PROCESS_NAME}"] |
+| Apache | Apache: Connections async closing | <p>Number of async connections in closing state (only applicable to event MPM)</p> | DEPENDENT | apache.connections[async_closing{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsAsyncClosing`</p> |
+| Apache | Apache: Connections async keep alive | <p>Number of async connections in keep-alive state (only applicable to event MPM)</p> | DEPENDENT | apache.connections[async_keep_alive{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsAsyncKeepAlive`</p> |
+| Apache | Apache: Connections async writing | <p>Number of async connections in writing state (only applicable to event MPM)</p> | DEPENDENT | apache.connections[async_writing{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsAsyncWriting`</p> |
+| Apache | Apache: Connections total | <p>Number of total connections</p> | DEPENDENT | apache.connections[total{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsTotal`</p> |
+| Apache | Apache: Bytes per request | <p>Average number of client requests per second</p> | DEPENDENT | apache.bytes[per_request{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.BytesPerReq`</p> |
+| Apache | Apache: Number of async processes | <p>Number of async processes</p> | DEPENDENT | apache.process[num{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Processes`</p> |
+| Zabbix_raw_items | Apache: Get status | <p>Getting data from a machine-readable version of the Apache status page.</p><p>https://httpd.apache.org/docs/current/mod/mod_status.html</p> | ZABBIX_PASSIVE | web.page.get["{$APACHE.STATUS.SCHEME}://{$APACHE.STATUS.HOST}:{$APACHE.STATUS.PORT}/{$APACHE.STATUS.PATH}"]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Apache: Service is down |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service[http,"{$APACHE.STATUS.HOST}","{$APACHE.STATUS.PORT}"].last()}=0` |AVERAGE |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Apache: Process is not running</p> |
-|Apache: Service response time is too high (over {$APACHE.RESPONSE_TIME.MAX.WARN}s for 5m) |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service.perf[http,"{$APACHE.STATUS.HOST}","{$APACHE.STATUS.PORT}"].min(5m)}>{$APACHE.RESPONSE_TIME.MAX.WARN}` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Apache: Process is not running</p><p>- Apache: Service is down</p> |
-|Apache: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:apache.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|Apache: Version has changed (new version: {ITEM.VALUE}) |<p>Apache version has changed. Ack to close.</p> |`{TEMPLATE_NAME:apache.version.diff()}=1 and {TEMPLATE_NAME:apache.version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Apache: Process is not running |<p>-</p> |`{TEMPLATE_NAME:proc.num["{$APACHE.PROCESS_NAME}"].last()}=0` |HIGH | |
-|Apache: Failed to fetch status page (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes.</p> |`{TEMPLATE_NAME:web.page.get["{$APACHE.STATUS.SCHEME}://{$APACHE.STATUS.HOST}:{$APACHE.STATUS.PORT}/{$APACHE.STATUS.PATH}"].nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Apache: Process is not running</p><p>- Apache: Service is down</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------------------|------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------------------------------------------------------------------------|
+| Apache: Service is down | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service[http,"{$APACHE.STATUS.HOST}","{$APACHE.STATUS.PORT}"].last()}=0` | AVERAGE | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Apache: Process is not running</p> |
+| Apache: Service response time is too high (over {$APACHE.RESPONSE_TIME.MAX.WARN}s for 5m) | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service.perf[http,"{$APACHE.STATUS.HOST}","{$APACHE.STATUS.PORT}"].min(5m)}>{$APACHE.RESPONSE_TIME.MAX.WARN}` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Apache: Process is not running</p><p>- Apache: Service is down</p> |
+| Apache: has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:apache.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| Apache: Version has changed (new version: {ITEM.VALUE}) | <p>Apache version has changed. Ack to close.</p> | `{TEMPLATE_NAME:apache.version.diff()}=1 and {TEMPLATE_NAME:apache.version.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Apache: Process is not running | <p>-</p> | `{TEMPLATE_NAME:proc.num["{$APACHE.PROCESS_NAME}"].last()}=0` | HIGH | |
+| Apache: Failed to fetch status page (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes.</p> | `{TEMPLATE_NAME:web.page.get["{$APACHE.STATUS.SCHEME}://{$APACHE.STATUS.HOST}:{$APACHE.STATUS.PORT}/{$APACHE.STATUS.PATH}"].nodata(30m)}=1` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Apache: Process is not running</p><p>- Apache: Service is down</p> |
## Feedback
diff --git a/templates/app/apache_agent/template_app_apache_agent.yaml b/templates/app/apache_agent/template_app_apache_agent.yaml
index d0927c57aad..aa296696109 100644
--- a/templates/app/apache_agent/template_app_apache_agent.yaml
+++ b/templates/app/apache_agent/template_app_apache_agent.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:04:42Z'
+ date: '2021-04-22T11:27:43Z'
groups:
-
name: Templates/Applications
@@ -768,103 +768,105 @@ zabbix_export:
dashboards:
-
name: 'Apache performance'
- widgets:
- -
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Apache: Requests per second'
- host: 'Apache by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Apache: Workers total'
- host: 'Apache by Zabbix agent'
- -
- type: GRAPH_PROTOTYPE
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
+ pages:
+ -
+ widgets:
+ -
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Apache: Requests per second'
+ host: 'Apache by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Apache: Workers total'
+ host: 'Apache by Zabbix agent'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Apache: Current async connections{#SINGLETON}'
- host: 'Apache by Zabbix agent'
- -
- type: GRAPH_PROTOTYPE
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Apache: Current async connections{#SINGLETON}'
+ host: 'Apache by Zabbix agent'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Apache: Current async processes{#SINGLETON}'
- host: 'Apache by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- 'y': '10'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Apache: Worker states'
- host: 'Apache by Zabbix agent'
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Apache: Current async processes{#SINGLETON}'
+ host: 'Apache by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Apache: Worker states'
+ host: 'Apache by Zabbix agent'
valuemaps:
-
name: 'Service state'
diff --git a/templates/app/apache_http/README.md b/templates/app/apache_http/README.md
index fe14cf4bad5..1517108b3ad 100644
--- a/templates/app/apache_http/README.md
+++ b/templates/app/apache_http/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor Apache HTTPD by Zabbix that works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
Template `Apache by HTTP` - collects metrics by polling [mod_status](https://httpd.apache.org/docs/current/mod/mod_status.html) with HTTP agent remotely:
@@ -57,7 +57,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/http) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/http) for basic instructions.
Setup [mod_status](https://httpd.apache.org/docs/current/mod/mod_status.html)
@@ -81,12 +81,12 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$APACHE.RESPONSE_TIME.MAX.WARN} |<p>Maximum Apache response time in seconds for trigger expression</p> |`10` |
-|{$APACHE.STATUS.PATH} |<p>The URL path</p> |`server-status?auto` |
-|{$APACHE.STATUS.PORT} |<p>The port of Apache status page</p> |`80` |
-|{$APACHE.STATUS.SCHEME} |<p>Request scheme which may be http or https</p> |`http` |
+| Name | Description | Default |
+|----------------------------------|-----------------------------------------------------------------------|----------------------|
+| {$APACHE.RESPONSE_TIME.MAX.WARN} | <p>Maximum Apache response time in seconds for trigger expression</p> | `10` |
+| {$APACHE.STATUS.PATH} | <p>The URL path</p> | `server-status?auto` |
+| {$APACHE.STATUS.PORT} | <p>The port of Apache status page</p> | `80` |
+| {$APACHE.STATUS.SCHEME} | <p>Request scheme which may be http or https</p> | `http` |
## Template links
@@ -94,52 +94,52 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Event MPM discovery |<p>Additional metrics if event MPM is used</p><p>https://httpd.apache.org/docs/current/mod/event.html</p> |DEPENDENT |apache.mpm.event.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.ServerMPM`</p><p>- JAVASCRIPT: `return JSON.stringify(value === 'event' ? [{'{#SINGLETON}': ''}] : []);`</p> |
+| Name | Description | Type | Key and additional info |
+|---------------------|-----------------------------------------------------------------------------------------------------------|-----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Event MPM discovery | <p>Additional metrics if event MPM is used</p><p>https://httpd.apache.org/docs/current/mod/event.html</p> | DEPENDENT | apache.mpm.event.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.ServerMPM`</p><p>- JAVASCRIPT: `return JSON.stringify(value === 'event' ? [{'{#SINGLETON}': ''}] : []);`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Apache |Apache: Service ping |<p>-</p> |SIMPLE |net.tcp.service[http,"{HOST.CONN}","{$APACHE.STATUS.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Apache |Apache: Service response time |<p>-</p> |SIMPLE |net.tcp.service.perf[http,"{HOST.CONN}","{$APACHE.STATUS.PORT}"] |
-|Apache |Apache: Total bytes |<p>Total bytes served</p> |DEPENDENT |apache.bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total kBytes"]`</p><p>- MULTIPLIER: `1024`</p> |
-|Apache |Apache: Bytes per second |<p>Calculated as change rate for 'Total bytes' stat.</p><p>BytesPerSec is not used, as it counts average since last Apache server start.</p> |DEPENDENT |apache.bytes.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total kBytes"]`</p><p>- MULTIPLIER: `1024`</p><p>- CHANGE_PER_SECOND |
-|Apache |Apache: Requests per second |<p>Calculated as change rate for 'Total requests' stat.</p><p>ReqPerSec is not used, as it counts average since last Apache server start.</p> |DEPENDENT |apache.requests.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total Accesses"]`</p><p>- CHANGE_PER_SECOND |
-|Apache |Apache: Total requests |<p>A total number of accesses</p> |DEPENDENT |apache.requests<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total Accesses"]`</p> |
-|Apache |Apache: Uptime |<p>Service uptime in seconds</p> |DEPENDENT |apache.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.ServerUptimeSeconds`</p> |
-|Apache |Apache: Version |<p>Service version</p> |DEPENDENT |apache.version<p>**Preprocessing**:</p><p>- JSONPATH: `$.ServerVersion`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Apache |Apache: Total workers busy |<p>Total number of busy worker threads/processes</p> |DEPENDENT |apache.workers_total.busy<p>**Preprocessing**:</p><p>- JSONPATH: `$.BusyWorkers`</p> |
-|Apache |Apache: Total workers idle |<p>Total number of idle worker threads/processes</p> |DEPENDENT |apache.workers_total.idle<p>**Preprocessing**:</p><p>- JSONPATH: `$.IdleWorkers`</p> |
-|Apache |Apache: Workers closing connection |<p>Number of workers in closing state</p> |DEPENDENT |apache.workers.closing<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.closing`</p> |
-|Apache |Apache: Workers DNS lookup |<p>Number of workers in dnslookup state</p> |DEPENDENT |apache.workers.dnslookup<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.dnslookup`</p> |
-|Apache |Apache: Workers finishing |<p>Number of workers in finishing state</p> |DEPENDENT |apache.workers.finishing<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.finishing`</p> |
-|Apache |Apache: Workers idle cleanup |<p>Number of workers in cleanup state</p> |DEPENDENT |apache.workers.cleanup<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.cleanup`</p> |
-|Apache |Apache: Workers keepalive (read) |<p>Number of workers in keepalive state</p> |DEPENDENT |apache.workers.keepalive<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.keepalive`</p> |
-|Apache |Apache: Workers logging |<p>Number of workers in logging state</p> |DEPENDENT |apache.workers.logging<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.logging`</p> |
-|Apache |Apache: Workers reading request |<p>Number of workers in reading state</p> |DEPENDENT |apache.workers.reading<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.reading`</p> |
-|Apache |Apache: Workers sending reply |<p>Number of workers in sending state</p> |DEPENDENT |apache.workers.sending<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.sending`</p> |
-|Apache |Apache: Workers slot with no current process |<p>Number of slots with no current process</p> |DEPENDENT |apache.workers.slot<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.slot`</p> |
-|Apache |Apache: Workers starting up |<p>Number of workers in starting state</p> |DEPENDENT |apache.workers.starting<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.starting`</p> |
-|Apache |Apache: Workers waiting for connection |<p>Number of workers in waiting state</p> |DEPENDENT |apache.workers.waiting<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.waiting`</p> |
-|Apache |Apache: Connections async closing |<p>Number of async connections in closing state (only applicable to event MPM)</p> |DEPENDENT |apache.connections[async_closing{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsAsyncClosing`</p> |
-|Apache |Apache: Connections async keep alive |<p>Number of async connections in keep-alive state (only applicable to event MPM)</p> |DEPENDENT |apache.connections[async_keep_alive{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsAsyncKeepAlive`</p> |
-|Apache |Apache: Connections async writing |<p>Number of async connections in writing state (only applicable to event MPM)</p> |DEPENDENT |apache.connections[async_writing{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsAsyncWriting`</p> |
-|Apache |Apache: Connections total |<p>Number of total connections</p> |DEPENDENT |apache.connections[total{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsTotal`</p> |
-|Apache |Apache: Bytes per request |<p>Average number of client requests per second</p> |DEPENDENT |apache.bytes[per_request{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.BytesPerReq`</p> |
-|Apache |Apache: Number of async processes |<p>Number of async processes</p> |DEPENDENT |apache.process[num{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Processes`</p> |
-|Zabbix_raw_items |Apache: Get status |<p>Getting data from a machine-readable version of the Apache status page.</p><p>https://httpd.apache.org/docs/current/mod/mod_status.html</p> |HTTP_AGENT |apache.get_status<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|----------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------|
+| Apache | Apache: Service ping | <p>-</p> | SIMPLE | net.tcp.service[http,"{HOST.CONN}","{$APACHE.STATUS.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Apache | Apache: Service response time | <p>-</p> | SIMPLE | net.tcp.service.perf[http,"{HOST.CONN}","{$APACHE.STATUS.PORT}"] |
+| Apache | Apache: Total bytes | <p>Total bytes served</p> | DEPENDENT | apache.bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total kBytes"]`</p><p>- MULTIPLIER: `1024`</p> |
+| Apache | Apache: Bytes per second | <p>Calculated as change rate for 'Total bytes' stat.</p><p>BytesPerSec is not used, as it counts average since last Apache server start.</p> | DEPENDENT | apache.bytes.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total kBytes"]`</p><p>- MULTIPLIER: `1024`</p><p>- CHANGE_PER_SECOND |
+| Apache | Apache: Requests per second | <p>Calculated as change rate for 'Total requests' stat.</p><p>ReqPerSec is not used, as it counts average since last Apache server start.</p> | DEPENDENT | apache.requests.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total Accesses"]`</p><p>- CHANGE_PER_SECOND |
+| Apache | Apache: Total requests | <p>A total number of accesses</p> | DEPENDENT | apache.requests<p>**Preprocessing**:</p><p>- JSONPATH: `$["Total Accesses"]`</p> |
+| Apache | Apache: Uptime | <p>Service uptime in seconds</p> | DEPENDENT | apache.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.ServerUptimeSeconds`</p> |
+| Apache | Apache: Version | <p>Service version</p> | DEPENDENT | apache.version<p>**Preprocessing**:</p><p>- JSONPATH: `$.ServerVersion`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Apache | Apache: Total workers busy | <p>Total number of busy worker threads/processes</p> | DEPENDENT | apache.workers_total.busy<p>**Preprocessing**:</p><p>- JSONPATH: `$.BusyWorkers`</p> |
+| Apache | Apache: Total workers idle | <p>Total number of idle worker threads/processes</p> | DEPENDENT | apache.workers_total.idle<p>**Preprocessing**:</p><p>- JSONPATH: `$.IdleWorkers`</p> |
+| Apache | Apache: Workers closing connection | <p>Number of workers in closing state</p> | DEPENDENT | apache.workers.closing<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.closing`</p> |
+| Apache | Apache: Workers DNS lookup | <p>Number of workers in dnslookup state</p> | DEPENDENT | apache.workers.dnslookup<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.dnslookup`</p> |
+| Apache | Apache: Workers finishing | <p>Number of workers in finishing state</p> | DEPENDENT | apache.workers.finishing<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.finishing`</p> |
+| Apache | Apache: Workers idle cleanup | <p>Number of workers in cleanup state</p> | DEPENDENT | apache.workers.cleanup<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.cleanup`</p> |
+| Apache | Apache: Workers keepalive (read) | <p>Number of workers in keepalive state</p> | DEPENDENT | apache.workers.keepalive<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.keepalive`</p> |
+| Apache | Apache: Workers logging | <p>Number of workers in logging state</p> | DEPENDENT | apache.workers.logging<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.logging`</p> |
+| Apache | Apache: Workers reading request | <p>Number of workers in reading state</p> | DEPENDENT | apache.workers.reading<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.reading`</p> |
+| Apache | Apache: Workers sending reply | <p>Number of workers in sending state</p> | DEPENDENT | apache.workers.sending<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.sending`</p> |
+| Apache | Apache: Workers slot with no current process | <p>Number of slots with no current process</p> | DEPENDENT | apache.workers.slot<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.slot`</p> |
+| Apache | Apache: Workers starting up | <p>Number of workers in starting state</p> | DEPENDENT | apache.workers.starting<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.starting`</p> |
+| Apache | Apache: Workers waiting for connection | <p>Number of workers in waiting state</p> | DEPENDENT | apache.workers.waiting<p>**Preprocessing**:</p><p>- JSONPATH: `$.Workers.waiting`</p> |
+| Apache | Apache: Connections async closing | <p>Number of async connections in closing state (only applicable to event MPM)</p> | DEPENDENT | apache.connections[async_closing{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsAsyncClosing`</p> |
+| Apache | Apache: Connections async keep alive | <p>Number of async connections in keep-alive state (only applicable to event MPM)</p> | DEPENDENT | apache.connections[async_keep_alive{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsAsyncKeepAlive`</p> |
+| Apache | Apache: Connections async writing | <p>Number of async connections in writing state (only applicable to event MPM)</p> | DEPENDENT | apache.connections[async_writing{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsAsyncWriting`</p> |
+| Apache | Apache: Connections total | <p>Number of total connections</p> | DEPENDENT | apache.connections[total{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.ConnsTotal`</p> |
+| Apache | Apache: Bytes per request | <p>Average number of client requests per second</p> | DEPENDENT | apache.bytes[per_request{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.BytesPerReq`</p> |
+| Apache | Apache: Number of async processes | <p>Number of async processes</p> | DEPENDENT | apache.process[num{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Processes`</p> |
+| Zabbix_raw_items | Apache: Get status | <p>Getting data from a machine-readable version of the Apache status page.</p><p>https://httpd.apache.org/docs/current/mod/mod_status.html</p> | HTTP_AGENT | apache.get_status<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Apache: Service is down |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service[http,"{HOST.CONN}","{$APACHE.STATUS.PORT}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|Apache: Service response time is too high (over {$APACHE.RESPONSE_TIME.MAX.WARN}s for 5m) |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service.perf[http,"{HOST.CONN}","{$APACHE.STATUS.PORT}"].min(5m)}>{$APACHE.RESPONSE_TIME.MAX.WARN}` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Apache: Service is down</p> |
-|Apache: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:apache.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|Apache: Version has changed (new version: {ITEM.VALUE}) |<p>Apache version has changed. Ack to close.</p> |`{TEMPLATE_NAME:apache.version.diff()}=1 and {TEMPLATE_NAME:apache.version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Apache: Failed to fetch status page (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes.</p> |`{TEMPLATE_NAME:apache.get_status.nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Apache: Service is down</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------------------|------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------|
+| Apache: Service is down | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service[http,"{HOST.CONN}","{$APACHE.STATUS.PORT}"].last()}=0` | AVERAGE | <p>Manual close: YES</p> |
+| Apache: Service response time is too high (over {$APACHE.RESPONSE_TIME.MAX.WARN}s for 5m) | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service.perf[http,"{HOST.CONN}","{$APACHE.STATUS.PORT}"].min(5m)}>{$APACHE.RESPONSE_TIME.MAX.WARN}` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Apache: Service is down</p> |
+| Apache: has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:apache.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| Apache: Version has changed (new version: {ITEM.VALUE}) | <p>Apache version has changed. Ack to close.</p> | `{TEMPLATE_NAME:apache.version.diff()}=1 and {TEMPLATE_NAME:apache.version.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Apache: Failed to fetch status page (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes.</p> | `{TEMPLATE_NAME:apache.get_status.nodata(30m)}=1` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Apache: Service is down</p> |
## Feedback
diff --git a/templates/app/apache_http/template_app_apache_http.yaml b/templates/app/apache_http/template_app_apache_http.yaml
index 87e894dfae7..dd4cf61b4ba 100644
--- a/templates/app/apache_http/template_app_apache_http.yaml
+++ b/templates/app/apache_http/template_app_apache_http.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:04:40Z'
+ date: '2021-04-22T11:27:36Z'
groups:
-
name: Templates/Applications
@@ -711,75 +711,77 @@ zabbix_export:
dashboards:
-
name: 'Apache performance'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
+ widgets:
-
- type: GRAPH
- name: graphid
- value:
- name: 'Apache: Requests per second'
- host: 'Apache by HTTP'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Apache: Workers total'
- host: 'Apache by HTTP'
- -
- type: GRAPH_PROTOTYPE
- 'y': '5'
- width: '12'
- height: '5'
- fields:
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Apache: Requests per second'
+ host: 'Apache by HTTP'
-
- type: INTEGER
- name: columns
- value: '1'
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Apache: Workers total'
+ host: 'Apache by HTTP'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Apache: Current async connections{#SINGLETON}'
- host: 'Apache by HTTP'
- -
- type: GRAPH_PROTOTYPE
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Apache: Current async connections{#SINGLETON}'
+ host: 'Apache by HTTP'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Apache: Current async processes{#SINGLETON}'
- host: 'Apache by HTTP'
- -
- type: GRAPH_CLASSIC
- 'y': '10'
- width: '12'
- height: '6'
- fields:
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Apache: Current async processes{#SINGLETON}'
+ host: 'Apache by HTTP'
-
- type: GRAPH
- name: graphid
- value:
- name: 'Apache: Worker states'
- host: 'Apache by HTTP'
+ type: GRAPH_CLASSIC
+ 'y': '10'
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Apache: Worker states'
+ host: 'Apache by HTTP'
valuemaps:
-
name: 'Service state'
diff --git a/templates/app/aranet/README.md b/templates/app/aranet/README.md
index dfa901aa4f8..547840c6033 100644
--- a/templates/app/aranet/README.md
+++ b/templates/app/aranet/README.md
@@ -3,11 +3,11 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/http) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/http) for basic instructions.
Refer to the vendor documentation.
diff --git a/templates/app/docker/README.md b/templates/app/docker/README.md
index 83c3f6c9311..49471c7d79d 100644
--- a/templates/app/docker/README.md
+++ b/templates/app/docker/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor Docker engine by Zabbix that works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
@@ -17,7 +17,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent2) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent2) for basic instructions.
Setup and configure zabbix-agent2 compiled with the Docker monitoring plugin.
@@ -30,12 +30,12 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$DOCKER.LLD.FILTER.CONTAINER.MATCHES} |<p>Filter of discoverable containers</p> |`.*` |
-|{$DOCKER.LLD.FILTER.CONTAINER.NOT_MATCHES} |<p>Filter to exclude discovered containers</p> |`CHANGE_IF_NEEDED` |
-|{$DOCKER.LLD.FILTER.IMAGE.MATCHES} |<p>Filter of discoverable images</p> |`.*` |
-|{$DOCKER.LLD.FILTER.IMAGE.NOT_MATCHES} |<p>Filter to exclude discovered images</p> |`CHANGE_IF_NEEDED` |
+| Name | Description | Default |
+|--------------------------------------------|------------------------------------------------|--------------------|
+| {$DOCKER.LLD.FILTER.CONTAINER.MATCHES} | <p>Filter of discoverable containers</p> | `.*` |
+| {$DOCKER.LLD.FILTER.CONTAINER.NOT_MATCHES} | <p>Filter to exclude discovered containers</p> | `CHANGE_IF_NEEDED` |
+| {$DOCKER.LLD.FILTER.IMAGE.MATCHES} | <p>Filter of discoverable images</p> | `.*` |
+| {$DOCKER.LLD.FILTER.IMAGE.NOT_MATCHES} | <p>Filter to exclude discovered images</p> | `CHANGE_IF_NEEDED` |
## Template links
@@ -43,107 +43,107 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Images discovery |<p>Discovery for images metrics</p> |ZABBIX_PASSIVE |docker.images.discovery<p>**Filter**:</p>AND <p>- A: {#NAME} MATCHES_REGEX `{$DOCKER.LLD.FILTER.IMAGE.MATCHES}`</p><p>- B: {#NAME} NOT_MATCHES_REGEX `{$DOCKER.LLD.FILTER.IMAGE.NOT_MATCHES}`</p> |
-|Containers discovery |<p>Discovery for containers metrics</p><p>Parameter:</p><p>true - Returns all containers</p><p>false - Returns only running containers</p> |ZABBIX_PASSIVE |docker.containers.discovery[false]<p>**Filter**:</p>AND <p>- A: {#NAME} MATCHES_REGEX `{$DOCKER.LLD.FILTER.CONTAINER.MATCHES}`</p><p>- B: {#NAME} NOT_MATCHES_REGEX `{$DOCKER.LLD.FILTER.CONTAINER.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|----------------------|---------------------------------------------------------------------------------------------------------------------------------------------|----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Images discovery     | <p>Discovery of image metrics</p>                                                                                                           | ZABBIX_PASSIVE | docker.images.discovery<p>**Filter**:</p>AND <p>- A: {#NAME} MATCHES_REGEX `{$DOCKER.LLD.FILTER.IMAGE.MATCHES}`</p><p>- B: {#NAME} NOT_MATCHES_REGEX `{$DOCKER.LLD.FILTER.IMAGE.NOT_MATCHES}`</p>                      |
+| Containers discovery | <p>Discovery of container metrics</p><p>Parameter:</p><p>true - Returns all containers</p><p>false - Returns only running containers</p>   | ZABBIX_PASSIVE | docker.containers.discovery[false]<p>**Filter**:</p>AND <p>- A: {#NAME} MATCHES_REGEX `{$DOCKER.LLD.FILTER.CONTAINER.MATCHES}`</p><p>- B: {#NAME} NOT_MATCHES_REGEX `{$DOCKER.LLD.FILTER.CONTAINER.NOT_MATCHES}`</p>   |
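
The discovery rules above keep or drop LLD rows purely by matching {#NAME} against the two filter macros. As a rough sketch of that include/exclude behaviour (assuming partial-match regex semantics; the real filtering is done by the Zabbix server, not by this code):

```python
# Illustration of the container LLD filter pair from the macros above.
import re

MATCHES = r".*"                     # {$DOCKER.LLD.FILTER.CONTAINER.MATCHES}
NOT_MATCHES = r"CHANGE_IF_NEEDED"   # {$DOCKER.LLD.FILTER.CONTAINER.NOT_MATCHES}


def is_discovered(name: str) -> bool:
    """Keep a container when it matches the include regex and not the exclude one."""
    return bool(re.search(MATCHES, name)) and not re.search(NOT_MATCHES, name)


# With the default macros every container is kept; tighten NOT_MATCHES
# (for example r"^/tmp-") to exclude throwaway containers from discovery.
for container in ("/web-frontend", "/tmp-build-cache", "/db-primary"):
    print(container, "->", is_discovered(container))
```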
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Docker |Docker: Ping | |ZABBIX_PASSIVE |docker.ping<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Docker |Docker: Containers total |<p>Total number of containers on this host</p> |DEPENDENT |docker.containers.total<p>**Preprocessing**:</p><p>- JSONPATH: `$.Containers`</p> |
-|Docker |Docker: Containers running |<p>Total number of containers running on this host</p> |DEPENDENT |docker.containers.running<p>**Preprocessing**:</p><p>- JSONPATH: `$.ContainersRunning`</p> |
-|Docker |Docker: Containers stopped |<p>Total number of containers stopped on this host</p> |DEPENDENT |docker.containers.stopped<p>**Preprocessing**:</p><p>- JSONPATH: `$.ContainersStopped`</p> |
-|Docker |Docker: Containers paused |<p>Total number of containers paused on this host</p> |DEPENDENT |docker.containers.paused<p>**Preprocessing**:</p><p>- JSONPATH: `$.ContainersPaused`</p> |
-|Docker |Docker: Images total |<p>Number of images with intermediate image layers</p> |DEPENDENT |docker.images.total<p>**Preprocessing**:</p><p>- JSONPATH: `$.Images`</p> |
-|Docker |Docker: Storage driver |<p>Docker storage driver </p><p> https://docs.docker.com/storage/storagedriver/</p> |DEPENDENT |docker.driver<p>**Preprocessing**:</p><p>- JSONPATH: `$.Driver`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: Memory limit enabled |<p>-</p> |DEPENDENT |docker.mem_limit.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.MemoryLimit`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: Swap limit enabled |<p>-</p> |DEPENDENT |docker.swap_limit.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.SwapLimit`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: Kernel memory enabled |<p>-</p> |DEPENDENT |docker.kernel_mem.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.KernelMemory`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: Kernel memory TCP enabled |<p>-</p> |DEPENDENT |docker.kernel_mem_tcp.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.KernelMemoryTCP`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: CPU CFS Period enabled |<p>https://docs.docker.com/config/containers/resource_constraints/#configure-the-default-cfs-scheduler</p> |DEPENDENT |docker.cpu_cfs_period.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.CpuCfsPeriod`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: CPU CFS Quota enabled |<p>https://docs.docker.com/config/containers/resource_constraints/#configure-the-default-cfs-scheduler</p> |DEPENDENT |docker.cpu_cfs_quota.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.CpuCfsQuota`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: CPU Shares enabled |<p>https://docs.docker.com/config/containers/resource_constraints/#configure-the-default-cfs-scheduler</p> |DEPENDENT |docker.cpu_shares.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.CPUShares`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: CPU Set enabled |<p>https://docs.docker.com/config/containers/resource_constraints/#configure-the-default-cfs-scheduler</p> |DEPENDENT |docker.cpu_set.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.CPUSet`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: Pids limit enabled |<p>-</p> |DEPENDENT |docker.pids_limit.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.PidsLimit`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: IPv4 Forwarding enabled |<p>-</p> |DEPENDENT |docker.ipv4_forwarding.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.IPv4Forwarding`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: Debug enabled |<p>-</p> |DEPENDENT |docker.debug.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.Debug`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: Nfd |<p>Number of used File Descriptors</p> |DEPENDENT |docker.nfd<p>**Preprocessing**:</p><p>- JSONPATH: `$.NFd`</p> |
-|Docker |Docker: OomKill disabled |<p>-</p> |DEPENDENT |docker.oomkill.disabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.OomKillDisable`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: Goroutines |<p>Number of goroutines</p> |DEPENDENT |docker.goroutines<p>**Preprocessing**:</p><p>- JSONPATH: `$.NGoroutines`</p> |
-|Docker |Docker: Logging driver |<p>-</p> |DEPENDENT |docker.logging_driver<p>**Preprocessing**:</p><p>- JSONPATH: `$.LoggingDriver`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: Cgroup driver |<p>-</p> |DEPENDENT |docker.cgroup_driver<p>**Preprocessing**:</p><p>- JSONPATH: `$.CgroupDriver`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: NEvents listener |<p>-</p> |DEPENDENT |docker.nevents_listener<p>**Preprocessing**:</p><p>- JSONPATH: `$.NEventsListener`</p> |
-|Docker |Docker: Kernel version |<p>-</p> |DEPENDENT |docker.kernel_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.KernelVersion`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: Operating system |<p>-</p> |DEPENDENT |docker.operating_system<p>**Preprocessing**:</p><p>- JSONPATH: `$.OperatingSystem`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: OS type |<p>-</p> |DEPENDENT |docker.os_type<p>**Preprocessing**:</p><p>- JSONPATH: `$.OSType`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: Architecture |<p>-</p> |DEPENDENT |docker.architecture<p>**Preprocessing**:</p><p>- JSONPATH: `$.Architecture`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: NCPU |<p>-</p> |DEPENDENT |docker.ncpu<p>**Preprocessing**:</p><p>- JSONPATH: `$.NCPU`</p> |
-|Docker |Docker: Memory total |<p>-</p> |DEPENDENT |docker.mem.total<p>**Preprocessing**:</p><p>- JSONPATH: `$.MemTotal`</p> |
-|Docker |Docker: Docker root dir |<p>-</p> |DEPENDENT |docker.root_dir<p>**Preprocessing**:</p><p>- JSONPATH: `$.DockerRootDir`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: Name |<p>-</p> |DEPENDENT |docker.name<p>**Preprocessing**:</p><p>- JSONPATH: `$.Name`</p> |
-|Docker |Docker: Server version |<p>-</p> |DEPENDENT |docker.server_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.ServerVersion`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: Default runtime |<p>-</p> |DEPENDENT |docker.default_runtime<p>**Preprocessing**:</p><p>- JSONPATH: `$.DefaultRuntime`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: Live restore enabled |<p>-</p> |DEPENDENT |docker.live_restore.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.LiveRestoreEnabled`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Docker: Layers size |<p>-</p> |DEPENDENT |docker.layers_size<p>**Preprocessing**:</p><p>- JSONPATH: `$.LayersSize`</p> |
-|Docker |Docker: Images size |<p>-</p> |DEPENDENT |docker.images_size<p>**Preprocessing**:</p><p>- JSONPATH: `$.Images[*].Size.sum()`</p> |
-|Docker |Docker: Containers size |<p>-</p> |DEPENDENT |docker.containers_size<p>**Preprocessing**:</p><p>- JSONPATH: `$.Containers[*].SizeRw.sum()`</p> |
-|Docker |Docker: Volumes size |<p>-</p> |DEPENDENT |docker.volumes_size<p>**Preprocessing**:</p><p>- JSONPATH: `$.Volumes[*].UsageData.Size.sum()`</p> |
-|Docker |Docker: Images available |<p>Number of top-level images</p> |DEPENDENT |docker.images.top_level<p>**Preprocessing**:</p><p>- JSONPATH: `$.length()`</p> |
-|Docker |Image {#NAME}: Created |<p>-</p> |DEPENDENT |docker.image.created["{#ID}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.Id == "{#ID}")].Created.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Image {#NAME}: Size |<p>-</p> |DEPENDENT |docker.image.size["{#ID}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.Id == "{#ID}")].Size.first()`</p> |
-|Docker |Container {#NAME}: Get stats |<p>Get container stats based on resource usage</p> |ZABBIX_PASSIVE |docker.container_stats["{#NAME}"] |
-|Docker |Container {#NAME}: CPU total usage per second |<p>-</p> |DEPENDENT |docker.container_stats.cpu_usage.total.rate["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpu_stats.cpu_usage.total_usage`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `1.0E-9`</p> |
-|Docker |Container {#NAME}: CPU kernelmode usage per second |<p>-</p> |DEPENDENT |docker.container_stats.cpu_usage.kernel.rate["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpu_stats.cpu_usage.usage_in_kernelmode`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `1.0E-9`</p> |
-|Docker |Container {#NAME}: CPU usermode usage per second |<p>-</p> |DEPENDENT |docker.container_stats.cpu_usage.user.rate["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpu_stats.cpu_usage.usage_in_usermode`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `1.0E-9`</p> |
-|Docker |Container {#NAME}: Online CPUs |<p>-</p> |DEPENDENT |docker.container_stats.online_cpus["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpu_stats.online_cpus`</p> |
-|Docker |Container {#NAME}: Throttling periods |<p>Number of periods with throttling active</p> |DEPENDENT |docker.container_stats.cpu_usage.throttling_periods["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpu_stats.throttling_data.periods`</p> |
-|Docker |Container {#NAME}: Throttled periods |<p>Number of periods when the container hits its throttling limit</p> |DEPENDENT |docker.container_stats.cpu_usage.throttled_periods["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpu_stats.throttling_data.throttled_periods`</p> |
-|Docker |Container {#NAME}: Throttled time |<p>Aggregate time the container was throttled for in nanoseconds</p> |DEPENDENT |docker.container_stats.cpu_usage.throttled_time["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpu_stats.throttling_data.throttled_time`</p><p>- MULTIPLIER: `1.0E-9`</p> |
-|Docker |Container {#NAME}: Memory usage |<p>-</p> |DEPENDENT |docker.container_stats.memory.usage["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.memory_stats.usage`</p> |
-|Docker |Container {#NAME}: Memory maximum usage |<p>-</p> |DEPENDENT |docker.container_stats.memory.max_usage["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.memory_stats.max_usage`</p> |
-|Docker |Container {#NAME}: Memory commit bytes |<p>-</p> |DEPENDENT |docker.container_stats.memory.commit_bytes["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.memory_stats.commitbytes`</p> |
-|Docker |Container {#NAME}: Memory commit peak bytes |<p>-</p> |DEPENDENT |docker.container_stats.memory.commit_peak_bytes["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.memory_stats.commitpeakbytes`</p> |
-|Docker |Container {#NAME}: Memory private working set |<p>-</p> |DEPENDENT |docker.container_stats.memory.private_working_set["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.memory_stats.privateworkingset`</p> |
-|Docker |Container {#NAME}: Networks bytes received per second |<p>-</p> |DEPENDENT |docker.networks.rx_bytes["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].rx_bytes.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|Docker |Container {#NAME}: Networks packets received per second |<p>-</p> |DEPENDENT |docker.networks.rx_packets["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].rx_packets.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|Docker |Container {#NAME}: Networks errors received per second |<p>-</p> |DEPENDENT |docker.networks.rx_errors["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].rx_errors.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|Docker |Container {#NAME}: Networks incoming packets dropped per second |<p>-</p> |DEPENDENT |docker.networks.rx_dropped["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].rx_dropped.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|Docker |Container {#NAME}: Networks bytes sent per second |<p>-</p> |DEPENDENT |docker.networks.tx_bytes["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].tx_bytes.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|Docker |Container {#NAME}: Networks packets sent per second |<p>-</p> |DEPENDENT |docker.networks.tx_packets["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].tx_packets.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|Docker |Container {#NAME}: Networks errors sent per second |<p>-</p> |DEPENDENT |docker.networks.tx_errors["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].tx_errors.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|Docker |Container {#NAME}: Networks outgoing packets dropped per second |<p>-</p> |DEPENDENT |docker.networks.tx_dropped["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].tx_dropped.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|Docker |Container {#NAME}: Get info |<p>Return low-level information about a container</p> |ZABBIX_PASSIVE |docker.container_info["{#NAME}"] |
-|Docker |Container {#NAME}: Created |<p>-</p> |DEPENDENT |docker.container_info.created["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Created`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Container {#NAME}: Image |<p>-</p> |DEPENDENT |docker.container_info.image["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.Names[0] == "{#NAME}")].Image.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Container {#NAME}: Restart count |<p>-</p> |DEPENDENT |docker.container_info.restart_count["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.RestartCount`</p> |
-|Docker |Container {#NAME}: Status |<p>-</p> |DEPENDENT |docker.container_info.state.status["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.Status`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Docker |Container {#NAME}: Running |<p>-</p> |DEPENDENT |docker.container_info.state.running["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.Running`</p><p>- BOOL_TO_DECIMAL |
-|Docker |Container {#NAME}: Paused |<p>-</p> |DEPENDENT |docker.container_info.state.paused["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.Paused`</p><p>- BOOL_TO_DECIMAL |
-|Docker |Container {#NAME}: Restarting |<p>-</p> |DEPENDENT |docker.container_info.state.restarting["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.Restarting`</p><p>- BOOL_TO_DECIMAL |
-|Docker |Container {#NAME}: OOMKilled |<p>-</p> |DEPENDENT |docker.container_info.state.oomkilled["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.OOMKilled`</p><p>- BOOL_TO_DECIMAL |
-|Docker |Container {#NAME}: Dead |<p>-</p> |DEPENDENT |docker.container_info.state.dead["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.Dead`</p><p>- BOOL_TO_DECIMAL |
-|Docker |Container {#NAME}: Pid |<p>-</p> |DEPENDENT |docker.container_info.state.pid["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.Pid`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Container {#NAME}: Exit code |<p>-</p> |DEPENDENT |docker.container_info.state.exitcode["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.ExitCode`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Container {#NAME}: Error |<p>-</p> |DEPENDENT |docker.container_info.state.error["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.Error`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Container {#NAME}: Started at |<p>-</p> |DEPENDENT |docker.container_info.started["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.StartedAt`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Docker |Container {#NAME}: Finished at |<p>-</p> |DEPENDENT |docker.container_info.finished["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.FinishedAt`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Zabbix_raw_items |Docker: Get info | |ZABBIX_PASSIVE |docker.info |
-|Zabbix_raw_items |Docker: Get containers | |ZABBIX_PASSIVE |docker.containers |
-|Zabbix_raw_items |Docker: Get images | |ZABBIX_PASSIVE |docker.images |
-|Zabbix_raw_items |Docker: Get data_usage | |ZABBIX_PASSIVE |docker.data_usage |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|-----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------|----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Docker | Docker: Ping | | ZABBIX_PASSIVE | docker.ping<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Docker | Docker: Containers total | <p>Total number of containers on this host</p> | DEPENDENT | docker.containers.total<p>**Preprocessing**:</p><p>- JSONPATH: `$.Containers`</p> |
+| Docker | Docker: Containers running | <p>Total number of containers running on this host</p> | DEPENDENT | docker.containers.running<p>**Preprocessing**:</p><p>- JSONPATH: `$.ContainersRunning`</p> |
+| Docker | Docker: Containers stopped | <p>Total number of containers stopped on this host</p> | DEPENDENT | docker.containers.stopped<p>**Preprocessing**:</p><p>- JSONPATH: `$.ContainersStopped`</p> |
+| Docker | Docker: Containers paused | <p>Total number of containers paused on this host</p> | DEPENDENT | docker.containers.paused<p>**Preprocessing**:</p><p>- JSONPATH: `$.ContainersPaused`</p> |
+| Docker | Docker: Images total | <p>Number of images with intermediate image layers</p> | DEPENDENT | docker.images.total<p>**Preprocessing**:</p><p>- JSONPATH: `$.Images`</p> |
+| Docker | Docker: Storage driver | <p>Docker storage driver </p><p> https://docs.docker.com/storage/storagedriver/</p> | DEPENDENT | docker.driver<p>**Preprocessing**:</p><p>- JSONPATH: `$.Driver`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: Memory limit enabled | <p>-</p> | DEPENDENT | docker.mem_limit.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.MemoryLimit`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: Swap limit enabled | <p>-</p> | DEPENDENT | docker.swap_limit.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.SwapLimit`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: Kernel memory enabled | <p>-</p> | DEPENDENT | docker.kernel_mem.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.KernelMemory`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: Kernel memory TCP enabled | <p>-</p> | DEPENDENT | docker.kernel_mem_tcp.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.KernelMemoryTCP`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: CPU CFS Period enabled | <p>https://docs.docker.com/config/containers/resource_constraints/#configure-the-default-cfs-scheduler</p> | DEPENDENT | docker.cpu_cfs_period.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.CpuCfsPeriod`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: CPU CFS Quota enabled | <p>https://docs.docker.com/config/containers/resource_constraints/#configure-the-default-cfs-scheduler</p> | DEPENDENT | docker.cpu_cfs_quota.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.CpuCfsQuota`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: CPU Shares enabled | <p>https://docs.docker.com/config/containers/resource_constraints/#configure-the-default-cfs-scheduler</p> | DEPENDENT | docker.cpu_shares.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.CPUShares`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: CPU Set enabled | <p>https://docs.docker.com/config/containers/resource_constraints/#configure-the-default-cfs-scheduler</p> | DEPENDENT | docker.cpu_set.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.CPUSet`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: Pids limit enabled | <p>-</p> | DEPENDENT | docker.pids_limit.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.PidsLimit`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: IPv4 Forwarding enabled | <p>-</p> | DEPENDENT | docker.ipv4_forwarding.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.IPv4Forwarding`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: Debug enabled | <p>-</p> | DEPENDENT | docker.debug.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.Debug`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: Nfd | <p>Number of used File Descriptors</p> | DEPENDENT | docker.nfd<p>**Preprocessing**:</p><p>- JSONPATH: `$.NFd`</p> |
+| Docker | Docker: OomKill disabled | <p>-</p> | DEPENDENT | docker.oomkill.disabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.OomKillDisable`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: Goroutines | <p>Number of goroutines</p> | DEPENDENT | docker.goroutines<p>**Preprocessing**:</p><p>- JSONPATH: `$.NGoroutines`</p> |
+| Docker | Docker: Logging driver | <p>-</p> | DEPENDENT | docker.logging_driver<p>**Preprocessing**:</p><p>- JSONPATH: `$.LoggingDriver`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: Cgroup driver | <p>-</p> | DEPENDENT | docker.cgroup_driver<p>**Preprocessing**:</p><p>- JSONPATH: `$.CgroupDriver`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: NEvents listener | <p>-</p> | DEPENDENT | docker.nevents_listener<p>**Preprocessing**:</p><p>- JSONPATH: `$.NEventsListener`</p> |
+| Docker | Docker: Kernel version | <p>-</p> | DEPENDENT | docker.kernel_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.KernelVersion`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: Operating system | <p>-</p> | DEPENDENT | docker.operating_system<p>**Preprocessing**:</p><p>- JSONPATH: `$.OperatingSystem`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: OS type | <p>-</p> | DEPENDENT | docker.os_type<p>**Preprocessing**:</p><p>- JSONPATH: `$.OSType`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: Architecture | <p>-</p> | DEPENDENT | docker.architecture<p>**Preprocessing**:</p><p>- JSONPATH: `$.Architecture`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: NCPU | <p>-</p> | DEPENDENT | docker.ncpu<p>**Preprocessing**:</p><p>- JSONPATH: `$.NCPU`</p> |
+| Docker | Docker: Memory total | <p>-</p> | DEPENDENT | docker.mem.total<p>**Preprocessing**:</p><p>- JSONPATH: `$.MemTotal`</p> |
+| Docker | Docker: Docker root dir | <p>-</p> | DEPENDENT | docker.root_dir<p>**Preprocessing**:</p><p>- JSONPATH: `$.DockerRootDir`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: Name | <p>-</p> | DEPENDENT | docker.name<p>**Preprocessing**:</p><p>- JSONPATH: `$.Name`</p> |
+| Docker | Docker: Server version | <p>-</p> | DEPENDENT | docker.server_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.ServerVersion`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: Default runtime | <p>-</p> | DEPENDENT | docker.default_runtime<p>**Preprocessing**:</p><p>- JSONPATH: `$.DefaultRuntime`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: Live restore enabled | <p>-</p> | DEPENDENT | docker.live_restore.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.LiveRestoreEnabled`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Docker: Layers size | <p>-</p> | DEPENDENT | docker.layers_size<p>**Preprocessing**:</p><p>- JSONPATH: `$.LayersSize`</p> |
+| Docker | Docker: Images size | <p>-</p> | DEPENDENT | docker.images_size<p>**Preprocessing**:</p><p>- JSONPATH: `$.Images[*].Size.sum()`</p> |
+| Docker | Docker: Containers size | <p>-</p> | DEPENDENT | docker.containers_size<p>**Preprocessing**:</p><p>- JSONPATH: `$.Containers[*].SizeRw.sum()`</p> |
+| Docker | Docker: Volumes size | <p>-</p> | DEPENDENT | docker.volumes_size<p>**Preprocessing**:</p><p>- JSONPATH: `$.Volumes[*].UsageData.Size.sum()`</p> |
+| Docker | Docker: Images available | <p>Number of top-level images</p> | DEPENDENT | docker.images.top_level<p>**Preprocessing**:</p><p>- JSONPATH: `$.length()`</p> |
+| Docker | Image {#NAME}: Created | <p>-</p> | DEPENDENT | docker.image.created["{#ID}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.Id == "{#ID}")].Created.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Image {#NAME}: Size | <p>-</p> | DEPENDENT | docker.image.size["{#ID}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.Id == "{#ID}")].Size.first()`</p> |
+| Docker | Container {#NAME}: Get stats | <p>Get container stats based on resource usage</p> | ZABBIX_PASSIVE | docker.container_stats["{#NAME}"] |
+| Docker | Container {#NAME}: CPU total usage per second | <p>-</p> | DEPENDENT | docker.container_stats.cpu_usage.total.rate["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpu_stats.cpu_usage.total_usage`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `1.0E-9`</p> |
+| Docker | Container {#NAME}: CPU kernelmode usage per second | <p>-</p> | DEPENDENT | docker.container_stats.cpu_usage.kernel.rate["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpu_stats.cpu_usage.usage_in_kernelmode`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `1.0E-9`</p> |
+| Docker | Container {#NAME}: CPU usermode usage per second | <p>-</p> | DEPENDENT | docker.container_stats.cpu_usage.user.rate["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpu_stats.cpu_usage.usage_in_usermode`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `1.0E-9`</p> |
+| Docker | Container {#NAME}: Online CPUs | <p>-</p> | DEPENDENT | docker.container_stats.online_cpus["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpu_stats.online_cpus`</p> |
+| Docker | Container {#NAME}: Throttling periods | <p>Number of periods with throttling active</p> | DEPENDENT | docker.container_stats.cpu_usage.throttling_periods["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpu_stats.throttling_data.periods`</p> |
+| Docker | Container {#NAME}: Throttled periods | <p>Number of periods when the container hits its throttling limit</p> | DEPENDENT | docker.container_stats.cpu_usage.throttled_periods["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpu_stats.throttling_data.throttled_periods`</p> |
+| Docker | Container {#NAME}: Throttled time | <p>Aggregate time the container was throttled for in nanoseconds</p> | DEPENDENT | docker.container_stats.cpu_usage.throttled_time["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.cpu_stats.throttling_data.throttled_time`</p><p>- MULTIPLIER: `1.0E-9`</p> |
+| Docker | Container {#NAME}: Memory usage | <p>-</p> | DEPENDENT | docker.container_stats.memory.usage["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.memory_stats.usage`</p> |
+| Docker | Container {#NAME}: Memory maximum usage | <p>-</p> | DEPENDENT | docker.container_stats.memory.max_usage["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.memory_stats.max_usage`</p> |
+| Docker | Container {#NAME}: Memory commit bytes | <p>-</p> | DEPENDENT | docker.container_stats.memory.commit_bytes["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.memory_stats.commitbytes`</p> |
+| Docker | Container {#NAME}: Memory commit peak bytes | <p>-</p> | DEPENDENT | docker.container_stats.memory.commit_peak_bytes["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.memory_stats.commitpeakbytes`</p> |
+| Docker | Container {#NAME}: Memory private working set | <p>-</p> | DEPENDENT | docker.container_stats.memory.private_working_set["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.memory_stats.privateworkingset`</p> |
+| Docker | Container {#NAME}: Networks bytes received per second | <p>-</p> | DEPENDENT | docker.networks.rx_bytes["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].rx_bytes.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| Docker | Container {#NAME}: Networks packets received per second | <p>-</p> | DEPENDENT | docker.networks.rx_packets["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].rx_packets.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| Docker | Container {#NAME}: Networks errors received per second | <p>-</p> | DEPENDENT | docker.networks.rx_errors["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].rx_errors.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| Docker | Container {#NAME}: Networks incoming packets dropped per second | <p>-</p> | DEPENDENT | docker.networks.rx_dropped["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].rx_dropped.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| Docker | Container {#NAME}: Networks bytes sent per second | <p>-</p> | DEPENDENT | docker.networks.tx_bytes["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].tx_bytes.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| Docker | Container {#NAME}: Networks packets sent per second | <p>-</p> | DEPENDENT | docker.networks.tx_packets["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].tx_packets.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| Docker | Container {#NAME}: Networks errors sent per second | <p>-</p> | DEPENDENT | docker.networks.tx_errors["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].tx_errors.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| Docker | Container {#NAME}: Networks outgoing packets dropped per second | <p>-</p> | DEPENDENT | docker.networks.tx_dropped["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.networks[*].tx_dropped.sum()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| Docker | Container {#NAME}: Get info | <p>Return low-level information about a container</p> | ZABBIX_PASSIVE | docker.container_info["{#NAME}"] |
+| Docker | Container {#NAME}: Created | <p>-</p> | DEPENDENT | docker.container_info.created["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Created`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Container {#NAME}: Image | <p>-</p> | DEPENDENT | docker.container_info.image["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.Names[0] == "{#NAME}")].Image.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Container {#NAME}: Restart count | <p>-</p> | DEPENDENT | docker.container_info.restart_count["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.RestartCount`</p> |
+| Docker | Container {#NAME}: Status | <p>-</p> | DEPENDENT | docker.container_info.state.status["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.Status`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Docker | Container {#NAME}: Running | <p>-</p> | DEPENDENT | docker.container_info.state.running["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.Running`</p><p>- BOOL_TO_DECIMAL |
+| Docker | Container {#NAME}: Paused | <p>-</p> | DEPENDENT | docker.container_info.state.paused["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.Paused`</p><p>- BOOL_TO_DECIMAL |
+| Docker | Container {#NAME}: Restarting | <p>-</p> | DEPENDENT | docker.container_info.state.restarting["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.Restarting`</p><p>- BOOL_TO_DECIMAL |
+| Docker | Container {#NAME}: OOMKilled | <p>-</p> | DEPENDENT | docker.container_info.state.oomkilled["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.OOMKilled`</p><p>- BOOL_TO_DECIMAL |
+| Docker | Container {#NAME}: Dead | <p>-</p> | DEPENDENT | docker.container_info.state.dead["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.Dead`</p><p>- BOOL_TO_DECIMAL |
+| Docker | Container {#NAME}: Pid | <p>-</p> | DEPENDENT | docker.container_info.state.pid["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.Pid`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Container {#NAME}: Exit code | <p>-</p> | DEPENDENT | docker.container_info.state.exitcode["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.ExitCode`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Container {#NAME}: Error | <p>-</p> | DEPENDENT | docker.container_info.state.error["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.Error`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Container {#NAME}: Started at | <p>-</p> | DEPENDENT | docker.container_info.started["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.StartedAt`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Docker | Container {#NAME}: Finished at | <p>-</p> | DEPENDENT | docker.container_info.finished["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.State.FinishedAt`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Zabbix_raw_items | Docker: Get info | | ZABBIX_PASSIVE | docker.info |
+| Zabbix_raw_items | Docker: Get containers | | ZABBIX_PASSIVE | docker.containers |
+| Zabbix_raw_items | Docker: Get images | | ZABBIX_PASSIVE | docker.images |
+| Zabbix_raw_items | Docker: Get data_usage | | ZABBIX_PASSIVE | docker.data_usage |
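
Most of the rows above are DEPENDENT items that simply carve a field out of the JSON returned by the master items (`docker.info`, `docker.containers`, and so on). A rough, non-authoritative sketch of that preprocessing chain, using plain dictionary lookups in place of the JSONPATH and BOOL_TO_DECIMAL steps and a hypothetical payload:

```python
# Hypothetical excerpt of the JSON a `docker.info` master item might return.
import json

raw = json.dumps({
    "Containers": 12,
    "ContainersRunning": 9,
    "Driver": "overlay2",
    "MemoryLimit": True,
})

info = json.loads(raw)

containers_total = info["Containers"]           # JSONPATH: $.Containers
containers_running = info["ContainersRunning"]  # JSONPATH: $.ContainersRunning
storage_driver = info["Driver"]                 # JSONPATH: $.Driver
mem_limit_enabled = int(info["MemoryLimit"])    # JSONPATH: $.MemoryLimit + BOOL_TO_DECIMAL

print(containers_total, containers_running, storage_driver, mem_limit_enabled)
```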
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Docker: Service is down |<p>-</p> |`{TEMPLATE_NAME:docker.ping.last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|Docker: Failed to fetch info data (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes</p> |`{TEMPLATE_NAME:docker.name.nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Docker: Service is down</p> |
-|Docker: Version has changed (new version: {ITEM.VALUE}) |<p>Docker version has changed. Ack to close.</p> |`{TEMPLATE_NAME:docker.server_version.diff()}=1 and {TEMPLATE_NAME:docker.server_version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Container {#NAME}: Container has been stopped with error code |<p>-</p> |`{TEMPLATE_NAME:docker.container_info.state.exitcode["{#NAME}"].last()}>0 and {Docker:docker.container_info.state.running["{#NAME}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|Container {#NAME}: An error has occurred in the container |<p>Container {#NAME} has an error. Ack to close.</p> |`{TEMPLATE_NAME:docker.container_info.state.error["{#NAME}"].diff()}=1 and {TEMPLATE_NAME:docker.container_info.state.error["{#NAME}"].strlen()}>0` |WARNING |<p>Manual close: YES</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|---------------------------------------------------------------|-----------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------|
+| Docker: Service is down | <p>-</p> | `{TEMPLATE_NAME:docker.ping.last()}=0` | AVERAGE | <p>Manual close: YES</p> |
+| Docker: Failed to fetch info data (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes</p> | `{TEMPLATE_NAME:docker.name.nodata(30m)}=1` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Docker: Service is down</p> |
+| Docker: Version has changed (new version: {ITEM.VALUE}) | <p>Docker version has changed. Ack to close.</p> | `{TEMPLATE_NAME:docker.server_version.diff()}=1 and {TEMPLATE_NAME:docker.server_version.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Container {#NAME}: Container has been stopped with error code | <p>-</p> | `{TEMPLATE_NAME:docker.container_info.state.exitcode["{#NAME}"].last()}>0 and {Docker:docker.container_info.state.running["{#NAME}"].last()}=0` | AVERAGE | <p>Manual close: YES</p> |
+| Container {#NAME}: An error has occurred in the container | <p>Container {#NAME} has an error. Ack to close.</p> | `{TEMPLATE_NAME:docker.container_info.state.error["{#NAME}"].diff()}=1 and {TEMPLATE_NAME:docker.container_info.state.error["{#NAME}"].strlen()}>0` | WARNING | <p>Manual close: YES</p> |
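
For readers new to the expression syntax, the "stopped with error code" trigger reduces to two conditions on the latest values of the exit-code and running items. A minimal sketch of that logic (illustrative only; triggers are evaluated by the Zabbix server, not by code like this):

```python
# Mirrors the trigger condition: last exit code above zero while not running.
def container_stopped_with_error(exit_code: int, running: int) -> bool:
    return exit_code > 0 and running == 0


# Hypothetical last values for a discovered container.
print(container_stopped_with_error(exit_code=137, running=0))  # True  -> problem
print(container_stopped_with_error(exit_code=0, running=1))    # False -> OK
```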
## Feedback
diff --git a/templates/app/docker/template_app_docker.yaml b/templates/app/docker/template_app_docker.yaml
index 32979b2fb14..a98e75ed7ee 100644
--- a/templates/app/docker/template_app_docker.yaml
+++ b/templates/app/docker/template_app_docker.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:05:29Z'
+ date: '2021-04-22T11:26:28Z'
groups:
-
name: Templates/Applications
@@ -1962,87 +1962,89 @@ zabbix_export:
dashboards:
-
name: 'Docker overview'
- widgets:
- -
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Docker: Containers'
- host: Docker
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Docker: Size'
- host: Docker
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Docker: Memory total'
- host: Docker
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Docker: Goroutines'
- host: Docker
- -
- type: GRAPH_CLASSIC
- 'y': '10'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Docker: Images'
- host: Docker
+ pages:
+ -
+ widgets:
+ -
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Docker: Containers'
+ host: Docker
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Docker: Size'
+ host: Docker
+ -
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Docker: Memory total'
+ host: Docker
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Docker: Goroutines'
+ host: Docker
+ -
+ type: GRAPH_CLASSIC
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Docker: Images'
+ host: Docker
valuemaps:
-
name: 'Docker flag'
diff --git a/templates/app/elasticsearch_http/README.md b/templates/app/elasticsearch_http/README.md
index 2307342849c..fddd2f187e8 100644
--- a/templates/app/elasticsearch_http/README.md
+++ b/templates/app/elasticsearch_http/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor Elasticsearch by Zabbix that works without any external scripts.
It works with both standalone and cluster instances.
The metrics are collected in one pass remotely using an HTTP agent.
@@ -17,7 +17,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/http) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/http) for basic instructions.
You can set the {$ELASTICSEARCH.USERNAME} and {$ELASTICSEARCH.PASSWORD} macros in the template to use them at the host level.
If the ES API is exposed at a non-default location, don't forget to change the {$ELASTICSEARCH.SCHEME} and {$ELASTICSEARCH.PORT} macros.
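
A quick way to sanity-check the values you plan to put into these macros is to query the cluster health endpoint yourself, since the template's HTTP agent items call the same API. The sketch below is not part of the template and assumes the defaults `http`, port `9200`, and empty credentials:

```python
# Manual check of the Elasticsearch API with macro-equivalent settings.
import base64
import json
import urllib.request

SCHEME, HOST, PORT = "http", "127.0.0.1", 9200  # {$ELASTICSEARCH.SCHEME} / {$ELASTICSEARCH.PORT}
USERNAME, PASSWORD = "", ""                     # {$ELASTICSEARCH.USERNAME} / {$ELASTICSEARCH.PASSWORD}

request = urllib.request.Request(f"{SCHEME}://{HOST}:{PORT}/_cluster/health")
if USERNAME:
    token = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")

with urllib.request.urlopen(request, timeout=10) as response:
    health = json.load(response)

# The same endpoint feeds items such as "ES: Cluster health status".
print(health["status"], health["number_of_nodes"])
```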
@@ -29,19 +29,19 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$ELASTICSEARCH.FETCH_LATENCY.MAX.WARN} |<p>Maximum of fetch latency in milliseconds for trigger expression.</p> |`100` |
-|{$ELASTICSEARCH.FLUSH_LATENCY.MAX.WARN} |<p>Maximum of flush latency in milliseconds for trigger expression.</p> |`100` |
-|{$ELASTICSEARCH.HEAP_USED.MAX.CRIT} |<p>The maximum percent in the use of JVM heap for critically trigger expression.</p> |`95` |
-|{$ELASTICSEARCH.HEAP_USED.MAX.WARN} |<p>The maximum percent in the use of JVM heap for warning trigger expression.</p> |`85` |
-|{$ELASTICSEARCH.INDEXING_LATENCY.MAX.WARN} |<p>Maximum of indexing latency in milliseconds for trigger expression.</p> |`100` |
-|{$ELASTICSEARCH.PASSWORD} |<p>The password of the Elasticsearch.</p> |`` |
-|{$ELASTICSEARCH.PORT} |<p>The port of the Elasticsearch host.</p> |`9200` |
-|{$ELASTICSEARCH.QUERY_LATENCY.MAX.WARN} |<p>Maximum of query latency in milliseconds for trigger expression.</p> |`100` |
-|{$ELASTICSEARCH.RESPONSE_TIME.MAX.WARN} |<p>The ES cluster maximum response time in seconds for trigger expression.</p> |`10s` |
-|{$ELASTICSEARCH.SCHEME} |<p>The scheme of the Elasticsearch (http/https).</p> |`http` |
-|{$ELASTICSEARCH.USERNAME} |<p>The username of the Elasticsearch.</p> |`` |
+| Name | Description | Default |
+|--------------------------------------------|--------------------------------------------------------------------------------------|---------|
+| {$ELASTICSEARCH.FETCH_LATENCY.MAX.WARN} | <p>Maximum of fetch latency in milliseconds for trigger expression.</p> | `100` |
+| {$ELASTICSEARCH.FLUSH_LATENCY.MAX.WARN} | <p>Maximum of flush latency in milliseconds for trigger expression.</p> | `100` |
+| {$ELASTICSEARCH.HEAP_USED.MAX.CRIT}        | <p>The maximum percentage of JVM heap in use for the critical trigger expression.</p>  | `95`    |
+| {$ELASTICSEARCH.HEAP_USED.MAX.WARN}        | <p>The maximum percentage of JVM heap in use for the warning trigger expression.</p>   | `85`    |
+| {$ELASTICSEARCH.INDEXING_LATENCY.MAX.WARN} | <p>Maximum of indexing latency in milliseconds for trigger expression.</p> | `100` |
+| {$ELASTICSEARCH.PASSWORD} | <p>The password of the Elasticsearch.</p> | `` |
+| {$ELASTICSEARCH.PORT} | <p>The port of the Elasticsearch host.</p> | `9200` |
+| {$ELASTICSEARCH.QUERY_LATENCY.MAX.WARN} | <p>Maximum of query latency in milliseconds for trigger expression.</p> | `100` |
+| {$ELASTICSEARCH.RESPONSE_TIME.MAX.WARN} | <p>The ES cluster maximum response time in seconds for trigger expression.</p> | `10s` |
+| {$ELASTICSEARCH.SCHEME} | <p>The scheme of the Elasticsearch (http/https).</p> | `http` |
+| {$ELASTICSEARCH.USERNAME} | <p>The username of the Elasticsearch.</p> | `` |
## Template links
@@ -49,109 +49,109 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Cluster nodes discovery |<p>Discovery ES cluster nodes.</p> |HTTP_AGENT |es.nodes.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.nodes.[*]`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Name | Description | Type | Key and additional info |
+|-------------------------|------------------------------------|------------|-----------------------------------------------------------------------------------------------------------------------|
+| Cluster nodes discovery | <p>Discovery of ES cluster nodes.</p> | HTTP_AGENT | es.nodes.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.nodes.[*]`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p>  |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|ES_cluster |ES: Service status |<p>Checks if the service is running and accepting TCP connections.</p> |SIMPLE |net.tcp.service["{$ELASTICSEARCH.SCHEME}","{HOST.CONN}","{$ELASTICSEARCH.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|ES_cluster |ES: Service response time |<p>Checks performance of the TCP service.</p> |SIMPLE |net.tcp.service.perf["{$ELASTICSEARCH.SCHEME}","{HOST.CONN}","{$ELASTICSEARCH.PORT}"] |
-|ES_cluster |ES: Cluster health status |<p>Health status of the cluster, based on the state of its primary and replica shards. Statuses are:</p><p>green</p><p>All shards are assigned.</p><p>yellow</p><p>All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.</p><p>red</p><p>One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.</p> |DEPENDENT |es.cluster.status<p>**Preprocessing**:</p><p>- JSONPATH: `$.status`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES: Number of nodes |<p>The number of nodes within the cluster.</p> |DEPENDENT |es.cluster.number_of_nodes<p>**Preprocessing**:</p><p>- JSONPATH: `$.number_of_nodes`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES: Number of data nodes |<p>The number of nodes that are dedicated to data nodes.</p> |DEPENDENT |es.cluster.number_of_data_nodes<p>**Preprocessing**:</p><p>- JSONPATH: `$.number_of_data_nodes`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES: Number of relocating shards |<p>The number of shards that are under relocation.</p> |DEPENDENT |es.cluster.relocating_shards<p>**Preprocessing**:</p><p>- JSONPATH: `$.relocating_shards`</p> |
-|ES_cluster |ES: Number of initializing shards |<p>The number of shards that are under initialization.</p> |DEPENDENT |es.cluster.initializing_shards<p>**Preprocessing**:</p><p>- JSONPATH: `$.initializing_shards`</p> |
-|ES_cluster |ES: Number of unassigned shards |<p>The number of shards that are not allocated.</p> |DEPENDENT |es.cluster.unassigned_shards<p>**Preprocessing**:</p><p>- JSONPATH: `$.unassigned_shards`</p> |
-|ES_cluster |ES: Delayed unassigned shards |<p>The number of shards whose allocation has been delayed by the timeout settings.</p> |DEPENDENT |es.cluster.delayed_unassigned_shards<p>**Preprocessing**:</p><p>- JSONPATH: `$.delayed_unassigned_shards`</p> |
-|ES_cluster |ES: Number of pending tasks |<p>The number of cluster-level changes that have not yet been executed.</p> |DEPENDENT |es.cluster.number_of_pending_tasks<p>**Preprocessing**:</p><p>- JSONPATH: `$.number_of_pending_tasks`</p> |
-|ES_cluster |ES: Task max waiting in queue |<p>The time expressed in seconds since the earliest initiated task is waiting for being performed.</p> |DEPENDENT |es.cluster.task_max_waiting_in_queue<p>**Preprocessing**:</p><p>- JSONPATH: `$.task_max_waiting_in_queue_millis`</p><p>- MULTIPLIER: `0.001`</p> |
-|ES_cluster |ES: Inactive shards percentage |<p>The ratio of inactive shards in the cluster expressed as a percentage.</p> |DEPENDENT |es.cluster.inactive_shards_percent_as_number<p>**Preprocessing**:</p><p>- JSONPATH: `$.active_shards_percent_as_number`</p><p>- JAVASCRIPT: `return (100 - value)`</p> |
-|ES_cluster |ES: Cluster uptime |<p>Uptime duration in seconds since JVM has last started.</p> |DEPENDENT |es.nodes.jvm.max_uptime[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.nodes.jvm.max_uptime_in_millis`</p><p>- MULTIPLIER: `0.001`</p> |
-|ES_cluster |ES: Number of non-deleted documents |<p>The total number of non-deleted documents across all primary shards assigned to the selected nodes.</p><p>This number is based on the documents in Lucene segments and may include the documents from nested fields.</p> |DEPENDENT |es.indices.docs.count<p>**Preprocessing**:</p><p>- JSONPATH: `$.indices.docs.count`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES: Indices with shards assigned to nodes |<p>The total number of indices with shards assigned to the selected nodes.</p> |DEPENDENT |es.indices.count<p>**Preprocessing**:</p><p>- JSONPATH: `$.indices.count`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES: Total size of all file stores |<p>The total size in bytes of all file stores across all selected nodes.</p> |DEPENDENT |es.nodes.fs.total_in_bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.nodes.fs.total_in_bytes`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES: Total available size to JVM in all file stores |<p>The total number of bytes available to JVM in the file stores across all selected nodes.</p><p>Depending on OS or process-level restrictions, this number may be less than nodes.fs.free_in_byes. </p><p>This is the actual amount of free disk space the selected Elasticsearch nodes can use.</p> |DEPENDENT |es.nodes.fs.available_in_bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.nodes.fs.available_in_bytes`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES: Nodes with the data role |<p>The number of selected nodes with the data role.</p> |DEPENDENT |es.nodes.count.data<p>**Preprocessing**:</p><p>- JSONPATH: `$.nodes.count.data`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES: Nodes with the ingest role |<p>The number of selected nodes with the ingest role.</p> |DEPENDENT |es.nodes.count.ingest<p>**Preprocessing**:</p><p>- JSONPATH: `$.nodes.count.ingest`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES: Nodes with the master role |<p>The number of selected nodes with the master role.</p> |DEPENDENT |es.nodes.count.master<p>**Preprocessing**:</p><p>- JSONPATH: `$.nodes.count.master`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES {#ES.NODE}: Total size |<p>Total size (in bytes) of all file stores.</p> |DEPENDENT |es.node.fs.total.total_in_bytes[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].fs.total.total_in_bytes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|ES_cluster |ES {#ES.NODE}: Total available size |<p>The total number of bytes available to this Java virtual machine on all file stores. </p><p>Depending on OS or process level restrictions, this might appear less than fs.total.free_in_bytes. </p><p>This is the actual amount of free disk space the Elasticsearch node can utilize.</p> |DEPENDENT |es.node.fs.total.available_in_bytes[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].fs.total.available_in_bytes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES {#ES.NODE}: Node uptime |<p>JVM uptime in seconds.</p> |DEPENDENT |es.node.jvm.uptime[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].jvm.uptime_in_millis.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|ES_cluster |ES {#ES.NODE}: Maximum JVM memory available for use |<p>The maximum amount of memory, in bytes, available for use by the heap.</p> |DEPENDENT |es.node.jvm.mem.heap_max_in_bytes[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].jvm.mem.heap_max_in_bytes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|ES_cluster |ES {#ES.NODE}: Amount of JVM heap currently in use |<p>The memory, in bytes, currently in use by the heap.</p> |DEPENDENT |es.node.jvm.mem.heap_used_in_bytes[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].jvm.mem.heap_used_in_bytes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES {#ES.NODE}: Percent of JVM heap currently in use |<p>The percentage of memory currently in use by the heap.</p> |DEPENDENT |es.node.jvm.mem.heap_used_percent[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].jvm.mem.heap_used_percent.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES {#ES.NODE}: Amount of JVM heap committed |<p>The amount of memory, in bytes, available for use by the heap.</p> |DEPENDENT |es.node.jvm.mem.heap_committed_in_bytes[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].jvm.mem.heap_committed_in_bytes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES {#ES.NODE}: Number of open HTTP connections |<p>The number of currently open HTTP connections for the node.</p> |DEPENDENT |es.node.http.current_open[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].http.current_open.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES {#ES.NODE}: Rate of HTTP connections opened |<p>The number of HTTP connections opened for the node per second.</p> |DEPENDENT |es.node.http.opened.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].http.total_opened.first()`</p><p>- CHANGE_PER_SECOND |
-|ES_cluster |ES {#ES.NODE}: Time spent throttling operations |<p>Time in seconds spent throttling operations for the last measuring span.</p> |DEPENDENT |es.node.indices.indexing.throttle_time[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.indexing.throttle_time_in_millis.first()`</p><p>- MULTIPLIER: `0.001`</p><p>- SIMPLE_CHANGE |
-|ES_cluster |ES {#ES.NODE}: Time spent throttling recovery operations |<p>Time in seconds spent throttling recovery operations for the last measuring span.</p> |DEPENDENT |es.node.indices.recovery.throttle_time[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.recovery.throttle_time_in_millis.first()`</p><p>- MULTIPLIER: `0.001`</p><p>- SIMPLE_CHANGE |
-|ES_cluster |ES {#ES.NODE}: Time spent throttling merge operations |<p>Time in seconds spent throttling merge operations for the last measuring span.</p> |DEPENDENT |es.node.indices.merges.total_throttled_time[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.merges.total_throttled_time_in_millis.first()`</p><p>- MULTIPLIER: `0.001`</p><p>- SIMPLE_CHANGE |
-|ES_cluster |ES {#ES.NODE}: Rate of queries |<p>The number of query operations per second.</p> |DEPENDENT |es.node.indices.search.query.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.query_total.first()`</p><p>- CHANGE_PER_SECOND |
-|ES_cluster |ES {#ES.NODE}: Time spent performing query |<p>Time in seconds spent performing query operations for the last measuring span.</p> |DEPENDENT |es.node.indices.search.query_time[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.query_time_in_millis.first()`</p><p>- MULTIPLIER: `0.001`</p><p>- SIMPLE_CHANGE |
-|ES_cluster |ES {#ES.NODE}: Query latency |<p>The average query latency calculated by sampling the total number of queries and the total elapsed time at regular intervals.</p> |CALCULATED |es.node.indices.search.query_latency[{#ES.NODE}]<p>**Expression**:</p>`change(es.node.indices.search.query_time_in_millis[{#ES.NODE}]) / ( change(es.node.indices.search.query_total[{#ES.NODE}]) + (change(es.node.indices.search.query_total[{#ES.NODE}]) = 0) ) ` |
-|ES_cluster |ES {#ES.NODE}: Current query operations |<p>The number of query operations currently running.</p> |DEPENDENT |es.node.indices.search.query_current[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.query_current.first()`</p> |
-|ES_cluster |ES {#ES.NODE}: Rate of fetch |<p>The number of fetch operations per second.</p> |DEPENDENT |es.node.indices.search.fetch.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.fetch_total.first()`</p><p>- CHANGE_PER_SECOND |
-|ES_cluster |ES {#ES.NODE}: Time spent performing fetch |<p>Time in seconds spent performing fetch operations for the last measuring span.</p> |DEPENDENT |es.node.indices.search.fetch_time[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.fetch_time_in_millis.first()`</p><p>- MULTIPLIER: `0.001`</p><p>- SIMPLE_CHANGE |
-|ES_cluster |ES {#ES.NODE}: Fetch latency |<p>The average fetch latency calculated by sampling the total number of fetches and the total elapsed time at regular intervals.</p> |CALCULATED |es.node.indices.search.fetch_latency[{#ES.NODE}]<p>**Expression**:</p>`change(es.node.indices.search.fetch_time_in_millis[{#ES.NODE}]) / ( change(es.node.indices.search.fetch_total[{#ES.NODE}]) + (change(es.node.indices.search.fetch_total[{#ES.NODE}]) = 0) )` |
-|ES_cluster |ES {#ES.NODE}: Current fetch operations |<p>The number of fetch operations currently running.</p> |DEPENDENT |es.node.indices.search.fetch_current[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.fetch_current.first()`</p> |
-|ES_cluster |ES {#ES.NODE}: Write thread pool executor tasks completed |<p>The number of tasks completed by the write thread pool executor.</p> |DEPENDENT |es.node.thread_pool.write.completed.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.write.completed.first()`</p><p>- CHANGE_PER_SECOND |
-|ES_cluster |ES {#ES.NODE}: Write thread pool active threads |<p>The number of active threads in the write thread pool.</p> |DEPENDENT |es.node.thread_pool.write.active[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.write.active.first()`</p> |
-|ES_cluster |ES {#ES.NODE}: Write thread pool tasks in queue |<p>The number of tasks in queue for the write thread pool.</p> |DEPENDENT |es.node.thread_pool.write.queue[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.write.queue.first()`</p> |
-|ES_cluster |ES {#ES.NODE}: Write thread pool executor tasks rejected |<p>The number of tasks rejected by the write thread pool executor.</p> |DEPENDENT |es.node.thread_pool.write.rejected.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.write.rejected.first()`</p><p>- CHANGE_PER_SECOND |
-|ES_cluster |ES {#ES.NODE}: Search thread pool executor tasks completed |<p>The number of tasks completed by the search thread pool executor.</p> |DEPENDENT |es.node.thread_pool.search.completed.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.search.completed.first()`</p><p>- CHANGE_PER_SECOND |
-|ES_cluster |ES {#ES.NODE}: Search thread pool active threads |<p>The number of active threads in the search thread pool.</p> |DEPENDENT |es.node.thread_pool.search.active[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.search.active.first()`</p> |
-|ES_cluster |ES {#ES.NODE}: Search thread pool tasks in queue |<p>The number of tasks in queue for the search thread pool.</p> |DEPENDENT |es.node.thread_pool.search.queue[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.search.queue.first()`</p> |
-|ES_cluster |ES {#ES.NODE}: Search thread pool executor tasks rejected |<p>The number of tasks rejected by the search thread pool executor.</p> |DEPENDENT |es.node.thread_pool.search.rejected.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.search.rejected.first()`</p><p>- CHANGE_PER_SECOND |
-|ES_cluster |ES {#ES.NODE}: Refresh thread pool executor tasks completed |<p>The number of tasks completed by the refresh thread pool executor.</p> |DEPENDENT |es.node.thread_pool.refresh.completed.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.refresh.completed.first()`</p><p>- CHANGE_PER_SECOND |
-|ES_cluster |ES {#ES.NODE}: Refresh thread pool active threads |<p>The number of active threads in the refresh thread pool.</p> |DEPENDENT |es.node.thread_pool.refresh.active[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.refresh.active.first()`</p> |
-|ES_cluster |ES {#ES.NODE}: Refresh thread pool tasks in queue |<p>The number of tasks in queue for the refresh thread pool.</p> |DEPENDENT |es.node.thread_pool.refresh.queue[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.refresh.queue.first()`</p> |
-|ES_cluster |ES {#ES.NODE}: Refresh thread pool executor tasks rejected |<p>The number of tasks rejected by the refresh thread pool executor.</p> |DEPENDENT |es.node.thread_pool.refresh.rejected.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.refresh.rejected.first()`</p><p>- CHANGE_PER_SECOND |
-|ES_cluster |ES {#ES.NODE}: Indexing latency |<p>The average indexing latency calculated from the available index_total and index_time_in_millis metrics.</p> |CALCULATED |es.node.indices.indexing.index_latency[{#ES.NODE}]<p>**Expression**:</p>`change(es.node.indices.indexing.index_time_in_millis[{#ES.NODE}]) / ( change(es.node.indices.indexing.index_total[{#ES.NODE}]) + (change(es.node.indices.indexing.index_total[{#ES.NODE}]) = 0) )` |
-|ES_cluster |ES {#ES.NODE}: Current indexing operations |<p>The number of indexing operations currently running.</p> |DEPENDENT |es.node.indices.indexing.index_current[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.indexing.index_current.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|ES_cluster |ES {#ES.NODE}: Flush latency |<p>The average flush latency calculated from the available flush.total and flush.total_time_in_millis metrics.</p> |CALCULATED |es.node.indices.flush.latency[{#ES.NODE}]<p>**Expression**:</p>`change(es.node.indices.flush.total_time_in_millis[{#ES.NODE}]) / ( change(es.node.indices.flush.total[{#ES.NODE}]) + (change(es.node.indices.flush.total[{#ES.NODE}]) = 0) )` |
-|ES_cluster |ES {#ES.NODE}: Rate of index refreshes |<p>The number of refresh operations per second.</p> |DEPENDENT |es.node.indices.refresh.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.refresh.total.first()`</p><p>- CHANGE_PER_SECOND |
-|ES_cluster |ES {#ES.NODE}: Time spent performing refresh |<p>Time in seconds spent performing refresh operations for the last measuring span.</p> |DEPENDENT |es.node.indices.refresh.time[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.refresh.total_time_in_millis.first()`</p><p>- MULTIPLIER: `0.001`</p><p>- SIMPLE_CHANGE |
-|Zabbix_raw_items |ES: Get cluster health |<p>Returns the health status of a cluster.</p> |HTTP_AGENT |es.cluster.get_health |
-|Zabbix_raw_items |ES: Get cluster stats |<p>Returns cluster statistics.</p> |HTTP_AGENT |es.cluster.get_stats |
-|Zabbix_raw_items |ES: Get nodes stats |<p>Returns cluster nodes statistics.</p> |HTTP_AGENT |es.nodes.get_stats |
-|Zabbix_raw_items |ES {#ES.NODE}: Total number of query |<p>The total number of query operations.</p> |DEPENDENT |es.node.indices.search.query_total[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.query_total.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Zabbix_raw_items |ES {#ES.NODE}: Total time spent performing query |<p>Time in milliseconds spent performing query operations.</p> |DEPENDENT |es.node.indices.search.query_time_in_millis[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.query_time_in_millis.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Zabbix_raw_items |ES {#ES.NODE}: Total number of fetch |<p>The total number of fetch operations.</p> |DEPENDENT |es.node.indices.search.fetch_total[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.fetch_total.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Zabbix_raw_items |ES {#ES.NODE}: Total time spent performing fetch |<p>Time in milliseconds spent performing fetch operations.</p> |DEPENDENT |es.node.indices.search.fetch_time_in_millis[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.fetch_time_in_millis.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Zabbix_raw_items |ES {#ES.NODE}: Total number of indexing |<p>The total number of indexing operations.</p> |DEPENDENT |es.node.indices.indexing.index_total[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.indexing.index_total.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Zabbix_raw_items |ES {#ES.NODE}: Total time spent performing indexing |<p>Total time in milliseconds spent performing indexing operations.</p> |DEPENDENT |es.node.indices.indexing.index_time_in_millis[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.indexing.index_time_in_millis.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Zabbix_raw_items |ES {#ES.NODE}: Total number of index flushes to disk |<p>The total number of flush operations.</p> |DEPENDENT |es.node.indices.flush.total[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.flush.total.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Zabbix_raw_items |ES {#ES.NODE}: Total time spent on flushing indices to disk |<p>Total time in milliseconds spent performing flush operations.</p> |DEPENDENT |es.node.indices.flush.total_time_in_millis[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.flush.total_time_in_millis.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|-------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| ES_cluster | ES: Service status | <p>Checks if the service is running and accepting TCP connections.</p> | SIMPLE | net.tcp.service["{$ELASTICSEARCH.SCHEME}","{HOST.CONN}","{$ELASTICSEARCH.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| ES_cluster | ES: Service response time | <p>Checks performance of the TCP service.</p> | SIMPLE | net.tcp.service.perf["{$ELASTICSEARCH.SCHEME}","{HOST.CONN}","{$ELASTICSEARCH.PORT}"] |
+| ES_cluster | ES: Cluster health status | <p>Health status of the cluster, based on the state of its primary and replica shards. Statuses are:</p><p>green</p><p>All shards are assigned.</p><p>yellow</p><p>All primary shards are assigned, but one or more replica shards are unassigned. If a node in the cluster fails, some data could be unavailable until that node is repaired.</p><p>red</p><p>One or more primary shards are unassigned, so some data is unavailable. This can occur briefly during cluster startup as primary shards are assigned.</p> | DEPENDENT | es.cluster.status<p>**Preprocessing**:</p><p>- JSONPATH: `$.status`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster | ES: Number of nodes | <p>The number of nodes within the cluster.</p> | DEPENDENT | es.cluster.number_of_nodes<p>**Preprocessing**:</p><p>- JSONPATH: `$.number_of_nodes`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster | ES: Number of data nodes | <p>The number of nodes that are dedicated to data nodes.</p> | DEPENDENT | es.cluster.number_of_data_nodes<p>**Preprocessing**:</p><p>- JSONPATH: `$.number_of_data_nodes`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster | ES: Number of relocating shards | <p>The number of shards that are under relocation.</p> | DEPENDENT | es.cluster.relocating_shards<p>**Preprocessing**:</p><p>- JSONPATH: `$.relocating_shards`</p> |
+| ES_cluster | ES: Number of initializing shards | <p>The number of shards that are under initialization.</p> | DEPENDENT | es.cluster.initializing_shards<p>**Preprocessing**:</p><p>- JSONPATH: `$.initializing_shards`</p> |
+| ES_cluster | ES: Number of unassigned shards | <p>The number of shards that are not allocated.</p> | DEPENDENT | es.cluster.unassigned_shards<p>**Preprocessing**:</p><p>- JSONPATH: `$.unassigned_shards`</p> |
+| ES_cluster | ES: Delayed unassigned shards | <p>The number of shards whose allocation has been delayed by the timeout settings.</p> | DEPENDENT | es.cluster.delayed_unassigned_shards<p>**Preprocessing**:</p><p>- JSONPATH: `$.delayed_unassigned_shards`</p> |
+| ES_cluster | ES: Number of pending tasks | <p>The number of cluster-level changes that have not yet been executed.</p> | DEPENDENT | es.cluster.number_of_pending_tasks<p>**Preprocessing**:</p><p>- JSONPATH: `$.number_of_pending_tasks`</p> |
+| ES_cluster       | ES: Task max waiting in queue                                 | <p>The time, in seconds, that the earliest-initiated pending task has been waiting to be performed.</p> | DEPENDENT  | es.cluster.task_max_waiting_in_queue<p>**Preprocessing**:</p><p>- JSONPATH: `$.task_max_waiting_in_queue_millis`</p><p>- MULTIPLIER: `0.001`</p> |
+| ES_cluster | ES: Inactive shards percentage | <p>The ratio of inactive shards in the cluster expressed as a percentage.</p> | DEPENDENT | es.cluster.inactive_shards_percent_as_number<p>**Preprocessing**:</p><p>- JSONPATH: `$.active_shards_percent_as_number`</p><p>- JAVASCRIPT: `return (100 - value)`</p> |
+| ES_cluster       | ES: Cluster uptime                                            | <p>Uptime duration, in seconds, since the JVM last started.</p> | DEPENDENT  | es.nodes.jvm.max_uptime[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.nodes.jvm.max_uptime_in_millis`</p><p>- MULTIPLIER: `0.001`</p> |
+| ES_cluster | ES: Number of non-deleted documents | <p>The total number of non-deleted documents across all primary shards assigned to the selected nodes.</p><p>This number is based on the documents in Lucene segments and may include the documents from nested fields.</p> | DEPENDENT | es.indices.docs.count<p>**Preprocessing**:</p><p>- JSONPATH: `$.indices.docs.count`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster | ES: Indices with shards assigned to nodes | <p>The total number of indices with shards assigned to the selected nodes.</p> | DEPENDENT | es.indices.count<p>**Preprocessing**:</p><p>- JSONPATH: `$.indices.count`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster | ES: Total size of all file stores | <p>The total size in bytes of all file stores across all selected nodes.</p> | DEPENDENT | es.nodes.fs.total_in_bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.nodes.fs.total_in_bytes`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster       | ES: Total available size to JVM in all file stores            | <p>The total number of bytes available to JVM in the file stores across all selected nodes.</p><p>Depending on OS or process-level restrictions, this number may be less than nodes.fs.free_in_bytes.</p><p>This is the actual amount of free disk space the selected Elasticsearch nodes can use.</p> | DEPENDENT  | es.nodes.fs.available_in_bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.nodes.fs.available_in_bytes`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster | ES: Nodes with the data role | <p>The number of selected nodes with the data role.</p> | DEPENDENT | es.nodes.count.data<p>**Preprocessing**:</p><p>- JSONPATH: `$.nodes.count.data`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster | ES: Nodes with the ingest role | <p>The number of selected nodes with the ingest role.</p> | DEPENDENT | es.nodes.count.ingest<p>**Preprocessing**:</p><p>- JSONPATH: `$.nodes.count.ingest`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster | ES: Nodes with the master role | <p>The number of selected nodes with the master role.</p> | DEPENDENT | es.nodes.count.master<p>**Preprocessing**:</p><p>- JSONPATH: `$.nodes.count.master`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster | ES {#ES.NODE}: Total size | <p>Total size (in bytes) of all file stores.</p> | DEPENDENT | es.node.fs.total.total_in_bytes[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].fs.total.total_in_bytes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| ES_cluster       | ES {#ES.NODE}: Total available size                           | <p>The total number of bytes available to this Java virtual machine on all file stores.</p><p>Depending on OS or process-level restrictions, this may be less than fs.total.free_in_bytes.</p><p>This is the actual amount of free disk space the Elasticsearch node can utilize.</p> | DEPENDENT  | es.node.fs.total.available_in_bytes[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].fs.total.available_in_bytes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster | ES {#ES.NODE}: Node uptime | <p>JVM uptime in seconds.</p> | DEPENDENT | es.node.jvm.uptime[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].jvm.uptime_in_millis.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| ES_cluster | ES {#ES.NODE}: Maximum JVM memory available for use | <p>The maximum amount of memory, in bytes, available for use by the heap.</p> | DEPENDENT | es.node.jvm.mem.heap_max_in_bytes[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].jvm.mem.heap_max_in_bytes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| ES_cluster | ES {#ES.NODE}: Amount of JVM heap currently in use | <p>The memory, in bytes, currently in use by the heap.</p> | DEPENDENT | es.node.jvm.mem.heap_used_in_bytes[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].jvm.mem.heap_used_in_bytes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster | ES {#ES.NODE}: Percent of JVM heap currently in use | <p>The percentage of memory currently in use by the heap.</p> | DEPENDENT | es.node.jvm.mem.heap_used_percent[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].jvm.mem.heap_used_percent.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster | ES {#ES.NODE}: Amount of JVM heap committed | <p>The amount of memory, in bytes, available for use by the heap.</p> | DEPENDENT | es.node.jvm.mem.heap_committed_in_bytes[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].jvm.mem.heap_committed_in_bytes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster | ES {#ES.NODE}: Number of open HTTP connections | <p>The number of currently open HTTP connections for the node.</p> | DEPENDENT | es.node.http.current_open[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].http.current_open.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster | ES {#ES.NODE}: Rate of HTTP connections opened | <p>The number of HTTP connections opened for the node per second.</p> | DEPENDENT | es.node.http.opened.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].http.total_opened.first()`</p><p>- CHANGE_PER_SECOND |
+| ES_cluster | ES {#ES.NODE}: Time spent throttling operations | <p>Time in seconds spent throttling operations for the last measuring span.</p> | DEPENDENT | es.node.indices.indexing.throttle_time[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.indexing.throttle_time_in_millis.first()`</p><p>- MULTIPLIER: `0.001`</p><p>- SIMPLE_CHANGE |
+| ES_cluster | ES {#ES.NODE}: Time spent throttling recovery operations | <p>Time in seconds spent throttling recovery operations for the last measuring span.</p> | DEPENDENT | es.node.indices.recovery.throttle_time[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.recovery.throttle_time_in_millis.first()`</p><p>- MULTIPLIER: `0.001`</p><p>- SIMPLE_CHANGE |
+| ES_cluster | ES {#ES.NODE}: Time spent throttling merge operations | <p>Time in seconds spent throttling merge operations for the last measuring span.</p> | DEPENDENT | es.node.indices.merges.total_throttled_time[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.merges.total_throttled_time_in_millis.first()`</p><p>- MULTIPLIER: `0.001`</p><p>- SIMPLE_CHANGE |
+| ES_cluster | ES {#ES.NODE}: Rate of queries | <p>The number of query operations per second.</p> | DEPENDENT | es.node.indices.search.query.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.query_total.first()`</p><p>- CHANGE_PER_SECOND |
+| ES_cluster | ES {#ES.NODE}: Time spent performing query | <p>Time in seconds spent performing query operations for the last measuring span.</p> | DEPENDENT | es.node.indices.search.query_time[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.query_time_in_millis.first()`</p><p>- MULTIPLIER: `0.001`</p><p>- SIMPLE_CHANGE |
+| ES_cluster | ES {#ES.NODE}: Query latency | <p>The average query latency calculated by sampling the total number of queries and the total elapsed time at regular intervals.</p> | CALCULATED | es.node.indices.search.query_latency[{#ES.NODE}]<p>**Expression**:</p>`change(es.node.indices.search.query_time_in_millis[{#ES.NODE}]) / ( change(es.node.indices.search.query_total[{#ES.NODE}]) + (change(es.node.indices.search.query_total[{#ES.NODE}]) = 0) ) ` |
+| ES_cluster | ES {#ES.NODE}: Current query operations | <p>The number of query operations currently running.</p> | DEPENDENT | es.node.indices.search.query_current[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.query_current.first()`</p> |
+| ES_cluster | ES {#ES.NODE}: Rate of fetch | <p>The number of fetch operations per second.</p> | DEPENDENT | es.node.indices.search.fetch.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.fetch_total.first()`</p><p>- CHANGE_PER_SECOND |
+| ES_cluster | ES {#ES.NODE}: Time spent performing fetch | <p>Time in seconds spent performing fetch operations for the last measuring span.</p> | DEPENDENT | es.node.indices.search.fetch_time[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.fetch_time_in_millis.first()`</p><p>- MULTIPLIER: `0.001`</p><p>- SIMPLE_CHANGE |
+| ES_cluster | ES {#ES.NODE}: Fetch latency | <p>The average fetch latency calculated by sampling the total number of fetches and the total elapsed time at regular intervals.</p> | CALCULATED | es.node.indices.search.fetch_latency[{#ES.NODE}]<p>**Expression**:</p>`change(es.node.indices.search.fetch_time_in_millis[{#ES.NODE}]) / ( change(es.node.indices.search.fetch_total[{#ES.NODE}]) + (change(es.node.indices.search.fetch_total[{#ES.NODE}]) = 0) )` |
+| ES_cluster | ES {#ES.NODE}: Current fetch operations | <p>The number of fetch operations currently running.</p> | DEPENDENT | es.node.indices.search.fetch_current[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.fetch_current.first()`</p> |
+| ES_cluster | ES {#ES.NODE}: Write thread pool executor tasks completed | <p>The number of tasks completed by the write thread pool executor.</p> | DEPENDENT | es.node.thread_pool.write.completed.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.write.completed.first()`</p><p>- CHANGE_PER_SECOND |
+| ES_cluster | ES {#ES.NODE}: Write thread pool active threads | <p>The number of active threads in the write thread pool.</p> | DEPENDENT | es.node.thread_pool.write.active[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.write.active.first()`</p> |
+| ES_cluster | ES {#ES.NODE}: Write thread pool tasks in queue | <p>The number of tasks in queue for the write thread pool.</p> | DEPENDENT | es.node.thread_pool.write.queue[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.write.queue.first()`</p> |
+| ES_cluster | ES {#ES.NODE}: Write thread pool executor tasks rejected | <p>The number of tasks rejected by the write thread pool executor.</p> | DEPENDENT | es.node.thread_pool.write.rejected.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.write.rejected.first()`</p><p>- CHANGE_PER_SECOND |
+| ES_cluster | ES {#ES.NODE}: Search thread pool executor tasks completed | <p>The number of tasks completed by the search thread pool executor.</p> | DEPENDENT | es.node.thread_pool.search.completed.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.search.completed.first()`</p><p>- CHANGE_PER_SECOND |
+| ES_cluster | ES {#ES.NODE}: Search thread pool active threads | <p>The number of active threads in the search thread pool.</p> | DEPENDENT | es.node.thread_pool.search.active[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.search.active.first()`</p> |
+| ES_cluster | ES {#ES.NODE}: Search thread pool tasks in queue | <p>The number of tasks in queue for the search thread pool.</p> | DEPENDENT | es.node.thread_pool.search.queue[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.search.queue.first()`</p> |
+| ES_cluster | ES {#ES.NODE}: Search thread pool executor tasks rejected | <p>The number of tasks rejected by the search thread pool executor.</p> | DEPENDENT | es.node.thread_pool.search.rejected.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.search.rejected.first()`</p><p>- CHANGE_PER_SECOND |
+| ES_cluster | ES {#ES.NODE}: Refresh thread pool executor tasks completed | <p>The number of tasks completed by the refresh thread pool executor.</p> | DEPENDENT | es.node.thread_pool.refresh.completed.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.refresh.completed.first()`</p><p>- CHANGE_PER_SECOND |
+| ES_cluster | ES {#ES.NODE}: Refresh thread pool active threads | <p>The number of active threads in the refresh thread pool.</p> | DEPENDENT | es.node.thread_pool.refresh.active[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.refresh.active.first()`</p> |
+| ES_cluster | ES {#ES.NODE}: Refresh thread pool tasks in queue | <p>The number of tasks in queue for the refresh thread pool.</p> | DEPENDENT | es.node.thread_pool.refresh.queue[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.refresh.queue.first()`</p> |
+| ES_cluster | ES {#ES.NODE}: Refresh thread pool executor tasks rejected | <p>The number of tasks rejected by the refresh thread pool executor.</p> | DEPENDENT | es.node.thread_pool.refresh.rejected.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].thread_pool.refresh.rejected.first()`</p><p>- CHANGE_PER_SECOND |
+| ES_cluster | ES {#ES.NODE}: Indexing latency | <p>The average indexing latency calculated from the available index_total and index_time_in_millis metrics.</p> | CALCULATED | es.node.indices.indexing.index_latency[{#ES.NODE}]<p>**Expression**:</p>`change(es.node.indices.indexing.index_time_in_millis[{#ES.NODE}]) / ( change(es.node.indices.indexing.index_total[{#ES.NODE}]) + (change(es.node.indices.indexing.index_total[{#ES.NODE}]) = 0) )` |
+| ES_cluster | ES {#ES.NODE}: Current indexing operations | <p>The number of indexing operations currently running.</p> | DEPENDENT | es.node.indices.indexing.index_current[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.indexing.index_current.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| ES_cluster | ES {#ES.NODE}: Flush latency | <p>The average flush latency calculated from the available flush.total and flush.total_time_in_millis metrics.</p> | CALCULATED | es.node.indices.flush.latency[{#ES.NODE}]<p>**Expression**:</p>`change(es.node.indices.flush.total_time_in_millis[{#ES.NODE}]) / ( change(es.node.indices.flush.total[{#ES.NODE}]) + (change(es.node.indices.flush.total[{#ES.NODE}]) = 0) )` |
+| ES_cluster | ES {#ES.NODE}: Rate of index refreshes | <p>The number of refresh operations per second.</p> | DEPENDENT | es.node.indices.refresh.rate[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.refresh.total.first()`</p><p>- CHANGE_PER_SECOND |
+| ES_cluster | ES {#ES.NODE}: Time spent performing refresh | <p>Time in seconds spent performing refresh operations for the last measuring span.</p> | DEPENDENT | es.node.indices.refresh.time[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.refresh.total_time_in_millis.first()`</p><p>- MULTIPLIER: `0.001`</p><p>- SIMPLE_CHANGE |
+| Zabbix_raw_items | ES: Get cluster health | <p>Returns the health status of a cluster.</p> | HTTP_AGENT | es.cluster.get_health |
+| Zabbix_raw_items | ES: Get cluster stats | <p>Returns cluster statistics.</p> | HTTP_AGENT | es.cluster.get_stats |
+| Zabbix_raw_items | ES: Get nodes stats | <p>Returns cluster nodes statistics.</p> | HTTP_AGENT | es.nodes.get_stats |
+| Zabbix_raw_items | ES {#ES.NODE}: Total number of query | <p>The total number of query operations.</p> | DEPENDENT | es.node.indices.search.query_total[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.query_total.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Zabbix_raw_items | ES {#ES.NODE}: Total time spent performing query | <p>Time in milliseconds spent performing query operations.</p> | DEPENDENT | es.node.indices.search.query_time_in_millis[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.query_time_in_millis.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Zabbix_raw_items | ES {#ES.NODE}: Total number of fetch | <p>The total number of fetch operations.</p> | DEPENDENT | es.node.indices.search.fetch_total[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.fetch_total.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Zabbix_raw_items | ES {#ES.NODE}: Total time spent performing fetch | <p>Time in milliseconds spent performing fetch operations.</p> | DEPENDENT | es.node.indices.search.fetch_time_in_millis[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.search.fetch_time_in_millis.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Zabbix_raw_items | ES {#ES.NODE}: Total number of indexing | <p>The total number of indexing operations.</p> | DEPENDENT | es.node.indices.indexing.index_total[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.indexing.index_total.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Zabbix_raw_items | ES {#ES.NODE}: Total time spent performing indexing | <p>Total time in milliseconds spent performing indexing operations.</p> | DEPENDENT | es.node.indices.indexing.index_time_in_millis[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.indexing.index_time_in_millis.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Zabbix_raw_items | ES {#ES.NODE}: Total number of index flushes to disk | <p>The total number of flush operations.</p> | DEPENDENT | es.node.indices.flush.total[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.flush.total.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Zabbix_raw_items | ES {#ES.NODE}: Total time spent on flushing indices to disk | <p>Total time in milliseconds spent performing flush operations.</p> | DEPENDENT | es.node.indices.flush.total_time_in_millis[{#ES.NODE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$..[?(@.name=='{#ES.NODE}')].indices.flush.total_time_in_millis.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
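The CALCULATED latency items above (query, fetch, indexing, flush) all follow the same pattern: the change in the cumulative `*_time_in_millis` counter is divided by the change in the matching `*_total` counter, with `+ (change(total) = 0)` added to the denominator so that a polling interval with no operations divides by 1 instead of 0. A minimal Python sketch of that logic, using hypothetical variable names that are not part of the template:

```python
def latency_ms(prev_time_ms, cur_time_ms, prev_total, cur_total):
    """Average per-operation latency over one polling interval.

    Mirrors the calculated-item expression:
    change(time_in_millis) / (change(total) + (change(total) = 0)).
    """
    d_time = cur_time_ms - prev_time_ms   # change() of the cumulative time counter
    d_ops = cur_total - prev_total        # change() of the cumulative operation counter
    # The "+ (change = 0)" term keeps the denominator at 1 when no operations ran.
    return d_time / (d_ops + (1 if d_ops == 0 else 0))


# Example: 450 ms spent on 30 queries since the previous poll -> 15 ms on average.
print(latency_ms(10_000, 10_450, 1_200, 1_230))  # 15.0
```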
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|ES: Service is down |<p>The service is unavailable or does not accept TCP connections.</p> |`{TEMPLATE_NAME:net.tcp.service["{$ELASTICSEARCH.SCHEME}","{HOST.CONN}","{$ELASTICSEARCH.PORT}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|ES: Service response time is too high (over {$ELASTICSEARCH.RESPONSE_TIME.MAX.WARN} for 5m) |<p>The performance of the TCP service is very low.</p> |`{TEMPLATE_NAME:net.tcp.service.perf["{$ELASTICSEARCH.SCHEME}","{HOST.CONN}","{$ELASTICSEARCH.PORT}"].min(5m)}>{$ELASTICSEARCH.RESPONSE_TIME.MAX.WARN}` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- ES: Service is down</p> |
-|ES: Health is YELLOW |<p>All primary shards are assigned, but one or more replica shards are unassigned. </p><p>If a node in the cluster fails, some data could be unavailable until that node is repaired.</p> |`{TEMPLATE_NAME:es.cluster.status.last()}=1` |AVERAGE | |
-|ES: Health is RED |<p>One or more primary shards are unassigned, so some data is unavailable. </p><p>This can occur briefly during cluster startup as primary shards are assigned.</p> |`{TEMPLATE_NAME:es.cluster.status.last()}=2` |HIGH | |
-|ES: Health is UNKNOWN |<p>The health status of the cluster is unknown or cannot be obtained.</p> |`{TEMPLATE_NAME:es.cluster.status.last()}=255` |HIGH | |
-|ES: The number of nodes within the cluster has decreased | |`{TEMPLATE_NAME:es.cluster.number_of_nodes.change()}<0` |INFO |<p>Manual close: YES</p> |
-|ES: The number of nodes within the cluster has increased | |`{TEMPLATE_NAME:es.cluster.number_of_nodes.change()}>0` |INFO |<p>Manual close: YES</p> |
-|ES: Cluster has the initializing shards |<p>The cluster has the initializing shards longer than 10 minutes.</p> |`{TEMPLATE_NAME:es.cluster.initializing_shards.min(10m)}>0` |AVERAGE | |
-|ES: Cluster has the unassigned shards |<p>The cluster has the unassigned shards longer than 10 minutes.</p> |`{TEMPLATE_NAME:es.cluster.unassigned_shards.min(10m)}>0` |AVERAGE | |
-|ES: Cluster has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:es.nodes.jvm.max_uptime[{#ES.NODE}].last()}<10m` |INFO |<p>Manual close: YES</p> |
-|ES: Cluster does not have enough space for resharding |<p>There is not enough disk space for index resharding.</p> |`({Elasticsearch Cluster by HTTP:es.nodes.fs.total_in_bytes.last()}-{TEMPLATE_NAME:es.nodes.fs.available_in_bytes.last()})/({Elasticsearch Cluster by HTTP:es.cluster.number_of_data_nodes.last()}-1)>{TEMPLATE_NAME:es.nodes.fs.available_in_bytes.last()}` |HIGH | |
-|ES: Cluster has only two master nodes |<p>The cluster has only two nodes with a master role and will be unavailable if one of them breaks.</p> |`{TEMPLATE_NAME:es.nodes.count.master.last()}=2` |DISASTER | |
-|ES {#ES.NODE}: Node {#ES.NODE} has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:es.node.jvm.uptime[{#ES.NODE}].last()}<10m` |INFO |<p>Manual close: YES</p> |
-|ES {#ES.NODE}: Percent of JVM heap in use is high (over {$ELASTICSEARCH.HEAP_USED.MAX.WARN}% for 1h) |<p>This indicates that the rate of garbage collection isn’t keeping up with the rate of garbage creation. </p><p>To address this problem, you can either increase your heap size (as long as it remains below the recommended </p><p>guidelines stated above), or scale out the cluster by adding more nodes.</p> |`{TEMPLATE_NAME:es.node.jvm.mem.heap_used_percent[{#ES.NODE}].min(1h)}>{$ELASTICSEARCH.HEAP_USED.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- ES {#ES.NODE}: Percent of JVM heap in use is critical (over {$ELASTICSEARCH.HEAP_USED.MAX.CRIT}% for 1h)</p> |
-|ES {#ES.NODE}: Percent of JVM heap in use is critical (over {$ELASTICSEARCH.HEAP_USED.MAX.CRIT}% for 1h) |<p>This indicates that the rate of garbage collection isn’t keeping up with the rate of garbage creation. </p><p>To address this problem, you can either increase your heap size (as long as it remains below the recommended </p><p>guidelines stated above), or scale out the cluster by adding more nodes.</p> |`{TEMPLATE_NAME:es.node.jvm.mem.heap_used_percent[{#ES.NODE}].min(1h)}>{$ELASTICSEARCH.HEAP_USED.MAX.CRIT}` |HIGH | |
-|ES {#ES.NODE}: Query latency is too high (over {$ELASTICSEARCH.QUERY_LATENCY.MAX.WARN}ms for 5m) |<p>If latency exceeds a threshold, look for potential resource bottlenecks, or investigate whether you need to optimize your queries.</p> |`{TEMPLATE_NAME:es.node.indices.search.query_latency[{#ES.NODE}].min(5m)}>{$ELASTICSEARCH.QUERY_LATENCY.MAX.WARN}` |WARNING | |
-|ES {#ES.NODE}: Fetch latency is too high (over {$ELASTICSEARCH.FETCH_LATENCY.MAX.WARN}ms for 5m) |<p>The fetch phase should typically take much less time than the query phase. If you notice this metric consistently increasing, </p><p>this could indicate a problem with slow disks, enriching of documents (highlighting the relevant text in search results, etc.), </p><p>or requesting too many results.</p> |`{TEMPLATE_NAME:es.node.indices.search.fetch_latency[{#ES.NODE}].min(5m)}>{$ELASTICSEARCH.FETCH_LATENCY.MAX.WARN}` |WARNING | |
-|ES {#ES.NODE}: Write thread pool executor has the rejected tasks (for 5m) |<p>The number of tasks rejected by the write thread pool executor is over 0 for 5m.</p> |`{TEMPLATE_NAME:es.node.thread_pool.write.rejected.rate[{#ES.NODE}].min(5m)}>0` |WARNING | |
-|ES {#ES.NODE}: Search thread pool executor has the rejected tasks (for 5m) |<p>The number of tasks rejected by the search thread pool executor is over 0 for 5m.</p> |`{TEMPLATE_NAME:es.node.thread_pool.search.rejected.rate[{#ES.NODE}].min(5m)}>0` |WARNING | |
-|ES {#ES.NODE}: Refresh thread pool executor has the rejected tasks (for 5m) |<p>The number of tasks rejected by the refresh thread pool executor is over 0 for 5m.</p> |`{TEMPLATE_NAME:es.node.thread_pool.refresh.rejected.rate[{#ES.NODE}].min(5m)}>0` |WARNING | |
-|ES {#ES.NODE}: Indexing latency is too high (over {$ELASTICSEARCH.INDEXING_LATENCY.MAX.WARN}ms for 5m) |<p>If the latency is increasing, it may indicate that you are indexing too many documents at the same time (Elasticsearch’s documentation </p><p>recommends starting with a bulk indexing size of 5 to 15 megabytes and increasing slowly from there).</p> |`{TEMPLATE_NAME:es.node.indices.indexing.index_latency[{#ES.NODE}].min(5m)}>{$ELASTICSEARCH.INDEXING_LATENCY.MAX.WARN}` |WARNING | |
-|ES {#ES.NODE}: Flush latency is too high (over {$ELASTICSEARCH.FLUSH_LATENCY.MAX.WARN}ms for 5m) |<p>If you see this metric increasing steadily, it may indicate a problem with slow disks; this problem may escalate </p><p>and eventually prevent you from being able to add new information to your index.</p> |`{TEMPLATE_NAME:es.node.indices.flush.latency[{#ES.NODE}].min(5m)}>{$ELASTICSEARCH.FLUSH_LATENCY.MAX.WARN}` |WARNING | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|----------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------------------------------------------------------------------------------------------|
+| ES: Service is down | <p>The service is unavailable or does not accept TCP connections.</p> | `{TEMPLATE_NAME:net.tcp.service["{$ELASTICSEARCH.SCHEME}","{HOST.CONN}","{$ELASTICSEARCH.PORT}"].last()}=0` | AVERAGE | <p>Manual close: YES</p> |
+| ES: Service response time is too high (over {$ELASTICSEARCH.RESPONSE_TIME.MAX.WARN} for 5m) | <p>The performance of the TCP service is very low.</p> | `{TEMPLATE_NAME:net.tcp.service.perf["{$ELASTICSEARCH.SCHEME}","{HOST.CONN}","{$ELASTICSEARCH.PORT}"].min(5m)}>{$ELASTICSEARCH.RESPONSE_TIME.MAX.WARN}` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- ES: Service is down</p> |
+| ES: Health is YELLOW | <p>All primary shards are assigned, but one or more replica shards are unassigned. </p><p>If a node in the cluster fails, some data could be unavailable until that node is repaired.</p> | `{TEMPLATE_NAME:es.cluster.status.last()}=1` | AVERAGE | |
+| ES: Health is RED | <p>One or more primary shards are unassigned, so some data is unavailable. </p><p>This can occur briefly during cluster startup as primary shards are assigned.</p> | `{TEMPLATE_NAME:es.cluster.status.last()}=2` | HIGH | |
+| ES: Health is UNKNOWN | <p>The health status of the cluster is unknown or cannot be obtained.</p> | `{TEMPLATE_NAME:es.cluster.status.last()}=255` | HIGH | |
+| ES: The number of nodes within the cluster has decreased | | `{TEMPLATE_NAME:es.cluster.number_of_nodes.change()}<0` | INFO | <p>Manual close: YES</p> |
+| ES: The number of nodes within the cluster has increased | | `{TEMPLATE_NAME:es.cluster.number_of_nodes.change()}>0` | INFO | <p>Manual close: YES</p> |
+| ES: Cluster has the initializing shards                                                                   | <p>The cluster has had initializing shards for longer than 10 minutes.</p> | `{TEMPLATE_NAME:es.cluster.initializing_shards.min(10m)}>0` | AVERAGE  | |
+| ES: Cluster has the unassigned shards                                                                     | <p>The cluster has had unassigned shards for longer than 10 minutes.</p> | `{TEMPLATE_NAME:es.cluster.unassigned_shards.min(10m)}>0` | AVERAGE  | |
+| ES: Cluster has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:es.nodes.jvm.max_uptime[{#ES.NODE}].last()}<10m` | INFO | <p>Manual close: YES</p> |
+| ES: Cluster does not have enough space for resharding | <p>There is not enough disk space for index resharding.</p> | `({Elasticsearch Cluster by HTTP:es.nodes.fs.total_in_bytes.last()}-{TEMPLATE_NAME:es.nodes.fs.available_in_bytes.last()})/({Elasticsearch Cluster by HTTP:es.cluster.number_of_data_nodes.last()}-1)>{TEMPLATE_NAME:es.nodes.fs.available_in_bytes.last()}` | HIGH | |
+| ES: Cluster has only two master nodes | <p>The cluster has only two nodes with a master role and will be unavailable if one of them breaks.</p> | `{TEMPLATE_NAME:es.nodes.count.master.last()}=2` | DISASTER | |
+| ES {#ES.NODE}: Node {#ES.NODE} has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:es.node.jvm.uptime[{#ES.NODE}].last()}<10m` | INFO | <p>Manual close: YES</p> |
+| ES {#ES.NODE}: Percent of JVM heap in use is high (over {$ELASTICSEARCH.HEAP_USED.MAX.WARN}% for 1h) | <p>This indicates that the rate of garbage collection isn’t keeping up with the rate of garbage creation. </p><p>To address this problem, you can either increase your heap size (as long as it remains below the recommended </p><p>guidelines stated above), or scale out the cluster by adding more nodes.</p> | `{TEMPLATE_NAME:es.node.jvm.mem.heap_used_percent[{#ES.NODE}].min(1h)}>{$ELASTICSEARCH.HEAP_USED.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- ES {#ES.NODE}: Percent of JVM heap in use is critical (over {$ELASTICSEARCH.HEAP_USED.MAX.CRIT}% for 1h)</p> |
+| ES {#ES.NODE}: Percent of JVM heap in use is critical (over {$ELASTICSEARCH.HEAP_USED.MAX.CRIT}% for 1h) | <p>This indicates that the rate of garbage collection isn’t keeping up with the rate of garbage creation. </p><p>To address this problem, you can either increase your heap size (as long as it remains below the recommended </p><p>guidelines stated above), or scale out the cluster by adding more nodes.</p> | `{TEMPLATE_NAME:es.node.jvm.mem.heap_used_percent[{#ES.NODE}].min(1h)}>{$ELASTICSEARCH.HEAP_USED.MAX.CRIT}` | HIGH | |
+| ES {#ES.NODE}: Query latency is too high (over {$ELASTICSEARCH.QUERY_LATENCY.MAX.WARN}ms for 5m) | <p>If latency exceeds a threshold, look for potential resource bottlenecks, or investigate whether you need to optimize your queries.</p> | `{TEMPLATE_NAME:es.node.indices.search.query_latency[{#ES.NODE}].min(5m)}>{$ELASTICSEARCH.QUERY_LATENCY.MAX.WARN}` | WARNING | |
+| ES {#ES.NODE}: Fetch latency is too high (over {$ELASTICSEARCH.FETCH_LATENCY.MAX.WARN}ms for 5m) | <p>The fetch phase should typically take much less time than the query phase. If you notice this metric consistently increasing, </p><p>this could indicate a problem with slow disks, enriching of documents (highlighting the relevant text in search results, etc.), </p><p>or requesting too many results.</p> | `{TEMPLATE_NAME:es.node.indices.search.fetch_latency[{#ES.NODE}].min(5m)}>{$ELASTICSEARCH.FETCH_LATENCY.MAX.WARN}` | WARNING | |
+| ES {#ES.NODE}: Write thread pool executor has the rejected tasks (for 5m) | <p>The number of tasks rejected by the write thread pool executor is over 0 for 5m.</p> | `{TEMPLATE_NAME:es.node.thread_pool.write.rejected.rate[{#ES.NODE}].min(5m)}>0` | WARNING | |
+| ES {#ES.NODE}: Search thread pool executor has the rejected tasks (for 5m) | <p>The number of tasks rejected by the search thread pool executor is over 0 for 5m.</p> | `{TEMPLATE_NAME:es.node.thread_pool.search.rejected.rate[{#ES.NODE}].min(5m)}>0` | WARNING | |
+| ES {#ES.NODE}: Refresh thread pool executor has the rejected tasks (for 5m) | <p>The number of tasks rejected by the refresh thread pool executor is over 0 for 5m.</p> | `{TEMPLATE_NAME:es.node.thread_pool.refresh.rejected.rate[{#ES.NODE}].min(5m)}>0` | WARNING | |
+| ES {#ES.NODE}: Indexing latency is too high (over {$ELASTICSEARCH.INDEXING_LATENCY.MAX.WARN}ms for 5m) | <p>If the latency is increasing, it may indicate that you are indexing too many documents at the same time (Elasticsearch’s documentation </p><p>recommends starting with a bulk indexing size of 5 to 15 megabytes and increasing slowly from there).</p> | `{TEMPLATE_NAME:es.node.indices.indexing.index_latency[{#ES.NODE}].min(5m)}>{$ELASTICSEARCH.INDEXING_LATENCY.MAX.WARN}` | WARNING | |
+| ES {#ES.NODE}: Flush latency is too high (over {$ELASTICSEARCH.FLUSH_LATENCY.MAX.WARN}ms for 5m) | <p>If you see this metric increasing steadily, it may indicate a problem with slow disks; this problem may escalate </p><p>and eventually prevent you from being able to add new information to your index.</p> | `{TEMPLATE_NAME:es.node.indices.flush.latency[{#ES.NODE}].min(5m)}>{$ELASTICSEARCH.FLUSH_LATENCY.MAX.WARN}` | WARNING | |
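The expression behind "ES: Cluster does not have enough space for resharding" checks whether the data already stored on the cluster would still fit if one data node were lost: the used space, spread over one fewer data node, must not exceed the free space reported by the remaining nodes. A rough Python sketch of the same comparison (illustrative only, not the template's trigger syntax):

```python
def enough_space_for_resharding(fs_total_bytes, fs_available_bytes, data_nodes):
    """Rough equivalent of the trigger expression:
    (total - available) / (data_nodes - 1) > available  ->  not enough space."""
    if data_nodes < 2:
        return False  # with a single data node there is nowhere to reshard to
    used = fs_total_bytes - fs_available_bytes
    return used / (data_nodes - 1) <= fs_available_bytes


# Example: 3 data nodes, 600 GB total, 150 GB free:
# 450 GB used / 2 remaining nodes = 225 GB > 150 GB available, so the trigger fires.
print(enough_space_for_resharding(600e9, 150e9, 3))  # False
```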
## Feedback
diff --git a/templates/app/etcd_http/README.md b/templates/app/etcd_http/README.md
index 3ea55aae000..340ab638df5 100644
--- a/templates/app/etcd_http/README.md
+++ b/templates/app/etcd_http/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor Etcd by Zabbix that works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
@@ -18,7 +18,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/http) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/http) for basic instructions.
1. Import template into Zabbix
2. After importing the template, make sure that etcd allows metric collection.
@@ -43,21 +43,21 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$ETCD.GRPC.ERRORS.MAX.WARN} |<p>Maximum number of gRPC requests failures</p> |`1` |
-|{$ETCD.GRPC_CODE.MATCHES} |<p>Filter of discoverable gRPC codes https://github.com/grpc/grpc/blob/master/doc/statuscodes.md</p> |`.*` |
-|{$ETCD.GRPC_CODE.NOT_MATCHES} |<p>Filter to exclude discovered gRPC codes https://github.com/grpc/grpc/blob/master/doc/statuscodes.md</p> |`CHANGE_IF_NEEDED` |
-|{$ETCD.GRPC_CODE.TRIGGER.MATCHES} |<p>Filter of discoverable gRPC codes which will be create triggers</p> |`Aborted|Unavailable` |
-|{$ETCD.HTTP.FAIL.MAX.WARN} |<p>Maximum number of HTTP requests failures</p> |`2` |
-|{$ETCD.LEADER.CHANGES.MAX.WARN} |<p>Maximum number of leader changes</p> |`5` |
-|{$ETCD.OPEN.FDS.MAX.WARN} |<p>Maximum percentage of used file descriptors</p> |`90` |
-|{$ETCD.PASSWORD} |<p>-</p> |`` |
-|{$ETCD.PORT} |<p>The port of Etcd API endpoint</p> |`2379` |
-|{$ETCD.PROPOSAL.FAIL.MAX.WARN} |<p>Maximum number of proposal failures</p> |`2` |
-|{$ETCD.PROPOSAL.PENDING.MAX.WARN} |<p>Maximum number of proposals in queue</p> |`5` |
-|{$ETCD.SCHEME} |<p>Request scheme which may be http or https</p> |`http` |
-|{$ETCD.USER} |<p>-</p> |`` |
+| Name | Description | Default |
+|-----------------------------------|------------------------------------------------------------------------------------------------------------|-----------------------|
+| {$ETCD.GRPC.ERRORS.MAX.WARN}      | <p>Maximum number of gRPC request failures</p>                                                               | `1`                   |
+| {$ETCD.GRPC_CODE.MATCHES} | <p>Filter of discoverable gRPC codes https://github.com/grpc/grpc/blob/master/doc/statuscodes.md</p> | `.*` |
+| {$ETCD.GRPC_CODE.NOT_MATCHES} | <p>Filter to exclude discovered gRPC codes https://github.com/grpc/grpc/blob/master/doc/statuscodes.md</p> | `CHANGE_IF_NEEDED` |
+| {$ETCD.GRPC_CODE.TRIGGER.MATCHES} | <p>Filter of discoverable gRPC codes which will create triggers</p>                                          | `Aborted|Unavailable` |
+| {$ETCD.HTTP.FAIL.MAX.WARN}        | <p>Maximum number of HTTP request failures</p>                                                               | `2`                   |
+| {$ETCD.LEADER.CHANGES.MAX.WARN} | <p>Maximum number of leader changes</p> | `5` |
+| {$ETCD.OPEN.FDS.MAX.WARN} | <p>Maximum percentage of used file descriptors</p> | `90` |
+| {$ETCD.PASSWORD} | <p>-</p> | `` |
+| {$ETCD.PORT} | <p>The port of Etcd API endpoint</p> | `2379` |
+| {$ETCD.PROPOSAL.FAIL.MAX.WARN} | <p>Maximum number of proposal failures</p> | `2` |
+| {$ETCD.PROPOSAL.PENDING.MAX.WARN} | <p>Maximum number of proposals in queue</p> | `5` |
+| {$ETCD.SCHEME} | <p>Request scheme which may be http or https</p> | `http` |
+| {$ETCD.USER} | <p>-</p> | `` |
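The macros above only parameterize plain HTTP requests against etcd, so before linking the template it can help to reproduce such a request by hand. A minimal Python sketch, assuming the default macro values and the standard `/health` and `/metrics` endpoints (adjust scheme, host, port and credentials to your environment):

```python
import requests  # third-party HTTP client, assumed to be installed

SCHEME = "http"      # {$ETCD.SCHEME}
HOST = "127.0.0.1"   # {HOST.CONN}
PORT = 2379          # {$ETCD.PORT}
USER = ""            # {$ETCD.USER}
PASSWORD = ""        # {$ETCD.PASSWORD}

auth = (USER, PASSWORD) if USER else None
base = f"{SCHEME}://{HOST}:{PORT}"

# /health returns a small JSON document, e.g. {"health": "true"}.
print(requests.get(f"{base}/health", auth=auth, timeout=5).json())

# /metrics returns Prometheus text exposition, which the template parses with
# PROMETHEUS_PATTERN / PROMETHEUS_TO_JSON preprocessing steps.
metrics = requests.get(f"{base}/metrics", auth=auth, timeout=5).text
print(metrics.splitlines()[:5])
```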
## Template links
@@ -65,77 +65,77 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|gRPC codes discovery | |DEPENDENT |etcd.grpc_code.discovery<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `grpc_server_handled_total`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Filter**:</p>AND <p>- A: {#GRPC.CODE} NOT_MATCHES_REGEX `{$ETCD.GRPC_CODE.NOT_MATCHES}`</p><p>- B: {#GRPC.CODE} MATCHES_REGEX `{$ETCD.GRPC_CODE.MATCHES}`</p> |
-|Peers discovery | |DEPENDENT |etcd.peer.discovery<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `etcd_network_peer_sent_bytes_total`</p> |
+| Name | Description | Type | Key and additional info |
+|----------------------|-------------|-----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| gRPC codes discovery | | DEPENDENT | etcd.grpc_code.discovery<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `grpc_server_handled_total`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Filter**:</p>AND <p>- A: {#GRPC.CODE} NOT_MATCHES_REGEX `{$ETCD.GRPC_CODE.NOT_MATCHES}`</p><p>- B: {#GRPC.CODE} MATCHES_REGEX `{$ETCD.GRPC_CODE.MATCHES}`</p> |
+| Peers discovery | | DEPENDENT | etcd.peer.discovery<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `etcd_network_peer_sent_bytes_total`</p> |
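Both discovery rules turn one Prometheus metric family into low-level discovery objects: gRPC codes discovery extracts the grpc_code label from grpc_server_handled_total, and Peers discovery does the same for etcd_network_peer_sent_bytes_total. A rough Python sketch of the idea (this is not the template's actual JAVASCRIPT preprocessing step, and it ignores label values containing commas):

```python
import re

METRIC_LINE = re.compile(r'^grpc_server_handled_total\{([^}]*)\}\s+\S+$')

def discover_grpc_codes(prometheus_text):
    """Collect distinct grpc_code label values, roughly what the
    PROMETHEUS_TO_JSON + JAVASCRIPT preprocessing pair produces for LLD."""
    codes = set()
    for line in prometheus_text.splitlines():
        match = METRIC_LINE.match(line)
        if not match:
            continue
        labels = dict(kv.split("=", 1) for kv in match.group(1).split(","))
        codes.add(labels.get("grpc_code", "").strip('"'))
    return [{"{#GRPC.CODE}": code} for code in sorted(codes) if code]


sample = 'grpc_server_handled_total{grpc_code="OK",grpc_method="Range"} 42'
print(discover_grpc_codes(sample))  # [{'{#GRPC.CODE}': 'OK'}]
```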
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Etcd |Etcd: Service's TCP port state |<p>-</p> |SIMPLE |net.tcp.service["{$ETCD.SCHEME}","{HOST.CONN}","{$ETCD.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Etcd |Etcd: Node health |<p>-</p> |HTTP_AGENT |etcd.health<p>**Preprocessing**:</p><p>- JSONPATH: `$.health`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Etcd |Etcd: Server is a leader |<p>Whether or not this member is a leader. 1 if is, 0 otherwise.</p> |DEPENDENT |etcd.is.leader<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_server_is_leader `</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Etcd |Etcd: Server has a leader |<p>Whether or not a leader exists. 1 is existence, 0 is not.</p> |DEPENDENT |etcd.has.leader<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_server_has_leader `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Etcd |Etcd: Leader changes |<p>The the number of leader changes the member has seen since its start.</p> |DEPENDENT |etcd.leader.changes<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_server_leader_changes_seen_total `</p> |
-|Etcd |Etcd: Proposals committed per second |<p>The number of consensus proposals committed.</p> |DEPENDENT |etcd.proposals.committed.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_server_proposals_committed_total `</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Proposals applied per second |<p>The number of consensus proposals applied.</p> |DEPENDENT |etcd.proposals.applied.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_server_proposals_applied_total `</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Proposals failed per second |<p>The number of failed proposals seen.</p> |DEPENDENT |etcd.proposals.failed.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_server_proposals_failed_total `</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Proposals pending |<p>The current number of pending proposals to commit.</p> |DEPENDENT |etcd.proposals.pending<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_server_proposals_pending `</p> |
-|Etcd |Etcd: Reads per second |<p>Number of reads action by (get/getRecursive), local to this member.</p> |DEPENDENT |etcd.reads.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `etcd_debugging_store_reads_total`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Writes per second |<p>Number of writes (e.g. set/compareAndDelete) seen by this member.</p> |DEPENDENT |etcd.writes.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `etcd_debugging_store_writes_total`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Client gRPC received bytes per second |<p>The number of bytes received from grpc clients per second</p> |DEPENDENT |etcd.network.grpc.received.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_network_client_grpc_received_bytes_total `</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Client gRPC sent bytes per second |<p>The number of bytes sent from grpc clients per second</p> |DEPENDENT |etcd.network.grpc.sent.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_network_client_grpc_sent_bytes_total `</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: HTTP requests received |<p>Number of requests received into the system (successfully parsed and authd).</p> |DEPENDENT |etcd.http.requests.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `etcd_http_received_total`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: HTTP 5XX |<p>Number of handle failures of requests (non-watches), by method (GET/PUT etc.), and code 5XX.</p> |DEPENDENT |etcd.http.requests.5xx.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `etcd_http_failed_total{code=~"5.+"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: HTTP 4XX |<p>Number of handle failures of requests (non-watches), by method (GET/PUT etc.), and code 4XX.</p> |DEPENDENT |etcd.http.requests.4xx.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `etcd_http_failed_total{code=~"4.+"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: RPCs received per second |<p>The number of RPC stream messages received on the server.</p> |DEPENDENT |etcd.grpc.received.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `grpc_server_msg_received_total`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: RPCs sent per second |<p>The number of gRPC stream messages sent by the server.</p> |DEPENDENT |etcd.grpc.sent.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `grpc_server_msg_sent_total`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: RPCs started per second |<p>The number of RPCs started on the server.</p> |DEPENDENT |etcd.grpc.started.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `grpc_server_started_total`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Server version |<p>Version of the Etcd server.</p> |DEPENDENT |etcd.server.version<p>**Preprocessing**:</p><p>- JSONPATH: `$.etcdserver`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Etcd |Etcd: Cluster version |<p>Version of the Etcd cluster.</p> |DEPENDENT |etcd.cluster.version<p>**Preprocessing**:</p><p>- JSONPATH: `$.etcdcluster`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Etcd |Etcd: DB size |<p>Total size of the underlying database.</p> |DEPENDENT |etcd.db.size<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_db_total_size_in_bytes `</p> |
-|Etcd |Etcd: Keys compacted per second |<p>The number of DB keys compacted per second.</p> |DEPENDENT |etcd.keys.compacted.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_db_compaction_keys_total `</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Keys expired per second |<p>The number of expired keys per second.</p> |DEPENDENT |etcd.keys.expired.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_store_expires_total `</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Keys total |<p>Total number of keys.</p> |DEPENDENT |etcd.keys.total<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_keys_total `</p> |
-|Etcd |Etcd: Uptime |<p>Etcd server uptime.</p> |DEPENDENT |etcd.uptime<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `process_start_time_seconds `</p><p>- JAVASCRIPT: `//use boottime to calculate uptime return (Math.floor(Date.now()/1000)-Number(value));`</p> |
-|Etcd |Etcd: Virtual memory |<p>Virtual memory size in bytes.</p> |DEPENDENT |etcd.virtual.bytes<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `process_virtual_memory_bytes `</p> |
-|Etcd |Etcd: Resident memory |<p>Resident memory size in bytes.</p> |DEPENDENT |etcd.res.bytes<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `process_resident_memory_bytes `</p> |
-|Etcd |Etcd: CPU |<p>Total user and system CPU time spent in seconds.</p> |DEPENDENT |etcd.cpu.util<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `process_cpu_seconds_total `</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Open file descriptors |<p>Number of open file descriptors.</p> |DEPENDENT |etcd.open.fds<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `process_open_fds `</p> |
-|Etcd |Etcd: Maximum open file descriptors |<p>The Maximum number of open file descriptors.</p> |DEPENDENT |etcd.max.fds<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `process_max_fds `</p> |
-|Etcd |Etcd: Deletes per second |<p>The number of deletes seen by this member per second.</p> |DEPENDENT |etcd.delete.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_delete_total `</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: PUT per second |<p>The number of puts seen by this member per second.</p> |DEPENDENT |etcd.put.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_put_total `</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Range per second |<p>The number of ranges seen by this member per second.</p> |DEPENDENT |etcd.range.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_range_total `</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Transaction per second |<p>The number of transactions seen by this member per second.</p> |DEPENDENT |etcd.txn.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_range_total `</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Events sent per second |<p>The number of events sent by this member per second</p> |DEPENDENT |etcd.events.sent.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_events_total `</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Pending events |<p>Total number of pending events to be sent.</p> |DEPENDENT |etcd.events.sent.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_pending_events_total `</p> |
-|Etcd |Etcd: RPCs completed with code {#GRPC.CODE} |<p>The number of RPCs completed on the server with grpc_code {#GRPC.CODE}</p> |DEPENDENT |etcd.grpc.handled.rate[{#GRPC.CODE}]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `grpc_server_handled_total{grpc_method="{#GRPC.CODE}"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Etcd peer {#ETCD.PEER}: Bytes sent |<p>The number of bytes sent to peer with ID {#ETCD.PEER}</p> |DEPENDENT |etcd.bytes.sent.rate[{#ETCD.PEER}]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_network_peer_sent_bytes_total{To="{#ETCD.PEER}"} `</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Etcd peer {#ETCD.PEER}: Bytes received |<p>The number of bytes received from peer with ID {#ETCD.PEER}</p> |DEPENDENT |etcd.bytes.received.rate[{#ETCD.PEER}]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_network_peer_received_bytes_total{From="{#ETCD.PEER}"} `</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Etcd peer {#ETCD.PEER}: Send failures |<p>The number of send failures from peer with ID {#ETCD.PEER}</p> |DEPENDENT |etcd.sent.fail.rate[{#ETCD.PEER}]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_network_peer_sent_failures_total{To="{#ETCD.PEER}"} `</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|Etcd |Etcd: Etcd peer {#ETCD.PEER}: Receive failures failures |<p>The number of receive failures from the peer with ID {#ETCD.PEER}</p> |DEPENDENT |etcd.received.fail.rate[{#ETCD.PEER}]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_network_peer_received_failures_total{To="{#ETCD.PEER}"} `</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|Zabbix_raw_items |Etcd: Get node metrics |<p>-</p> |HTTP_AGENT |etcd.get_metrics |
-|Zabbix_raw_items |Etcd: Get version |<p>-</p> |HTTP_AGENT |etcd.get_version |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|---------------------------------------------------------|-----------------------------------------------------------------------------------------------------|------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Etcd | Etcd: Service's TCP port state | <p>-</p> | SIMPLE | net.tcp.service["{$ETCD.SCHEME}","{HOST.CONN}","{$ETCD.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Etcd | Etcd: Node health | <p>-</p> | HTTP_AGENT | etcd.health<p>**Preprocessing**:</p><p>- JSONPATH: `$.health`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Etcd             | Etcd: Server is a leader                                  | <p>Whether or not this member is a leader: 1 if it is, 0 otherwise.</p>                               | DEPENDENT  | etcd.is.leader<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_server_is_leader `</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Etcd             | Etcd: Server has a leader                                 | <p>Whether or not a leader exists: 1 if it does, 0 if it does not.</p>                                | DEPENDENT  | etcd.has.leader<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_server_has_leader `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Etcd             | Etcd: Leader changes                                      | <p>The number of leader changes the member has seen since its start.</p>                              | DEPENDENT  | etcd.leader.changes<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_server_leader_changes_seen_total `</p> |
+| Etcd | Etcd: Proposals committed per second | <p>The number of consensus proposals committed.</p> | DEPENDENT | etcd.proposals.committed.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_server_proposals_committed_total `</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Proposals applied per second | <p>The number of consensus proposals applied.</p> | DEPENDENT | etcd.proposals.applied.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_server_proposals_applied_total `</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Proposals failed per second | <p>The number of failed proposals seen.</p> | DEPENDENT | etcd.proposals.failed.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_server_proposals_failed_total `</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Proposals pending | <p>The current number of pending proposals to commit.</p> | DEPENDENT | etcd.proposals.pending<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_server_proposals_pending `</p> |
+| Etcd             | Etcd: Reads per second                                    | <p>Number of read actions (get/getRecursive), local to this member.</p>                               | DEPENDENT  | etcd.reads.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `etcd_debugging_store_reads_total`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Writes per second | <p>Number of writes (e.g. set/compareAndDelete) seen by this member.</p> | DEPENDENT | etcd.writes.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `etcd_debugging_store_writes_total`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Client gRPC received bytes per second | <p>The number of bytes received from grpc clients per second</p> | DEPENDENT | etcd.network.grpc.received.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_network_client_grpc_received_bytes_total `</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Client gRPC sent bytes per second | <p>The number of bytes sent from grpc clients per second</p> | DEPENDENT | etcd.network.grpc.sent.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_network_client_grpc_sent_bytes_total `</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: HTTP requests received | <p>Number of requests received into the system (successfully parsed and authd).</p> | DEPENDENT | etcd.http.requests.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `etcd_http_received_total`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: HTTP 5XX | <p>Number of handle failures of requests (non-watches), by method (GET/PUT etc.), and code 5XX.</p> | DEPENDENT | etcd.http.requests.5xx.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `etcd_http_failed_total{code=~"5.+"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: HTTP 4XX | <p>Number of handle failures of requests (non-watches), by method (GET/PUT etc.), and code 4XX.</p> | DEPENDENT | etcd.http.requests.4xx.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `etcd_http_failed_total{code=~"4.+"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: RPCs received per second | <p>The number of RPC stream messages received on the server.</p> | DEPENDENT | etcd.grpc.received.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `grpc_server_msg_received_total`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: RPCs sent per second | <p>The number of gRPC stream messages sent by the server.</p> | DEPENDENT | etcd.grpc.sent.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `grpc_server_msg_sent_total`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: RPCs started per second | <p>The number of RPCs started on the server.</p> | DEPENDENT | etcd.grpc.started.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `grpc_server_started_total`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Server version | <p>Version of the Etcd server.</p> | DEPENDENT | etcd.server.version<p>**Preprocessing**:</p><p>- JSONPATH: `$.etcdserver`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Etcd | Etcd: Cluster version | <p>Version of the Etcd cluster.</p> | DEPENDENT | etcd.cluster.version<p>**Preprocessing**:</p><p>- JSONPATH: `$.etcdcluster`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Etcd | Etcd: DB size | <p>Total size of the underlying database.</p> | DEPENDENT | etcd.db.size<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_db_total_size_in_bytes `</p> |
+| Etcd | Etcd: Keys compacted per second | <p>The number of DB keys compacted per second.</p> | DEPENDENT | etcd.keys.compacted.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_db_compaction_keys_total `</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Keys expired per second | <p>The number of expired keys per second.</p> | DEPENDENT | etcd.keys.expired.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_store_expires_total `</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Keys total | <p>Total number of keys.</p> | DEPENDENT | etcd.keys.total<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_keys_total `</p> |
+| Etcd | Etcd: Uptime | <p>Etcd server uptime.</p> | DEPENDENT | etcd.uptime<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `process_start_time_seconds `</p><p>- JAVASCRIPT: `//use boottime to calculate uptime return (Math.floor(Date.now()/1000)-Number(value));`</p> |
+| Etcd | Etcd: Virtual memory | <p>Virtual memory size in bytes.</p> | DEPENDENT | etcd.virtual.bytes<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `process_virtual_memory_bytes `</p> |
+| Etcd | Etcd: Resident memory | <p>Resident memory size in bytes.</p> | DEPENDENT | etcd.res.bytes<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `process_resident_memory_bytes `</p> |
+| Etcd | Etcd: CPU | <p>Total user and system CPU time spent in seconds.</p> | DEPENDENT | etcd.cpu.util<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `process_cpu_seconds_total `</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Open file descriptors | <p>Number of open file descriptors.</p> | DEPENDENT | etcd.open.fds<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `process_open_fds `</p> |
+| Etcd             | Etcd: Maximum open file descriptors                       | <p>The maximum number of open file descriptors.</p>                                                   | DEPENDENT  | etcd.max.fds<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `process_max_fds `</p> |
+| Etcd | Etcd: Deletes per second | <p>The number of deletes seen by this member per second.</p> | DEPENDENT | etcd.delete.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_delete_total `</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: PUT per second | <p>The number of puts seen by this member per second.</p> | DEPENDENT | etcd.put.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_put_total `</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Range per second | <p>The number of ranges seen by this member per second.</p> | DEPENDENT | etcd.range.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_range_total `</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Transaction per second | <p>The number of transactions seen by this member per second.</p> | DEPENDENT | etcd.txn.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_range_total `</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Events sent per second | <p>The number of events sent by this member per second</p> | DEPENDENT | etcd.events.sent.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_events_total `</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Pending events | <p>Total number of pending events to be sent.</p> | DEPENDENT | etcd.events.sent.rate<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_debugging_mvcc_pending_events_total `</p> |
+| Etcd | Etcd: RPCs completed with code {#GRPC.CODE} | <p>The number of RPCs completed on the server with grpc_code {#GRPC.CODE}</p> | DEPENDENT | etcd.grpc.handled.rate[{#GRPC.CODE}]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `grpc_server_handled_total{grpc_method="{#GRPC.CODE}"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Etcd peer {#ETCD.PEER}: Bytes sent | <p>The number of bytes sent to peer with ID {#ETCD.PEER}</p> | DEPENDENT | etcd.bytes.sent.rate[{#ETCD.PEER}]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_network_peer_sent_bytes_total{To="{#ETCD.PEER}"} `</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Etcd peer {#ETCD.PEER}: Bytes received | <p>The number of bytes received from peer with ID {#ETCD.PEER}</p> | DEPENDENT | etcd.bytes.received.rate[{#ETCD.PEER}]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_network_peer_received_bytes_total{From="{#ETCD.PEER}"} `</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| Etcd | Etcd: Etcd peer {#ETCD.PEER}: Send failures | <p>The number of send failures from peer with ID {#ETCD.PEER}</p> | DEPENDENT | etcd.sent.fail.rate[{#ETCD.PEER}]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_network_peer_sent_failures_total{To="{#ETCD.PEER}"} `</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| Etcd             | Etcd: Etcd peer {#ETCD.PEER}: Receive failures            | <p>The number of receive failures from the peer with ID {#ETCD.PEER}</p>                              | DEPENDENT  | etcd.received.fail.rate[{#ETCD.PEER}]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `etcd_network_peer_received_failures_total{To="{#ETCD.PEER}"} `</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| Zabbix_raw_items | Etcd: Get node metrics | <p>-</p> | HTTP_AGENT | etcd.get_metrics |
+| Zabbix_raw_items | Etcd: Get version | <p>-</p> | HTTP_AGENT | etcd.get_version |
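
Several of the items above (for example `etcd.reads.rate`, the HTTP failure rates, and the per-code gRPC rates) use a `PROMETHEUS_TO_JSON` step followed by a `JAVASCRIPT` step whose body is too long to show in the table; the exact script ships in the template YAML. As a rough illustration only, a minimal sketch of such a step, assuming the standard Zabbix `PROMETHEUS_TO_JSON` output (an array of objects with `name`, `value` and `labels` fields), could sum all matching time series so that the final `CHANGE_PER_SECOND` step turns the combined counter into a rate:

```javascript
// Minimal sketch only - not the exact script shipped in the template.
// Input: the JSON array produced by the PROMETHEUS_TO_JSON preprocessing step.
// Output: the sum of all matching series, ready for CHANGE_PER_SECOND.
var series = JSON.parse(value);
var total = 0;
for (var i = 0; i < series.length; i++) {
    total += Number(series[i].value);
}
return total;
```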
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Etcd: Service is unavailable |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service["{$ETCD.SCHEME}","{HOST.CONN}","{$ETCD.PORT}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|Etcd: Node healthcheck failed |<p>https://etcd.io/docs/v3.4.0/op-guide/monitoring/#health-check</p> |`{TEMPLATE_NAME:etcd.health.last()}=0` |AVERAGE |<p>**Depends on**:</p><p>- Etcd: Service is unavailable</p> |
-|Etcd: Failed to fetch info data (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes</p> |`{TEMPLATE_NAME:etcd.is.leader.nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Etcd: Service is unavailable</p> |
-|Etcd: Member has no leader |<p>"If a member does not have a leader, it is totally unavailable."</p> |`{TEMPLATE_NAME:etcd.has.leader.last()}=0` |AVERAGE | |
-|Etcd: Instance has seen too many leader changes (over {$ETCD.LEADER.CHANGES.MAX.WARN} for 15m)' |<p>Rapid leadership changes impact the performance of etcd significantly. It also signals that the leader is unstable, perhaps due to network connectivity issues or excessive load hitting the etcd cluster.</p> |`{TEMPLATE_NAME:etcd.leader.changes.delta(15m)}>{$ETCD.LEADER.CHANGES.MAX.WARN}` |WARNING | |
-|Etcd: Too many proposal failures (over {$ETCD.PROPOSAL.FAIL.MAX.WARN} for 5m)' |<p>"Normally related to two issues: temporary failures related to a leader election or </p><p>longer downtime caused by a loss of quorum in the cluster."</p> |`{TEMPLATE_NAME:etcd.proposals.failed.rate.min(5m)}>{$ETCD.PROPOSAL.FAIL.MAX.WARN}` |WARNING | |
-|Etcd: Too many proposals are queued to commit (over {$ETCD.PROPOSAL.PENDING.MAX.WARN} for 5m)' |<p>"Rising pending proposals suggests there is a high client load or the member cannot commit proposals."</p> |`{TEMPLATE_NAME:etcd.proposals.pending.min(5m)}>{$ETCD.PROPOSAL.PENDING.MAX.WARN}` |WARNING | |
-|Etcd: Too many HTTP requests failures (over {$ETCD.HTTP.FAIL.MAX.WARN} for 5m)' |<p>"Too many reqvests failed on etcd instance with 5xx HTTP code"</p> |`{TEMPLATE_NAME:etcd.http.requests.5xx.rate.min(5m)}>{$ETCD.HTTP.FAIL.MAX.WARN}` |WARNING | |
-|Etcd: Server version has changed (new version: {ITEM.VALUE}) |<p>Etcd version has changed. Ack to close.</p> |`{TEMPLATE_NAME:etcd.server.version.diff()}=1 and {TEMPLATE_NAME:etcd.server.version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Etcd: Cluster version has changed (new version: {ITEM.VALUE}) |<p>Etcd version has changed. Ack to close.</p> |`{TEMPLATE_NAME:etcd.cluster.version.diff()}=1 and {TEMPLATE_NAME:etcd.cluster.version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Etcd: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:etcd.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|Etcd: Current number of open files is too high (over {$ETCD.OPEN.FDS.MAX.WARN}% for 5m) |<p>"Heavy file descriptor usage (i.e., near the process’s file descriptor limit) indicates a potential file descriptor exhaustion issue. </p><p>If the file descriptors are exhausted, etcd may panic because it cannot create new WAL files."</p> |`{TEMPLATE_NAME:etcd.open.fds.min(5m)}/{Etcd by HTTP:etcd.max.fds.last()}*100>{$ETCD.OPEN.FDS.MAX.WARN}` |WARNING | |
-|Etcd: Too many failed gRPC requests with code: {#GRPC.CODE} (over {$ETCD.GRPC.ERRORS.MAX.WARN} in 5m) |<p>-</p> |`{TEMPLATE_NAME:etcd.grpc.handled.rate[{#GRPC.CODE}].min(5m)}>{$ETCD.GRPC.ERRORS.MAX.WARN}` |WARNING | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------|
+| Etcd: Service is unavailable | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service["{$ETCD.SCHEME}","{HOST.CONN}","{$ETCD.PORT}"].last()}=0` | AVERAGE | <p>Manual close: YES</p> |
+| Etcd: Node healthcheck failed | <p>https://etcd.io/docs/v3.4.0/op-guide/monitoring/#health-check</p> | `{TEMPLATE_NAME:etcd.health.last()}=0` | AVERAGE | <p>**Depends on**:</p><p>- Etcd: Service is unavailable</p> |
+| Etcd: Failed to fetch info data (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes</p> | `{TEMPLATE_NAME:etcd.is.leader.nodata(30m)}=1` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Etcd: Service is unavailable</p> |
+| Etcd: Member has no leader | <p>"If a member does not have a leader, it is totally unavailable."</p> | `{TEMPLATE_NAME:etcd.has.leader.last()}=0` | AVERAGE | |
+| Etcd: Instance has seen too many leader changes (over {$ETCD.LEADER.CHANGES.MAX.WARN} for 15m)         | <p>Rapid leadership changes impact the performance of etcd significantly. It also signals that the leader is unstable, perhaps due to network connectivity issues or excessive load hitting the etcd cluster.</p>                                      | `{TEMPLATE_NAME:etcd.leader.changes.delta(15m)}>{$ETCD.LEADER.CHANGES.MAX.WARN}`                           | WARNING  |                                                                                       |
+| Etcd: Too many proposal failures (over {$ETCD.PROPOSAL.FAIL.MAX.WARN} for 5m)                          | <p>"Normally related to two issues: temporary failures related to a leader election or </p><p>longer downtime caused by a loss of quorum in the cluster."</p>                                                                                          | `{TEMPLATE_NAME:etcd.proposals.failed.rate.min(5m)}>{$ETCD.PROPOSAL.FAIL.MAX.WARN}`                        | WARNING  |                                                                                       |
+| Etcd: Too many proposals are queued to commit (over {$ETCD.PROPOSAL.PENDING.MAX.WARN} for 5m)          | <p>"Rising pending proposals suggests there is a high client load or the member cannot commit proposals."</p>                                                                                                                                          | `{TEMPLATE_NAME:etcd.proposals.pending.min(5m)}>{$ETCD.PROPOSAL.PENDING.MAX.WARN}`                         | WARNING  |                                                                                       |
+| Etcd: Too many HTTP request failures (over {$ETCD.HTTP.FAIL.MAX.WARN} for 5m)                          | <p>"Too many requests failed on the etcd instance with a 5xx HTTP code."</p>                                                                                                                                                                           | `{TEMPLATE_NAME:etcd.http.requests.5xx.rate.min(5m)}>{$ETCD.HTTP.FAIL.MAX.WARN}`                           | WARNING  |                                                                                       |
+| Etcd: Server version has changed (new version: {ITEM.VALUE}) | <p>Etcd version has changed. Ack to close.</p> | `{TEMPLATE_NAME:etcd.server.version.diff()}=1 and {TEMPLATE_NAME:etcd.server.version.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Etcd: Cluster version has changed (new version: {ITEM.VALUE}) | <p>Etcd version has changed. Ack to close.</p> | `{TEMPLATE_NAME:etcd.cluster.version.diff()}=1 and {TEMPLATE_NAME:etcd.cluster.version.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Etcd: has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:etcd.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| Etcd: Current number of open files is too high (over {$ETCD.OPEN.FDS.MAX.WARN}% for 5m) | <p>"Heavy file descriptor usage (i.e., near the process’s file descriptor limit) indicates a potential file descriptor exhaustion issue. </p><p>If the file descriptors are exhausted, etcd may panic because it cannot create new WAL files."</p> | `{TEMPLATE_NAME:etcd.open.fds.min(5m)}/{Etcd by HTTP:etcd.max.fds.last()}*100>{$ETCD.OPEN.FDS.MAX.WARN}` | WARNING | |
+| Etcd: Too many failed gRPC requests with code: {#GRPC.CODE} (over {$ETCD.GRPC.ERRORS.MAX.WARN} in 5m) | <p>-</p> | `{TEMPLATE_NAME:etcd.grpc.handled.rate[{#GRPC.CODE}].min(5m)}>{$ETCD.GRPC.ERRORS.MAX.WARN}` | WARNING | |
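
The "Current number of open files is too high" trigger compares `etcd.open.fds` against `etcd.max.fds` as a percentage of the process limit. A minimal sketch of that arithmetic, with purely illustrative numbers (the template takes the threshold from the {$ETCD.OPEN.FDS.MAX.WARN} macro):

```javascript
// Sketch of the check behind the open file descriptors trigger:
// fire when open descriptors exceed the given percentage of the process limit.
function openFdsTooHigh(openFds, maxFds, warnPercent) {
    return (openFds / maxFds) * 100 > warnPercent;
}

// 800 open descriptors with a limit of 1024 is about 78%, so a 90% threshold does not fire
openFdsTooHigh(800, 1024, 90);   // -> false
```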
## Feedback
diff --git a/templates/app/generic_java_jmx/README.md b/templates/app/generic_java_jmx/README.md
index db07e4efb4b..33ee31493f0 100644
--- a/templates/app/generic_java_jmx/README.md
+++ b/templates/app/generic_java_jmx/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
Official JMX Template from Zabbix distribution. Could be useful for many Java Applications (JMX).
@@ -18,18 +18,18 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$JMX.CPU.LOAD.MAX} |<p>A threshold in percent for CPU utilization trigger.</p> |`85` |
-|{$JMX.CPU.LOAD.TIME} |<p>The time during which the CPU utilization may exceed the threshold.</p> |`5m` |
-|{$JMX.FILE.DESCRIPTORS.MAX} |<p>A threshold in percent for file descriptors count trigger.</p> |`85` |
-|{$JMX.FILE.DESCRIPTORS.TIME} |<p>The time during which the file descriptors count may exceed the threshold.</p> |`3m` |
-|{$JMX.HEAP.MEM.USAGE.MAX} |<p>A threshold in percent for Heap memory utilization trigger.</p> |`85` |
-|{$JMX.HEAP.MEM.USAGE.TIME} |<p>The time during which the Heap memory utilization may exceed the threshold.</p> |`10m` |
-|{$JMX.MP.USAGE.MAX} |<p>A threshold in percent for memory pools utilization trigger. Use a context to change the threshold for a specific pool.</p> |`85` |
-|{$JMX.MP.USAGE.TIME} |<p>The time during which the memory pools utilization may exceed the threshold.</p> |`10m` |
-|{$JMX.NONHEAP.MEM.USAGE.MAX} |<p>A threshold in percent for Non-heap memory utilization trigger.</p> |`85` |
-|{$JMX.NONHEAP.MEM.USAGE.TIME} |<p>The time during which the Non-heap memory utilization may exceed the threshold.</p> |`10m` |
+| Name | Description | Default |
+|-------------------------------|--------------------------------------------------------------------------------------------------------------------------------|---------|
+| {$JMX.CPU.LOAD.MAX} | <p>A threshold in percent for CPU utilization trigger.</p> | `85` |
+| {$JMX.CPU.LOAD.TIME} | <p>The time during which the CPU utilization may exceed the threshold.</p> | `5m` |
+| {$JMX.FILE.DESCRIPTORS.MAX} | <p>A threshold in percent for file descriptors count trigger.</p> | `85` |
+| {$JMX.FILE.DESCRIPTORS.TIME} | <p>The time during which the file descriptors count may exceed the threshold.</p> | `3m` |
+| {$JMX.HEAP.MEM.USAGE.MAX} | <p>A threshold in percent for Heap memory utilization trigger.</p> | `85` |
+| {$JMX.HEAP.MEM.USAGE.TIME} | <p>The time during which the Heap memory utilization may exceed the threshold.</p> | `10m` |
+| {$JMX.MP.USAGE.MAX} | <p>A threshold in percent for memory pools utilization trigger. Use a context to change the threshold for a specific pool.</p> | `85` |
+| {$JMX.MP.USAGE.TIME} | <p>The time during which the memory pools utilization may exceed the threshold.</p> | `10m` |
+| {$JMX.NONHEAP.MEM.USAGE.MAX} | <p>A threshold in percent for Non-heap memory utilization trigger.</p> | `85` |
+| {$JMX.NONHEAP.MEM.USAGE.TIME} | <p>The time during which the Non-heap memory utilization may exceed the threshold.</p> | `10m` |
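
Note that the memory pool macros support user macro context, which is what "Use a context to change the threshold for a specific pool" refers to. For example (an illustrative value, not a template default), defining `{$JMX.MP.USAGE.MAX:"CMS Old Gen"}` as `90` on a host raises the threshold for that pool only, while the other pools keep the default `85`.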
## Template links
@@ -40,85 +40,85 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|JMX |ClassLoading: Loaded class count |<p>Displays number of classes that are currently loaded in the Java virtual machine.</p> |JMX |jmx["java.lang:type=ClassLoading","LoadedClassCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |ClassLoading: Total loaded class count |<p>Displays the total number of classes that have been loaded since the Java virtual machine has started execution.</p> |JMX |jmx["java.lang:type=ClassLoading","TotalLoadedClassCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |ClassLoading: Unloaded class count |<p>Displays the total number of classes that have been loaded since the Java virtual machine has started execution.</p> |JMX |jmx["java.lang:type=ClassLoading","UnloadedClassCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |Compilation: Name of the current JIT compiler |<p>Displays the total number of classes unloaded since the Java virtual machine has started execution.</p> |JMX |jmx["java.lang:type=Compilation","Name"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `30m`</p> |
-|JMX |Compilation: Accumulated time spent |<p>Displays the approximate accumulated elapsed time spent in compilation, in seconds.</p> |JMX |jmx["java.lang:type=Compilation","TotalCompilationTime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |GarbageCollector: ConcurrentMarkSweep number of collections per second |<p>Displays the total number of collections that have occurred per second.</p> |JMX |jmx["java.lang:type=GarbageCollector,name=ConcurrentMarkSweep","CollectionCount"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|JMX |GarbageCollector: ConcurrentMarkSweep accumulated time spent in collection |<p>Displays the approximate accumulated collection elapsed time, in seconds.</p> |JMX |jmx["java.lang:type=GarbageCollector,name=ConcurrentMarkSweep","CollectionTime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |GarbageCollector: Copy number of collections per second |<p>Displays the total number of collections that have occurred per second.</p> |JMX |jmx["java.lang:type=GarbageCollector,name=Copy","CollectionCount"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|JMX |GarbageCollector: Copy accumulated time spent in collection |<p>Displays the approximate accumulated collection elapsed time, in seconds.</p> |JMX |jmx["java.lang:type=GarbageCollector,name=Copy","CollectionTime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |GarbageCollector: MarkSweepCompact number of collections per second |<p>Displays the total number of collections that have occurred per second.</p> |JMX |jmx["java.lang:type=GarbageCollector,name=MarkSweepCompact","CollectionCount"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|JMX |GarbageCollector: MarkSweepCompact accumulated time spent in collection |<p>Displays the approximate accumulated collection elapsed time, in seconds.</p> |JMX |jmx["java.lang:type=GarbageCollector,name=MarkSweepCompact","CollectionTime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |GarbageCollector: ParNew number of collections per second |<p>Displays the total number of collections that have occurred per second.</p> |JMX |jmx["java.lang:type=GarbageCollector,name=ParNew","CollectionCount"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|JMX |GarbageCollector: ParNew accumulated time spent in collection |<p>Displays the approximate accumulated collection elapsed time, in seconds.</p> |JMX |jmx["java.lang:type=GarbageCollector,name=ParNew","CollectionTime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |GarbageCollector: PS MarkSweep number of collections per second |<p>Displays the total number of collections that have occurred per second.</p> |JMX |jmx["java.lang:type=GarbageCollector,name=PS MarkSweep","CollectionCount"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|JMX |GarbageCollector: PS MarkSweep accumulated time spent in collection |<p>Displays the approximate accumulated collection elapsed time, in seconds.</p> |JMX |jmx["java.lang:type=GarbageCollector,name=PS MarkSweep","CollectionTime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |GarbageCollector: PS Scavenge number of collections per second |<p>Displays the total number of collections that have occurred per second.</p> |JMX |jmx["java.lang:type=GarbageCollector,name=PS Scavenge","CollectionCount"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|JMX |GarbageCollector: PS Scavenge accumulated time spent in collection |<p>Displays the approximate accumulated collection elapsed time, in seconds.</p> |JMX |jmx["java.lang:type=GarbageCollector,name=PS Scavenge","CollectionTime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |Memory: Heap memory committed |<p>Current heap memory allocated. This amount of memory is guaranteed for the Java virtual machine to use.</p> |JMX |jmx["java.lang:type=Memory","HeapMemoryUsage.committed"] |
-|JMX |Memory: Heap memory maximum size |<p>Maximum amount of heap that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> |JMX |jmx["java.lang:type=Memory","HeapMemoryUsage.max"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |Memory: Heap memory used |<p>Current memory usage outside the heap.</p> |JMX |jmx["java.lang:type=Memory","HeapMemoryUsage.used"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |Memory: Non-Heap memory committed |<p>Current memory allocated outside the heap. This amount of memory is guaranteed for the Java virtual machine to use.</p> |JMX |jmx["java.lang:type=Memory","NonHeapMemoryUsage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |Memory: Non-Heap memory maximum size |<p>Maximum amount of non-heap memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> |JMX |jmx["java.lang:type=Memory","NonHeapMemoryUsage.max"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |Memory: Non-Heap memory used |<p>Current memory usage outside the heap</p> |JMX |jmx["java.lang:type=Memory","NonHeapMemoryUsage.used"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |Memory: Object pending finalization count |<p>The approximate number of objects for which finalization is pending.</p> |JMX |jmx["java.lang:type=Memory","ObjectPendingFinalizationCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |MemoryPool: CMS Old Gen committed |<p>Current memory allocated</p> |JMX |jmx["java.lang:type=MemoryPool,name=CMS Old Gen","Usage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |MemoryPool: CMS Old Gen maximum size |<p>Maximum amount of memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> |JMX |jmx["java.lang:type=MemoryPool,name=CMS Old Gen","Usage.max"] |
-|JMX |MemoryPool: CMS Old Gen used |<p>Current memory usage</p> |JMX |jmx["java.lang:type=MemoryPool,name=CMS Old Gen","Usage.used"] |
-|JMX |MemoryPool: CMS Perm Gen committed |<p>Current memory allocated</p> |JMX |jmx["java.lang:type=MemoryPool,name=CMS Perm Gen","Usage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |MemoryPool: CMS Perm Gen maximum size |<p>Maximum amount of memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> |JMX |jmx["java.lang:type=MemoryPool,name=CMS Perm Gen","Usage.max"] |
-|JMX |MemoryPool: CMS Perm Gen used |<p>Current memory usage</p> |JMX |jmx["java.lang:type=MemoryPool,name=CMS Perm Gen","Usage.used"] |
-|JMX |MemoryPool: Code Cache committed |<p>Current memory allocated</p> |JMX |jmx["java.lang:type=MemoryPool,name=Code Cache","Usage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |MemoryPool: CodeCache maximum size |<p>Maximum amount of memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> |JMX |jmx["java.lang:type=MemoryPool,name=Code Cache","Usage.max"] |
-|JMX |MemoryPool: Code Cache used |<p>Current memory usage</p> |JMX |jmx["java.lang:type=MemoryPool,name=Code Cache","Usage.used"] |
-|JMX |MemoryPool: Perm Gen committed |<p>Current memory allocated</p> |JMX |jmx["java.lang:type=MemoryPool,name=Perm Gen","Usage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |MemoryPool: Perm Gen maximum size |<p>Maximum amount of memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> |JMX |jmx["java.lang:type=MemoryPool,name=Perm Gen","Usage.max"] |
-|JMX |MemoryPool: Perm Gen used |<p>Current memory usage</p> |JMX |jmx["java.lang:type=MemoryPool,name=Perm Gen","Usage.used"] |
-|JMX |MemoryPool: PS Old Gen |<p>Current memory allocated</p> |JMX |jmx["java.lang:type=MemoryPool,name=PS Old Gen","Usage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |MemoryPool: PS Old Gen maximum size |<p>Maximum amount of memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> |JMX |jmx["java.lang:type=MemoryPool,name=PS Old Gen","Usage.max"] |
-|JMX |MemoryPool: PS Old Gen used |<p>Current memory usage</p> |JMX |jmx["java.lang:type=MemoryPool,name=PS Old Gen","Usage.used"] |
-|JMX |MemoryPool: PS Perm Gen committed |<p>Current memory allocated</p> |JMX |jmx["java.lang:type=MemoryPool,name=PS Perm Gen","Usage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |MemoryPool: PS Perm Gen maximum size |<p>Maximum amount of memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> |JMX |jmx["java.lang:type=MemoryPool,name=PS Perm Gen","Usage.max"] |
-|JMX |MemoryPool: PS Perm Gen used |<p>Current memory usage</p> |JMX |jmx["java.lang:type=MemoryPool,name=PS Perm Gen","Usage.used"] |
-|JMX |MemoryPool: Tenured Gen committed |<p>Current memory allocated</p> |JMX |jmx["java.lang:type=MemoryPool,name=Tenured Gen","Usage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |MemoryPool: Tenured Gen maximum size |<p>Maximum amount of memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> |JMX |jmx["java.lang:type=MemoryPool,name=Tenured Gen","Usage.max"] |
-|JMX |MemoryPool: Tenured Gen used |<p>Current memory usage</p> |JMX |jmx["java.lang:type=MemoryPool,name=Tenured Gen","Usage.used"] |
-|JMX |OperatingSystem: File descriptors maximum count |<p>This is the number of file descriptors we can have opened in the same process, as determined by the operating system. You can never have more file descriptors than this number.</p> |JMX |jmx["java.lang:type=OperatingSystem","MaxFileDescriptorCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |OperatingSystem: File descriptors opened |<p>This is the number of opened file descriptors at the moment, if this reaches the MaxFileDescriptorCount, the application will throw an IOException: Too many open files. This could mean you’re are opening file descriptors and never closing them.</p> |JMX |jmx["java.lang:type=OperatingSystem","OpenFileDescriptorCount"] |
-|JMX |OperatingSystem: Process CPU Load |<p>ProcessCpuLoad represents the CPU load in this process.</p> |JMX |jmx["java.lang:type=OperatingSystem","ProcessCpuLoad"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `100`</p> |
-|JMX |Runtime: JVM uptime |<p>-</p> |JMX |jmx["java.lang:type=Runtime","Uptime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|JMX |Runtime: JVM name |<p>-</p> |JMX |jmx["java.lang:type=Runtime","VmName"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `30m`</p> |
-|JMX |Runtime: JVM version |<p>-</p> |JMX |jmx["java.lang:type=Runtime","VmVersion"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `30m`</p> |
-|JMX |Threading: Daemon thread count |<p>Number of daemon threads running.</p> |JMX |jmx["java.lang:type=Threading","DaemonThreadCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|JMX |Threading: Peak thread count |<p>Maximum number of threads being executed at the same time since the JVM was started or the peak was reset.</p> |JMX |jmx["java.lang:type=Threading","PeakThreadCount"] |
-|JMX |Threading: Thread count |<p>The number of threads running at the current moment.</p> |JMX |jmx["java.lang:type=Threading","ThreadCount"] |
-|JMX |Threading: Total started thread count |<p>The number of threads started since the JVM was launched.</p> |JMX |jmx["java.lang:type=Threading","TotalStartedThreadCount"] |
+| Group | Name | Description | Type | Key and additional info |
+|-------|----------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| JMX | ClassLoading: Loaded class count | <p>Displays number of classes that are currently loaded in the Java virtual machine.</p> | JMX | jmx["java.lang:type=ClassLoading","LoadedClassCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | ClassLoading: Total loaded class count | <p>Displays the total number of classes that have been loaded since the Java virtual machine has started execution.</p> | JMX | jmx["java.lang:type=ClassLoading","TotalLoadedClassCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX   | ClassLoading: Unloaded class count                                          | <p>Displays the total number of classes that have been unloaded since the Java virtual machine has started execution.</p>                                                                                                                                                                                                   | JMX  | jmx["java.lang:type=ClassLoading","UnloadedClassCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p>                                                         |
+| JMX   | Compilation: Name of the current JIT compiler                               | <p>Displays the name of the Just-In-Time (JIT) compiler.</p>                                                                                                                                                                                                                                                                | JMX  | jmx["java.lang:type=Compilation","Name"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `30m`</p>                                                                        |
+| JMX | Compilation: Accumulated time spent | <p>Displays the approximate accumulated elapsed time spent in compilation, in seconds.</p> | JMX | jmx["java.lang:type=Compilation","TotalCompilationTime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | GarbageCollector: ConcurrentMarkSweep number of collections per second | <p>Displays the total number of collections that have occurred per second.</p> | JMX | jmx["java.lang:type=GarbageCollector,name=ConcurrentMarkSweep","CollectionCount"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| JMX | GarbageCollector: ConcurrentMarkSweep accumulated time spent in collection | <p>Displays the approximate accumulated collection elapsed time, in seconds.</p> | JMX | jmx["java.lang:type=GarbageCollector,name=ConcurrentMarkSweep","CollectionTime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | GarbageCollector: Copy number of collections per second | <p>Displays the total number of collections that have occurred per second.</p> | JMX | jmx["java.lang:type=GarbageCollector,name=Copy","CollectionCount"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| JMX | GarbageCollector: Copy accumulated time spent in collection | <p>Displays the approximate accumulated collection elapsed time, in seconds.</p> | JMX | jmx["java.lang:type=GarbageCollector,name=Copy","CollectionTime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | GarbageCollector: MarkSweepCompact number of collections per second | <p>Displays the total number of collections that have occurred per second.</p> | JMX | jmx["java.lang:type=GarbageCollector,name=MarkSweepCompact","CollectionCount"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| JMX | GarbageCollector: MarkSweepCompact accumulated time spent in collection | <p>Displays the approximate accumulated collection elapsed time, in seconds.</p> | JMX | jmx["java.lang:type=GarbageCollector,name=MarkSweepCompact","CollectionTime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | GarbageCollector: ParNew number of collections per second | <p>Displays the total number of collections that have occurred per second.</p> | JMX | jmx["java.lang:type=GarbageCollector,name=ParNew","CollectionCount"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| JMX | GarbageCollector: ParNew accumulated time spent in collection | <p>Displays the approximate accumulated collection elapsed time, in seconds.</p> | JMX | jmx["java.lang:type=GarbageCollector,name=ParNew","CollectionTime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | GarbageCollector: PS MarkSweep number of collections per second | <p>Displays the total number of collections that have occurred per second.</p> | JMX | jmx["java.lang:type=GarbageCollector,name=PS MarkSweep","CollectionCount"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| JMX | GarbageCollector: PS MarkSweep accumulated time spent in collection | <p>Displays the approximate accumulated collection elapsed time, in seconds.</p> | JMX | jmx["java.lang:type=GarbageCollector,name=PS MarkSweep","CollectionTime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | GarbageCollector: PS Scavenge number of collections per second | <p>Displays the total number of collections that have occurred per second.</p> | JMX | jmx["java.lang:type=GarbageCollector,name=PS Scavenge","CollectionCount"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| JMX | GarbageCollector: PS Scavenge accumulated time spent in collection | <p>Displays the approximate accumulated collection elapsed time, in seconds.</p> | JMX | jmx["java.lang:type=GarbageCollector,name=PS Scavenge","CollectionTime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | Memory: Heap memory committed | <p>Current heap memory allocated. This amount of memory is guaranteed for the Java virtual machine to use.</p> | JMX | jmx["java.lang:type=Memory","HeapMemoryUsage.committed"] |
+| JMX | Memory: Heap memory maximum size | <p>Maximum amount of heap that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> | JMX | jmx["java.lang:type=Memory","HeapMemoryUsage.max"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX   | Memory: Heap memory used                                                    | <p>Current heap memory usage.</p>                                                                                                                                                                                                                                                                                           | JMX  | jmx["java.lang:type=Memory","HeapMemoryUsage.used"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p>                                                             |
+| JMX | Memory: Non-Heap memory committed | <p>Current memory allocated outside the heap. This amount of memory is guaranteed for the Java virtual machine to use.</p> | JMX | jmx["java.lang:type=Memory","NonHeapMemoryUsage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | Memory: Non-Heap memory maximum size | <p>Maximum amount of non-heap memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> | JMX | jmx["java.lang:type=Memory","NonHeapMemoryUsage.max"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | Memory: Non-Heap memory used | <p>Current memory usage outside the heap</p> | JMX | jmx["java.lang:type=Memory","NonHeapMemoryUsage.used"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | Memory: Object pending finalization count | <p>The approximate number of objects for which finalization is pending.</p> | JMX | jmx["java.lang:type=Memory","ObjectPendingFinalizationCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | MemoryPool: CMS Old Gen committed | <p>Current memory allocated</p> | JMX | jmx["java.lang:type=MemoryPool,name=CMS Old Gen","Usage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | MemoryPool: CMS Old Gen maximum size | <p>Maximum amount of memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> | JMX | jmx["java.lang:type=MemoryPool,name=CMS Old Gen","Usage.max"] |
+| JMX | MemoryPool: CMS Old Gen used | <p>Current memory usage</p> | JMX | jmx["java.lang:type=MemoryPool,name=CMS Old Gen","Usage.used"] |
+| JMX | MemoryPool: CMS Perm Gen committed | <p>Current memory allocated</p> | JMX | jmx["java.lang:type=MemoryPool,name=CMS Perm Gen","Usage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | MemoryPool: CMS Perm Gen maximum size | <p>Maximum amount of memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> | JMX | jmx["java.lang:type=MemoryPool,name=CMS Perm Gen","Usage.max"] |
+| JMX | MemoryPool: CMS Perm Gen used | <p>Current memory usage</p> | JMX | jmx["java.lang:type=MemoryPool,name=CMS Perm Gen","Usage.used"] |
+| JMX | MemoryPool: Code Cache committed | <p>Current memory allocated</p> | JMX | jmx["java.lang:type=MemoryPool,name=Code Cache","Usage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | MemoryPool: CodeCache maximum size | <p>Maximum amount of memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> | JMX | jmx["java.lang:type=MemoryPool,name=Code Cache","Usage.max"] |
+| JMX | MemoryPool: Code Cache used | <p>Current memory usage</p> | JMX | jmx["java.lang:type=MemoryPool,name=Code Cache","Usage.used"] |
+| JMX | MemoryPool: Perm Gen committed | <p>Current memory allocated</p> | JMX | jmx["java.lang:type=MemoryPool,name=Perm Gen","Usage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | MemoryPool: Perm Gen maximum size | <p>Maximum amount of memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> | JMX | jmx["java.lang:type=MemoryPool,name=Perm Gen","Usage.max"] |
+| JMX | MemoryPool: Perm Gen used | <p>Current memory usage</p> | JMX | jmx["java.lang:type=MemoryPool,name=Perm Gen","Usage.used"] |
+| JMX | MemoryPool: PS Old Gen | <p>Current memory allocated</p> | JMX | jmx["java.lang:type=MemoryPool,name=PS Old Gen","Usage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | MemoryPool: PS Old Gen maximum size | <p>Maximum amount of memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> | JMX | jmx["java.lang:type=MemoryPool,name=PS Old Gen","Usage.max"] |
+| JMX | MemoryPool: PS Old Gen used | <p>Current memory usage</p> | JMX | jmx["java.lang:type=MemoryPool,name=PS Old Gen","Usage.used"] |
+| JMX | MemoryPool: PS Perm Gen committed | <p>Current memory allocated</p> | JMX | jmx["java.lang:type=MemoryPool,name=PS Perm Gen","Usage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | MemoryPool: PS Perm Gen maximum size | <p>Maximum amount of memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> | JMX | jmx["java.lang:type=MemoryPool,name=PS Perm Gen","Usage.max"] |
+| JMX | MemoryPool: PS Perm Gen used | <p>Current memory usage</p> | JMX | jmx["java.lang:type=MemoryPool,name=PS Perm Gen","Usage.used"] |
+| JMX | MemoryPool: Tenured Gen committed | <p>Current memory allocated</p> | JMX | jmx["java.lang:type=MemoryPool,name=Tenured Gen","Usage.committed"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | MemoryPool: Tenured Gen maximum size | <p>Maximum amount of memory that can be used for memory management. This amount of memory is not guaranteed to be available if it is greater than the amount of committed memory. The Java virtual machine may fail to allocate memory even if the amount of used memory does not exceed this maximum size.</p> | JMX | jmx["java.lang:type=MemoryPool,name=Tenured Gen","Usage.max"] |
+| JMX | MemoryPool: Tenured Gen used | <p>Current memory usage</p> | JMX | jmx["java.lang:type=MemoryPool,name=Tenured Gen","Usage.used"] |
+| JMX | OperatingSystem: File descriptors maximum count | <p>The maximum number of file descriptors this process can have open, as determined by the operating system; the process can never exceed this limit.</p> | JMX | jmx["java.lang:type=OperatingSystem","MaxFileDescriptorCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | OperatingSystem: File descriptors opened | <p>The number of file descriptors currently open. If this reaches MaxFileDescriptorCount, the application will throw an IOException: Too many open files. This could mean file descriptors are being opened and never closed.</p> | JMX | jmx["java.lang:type=OperatingSystem","OpenFileDescriptorCount"] |
+| JMX | OperatingSystem: Process CPU Load | <p>ProcessCpuLoad represents the recent CPU usage of the JVM process.</p> | JMX | jmx["java.lang:type=OperatingSystem","ProcessCpuLoad"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `100`</p> |
+| JMX | Runtime: JVM uptime | <p>-</p> | JMX | jmx["java.lang:type=Runtime","Uptime"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| JMX | Runtime: JVM name | <p>-</p> | JMX | jmx["java.lang:type=Runtime","VmName"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `30m`</p> |
+| JMX | Runtime: JVM version | <p>-</p> | JMX | jmx["java.lang:type=Runtime","VmVersion"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `30m`</p> |
+| JMX | Threading: Daemon thread count | <p>Number of daemon threads running.</p> | JMX | jmx["java.lang:type=Threading","DaemonThreadCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| JMX | Threading: Peak thread count | <p>Maximum number of threads being executed at the same time since the JVM was started or the peak was reset.</p> | JMX | jmx["java.lang:type=Threading","PeakThreadCount"] |
+| JMX | Threading: Thread count | <p>The number of threads running at the current moment.</p> | JMX | jmx["java.lang:type=Threading","ThreadCount"] |
+| JMX | Threading: Total started thread count | <p>The number of threads started since the JVM was launched.</p> | JMX | jmx["java.lang:type=Threading","TotalStartedThreadCount"] |
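
A note on the Runtime and OperatingSystem items above: the JVM reports `Uptime` in milliseconds and `ProcessCpuLoad` as a fraction between 0 and 1, which is what the `MULTIPLIER: 0.001` and `MULTIPLIER: 100` preprocessing steps normalize. A minimal Python sketch of those two conversions, for illustration only (the sample values are invented):

```python
def uptime_seconds(raw_uptime_ms: float) -> float:
    """Mirrors the MULTIPLIER `0.001` step: JVM Runtime Uptime arrives in milliseconds."""
    return raw_uptime_ms * 0.001


def cpu_load_percent(raw_process_cpu_load: float) -> float:
    """Mirrors the MULTIPLIER `100` step: ProcessCpuLoad arrives as a 0..1 fraction."""
    return raw_process_cpu_load * 100


if __name__ == "__main__":
    print(uptime_seconds(86_400_000))   # 86400.0 s, i.e. one day of uptime
    print(cpu_load_percent(0.42))       # 42.0 %
```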
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Compilation: {HOST.NAME} uses suboptimal JIT compiler |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=Compilation","Name"].str(Client)}=1` |INFO |<p>Manual close: YES</p> |
-|GarbageCollector: Concurrent Mark Sweep in fire fighting mode |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=GarbageCollector,name=ConcurrentMarkSweep","CollectionCount"].last()}>{Generic Java JMX:jmx["java.lang:type=GarbageCollector,name=ParNew","CollectionCount"].last()}` |AVERAGE | |
-|GarbageCollector: Mark Sweep Compact in fire fighting mode |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=GarbageCollector,name=MarkSweepCompact","CollectionCount"].last()}>{Generic Java JMX:jmx["java.lang:type=GarbageCollector,name=Copy","CollectionCount"].last()}` |AVERAGE | |
-|GarbageCollector: PS Mark Sweep in fire fighting mode |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=GarbageCollector,name=PS MarkSweep","CollectionCount"].last()}>{Generic Java JMX:jmx["java.lang:type=GarbageCollector,name=PS Scavenge","CollectionCount"].last()}` |AVERAGE | |
-|Memory: Heap memory usage more than {$JMX.HEAP.USAGE.MAX}% for {$JMX.HEAP.MEM.USAGE.TIME} |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=Memory","HeapMemoryUsage.used"].min({$JMX.HEAP.MEM.USAGE.TIME})}>({Generic Java JMX:jmx["java.lang:type=Memory","HeapMemoryUsage.max"].last()}*{$JMX.HEAP.MEM.USAGE.MAX}/100)` |WARNING | |
-|Memory: Non-Heap memory usage more than {$JMX.NONHEAP.MEM.USAGE.MAX}% for {$JMX.NONHEAP.MEM.USAGE.TIME} |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=Memory","NonHeapMemoryUsage.used"].min({$JMX.NONHEAP.MEM.USAGE.TIME})}>({Generic Java JMX:jmx["java.lang:type=Memory","NonHeapMemoryUsage.max"].last()}*{$JMX.NONHEAP.MEM.USAGE.MAX}/100)` |WARNING | |
-|MemoryPool: CMS Old Gen memory usage more than {$JMX.MP.USAGE.MAX:"CMS Old Gen"}% for {$JMX.MP.USAGE.TIME:"CMS Old Gen"} |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=MemoryPool,name=CMS Old Gen","Usage.used"].min({$JMX.MP.USAGE.TIME:"CMS Old Gen"})}>({Generic Java JMX:jmx["java.lang:type=MemoryPool,name=CMS Old Gen","Usage.max"].last()}*{$JMX.MP.USAGE.MAX:"CMS Old Gen"}/100)` |WARNING | |
-|MemoryPool: CMS Perm Gen memory usage more than {$JMX.MP.USAGE.MAX:"CMS Perm Gen"}% for {$JMX.MP.USAGE.TIME:"CMS Perm Gen"} |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=MemoryPool,name=CMS Perm Gen","Usage.used"].min({$JMX.MP.USAGE.TIME:"CMS Perm Gen"})}>({Generic Java JMX:jmx["java.lang:type=MemoryPool,name=CMS Perm Gen","Usage.max"].last()}*{$JMX.MP.USAGE.MAX:"CMS Perm Gen"}/100)` |WARNING | |
-|MemoryPool: Code Cache memory usage more than {$JMX.MP.USAGE.MAX:"Code Cache"}% for {$JMX.MP.USAGE.TIME:"Code Cache"} |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=MemoryPool,name=Code Cache","Usage.used"].min({$JMX.MP.USAGE.TIME:"Code Cache"})}>({Generic Java JMX:jmx["java.lang:type=MemoryPool,name=Code Cache","Usage.max"].last()}*{$JMX.MP.USAGE.MAX:"Code Cache"}/100)` |WARNING | |
-|MemoryPool: Perm Gen memory usage more than {$JMX.MP.USAGE.MAX:"Perm Gen"}% for {$JMX.MP.USAGE.TIME:"Perm Gen"} |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=MemoryPool,name=Perm Gen","Usage.used"].min({$JMX.MP.USAGE.TIME:"Perm Gen"})}>({Generic Java JMX:jmx["java.lang:type=MemoryPool,name=Perm Gen","Usage.max"].last()}*{$JMX.MP.USAGE.MAX:"Perm Gen"}/100)` |WARNING | |
-|MemoryPool: PS Old Gen memory usage more than {$JMX.MP.USAGE.MAX:"PS Old Gen"}% for {$JMX.MP.USAGE.TIME:"PS Old Gen"} |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=MemoryPool,name=PS Old Gen","Usage.used"].min({$JMX.MP.USAGE.TIME:"PS Old Gen"})}>({Generic Java JMX:jmx["java.lang:type=MemoryPool,name=PS Old Gen","Usage.max"].last()}*{$JMX.MP.USAGE.MAX:"PS Old Gen"}/100)` |WARNING | |
-|MemoryPool: PS Perm Gen memory usage more than {$JMX.MP.USAGE.MAX:"PS Perm Gen"}% for {$JMX.MP.USAGE.TIME:"PS Perm Gen"} |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=MemoryPool,name=PS Perm Gen","Usage.used"].min({$JMX.MP.USAGE.TIME:"PS Perm Gen"})}>({Generic Java JMX:jmx["java.lang:type=MemoryPool,name=PS Perm Gen","Usage.max"].last()}*{$JMX.MP.USAGE.MAX:"PS Perm Gen"}/100)` |WARNING | |
-|MemoryPool: Tenured Gen memory usage more than {$JMX.MP.USAGE.MAX:"Tenured Gen"}% for {$JMX.MP.USAGE.TIME:"Tenured Gen"} |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=MemoryPool,name=Tenured Gen","Usage.used"].min({$JMX.MP.USAGE.TIME:"Tenured Gen"})}>({Generic Java JMX:jmx["java.lang:type=MemoryPool,name=Tenured Gen","Usage.max"].last()}*{$JMX.MP.USAGE.MAX:"Tenured Gen"}/100)` |WARNING | |
-|OperatingSystem: Opened file descriptor count more than {$JMX.FILE.DESCRIPTORS.MAX}% of maximum |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=OperatingSystem","OpenFileDescriptorCount"].min({$JMX.FILE.DESCRIPTORS.TIME})}>({Generic Java JMX:jmx["java.lang:type=OperatingSystem","MaxFileDescriptorCount"].last()}*{$JMX.FILE.DESCRIPTORS.MAX}/100)` |WARNING | |
-|OperatingSystem: Process CPU Load more than {$JMX.CPU.LOAD.MAX}% for {$JMX.CPU.LOAD.TIME} |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=OperatingSystem","ProcessCpuLoad"].min({$JMX.CPU.LOAD.TIME})}>{$JMX.CPU.LOAD.MAX}` |AVERAGE | |
-|Runtime: JVM is not reachable |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=Runtime","Uptime"].nodata(5m)}=1` |AVERAGE |<p>Manual close: YES</p> |
-|Runtime: {HOST.NAME} runs suboptimal VM type |<p>-</p> |`{TEMPLATE_NAME:jmx["java.lang:type=Runtime","VmName"].str(Server)}<>1` |INFO |<p>Manual close: YES</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-----------------------------------------------------------------------------------------------------------------------------|-------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------|
+| Compilation: {HOST.NAME} uses suboptimal JIT compiler | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=Compilation","Name"].str(Client)}=1` | INFO | <p>Manual close: YES</p> |
+| GarbageCollector: Concurrent Mark Sweep in fire fighting mode | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=GarbageCollector,name=ConcurrentMarkSweep","CollectionCount"].last()}>{Generic Java JMX:jmx["java.lang:type=GarbageCollector,name=ParNew","CollectionCount"].last()}` | AVERAGE | |
+| GarbageCollector: Mark Sweep Compact in fire fighting mode | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=GarbageCollector,name=MarkSweepCompact","CollectionCount"].last()}>{Generic Java JMX:jmx["java.lang:type=GarbageCollector,name=Copy","CollectionCount"].last()}` | AVERAGE | |
+| GarbageCollector: PS Mark Sweep in fire fighting mode | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=GarbageCollector,name=PS MarkSweep","CollectionCount"].last()}>{Generic Java JMX:jmx["java.lang:type=GarbageCollector,name=PS Scavenge","CollectionCount"].last()}` | AVERAGE | |
+| Memory: Heap memory usage more than {$JMX.HEAP.MEM.USAGE.MAX}% for {$JMX.HEAP.MEM.USAGE.TIME} | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=Memory","HeapMemoryUsage.used"].min({$JMX.HEAP.MEM.USAGE.TIME})}>({Generic Java JMX:jmx["java.lang:type=Memory","HeapMemoryUsage.max"].last()}*{$JMX.HEAP.MEM.USAGE.MAX}/100)` | WARNING | |
+| Memory: Non-Heap memory usage more than {$JMX.NONHEAP.MEM.USAGE.MAX}% for {$JMX.NONHEAP.MEM.USAGE.TIME} | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=Memory","NonHeapMemoryUsage.used"].min({$JMX.NONHEAP.MEM.USAGE.TIME})}>({Generic Java JMX:jmx["java.lang:type=Memory","NonHeapMemoryUsage.max"].last()}*{$JMX.NONHEAP.MEM.USAGE.MAX}/100)` | WARNING | |
+| MemoryPool: CMS Old Gen memory usage more than {$JMX.MP.USAGE.MAX:"CMS Old Gen"}% for {$JMX.MP.USAGE.TIME:"CMS Old Gen"} | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=MemoryPool,name=CMS Old Gen","Usage.used"].min({$JMX.MP.USAGE.TIME:"CMS Old Gen"})}>({Generic Java JMX:jmx["java.lang:type=MemoryPool,name=CMS Old Gen","Usage.max"].last()}*{$JMX.MP.USAGE.MAX:"CMS Old Gen"}/100)` | WARNING | |
+| MemoryPool: CMS Perm Gen memory usage more than {$JMX.MP.USAGE.MAX:"CMS Perm Gen"}% for {$JMX.MP.USAGE.TIME:"CMS Perm Gen"} | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=MemoryPool,name=CMS Perm Gen","Usage.used"].min({$JMX.MP.USAGE.TIME:"CMS Perm Gen"})}>({Generic Java JMX:jmx["java.lang:type=MemoryPool,name=CMS Perm Gen","Usage.max"].last()}*{$JMX.MP.USAGE.MAX:"CMS Perm Gen"}/100)` | WARNING | |
+| MemoryPool: Code Cache memory usage more than {$JMX.MP.USAGE.MAX:"Code Cache"}% for {$JMX.MP.USAGE.TIME:"Code Cache"} | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=MemoryPool,name=Code Cache","Usage.used"].min({$JMX.MP.USAGE.TIME:"Code Cache"})}>({Generic Java JMX:jmx["java.lang:type=MemoryPool,name=Code Cache","Usage.max"].last()}*{$JMX.MP.USAGE.MAX:"Code Cache"}/100)` | WARNING | |
+| MemoryPool: Perm Gen memory usage more than {$JMX.MP.USAGE.MAX:"Perm Gen"}% for {$JMX.MP.USAGE.TIME:"Perm Gen"} | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=MemoryPool,name=Perm Gen","Usage.used"].min({$JMX.MP.USAGE.TIME:"Perm Gen"})}>({Generic Java JMX:jmx["java.lang:type=MemoryPool,name=Perm Gen","Usage.max"].last()}*{$JMX.MP.USAGE.MAX:"Perm Gen"}/100)` | WARNING | |
+| MemoryPool: PS Old Gen memory usage more than {$JMX.MP.USAGE.MAX:"PS Old Gen"}% for {$JMX.MP.USAGE.TIME:"PS Old Gen"} | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=MemoryPool,name=PS Old Gen","Usage.used"].min({$JMX.MP.USAGE.TIME:"PS Old Gen"})}>({Generic Java JMX:jmx["java.lang:type=MemoryPool,name=PS Old Gen","Usage.max"].last()}*{$JMX.MP.USAGE.MAX:"PS Old Gen"}/100)` | WARNING | |
+| MemoryPool: PS Perm Gen memory usage more than {$JMX.MP.USAGE.MAX:"PS Perm Gen"}% for {$JMX.MP.USAGE.TIME:"PS Perm Gen"} | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=MemoryPool,name=PS Perm Gen","Usage.used"].min({$JMX.MP.USAGE.TIME:"PS Perm Gen"})}>({Generic Java JMX:jmx["java.lang:type=MemoryPool,name=PS Perm Gen","Usage.max"].last()}*{$JMX.MP.USAGE.MAX:"PS Perm Gen"}/100)` | WARNING | |
+| MemoryPool: Tenured Gen memory usage more than {$JMX.MP.USAGE.MAX:"Tenured Gen"}% for {$JMX.MP.USAGE.TIME:"Tenured Gen"} | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=MemoryPool,name=Tenured Gen","Usage.used"].min({$JMX.MP.USAGE.TIME:"Tenured Gen"})}>({Generic Java JMX:jmx["java.lang:type=MemoryPool,name=Tenured Gen","Usage.max"].last()}*{$JMX.MP.USAGE.MAX:"Tenured Gen"}/100)` | WARNING | |
+| OperatingSystem: Opened file descriptor count more than {$JMX.FILE.DESCRIPTORS.MAX}% of maximum | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=OperatingSystem","OpenFileDescriptorCount"].min({$JMX.FILE.DESCRIPTORS.TIME})}>({Generic Java JMX:jmx["java.lang:type=OperatingSystem","MaxFileDescriptorCount"].last()}*{$JMX.FILE.DESCRIPTORS.MAX}/100)` | WARNING | |
+| OperatingSystem: Process CPU Load more than {$JMX.CPU.LOAD.MAX}% for {$JMX.CPU.LOAD.TIME} | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=OperatingSystem","ProcessCpuLoad"].min({$JMX.CPU.LOAD.TIME})}>{$JMX.CPU.LOAD.MAX}` | AVERAGE | |
+| Runtime: JVM is not reachable | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=Runtime","Uptime"].nodata(5m)}=1` | AVERAGE | <p>Manual close: YES</p> |
+| Runtime: {HOST.NAME} runs suboptimal VM type | <p>-</p> | `{TEMPLATE_NAME:jmx["java.lang:type=Runtime","VmName"].str(Server)}<>1` | INFO | <p>Manual close: YES</p> |
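
To make the memory trigger logic explicit: the heap and memory-pool triggers above compare the minimum of the `used` value over the evaluation window against a percentage of the last reported `max`, so a single short spike does not fire the trigger. A rough Python sketch of that comparison (the sample values and the 80% threshold are assumptions for the example, not template defaults):

```python
def memory_trigger_fires(used_samples, max_bytes, usage_max_pct):
    """Mirror of `min(<window>) > last(max) * {$...USAGE.MAX} / 100`:
    the trigger fires only if even the lowest sample in the window
    is above the configured percentage of the pool or heap maximum."""
    return min(used_samples) > max_bytes * usage_max_pct / 100


if __name__ == "__main__":
    # Hypothetical HeapMemoryUsage.used samples (bytes) from the evaluation window.
    samples = [850_000_000, 900_000_000, 870_000_000]
    print(memory_trigger_fires(samples, max_bytes=1_000_000_000, usage_max_pct=80))  # True
```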
## Feedback
diff --git a/templates/app/hadoop_http/README.md b/templates/app/hadoop_http/README.md
index 7c768d92f07..6274dcaf1e6 100644
--- a/templates/app/hadoop_http/README.md
+++ b/templates/app/hadoop_http/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template for monitoring Hadoop over HTTP that works without any external scripts.
It collects metrics by polling the Hadoop API remotely using an HTTP agent and JSONPath preprocessing.
The Zabbix server (or proxy) executes direct requests to the ResourceManager, NodeManager, NameNode, and DataNode APIs.
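
As a rough illustration of that flow, the sketch below polls a ResourceManager the way the template's HTTP agent items do and extracts one ClusterMetrics value in the spirit of the JSONPath preprocessing; the hostname, port, and the `/jmx` path are placeholder assumptions for the example, not values taken from the template:

```python
import json
import urllib.request

# Placeholder endpoint; in the template the host and port come from the
# {$HADOOP.RESOURCEMANAGER.HOST} and {$HADOOP.RESOURCEMANAGER.PORT} macros.
URL = "http://resourcemanager.example.com:8088/jmx"

with urllib.request.urlopen(URL, timeout=10) as response:
    payload = json.load(response)

# Rough equivalent of the JSONPath step
# $.beans[?(@.name=='Hadoop:service=ResourceManager,name=ClusterMetrics')].NumActiveNMs.first()
cluster_metrics = next(
    bean for bean in payload["beans"]
    if bean.get("name") == "Hadoop:service=ResourceManager,name=ClusterMetrics"
)
print("Active NodeManagers:", cluster_metrics["NumActiveNMs"])
```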
@@ -17,7 +17,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/http) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/http) for basic instructions.
You should define the IP address (or FQDN) and Web-UI port for the ResourceManager in {$HADOOP.RESOURCEMANAGER.HOST} and {$HADOOP.RESOURCEMANAGER.PORT} macros and for the NameNode in {$HADOOP.NAMENODE.HOST} and {$HADOOP.NAMENODE.PORT} macros respectively. Macros can be set in the template or overridden at the host level.
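
Because the template's "Service status" items are plain TCP checks against those macro values, a quick way to sanity-check the macros before linking the template is to confirm that the ports answer from the Zabbix server or proxy. A small Python sketch (the hostnames are placeholders; 8088 and 9870 match the macro defaults listed below):

```python
import socket

# Placeholder hosts for {$HADOOP.RESOURCEMANAGER.HOST} and {$HADOOP.NAMENODE.HOST};
# 8088 and 9870 match the default {$HADOOP.RESOURCEMANAGER.PORT} / {$HADOOP.NAMENODE.PORT}.
ENDPOINTS = {
    "ResourceManager": ("resourcemanager.example.com", 8088),
    "NameNode": ("namenode.example.com", 9870),
}

for name, (host, port) in ENDPOINTS.items():
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{name}: {host}:{port} is reachable")
    except OSError as error:
        print(f"{name}: cannot connect to {host}:{port} ({error})")
```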
@@ -27,15 +27,15 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$HADOOP.CAPACITY_REMAINING.MIN.WARN} |<p>The Hadoop cluster capacity remaining percent for trigger expression.</p> |`20` |
-|{$HADOOP.NAMENODE.HOST} |<p>The Hadoop NameNode host IP address or FQDN.</p> |`NameNode` |
-|{$HADOOP.NAMENODE.PORT} |<p>The Hadoop NameNode Web-UI port.</p> |`9870` |
-|{$HADOOP.NAMENODE.RESPONSE_TIME.MAX.WARN} |<p>The Hadoop NameNode API page maximum response time in seconds for trigger expression.</p> |`10s` |
-|{$HADOOP.RESOURCEMANAGER.HOST} |<p>The Hadoop ResourceManager host IP address or FQDN.</p> |`ResourceManager` |
-|{$HADOOP.RESOURCEMANAGER.PORT} |<p>The Hadoop ResourceManager Web-UI port.</p> |`8088` |
-|{$HADOOP.RESOURCEMANAGER.RESPONSE_TIME.MAX.WARN} |<p>The Hadoop ResourceManager API page maximum response time in seconds for trigger expression.</p> |`10s` |
+| Name | Description | Default |
+|--------------------------------------------------|-----------------------------------------------------------------------------------------------------|-------------------|
+| {$HADOOP.CAPACITY_REMAINING.MIN.WARN} | <p>The Hadoop cluster capacity remaining percent for trigger expression.</p> | `20` |
+| {$HADOOP.NAMENODE.HOST} | <p>The Hadoop NameNode host IP address or FQDN.</p> | `NameNode` |
+| {$HADOOP.NAMENODE.PORT} | <p>The Hadoop NameNode Web-UI port.</p> | `9870` |
+| {$HADOOP.NAMENODE.RESPONSE_TIME.MAX.WARN} | <p>The Hadoop NameNode API page maximum response time in seconds for trigger expression.</p> | `10s` |
+| {$HADOOP.RESOURCEMANAGER.HOST} | <p>The Hadoop ResourceManager host IP address or FQDN.</p> | `ResourceManager` |
+| {$HADOOP.RESOURCEMANAGER.PORT} | <p>The Hadoop ResourceManager Web-UI port.</p> | `8088` |
+| {$HADOOP.RESOURCEMANAGER.RESPONSE_TIME.MAX.WARN} | <p>The Hadoop ResourceManager API page maximum response time in seconds for trigger expression.</p> | `10s` |
## Template links
@@ -43,97 +43,97 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Node manager discovery |<p>-</p> |HTTP_AGENT |hadoop.nodemanager.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Data node discovery |<p>-</p> |HTTP_AGENT |hadoop.datanode.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Name | Description | Type | Key and additional info |
+|------------------------|-------------|------------|------------------------------------------------------------------------------------------------------------------------|
+| Node manager discovery | <p>-</p> | HTTP_AGENT | hadoop.nodemanager.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Data node discovery | <p>-</p> | HTTP_AGENT | hadoop.datanode.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Hadoop |ResourceManager: Service status |<p>Hadoop ResourceManager API port availability.</p> |SIMPLE |net.tcp.service["tcp","{$HADOOP.RESOURCEMANAGER.HOST}","{$HADOOP.RESOURCEMANAGER.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Hadoop |ResourceManager: Service response time |<p>Hadoop ResourceManager API performance.</p> |SIMPLE |net.tcp.service.perf["tcp","{$HADOOP.RESOURCEMANAGER.HOST}","{$HADOOP.RESOURCEMANAGER.PORT}"] |
-|Hadoop |ResourceManager: Uptime | |DEPENDENT |hadoop.resourcemanager.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='java.lang:type=Runtime')].Uptime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|Hadoop |ResourceManager: RPC queue & processing time |<p>Average time spent on processing RPC requests.</p> |DEPENDENT |hadoop.resourcemanager.rpc_processing_time_avg<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=RpcActivityForPort8031')].RpcProcessingTimeAvgTime.first()`</p> |
-|Hadoop |ResourceManager: Active NMs |<p>Number of Active NodeManagers.</p> |DEPENDENT |hadoop.resourcemanager.num_active_nm<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=ClusterMetrics')].NumActiveNMs.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Hadoop |ResourceManager: Decommissioning NMs |<p>Number of Decommissioning NodeManagers.</p> |DEPENDENT |hadoop.resourcemanager.num_decommissioning_nm<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=ClusterMetrics')].NumDecommissioningNMs.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Hadoop |ResourceManager: Decommissioned NMs |<p>Number of Decommissioned NodeManagers.</p> |DEPENDENT |hadoop.resourcemanager.num_decommissioned_nm<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=ClusterMetrics')].NumDecommissionedNMs.first()`</p> |
-|Hadoop |ResourceManager: Lost NMs |<p>Number of Lost NodeManagers.</p> |DEPENDENT |hadoop.resourcemanager.num_lost_nm<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=ClusterMetrics')].NumLostNMs.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Hadoop |ResourceManager: Unhealthy NMs |<p>Number of Unhealthy NodeManagers.</p> |DEPENDENT |hadoop.resourcemanager.num_unhealthy_nm<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=ClusterMetrics')].NumUnhealthyNMs.first()`</p> |
-|Hadoop |ResourceManager: Rebooted NMs |<p>Number of Rebooted NodeManagers.</p> |DEPENDENT |hadoop.resourcemanager.num_rebooted_nm<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=ClusterMetrics')].NumRebootedNMs.first()`</p> |
-|Hadoop |ResourceManager: Shutdown NMs |<p>Number of Shutdown NodeManagers.</p> |DEPENDENT |hadoop.resourcemanager.num_shutdown_nm<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=ClusterMetrics')].NumShutdownNMs.first()`</p> |
-|Hadoop |NameNode: Service status |<p>Hadoop NameNode API port availability.</p> |SIMPLE |net.tcp.service["tcp","{$HADOOP.NAMENODE.HOST}","{$HADOOP.NAMENODE.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Hadoop |NameNode: Service response time |<p>Hadoop NameNode API performance.</p> |SIMPLE |net.tcp.service.perf["tcp","{$HADOOP.NAMENODE.HOST}","{$HADOOP.NAMENODE.PORT}"] |
-|Hadoop |NameNode: Uptime | |DEPENDENT |hadoop.namenode.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='java.lang:type=Runtime')].Uptime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|Hadoop |NameNode: RPC queue & processing time |<p>Average time spent on processing RPC requests.</p> |DEPENDENT |hadoop.namenode.rpc_processing_time_avg<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=RpcActivityForPort9000')].RpcProcessingTimeAvgTime.first()`</p> |
-|Hadoop |NameNode: Block Pool Renaming | |DEPENDENT |hadoop.namenode.percent_block_pool_used<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=NameNodeInfo')].PercentBlockPoolUsed.first()`</p> |
-|Hadoop |NameNode: Transactions since last checkpoint |<p>Total number of transactions since last checkpoint.</p> |DEPENDENT |hadoop.namenode.transactions_since_last_checkpoint<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].TransactionsSinceLastCheckpoint.first()`</p> |
-|Hadoop |NameNode: Percent capacity remaining |<p>Available capacity in percent.</p> |DEPENDENT |hadoop.namenode.percent_remaining<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=NameNodeInfo')].PercentRemaining.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Hadoop |NameNode: Capacity remaining |<p>Available capacity.</p> |DEPENDENT |hadoop.namenode.capacity_remaining<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].CapacityRemaining.first()`</p> |
-|Hadoop |NameNode: Corrupt blocks |<p>Number of corrupt blocks.</p> |DEPENDENT |hadoop.namenode.corrupt_blocks<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].CorruptBlocks.first()`</p> |
-|Hadoop |NameNode: Missing blocks |<p>Number of missing blocks.</p> |DEPENDENT |hadoop.namenode.missing_blocks<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].MissingBlocks.first()`</p> |
-|Hadoop |NameNode: Failed volumes |<p>Number of failed volumes.</p> |DEPENDENT |hadoop.namenode.volume_failures_total<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].VolumeFailuresTotal.first()`</p> |
-|Hadoop |NameNode: Alive DataNodes |<p>Count of alive DataNodes.</p> |DEPENDENT |hadoop.namenode.num_live_data_nodes<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].NumLiveDataNodes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Hadoop |NameNode: Dead DataNodes |<p>Count of dead DataNodes.</p> |DEPENDENT |hadoop.namenode.num_dead_data_nodes<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].NumDeadDataNodes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Hadoop |NameNode: Stale DataNodes |<p>DataNodes that do not send a heartbeat within 30 seconds are marked as "stale".</p> |DEPENDENT |hadoop.namenode.num_stale_data_nodes<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].StaleDataNodes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Hadoop |NameNode: Total files |<p>Total count of files tracked by the NameNode.</p> |DEPENDENT |hadoop.namenode.files_total<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].FilesTotal.first()`</p> |
-|Hadoop |NameNode: Total load |<p>The current number of concurrent file accesses (read/write) across all DataNodes.</p> |DEPENDENT |hadoop.namenode.total_load<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].TotalLoad.first()`</p> |
-|Hadoop |NameNode: Blocks allocable |<p>Maximum number of blocks allocable.</p> |DEPENDENT |hadoop.namenode.block_capacity<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].BlockCapacity.first()`</p> |
-|Hadoop |NameNode: Total blocks |<p>Count of blocks tracked by NameNode.</p> |DEPENDENT |hadoop.namenode.blocks_total<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].BlocksTotal.first()`</p> |
-|Hadoop |NameNode: Under-replicated blocks |<p>The number of blocks with insufficient replication.</p> |DEPENDENT |hadoop.namenode.under_replicated_blocks<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].UnderReplicatedBlocks.first()`</p> |
-|Hadoop |{#HOSTNAME}: RPC queue & processing time |<p>Average time spent on processing RPC requests.</p> |DEPENDENT |hadoop.nodemanager.rpc_processing_time_avg[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NodeManager,name=RpcActivityForPort8040')].RpcProcessingTimeAvgTime.first()`</p> |
-|Hadoop |{#HOSTNAME}: Container launch avg duration | |DEPENDENT |hadoop.nodemanager.container_launch_duration_avg[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NodeManager,name=NodeManagerMetrics')].ContainerLaunchDurationAvgTime.first()`</p> |
-|Hadoop |{#HOSTNAME}: JVM Threads |<p>The number of JVM threads.</p> |DEPENDENT |hadoop.nodemanager.jvm.threads[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='java.lang:type=Threading')].ThreadCount.first()`</p> |
-|Hadoop |{#HOSTNAME}: JVM Garbage collection time |<p>The JVM garbage collection time in milliseconds.</p> |DEPENDENT |hadoop.nodemanager.jvm.gc_time[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NodeManager,name=JvmMetrics')].GcTimeMillis.first()`</p> |
-|Hadoop |{#HOSTNAME}: JVM Heap usage |<p>The JVM heap usage in MBytes.</p> |DEPENDENT |hadoop.nodemanager.jvm.mem_heap_used[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NodeManager,name=JvmMetrics')].MemHeapUsedM.first()`</p> |
-|Hadoop |{#HOSTNAME}: Uptime | |DEPENDENT |hadoop.nodemanager.uptime[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='java.lang:type=Runtime')].Uptime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|Hadoop |{#HOSTNAME}: State |<p>State of the node - valid values are: NEW, RUNNING, UNHEALTHY, DECOMMISSIONING, DECOMMISSIONED, LOST, REBOOTED, SHUTDOWN.</p> |DEPENDENT |hadoop.nodemanager.state[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.HostName=='{#HOSTNAME}')].State.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Hadoop |{#HOSTNAME}: Version | |DEPENDENT |hadoop.nodemanager.version[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.HostName=='{#HOSTNAME}')].NodeManagerVersion.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Hadoop |{#HOSTNAME}: Number of containers | |DEPENDENT |hadoop.nodemanager.numcontainers[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.HostName=='{#HOSTNAME}')].NumContainers.first()`</p> |
-|Hadoop |{#HOSTNAME}: Used memory | |DEPENDENT |hadoop.nodemanager.usedmemory[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.HostName=='{#HOSTNAME}')].UsedMemoryMB.first()`</p> |
-|Hadoop |{#HOSTNAME}: Available memory | |DEPENDENT |hadoop.nodemanager.availablememory[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.HostName=='{#HOSTNAME}')].AvailableMemoryMB.first()`</p> |
-|Hadoop |{#HOSTNAME}: Remaining |<p>Remaining disk space.</p> |DEPENDENT |hadoop.datanode.remaining[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=DataNode,name=FSDatasetState')].Remaining.first()`</p> |
-|Hadoop |{#HOSTNAME}: Used |<p>Used disk space.</p> |DEPENDENT |hadoop.datanode.dfs_used[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=DataNode,name=FSDatasetState')].DfsUsed.first()`</p> |
-|Hadoop |{#HOSTNAME}: Number of failed volumes |<p>Number of failed storage volumes.</p> |DEPENDENT |hadoop.datanode.numfailedvolumes[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=DataNode,name=FSDatasetState')].NumFailedVolumes.first()`</p> |
-|Hadoop |{#HOSTNAME}: JVM Threads |<p>The number of JVM threads.</p> |DEPENDENT |hadoop.datanode.jvm.threads[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='java.lang:type=Threading')].ThreadCount.first()`</p> |
-|Hadoop |{#HOSTNAME}: JVM Garbage collection time |<p>The JVM garbage collection time in milliseconds.</p> |DEPENDENT |hadoop.datanode.jvm.gc_time[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=DataNode,name=JvmMetrics')].GcTimeMillis.first()`</p> |
-|Hadoop |{#HOSTNAME}: JVM Heap usage |<p>The JVM heap usage in MBytes.</p> |DEPENDENT |hadoop.datanode.jvm.mem_heap_used[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=DataNode,name=JvmMetrics')].MemHeapUsedM.first()`</p> |
-|Hadoop |{#HOSTNAME}: Uptime | |DEPENDENT |hadoop.datanode.uptime[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='java.lang:type=Runtime')].Uptime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|Hadoop |{#HOSTNAME}: Version |<p>DataNode software version.</p> |DEPENDENT |hadoop.datanode.version[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.HostName=='{#HOSTNAME}')].version.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Hadoop |{#HOSTNAME}: Admin state |<p>Administrative state.</p> |DEPENDENT |hadoop.datanode.admin_state[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.HostName=='{#HOSTNAME}')].adminState.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Hadoop |{#HOSTNAME}: Oper state |<p>Operational state.</p> |DEPENDENT |hadoop.datanode.oper_state[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.HostName=='{#HOSTNAME}')].operState.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Zabbix_raw_items |Get ResourceManager stats |<p>-</p> |HTTP_AGENT |hadoop.resourcemanager.get |
-|Zabbix_raw_items |Get NameNode stats |<p>-</p> |HTTP_AGENT |hadoop.namenode.get |
-|Zabbix_raw_items |Get NodeManagers states |<p>-</p> |HTTP_AGENT |hadoop.nodemanagers.get<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return JSON.stringify(JSON.parse(JSON.parse(value).beans[0].LiveNodeManagers))`</p> |
-|Zabbix_raw_items |Get DataNodes states |<p>-</p> |HTTP_AGENT |hadoop.datanodes.get<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Zabbix_raw_items |Hadoop NodeManager {#HOSTNAME}: Get stats | |HTTP_AGENT |hadoop.nodemanager.get[{#HOSTNAME}] |
-|Zabbix_raw_items |Hadoop DataNode {#HOSTNAME}: Get stats | |HTTP_AGENT |hadoop.datanode.get[{#HOSTNAME}] |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|----------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------|------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Hadoop | ResourceManager: Service status | <p>Hadoop ResourceManager API port availability.</p> | SIMPLE | net.tcp.service["tcp","{$HADOOP.RESOURCEMANAGER.HOST}","{$HADOOP.RESOURCEMANAGER.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Hadoop | ResourceManager: Service response time | <p>Hadoop ResourceManager API performance.</p> | SIMPLE | net.tcp.service.perf["tcp","{$HADOOP.RESOURCEMANAGER.HOST}","{$HADOOP.RESOURCEMANAGER.PORT}"] |
+| Hadoop | ResourceManager: Uptime | | DEPENDENT | hadoop.resourcemanager.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='java.lang:type=Runtime')].Uptime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| Hadoop | ResourceManager: RPC queue & processing time | <p>Average time spent on processing RPC requests.</p> | DEPENDENT | hadoop.resourcemanager.rpc_processing_time_avg<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=RpcActivityForPort8031')].RpcProcessingTimeAvgTime.first()`</p> |
+| Hadoop | ResourceManager: Active NMs | <p>Number of Active NodeManagers.</p> | DEPENDENT | hadoop.resourcemanager.num_active_nm<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=ClusterMetrics')].NumActiveNMs.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+| Hadoop | ResourceManager: Decommissioning NMs | <p>Number of Decommissioning NodeManagers.</p> | DEPENDENT | hadoop.resourcemanager.num_decommissioning_nm<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=ClusterMetrics')].NumDecommissioningNMs.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+| Hadoop | ResourceManager: Decommissioned NMs | <p>Number of Decommissioned NodeManagers.</p> | DEPENDENT | hadoop.resourcemanager.num_decommissioned_nm<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=ClusterMetrics')].NumDecommissionedNMs.first()`</p> |
+| Hadoop | ResourceManager: Lost NMs | <p>Number of Lost NodeManagers.</p> | DEPENDENT | hadoop.resourcemanager.num_lost_nm<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=ClusterMetrics')].NumLostNMs.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+| Hadoop | ResourceManager: Unhealthy NMs | <p>Number of Unhealthy NodeManagers.</p> | DEPENDENT | hadoop.resourcemanager.num_unhealthy_nm<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=ClusterMetrics')].NumUnhealthyNMs.first()`</p> |
+| Hadoop | ResourceManager: Rebooted NMs | <p>Number of Rebooted NodeManagers.</p> | DEPENDENT | hadoop.resourcemanager.num_rebooted_nm<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=ClusterMetrics')].NumRebootedNMs.first()`</p> |
+| Hadoop | ResourceManager: Shutdown NMs | <p>Number of Shutdown NodeManagers.</p> | DEPENDENT | hadoop.resourcemanager.num_shutdown_nm<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=ResourceManager,name=ClusterMetrics')].NumShutdownNMs.first()`</p> |
+| Hadoop | NameNode: Service status | <p>Hadoop NameNode API port availability.</p> | SIMPLE | net.tcp.service["tcp","{$HADOOP.NAMENODE.HOST}","{$HADOOP.NAMENODE.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Hadoop | NameNode: Service response time | <p>Hadoop NameNode API performance.</p> | SIMPLE | net.tcp.service.perf["tcp","{$HADOOP.NAMENODE.HOST}","{$HADOOP.NAMENODE.PORT}"] |
+| Hadoop | NameNode: Uptime | | DEPENDENT | hadoop.namenode.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='java.lang:type=Runtime')].Uptime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| Hadoop | NameNode: RPC queue & processing time | <p>Average time spent on processing RPC requests.</p> | DEPENDENT | hadoop.namenode.rpc_processing_time_avg<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=RpcActivityForPort9000')].RpcProcessingTimeAvgTime.first()`</p> |
+| Hadoop | NameNode: Block Pool Renaming | | DEPENDENT | hadoop.namenode.percent_block_pool_used<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=NameNodeInfo')].PercentBlockPoolUsed.first()`</p> |
+| Hadoop | NameNode: Transactions since last checkpoint | <p>Total number of transactions since last checkpoint.</p> | DEPENDENT | hadoop.namenode.transactions_since_last_checkpoint<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].TransactionsSinceLastCheckpoint.first()`</p> |
+| Hadoop | NameNode: Percent capacity remaining | <p>Available capacity in percent.</p> | DEPENDENT | hadoop.namenode.percent_remaining<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=NameNodeInfo')].PercentRemaining.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+| Hadoop | NameNode: Capacity remaining | <p>Available capacity.</p> | DEPENDENT | hadoop.namenode.capacity_remaining<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].CapacityRemaining.first()`</p> |
+| Hadoop | NameNode: Corrupt blocks | <p>Number of corrupt blocks.</p> | DEPENDENT | hadoop.namenode.corrupt_blocks<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].CorruptBlocks.first()`</p> |
+| Hadoop | NameNode: Missing blocks | <p>Number of missing blocks.</p> | DEPENDENT | hadoop.namenode.missing_blocks<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].MissingBlocks.first()`</p> |
+| Hadoop | NameNode: Failed volumes | <p>Number of failed volumes.</p> | DEPENDENT | hadoop.namenode.volume_failures_total<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].VolumeFailuresTotal.first()`</p> |
+| Hadoop | NameNode: Alive DataNodes | <p>Count of alive DataNodes.</p> | DEPENDENT | hadoop.namenode.num_live_data_nodes<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].NumLiveDataNodes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+| Hadoop | NameNode: Dead DataNodes | <p>Count of dead DataNodes.</p> | DEPENDENT | hadoop.namenode.num_dead_data_nodes<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].NumDeadDataNodes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+| Hadoop | NameNode: Stale DataNodes | <p>DataNodes that do not send a heartbeat within 30 seconds are marked as "stale".</p> | DEPENDENT | hadoop.namenode.num_stale_data_nodes<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].StaleDataNodes.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+| Hadoop | NameNode: Total files | <p>Total count of files tracked by the NameNode.</p> | DEPENDENT | hadoop.namenode.files_total<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].FilesTotal.first()`</p> |
+| Hadoop | NameNode: Total load | <p>The current number of concurrent file accesses (read/write) across all DataNodes.</p> | DEPENDENT | hadoop.namenode.total_load<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].TotalLoad.first()`</p> |
+| Hadoop | NameNode: Blocks allocable | <p>Maximum number of blocks allocable.</p> | DEPENDENT | hadoop.namenode.block_capacity<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].BlockCapacity.first()`</p> |
+| Hadoop | NameNode: Total blocks | <p>Count of blocks tracked by NameNode.</p> | DEPENDENT | hadoop.namenode.blocks_total<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].BlocksTotal.first()`</p> |
+| Hadoop | NameNode: Under-replicated blocks | <p>The number of blocks with insufficient replication.</p> | DEPENDENT | hadoop.namenode.under_replicated_blocks<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NameNode,name=FSNamesystem')].UnderReplicatedBlocks.first()`</p> |
+| Hadoop | {#HOSTNAME}: RPC queue & processing time | <p>Average time spent on processing RPC requests.</p> | DEPENDENT | hadoop.nodemanager.rpc_processing_time_avg[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NodeManager,name=RpcActivityForPort8040')].RpcProcessingTimeAvgTime.first()`</p> |
+| Hadoop | {#HOSTNAME}: Container launch avg duration | | DEPENDENT | hadoop.nodemanager.container_launch_duration_avg[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NodeManager,name=NodeManagerMetrics')].ContainerLaunchDurationAvgTime.first()`</p> |
+| Hadoop | {#HOSTNAME}: JVM Threads | <p>The number of JVM threads.</p> | DEPENDENT | hadoop.nodemanager.jvm.threads[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='java.lang:type=Threading')].ThreadCount.first()`</p> |
+| Hadoop | {#HOSTNAME}: JVM Garbage collection time | <p>The JVM garbage collection time in milliseconds.</p> | DEPENDENT | hadoop.nodemanager.jvm.gc_time[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NodeManager,name=JvmMetrics')].GcTimeMillis.first()`</p> |
+| Hadoop | {#HOSTNAME}: JVM Heap usage | <p>The JVM heap usage in MBytes.</p> | DEPENDENT | hadoop.nodemanager.jvm.mem_heap_used[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=NodeManager,name=JvmMetrics')].MemHeapUsedM.first()`</p> |
+| Hadoop | {#HOSTNAME}: Uptime | | DEPENDENT | hadoop.nodemanager.uptime[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='java.lang:type=Runtime')].Uptime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| Hadoop | {#HOSTNAME}: State | <p>State of the node - valid values are: NEW, RUNNING, UNHEALTHY, DECOMMISSIONING, DECOMMISSIONED, LOST, REBOOTED, SHUTDOWN.</p> | DEPENDENT | hadoop.nodemanager.state[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.HostName=='{#HOSTNAME}')].State.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+| Hadoop | {#HOSTNAME}: Version | | DEPENDENT | hadoop.nodemanager.version[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.HostName=='{#HOSTNAME}')].NodeManagerVersion.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+| Hadoop | {#HOSTNAME}: Number of containers | | DEPENDENT | hadoop.nodemanager.numcontainers[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.HostName=='{#HOSTNAME}')].NumContainers.first()`</p> |
+| Hadoop | {#HOSTNAME}: Used memory | | DEPENDENT | hadoop.nodemanager.usedmemory[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.HostName=='{#HOSTNAME}')].UsedMemoryMB.first()`</p> |
+| Hadoop | {#HOSTNAME}: Available memory | | DEPENDENT | hadoop.nodemanager.availablememory[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.HostName=='{#HOSTNAME}')].AvailableMemoryMB.first()`</p> |
+| Hadoop | {#HOSTNAME}: Remaining | <p>Remaining disk space.</p> | DEPENDENT | hadoop.datanode.remaining[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=DataNode,name=FSDatasetState')].Remaining.first()`</p> |
+| Hadoop | {#HOSTNAME}: Used | <p>Used disk space.</p> | DEPENDENT | hadoop.datanode.dfs_used[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=DataNode,name=FSDatasetState')].DfsUsed.first()`</p> |
+| Hadoop | {#HOSTNAME}: Number of failed volumes | <p>Number of failed storage volumes.</p> | DEPENDENT | hadoop.datanode.numfailedvolumes[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=DataNode,name=FSDatasetState')].NumFailedVolumes.first()`</p> |
+| Hadoop | {#HOSTNAME}: JVM Threads | <p>The number of JVM threads.</p> | DEPENDENT | hadoop.datanode.jvm.threads[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='java.lang:type=Threading')].ThreadCount.first()`</p> |
+| Hadoop | {#HOSTNAME}: JVM Garbage collection time | <p>The JVM garbage collection time in milliseconds.</p> | DEPENDENT | hadoop.datanode.jvm.gc_time[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=DataNode,name=JvmMetrics')].GcTimeMillis.first()`</p> |
+| Hadoop | {#HOSTNAME}: JVM Heap usage | <p>The JVM heap usage in MBytes.</p> | DEPENDENT | hadoop.datanode.jvm.mem_heap_used[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='Hadoop:service=DataNode,name=JvmMetrics')].MemHeapUsedM.first()`</p> |
+| Hadoop | {#HOSTNAME}: Uptime | | DEPENDENT | hadoop.datanode.uptime[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.beans[?(@.name=='java.lang:type=Runtime')].Uptime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| Hadoop | {#HOSTNAME}: Version | <p>DataNode software version.</p> | DEPENDENT | hadoop.datanode.version[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.HostName=='{#HOSTNAME}')].version.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+| Hadoop | {#HOSTNAME}: Admin state | <p>Administrative state.</p> | DEPENDENT | hadoop.datanode.admin_state[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.HostName=='{#HOSTNAME}')].adminState.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+| Hadoop | {#HOSTNAME}: Oper state | <p>Operational state.</p> | DEPENDENT | hadoop.datanode.oper_state[{#HOSTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.HostName=='{#HOSTNAME}')].operState.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+| Zabbix_raw_items | Get ResourceManager stats | <p>-</p> | HTTP_AGENT | hadoop.resourcemanager.get |
+| Zabbix_raw_items | Get NameNode stats | <p>-</p> | HTTP_AGENT | hadoop.namenode.get |
+| Zabbix_raw_items | Get NodeManagers states | <p>-</p> | HTTP_AGENT | hadoop.nodemanagers.get<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return JSON.stringify(JSON.parse(JSON.parse(value).beans[0].LiveNodeManagers))`</p> |
+| Zabbix_raw_items | Get DataNodes states | <p>-</p> | HTTP_AGENT | hadoop.datanodes.get<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Zabbix_raw_items | Hadoop NodeManager {#HOSTNAME}: Get stats | | HTTP_AGENT | hadoop.nodemanager.get[{#HOSTNAME}] |
+| Zabbix_raw_items | Hadoop DataNode {#HOSTNAME}: Get stats | | HTTP_AGENT | hadoop.datanode.get[{#HOSTNAME}] |
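
One preprocessing step above that is easy to misread is the JAVASCRIPT step on "Get NodeManagers states": in the ResourceManager response, the `LiveNodeManagers` field is itself a JSON string embedded inside the JSON document, so it is parsed twice and re-serialized so that the dependent per-node items can address it with plain JSONPath. A Python sketch of the same transformation (the sample payload is invented for illustration):

```python
import json

# Invented example of what the HTTP agent item might receive: the outer document
# is JSON, and beans[0].LiveNodeManagers is itself a JSON-encoded string.
raw_value = json.dumps({
    "beans": [
        {"LiveNodeManagers": json.dumps([
            {"HostName": "nm-01", "State": "RUNNING", "NumContainers": 3},
            {"HostName": "nm-02", "State": "RUNNING", "NumContainers": 5},
        ])}
    ]
})

# Python equivalent of the preprocessing step
# `return JSON.stringify(JSON.parse(JSON.parse(value).beans[0].LiveNodeManagers))`
live_node_managers = json.loads(json.loads(raw_value)["beans"][0]["LiveNodeManagers"])
print(json.dumps(live_node_managers))
```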
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|ResourceManager: Service is unavailable |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service["tcp","{$HADOOP.RESOURCEMANAGER.HOST}","{$HADOOP.RESOURCEMANAGER.PORT}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|ResourceManager: Service response time is too high (over {$HADOOP.RESOURCEMANAGER.RESPONSE_TIME.MAX.WARN} for 5m) |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service.perf["tcp","{$HADOOP.RESOURCEMANAGER.HOST}","{$HADOOP.RESOURCEMANAGER.PORT}"].min(5m)}>{$HADOOP.RESOURCEMANAGER.RESPONSE_TIME.MAX.WARN}` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- ResourceManager: Service is unavailable</p> |
-|ResourceManager: Service has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:hadoop.resourcemanager.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|ResourceManager: Failed to fetch ResourceManager API page (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes.</p> |`{TEMPLATE_NAME:hadoop.resourcemanager.uptime.nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- ResourceManager: Service is unavailable</p> |
-|ResourceManager: Cluster has no active NodeManagers |<p>Cluster is unable to execute any jobs without at least one NodeManager.</p> |`{TEMPLATE_NAME:hadoop.resourcemanager.num_active_nm.max(5m)}=0` |HIGH | |
-|ResourceManager: Cluster has unhealthy NodeManagers |<p>YARN considers any node with disk utilization exceeding the value specified under the property yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage (in yarn-site.xml) to be unhealthy. Ample disk space is critical to ensure uninterrupted operation of a Hadoop cluster, and large numbers of unhealthyNodes (the number to alert on depends on the size of your cluster) should be quickly investigated and resolved.</p> |`{TEMPLATE_NAME:hadoop.resourcemanager.num_unhealthy_nm.min(15m)}>0` |AVERAGE | |
-|NameNode: Service is unavailable |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service["tcp","{$HADOOP.NAMENODE.HOST}","{$HADOOP.NAMENODE.PORT}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|NameNode: Service response time is too high (over {$HADOOP.NAMENODE.RESPONSE_TIME.MAX.WARN} for 5m) |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service.perf["tcp","{$HADOOP.NAMENODE.HOST}","{$HADOOP.NAMENODE.PORT}"].min(5m)}>{$HADOOP.NAMENODE.RESPONSE_TIME.MAX.WARN}` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- NameNode: Service is unavailable</p> |
-|NameNode: Service has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:hadoop.namenode.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|NameNode: Failed to fetch NameNode API page (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes.</p> |`{TEMPLATE_NAME:hadoop.namenode.uptime.nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- NameNode: Service is unavailable</p> |
-|NameNode: Cluster capacity remaining is low (below {$HADOOP.CAPACITY_REMAINING.MIN.WARN}% for 15m) |<p>A good practice is to ensure that disk use never exceeds 80 percent capacity.</p> |`{TEMPLATE_NAME:hadoop.namenode.percent_remaining.max(15m)}<{$HADOOP.CAPACITY_REMAINING.MIN.WARN}` |WARNING | |
-|NameNode: Cluster has missing blocks |<p>A missing block is far worse than a corrupt block, because a missing block cannot be recovered by copying a replica.</p> |`{TEMPLATE_NAME:hadoop.namenode.missing_blocks.min(15m)}>0` |AVERAGE | |
-|NameNode: Cluster has volume failures |<p>HDFS now allows for disks to fail in place, without affecting DataNode operations, until a threshold value is reached. This is set on each DataNode via the dfs.datanode.failed.volumes.tolerated property; it defaults to 0, meaning that any volume failure will shut down the DataNode; on a production cluster where DataNodes typically have 6, 8, or 12 disks, setting this parameter to 1 or 2 is typically the best practice.</p> |`{TEMPLATE_NAME:hadoop.namenode.volume_failures_total.min(15m)}>0` |AVERAGE | |
-|NameNode: Cluster has DataNodes in Dead state |<p>The death of a DataNode causes a flurry of network activity, as the NameNode initiates replication of blocks lost on the dead nodes.</p> |`{TEMPLATE_NAME:hadoop.namenode.num_dead_data_nodes.min(5m)}>0` |AVERAGE | |
-|{#HOSTNAME}: Service has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:hadoop.nodemanager.uptime[{#HOSTNAME}].last()}<10m` |INFO |<p>Manual close: YES</p> |
-|{#HOSTNAME}: Failed to fetch NodeManager API page (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes.</p> |`{TEMPLATE_NAME:hadoop.nodemanager.uptime[{#HOSTNAME}].nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- {#HOSTNAME}: NodeManager has state {ITEM.VALUE}.</p> |
-|{#HOSTNAME}: NodeManager has state {ITEM.VALUE}. |<p>The state is different from normal.</p> |`{TEMPLATE_NAME:hadoop.nodemanager.state[{#HOSTNAME}].last()}<>"RUNNING"` |AVERAGE | |
-|{#HOSTNAME}: Service has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:hadoop.datanode.uptime[{#HOSTNAME}].last()}<10m` |INFO |<p>Manual close: YES</p> |
-|{#HOSTNAME}: Failed to fetch DataNode API page (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes.</p> |`{TEMPLATE_NAME:hadoop.datanode.uptime[{#HOSTNAME}].nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- {#HOSTNAME}: DataNode has state {ITEM.VALUE}.</p> |
-|{#HOSTNAME}: DataNode has state {ITEM.VALUE}. |<p>The state is different from normal.</p> |`{TEMPLATE_NAME:hadoop.datanode.oper_state[{#HOSTNAME}].last()}<>"Live"` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------------------------------------------------------------------------------------------|
+| ResourceManager: Service is unavailable | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service["tcp","{$HADOOP.RESOURCEMANAGER.HOST}","{$HADOOP.RESOURCEMANAGER.PORT}"].last()}=0` | AVERAGE | <p>Manual close: YES</p> |
+| ResourceManager: Service response time is too high (over {$HADOOP.RESOURCEMANAGER.RESPONSE_TIME.MAX.WARN} for 5m) | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service.perf["tcp","{$HADOOP.RESOURCEMANAGER.HOST}","{$HADOOP.RESOURCEMANAGER.PORT}"].min(5m)}>{$HADOOP.RESOURCEMANAGER.RESPONSE_TIME.MAX.WARN}` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- ResourceManager: Service is unavailable</p> |
+| ResourceManager: Service has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes.</p> | `{TEMPLATE_NAME:hadoop.resourcemanager.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| ResourceManager: Failed to fetch ResourceManager API page (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes.</p> | `{TEMPLATE_NAME:hadoop.resourcemanager.uptime.nodata(30m)}=1` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- ResourceManager: Service is unavailable</p> |
+| ResourceManager: Cluster has no active NodeManagers | <p>Cluster is unable to execute any jobs without at least one NodeManager.</p> | `{TEMPLATE_NAME:hadoop.resourcemanager.num_active_nm.max(5m)}=0` | HIGH | |
+| ResourceManager: Cluster has unhealthy NodeManagers | <p>YARN considers any node with disk utilization exceeding the value specified under the property yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage (in yarn-site.xml) to be unhealthy. Ample disk space is critical to ensure uninterrupted operation of a Hadoop cluster, and large numbers of unhealthy nodes (the number to alert on depends on the size of your cluster) should be quickly investigated and resolved.</p> | `{TEMPLATE_NAME:hadoop.resourcemanager.num_unhealthy_nm.min(15m)}>0` | AVERAGE | |
+| NameNode: Service is unavailable | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service["tcp","{$HADOOP.NAMENODE.HOST}","{$HADOOP.NAMENODE.PORT}"].last()}=0` | AVERAGE | <p>Manual close: YES</p> |
+| NameNode: Service response time is too high (over {$HADOOP.NAMENODE.RESPONSE_TIME.MAX.WARN} for 5m) | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service.perf["tcp","{$HADOOP.NAMENODE.HOST}","{$HADOOP.NAMENODE.PORT}"].min(5m)}>{$HADOOP.NAMENODE.RESPONSE_TIME.MAX.WARN}` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- NameNode: Service is unavailable</p> |
+| NameNode: Service has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:hadoop.namenode.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| NameNode: Failed to fetch NameNode API page (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes.</p> | `{TEMPLATE_NAME:hadoop.namenode.uptime.nodata(30m)}=1` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- NameNode: Service is unavailable</p> |
+| NameNode: Cluster capacity remaining is low (below {$HADOOP.CAPACITY_REMAINING.MIN.WARN}% for 15m) | <p>A good practice is to ensure that disk use never exceeds 80 percent capacity.</p> | `{TEMPLATE_NAME:hadoop.namenode.percent_remaining.max(15m)}<{$HADOOP.CAPACITY_REMAINING.MIN.WARN}` | WARNING | |
+| NameNode: Cluster has missing blocks | <p>A missing block is far worse than a corrupt block, because a missing block cannot be recovered by copying a replica.</p> | `{TEMPLATE_NAME:hadoop.namenode.missing_blocks.min(15m)}>0` | AVERAGE | |
+| NameNode: Cluster has volume failures | <p>HDFS now allows for disks to fail in place, without affecting DataNode operations, until a threshold value is reached. This is set on each DataNode via the dfs.datanode.failed.volumes.tolerated property; it defaults to 0, meaning that any volume failure will shut down the DataNode; on a production cluster where DataNodes typically have 6, 8, or 12 disks, setting this parameter to 1 or 2 is typically the best practice.</p> | `{TEMPLATE_NAME:hadoop.namenode.volume_failures_total.min(15m)}>0` | AVERAGE | |
+| NameNode: Cluster has DataNodes in Dead state | <p>The death of a DataNode causes a flurry of network activity, as the NameNode initiates replication of blocks lost on the dead nodes.</p> | `{TEMPLATE_NAME:hadoop.namenode.num_dead_data_nodes.min(5m)}>0` | AVERAGE | |
+| {#HOSTNAME}: Service has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:hadoop.nodemanager.uptime[{#HOSTNAME}].last()}<10m` | INFO | <p>Manual close: YES</p> |
+| {#HOSTNAME}: Failed to fetch NodeManager API page (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes.</p> | `{TEMPLATE_NAME:hadoop.nodemanager.uptime[{#HOSTNAME}].nodata(30m)}=1` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- {#HOSTNAME}: NodeManager has state {ITEM.VALUE}.</p> |
+| {#HOSTNAME}: NodeManager has state {ITEM.VALUE}. | <p>The state is different from normal.</p> | `{TEMPLATE_NAME:hadoop.nodemanager.state[{#HOSTNAME}].last()}<>"RUNNING"` | AVERAGE | |
+| {#HOSTNAME}: Service has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:hadoop.datanode.uptime[{#HOSTNAME}].last()}<10m` | INFO | <p>Manual close: YES</p> |
+| {#HOSTNAME}: Failed to fetch DataNode API page (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes.</p> | `{TEMPLATE_NAME:hadoop.datanode.uptime[{#HOSTNAME}].nodata(30m)}=1` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- {#HOSTNAME}: DataNode has state {ITEM.VALUE}.</p> |
+| {#HOSTNAME}: DataNode has state {ITEM.VALUE}. | <p>The state is different from normal.</p> | `{TEMPLATE_NAME:hadoop.datanode.oper_state[{#HOSTNAME}].last()}<>"Live"` | AVERAGE | |
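+
+The NodeManager-related triggers above key off the ResourceManager's node counters. As an illustration only (not necessarily the collection path this template uses), the same counters are exposed by the YARN REST API under `/ws/v1/cluster/metrics`; the host and port below are placeholders for `{$HADOOP.RESOURCEMANAGER.HOST}` and `{$HADOOP.RESOURCEMANAGER.PORT}`:
+
+```python
+# Minimal sketch: read the node counters behind the "no active NodeManagers"
+# and "unhealthy NodeManagers" triggers from the YARN ResourceManager REST API.
+# The host below is hypothetical; point it at your ResourceManager web UI.
+import json
+from urllib.request import urlopen
+
+RM = "http://resourcemanager.example.com:8088"
+
+with urlopen(f"{RM}/ws/v1/cluster/metrics") as resp:
+    metrics = json.load(resp)["clusterMetrics"]
+
+# Single-sample equivalents of the max(5m)=0 and min(15m)>0 trigger conditions.
+if metrics["activeNodes"] == 0:
+    print("HIGH: cluster has no active NodeManagers")
+if metrics["unhealthyNodes"] > 0:
+    print(f"AVERAGE: {metrics['unhealthyNodes']} unhealthy NodeManager(s)")
+```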
## Feedback
diff --git a/templates/app/haproxy_agent/README.md b/templates/app/haproxy_agent/README.md
index 29cbb8696ae..5b14516c373 100644
--- a/templates/app/haproxy_agent/README.md
+++ b/templates/app/haproxy_agent/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor HAProxy by Zabbix that works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
@@ -19,7 +19,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
Set up the [HAProxy Stats Page](https://www.haproxy.com/blog/exploring-the-haproxy-stats-page/).
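+
+Once the stats page is enabled, it is worth checking that it is reachable in the CSV form the template consumes (with the default macros: `http://<host>:8404/stats;csv`). A quick sanity check with a placeholder host name, not part of the template itself:
+
+```python
+# Fetch the HAProxy stats report as CSV and list the proxies it reports.
+# The URL mirrors the template's default macros; replace the host with your own.
+import csv
+from urllib.request import urlopen
+
+URL = "http://haproxy.example.com:8404/stats;csv"
+
+with urlopen(URL) as resp:
+    text = resp.read().decode()
+
+# The header line starts with "# "; strip it so csv.DictReader can use it
+# (the template's REGEX preprocessing step does the same).
+rows = list(csv.DictReader(text.lstrip("# ").splitlines()))
+print(f"{len(rows)} rows, e.g. {rows[0]['pxname']} / {rows[0]['svname']}")
+```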
@@ -41,23 +41,23 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$HAPROXY.BACK_ERESP.MAX.WARN} |<p>Maximum of responses with error on BACKEND for trigger expression.</p> |`10` |
-|{$HAPROXY.BACK_QCUR.MAX.WARN} |<p>Maximum number of requests on BACKEND unassigned in queue for trigger expression.</p> |`10` |
-|{$HAPROXY.BACK_QTIME.MAX.WARN} |<p>Maximum of average time spent in queue on BACKEND for trigger expression.</p> |`10s` |
-|{$HAPROXY.BACK_RTIME.MAX.WARN} |<p>Maximum of average BACKEND response time for trigger expression.</p> |`10s` |
-|{$HAPROXY.FRONT_DREQ.MAX.WARN} |<p>The HAProxy maximum denied requests for trigger expression.</p> |`10` |
-|{$HAPROXY.FRONT_EREQ.MAX.WARN} |<p>The HAProxy maximum number of request errors for trigger expression.</p> |`10` |
-|{$HAPROXY.FRONT_SUTIL.MAX.WARN} |<p>Maximum of session usage percentage on frontend for trigger expression.</p> |`80` |
-|{$HAPROXY.RESPONSE_TIME.MAX.WARN} |<p>The HAProxy stats page maximum response time in seconds for trigger expression.</p> |`10s` |
-|{$HAPROXY.SERVER_ERESP.MAX.WARN} |<p>Maximum of responses with error on server for trigger expression.</p> |`10` |
-|{$HAPROXY.SERVER_QCUR.MAX.WARN} |<p>Maximum number of requests on server unassigned in queue for trigger expression.</p> |`10` |
-|{$HAPROXY.SERVER_QTIME.MAX.WARN} |<p>Maximum of average time spent in queue on server for trigger expression.</p> |`10s` |
-|{$HAPROXY.SERVER_RTIME.MAX.WARN} |<p>Maximum of average server response time for trigger expression.</p> |`10s` |
-|{$HAPROXY.STATS.PATH} |<p>The path of HAProxy stats page.</p> |`stats` |
-|{$HAPROXY.STATS.PORT} |<p>The port of the HAProxy stats host or container.</p> |`8404` |
-|{$HAPROXY.STATS.SCHEME} |<p>The scheme of HAProxy stats page(http/https).</p> |`http` |
+| Name | Description | Default |
+|-----------------------------------|------------------------------------------------------------------------------------------|---------|
+| {$HAPROXY.BACK_ERESP.MAX.WARN} | <p>Maximum of responses with error on BACKEND for trigger expression.</p> | `10` |
+| {$HAPROXY.BACK_QCUR.MAX.WARN} | <p>Maximum number of requests on BACKEND unassigned in queue for trigger expression.</p> | `10` |
+| {$HAPROXY.BACK_QTIME.MAX.WARN} | <p>Maximum of average time spent in queue on BACKEND for trigger expression.</p> | `10s` |
+| {$HAPROXY.BACK_RTIME.MAX.WARN} | <p>Maximum of average BACKEND response time for trigger expression.</p> | `10s` |
+| {$HAPROXY.FRONT_DREQ.MAX.WARN} | <p>The HAProxy maximum denied requests for trigger expression.</p> | `10` |
+| {$HAPROXY.FRONT_EREQ.MAX.WARN} | <p>The HAProxy maximum number of request errors for trigger expression.</p> | `10` |
+| {$HAPROXY.FRONT_SUTIL.MAX.WARN} | <p>Maximum of session usage percentage on frontend for trigger expression.</p> | `80` |
+| {$HAPROXY.RESPONSE_TIME.MAX.WARN} | <p>The HAProxy stats page maximum response time in seconds for trigger expression.</p> | `10s` |
+| {$HAPROXY.SERVER_ERESP.MAX.WARN} | <p>Maximum of responses with error on server for trigger expression.</p> | `10` |
+| {$HAPROXY.SERVER_QCUR.MAX.WARN} | <p>Maximum number of requests on server unassigned in queue for trigger expression.</p> | `10` |
+| {$HAPROXY.SERVER_QTIME.MAX.WARN} | <p>Maximum of average time spent in queue on server for trigger expression.</p> | `10s` |
+| {$HAPROXY.SERVER_RTIME.MAX.WARN} | <p>Maximum of average server response time for trigger expression.</p> | `10s` |
+| {$HAPROXY.STATS.PATH} | <p>The path of HAProxy stats page.</p> | `stats` |
+| {$HAPROXY.STATS.PORT} | <p>The port of the HAProxy stats host or container.</p> | `8404` |
+| {$HAPROXY.STATS.SCHEME}           | <p>The scheme of HAProxy stats page (http/https).</p>                                      | `http`  |
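+
+All of these macros can also be overridden at host level, so that, for example, a busier frontend gets a higher session-utilization threshold. A sketch of doing that through the Zabbix API (`usermacro.create`); the URL, host id and token are placeholders:
+
+```python
+# Hypothetical example: raise {$HAPROXY.FRONT_SUTIL.MAX.WARN} on one host
+# via the Zabbix API instead of changing the template-level default.
+import json
+from urllib.request import Request, urlopen
+
+API = "https://zabbix.example.com/api_jsonrpc.php"    # placeholder URL
+payload = {
+    "jsonrpc": "2.0",
+    "method": "usermacro.create",
+    "params": {"hostid": "10501",                      # placeholder host id
+               "macro": "{$HAPROXY.FRONT_SUTIL.MAX.WARN}",
+               "value": "90"},
+    "auth": "YOUR_API_TOKEN",                          # placeholder session/token
+    "id": 1,
+}
+req = Request(API, data=json.dumps(payload).encode(),
+              headers={"Content-Type": "application/json-rpc"})
+print(json.load(urlopen(req)))
+```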
## Template links
@@ -65,121 +65,121 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|BACKEND discovery |<p>Discovery backends</p> |DEPENDENT |haproxy.backend.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `http`</p> |
-|FRONTEND discovery |<p>Discovery frontends</p> |DEPENDENT |haproxy.frontend.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `FRONTEND`</p><p>- B: {#MODE} MATCHES_REGEX `http`</p> |
-|Servers discovery |<p>Discovery servers</p> |DEPENDENT |haproxy.server.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} NOT_MATCHES_REGEX `FRONTEND|BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `http`</p> |
-|TCP BACKEND discovery |<p>Discovery TCP backends</p> |DEPENDENT |haproxy.backend_tcp.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `tcp`</p> |
-|TCP FRONTEND discovery |<p>Discovery TCP frontends</p> |DEPENDENT |haproxy.frontend_tcp.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `FRONTEND`</p><p>- B: {#MODE} MATCHES_REGEX `tcp`</p> |
-|TCP Servers discovery |<p>Discovery tcp servers</p> |DEPENDENT |haproxy.server_tcp.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} NOT_MATCHES_REGEX `FRONTEND|BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `tcp`</p> |
+| Name | Description | Type | Key and additional info |
+|------------------------|--------------------------------|-----------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
+| BACKEND discovery | <p>Discovery backends</p> | DEPENDENT | haproxy.backend.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `http`</p> |
+| FRONTEND discovery | <p>Discovery frontends</p> | DEPENDENT | haproxy.frontend.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `FRONTEND`</p><p>- B: {#MODE} MATCHES_REGEX `http`</p> |
+| Servers discovery | <p>Discovery servers</p> | DEPENDENT | haproxy.server.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} NOT_MATCHES_REGEX `FRONTEND|BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `http`</p> |
+| TCP BACKEND discovery | <p>Discovery TCP backends</p> | DEPENDENT | haproxy.backend_tcp.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `tcp`</p> |
+| TCP FRONTEND discovery | <p>Discovery TCP frontends</p> | DEPENDENT | haproxy.frontend_tcp.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `FRONTEND`</p><p>- B: {#MODE} MATCHES_REGEX `tcp`</p> |
+| TCP Servers discovery | <p>Discovery tcp servers</p> | DEPENDENT | haproxy.server_tcp.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} NOT_MATCHES_REGEX `FRONTEND|BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `tcp`</p> |
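+
+Each discovery rule above filters the same JSON array (built from the stats CSV by the "HAProxy: Get stats" item) on `{#SVNAME}` and `{#MODE}`. A rough Python equivalent of that routing, assuming `stats` is the parsed list of rows:
+
+```python
+import re
+
+def discover(stats, svname_regex, mode_regex, svname_must_match=True):
+    """Mimic a discovery filter: AND of an svname regex and a mode regex."""
+    found = []
+    for row in stats:
+        svname_ok = bool(re.search(svname_regex, row["svname"])) == svname_must_match
+        if svname_ok and re.search(mode_regex, row["mode"]):
+            found.append({"{#PXNAME}": row["pxname"], "{#SVNAME}": row["svname"]})
+    return found
+
+stats = [
+    {"pxname": "web", "svname": "FRONTEND", "mode": "http"},
+    {"pxname": "app", "svname": "BACKEND",  "mode": "http"},
+    {"pxname": "app", "svname": "srv1",     "mode": "http"},
+]
+
+print(discover(stats, "FRONTEND|BACKEND", "http", svname_must_match=False))  # Servers discovery
+print(discover(stats, "BACKEND", "http"))                                    # BACKEND discovery
+```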
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|HAProxy |HAProxy: Version |<p>-</p> |DEPENDENT |haproxy.version<p>**Preprocessing**:</p><p>- REGEX: `HAProxy version ([^,]*), \1`</p><p>⛔️ON_FAIL: `CUSTOM_ERROR -> HAProxy version is not found`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HAProxy |HAProxy: Uptime |<p>-</p> |DEPENDENT |haproxy.uptime<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|HAProxy |HAProxy: Service status |<p>-</p> |ZABBIX_PASSIVE |net.tcp.service["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|HAProxy |HAProxy: Service response time |<p>-</p> |ZABBIX_PASSIVE |net.tcp.service.perf["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"] |
-|HAProxy |HAProxy Backend {#PXNAME}: Status |<p>-</p> |DEPENDENT |haproxy.backend.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|HAProxy |HAProxy Backend {#PXNAME}: Responses time |<p>Average backend response time (in ms) for the last 1,024 requests</p> |DEPENDENT |haproxy.backend.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy Backend {#PXNAME}: Errors connection per second |<p>Number of requests that encountered an error attempting to connect to a backend server.</p> |DEPENDENT |haproxy.backend.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Backend {#PXNAME}: Responses denied per second |<p>Responses denied due to security concerns (ACL-restricted).</p> |DEPENDENT |haproxy.backend.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Backend {#PXNAME}: Response errors per second |<p>Number of requests whose responses yielded an error</p> |DEPENDENT |haproxy.backend.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Backend {#PXNAME}: Unassigned requests |<p>Current number of requests unassigned in queue.</p> |DEPENDENT |haproxy.backend.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
-|HAProxy |HAProxy Backend {#PXNAME}: Time in queue |<p>Average time spent in queue (in ms) for the last 1,024 requests</p> |DEPENDENT |haproxy.backend.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy Backend {#PXNAME}: Redispatched requests per second |<p>Number of times a request was redispatched to a different backend.</p> |DEPENDENT |haproxy.backend.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Backend {#PXNAME}: Retried connections per second |<p>Number of times a connection was retried.</p> |DEPENDENT |haproxy.backend.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Requests rate |<p>HTTP requests per second</p> |DEPENDENT |haproxy.frontend.req_rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].req_rate.first()`</p> |
-|HAProxy |HAProxy Frontend {#PXNAME}: Sessions rate |<p>Number of sessions created per second</p> |DEPENDENT |haproxy.frontend.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rate.first()`</p> |
-|HAProxy |HAProxy Frontend {#PXNAME}: Established sessions |<p>The current number of established sessions.</p> |DEPENDENT |haproxy.frontend.scur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].scur.first()`</p> |
-|HAProxy |HAProxy Frontend {#PXNAME}: Session limits |<p>The most simultaneous sessions that are allowed, as defined by the maxconn setting in the frontend.</p> |DEPENDENT |haproxy.frontend.slim[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].slim.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HAProxy |HAProxy Frontend {#PXNAME}: Session utilization |<p>Percentage of sessions used (scur / slim * 100).</p> |CALCULATED |haproxy.frontend.sutil[{#PXNAME}:{#SVNAME}]<p>**Expression**:</p>`last(haproxy.frontend.scur[{#PXNAME}:{#SVNAME}]) / last(haproxy.frontend.slim[{#PXNAME}:{#SVNAME}]) * 100` |
-|HAProxy |HAProxy Frontend {#PXNAME}: Request errors per second |<p>Number of request errors per second.</p> |DEPENDENT |haproxy.frontend.ereq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].ereq.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Denied requests per second |<p>Requests denied due to security concerns (ACL-restricted) per second.</p> |DEPENDENT |haproxy.frontend.dreq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dreq.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Number of responses with codes 1xx per second |<p>Number of informational HTTP responses per second.</p> |DEPENDENT |haproxy.frontend.hrsp_1xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_1xx.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Number of responses with codes 2xx per second |<p>Number of successful HTTP responses per second.</p> |DEPENDENT |haproxy.frontend.hrsp_2xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_2xx.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Number of responses with codes 3xx per second |<p>Number of HTTP redirections per second.</p> |DEPENDENT |haproxy.frontend.hrsp_3xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_3xx.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Number of responses with codes 4xx per second |<p>Number of HTTP client errors per second.</p> |DEPENDENT |haproxy.frontend.hrsp_4xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_4xx.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Number of responses with codes 5xx per second |<p>Number of HTTP server errors per second.</p> |DEPENDENT |haproxy.frontend.hrsp_5xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_5xx.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Incoming traffic |<p>Number of bits received by the frontend</p> |DEPENDENT |haproxy.frontend.bin[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bin.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Outgoing traffic |<p>Number of bits sent by the frontend</p> |DEPENDENT |haproxy.frontend.bout[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bout.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Status |<p>-</p> |DEPENDENT |haproxy.server.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Responses time |<p>Average server response time (in ms) for the last 1,024 requests.</p> |DEPENDENT |haproxy.server.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Errors connection per second |<p>Number of requests that encountered an error attempting to connect to a backend server.</p> |DEPENDENT |haproxy.server.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Responses denied per second |<p>Responses denied due to security concerns (ACL-restricted).</p> |DEPENDENT |haproxy.server.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Response errors per second |<p>Number of requests whose responses yielded an error.</p> |DEPENDENT |haproxy.server.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Unassigned requests |<p>Current number of requests unassigned in queue.</p> |DEPENDENT |haproxy.server.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Time in queue |<p>Average time spent in queue (in ms) for the last 1,024 requests.</p> |DEPENDENT |haproxy.server.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Redispatched requests per second |<p>Number of times a request was redispatched to a different backend.</p> |DEPENDENT |haproxy.server.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Retried connections per second |<p>Number of times a connection was retried.</p> |DEPENDENT |haproxy.server.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Number of responses with codes 4xx per second |<p>Number of HTTP client errors per second.</p> |DEPENDENT |haproxy.server.hrsp_4xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_4xx.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Number of responses with codes 5xx per second |<p>Number of HTTP server errors per second.</p> |DEPENDENT |haproxy.server.hrsp_5xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_5xx.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Status |<p>-</p> |DEPENDENT |haproxy.backend_tcp.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Responses time |<p>Average backend response time (in ms) for the last 1,024 requests</p> |DEPENDENT |haproxy.backend_tcp.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Errors connection per second |<p>Number of requests that encountered an error attempting to connect to a backend server.</p> |DEPENDENT |haproxy.backend_tcp.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Responses denied per second |<p>Responses denied due to security concerns (ACL-restricted).</p> |DEPENDENT |haproxy.backend_tcp.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Response errors per second |<p>Number of requests whose responses yielded an error</p> |DEPENDENT |haproxy.backend_tcp.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Unassigned requests |<p>Current number of requests unassigned in queue.</p> |DEPENDENT |haproxy.backend_tcp.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Time in queue |<p>Average time spent in queue (in ms) for the last 1,024 requests</p> |DEPENDENT |haproxy.backend_tcp.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Redispatched requests per second |<p>Number of times a request was redispatched to a different backend.</p> |DEPENDENT |haproxy.backend_tcp.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Retried connections per second |<p>Number of times a connection was retried.</p> |DEPENDENT |haproxy.backend_tcp.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Requests rate |<p>HTTP requests per second</p> |DEPENDENT |haproxy.frontend_tcp.req_rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].req_rate.first()`</p> |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Sessions rate |<p>Number of sessions created per second</p> |DEPENDENT |haproxy.frontend_tcp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rate.first()`</p> |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Established sessions |<p>The current number of established sessions.</p> |DEPENDENT |haproxy.frontend_tcp.scur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].scur.first()`</p> |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Session limits |<p>The most simultaneous sessions that are allowed, as defined by the maxconn setting in the frontend.</p> |DEPENDENT |haproxy.frontend_tcp.slim[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].slim.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Session utilization |<p>Percentage of sessions used (scur / slim * 100).</p> |CALCULATED |haproxy.frontend_tcp.sutil[{#PXNAME}:{#SVNAME}]<p>**Expression**:</p>`last(haproxy.frontend_tcp.scur[{#PXNAME}:{#SVNAME}]) / last(haproxy.frontend_tcp.slim[{#PXNAME}:{#SVNAME}]) * 100` |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Request errors per second |<p>Number of request errors per second.</p> |DEPENDENT |haproxy.frontend_tcp.ereq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].ereq.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Denied requests per second |<p>Requests denied due to security concerns (ACL-restricted) per second.</p> |DEPENDENT |haproxy.frontend_tcp.dreq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dreq.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Incoming traffic |<p>Number of bits received by the frontend</p> |DEPENDENT |haproxy.frontend_tcp.bin[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bin.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Outgoing traffic |<p>Number of bits sent by the frontend</p> |DEPENDENT |haproxy.frontend_tcp.bout[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bout.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Status |<p>-</p> |DEPENDENT |haproxy.server_tcp.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Responses time |<p>Average server response time (in ms) for the last 1,024 requests.</p> |DEPENDENT |haproxy.server_tcp.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Errors connection per second |<p>Number of requests that encountered an error attempting to connect to a backend server.</p> |DEPENDENT |haproxy.server_tcp.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Responses denied per second |<p>Responses denied due to security concerns (ACL-restricted).</p> |DEPENDENT |haproxy.server_tcp.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Response errors per second |<p>Number of requests whose responses yielded an error.</p> |DEPENDENT |haproxy.server_tcp.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Unassigned requests |<p>Current number of requests unassigned in queue.</p> |DEPENDENT |haproxy.server_tcp.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Time in queue |<p>Average time spent in queue (in ms) for the last 1,024 requests.</p> |DEPENDENT |haproxy.server_tcp.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Redispatched requests per second |<p>Number of times a request was redispatched to a different backend.</p> |DEPENDENT |haproxy.server_tcp.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Retried connections per second |<p>Number of times a connection was retried.</p> |DEPENDENT |haproxy.server_tcp.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
-|Zabbix_raw_items |HAProxy: Get stats |<p>HAProxy Statistics Report in CSV format</p> |ZABBIX_PASSIVE |web.page.get["{$HAPROXY.STATS.SCHEME}://{HOST.CONN}:{$HAPROXY.STATS.PORT}/{$HAPROXY.STATS.PATH};csv"]<p>**Preprocessing**:</p><p>- REGEX: `# ([\s\S]*) \1`</p><p>- CSV_TO_JSON: ` 1`</p> |
-|Zabbix_raw_items |HAProxy: Get stats page |<p>HAProxy Statistics Report HTML</p> |ZABBIX_PASSIVE |web.page.get["{$HAPROXY.STATS.SCHEME}://{HOST.CONN}:{$HAPROXY.STATS.PORT}/{$HAPROXY.STATS.PATH}"] |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|----------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------|----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| HAProxy | HAProxy: Version | <p>-</p> | DEPENDENT | haproxy.version<p>**Preprocessing**:</p><p>- REGEX: `HAProxy version ([^,]*), \1`</p><p>⛔️ON_FAIL: `CUSTOM_ERROR -> HAProxy version is not found`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| HAProxy | HAProxy: Uptime | <p>-</p> | DEPENDENT | haproxy.uptime<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| HAProxy | HAProxy: Service status | <p>-</p> | ZABBIX_PASSIVE | net.tcp.service["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| HAProxy | HAProxy: Service response time | <p>-</p> | ZABBIX_PASSIVE | net.tcp.service.perf["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"] |
+| HAProxy | HAProxy Backend {#PXNAME}: Status | <p>-</p> | DEPENDENT | haproxy.backend.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| HAProxy | HAProxy Backend {#PXNAME}: Responses time | <p>Average backend response time (in ms) for the last 1,024 requests</p> | DEPENDENT | haproxy.backend.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy Backend {#PXNAME}: Errors connection per second | <p>Number of requests that encountered an error attempting to connect to a backend server.</p> | DEPENDENT | haproxy.backend.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Backend {#PXNAME}: Responses denied per second | <p>Responses denied due to security concerns (ACL-restricted).</p> | DEPENDENT | haproxy.backend.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Backend {#PXNAME}: Response errors per second | <p>Number of requests whose responses yielded an error</p> | DEPENDENT | haproxy.backend.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Backend {#PXNAME}: Unassigned requests | <p>Current number of requests unassigned in queue.</p> | DEPENDENT | haproxy.backend.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
+| HAProxy | HAProxy Backend {#PXNAME}: Time in queue | <p>Average time spent in queue (in ms) for the last 1,024 requests</p> | DEPENDENT | haproxy.backend.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy Backend {#PXNAME}: Redispatched requests per second | <p>Number of times a request was redispatched to a different backend.</p> | DEPENDENT | haproxy.backend.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Backend {#PXNAME}: Retried connections per second | <p>Number of times a connection was retried.</p> | DEPENDENT | haproxy.backend.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Requests rate | <p>HTTP requests per second</p> | DEPENDENT | haproxy.frontend.req_rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].req_rate.first()`</p> |
+| HAProxy | HAProxy Frontend {#PXNAME}: Sessions rate | <p>Number of sessions created per second</p> | DEPENDENT | haproxy.frontend.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rate.first()`</p> |
+| HAProxy | HAProxy Frontend {#PXNAME}: Established sessions | <p>The current number of established sessions.</p> | DEPENDENT | haproxy.frontend.scur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].scur.first()`</p> |
+| HAProxy | HAProxy Frontend {#PXNAME}: Session limits | <p>The most simultaneous sessions that are allowed, as defined by the maxconn setting in the frontend.</p> | DEPENDENT | haproxy.frontend.slim[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].slim.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| HAProxy | HAProxy Frontend {#PXNAME}: Session utilization | <p>Percentage of sessions used (scur / slim * 100).</p> | CALCULATED | haproxy.frontend.sutil[{#PXNAME}:{#SVNAME}]<p>**Expression**:</p>`last(haproxy.frontend.scur[{#PXNAME}:{#SVNAME}]) / last(haproxy.frontend.slim[{#PXNAME}:{#SVNAME}]) * 100` |
+| HAProxy | HAProxy Frontend {#PXNAME}: Request errors per second | <p>Number of request errors per second.</p> | DEPENDENT | haproxy.frontend.ereq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].ereq.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Denied requests per second | <p>Requests denied due to security concerns (ACL-restricted) per second.</p> | DEPENDENT | haproxy.frontend.dreq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dreq.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Number of responses with codes 1xx per second | <p>Number of informational HTTP responses per second.</p> | DEPENDENT | haproxy.frontend.hrsp_1xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_1xx.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Number of responses with codes 2xx per second | <p>Number of successful HTTP responses per second.</p> | DEPENDENT | haproxy.frontend.hrsp_2xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_2xx.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Number of responses with codes 3xx per second | <p>Number of HTTP redirections per second.</p> | DEPENDENT | haproxy.frontend.hrsp_3xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_3xx.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Number of responses with codes 4xx per second | <p>Number of HTTP client errors per second.</p> | DEPENDENT | haproxy.frontend.hrsp_4xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_4xx.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Number of responses with codes 5xx per second | <p>Number of HTTP server errors per second.</p> | DEPENDENT | haproxy.frontend.hrsp_5xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_5xx.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Incoming traffic | <p>Number of bits received by the frontend</p> | DEPENDENT | haproxy.frontend.bin[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bin.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Outgoing traffic | <p>Number of bits sent by the frontend</p> | DEPENDENT | haproxy.frontend.bout[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bout.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Status | <p>-</p> | DEPENDENT | haproxy.server.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Responses time | <p>Average server response time (in ms) for the last 1,024 requests.</p> | DEPENDENT | haproxy.server.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Errors connection per second | <p>Number of requests that encountered an error attempting to connect to a backend server.</p> | DEPENDENT | haproxy.server.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Responses denied per second | <p>Responses denied due to security concerns (ACL-restricted).</p> | DEPENDENT | haproxy.server.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Response errors per second | <p>Number of requests whose responses yielded an error.</p> | DEPENDENT | haproxy.server.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Unassigned requests | <p>Current number of requests unassigned in queue.</p> | DEPENDENT | haproxy.server.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Time in queue | <p>Average time spent in queue (in ms) for the last 1,024 requests.</p> | DEPENDENT | haproxy.server.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Redispatched requests per second | <p>Number of times a request was redispatched to a different backend.</p> | DEPENDENT | haproxy.server.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Retried connections per second | <p>Number of times a connection was retried.</p> | DEPENDENT | haproxy.server.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Number of responses with codes 4xx per second | <p>Number of HTTP client errors per second.</p> | DEPENDENT | haproxy.server.hrsp_4xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_4xx.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Number of responses with codes 5xx per second | <p>Number of HTTP server errors per second.</p> | DEPENDENT | haproxy.server.hrsp_5xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_5xx.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Status | <p>-</p> | DEPENDENT | haproxy.backend_tcp.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Responses time | <p>Average backend response time (in ms) for the last 1,024 requests</p> | DEPENDENT | haproxy.backend_tcp.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Errors connection per second | <p>Number of requests that encountered an error attempting to connect to a backend server.</p> | DEPENDENT | haproxy.backend_tcp.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Responses denied per second | <p>Responses denied due to security concerns (ACL-restricted).</p> | DEPENDENT | haproxy.backend_tcp.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Response errors per second | <p>Number of requests whose responses yielded an error</p> | DEPENDENT | haproxy.backend_tcp.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Unassigned requests | <p>Current number of requests unassigned in queue.</p> | DEPENDENT | haproxy.backend_tcp.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Time in queue | <p>Average time spent in queue (in ms) for the last 1,024 requests</p> | DEPENDENT | haproxy.backend_tcp.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Redispatched requests per second | <p>Number of times a request was redispatched to a different backend.</p> | DEPENDENT | haproxy.backend_tcp.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Retried connections per second | <p>Number of times a connection was retried.</p> | DEPENDENT | haproxy.backend_tcp.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Requests rate | <p>HTTP requests per second</p> | DEPENDENT | haproxy.frontend_tcp.req_rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].req_rate.first()`</p> |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Sessions rate | <p>Number of sessions created per second</p> | DEPENDENT | haproxy.frontend_tcp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rate.first()`</p> |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Established sessions | <p>The current number of established sessions.</p> | DEPENDENT | haproxy.frontend_tcp.scur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].scur.first()`</p> |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Session limits | <p>The most simultaneous sessions that are allowed, as defined by the maxconn setting in the frontend.</p> | DEPENDENT | haproxy.frontend_tcp.slim[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].slim.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Session utilization | <p>Percentage of sessions used (scur / slim * 100).</p> | CALCULATED | haproxy.frontend_tcp.sutil[{#PXNAME}:{#SVNAME}]<p>**Expression**:</p>`last(haproxy.frontend_tcp.scur[{#PXNAME}:{#SVNAME}]) / last(haproxy.frontend_tcp.slim[{#PXNAME}:{#SVNAME}]) * 100` |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Request errors per second | <p>Number of request errors per second.</p> | DEPENDENT | haproxy.frontend_tcp.ereq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].ereq.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Denied requests per second | <p>Requests denied due to security concerns (ACL-restricted) per second.</p> | DEPENDENT | haproxy.frontend_tcp.dreq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dreq.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Incoming traffic | <p>Number of bits received by the frontend</p> | DEPENDENT | haproxy.frontend_tcp.bin[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bin.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Outgoing traffic | <p>Number of bits sent by the frontend</p> | DEPENDENT | haproxy.frontend_tcp.bout[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bout.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Status | <p>-</p> | DEPENDENT | haproxy.server_tcp.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Responses time | <p>Average server response time (in ms) for the last 1,024 requests.</p> | DEPENDENT | haproxy.server_tcp.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Errors connection per second | <p>Number of requests that encountered an error attempting to connect to a backend server.</p> | DEPENDENT | haproxy.server_tcp.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Responses denied per second | <p>Responses denied due to security concerns (ACL-restricted).</p> | DEPENDENT | haproxy.server_tcp.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Response errors per second | <p>Number of requests whose responses yielded an error.</p> | DEPENDENT | haproxy.server_tcp.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Unassigned requests | <p>Current number of requests unassigned in queue.</p> | DEPENDENT | haproxy.server_tcp.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Time in queue | <p>Average time spent in queue (in ms) for the last 1,024 requests.</p> | DEPENDENT | haproxy.server_tcp.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Redispatched requests per second | <p>Number of times a request was redispatched to a different backend.</p> | DEPENDENT | haproxy.server_tcp.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Retried connections per second | <p>Number of times a connection was retried.</p> | DEPENDENT | haproxy.server_tcp.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
+| Zabbix_raw_items | HAProxy: Get stats | <p>HAProxy Statistics Report in CSV format</p> | ZABBIX_PASSIVE | web.page.get["{$HAPROXY.STATS.SCHEME}://{HOST.CONN}:{$HAPROXY.STATS.PORT}/{$HAPROXY.STATS.PATH};csv"]<p>**Preprocessing**:</p><p>- REGEX: `# ([\s\S]*) \1`</p><p>- CSV_TO_JSON: ` 1`</p> |
+| Zabbix_raw_items | HAProxy: Get stats page | <p>HAProxy Statistics Report HTML</p> | ZABBIX_PASSIVE | web.page.get["{$HAPROXY.STATS.SCHEME}://{HOST.CONN}:{$HAPROXY.STATS.PORT}/{$HAPROXY.STATS.PATH}"] |
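+
+Most dependent items above apply the same preprocessing pattern: a JSONPath selection of one `pxname`/`svname` row, optionally followed by a MULTIPLIER and/or CHANGE_PER_SECOND step; the session-utilization items are calculated as `scur / slim * 100`. A small plain-Python illustration of those steps (the field values are made up):
+
+```python
+def pick(stats, pxname, svname, field):
+    """Stand-in for $.[?(@.pxname=='..' && @.svname=='..')].<field>.first()"""
+    return next(float(r[field]) for r in stats
+                if r["pxname"] == pxname and r["svname"] == svname)
+
+stats = [{"pxname": "app", "svname": "BACKEND", "rtime": "42", "scur": "15", "slim": "100"}]
+
+rtime_seconds = pick(stats, "app", "BACKEND", "rtime") * 0.001   # MULTIPLIER 0.001 (ms -> s)
+
+scur = pick(stats, "app", "BACKEND", "scur")
+slim = pick(stats, "app", "BACKEND", "slim")
+session_util = scur / slim * 100                                 # calculated sutil item
+
+def change_per_second(prev_value, value, seconds_elapsed):
+    """CHANGE_PER_SECOND: counter growth divided by the polling interval."""
+    return (value - prev_value) / seconds_elapsed
+
+print(rtime_seconds, session_util, change_per_second(1000, 1600, 60))
+```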
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|HAProxy: Version has changed (new version: {ITEM.VALUE}) |<p>HAProxy version has changed. Ack to close.</p> |`{TEMPLATE_NAME:haproxy.version.diff()}=1 and {TEMPLATE_NAME:haproxy.version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|HAProxy: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:haproxy.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|HAProxy: Service is down |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|HAProxy: Service response time is too high (over {$HAPROXY.RESPONSE_TIME.MAX.WARN} for 5m) |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service.perf["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"].min(5m)}>{$HAPROXY.RESPONSE_TIME.MAX.WARN}` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- HAProxy: Service is down</p> |
-|HAProxy backend {#PXNAME}: Server is DOWN |<p>Backend is not available.</p> |`{TEMPLATE_NAME:haproxy.backend.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` |AVERAGE | |
-|HAProxy backend {#PXNAME}: Average response time is more than {$HAPROXY.BACK_RTIME.MAX.WARN} for 5m |<p>Average backend response time (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_RTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_RTIME.MAX.WARN}` |WARNING | |
-|HAProxy backend {#PXNAME}: Number of responses with error is more than {$HAPROXY.BACK_ERESP.MAX.WARN} for 5m |<p>Number of requests on backend, whose responses yielded an error, is more than {$HAPROXY.BACK_ERESP.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_ERESP.MAX.WARN}` |WARNING | |
-|HAProxy backend {#PXNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN} for 5m |<p>Current number of requests on backend unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QCUR.MAX.WARN}` |WARNING | |
-|HAProxy backend {#PXNAME}: Average time spent in queue is more than {$HAPROXY.BACK_QTIME.MAX.WARN} for 5m |<p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_QTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QTIME.MAX.WARN}` |WARNING | |
-|HAProxy frontend {#PXNAME}: Session utilization is more than {$HAPROXY.FRONT_SUTIL.MAX.WARN}% for 5m |<p>Alerting on this metric is essential to ensure your server has sufficient capacity to handle all concurrent sessions. Unlike requests, upon reaching the session limit HAProxy will deny additional clients until resource consumption drops. Furthermore, if you find your session usage percentage to be hovering above 80%, it could be time to either modify HAProxy’s configuration to allow more sessions, or migrate your HAProxy server to a bigger box.</p> |`{TEMPLATE_NAME:haproxy.frontend.sutil[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_SUTIL.MAX.WARN}` |WARNING | |
-|HAProxy frontend {#PXNAME}: Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN} for 5m |<p>Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.frontend.ereq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_EREQ.MAX.WARN}` |WARNING | |
-|HAProxy frontend {#PXNAME}: Number of requests denied is more than {$HAPROXY.FRONT_DREQ.MAX.WARN} for 5m |<p>Number of requests denied due to security concerns (ACL-restricted) is more than {$HAPROXY.FRONT_DREQ.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.frontend.dreq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_DREQ.MAX.WARN}` |WARNING | |
-|HAProxy {#PXNAME} {#SVNAME}: Server is DOWN |<p>Server is not available.</p> |`{TEMPLATE_NAME:haproxy.server.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` |WARNING | |
-|HAProxy {#PXNAME} {#SVNAME}: Average response time is more than {$HAPROXY.SERVER_RTIME.MAX.WARN} for 5m |<p>Average server response time (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_RTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_RTIME.MAX.WARN}` |WARNING | |
-|HAProxy {#PXNAME} {#SVNAME}: Number of responses with error is more than {$HAPROXY.SERVER_ERESP.MAX.WARN} for 5m |<p>Number of requests on server, whose responses yielded an error, is more than {$HAPROXY.SERVER_ERESP.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_ERESP.MAX.WARN}` |WARNING | |
-|HAProxy {#PXNAME} {#SVNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN} for 5m |<p>Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QCUR.MAX.WARN}` |WARNING | |
-|HAProxy {#PXNAME} {#SVNAME}: Average time spent in queue is more than {$HAPROXY.SERVER_QTIME.MAX.WARN} for 5m |<p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_QTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QTIME.MAX.WARN}` |WARNING | |
-|HAProxy TCP Backend {#PXNAME}: Server is DOWN |<p>Backend is not available.</p> |`{TEMPLATE_NAME:haproxy.backend_tcp.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` |AVERAGE | |
-|HAProxy TCP Backend {#PXNAME}: Average response time is more than {$HAPROXY.BACK_RTIME.MAX.WARN} for 5m |<p>Average backend response time (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_RTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend_tcp.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_RTIME.MAX.WARN}` |WARNING | |
-|HAProxy TCP Backend {#PXNAME}: Number of responses with error is more than {$HAPROXY.BACK_ERESP.MAX.WARN} for 5m |<p>Number of requests on backend, whose responses yielded an error, is more than {$HAPROXY.BACK_ERESP.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend_tcp.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_ERESP.MAX.WARN}` |WARNING | |
-|HAProxy TCP Backend {#PXNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN} for 5m |<p>Current number of requests on backend unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend_tcp.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QCUR.MAX.WARN}` |WARNING | |
-|HAProxy TCP Backend {#PXNAME}: Average time spent in queue is more than {$HAPROXY.BACK_QTIME.MAX.WARN} for 5m |<p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_QTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend_tcp.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QTIME.MAX.WARN}` |WARNING | |
-|HAProxy TCP Frontend {#PXNAME}: Session utilization is more than {$HAPROXY.FRONT_SUTIL.MAX.WARN}% for 5m |<p>Alerting on this metric is essential to ensure your server has sufficient capacity to handle all concurrent sessions. Unlike requests, upon reaching the session limit HAProxy will deny additional clients until resource consumption drops. Furthermore, if you find your session usage percentage to be hovering above 80%, it could be time to either modify HAProxy’s configuration to allow more sessions, or migrate your HAProxy server to a bigger box.</p> |`{TEMPLATE_NAME:haproxy.frontend_tcp.sutil[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_SUTIL.MAX.WARN}` |WARNING | |
-|HAProxy TCP Frontend {#PXNAME}: Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN} for 5m |<p>Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.frontend_tcp.ereq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_EREQ.MAX.WARN}` |WARNING | |
-|HAProxy TCP Frontend {#PXNAME}: Number of requests denied is more than {$HAPROXY.FRONT_DREQ.MAX.WARN} for 5m |<p>Number of requests denied due to security concerns (ACL-restricted) is more than {$HAPROXY.FRONT_DREQ.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.frontend_tcp.dreq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_DREQ.MAX.WARN}` |WARNING | |
-|HAProxy TCP {#PXNAME} {#SVNAME}: Server is DOWN |<p>Server is not available.</p> |`{TEMPLATE_NAME:haproxy.server_tcp.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` |WARNING | |
-|HAProxy TCP {#PXNAME} {#SVNAME}: Average response time is more than {$HAPROXY.SERVER_RTIME.MAX.WARN} for 5m |<p>Average server response time (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_RTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server_tcp.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_RTIME.MAX.WARN}` |WARNING | |
-|HAProxy TCP {#PXNAME} {#SVNAME}: Number of responses with error is more than {$HAPROXY.SERVER_ERESP.MAX.WARN} for 5m |<p>Number of requests on server, whose responses yielded an error, is more than {$HAPROXY.SERVER_ERESP.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server_tcp.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_ERESP.MAX.WARN}` |WARNING | |
-|HAProxy TCP {#PXNAME} {#SVNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN} for 5m |<p>Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server_tcp.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QCUR.MAX.WARN}` |WARNING | |
-|HAProxy TCP {#PXNAME} {#SVNAME}: Average time spent in queue is more than {$HAPROXY.SERVER_QTIME.MAX.WARN} for 5m |<p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_QTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server_tcp.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QTIME.MAX.WARN}` |WARNING | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------------------------------------------------------------------|
+| HAProxy: Version has changed (new version: {ITEM.VALUE}) | <p>HAProxy version has changed. Ack to close.</p> | `{TEMPLATE_NAME:haproxy.version.diff()}=1 and {TEMPLATE_NAME:haproxy.version.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| HAProxy: has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:haproxy.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| HAProxy: Service is down | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"].last()}=0` | AVERAGE | <p>Manual close: YES</p> |
+| HAProxy: Service response time is too high (over {$HAPROXY.RESPONSE_TIME.MAX.WARN} for 5m) | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service.perf["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"].min(5m)}>{$HAPROXY.RESPONSE_TIME.MAX.WARN}` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- HAProxy: Service is down</p> |
+| HAProxy backend {#PXNAME}: Server is DOWN | <p>Backend is not available.</p> | `{TEMPLATE_NAME:haproxy.backend.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` | AVERAGE | |
+| HAProxy backend {#PXNAME}: Average response time is more than {$HAPROXY.BACK_RTIME.MAX.WARN} for 5m | <p>Average backend response time (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_RTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_RTIME.MAX.WARN}` | WARNING | |
+| HAProxy backend {#PXNAME}: Number of responses with error is more than {$HAPROXY.BACK_ERESP.MAX.WARN} for 5m | <p>Number of requests on backend, whose responses yielded an error, is more than {$HAPROXY.BACK_ERESP.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_ERESP.MAX.WARN}` | WARNING | |
+| HAProxy backend {#PXNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN} for 5m | <p>Current number of requests on backend unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QCUR.MAX.WARN}` | WARNING | |
+| HAProxy backend {#PXNAME}: Average time spent in queue is more than {$HAPROXY.BACK_QTIME.MAX.WARN} for 5m | <p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_QTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QTIME.MAX.WARN}` | WARNING | |
+| HAProxy frontend {#PXNAME}: Session utilization is more than {$HAPROXY.FRONT_SUTIL.MAX.WARN}% for 5m | <p>Alerting on this metric is essential to ensure your server has sufficient capacity to handle all concurrent sessions. Unlike requests, upon reaching the session limit HAProxy will deny additional clients until resource consumption drops. Furthermore, if you find your session usage percentage to be hovering above 80%, it could be time to either modify HAProxy’s configuration to allow more sessions, or migrate your HAProxy server to a bigger box.</p> | `{TEMPLATE_NAME:haproxy.frontend.sutil[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_SUTIL.MAX.WARN}` | WARNING | |
+| HAProxy frontend {#PXNAME}: Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN} for 5m | <p>Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.frontend.ereq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_EREQ.MAX.WARN}` | WARNING | |
+| HAProxy frontend {#PXNAME}: Number of requests denied is more than {$HAPROXY.FRONT_DREQ.MAX.WARN} for 5m | <p>Number of requests denied due to security concerns (ACL-restricted) is more than {$HAPROXY.FRONT_DREQ.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.frontend.dreq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_DREQ.MAX.WARN}` | WARNING | |
+| HAProxy {#PXNAME} {#SVNAME}: Server is DOWN | <p>Server is not available.</p> | `{TEMPLATE_NAME:haproxy.server.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` | WARNING | |
+| HAProxy {#PXNAME} {#SVNAME}: Average response time is more than {$HAPROXY.SERVER_RTIME.MAX.WARN} for 5m | <p>Average server response time (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_RTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_RTIME.MAX.WARN}` | WARNING | |
+| HAProxy {#PXNAME} {#SVNAME}: Number of responses with error is more than {$HAPROXY.SERVER_ERESP.MAX.WARN} for 5m | <p>Number of requests on server, whose responses yielded an error, is more than {$HAPROXY.SERVER_ERESP.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_ERESP.MAX.WARN}` | WARNING | |
+| HAProxy {#PXNAME} {#SVNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN} for 5m | <p>Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QCUR.MAX.WARN}` | WARNING | |
+| HAProxy {#PXNAME} {#SVNAME}: Average time spent in queue is more than {$HAPROXY.SERVER_QTIME.MAX.WARN} for 5m | <p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_QTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QTIME.MAX.WARN}` | WARNING | |
+| HAProxy TCP Backend {#PXNAME}: Server is DOWN | <p>Backend is not available.</p> | `{TEMPLATE_NAME:haproxy.backend_tcp.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` | AVERAGE | |
+| HAProxy TCP Backend {#PXNAME}: Average response time is more than {$HAPROXY.BACK_RTIME.MAX.WARN} for 5m | <p>Average backend response time (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_RTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend_tcp.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_RTIME.MAX.WARN}` | WARNING | |
+| HAProxy TCP Backend {#PXNAME}: Number of responses with error is more than {$HAPROXY.BACK_ERESP.MAX.WARN} for 5m | <p>Number of requests on backend, whose responses yielded an error, is more than {$HAPROXY.BACK_ERESP.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend_tcp.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_ERESP.MAX.WARN}` | WARNING | |
+| HAProxy TCP Backend {#PXNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN} for 5m | <p>Current number of requests on backend unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend_tcp.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QCUR.MAX.WARN}` | WARNING | |
+| HAProxy TCP Backend {#PXNAME}: Average time spent in queue is more than {$HAPROXY.BACK_QTIME.MAX.WARN} for 5m | <p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_QTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend_tcp.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QTIME.MAX.WARN}` | WARNING | |
+| HAProxy TCP Frontend {#PXNAME}: Session utilization is more than {$HAPROXY.FRONT_SUTIL.MAX.WARN}% for 5m | <p>Alerting on this metric is essential to ensure your server has sufficient capacity to handle all concurrent sessions. Unlike requests, upon reaching the session limit HAProxy will deny additional clients until resource consumption drops. Furthermore, if you find your session usage percentage to be hovering above 80%, it could be time to either modify HAProxy’s configuration to allow more sessions, or migrate your HAProxy server to a bigger box.</p> | `{TEMPLATE_NAME:haproxy.frontend_tcp.sutil[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_SUTIL.MAX.WARN}` | WARNING | |
+| HAProxy TCP Frontend {#PXNAME}: Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN} for 5m | <p>Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.frontend_tcp.ereq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_EREQ.MAX.WARN}` | WARNING | |
+| HAProxy TCP Frontend {#PXNAME}: Number of requests denied is more than {$HAPROXY.FRONT_DREQ.MAX.WARN} for 5m | <p>Number of requests denied due to security concerns (ACL-restricted) is more than {$HAPROXY.FRONT_DREQ.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.frontend_tcp.dreq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_DREQ.MAX.WARN}` | WARNING | |
+| HAProxy TCP {#PXNAME} {#SVNAME}: Server is DOWN | <p>Server is not available.</p> | `{TEMPLATE_NAME:haproxy.server_tcp.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` | WARNING | |
+| HAProxy TCP {#PXNAME} {#SVNAME}: Average response time is more than {$HAPROXY.SERVER_RTIME.MAX.WARN} for 5m | <p>Average server response time (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_RTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server_tcp.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_RTIME.MAX.WARN}` | WARNING | |
+| HAProxy TCP {#PXNAME} {#SVNAME}: Number of responses with error is more than {$HAPROXY.SERVER_ERESP.MAX.WARN} for 5m | <p>Number of requests on server, whose responses yielded an error, is more than {$HAPROXY.SERVER_ERESP.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server_tcp.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_ERESP.MAX.WARN}` | WARNING | |
+| HAProxy TCP {#PXNAME} {#SVNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN} for 5m | <p>Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server_tcp.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QCUR.MAX.WARN}` | WARNING | |
+| HAProxy TCP {#PXNAME} {#SVNAME}: Average time spent in queue is more than {$HAPROXY.SERVER_QTIME.MAX.WARN} for 5m | <p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_QTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server_tcp.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QTIME.MAX.WARN}` | WARNING | |
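To make the session-utilization trigger concrete: the frontend item is calculated as `scur / slim * 100`, so with, for example, a session limit (slim) of 2000 and 1700 established sessions (scur) the utilization is 85%. That exceeds the default {$HAPROXY.FRONT_SUTIL.MAX.WARN} of 80 and, if it stays there for 5 minutes, raises the WARNING trigger.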
## Feedback
diff --git a/templates/app/haproxy_agent/template_app_haproxy_agent.yaml b/templates/app/haproxy_agent/template_app_haproxy_agent.yaml
index 7ec6941088b..a180a63809d 100644
--- a/templates/app/haproxy_agent/template_app_haproxy_agent.yaml
+++ b/templates/app/haproxy_agent/template_app_haproxy_agent.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-02T19:42:21Z'
+ date: '2021-04-22T11:27:36Z'
groups:
-
name: Templates/Applications
@@ -2094,131 +2094,137 @@ zabbix_export:
dashboards:
-
name: 'HAProxy Backend performance'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'HAProxy: Backend {#PXNAME} Redispatched requests and retried connections per second'
- host: 'HAProxy by Zabbix agent'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'HAProxy: Backend {#PXNAME} Redispatched requests and retried connections per second'
+ host: 'HAProxy by Zabbix agent'
-
name: 'HAProxy Frontend performance'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'HAProxy: Frontend {#PXNAME} Requests and sessions per second'
- host: 'HAProxy by Zabbix agent'
- -
- type: GRAPH_PROTOTYPE
- x: '12'
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'HAProxy: Frontend {#PXNAME} Requests and sessions per second'
+ host: 'HAProxy by Zabbix agent'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'HAProxy: Frontend {#PXNAME} Errors and denials per second'
- host: 'HAProxy by Zabbix agent'
- -
- type: GRAPH_PROTOTYPE
- 'y': '12'
- width: '12'
- height: '13'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ x: '12'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'HAProxy: Frontend {#PXNAME} Errors and denials per second'
+ host: 'HAProxy by Zabbix agent'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'HAProxy: Frontend {#PXNAME} Responses by HTTP code'
- host: 'HAProxy by Zabbix agent'
- -
- type: GRAPH_PROTOTYPE
- x: '12'
- 'y': '12'
- width: '12'
- height: '13'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ 'y': '12'
+ width: '12'
+ height: '13'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'HAProxy: Frontend {#PXNAME} Responses by HTTP code'
+ host: 'HAProxy by Zabbix agent'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'HAProxy: Frontend {#PXNAME} In/Out traffic'
- host: 'HAProxy by Zabbix agent'
+ x: '12'
+ 'y': '12'
+ width: '12'
+ height: '13'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'HAProxy: Frontend {#PXNAME} In/Out traffic'
+ host: 'HAProxy by Zabbix agent'
-
name: 'HAProxy Server performance'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'HAProxy: {#PXNAME} {#SVNAME} Response time and time in queue'
- host: 'HAProxy by Zabbix agent'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'HAProxy: {#PXNAME} {#SVNAME} Response time and time in queue'
+ host: 'HAProxy by Zabbix agent'
valuemaps:
-
name: 'Service state'
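The dashboard change above is purely structural: with multi-page dashboard support, each dashboard's widgets are now wrapped in a `pages` list (a single page here) instead of sitting directly under the dashboard. A condensed sketch of the new layout, with the widget fields trimmed:

```yaml
dashboards:
  -
    name: 'HAProxy Backend performance'
    pages:
      -
        widgets:
          -
            type: GRAPH_PROTOTYPE
            width: '24'
            height: '12'
            fields:
              # columns, rows and the graph prototype reference,
              # carried over unchanged from the previous layout
```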
diff --git a/templates/app/haproxy_http/README.md b/templates/app/haproxy_http/README.md
index d0b3f8a1252..1250e4750c3 100644
--- a/templates/app/haproxy_http/README.md
+++ b/templates/app/haproxy_http/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor HAProxy by Zabbix that works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
@@ -19,7 +19,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/http) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/http) for basic instructions.
Setup [HAProxy Stats Page](https://www.haproxy.com/blog/exploring-the-haproxy-stats-page/).
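In practice this means exposing HAProxy's built-in statistics endpoint in haproxy.cfg, e.g. with `stats enable` and `stats uri /stats` on a listener bound to the port referenced by {$HAPROXY.STATS.PORT} (8404 by default); the guide linked above walks through a typical configuration.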
@@ -45,25 +45,25 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$HAPROXY.BACK_ERESP.MAX.WARN} |<p>Maximum of responses with error on Backend for trigger expression.</p> |`10` |
-|{$HAPROXY.BACK_QCUR.MAX.WARN} |<p>Maximum number of requests on Backend unassigned in queue for trigger expression.</p> |`10` |
-|{$HAPROXY.BACK_QTIME.MAX.WARN} |<p>Maximum of average time spent in queue on Backend for trigger expression.</p> |`10s` |
-|{$HAPROXY.BACK_RTIME.MAX.WARN} |<p>Maximum of average Backend response time for trigger expression.</p> |`10s` |
-|{$HAPROXY.FRONT_DREQ.MAX.WARN} |<p>The HAProxy maximum denied requests for trigger expression.</p> |`10` |
-|{$HAPROXY.FRONT_EREQ.MAX.WARN} |<p>The HAProxy maximum number of request errors for trigger expression.</p> |`10` |
-|{$HAPROXY.FRONT_SUTIL.MAX.WARN} |<p>Maximum of session usage percentage on frontend for trigger expression.</p> |`80` |
-|{$HAPROXY.PASSWORD} |<p>The password of the HAProxy stats page.</p> |`` |
-|{$HAPROXY.RESPONSE_TIME.MAX.WARN} |<p>The HAProxy stats page maximum response time in seconds for trigger expression.</p> |`10s` |
-|{$HAPROXY.SERVER_ERESP.MAX.WARN} |<p>Maximum of responses with error on server for trigger expression.</p> |`10` |
-|{$HAPROXY.SERVER_QCUR.MAX.WARN} |<p>Maximum number of requests on server unassigned in queue for trigger expression.</p> |`10` |
-|{$HAPROXY.SERVER_QTIME.MAX.WARN} |<p>Maximum of average time spent in queue on server for trigger expression.</p> |`10s` |
-|{$HAPROXY.SERVER_RTIME.MAX.WARN} |<p>Maximum of average server response time for trigger expression.</p> |`10s` |
-|{$HAPROXY.STATS.PATH} |<p>The path of the HAProxy stats page.</p> |`stats` |
-|{$HAPROXY.STATS.PORT} |<p>The port of the HAProxy stats host or container.</p> |`8404` |
-|{$HAPROXY.STATS.SCHEME} |<p>The scheme of HAProxy stats page(http/https).</p> |`http` |
-|{$HAPROXY.USERNAME} |<p>The username of the HAProxy stats page.</p> |`` |
+| Name | Description | Default |
+|-----------------------------------|------------------------------------------------------------------------------------------|---------|
+| {$HAPROXY.BACK_ERESP.MAX.WARN} | <p>Maximum of responses with error on Backend for trigger expression.</p> | `10` |
+| {$HAPROXY.BACK_QCUR.MAX.WARN} | <p>Maximum number of requests on Backend unassigned in queue for trigger expression.</p> | `10` |
+| {$HAPROXY.BACK_QTIME.MAX.WARN} | <p>Maximum of average time spent in queue on Backend for trigger expression.</p> | `10s` |
+| {$HAPROXY.BACK_RTIME.MAX.WARN} | <p>Maximum of average Backend response time for trigger expression.</p> | `10s` |
+| {$HAPROXY.FRONT_DREQ.MAX.WARN} | <p>The HAProxy maximum denied requests for trigger expression.</p> | `10` |
+| {$HAPROXY.FRONT_EREQ.MAX.WARN} | <p>The HAProxy maximum number of request errors for trigger expression.</p> | `10` |
+| {$HAPROXY.FRONT_SUTIL.MAX.WARN} | <p>Maximum of session usage percentage on frontend for trigger expression.</p> | `80` |
+| {$HAPROXY.PASSWORD} | <p>The password of the HAProxy stats page.</p> | `` |
+| {$HAPROXY.RESPONSE_TIME.MAX.WARN} | <p>The HAProxy stats page maximum response time in seconds for trigger expression.</p> | `10s` |
+| {$HAPROXY.SERVER_ERESP.MAX.WARN} | <p>Maximum of responses with error on server for trigger expression.</p> | `10` |
+| {$HAPROXY.SERVER_QCUR.MAX.WARN} | <p>Maximum number of requests on server unassigned in queue for trigger expression.</p> | `10` |
+| {$HAPROXY.SERVER_QTIME.MAX.WARN} | <p>Maximum of average time spent in queue on server for trigger expression.</p> | `10s` |
+| {$HAPROXY.SERVER_RTIME.MAX.WARN} | <p>Maximum of average server response time for trigger expression.</p> | `10s` |
+| {$HAPROXY.STATS.PATH} | <p>The path of the HAProxy stats page.</p> | `stats` |
+| {$HAPROXY.STATS.PORT} | <p>The port of the HAProxy stats host or container.</p> | `8404` |
+| {$HAPROXY.STATS.SCHEME}           | <p>The scheme of the HAProxy stats page (http/https).</p>                                  | `http`  |
+| {$HAPROXY.USERNAME} | <p>The username of the HAProxy stats page.</p> | `` |
## Template links
@@ -71,121 +71,121 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Backend discovery |<p>Discovery backends</p> |DEPENDENT |haproxy.backend.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `http`</p> |
-|FRONTEND discovery |<p>Discovery frontends</p> |DEPENDENT |haproxy.frontend.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `FRONTEND`</p><p>- B: {#MODE} MATCHES_REGEX `http`</p> |
-|Servers discovery |<p>Discovery servers</p> |DEPENDENT |haproxy.server.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} NOT_MATCHES_REGEX `FRONTEND|BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `http`</p> |
-|TCP Backend discovery |<p>Discovery TCP backends</p> |DEPENDENT |haproxy.backend_tcp.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `tcp`</p> |
-|TCP FRONTEND discovery |<p>Discovery TCP frontends</p> |DEPENDENT |haproxy.frontend_tcp.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `FRONTEND`</p><p>- B: {#MODE} MATCHES_REGEX `tcp`</p> |
-|TCP Servers discovery |<p>Discovery tcp servers</p> |DEPENDENT |haproxy.server_tcp.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} NOT_MATCHES_REGEX `FRONTEND|BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `tcp`</p> |
+| Name | Description | Type | Key and additional info |
+|------------------------|--------------------------------|-----------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
+| Backend discovery | <p>Discovery backends</p> | DEPENDENT | haproxy.backend.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `http`</p> |
+| FRONTEND discovery | <p>Discovery frontends</p> | DEPENDENT | haproxy.frontend.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `FRONTEND`</p><p>- B: {#MODE} MATCHES_REGEX `http`</p> |
+| Servers discovery | <p>Discovery servers</p> | DEPENDENT | haproxy.server.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} NOT_MATCHES_REGEX `FRONTEND|BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `http`</p> |
+| TCP Backend discovery | <p>Discovery TCP backends</p> | DEPENDENT | haproxy.backend_tcp.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `tcp`</p> |
+| TCP FRONTEND discovery | <p>Discovery TCP frontends</p> | DEPENDENT | haproxy.frontend_tcp.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} MATCHES_REGEX `FRONTEND`</p><p>- B: {#MODE} MATCHES_REGEX `tcp`</p> |
+| TCP Servers discovery  | <p>Discovery TCP servers</p>   | DEPENDENT | haproxy.server_tcp.discovery<p>**Filter**:</p>AND <p>- A: {#SVNAME} NOT_MATCHES_REGEX `FRONTEND|BACKEND`</p><p>- B: {#MODE} MATCHES_REGEX `tcp`</p>   |
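All six rules walk the same JSON array produced by the stats item; the filters simply split it up. For example, a row with {#SVNAME} = `FRONTEND` and {#MODE} = `tcp` matches only "TCP FRONTEND discovery", so HTTP and TCP proxies each get their own set of item and trigger prototypes.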
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|HAProxy |HAProxy: Version |<p>-</p> |DEPENDENT |haproxy.version<p>**Preprocessing**:</p><p>- REGEX: `HAProxy version ([^,]*), \1`</p><p>⛔️ON_FAIL: `CUSTOM_ERROR -> HAProxy version is not found`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|HAProxy |HAProxy: Uptime |<p>-</p> |DEPENDENT |haproxy.uptime<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|HAProxy |HAProxy: Service status |<p>-</p> |SIMPLE |net.tcp.service["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|HAProxy |HAProxy: Service response time |<p>-</p> |SIMPLE |net.tcp.service.perf["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"] |
-|HAProxy |HAProxy Backend {#PXNAME}: Status |<p>-</p> |DEPENDENT |haproxy.backend.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|HAProxy |HAProxy Backend {#PXNAME}: Responses time |<p>Average backend response time (in ms) for the last 1,024 requests</p> |DEPENDENT |haproxy.backend.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy Backend {#PXNAME}: Errors connection per second |<p>Number of requests that encountered an error attempting to connect to a backend server.</p> |DEPENDENT |haproxy.backend.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Backend {#PXNAME}: Responses denied per second |<p>Responses denied due to security concerns (ACL-restricted).</p> |DEPENDENT |haproxy.backend.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Backend {#PXNAME}: Response errors per second |<p>Number of requests whose responses yielded an error</p> |DEPENDENT |haproxy.backend.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Backend {#PXNAME}: Unassigned requests |<p>Current number of requests unassigned in queue.</p> |DEPENDENT |haproxy.backend.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
-|HAProxy |HAProxy Backend {#PXNAME}: Time in queue |<p>Average time spent in queue (in ms) for the last 1,024 requests</p> |DEPENDENT |haproxy.backend.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy Backend {#PXNAME}: Redispatched requests per second |<p>Number of times a request was redispatched to a different backend.</p> |DEPENDENT |haproxy.backend.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Backend {#PXNAME}: Retried connections per second |<p>Number of times a connection was retried.</p> |DEPENDENT |haproxy.backend.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Requests rate |<p>HTTP requests per second</p> |DEPENDENT |haproxy.frontend.req_rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].req_rate.first()`</p> |
-|HAProxy |HAProxy Frontend {#PXNAME}: Sessions rate |<p>Number of sessions created per second</p> |DEPENDENT |haproxy.frontend.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rate.first()`</p> |
-|HAProxy |HAProxy Frontend {#PXNAME}: Established sessions |<p>The current number of established sessions.</p> |DEPENDENT |haproxy.frontend.scur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].scur.first()`</p> |
-|HAProxy |HAProxy Frontend {#PXNAME}: Session limits |<p>The most simultaneous sessions that are allowed, as defined by the maxconn setting in the frontend.</p> |DEPENDENT |haproxy.frontend.slim[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].slim.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HAProxy |HAProxy Frontend {#PXNAME}: Session utilization |<p>Percentage of sessions used (scur / slim * 100).</p> |CALCULATED |haproxy.frontend.sutil[{#PXNAME}:{#SVNAME}]<p>**Expression**:</p>`last(haproxy.frontend.scur[{#PXNAME}:{#SVNAME}]) / last(haproxy.frontend.slim[{#PXNAME}:{#SVNAME}]) * 100` |
-|HAProxy |HAProxy Frontend {#PXNAME}: Request errors per second |<p>Number of request errors per second.</p> |DEPENDENT |haproxy.frontend.ereq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].ereq.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Denied requests per second |<p>Requests denied due to security concerns (ACL-restricted) per second.</p> |DEPENDENT |haproxy.frontend.dreq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dreq.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Number of responses with codes 1xx per second |<p>Number of informational HTTP responses per second.</p> |DEPENDENT |haproxy.frontend.hrsp_1xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_1xx.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Number of responses with codes 2xx per second |<p>Number of successful HTTP responses per second.</p> |DEPENDENT |haproxy.frontend.hrsp_2xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_2xx.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Number of responses with codes 3xx per second |<p>Number of HTTP redirections per second.</p> |DEPENDENT |haproxy.frontend.hrsp_3xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_3xx.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Number of responses with codes 4xx per second |<p>Number of HTTP client errors per second.</p> |DEPENDENT |haproxy.frontend.hrsp_4xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_4xx.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Number of responses with codes 5xx per second |<p>Number of HTTP server errors per second.</p> |DEPENDENT |haproxy.frontend.hrsp_5xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_5xx.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Incoming traffic |<p>Number of bits received by the frontend</p> |DEPENDENT |haproxy.frontend.bin[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bin.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy Frontend {#PXNAME}: Outgoing traffic |<p>Number of bits sent by the frontend</p> |DEPENDENT |haproxy.frontend.bout[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bout.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Status |<p>-</p> |DEPENDENT |haproxy.server.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Responses time |<p>Average server response time (in ms) for the last 1,024 requests.</p> |DEPENDENT |haproxy.server.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Errors connection per second |<p>Number of requests that encountered an error attempting to connect to a backend server.</p> |DEPENDENT |haproxy.server.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Responses denied per second |<p>Responses denied due to security concerns (ACL-restricted).</p> |DEPENDENT |haproxy.server.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Response errors per second |<p>Number of requests whose responses yielded an error.</p> |DEPENDENT |haproxy.server.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Unassigned requests |<p>Current number of requests unassigned in queue.</p> |DEPENDENT |haproxy.server.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Time in queue |<p>Average time spent in queue (in ms) for the last 1,024 requests.</p> |DEPENDENT |haproxy.server.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Redispatched requests per second |<p>Number of times a request was redispatched to a different backend.</p> |DEPENDENT |haproxy.server.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Retried connections per second |<p>Number of times a connection was retried.</p> |DEPENDENT |haproxy.server.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Number of responses with codes 4xx per second |<p>Number of HTTP client errors per second.</p> |DEPENDENT |haproxy.server.hrsp_4xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_4xx.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy {#PXNAME} {#SVNAME}: Number of responses with codes 5xx per second |<p>Number of HTTP server errors per second.</p> |DEPENDENT |haproxy.server.hrsp_5xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_5xx.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Status |<p>-</p> |DEPENDENT |haproxy.backend_tcp.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Responses time |<p>Average backend response time (in ms) for the last 1,024 requests</p> |DEPENDENT |haproxy.backend_tcp.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Errors connection per second |<p>Number of requests that encountered an error attempting to connect to a backend server.</p> |DEPENDENT |haproxy.backend_tcp.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Responses denied per second |<p>Responses denied due to security concerns (ACL-restricted).</p> |DEPENDENT |haproxy.backend_tcp.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Response errors per second |<p>Number of requests whose responses yielded an error</p> |DEPENDENT |haproxy.backend_tcp.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Unassigned requests |<p>Current number of requests unassigned in queue.</p> |DEPENDENT |haproxy.backend_tcp.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Time in queue |<p>Average time spent in queue (in ms) for the last 1,024 requests</p> |DEPENDENT |haproxy.backend_tcp.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Redispatched requests per second |<p>Number of times a request was redispatched to a different backend.</p> |DEPENDENT |haproxy.backend_tcp.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Backend {#PXNAME}: Retried connections per second |<p>Number of times a connection was retried.</p> |DEPENDENT |haproxy.backend_tcp.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Requests rate |<p>HTTP requests per second</p> |DEPENDENT |haproxy.frontend_tcp.req_rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].req_rate.first()`</p> |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Sessions rate |<p>Number of sessions created per second</p> |DEPENDENT |haproxy.frontend_tcp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rate.first()`</p> |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Established sessions |<p>The current number of established sessions.</p> |DEPENDENT |haproxy.frontend_tcp.scur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].scur.first()`</p> |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Session limits |<p>The most simultaneous sessions that are allowed, as defined by the maxconn setting in the frontend.</p> |DEPENDENT |haproxy.frontend_tcp.slim[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].slim.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Session utilization |<p>Percentage of sessions used (scur / slim * 100).</p> |CALCULATED |haproxy.frontend_tcp.sutil[{#PXNAME}:{#SVNAME}]<p>**Expression**:</p>`last(haproxy.frontend_tcp.scur[{#PXNAME}:{#SVNAME}]) / last(haproxy.frontend_tcp.slim[{#PXNAME}:{#SVNAME}]) * 100` |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Request errors per second |<p>Number of request errors per second.</p> |DEPENDENT |haproxy.frontend_tcp.ereq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].ereq.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Denied requests per second |<p>Requests denied due to security concerns (ACL-restricted) per second.</p> |DEPENDENT |haproxy.frontend_tcp.dreq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dreq.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Incoming traffic |<p>Number of bits received by the frontend</p> |DEPENDENT |haproxy.frontend_tcp.bin[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bin.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP Frontend {#PXNAME}: Outgoing traffic |<p>Number of bits sent by the frontend</p> |DEPENDENT |haproxy.frontend_tcp.bout[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bout.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Status |<p>-</p> |DEPENDENT |haproxy.server_tcp.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Responses time |<p>Average server response time (in ms) for the last 1,024 requests.</p> |DEPENDENT |haproxy.server_tcp.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Errors connection per second |<p>Number of requests that encountered an error attempting to connect to a backend server.</p> |DEPENDENT |haproxy.server_tcp.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Responses denied per second |<p>Responses denied due to security concerns (ACL-restricted).</p> |DEPENDENT |haproxy.server_tcp.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Response errors per second |<p>Number of requests whose responses yielded an error.</p> |DEPENDENT |haproxy.server_tcp.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Unassigned requests |<p>Current number of requests unassigned in queue.</p> |DEPENDENT |haproxy.server_tcp.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Time in queue |<p>Average time spent in queue (in ms) for the last 1,024 requests.</p> |DEPENDENT |haproxy.server_tcp.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Redispatched requests per second |<p>Number of times a request was redispatched to a different backend.</p> |DEPENDENT |haproxy.server_tcp.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
-|HAProxy |HAProxy TCP {#PXNAME} {#SVNAME}: Retried connections per second |<p>Number of times a connection was retried.</p> |DEPENDENT |haproxy.server_tcp.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
-|Zabbix_raw_items |HAProxy: Get stats |<p>HAProxy Statistics Report in CSV format</p> |HTTP_AGENT |haproxy.get<p>**Preprocessing**:</p><p>- REGEX: `# ([\s\S]*)\n \1`</p><p>- CSV_TO_JSON: ` 1`</p> |
-|Zabbix_raw_items |HAProxy: Get stats page |<p>HAProxy Statistics Report HTML</p> |HTTP_AGENT |haproxy.get_html |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|----------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------|------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| HAProxy | HAProxy: Version | <p>-</p> | DEPENDENT | haproxy.version<p>**Preprocessing**:</p><p>- REGEX: `HAProxy version ([^,]*), \1`</p><p>⛔️ON_FAIL: `CUSTOM_ERROR -> HAProxy version is not found`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| HAProxy | HAProxy: Uptime | <p>-</p> | DEPENDENT | haproxy.uptime<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| HAProxy | HAProxy: Service status | <p>-</p> | SIMPLE | net.tcp.service["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| HAProxy | HAProxy: Service response time | <p>-</p> | SIMPLE | net.tcp.service.perf["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"] |
+| HAProxy | HAProxy Backend {#PXNAME}: Status | <p>-</p> | DEPENDENT | haproxy.backend.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| HAProxy | HAProxy Backend {#PXNAME}: Responses time | <p>Average backend response time (in ms) for the last 1,024 requests</p> | DEPENDENT | haproxy.backend.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy Backend {#PXNAME}: Errors connection per second | <p>Number of requests that encountered an error attempting to connect to a backend server.</p> | DEPENDENT | haproxy.backend.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Backend {#PXNAME}: Responses denied per second | <p>Responses denied due to security concerns (ACL-restricted).</p> | DEPENDENT | haproxy.backend.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Backend {#PXNAME}: Response errors per second | <p>Number of requests whose responses yielded an error</p> | DEPENDENT | haproxy.backend.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Backend {#PXNAME}: Unassigned requests | <p>Current number of requests unassigned in queue.</p> | DEPENDENT | haproxy.backend.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
+| HAProxy | HAProxy Backend {#PXNAME}: Time in queue | <p>Average time spent in queue (in ms) for the last 1,024 requests</p> | DEPENDENT | haproxy.backend.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy Backend {#PXNAME}: Redispatched requests per second | <p>Number of times a request was redispatched to a different backend.</p> | DEPENDENT | haproxy.backend.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Backend {#PXNAME}: Retried connections per second | <p>Number of times a connection was retried.</p> | DEPENDENT | haproxy.backend.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Requests rate | <p>HTTP requests per second</p> | DEPENDENT | haproxy.frontend.req_rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].req_rate.first()`</p> |
+| HAProxy | HAProxy Frontend {#PXNAME}: Sessions rate | <p>Number of sessions created per second</p> | DEPENDENT | haproxy.frontend.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rate.first()`</p> |
+| HAProxy | HAProxy Frontend {#PXNAME}: Established sessions | <p>The current number of established sessions.</p> | DEPENDENT | haproxy.frontend.scur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].scur.first()`</p> |
+| HAProxy | HAProxy Frontend {#PXNAME}: Session limits | <p>The maximum number of simultaneous sessions allowed, as defined by the maxconn setting in the frontend.</p> | DEPENDENT | haproxy.frontend.slim[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].slim.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| HAProxy | HAProxy Frontend {#PXNAME}: Session utilization | <p>Percentage of sessions used (scur / slim * 100).</p> | CALCULATED | haproxy.frontend.sutil[{#PXNAME}:{#SVNAME}]<p>**Expression**:</p>`last(haproxy.frontend.scur[{#PXNAME}:{#SVNAME}]) / last(haproxy.frontend.slim[{#PXNAME}:{#SVNAME}]) * 100` |
+| HAProxy | HAProxy Frontend {#PXNAME}: Request errors per second | <p>Number of request errors per second.</p> | DEPENDENT | haproxy.frontend.ereq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].ereq.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Denied requests per second | <p>Requests denied due to security concerns (ACL-restricted) per second.</p> | DEPENDENT | haproxy.frontend.dreq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dreq.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Number of responses with codes 1xx per second | <p>Number of informational HTTP responses per second.</p> | DEPENDENT | haproxy.frontend.hrsp_1xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_1xx.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Number of responses with codes 2xx per second | <p>Number of successful HTTP responses per second.</p> | DEPENDENT | haproxy.frontend.hrsp_2xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_2xx.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Number of responses with codes 3xx per second | <p>Number of HTTP redirections per second.</p> | DEPENDENT | haproxy.frontend.hrsp_3xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_3xx.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Number of responses with codes 4xx per second | <p>Number of HTTP client errors per second.</p> | DEPENDENT | haproxy.frontend.hrsp_4xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_4xx.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Number of responses with codes 5xx per second | <p>Number of HTTP server errors per second.</p> | DEPENDENT | haproxy.frontend.hrsp_5xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_5xx.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Incoming traffic | <p>Number of bits received by the frontend</p> | DEPENDENT | haproxy.frontend.bin[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bin.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy Frontend {#PXNAME}: Outgoing traffic | <p>Number of bits sent by the frontend</p> | DEPENDENT | haproxy.frontend.bout[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bout.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Status | <p>-</p> | DEPENDENT | haproxy.server.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Response time | <p>Average server response time (in ms) for the last 1,024 requests.</p> | DEPENDENT | haproxy.server.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Connection errors per second | <p>Number of requests that encountered an error attempting to connect to a backend server.</p> | DEPENDENT | haproxy.server.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Responses denied per second | <p>Responses denied due to security concerns (ACL-restricted).</p> | DEPENDENT | haproxy.server.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Response errors per second | <p>Number of requests whose responses yielded an error.</p> | DEPENDENT | haproxy.server.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Unassigned requests | <p>Current number of requests unassigned in queue.</p> | DEPENDENT | haproxy.server.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Time in queue | <p>Average time spent in queue (in ms) for the last 1,024 requests.</p> | DEPENDENT | haproxy.server.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Redispatched requests per second | <p>Number of times a request was redispatched to a different backend.</p> | DEPENDENT | haproxy.server.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Retried connections per second | <p>Number of times a connection was retried.</p> | DEPENDENT | haproxy.server.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Number of responses with codes 4xx per second | <p>Number of HTTP client errors per second.</p> | DEPENDENT | haproxy.server.hrsp_4xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_4xx.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy {#PXNAME} {#SVNAME}: Number of responses with codes 5xx per second | <p>Number of HTTP server errors per second.</p> | DEPENDENT | haproxy.server.hrsp_5xx.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].hrsp_5xx.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Status | <p>-</p> | DEPENDENT | haproxy.backend_tcp.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Response time | <p>Average backend response time (in ms) for the last 1,024 requests.</p> | DEPENDENT | haproxy.backend_tcp.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Connection errors per second | <p>Number of requests that encountered an error attempting to connect to a backend server.</p> | DEPENDENT | haproxy.backend_tcp.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Responses denied per second | <p>Responses denied due to security concerns (ACL-restricted).</p> | DEPENDENT | haproxy.backend_tcp.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Response errors per second | <p>Number of requests whose responses yielded an error</p> | DEPENDENT | haproxy.backend_tcp.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Unassigned requests | <p>Current number of requests unassigned in queue.</p> | DEPENDENT | haproxy.backend_tcp.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Time in queue | <p>Average time spent in queue (in ms) for the last 1,024 requests</p> | DEPENDENT | haproxy.backend_tcp.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Redispatched requests per second | <p>Number of times a request was redispatched to a different backend.</p> | DEPENDENT | haproxy.backend_tcp.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Backend {#PXNAME}: Retried connections per second | <p>Number of times a connection was retried.</p> | DEPENDENT | haproxy.backend_tcp.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Requests rate | <p>HTTP requests per second</p> | DEPENDENT | haproxy.frontend_tcp.req_rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].req_rate.first()`</p> |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Sessions rate | <p>Number of sessions created per second</p> | DEPENDENT | haproxy.frontend_tcp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rate.first()`</p> |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Established sessions | <p>The current number of established sessions.</p> | DEPENDENT | haproxy.frontend_tcp.scur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].scur.first()`</p> |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Session limits | <p>The maximum number of simultaneous sessions allowed, as defined by the maxconn setting in the frontend.</p> | DEPENDENT | haproxy.frontend_tcp.slim[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].slim.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Session utilization | <p>Percentage of sessions used (scur / slim * 100).</p> | CALCULATED | haproxy.frontend_tcp.sutil[{#PXNAME}:{#SVNAME}]<p>**Expression**:</p>`last(haproxy.frontend_tcp.scur[{#PXNAME}:{#SVNAME}]) / last(haproxy.frontend_tcp.slim[{#PXNAME}:{#SVNAME}]) * 100` |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Request errors per second | <p>Number of request errors per second.</p> | DEPENDENT | haproxy.frontend_tcp.ereq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].ereq.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Denied requests per second | <p>Requests denied due to security concerns (ACL-restricted) per second.</p> | DEPENDENT | haproxy.frontend_tcp.dreq.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dreq.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Incoming traffic | <p>Number of bits received by the frontend</p> | DEPENDENT | haproxy.frontend_tcp.bin[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bin.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP Frontend {#PXNAME}: Outgoing traffic | <p>Number of bits sent by the frontend</p> | DEPENDENT | haproxy.frontend_tcp.bout[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].bout.first()`</p><p>- MULTIPLIER: `8`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Status | <p>-</p> | DEPENDENT | haproxy.server_tcp.status[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].status.first()`</p><p>- BOOL_TO_DECIMAL<p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Response time | <p>Average server response time (in ms) for the last 1,024 requests.</p> | DEPENDENT | haproxy.server_tcp.rtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].rtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Connection errors per second | <p>Number of requests that encountered an error attempting to connect to a backend server.</p> | DEPENDENT | haproxy.server_tcp.econ.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].econ.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Responses denied per second | <p>Responses denied due to security concerns (ACL-restricted).</p> | DEPENDENT | haproxy.server_tcp.dresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].dresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Response errors per second | <p>Number of requests whose responses yielded an error.</p> | DEPENDENT | haproxy.server_tcp.eresp.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].eresp.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Unassigned requests | <p>Current number of requests unassigned in queue.</p> | DEPENDENT | haproxy.server_tcp.qcur[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qcur.first()`</p> |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Time in queue | <p>Average time spent in queue (in ms) for the last 1,024 requests.</p> | DEPENDENT | haproxy.server_tcp.qtime[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].qtime.first()`</p><p>- MULTIPLIER: `0.001`</p> |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Redispatched requests per second | <p>Number of times a request was redispatched to a different backend.</p> | DEPENDENT | haproxy.server_tcp.wredis.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wredis.first()`</p><p>- CHANGE_PER_SECOND |
+| HAProxy | HAProxy TCP {#PXNAME} {#SVNAME}: Retried connections per second | <p>Number of times a connection was retried.</p> | DEPENDENT | haproxy.server_tcp.wretr.rate[{#PXNAME}:{#SVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.pxname == '{#PXNAME}' && @.svname == '{#SVNAME}')].wretr.first()`</p><p>- CHANGE_PER_SECOND |
+| Zabbix_raw_items | HAProxy: Get stats | <p>HAProxy Statistics Report in CSV format</p> | HTTP_AGENT | haproxy.get<p>**Preprocessing**:</p><p>- REGEX: `# ([\s\S]*)\n \1`</p><p>- CSV_TO_JSON: ` 1`</p> |
+| Zabbix_raw_items | HAProxy: Get stats page | <p>HAProxy Statistics Report HTML</p> | HTTP_AGENT | haproxy.get_html |
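All of the dependent items above are carved out of the single `HAProxy: Get stats` master item: the REGEX step strips the leading `# ` from the CSV header line, CSV_TO_JSON turns the report into an array of objects, and each dependent item then selects one field for its `{#PXNAME}`/`{#SVNAME}` pair with a JSONPath filter (plus CHANGE_PER_SECOND or MULTIPLIER where needed). The following is a minimal Python sketch of that chain, run outside of Zabbix, which can help verify which proxy/server rows your stats endpoint actually returns; the URL and the proxy/server names used below are placeholders, not values taken from the template.

```python
# Minimal sketch (outside of Zabbix) of what the "HAProxy: Get stats" preprocessing
# chain does: strip the leading "# " from the CSV export, parse the CSV into objects
# (CSV_TO_JSON), then filter rows by pxname/svname the way the JSONPath steps do.
# The stats URL and the example proxy/server names are placeholders.
import csv
import io
import urllib.request

STATS_URL = "http://127.0.0.1:8404/stats;csv"  # hypothetical stats endpoint


def fetch_stats(url: str = STATS_URL) -> list[dict]:
    raw = urllib.request.urlopen(url).read().decode()
    # REGEX step (approximated): the export starts with "# pxname,svname,...".
    if raw.startswith("# "):
        raw = raw[2:]
    # CSV_TO_JSON step: the first row is the header.
    return list(csv.DictReader(io.StringIO(raw)))


def metric(rows: list[dict], pxname: str, svname: str, field: str) -> str:
    # JSONPath step: $.[?(@.pxname == '<px>' && @.svname == '<sv>')].<field>.first()
    return next(r[field] for r in rows if r["pxname"] == pxname and r["svname"] == svname)


if __name__ == "__main__":
    rows = fetch_stats()
    print(metric(rows, "http_backend", "BACKEND", "rtime"))  # example names only
```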
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|HAProxy: Version has changed (new version: {ITEM.VALUE}) |<p>HAProxy version has changed. Ack to close.</p> |`{TEMPLATE_NAME:haproxy.version.diff()}=1 and {TEMPLATE_NAME:haproxy.version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|HAProxy: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:haproxy.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|HAProxy: Service is down |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|HAProxy: Service response time is too high (over {$HAPROXY.RESPONSE_TIME.MAX.WARN} for 5m) |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service.perf["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"].min(5m)}>{$HAPROXY.RESPONSE_TIME.MAX.WARN}` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- HAProxy: Service is down</p> |
-|HAProxy backend {#PXNAME}: Server is DOWN |<p>Backend is not available.</p> |`{TEMPLATE_NAME:haproxy.backend.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` |AVERAGE | |
-|HAProxy backend {#PXNAME}: Average response time is more than {$HAPROXY.BACK_RTIME.MAX.WARN} for 5m |<p>Average backend response time (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_RTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_RTIME.MAX.WARN}` |WARNING | |
-|HAProxy backend {#PXNAME}: Number of responses with error is more than {$HAPROXY.BACK_ERESP.MAX.WARN} for 5m |<p>Number of requests on backend, whose responses yielded an error, is more than {$HAPROXY.BACK_ERESP.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_ERESP.MAX.WARN}` |WARNING | |
-|HAProxy backend {#PXNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN} for 5m |<p>Current number of requests on backend unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QCUR.MAX.WARN}` |WARNING | |
-|HAProxy backend {#PXNAME}: Average time spent in queue is more than {$HAPROXY.BACK_QTIME.MAX.WARN} for 5m |<p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_QTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QTIME.MAX.WARN}` |WARNING | |
-|HAProxy frontend {#PXNAME}: Session utilization is more than {$HAPROXY.FRONT_SUTIL.MAX.WARN}% for 5m |<p>Alerting on this metric is essential to ensure your server has sufficient capacity to handle all concurrent sessions. Unlike requests, upon reaching the session limit HAProxy will deny additional clients until resource consumption drops. Furthermore, if you find your session usage percentage to be hovering above 80%, it could be time to either modify HAProxy’s configuration to allow more sessions, or migrate your HAProxy server to a bigger box.</p> |`{TEMPLATE_NAME:haproxy.frontend.sutil[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_SUTIL.MAX.WARN}` |WARNING | |
-|HAProxy frontend {#PXNAME}: Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN} for 5m |<p>Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.frontend.ereq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_EREQ.MAX.WARN}` |WARNING | |
-|HAProxy frontend {#PXNAME}: Number of requests denied is more than {$HAPROXY.FRONT_DREQ.MAX.WARN} for 5m |<p>Number of requests denied due to security concerns (ACL-restricted) is more than {$HAPROXY.FRONT_DREQ.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.frontend.dreq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_DREQ.MAX.WARN}` |WARNING | |
-|HAProxy {#PXNAME} {#SVNAME}: Server is DOWN |<p>Server is not available.</p> |`{TEMPLATE_NAME:haproxy.server.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` |WARNING | |
-|HAProxy {#PXNAME} {#SVNAME}: Average response time is more than {$HAPROXY.SERVER_RTIME.MAX.WARN} for 5m |<p>Average server response time (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_RTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_RTIME.MAX.WARN}` |WARNING | |
-|HAProxy {#PXNAME} {#SVNAME}: Number of responses with error is more than {$HAPROXY.SERVER_ERESP.MAX.WARN} for 5m |<p>Number of requests on server, whose responses yielded an error, is more than {$HAPROXY.SERVER_ERESP.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_ERESP.MAX.WARN}` |WARNING | |
-|HAProxy {#PXNAME} {#SVNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN} for 5m |<p>Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QCUR.MAX.WARN}` |WARNING | |
-|HAProxy {#PXNAME} {#SVNAME}: Average time spent in queue is more than {$HAPROXY.SERVER_QTIME.MAX.WARN} for 5m |<p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_QTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QTIME.MAX.WARN}` |WARNING | |
-|HAProxy TCP Backend {#PXNAME}: Server is DOWN |<p>Backend is not available.</p> |`{TEMPLATE_NAME:haproxy.backend_tcp.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` |AVERAGE | |
-|HAProxy TCP Backend {#PXNAME}: Average response time is more than {$HAPROXY.BACK_RTIME.MAX.WARN} for 5m |<p>Average backend response time (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_RTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend_tcp.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_RTIME.MAX.WARN}` |WARNING | |
-|HAProxy TCP Backend {#PXNAME}: Number of responses with error is more than {$HAPROXY.BACK_ERESP.MAX.WARN} for 5m |<p>Number of requests on backend, whose responses yielded an error, is more than {$HAPROXY.BACK_ERESP.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend_tcp.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_ERESP.MAX.WARN}` |WARNING | |
-|HAProxy TCP Backend {#PXNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN} for 5m |<p>Current number of requests on backend unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend_tcp.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QCUR.MAX.WARN}` |WARNING | |
-|HAProxy TCP Backend {#PXNAME}: Average time spent in queue is more than {$HAPROXY.BACK_QTIME.MAX.WARN} for 5m |<p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_QTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.backend_tcp.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QTIME.MAX.WARN}` |WARNING | |
-|HAProxy TCP Frontend {#PXNAME}: Session utilization is more than {$HAPROXY.FRONT_SUTIL.MAX.WARN}% for 5m |<p>Alerting on this metric is essential to ensure your server has sufficient capacity to handle all concurrent sessions. Unlike requests, upon reaching the session limit HAProxy will deny additional clients until resource consumption drops. Furthermore, if you find your session usage percentage to be hovering above 80%, it could be time to either modify HAProxy’s configuration to allow more sessions, or migrate your HAProxy server to a bigger box.</p> |`{TEMPLATE_NAME:haproxy.frontend_tcp.sutil[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_SUTIL.MAX.WARN}` |WARNING | |
-|HAProxy TCP Frontend {#PXNAME}: Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN} for 5m |<p>Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.frontend_tcp.ereq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_EREQ.MAX.WARN}` |WARNING | |
-|HAProxy TCP Frontend {#PXNAME}: Number of requests denied is more than {$HAPROXY.FRONT_DREQ.MAX.WARN} for 5m |<p>Number of requests denied due to security concerns (ACL-restricted) is more than {$HAPROXY.FRONT_DREQ.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.frontend_tcp.dreq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_DREQ.MAX.WARN}` |WARNING | |
-|HAProxy TCP {#PXNAME} {#SVNAME}: Server is DOWN |<p>Server is not available.</p> |`{TEMPLATE_NAME:haproxy.server_tcp.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` |WARNING | |
-|HAProxy TCP {#PXNAME} {#SVNAME}: Average response time is more than {$HAPROXY.SERVER_RTIME.MAX.WARN} for 5m |<p>Average server response time (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_RTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server_tcp.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_RTIME.MAX.WARN}` |WARNING | |
-|HAProxy TCP {#PXNAME} {#SVNAME}: Number of responses with error is more than {$HAPROXY.SERVER_ERESP.MAX.WARN} for 5m |<p>Number of requests on server, whose responses yielded an error, is more than {$HAPROXY.SERVER_ERESP.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server_tcp.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_ERESP.MAX.WARN}` |WARNING | |
-|HAProxy TCP {#PXNAME} {#SVNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN} for 5m |<p>Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server_tcp.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QCUR.MAX.WARN}` |WARNING | |
-|HAProxy TCP {#PXNAME} {#SVNAME}: Average time spent in queue is more than {$HAPROXY.SERVER_QTIME.MAX.WARN} for 5m |<p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_QTIME.MAX.WARN}.</p> |`{TEMPLATE_NAME:haproxy.server_tcp.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QTIME.MAX.WARN}` |WARNING | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------------------------------------------------------------------|
+| HAProxy: Version has changed (new version: {ITEM.VALUE}) | <p>HAProxy version has changed. Ack to close.</p> | `{TEMPLATE_NAME:haproxy.version.diff()}=1 and {TEMPLATE_NAME:haproxy.version.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| HAProxy: has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:haproxy.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| HAProxy: Service is down | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"].last()}=0` | AVERAGE | <p>Manual close: YES</p> |
+| HAProxy: Service response time is too high (over {$HAPROXY.RESPONSE_TIME.MAX.WARN} for 5m) | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service.perf["{$HAPROXY.STATS.SCHEME}","{HOST.CONN}","{$HAPROXY.STATS.PORT}"].min(5m)}>{$HAPROXY.RESPONSE_TIME.MAX.WARN}` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- HAProxy: Service is down</p> |
+| HAProxy backend {#PXNAME}: Server is DOWN | <p>Backend is not available.</p> | `{TEMPLATE_NAME:haproxy.backend.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` | AVERAGE | |
+| HAProxy backend {#PXNAME}: Average response time is more than {$HAPROXY.BACK_RTIME.MAX.WARN} for 5m | <p>Average backend response time (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_RTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_RTIME.MAX.WARN}` | WARNING | |
+| HAProxy backend {#PXNAME}: Number of responses with error is more than {$HAPROXY.BACK_ERESP.MAX.WARN} for 5m | <p>Number of requests on backend, whose responses yielded an error, is more than {$HAPROXY.BACK_ERESP.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_ERESP.MAX.WARN}` | WARNING | |
+| HAProxy backend {#PXNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN} for 5m | <p>Current number of requests on backend unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QCUR.MAX.WARN}` | WARNING | |
+| HAProxy backend {#PXNAME}: Average time spent in queue is more than {$HAPROXY.BACK_QTIME.MAX.WARN} for 5m | <p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_QTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QTIME.MAX.WARN}` | WARNING | |
+| HAProxy frontend {#PXNAME}: Session utilization is more than {$HAPROXY.FRONT_SUTIL.MAX.WARN}% for 5m | <p>Alerting on this metric is essential to ensure your server has sufficient capacity to handle all concurrent sessions. Unlike requests, upon reaching the session limit HAProxy will deny additional clients until resource consumption drops. Furthermore, if you find your session usage percentage to be hovering above 80%, it could be time to either modify HAProxy’s configuration to allow more sessions, or migrate your HAProxy server to a bigger box.</p> | `{TEMPLATE_NAME:haproxy.frontend.sutil[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_SUTIL.MAX.WARN}` | WARNING | |
+| HAProxy frontend {#PXNAME}: Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN} for 5m | <p>Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.frontend.ereq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_EREQ.MAX.WARN}` | WARNING | |
+| HAProxy frontend {#PXNAME}: Number of requests denied is more than {$HAPROXY.FRONT_DREQ.MAX.WARN} for 5m | <p>Number of requests denied due to security concerns (ACL-restricted) is more than {$HAPROXY.FRONT_DREQ.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.frontend.dreq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_DREQ.MAX.WARN}` | WARNING | |
+| HAProxy {#PXNAME} {#SVNAME}: Server is DOWN | <p>Server is not available.</p> | `{TEMPLATE_NAME:haproxy.server.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` | WARNING | |
+| HAProxy {#PXNAME} {#SVNAME}: Average response time is more than {$HAPROXY.SERVER_RTIME.MAX.WARN} for 5m | <p>Average server response time (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_RTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_RTIME.MAX.WARN}` | WARNING | |
+| HAProxy {#PXNAME} {#SVNAME}: Number of responses with error is more than {$HAPROXY.SERVER_ERESP.MAX.WARN} for 5m | <p>Number of requests on server, whose responses yielded an error, is more than {$HAPROXY.SERVER_ERESP.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_ERESP.MAX.WARN}` | WARNING | |
+| HAProxy {#PXNAME} {#SVNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN} for 5m | <p>Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QCUR.MAX.WARN}` | WARNING | |
+| HAProxy {#PXNAME} {#SVNAME}: Average time spent in queue is more than {$HAPROXY.SERVER_QTIME.MAX.WARN} for 5m | <p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_QTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QTIME.MAX.WARN}` | WARNING | |
+| HAProxy TCP Backend {#PXNAME}: Server is DOWN | <p>Backend is not available.</p> | `{TEMPLATE_NAME:haproxy.backend_tcp.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` | AVERAGE | |
+| HAProxy TCP Backend {#PXNAME}: Average response time is more than {$HAPROXY.BACK_RTIME.MAX.WARN} for 5m | <p>Average backend response time (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_RTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend_tcp.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_RTIME.MAX.WARN}` | WARNING | |
+| HAProxy TCP Backend {#PXNAME}: Number of responses with error is more than {$HAPROXY.BACK_ERESP.MAX.WARN} for 5m | <p>Number of requests on backend, whose responses yielded an error, is more than {$HAPROXY.BACK_ERESP.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend_tcp.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_ERESP.MAX.WARN}` | WARNING | |
+| HAProxy TCP Backend {#PXNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN} for 5m | <p>Current number of requests on backend unassigned in queue is more than {$HAPROXY.BACK_QCUR.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend_tcp.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QCUR.MAX.WARN}` | WARNING | |
+| HAProxy TCP Backend {#PXNAME}: Average time spent in queue is more than {$HAPROXY.BACK_QTIME.MAX.WARN} for 5m | <p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.BACK_QTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.backend_tcp.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.BACK_QTIME.MAX.WARN}` | WARNING | |
+| HAProxy TCP Frontend {#PXNAME}: Session utilization is more than {$HAPROXY.FRONT_SUTIL.MAX.WARN}% for 5m | <p>Alerting on this metric is essential to ensure your server has sufficient capacity to handle all concurrent sessions. Unlike requests, upon reaching the session limit HAProxy will deny additional clients until resource consumption drops. Furthermore, if you find your session usage percentage to be hovering above 80%, it could be time to either modify HAProxy’s configuration to allow more sessions, or migrate your HAProxy server to a bigger box.</p> | `{TEMPLATE_NAME:haproxy.frontend_tcp.sutil[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_SUTIL.MAX.WARN}` | WARNING | |
+| HAProxy TCP Frontend {#PXNAME}: Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN} for 5m | <p>Number of request errors is more than {$HAPROXY.FRONT_EREQ.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.frontend_tcp.ereq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_EREQ.MAX.WARN}` | WARNING | |
+| HAProxy TCP Frontend {#PXNAME}: Number of requests denied is more than {$HAPROXY.FRONT_DREQ.MAX.WARN} for 5m | <p>Number of requests denied due to security concerns (ACL-restricted) is more than {$HAPROXY.FRONT_DREQ.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.frontend_tcp.dreq.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.FRONT_DREQ.MAX.WARN}` | WARNING | |
+| HAProxy TCP {#PXNAME} {#SVNAME}: Server is DOWN | <p>Server is not available.</p> | `{TEMPLATE_NAME:haproxy.server_tcp.status[{#PXNAME}:{#SVNAME}].max(#5)}=0` | WARNING | |
+| HAProxy TCP {#PXNAME} {#SVNAME}: Average response time is more than {$HAPROXY.SERVER_RTIME.MAX.WARN} for 5m | <p>Average server response time (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_RTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server_tcp.rtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_RTIME.MAX.WARN}` | WARNING | |
+| HAProxy TCP {#PXNAME} {#SVNAME}: Number of responses with error is more than {$HAPROXY.SERVER_ERESP.MAX.WARN} for 5m | <p>Number of requests on server, whose responses yielded an error, is more than {$HAPROXY.SERVER_ERESP.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server_tcp.eresp.rate[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_ERESP.MAX.WARN}` | WARNING | |
+| HAProxy TCP {#PXNAME} {#SVNAME}: Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN} for 5m | <p>Current number of requests unassigned in queue is more than {$HAPROXY.SERVER_QCUR.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server_tcp.qcur[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QCUR.MAX.WARN}` | WARNING | |
+| HAProxy TCP {#PXNAME} {#SVNAME}: Average time spent in queue is more than {$HAPROXY.SERVER_QTIME.MAX.WARN} for 5m | <p>Average time spent in queue (in ms) for the last 1,024 requests is more than {$HAPROXY.SERVER_QTIME.MAX.WARN}.</p> | `{TEMPLATE_NAME:haproxy.server_tcp.qtime[{#PXNAME}:{#SVNAME}].min(5m)}>{$HAPROXY.SERVER_QTIME.MAX.WARN}` | WARNING | |
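The session-utilization triggers above pair the calculated `sutil` item (scur / slim * 100) with a `min(5m)` comparison against `{$HAPROXY.FRONT_SUTIL.MAX.WARN}`, so the alert fires only when utilization stays above the threshold for the whole five-minute window (the expressions use the pre-5.4 `{TEMPLATE_NAME:key.function()}` syntax). The sketch below is only an analogue of that evaluation logic, not Zabbix code; the sample values and the 80% threshold are illustrative.

```python
# Analogue of the session-utilization trigger: the calculated item is scur / slim * 100,
# and the trigger fires only when min() over the evaluation window stays above the
# {$HAPROXY.FRONT_SUTIL.MAX.WARN} threshold. Sample data and threshold are illustrative.
from typing import Sequence


def session_utilization(scur: float, slim: float) -> float:
    """Calculated item: percentage of allowed sessions currently in use."""
    return scur / slim * 100


def sutil_trigger(samples: Sequence[float], threshold: float) -> bool:
    """min(5m) > threshold: every sample in the window must exceed the threshold."""
    return min(samples) > threshold


# scur values collected over a 5-minute window against a maxconn (slim) of 5000.
window = [session_utilization(scur, 5000) for scur in (4300, 4450, 4200, 4600, 4800)]
print(sutil_trigger(window, threshold=80))  # True -> the WARNING trigger would fire
```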
## Feedback
diff --git a/templates/app/haproxy_http/template_app_haproxy_http.yaml b/templates/app/haproxy_http/template_app_haproxy_http.yaml
index f1a985e8b05..b7e99e1c83a 100644
--- a/templates/app/haproxy_http/template_app_haproxy_http.yaml
+++ b/templates/app/haproxy_http/template_app_haproxy_http.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-02T19:42:22Z'
+ date: '2021-04-22T11:27:23Z'
groups:
-
name: Templates/Applications
@@ -2113,131 +2113,137 @@ zabbix_export:
dashboards:
-
name: 'HAProxy Backend performance'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'HAProxy: Backend {#PXNAME} Redispatched requests and retried connections per second'
- host: 'HAProxy by HTTP'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'HAProxy: Backend {#PXNAME} Redispatched requests and retried connections per second'
+ host: 'HAProxy by HTTP'
-
name: 'HAProxy Frontend performance'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'HAProxy: Frontend {#PXNAME} Requests and sessions per second'
- host: 'HAProxy by HTTP'
- -
- type: GRAPH_PROTOTYPE
- x: '12'
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'HAProxy: Frontend {#PXNAME} Requests and sessions per second'
+ host: 'HAProxy by HTTP'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'HAProxy: Frontend {#PXNAME} Errors and denials per second'
- host: 'HAProxy by HTTP'
- -
- type: GRAPH_PROTOTYPE
- 'y': '12'
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ x: '12'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'HAProxy: Frontend {#PXNAME} Errors and denials per second'
+ host: 'HAProxy by HTTP'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'HAProxy: Frontend {#PXNAME} Responses by HTTP code'
- host: 'HAProxy by HTTP'
- -
- type: GRAPH_PROTOTYPE
- x: '12'
- 'y': '12'
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ 'y': '12'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'HAProxy: Frontend {#PXNAME} Responses by HTTP code'
+ host: 'HAProxy by HTTP'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'HAProxy: Frontend {#PXNAME} In/Out traffic'
- host: 'HAProxy by HTTP'
+ x: '12'
+ 'y': '12'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'HAProxy: Frontend {#PXNAME} In/Out traffic'
+ host: 'HAProxy by HTTP'
-
name: 'HAProxy Server performance'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'HAProxy: {#PXNAME} {#SVNAME} Response time and time in queue'
- host: 'HAProxy by HTTP'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'HAProxy: {#PXNAME} {#SVNAME} Response time and time in queue'
+ host: 'HAProxy by HTTP'
valuemaps:
-
name: 'Service state'
diff --git a/templates/app/iis_agent/README.md b/templates/app/iis_agent/README.md
index 1198cd7dd51..6c9f0b4ef1e 100644
--- a/templates/app/iis_agent/README.md
+++ b/templates/app/iis_agent/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor IIS (Internet Information Services) by Zabbix that works without any external scripts.<br>
Your server must have the following roles:
```text
@@ -19,7 +19,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
1\. [Import](https://www.zabbix.com/documentation/5.4/manual/xml_export_import/templates) the template ([template_app_iis_agent.yaml](template_app_iis_agent.yaml) or [template_app_iis_agent_active.yaml](template_app_iis_agent_active.yaml)) into Zabbix.
@@ -39,15 +39,15 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$IIS.APPPOOL.MATCHES} |<p>This macro is used in application pools discovery. Can be overridden on the host or linked template level.</p> |`.+` |
-|{$IIS.APPPOOL.MONITORED} |<p>Monitoring status for discovered application pools. Use context to avoid trigger firing for specific application pools. "1" - enabled, "0" - disabled.</p> |`1` |
-|{$IIS.APPPOOL.NOT_MATCHES} |<p>This macro is used in application pools discovery. Can be overridden on the host or linked template level.</p> |`<CHANGE_IF_NEEDED>` |
-|{$IIS.PORT} |<p>Listening port.</p> |`80` |
-|{$IIS.QUEUE.MAX.TIME} |<p>The time during which the queue length may exceed the threshold.</p> |`5m` |
-|{$IIS.QUEUE.MAX.WARN} |<p>Maximum application pool's request queue length for trigger expression.</p> |`` |
-|{$IIS.SERVICE} |<p>The service (http/https/etc) for port check. See "net.tcp.service" documentation page for more information: https://www.zabbix.com/documentation/5.4/manual/config/items/itemtypes/simple_checks</p> |`http` |
+| Name | Description | Default |
+|----------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------|
+| {$IIS.APPPOOL.MATCHES} | <p>This macro is used in application pools discovery. Can be overridden on the host or linked template level.</p> | `.+` |
+| {$IIS.APPPOOL.MONITORED} | <p>Monitoring status for discovered application pools. Use context to avoid trigger firing for specific application pools. "1" - enabled, "0" - disabled.</p> | `1` |
+| {$IIS.APPPOOL.NOT_MATCHES} | <p>This macro is used in application pools discovery. Can be overridden on the host or linked template level.</p> | `<CHANGE_IF_NEEDED>` |
+| {$IIS.PORT} | <p>Listening port.</p> | `80` |
+| {$IIS.QUEUE.MAX.TIME} | <p>The time during which the queue length may exceed the threshold.</p> | `5m` |
+| {$IIS.QUEUE.MAX.WARN} | <p>Maximum application pool's request queue length for trigger expression.</p> | `` |
+| {$IIS.SERVICE} | <p>The service (http/https/etc) for port check. See "net.tcp.service" documentation page for more information: https://www.zabbix.com/documentation/5.4/manual/config/items/itemtypes/simple_checks</p> | `http` |
## Template links
@@ -55,68 +55,68 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Application pools discovery |<p>-</p> |ZABBIX_PASSIVE |wmi.getall[root\webAdministration, select Name from ApplicationPool]<p>**Filter**:</p>AND <p>- A: {#APPPOOL} NOT_MATCHES_REGEX `{$IIS.APPPOOL.NOT_MATCHES}`</p><p>- B: {#APPPOOL} MATCHES_REGEX `{$IIS.APPPOOL.MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|-----------------------------|-------------|----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Application pools discovery | <p>-</p> | ZABBIX_PASSIVE | wmi.getall[root\webAdministration, select Name from ApplicationPool]<p>**Filter**:</p>AND <p>- A: {#APPPOOL} NOT_MATCHES_REGEX `{$IIS.APPPOOL.NOT_MATCHES}`</p><p>- B: {#APPPOOL} MATCHES_REGEX `{$IIS.APPPOOL.MATCHES}`</p> |
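The discovery filter above keeps an application pool only when `{#APPPOOL}` does not match `{$IIS.APPPOOL.NOT_MATCHES}` and does match `{$IIS.APPPOOL.MATCHES}` (the two conditions are AND-ed). Below is a small sketch of that filtering using the macro defaults from the table; the pool names are hypothetical.

```python
# Sketch of the LLD filter: a pool is kept only if it does NOT match
# {$IIS.APPPOOL.NOT_MATCHES} and DOES match {$IIS.APPPOOL.MATCHES}.
# The regexes are the defaults from the macros table; pool names are made up.
import re

APPPOOL_MATCHES = r".+"
APPPOOL_NOT_MATCHES = r"<CHANGE_IF_NEEDED>"


def discovered(pools: list[str]) -> list[str]:
    return [
        p for p in pools
        if not re.search(APPPOOL_NOT_MATCHES, p) and re.search(APPPOOL_MATCHES, p)
    ]


print(discovered(["DefaultAppPool", ".NET v4.5", "LegacyPool"]))
# -> all three pools, until {$IIS.APPPOOL.NOT_MATCHES} is overridden on the host
```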
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|IIS |IIS: World Wide Web Publishing Service (W3SVC) state |<p>The World Wide Web Publishing Service (W3SVC) provides web connectivity and administration of websites through the IIS snap-in. If the World Wide Web Publishing Service stops, the operating system cannot serve any form of web request. This service was dependent on "Windows Process Activation Service".</p> |ZABBIX_PASSIVE |service_state[W3SVC]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Windows Process Activation Service (WAS) state |<p>Windows Process Activation Service (WAS) is a tool for managing worker processes that contain applications that host Windows Communication Foundation (WCF) services. Worker processes handle requests that are sent to a Web Server for specific application pools. Each application pool sets boundaries for the applications it contains.</p> |ZABBIX_PASSIVE |service_state[WAS]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: {$IIS.PORT} port ping |<p>-</p> |SIMPLE |net.tcp.service[{$IIS.SERVICE},,{$IIS.PORT}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Uptime |<p>Service uptime in seconds.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Service Uptime"] |
-|IIS |IIS: Bytes Received per second |<p>The average rate per minute at which data bytes are received by the service at the Application Layer. Does not include protocol headers or control bytes.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Bytes Received/sec", 60] |
-|IIS |IIS: Bytes Sent per second |<p>The average rate per minute at which data bytes are sent by the service.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Bytes Sent/sec", 60] |
-|IIS |IIS: Bytes Total per second |<p>The average rate per minute of total bytes/sec transferred by the Web service (sum of bytes sent/sec and bytes received/sec).</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Bytes Total/Sec", 60] |
-|IIS |IIS: Current connections |<p>The number of active connections.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Current Connections"] |
-|IIS |IIS: Total connection attempts |<p>The total number of connections to the Web or FTP service that have been attempted since service startup. The count is the total for all Web sites or FTP sites combined.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Total Connection Attempts (all instances)"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Connection attempts per second |<p>The average rate per minute that connections using the Web service are being attempted. The count is the average for all Web sites combined.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Connection Attempts/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Anonymous users per second |<p>The number of requests from users over an anonymous connection per second. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Anonymous Users/sec", 60] |
-|IIS |IIS: NonAnonymous users per second |<p>The number of requests from users over a non-anonymous connection per second. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\NonAnonymous Users/sec", 60] |
-|IIS |IIS: Method Method GET requests per second |<p>The rate of HTTP requests made using the GET method. GET requests are generally used for basic file retrievals or image maps, though they can be used with forms. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Get Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method COPY requests per second |<p>The rate of HTTP requests made using the COPY method. Copy requests are used for copying files and directories. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Copy Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method CGI requests per second |<p>The rate of CGI requests that are simultaneously being processed by the Web service. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\CGI Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method DELETE requests per second |<p>The rate of HTTP requests using the DELETE method made. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Delete Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method HEAD requests per second |<p>The rate of HTTP requests using the HEAD method made. HEAD requests generally indicate a client is querying the state of a document they already have to see if it needs to be refreshed. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Head Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method ISAPI requests per second |<p>The rate of ISAPI Extension requests that are simultaneously being processed by the Web service. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\ISAPI Extension Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method LOCK requests per second |<p>The rate of HTTP requests made using the LOCK method. Lock requests are used to lock a file for one user so that only that user can modify the file. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Lock Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method MKCOL requests per second |<p>The rate of HTTP requests using the MKCOL method made. Mkcol requests are used to create directories on the server. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Mkcol Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method MOVE requests per second |<p>The rate of HTTP requests using the MOVE method made. Move requests are used for moving files and directories. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Move Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method OPTIONS requests per second |<p>The rate of HTTP requests using the OPTIONS method made. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Options Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method POST requests per second |<p>Rate of HTTP requests using POST method. Generally used for forms or gateway requests. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Post Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method PROPFIND requests per second |<p>The rate of HTTP requests using the PROPFIND method made. Propfind requests retrieve property values on files and directories. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Propfind Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method PROPPATCH requests per second |<p>The rate of HTTP requests using the PROPPATCH method made. Proppatch requests set property values on files and directories. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Proppatch Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method PUT requests per second |<p>The rate of HTTP requests using the PUT method made. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Put Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method MS-SEARCH requests per second |<p>The rate of HTTP requests using the MS-SEARCH method made. Search requests are used to query the server to find resources that match a set of conditions provided by the client. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Search Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method TRACE requests per second |<p>The rate of HTTP requests using the TRACE method made. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Trace Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method TRACE requests per second |<p>The rate of HTTP requests using the UNLOCK method made. Unlock requests are used to remove locks from files. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Unlock Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method Total requests per second |<p>The rate of all HTTP requests received. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Total Method Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method Total Other requests per second |<p>Total Other Request Methods is the number of HTTP requests that are not OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, MOVE, COPY, MKCOL, PROPFIND, PROPPATCH, SEARCH, LOCK or UNLOCK methods (since service startup). Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Other Request Methods/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Locked errors per second |<p>The rate of errors due to requests that couldn't be satisfied by the server because the requested document was locked. These are generally reported as an HTTP 423 error code to the client. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Locked Errors/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Not Found errors per second |<p>The rate of errors due to requests that couldn't be satisfied by the server because the requested document could not be found. These are generally reported to the client with HTTP error code 404. Average per minute.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service(_Total)\Not Found Errors/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Files cache hits percentage |<p>The ratio of user-mode file cache hits to total cache requests (since service startup). Note: This value might be low if the Kernel URI cache hits percentage is high.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service Cache\File Cache Hits %"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: URIs cache hits percentage |<p>The ratio of user-mode URI Cache Hits to total cache requests (since service startup)</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service Cache\URI Cache Hits %"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: File cache misses |<p>The total number of unsuccessful lookups in the user-mode file cache since service startup.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service Cache\File Cache Misses"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: URI cache misses |<p>The total number of unsuccessful lookups in the user-mode URI cache since service startup.</p> |ZABBIX_PASSIVE |perf_counter_en["\Web Service Cache\URI Cache Misses"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: {#APPPOOL} Uptime |<p>The web application uptime period since the last restart.</p> |ZABBIX_PASSIVE |perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool Uptime"] |
-|IIS |IIS: AppPool {#APPPOOL} state |<p>The state of the application pool.</p> |ZABBIX_PASSIVE |perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool State"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: AppPool {#APPPOOL} recycles |<p>The number of times the application pool has been recycled since Windows Process Activation Service (WAS) started.</p> |ZABBIX_PASSIVE |perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Total Application Pool Recycles"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: AppPool {#APPPOOL} current queue size |<p>The number of requests in the queue.</p> |ZABBIX_PASSIVE |perf_counter_en["\HTTP Service Request Queues({#APPPOOL})\CurrentQueueSize"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|-------|------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|
+| IIS | IIS: World Wide Web Publishing Service (W3SVC) state | <p>The World Wide Web Publishing Service (W3SVC) provides web connectivity and administration of websites through the IIS snap-in. If the World Wide Web Publishing Service stops, the operating system cannot serve any form of web request. This service depends on the Windows Process Activation Service.</p> | ZABBIX_PASSIVE | service_state[W3SVC]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Windows Process Activation Service (WAS) state | <p>Windows Process Activation Service (WAS) is a tool for managing worker processes that contain applications that host Windows Communication Foundation (WCF) services. Worker processes handle requests that are sent to a Web Server for specific application pools. Each application pool sets boundaries for the applications it contains.</p> | ZABBIX_PASSIVE | service_state[WAS]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: {$IIS.PORT} port ping | <p>-</p> | SIMPLE | net.tcp.service[{$IIS.SERVICE},,{$IIS.PORT}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Uptime | <p>Service uptime in seconds.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Service Uptime"] |
+| IIS | IIS: Bytes Received per second | <p>The average rate per minute at which data bytes are received by the service at the Application Layer. Does not include protocol headers or control bytes.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Bytes Received/sec", 60] |
+| IIS | IIS: Bytes Sent per second | <p>The average rate per minute at which data bytes are sent by the service.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Bytes Sent/sec", 60] |
+| IIS | IIS: Bytes Total per second | <p>The average rate per minute of total bytes/sec transferred by the Web service (sum of bytes sent/sec and bytes received/sec).</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Bytes Total/Sec", 60] |
+| IIS | IIS: Current connections | <p>The number of active connections.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Current Connections"] |
+| IIS | IIS: Total connection attempts | <p>The total number of connections to the Web or FTP service that have been attempted since service startup. The count is the total for all Web sites or FTP sites combined.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Total Connection Attempts (all instances)"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Connection attempts per second | <p>The average rate per minute that connections using the Web service are being attempted. The count is the average for all Web sites combined.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Connection Attempts/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Anonymous users per second | <p>The number of requests from users over an anonymous connection per second. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Anonymous Users/sec", 60] |
+| IIS | IIS: NonAnonymous users per second | <p>The number of requests from users over a non-anonymous connection per second. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\NonAnonymous Users/sec", 60] |
+| IIS | IIS: Method GET requests per second | <p>The rate of HTTP requests made using the GET method. GET requests are generally used for basic file retrievals or image maps, though they can be used with forms. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Get Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method COPY requests per second | <p>The rate of HTTP requests made using the COPY method. Copy requests are used for copying files and directories. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Copy Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method CGI requests per second | <p>The rate of CGI requests that are simultaneously being processed by the Web service. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\CGI Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method DELETE requests per second | <p>The rate of HTTP requests made using the DELETE method. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Delete Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method HEAD requests per second | <p>The rate of HTTP requests made using the HEAD method. HEAD requests generally indicate a client is querying the state of a document they already have to see if it needs to be refreshed. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Head Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method ISAPI requests per second | <p>The rate of ISAPI Extension requests that are simultaneously being processed by the Web service. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\ISAPI Extension Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method LOCK requests per second | <p>The rate of HTTP requests made using the LOCK method. Lock requests are used to lock a file for one user so that only that user can modify the file. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Lock Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method MKCOL requests per second | <p>The rate of HTTP requests made using the MKCOL method. Mkcol requests are used to create directories on the server. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Mkcol Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method MOVE requests per second | <p>The rate of HTTP requests made using the MOVE method. Move requests are used for moving files and directories. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Move Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method OPTIONS requests per second | <p>The rate of HTTP requests made using the OPTIONS method. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Options Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method POST requests per second | <p>The rate of HTTP requests made using the POST method. POST requests are generally used for forms or gateway requests. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Post Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method PROPFIND requests per second | <p>The rate of HTTP requests made using the PROPFIND method. Propfind requests retrieve property values on files and directories. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Propfind Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method PROPPATCH requests per second | <p>The rate of HTTP requests made using the PROPPATCH method. Proppatch requests set property values on files and directories. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Proppatch Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method PUT requests per second | <p>The rate of HTTP requests made using the PUT method. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Put Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method MS-SEARCH requests per second | <p>The rate of HTTP requests made using the MS-SEARCH method. Search requests are used to query the server to find resources that match a set of conditions provided by the client. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Search Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method TRACE requests per second | <p>The rate of HTTP requests made using the TRACE method. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Trace Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method UNLOCK requests per second | <p>The rate of HTTP requests made using the UNLOCK method. Unlock requests are used to remove locks from files. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Unlock Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method Total requests per second | <p>The rate of all HTTP requests received. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Total Method Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method Total Other requests per second | <p>Total Other Request Methods is the number of HTTP requests that are not OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, MOVE, COPY, MKCOL, PROPFIND, PROPPATCH, SEARCH, LOCK or UNLOCK methods (since service startup). Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Other Request Methods/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Locked errors per second | <p>The rate of errors due to requests that couldn't be satisfied by the server because the requested document was locked. These are generally reported as an HTTP 423 error code to the client. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Locked Errors/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Not Found errors per second | <p>The rate of errors due to requests that couldn't be satisfied by the server because the requested document could not be found. These are generally reported to the client with HTTP error code 404. Average per minute.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service(_Total)\Not Found Errors/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Files cache hits percentage | <p>The ratio of user-mode file cache hits to total cache requests (since service startup). Note: This value might be low if the Kernel URI cache hits percentage is high.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service Cache\File Cache Hits %"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: URIs cache hits percentage | <p>The ratio of user-mode URI Cache Hits to total cache requests (since service startup)</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service Cache\URI Cache Hits %"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: File cache misses | <p>The total number of unsuccessful lookups in the user-mode file cache since service startup.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service Cache\File Cache Misses"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: URI cache misses | <p>The total number of unsuccessful lookups in the user-mode URI cache since service startup.</p> | ZABBIX_PASSIVE | perf_counter_en["\Web Service Cache\URI Cache Misses"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: {#APPPOOL} Uptime | <p>The web application uptime period since the last restart.</p> | ZABBIX_PASSIVE | perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool Uptime"] |
+| IIS | IIS: AppPool {#APPPOOL} state | <p>The state of the application pool.</p> | ZABBIX_PASSIVE | perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool State"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: AppPool {#APPPOOL} recycles | <p>The number of times the application pool has been recycled since Windows Process Activation Service (WAS) started.</p> | ZABBIX_PASSIVE | perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Total Application Pool Recycles"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: AppPool {#APPPOOL} current queue size | <p>The number of requests in the queue.</p> | ZABBIX_PASSIVE | perf_counter_en["\HTTP Service Request Queues({#APPPOOL})\CurrentQueueSize"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
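All of the items above are passive agent checks, so a key can be spot-checked from the Zabbix server or proxy with `zabbix_get` before the first scheduled poll. A minimal sketch (the address below is a placeholder for the monitored Windows host):

```text
# replace 192.0.2.10 with the address of the monitored IIS host
zabbix_get -s 192.0.2.10 -k 'service_state[W3SVC]'
zabbix_get -s 192.0.2.10 -k 'perf_counter_en["\Web Service(_Total)\Current Connections"]'
```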
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|IIS: The World Wide Web Publishing Service (W3SVC) is not running |<p>The World Wide Web Publishing Service (W3SVC) is not in running state. IIS cannot start.</p> |`{TEMPLATE_NAME:service_state[W3SVC].last()}<>0` |HIGH |<p>**Depends on**:</p><p>- IIS: Windows process Activation Service (WAS) is not the running</p> |
-|IIS: Windows process Activation Service (WAS) is not the running |<p>Windows Process Activation Service (WAS) is not in the running state. IIS cannot start.</p> |`{TEMPLATE_NAME:service_state[WAS].last()}<>0` |HIGH | |
-|IIS: Port {$IIS.PORT} is down |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service[{$IIS.SERVICE},,{$IIS.PORT}].last()}=0` |AVERAGE |<p>Manual close: YES</p><p>**Depends on**:</p><p>- IIS: The World Wide Web Publishing Service (W3SVC) is not running</p> |
-|IIS: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:perf_counter_en["\Web Service(_Total)\Service Uptime"].last()}<10m` |INFO |<p>Manual close: YES</p> |
-|IIS: {#APPPOOL} has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool Uptime"].last()}<10m` |INFO |<p>Manual close: YES</p> |
-|IIS: Application pool {#APPPOOL} is not in Running state |<p>-</p> |`{TEMPLATE_NAME:perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool State"].last()}<>3 and {$IIS.APPPOOL.MONITORED:"{#APPPOOL}"}=1` |HIGH | |
-|IIS: Application pool {#APPPOOL} has been recycled |<p>-</p> |`{TEMPLATE_NAME:perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Total Application Pool Recycles"].diff()}=1 and {$IIS.APPPOOL.MONITORED:"{#APPPOOL}"}=1` |INFO | |
-|IIS: Request queue of {#APPPOOL} is too large (over {$IIS.QUEUE.MAX.WARN}) |<p>-</p> |`{TEMPLATE_NAME:perf_counter_en["\HTTP Service Request Queues({#APPPOOL})\CurrentQueueSize"].min({$IIS.QUEUE.MAX.TIME})}>{$IIS.QUEUE.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- IIS: Application pool {#APPPOOL} is not in Running state</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|----------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------|
+| IIS: The World Wide Web Publishing Service (W3SVC) is not running | <p>The World Wide Web Publishing Service (W3SVC) is not in the running state. IIS cannot start.</p> | `{TEMPLATE_NAME:service_state[W3SVC].last()}<>0` | HIGH | <p>**Depends on**:</p><p>- IIS: Windows Process Activation Service (WAS) is not running</p> |
+| IIS: Windows Process Activation Service (WAS) is not running | <p>Windows Process Activation Service (WAS) is not in the running state. IIS cannot start.</p> | `{TEMPLATE_NAME:service_state[WAS].last()}<>0` | HIGH | |
+| IIS: Port {$IIS.PORT} is down | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service[{$IIS.SERVICE},,{$IIS.PORT}].last()}=0` | AVERAGE | <p>Manual close: YES</p><p>**Depends on**:</p><p>- IIS: The World Wide Web Publishing Service (W3SVC) is not running</p> |
+| IIS: has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:perf_counter_en["\Web Service(_Total)\Service Uptime"].last()}<10m` | INFO | <p>Manual close: YES</p> |
+| IIS: {#APPPOOL} has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool Uptime"].last()}<10m` | INFO | <p>Manual close: YES</p> |
+| IIS: Application pool {#APPPOOL} is not in Running state | <p>-</p> | `{TEMPLATE_NAME:perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool State"].last()}<>3 and {$IIS.APPPOOL.MONITORED:"{#APPPOOL}"}=1` | HIGH | |
+| IIS: Application pool {#APPPOOL} has been recycled | <p>-</p> | `{TEMPLATE_NAME:perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Total Application Pool Recycles"].diff()}=1 and {$IIS.APPPOOL.MONITORED:"{#APPPOOL}"}=1` | INFO | |
+| IIS: Request queue of {#APPPOOL} is too large (over {$IIS.QUEUE.MAX.WARN}) | <p>-</p> | `{TEMPLATE_NAME:perf_counter_en["\HTTP Service Request Queues({#APPPOOL})\CurrentQueueSize"].min({$IIS.QUEUE.MAX.TIME})}>{$IIS.QUEUE.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- IIS: Application pool {#APPPOOL} is not in Running state</p> |
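The application pool triggers above honor user macro context, so alerting can be tuned per pool without editing the template. A hypothetical host-level override that disables alerting for a pool named DefaultAppPool and sets the request-queue threshold to 1000 queued requests could look like:

```text
{$IIS.APPPOOL.MONITORED:"DefaultAppPool"} = 0
{$IIS.QUEUE.MAX.WARN} = 1000
```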
## Feedback
diff --git a/templates/app/iis_agent_active/README.md b/templates/app/iis_agent_active/README.md
index b4b5a3ff74d..05e9de2b8d5 100644
--- a/templates/app/iis_agent_active/README.md
+++ b/templates/app/iis_agent_active/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor IIS (Internet Information Services) by Zabbix that works without any external scripts.<br>
Your server must have the following roles:
```text
@@ -19,7 +19,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
1\. [Import](https://www.zabbix.com/documentation/5.4/manual/xml_export_import/templates) the template ([template_app_iis_agent.yaml](template_app_iis_agent.yaml) or [template_app_iis_agent_active.yaml](template_app_iis_agent_active.yaml)) into Zabbix.
@@ -39,15 +39,15 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$IIS.APPPOOL.MATCHES} |<p>This macro is used in application pools discovery. Can be overridden on the host or linked template level.</p> |`.+` |
-|{$IIS.APPPOOL.MONITORED} |<p>Monitoring status for discovered application pools. Use context to avoid trigger firing for specific application pools. "1" - enabled, "0" - disabled.</p> |`1` |
-|{$IIS.APPPOOL.NOT_MATCHES} |<p>This macro is used in application pools discovery. Can be overridden on the host or linked template level.</p> |`<CHANGE_IF_NEEDED>` |
-|{$IIS.PORT} |<p>Listening port.</p> |`80` |
-|{$IIS.QUEUE.MAX.TIME} |<p>The time during which the queue length may exceed the threshold.</p> |`5m` |
-|{$IIS.QUEUE.MAX.WARN} |<p>Maximum application pool's request queue length for trigger expression.</p> |`` |
-|{$IIS.SERVICE} |<p>The service (http/https/etc) for port check. See "net.tcp.service" documentation page for more information: https://www.zabbix.com/documentation/5.4/manual/config/items/itemtypes/simple_checks</p> |`http` |
+| Name | Description | Default |
+|----------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------|
+| {$IIS.APPPOOL.MATCHES} | <p>This macro is used in application pools discovery. Can be overridden on the host or linked template level.</p> | `.+` |
+| {$IIS.APPPOOL.MONITORED} | <p>Monitoring status for discovered application pools. Use context to avoid trigger firing for specific application pools. "1" - enabled, "0" - disabled.</p> | `1` |
+| {$IIS.APPPOOL.NOT_MATCHES} | <p>This macro is used in application pools discovery. Can be overridden on the host or linked template level.</p> | `<CHANGE_IF_NEEDED>` |
+| {$IIS.PORT} | <p>Listening port.</p> | `80` |
+| {$IIS.QUEUE.MAX.TIME} | <p>The time during which the queue length may exceed the threshold.</p> | `5m` |
+| {$IIS.QUEUE.MAX.WARN} | <p>Maximum application pool's request queue length for trigger expression.</p> | `` |
+| {$IIS.SERVICE} | <p>The service (http/https/etc) for port check. See "net.tcp.service" documentation page for more information: https://www.zabbix.com/documentation/5.4/manual/config/items/itemtypes/simple_checks</p> | `http` |
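If IIS serves HTTPS on a non-default port, the simple check behind "{$IIS.PORT} port ping" can be repointed by overriding these macros on the host, for example:

```text
{$IIS.SERVICE} = https
{$IIS.PORT} = 443
```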
## Template links
@@ -55,68 +55,68 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Application pools discovery |<p>-</p> |ZABBIX_ACTIVE |wmi.getall[root\webAdministration, select Name from ApplicationPool]<p>**Filter**:</p>AND <p>- A: {#APPPOOL} NOT_MATCHES_REGEX `{$IIS.APPPOOL.NOT_MATCHES}`</p><p>- B: {#APPPOOL} MATCHES_REGEX `{$IIS.APPPOOL.MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|-----------------------------|-------------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Application pools discovery | <p>-</p> | ZABBIX_ACTIVE | wmi.getall[root\webAdministration, select Name from ApplicationPool]<p>**Filter**:</p>AND <p>- A: {#APPPOOL} NOT_MATCHES_REGEX `{$IIS.APPPOOL.NOT_MATCHES}`</p><p>- B: {#APPPOOL} MATCHES_REGEX `{$IIS.APPPOOL.MATCHES}`</p> |
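The discovery rule queries the root\webAdministration WMI namespace, which is typically provided by the IIS Management Scripts and Tools role service. If no application pools are discovered, a quick check is to run the same query manually on the monitored host (elevated PowerShell; shown only as an illustration):

```text
Get-WmiObject -Namespace "root\webAdministration" -Class ApplicationPool | Select-Object Name
```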
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|IIS |IIS: World Wide Web Publishing Service (W3SVC) state |<p>The World Wide Web Publishing Service (W3SVC) provides web connectivity and administration of websites through the IIS snap-in. If the World Wide Web Publishing Service stops, the operating system cannot serve any form of web request. This service was dependent on "Windows Process Activation Service".</p> |ZABBIX_ACTIVE |service_state[W3SVC]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Windows Process Activation Service (WAS) state |<p>Windows Process Activation Service (WAS) is a tool for managing worker processes that contain applications that host Windows Communication Foundation (WCF) services. Worker processes handle requests that are sent to a Web Server for specific application pools. Each application pool sets boundaries for the applications it contains.</p> |ZABBIX_ACTIVE |service_state[WAS]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: {$IIS.PORT} port ping |<p>-</p> |SIMPLE |net.tcp.service[{$IIS.SERVICE},,{$IIS.PORT}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Uptime |<p>Service uptime in seconds.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Service Uptime"] |
-|IIS |IIS: Bytes Received per second |<p>The average rate per minute at which data bytes are received by the service at the Application Layer. Does not include protocol headers or control bytes.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Bytes Received/sec", 60] |
-|IIS |IIS: Bytes Sent per second |<p>The average rate per minute at which data bytes are sent by the service.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Bytes Sent/sec", 60] |
-|IIS |IIS: Bytes Total per second |<p>The average rate per minute of total bytes/sec transferred by the Web service (sum of bytes sent/sec and bytes received/sec).</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Bytes Total/Sec", 60] |
-|IIS |IIS: Current connections |<p>The number of active connections.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Current Connections"] |
-|IIS |IIS: Total connection attempts |<p>The total number of connections to the Web or FTP service that have been attempted since service startup. The count is the total for all Web sites or FTP sites combined.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Total Connection Attempts (all instances)"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Connection attempts per second |<p>The average rate per minute that connections using the Web service are being attempted. The count is the average for all Web sites combined.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Connection Attempts/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Anonymous users per second |<p>The number of requests from users over an anonymous connection per second. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Anonymous Users/sec", 60] |
-|IIS |IIS: NonAnonymous users per second |<p>The number of requests from users over a non-anonymous connection per second. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\NonAnonymous Users/sec", 60] |
-|IIS |IIS: Method Method GET requests per second |<p>The rate of HTTP requests made using the GET method. GET requests are generally used for basic file retrievals or image maps, though they can be used with forms. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Get Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method COPY requests per second |<p>The rate of HTTP requests made using the COPY method. Copy requests are used for copying files and directories. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Copy Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method CGI requests per second |<p>The rate of CGI requests that are simultaneously being processed by the Web service. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\CGI Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method DELETE requests per second |<p>The rate of HTTP requests using the DELETE method made. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Delete Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method HEAD requests per second |<p>The rate of HTTP requests using the HEAD method made. HEAD requests generally indicate a client is querying the state of a document they already have to see if it needs to be refreshed. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Head Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method ISAPI requests per second |<p>The rate of ISAPI Extension requests that are simultaneously being processed by the Web service. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\ISAPI Extension Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method LOCK requests per second |<p>The rate of HTTP requests made using the LOCK method. Lock requests are used to lock a file for one user so that only that user can modify the file. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Lock Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method MKCOL requests per second |<p>The rate of HTTP requests using the MKCOL method made. Mkcol requests are used to create directories on the server. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Mkcol Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method MOVE requests per second |<p>The rate of HTTP requests using the MOVE method made. Move requests are used for moving files and directories. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Move Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method OPTIONS requests per second |<p>The rate of HTTP requests using the OPTIONS method made. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Options Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method POST requests per second |<p>Rate of HTTP requests using POST method. Generally used for forms or gateway requests. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Post Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method PROPFIND requests per second |<p>The rate of HTTP requests using the PROPFIND method made. Propfind requests retrieve property values on files and directories. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Propfind Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method PROPPATCH requests per second |<p>The rate of HTTP requests using the PROPPATCH method made. Proppatch requests set property values on files and directories. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Proppatch Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method PUT requests per second |<p>The rate of HTTP requests using the PUT method made. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Put Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method MS-SEARCH requests per second |<p>The rate of HTTP requests using the MS-SEARCH method made. Search requests are used to query the server to find resources that match a set of conditions provided by the client. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Search Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method TRACE requests per second |<p>The rate of HTTP requests using the TRACE method made. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Trace Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method TRACE requests per second |<p>The rate of HTTP requests using the UNLOCK method made. Unlock requests are used to remove locks from files. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Unlock Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method Total requests per second |<p>The rate of all HTTP requests received. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Total Method Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Method Total Other requests per second |<p>Total Other Request Methods is the number of HTTP requests that are not OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, MOVE, COPY, MKCOL, PROPFIND, PROPPATCH, SEARCH, LOCK or UNLOCK methods (since service startup). Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Other Request Methods/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Locked errors per second |<p>The rate of errors due to requests that couldn't be satisfied by the server because the requested document was locked. These are generally reported as an HTTP 423 error code to the client. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Locked Errors/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Not Found errors per second |<p>The rate of errors due to requests that couldn't be satisfied by the server because the requested document could not be found. These are generally reported to the client with HTTP error code 404. Average per minute.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service(_Total)\Not Found Errors/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: Files cache hits percentage |<p>The ratio of user-mode file cache hits to total cache requests (since service startup). Note: This value might be low if the Kernel URI cache hits percentage is high.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service Cache\File Cache Hits %"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: URIs cache hits percentage |<p>The ratio of user-mode URI Cache Hits to total cache requests (since service startup)</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service Cache\URI Cache Hits %"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: File cache misses |<p>The total number of unsuccessful lookups in the user-mode file cache since service startup.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service Cache\File Cache Misses"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: URI cache misses |<p>The total number of unsuccessful lookups in the user-mode URI cache since service startup.</p> |ZABBIX_ACTIVE |perf_counter_en["\Web Service Cache\URI Cache Misses"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: {#APPPOOL} Uptime |<p>The web application uptime period since the last restart.</p> |ZABBIX_ACTIVE |perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool Uptime"] |
-|IIS |IIS: AppPool {#APPPOOL} state |<p>The state of the application pool.</p> |ZABBIX_ACTIVE |perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool State"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: AppPool {#APPPOOL} recycles |<p>The number of times the application pool has been recycled since Windows Process Activation Service (WAS) started.</p> |ZABBIX_ACTIVE |perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Total Application Pool Recycles"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|IIS |IIS: AppPool {#APPPOOL} current queue size |<p>The number of requests in the queue.</p> |ZABBIX_ACTIVE |perf_counter_en["\HTTP Service Request Queues({#APPPOOL})\CurrentQueueSize"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|-------|------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|
+| IIS | IIS: World Wide Web Publishing Service (W3SVC) state | <p>The World Wide Web Publishing Service (W3SVC) provides web connectivity and administration of websites through the IIS snap-in. If the World Wide Web Publishing Service stops, the operating system cannot serve any form of web request. This service depends on the Windows Process Activation Service.</p> | ZABBIX_ACTIVE | service_state[W3SVC]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Windows Process Activation Service (WAS) state | <p>Windows Process Activation Service (WAS) is a tool for managing worker processes that contain applications that host Windows Communication Foundation (WCF) services. Worker processes handle requests that are sent to a Web Server for specific application pools. Each application pool sets boundaries for the applications it contains.</p> | ZABBIX_ACTIVE | service_state[WAS]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: {$IIS.PORT} port ping | <p>-</p> | SIMPLE | net.tcp.service[{$IIS.SERVICE},,{$IIS.PORT}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Uptime | <p>Service uptime in seconds.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Service Uptime"] |
+| IIS | IIS: Bytes Received per second | <p>The average rate per minute at which data bytes are received by the service at the Application Layer. Does not include protocol headers or control bytes.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Bytes Received/sec", 60] |
+| IIS | IIS: Bytes Sent per second | <p>The average rate per minute at which data bytes are sent by the service.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Bytes Sent/sec", 60] |
+| IIS | IIS: Bytes Total per second | <p>The average rate per minute of total bytes/sec transferred by the Web service (sum of bytes sent/sec and bytes received/sec).</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Bytes Total/Sec", 60] |
+| IIS | IIS: Current connections | <p>The number of active connections.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Current Connections"] |
+| IIS | IIS: Total connection attempts | <p>The total number of connections to the Web or FTP service that have been attempted since service startup. The count is the total for all Web sites or FTP sites combined.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Total Connection Attempts (all instances)"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Connection attempts per second | <p>The average rate per minute that connections using the Web service are being attempted. The count is the average for all Web sites combined.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Connection Attempts/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Anonymous users per second | <p>The number of requests from users over an anonymous connection per second. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Anonymous Users/sec", 60] |
+| IIS | IIS: NonAnonymous users per second | <p>The number of requests from users over a non-anonymous connection per second. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\NonAnonymous Users/sec", 60] |
+| IIS | IIS: Method GET requests per second | <p>The rate of HTTP requests made using the GET method. GET requests are generally used for basic file retrievals or image maps, though they can be used with forms. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Get Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method COPY requests per second | <p>The rate of HTTP requests made using the COPY method. Copy requests are used for copying files and directories. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Copy Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method CGI requests per second | <p>The rate of CGI requests that are simultaneously being processed by the Web service. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\CGI Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method DELETE requests per second | <p>The rate of HTTP requests made using the DELETE method. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Delete Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method HEAD requests per second | <p>The rate of HTTP requests made using the HEAD method. HEAD requests generally indicate a client is querying the state of a document they already have to see if it needs to be refreshed. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Head Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method ISAPI requests per second | <p>The rate of ISAPI Extension requests that are simultaneously being processed by the Web service. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\ISAPI Extension Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method LOCK requests per second | <p>The rate of HTTP requests made using the LOCK method. Lock requests are used to lock a file for one user so that only that user can modify the file. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Lock Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method MKCOL requests per second | <p>The rate of HTTP requests made using the MKCOL method. Mkcol requests are used to create directories on the server. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Mkcol Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method MOVE requests per second | <p>The rate of HTTP requests made using the MOVE method. Move requests are used for moving files and directories. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Move Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method OPTIONS requests per second | <p>The rate of HTTP requests made using the OPTIONS method. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Options Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method POST requests per second | <p>The rate of HTTP requests made using the POST method. POST requests are generally used for forms or gateway requests. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Post Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method PROPFIND requests per second | <p>The rate of HTTP requests made using the PROPFIND method. Propfind requests retrieve property values on files and directories. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Propfind Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method PROPPATCH requests per second | <p>The rate of HTTP requests made using the PROPPATCH method. Proppatch requests set property values on files and directories. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Proppatch Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method PUT requests per second | <p>The rate of HTTP requests made using the PUT method. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Put Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method MS-SEARCH requests per second | <p>The rate of HTTP requests made using the MS-SEARCH method. Search requests are used to query the server to find resources that match a set of conditions provided by the client. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Search Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method TRACE requests per second | <p>The rate of HTTP requests made using the TRACE method. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Trace Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method UNLOCK requests per second | <p>The rate of HTTP requests made using the UNLOCK method. Unlock requests are used to remove locks from files. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Unlock Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method Total requests per second | <p>The rate of all HTTP requests received. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Total Method Requests/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Method Total Other requests per second | <p>Total Other Request Methods is the number of HTTP requests that are not OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, MOVE, COPY, MKCOL, PROPFIND, PROPPATCH, SEARCH, LOCK or UNLOCK methods (since service startup). Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Other Request Methods/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Locked errors per second | <p>The rate of errors due to requests that couldn't be satisfied by the server because the requested document was locked. These are generally reported as an HTTP 423 error code to the client. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Locked Errors/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Not Found errors per second | <p>The rate of errors due to requests that couldn't be satisfied by the server because the requested document could not be found. These are generally reported to the client with HTTP error code 404. Average per minute.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service(_Total)\Not Found Errors/Sec", 60]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: Files cache hits percentage | <p>The ratio of user-mode file cache hits to total cache requests (since service startup). Note: This value might be low if the Kernel URI cache hits percentage is high.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service Cache\File Cache Hits %"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: URIs cache hits percentage | <p>The ratio of user-mode URI Cache Hits to total cache requests (since service startup)</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service Cache\URI Cache Hits %"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: File cache misses | <p>The total number of unsuccessful lookups in the user-mode file cache since service startup.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service Cache\File Cache Misses"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: URI cache misses | <p>The total number of unsuccessful lookups in the user-mode URI cache since service startup.</p> | ZABBIX_ACTIVE | perf_counter_en["\Web Service Cache\URI Cache Misses"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: {#APPPOOL} Uptime | <p>The web application uptime period since the last restart.</p> | ZABBIX_ACTIVE | perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool Uptime"] |
+| IIS | IIS: AppPool {#APPPOOL} state | <p>The state of the application pool.</p> | ZABBIX_ACTIVE | perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool State"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: AppPool {#APPPOOL} recycles | <p>The number of times the application pool has been recycled since Windows Process Activation Service (WAS) started.</p> | ZABBIX_ACTIVE | perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Total Application Pool Recycles"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| IIS | IIS: AppPool {#APPPOOL} current queue size | <p>The number of requests in the queue.</p> | ZABBIX_ACTIVE | perf_counter_en["\HTTP Service Request Queues({#APPPOOL})\CurrentQueueSize"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
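These items are collected as active checks, so values appear only after the agent has picked up its active check list from the server. The underlying data can still be verified locally on the Windows host, for example (commands are illustrative):

```text
typeperf "\Web Service(_Total)\Total Method Requests/Sec" -sc 1
zabbix_agentd.exe -t "service_state[WAS]"
```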
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|IIS: The World Wide Web Publishing Service (W3SVC) is not running |<p>The World Wide Web Publishing Service (W3SVC) is not in running state. IIS cannot start.</p> |`{TEMPLATE_NAME:service_state[W3SVC].last()}<>0` |HIGH |<p>**Depends on**:</p><p>- IIS: Windows process Activation Service (WAS) is not the running</p> |
-|IIS: Windows process Activation Service (WAS) is not the running |<p>Windows Process Activation Service (WAS) is not in the running state. IIS cannot start.</p> |`{TEMPLATE_NAME:service_state[WAS].last()}<>0` |HIGH | |
-|IIS: Port {$IIS.PORT} is down |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service[{$IIS.SERVICE},,{$IIS.PORT}].last()}=0` |AVERAGE |<p>Manual close: YES</p><p>**Depends on**:</p><p>- IIS: The World Wide Web Publishing Service (W3SVC) is not running</p> |
-|IIS: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:perf_counter_en["\Web Service(_Total)\Service Uptime"].last()}<10m` |INFO |<p>Manual close: YES</p> |
-|IIS: {#APPPOOL} has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool Uptime"].last()}<10m` |INFO |<p>Manual close: YES</p> |
-|IIS: Application pool {#APPPOOL} is not in Running state |<p>-</p> |`{TEMPLATE_NAME:perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool State"].last()}<>3 and {$IIS.APPPOOL.MONITORED:"{#APPPOOL}"}=1` |HIGH | |
-|IIS: Application pool {#APPPOOL} has been recycled |<p>-</p> |`{TEMPLATE_NAME:perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Total Application Pool Recycles"].diff()}=1 and {$IIS.APPPOOL.MONITORED:"{#APPPOOL}"}=1` |INFO | |
-|IIS: Request queue of {#APPPOOL} is too large (over {$IIS.QUEUE.MAX.WARN}) |<p>-</p> |`{TEMPLATE_NAME:perf_counter_en["\HTTP Service Request Queues({#APPPOOL})\CurrentQueueSize"].min({$IIS.QUEUE.MAX.TIME})}>{$IIS.QUEUE.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- IIS: Application pool {#APPPOOL} is not in Running state</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|----------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------|
+| IIS: The World Wide Web Publishing Service (W3SVC) is not running | <p>The World Wide Web Publishing Service (W3SVC) is not in the running state. IIS cannot start.</p> | `{TEMPLATE_NAME:service_state[W3SVC].last()}<>0` | HIGH | <p>**Depends on**:</p><p>- IIS: Windows Process Activation Service (WAS) is not running</p> |
+| IIS: Windows Process Activation Service (WAS) is not running | <p>Windows Process Activation Service (WAS) is not in the running state. IIS cannot start.</p> | `{TEMPLATE_NAME:service_state[WAS].last()}<>0` | HIGH | |
+| IIS: Port {$IIS.PORT} is down | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service[{$IIS.SERVICE},,{$IIS.PORT}].last()}=0` | AVERAGE | <p>Manual close: YES</p><p>**Depends on**:</p><p>- IIS: The World Wide Web Publishing Service (W3SVC) is not running</p> |
+| IIS: has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:perf_counter_en["\Web Service(_Total)\Service Uptime"].last()}<10m` | INFO | <p>Manual close: YES</p> |
+| IIS: {#APPPOOL} has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool Uptime"].last()}<10m` | INFO | <p>Manual close: YES</p> |
+| IIS: Application pool {#APPPOOL} is not in Running state | <p>-</p> | `{TEMPLATE_NAME:perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Current Application Pool State"].last()}<>3 and {$IIS.APPPOOL.MONITORED:"{#APPPOOL}"}=1` | HIGH | |
+| IIS: Application pool {#APPPOOL} has been recycled | <p>-</p> | `{TEMPLATE_NAME:perf_counter_en["\APP_POOL_WAS({#APPPOOL})\Total Application Pool Recycles"].diff()}=1 and {$IIS.APPPOOL.MONITORED:"{#APPPOOL}"}=1` | INFO | |
+| IIS: Request queue of {#APPPOOL} is too large (over {$IIS.QUEUE.MAX.WARN}) | <p>-</p> | `{TEMPLATE_NAME:perf_counter_en["\HTTP Service Request Queues({#APPPOOL})\CurrentQueueSize"].min({$IIS.QUEUE.MAX.TIME})}>{$IIS.QUEUE.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- IIS: Application pool {#APPPOOL} is not in Running state</p> |
## Feedback
diff --git a/templates/app/kafka_jmx/README.md b/templates/app/kafka_jmx/README.md
index 6d0a67cfaeb..326b0188712 100644
--- a/templates/app/kafka_jmx/README.md
+++ b/templates/app/kafka_jmx/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
Official JMX Template for Apache Kafka.
@@ -14,7 +14,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/jmx) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/jmx) for basic instructions.
Metrics are collected by JMX.
@@ -28,14 +28,14 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$KAFKA.NET_PROC_AVG_IDLE.MIN.WARN} |<p>The minimum Network processor average idle percent for trigger expression.</p> |`30` |
-|{$KAFKA.PASSWORD} |<p>-</p> |`zabbix` |
-|{$KAFKA.REQUEST_HANDLER_AVG_IDLE.MIN.WARN} |<p>The minimum Request handler average idle percent for trigger expression.</p> |`30` |
-|{$KAFKA.TOPIC.MATCHES} |<p>Filter of discoverable topics</p> |`.*` |
-|{$KAFKA.TOPIC.NOT_MATCHES} |<p>Filter to exclude discovered topics</p> |`__consumer_offsets` |
-|{$KAFKA.USER} |<p>-</p> |`zabbix` |
+| Name | Description | Default |
+|--------------------------------------------|-----------------------------------------------------------------------------------|----------------------|
+| {$KAFKA.NET_PROC_AVG_IDLE.MIN.WARN} | <p>The minimum Network processor average idle percent for trigger expression.</p> | `30` |
+| {$KAFKA.PASSWORD} | <p>-</p> | `zabbix` |
+| {$KAFKA.REQUEST_HANDLER_AVG_IDLE.MIN.WARN} | <p>The minimum Request handler average idle percent for trigger expression.</p> | `30` |
+| {$KAFKA.TOPIC.MATCHES} | <p>Filter of discoverable topics</p> | `.*` |
+| {$KAFKA.TOPIC.NOT_MATCHES} | <p>Filter to exclude discovered topics</p> | `__consumer_offsets` |
+| {$KAFKA.USER} | <p>-</p> | `zabbix` |
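The two topic filter macros feed the discovery-rule filters shown below, so the set of discovered topics can be narrowed per host without editing the template. A minimal sketch of host-level overrides (both regular expressions are illustrative examples, not defaults):

```text
# Host-level user macro overrides
{$KAFKA.TOPIC.MATCHES}     = ^(orders|payments)\..*        # discover only these topic families
{$KAFKA.TOPIC.NOT_MATCHES} = __consumer_offsets|_schemas   # extend the default exclusion
```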
## Template links
@@ -43,99 +43,99 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Topic Metrics (write) |<p>-</p> |JMX |jmx.discovery[beans,"kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=*"]<p>**Filter**:</p>AND <p>- A: {#JMXTOPIC} MATCHES_REGEX `{$KAFKA.TOPIC.MATCHES}`</p><p>- B: {#JMXTOPIC} NOT_MATCHES_REGEX `{$KAFKA.TOPIC.NOT_MATCHES}`</p> |
-|Topic Metrics (read) |<p>-</p> |JMX |jmx.discovery[beans,"kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec,topic=*"]<p>**Filter**:</p>AND <p>- A: {#JMXTOPIC} MATCHES_REGEX `{$KAFKA.TOPIC.MATCHES}`</p><p>- B: {#JMXTOPIC} NOT_MATCHES_REGEX `{$KAFKA.TOPIC.NOT_MATCHES}`</p> |
-|Topic Metrics (errors) |<p>-</p> |JMX |jmx.discovery[beans,"kafka.server:type=BrokerTopicMetrics,name=BytesRejectedPerSec,topic=*"]<p>**Filter**:</p>AND <p>- A: {#JMXTOPIC} MATCHES_REGEX `{$KAFKA.TOPIC.MATCHES}`</p><p>- B: {#JMXTOPIC} NOT_MATCHES_REGEX `{$KAFKA.TOPIC.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|------------------------|-------------|------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Topic Metrics (write) | <p>-</p> | JMX | jmx.discovery[beans,"kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=*"]<p>**Filter**:</p>AND <p>- A: {#JMXTOPIC} MATCHES_REGEX `{$KAFKA.TOPIC.MATCHES}`</p><p>- B: {#JMXTOPIC} NOT_MATCHES_REGEX `{$KAFKA.TOPIC.NOT_MATCHES}`</p> |
+| Topic Metrics (read) | <p>-</p> | JMX | jmx.discovery[beans,"kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec,topic=*"]<p>**Filter**:</p>AND <p>- A: {#JMXTOPIC} MATCHES_REGEX `{$KAFKA.TOPIC.MATCHES}`</p><p>- B: {#JMXTOPIC} NOT_MATCHES_REGEX `{$KAFKA.TOPIC.NOT_MATCHES}`</p> |
+| Topic Metrics (errors) | <p>-</p> | JMX | jmx.discovery[beans,"kafka.server:type=BrokerTopicMetrics,name=BytesRejectedPerSec,topic=*"]<p>**Filter**:</p>AND <p>- A: {#JMXTOPIC} MATCHES_REGEX `{$KAFKA.TOPIC.MATCHES}`</p><p>- B: {#JMXTOPIC} NOT_MATCHES_REGEX `{$KAFKA.TOPIC.NOT_MATCHES}`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Kafka |Kafka: Leader election per second |<p>Number of leader elections per second.</p> |JMX |jmx["kafka.controller:type=ControllerStats,name=LeaderElectionRateAndTimeMs","Count"] |
-|Kafka |Kafka: Unclean leader election per second |<p>Number of “unclean” elections per second.</p> |JMX |jmx["kafka.controller:type=ControllerStats,name=UncleanLeaderElectionsPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka: Controller state on broker |<p>One indicates that the broker is the controller for the cluster.</p> |JMX |jmx["kafka.controller:type=KafkaController,name=ActiveControllerCount","Value"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Kafka |Kafka: Ineligible pending replica deletes |<p>The number of ineligible pending replica deletes.</p> |JMX |jmx["kafka.controller:type=KafkaController,name=ReplicasIneligibleToDeleteCount","Value"] |
-|Kafka |Kafka: Pending replica deletes |<p>The number of pending replica deletes.</p> |JMX |jmx["kafka.controller:type=KafkaController,name=ReplicasToDeleteCount","Value"] |
-|Kafka |Kafka: Ineligible pending topic deletes |<p>The number of ineligible pending topic deletes.</p> |JMX |jmx["kafka.controller:type=KafkaController,name=TopicsIneligibleToDeleteCount","Value"] |
-|Kafka |Kafka: Pending topic deletes |<p>The number of pending topic deletes.</p> |JMX |jmx["kafka.controller:type=KafkaController,name=TopicsToDeleteCount","Value"] |
-|Kafka |Kafka: Offline log directory count |<p>The number of offline log directories (for example, after a hardware failure).</p> |JMX |jmx["kafka.log:type=LogManager,name=OfflineLogDirectoryCount","Value"] |
-|Kafka |Kafka: Offline partitions count |<p>Number of partitions that don't have an active leader.</p> |JMX |jmx["kafka.controller:type=KafkaController,name=OfflinePartitionsCount","Value"] |
-|Kafka |Kafka: Bytes out per second |<p>The rate at which data is fetched and read from the broker by consumers.</p> |JMX |jmx["kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka: Bytes in per second |<p>The rate at which data sent from producers is consumed by the broker.</p> |JMX |jmx["kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka: Messages in per second |<p>The rate at which individual messages are consumed by the broker.</p> |JMX |jmx["kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka: Bytes rejected per second |<p>The rate at which bytes rejected per second by the broker.</p> |JMX |jmx["kafka.server:type=BrokerTopicMetrics,name=BytesRejectedPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka: Client fetch request failed per second |<p>Number of client fetch request failures per second.</p> |JMX |jmx["kafka.server:type=BrokerTopicMetrics,name=FailedFetchRequestsPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka: Produce requests failed per second |<p>Number of failed produce requests per second.</p> |JMX |jmx["kafka.server:type=BrokerTopicMetrics,name=FailedProduceRequestsPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka: Request handler average idle percent |<p>Indicates the percentage of time that the request handler (IO) threads are not in use.</p> |JMX |jmx["kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent","OneMinuteRate"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `100`</p> |
-|Kafka |Kafka: Fetch-Consumer response send time, mean |<p>Average time taken, in milliseconds, to send the response.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=FetchConsumer","Mean"] |
-|Kafka |Kafka: Fetch-Consumer response send time, p95 |<p>The time taken, in milliseconds, to send the response for 95th percentile.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=FetchConsumer","95thPercentile"] |
-|Kafka |Kafka: Fetch-Consumer response send time, p99 |<p>The time taken, in milliseconds, to send the response for 99th percentile.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=FetchConsumer","99thPercentile"] |
-|Kafka |Kafka: Fetch-Follower response send time, mean |<p>Average time taken, in milliseconds, to send the response.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=FetchFollower","Mean"] |
-|Kafka |Kafka: Fetch-Follower response send time, p95 |<p>The time taken, in milliseconds, to send the response for 95th percentile.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=FetchFollower","95thPercentile"] |
-|Kafka |Kafka: Fetch-Follower response send time, p99 |<p>The time taken, in milliseconds, to send the response for 99th percentile.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=FetchFollower","99thPercentile"] |
-|Kafka |Kafka: Produce response send time, mean |<p>Average time taken, in milliseconds, to send the response.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=Produce","Mean"] |
-|Kafka |Kafka: Produce response send time, p95 |<p>The time taken, in milliseconds, to send the response for 95th percentile.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=Produce","95thPercentile"] |
-|Kafka |Kafka: Produce response send time, p99 |<p>The time taken, in milliseconds, to send the response for 99th percentile.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=Produce","99thPercentile"] |
-|Kafka |Kafka: Fetch-Consumer request total time, mean |<p>Average time in ms to serve the Fetch-Consumer request.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=FetchConsumer","Mean"] |
-|Kafka |Kafka: Fetch-Consumer request total time, p95 |<p>Time in ms to serve the Fetch-Consumer request for 95th percentile.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=FetchConsumer","95thPercentile"] |
-|Kafka |Kafka: Fetch-Consumer request total time, p99 |<p>Time in ms to serve the specified Fetch-Consumer for 99th percentile.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=FetchConsumer","99thPercentile"] |
-|Kafka |Kafka: Fetch-Follower request total time, mean |<p>Average time in ms to serve the Fetch-Follower request.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=FetchFollower","Mean"] |
-|Kafka |Kafka: Fetch-Follower request total time, p95 |<p>Time in ms to serve the Fetch-Follower request for 95th percentile.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=FetchFollower","95thPercentile"] |
-|Kafka |Kafka: Fetch-Follower request total time, p99 |<p>Time in ms to serve the Fetch-Follower request for 99th percentile.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=FetchFollower","99thPercentile"] |
-|Kafka |Kafka: Produce request total time, mean |<p>Average time in ms to serve the Produce request.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Produce","Mean"] |
-|Kafka |Kafka: Produce request total time, p95 |<p>Time in ms to serve the Produce requests for 95th percentile.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Produce","95thPercentile"] |
-|Kafka |Kafka: Produce request total time, p99 |<p>Time in ms to serve the Produce requests for 99th percentile.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Produce","99thPercentile"] |
-|Kafka |Kafka: Fetch-Consumer request total time, mean |<p>Average time for a request to update metadata.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=UpdateMetadata","Mean"] |
-|Kafka |Kafka: UpdateMetadata request total time, p95 |<p>Time for update metadata requests for 95th percentile.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=UpdateMetadata","95thPercentile"] |
-|Kafka |Kafka: UpdateMetadata request total time, p99 |<p>Time for update metadata requests for 99th percentile.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=UpdateMetadata","99thPercentile"] |
-|Kafka |Kafka: Temporary memory size in bytes (Fetch), max |<p>The maximum of temporary memory used for converting message formats and decompressing messages.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request=Fetch","Max"] |
-|Kafka |Kafka: Temporary memory size in bytes (Fetch), avg |<p>The amount of temporary memory used for converting message formats and decompressing messages.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request=Fetch","Mean"] |
-|Kafka |Kafka: Temporary memory size in bytes (Fetch), min |<p>The minimum of temporary memory used for converting message formats and decompressing messages.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request=Fetch","Mean"] |
-|Kafka |Kafka: Temporary memory size in bytes (Produce), max |<p>The maximum of temporary memory used for converting message formats and decompressing messages.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request=Produce","Max"] |
-|Kafka |Kafka: Temporary memory size in bytes (Produce), avg |<p>The amount of temporary memory used for converting message formats and decompressing messages.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request=Produce","Mean"] |
-|Kafka |Kafka: Temporary memory size in bytes (Produce), min |<p>The minimum of temporary memory used for converting message formats and decompressing messages.</p> |JMX |jmx["kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request=Produce","Min"] |
-|Kafka |Kafka: Network processor average idle percent |<p>The average percentage of time that the network processors are idle.</p> |JMX |jmx["kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent","Value"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `100`</p> |
-|Kafka |Kafka: Requests in producer purgatory |<p>Number of requests waiting in producer purgatory.</p> |JMX |jmx["kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Fetch","Value"] |
-|Kafka |Kafka: Requests in fetch purgatory |<p>Number of requests waiting in fetch purgatory.</p> |JMX |jmx["kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Produce","Value"] |
-|Kafka |Kafka: Replication maximum lag |<p>The maximum lag between the time that messages are received by the leader replica and by the follower replicas.</p> |JMX |jmx["kafka.server:type=ReplicaFetcherManager,name=MaxLag,clientId=Replica","Value"] |
-|Kafka |Kafka: Under minimum ISR partition count |<p>The number of partitions under the minimum In-Sync Replica (ISR) count.</p> |JMX |jmx["kafka.server:type=ReplicaManager,name=UnderMinIsrPartitionCount","Value"] |
-|Kafka |Kafka: Under replicated partitions |<p>The number of partitions that have not been fully replicated in the follower replicas (the number of non-reassigning replicas - the number of ISR > 0).</p> |JMX |jmx["kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions","Value"] |
-|Kafka |Kafka: ISR expands per second |<p>The rate at which the number of ISRs in the broker increases.</p> |JMX |jmx["kafka.server:type=ReplicaManager,name=IsrExpandsPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka: ISR shrink per second |<p>Rate of replicas leaving the ISR pool.</p> |JMX |jmx["kafka.server:type=ReplicaManager,name=IsrShrinksPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka: Leader count |<p>The number of replicas for which this broker is the leader.</p> |JMX |jmx["kafka.server:type=ReplicaManager,name=LeaderCount","Value"] |
-|Kafka |Kafka: Partition count |<p>The number of partitions in the broker.</p> |JMX |jmx["kafka.server:type=ReplicaManager,name=PartitionCount","Value"] |
-|Kafka |Kafka: Number of reassigning partitions |<p>The number of reassigning leader partitions on a broker.</p> |JMX |jmx["kafka.server:type=ReplicaManager,name=ReassigningPartitions","Value"] |
-|Kafka |Kafka: Request queue size |<p>The size of the delay queue.</p> |JMX |jmx["kafka.server:type=Request","queue-size"] |
-|Kafka |Kafka: Version |<p>Current version of brocker.</p> |JMX |jmx["kafka.server:type=app-info","version"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Kafka |Kafka: Uptime |<p>Service uptime in seconds.</p> |JMX |jmx["kafka.server:type=app-info","start-time-ms"]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return (Math.floor((Date.now()-Number(value))/1000))`</p> |
-|Kafka |Kafka: ZooKeeper client request latency |<p>Latency in millseconds for ZooKeeper requests from broker.</p> |JMX |jmx["kafka.server:type=ZooKeeperClientMetrics,name=ZooKeeperRequestLatencyMs","Count"] |
-|Kafka |Kafka: ZooKeeper connection status |<p>Connection status of broker's ZooKeeper session.</p> |JMX |jmx["kafka.server:type=SessionExpireListener,name=SessionState","Value"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Kafka |Kafka: ZooKeeper disconnect rate |<p>ZooKeeper client disconnect per second.</p> |JMX |jmx["kafka.server:type=SessionExpireListener,name=ZooKeeperDisconnectsPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka: ZooKeeper session expiration rate |<p>ZooKeeper client session expiration per second.</p> |JMX |jmx["kafka.server:type=SessionExpireListener,name=ZooKeeperExpiresPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka: ZooKeeper readonly rate |<p>ZooKeeper client readonly per second.</p> |JMX |jmx["kafka.server:type=SessionExpireListener,name=ZooKeeperReadOnlyConnectsPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka: ZooKeeper sync rate |<p>ZooKeeper client sync per second.</p> |JMX |jmx["kafka.server:type=SessionExpireListener,name=ZooKeeperSyncConnectsPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka {#JMXTOPIC}: Messages in per second |<p>The rate at which individual messages are consumed by topic.</p> |JMX |jmx["kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic={#JMXTOPIC}","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka {#JMXTOPIC}: Bytes in per second |<p>The rate at which data sent from producers is consumed by topic.</p> |JMX |jmx["kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic={#JMXTOPIC}","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka {#JMXTOPIC}: Bytes out per second |<p>The rate at which data is fetched and read from the broker by consumers (by topic).</p> |JMX |jmx["kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec,topic={#JMXTOPIC}","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Kafka |Kafka {#JMXTOPIC}: Bytes rejected per second |<p>Rejected bytes rate by topic.</p> |JMX |jmx["kafka.server:type=BrokerTopicMetrics,name=BytesRejectedPerSec,topic={#JMXTOPIC}","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Group | Name | Description | Type | Key and additional info |
+|-------|------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Kafka | Kafka: Leader election per second | <p>Number of leader elections per second.</p> | JMX | jmx["kafka.controller:type=ControllerStats,name=LeaderElectionRateAndTimeMs","Count"] |
+| Kafka | Kafka: Unclean leader election per second | <p>Number of “unclean” elections per second.</p> | JMX | jmx["kafka.controller:type=ControllerStats,name=UncleanLeaderElectionsPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka: Controller state on broker | <p>One indicates that the broker is the controller for the cluster.</p> | JMX | jmx["kafka.controller:type=KafkaController,name=ActiveControllerCount","Value"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Kafka | Kafka: Ineligible pending replica deletes | <p>The number of ineligible pending replica deletes.</p> | JMX | jmx["kafka.controller:type=KafkaController,name=ReplicasIneligibleToDeleteCount","Value"] |
+| Kafka | Kafka: Pending replica deletes | <p>The number of pending replica deletes.</p> | JMX | jmx["kafka.controller:type=KafkaController,name=ReplicasToDeleteCount","Value"] |
+| Kafka | Kafka: Ineligible pending topic deletes | <p>The number of ineligible pending topic deletes.</p> | JMX | jmx["kafka.controller:type=KafkaController,name=TopicsIneligibleToDeleteCount","Value"] |
+| Kafka | Kafka: Pending topic deletes | <p>The number of pending topic deletes.</p> | JMX | jmx["kafka.controller:type=KafkaController,name=TopicsToDeleteCount","Value"] |
+| Kafka | Kafka: Offline log directory count | <p>The number of offline log directories (for example, after a hardware failure).</p> | JMX | jmx["kafka.log:type=LogManager,name=OfflineLogDirectoryCount","Value"] |
+| Kafka | Kafka: Offline partitions count | <p>Number of partitions that don't have an active leader.</p> | JMX | jmx["kafka.controller:type=KafkaController,name=OfflinePartitionsCount","Value"] |
+| Kafka | Kafka: Bytes out per second | <p>The rate at which data is fetched and read from the broker by consumers.</p> | JMX | jmx["kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka: Bytes in per second | <p>The rate at which data sent from producers is consumed by the broker.</p> | JMX | jmx["kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka: Messages in per second | <p>The rate at which individual messages are consumed by the broker.</p> | JMX | jmx["kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka: Bytes rejected per second | <p>The rate at which bytes are rejected by the broker.</p> | JMX | jmx["kafka.server:type=BrokerTopicMetrics,name=BytesRejectedPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka: Client fetch request failed per second | <p>Number of client fetch request failures per second.</p> | JMX | jmx["kafka.server:type=BrokerTopicMetrics,name=FailedFetchRequestsPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka: Produce requests failed per second | <p>Number of failed produce requests per second.</p> | JMX | jmx["kafka.server:type=BrokerTopicMetrics,name=FailedProduceRequestsPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka: Request handler average idle percent | <p>Indicates the percentage of time that the request handler (IO) threads are not in use.</p> | JMX | jmx["kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent","OneMinuteRate"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `100`</p> |
+| Kafka | Kafka: Fetch-Consumer response send time, mean | <p>Average time taken, in milliseconds, to send the response.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=FetchConsumer","Mean"] |
+| Kafka | Kafka: Fetch-Consumer response send time, p95 | <p>The time taken, in milliseconds, to send the response for 95th percentile.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=FetchConsumer","95thPercentile"] |
+| Kafka | Kafka: Fetch-Consumer response send time, p99 | <p>The time taken, in milliseconds, to send the response for 99th percentile.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=FetchConsumer","99thPercentile"] |
+| Kafka | Kafka: Fetch-Follower response send time, mean | <p>Average time taken, in milliseconds, to send the response.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=FetchFollower","Mean"] |
+| Kafka | Kafka: Fetch-Follower response send time, p95 | <p>The time taken, in milliseconds, to send the response for 95th percentile.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=FetchFollower","95thPercentile"] |
+| Kafka | Kafka: Fetch-Follower response send time, p99 | <p>The time taken, in milliseconds, to send the response for 99th percentile.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=FetchFollower","99thPercentile"] |
+| Kafka | Kafka: Produce response send time, mean | <p>Average time taken, in milliseconds, to send the response.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=Produce","Mean"] |
+| Kafka | Kafka: Produce response send time, p95 | <p>The time taken, in milliseconds, to send the response for 95th percentile.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=Produce","95thPercentile"] |
+| Kafka | Kafka: Produce response send time, p99 | <p>The time taken, in milliseconds, to send the response for 99th percentile.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request=Produce","99thPercentile"] |
+| Kafka | Kafka: Fetch-Consumer request total time, mean | <p>Average time in ms to serve the Fetch-Consumer request.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=FetchConsumer","Mean"] |
+| Kafka | Kafka: Fetch-Consumer request total time, p95 | <p>Time in ms to serve the Fetch-Consumer request for 95th percentile.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=FetchConsumer","95thPercentile"] |
+| Kafka | Kafka: Fetch-Consumer request total time, p99 | <p>Time in ms to serve the Fetch-Consumer request for 99th percentile.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=FetchConsumer","99thPercentile"] |
+| Kafka | Kafka: Fetch-Follower request total time, mean | <p>Average time in ms to serve the Fetch-Follower request.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=FetchFollower","Mean"] |
+| Kafka | Kafka: Fetch-Follower request total time, p95 | <p>Time in ms to serve the Fetch-Follower request for 95th percentile.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=FetchFollower","95thPercentile"] |
+| Kafka | Kafka: Fetch-Follower request total time, p99 | <p>Time in ms to serve the Fetch-Follower request for 99th percentile.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=FetchFollower","99thPercentile"] |
+| Kafka | Kafka: Produce request total time, mean | <p>Average time in ms to serve the Produce request.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Produce","Mean"] |
+| Kafka | Kafka: Produce request total time, p95 | <p>Time in ms to serve the Produce requests for 95th percentile.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Produce","95thPercentile"] |
+| Kafka | Kafka: Produce request total time, p99 | <p>Time in ms to serve the Produce requests for 99th percentile.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Produce","99thPercentile"] |
+| Kafka | Kafka: UpdateMetadata request total time, mean | <p>Average time for a request to update metadata.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=UpdateMetadata","Mean"] |
+| Kafka | Kafka: UpdateMetadata request total time, p95 | <p>Time for update metadata requests for 95th percentile.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=UpdateMetadata","95thPercentile"] |
+| Kafka | Kafka: UpdateMetadata request total time, p99 | <p>Time for update metadata requests for 99th percentile.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TotalTimeMs,request=UpdateMetadata","99thPercentile"] |
+| Kafka | Kafka: Temporary memory size in bytes (Fetch), max | <p>The maximum of temporary memory used for converting message formats and decompressing messages.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request=Fetch","Max"] |
+| Kafka | Kafka: Temporary memory size in bytes (Fetch), avg | <p>The amount of temporary memory used for converting message formats and decompressing messages.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request=Fetch","Mean"] |
+| Kafka | Kafka: Temporary memory size in bytes (Fetch), min | <p>The minimum of temporary memory used for converting message formats and decompressing messages.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request=Fetch","Mean"] |
+| Kafka | Kafka: Temporary memory size in bytes (Produce), max | <p>The maximum of temporary memory used for converting message formats and decompressing messages.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request=Produce","Max"] |
+| Kafka | Kafka: Temporary memory size in bytes (Produce), avg | <p>The amount of temporary memory used for converting message formats and decompressing messages.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request=Produce","Mean"] |
+| Kafka | Kafka: Temporary memory size in bytes (Produce), min | <p>The minimum of temporary memory used for converting message formats and decompressing messages.</p> | JMX | jmx["kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request=Produce","Min"] |
+| Kafka | Kafka: Network processor average idle percent | <p>The average percentage of time that the network processors are idle.</p> | JMX | jmx["kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent","Value"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `100`</p> |
+| Kafka | Kafka: Requests in producer purgatory | <p>Number of requests waiting in producer purgatory.</p> | JMX | jmx["kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Fetch","Value"] |
+| Kafka | Kafka: Requests in fetch purgatory | <p>Number of requests waiting in fetch purgatory.</p> | JMX | jmx["kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Produce","Value"] |
+| Kafka | Kafka: Replication maximum lag | <p>The maximum lag between the time that messages are received by the leader replica and by the follower replicas.</p> | JMX | jmx["kafka.server:type=ReplicaFetcherManager,name=MaxLag,clientId=Replica","Value"] |
+| Kafka | Kafka: Under minimum ISR partition count | <p>The number of partitions under the minimum In-Sync Replica (ISR) count.</p> | JMX | jmx["kafka.server:type=ReplicaManager,name=UnderMinIsrPartitionCount","Value"] |
+| Kafka | Kafka: Under replicated partitions | <p>The number of partitions that have not been fully replicated in the follower replicas (the number of non-reassigning replicas - the number of ISR > 0).</p> | JMX | jmx["kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions","Value"] |
+| Kafka | Kafka: ISR expands per second | <p>The rate at which the number of ISRs in the broker increases.</p> | JMX | jmx["kafka.server:type=ReplicaManager,name=IsrExpandsPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka: ISR shrink per second | <p>Rate of replicas leaving the ISR pool.</p> | JMX | jmx["kafka.server:type=ReplicaManager,name=IsrShrinksPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka: Leader count | <p>The number of replicas for which this broker is the leader.</p> | JMX | jmx["kafka.server:type=ReplicaManager,name=LeaderCount","Value"] |
+| Kafka | Kafka: Partition count | <p>The number of partitions in the broker.</p> | JMX | jmx["kafka.server:type=ReplicaManager,name=PartitionCount","Value"] |
+| Kafka | Kafka: Number of reassigning partitions | <p>The number of reassigning leader partitions on a broker.</p> | JMX | jmx["kafka.server:type=ReplicaManager,name=ReassigningPartitions","Value"] |
+| Kafka | Kafka: Request queue size | <p>The size of the delay queue.</p> | JMX | jmx["kafka.server:type=Request","queue-size"] |
+| Kafka | Kafka: Version | <p>Current version of the broker.</p> | JMX | jmx["kafka.server:type=app-info","version"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Kafka | Kafka: Uptime | <p>Service uptime in seconds.</p> | JMX | jmx["kafka.server:type=app-info","start-time-ms"]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return (Math.floor((Date.now()-Number(value))/1000))`</p> |
+| Kafka | Kafka: ZooKeeper client request latency | <p>Latency in milliseconds for ZooKeeper requests from the broker.</p> | JMX | jmx["kafka.server:type=ZooKeeperClientMetrics,name=ZooKeeperRequestLatencyMs","Count"] |
+| Kafka | Kafka: ZooKeeper connection status | <p>Connection status of broker's ZooKeeper session.</p> | JMX | jmx["kafka.server:type=SessionExpireListener,name=SessionState","Value"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Kafka | Kafka: ZooKeeper disconnect rate | <p>ZooKeeper client disconnects per second.</p> | JMX | jmx["kafka.server:type=SessionExpireListener,name=ZooKeeperDisconnectsPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka: ZooKeeper session expiration rate | <p>ZooKeeper client session expiration per second.</p> | JMX | jmx["kafka.server:type=SessionExpireListener,name=ZooKeeperExpiresPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka: ZooKeeper readonly rate | <p>ZooKeeper client read-only connects per second.</p> | JMX | jmx["kafka.server:type=SessionExpireListener,name=ZooKeeperReadOnlyConnectsPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka: ZooKeeper sync rate | <p>ZooKeeper client sync connects per second.</p> | JMX | jmx["kafka.server:type=SessionExpireListener,name=ZooKeeperSyncConnectsPerSec","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka {#JMXTOPIC}: Messages in per second | <p>The rate at which individual messages are consumed by topic.</p> | JMX | jmx["kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic={#JMXTOPIC}","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka {#JMXTOPIC}: Bytes in per second | <p>The rate at which data sent from producers is consumed by topic.</p> | JMX | jmx["kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic={#JMXTOPIC}","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka {#JMXTOPIC}: Bytes out per second | <p>The rate at which data is fetched and read from the broker by consumers (by topic).</p> | JMX | jmx["kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec,topic={#JMXTOPIC}","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Kafka | Kafka {#JMXTOPIC}: Bytes rejected per second | <p>Rejected bytes rate by topic.</p> | JMX | jmx["kafka.server:type=BrokerTopicMetrics,name=BytesRejectedPerSec,topic={#JMXTOPIC}","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Kafka: Unclean leader election detected |<p>Unclean leader elections occur when there is no qualified partition leader among Kafka brokers. If Kafka is configured to allow an unclean leader election, a leader is chosen from the out-of-sync replicas, and any messages that were not synced prior to the loss of the former leader are lost forever. Essentially, unclean leader elections sacrifice consistency for availability.</p> |`{TEMPLATE_NAME:jmx["kafka.controller:type=ControllerStats,name=UncleanLeaderElectionsPerSec","Count"].last()}>0` |AVERAGE | |
-|Kafka: There are offline log directories |<p>The offline log directory count metric indicate the number of log directories which are offline (due to an hardware failure for example) so that the broker cannot store incoming messages anymore.</p> |`{TEMPLATE_NAME:jmx["kafka.log:type=LogManager,name=OfflineLogDirectoryCount","Value"].last()} > 0` |WARNING | |
-|Kafka: One or more partitions have no leader |<p>Any partition without an active leader will be completely inaccessible, and both consumers and producers of that partition will be blocked until a leader becomes available.</p> |`{TEMPLATE_NAME:jmx["kafka.controller:type=KafkaController,name=OfflinePartitionsCount","Value"].last()} > 0` |WARNING | |
-|Kafka: Request handler average idle percent is too low (under {$KAFKA.REQUEST_HANDLER_AVG_IDLE.MIN.WARN} for 15m) |<p>The request handler idle ratio metric indicates the percentage of time the request handlers are not in use. The lower this number, the more loaded the broker is.</p> |`{TEMPLATE_NAME:jmx["kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent","OneMinuteRate"].max(15m)}<{$KAFKA.REQUEST_HANDLER_AVG_IDLE.MIN.WARN}` |AVERAGE | |
-|Kafka: Network processor average idle percent is too low (under {$KAFKA.NET_PROC_AVG_IDLE.MIN.WARN} for 15m) |<p>The network processor idle ratio metric indicates the percentage of time the network processor are not in use. The lower this number, the more loaded the broker is.</p> |`{TEMPLATE_NAME:jmx["kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent","Value"].max(15m)}<{$KAFKA.NET_PROC_AVG_IDLE.MIN.WARN}` |AVERAGE | |
-|Kafka: Failed to fetch info data (or no data for 15m) |<p>Zabbix has not received data for items for the last 15 minutes</p> |`{TEMPLATE_NAME:jmx["kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent","Value"].nodata(15m)}=1` |WARNING | |
-|Kafka: There are partitions under the min ISR |<p>The Under min ISR partitions metric displays the number of partitions, where the number of In-Sync Replicas (ISR) is less than the minimum number of in-sync replicas specified. The two most common causes of under-min ISR partitions are that one or more brokers is unresponsive, or the cluster is experiencing performance issues and one or more brokers are falling behind.</p> |`{TEMPLATE_NAME:jmx["kafka.server:type=ReplicaManager,name=UnderMinIsrPartitionCount","Value"].last()}>0` |AVERAGE | |
-|Kafka: There are under replicated partitions |<p>The Under replicated partitions metric displays the number of partitions that do not have enough replicas to meet the desired replication factor. A partition will also be considered under-replicated if the correct number of replicas exist, but one or more of the replicas have fallen significantly behind the partition leader. The two most common causes of under-replicated partitions are that one or more brokers is unresponsive, or the cluster is experiencing performance issues and one or more brokers have fallen behind.</p> |`{TEMPLATE_NAME:jmx["kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions","Value"].last()}>0` |AVERAGE | |
-|Kafka: Version has changed (new version: {ITEM.VALUE}) |<p>Kafka version has changed. Ack to close.</p> |`{TEMPLATE_NAME:jmx["kafka.server:type=app-info","version"].diff()}=1 and {TEMPLATE_NAME:jmx["kafka.server:type=app-info","version"].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Kafka: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:jmx["kafka.server:type=app-info","start-time-ms"].last()}<10m` |INFO |<p>Manual close: YES</p> |
-|Kafka: Broker is not connected to ZooKeeper |<p>-</p> |`{TEMPLATE_NAME:jmx["kafka.server:type=SessionExpireListener,name=SessionState","Value"].regexp("CONNECTED")}=0` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------|
+| Kafka: Unclean leader election detected | <p>Unclean leader elections occur when there is no qualified partition leader among Kafka brokers. If Kafka is configured to allow an unclean leader election, a leader is chosen from the out-of-sync replicas, and any messages that were not synced prior to the loss of the former leader are lost forever. Essentially, unclean leader elections sacrifice consistency for availability.</p> | `{TEMPLATE_NAME:jmx["kafka.controller:type=ControllerStats,name=UncleanLeaderElectionsPerSec","Count"].last()}>0` | AVERAGE | |
+| Kafka: There are offline log directories | <p>The offline log directory count metric indicates the number of log directories that are offline (due to a hardware failure, for example), so the broker can no longer store incoming messages in them.</p> | `{TEMPLATE_NAME:jmx["kafka.log:type=LogManager,name=OfflineLogDirectoryCount","Value"].last()} > 0` | WARNING | |
+| Kafka: One or more partitions have no leader | <p>Any partition without an active leader will be completely inaccessible, and both consumers and producers of that partition will be blocked until a leader becomes available.</p> | `{TEMPLATE_NAME:jmx["kafka.controller:type=KafkaController,name=OfflinePartitionsCount","Value"].last()} > 0` | WARNING | |
+| Kafka: Request handler average idle percent is too low (under {$KAFKA.REQUEST_HANDLER_AVG_IDLE.MIN.WARN} for 15m) | <p>The request handler idle ratio metric indicates the percentage of time the request handlers are not in use. The lower this number, the more loaded the broker is.</p> | `{TEMPLATE_NAME:jmx["kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent","OneMinuteRate"].max(15m)}<{$KAFKA.REQUEST_HANDLER_AVG_IDLE.MIN.WARN}` | AVERAGE | |
+| Kafka: Network processor average idle percent is too low (under {$KAFKA.NET_PROC_AVG_IDLE.MIN.WARN} for 15m) | <p>The network processor idle ratio metric indicates the percentage of time the network processors are not in use. The lower this number, the more loaded the broker is.</p> | `{TEMPLATE_NAME:jmx["kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent","Value"].max(15m)}<{$KAFKA.NET_PROC_AVG_IDLE.MIN.WARN}` | AVERAGE | |
+| Kafka: Failed to fetch info data (or no data for 15m) | <p>Zabbix has not received data for items for the last 15 minutes</p> | `{TEMPLATE_NAME:jmx["kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent","Value"].nodata(15m)}=1` | WARNING | |
+| Kafka: There are partitions under the min ISR | <p>The Under min ISR partitions metric displays the number of partitions, where the number of In-Sync Replicas (ISR) is less than the minimum number of in-sync replicas specified. The two most common causes of under-min ISR partitions are that one or more brokers is unresponsive, or the cluster is experiencing performance issues and one or more brokers are falling behind.</p> | `{TEMPLATE_NAME:jmx["kafka.server:type=ReplicaManager,name=UnderMinIsrPartitionCount","Value"].last()}>0` | AVERAGE | |
+| Kafka: There are under replicated partitions | <p>The Under replicated partitions metric displays the number of partitions that do not have enough replicas to meet the desired replication factor. A partition will also be considered under-replicated if the correct number of replicas exist, but one or more of the replicas have fallen significantly behind the partition leader. The two most common causes of under-replicated partitions are that one or more brokers is unresponsive, or the cluster is experiencing performance issues and one or more brokers have fallen behind.</p> | `{TEMPLATE_NAME:jmx["kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions","Value"].last()}>0` | AVERAGE | |
+| Kafka: Version has changed (new version: {ITEM.VALUE}) | <p>Kafka version has changed. Ack to close.</p> | `{TEMPLATE_NAME:jmx["kafka.server:type=app-info","version"].diff()}=1 and {TEMPLATE_NAME:jmx["kafka.server:type=app-info","version"].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Kafka: has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:jmx["kafka.server:type=app-info","start-time-ms"].last()}<10m` | INFO | <p>Manual close: YES</p> |
+| Kafka: Broker is not connected to ZooKeeper | <p>-</p> | `{TEMPLATE_NAME:jmx["kafka.server:type=SessionExpireListener,name=SessionState","Value"].regexp("CONNECTED")}=0` | AVERAGE | |
## Feedback
diff --git a/templates/app/memcached/README.md b/templates/app/memcached/README.md
index 4c7c1bd68e2..778e5a8b03d 100644
--- a/templates/app/memcached/README.md
+++ b/templates/app/memcached/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor Memcached server by Zabbix that works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
@@ -17,7 +17,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent2) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent2) for basic instructions.
Setup and configure zabbix-agent2 compiled with the Memcached monitoring [plugin](/go/plugins/memcached).
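As a minimal sketch (host names and the alternative port are examples, not values taken from this README), the plugin can be pointed at a non-default instance either globally in the agent configuration via the `Plugins.Memcached.Uri` option, or per host via the `{$MEMCACHED.CONN.URI}` macro described below:

```text
# zabbix_agent2.conf: default URI for the Memcached plugin
Plugins.Memcached.Uri=tcp://127.0.0.1:11212

# Host-level user macro override (takes precedence over the option above)
{$MEMCACHED.CONN.URI} = tcp://memcached.example.com:11211
```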
@@ -30,13 +30,13 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$MEMCACHED.CONN.PRC.MAX.WARN} |<p>Maximum percentage of connected clients</p> |`80` |
-|{$MEMCACHED.CONN.QUEUED.MAX.WARN} |<p>Maximum number of queued connections per second</p> |`1` |
-|{$MEMCACHED.CONN.THROTTLED.MAX.WARN} |<p>Maximum number of throttled connections per second</p> |`1` |
-|{$MEMCACHED.CONN.URI} |<p>Connection string in the URI format (password is not used). This param overwrites a value configured in the "Plugins.Memcached.Uri" option of the configuration file (if it's set), otherwise, the plugin's default value is used: "tcp://localhost:11211"</p> |`tcp://localhost:11211` |
-|{$MEMCACHED.MEM.PUSED.MAX.WARN} |<p>Maximum percentage of memory used</p> |`90` |
+| Name | Description | Default |
+|--------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------|
+| {$MEMCACHED.CONN.PRC.MAX.WARN} | <p>Maximum percentage of connected clients</p> | `80` |
+| {$MEMCACHED.CONN.QUEUED.MAX.WARN} | <p>Maximum number of queued connections per second</p> | `1` |
+| {$MEMCACHED.CONN.THROTTLED.MAX.WARN} | <p>Maximum number of throttled connections per second</p> | `1` |
+| {$MEMCACHED.CONN.URI} | <p>Connection string in the URI format (password is not used). This param overwrites a value configured in the "Plugins.Memcached.Uri" option of the configuration file (if it's set), otherwise, the plugin's default value is used: "tcp://localhost:11211"</p> | `tcp://localhost:11211` |
+| {$MEMCACHED.MEM.PUSED.MAX.WARN} | <p>Maximum percentage of memory used</p> | `90` |
## Template links
@@ -47,47 +47,47 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Memcached |Memcached: Ping | |ZABBIX_PASSIVE |memcached.ping["{$MEMCACHED.CONN.URI}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Memcached |Memcached: Max connections |<p>Max number of concurrent connections</p> |DEPENDENT |memcached.connections.max<p>**Preprocessing**:</p><p>- JSONPATH: `$.max_connections`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `30m`</p> |
-|Memcached |Memcached: Maximum number of bytes |<p>Maximum number of bytes allowed in cache. You can adjust this setting via a config file or the command line while starting your Memcached server.</p> |DEPENDENT |memcached.config.limit_maxbytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.limit_maxbytes`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `30m`</p> |
-|Memcached |Memcached: CPU sys |<p>System CPU consumed by the Memcached server</p> |DEPENDENT |memcached.cpu.sys<p>**Preprocessing**:</p><p>- JSONPATH: `$.rusage_system`</p> |
-|Memcached |Memcached: CPU user |<p>User CPU consumed by the Memcached server</p> |DEPENDENT |memcached.cpu.user<p>**Preprocessing**:</p><p>- JSONPATH: `$.rusage_user`</p> |
-|Memcached |Memcached: Queued connections per second |<p>Number of times that memcached has hit its connections limit and disabled its listener</p> |DEPENDENT |memcached.connections.queued.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.listen_disabled_num`</p><p>- CHANGE_PER_SECOND |
-|Memcached |Memcached: New connections per second |<p>Number of connections opened per second</p> |DEPENDENT |memcached.connections.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.total_connections`</p><p>- CHANGE_PER_SECOND |
-|Memcached |Memcached: Throttled connections |<p>Number of times a client connection was throttled. When sending GETs in batch mode and the connection contains too many requests (limited by -R parameter) the connection might be throttled to prevent starvation.</p> |DEPENDENT |memcached.connections.throttled.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.conn_yields`</p><p>- CHANGE_PER_SECOND |
-|Memcached |Memcached: Connection structures |<p>Number of connection structures allocated by the server</p> |DEPENDENT |memcached.connections.structures<p>**Preprocessing**:</p><p>- JSONPATH: `$.connection_structures`</p> |
-|Memcached |Memcached: Open connections |<p>The number of clients presently connected</p> |DEPENDENT |memcached.connections.current<p>**Preprocessing**:</p><p>- JSONPATH: `$.curr_connections`</p> |
-|Memcached |Memcached: Commands: FLUSH per second |<p>The flush_all command invalidates all items in the database. This operation incurs a performance penalty and shouldn’t take place in production, so check your debug scripts.</p> |DEPENDENT |memcached.commands.flush.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.cmd_flush`</p><p>- CHANGE_PER_SECOND |
-|Memcached |Memcached: Commands: GET per second |<p>Number of GET requests received by server per second.</p> |DEPENDENT |memcached.commands.get.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.cmd_get`</p><p>- CHANGE_PER_SECOND |
-|Memcached |Memcached: Commands: SET per second |<p>Number of SET requests received by server per second.</p> |DEPENDENT |memcached.commands.set.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.cmd_set`</p><p>- CHANGE_PER_SECOND |
-|Memcached |Memcached: Process id |<p>PID of the server process</p> |DEPENDENT |memcached.process_id<p>**Preprocessing**:</p><p>- JSONPATH: `$.pid`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memcached |Memcached: Memcached version |<p>Version of the Memcached server</p> |DEPENDENT |memcached.version<p>**Preprocessing**:</p><p>- JSONPATH: `$.version`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memcached |Memcached: Uptime |<p>Number of seconds since Memcached server start</p> |DEPENDENT |memcached.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.uptime`</p> |
-|Memcached |Memcached: Bytes used |<p>Current number of bytes used to store items.</p> |DEPENDENT |memcached.stats.bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.bytes`</p> |
-|Memcached |Memcached: Written bytes per second |<p>The network's read rate per second in B/sec</p> |DEPENDENT |memcached.stats.bytes_written.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.bytes_written`</p><p>- CHANGE_PER_SECOND |
-|Memcached |Memcached: Read bytes per second |<p>The network's read rate per second in B/sec</p> |DEPENDENT |memcached.stats.bytes_read.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.bytes_read`</p><p>- CHANGE_PER_SECOND |
-|Memcached |Memcached: Hits per second |<p>Number of successful GET requests (items requested and found) per second.</p> |DEPENDENT |memcached.stats.hits.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.get_hits`</p><p>- CHANGE_PER_SECOND |
-|Memcached |Memcached: Misses per second |<p>Number of missed GET requests (items requested but not found) per second.</p> |DEPENDENT |memcached.stats.misses.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.get_misses`</p><p>- CHANGE_PER_SECOND |
-|Memcached |Memcached: Evictions per second |<p>"An eviction is when an item that still has time to live is removed from the cache because a brand new item needs to be allocated.</p><p>The item is selected with a pseudo-LRU mechanism.</p><p>A high number of evictions coupled with a low hit rate means your application is setting a large number of keys that are never used again."</p> |DEPENDENT |memcached.stats.evictions.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.evictions`</p><p>- CHANGE_PER_SECOND |
-|Memcached |Memcached: New items per second |<p>Number of new items stored per second.</p> |DEPENDENT |memcached.stats.total_items.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.total_items`</p><p>- CHANGE_PER_SECOND |
-|Memcached |Memcached: Current number of items stored |<p>Current number of items stored by this instance.</p> |DEPENDENT |memcached.stats.curr_items<p>**Preprocessing**:</p><p>- JSONPATH: `$.curr_items`</p> |
-|Memcached |Memcached: Threads |<p>Number of worker threads requested</p> |DEPENDENT |memcached.stats.threads<p>**Preprocessing**:</p><p>- JSONPATH: `$.threads`</p> |
-|Zabbix_raw_items |Memcached: Get status | |ZABBIX_PASSIVE |memcached.stats["{$MEMCACHED.CONN.URI}"] |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|-------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|------------------------------------------------------------------------------------------------------------------------------------------|
+| Memcached | Memcached: Ping | | ZABBIX_PASSIVE | memcached.ping["{$MEMCACHED.CONN.URI}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Memcached | Memcached: Max connections | <p>Max number of concurrent connections</p> | DEPENDENT | memcached.connections.max<p>**Preprocessing**:</p><p>- JSONPATH: `$.max_connections`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `30m`</p> |
+| Memcached | Memcached: Maximum number of bytes | <p>Maximum number of bytes allowed in cache. You can adjust this setting via a config file or the command line while starting your Memcached server.</p> | DEPENDENT | memcached.config.limit_maxbytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.limit_maxbytes`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `30m`</p> |
+| Memcached | Memcached: CPU sys | <p>System CPU consumed by the Memcached server</p> | DEPENDENT | memcached.cpu.sys<p>**Preprocessing**:</p><p>- JSONPATH: `$.rusage_system`</p> |
+| Memcached | Memcached: CPU user | <p>User CPU consumed by the Memcached server</p> | DEPENDENT | memcached.cpu.user<p>**Preprocessing**:</p><p>- JSONPATH: `$.rusage_user`</p> |
+| Memcached | Memcached: Queued connections per second | <p>Number of times that memcached has hit its connections limit and disabled its listener</p> | DEPENDENT | memcached.connections.queued.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.listen_disabled_num`</p><p>- CHANGE_PER_SECOND |
+| Memcached | Memcached: New connections per second | <p>Number of connections opened per second</p> | DEPENDENT | memcached.connections.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.total_connections`</p><p>- CHANGE_PER_SECOND |
+| Memcached | Memcached: Throttled connections | <p>Number of times a client connection was throttled. When sending GETs in batch mode and the connection contains too many requests (limited by -R parameter) the connection might be throttled to prevent starvation.</p> | DEPENDENT | memcached.connections.throttled.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.conn_yields`</p><p>- CHANGE_PER_SECOND |
+| Memcached | Memcached: Connection structures | <p>Number of connection structures allocated by the server</p> | DEPENDENT | memcached.connections.structures<p>**Preprocessing**:</p><p>- JSONPATH: `$.connection_structures`</p> |
+| Memcached | Memcached: Open connections | <p>The number of clients presently connected</p> | DEPENDENT | memcached.connections.current<p>**Preprocessing**:</p><p>- JSONPATH: `$.curr_connections`</p> |
+| Memcached | Memcached: Commands: FLUSH per second | <p>The flush_all command invalidates all items in the database. This operation incurs a performance penalty and shouldn’t take place in production, so check your debug scripts.</p> | DEPENDENT | memcached.commands.flush.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.cmd_flush`</p><p>- CHANGE_PER_SECOND |
+| Memcached | Memcached: Commands: GET per second | <p>Number of GET requests received by server per second.</p> | DEPENDENT | memcached.commands.get.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.cmd_get`</p><p>- CHANGE_PER_SECOND |
+| Memcached | Memcached: Commands: SET per second | <p>Number of SET requests received by server per second.</p> | DEPENDENT | memcached.commands.set.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.cmd_set`</p><p>- CHANGE_PER_SECOND |
+| Memcached | Memcached: Process id | <p>PID of the server process</p> | DEPENDENT | memcached.process_id<p>**Preprocessing**:</p><p>- JSONPATH: `$.pid`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memcached | Memcached: Memcached version | <p>Version of the Memcached server</p> | DEPENDENT | memcached.version<p>**Preprocessing**:</p><p>- JSONPATH: `$.version`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memcached | Memcached: Uptime | <p>Number of seconds since Memcached server start</p> | DEPENDENT | memcached.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.uptime`</p> |
+| Memcached | Memcached: Bytes used | <p>Current number of bytes used to store items.</p> | DEPENDENT | memcached.stats.bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.bytes`</p> |
+| Memcached        | Memcached: Written bytes per second        | <p>The network's write rate per second in B/sec</p>                                                                                                                                                                                                                                                                                                    | DEPENDENT      | memcached.stats.bytes_written.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.bytes_written`</p><p>- CHANGE_PER_SECOND |
+| Memcached | Memcached: Read bytes per second | <p>The network's read rate per second in B/sec</p> | DEPENDENT | memcached.stats.bytes_read.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.bytes_read`</p><p>- CHANGE_PER_SECOND |
+| Memcached | Memcached: Hits per second | <p>Number of successful GET requests (items requested and found) per second.</p> | DEPENDENT | memcached.stats.hits.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.get_hits`</p><p>- CHANGE_PER_SECOND |
+| Memcached | Memcached: Misses per second | <p>Number of missed GET requests (items requested but not found) per second.</p> | DEPENDENT | memcached.stats.misses.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.get_misses`</p><p>- CHANGE_PER_SECOND |
+| Memcached | Memcached: Evictions per second | <p>"An eviction is when an item that still has time to live is removed from the cache because a brand new item needs to be allocated.</p><p>The item is selected with a pseudo-LRU mechanism.</p><p>A high number of evictions coupled with a low hit rate means your application is setting a large number of keys that are never used again."</p> | DEPENDENT | memcached.stats.evictions.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.evictions`</p><p>- CHANGE_PER_SECOND |
+| Memcached | Memcached: New items per second | <p>Number of new items stored per second.</p> | DEPENDENT | memcached.stats.total_items.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.total_items`</p><p>- CHANGE_PER_SECOND |
+| Memcached | Memcached: Current number of items stored | <p>Current number of items stored by this instance.</p> | DEPENDENT | memcached.stats.curr_items<p>**Preprocessing**:</p><p>- JSONPATH: `$.curr_items`</p> |
+| Memcached | Memcached: Threads | <p>Number of worker threads requested</p> | DEPENDENT | memcached.stats.threads<p>**Preprocessing**:</p><p>- JSONPATH: `$.threads`</p> |
+| Zabbix_raw_items | Memcached: Get status | | ZABBIX_PASSIVE | memcached.stats["{$MEMCACHED.CONN.URI}"] |
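As a quick sanity check outside Zabbix, the counters that the dependent items above extract (JSONPATH such as `$.get_hits`, followed by CHANGE_PER_SECOND) can be inspected straight from the memcached `stats` command. A minimal sketch, assuming the instance listens on 127.0.0.1:11211 and `nc` is available:

```bash
# Dump the raw counters that the master item memcached.stats[...] exposes as JSON fields.
printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | grep -E 'get_hits|get_misses|evictions|curr_connections'
# Each matching line looks like: STAT get_hits 12345
```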
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Memcached: Service is down |<p>-</p> |`{TEMPLATE_NAME:memcached.ping["{$MEMCACHED.CONN.URI}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|Memcached: Failed to fetch info data (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes</p> |`{TEMPLATE_NAME:memcached.cpu.sys.nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Memcached: Service is down</p> |
-|Memcached: Too many queued connections (over {$MEMCACHED.CONN.QUEUED.MAX.WARN} in 5m) |<p>The max number of connections is reachedand and a new connection had to wait in the queue as a result.</p> |`{TEMPLATE_NAME:memcached.connections.queued.rate.min(5m)}>{$MEMCACHED.CONN.QUEUED.MAX.WARN}` |WARNING | |
-|Memcached: Too many throttled connections (over {$MEMCACHED.CONN.THROTTLED.MAX.WARN} in 5m) |<p>Number of times a client connection was throttled is too high.</p><p>When sending GETs in batch mode and the connection contains too many requests (limited by -R parameter) the connection might be throttled to prevent starvation.</p> |`{TEMPLATE_NAME:memcached.connections.throttled.rate.min(5m)}>{$MEMCACHED.CONN.THROTTLED.MAX.WARN}` |WARNING | |
-|Memcached: Total number of connected clients is too high (over {$MEMCACHED.CONN.PRC.MAX.WARN}% in 5m) |<p>When the number of connections reaches the value of the "max_connections" parameter, new connections will be rejected.</p> |`{TEMPLATE_NAME:memcached.connections.current.min(5m)}/{Memcached:memcached.connections.max.last()}*100>{$MEMCACHED.CONN.PRC.MAX.WARN}` |WARNING | |
-|Memcached: Version has changed (new version: {ITEM.VALUE}) |<p>Memcached version has changed. Ack to close.</p> |`{TEMPLATE_NAME:memcached.version.diff()}=1 and {TEMPLATE_NAME:memcached.version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Memcached: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:memcached.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|Memcached: Memory usage is too high (over {$MEMCACHED.MEM.PUSED.MAX.WARN} in 5m) |<p>-</p> |`{TEMPLATE_NAME:memcached.stats.bytes.min(5m)}/{Memcached:memcached.config.limit_maxbytes.last()}*100>{$MEMCACHED.MEM.PUSED.MAX.WARN}` |WARNING | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------------------------------------|
+| Memcached: Service is down | <p>-</p> | `{TEMPLATE_NAME:memcached.ping["{$MEMCACHED.CONN.URI}"].last()}=0` | AVERAGE | <p>Manual close: YES</p> |
+| Memcached: Failed to fetch info data (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes</p> | `{TEMPLATE_NAME:memcached.cpu.sys.nodata(30m)}=1` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Memcached: Service is down</p> |
+| Memcached: Too many queued connections (over {$MEMCACHED.CONN.QUEUED.MAX.WARN} in 5m)                  | <p>The max number of connections is reached and a new connection had to wait in the queue as a result.</p>                                                                                                                                    | `{TEMPLATE_NAME:memcached.connections.queued.rate.min(5m)}>{$MEMCACHED.CONN.QUEUED.MAX.WARN}`                                             | WARNING  |                                                                                     |
+| Memcached: Too many throttled connections (over {$MEMCACHED.CONN.THROTTLED.MAX.WARN} in 5m) | <p>Number of times a client connection was throttled is too high.</p><p>When sending GETs in batch mode and the connection contains too many requests (limited by -R parameter) the connection might be throttled to prevent starvation.</p> | `{TEMPLATE_NAME:memcached.connections.throttled.rate.min(5m)}>{$MEMCACHED.CONN.THROTTLED.MAX.WARN}` | WARNING | |
+| Memcached: Total number of connected clients is too high (over {$MEMCACHED.CONN.PRC.MAX.WARN}% in 5m) | <p>When the number of connections reaches the value of the "max_connections" parameter, new connections will be rejected.</p> | `{TEMPLATE_NAME:memcached.connections.current.min(5m)}/{Memcached:memcached.connections.max.last()}*100>{$MEMCACHED.CONN.PRC.MAX.WARN}` | WARNING | |
+| Memcached: Version has changed (new version: {ITEM.VALUE}) | <p>Memcached version has changed. Ack to close.</p> | `{TEMPLATE_NAME:memcached.version.diff()}=1 and {TEMPLATE_NAME:memcached.version.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Memcached: has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:memcached.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| Memcached: Memory usage is too high (over {$MEMCACHED.MEM.PUSED.MAX.WARN} in 5m) | <p>-</p> | `{TEMPLATE_NAME:memcached.stats.bytes.min(5m)}/{Memcached:memcached.config.limit_maxbytes.last()}*100>{$MEMCACHED.MEM.PUSED.MAX.WARN}` | WARNING | |
## Feedback
diff --git a/templates/app/nginx_agent/README.md b/templates/app/nginx_agent/README.md
index e5c8d1ce228..d8dcfd0c834 100644
--- a/templates/app/nginx_agent/README.md
+++ b/templates/app/nginx_agent/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor Nginx by Zabbix that works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
@@ -27,7 +27,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
Set up [ngx_http_stub_status_module](https://nginx.ru/en/docs/http/ngx_http_stub_status_module.html).
Test the availability of the http_stub_status module with `nginx -V 2>&1 | grep -o with-http_stub_status_module`.
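For reference, a minimal nginx configuration sketch that exposes the status page on the defaults used by the macros below (host `localhost`, port `80`, path `basic_status`); names and access rules are illustrative and should be adapted to your environment:

```nginx
# Illustrative only: serve stub_status at http://localhost/basic_status for the local Zabbix agent.
server {
    listen 80;

    location = /basic_status {
        stub_status;        # use "stub_status on;" on nginx older than 1.7.5
        allow 127.0.0.1;    # restrict access to the host running the checks
        deny all;
    }
}
```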
@@ -52,13 +52,13 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$NGINX.DROP_RATE.MAX.WARN} |<p>The critical rate of the dropped connections for trigger expression.</p> |`1` |
-|{$NGINX.RESPONSE_TIME.MAX.WARN} |<p>The Nginx maximum response time in seconds for trigger expression.</p> |`10` |
-|{$NGINX.STUB_STATUS.HOST} |<p>Hostname or IP of Nginx stub_status host or container.</p> |`localhost` |
-|{$NGINX.STUB_STATUS.PATH} |<p>The path of Nginx stub_status page.</p> |`basic_status` |
-|{$NGINX.STUB_STATUS.PORT} |<p>The port of Nginx stub_status host or container.</p> |`80` |
+| Name | Description | Default |
+|---------------------------------|-----------------------------------------------------------------------------|----------------|
+| {$NGINX.DROP_RATE.MAX.WARN} | <p>The critical rate of the dropped connections for trigger expression.</p> | `1` |
+| {$NGINX.RESPONSE_TIME.MAX.WARN} | <p>The Nginx maximum response time in seconds for trigger expression.</p> | `10` |
+| {$NGINX.STUB_STATUS.HOST} | <p>Hostname or IP of Nginx stub_status host or container.</p> | `localhost` |
+| {$NGINX.STUB_STATUS.PATH} | <p>The path of Nginx stub_status page.</p> | `basic_status` |
+| {$NGINX.STUB_STATUS.PORT} | <p>The port of Nginx stub_status host or container.</p> | `80` |
## Template links
@@ -69,36 +69,36 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Nginx |Nginx: Service status |<p>-</p> |ZABBIX_PASSIVE |net.tcp.service[http,"{$NGINX.STUB_STATUS.HOST}","{$NGINX.STUB_STATUS.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Nginx |Nginx: Service response time |<p>-</p> |ZABBIX_PASSIVE |net.tcp.service.perf[http,"{$NGINX.STUB_STATUS.HOST}","{$NGINX.STUB_STATUS.PORT}"] |
-|Nginx |Nginx: Requests total |<p>The total number of client requests.</p> |DEPENDENT |nginx.requests.total<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \3`</p> |
-|Nginx |Nginx: Requests per second |<p>The total number of client requests.</p> |DEPENDENT |nginx.requests.total.rate<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \3`</p><p>- CHANGE_PER_SECOND |
-|Nginx |Nginx: Connections accepted per second |<p>The total number of accepted client connections.</p> |DEPENDENT |nginx.connections.accepted.rate<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \1`</p><p>- CHANGE_PER_SECOND |
-|Nginx |Nginx: Connections dropped per second |<p>The total number of dropped client connections.</p> |DEPENDENT |nginx.connections.dropped.rate<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
-|Nginx |Nginx: Connections handled per second |<p>The total number of handled connections. Generally, the parameter value is the same as accepts unless some resource limits have been reached (for example, the worker_connections limit).</p> |DEPENDENT |nginx.connections.handled.rate<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \2`</p><p>- CHANGE_PER_SECOND |
-|Nginx |Nginx: Connections active |<p>The current number of active client connections including Waiting connections.</p> |DEPENDENT |nginx.connections.active<p>**Preprocessing**:</p><p>- REGEX: `Active connections: ([0-9]+) \1`</p> |
-|Nginx |Nginx: Connections reading |<p>The current number of connections where nginx is reading the request header.</p> |DEPENDENT |nginx.connections.reading<p>**Preprocessing**:</p><p>- REGEX: `Reading: ([0-9]+) Writing: ([0-9]+) Waiting: ([0-9]+) \1`</p> |
-|Nginx |Nginx: Connections waiting |<p>The current number of idle client connections waiting for a request.</p> |DEPENDENT |nginx.connections.waiting<p>**Preprocessing**:</p><p>- REGEX: `Reading: ([0-9]+) Writing: ([0-9]+) Waiting: ([0-9]+) \3`</p> |
-|Nginx |Nginx: Connections writing |<p>The current number of connections where nginx is writing the response back to the client.</p> |DEPENDENT |nginx.connections.writing<p>**Preprocessing**:</p><p>- REGEX: `Reading: ([0-9]+) Writing: ([0-9]+) Waiting: ([0-9]+) \2`</p> |
-|Nginx |Nginx: Number of processes running |<p>Number of the Nginx processes running.</p> |ZABBIX_PASSIVE |proc.num[nginx] |
-|Nginx |Nginx: Memory usage (vsize) |<p>Virtual memory size used by process in bytes.</p> |ZABBIX_PASSIVE |proc.mem[nginx,,,,vsize] |
-|Nginx |Nginx: Memory usage (rss) |<p>Resident set size memory used by process in bytes.</p> |ZABBIX_PASSIVE |proc.mem[nginx,,,,rss] |
-|Nginx |Nginx: CPU utilization |<p>Process CPU utilization percentage.</p> |ZABBIX_PASSIVE |proc.cpu.util[nginx] |
-|Nginx |Nginx: Version |<p>-</p> |DEPENDENT |nginx.version<p>**Preprocessing**:</p><p>- REGEX: `Server: nginx\/(.+(?<!\r)) \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Zabbix_raw_items |Nginx: Get stub status page |<p>The following status information is provided:</p><p>Active connections - the current number of active client connections including Waiting connections.</p><p>Accepts - the total number of accepted client connections.</p><p>Handled - the total number of handled connections. Generally, the parameter value is the same as accepts unless some resource limits have been reached (for example, the worker_connections limit).</p><p>Requests - the total number of client requests.</p><p>Reading - the current number of connections where nginx is reading the request header.</p><p>Writing - the current number of connections where nginx is writing the response back to the client.</p><p>Waiting - the current number of idle client connections waiting for a request.</p><p>https://nginx.org/en/docs/http/ngx_http_stub_status_module.html</p> |ZABBIX_PASSIVE |web.page.get["{$NGINX.STUB_STATUS.HOST}","{$NGINX.STUB_STATUS.PATH}","{$NGINX.STUB_STATUS.PORT}"] |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|----------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Nginx | Nginx: Service status | <p>-</p> | ZABBIX_PASSIVE | net.tcp.service[http,"{$NGINX.STUB_STATUS.HOST}","{$NGINX.STUB_STATUS.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Nginx | Nginx: Service response time | <p>-</p> | ZABBIX_PASSIVE | net.tcp.service.perf[http,"{$NGINX.STUB_STATUS.HOST}","{$NGINX.STUB_STATUS.PORT}"] |
+| Nginx | Nginx: Requests total | <p>The total number of client requests.</p> | DEPENDENT | nginx.requests.total<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \3`</p> |
+| Nginx | Nginx: Requests per second | <p>The total number of client requests.</p> | DEPENDENT | nginx.requests.total.rate<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \3`</p><p>- CHANGE_PER_SECOND |
+| Nginx | Nginx: Connections accepted per second | <p>The total number of accepted client connections.</p> | DEPENDENT | nginx.connections.accepted.rate<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \1`</p><p>- CHANGE_PER_SECOND |
+| Nginx | Nginx: Connections dropped per second | <p>The total number of dropped client connections.</p> | DEPENDENT | nginx.connections.dropped.rate<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
+| Nginx | Nginx: Connections handled per second | <p>The total number of handled connections. Generally, the parameter value is the same as accepts unless some resource limits have been reached (for example, the worker_connections limit).</p> | DEPENDENT | nginx.connections.handled.rate<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \2`</p><p>- CHANGE_PER_SECOND |
+| Nginx | Nginx: Connections active | <p>The current number of active client connections including Waiting connections.</p> | DEPENDENT | nginx.connections.active<p>**Preprocessing**:</p><p>- REGEX: `Active connections: ([0-9]+) \1`</p> |
+| Nginx | Nginx: Connections reading | <p>The current number of connections where nginx is reading the request header.</p> | DEPENDENT | nginx.connections.reading<p>**Preprocessing**:</p><p>- REGEX: `Reading: ([0-9]+) Writing: ([0-9]+) Waiting: ([0-9]+) \1`</p> |
+| Nginx | Nginx: Connections waiting | <p>The current number of idle client connections waiting for a request.</p> | DEPENDENT | nginx.connections.waiting<p>**Preprocessing**:</p><p>- REGEX: `Reading: ([0-9]+) Writing: ([0-9]+) Waiting: ([0-9]+) \3`</p> |
+| Nginx | Nginx: Connections writing | <p>The current number of connections where nginx is writing the response back to the client.</p> | DEPENDENT | nginx.connections.writing<p>**Preprocessing**:</p><p>- REGEX: `Reading: ([0-9]+) Writing: ([0-9]+) Waiting: ([0-9]+) \2`</p> |
+| Nginx | Nginx: Number of processes running | <p>Number of the Nginx processes running.</p> | ZABBIX_PASSIVE | proc.num[nginx] |
+| Nginx | Nginx: Memory usage (vsize) | <p>Virtual memory size used by process in bytes.</p> | ZABBIX_PASSIVE | proc.mem[nginx,,,,vsize] |
+| Nginx | Nginx: Memory usage (rss) | <p>Resident set size memory used by process in bytes.</p> | ZABBIX_PASSIVE | proc.mem[nginx,,,,rss] |
+| Nginx | Nginx: CPU utilization | <p>Process CPU utilization percentage.</p> | ZABBIX_PASSIVE | proc.cpu.util[nginx] |
+| Nginx | Nginx: Version | <p>-</p> | DEPENDENT | nginx.version<p>**Preprocessing**:</p><p>- REGEX: `Server: nginx\/(.+(?<!\r)) \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Zabbix_raw_items | Nginx: Get stub status page | <p>The following status information is provided:</p><p>Active connections - the current number of active client connections including Waiting connections.</p><p>Accepts - the total number of accepted client connections.</p><p>Handled - the total number of handled connections. Generally, the parameter value is the same as accepts unless some resource limits have been reached (for example, the worker_connections limit).</p><p>Requests - the total number of client requests.</p><p>Reading - the current number of connections where nginx is reading the request header.</p><p>Writing - the current number of connections where nginx is writing the response back to the client.</p><p>Waiting - the current number of idle client connections waiting for a request.</p><p>https://nginx.org/en/docs/http/ngx_http_stub_status_module.html</p> | ZABBIX_PASSIVE | web.page.get["{$NGINX.STUB_STATUS.HOST}","{$NGINX.STUB_STATUS.PATH}","{$NGINX.STUB_STATUS.PORT}"] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Nginx: Service is down |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service[http,"{$NGINX.STUB_STATUS.HOST}","{$NGINX.STUB_STATUS.PORT}"].last()}=0` |AVERAGE |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Nginx: Process is not running</p> |
-|Nginx: Service response time is too high (over {$NGINX.RESPONSE_TIME.MAX.WARN}s for 5m) |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service.perf[http,"{$NGINX.STUB_STATUS.HOST}","{$NGINX.STUB_STATUS.PORT}"].min(5m)}>{$NGINX.RESPONSE_TIME.MAX.WARN}` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Nginx: Process is not running</p><p>- Nginx: Service is down</p> |
-|Nginx: High connections drop rate (more than {$NGINX.DROP_RATE.MAX.WARN} for 5m) |<p>The dropping rate connections is greater than {$NGINX.DROP_RATE.MAX.WARN} for the last 5 minutes.</p> |`{TEMPLATE_NAME:nginx.connections.dropped.rate.min(5m)} > {$NGINX.DROP_RATE.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Nginx: Process is not running</p><p>- Nginx: Service is down</p> |
-|Nginx: Process is not running |<p>-</p> |`{TEMPLATE_NAME:proc.num[nginx].last()}=0` |HIGH | |
-|Nginx: Version has changed (new version: {ITEM.VALUE}) |<p>Nginx version has changed. Ack to close.</p> |`{TEMPLATE_NAME:nginx.version.diff()}=1 and {TEMPLATE_NAME:nginx.version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Nginx: Failed to fetch stub status page (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes.</p> |`{TEMPLATE_NAME:web.page.get["{$NGINX.STUB_STATUS.HOST}","{$NGINX.STUB_STATUS.PATH}","{$NGINX.STUB_STATUS.PORT}"].str("HTTP/1.1 200")}=0 or {TEMPLATE_NAME:web.page.get["{$NGINX.STUB_STATUS.HOST}","{$NGINX.STUB_STATUS.PATH}","{$NGINX.STUB_STATUS.PORT}"].nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Nginx: Process is not running</p><p>- Nginx: Service is down</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-----------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------------------------------------------------------------------------------------------------------|
+| Nginx: Service is down | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service[http,"{$NGINX.STUB_STATUS.HOST}","{$NGINX.STUB_STATUS.PORT}"].last()}=0` | AVERAGE | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Nginx: Process is not running</p> |
+| Nginx: Service response time is too high (over {$NGINX.RESPONSE_TIME.MAX.WARN}s for 5m) | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service.perf[http,"{$NGINX.STUB_STATUS.HOST}","{$NGINX.STUB_STATUS.PORT}"].min(5m)}>{$NGINX.RESPONSE_TIME.MAX.WARN}` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Nginx: Process is not running</p><p>- Nginx: Service is down</p> |
+| Nginx: High connections drop rate (more than {$NGINX.DROP_RATE.MAX.WARN} for 5m)         | <p>The rate of dropped connections is greater than {$NGINX.DROP_RATE.MAX.WARN} for the last 5 minutes.</p> | `{TEMPLATE_NAME:nginx.connections.dropped.rate.min(5m)} > {$NGINX.DROP_RATE.MAX.WARN}`                                                                                                                                                                                                                                                                         | WARNING  | <p>**Depends on**:</p><p>- Nginx: Process is not running</p><p>- Nginx: Service is down</p>                          |
+| Nginx: Process is not running | <p>-</p> | `{TEMPLATE_NAME:proc.num[nginx].last()}=0` | HIGH | |
+| Nginx: Version has changed (new version: {ITEM.VALUE}) | <p>Nginx version has changed. Ack to close.</p> | `{TEMPLATE_NAME:nginx.version.diff()}=1 and {TEMPLATE_NAME:nginx.version.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Nginx: Failed to fetch stub status page (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes.</p> | `{TEMPLATE_NAME:web.page.get["{$NGINX.STUB_STATUS.HOST}","{$NGINX.STUB_STATUS.PATH}","{$NGINX.STUB_STATUS.PORT}"].str("HTTP/1.1 200")}=0 or {TEMPLATE_NAME:web.page.get["{$NGINX.STUB_STATUS.HOST}","{$NGINX.STUB_STATUS.PATH}","{$NGINX.STUB_STATUS.PORT}"].nodata(30m)}=1` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Nginx: Process is not running</p><p>- Nginx: Service is down</p> |
## Feedback
diff --git a/templates/app/nginx_agent/template_app_nginx_agent.yaml b/templates/app/nginx_agent/template_app_nginx_agent.yaml
index 160c3cd634b..d0a71288a14 100644
--- a/templates/app/nginx_agent/template_app_nginx_agent.yaml
+++ b/templates/app/nginx_agent/template_app_nginx_agent.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:04:42Z'
+ date: '2021-04-22T11:27:42Z'
groups:
-
name: Templates/Applications
@@ -412,54 +412,56 @@ zabbix_export:
dashboards:
-
name: 'Nginx performance'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Nginx: Connections by state'
+ host: 'Nginx by Zabbix agent'
-
- type: GRAPH
- name: graphid
- value:
- name: 'Nginx: Connections by state'
- host: 'Nginx by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Nginx: Requests per second'
+ host: 'Nginx by Zabbix agent'
-
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Nginx: Requests per second'
- host: 'Nginx by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Nginx: Connections per second'
- host: 'Nginx by Zabbix agent'
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Nginx: Connections per second'
+ host: 'Nginx by Zabbix agent'
valuemaps:
-
name: 'Service state'
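The hunk above shows the [ZBXNEXT-6327] change itself: dashboard widgets are no longer direct children of the dashboard but are nested under a `pages` array, so a template dashboard can now span several pages. Condensed, the new export layout looks roughly like this sketch (fields abbreviated):

```yaml
dashboards:
  -
    name: 'Nginx performance'
    pages:
      -
        widgets:
          -
            type: GRAPH_CLASSIC
            width: '12'
            height: '5'
            fields:
              -
                type: GRAPH
                name: graphid
                value:
                  name: 'Nginx: Connections by state'
                  host: 'Nginx by Zabbix agent'
```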
diff --git a/templates/app/nginx_http/README.md b/templates/app/nginx_http/README.md
index e1f53ffea40..f608495c2e6 100644
--- a/templates/app/nginx_http/README.md
+++ b/templates/app/nginx_http/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor Nginx by Zabbix that works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
@@ -25,7 +25,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/http) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/http) for basic instructions.
Set up [ngx_http_stub_status_module](https://nginx.ru/en/docs/http/ngx_http_stub_status_module.html).
Test the availability of the http_stub_status module with `nginx -V 2>&1 | grep -o with-http_stub_status_module`.
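Because this template polls the status page over HTTP, the endpoint can also be verified from the Zabbix server or proxy with a plain request. A sketch assuming the default macros (scheme `http`, port `80`, path `basic_status`) and a placeholder host name:

```bash
curl -s http://<nginx-host>:80/basic_status
# Expected payload, which the REGEX preprocessing steps in the items below parse:
#   Active connections: 291
#   server accepts handled requests
#    16630948 16630948 31070465
#   Reading: 6 Writing: 179 Waiting: 106
```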
@@ -48,13 +48,13 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$NGINX.DROP_RATE.MAX.WARN} |<p>The critical rate of the dropped connections for trigger expression.</p> |`1` |
-|{$NGINX.RESPONSE_TIME.MAX.WARN} |<p>The Nginx maximum response time in seconds for trigger expression.</p> |`10` |
-|{$NGINX.STUB_STATUS.PATH} |<p>The path of Nginx stub_status page.</p> |`basic_status` |
-|{$NGINX.STUB_STATUS.PORT} |<p>The port of Nginx stub_status host or container.</p> |`80` |
-|{$NGINX.STUB_STATUS.SCHEME} |<p>The protocol http or https of Nginx stub_status host or container.</p> |`http` |
+| Name | Description | Default |
+|---------------------------------|-----------------------------------------------------------------------------|----------------|
+| {$NGINX.DROP_RATE.MAX.WARN} | <p>The critical rate of the dropped connections for trigger expression.</p> | `1` |
+| {$NGINX.RESPONSE_TIME.MAX.WARN} | <p>The Nginx maximum response time in seconds for trigger expression.</p> | `10` |
+| {$NGINX.STUB_STATUS.PATH} | <p>The path of Nginx stub_status page.</p> | `basic_status` |
+| {$NGINX.STUB_STATUS.PORT} | <p>The port of Nginx stub_status host or container.</p> | `80` |
+| {$NGINX.STUB_STATUS.SCHEME} | <p>The protocol http or https of Nginx stub_status host or container.</p> | `http` |
## Template links
@@ -65,31 +65,31 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Nginx |Nginx: Service status |<p>-</p> |SIMPLE |net.tcp.service[http,"{HOST.CONN}","{$NGINX.STUB_STATUS.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Nginx |Nginx: Service response time |<p>-</p> |SIMPLE |net.tcp.service.perf[http,"{HOST.CONN}","{$NGINX.STUB_STATUS.PORT}"] |
-|Nginx |Nginx: Requests total |<p>The total number of client requests.</p> |DEPENDENT |nginx.requests.total<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \3`</p> |
-|Nginx |Nginx: Requests per second |<p>The total number of client requests.</p> |DEPENDENT |nginx.requests.total.rate<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \3`</p><p>- CHANGE_PER_SECOND |
-|Nginx |Nginx: Connections accepted per second |<p>The total number of accepted client connections.</p> |DEPENDENT |nginx.connections.accepted.rate<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \1`</p><p>- CHANGE_PER_SECOND |
-|Nginx |Nginx: Connections dropped per second |<p>The total number of dropped client connections.</p> |DEPENDENT |nginx.connections.dropped.rate<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
-|Nginx |Nginx: Connections handled per second |<p>The total number of handled connections. Generally, the parameter value is the same as accepts unless some resource limits have been reached (for example, the worker_connections limit).</p> |DEPENDENT |nginx.connections.handled.rate<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \2`</p><p>- CHANGE_PER_SECOND |
-|Nginx |Nginx: Connections active |<p>The current number of active client connections including Waiting connections.</p> |DEPENDENT |nginx.connections.active<p>**Preprocessing**:</p><p>- REGEX: `Active connections: ([0-9]+) \1`</p> |
-|Nginx |Nginx: Connections reading |<p>The current number of connections where nginx is reading the request header.</p> |DEPENDENT |nginx.connections.reading<p>**Preprocessing**:</p><p>- REGEX: `Reading: ([0-9]+) Writing: ([0-9]+) Waiting: ([0-9]+) \1`</p> |
-|Nginx |Nginx: Connections waiting |<p>The current number of idle client connections waiting for a request.</p> |DEPENDENT |nginx.connections.waiting<p>**Preprocessing**:</p><p>- REGEX: `Reading: ([0-9]+) Writing: ([0-9]+) Waiting: ([0-9]+) \3`</p> |
-|Nginx |Nginx: Connections writing |<p>The current number of connections where nginx is writing the response back to the client.</p> |DEPENDENT |nginx.connections.writing<p>**Preprocessing**:</p><p>- REGEX: `Reading: ([0-9]+) Writing: ([0-9]+) Waiting: ([0-9]+) \2`</p> |
-|Nginx |Nginx: Version |<p>-</p> |DEPENDENT |nginx.version<p>**Preprocessing**:</p><p>- REGEX: `Server: nginx\/(.+(?<!\r)) \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Zabbix_raw_items |Nginx: Get stub status page |<p>The following status information is provided:</p><p>Active connections - the current number of active client connections including Waiting connections.</p><p>Accepts - the total number of accepted client connections.</p><p>Handled - the total number of handled connections. Generally, the parameter value is the same as accepts unless some resource limits have been reached (for example, the worker_connections limit).</p><p>Requests - the total number of client requests.</p><p>Reading - the current number of connections where nginx is reading the request header.</p><p>Writing - the current number of connections where nginx is writing the response back to the client.</p><p>Waiting - the current number of idle client connections waiting for a request.</p><p>https://nginx.org/en/docs/http/ngx_http_stub_status_module.html</p> |HTTP_AGENT |nginx.get_stub_status |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|----------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Nginx | Nginx: Service status | <p>-</p> | SIMPLE | net.tcp.service[http,"{HOST.CONN}","{$NGINX.STUB_STATUS.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Nginx | Nginx: Service response time | <p>-</p> | SIMPLE | net.tcp.service.perf[http,"{HOST.CONN}","{$NGINX.STUB_STATUS.PORT}"] |
+| Nginx | Nginx: Requests total | <p>The total number of client requests.</p> | DEPENDENT | nginx.requests.total<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \3`</p> |
+| Nginx | Nginx: Requests per second | <p>The total number of client requests.</p> | DEPENDENT | nginx.requests.total.rate<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \3`</p><p>- CHANGE_PER_SECOND |
+| Nginx | Nginx: Connections accepted per second | <p>The total number of accepted client connections.</p> | DEPENDENT | nginx.connections.accepted.rate<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \1`</p><p>- CHANGE_PER_SECOND |
+| Nginx | Nginx: Connections dropped per second | <p>The total number of dropped client connections.</p> | DEPENDENT | nginx.connections.dropped.rate<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND |
+| Nginx | Nginx: Connections handled per second | <p>The total number of handled connections. Generally, the parameter value is the same as accepts unless some resource limits have been reached (for example, the worker_connections limit).</p> | DEPENDENT | nginx.connections.handled.rate<p>**Preprocessing**:</p><p>- REGEX: `server accepts handled requests\s+([0-9]+) ([0-9]+) ([0-9]+) \2`</p><p>- CHANGE_PER_SECOND |
+| Nginx | Nginx: Connections active | <p>The current number of active client connections including Waiting connections.</p> | DEPENDENT | nginx.connections.active<p>**Preprocessing**:</p><p>- REGEX: `Active connections: ([0-9]+) \1`</p> |
+| Nginx | Nginx: Connections reading | <p>The current number of connections where nginx is reading the request header.</p> | DEPENDENT | nginx.connections.reading<p>**Preprocessing**:</p><p>- REGEX: `Reading: ([0-9]+) Writing: ([0-9]+) Waiting: ([0-9]+) \1`</p> |
+| Nginx | Nginx: Connections waiting | <p>The current number of idle client connections waiting for a request.</p> | DEPENDENT | nginx.connections.waiting<p>**Preprocessing**:</p><p>- REGEX: `Reading: ([0-9]+) Writing: ([0-9]+) Waiting: ([0-9]+) \3`</p> |
+| Nginx | Nginx: Connections writing | <p>The current number of connections where nginx is writing the response back to the client.</p> | DEPENDENT | nginx.connections.writing<p>**Preprocessing**:</p><p>- REGEX: `Reading: ([0-9]+) Writing: ([0-9]+) Waiting: ([0-9]+) \2`</p> |
+| Nginx | Nginx: Version | <p>-</p> | DEPENDENT | nginx.version<p>**Preprocessing**:</p><p>- REGEX: `Server: nginx\/(.+(?<!\r)) \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Zabbix_raw_items | Nginx: Get stub status page | <p>The following status information is provided:</p><p>Active connections - the current number of active client connections including Waiting connections.</p><p>Accepts - the total number of accepted client connections.</p><p>Handled - the total number of handled connections. Generally, the parameter value is the same as accepts unless some resource limits have been reached (for example, the worker_connections limit).</p><p>Requests - the total number of client requests.</p><p>Reading - the current number of connections where nginx is reading the request header.</p><p>Writing - the current number of connections where nginx is writing the response back to the client.</p><p>Waiting - the current number of idle client connections waiting for a request.</p><p>https://nginx.org/en/docs/http/ngx_http_stub_status_module.html</p> | HTTP_AGENT | nginx.get_stub_status |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Nginx: Service is down |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service[http,"{HOST.CONN}","{$NGINX.STUB_STATUS.PORT}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|Nginx: Service response time is too high (over {$NGINX.RESPONSE_TIME.MAX.WARN}s for 5m) |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service.perf[http,"{HOST.CONN}","{$NGINX.STUB_STATUS.PORT}"].min(5m)}>{$NGINX.RESPONSE_TIME.MAX.WARN}` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Nginx: Service is down</p> |
-|Nginx: High connections drop rate (more than {$NGINX.DROP_RATE.MAX.WARN} for 5m) |<p>The dropping rate connections is greater than {$NGINX.DROP_RATE.MAX.WARN} for the last 5 minutes.</p> |`{TEMPLATE_NAME:nginx.connections.dropped.rate.min(5m)} > {$NGINX.DROP_RATE.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Nginx: Service is down</p> |
-|Nginx: Version has changed (new version: {ITEM.VALUE}) |<p>Nginx version has changed. Ack to close.</p> |`{TEMPLATE_NAME:nginx.version.diff()}=1 and {TEMPLATE_NAME:nginx.version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Nginx: Failed to fetch stub status page (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes.</p> |`{TEMPLATE_NAME:nginx.get_stub_status.str("HTTP/1.1 200")}=0 or {TEMPLATE_NAME:nginx.get_stub_status.nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Nginx: Service is down</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-----------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------|
+| Nginx: Service is down | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service[http,"{HOST.CONN}","{$NGINX.STUB_STATUS.PORT}"].last()}=0` | AVERAGE | <p>Manual close: YES</p> |
+| Nginx: Service response time is too high (over {$NGINX.RESPONSE_TIME.MAX.WARN}s for 5m) | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service.perf[http,"{HOST.CONN}","{$NGINX.STUB_STATUS.PORT}"].min(5m)}>{$NGINX.RESPONSE_TIME.MAX.WARN}` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Nginx: Service is down</p> |
+| Nginx: High connections drop rate (more than {$NGINX.DROP_RATE.MAX.WARN} for 5m)         | <p>The rate of dropped connections is greater than {$NGINX.DROP_RATE.MAX.WARN} for the last 5 minutes.</p> | `{TEMPLATE_NAME:nginx.connections.dropped.rate.min(5m)} > {$NGINX.DROP_RATE.MAX.WARN}`                                                                                          | WARNING  | <p>**Depends on**:</p><p>- Nginx: Service is down</p>                           |
+| Nginx: Version has changed (new version: {ITEM.VALUE}) | <p>Nginx version has changed. Ack to close.</p> | `{TEMPLATE_NAME:nginx.version.diff()}=1 and {TEMPLATE_NAME:nginx.version.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Nginx: Failed to fetch stub status page (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes.</p> | `{TEMPLATE_NAME:nginx.get_stub_status.str("HTTP/1.1 200")}=0 or {TEMPLATE_NAME:nginx.get_stub_status.nodata(30m)}=1` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Nginx: Service is down</p> |
## Feedback
diff --git a/templates/app/nginx_http/template_app_nginx_http.yaml b/templates/app/nginx_http/template_app_nginx_http.yaml
index 11500543e89..c78e37d9d87 100644
--- a/templates/app/nginx_http/template_app_nginx_http.yaml
+++ b/templates/app/nginx_http/template_app_nginx_http.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:04:44Z'
+ date: '2021-04-22T11:27:50Z'
groups:
-
name: Templates/Applications
@@ -359,54 +359,56 @@ zabbix_export:
dashboards:
-
name: 'Nginx performance'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Nginx: Connections by state'
+ host: 'Nginx by HTTP'
-
- type: GRAPH
- name: graphid
- value:
- name: 'Nginx: Connections by state'
- host: 'Nginx by HTTP'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Nginx: Requests per second'
+ host: 'Nginx by HTTP'
-
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Nginx: Requests per second'
- host: 'Nginx by HTTP'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Nginx: Connections per second'
- host: 'Nginx by HTTP'
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Nginx: Connections per second'
+ host: 'Nginx by HTTP'
valuemaps:
-
name: 'Service state'
diff --git a/templates/app/rabbitmq_agent/README.md b/templates/app/rabbitmq_agent/README.md
index 7431ac013f5..d06d9907e75 100644
--- a/templates/app/rabbitmq_agent/README.md
+++ b/templates/app/rabbitmq_agent/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor RabbitMQ by Zabbix that works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
@@ -17,7 +17,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
Enable the RabbitMQ management plugin. See [RabbitMQ’s documentation](https://www.rabbitmq.com/management.html) to enable it.
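A sketch of the usual preparation steps on the RabbitMQ node, using the `zbx_monitor` user name that the macros below default to; the password is a placeholder to replace:

```bash
# Enable the management API (listens on port 15672 by default, matching {$RABBITMQ.API.PORT}).
rabbitmq-plugins enable rabbitmq_management

# Create a monitoring-only user matching {$RABBITMQ.API.USER}.
rabbitmqctl add_user zbx_monitor '<password>'
rabbitmqctl set_user_tags zbx_monitor monitoring
```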
@@ -48,14 +48,14 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$RABBITMQ.API.CLUSTER_HOST} |<p>The hostname or IP of RabbitMQ cluster API endpoint</p> |`127.0.0.1` |
-|{$RABBITMQ.API.PASSWORD} |<p>-</p> |`zabbix` |
-|{$RABBITMQ.API.PORT} |<p>The port of RabbitMQ API endpoint</p> |`15672` |
-|{$RABBITMQ.API.USER} |<p>-</p> |`zbx_monitor` |
-|{$RABBITMQ.LLD.FILTER.EXCHANGE.MATCHES} |<p>Filter of discoverable exchanges</p> |`.*` |
-|{$RABBITMQ.LLD.FILTER.EXCHANGE.NOT_MATCHES} |<p>Filter to exclude discovered exchanges</p> |`CHANGE_IF_NEEDED` |
+| Name | Description | Default |
+|---------------------------------------------|------------------------------------------------------------|--------------------|
+| {$RABBITMQ.API.CLUSTER_HOST} | <p>The hostname or IP of RabbitMQ cluster API endpoint</p> | `127.0.0.1` |
+| {$RABBITMQ.API.PASSWORD} | <p>-</p> | `zabbix` |
+| {$RABBITMQ.API.PORT} | <p>The port of RabbitMQ API endpoint</p> | `15672` |
+| {$RABBITMQ.API.USER} | <p>-</p> | `zbx_monitor` |
+| {$RABBITMQ.LLD.FILTER.EXCHANGE.MATCHES} | <p>Filter of discoverable exchanges</p> | `.*` |
+| {$RABBITMQ.LLD.FILTER.EXCHANGE.NOT_MATCHES} | <p>Filter to exclude discovered exchanges</p> | `CHANGE_IF_NEEDED` |
## Template links
@@ -63,65 +63,65 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Health Check 3.8.10+ discovery |<p>Version 3.8.10+ specific metrics</p> |DEPENDENT |rabbitmq.healthcheck.v3810.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Exchanges discovery |<p>Individual exchange metrics</p> |DEPENDENT |rabbitmq.exchanges.discovery<p>**Filter**:</p>AND <p>- A: {#EXCHANGE} MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.EXCHANGE.MATCHES}`</p><p>- B: {#EXCHANGE} NOT_MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.EXCHANGE.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|--------------------------------|-----------------------------------------|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Health Check 3.8.10+ discovery | <p>Version 3.8.10+ specific metrics</p> | DEPENDENT | rabbitmq.healthcheck.v3810.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Exchanges discovery | <p>Individual exchange metrics</p> | DEPENDENT | rabbitmq.exchanges.discovery<p>**Filter**:</p>AND <p>- A: {#EXCHANGE} MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.EXCHANGE.MATCHES}`</p><p>- B: {#EXCHANGE} NOT_MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.EXCHANGE.NOT_MATCHES}`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|RabbitMQ |RabbitMQ: Connections total |<p>Total number of connections</p> |DEPENDENT |rabbitmq.overview.object_totals.connections<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.connections`</p> |
-|RabbitMQ |RabbitMQ: Channels total |<p>Total number of channels</p> |DEPENDENT |rabbitmq.overview.object_totals.channels<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.channels`</p> |
-|RabbitMQ |RabbitMQ: Queues total |<p>Total number of queues</p> |DEPENDENT |rabbitmq.overview.object_totals.queues<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.queues`</p> |
-|RabbitMQ |RabbitMQ: Consumers total |<p>Total number of consumers</p> |DEPENDENT |rabbitmq.overview.object_totals.consumers<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.consumers`</p> |
-|RabbitMQ |RabbitMQ: Exchanges total |<p>Total number of exchanges</p> |DEPENDENT |rabbitmq.overview.object_totals.exchanges<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.exchanges`</p> |
-|RabbitMQ |RabbitMQ: Messages total |<p>Total number of messages (ready plus unacknowledged)</p> |DEPENDENT |rabbitmq.overview.queue_totals.messages<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue_totals.messages`</p> |
-|RabbitMQ |RabbitMQ: Messages ready for delivery |<p>Number of messages ready for deliver</p> |DEPENDENT |rabbitmq.overview.queue_totals.messages.ready<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue_totals.messages_ready`</p> |
-|RabbitMQ |RabbitMQ: Messages unacknowledged |<p>Number of unacknowledged messages</p> |DEPENDENT |rabbitmq.overview.queue_totals.messages.unacknowledged<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue_totals.messages_unacknowledged`</p> |
-|RabbitMQ |RabbitMQ: Messages acknowledged |<p>Number of messages delivered to clients and acknowledged</p> |DEPENDENT |rabbitmq.overview.messages.ack<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.ack`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages acknowledged per second |<p>Rate of messages delivered to clients and acknowledged per second</p> |DEPENDENT |rabbitmq.overview.messages.ack.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.ack_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages confirmed |<p>Count of messages confirmed</p> |DEPENDENT |rabbitmq.overview.messages.confirm<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.confirm`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages confirmed per second |<p>Rate of messages confirmed per second</p> |DEPENDENT |rabbitmq.overview.messages.confirm.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.confirm_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages delivered |<p>Sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> |DEPENDENT |rabbitmq.overview.messages.deliver_get<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.deliver_get`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages delivered per second |<p>Rate per second of the sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> |DEPENDENT |rabbitmq.overview.messages.deliver_get.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.deliver_get_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages published |<p>Count of messages published</p> |DEPENDENT |rabbitmq.overview.messages.publish<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages published per second |<p>Rate of messages published per second</p> |DEPENDENT |rabbitmq.overview.messages.publish.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages publish_in |<p>Count of messages published from channels into this overview</p> |DEPENDENT |rabbitmq.overview.messages.publish_in<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_in`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages publish_in per second |<p>Rate of messages published from channels into this overview per sec</p> |DEPENDENT |rabbitmq.overview.messages.publish_in.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_in_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages publish_out |<p>Count of messages published from this overview into queues</p> |DEPENDENT |rabbitmq.overview.messages.publish_out<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_out`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages publish_out per second |<p>Rate of messages published from this overview into queues per second,0,rabbitmq,total msgs pub out rate</p> |DEPENDENT |rabbitmq.overview.messages.publish_out.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_out_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages returned unroutable |<p>Count of messages returned to publisher as unroutable</p> |DEPENDENT |rabbitmq.overview.messages.return_unroutable<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.return_unroutable`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages returned unroutable per second |<p>Rate of messages returned to publisher as unroutable per second</p> |DEPENDENT |rabbitmq.overview.messages.return_unroutable.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.return_unroutable_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages returned redeliver |<p>Count of subset of messages in deliver_get which had the redelivered flag set</p> |DEPENDENT |rabbitmq.overview.messages.redeliver<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.redeliver`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages returned redeliver per second |<p>Rate of subset of messages in deliver_get which had the redelivered flag set per second</p> |DEPENDENT |rabbitmq.overview.messages.redeliver.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.redeliver_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Healthcheck: alarms in effect in the cluster{#SINGLETON} |<p>Responds a 200 OK if there are no alarms in effect in the cluster, otherwise responds with a 503 Service Unavailable.</p> |ZABBIX_PASSIVE |web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.CLUSTER_HOST}:{$RABBITMQ.API.PORT}/api/health/checks/alarms{#SINGLETON}"]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages acknowledged |<p>Number of messages delivered to clients and acknowledged</p> |DEPENDENT |rabbitmq.exchange.messages.ack["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.ack.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages acknowledged per second |<p>Rate of messages delivered to clients and acknowledged per second</p> |DEPENDENT |rabbitmq.exchange.messages.ack.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.ack_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages confirmed |<p>Count of messages confirmed</p> |DEPENDENT |rabbitmq.exchange.messages.confirm["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.confirm.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages confirmed per second |<p>Rate of messages confirmed per second</p> |DEPENDENT |rabbitmq.exchange.messages.confirm.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.confirm_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages delivered |<p>Sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> |DEPENDENT |rabbitmq.exchange.messages.deliver_get["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.deliver_get.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages delivered per second |<p>Rate per second of the sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> |DEPENDENT |rabbitmq.exchange.messages.deliver_get.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.deliver_get_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages published |<p>Count of messages published</p> |DEPENDENT |rabbitmq.exchange.messages.publish["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages published per second |<p>Rate of messages published per second</p> |DEPENDENT |rabbitmq.exchange.messages.publish.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_in |<p>Count of messages published from channels into this overview</p> |DEPENDENT |rabbitmq.exchange.messages.publish_in["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_in.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_in per second |<p>Rate of messages published from channels into this overview per sec</p> |DEPENDENT |rabbitmq.exchange.messages.publish_in.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_in_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_out |<p>Count of messages published from this overview into queues</p> |DEPENDENT |rabbitmq.exchange.messages.publish_out["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_out.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_out per second |<p>Rate of messages published from this overview into queues per second,0,rabbitmq,total msgs pub out rate</p> |DEPENDENT |rabbitmq.exchange.messages.publish_out.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_out_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages returned unroutable |<p>Count of messages returned to publisher as unroutable</p> |DEPENDENT |rabbitmq.exchange.messages.return_unroutable["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.return_unroutable.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages returned unroutable per second |<p>Rate of messages returned to publisher as unroutable per second</p> |DEPENDENT |rabbitmq.exchange.messages.return_unroutable.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.return_unroutable_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages redelivered |<p>Count of subset of messages in deliver_get which had the redelivered flag set</p> |DEPENDENT |rabbitmq.exchange.messages.redeliver["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.redeliver.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages redelivered per second |<p>Rate of subset of messages in deliver_get which had the redelivered flag set per second</p> |DEPENDENT |rabbitmq.exchange.messages.redeliver.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.redeliver_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|Zabbix_raw_items |RabbitMQ: Get overview |<p>The HTTP API endpoint that returns cluster-wide metrics</p> |ZABBIX_PASSIVE |web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.CLUSTER_HOST}:{$RABBITMQ.API.PORT}/api/overview"]<p>**Preprocessing**:</p><p>- REGEX: `\n\s?\n(.*) \1`</p> |
-|Zabbix_raw_items |RabbitMQ: Get exchanges |<p>The HTTP API endpoint that returns exchanges metrics</p> |ZABBIX_PASSIVE |web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.CLUSTER_HOST}:{$RABBITMQ.API.PORT}/api/exchanges"]<p>**Preprocessing**:</p><p>- REGEX: `\n\s?\n(.*) \1`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| RabbitMQ | RabbitMQ: Connections total | <p>Total number of connections</p> | DEPENDENT | rabbitmq.overview.object_totals.connections<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.connections`</p> |
+| RabbitMQ | RabbitMQ: Channels total | <p>Total number of channels</p> | DEPENDENT | rabbitmq.overview.object_totals.channels<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.channels`</p> |
+| RabbitMQ | RabbitMQ: Queues total | <p>Total number of queues</p> | DEPENDENT | rabbitmq.overview.object_totals.queues<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.queues`</p> |
+| RabbitMQ | RabbitMQ: Consumers total | <p>Total number of consumers</p> | DEPENDENT | rabbitmq.overview.object_totals.consumers<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.consumers`</p> |
+| RabbitMQ | RabbitMQ: Exchanges total | <p>Total number of exchanges</p> | DEPENDENT | rabbitmq.overview.object_totals.exchanges<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.exchanges`</p> |
+| RabbitMQ | RabbitMQ: Messages total | <p>Total number of messages (ready plus unacknowledged)</p> | DEPENDENT | rabbitmq.overview.queue_totals.messages<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue_totals.messages`</p> |
+| RabbitMQ | RabbitMQ: Messages ready for delivery | <p>Number of messages ready for delivery</p> | DEPENDENT | rabbitmq.overview.queue_totals.messages.ready<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue_totals.messages_ready`</p> |
+| RabbitMQ | RabbitMQ: Messages unacknowledged | <p>Number of unacknowledged messages</p> | DEPENDENT | rabbitmq.overview.queue_totals.messages.unacknowledged<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue_totals.messages_unacknowledged`</p> |
+| RabbitMQ | RabbitMQ: Messages acknowledged | <p>Number of messages delivered to clients and acknowledged</p> | DEPENDENT | rabbitmq.overview.messages.ack<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.ack`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages acknowledged per second | <p>Rate of messages delivered to clients and acknowledged per second</p> | DEPENDENT | rabbitmq.overview.messages.ack.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.ack_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages confirmed | <p>Count of messages confirmed</p> | DEPENDENT | rabbitmq.overview.messages.confirm<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.confirm`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages confirmed per second | <p>Rate of messages confirmed per second</p> | DEPENDENT | rabbitmq.overview.messages.confirm.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.confirm_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages delivered | <p>Sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> | DEPENDENT | rabbitmq.overview.messages.deliver_get<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.deliver_get`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages delivered per second | <p>Rate per second of the sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> | DEPENDENT | rabbitmq.overview.messages.deliver_get.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.deliver_get_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages published | <p>Count of messages published</p> | DEPENDENT | rabbitmq.overview.messages.publish<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages published per second | <p>Rate of messages published per second</p> | DEPENDENT | rabbitmq.overview.messages.publish.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages publish_in | <p>Count of messages published from channels into this overview</p> | DEPENDENT | rabbitmq.overview.messages.publish_in<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_in`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages publish_in per second | <p>Rate of messages published from channels into this overview per second</p> | DEPENDENT | rabbitmq.overview.messages.publish_in.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_in_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages publish_out | <p>Count of messages published from this overview into queues</p> | DEPENDENT | rabbitmq.overview.messages.publish_out<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_out`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages publish_out per second | <p>Rate of messages published from this overview into queues per second</p> | DEPENDENT | rabbitmq.overview.messages.publish_out.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_out_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages returned unroutable | <p>Count of messages returned to publisher as unroutable</p> | DEPENDENT | rabbitmq.overview.messages.return_unroutable<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.return_unroutable`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages returned unroutable per second | <p>Rate of messages returned to publisher as unroutable per second</p> | DEPENDENT | rabbitmq.overview.messages.return_unroutable.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.return_unroutable_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages returned redeliver | <p>Count of subset of messages in deliver_get which had the redelivered flag set</p> | DEPENDENT | rabbitmq.overview.messages.redeliver<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.redeliver`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages returned redeliver per second | <p>Rate of subset of messages in deliver_get which had the redelivered flag set per second</p> | DEPENDENT | rabbitmq.overview.messages.redeliver.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.redeliver_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Healthcheck: alarms in effect in the cluster{#SINGLETON} | <p>Responds a 200 OK if there are no alarms in effect in the cluster, otherwise responds with a 503 Service Unavailable.</p> | ZABBIX_PASSIVE | web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.CLUSTER_HOST}:{$RABBITMQ.API.PORT}/api/health/checks/alarms{#SINGLETON}"]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages acknowledged | <p>Number of messages delivered to clients and acknowledged</p> | DEPENDENT | rabbitmq.exchange.messages.ack["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.ack.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages acknowledged per second | <p>Rate of messages delivered to clients and acknowledged per second</p> | DEPENDENT | rabbitmq.exchange.messages.ack.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.ack_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages confirmed | <p>Count of messages confirmed</p> | DEPENDENT | rabbitmq.exchange.messages.confirm["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.confirm.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages confirmed per second | <p>Rate of messages confirmed per second</p> | DEPENDENT | rabbitmq.exchange.messages.confirm.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.confirm_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages delivered | <p>Sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> | DEPENDENT | rabbitmq.exchange.messages.deliver_get["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.deliver_get.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages delivered per second | <p>Rate per second of the sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> | DEPENDENT | rabbitmq.exchange.messages.deliver_get.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.deliver_get_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages published | <p>Count of messages published</p> | DEPENDENT | rabbitmq.exchange.messages.publish["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages published per second | <p>Rate of messages published per second</p> | DEPENDENT | rabbitmq.exchange.messages.publish.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_in | <p>Count of messages published from channels into this exchange</p> | DEPENDENT | rabbitmq.exchange.messages.publish_in["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_in.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_in per second | <p>Rate of messages published from channels into this exchange per second</p> | DEPENDENT | rabbitmq.exchange.messages.publish_in.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_in_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_out | <p>Count of messages published from this exchange into queues</p> | DEPENDENT | rabbitmq.exchange.messages.publish_out["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_out.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_out per second | <p>Rate of messages published from this exchange into queues per second</p> | DEPENDENT | rabbitmq.exchange.messages.publish_out.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_out_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages returned unroutable | <p>Count of messages returned to publisher as unroutable</p> | DEPENDENT | rabbitmq.exchange.messages.return_unroutable["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.return_unroutable.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages returned unroutable per second | <p>Rate of messages returned to publisher as unroutable per second</p> | DEPENDENT | rabbitmq.exchange.messages.return_unroutable.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.return_unroutable_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages redelivered | <p>Count of subset of messages in deliver_get which had the redelivered flag set</p> | DEPENDENT | rabbitmq.exchange.messages.redeliver["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.redeliver.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages redelivered per second | <p>Rate of subset of messages in deliver_get which had the redelivered flag set per second</p> | DEPENDENT | rabbitmq.exchange.messages.redeliver.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.redeliver_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| Zabbix_raw_items | RabbitMQ: Get overview | <p>The HTTP API endpoint that returns cluster-wide metrics</p> | ZABBIX_PASSIVE | web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.CLUSTER_HOST}:{$RABBITMQ.API.PORT}/api/overview"]<p>**Preprocessing**:</p><p>- REGEX: `\n\s?\n(.*) \1`</p> |
+| Zabbix_raw_items | RabbitMQ: Get exchanges | <p>The HTTP API endpoint that returns exchanges metrics</p> | ZABBIX_PASSIVE | web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.CLUSTER_HOST}:{$RABBITMQ.API.PORT}/api/exchanges"]<p>**Preprocessing**:</p><p>- REGEX: `\n\s?\n(.*) \1`</p> |
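
The healthcheck items above share one preprocessing chain: a REGEX step pulls the HTTP status code out of the `web.page.get` response, and a JAVASCRIPT step maps it to a numeric state. A minimal sketch of that mapping, with explicit returns added for readability (the exact step bodies live in the template YAML):

```javascript
// Sketch of the healthcheck preprocessing chain used by the items above.
// Input: raw output of web.page.get, whose first line looks like "HTTP/1.1 200 OK".
function healthcheckState(raw) {
    var match = raw.match(/HTTP\/1\.1\b\s(\d+)/);   // REGEX step: capture the status code
    if (match === null) {
        return 2;                                   // unexpected response
    }
    switch (match[1]) {                             // JAVASCRIPT step: map code to state
        case '200': return 1;                       // check passed
        case '503': return 0;                       // check failed
        default:    return 2;                       // any other status code
    }
}

healthcheckState('HTTP/1.1 503 Service Unavailable\r\n...');   // -> 0
```

A result of 1 means the check passed, 0 means the endpoint reported 503, and 2 marks any unexpected response.
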
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|RabbitMQ: There are active alarms in the cluster |<p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> |`{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.CLUSTER_HOST}:{$RABBITMQ.API.PORT}/api/health/checks/alarms{#SINGLETON}"].last()}=503` |AVERAGE | |
-|RabbitMQ: Failed to fetch overview data (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes</p> |`{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.CLUSTER_HOST}:{$RABBITMQ.API.PORT}/api/overview"].nodata(30m)}=1` |WARNING |<p>Manual close: YES</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|--------------------------------------------------------------|-----------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------|
+| RabbitMQ: There are active alarms in the cluster | <p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> | `{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.CLUSTER_HOST}:{$RABBITMQ.API.PORT}/api/health/checks/alarms{#SINGLETON}"].last()}=503` | AVERAGE | |
+| RabbitMQ: Failed to fetch overview data (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes</p> | `{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.CLUSTER_HOST}:{$RABBITMQ.API.PORT}/api/overview"].nodata(30m)}=1` | WARNING | <p>Manual close: YES</p> |
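
The second trigger relies on `nodata(30m)`: it fires when the overview item has produced no values for 30 minutes. A simplified model of that condition (illustration only; the real evaluation is done by the Zabbix server against item history):

```javascript
// Simplified model of the "Failed to fetch overview data (or no data for 30m)" condition.
function overviewDataMissing(lastValueTimestampMs, nowMs) {
    var THIRTY_MINUTES = 30 * 60 * 1000;
    return (nowMs - lastValueTimestampMs) > THIRTY_MINUTES;   // true -> WARNING trigger fires
}
```
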
## Feedback
@@ -133,7 +133,7 @@ You can also provide a feedback, discuss the template or ask for help with it at
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor RabbitMQ by Zabbix that works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
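
Most metrics are implemented as dependent items: master items such as `RabbitMQ: Get overview` request an API endpoint once, and each dependent item extracts a single field from that response with a JSONPATH step. A small illustration of the extraction, using a hypothetical, trimmed payload shaped like `/api/overview`:

```javascript
// Illustration only: the "Get overview" master item stores the raw /api/overview
// response, and dependent items pull single fields out of it with JSONPATH.
var overview = {
    object_totals: { connections: 12, channels: 24, queues: 5, consumers: 8, exchanges: 14 },
    queue_totals:  { messages: 120, messages_ready: 110, messages_unacknowledged: 10 },
    message_stats: { publish: 34567, publish_details: { rate: 3.2 } }
};

// JSONPATH `$.object_totals.connections`          -> rabbitmq.overview.object_totals.connections
var connections = overview.object_totals.connections;
// JSONPATH `$.queue_totals.messages`              -> rabbitmq.overview.queue_totals.messages
var messagesTotal = overview.queue_totals.messages;
// JSONPATH `$.message_stats.publish_details.rate` -> rabbitmq.overview.messages.publish.rate
var publishRate = overview.message_stats.publish_details.rate;
```
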
@@ -174,18 +174,18 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$RABBITMQ.API.HOST} |<p>The hostname or IP of RabbitMQ API endpoint</p> |`127.0.0.1` |
-|{$RABBITMQ.API.PASSWORD} |<p>-</p> |`zabbix` |
-|{$RABBITMQ.API.PORT} |<p>The port of RabbitMQ API endpoint</p> |`15672` |
-|{$RABBITMQ.API.USER} |<p>-</p> |`zbx_monitor` |
-|{$RABBITMQ.CLUSTER.NAME} |<p>The name of RabbitMQ cluster</p> |`rabbit` |
-|{$RABBITMQ.LLD.FILTER.QUEUE.MATCHES} |<p>Filter of discoverable queues</p> |`.*` |
-|{$RABBITMQ.LLD.FILTER.QUEUE.NOT_MATCHES} |<p>Filter to exclude discovered queues</p> |`CHANGE_IF_NEEDED` |
-|{$RABBITMQ.MESSAGES.MAX.WARN} |<p>Maximum number of messages in the queue for trigger expression</p> |`1000` |
-|{$RABBITMQ.PROCESS_NAME} |<p>RabbitMQ server process name</p> |`beam.smp` |
-|{$RABBITMQ.RESPONSE_TIME.MAX.WARN} |<p>Maximum RabbitMQ response time in seconds for trigger expression</p> |`10` |
+| Name | Description | Default |
+|------------------------------------------|-------------------------------------------------------------------------|--------------------|
+| {$RABBITMQ.API.HOST} | <p>The hostname or IP of RabbitMQ API endpoint</p> | `127.0.0.1` |
+| {$RABBITMQ.API.PASSWORD} | <p>-</p> | `zabbix` |
+| {$RABBITMQ.API.PORT} | <p>The port of RabbitMQ API endpoint</p> | `15672` |
+| {$RABBITMQ.API.USER} | <p>-</p> | `zbx_monitor` |
+| {$RABBITMQ.CLUSTER.NAME} | <p>The name of RabbitMQ cluster</p> | `rabbit` |
+| {$RABBITMQ.LLD.FILTER.QUEUE.MATCHES} | <p>Filter of discoverable queues</p> | `.*` |
+| {$RABBITMQ.LLD.FILTER.QUEUE.NOT_MATCHES} | <p>Filter to exclude discovered queues</p> | `CHANGE_IF_NEEDED` |
+| {$RABBITMQ.MESSAGES.MAX.WARN} | <p>Maximum number of messages in the queue for trigger expression</p> | `1000` |
+| {$RABBITMQ.PROCESS_NAME} | <p>RabbitMQ server process name</p> | `beam.smp` |
+| {$RABBITMQ.RESPONSE_TIME.MAX.WARN} | <p>Maximum RabbitMQ response time in seconds for trigger expression</p> | `10` |
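
The `{$RABBITMQ.API.*}` macros are substituted into the `web.page.get` keys of the items below, producing URLs of the form `http://<user>:<password>@<host>:<port>/api/<endpoint>`. A sketch of that composition with the default macro values (illustration only; Zabbix resolves the macros per host):

```javascript
// Hypothetical macro values; in Zabbix these come from the table above and can be overridden per host.
var macros = {
    'RABBITMQ.API.USER':     'zbx_monitor',
    'RABBITMQ.API.PASSWORD': 'zabbix',
    'RABBITMQ.API.HOST':     '127.0.0.1',
    'RABBITMQ.API.PORT':     '15672'
};

// The items use keys of the form:
// web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/<endpoint>"]
function apiUrl(endpoint) {
    return 'http://' + macros['RABBITMQ.API.USER'] + ':' + macros['RABBITMQ.API.PASSWORD'] +
           '@' + macros['RABBITMQ.API.HOST'] + ':' + macros['RABBITMQ.API.PORT'] +
           '/api/' + endpoint;
}

apiUrl('overview');   // "http://zbx_monitor:zabbix@127.0.0.1:15672/api/overview"
```
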
## Template links
@@ -193,86 +193,86 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Health Check 3.8.10+ discovery |<p>Version 3.8.10+ specific metrics</p> |DEPENDENT |rabbitmq.healthcheck.v3810.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Health Check 3.8.9- discovery |<p>Specific metrics up to and including version 3.8.4</p> |DEPENDENT |rabbitmq.healthcheck.v389.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Queues discovery |<p>Individual queue metrics</p> |DEPENDENT |rabbitmq.queues.discovery<p>**Filter**:</p>AND <p>- A: {#QUEUE} MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.QUEUE.MATCHES}`</p><p>- B: {#QUEUE} NOT_MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.QUEUE.NOT_MATCHES}`</p><p>- C: {#NODE} MATCHES_REGEX `{$RABBITMQ.CLUSTER.NAME}@{HOST.NAME}`</p> |
+| Name | Description | Type | Key and additional info |
+|--------------------------------|-----------------------------------------------------------|-----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Health Check 3.8.10+ discovery | <p>Version 3.8.10+ specific metrics</p> | DEPENDENT | rabbitmq.healthcheck.v3810.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Health Check 3.8.9- discovery | <p>Specific metrics up to and including version 3.8.4</p> | DEPENDENT | rabbitmq.healthcheck.v389.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Queues discovery | <p>Individual queue metrics</p> | DEPENDENT | rabbitmq.queues.discovery<p>**Filter**:</p>AND <p>- A: {#QUEUE} MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.QUEUE.MATCHES}`</p><p>- B: {#QUEUE} NOT_MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.QUEUE.NOT_MATCHES}`</p><p>- C: {#NODE} MATCHES_REGEX `{$RABBITMQ.CLUSTER.NAME}@{HOST.NAME}`</p> |
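
Queues discovery keeps a queue only if `{#QUEUE}` matches `{$RABBITMQ.LLD.FILTER.QUEUE.MATCHES}`, does not match `{$RABBITMQ.LLD.FILTER.QUEUE.NOT_MATCHES}`, and `{#NODE}` matches `{$RABBITMQ.CLUSTER.NAME}@{HOST.NAME}`. A sketch of the name part of that filter, assuming the default macro values and hypothetical queue names:

```javascript
// Sketch of the Queues discovery name filter with the default macro values above.
var MATCHES     = new RegExp('.*');                // {$RABBITMQ.LLD.FILTER.QUEUE.MATCHES}
var NOT_MATCHES = new RegExp('CHANGE_IF_NEEDED');  // {$RABBITMQ.LLD.FILTER.QUEUE.NOT_MATCHES}

// Hypothetical {#QUEUE} values returned by the "Get queues" master item.
var queues = ['orders', 'audit.log', 'CHANGE_IF_NEEDED'];

var discovered = queues.filter(function (queue) {
    return MATCHES.test(queue) && !NOT_MATCHES.test(queue);
});
// discovered -> ['orders', 'audit.log']; 'CHANGE_IF_NEEDED' is excluded.
```
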
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|RabbitMQ |RabbitMQ: Management plugin version |<p>Version of the management plugin in use</p> |DEPENDENT |rabbitmq.node.overview.management_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|RabbitMQ |RabbitMQ: RabbitMQ version |<p>Version of RabbitMQ on the node which processed this request</p> |DEPENDENT |rabbitmq.node.overview.rabbitmq_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.rabbitmq_version`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|RabbitMQ |RabbitMQ: Used file descriptors |<p>Used file descriptors</p> |DEPENDENT |rabbitmq.node.fd_used<p>**Preprocessing**:</p><p>- JSONPATH: `$.fd_used`</p> |
-|RabbitMQ |RabbitMQ: Free disk space |<p>Current free disk space</p> |DEPENDENT |rabbitmq.node.disk_free<p>**Preprocessing**:</p><p>- JSONPATH: `$.disk_free`</p> |
-|RabbitMQ |RabbitMQ: Memory used |<p>Memory used in bytes</p> |DEPENDENT |rabbitmq.node.mem_used<p>**Preprocessing**:</p><p>- JSONPATH: `$.mem_used`</p> |
-|RabbitMQ |RabbitMQ: Memory limit |<p>Memory usage high watermark in bytes</p> |DEPENDENT |rabbitmq.node.mem_limit<p>**Preprocessing**:</p><p>- JSONPATH: `$.mem_limit`</p> |
-|RabbitMQ |RabbitMQ: Disk free limit |<p>Disk free space limit in bytes</p> |DEPENDENT |rabbitmq.node.disk_free_limit<p>**Preprocessing**:</p><p>- JSONPATH: `$.disk_free_limit`</p> |
-|RabbitMQ |RabbitMQ: Runtime run queue |<p>Average number of Erlang processes waiting to run</p> |DEPENDENT |rabbitmq.node.run_queue<p>**Preprocessing**:</p><p>- JSONPATH: `$.run_queue`</p> |
-|RabbitMQ |RabbitMQ: Sockets used |<p>Number of file descriptors used as sockets</p> |DEPENDENT |rabbitmq.node.sockets_used<p>**Preprocessing**:</p><p>- JSONPATH: `$.sockets_used`</p> |
-|RabbitMQ |RabbitMQ: Sockets available |<p>File descriptors available for use as sockets</p> |DEPENDENT |rabbitmq.node.sockets_total<p>**Preprocessing**:</p><p>- JSONPATH: `$.sockets_total`</p> |
-|RabbitMQ |RabbitMQ: Number of network partitions |<p>Number of network partitions this node is seeing</p> |DEPENDENT |rabbitmq.node.partitions<p>**Preprocessing**:</p><p>- JSONPATH: `$.partitions`</p><p>- JAVASCRIPT: `return JSON.parse(value).length;`</p> |
-|RabbitMQ |RabbitMQ: Is running |<p>Is the node running or not</p> |DEPENDENT |rabbitmq.node.running<p>**Preprocessing**:</p><p>- JSONPATH: `$.running`</p><p>- BOOL_TO_DECIMAL |
-|RabbitMQ |RabbitMQ: Memory alarm |<p>Does the host has memory alarm</p> |DEPENDENT |rabbitmq.node.mem_alarm<p>**Preprocessing**:</p><p>- JSONPATH: `$.mem_alarm`</p><p>- BOOL_TO_DECIMAL |
-|RabbitMQ |RabbitMQ: Disk free alarm |<p>Does the node have disk alarm</p> |DEPENDENT |rabbitmq.node.disk_free_alarm<p>**Preprocessing**:</p><p>- JSONPATH: `$.disk_free_alarm`</p><p>- BOOL_TO_DECIMAL |
-|RabbitMQ |RabbitMQ: Uptime |<p>Uptime in milliseconds</p> |DEPENDENT |rabbitmq.node.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.uptime`</p><p>- MULTIPLIER: `0.001`</p> |
-|RabbitMQ |RabbitMQ: Number of processes running |<p>-</p> |ZABBIX_PASSIVE |proc.num["{$RABBITMQ.PROCESS_NAME}"] |
-|RabbitMQ |RabbitMQ: Memory usage (rss) |<p>Resident set size memory used by process in bytes.</p> |ZABBIX_PASSIVE |proc.mem["{$RABBITMQ.PROCESS_NAME}",,,,rss] |
-|RabbitMQ |RabbitMQ: Memory usage (vsize) |<p>Virtual memory size used by process in bytes.</p> |ZABBIX_PASSIVE |proc.mem["{$RABBITMQ.PROCESS_NAME}",,,,vsize] |
-|RabbitMQ |RabbitMQ: CPU utilization |<p>Process CPU utilization percentage.</p> |ZABBIX_PASSIVE |proc.cpu.util["{$RABBITMQ.PROCESS_NAME}"] |
-|RabbitMQ |RabbitMQ: Service ping |<p>-</p> |ZABBIX_PASSIVE |net.tcp.service[http,"{$RABBITMQ.API.HOST}","{$RABBITMQ.API.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|RabbitMQ |RabbitMQ: Service response time |<p>-</p> |ZABBIX_PASSIVE |net.tcp.service.perf[http,"{$RABBITMQ.API.HOST}","{$RABBITMQ.API.PORT}"] |
-|RabbitMQ |RabbitMQ: Healthcheck: local alarms in effect on the this node{#SINGLETON} |<p>Responds a 200 OK if there are no local alarms in effect on the target node, otherwise responds with a 503 Service Unavailable.</p> |ZABBIX_PASSIVE |web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/local-alarms{#SINGLETON}"]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
-|RabbitMQ |RabbitMQ: Healthcheck: expiration date on the certificates{#SINGLETON} |<p>Checks the expiration date on the certificates for every listener configured to use TLS. Responds a 200 OK if all certificates are valid (have not expired), otherwise responds with a 503 Service Unavailable.</p> |ZABBIX_PASSIVE |web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/certificate-expiration/1/months{#SINGLETON}"]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
-|RabbitMQ |RabbitMQ: Healthcheck: virtual hosts on the this node{#SINGLETON} |<p>Responds a 200 OK if all virtual hosts and running on the target node, otherwise responds with a 503 Service Unavailable.</p> |ZABBIX_PASSIVE |web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/virtual-hosts{#SINGLETON}"]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
-|RabbitMQ |RabbitMQ: Healthcheck: classic mirrored queues without synchronised mirrors online{#SINGLETON} |<p>Checks if there are classic mirrored queues without synchronised mirrors online (queues that would potentially lose data if the target node is shut down). Responds a 200 OK if there are no such classic mirrored queues, otherwise responds with a 503 Service Unavailable.</p> |ZABBIX_PASSIVE |web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/node-is-mirror-sync-critical{#SINGLETON}"]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
-|RabbitMQ |RabbitMQ: Healthcheck: queues with minimum online quorum{#SINGLETON} |<p>Checks if there are quorum queues with minimum online quorum (queues that would lose their quorum and availability if the target node is shut down). Responds a 200 OK if there are no such quorum queues, otherwise responds with a 503 Service Unavailable.</p> |ZABBIX_PASSIVE |web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/node-is-quorum-critical{#SINGLETON}"]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
-|RabbitMQ |RabbitMQ: Healthcheck{#SINGLETON} |<p>Runs basic healthchecks in the current node. Checks that the rabbit application is running, channels and queues can be listed successfully, and that no alarms are in effect.</p> |ZABBIX_PASSIVE |web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/healthchecks/node{#SINGLETON}"]<p>**Preprocessing**:</p><p>- REGEX: `\n\s?\n(.*) \1`</p><p>- JSONPATH: `$.status`</p><p>- BOOL_TO_DECIMAL |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages |<p>Count of the total messages in the queue</p> |DEPENDENT |rabbitmq.queue.messages["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages per second |<p>Count per second of the total messages in the queue</p> |DEPENDENT |rabbitmq.queue.messages.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_details.rate.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Consumers |<p>Number of consumers</p> |DEPENDENT |rabbitmq.queue.consumers["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].consumers.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Memory |<p>Bytes of memory consumed by the Erlang process associated with the queue, including stack, heap and internal structures</p> |DEPENDENT |rabbitmq.queue.memory["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].memory.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages ready |<p>Number of messages ready to be delivered to clients</p> |DEPENDENT |rabbitmq.queue.messages_ready["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_ready.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages ready per second |<p>Number per second of messages ready to be delivered to clients</p> |DEPENDENT |rabbitmq.queue.messages_ready.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_ready_details.rate.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages unacknowledged |<p>Number of messages delivered to clients but not yet acknowledged</p> |DEPENDENT |rabbitmq.queue.messages_unacknowledged["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_unacknowledged.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages unacknowledged per second |<p>Number per second of messages delivered to clients but not yet acknowledged</p> |DEPENDENT |rabbitmq.queue.messages_unacknowledged.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_unacknowledged_details.rate.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages acknowledged |<p>Number of messages delivered to clients and acknowledged</p> |DEPENDENT |rabbitmq.queue.messages.ack["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.ack.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages acknowledged per second |<p>Number per second of messages delivered to clients and acknowledged</p> |DEPENDENT |rabbitmq.queue.messages.ack.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.ack_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered |<p>Count of messages delivered in acknowledgement mode to consumers</p> |DEPENDENT |rabbitmq.queue.messages.deliver["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered per second |<p>Count of messages delivered in acknowledgement mode to consumers</p> |DEPENDENT |rabbitmq.queue.messages.deliver.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered |<p>Sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> |DEPENDENT |rabbitmq.queue.messages.deliver_get["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver_get.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered per second |<p>Rate per second of the sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> |DEPENDENT |rabbitmq.queue.messages.deliver_get.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver_get_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages published |<p>Count of messages published</p> |DEPENDENT |rabbitmq.queue.messages.publish["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.publish.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages published per second |<p>Rate per second of messages published</p> |DEPENDENT |rabbitmq.queue.messages.publish.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.publish_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages redelivered |<p>Count of subset of messages in deliver_get which had the redelivered flag set</p> |DEPENDENT |rabbitmq.queue.messages.redeliver["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.redeliver.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages redelivered per second |<p>Rate per second of subset of messages in deliver_get which had the redelivered flag set</p> |DEPENDENT |rabbitmq.queue.messages.redeliver.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.redeliver_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|Zabbix_raw_items |RabbitMQ: Get node overview |<p>The HTTP API endpoint that returns cluster-wide metrics</p> |ZABBIX_PASSIVE |web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/overview"]<p>**Preprocessing**:</p><p>- REGEX: `\n\s?\n(.*) \1`</p> |
-|Zabbix_raw_items |RabbitMQ: Get nodes |<p>The HTTP API endpoint that returns nodes metrics</p> |ZABBIX_PASSIVE |web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/nodes/{$RABBITMQ.CLUSTER.NAME}@{HOST.NAME}?memory=true"]<p>**Preprocessing**:</p><p>- REGEX: `\n\s?\n(.*) \1`</p> |
-|Zabbix_raw_items |RabbitMQ: Get queues |<p>The HTTP API endpoint that returns queues metrics</p> |ZABBIX_PASSIVE |web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/queues"]<p>**Preprocessing**:</p><p>- REGEX: `\n\s?\n(.*) \1`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| RabbitMQ | RabbitMQ: Management plugin version | <p>Version of the management plugin in use</p> | DEPENDENT | rabbitmq.node.overview.management_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| RabbitMQ | RabbitMQ: RabbitMQ version | <p>Version of RabbitMQ on the node which processed this request</p> | DEPENDENT | rabbitmq.node.overview.rabbitmq_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.rabbitmq_version`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| RabbitMQ | RabbitMQ: Used file descriptors | <p>Used file descriptors</p> | DEPENDENT | rabbitmq.node.fd_used<p>**Preprocessing**:</p><p>- JSONPATH: `$.fd_used`</p> |
+| RabbitMQ | RabbitMQ: Free disk space | <p>Current free disk space</p> | DEPENDENT | rabbitmq.node.disk_free<p>**Preprocessing**:</p><p>- JSONPATH: `$.disk_free`</p> |
+| RabbitMQ | RabbitMQ: Memory used | <p>Memory used in bytes</p> | DEPENDENT | rabbitmq.node.mem_used<p>**Preprocessing**:</p><p>- JSONPATH: `$.mem_used`</p> |
+| RabbitMQ | RabbitMQ: Memory limit | <p>Memory usage high watermark in bytes</p> | DEPENDENT | rabbitmq.node.mem_limit<p>**Preprocessing**:</p><p>- JSONPATH: `$.mem_limit`</p> |
+| RabbitMQ | RabbitMQ: Disk free limit | <p>Disk free space limit in bytes</p> | DEPENDENT | rabbitmq.node.disk_free_limit<p>**Preprocessing**:</p><p>- JSONPATH: `$.disk_free_limit`</p> |
+| RabbitMQ | RabbitMQ: Runtime run queue | <p>Average number of Erlang processes waiting to run</p> | DEPENDENT | rabbitmq.node.run_queue<p>**Preprocessing**:</p><p>- JSONPATH: `$.run_queue`</p> |
+| RabbitMQ | RabbitMQ: Sockets used | <p>Number of file descriptors used as sockets</p> | DEPENDENT | rabbitmq.node.sockets_used<p>**Preprocessing**:</p><p>- JSONPATH: `$.sockets_used`</p> |
+| RabbitMQ | RabbitMQ: Sockets available | <p>File descriptors available for use as sockets</p> | DEPENDENT | rabbitmq.node.sockets_total<p>**Preprocessing**:</p><p>- JSONPATH: `$.sockets_total`</p> |
+| RabbitMQ | RabbitMQ: Number of network partitions | <p>Number of network partitions this node is seeing</p> | DEPENDENT | rabbitmq.node.partitions<p>**Preprocessing**:</p><p>- JSONPATH: `$.partitions`</p><p>- JAVASCRIPT: `return JSON.parse(value).length;`</p> |
+| RabbitMQ | RabbitMQ: Is running | <p>Is the node running or not</p> | DEPENDENT | rabbitmq.node.running<p>**Preprocessing**:</p><p>- JSONPATH: `$.running`</p><p>- BOOL_TO_DECIMAL</p> |
+| RabbitMQ | RabbitMQ: Memory alarm | <p>Does the host have a memory alarm</p> | DEPENDENT | rabbitmq.node.mem_alarm<p>**Preprocessing**:</p><p>- JSONPATH: `$.mem_alarm`</p><p>- BOOL_TO_DECIMAL</p> |
+| RabbitMQ | RabbitMQ: Disk free alarm | <p>Does the node have a disk alarm</p> | DEPENDENT | rabbitmq.node.disk_free_alarm<p>**Preprocessing**:</p><p>- JSONPATH: `$.disk_free_alarm`</p><p>- BOOL_TO_DECIMAL</p> |
+| RabbitMQ | RabbitMQ: Uptime | <p>Uptime in milliseconds</p> | DEPENDENT | rabbitmq.node.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.uptime`</p><p>- MULTIPLIER: `0.001`</p> |
+| RabbitMQ | RabbitMQ: Number of processes running | <p>-</p> | ZABBIX_PASSIVE | proc.num["{$RABBITMQ.PROCESS_NAME}"] |
+| RabbitMQ | RabbitMQ: Memory usage (rss) | <p>Resident set size memory used by process in bytes.</p> | ZABBIX_PASSIVE | proc.mem["{$RABBITMQ.PROCESS_NAME}",,,,rss] |
+| RabbitMQ | RabbitMQ: Memory usage (vsize) | <p>Virtual memory size used by process in bytes.</p> | ZABBIX_PASSIVE | proc.mem["{$RABBITMQ.PROCESS_NAME}",,,,vsize] |
+| RabbitMQ | RabbitMQ: CPU utilization | <p>Process CPU utilization percentage.</p> | ZABBIX_PASSIVE | proc.cpu.util["{$RABBITMQ.PROCESS_NAME}"] |
+| RabbitMQ | RabbitMQ: Service ping | <p>-</p> | ZABBIX_PASSIVE | net.tcp.service[http,"{$RABBITMQ.API.HOST}","{$RABBITMQ.API.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| RabbitMQ | RabbitMQ: Service response time | <p>-</p> | ZABBIX_PASSIVE | net.tcp.service.perf[http,"{$RABBITMQ.API.HOST}","{$RABBITMQ.API.PORT}"] |
+| RabbitMQ | RabbitMQ: Healthcheck: local alarms in effect on this node{#SINGLETON} | <p>Responds a 200 OK if there are no local alarms in effect on the target node, otherwise responds with a 503 Service Unavailable.</p> | ZABBIX_PASSIVE | web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/local-alarms{#SINGLETON}"]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
+| RabbitMQ | RabbitMQ: Healthcheck: expiration date on the certificates{#SINGLETON} | <p>Checks the expiration date on the certificates for every listener configured to use TLS. Responds a 200 OK if all certificates are valid (have not expired), otherwise responds with a 503 Service Unavailable.</p> | ZABBIX_PASSIVE | web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/certificate-expiration/1/months{#SINGLETON}"]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
+| RabbitMQ | RabbitMQ: Healthcheck: virtual hosts on this node{#SINGLETON} | <p>Responds a 200 OK if all virtual hosts are running on the target node, otherwise responds with a 503 Service Unavailable.</p> | ZABBIX_PASSIVE | web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/virtual-hosts{#SINGLETON}"]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
+| RabbitMQ | RabbitMQ: Healthcheck: classic mirrored queues without synchronised mirrors online{#SINGLETON} | <p>Checks if there are classic mirrored queues without synchronised mirrors online (queues that would potentially lose data if the target node is shut down). Responds a 200 OK if there are no such classic mirrored queues, otherwise responds with a 503 Service Unavailable.</p> | ZABBIX_PASSIVE | web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/node-is-mirror-sync-critical{#SINGLETON}"]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
+| RabbitMQ | RabbitMQ: Healthcheck: queues with minimum online quorum{#SINGLETON} | <p>Checks if there are quorum queues with minimum online quorum (queues that would lose their quorum and availability if the target node is shut down). Responds a 200 OK if there are no such quorum queues, otherwise responds with a 503 Service Unavailable.</p> | ZABBIX_PASSIVE | web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/node-is-quorum-critical{#SINGLETON}"]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
+| RabbitMQ | RabbitMQ: Healthcheck{#SINGLETON} | <p>Runs basic healthchecks in the current node. Checks that the rabbit application is running, channels and queues can be listed successfully, and that no alarms are in effect.</p> | ZABBIX_PASSIVE | web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/healthchecks/node{#SINGLETON}"]<p>**Preprocessing**:</p><p>- REGEX: `\n\s?\n(.*) \1`</p><p>- JSONPATH: `$.status`</p><p>- BOOL_TO_DECIMAL</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages | <p>Count of the total messages in the queue</p> | DEPENDENT | rabbitmq.queue.messages["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages per second | <p>Count per second of the total messages in the queue</p> | DEPENDENT | rabbitmq.queue.messages.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_details.rate.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Consumers | <p>Number of consumers</p> | DEPENDENT | rabbitmq.queue.consumers["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].consumers.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Memory | <p>Bytes of memory consumed by the Erlang process associated with the queue, including stack, heap and internal structures</p> | DEPENDENT | rabbitmq.queue.memory["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].memory.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages ready | <p>Number of messages ready to be delivered to clients</p> | DEPENDENT | rabbitmq.queue.messages_ready["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_ready.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages ready per second | <p>Number per second of messages ready to be delivered to clients</p> | DEPENDENT | rabbitmq.queue.messages_ready.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_ready_details.rate.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages unacknowledged | <p>Number of messages delivered to clients but not yet acknowledged</p> | DEPENDENT | rabbitmq.queue.messages_unacknowledged["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_unacknowledged.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages unacknowledged per second | <p>Number per second of messages delivered to clients but not yet acknowledged</p> | DEPENDENT | rabbitmq.queue.messages_unacknowledged.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_unacknowledged_details.rate.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages acknowledged | <p>Number of messages delivered to clients and acknowledged</p> | DEPENDENT | rabbitmq.queue.messages.ack["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.ack.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages acknowledged per second | <p>Number per second of messages delivered to clients and acknowledged</p> | DEPENDENT | rabbitmq.queue.messages.ack.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.ack_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered | <p>Count of messages delivered in acknowledgement mode to consumers</p> | DEPENDENT | rabbitmq.queue.messages.deliver["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered per second | <p>Rate per second of messages delivered in acknowledgement mode to consumers</p> | DEPENDENT | rabbitmq.queue.messages.deliver.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered | <p>Sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> | DEPENDENT | rabbitmq.queue.messages.deliver_get["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver_get.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered per second | <p>Rate per second of the sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> | DEPENDENT | rabbitmq.queue.messages.deliver_get.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver_get_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages published | <p>Count of messages published</p> | DEPENDENT | rabbitmq.queue.messages.publish["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.publish.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages published per second | <p>Rate per second of messages published</p> | DEPENDENT | rabbitmq.queue.messages.publish.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.publish_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages redelivered | <p>Count of subset of messages in deliver_get which had the redelivered flag set</p> | DEPENDENT | rabbitmq.queue.messages.redeliver["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.redeliver.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages redelivered per second | <p>Rate per second of subset of messages in deliver_get which had the redelivered flag set</p> | DEPENDENT | rabbitmq.queue.messages.redeliver.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.redeliver_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| Zabbix_raw_items | RabbitMQ: Get node overview | <p>The HTTP API endpoint that returns cluster-wide metrics</p> | ZABBIX_PASSIVE | web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/overview"]<p>**Preprocessing**:</p><p>- REGEX: `\n\s?\n(.*) \1`</p> |
+| Zabbix_raw_items | RabbitMQ: Get nodes | <p>The HTTP API endpoint that returns nodes metrics</p> | ZABBIX_PASSIVE | web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/nodes/{$RABBITMQ.CLUSTER.NAME}@{HOST.NAME}?memory=true"]<p>**Preprocessing**:</p><p>- REGEX: `\n\s?\n(.*) \1`</p> |
+| Zabbix_raw_items | RabbitMQ: Get queues | <p>The HTTP API endpoint that returns queues metrics</p> | ZABBIX_PASSIVE | web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/queues"]<p>**Preprocessing**:</p><p>- REGEX: `\n\s?\n(.*) \1`</p> |
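
The health-check items above all share the same two-step preprocessing: a REGEX step that pulls the HTTP status code out of the raw `web.page.get` value, followed by a JAVASCRIPT step that maps the code onto the "RabbitMQ healthcheck" value map. As a rough stand-alone sketch (not the template code itself; the function name `healthcheckState` and the `rawPage` argument are made up for illustration), the chain behaves roughly like this:

```javascript
// Illustrative sketch only, assuming a Node.js-like environment; in practice
// these steps run as Zabbix item preprocessing, not as code in the template.
function healthcheckState(rawPage) {
    // REGEX `HTTP\/1\.1\b\s(\d+)` with output `\1`: keep only the HTTP status code.
    var m = rawPage.match(/HTTP\/1\.1\b\s(\d+)/);
    var code = m ? m[1] : '';
    // JAVASCRIPT step: map the code to the value map (1 = ok, 0 = failed, 2 = unknown).
    switch (code) {
        case '200': return 1;
        case '503': return 0;
        default:    return 2;
    }
}

// A passing /api/health/checks/virtual-hosts response would yield 1:
console.log(healthcheckState('HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n\r\n{"status":"ok"}'));
```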
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|RabbitMQ: Version has changed (new version: {ITEM.VALUE}) |<p>RabbitMQ version has changed. Ack to close.</p> |`{TEMPLATE_NAME:rabbitmq.node.overview.rabbitmq_version.diff()}=1 and {TEMPLATE_NAME:rabbitmq.node.overview.rabbitmq_version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|RabbitMQ: Number of network partitions is too high (more than 0 for 5m) |<p>https://www.rabbitmq.com/partitions.html#detecting</p> |`{TEMPLATE_NAME:rabbitmq.node.partitions.min(5m)}>0` |WARNING | |
-|RabbitMQ: Node is not running |<p>RabbitMQ node is not running</p> |`{TEMPLATE_NAME:rabbitmq.node.running.max(5m)}=0` |AVERAGE |<p>**Depends on**:</p><p>- RabbitMQ: Process is not running</p><p>- RabbitMQ: Service is down</p> |
-|RabbitMQ: Memory alarm (Memory usage threshold has been reached) |<p>https://www.rabbitmq.com/memory.html</p> |`{TEMPLATE_NAME:rabbitmq.node.mem_alarm.last()}=1` |AVERAGE | |
-|RabbitMQ: Free disk space alarm (Free space threshold has been reached) |<p>https://www.rabbitmq.com/disk-alarms.html</p> |`{TEMPLATE_NAME:rabbitmq.node.disk_free_alarm.last()}=1` |AVERAGE | |
-|RabbitMQ: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:rabbitmq.node.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|RabbitMQ: Process is not running |<p>-</p> |`{TEMPLATE_NAME:proc.num["{$RABBITMQ.PROCESS_NAME}"].last()}=0` |HIGH | |
-|RabbitMQ: Service is down |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service[http,"{$RABBITMQ.API.HOST}","{$RABBITMQ.API.PORT}"].last()}=0` |AVERAGE |<p>Manual close: YES</p><p>**Depends on**:</p><p>- RabbitMQ: Process is not running</p> |
-|RabbitMQ: Service response time is too high (over {$RABBITMQ.RESPONSE_TIME.MAX.WARN}s for 5m) |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service.perf[http,"{$RABBITMQ.API.HOST}","{$RABBITMQ.API.PORT}"].min(5m)}>{$RABBITMQ.RESPONSE_TIME.MAX.WARN}` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- RabbitMQ: Process is not running</p><p>- RabbitMQ: Service is down</p> |
-|RabbitMQ: There are active alarms in the node |<p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> |`{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/local-alarms{#SINGLETON}"].last()}=503` |AVERAGE | |
-|RabbitMQ: There are valid TLS certificates expiring in the next month |<p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> |`{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/certificate-expiration/1/months{#SINGLETON}"].last()}=503` |AVERAGE | |
-|RabbitMQ: There are not running virtual hosts |<p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> |`{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/virtual-hosts{#SINGLETON}"].last()}=503` |AVERAGE | |
-|RabbitMQ: There are queues that could potentially lose data if the this node goes offline. |<p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> |`{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/node-is-mirror-sync-critical{#SINGLETON}"].last()}=503` |AVERAGE | |
-|RabbitMQ: There are queues that would lose their quorum and availability if the this node is shut down. |<p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> |`{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/node-is-quorum-critical{#SINGLETON}"].last()}=503` |AVERAGE | |
-|RabbitMQ: Node healthcheck failed |<p>https://www.rabbitmq.com/monitoring.html#health-checks</p> |`{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/healthchecks/node{#SINGLETON}"].last()}=0` |AVERAGE | |
-|RabbitMQ: Too many messages in queue (over {$RABBITMQ.MESSAGES.MAX.WARN} for 5m) |<p>-</p> |`{TEMPLATE_NAME:rabbitmq.queue.messages["{#VHOST}/{#QUEUE}"].min(5m)}>{$RABBITMQ.MESSAGES.MAX.WARN:"{#QUEUE}"}` |WARNING | |
-|RabbitMQ: Failed to fetch nodes data (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes.</p> |`{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/nodes/{$RABBITMQ.CLUSTER.NAME}@{HOST.NAME}?memory=true"].nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- RabbitMQ: Process is not running</p><p>- RabbitMQ: Service is down</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|---------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------------------------------------------------------------------------------------------------------------|
+| RabbitMQ: Version has changed (new version: {ITEM.VALUE}) | <p>RabbitMQ version has changed. Ack to close.</p> | `{TEMPLATE_NAME:rabbitmq.node.overview.rabbitmq_version.diff()}=1 and {TEMPLATE_NAME:rabbitmq.node.overview.rabbitmq_version.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| RabbitMQ: Number of network partitions is too high (more than 0 for 5m) | <p>https://www.rabbitmq.com/partitions.html#detecting</p> | `{TEMPLATE_NAME:rabbitmq.node.partitions.min(5m)}>0` | WARNING | |
+| RabbitMQ: Node is not running | <p>RabbitMQ node is not running</p> | `{TEMPLATE_NAME:rabbitmq.node.running.max(5m)}=0` | AVERAGE | <p>**Depends on**:</p><p>- RabbitMQ: Process is not running</p><p>- RabbitMQ: Service is down</p> |
+| RabbitMQ: Memory alarm (Memory usage threshold has been reached) | <p>https://www.rabbitmq.com/memory.html</p> | `{TEMPLATE_NAME:rabbitmq.node.mem_alarm.last()}=1` | AVERAGE | |
+| RabbitMQ: Free disk space alarm (Free space threshold has been reached) | <p>https://www.rabbitmq.com/disk-alarms.html</p> | `{TEMPLATE_NAME:rabbitmq.node.disk_free_alarm.last()}=1` | AVERAGE | |
+| RabbitMQ: has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:rabbitmq.node.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| RabbitMQ: Process is not running | <p>-</p> | `{TEMPLATE_NAME:proc.num["{$RABBITMQ.PROCESS_NAME}"].last()}=0` | HIGH | |
+| RabbitMQ: Service is down | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service[http,"{$RABBITMQ.API.HOST}","{$RABBITMQ.API.PORT}"].last()}=0` | AVERAGE | <p>Manual close: YES</p><p>**Depends on**:</p><p>- RabbitMQ: Process is not running</p> |
+| RabbitMQ: Service response time is too high (over {$RABBITMQ.RESPONSE_TIME.MAX.WARN}s for 5m) | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service.perf[http,"{$RABBITMQ.API.HOST}","{$RABBITMQ.API.PORT}"].min(5m)}>{$RABBITMQ.RESPONSE_TIME.MAX.WARN}` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- RabbitMQ: Process is not running</p><p>- RabbitMQ: Service is down</p> |
+| RabbitMQ: There are active alarms in the node | <p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> | `{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/local-alarms{#SINGLETON}"].last()}=503` | AVERAGE | |
+| RabbitMQ: There are valid TLS certificates expiring in the next month | <p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> | `{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/certificate-expiration/1/months{#SINGLETON}"].last()}=503` | AVERAGE | |
+| RabbitMQ: There are not running virtual hosts | <p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> | `{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/virtual-hosts{#SINGLETON}"].last()}=503` | AVERAGE | |
+| RabbitMQ: There are queues that could potentially lose data if this node goes offline.                   | <p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p>                | `{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/node-is-mirror-sync-critical{#SINGLETON}"].last()}=503`     | AVERAGE  |                                                                                                                             |
+| RabbitMQ: There are queues that would lose their quorum and availability if this node is shut down.      | <p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p>                | `{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/health/checks/node-is-quorum-critical{#SINGLETON}"].last()}=503`          | AVERAGE  |                                                                                                                             |
+| RabbitMQ: Node healthcheck failed | <p>https://www.rabbitmq.com/monitoring.html#health-checks</p> | `{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/healthchecks/node{#SINGLETON}"].last()}=0` | AVERAGE | |
+| RabbitMQ: Too many messages in queue (over {$RABBITMQ.MESSAGES.MAX.WARN} for 5m) | <p>-</p> | `{TEMPLATE_NAME:rabbitmq.queue.messages["{#VHOST}/{#QUEUE}"].min(5m)}>{$RABBITMQ.MESSAGES.MAX.WARN:"{#QUEUE}"}` | WARNING | |
+| RabbitMQ: Failed to fetch nodes data (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes.</p> | `{TEMPLATE_NAME:web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/nodes/{$RABBITMQ.CLUSTER.NAME}@{HOST.NAME}?memory=true"].nodata(30m)}=1` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- RabbitMQ: Process is not running</p><p>- RabbitMQ: Service is down</p> |
## Feedback
diff --git a/templates/app/rabbitmq_agent/template_app_rabbitmq_agent.yaml b/templates/app/rabbitmq_agent/template_app_rabbitmq_agent.yaml
index cb2e53e9e73..1494e08ba90 100644
--- a/templates/app/rabbitmq_agent/template_app_rabbitmq_agent.yaml
+++ b/templates/app/rabbitmq_agent/template_app_rabbitmq_agent.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-09T06:37:29Z'
+ date: '2021-04-22T11:27:36Z'
groups:
-
name: Templates/Applications
@@ -980,87 +980,89 @@ zabbix_export:
dashboards:
-
name: 'RabbitMQ overview'
- widgets:
- -
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Messages'
- host: 'RabbitMQ cluster by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Connections'
- host: 'RabbitMQ cluster by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Messages status'
- host: 'RabbitMQ cluster by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Queues'
- host: 'RabbitMQ cluster by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- 'y': '10'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Messages per second'
- host: 'RabbitMQ cluster by Zabbix agent'
+ pages:
+ -
+ widgets:
+ -
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Messages'
+ host: 'RabbitMQ cluster by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Connections'
+ host: 'RabbitMQ cluster by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Messages status'
+ host: 'RabbitMQ cluster by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Queues'
+ host: 'RabbitMQ cluster by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Messages per second'
+ host: 'RabbitMQ cluster by Zabbix agent'
valuemaps:
-
name: 'RabbitMQ healthcheck'
@@ -2373,14 +2375,14 @@ zabbix_export:
key: 'web.page.get["http://{$RABBITMQ.API.USER}:{$RABBITMQ.API.PASSWORD}@{$RABBITMQ.API.HOST}:{$RABBITMQ.API.PORT}/api/queues"]'
lld_macro_paths:
-
+ lld_macro: '{#NODE}'
+ path: $.node
+ -
lld_macro: '{#QUEUE}'
path: $.name
-
lld_macro: '{#VHOST}'
path: $.vhost
- -
- lld_macro: '{#NODE}'
- path: $.node
macros:
-
macro: '{$RABBITMQ.API.HOST}'
@@ -2423,104 +2425,106 @@ zabbix_export:
dashboards:
-
name: 'RabbitMQ node status'
- widgets:
- -
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Node status'
- host: 'RabbitMQ node by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Uptime'
- host: 'RabbitMQ node by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Disk free'
- host: 'RabbitMQ node by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Memory used'
- host: 'RabbitMQ node by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- 'y': '10'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: File descriptors'
- host: 'RabbitMQ node by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '10'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Sockets'
- host: 'RabbitMQ node by Zabbix agent'
+ pages:
+ -
+ widgets:
+ -
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Node status'
+ host: 'RabbitMQ node by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Uptime'
+ host: 'RabbitMQ node by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Disk free'
+ host: 'RabbitMQ node by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Memory used'
+ host: 'RabbitMQ node by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: File descriptors'
+ host: 'RabbitMQ node by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Sockets'
+ host: 'RabbitMQ node by Zabbix agent'
valuemaps:
-
name: 'Alarm state'
diff --git a/templates/app/rabbitmq_http/README.md b/templates/app/rabbitmq_http/README.md
index 1230430fa1c..36d9a386faf 100644
--- a/templates/app/rabbitmq_http/README.md
+++ b/templates/app/rabbitmq_http/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor RabbitMQ by Zabbix that works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
@@ -17,7 +17,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/http) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/http) for basic instructions.
Enable the RabbitMQ management plugin. See [RabbitMQ’s documentation](https://www.rabbitmq.com/management.html) to enable it.
@@ -41,14 +41,14 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$RABBITMQ.API.PASSWORD} |<p>-</p> |`zabbix` |
-|{$RABBITMQ.API.PORT} |<p>The port of RabbitMQ API endpoint</p> |`15672` |
-|{$RABBITMQ.API.SCHEME} |<p>Request scheme which may be http or https</p> |`http` |
-|{$RABBITMQ.API.USER} |<p>-</p> |`zbx_monitor` |
-|{$RABBITMQ.LLD.FILTER.EXCHANGE.MATCHES} |<p>Filter of discoverable exchanges</p> |`.*` |
-|{$RABBITMQ.LLD.FILTER.EXCHANGE.NOT_MATCHES} |<p>Filter to exclude discovered exchanges</p> |`CHANGE_IF_NEEDED` |
+| Name | Description | Default |
+|---------------------------------------------|--------------------------------------------------|--------------------|
+| {$RABBITMQ.API.PASSWORD} | <p>-</p> | `zabbix` |
+| {$RABBITMQ.API.PORT} | <p>The port of RabbitMQ API endpoint</p> | `15672` |
+| {$RABBITMQ.API.SCHEME} | <p>Request scheme which may be http or https</p> | `http` |
+| {$RABBITMQ.API.USER} | <p>-</p> | `zbx_monitor` |
+| {$RABBITMQ.LLD.FILTER.EXCHANGE.MATCHES} | <p>Filter of discoverable exchanges</p> | `.*` |
+| {$RABBITMQ.LLD.FILTER.EXCHANGE.NOT_MATCHES} | <p>Filter to exclude discovered exchanges</p> | `CHANGE_IF_NEEDED` |
## Template links
@@ -56,65 +56,65 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Health Check 3.8.10+ discovery |<p>Version 3.8.10+ specific metrics</p> |DEPENDENT |rabbitmq.healthcheck.v3810.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Exchanges discovery |<p>Individual exchange metrics</p> |DEPENDENT |rabbitmq.exchanges.discovery<p>**Filter**:</p>AND <p>- A: {#EXCHANGE} MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.EXCHANGE.MATCHES}`</p><p>- B: {#EXCHANGE} NOT_MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.EXCHANGE.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|--------------------------------|-----------------------------------------|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Health Check 3.8.10+ discovery | <p>Version 3.8.10+ specific metrics</p> | DEPENDENT | rabbitmq.healthcheck.v3810.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Exchanges discovery | <p>Individual exchange metrics</p> | DEPENDENT | rabbitmq.exchanges.discovery<p>**Filter**:</p>AND <p>- A: {#EXCHANGE} MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.EXCHANGE.MATCHES}`</p><p>- B: {#EXCHANGE} NOT_MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.EXCHANGE.NOT_MATCHES}`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|RabbitMQ |RabbitMQ: Connections total |<p>Total number of connections</p> |DEPENDENT |rabbitmq.overview.object_totals.connections<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.connections`</p> |
-|RabbitMQ |RabbitMQ: Channels total |<p>Total number of channels</p> |DEPENDENT |rabbitmq.overview.object_totals.channels<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.channels`</p> |
-|RabbitMQ |RabbitMQ: Queues total |<p>Total number of queues</p> |DEPENDENT |rabbitmq.overview.object_totals.queues<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.queues`</p> |
-|RabbitMQ |RabbitMQ: Consumers total |<p>Total number of consumers</p> |DEPENDENT |rabbitmq.overview.object_totals.consumers<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.consumers`</p> |
-|RabbitMQ |RabbitMQ: Exchanges total |<p>Total number of exchanges</p> |DEPENDENT |rabbitmq.overview.object_totals.exchanges<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.exchanges`</p> |
-|RabbitMQ |RabbitMQ: Messages total |<p>Total number of messages (ready plus unacknowledged)</p> |DEPENDENT |rabbitmq.overview.queue_totals.messages<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue_totals.messages`</p> |
-|RabbitMQ |RabbitMQ: Messages ready for delivery |<p>Number of messages ready for deliver</p> |DEPENDENT |rabbitmq.overview.queue_totals.messages.ready<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue_totals.messages_ready`</p> |
-|RabbitMQ |RabbitMQ: Messages unacknowledged |<p>Number of unacknowledged messages</p> |DEPENDENT |rabbitmq.overview.queue_totals.messages.unacknowledged<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue_totals.messages_unacknowledged`</p> |
-|RabbitMQ |RabbitMQ: Messages acknowledged |<p>Number of messages delivered to clients and acknowledged</p> |DEPENDENT |rabbitmq.overview.messages.ack<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.ack`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages acknowledged per second |<p>Rate of messages delivered to clients and acknowledged per second</p> |DEPENDENT |rabbitmq.overview.messages.ack.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.ack_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages confirmed |<p>Count of messages confirmed</p> |DEPENDENT |rabbitmq.overview.messages.confirm<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.confirm`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages confirmed per second |<p>Rate of messages confirmed per second</p> |DEPENDENT |rabbitmq.overview.messages.confirm.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.confirm_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages delivered |<p>Sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> |DEPENDENT |rabbitmq.overview.messages.deliver_get<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.deliver_get`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages delivered per second |<p>Rate per second of the sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> |DEPENDENT |rabbitmq.overview.messages.deliver_get.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.deliver_get_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages published |<p>Count of messages published</p> |DEPENDENT |rabbitmq.overview.messages.publish<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages published per second |<p>Rate of messages published per second</p> |DEPENDENT |rabbitmq.overview.messages.publish.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages publish_in |<p>Count of messages published from channels into this overview</p> |DEPENDENT |rabbitmq.overview.messages.publish_in<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_in`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages publish_in per second |<p>Rate of messages published from channels into this overview per sec</p> |DEPENDENT |rabbitmq.overview.messages.publish_in.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_in_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages publish_out |<p>Count of messages published from this overview into queues</p> |DEPENDENT |rabbitmq.overview.messages.publish_out<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_out`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages publish_out per second |<p>Rate of messages published from this overview into queues per second,0,rabbitmq,total msgs pub out rate</p> |DEPENDENT |rabbitmq.overview.messages.publish_out.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_out_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages returned unroutable |<p>Count of messages returned to publisher as unroutable</p> |DEPENDENT |rabbitmq.overview.messages.return_unroutable<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.return_unroutable`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages returned unroutable per second |<p>Rate of messages returned to publisher as unroutable per second</p> |DEPENDENT |rabbitmq.overview.messages.return_unroutable.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.return_unroutable_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages returned redeliver |<p>Count of subset of messages in deliver_get which had the redelivered flag set</p> |DEPENDENT |rabbitmq.overview.messages.redeliver<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.redeliver`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Messages returned redeliver per second |<p>Rate of subset of messages in deliver_get which had the redelivered flag set per second</p> |DEPENDENT |rabbitmq.overview.messages.redeliver.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.redeliver_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Healthcheck: alarms in effect in the cluster{#SINGLETON} |<p>Responds a 200 OK if there are no alarms in effect in the cluster, otherwise responds with a 503 Service Unavailable.</p> |HTTP_AGENT |rabbitmq.healthcheck.alarms[{#SINGLETON}]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages acknowledged |<p>Number of messages delivered to clients and acknowledged</p> |DEPENDENT |rabbitmq.exchange.messages.ack["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.ack.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages acknowledged per second |<p>Rate of messages delivered to clients and acknowledged per second</p> |DEPENDENT |rabbitmq.exchange.messages.ack.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.ack_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages confirmed |<p>Count of messages confirmed</p> |DEPENDENT |rabbitmq.exchange.messages.confirm["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.confirm.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages confirmed per second |<p>Rate of messages confirmed per second</p> |DEPENDENT |rabbitmq.exchange.messages.confirm.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.confirm_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages delivered |<p>Sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> |DEPENDENT |rabbitmq.exchange.messages.deliver_get["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.deliver_get.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages delivered per second |<p>Rate per second of the sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> |DEPENDENT |rabbitmq.exchange.messages.deliver_get.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.deliver_get_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages published |<p>Count of messages published</p> |DEPENDENT |rabbitmq.exchange.messages.publish["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages published per second |<p>Rate of messages published per second</p> |DEPENDENT |rabbitmq.exchange.messages.publish.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_in |<p>Count of messages published from channels into this overview</p> |DEPENDENT |rabbitmq.exchange.messages.publish_in["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_in.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_in per second |<p>Rate of messages published from channels into this overview per sec</p> |DEPENDENT |rabbitmq.exchange.messages.publish_in.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_in_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_out |<p>Count of messages published from this overview into queues</p> |DEPENDENT |rabbitmq.exchange.messages.publish_out["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_out.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_out per second |<p>Rate of messages published from this overview into queues per second,0,rabbitmq,total msgs pub out rate</p> |DEPENDENT |rabbitmq.exchange.messages.publish_out.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_out_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages returned unroutable |<p>Count of messages returned to publisher as unroutable</p> |DEPENDENT |rabbitmq.exchange.messages.return_unroutable["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.return_unroutable.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages returned unroutable per second |<p>Rate of messages returned to publisher as unroutable per second</p> |DEPENDENT |rabbitmq.exchange.messages.return_unroutable.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.return_unroutable_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages redelivered |<p>Count of subset of messages in deliver_get which had the redelivered flag set</p> |DEPENDENT |rabbitmq.exchange.messages.redeliver["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.redeliver.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages redelivered per second |<p>Rate of subset of messages in deliver_get which had the redelivered flag set per second</p> |DEPENDENT |rabbitmq.exchange.messages.redeliver.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.redeliver_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|Zabbix_raw_items |RabbitMQ: Get overview |<p>The HTTP API endpoint that returns cluster-wide metrics</p> |HTTP_AGENT |rabbitmq.get_overview |
-|Zabbix_raw_items |RabbitMQ: Get exchanges |<p>The HTTP API endpoint that returns exchanges metrics</p> |HTTP_AGENT |rabbitmq.get_exchanges |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| RabbitMQ | RabbitMQ: Connections total | <p>Total number of connections</p> | DEPENDENT | rabbitmq.overview.object_totals.connections<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.connections`</p> |
+| RabbitMQ | RabbitMQ: Channels total | <p>Total number of channels</p> | DEPENDENT | rabbitmq.overview.object_totals.channels<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.channels`</p> |
+| RabbitMQ | RabbitMQ: Queues total | <p>Total number of queues</p> | DEPENDENT | rabbitmq.overview.object_totals.queues<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.queues`</p> |
+| RabbitMQ | RabbitMQ: Consumers total | <p>Total number of consumers</p> | DEPENDENT | rabbitmq.overview.object_totals.consumers<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.consumers`</p> |
+| RabbitMQ | RabbitMQ: Exchanges total | <p>Total number of exchanges</p> | DEPENDENT | rabbitmq.overview.object_totals.exchanges<p>**Preprocessing**:</p><p>- JSONPATH: `$.object_totals.exchanges`</p> |
+| RabbitMQ | RabbitMQ: Messages total | <p>Total number of messages (ready plus unacknowledged)</p> | DEPENDENT | rabbitmq.overview.queue_totals.messages<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue_totals.messages`</p> |
+| RabbitMQ         | RabbitMQ: Messages ready for delivery                                                      | <p>Number of messages ready for delivery</p>                                                                                                                                                                                                         | DEPENDENT  | rabbitmq.overview.queue_totals.messages.ready<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue_totals.messages_ready`</p>                                                                                                                                                                                  |
+| RabbitMQ | RabbitMQ: Messages unacknowledged | <p>Number of unacknowledged messages</p> | DEPENDENT | rabbitmq.overview.queue_totals.messages.unacknowledged<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue_totals.messages_unacknowledged`</p> |
+| RabbitMQ | RabbitMQ: Messages acknowledged | <p>Number of messages delivered to clients and acknowledged</p> | DEPENDENT | rabbitmq.overview.messages.ack<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.ack`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages acknowledged per second | <p>Rate of messages delivered to clients and acknowledged per second</p> | DEPENDENT | rabbitmq.overview.messages.ack.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.ack_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages confirmed | <p>Count of messages confirmed</p> | DEPENDENT | rabbitmq.overview.messages.confirm<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.confirm`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages confirmed per second | <p>Rate of messages confirmed per second</p> | DEPENDENT | rabbitmq.overview.messages.confirm.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.confirm_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages delivered | <p>Sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> | DEPENDENT | rabbitmq.overview.messages.deliver_get<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.deliver_get`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages delivered per second | <p>Rate per second of the sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> | DEPENDENT | rabbitmq.overview.messages.deliver_get.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.deliver_get_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages published | <p>Count of messages published</p> | DEPENDENT | rabbitmq.overview.messages.publish<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages published per second | <p>Rate of messages published per second</p> | DEPENDENT | rabbitmq.overview.messages.publish.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages publish_in | <p>Count of messages published from channels into this overview</p> | DEPENDENT | rabbitmq.overview.messages.publish_in<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_in`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages publish_in per second | <p>Rate of messages published from channels into this overview per sec</p> | DEPENDENT | rabbitmq.overview.messages.publish_in.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_in_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages publish_out | <p>Count of messages published from this overview into queues</p> | DEPENDENT | rabbitmq.overview.messages.publish_out<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_out`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ         | RabbitMQ: Messages publish_out per second                                                  | <p>Rate of messages published from this overview into queues per second</p>                                                                                                                                                                          | DEPENDENT  | rabbitmq.overview.messages.publish_out.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.publish_out_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p>                                                                                                                                    |
+| RabbitMQ | RabbitMQ: Messages returned unroutable | <p>Count of messages returned to publisher as unroutable</p> | DEPENDENT | rabbitmq.overview.messages.return_unroutable<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.return_unroutable`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages returned unroutable per second | <p>Rate of messages returned to publisher as unroutable per second</p> | DEPENDENT | rabbitmq.overview.messages.return_unroutable.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.return_unroutable_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages returned redeliver | <p>Count of subset of messages in deliver_get which had the redelivered flag set</p> | DEPENDENT | rabbitmq.overview.messages.redeliver<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.redeliver`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Messages returned redeliver per second | <p>Rate of subset of messages in deliver_get which had the redelivered flag set per second</p> | DEPENDENT | rabbitmq.overview.messages.redeliver.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.message_stats.redeliver_details.rate`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ         | RabbitMQ: Healthcheck: alarms in effect in the cluster{#SINGLETON}                         | <p>Responds with a 200 OK if there are no alarms in effect in the cluster, otherwise responds with a 503 Service Unavailable.</p>                                                                                                                    | HTTP_AGENT | rabbitmq.healthcheck.alarms[{#SINGLETON}]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p>                                                           |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages acknowledged | <p>Number of messages delivered to clients and acknowledged</p> | DEPENDENT | rabbitmq.exchange.messages.ack["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.ack.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages acknowledged per second | <p>Rate of messages delivered to clients and acknowledged per second</p> | DEPENDENT | rabbitmq.exchange.messages.ack.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.ack_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages confirmed | <p>Count of messages confirmed</p> | DEPENDENT | rabbitmq.exchange.messages.confirm["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.confirm.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages confirmed per second | <p>Rate of messages confirmed per second</p> | DEPENDENT | rabbitmq.exchange.messages.confirm.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.confirm_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages delivered | <p>Sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> | DEPENDENT | rabbitmq.exchange.messages.deliver_get["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.deliver_get.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages delivered per second | <p>Rate per second of the sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> | DEPENDENT | rabbitmq.exchange.messages.deliver_get.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.deliver_get_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages published | <p>Count of messages published</p> | DEPENDENT | rabbitmq.exchange.messages.publish["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages published per second | <p>Rate of messages published per second</p> | DEPENDENT | rabbitmq.exchange.messages.publish.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_in | <p>Count of messages published from channels into this overview</p> | DEPENDENT | rabbitmq.exchange.messages.publish_in["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_in.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_in per second | <p>Rate of messages published from channels into this overview per sec</p> | DEPENDENT | rabbitmq.exchange.messages.publish_in.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_in_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_out | <p>Count of messages published from this overview into queues</p> | DEPENDENT | rabbitmq.exchange.messages.publish_out["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_out.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ         | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages publish_out per second           | <p>Rate of messages published from this overview into queues per second</p>                                                                                                                                                                          | DEPENDENT  | rabbitmq.exchange.messages.publish_out.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.publish_out_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p>                 |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages returned unroutable | <p>Count of messages returned to publisher as unroutable</p> | DEPENDENT | rabbitmq.exchange.messages.return_unroutable["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.return_unroutable.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages returned unroutable per second | <p>Rate of messages returned to publisher as unroutable per second</p> | DEPENDENT | rabbitmq.exchange.messages.return_unroutable.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.return_unroutable_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages redelivered | <p>Count of subset of messages in deliver_get which had the redelivered flag set</p> | DEPENDENT | rabbitmq.exchange.messages.redeliver["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.redeliver.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Exchange {#VHOST}/{#EXCHANGE}/{#TYPE}: Messages redelivered per second | <p>Rate of subset of messages in deliver_get which had the redelivered flag set per second</p> | DEPENDENT | rabbitmq.exchange.messages.redeliver.rate["{#VHOST}/{#EXCHANGE}/{#TYPE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type =="{#TYPE}")].message_stats.redeliver_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| Zabbix_raw_items | RabbitMQ: Get overview | <p>The HTTP API endpoint that returns cluster-wide metrics</p> | HTTP_AGENT | rabbitmq.get_overview |
+| Zabbix_raw_items | RabbitMQ: Get exchanges | <p>The HTTP API endpoint that returns exchanges metrics</p> | HTTP_AGENT | rabbitmq.get_exchanges |
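+
+Each exchange item above is a dependent item that re-parses the raw `rabbitmq.get_exchanges` payload with a JSONPath filter keyed on the `{#EXCHANGE}`, `{#VHOST}` and `{#TYPE}` macros. A minimal JavaScript sketch of that selection logic (the payload shape is assumed from the RabbitMQ `/api/exchanges` response; the sample data is illustrative only):
+
+```js
+// Hypothetical /api/exchanges payload (shape assumed for illustration).
+var exchanges = [
+  { name: 'direct_logs', vhost: '/', type: 'direct',
+    message_stats: { publish_in: 120, publish_in_details: { rate: 0.4 } } },
+  { name: 'amq.topic', vhost: '/', type: 'topic', message_stats: {} }
+];
+
+// Equivalent of: $[?(@.name == "{#EXCHANGE}" && @.vhost == "{#VHOST}" && @.type == "{#TYPE}")]
+//                .message_stats.publish_in.first()   with ⛔️ON_FAIL -> 0
+function publishIn(data, vhost, exchange, type) {
+  var match = data.filter(function (e) {
+    return e.name === exchange && e.vhost === vhost && e.type === type;
+  })[0];
+  // Missing counters fall back to zero, mirroring CUSTOM_VALUE -> 0.
+  return (match && match.message_stats && typeof match.message_stats.publish_in === 'number')
+    ? match.message_stats.publish_in
+    : 0;
+}
+
+publishIn(exchanges, '/', 'direct_logs', 'direct'); // 120
+publishIn(exchanges, '/', 'amq.topic', 'topic');    // 0 (no publish_in counter yet)
+```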
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|RabbitMQ: There are active alarms in the cluster |<p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> |`{TEMPLATE_NAME:rabbitmq.healthcheck.alarms[{#SINGLETON}].last()}=503` |AVERAGE | |
-|RabbitMQ: Failed to fetch overview data (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes</p> |`{TEMPLATE_NAME:rabbitmq.get_overview.nodata(30m)}=1` |WARNING |<p>Manual close: YES</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|--------------------------------------------------------------|-----------------------------------------------------------------------|------------------------------------------------------------------------|----------|----------------------------------|
+| RabbitMQ: There are active alarms in the cluster | <p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> | `{TEMPLATE_NAME:rabbitmq.healthcheck.alarms[{#SINGLETON}].last()}=503` | AVERAGE | |
+| RabbitMQ: Failed to fetch overview data (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes</p> | `{TEMPLATE_NAME:rabbitmq.get_overview.nodata(30m)}=1` | WARNING | <p>Manual close: YES</p> |
## Feedback
@@ -126,7 +126,7 @@ You can also provide a feedback, discuss the template or ask for help with it at
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor RabbitMQ by Zabbix that works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
@@ -162,17 +162,17 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$RABBITMQ.API.PASSWORD} |<p>-</p> |`zabbix` |
-|{$RABBITMQ.API.PORT} |<p>The port of RabbitMQ API endpoint</p> |`15672` |
-|{$RABBITMQ.API.SCHEME} |<p>Request scheme which may be http or https</p> |`http` |
-|{$RABBITMQ.API.USER} |<p>-</p> |`zbx_monitor` |
-|{$RABBITMQ.CLUSTER.NAME} |<p>The name of RabbitMQ cluster</p> |`rabbit` |
-|{$RABBITMQ.LLD.FILTER.QUEUE.MATCHES} |<p>Filter of discoverable queues</p> |`.*` |
-|{$RABBITMQ.LLD.FILTER.QUEUE.NOT_MATCHES} |<p>Filter to exclude discovered queues</p> |`CHANGE_IF_NEEDED` |
-|{$RABBITMQ.MESSAGES.MAX.WARN} |<p>Maximum number of messages in the queue for trigger expression</p> |`1000` |
-|{$RABBITMQ.RESPONSE_TIME.MAX.WARN} |<p>Maximum RabbitMQ response time in seconds for trigger expression</p> |`10` |
+| Name | Description | Default |
+|------------------------------------------|-------------------------------------------------------------------------|--------------------|
+| {$RABBITMQ.API.PASSWORD} | <p>-</p> | `zabbix` |
+| {$RABBITMQ.API.PORT} | <p>The port of RabbitMQ API endpoint</p> | `15672` |
+| {$RABBITMQ.API.SCHEME} | <p>Request scheme which may be http or https</p> | `http` |
+| {$RABBITMQ.API.USER} | <p>-</p> | `zbx_monitor` |
+| {$RABBITMQ.CLUSTER.NAME} | <p>The name of RabbitMQ cluster</p> | `rabbit` |
+| {$RABBITMQ.LLD.FILTER.QUEUE.MATCHES} | <p>Filter of discoverable queues</p> | `.*` |
+| {$RABBITMQ.LLD.FILTER.QUEUE.NOT_MATCHES} | <p>Filter to exclude discovered queues</p> | `CHANGE_IF_NEEDED` |
+| {$RABBITMQ.MESSAGES.MAX.WARN} | <p>Maximum number of messages in the queue for trigger expression</p> | `1000` |
+| {$RABBITMQ.RESPONSE_TIME.MAX.WARN} | <p>Maximum RabbitMQ response time in seconds for trigger expression</p> | `10` |
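+
+The connection macros above are combined into the management API requests performed by the template's HTTP agent items. A small sketch of how the defaults fit together (the host name below is a placeholder and the `/api/overview` path is given only as an example endpoint):
+
+```js
+// Sketch only: defaults of the macros above, with an assumed host name.
+var scheme = 'http';            // {$RABBITMQ.API.SCHEME}
+var port   = 15672;             // {$RABBITMQ.API.PORT}
+var host   = 'rabbit.example.com';
+
+// Example management API URL; HTTP basic auth uses
+// {$RABBITMQ.API.USER} / {$RABBITMQ.API.PASSWORD} (zbx_monitor / zabbix by default).
+var url = scheme + '://' + host + ':' + port + '/api/overview';
+```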
## Template links
@@ -180,81 +180,81 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Health Check 3.8.10+ discovery |<p>Version 3.8.10+ specific metrics</p> |DEPENDENT |rabbitmq.healthcheck.v3810.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Health Check 3.8.9- discovery |<p>Specific metrics up to and including version 3.8.4</p> |DEPENDENT |rabbitmq.healthcheck.v389.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Queues discovery |<p>Individual queue metrics</p> |DEPENDENT |rabbitmq.queues.discovery<p>**Filter**:</p>AND <p>- A: {#QUEUE} MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.QUEUE.MATCHES}`</p><p>- B: {#QUEUE} NOT_MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.QUEUE.NOT_MATCHES}`</p><p>- C: {#NODE} MATCHES_REGEX `{$RABBITMQ.CLUSTER.NAME}@{HOST.NAME}`</p> |
+| Name | Description | Type | Key and additional info |
+|--------------------------------|-----------------------------------------------------------|-----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Health Check 3.8.10+ discovery | <p>Version 3.8.10+ specific metrics</p> | DEPENDENT | rabbitmq.healthcheck.v3810.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Health Check 3.8.9- discovery | <p>Specific metrics up to and including version 3.8.4</p> | DEPENDENT | rabbitmq.healthcheck.v389.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Queues discovery | <p>Individual queue metrics</p> | DEPENDENT | rabbitmq.queues.discovery<p>**Filter**:</p>AND <p>- A: {#QUEUE} MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.QUEUE.MATCHES}`</p><p>- B: {#QUEUE} NOT_MATCHES_REGEX `{$RABBITMQ.LLD.FILTER.QUEUE.NOT_MATCHES}`</p><p>- C: {#NODE} MATCHES_REGEX `{$RABBITMQ.CLUSTER.NAME}@{HOST.NAME}`</p> |
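+
+Queues discovery keeps only the queue objects whose low-level discovery macros pass the three filters above. A rough JavaScript sketch of that filter with the default macro values (host and queue names are placeholders):
+
+```js
+// Default filter macros; the node pattern expands {$RABBITMQ.CLUSTER.NAME}@{HOST.NAME}.
+var MATCHES     = /.*/;               // {$RABBITMQ.LLD.FILTER.QUEUE.MATCHES}
+var NOT_MATCHES = /CHANGE_IF_NEEDED/; // {$RABBITMQ.LLD.FILTER.QUEUE.NOT_MATCHES}
+var NODE        = /rabbit@myhost/;    // {$RABBITMQ.CLUSTER.NAME}@{HOST.NAME}
+
+// Rows as produced by the lld_macro_paths ($.name, $.vhost, $.node).
+var rows = [
+  { '{#QUEUE}': 'orders', '{#VHOST}': '/', '{#NODE}': 'rabbit@myhost' },
+  { '{#QUEUE}': 'debug',  '{#VHOST}': '/', '{#NODE}': 'rabbit@otherhost' }
+];
+
+var discovered = rows.filter(function (r) {
+  return MATCHES.test(r['{#QUEUE}']) &&      // A: queue name matches
+         !NOT_MATCHES.test(r['{#QUEUE}']) && // B: queue name is not excluded
+         NODE.test(r['{#NODE}']);            // C: queue lives on this node
+});
+// Only the first row survives: its {#NODE} matches the cluster@host pattern.
+```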
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|RabbitMQ |RabbitMQ: Management plugin version |<p>Version of the management plugin in use</p> |DEPENDENT |rabbitmq.node.overview.management_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|RabbitMQ |RabbitMQ: RabbitMQ version |<p>Version of RabbitMQ on the node which processed this request</p> |DEPENDENT |rabbitmq.node.overview.rabbitmq_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.rabbitmq_version`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|RabbitMQ |RabbitMQ: Used file descriptors |<p>Used file descriptors</p> |DEPENDENT |rabbitmq.node.fd_used<p>**Preprocessing**:</p><p>- JSONPATH: `$.fd_used`</p> |
-|RabbitMQ |RabbitMQ: Free disk space |<p>Current free disk space</p> |DEPENDENT |rabbitmq.node.disk_free<p>**Preprocessing**:</p><p>- JSONPATH: `$.disk_free`</p> |
-|RabbitMQ |RabbitMQ: Disk free limit |<p>Disk free space limit in bytes</p> |DEPENDENT |rabbitmq.node.disk_free_limit<p>**Preprocessing**:</p><p>- JSONPATH: `$.disk_free_limit`</p> |
-|RabbitMQ |RabbitMQ: Memory used |<p>Memory used in bytes</p> |DEPENDENT |rabbitmq.node.mem_used<p>**Preprocessing**:</p><p>- JSONPATH: `$.mem_used`</p> |
-|RabbitMQ |RabbitMQ: Memory limit |<p>Memory usage high watermark in bytes</p> |DEPENDENT |rabbitmq.node.mem_limit<p>**Preprocessing**:</p><p>- JSONPATH: `$.mem_limit`</p> |
-|RabbitMQ |RabbitMQ: Runtime run queue |<p>Average number of Erlang processes waiting to run</p> |DEPENDENT |rabbitmq.node.run_queue<p>**Preprocessing**:</p><p>- JSONPATH: `$.run_queue`</p> |
-|RabbitMQ |RabbitMQ: Sockets used |<p>Number of file descriptors used as sockets</p> |DEPENDENT |rabbitmq.node.sockets_used<p>**Preprocessing**:</p><p>- JSONPATH: `$.sockets_used`</p> |
-|RabbitMQ |RabbitMQ: Sockets available |<p>File descriptors available for use as sockets</p> |DEPENDENT |rabbitmq.node.sockets_total<p>**Preprocessing**:</p><p>- JSONPATH: `$.sockets_total`</p> |
-|RabbitMQ |RabbitMQ: Number of network partitions |<p>Number of network partitions this node is seeing</p> |DEPENDENT |rabbitmq.node.partitions<p>**Preprocessing**:</p><p>- JSONPATH: `$.partitions`</p><p>- JAVASCRIPT: `return JSON.parse(value).length;`</p> |
-|RabbitMQ |RabbitMQ: Is running |<p>Is the node running or not</p> |DEPENDENT |rabbitmq.node.running<p>**Preprocessing**:</p><p>- JSONPATH: `$.running`</p><p>- BOOL_TO_DECIMAL |
-|RabbitMQ |RabbitMQ: Memory alarm |<p>Does the host has memory alarm</p> |DEPENDENT |rabbitmq.node.mem_alarm<p>**Preprocessing**:</p><p>- JSONPATH: `$.mem_alarm`</p><p>- BOOL_TO_DECIMAL |
-|RabbitMQ |RabbitMQ: Disk free alarm |<p>Does the node have disk alarm</p> |DEPENDENT |rabbitmq.node.disk_free_alarm<p>**Preprocessing**:</p><p>- JSONPATH: `$.disk_free_alarm`</p><p>- BOOL_TO_DECIMAL |
-|RabbitMQ |RabbitMQ: Uptime |<p>Uptime in milliseconds</p> |DEPENDENT |rabbitmq.node.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.uptime`</p><p>- MULTIPLIER: `0.001`</p> |
-|RabbitMQ |RabbitMQ: Service ping |<p>-</p> |SIMPLE |net.tcp.service[http,"{HOST.CONN}","{$RABBITMQ.API.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|RabbitMQ |RabbitMQ: Service response time |<p>-</p> |SIMPLE |net.tcp.service.perf[http,"{HOST.CONN}","{$RABBITMQ.API.PORT}"] |
-|RabbitMQ |RabbitMQ: Healthcheck: local alarms in effect on the this node{#SINGLETON} |<p>Responds a 200 OK if there are no local alarms in effect on the target node, otherwise responds with a 503 Service Unavailable.</p> |HTTP_AGENT |rabbitmq.healthcheck.local_alarms[{#SINGLETON}]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
-|RabbitMQ |RabbitMQ: Healthcheck: expiration date on the certificates{#SINGLETON} |<p>Checks the expiration date on the certificates for every listener configured to use TLS. Responds a 200 OK if all certificates are valid (have not expired), otherwise responds with a 503 Service Unavailable.</p> |HTTP_AGENT |rabbitmq.healthcheck.certificate_expiration[{#SINGLETON}]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
-|RabbitMQ |RabbitMQ: Healthcheck: virtual hosts on the this node{#SINGLETON} |<p>Responds a 200 OK if all virtual hosts and running on the target node, otherwise responds with a 503 Service Unavailable.</p> |HTTP_AGENT |rabbitmq.healthcheck.virtual_hosts[{#SINGLETON}]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
-|RabbitMQ |RabbitMQ: Healthcheck: classic mirrored queues without synchronised mirrors online{#SINGLETON} |<p>Checks if there are classic mirrored queues without synchronised mirrors online (queues that would potentially lose data if the target node is shut down). Responds a 200 OK if there are no such classic mirrored queues, otherwise responds with a 503 Service Unavailable.</p> |HTTP_AGENT |rabbitmq.healthcheck.mirror_sync[{#SINGLETON}]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
-|RabbitMQ |RabbitMQ: Healthcheck: queues with minimum online quorum{#SINGLETON} |<p>Checks if there are quorum queues with minimum online quorum (queues that would lose their quorum and availability if the target node is shut down). Responds a 200 OK if there are no such quorum queues, otherwise responds with a 503 Service Unavailable.</p> |HTTP_AGENT |rabbitmq.healthcheck.quorum[{#SINGLETON}]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
-|RabbitMQ |RabbitMQ: Healthcheck{#SINGLETON} |<p>Runs basic healthchecks in the current node. Checks that the rabbit application is running, channels and queues can be listed successfully, and that no alarms are in effect.</p> |HTTP_AGENT |rabbitmq.healthcheck[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.status`</p><p>- BOOL_TO_DECIMAL |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages |<p>Count of the total messages in the queue</p> |DEPENDENT |rabbitmq.queue.messages["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages per second |<p>Count per second of the total messages in the queue</p> |DEPENDENT |rabbitmq.queue.messages.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_details.rate.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Consumers |<p>Number of consumers</p> |DEPENDENT |rabbitmq.queue.consumers["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].consumers.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Memory |<p>Bytes of memory consumed by the Erlang process associated with the queue, including stack, heap and internal structures</p> |DEPENDENT |rabbitmq.queue.memory["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].memory.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages ready |<p>Number of messages ready to be delivered to clients</p> |DEPENDENT |rabbitmq.queue.messages_ready["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_ready.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages ready per second |<p>Number per second of messages ready to be delivered to clients</p> |DEPENDENT |rabbitmq.queue.messages_ready.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_ready_details.rate.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages unacknowledged |<p>Number of messages delivered to clients but not yet acknowledged</p> |DEPENDENT |rabbitmq.queue.messages_unacknowledged["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_unacknowledged.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages unacknowledged per second |<p>Number per second of messages delivered to clients but not yet acknowledged</p> |DEPENDENT |rabbitmq.queue.messages_unacknowledged.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_unacknowledged_details.rate.first()`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages acknowledged |<p>Number of messages delivered to clients and acknowledged</p> |DEPENDENT |rabbitmq.queue.messages.ack["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.ack.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages acknowledged per second |<p>Number per second of messages delivered to clients and acknowledged</p> |DEPENDENT |rabbitmq.queue.messages.ack.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.ack_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered |<p>Count of messages delivered in acknowledgement mode to consumers</p> |DEPENDENT |rabbitmq.queue.messages.deliver["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered per second |<p>Count of messages delivered in acknowledgement mode to consumers</p> |DEPENDENT |rabbitmq.queue.messages.deliver.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered |<p>Sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> |DEPENDENT |rabbitmq.queue.messages.deliver_get["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver_get.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered per second |<p>Rate per second of the sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> |DEPENDENT |rabbitmq.queue.messages.deliver_get.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver_get_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages published |<p>Count of messages published</p> |DEPENDENT |rabbitmq.queue.messages.publish["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.publish.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages published per second |<p>Rate per second of messages published</p> |DEPENDENT |rabbitmq.queue.messages.publish.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.publish_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages redelivered |<p>Count of subset of messages in deliver_get which had the redelivered flag set</p> |DEPENDENT |rabbitmq.queue.messages.redeliver["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.redeliver.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|RabbitMQ |RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages redelivered per second |<p>Rate per second of subset of messages in deliver_get which had the redelivered flag set</p> |DEPENDENT |rabbitmq.queue.messages.redeliver.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.redeliver_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|Zabbix_raw_items |RabbitMQ: Get node overview |<p>The HTTP API endpoint that returns cluster-wide metrics</p> |HTTP_AGENT |rabbitmq.get_node_overview |
-|Zabbix_raw_items |RabbitMQ: Get nodes |<p>The HTTP API endpoint that returns nodes metrics</p> |HTTP_AGENT |rabbitmq.get_nodes |
-|Zabbix_raw_items |RabbitMQ: Get queues |<p>The HTTP API endpoint that returns queues metrics</p> |HTTP_AGENT |rabbitmq.get_queues |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| RabbitMQ | RabbitMQ: Management plugin version | <p>Version of the management plugin in use</p> | DEPENDENT | rabbitmq.node.overview.management_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.management_version`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| RabbitMQ | RabbitMQ: RabbitMQ version | <p>Version of RabbitMQ on the node which processed this request</p> | DEPENDENT | rabbitmq.node.overview.rabbitmq_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.rabbitmq_version`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| RabbitMQ | RabbitMQ: Used file descriptors | <p>Used file descriptors</p> | DEPENDENT | rabbitmq.node.fd_used<p>**Preprocessing**:</p><p>- JSONPATH: `$.fd_used`</p> |
+| RabbitMQ | RabbitMQ: Free disk space | <p>Current free disk space</p> | DEPENDENT | rabbitmq.node.disk_free<p>**Preprocessing**:</p><p>- JSONPATH: `$.disk_free`</p> |
+| RabbitMQ | RabbitMQ: Disk free limit | <p>Disk free space limit in bytes</p> | DEPENDENT | rabbitmq.node.disk_free_limit<p>**Preprocessing**:</p><p>- JSONPATH: `$.disk_free_limit`</p> |
+| RabbitMQ | RabbitMQ: Memory used | <p>Memory used in bytes</p> | DEPENDENT | rabbitmq.node.mem_used<p>**Preprocessing**:</p><p>- JSONPATH: `$.mem_used`</p> |
+| RabbitMQ | RabbitMQ: Memory limit | <p>Memory usage high watermark in bytes</p> | DEPENDENT | rabbitmq.node.mem_limit<p>**Preprocessing**:</p><p>- JSONPATH: `$.mem_limit`</p> |
+| RabbitMQ | RabbitMQ: Runtime run queue | <p>Average number of Erlang processes waiting to run</p> | DEPENDENT | rabbitmq.node.run_queue<p>**Preprocessing**:</p><p>- JSONPATH: `$.run_queue`</p> |
+| RabbitMQ | RabbitMQ: Sockets used | <p>Number of file descriptors used as sockets</p> | DEPENDENT | rabbitmq.node.sockets_used<p>**Preprocessing**:</p><p>- JSONPATH: `$.sockets_used`</p> |
+| RabbitMQ | RabbitMQ: Sockets available | <p>File descriptors available for use as sockets</p> | DEPENDENT | rabbitmq.node.sockets_total<p>**Preprocessing**:</p><p>- JSONPATH: `$.sockets_total`</p> |
+| RabbitMQ | RabbitMQ: Number of network partitions | <p>Number of network partitions this node is seeing</p> | DEPENDENT | rabbitmq.node.partitions<p>**Preprocessing**:</p><p>- JSONPATH: `$.partitions`</p><p>- JAVASCRIPT: `return JSON.parse(value).length;`</p> |
+| RabbitMQ | RabbitMQ: Is running | <p>Is the node running or not</p> | DEPENDENT | rabbitmq.node.running<p>**Preprocessing**:</p><p>- JSONPATH: `$.running`</p><p>- BOOL_TO_DECIMAL</p> |
+| RabbitMQ | RabbitMQ: Memory alarm | <p>Does the host have a memory alarm</p> | DEPENDENT | rabbitmq.node.mem_alarm<p>**Preprocessing**:</p><p>- JSONPATH: `$.mem_alarm`</p><p>- BOOL_TO_DECIMAL</p> |
+| RabbitMQ | RabbitMQ: Disk free alarm | <p>Does the node have a disk alarm</p> | DEPENDENT | rabbitmq.node.disk_free_alarm<p>**Preprocessing**:</p><p>- JSONPATH: `$.disk_free_alarm`</p><p>- BOOL_TO_DECIMAL</p> |
+| RabbitMQ | RabbitMQ: Uptime | <p>Uptime in milliseconds</p> | DEPENDENT | rabbitmq.node.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.uptime`</p><p>- MULTIPLIER: `0.001`</p> |
+| RabbitMQ | RabbitMQ: Service ping | <p>-</p> | SIMPLE | net.tcp.service[http,"{HOST.CONN}","{$RABBITMQ.API.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| RabbitMQ | RabbitMQ: Service response time | <p>-</p> | SIMPLE | net.tcp.service.perf[http,"{HOST.CONN}","{$RABBITMQ.API.PORT}"] |
+| RabbitMQ | RabbitMQ: Healthcheck: local alarms in effect on this node{#SINGLETON} | <p>Responds with a 200 OK if there are no local alarms in effect on the target node, otherwise responds with a 503 Service Unavailable.</p> | HTTP_AGENT | rabbitmq.healthcheck.local_alarms[{#SINGLETON}]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
+| RabbitMQ | RabbitMQ: Healthcheck: expiration date on the certificates{#SINGLETON} | <p>Checks the expiration date on the certificates for every listener configured to use TLS. Responds with a 200 OK if all certificates are valid (have not expired), otherwise responds with a 503 Service Unavailable.</p> | HTTP_AGENT | rabbitmq.healthcheck.certificate_expiration[{#SINGLETON}]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
+| RabbitMQ | RabbitMQ: Healthcheck: virtual hosts on this node{#SINGLETON} | <p>Responds with a 200 OK if all virtual hosts are running on the target node, otherwise responds with a 503 Service Unavailable.</p> | HTTP_AGENT | rabbitmq.healthcheck.virtual_hosts[{#SINGLETON}]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
+| RabbitMQ | RabbitMQ: Healthcheck: classic mirrored queues without synchronised mirrors online{#SINGLETON} | <p>Checks if there are classic mirrored queues without synchronised mirrors online (queues that would potentially lose data if the target node is shut down). Responds with a 200 OK if there are no such classic mirrored queues, otherwise responds with a 503 Service Unavailable.</p> | HTTP_AGENT | rabbitmq.healthcheck.mirror_sync[{#SINGLETON}]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
+| RabbitMQ | RabbitMQ: Healthcheck: queues with minimum online quorum{#SINGLETON} | <p>Checks if there are quorum queues with minimum online quorum (queues that would lose their quorum and availability if the target node is shut down). Responds with a 200 OK if there are no such quorum queues, otherwise responds with a 503 Service Unavailable.</p> | HTTP_AGENT | rabbitmq.healthcheck.quorum[{#SINGLETON}]<p>**Preprocessing**:</p><p>- REGEX: `HTTP\/1\.1\b\s(\d+) \1`</p><p>- JAVASCRIPT: `switch(value){ case '200': return 1 case '503': return 0 default: 2}`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
+| RabbitMQ | RabbitMQ: Healthcheck{#SINGLETON} | <p>Runs basic healthchecks in the current node. Checks that the rabbit application is running, channels and queues can be listed successfully, and that no alarms are in effect.</p> | HTTP_AGENT | rabbitmq.healthcheck[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.status`</p><p>- BOOL_TO_DECIMAL</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages | <p>Count of the total messages in the queue</p> | DEPENDENT | rabbitmq.queue.messages["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages per second | <p>Count per second of the total messages in the queue</p> | DEPENDENT | rabbitmq.queue.messages.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_details.rate.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Consumers | <p>Number of consumers</p> | DEPENDENT | rabbitmq.queue.consumers["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].consumers.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Memory | <p>Bytes of memory consumed by the Erlang process associated with the queue, including stack, heap and internal structures</p> | DEPENDENT | rabbitmq.queue.memory["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].memory.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages ready | <p>Number of messages ready to be delivered to clients</p> | DEPENDENT | rabbitmq.queue.messages_ready["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_ready.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages ready per second | <p>Number per second of messages ready to be delivered to clients</p> | DEPENDENT | rabbitmq.queue.messages_ready.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_ready_details.rate.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages unacknowledged | <p>Number of messages delivered to clients but not yet acknowledged</p> | DEPENDENT | rabbitmq.queue.messages_unacknowledged["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_unacknowledged.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages unacknowledged per second | <p>Number per second of messages delivered to clients but not yet acknowledged</p> | DEPENDENT | rabbitmq.queue.messages_unacknowledged.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].messages_unacknowledged_details.rate.first()`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages acknowledged | <p>Number of messages delivered to clients and acknowledged</p> | DEPENDENT | rabbitmq.queue.messages.ack["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.ack.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages acknowledged per second | <p>Number per second of messages delivered to clients and acknowledged</p> | DEPENDENT | rabbitmq.queue.messages.ack.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.ack_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered | <p>Count of messages delivered in acknowledgement mode to consumers</p> | DEPENDENT | rabbitmq.queue.messages.deliver["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered per second | <p>Rate per second of messages delivered in acknowledgement mode to consumers</p> | DEPENDENT | rabbitmq.queue.messages.deliver.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered | <p>Sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> | DEPENDENT | rabbitmq.queue.messages.deliver_get["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver_get.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages delivered per second | <p>Rate per second of the sum of messages delivered in acknowledgement mode to consumers, in no-acknowledgement mode to consumers, in acknowledgement mode in response to basic.get, and in no-acknowledgement mode in response to basic.get</p> | DEPENDENT | rabbitmq.queue.messages.deliver_get.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.deliver_get_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages published | <p>Count of messages published</p> | DEPENDENT | rabbitmq.queue.messages.publish["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.publish.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages published per second | <p>Rate per second of messages published</p> | DEPENDENT | rabbitmq.queue.messages.publish.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.publish_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages redelivered | <p>Count of subset of messages in deliver_get which had the redelivered flag set</p> | DEPENDENT | rabbitmq.queue.messages.redeliver["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.redeliver.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| RabbitMQ | RabbitMQ: Queue {#VHOST}/{#QUEUE}: Messages redelivered per second | <p>Rate per second of subset of messages in deliver_get which had the redelivered flag set</p> | DEPENDENT | rabbitmq.queue.messages.redeliver.rate["{#VHOST}/{#QUEUE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#QUEUE}" && @.vhost == "{#VHOST}")].message_stats.redeliver_details.rate.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| Zabbix_raw_items | RabbitMQ: Get node overview | <p>The HTTP API endpoint that returns cluster-wide metrics</p> | HTTP_AGENT | rabbitmq.get_node_overview |
+| Zabbix_raw_items | RabbitMQ: Get nodes | <p>The HTTP API endpoint that returns nodes metrics</p> | HTTP_AGENT | rabbitmq.get_nodes |
+| Zabbix_raw_items | RabbitMQ: Get queues | <p>The HTTP API endpoint that returns queues metrics</p> | HTTP_AGENT | rabbitmq.get_queues |
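+
+Every healthcheck item above shares the same preprocessing idea: a REGEX step captures the status code from the `HTTP/1.1 <code>` status line, and a short JavaScript step maps it to 1 (check passed), 0 (503 returned) or 2 (anything unexpected). A cleaned-up sketch of that chain, assuming a raw response with headers as input:
+
+```js
+// Sketch of the healthcheck preprocessing chain used by the items above.
+function healthcheckValue(rawResponse) {
+  // REGEX step: HTTP\/1\.1\b\s(\d+)  ->  \1
+  var m = /HTTP\/1\.1\b\s(\d+)/.exec(rawResponse);
+  var status = m ? m[1] : '';
+
+  // JAVASCRIPT step: 200 -> 1 (healthy), 503 -> 0 (check failed), otherwise 2.
+  switch (status) {
+    case '200': return 1;
+    case '503': return 0;
+    default:    return 2;
+  }
+}
+
+healthcheckValue('HTTP/1.1 200 OK\r\nContent-Type: application/json'); // 1
+healthcheckValue('HTTP/1.1 503 Service Unavailable');                  // 0
+```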
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|RabbitMQ: Version has changed (new version: {ITEM.VALUE}) |<p>RabbitMQ version has changed. Ack to close.</p> |`{TEMPLATE_NAME:rabbitmq.node.overview.rabbitmq_version.diff()}=1 and {TEMPLATE_NAME:rabbitmq.node.overview.rabbitmq_version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|RabbitMQ: Number of network partitions is too high (more than 0 for 5m) |<p>https://www.rabbitmq.com/partitions.html#detecting</p> |`{TEMPLATE_NAME:rabbitmq.node.partitions.min(5m)}>0` |WARNING | |
-|RabbitMQ: Node is not running |<p>RabbitMQ node is not running</p> |`{TEMPLATE_NAME:rabbitmq.node.running.max(5m)}=0` |AVERAGE |<p>**Depends on**:</p><p>- RabbitMQ: Service is down</p> |
-|RabbitMQ: Memory alarm (Memory usage threshold has been reached) |<p>https://www.rabbitmq.com/memory.html</p> |`{TEMPLATE_NAME:rabbitmq.node.mem_alarm.last()}=1` |AVERAGE | |
-|RabbitMQ: Free disk space alarm (Free space threshold has been reached) |<p>https://www.rabbitmq.com/disk-alarms.html</p> |`{TEMPLATE_NAME:rabbitmq.node.disk_free_alarm.last()}=1` |AVERAGE | |
-|RabbitMQ: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:rabbitmq.node.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|RabbitMQ: Service is down |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service[http,"{HOST.CONN}","{$RABBITMQ.API.PORT}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|RabbitMQ: Service response time is too high (over {$RABBITMQ.RESPONSE_TIME.MAX.WARN}s for 5m) |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service.perf[http,"{HOST.CONN}","{$RABBITMQ.API.PORT}"].min(5m)}>{$RABBITMQ.RESPONSE_TIME.MAX.WARN}` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- RabbitMQ: Service is down</p> |
-|RabbitMQ: There are active alarms in the node |<p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> |`{TEMPLATE_NAME:rabbitmq.healthcheck.local_alarms[{#SINGLETON}].last()}=503` |AVERAGE | |
-|RabbitMQ: There are valid TLS certificates expiring in the next month |<p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> |`{TEMPLATE_NAME:rabbitmq.healthcheck.certificate_expiration[{#SINGLETON}].last()}=503` |AVERAGE | |
-|RabbitMQ: There are not running virtual hosts |<p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> |`{TEMPLATE_NAME:rabbitmq.healthcheck.virtual_hosts[{#SINGLETON}].last()}=503` |AVERAGE | |
-|RabbitMQ: There are queues that could potentially lose data if the this node goes offline. |<p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> |`{TEMPLATE_NAME:rabbitmq.healthcheck.mirror_sync[{#SINGLETON}].last()}=503` |AVERAGE | |
-|RabbitMQ: There are queues that would lose their quorum and availability if the this node is shut down. |<p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> |`{TEMPLATE_NAME:rabbitmq.healthcheck.quorum[{#SINGLETON}].last()}=503` |AVERAGE | |
-|RabbitMQ: Node healthcheck failed |<p>https://www.rabbitmq.com/monitoring.html#health-checks</p> |`{TEMPLATE_NAME:rabbitmq.healthcheck[{#SINGLETON}].last()}=0` |AVERAGE | |
-|RabbitMQ: Too many messages in queue (over {$RABBITMQ.MESSAGES.MAX.WARN} for 5m) |<p>-</p> |`{TEMPLATE_NAME:rabbitmq.queue.messages["{#VHOST}/{#QUEUE}"].min(5m)}>{$RABBITMQ.MESSAGES.MAX.WARN:"{#QUEUE}"}` |WARNING | |
-|RabbitMQ: Failed to fetch nodes data (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes.</p> |`{TEMPLATE_NAME:rabbitmq.get_nodes.nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- RabbitMQ: Service is down</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|---------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------|
+| RabbitMQ: Version has changed (new version: {ITEM.VALUE}) | <p>RabbitMQ version has changed. Ack to close.</p> | `{TEMPLATE_NAME:rabbitmq.node.overview.rabbitmq_version.diff()}=1 and {TEMPLATE_NAME:rabbitmq.node.overview.rabbitmq_version.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| RabbitMQ: Number of network partitions is too high (more than 0 for 5m) | <p>https://www.rabbitmq.com/partitions.html#detecting</p> | `{TEMPLATE_NAME:rabbitmq.node.partitions.min(5m)}>0` | WARNING | |
+| RabbitMQ: Node is not running | <p>RabbitMQ node is not running</p> | `{TEMPLATE_NAME:rabbitmq.node.running.max(5m)}=0` | AVERAGE | <p>**Depends on**:</p><p>- RabbitMQ: Service is down</p> |
+| RabbitMQ: Memory alarm (Memory usage threshold has been reached) | <p>https://www.rabbitmq.com/memory.html</p> | `{TEMPLATE_NAME:rabbitmq.node.mem_alarm.last()}=1` | AVERAGE | |
+| RabbitMQ: Free disk space alarm (Free space threshold has been reached) | <p>https://www.rabbitmq.com/disk-alarms.html</p> | `{TEMPLATE_NAME:rabbitmq.node.disk_free_alarm.last()}=1` | AVERAGE | |
+| RabbitMQ: has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:rabbitmq.node.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| RabbitMQ: Service is down | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service[http,"{HOST.CONN}","{$RABBITMQ.API.PORT}"].last()}=0` | AVERAGE | <p>Manual close: YES</p> |
+| RabbitMQ: Service response time is too high (over {$RABBITMQ.RESPONSE_TIME.MAX.WARN}s for 5m) | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service.perf[http,"{HOST.CONN}","{$RABBITMQ.API.PORT}"].min(5m)}>{$RABBITMQ.RESPONSE_TIME.MAX.WARN}` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- RabbitMQ: Service is down</p> |
+| RabbitMQ: There are active alarms in the node | <p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> | `{TEMPLATE_NAME:rabbitmq.healthcheck.local_alarms[{#SINGLETON}].last()}=503` | AVERAGE | |
+| RabbitMQ: There are valid TLS certificates expiring in the next month | <p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> | `{TEMPLATE_NAME:rabbitmq.healthcheck.certificate_expiration[{#SINGLETON}].last()}=503` | AVERAGE | |
+| RabbitMQ: There are not running virtual hosts | <p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> | `{TEMPLATE_NAME:rabbitmq.healthcheck.virtual_hosts[{#SINGLETON}].last()}=503` | AVERAGE | |
+| RabbitMQ: There are queues that could potentially lose data if this node goes offline. | <p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> | `{TEMPLATE_NAME:rabbitmq.healthcheck.mirror_sync[{#SINGLETON}].last()}=503` | AVERAGE | |
+| RabbitMQ: There are queues that would lose their quorum and availability if this node is shut down. | <p>http://{HOST.CONN}:{$RABBITMQ.API.PORT}/api/index.html</p> | `{TEMPLATE_NAME:rabbitmq.healthcheck.quorum[{#SINGLETON}].last()}=503` | AVERAGE | |
+| RabbitMQ: Node healthcheck failed | <p>https://www.rabbitmq.com/monitoring.html#health-checks</p> | `{TEMPLATE_NAME:rabbitmq.healthcheck[{#SINGLETON}].last()}=0` | AVERAGE | |
+| RabbitMQ: Too many messages in queue (over {$RABBITMQ.MESSAGES.MAX.WARN} for 5m) | <p>-</p> | `{TEMPLATE_NAME:rabbitmq.queue.messages["{#VHOST}/{#QUEUE}"].min(5m)}>{$RABBITMQ.MESSAGES.MAX.WARN:"{#QUEUE}"}` | WARNING | |
+| RabbitMQ: Failed to fetch nodes data (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes.</p> | `{TEMPLATE_NAME:rabbitmq.get_nodes.nodata(30m)}=1` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- RabbitMQ: Service is down</p> |
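+
+The queue trigger above uses a user macro with context, `{$RABBITMQ.MESSAGES.MAX.WARN:"{#QUEUE}"}`, so the threshold can be overridden per queue while falling back to the template default of `1000`. A rough sketch of that resolution (the per-queue override shown is hypothetical):
+
+```js
+// Illustration of user-macro context fallback for the message threshold.
+var macros = {
+  '{$RABBITMQ.MESSAGES.MAX.WARN}': 1000,          // template default
+  '{$RABBITMQ.MESSAGES.MAX.WARN:"orders"}': 5000  // hypothetical host-level override
+};
+
+function resolveMaxWarn(queue) {
+  var withContext = '{$RABBITMQ.MESSAGES.MAX.WARN:"' + queue + '"}';
+  // With context: use the queue-specific value if defined, else the plain macro.
+  return macros.hasOwnProperty(withContext)
+    ? macros[withContext]
+    : macros['{$RABBITMQ.MESSAGES.MAX.WARN}'];
+}
+
+resolveMaxWarn('orders'); // 5000 (override applies)
+resolveMaxWarn('logs');   // 1000 (falls back to the default)
+```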
## Feedback
diff --git a/templates/app/rabbitmq_http/template_app_rabbitmq_http.yaml b/templates/app/rabbitmq_http/template_app_rabbitmq_http.yaml
index b27a4aa2a1b..296c05d77fb 100644
--- a/templates/app/rabbitmq_http/template_app_rabbitmq_http.yaml
+++ b/templates/app/rabbitmq_http/template_app_rabbitmq_http.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-09T06:37:09Z'
+ date: '2021-04-22T11:27:26Z'
groups:
-
name: Templates/Applications
@@ -980,87 +980,89 @@ zabbix_export:
dashboards:
-
name: 'RabbitMQ overview'
- widgets:
- -
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Messages'
- host: 'RabbitMQ cluster by HTTP'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Connections'
- host: 'RabbitMQ cluster by HTTP'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Messages status'
- host: 'RabbitMQ cluster by HTTP'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Queues'
- host: 'RabbitMQ cluster by HTTP'
- -
- type: GRAPH_CLASSIC
- 'y': '10'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Messages per second'
- host: 'RabbitMQ cluster by HTTP'
+ pages:
+ -
+ widgets:
+ -
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Messages'
+ host: 'RabbitMQ cluster by HTTP'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Connections'
+ host: 'RabbitMQ cluster by HTTP'
+ -
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Messages status'
+ host: 'RabbitMQ cluster by HTTP'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Queues'
+ host: 'RabbitMQ cluster by HTTP'
+ -
+ type: GRAPH_CLASSIC
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Messages per second'
+ host: 'RabbitMQ cluster by HTTP'
valuemaps:
-
name: 'RabbitMQ healthcheck'
@@ -2330,14 +2332,14 @@ zabbix_export:
key: rabbitmq.get_queues
lld_macro_paths:
-
+ lld_macro: '{#NODE}'
+ path: $.node
+ -
lld_macro: '{#QUEUE}'
path: $.name
-
lld_macro: '{#VHOST}'
path: $.vhost
- -
- lld_macro: '{#NODE}'
- path: $.node
macros:
-
macro: '{$RABBITMQ.API.PASSWORD}'
@@ -2376,104 +2378,106 @@ zabbix_export:
dashboards:
-
name: 'RabbitMQ node status'
- widgets:
- -
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Node status'
- host: 'RabbitMQ node by HTTP'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Uptime'
- host: 'RabbitMQ node by HTTP'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Disk free'
- host: 'RabbitMQ node by HTTP'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Memory used'
- host: 'RabbitMQ node by HTTP'
- -
- type: GRAPH_CLASSIC
- 'y': '10'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: File descriptors'
- host: 'RabbitMQ node by HTTP'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '10'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'RabbitMQ: Sockets'
- host: 'RabbitMQ node by HTTP'
+ pages:
+ -
+ widgets:
+ -
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Node status'
+ host: 'RabbitMQ node by HTTP'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Uptime'
+ host: 'RabbitMQ node by HTTP'
+ -
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Disk free'
+ host: 'RabbitMQ node by HTTP'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Memory used'
+ host: 'RabbitMQ node by HTTP'
+ -
+ type: GRAPH_CLASSIC
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: File descriptors'
+ host: 'RabbitMQ node by HTTP'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'RabbitMQ: Sockets'
+ host: 'RabbitMQ node by HTTP'
valuemaps:
-
name: 'Alarm state'
diff --git a/templates/app/sharepoint_http/template_app_sharepoint_http.yaml b/templates/app/sharepoint_http/template_app_sharepoint_http.yaml
index e1c3fd236bb..3383d6d18a0 100644
--- a/templates/app/sharepoint_http/template_app_sharepoint_http.yaml
+++ b/templates/app/sharepoint_http/template_app_sharepoint_http.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-18T09:22:26Z'
+ date: '2021-04-22T11:27:41Z'
groups:
-
name: Templates/Applications
@@ -18,13 +18,6 @@ zabbix_export:
groups:
-
name: Templates/Applications
- applications:
- -
- name: Sharepoint
- -
- name: 'Sharepoint front page'
- -
- name: 'Zabbix raw items'
items:
-
name: 'Sharepoint: Get directory structure'
@@ -133,9 +126,6 @@ zabbix_export:
result.time = new Date().getTime() - js_start;
return JSON.stringify(result);
description: 'Used to get directory structure information'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: CHECK_NOT_SUPPORTED
@@ -156,6 +146,10 @@ zabbix_export:
-
name: user
value: '{$SHAREPOINT.USER}'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Sharepoint: Get directory structure: Status'
type: DEPENDENT
@@ -163,9 +157,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'HTTP response (status) code. Indicates whether the HTTP request was successfully completed. Additional information is available in the server log file.'
- applications:
- -
- name: Sharepoint
valuemap:
name: 'HTTP response status code'
preprocessing:
@@ -181,6 +172,10 @@ zabbix_export:
- 3h
master_item:
key: sharepoint.get_dir
+ tags:
+ -
+ tag: Application
+ value: Sharepoint
triggers:
-
expression: '{last()}<>200'
@@ -195,9 +190,6 @@ zabbix_export:
history: 7d
units: '!ms'
description: 'The time taken to execute the script for obtaining the data structure (in ms). Less is better.'
- applications:
- -
- name: Sharepoint
preprocessing:
-
type: JSONPATH
@@ -211,6 +203,10 @@ zabbix_export:
- 3h
master_item:
key: sharepoint.get_dir
+ tags:
+ -
+ tag: Application
+ value: Sharepoint
triggers:
-
expression: '{last()}>2000'
@@ -225,9 +221,6 @@ zabbix_export:
username: '{$SHAREPOINT.USER}'
password: '{$SHAREPOINT.PASSWORD}'
description: 'This item specifies a value between 0 and 10, where 0 represents a low load and a high ability to process requests and 10 represents a high load and that the server is throttling requests to maintain adequate throughput.'
- applications:
- -
- name: Sharepoint
preprocessing:
-
type: REGEX
@@ -247,6 +240,10 @@ zabbix_export:
url: '{$SHAREPOINT.URL}'
retrieve_mode: HEADERS
request_method: HEAD
+ tags:
+ -
+ tag: Application
+ value: Sharepoint
triggers:
-
expression: '{last()}>"{$SHAREPOINT.MAX_HEALT_SCORE}"'
@@ -370,9 +367,6 @@ zabbix_export:
description: |
Date of creation:
{#SHAREPOINT.LLD.FULL_PATH}
- application_prototypes:
- -
- name: 'Sharepoint object [{#SHAREPOINT.LLD.FULL_PATH}]"'
preprocessing:
-
type: JSONPATH
@@ -385,6 +379,10 @@ zabbix_export:
- 3h
master_item:
key: sharepoint.get_dir
+ tags:
+ -
+ tag: Application
+ value: 'Sharepoint object [{#SHAREPOINT.LLD.FULL_PATH}]"'
-
name: 'Sharepoint: Modified ({#SHAREPOINT.LLD.FULL_PATH})'
type: DEPENDENT
@@ -395,9 +393,6 @@ zabbix_export:
description: |
Date of change:
{#SHAREPOINT.LLD.FULL_PATH}
- application_prototypes:
- -
- name: 'Sharepoint object [{#SHAREPOINT.LLD.FULL_PATH}]"'
preprocessing:
-
type: JSONPATH
@@ -410,6 +405,10 @@ zabbix_export:
- 3h
master_item:
key: sharepoint.get_dir
+ tags:
+ -
+ tag: Application
+ value: 'Sharepoint object [{#SHAREPOINT.LLD.FULL_PATH}]"'
trigger_prototypes:
-
expression: '{diff()}=1'
@@ -428,9 +427,6 @@ zabbix_export:
description: |
Size of:
{#SHAREPOINT.LLD.FULL_PATH}
- application_prototypes:
- -
- name: 'Sharepoint object [{#SHAREPOINT.LLD.FULL_PATH}]"'
preprocessing:
-
type: JSONPATH
@@ -443,6 +439,10 @@ zabbix_export:
- 24h
master_item:
key: sharepoint.get_dir
+ tags:
+ -
+ tag: Application
+ value: 'Sharepoint object [{#SHAREPOINT.LLD.FULL_PATH}]"'
parameters:
-
name: password
@@ -461,20 +461,6 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
- httptests:
- -
- name: 'Sharepoint: Front page'
- application:
- name: 'Sharepoint front page'
- delay: 10m
- authentication: NTLM
- http_user: '{$SHAREPOINT.USER}'
- http_password: '{$SHAREPOINT.PASSWORD}'
- steps:
- -
- name: Login
- url: '{$SHAREPOINT.URL}'
- status_codes: '200'
macros:
-
macro: '{$SHAREPOINT.GET_INTERVAL}'
diff --git a/templates/app/squid_snmp/README.md b/templates/app/squid_snmp/README.md
index c9a89579aaf..a2265301607 100644
--- a/templates/app/squid_snmp/README.md
+++ b/templates/app/squid_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
This template was tested on:
@@ -36,13 +36,13 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$SQUID.FILE.DESC.WARN.MIN} |<p>The threshold for minimum number of available file descriptors</p> |`100` |
-|{$SQUID.HTTP.PORT} |<p>http_port configured in squid.conf (Default: 3128)</p> |`3128` |
-|{$SQUID.PAGE.FAULT.WARN} |<p>The threshold for sys page faults rate in percent of received HTTP requests</p> |`90` |
-|{$SQUID.SNMP.COMMUNITY} |<p>SNMP community allowed by ACL in squid.conf</p> |`public` |
-|{$SQUID.SNMP.PORT} |<p>snmp_port configured in squid.conf (Default: 3401)</p> |`3401` |
+| Name | Description | Default |
+|-----------------------------|------------------------------------------------------------------------------------|----------|
+| {$SQUID.FILE.DESC.WARN.MIN} | <p>The threshold for minimum number of available file descriptors</p> | `100` |
+| {$SQUID.HTTP.PORT} | <p>http_port configured in squid.conf (Default: 3128)</p> | `3128` |
+| {$SQUID.PAGE.FAULT.WARN} | <p>The threshold for sys page faults rate in percent of received HTTP requests</p> | `90` |
+| {$SQUID.SNMP.COMMUNITY} | <p>SNMP community allowed by ACL in squid.conf</p> | `public` |
+| {$SQUID.SNMP.PORT} | <p>snmp_port configured in squid.conf (Default: 3401)</p> | `3401` |
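+
+`{$SQUID.PAGE.FAULT.WARN}` is expressed as a percentage of received HTTP requests rather than an absolute rate. A rough sketch of the comparison it is meant to parameterise (the sample rates are hypothetical; the exact trigger expression lives in the template):
+
+```js
+// Illustrative only: page faults as a percentage of HTTP requests.
+var PAGE_FAULT_WARN = 90;      // {$SQUID.PAGE.FAULT.WARN}, percent
+
+var pageFaultsPerSec   = 45;   // squid[cacheSysPageFaults], after CHANGE_PER_SECOND
+var httpRequestsPerSec = 40;   // squid[cacheProtoClientHttpRequests], after CHANGE_PER_SECOND
+
+var faultPercent = httpRequestsPerSec > 0
+  ? (pageFaultsPerSec / httpRequestsPerSec) * 100
+  : 0;
+
+var shouldWarn = faultPercent > PAGE_FAULT_WARN; // true here: 112.5% > 90%
+```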
## Template links
@@ -53,74 +53,74 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Squid |Squid: Service ping |<p>-</p> |SIMPLE |net.tcp.service[tcp,,{$SQUID.HTTP.PORT}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Squid |Squid: Uptime |<p>The Uptime of the cache in timeticks (in hundredths of a second) with preprocessing</p> |SNMP |squid[cacheUptime]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
-|Squid |Squid: Version |<p>Cache Software Version</p> |SNMP |squid[cacheVersionId]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Squid |Squid: CPU usage |<p>The percentage use of the CPU</p> |SNMP |squid[cacheCpuUsage] |
-|Squid |Squid: Memory maximum resident size |<p>Maximum Resident Size</p> |SNMP |squid[cacheMaxResSize]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Squid |Squid: Memory maximum cache size |<p>The value of the cache_mem parameter</p> |SNMP |squid[cacheMemMaxSize]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1048576`</p> |
-|Squid |Squid: Memory cache usage |<p>Total accounted memory</p> |SNMP |squid[cacheMemUsage]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Squid |Squid: Cache swap low water mark |<p>Cache Swap Low Water Mark</p> |SNMP |squid[cacheSwapLowWM] |
-|Squid |Squid: Cache swap high water mark |<p>Cache Swap High Water Mark</p> |SNMP |squid[cacheSwapHighWM] |
-|Squid |Squid: Cache swap directory size |<p>The total of the cache_dir space allocated</p> |SNMP |squid[cacheSwapMaxSize]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1048576`</p> |
-|Squid |Squid: Cache swap current size |<p>Storage Swap Size</p> |SNMP |squid[cacheCurrentSwapSize]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1048576`</p> |
-|Squid |Squid: File descriptor count - current used |<p>Number of file descriptors in use</p> |SNMP |squid[cacheCurrentFileDescrCnt] |
-|Squid |Squid: File descriptor count - current maximum |<p>Highest number of file descriptors in use</p> |SNMP |squid[cacheCurrentFileDescrMax] |
-|Squid |Squid: File descriptor count - current reserved |<p>Reserved number of file descriptors</p> |SNMP |squid[cacheCurrentResFileDescrCnt] |
-|Squid |Squid: File descriptor count - current available |<p>Available number of file descriptors</p> |SNMP |squid[cacheCurrentUnusedFDescrCnt] |
-|Squid |Squid: Byte hit ratio per 1 minute |<p>Byte Hit Ratios</p> |SNMP |squid[cacheRequestByteRatio.1] |
-|Squid |Squid: Byte hit ratio per 5 minutes |<p>Byte Hit Ratios</p> |SNMP |squid[cacheRequestByteRatio.5] |
-|Squid |Squid: Byte hit ratio per 1 hour |<p>Byte Hit Ratios</p> |SNMP |squid[cacheRequestByteRatio.60] |
-|Squid |Squid: Request hit ratio per 1 minute |<p>Byte Hit Ratios</p> |SNMP |squid[cacheRequestHitRatio.1] |
-|Squid |Squid: Request hit ratio per 5 minutes |<p>Byte Hit Ratios</p> |SNMP |squid[cacheRequestHitRatio.5] |
-|Squid |Squid: Request hit ratio per 1 hour |<p>Byte Hit Ratios</p> |SNMP |squid[cacheRequestHitRatio.60] |
-|Squid |Squid: Sys page faults per second |<p>Page faults with physical I/O</p> |SNMP |squid[cacheSysPageFaults]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: HTTP requests received per second |<p>Number of HTTP requests received</p> |SNMP |squid[cacheProtoClientHttpRequests]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: HTTP traffic received per second |<p>Number of HTTP traffic received from clients</p> |SNMP |squid[cacheHttpInKb]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: HTTP traffic sent per second |<p>Number of HTTP traffic sent to clients</p> |SNMP |squid[cacheHttpOutKb]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: HTTP Hits sent from cache per second |<p>Number of HTTP Hits sent to clients from cache</p> |SNMP |squid[cacheHttpHits]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: HTTP Errors sent per second |<p>Number of HTTP Errors sent to clients</p> |SNMP |squid[cacheHttpErrors]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: ICP messages sent per second |<p>Number of ICP messages sent</p> |SNMP |squid[cacheIcpPktsSent]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: ICP messages received per second |<p>Number of ICP messages received</p> |SNMP |squid[cacheIcpPktsRecv]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: ICP traffic transmitted per second |<p>Number of ICP traffic transmitted</p> |SNMP |squid[cacheIcpKbSent]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: ICP traffic received per second |<p>Number of ICP traffic received</p> |SNMP |squid[cacheIcpKbRecv]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: DNS server requests per second |<p>Number of external dns server requests</p> |SNMP |squid[cacheDnsRequests]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: DNS server replies per second |<p>Number of external dns server replies</p> |SNMP |squid[cacheDnsReplies]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: FQDN cache requests per second |<p>Number of FQDN Cache requests</p> |SNMP |squid[cacheFqdnRequests]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: FQDN cache hits per second |<p>Number of FQDN Cache hits</p> |SNMP |squid[cacheFqdnHits]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: FQDN cache misses per second |<p>Number of FQDN Cache misses</p> |SNMP |squid[cacheFqdnMisses]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: IP cache requests per second |<p>Number of IP Cache requests</p> |SNMP |squid[cacheIpRequests]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: IP cache hits per second |<p>Number of IP Cache hits</p> |SNMP |squid[cacheIpHits]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: IP cache misses per second |<p>Number of IP Cache misses</p> |SNMP |squid[cacheIpMisses]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Squid |Squid: Objects count |<p>Number of objects stored by the cache</p> |SNMP |squid[cacheNumObjCount] |
-|Squid |Squid: Objects LRU expiration age |<p>Storage LRU Expiration Age</p> |SNMP |squid[cacheCurrentLRUExpiration]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
-|Squid |Squid: Objects unlinkd requests |<p>Requests given to unlinkd</p> |SNMP |squid[cacheCurrentUnlinkRequests] |
-|Squid |Squid: HTTP all service time per 5 minutes |<p>HTTP all service time per 5 minutes</p> |SNMP |squid[cacheHttpAllSvcTime.5]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Squid |Squid: HTTP all service time per hour |<p>HTTP all service time per hour</p> |SNMP |squid[cacheHttpAllSvcTime.60]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Squid |Squid: HTTP miss service time per 5 minutes |<p>HTTP miss service time per 5 minutes</p> |SNMP |squid[cacheHttpMissSvcTime.5]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Squid |Squid: HTTP miss service time per hour |<p>HTTP miss service time per hour</p> |SNMP |squid[cacheHttpMissSvcTime.60]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Squid |Squid: HTTP miss service time per 5 minutes |<p>HTTP hit service time per 5 minutes</p> |SNMP |squid[cacheHttpHitSvcTime.5]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Squid |Squid: HTTP hit service time per hour |<p>HTTP hit service time per hour</p> |SNMP |squid[cacheHttpHitSvcTime.60]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Squid |Squid: ICP query service time per 5 minutes |<p>ICP query service time per 5 minutes</p> |SNMP |squid[cacheIcpQuerySvcTime.5]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Squid |Squid: ICP query service time per hour |<p>ICP query service time per hour</p> |SNMP |squid[cacheIcpQuerySvcTime.60]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Squid |Squid: ICP reply service time per 5 minutes |<p>ICP reply service time per 5 minutes</p> |SNMP |squid[cacheIcpReplySvcTime.5]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Squid |Squid: ICP reply service time per hour |<p>ICP reply service time per hour</p> |SNMP |squid[cacheIcpReplySvcTime.60]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Squid |Squid: DNS service time per 5 minutes |<p>DNS service time per 5 minutes</p> |SNMP |squid[cacheDnsSvcTime.5]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Squid |Squid: DNS service time per hour |<p>DNS service time per hour</p> |SNMP |squid[cacheDnsSvcTime.60]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|-------|--------------------------------------------------|--------------------------------------------------------------------------------------------|--------|--------------------------------------------------------------------------------------------------------------|
+| Squid | Squid: Service ping | <p>-</p> | SIMPLE | net.tcp.service[tcp,,{$SQUID.HTTP.PORT}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Squid | Squid: Uptime                                    | <p>The uptime of the cache in timeticks (hundredths of a second); preprocessing converts it to seconds</p> | SNMP   | squid[cacheUptime]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p>                                        |
+| Squid | Squid: Version | <p>Cache Software Version</p> | SNMP | squid[cacheVersionId]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+| Squid | Squid: CPU usage | <p>The percentage use of the CPU</p> | SNMP | squid[cacheCpuUsage] |
+| Squid | Squid: Memory maximum resident size | <p>Maximum Resident Size</p> | SNMP | squid[cacheMaxResSize]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Squid | Squid: Memory maximum cache size | <p>The value of the cache_mem parameter</p> | SNMP | squid[cacheMemMaxSize]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1048576`</p> |
+| Squid | Squid: Memory cache usage | <p>Total accounted memory</p> | SNMP | squid[cacheMemUsage]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Squid | Squid: Cache swap low water mark | <p>Cache Swap Low Water Mark</p> | SNMP | squid[cacheSwapLowWM] |
+| Squid | Squid: Cache swap high water mark | <p>Cache Swap High Water Mark</p> | SNMP | squid[cacheSwapHighWM] |
+| Squid | Squid: Cache swap directory size | <p>The total of the cache_dir space allocated</p> | SNMP | squid[cacheSwapMaxSize]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1048576`</p> |
+| Squid | Squid: Cache swap current size | <p>Storage Swap Size</p> | SNMP | squid[cacheCurrentSwapSize]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1048576`</p> |
+| Squid | Squid: File descriptor count - current used | <p>Number of file descriptors in use</p> | SNMP | squid[cacheCurrentFileDescrCnt] |
+| Squid | Squid: File descriptor count - current maximum | <p>Highest number of file descriptors in use</p> | SNMP | squid[cacheCurrentFileDescrMax] |
+| Squid | Squid: File descriptor count - current reserved | <p>Reserved number of file descriptors</p> | SNMP | squid[cacheCurrentResFileDescrCnt] |
+| Squid | Squid: File descriptor count - current available | <p>Available number of file descriptors</p> | SNMP | squid[cacheCurrentUnusedFDescrCnt] |
+| Squid | Squid: Byte hit ratio per 1 minute | <p>Byte Hit Ratios</p> | SNMP | squid[cacheRequestByteRatio.1] |
+| Squid | Squid: Byte hit ratio per 5 minutes | <p>Byte Hit Ratios</p> | SNMP | squid[cacheRequestByteRatio.5] |
+| Squid | Squid: Byte hit ratio per 1 hour | <p>Byte Hit Ratios</p> | SNMP | squid[cacheRequestByteRatio.60] |
+| Squid | Squid: Request hit ratio per 1 minute            | <p>Request Hit Ratios</p>                                                                    | SNMP   | squid[cacheRequestHitRatio.1]                                                                                  |
+| Squid | Squid: Request hit ratio per 5 minutes           | <p>Request Hit Ratios</p>                                                                    | SNMP   | squid[cacheRequestHitRatio.5]                                                                                  |
+| Squid | Squid: Request hit ratio per 1 hour              | <p>Request Hit Ratios</p>                                                                    | SNMP   | squid[cacheRequestHitRatio.60]                                                                                 |
+| Squid | Squid: Sys page faults per second | <p>Page faults with physical I/O</p> | SNMP | squid[cacheSysPageFaults]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Squid | Squid: HTTP requests received per second | <p>Number of HTTP requests received</p> | SNMP | squid[cacheProtoClientHttpRequests]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Squid | Squid: HTTP traffic received per second          | <p>Amount of HTTP traffic received from clients</p>                                          | SNMP   | squid[cacheHttpInKb]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p><p>- CHANGE_PER_SECOND                 |
+| Squid | Squid: HTTP traffic sent per second              | <p>Amount of HTTP traffic sent to clients</p>                                                | SNMP   | squid[cacheHttpOutKb]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p><p>- CHANGE_PER_SECOND                |
+| Squid | Squid: HTTP Hits sent from cache per second | <p>Number of HTTP Hits sent to clients from cache</p> | SNMP | squid[cacheHttpHits]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Squid | Squid: HTTP Errors sent per second | <p>Number of HTTP Errors sent to clients</p> | SNMP | squid[cacheHttpErrors]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Squid | Squid: ICP messages sent per second | <p>Number of ICP messages sent</p> | SNMP | squid[cacheIcpPktsSent]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Squid | Squid: ICP messages received per second | <p>Number of ICP messages received</p> | SNMP | squid[cacheIcpPktsRecv]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Squid | Squid: ICP traffic transmitted per second        | <p>Amount of ICP traffic transmitted</p>                                                     | SNMP   | squid[cacheIcpKbSent]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p><p>- CHANGE_PER_SECOND                |
+| Squid | Squid: ICP traffic received per second           | <p>Amount of ICP traffic received</p>                                                        | SNMP   | squid[cacheIcpKbRecv]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p><p>- CHANGE_PER_SECOND                |
+| Squid | Squid: DNS server requests per second | <p>Number of external dns server requests</p> | SNMP | squid[cacheDnsRequests]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Squid | Squid: DNS server replies per second | <p>Number of external dns server replies</p> | SNMP | squid[cacheDnsReplies]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Squid | Squid: FQDN cache requests per second | <p>Number of FQDN Cache requests</p> | SNMP | squid[cacheFqdnRequests]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Squid | Squid: FQDN cache hits per second | <p>Number of FQDN Cache hits</p> | SNMP | squid[cacheFqdnHits]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Squid | Squid: FQDN cache misses per second | <p>Number of FQDN Cache misses</p> | SNMP | squid[cacheFqdnMisses]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Squid | Squid: IP cache requests per second | <p>Number of IP Cache requests</p> | SNMP | squid[cacheIpRequests]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Squid | Squid: IP cache hits per second | <p>Number of IP Cache hits</p> | SNMP | squid[cacheIpHits]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Squid | Squid: IP cache misses per second | <p>Number of IP Cache misses</p> | SNMP | squid[cacheIpMisses]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Squid | Squid: Objects count | <p>Number of objects stored by the cache</p> | SNMP | squid[cacheNumObjCount] |
+| Squid | Squid: Objects LRU expiration age | <p>Storage LRU Expiration Age</p> | SNMP | squid[cacheCurrentLRUExpiration]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
+| Squid | Squid: Objects unlinkd requests | <p>Requests given to unlinkd</p> | SNMP | squid[cacheCurrentUnlinkRequests] |
+| Squid | Squid: HTTP all service time per 5 minutes | <p>HTTP all service time per 5 minutes</p> | SNMP | squid[cacheHttpAllSvcTime.5]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Squid | Squid: HTTP all service time per hour | <p>HTTP all service time per hour</p> | SNMP | squid[cacheHttpAllSvcTime.60]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Squid | Squid: HTTP miss service time per 5 minutes | <p>HTTP miss service time per 5 minutes</p> | SNMP | squid[cacheHttpMissSvcTime.5]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Squid | Squid: HTTP miss service time per hour | <p>HTTP miss service time per hour</p> | SNMP | squid[cacheHttpMissSvcTime.60]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Squid | Squid: HTTP hit service time per 5 minutes       | <p>HTTP hit service time per 5 minutes</p>                                                   | SNMP   | squid[cacheHttpHitSvcTime.5]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p>                              |
+| Squid | Squid: HTTP hit service time per hour | <p>HTTP hit service time per hour</p> | SNMP | squid[cacheHttpHitSvcTime.60]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Squid | Squid: ICP query service time per 5 minutes | <p>ICP query service time per 5 minutes</p> | SNMP | squid[cacheIcpQuerySvcTime.5]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Squid | Squid: ICP query service time per hour | <p>ICP query service time per hour</p> | SNMP | squid[cacheIcpQuerySvcTime.60]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Squid | Squid: ICP reply service time per 5 minutes | <p>ICP reply service time per 5 minutes</p> | SNMP | squid[cacheIcpReplySvcTime.5]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Squid | Squid: ICP reply service time per hour | <p>ICP reply service time per hour</p> | SNMP | squid[cacheIcpReplySvcTime.60]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Squid | Squid: DNS service time per 5 minutes | <p>DNS service time per 5 minutes</p> | SNMP | squid[cacheDnsSvcTime.5]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Squid | Squid: DNS service time per hour | <p>DNS service time per hour</p> | SNMP | squid[cacheDnsSvcTime.60]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
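
Note that the `MULTIPLIER` preprocessing steps above normalize raw SNMP values to base units. For example, `cacheUptime` is reported in hundredths of a second, so the `0.01` multiplier yields seconds (360000 timeticks becomes 3600 s), and `cacheHttpInKb` is reported in kilobytes, so the `1024` multiplier converts it to bytes before the per-second rate is taken; the `1048576` multipliers appear to convert megabyte-denominated values (cache_mem, cache_dir sizes) to bytes in the same way.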
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Squid: Port {$SQUID.HTTP.PORT} is down |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service[tcp,,{$SQUID.HTTP.PORT}].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|Squid: Squid has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:squid[cacheUptime].last()}<10m` |INFO |<p>Manual close: YES</p> |
-|Squid: Squid version has been changed |<p>Squid version has changed. Ack to close.</p> |`{TEMPLATE_NAME:squid[cacheVersionId].diff()}=1 and {TEMPLATE_NAME:squid[cacheVersionId].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Squid: Swap usage is more than low watermark (>{ITEM.VALUE2}%) |<p>-</p> |`{TEMPLATE_NAME:squid[cacheCurrentSwapSize].last()}>{Squid SNMP:squid[cacheSwapLowWM].last()}*{Squid SNMP:squid[cacheSwapMaxSize].last()}/100` |WARNING | |
-|Squid: Swap usage is more than high watermark (>{ITEM.VALUE2}%) |<p>-</p> |`{TEMPLATE_NAME:squid[cacheCurrentSwapSize].last()}>{Squid SNMP:squid[cacheSwapHighWM].last()}*{Squid SNMP:squid[cacheSwapMaxSize].last()}/100` |HIGH | |
-|Squid: Squid is running out of file descriptors (<{$SQUID.FILE.DESC.WARN.MIN}) |<p>-</p> |`{TEMPLATE_NAME:squid[cacheCurrentUnusedFDescrCnt].last()}<{$SQUID.FILE.DESC.WARN.MIN}` |WARNING | |
-|Squid: High sys page faults rate (>{$SQUID.PAGE.FAULT.WARN}% of received HTTP requests) |<p>-</p> |`{TEMPLATE_NAME:squid[cacheSysPageFaults].avg(5m)}>{Squid SNMP:squid[cacheProtoClientHttpRequests].avg(5m)}/100*{$SQUID.PAGE.FAULT.WARN}` |WARNING | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-----------------------------------------------------------------------------------------|-------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------|
+| Squid: Port {$SQUID.HTTP.PORT} is down | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service[tcp,,{$SQUID.HTTP.PORT}].last()}=0` | AVERAGE | <p>Manual close: YES</p> |
+| Squid: Squid has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:squid[cacheUptime].last()}<10m` | INFO | <p>Manual close: YES</p> |
+| Squid: Squid version has been changed | <p>Squid version has changed. Ack to close.</p> | `{TEMPLATE_NAME:squid[cacheVersionId].diff()}=1 and {TEMPLATE_NAME:squid[cacheVersionId].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Squid: Swap usage is more than low watermark (>{ITEM.VALUE2}%) | <p>-</p> | `{TEMPLATE_NAME:squid[cacheCurrentSwapSize].last()}>{Squid SNMP:squid[cacheSwapLowWM].last()}*{Squid SNMP:squid[cacheSwapMaxSize].last()}/100` | WARNING | |
+| Squid: Swap usage is more than high watermark (>{ITEM.VALUE2}%) | <p>-</p> | `{TEMPLATE_NAME:squid[cacheCurrentSwapSize].last()}>{Squid SNMP:squid[cacheSwapHighWM].last()}*{Squid SNMP:squid[cacheSwapMaxSize].last()}/100` | HIGH | |
+| Squid: Squid is running out of file descriptors (<{$SQUID.FILE.DESC.WARN.MIN}) | <p>-</p> | `{TEMPLATE_NAME:squid[cacheCurrentUnusedFDescrCnt].last()}<{$SQUID.FILE.DESC.WARN.MIN}` | WARNING | |
+| Squid: High sys page faults rate (>{$SQUID.PAGE.FAULT.WARN}% of received HTTP requests) | <p>-</p> | `{TEMPLATE_NAME:squid[cacheSysPageFaults].avg(5m)}>{Squid SNMP:squid[cacheProtoClientHttpRequests].avg(5m)}/100*{$SQUID.PAGE.FAULT.WARN}` | WARNING | |
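
As a worked example of the watermark triggers above: with a total `cache_dir` allocation of 100 GB and watermarks of 90 and 95, the WARNING trigger fires once Storage Swap Size exceeds 90 GB (90 × 100 GB / 100) and the HIGH trigger once it exceeds 95 GB. Both thresholds scale automatically with the values Squid reports for `cacheSwapLowWM`, `cacheSwapHighWM` and `cacheSwapMaxSize`, so no macro needs to be tuned for them.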
## Feedback
diff --git a/templates/app/vmware/template_app_vmware.yaml b/templates/app/vmware/template_app_vmware.yaml
index 1829f8c2b0c..a9465b63911 100644
--- a/templates/app/vmware/template_app_vmware.yaml
+++ b/templates/app/vmware/template_app_vmware.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-24T07:24:01Z'
+ date: '2021-04-22T12:04:30Z'
groups:
-
name: Templates/Applications
@@ -282,9 +282,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Percentage of time the virtual machine is unable to run because it is contending for access to the physical CPU(s).'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Number of virtual CPUs'
type: SIMPLE
@@ -312,9 +313,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Percentage of time that the virtual machine was ready, but could not get scheduled to run on the physical CPU.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: CPU ready'
type: SIMPLE
@@ -337,9 +339,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Percentage of CPU time spent waiting for swap-in.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: CPU usage in percents'
type: SIMPLE
@@ -349,9 +352,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'CPU usage as a percentage during the interval.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: CPU usage'
type: SIMPLE
@@ -394,9 +398,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Amount of guest physical memory that is swapped out to the swap space.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Uptime of guest OS'
type: SIMPLE
@@ -406,9 +411,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Total time elapsed since the last operating system boot-up (in seconds).'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
triggers:
-
expression: '{last()}<10m'
@@ -471,9 +477,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Amount of host physical memory consumed for backing up guest physical memory pages.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Private memory'
type: SIMPLE
@@ -567,9 +574,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Percentage of host physical memory that has been consumed.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Power state'
type: SIMPLE
@@ -710,15 +718,15 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'VMware virtual machine network utilization (combined transmit-rates and receive-rates) during the interval.'
- tags:
- -
- tag: Application
- value: VMware
preprocessing:
-
type: MULTIPLIER
parameters:
- '1024'
+ tags:
+ -
+ tag: Application
+ value: VMware
-
name: 'Disk device discovery'
type: SIMPLE
@@ -736,9 +744,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Average number of outstanding read requests to the virtual disk during the collection interval.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Average read latency to the disk {#DISKDESC}'
type: SIMPLE
@@ -748,9 +757,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'The average time a read from the virtual disk takes.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Average write latency to the disk {#DISKDESC}'
type: SIMPLE
@@ -760,9 +770,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'The average time a write to the virtual disk takes.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Average number of outstanding write requests to the disk {#DISKDESC}'
type: SIMPLE
@@ -771,9 +782,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Average number of outstanding write requests to the virtual disk during the collection interval.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Average number of bytes read from the disk {#DISKDESC}'
type: SIMPLE
@@ -969,9 +981,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'CPU usage as a percentage during the interval.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: CPU usage'
type: SIMPLE
@@ -994,9 +1007,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'CPU usage as a percentage during the interval depends on power management or HT.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Datacenter name'
type: SIMPLE
@@ -1217,14 +1231,15 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Maximum allowed power usage.'
- applications:
- -
- name: VMware
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: VMware
-
name: 'VMware: Power usage'
type: SIMPLE
@@ -1234,9 +1249,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Current power usage.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Health state rollup'
type: SIMPLE
@@ -1370,9 +1386,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Number of available datastore paths.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
trigger_prototypes:
-
expression: '{diff()}=1 and {last()}<{#MULTIPATH.COUNT}'
diff --git a/templates/app/vmware_fqdn/template_app_vmware_fqdn.yaml b/templates/app/vmware_fqdn/template_app_vmware_fqdn.yaml
index b88cf18ab64..d3111000576 100644
--- a/templates/app/vmware_fqdn/template_app_vmware_fqdn.yaml
+++ b/templates/app/vmware_fqdn/template_app_vmware_fqdn.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-24T07:22:14Z'
+ date: '2021-04-22T12:04:34Z'
groups:
-
name: Templates/Applications
@@ -290,9 +290,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Percentage of time the virtual machine is unable to run because it is contending for access to the physical CPU(s).'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Number of virtual CPUs'
type: SIMPLE
@@ -320,9 +321,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Percentage of time that the virtual machine was ready, but could not get scheduled to run on the physical CPU.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: CPU ready'
type: SIMPLE
@@ -345,9 +347,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Percentage of CPU time spent waiting for swap-in.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: CPU usage in percents'
type: SIMPLE
@@ -357,9 +360,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'CPU usage as a percentage during the interval.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: CPU usage'
type: SIMPLE
@@ -402,9 +406,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Amount of guest physical memory that is swapped out to the swap space.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Uptime of guest OS'
type: SIMPLE
@@ -414,9 +419,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Total time elapsed since the last operating system boot-up (in seconds).'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
triggers:
-
expression: '{last()}<10m'
@@ -479,9 +485,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Amount of host physical memory consumed for backing up guest physical memory pages.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Private memory'
type: SIMPLE
@@ -575,9 +582,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Percentage of host physical memory that has been consumed.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Power state'
type: SIMPLE
@@ -649,7 +657,6 @@ zabbix_export:
-
tag: Application
value: VMware
- -
discovery_rules:
-
name: 'Network device discovery'
@@ -708,7 +715,8 @@ zabbix_export:
description: 'VMware virtual machine network interface output statistics (packets per second).'
tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Network utilization on interface {#IFDESC}'
type: SIMPLE
@@ -718,16 +726,15 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'VMware virtual machine network utilization (combined transmit-rates and receive-rates) during the interval.'
- tags:
- -
- tag: Application
- value: VMware
- -
preprocessing:
-
type: MULTIPLIER
parameters:
- '1024'
+ tags:
+ -
+ tag: Application
+ value: VMware
-
name: 'Disk device discovery'
type: SIMPLE
@@ -745,9 +752,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Average number of outstanding read requests to the virtual disk during the collection interval.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Average read latency to the disk {#DISKDESC}'
type: SIMPLE
@@ -757,9 +765,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'The average time a read from the virtual disk takes.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Average write latency to the disk {#DISKDESC}'
type: SIMPLE
@@ -769,9 +778,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'The average time a write to the virtual disk takes.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Average number of outstanding write requests to the disk {#DISKDESC}'
type: SIMPLE
@@ -780,9 +790,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Average number of outstanding write requests to the virtual disk during the collection interval.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Average number of bytes read from the disk {#DISKDESC}'
type: SIMPLE
@@ -978,9 +989,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'CPU usage as a percentage during the interval.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: CPU usage'
type: SIMPLE
@@ -1003,9 +1015,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'CPU usage as a percentage during the interval depends on power management or HT.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Datacenter name'
type: SIMPLE
@@ -1226,14 +1239,15 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Maximum allowed power usage.'
- applications:
- -
- name: VMware
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: VMware
-
name: 'VMware: Power usage'
type: SIMPLE
@@ -1243,9 +1257,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Current power usage.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
-
name: 'VMware: Health state rollup'
type: SIMPLE
@@ -1379,9 +1394,10 @@ zabbix_export:
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Number of available datastore paths.'
- applications:
+ tags:
-
- name: VMware
+ tag: Application
+ value: VMware
trigger_prototypes:
-
expression: '{diff()}=1 and {last()}<{#MULTIPATH.COUNT}'
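
Both VMware diffs above apply the same mechanical migration: each item-level `applications:` list is replaced with an equivalent item-level `tags:` entry, and where an item also has a `preprocessing:` block the tags are moved after it. A rough before/after sketch for a single item (abridged; the item key and other fields outside the hunk context are omitted):

```yaml
# Before (pre-5.4 export): item categorized via an application
-
  name: 'VMware: CPU usage'
  type: SIMPLE
  description: 'CPU usage as a percentage during the interval.'
  applications:
    -
      name: VMware

# After (5.4 export): the same categorization expressed as an item tag
-
  name: 'VMware: CPU usage'
  type: SIMPLE
  description: 'CPU usage as a percentage during the interval.'
  tags:
    -
      tag: Application
      value: VMware
```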
diff --git a/templates/app/zookeeper_http/README.md b/templates/app/zookeeper_http/README.md
index 89a0472fa93..975cfeb832e 100644
--- a/templates/app/zookeeper_http/README.md
+++ b/templates/app/zookeeper_http/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor Apache Zookeeper by Zabbix that works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
@@ -15,7 +15,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/http) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/http) for basic instructions.
This template works with standalone and cluster instances. Metrics are collected from each Zookeeper node by requests to [AdminServer](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_adminserver).
By default, AdminServer is enabled and listens on port 8080.
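
A minimal `zoo.cfg` sketch of the AdminServer settings that the macros below refer to, assuming the defaults (these are the standard ZooKeeper `admin.*` properties; adjust `{$ZOOKEEPER.PORT}` and `{$ZOOKEEPER.COMMAND_URL}` accordingly if you change `admin.serverPort` or `admin.commandURL`):

```
admin.enableServer=true
admin.serverPort=8080
admin.commandURL=/commands
```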
@@ -29,14 +29,14 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$ZOOKEEPER.COMMAND_URL} |<p>The URL for listing and issuing commands relative to the root URL (admin.commandURL).</p> |`commands` |
-|{$ZOOKEEPER.FILE_DESCRIPTORS.MAX.WARN} |<p>Maximum percentage of file descriptors usage alert threshold (for trigger expression).</p> |`85` |
-|{$ZOOKEEPER.OUTSTANDING_REQ.MAX.WARN} |<p>Maximum number of outstanding requests (for trigger expression).</p> |`10` |
-|{$ZOOKEEPER.PENDING_SYNCS.MAX.WARN} |<p>Maximum number of pending syncs from the followers (for trigger expression).</p> |`10` |
-|{$ZOOKEEPER.PORT} |<p>The port the embedded Jetty server listens on (admin.serverPort).</p> |`8080` |
-|{$ZOOKEEPER.SCHEME} |<p>Request scheme which may be http or https</p> |`http` |
+| Name | Description | Default |
+|----------------------------------------|-----------------------------------------------------------------------------------------------|------------|
+| {$ZOOKEEPER.COMMAND_URL} | <p>The URL for listing and issuing commands relative to the root URL (admin.commandURL).</p> | `commands` |
+| {$ZOOKEEPER.FILE_DESCRIPTORS.MAX.WARN} | <p>Maximum percentage of file descriptors usage alert threshold (for trigger expression).</p> | `85` |
+| {$ZOOKEEPER.OUTSTANDING_REQ.MAX.WARN} | <p>Maximum number of outstanding requests (for trigger expression).</p> | `10` |
+| {$ZOOKEEPER.PENDING_SYNCS.MAX.WARN} | <p>Maximum number of pending syncs from the followers (for trigger expression).</p> | `10` |
+| {$ZOOKEEPER.PORT} | <p>The port the embedded Jetty server listens on (admin.serverPort).</p> | `8080` |
+| {$ZOOKEEPER.SCHEME} | <p>Request scheme which may be http or https</p> | `http` |
## Template links
@@ -44,75 +44,75 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Leader metrics discovery |<p>Additional metrics for leader node</p> |DEPENDENT |zookeeper.metrics.leader<p>**Preprocessing**:</p><p>- JSONPATH: `$.server_state`</p><p>- JAVASCRIPT: `return JSON.stringify(value == 'leader' ? [{'{#SINGLETON}': ''}] : []);`</p> |
-|Clients discovery |<p>Get list of client connections.</p><p>Note, depending on the number of client connections this operation may be expensive (i.e. impact server performance).</p> |HTTP_AGENT |zookeeper.clients<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Name | Description | Type | Key and additional info |
+|--------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Leader metrics discovery | <p>Additional metrics for leader node</p> | DEPENDENT | zookeeper.metrics.leader<p>**Preprocessing**:</p><p>- JSONPATH: `$.server_state`</p><p>- JAVASCRIPT: `return JSON.stringify(value == 'leader' ? [{'{#SINGLETON}': ''}] : []);`</p> |
+| Clients discovery | <p>Get list of client connections.</p><p>Note, depending on the number of client connections this operation may be expensive (i.e. impact server performance).</p> | HTTP_AGENT | zookeeper.clients<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Zabbix_raw_items |Zookeeper: Get server metrics |<p>-</p> |HTTP_AGENT |zookeeper.get_metrics |
-|Zabbix_raw_items |Zookeeper: Get connections stats |<p>Get information on client connections to server. Note, depending on the number of client connections this operation may be expensive (i.e. impact server performance).</p> |HTTP_AGENT |zookeeper.get_connections_stats |
-|Zookeeper |Zookeeper: Server mode |<p>Mode of the server. In an ensemble, this may either be leader or follower. Otherwise, it is standalone</p> |DEPENDENT |zookeeper.server_state<p>**Preprocessing**:</p><p>- JSONPATH: `$.server_state`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Zookeeper |Zookeeper: Uptime |<p>Uptime of Zookeeper server.</p> |DEPENDENT |zookeeper.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.uptime`</p><p>- MULTIPLIER: `0.001`</p> |
-|Zookeeper |Zookeeper: Version |<p>Version of Zookeeper server.</p> |DEPENDENT |zookeeper.version<p>**Preprocessing**:</p><p>- JSONPATH: `$.version`</p><p>- REGEX: `([^,]+)--(.+) \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
-|Zookeeper |Zookeeper: Approximate data size |<p>Data tree size in bytes.The size includes the znode path and its value.</p> |DEPENDENT |zookeeper.approximate_data_size<p>**Preprocessing**:</p><p>- JSONPATH: `$.approximate_data_size`</p> |
-|Zookeeper |Zookeeper: File descriptors, max |<p>Maximum number of file descriptors that a zookeeper server can open.</p> |DEPENDENT |zookeeper.max_file_descriptor_count<p>**Preprocessing**:</p><p>- JSONPATH: `$.max_file_descriptor_count`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Zookeeper |Zookeeper: File descriptors, open |<p>Number of file descriptors that a zookeeper server has open.</p> |DEPENDENT |zookeeper.open_file_descriptor_count<p>**Preprocessing**:</p><p>- JSONPATH: `$.open_file_descriptor_count`</p> |
-|Zookeeper |Zookeeper: Outstanding requests |<p>The number of queued requests when the server is under load and is receiving more sustained requests than it can process.</p> |DEPENDENT |zookeeper.outstanding_requests<p>**Preprocessing**:</p><p>- JSONPATH: `$.outstanding_requests`</p> |
-|Zookeeper |Zookeeper: Commit per sec |<p>The number of commits performed per second</p> |DEPENDENT |zookeeper.commit_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.commit_count`</p><p>- CHANGE_PER_SECOND |
-|Zookeeper |Zookeeper: Diff syncs per sec |<p>Number of diff syncs performed per second</p> |DEPENDENT |zookeeper.diff_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.diff_count`</p><p>- CHANGE_PER_SECOND |
-|Zookeeper |Zookeeper: Snap syncs per sec |<p>Number of snap syncs performed per second</p> |DEPENDENT |zookeeper.snap_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.snap_count`</p><p>- CHANGE_PER_SECOND |
-|Zookeeper |Zookeeper: Looking per sec |<p>Rate of transitions into looking state.</p> |DEPENDENT |zookeeper.looking_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.looking_count`</p><p>- CHANGE_PER_SECOND |
-|Zookeeper |Zookeeper: Alive connections |<p>Number of active clients connected to a zookeeper server.</p> |DEPENDENT |zookeeper.num_alive_connections<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_alive_connections`</p> |
-|Zookeeper |Zookeeper: Global sessions |<p>Number of global sessions.</p> |DEPENDENT |zookeeper.global_sessions<p>**Preprocessing**:</p><p>- JSONPATH: `$.global_sessions`</p> |
-|Zookeeper |Zookeeper: Local sessions |<p>Number of local sessions.</p> |DEPENDENT |zookeeper.local_sessions<p>**Preprocessing**:</p><p>- JSONPATH: `$.local_sessions`</p> |
-|Zookeeper |Zookeeper: Drop connections per sec |<p>Rate of connection drops.</p> |DEPENDENT |zookeeper.connection_drop_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.connection_drop_count`</p><p>- CHANGE_PER_SECOND |
-|Zookeeper |Zookeeper: Rejected connections per sec |<p>Rate of connection rejected.</p> |DEPENDENT |zookeeper.connection_rejected.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.connection_rejected`</p><p>- CHANGE_PER_SECOND |
-|Zookeeper |Zookeeper: Revalidate connections per sec |<p>Rate ofconnection revalidations.</p> |DEPENDENT |zookeeper.connection_revalidate_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.connection_revalidate_count`</p><p>- CHANGE_PER_SECOND |
-|Zookeeper |Zookeeper: Revalidate per sec |<p>Rate of revalidations.</p> |DEPENDENT |zookeeper.revalidate_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.revalidate_count`</p><p>- CHANGE_PER_SECOND |
-|Zookeeper |Zookeeper: Latency, max |<p>The maximum amount of time it takes for the server to respond to a client request.</p> |DEPENDENT |zookeeper.max_latency<p>**Preprocessing**:</p><p>- JSONPATH: `$.max_latency`</p> |
-|Zookeeper |Zookeeper: Latency, min |<p>The minimum amount of time it takes for the server to respond to a client request.</p> |DEPENDENT |zookeeper.min_latency<p>**Preprocessing**:</p><p>- JSONPATH: `$.min_latency`</p> |
-|Zookeeper |Zookeeper: Latency, avg |<p>The average amount of time it takes for the server to respond to a client request.</p> |DEPENDENT |zookeeper.avg_latency<p>**Preprocessing**:</p><p>- JSONPATH: `$.avg_latency`</p> |
-|Zookeeper |Zookeeper: Znode count |<p>The number of znodes in the ZooKeeper namespace (the data)</p> |DEPENDENT |zookeeper.znode_count<p>**Preprocessing**:</p><p>- JSONPATH: `$.znode_count`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Zookeeper |Zookeeper: Ephemeral nodes count |<p>Number of ephemeral nodes that a zookeeper server has in its data tree.</p> |DEPENDENT |zookeeper.ephemerals_count<p>**Preprocessing**:</p><p>- JSONPATH: `$.ephemerals_count`</p> |
-|Zookeeper |Zookeeper: Watch count |<p>Number of watches currently set on the local ZooKeeper process.</p> |DEPENDENT |zookeeper.watch_count<p>**Preprocessing**:</p><p>- JSONPATH: `$.watch_count`</p> |
-|Zookeeper |Zookeeper: Packets sent per sec |<p>The number of zookeeper packets sent from a server per second.</p> |DEPENDENT |zookeeper.packets_sent<p>**Preprocessing**:</p><p>- JSONPATH: `$.packets_sent`</p><p>- CHANGE_PER_SECOND |
-|Zookeeper |Zookeeper: Packets received per sec |<p>The number of zookeeper packets received by a server per second.</p> |DEPENDENT |zookeeper.packets_received.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.packets_received`</p><p>- CHANGE_PER_SECOND |
-|Zookeeper |Zookeeper: Bytes received per sec |<p>Number of bytes received per second.</p> |DEPENDENT |zookeeper.bytes_received_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.bytes_received_count`</p><p>- CHANGE_PER_SECOND |
-|Zookeeper |Zookeeper: Election time, avg |<p>Time between entering and leaving election.</p> |DEPENDENT |zookeeper.avg_election_time<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Zookeeper |Zookeeper: Elections |<p>Number of elections happened.</p> |DEPENDENT |zookeeper.cnt_election_time<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Zookeeper |Zookeeper: Fsync time, avg |<p>Time to fsync transaction log.</p> |DEPENDENT |zookeeper.avg_fsynctime<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Zookeeper |Zookeeper: Fsync |<p>Count of performed fsyncs.</p> |DEPENDENT |zookeeper.cnt_fsynctime<p>**Preprocessing**:</p><p>- JAVASCRIPT: `var metrics = JSON.parse(value) return metrics.cnt_fsynctime || metrics.fsynctime_count`</p> |
-|Zookeeper |Zookeeper: Snapshot write time, avg |<p>Average time to write a snapshot.</p> |DEPENDENT |zookeeper.avg_snapshottime<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Zookeeper |Zookeeper: Snapshot writes |<p>Count of performed snapshot writes.</p> |DEPENDENT |zookeeper.cnt_snapshottime<p>**Preprocessing**:</p><p>- JAVASCRIPT: `var metrics = JSON.parse(value) return metrics.snapshottime_count || metrics.cnt_snapshottime`</p> |
-|Zookeeper |Zookeeper: Pending syncs{#SINGLETON} |<p>Number of pending syncs to carry out to ZooKeeper ensemble followers.</p> |DEPENDENT |zookeeper.pending_syncs[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pending_syncs`</p> |
-|Zookeeper |Zookeeper: Quorum size{#SINGLETON} | |DEPENDENT |zookeeper.quorum_size[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.quorum_size`</p> |
-|Zookeeper |Zookeeper: Synced followers{#SINGLETON} |<p>Number of synced followers reported when a node server_state is leader.</p> |DEPENDENT |zookeeper.synced_followers[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.synced_followers`</p> |
-|Zookeeper |Zookeeper: Synced non-voting follower{#SINGLETON} |<p>Number of synced voting followers reported when a node server_state is leader.</p> |DEPENDENT |zookeeper.synced_non_voting_followers[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.synced_non_voting_followers`</p> |
-|Zookeeper |Zookeeper: Synced observers{#SINGLETON} |<p>Number of synced observers.</p> |DEPENDENT |zookeeper.synced_observers[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.synced_observers`</p> |
-|Zookeeper |Zookeeper: Learners{#SINGLETON} |<p>Number of learners.</p> |DEPENDENT |zookeeper.learners[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.learners`</p> |
-|Zookeeper |Zookeeper client {#TYPE} [{#CLIENT}]: Latency, max |<p>The maximum amount of time it takes for the server to respond to a client request.</p> |DEPENDENT |zookeeper.max_latency[{#TYPE},{#CLIENT}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.{#TYPE}.[?(@.remote_socket_address == "{#ADDRESS}")].max_latency.first()`</p> |
-|Zookeeper |Zookeeper client {#TYPE} [{#CLIENT}]: Latency, min |<p>The minimum amount of time it takes for the server to respond to a client request.</p> |DEPENDENT |zookeeper.min_latency[{#TYPE},{#CLIENT}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.{#TYPE}.[?(@.remote_socket_address == "{#ADDRESS}")].min_latency.first()`</p> |
-|Zookeeper |Zookeeper client {#TYPE} [{#CLIENT}]: Latency, avg |<p>The average amount of time it takes for the server to respond to a client request.</p> |DEPENDENT |zookeeper.avg_latency[{#TYPE},{#CLIENT}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.{#TYPE}.[?(@.remote_socket_address == "{#ADDRESS}")].avg_latency.first()`</p> |
-|Zookeeper |Zookeeper client {#TYPE} [{#CLIENT}]: Packets sent per sec |<p>The number of packets sent.</p> |DEPENDENT |zookeeper.packets_sent[{#TYPE},{#CLIENT}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.{#TYPE}.[?(@.remote_socket_address == "{#ADDRESS}")].packets_sent.first()`</p><p>- CHANGE_PER_SECOND |
-|Zookeeper |Zookeeper client {#TYPE} [{#CLIENT}]: Packets received per sec |<p>The number of packets received.</p> |DEPENDENT |zookeeper.packets_received[{#TYPE},{#CLIENT}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.{#TYPE}.[?(@.remote_socket_address == "{#ADDRESS}")].packets_received.first()`</p><p>- CHANGE_PER_SECOND |
-|Zookeeper |Zookeeper client {#TYPE} [{#CLIENT}]: Outstanding requests |<p>The number of queued requests when the server is under load and is receiving more sustained requests than it can process.</p> |DEPENDENT |zookeeper.outstanding_requests[{#TYPE},{#CLIENT}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.{#TYPE}.[?(@.remote_socket_address == "{#ADDRESS}")].outstanding_requests.first()`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|----------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Zabbix_raw_items | Zookeeper: Get server metrics | <p>-</p> | HTTP_AGENT | zookeeper.get_metrics |
+| Zabbix_raw_items | Zookeeper: Get connections stats | <p>Get information on client connections to server. Note, depending on the number of client connections this operation may be expensive (i.e. impact server performance).</p> | HTTP_AGENT | zookeeper.get_connections_stats |
+| Zookeeper | Zookeeper: Server mode | <p>Mode of the server. In an ensemble, this may either be leader or follower. Otherwise, it is standalone</p> | DEPENDENT | zookeeper.server_state<p>**Preprocessing**:</p><p>- JSONPATH: `$.server_state`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Zookeeper | Zookeeper: Uptime | <p>Uptime of Zookeeper server.</p> | DEPENDENT | zookeeper.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.uptime`</p><p>- MULTIPLIER: `0.001`</p> |
+| Zookeeper | Zookeeper: Version | <p>Version of Zookeeper server.</p> | DEPENDENT | zookeeper.version<p>**Preprocessing**:</p><p>- JSONPATH: `$.version`</p><p>- REGEX: `([^,]+)--(.+) \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `3h`</p> |
+| Zookeeper        | Zookeeper: Approximate data size                                 | <p>Data tree size in bytes. The size includes the znode path and its value.</p>                                                                                                  | DEPENDENT  | zookeeper.approximate_data_size<p>**Preprocessing**:</p><p>- JSONPATH: `$.approximate_data_size`</p>                                                                                                |
+| Zookeeper | Zookeeper: File descriptors, max | <p>Maximum number of file descriptors that a zookeeper server can open.</p> | DEPENDENT | zookeeper.max_file_descriptor_count<p>**Preprocessing**:</p><p>- JSONPATH: `$.max_file_descriptor_count`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Zookeeper | Zookeeper: File descriptors, open | <p>Number of file descriptors that a zookeeper server has open.</p> | DEPENDENT | zookeeper.open_file_descriptor_count<p>**Preprocessing**:</p><p>- JSONPATH: `$.open_file_descriptor_count`</p> |
+| Zookeeper | Zookeeper: Outstanding requests | <p>The number of queued requests when the server is under load and is receiving more sustained requests than it can process.</p> | DEPENDENT | zookeeper.outstanding_requests<p>**Preprocessing**:</p><p>- JSONPATH: `$.outstanding_requests`</p> |
+| Zookeeper | Zookeeper: Commit per sec | <p>The number of commits performed per second</p> | DEPENDENT | zookeeper.commit_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.commit_count`</p><p>- CHANGE_PER_SECOND |
+| Zookeeper | Zookeeper: Diff syncs per sec | <p>Number of diff syncs performed per second</p> | DEPENDENT | zookeeper.diff_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.diff_count`</p><p>- CHANGE_PER_SECOND |
+| Zookeeper | Zookeeper: Snap syncs per sec | <p>Number of snap syncs performed per second</p> | DEPENDENT | zookeeper.snap_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.snap_count`</p><p>- CHANGE_PER_SECOND |
+| Zookeeper | Zookeeper: Looking per sec | <p>Rate of transitions into looking state.</p> | DEPENDENT | zookeeper.looking_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.looking_count`</p><p>- CHANGE_PER_SECOND |
+| Zookeeper | Zookeeper: Alive connections | <p>Number of active clients connected to a zookeeper server.</p> | DEPENDENT | zookeeper.num_alive_connections<p>**Preprocessing**:</p><p>- JSONPATH: `$.num_alive_connections`</p> |
+| Zookeeper | Zookeeper: Global sessions | <p>Number of global sessions.</p> | DEPENDENT | zookeeper.global_sessions<p>**Preprocessing**:</p><p>- JSONPATH: `$.global_sessions`</p> |
+| Zookeeper | Zookeeper: Local sessions | <p>Number of local sessions.</p> | DEPENDENT | zookeeper.local_sessions<p>**Preprocessing**:</p><p>- JSONPATH: `$.local_sessions`</p> |
+| Zookeeper | Zookeeper: Drop connections per sec | <p>Rate of connection drops.</p> | DEPENDENT | zookeeper.connection_drop_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.connection_drop_count`</p><p>- CHANGE_PER_SECOND |
+| Zookeeper        | Zookeeper: Rejected connections per sec                          | <p>Rate of rejected connections.</p>                                                                                                                                             | DEPENDENT  | zookeeper.connection_rejected.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.connection_rejected`</p><p>- CHANGE_PER_SECOND                                                                         |
+| Zookeeper        | Zookeeper: Revalidate connections per sec                        | <p>Rate of connection revalidations.</p>                                                                                                                                         | DEPENDENT  | zookeeper.connection_revalidate_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.connection_revalidate_count`</p><p>- CHANGE_PER_SECOND                                                         |
+| Zookeeper | Zookeeper: Revalidate per sec | <p>Rate of revalidations.</p> | DEPENDENT | zookeeper.revalidate_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.revalidate_count`</p><p>- CHANGE_PER_SECOND |
+| Zookeeper | Zookeeper: Latency, max | <p>The maximum amount of time it takes for the server to respond to a client request.</p> | DEPENDENT | zookeeper.max_latency<p>**Preprocessing**:</p><p>- JSONPATH: `$.max_latency`</p> |
+| Zookeeper | Zookeeper: Latency, min | <p>The minimum amount of time it takes for the server to respond to a client request.</p> | DEPENDENT | zookeeper.min_latency<p>**Preprocessing**:</p><p>- JSONPATH: `$.min_latency`</p> |
+| Zookeeper | Zookeeper: Latency, avg | <p>The average amount of time it takes for the server to respond to a client request.</p> | DEPENDENT | zookeeper.avg_latency<p>**Preprocessing**:</p><p>- JSONPATH: `$.avg_latency`</p> |
+| Zookeeper | Zookeeper: Znode count | <p>The number of znodes in the ZooKeeper namespace (the data)</p> | DEPENDENT | zookeeper.znode_count<p>**Preprocessing**:</p><p>- JSONPATH: `$.znode_count`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Zookeeper | Zookeeper: Ephemeral nodes count | <p>Number of ephemeral nodes that a zookeeper server has in its data tree.</p> | DEPENDENT | zookeeper.ephemerals_count<p>**Preprocessing**:</p><p>- JSONPATH: `$.ephemerals_count`</p> |
+| Zookeeper | Zookeeper: Watch count | <p>Number of watches currently set on the local ZooKeeper process.</p> | DEPENDENT | zookeeper.watch_count<p>**Preprocessing**:</p><p>- JSONPATH: `$.watch_count`</p> |
+| Zookeeper | Zookeeper: Packets sent per sec | <p>The number of zookeeper packets sent from a server per second.</p> | DEPENDENT | zookeeper.packets_sent<p>**Preprocessing**:</p><p>- JSONPATH: `$.packets_sent`</p><p>- CHANGE_PER_SECOND |
+| Zookeeper | Zookeeper: Packets received per sec | <p>The number of zookeeper packets received by a server per second.</p> | DEPENDENT | zookeeper.packets_received.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.packets_received`</p><p>- CHANGE_PER_SECOND |
+| Zookeeper | Zookeeper: Bytes received per sec | <p>Number of bytes received per second.</p> | DEPENDENT | zookeeper.bytes_received_count.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.bytes_received_count`</p><p>- CHANGE_PER_SECOND |
+| Zookeeper | Zookeeper: Election time, avg | <p>Time between entering and leaving election.</p> | DEPENDENT | zookeeper.avg_election_time<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Zookeeper        | Zookeeper: Elections                                             | <p>Number of elections that have happened.</p>                                                                                                                                   | DEPENDENT  | zookeeper.cnt_election_time<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p>                                                                               |
+| Zookeeper | Zookeeper: Fsync time, avg | <p>Time to fsync transaction log.</p> | DEPENDENT | zookeeper.avg_fsynctime<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Zookeeper | Zookeeper: Fsync | <p>Count of performed fsyncs.</p> | DEPENDENT | zookeeper.cnt_fsynctime<p>**Preprocessing**:</p><p>- JAVASCRIPT: `var metrics = JSON.parse(value) return metrics.cnt_fsynctime || metrics.fsynctime_count`</p> |
+| Zookeeper | Zookeeper: Snapshot write time, avg | <p>Average time to write a snapshot.</p> | DEPENDENT | zookeeper.avg_snapshottime<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Zookeeper | Zookeeper: Snapshot writes | <p>Count of performed snapshot writes.</p> | DEPENDENT | zookeeper.cnt_snapshottime<p>**Preprocessing**:</p><p>- JAVASCRIPT: `var metrics = JSON.parse(value) return metrics.snapshottime_count || metrics.cnt_snapshottime`</p> |
+| Zookeeper | Zookeeper: Pending syncs{#SINGLETON} | <p>Number of pending syncs to carry out to ZooKeeper ensemble followers.</p> | DEPENDENT | zookeeper.pending_syncs[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pending_syncs`</p> |
+| Zookeeper | Zookeeper: Quorum size{#SINGLETON} | | DEPENDENT | zookeeper.quorum_size[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.quorum_size`</p> |
+| Zookeeper | Zookeeper: Synced followers{#SINGLETON} | <p>Number of synced followers reported when a node server_state is leader.</p> | DEPENDENT | zookeeper.synced_followers[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.synced_followers`</p> |
+| Zookeeper | Zookeeper: Synced non-voting follower{#SINGLETON} | <p>Number of synced non-voting followers reported when a node server_state is leader.</p> | DEPENDENT | zookeeper.synced_non_voting_followers[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.synced_non_voting_followers`</p> |
+| Zookeeper | Zookeeper: Synced observers{#SINGLETON} | <p>Number of synced observers.</p> | DEPENDENT | zookeeper.synced_observers[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.synced_observers`</p> |
+| Zookeeper | Zookeeper: Learners{#SINGLETON} | <p>Number of learners.</p> | DEPENDENT | zookeeper.learners[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.learners`</p> |
+| Zookeeper | Zookeeper client {#TYPE} [{#CLIENT}]: Latency, max | <p>The maximum amount of time it takes for the server to respond to a client request.</p> | DEPENDENT | zookeeper.max_latency[{#TYPE},{#CLIENT}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.{#TYPE}.[?(@.remote_socket_address == "{#ADDRESS}")].max_latency.first()`</p> |
+| Zookeeper | Zookeeper client {#TYPE} [{#CLIENT}]: Latency, min | <p>The minimum amount of time it takes for the server to respond to a client request.</p> | DEPENDENT | zookeeper.min_latency[{#TYPE},{#CLIENT}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.{#TYPE}.[?(@.remote_socket_address == "{#ADDRESS}")].min_latency.first()`</p> |
+| Zookeeper | Zookeeper client {#TYPE} [{#CLIENT}]: Latency, avg | <p>The average amount of time it takes for the server to respond to a client request.</p> | DEPENDENT | zookeeper.avg_latency[{#TYPE},{#CLIENT}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.{#TYPE}.[?(@.remote_socket_address == "{#ADDRESS}")].avg_latency.first()`</p> |
+| Zookeeper | Zookeeper client {#TYPE} [{#CLIENT}]: Packets sent per sec | <p>The number of packets sent.</p> | DEPENDENT | zookeeper.packets_sent[{#TYPE},{#CLIENT}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.{#TYPE}.[?(@.remote_socket_address == "{#ADDRESS}")].packets_sent.first()`</p><p>- CHANGE_PER_SECOND</p> |
+| Zookeeper | Zookeeper client {#TYPE} [{#CLIENT}]: Packets received per sec | <p>The number of packets received.</p> | DEPENDENT | zookeeper.packets_received[{#TYPE},{#CLIENT}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.{#TYPE}.[?(@.remote_socket_address == "{#ADDRESS}")].packets_received.first()`</p><p>- CHANGE_PER_SECOND</p> |
+| Zookeeper | Zookeeper client {#TYPE} [{#CLIENT}]: Outstanding requests | <p>The number of queued requests when the server is under load and is receiving more sustained requests than it can process.</p> | DEPENDENT | zookeeper.outstanding_requests[{#TYPE},{#CLIENT}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.{#TYPE}.[?(@.remote_socket_address == "{#ADDRESS}")].outstanding_requests.first()`</p> |
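
The election, fsync, and snapshot rows above abbreviate their JAVASCRIPT preprocessing as "Text is too long. Please see the template." The underlying pattern is the same version-tolerant lookup spelled out in the Fsync row: parse the metrics JSON and return whichever of the old or new metric names is present. A minimal sketch of such a dependent item in template-export YAML follows; the master item key `zookeeper.get_metrics` is an assumption made for illustration only and is not taken from the template.

```yaml
# Illustrative sketch, not the template's literal definition.
# The master item key below is assumed for the example.
- name: 'Zookeeper: Fsync'
  type: DEPENDENT
  key: zookeeper.cnt_fsynctime
  master_item:
    key: zookeeper.get_metrics
  preprocessing:
    -
      type: JAVASCRIPT
      parameters:
        - |
          var metrics = JSON.parse(value);
          // Older ZooKeeper releases expose cnt_fsynctime, newer ones fsynctime_count.
          return metrics.cnt_fsynctime || metrics.fsynctime_count;
```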
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Zookeeper: Server mode has changed (new mode: {ITEM.VALUE}) |<p>Zookeeper node state has changed. Ack to close.</p> |`{TEMPLATE_NAME:zookeeper.server_state.diff()}=1 and {TEMPLATE_NAME:zookeeper.server_state.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Zookeeper: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:zookeeper.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|Zookeeper: Failed to fetch info data (or no data for 10m) |<p>Zabbix has not received data for items for the last 10 minutes</p> |`{TEMPLATE_NAME:zookeeper.uptime.nodata(10m)}=1` |WARNING |<p>Manual close: YES</p> |
-|Zookeeper: Version has changed (new version: {ITEM.VALUE}) |<p>Zookeeper version has changed. Ack to close.</p> |`{TEMPLATE_NAME:zookeeper.version.diff()}=1 and {TEMPLATE_NAME:zookeeper.version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Zookeeper: Too many file descriptors used (over {$ZOOKEEPER.FILE_DESCRIPTORS.MAX.WARN}% for 5 min) |<p>Number of file descriptors used more than {$ZOOKEEPER.FILE_DESCRIPTORS.MAX.WARN}% of the available number of file descriptors.</p> |`{TEMPLATE_NAME:zookeeper.open_file_descriptor_count.min(5m)} * 100 / {Zookeeper by HTTP:zookeeper.max_file_descriptor_count.last()} > {$ZOOKEEPER.FILE_DESCRIPTORS.MAX.WARN}` |WARNING | |
-|Zookeeper: Too many queued requests (over {$ZOOKEEPER.OUTSTANDING_REQ.MAX.WARN}% for 5 min) |<p>Number of queued requests in the server. This goes up when the server receives more requests than it can process.</p> |`{TEMPLATE_NAME:zookeeper.outstanding_requests.min(5m)}>{$ZOOKEEPER.OUTSTANDING_REQ.MAX.WARN}` |AVERAGE |<p>Manual close: YES</p> |
-|Zookeeper: Too many pending syncs (over {$ZOOKEEPER.PENDING_SYNCS.MAX.WARN}% for 5 min) |<p>-</p> |`{TEMPLATE_NAME:zookeeper.pending_syncs[{#SINGLETON}].min(5m)}>{$ZOOKEEPER.PENDING_SYNCS.MAX.WARN}` |AVERAGE |<p>Manual close: YES</p> |
-|Zookeeper: Too few active followers |<p>The number of followers should equal the total size of your ZooKeeper ensemble, minus 1 (the leader is not included in the follower count). If the ensemble fails to maintain quorum, all automatic failover features are suspended. </p> |`{TEMPLATE_NAME:zookeeper.synced_followers[{#SINGLETON}].last()} < {Zookeeper by HTTP:zookeeper.quorum_size[{#SINGLETON}].last()}-1` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------|
+| Zookeeper: Server mode has changed (new mode: {ITEM.VALUE}) | <p>Zookeeper node state has changed. Ack to close.</p> | `{TEMPLATE_NAME:zookeeper.server_state.diff()}=1 and {TEMPLATE_NAME:zookeeper.server_state.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Zookeeper: has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:zookeeper.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| Zookeeper: Failed to fetch info data (or no data for 10m) | <p>Zabbix has not received data for items for the last 10 minutes</p> | `{TEMPLATE_NAME:zookeeper.uptime.nodata(10m)}=1` | WARNING | <p>Manual close: YES</p> |
+| Zookeeper: Version has changed (new version: {ITEM.VALUE}) | <p>Zookeeper version has changed. Ack to close.</p> | `{TEMPLATE_NAME:zookeeper.version.diff()}=1 and {TEMPLATE_NAME:zookeeper.version.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Zookeeper: Too many file descriptors used (over {$ZOOKEEPER.FILE_DESCRIPTORS.MAX.WARN}% for 5 min) | <p>Number of file descriptors used more than {$ZOOKEEPER.FILE_DESCRIPTORS.MAX.WARN}% of the available number of file descriptors.</p> | `{TEMPLATE_NAME:zookeeper.open_file_descriptor_count.min(5m)} * 100 / {Zookeeper by HTTP:zookeeper.max_file_descriptor_count.last()} > {$ZOOKEEPER.FILE_DESCRIPTORS.MAX.WARN}` | WARNING | |
+| Zookeeper: Too many queued requests (over {$ZOOKEEPER.OUTSTANDING_REQ.MAX.WARN}% for 5 min) | <p>Number of queued requests in the server. This goes up when the server receives more requests than it can process.</p> | `{TEMPLATE_NAME:zookeeper.outstanding_requests.min(5m)}>{$ZOOKEEPER.OUTSTANDING_REQ.MAX.WARN}` | AVERAGE | <p>Manual close: YES</p> |
+| Zookeeper: Too many pending syncs (over {$ZOOKEEPER.PENDING_SYNCS.MAX.WARN}% for 5 min) | <p>-</p> | `{TEMPLATE_NAME:zookeeper.pending_syncs[{#SINGLETON}].min(5m)}>{$ZOOKEEPER.PENDING_SYNCS.MAX.WARN}` | AVERAGE | <p>Manual close: YES</p> |
+| Zookeeper: Too few active followers | <p>The number of followers should equal the total size of your ZooKeeper ensemble, minus 1 (the leader is not included in the follower count). If the ensemble fails to maintain quorum, all automatic failover features are suspended. </p> | `{TEMPLATE_NAME:zookeeper.synced_followers[{#SINGLETON}].last()} < {Zookeeper by HTTP:zookeeper.quorum_size[{#SINGLETON}].last()}-1` | AVERAGE | |
## Feedback
diff --git a/templates/classic/template_app_remote_zabbix_proxy.yaml b/templates/classic/template_app_remote_zabbix_proxy.yaml
index d111814f63a..d03c0cf68b7 100644
--- a/templates/classic/template_app_remote_zabbix_proxy.yaml
+++ b/templates/classic/template_app_remote_zabbix_proxy.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:04:46Z'
+ date: '2021-04-22T11:28:05Z'
groups:
-
name: Templates/Applications
@@ -848,71 +848,73 @@ zabbix_export:
dashboards:
-
name: 'Zabbix proxy health'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '6'
- fields:
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix proxy performance'
+ host: 'Remote Zabbix proxy'
-
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix proxy performance'
- host: 'Remote Zabbix proxy'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '6'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix data gathering process busy %'
+ host: 'Remote Zabbix proxy'
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '6'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix internal process busy %'
+ host: 'Remote Zabbix proxy'
-
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix data gathering process busy %'
- host: 'Remote Zabbix proxy'
- -
- type: GRAPH_CLASSIC
- 'y': '6'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix internal process busy %'
- host: 'Remote Zabbix proxy'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '6'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix cache usage, % free'
- host: 'Remote Zabbix proxy'
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '6'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix cache usage, % free'
+ host: 'Remote Zabbix proxy'
graphs:
-
name: 'Zabbix cache usage, % free'
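
The dashboard hunk above, and the analogous hunks in the template files that follow, all make the single structural change named in the commit message: the dashboard's top-level `widgets:` list is re-nested under a new `pages:` list, so one dashboard can carry several pages of widgets. A condensed before/after sketch of the export structure (widget fields trimmed for brevity):

```yaml
# Before this commit: widgets hang directly off the dashboard.
dashboards:
  -
    name: 'Zabbix proxy health'
    widgets:
      -
        type: GRAPH_CLASSIC
        width: '12'
        height: '6'
---
# After this commit: the same widgets grouped under a dashboard page.
dashboards:
  -
    name: 'Zabbix proxy health'
    pages:
      -
        widgets:
          -
            type: GRAPH_CLASSIC
            width: '12'
            height: '6'
```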
diff --git a/templates/classic/template_app_remote_zabbix_server.yaml b/templates/classic/template_app_remote_zabbix_server.yaml
index 2b6cd879826..0015b3d34b6 100644
--- a/templates/classic/template_app_remote_zabbix_server.yaml
+++ b/templates/classic/template_app_remote_zabbix_server.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:04:50Z'
+ date: '2021-04-22T11:28:20Z'
groups:
-
name: Templates/Applications
@@ -1275,104 +1275,106 @@ zabbix_export:
dashboards:
-
name: 'Zabbix server health'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '6'
- fields:
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix server performance'
+ host: 'Remote Zabbix server'
-
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix server performance'
- host: 'Remote Zabbix server'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '6'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix data gathering process busy %'
+ host: 'Remote Zabbix server'
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '6'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix internal process busy %'
+ host: 'Remote Zabbix server'
-
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix data gathering process busy %'
- host: 'Remote Zabbix server'
- -
- type: GRAPH_CLASSIC
- 'y': '6'
- width: '12'
- height: '5'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '6'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix cache usage, % free'
+ host: 'Remote Zabbix server'
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '11'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Value cache effectiveness'
+ host: 'Remote Zabbix server'
-
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix internal process busy %'
- host: 'Remote Zabbix server'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '6'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix cache usage, % free'
- host: 'Remote Zabbix server'
- -
- type: GRAPH_CLASSIC
- 'y': '11'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Value cache effectiveness'
- host: 'Remote Zabbix server'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '11'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix internal queues'
- host: 'Remote Zabbix server'
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '11'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix internal queues'
+ host: 'Remote Zabbix server'
valuemaps:
-
name: 'Value cache operating mode'
diff --git a/templates/classic/template_app_zabbix_proxy.yaml b/templates/classic/template_app_zabbix_proxy.yaml
index d71fd1a365b..b21294a12e4 100644
--- a/templates/classic/template_app_zabbix_proxy.yaml
+++ b/templates/classic/template_app_zabbix_proxy.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:04:47Z'
+ date: '2021-04-22T11:28:11Z'
groups:
-
name: Templates/Applications
@@ -652,71 +652,73 @@ zabbix_export:
dashboards:
-
name: 'Zabbix proxy health'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '6'
- fields:
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix proxy performance'
+ host: 'Zabbix Proxy'
-
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix proxy performance'
- host: 'Zabbix Proxy'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '6'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix data gathering process busy %'
+ host: 'Zabbix Proxy'
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '6'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix internal process busy %'
+ host: 'Zabbix Proxy'
-
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix data gathering process busy %'
- host: 'Zabbix Proxy'
- -
- type: GRAPH_CLASSIC
- 'y': '6'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix internal process busy %'
- host: 'Zabbix Proxy'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '6'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix cache usage, % used'
- host: 'Zabbix Proxy'
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '6'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix cache usage, % used'
+ host: 'Zabbix Proxy'
graphs:
-
name: 'Zabbix cache usage, % used'
diff --git a/templates/classic/template_app_zabbix_server.yaml b/templates/classic/template_app_zabbix_server.yaml
index ac949655079..0b96eef6b27 100644
--- a/templates/classic/template_app_zabbix_server.yaml
+++ b/templates/classic/template_app_zabbix_server.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:04:45Z'
+ date: '2021-04-22T11:27:59Z'
groups:
-
name: Templates/Applications
@@ -841,104 +841,106 @@ zabbix_export:
dashboards:
-
name: 'Zabbix server health'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '6'
- fields:
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix server performance'
+ host: 'Zabbix Server'
-
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix server performance'
- host: 'Zabbix Server'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '6'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix data gathering process busy %'
+ host: 'Zabbix Server'
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '6'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix internal process busy %'
+ host: 'Zabbix Server'
-
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix data gathering process busy %'
- host: 'Zabbix Server'
- -
- type: GRAPH_CLASSIC
- 'y': '6'
- width: '12'
- height: '5'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '6'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix cache usage, % used'
+ host: 'Zabbix Server'
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '11'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Value cache effectiveness'
+ host: 'Zabbix Server'
-
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix internal process busy %'
- host: 'Zabbix Server'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '6'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix cache usage, % used'
- host: 'Zabbix Server'
- -
- type: GRAPH_CLASSIC
- 'y': '11'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Value cache effectiveness'
- host: 'Zabbix Server'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '11'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Zabbix internal queues'
- host: 'Zabbix Server'
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '11'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Zabbix internal queues'
+ host: 'Zabbix Server'
valuemaps:
-
name: 'Value cache operating mode'
diff --git a/templates/classic/template_os_aix.yaml b/templates/classic/template_os_aix.yaml
index f82fb14df17..647dafb1866 100644
--- a/templates/classic/template_os_aix.yaml
+++ b/templates/classic/template_os_aix.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:05:35Z'
+ date: '2021-04-22T11:26:34Z'
groups:
-
name: 'Templates/Operating systems'
@@ -694,71 +694,73 @@ zabbix_export:
dashboards:
-
name: 'System performance'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU load'
+ host: AIX
-
- type: GRAPH
- name: graphid
- value:
- name: 'CPU load'
- host: AIX
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'CPU jumps'
- host: AIX
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU jumps'
+ host: AIX
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU utilization'
+ host: AIX
-
- type: GRAPH
- name: graphid
- value:
- name: 'CPU utilization'
- host: AIX
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Memory usage'
- host: AIX
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Memory usage'
+ host: AIX
graphs:
-
name: 'CPU jumps'
diff --git a/templates/classic/template_os_freebsd.yaml b/templates/classic/template_os_freebsd.yaml
index 41b1316b7db..4e8f1f8f64a 100644
--- a/templates/classic/template_os_freebsd.yaml
+++ b/templates/classic/template_os_freebsd.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:05:35Z'
+ date: '2021-04-22T11:26:34Z'
groups:
-
name: 'Templates/Operating systems'
@@ -554,71 +554,73 @@ zabbix_export:
dashboards:
-
name: 'System performance'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU load'
+ host: FreeBSD
-
- type: GRAPH
- name: graphid
- value:
- name: 'CPU load'
- host: FreeBSD
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'CPU utilization'
- host: FreeBSD
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '7'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU utilization'
+ host: FreeBSD
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '7'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Memory usage'
+ host: FreeBSD
-
- type: GRAPH
- name: graphid
- value:
- name: 'Memory usage'
- host: FreeBSD
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '7'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Swap usage'
- host: FreeBSD
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '7'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Swap usage'
+ host: FreeBSD
graphs:
-
name: 'CPU jumps'
diff --git a/templates/classic/template_os_hp-ux.yaml b/templates/classic/template_os_hp-ux.yaml
index 904d1204617..187d7c15313 100644
--- a/templates/classic/template_os_hp-ux.yaml
+++ b/templates/classic/template_os_hp-ux.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:05:34Z'
+ date: '2021-04-22T11:26:33Z'
groups:
-
name: 'Templates/Operating systems'
@@ -382,54 +382,56 @@ zabbix_export:
dashboards:
-
name: 'System performance'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU load'
+ host: HP-UX
-
- type: GRAPH
- name: graphid
- value:
- name: 'CPU load'
- host: HP-UX
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'CPU utilization'
- host: HP-UX
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU utilization'
+ host: HP-UX
-
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Memory usage'
- host: HP-UX
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Memory usage'
+ host: HP-UX
graphs:
-
name: 'CPU load'
diff --git a/templates/classic/template_os_mac_os_x.yaml b/templates/classic/template_os_mac_os_x.yaml
index 3c4870a19d9..d7d88b953f9 100644
--- a/templates/classic/template_os_mac_os_x.yaml
+++ b/templates/classic/template_os_mac_os_x.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:04:45Z'
+ date: '2021-04-22T11:27:57Z'
groups:
-
name: 'Templates/Operating systems'
@@ -357,38 +357,40 @@ zabbix_export:
dashboards:
-
name: 'System performance'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '24'
- height: '5'
- fields:
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU load'
+ host: 'Mac OS X'
-
- type: GRAPH
- name: graphid
- value:
- name: 'CPU load'
- host: 'Mac OS X'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Memory usage'
- host: 'Mac OS X'
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Memory usage'
+ host: 'Mac OS X'
graphs:
-
name: 'CPU load'
diff --git a/templates/classic/template_os_openbsd.yaml b/templates/classic/template_os_openbsd.yaml
index a7f94a57a91..203ae90751a 100644
--- a/templates/classic/template_os_openbsd.yaml
+++ b/templates/classic/template_os_openbsd.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:05:34Z'
+ date: '2021-04-22T11:26:32Z'
groups:
-
name: 'Templates/Operating systems'
@@ -554,104 +554,106 @@ zabbix_export:
dashboards:
-
name: 'System performance'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU load'
+ host: OpenBSD
-
- type: GRAPH
- name: graphid
- value:
- name: 'CPU load'
- host: OpenBSD
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU utilization'
+ host: OpenBSD
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '7'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Memory usage'
+ host: OpenBSD
-
- type: GRAPH
- name: graphid
- value:
- name: 'CPU utilization'
- host: OpenBSD
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '7'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '7'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Swap usage'
+ host: OpenBSD
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '1'
+ -
+ type: ITEM
+ name: itemid
+ value:
+ key: 'proc.num[]'
+ host: OpenBSD
-
- type: GRAPH
- name: graphid
- value:
- name: 'Memory usage'
- host: OpenBSD
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '7'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Swap usage'
- host: OpenBSD
- -
- type: GRAPH_CLASSIC
- 'y': '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '1'
- -
- type: ITEM
- name: itemid
- value:
- key: 'proc.num[]'
- host: OpenBSD
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '1'
- -
- type: ITEM
- name: itemid
- value:
- key: 'proc.num[,,run]'
- host: OpenBSD
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '1'
+ -
+ type: ITEM
+ name: itemid
+ value:
+ key: 'proc.num[,,run]'
+ host: OpenBSD
graphs:
-
name: 'CPU jumps'
diff --git a/templates/classic/template_os_solaris.yaml b/templates/classic/template_os_solaris.yaml
index c3a16fd1de3..4b385544d88 100644
--- a/templates/classic/template_os_solaris.yaml
+++ b/templates/classic/template_os_solaris.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:05:36Z'
+ date: '2021-04-22T11:26:35Z'
groups:
-
name: 'Templates/Operating systems'
@@ -531,104 +531,106 @@ zabbix_export:
dashboards:
-
name: 'System performance'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU load'
+ host: Solaris
-
- type: GRAPH
- name: graphid
- value:
- name: 'CPU load'
- host: Solaris
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU utilization'
+ host: Solaris
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '7'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Memory usage'
+ host: Solaris
-
- type: GRAPH
- name: graphid
- value:
- name: 'CPU utilization'
- host: Solaris
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '7'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '7'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Swap usage'
+ host: Solaris
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '1'
+ -
+ type: ITEM
+ name: itemid
+ value:
+ key: 'proc.num[]'
+ host: Solaris
-
- type: GRAPH
- name: graphid
- value:
- name: 'Memory usage'
- host: Solaris
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '7'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Swap usage'
- host: Solaris
- -
- type: GRAPH_CLASSIC
- 'y': '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '1'
- -
- type: ITEM
- name: itemid
- value:
- key: 'proc.num[]'
- host: Solaris
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '1'
- -
- type: ITEM
- name: itemid
- value:
- key: 'proc.num[,,run]'
- host: Solaris
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '1'
+ -
+ type: ITEM
+ name: itemid
+ value:
+ key: 'proc.num[,,run]'
+ host: Solaris
graphs:
-
name: 'CPU jumps'
diff --git a/templates/db/cassandra_jmx/README.md b/templates/db/cassandra_jmx/README.md
index b0498bbaa95..3ec52ebcbae 100644
--- a/templates/db/cassandra_jmx/README.md
+++ b/templates/db/cassandra_jmx/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
Official JMX Template for Apache Cassandra DBMS.
@@ -14,7 +14,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/jmx) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/jmx) for basic instructions.
This template works with standalone and cluster instances.
Metrics are collected by JMX.
@@ -30,14 +30,14 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CASSANDRA.KEY_SPACE.MATCHES} |<p>Filter of discoverable key spaces</p> |`.*` |
-|{$CASSANDRA.KEY_SPACE.NOT_MATCHES} |<p>Filter to exclude discovered key spaces</p> |`(system|system_auth|system_distributed|system_schema)` |
-|{$CASSANDRA.PASSWORD} |<p>-</p> |`zabbix` |
-|{$CASSANDRA.PENDING_TASKS.MAX.HIGH} |<p>-</p> |`500` |
-|{$CASSANDRA.PENDING_TASKS.MAX.WARN} |<p>-</p> |`350` |
-|{$CASSANDRA.USER} |<p>-</p> |`zabbix` |
+| Name | Description | Default |
+|-------------------------------------|------------------------------------------------|---------------------------------------------------------|
+| {$CASSANDRA.KEY_SPACE.MATCHES} | <p>Filter of discoverable key spaces</p> | `.*` |
+| {$CASSANDRA.KEY_SPACE.NOT_MATCHES} | <p>Filter to exclude discovered key spaces</p> | `(system|system_auth|system_distributed|system_schema)` |
+| {$CASSANDRA.PASSWORD} | <p>-</p> | `zabbix` |
+| {$CASSANDRA.PENDING_TASKS.MAX.HIGH} | <p>-</p> | `500` |
+| {$CASSANDRA.PENDING_TASKS.MAX.WARN} | <p>-</p> | `350` |
+| {$CASSANDRA.USER} | <p>-</p> | `zabbix` |
## Template links
@@ -45,122 +45,122 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Tables |<p>Info about keyspaces and tables</p> |JMX |jmx.discovery[beans,"org.apache.cassandra.metrics:type=Table,keyspace=*,scope=*,name=ReadLatency"]<p>**Filter**:</p>AND <p>- A: {#JMXKEYSPACE} MATCHES_REGEX `{$CASSANDRA.KEY_SPACE.MATCHES}`</p><p>- B: {#JMXKEYSPACE} NOT_MATCHES_REGEX `{$CASSANDRA.KEY_SPACE.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|--------|----------------------------------------|------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Tables | <p>Info about keyspaces and tables</p> | JMX | jmx.discovery[beans,"org.apache.cassandra.metrics:type=Table,keyspace=*,scope=*,name=ReadLatency"]<p>**Filter**:</p>AND <p>- A: {#JMXKEYSPACE} MATCHES_REGEX `{$CASSANDRA.KEY_SPACE.MATCHES}`</p><p>- B: {#JMXKEYSPACE} NOT_MATCHES_REGEX `{$CASSANDRA.KEY_SPACE.NOT_MATCHES}`</p> |
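
For orientation, the filter in the "Tables" discovery row combines the two keyspace macros from the macros table as LLD filter conditions on {#JMXKEYSPACE}. A sketch of how that looks in template-export YAML, assuming the usual export layout (the rule's update interval and other settings are omitted):

```yaml
# Illustrative sketch of the "Tables" rule's filter; the key and macro values
# come from the tables above, the surrounding layout is assumed, not quoted.
discovery_rules:
  -
    name: Tables
    type: JMX
    key: 'jmx.discovery[beans,"org.apache.cassandra.metrics:type=Table,keyspace=*,scope=*,name=ReadLatency"]'
    filter:
      evaltype: AND
      conditions:
        -
          macro: '{#JMXKEYSPACE}'
          value: '{$CASSANDRA.KEY_SPACE.MATCHES}'
          formulaid: A
        -
          macro: '{#JMXKEYSPACE}'
          operator: NOT_MATCHES_REGEX
          value: '{$CASSANDRA.KEY_SPACE.NOT_MATCHES}'
          formulaid: B
```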
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Cassandra |Cluster: Nodes down |<p>-</p> |JMX |jmx["org.apache.cassandra.net:type=FailureDetector","DownEndpointCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Cassandra |Cluster: Nodes up |<p>-</p> |JMX |jmx["org.apache.cassandra.net:type=FailureDetector","UpEndpointCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Cassandra |Cluster: Name |<p>-</p> |JMX |jmx["org.apache.cassandra.db:type=StorageService","ClusterName"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Cassandra |Version |<p>-</p> |JMX |jmx["org.apache.cassandra.db:type=StorageService","ReleaseVersion"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Cassandra |Dropped messages: Write (Mutation) |<p>Number of dropped regular writes messages.</p> |JMX |jmx["org.apache.cassandra.metrics:type=DroppedMessage,scope=MUTATION,name=Dropped","Count"] |
-|Cassandra |Dropped messages: Read |<p>Number of dropped regular reads messages.</p> |JMX |jmx["org.apache.cassandra.metrics:type=DroppedMessage,scope=READ,name=Dropped","Count"] |
-|Cassandra |Storage: Used (bytes) |<p>Size, in bytes, of the on disk data size this node manages.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Storage,name=Load","Count"] |
-|Cassandra |Storage: Errors |<p>Number of internal exceptions caught. Under normal exceptions this should be zero.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Storage,name=Exceptions","Count"] |
-|Cassandra |Storage: Hints |<p>Number of hint messages written to this node since [re]start. Includes one entry for each host to be hinted per hint.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Storage,name=TotalHints","Count"] |
-|Cassandra |Compaction: Number of completed tasks |<p>Number of completed compactions since server [re]start.</p> |JMX |jmx["org.apache.cassandra.metrics:name=CompletedTasks,type=Compaction","Value"] |
-|Cassandra |Compaction: Total compactions completed |<p>Throughput of completed compactions since server [re]start.</p> |JMX |jmx["org.apache.cassandra.metrics:name=TotalCompactionsCompleted,type=Compaction","Count"] |
-|Cassandra |Compaction: Pending tasks |<p>Estimated number of compactions remaining to perform.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Compaction,name=PendingTasks","Value"] |
-|Cassandra |Commitlog: Pending tasks |<p>Number of commit log messages written but yet to be fsync’d.</p> |JMX |jmx["org.apache.cassandra.metrics:name=PendingTasks,type=CommitLog","Value"] |
-|Cassandra |Commitlog: Total size |<p>Current size, in bytes, used by all the commit log segments.</p> |JMX |jmx["org.apache.cassandra.metrics:name=TotalCommitLogSize,type=CommitLog","Value"] |
-|Cassandra |Latency: Read median |<p>Latency read from disk in milliseconds - median.</p> |JMX |jmx["org.apache.cassandra.metrics:name=ReadLatency,type=Table","50thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |Latency: Read 75 percentile |<p>Latency read from disk in milliseconds - p75.</p> |JMX |jmx["org.apache.cassandra.metrics:name=ReadLatency,type=Table","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |Latency: Read 95 percentile |<p>Latency read from disk in milliseconds - p95.</p> |JMX |jmx["org.apache.cassandra.metrics:name=ReadLatency,type=Table","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |Latency: Write median |<p>Latency write to disk in milliseconds - median.</p> |JMX |jmx["org.apache.cassandra.metrics:name=WriteLatency,type=Table","50thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |Latency: Write 75 percentile |<p>Latency write to disk in milliseconds - p75.</p> |JMX |jmx["org.apache.cassandra.metrics:name=WriteLatency,type=Table","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |Latency: Write 95 percentile |<p>Latency write to disk in milliseconds - p95.</p> |JMX |jmx["org.apache.cassandra.metrics:name=WriteLatency,type=Table","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |Latency: Client request read median |<p>Total latency serving data to clients in milliseconds - median.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency","50thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |Latency: Client request read 75 percentile |<p>Total latency serving data to clients in milliseconds - p75.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |Latency: Client request read 95 percentile |<p>Total latency serving data to clients in milliseconds - p95.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |Latency: Client request write median |<p>Total latency serving write requests from clients in milliseconds - median.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency","50thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |Latency: Client request write 75 percentile |<p>Total latency serving write requests from clients in milliseconds - p75.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |Latency: Client request write 95 percentile |<p>Total latency serving write requests from clients in milliseconds - p95.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |KeyCache: Capacity |<p>Cache capacity in bytes.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Capacity","Value"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Cassandra |KeyCache: Entries |<p>Total number of cache entries.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Entries","Value"] |
-|Cassandra |KeyCache: HitRate |<p>All time cache hit rate.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=HitRate","Value"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `100`</p> |
-|Cassandra |KeyCache: Hits per second |<p>Rate of cache hits.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Hits","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Cassandra |KeyCache: requests per second |<p>Rate of cache requests.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Requests","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Cassandra |KeyCache: Size |<p>Total size of occupied cache, in bytes.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Size","Value"] |
-|Cassandra |Client connections: Native |<p>Number of clients connected to this nodes native protocol server.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Client,name=connectedNativeClients","Value"] |
-|Cassandra |Client connections: Trifts |<p>Number of connected to this nodes thrift clients.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Client,name=connectedThriftClients","Value"] |
-|Cassandra |Client request: Read per second |<p>The number of client requests per second.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Cassandra |Client request: Write per second |<p>The number of local write requests per second.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Cassandra |Client request: Write Timeouts |<p>Number of write requests timeouts encountered.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Timeouts","Count"] |
-|Cassandra |Thread pool.MutationStage: Pending tasks |<p>Number of queued tasks queued up on this pool.</p><p>MutationStage: Responsible for writes (exclude materialized and counter writes).</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=MutationStage,name=PendingTasks","Value"] |
-|Cassandra |Thread pool MutationStage: Currently blocked task |<p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>MutationStage: Responsible for writes (exclude materialized and counter writes).</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=MutationStage,name=CurrentlyBlockedTasks","Count"] |
-|Cassandra |Thread pool MutationStage: Total blocked tasks |<p>Number of tasks that were blocked due to queue saturation.</p><p>MutationStage: Responsible for writes (exclude materialized and counter writes).</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=MutationStage,name=TotalBlockedTasks","Count"] |
-|Cassandra |Thread pool CounterMutationStage: Pending tasks |<p>Number of queued tasks queued up on this pool.</p><p>CounterMutationStage: Responsible for counter writes.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=CounterMutationStage,name=PendingTasks","Value"] |
-|Cassandra |Thread pool CounterMutationStage: Currently blocked task |<p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>CounterMutationStage: Responsible for counter writes.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=CounterMutationStage,name=CurrentlyBlockedTasks","Count"] |
-|Cassandra |Thread pool CounterMutationStage: Total blocked tasks |<p>Number of tasks that were blocked due to queue saturation.</p><p>CounterMutationStage: Responsible for counter writes.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=CounterMutationStage,name=TotalBlockedTasks","Count"] |
-|Cassandra |Thread pool ReadStage: Pending tasks |<p>Number of queued tasks queued up on this pool.</p><p>ReadStage: Local reads run on this thread pool.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=PendingTasks","Value"] |
-|Cassandra |Thread pool ReadStage: Currently blocked task |<p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>ReadStage: Local reads run on this thread pool.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=CurrentlyBlockedTasks","Count"] |
-|Cassandra |Thread pool ReadStage: Total blocked tasks |<p>Number of tasks that were blocked due to queue saturation.</p><p>ReadStage: Local reads run on this thread pool.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=TotalBlockedTasks","Count"] |
-|Cassandra |Thread pool ViewMutationStage: Pending tasks |<p>Number of queued tasks queued up on this pool.</p><p>ViewMutationStage: Responsible for materialized view writes.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ViewMutationStage,name=PendingTasks","Value"] |
-|Cassandra |Thread pool ViewMutationStage: Currently blocked task |<p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>ViewMutationStage: Responsible for materialized view writes.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ViewMutationStage,name=CurrentlyBlockedTasks","Count"] |
-|Cassandra |Thread pool ViewMutationStage: Total blocked tasks |<p>Number of tasks that were blocked due to queue saturation.</p><p>ViewMutationStage: Responsible for materialized view writes.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ViewMutationStage,name=TotalBlockedTasks","Count"] |
-|Cassandra |Thread pool MemtableFlushWriter: Pending tasks |<p>Number of queued tasks queued up on this pool.</p><p>MemtableFlushWriter: Writes memtables to disk.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MemtableFlushWriter,name=PendingTasks","Value"] |
-|Cassandra |Thread pool MemtableFlushWriter: Currently blocked task |<p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>MemtableFlushWriter: Writes memtables to disk.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MemtableFlushWriter,name=CurrentlyBlockedTasks","Count"] |
-|Cassandra |Thread pool MemtableFlushWriter: Total blocked tasks |<p>Number of tasks that were blocked due to queue saturation.</p><p>MemtableFlushWriter: Writes memtables to disk.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MemtableFlushWriter,name=TotalBlockedTasks","Count"] |
-|Cassandra |Thread pool HintsDispatcher: Pending tasks |<p>Number of queued tasks queued up on this pool.</p><p>HintsDispatcher: Performs hinted handoff.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=HintsDispatcher,name=PendingTasks","Value"] |
-|Cassandra |Thread pool HintsDispatcher: Currently blocked task |<p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>HintsDispatcher: Performs hinted handoff.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=HintsDispatcher,name=CurrentlyBlockedTasks","Count"] |
-|Cassandra |Thread pool HintsDispatcher: Total blocked tasks |<p>Number of tasks that were blocked due to queue saturation.</p><p>HintsDispatcher: Performs hinted handoff.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=HintsDispatcher,name=TotalBlockedTasks","Count"] |
-|Cassandra |Thread pool MemtablePostFlush: Pending tasks |<p>Number of queued tasks queued up on this pool.</p><p>MemtablePostFlush: Cleans up commit log after memtable is written to disk.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MemtablePostFlush,name=PendingTasks","Value"] |
-|Cassandra |Thread pool MemtablePostFlush: Currently blocked task |<p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>MemtablePostFlush: Cleans up commit log after memtable is written to disk.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MemtablePostFlush,name=CurrentlyBlockedTasks","Count"] |
-|Cassandra |Thread pool MemtablePostFlush: Total blocked tasks |<p>Number of tasks that were blocked due to queue saturation.</p><p>MemtablePostFlush: Cleans up commit log after memtable is written to disk.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MemtablePostFlush,name=TotalBlockedTasks","Count"] |
-|Cassandra |Thread pool MigrationStage: Pending tasks |<p>Number of queued tasks queued up on this pool.</p><p>MigrationStage: Runs schema migrations.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MigrationStage,name=PendingTasks","Value"] |
-|Cassandra |Thread pool MigrationStage: Currently blocked task |<p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>MigrationStage: Runs schema migrations.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MigrationStage,name=CurrentlyBlockedTasks","Count"] |
-|Cassandra |Thread pool MigrationStage: Total blocked tasks |<p>Number of tasks that were blocked due to queue saturation.</p><p>MigrationStage: Runs schema migrations.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MigrationStage,name=TotalBlockedTasks","Count"] |
-|Cassandra |Thread pool MiscStage: Pending tasks |<p>Number of queued tasks queued up on this pool.</p><p>MiscStage: Misceleneous tasks run here.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MiscStage,name=PendingTasks","Value"] |
-|Cassandra |Thread pool MiscStage: Currently blocked task |<p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>MiscStage: Misceleneous tasks run here.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MiscStage,name=CurrentlyBlockedTasks","Count"] |
-|Cassandra |Thread pool MiscStage: Total blocked tasks |<p>Number of tasks that were blocked due to queue saturation.</p><p>MiscStage: Misceleneous tasks run here.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MiscStage,name=TotalBlockedTasks","Count"] |
-|Cassandra |Thread pool SecondaryIndexManagement: Pending tasks |<p>Number of queued tasks queued up on this pool.</p><p>SecondaryIndexManagement: Performs updates to secondary indexes.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=SecondaryIndexManagement,name=PendingTasks","Value"] |
-|Cassandra |Thread pool SecondaryIndexManagement: Currently blocked task |<p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>SecondaryIndexManagement: Performs updates to secondary indexes.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=SecondaryIndexManagement,name=CurrentlyBlockedTasks","Count"] |
-|Cassandra |Thread pool SecondaryIndexManagement: Total blocked tasks |<p>Number of tasks that were blocked due to queue saturation.</p><p>SecondaryIndexManagement: Performs updates to secondary indexes.</p> |JMX |jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=SecondaryIndexManagement,name=TotalBlockedTasks","Count"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: SS Tables per read 75 percentile |<p>The number of SSTable data files accessed per read - p75.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=SSTablesPerReadHistogram","75thPercentile"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: SS Tables per read 95 percentile |<p>The number of SSTable data files accessed per read - p95.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=SSTablesPerReadHistogram","95thPercentile"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Tombstone scanned 75 percentile |<p>Number of tombstones scanned per read - p75.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=TombstoneScannedHistogram","75thPercentile"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Tombstone scanned 95 percentile |<p>Number of tombstones scanned per read - p95.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=TombstoneScannedHistogram","95thPercentile"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Waiting on free memtable space 75 percentile |<p>The time spent waiting for free memtable space either on- or off-heap - p75.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=WaitingOnFreeMemtableSpace","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Waiting on free memtable space95 percentile |<p>The time spent waiting for free memtable space either on- or off-heap - p95.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=WaitingOnFreeMemtableSpace","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Col update time delta75 percentile |<p>The column update time delta - p75.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=ColUpdateTimeDeltaHistogram","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Col update time delta 95 percentile |<p>The column update time delta - p95.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=ColUpdateTimeDeltaHistogram","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Bloom filter false ratio |<p>The ratio of Bloom filter false positives to total checks.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=BloomFilterFalseRatio","Value"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Compression ratio |<p>The compression ratio for all SSTables.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=CompressionRatio","Value"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: KeyCache hit rate |<p>The key cache hit rate.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=KeyCacheHitRate","Value"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Live SS Table |<p>Number of "live" (in use) SSTables.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=LiveSSTableCount","Value"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Max sartition size |<p>The size of the largest compacted partition.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=MaxPartitionSize","Value"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Mean partition size |<p>The average size of compacted partition.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=MeanPartitionSize","Value"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Pending compactions |<p>The number of pending compactions.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=PendingCompactions","Value"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Snapshots size |<p>The disk space truly used by snapshots.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=SnapshotsSize","Value"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Compaction bytes written |<p>The amount of data that was compacted since (re)start.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=CompactionBytesWritten","Count"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Bytes flushed |<p>The amount of data that was flushed since (re)start.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=BytesFlushed","Count"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Pending flushes |<p>The number of pending flushes.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=PendingFlushes","Count"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Live disk space used |<p>The disk space used by "live" SSTables (only counts in use files).</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=LiveDiskSpaceUsed","Count"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Disk space used |<p>Disk space used.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=TotalDiskSpaceUsed","Count"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Out of row cache hits |<p>The number of row cache hits that do not satisfy the query filter and went to disk.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=RowCacheHitOutOfRange","Count"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Row cache hits |<p>The number of row cache hits.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=RowCacheHit","Count"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Row cache misses |<p>The number of table row cache misses.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=RowCacheMiss","Count"] |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Read latency 75 percentile |<p>Latency read from disk in milliseconds.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=ReadLatency","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Read latency 95 percentile |<p>Latency read from disk in milliseconds.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=ReadLatency","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Read per second |<p>The number of client requests per second.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=ReadLatency","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Write latency 75 percentile |<p>Latency write to disk in milliseconds.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=WriteLatency","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Write latency 95 percentile |<p>Latency write to disk in milliseconds.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=WriteLatency","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
-|Cassandra |{#JMXKEYSPACE}.{#JMXSCOPE}: Write per second |<p>The number of local write requests per second.</p> |JMX |jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=WriteLatency","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Group | Name | Description | Type | Key and additional info |
+|-----------|--------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Cassandra | Cluster: Nodes down | <p>-</p> | JMX | jmx["org.apache.cassandra.net:type=FailureDetector","DownEndpointCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Cassandra | Cluster: Nodes up | <p>-</p> | JMX | jmx["org.apache.cassandra.net:type=FailureDetector","UpEndpointCount"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Cassandra | Cluster: Name | <p>-</p> | JMX | jmx["org.apache.cassandra.db:type=StorageService","ClusterName"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Cassandra | Version | <p>-</p> | JMX | jmx["org.apache.cassandra.db:type=StorageService","ReleaseVersion"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Cassandra | Dropped messages: Write (Mutation) | <p>Number of dropped regular write messages.</p> | JMX | jmx["org.apache.cassandra.metrics:type=DroppedMessage,scope=MUTATION,name=Dropped","Count"] |
+| Cassandra | Dropped messages: Read | <p>Number of dropped regular read messages.</p> | JMX | jmx["org.apache.cassandra.metrics:type=DroppedMessage,scope=READ,name=Dropped","Count"] |
+| Cassandra | Storage: Used (bytes) | <p>Size, in bytes, of the on disk data size this node manages.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Storage,name=Load","Count"] |
+| Cassandra | Storage: Errors | <p>Number of internal exceptions caught. Under normal conditions this should be zero.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Storage,name=Exceptions","Count"] |
+| Cassandra | Storage: Hints | <p>Number of hint messages written to this node since [re]start. Includes one entry for each host to be hinted per hint.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Storage,name=TotalHints","Count"] |
+| Cassandra | Compaction: Number of completed tasks | <p>Number of completed compactions since server [re]start.</p> | JMX | jmx["org.apache.cassandra.metrics:name=CompletedTasks,type=Compaction","Value"] |
+| Cassandra | Compaction: Total compactions completed | <p>Throughput of completed compactions since server [re]start.</p> | JMX | jmx["org.apache.cassandra.metrics:name=TotalCompactionsCompleted,type=Compaction","Count"] |
+| Cassandra | Compaction: Pending tasks | <p>Estimated number of compactions remaining to perform.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Compaction,name=PendingTasks","Value"] |
+| Cassandra | Commitlog: Pending tasks | <p>Number of commit log messages written but yet to be fsync’d.</p> | JMX | jmx["org.apache.cassandra.metrics:name=PendingTasks,type=CommitLog","Value"] |
+| Cassandra | Commitlog: Total size | <p>Current size, in bytes, used by all the commit log segments.</p> | JMX | jmx["org.apache.cassandra.metrics:name=TotalCommitLogSize,type=CommitLog","Value"] |
+| Cassandra | Latency: Read median | <p>Latency read from disk in milliseconds - median.</p> | JMX | jmx["org.apache.cassandra.metrics:name=ReadLatency,type=Table","50thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | Latency: Read 75 percentile | <p>Latency read from disk in milliseconds - p75.</p> | JMX | jmx["org.apache.cassandra.metrics:name=ReadLatency,type=Table","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | Latency: Read 95 percentile | <p>Latency read from disk in milliseconds - p95.</p> | JMX | jmx["org.apache.cassandra.metrics:name=ReadLatency,type=Table","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | Latency: Write median | <p>Latency write to disk in milliseconds - median.</p> | JMX | jmx["org.apache.cassandra.metrics:name=WriteLatency,type=Table","50thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | Latency: Write 75 percentile | <p>Latency write to disk in milliseconds - p75.</p> | JMX | jmx["org.apache.cassandra.metrics:name=WriteLatency,type=Table","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | Latency: Write 95 percentile | <p>Latency write to disk in milliseconds - p95.</p> | JMX | jmx["org.apache.cassandra.metrics:name=WriteLatency,type=Table","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | Latency: Client request read median | <p>Total latency serving data to clients in milliseconds - median.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency","50thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | Latency: Client request read 75 percentile | <p>Total latency serving data to clients in milliseconds - p75.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | Latency: Client request read 95 percentile | <p>Total latency serving data to clients in milliseconds - p95.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | Latency: Client request write median | <p>Total latency serving write requests from clients in milliseconds - median.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency","50thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | Latency: Client request write 75 percentile | <p>Total latency serving write requests from clients in milliseconds - p75.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | Latency: Client request write 95 percentile | <p>Total latency serving write requests from clients in milliseconds - p95.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | KeyCache: Capacity | <p>Cache capacity in bytes.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Capacity","Value"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Cassandra | KeyCache: Entries | <p>Total number of cache entries.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Entries","Value"] |
+| Cassandra | KeyCache: HitRate | <p>All time cache hit rate.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=HitRate","Value"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `100`</p> |
+| Cassandra | KeyCache: Hits per second | <p>Rate of cache hits.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Hits","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Cassandra | KeyCache: Requests per second | <p>Rate of cache requests.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Requests","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Cassandra | KeyCache: Size | <p>Total size of occupied cache, in bytes.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Size","Value"] |
+| Cassandra | Client connections: Native | <p>Number of clients connected to this node's native protocol server.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Client,name=connectedNativeClients","Value"] |
+| Cassandra | Client connections: Thrift | <p>Number of Thrift clients connected to this node.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Client,name=connectedThriftClients","Value"] |
+| Cassandra | Client request: Read per second | <p>The number of client requests per second.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Cassandra | Client request: Write per second | <p>The number of local write requests per second.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Cassandra | Client request: Write Timeouts | <p>Number of write request timeouts encountered.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Timeouts","Count"] |
+| Cassandra | Thread pool MutationStage: Pending tasks | <p>Number of queued tasks queued up on this pool.</p><p>MutationStage: Responsible for writes (excludes materialized view and counter writes).</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=MutationStage,name=PendingTasks","Value"] |
+| Cassandra | Thread pool MutationStage: Currently blocked task | <p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>MutationStage: Responsible for writes (excludes materialized view and counter writes).</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=MutationStage,name=CurrentlyBlockedTasks","Count"] |
+| Cassandra | Thread pool MutationStage: Total blocked tasks | <p>Number of tasks that were blocked due to queue saturation.</p><p>MutationStage: Responsible for writes (excludes materialized view and counter writes).</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=MutationStage,name=TotalBlockedTasks","Count"] |
+| Cassandra | Thread pool CounterMutationStage: Pending tasks | <p>Number of queued tasks queued up on this pool.</p><p>CounterMutationStage: Responsible for counter writes.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=CounterMutationStage,name=PendingTasks","Value"] |
+| Cassandra | Thread pool CounterMutationStage: Currently blocked task | <p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>CounterMutationStage: Responsible for counter writes.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=CounterMutationStage,name=CurrentlyBlockedTasks","Count"] |
+| Cassandra | Thread pool CounterMutationStage: Total blocked tasks | <p>Number of tasks that were blocked due to queue saturation.</p><p>CounterMutationStage: Responsible for counter writes.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=CounterMutationStage,name=TotalBlockedTasks","Count"] |
+| Cassandra | Thread pool ReadStage: Pending tasks | <p>Number of queued tasks queued up on this pool.</p><p>ReadStage: Local reads run on this thread pool.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=PendingTasks","Value"] |
+| Cassandra | Thread pool ReadStage: Currently blocked task | <p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>ReadStage: Local reads run on this thread pool.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=CurrentlyBlockedTasks","Count"] |
+| Cassandra | Thread pool ReadStage: Total blocked tasks | <p>Number of tasks that were blocked due to queue saturation.</p><p>ReadStage: Local reads run on this thread pool.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=TotalBlockedTasks","Count"] |
+| Cassandra | Thread pool ViewMutationStage: Pending tasks | <p>Number of queued tasks queued up on this pool.</p><p>ViewMutationStage: Responsible for materialized view writes.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ViewMutationStage,name=PendingTasks","Value"] |
+| Cassandra | Thread pool ViewMutationStage: Currently blocked task | <p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>ViewMutationStage: Responsible for materialized view writes.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ViewMutationStage,name=CurrentlyBlockedTasks","Count"] |
+| Cassandra | Thread pool ViewMutationStage: Total blocked tasks | <p>Number of tasks that were blocked due to queue saturation.</p><p>ViewMutationStage: Responsible for materialized view writes.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ViewMutationStage,name=TotalBlockedTasks","Count"] |
+| Cassandra | Thread pool MemtableFlushWriter: Pending tasks | <p>Number of queued tasks queued up on this pool.</p><p>MemtableFlushWriter: Writes memtables to disk.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MemtableFlushWriter,name=PendingTasks","Value"] |
+| Cassandra | Thread pool MemtableFlushWriter: Currently blocked task | <p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>MemtableFlushWriter: Writes memtables to disk.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MemtableFlushWriter,name=CurrentlyBlockedTasks","Count"] |
+| Cassandra | Thread pool MemtableFlushWriter: Total blocked tasks | <p>Number of tasks that were blocked due to queue saturation.</p><p>MemtableFlushWriter: Writes memtables to disk.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MemtableFlushWriter,name=TotalBlockedTasks","Count"] |
+| Cassandra | Thread pool HintsDispatcher: Pending tasks | <p>Number of queued tasks queued up on this pool.</p><p>HintsDispatcher: Performs hinted handoff.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=HintsDispatcher,name=PendingTasks","Value"] |
+| Cassandra | Thread pool HintsDispatcher: Currently blocked task | <p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>HintsDispatcher: Performs hinted handoff.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=HintsDispatcher,name=CurrentlyBlockedTasks","Count"] |
+| Cassandra | Thread pool HintsDispatcher: Total blocked tasks | <p>Number of tasks that were blocked due to queue saturation.</p><p>HintsDispatcher: Performs hinted handoff.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=HintsDispatcher,name=TotalBlockedTasks","Count"] |
+| Cassandra | Thread pool MemtablePostFlush: Pending tasks | <p>Number of queued tasks queued up on this pool.</p><p>MemtablePostFlush: Cleans up commit log after memtable is written to disk.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MemtablePostFlush,name=PendingTasks","Value"] |
+| Cassandra | Thread pool MemtablePostFlush: Currently blocked task | <p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>MemtablePostFlush: Cleans up commit log after memtable is written to disk.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MemtablePostFlush,name=CurrentlyBlockedTasks","Count"] |
+| Cassandra | Thread pool MemtablePostFlush: Total blocked tasks | <p>Number of tasks that were blocked due to queue saturation.</p><p>MemtablePostFlush: Cleans up commit log after memtable is written to disk.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MemtablePostFlush,name=TotalBlockedTasks","Count"] |
+| Cassandra | Thread pool MigrationStage: Pending tasks | <p>Number of queued tasks queued up on this pool.</p><p>MigrationStage: Runs schema migrations.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MigrationStage,name=PendingTasks","Value"] |
+| Cassandra | Thread pool MigrationStage: Currently blocked task | <p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>MigrationStage: Runs schema migrations.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MigrationStage,name=CurrentlyBlockedTasks","Count"] |
+| Cassandra | Thread pool MigrationStage: Total blocked tasks | <p>Number of tasks that were blocked due to queue saturation.</p><p>MigrationStage: Runs schema migrations.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MigrationStage,name=TotalBlockedTasks","Count"] |
+| Cassandra | Thread pool MiscStage: Pending tasks | <p>Number of queued tasks queued up on this pool.</p><p>MiscStage: Miscellaneous tasks run here.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MiscStage,name=PendingTasks","Value"] |
+| Cassandra | Thread pool MiscStage: Currently blocked task | <p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>MiscStage: Miscellaneous tasks run here.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MiscStage,name=CurrentlyBlockedTasks","Count"] |
+| Cassandra | Thread pool MiscStage: Total blocked tasks | <p>Number of tasks that were blocked due to queue saturation.</p><p>MiscStage: Miscellaneous tasks run here.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=MiscStage,name=TotalBlockedTasks","Count"] |
+| Cassandra | Thread pool SecondaryIndexManagement: Pending tasks | <p>Number of queued tasks queued up on this pool.</p><p>SecondaryIndexManagement: Performs updates to secondary indexes.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=SecondaryIndexManagement,name=PendingTasks","Value"] |
+| Cassandra | Thread pool SecondaryIndexManagement: Currently blocked task | <p>Number of tasks that are currently blocked due to queue saturation but on retry will become unblocked.</p><p>SecondaryIndexManagement: Performs updates to secondary indexes.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=SecondaryIndexManagement,name=CurrentlyBlockedTasks","Count"] |
+| Cassandra | Thread pool SecondaryIndexManagement: Total blocked tasks | <p>Number of tasks that were blocked due to queue saturation.</p><p>SecondaryIndexManagement: Performs updates to secondary indexes.</p> | JMX | jmx["org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=SecondaryIndexManagement,name=TotalBlockedTasks","Count"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: SS Tables per read 75 percentile | <p>The number of SSTable data files accessed per read - p75.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=SSTablesPerReadHistogram","75thPercentile"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: SS Tables per read 95 percentile | <p>The number of SSTable data files accessed per read - p95.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=SSTablesPerReadHistogram","95thPercentile"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Tombstone scanned 75 percentile | <p>Number of tombstones scanned per read - p75.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=TombstoneScannedHistogram","75thPercentile"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Tombstone scanned 95 percentile | <p>Number of tombstones scanned per read - p95.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=TombstoneScannedHistogram","95thPercentile"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Waiting on free memtable space 75 percentile | <p>The time spent waiting for free memtable space either on- or off-heap - p75.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=WaitingOnFreeMemtableSpace","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Waiting on free memtable space 95 percentile | <p>The time spent waiting for free memtable space either on- or off-heap - p95.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=WaitingOnFreeMemtableSpace","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Col update time delta 75 percentile | <p>The column update time delta - p75.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=ColUpdateTimeDeltaHistogram","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Col update time delta 95 percentile | <p>The column update time delta - p95.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=ColUpdateTimeDeltaHistogram","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Bloom filter false ratio | <p>The ratio of Bloom filter false positives to total checks.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=BloomFilterFalseRatio","Value"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Compression ratio | <p>The compression ratio for all SSTables.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=CompressionRatio","Value"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: KeyCache hit rate | <p>The key cache hit rate.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=KeyCacheHitRate","Value"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Live SS Table | <p>Number of "live" (in use) SSTables.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=LiveSSTableCount","Value"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Max partition size | <p>The size of the largest compacted partition.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=MaxPartitionSize","Value"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Mean partition size | <p>The average size of compacted partition.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=MeanPartitionSize","Value"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Pending compactions | <p>The number of pending compactions.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=PendingCompactions","Value"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Snapshots size | <p>The disk space truly used by snapshots.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=SnapshotsSize","Value"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Compaction bytes written | <p>The amount of data that was compacted since (re)start.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=CompactionBytesWritten","Count"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Bytes flushed | <p>The amount of data that was flushed since (re)start.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=BytesFlushed","Count"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Pending flushes | <p>The number of pending flushes.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=PendingFlushes","Count"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Live disk space used | <p>The disk space used by "live" SSTables (only counts in use files).</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=LiveDiskSpaceUsed","Count"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Disk space used | <p>Disk space used.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=TotalDiskSpaceUsed","Count"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Out of row cache hits | <p>The number of row cache hits that do not satisfy the query filter and went to disk.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=RowCacheHitOutOfRange","Count"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Row cache hits | <p>The number of row cache hits.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=RowCacheHit","Count"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Row cache misses | <p>The number of table row cache misses.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=RowCacheMiss","Count"] |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Read latency 75 percentile | <p>Latency read from disk in milliseconds.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=ReadLatency","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Read latency 95 percentile | <p>Latency read from disk in milliseconds.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=ReadLatency","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Read per second | <p>The number of client requests per second.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=ReadLatency","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Write latency 75 percentile | <p>Latency write to disk in milliseconds.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=WriteLatency","75thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Write latency 95 percentile | <p>Latency write to disk in milliseconds.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=WriteLatency","95thPercentile"]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p> |
+| Cassandra | {#JMXKEYSPACE}.{#JMXSCOPE}: Write per second | <p>The number of local write requests per second.</p> | JMX | jmx["org.apache.cassandra.metrics:type=Table,keyspace={#JMXKEYSPACE},scope={#JMXSCOPE},name=WriteLatency","Count"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
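+
+All of the items above read plain MBean attributes, so any value can be cross-checked outside of Zabbix with a JMX client. The following is a minimal sketch using the standard `javax.management` API; the endpoint `localhost:7199` (Cassandra's default JMX port) and the unauthenticated connection are assumptions about the deployment, not part of the template.
+
+```java
+import javax.management.MBeanServerConnection;
+import javax.management.ObjectName;
+import javax.management.remote.JMXConnector;
+import javax.management.remote.JMXConnectorFactory;
+import javax.management.remote.JMXServiceURL;
+
+public class CassandraJmxCheck {
+    public static void main(String[] args) throws Exception {
+        // Assumed endpoint: a local node with JMX on the default port 7199 and no authentication.
+        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
+        JMXConnector connector = JMXConnectorFactory.connect(url);
+        try {
+            MBeanServerConnection mbs = connector.getMBeanServerConnection();
+
+            // Same object name/attribute pair as the item key
+            // jmx["org.apache.cassandra.metrics:type=Compaction,name=PendingTasks","Value"].
+            ObjectName pendingCompactions = new ObjectName("org.apache.cassandra.metrics:type=Compaction,name=PendingTasks");
+            System.out.println("Compaction pending tasks: " + mbs.getAttribute(pendingCompactions, "Value"));
+
+            // Counterpart of jmx["org.apache.cassandra.net:type=FailureDetector","DownEndpointCount"].
+            ObjectName failureDetector = new ObjectName("org.apache.cassandra.net:type=FailureDetector");
+            System.out.println("Nodes down: " + mbs.getAttribute(failureDetector, "DownEndpointCount"));
+        } finally {
+            connector.close();
+        }
+    }
+}
+```
+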
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|There are down nodes in cluster |<p>-</p> |`{TEMPLATE_NAME:jmx["org.apache.cassandra.net:type=FailureDetector","DownEndpointCount"].last()}>0` |AVERAGE | |
-|Version has changed (new version: {ITEM.VALUE}) |<p>Cassandra version has changed. Ack to close.</p> |`{TEMPLATE_NAME:jmx["org.apache.cassandra.db:type=StorageService","ReleaseVersion"].diff()}=1 and {TEMPLATE_NAME:jmx["org.apache.cassandra.db:type=StorageService","ReleaseVersion"].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Failed to fetch info data (or no data for 15m) |<p>Zabbix has not received data for items for the last 15 minutes</p> |`{TEMPLATE_NAME:jmx["org.apache.cassandra.metrics:type=Storage,name=Load","Count"].nodata(15m)}=1` |WARNING | |
-|Too many storage exceptions |<p>-</p> |`{TEMPLATE_NAME:jmx["org.apache.cassandra.metrics:type=Storage,name=Exceptions","Count"].min(5m)}>0` |WARNING | |
-|Many pending tasks (over {$CASSANDRA.PENDING_TASKS.MAX.WARN} for 15m) |<p>-</p> |`{TEMPLATE_NAME:jmx["org.apache.cassandra.metrics:type=Compaction,name=PendingTasks","Value"].min(15m)}>{$CASSANDRA.PENDING_TASKS.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Too many pending tasks (over {$CASSANDRA.PENDING_TASKS.MAX.HIGH} for 15m)</p> |
-|Too many pending tasks (over {$CASSANDRA.PENDING_TASKS.MAX.HIGH} for 15m) |<p>-</p> |`{TEMPLATE_NAME:jmx["org.apache.cassandra.metrics:type=Compaction,name=PendingTasks","Value"].min(15m)}>{$CASSANDRA.PENDING_TASKS.MAX.HIGH}` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|---------------------------------------------------------------------------|-----------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------------------------------|
+| There are down nodes in cluster | <p>-</p> | `{TEMPLATE_NAME:jmx["org.apache.cassandra.net:type=FailureDetector","DownEndpointCount"].last()}>0` | AVERAGE | |
+| Version has changed (new version: {ITEM.VALUE}) | <p>Cassandra version has changed. Ack to close.</p> | `{TEMPLATE_NAME:jmx["org.apache.cassandra.db:type=StorageService","ReleaseVersion"].diff()}=1 and {TEMPLATE_NAME:jmx["org.apache.cassandra.db:type=StorageService","ReleaseVersion"].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Failed to fetch info data (or no data for 15m) | <p>Zabbix has not received data for items for the last 15 minutes</p> | `{TEMPLATE_NAME:jmx["org.apache.cassandra.metrics:type=Storage,name=Load","Count"].nodata(15m)}=1` | WARNING | |
+| Too many storage exceptions | <p>-</p> | `{TEMPLATE_NAME:jmx["org.apache.cassandra.metrics:type=Storage,name=Exceptions","Count"].min(5m)}>0` | WARNING | |
+| Many pending tasks (over {$CASSANDRA.PENDING_TASKS.MAX.WARN} for 15m) | <p>-</p> | `{TEMPLATE_NAME:jmx["org.apache.cassandra.metrics:type=Compaction,name=PendingTasks","Value"].min(15m)}>{$CASSANDRA.PENDING_TASKS.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- Too many pending tasks (over {$CASSANDRA.PENDING_TASKS.MAX.HIGH} for 15m)</p> |
+| Too many pending tasks (over {$CASSANDRA.PENDING_TASKS.MAX.HIGH} for 15m) | <p>-</p> | `{TEMPLATE_NAME:jmx["org.apache.cassandra.metrics:type=Compaction,name=PendingTasks","Value"].min(15m)}>{$CASSANDRA.PENDING_TASKS.MAX.HIGH}` | AVERAGE | |
## Feedback
diff --git a/templates/db/clickhouse_http/README.md b/templates/db/clickhouse_http/README.md
index 5ee34e5b384..8e965b472aa 100644
--- a/templates/db/clickhouse_http/README.md
+++ b/templates/db/clickhouse_http/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor ClickHouse by Zabbix that work without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
@@ -15,7 +15,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/http) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/http) for basic instructions.
Create a user to monitor the service:
@@ -50,24 +50,24 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CLICKHOUSE.DELAYED.FILES.DISTRIBUTED.COUNT.MAX.WARN} |<p>Maximum size of distributed files queue to insert for trigger expression.</p> |`600` |
-|{$CLICKHOUSE.DELAYED.INSERTS.MAX.WARN} |<p>Maximum number of delayed inserts for trigger expression.</p> |`0` |
-|{$CLICKHOUSE.LLD.FILTER.DB.MATCHES} |<p>Filter of discoverable databases</p> |`.*` |
-|{$CLICKHOUSE.LLD.FILTER.DB.NOT_MATCHES} |<p>Filter to exclude discovered databases</p> |`CHANGE_IF_NEEDED` |
-|{$CLICKHOUSE.LLD.FILTER.DICT.MATCHES} |<p>Filter of discoverable dictionaries</p> |`.*` |
-|{$CLICKHOUSE.LLD.FILTER.DICT.NOT_MATCHES} |<p>Filter to exclude discovered dictionaries</p> |`CHANGE_IF_NEEDED` |
-|{$CLICKHOUSE.LOG_POSITION.DIFF.MAX.WARN} |<p>Maximum diff between log_pointer and log_max_index.</p> |`30` |
-|{$CLICKHOUSE.NETWORK.ERRORS.MAX.WARN} |<p>Maximum number of smth for trigger expression</p> |`5` |
-|{$CLICKHOUSE.PARTS.PER.PARTITION.WARN} |<p>Maximum number of parts per partition for trigger expression.</p> |`300` |
-|{$CLICKHOUSE.PASSWORD} |<p>-</p> |`zabbix_pass` |
-|{$CLICKHOUSE.PORT} |<p>The port of ClickHouse HTTP endpoint</p> |`8123` |
-|{$CLICKHOUSE.QUERY_TIME.MAX.WARN} |<p>Maximum ClickHouse query time in seconds for trigger expression</p> |`600` |
-|{$CLICKHOUSE.QUEUE.SIZE.MAX.WARN} |<p>Maximum size of the queue for operations waiting to be performed for trigger expression.</p> |`20` |
-|{$CLICKHOUSE.REPLICA.MAX.WARN} |<p>Replication lag across all tables for trigger expression.</p> |`600` |
-|{$CLICKHOUSE.SCHEME} |<p>Request scheme which may be http or https</p> |`http` |
-|{$CLICKHOUSE.USER} |<p>-</p> |`zabbix` |
+| Name | Description | Default |
+|--------------------------------------------------------|-------------------------------------------------------------------------------------------------|--------------------|
+| {$CLICKHOUSE.DELAYED.FILES.DISTRIBUTED.COUNT.MAX.WARN} | <p>Maximum size of distributed files queue to insert for trigger expression.</p> | `600` |
+| {$CLICKHOUSE.DELAYED.INSERTS.MAX.WARN} | <p>Maximum number of delayed inserts for trigger expression.</p> | `0` |
+| {$CLICKHOUSE.LLD.FILTER.DB.MATCHES} | <p>Filter of discoverable databases</p> | `.*` |
+| {$CLICKHOUSE.LLD.FILTER.DB.NOT_MATCHES} | <p>Filter to exclude discovered databases</p> | `CHANGE_IF_NEEDED` |
+| {$CLICKHOUSE.LLD.FILTER.DICT.MATCHES} | <p>Filter of discoverable dictionaries</p> | `.*` |
+| {$CLICKHOUSE.LLD.FILTER.DICT.NOT_MATCHES} | <p>Filter to exclude discovered dictionaries</p> | `CHANGE_IF_NEEDED` |
+| {$CLICKHOUSE.LOG_POSITION.DIFF.MAX.WARN} | <p>Maximum diff between log_pointer and log_max_index.</p> | `30` |
+| {$CLICKHOUSE.NETWORK.ERRORS.MAX.WARN} | <p>Maximum number of network errors for trigger expression</p> | `5` |
+| {$CLICKHOUSE.PARTS.PER.PARTITION.WARN} | <p>Maximum number of parts per partition for trigger expression.</p> | `300` |
+| {$CLICKHOUSE.PASSWORD} | <p>-</p> | `zabbix_pass` |
+| {$CLICKHOUSE.PORT} | <p>The port of ClickHouse HTTP endpoint</p> | `8123` |
+| {$CLICKHOUSE.QUERY_TIME.MAX.WARN} | <p>Maximum ClickHouse query time in seconds for trigger expression</p> | `600` |
+| {$CLICKHOUSE.QUEUE.SIZE.MAX.WARN} | <p>Maximum size of the queue for operations waiting to be performed for trigger expression.</p> | `20` |
+| {$CLICKHOUSE.REPLICA.MAX.WARN} | <p>Replication lag across all tables for trigger expression.</p> | `600` |
+| {$CLICKHOUSE.SCHEME} | <p>Request scheme which may be http or https</p> | `http` |
+| {$CLICKHOUSE.USER} | <p>-</p> | `zabbix` |
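+
+The connection-related macros above ({$CLICKHOUSE.SCHEME}, {$CLICKHOUSE.PORT}, {$CLICKHOUSE.USER}, {$CLICKHOUSE.PASSWORD}) are all the HTTP agent items need to reach ClickHouse. As a quick sanity check outside of Zabbix, the sketch below issues the same kind of requests with the default macro values; `localhost` and the Java 11 `java.net.http` client are assumptions for illustration only.
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+
+public class ClickHouseHttpCheck {
+    public static void main(String[] args) throws Exception {
+        // Default macro values; adjust host, port and credentials to your deployment.
+        String base = "http://localhost:8123";   // {$CLICKHOUSE.SCHEME}://<host>:{$CLICKHOUSE.PORT}
+        String user = "zabbix";                  // {$CLICKHOUSE.USER}
+        String password = "zabbix_pass";         // {$CLICKHOUSE.PASSWORD}
+        HttpClient client = HttpClient.newHttpClient();
+
+        // ClickHouse's /ping endpoint answers "Ok." when the server is up
+        // (the template's ping item checks for the same string).
+        HttpRequest ping = HttpRequest.newBuilder(URI.create(base + "/ping")).GET().build();
+        System.out.println("ping: " + client.send(ping, HttpResponse.BodyHandlers.ofString()).body().trim());
+
+        // A trivial authenticated query, analogous to the bulk queries the HTTP agent items run.
+        HttpRequest query = HttpRequest.newBuilder(URI.create(base + "/?query=SELECT%20version()"))
+                .header("X-ClickHouse-User", user)
+                .header("X-ClickHouse-Key", password)
+                .GET().build();
+        System.out.println("version: " + client.send(query, HttpResponse.BodyHandlers.ofString()).body().trim());
+    }
+}
+```
+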
## Template links
@@ -75,112 +75,112 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Tables |<p>Info about tables</p> |DEPENDENT |clickhouse.tables.discovery<p>**Filter**:</p>AND <p>- A: {#DB} MATCHES_REGEX `{$CLICKHOUSE.LLD.FILTER.DB.MATCHES}`</p><p>- B: {#DB} NOT_MATCHES_REGEX `{$CLICKHOUSE.LLD.FILTER.DB.NOT_MATCHES}`</p> |
-|Replicas |<p>Info about replicas</p> |DEPENDENT |clickhouse.replicas.discovery<p>**Filter**:</p>AND <p>- A: {#DB} MATCHES_REGEX `{$CLICKHOUSE.LLD.FILTER.DB.MATCHES}`</p><p>- B: {#DB} NOT_MATCHES_REGEX `{$CLICKHOUSE.LLD.FILTER.DB.NOT_MATCHES}`</p> |
-|Dictionaries |<p>Info about dictionaries</p> |DEPENDENT |clickhouse.dictionaries.discovery<p>**Filter**:</p>AND <p>- A: {#NAME} MATCHES_REGEX `{$CLICKHOUSE.LLD.FILTER.DICT.MATCHES}`</p><p>- B: {#NAME} NOT_MATCHES_REGEX `{$CLICKHOUSE.LLD.FILTER.DICT.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|--------------|--------------------------------|-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Tables | <p>Info about tables</p> | DEPENDENT | clickhouse.tables.discovery<p>**Filter**:</p>AND <p>- A: {#DB} MATCHES_REGEX `{$CLICKHOUSE.LLD.FILTER.DB.MATCHES}`</p><p>- B: {#DB} NOT_MATCHES_REGEX `{$CLICKHOUSE.LLD.FILTER.DB.NOT_MATCHES}`</p> |
+| Replicas | <p>Info about replicas</p> | DEPENDENT | clickhouse.replicas.discovery<p>**Filter**:</p>AND <p>- A: {#DB} MATCHES_REGEX `{$CLICKHOUSE.LLD.FILTER.DB.MATCHES}`</p><p>- B: {#DB} NOT_MATCHES_REGEX `{$CLICKHOUSE.LLD.FILTER.DB.NOT_MATCHES}`</p> |
+| Dictionaries | <p>Info about dictionaries</p> | DEPENDENT | clickhouse.dictionaries.discovery<p>**Filter**:</p>AND <p>- A: {#NAME} MATCHES_REGEX `{$CLICKHOUSE.LLD.FILTER.DICT.MATCHES}`</p><p>- B: {#NAME} NOT_MATCHES_REGEX `{$CLICKHOUSE.LLD.FILTER.DICT.NOT_MATCHES}`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|ClickHouse |ClickHouse: Longest currently running query time |<p>Get longest running query.</p> |HTTP_AGENT |clickhouse.process.elapsed |
-|ClickHouse |ClickHouse: Check port availability |<p>-</p> |SIMPLE |net.tcp.service[{$CLICKHOUSE.SCHEME},"{HOST.CONN}","{$CLICKHOUSE.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|ClickHouse |ClickHouse: Ping | |HTTP_AGENT |clickhouse.ping<p>**Preprocessing**:</p><p>- REGEX: `Ok\. 1`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|ClickHouse |ClickHouse: Version |<p>Version of the server</p> |HTTP_AGENT |clickhouse.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|ClickHouse |ClickHouse: Revision |<p>Revision of the server.</p> |DEPENDENT |clickhouse.revision<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "Revision")].value.first()`</p> |
-|ClickHouse |ClickHouse: Uptime |<p>Number of seconds since ClickHouse server start</p> |DEPENDENT |clickhouse.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "Uptime")].value.first()`</p> |
-|ClickHouse |ClickHouse: New queries per second |<p>Number of queries to be interpreted and potentially executed. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries.</p> |DEPENDENT |clickhouse.query.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.data.event == "Query")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse |ClickHouse: New SELECT queries per second |<p>Number of SELECT queries to be interpreted and potentially executed. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries.</p> |DEPENDENT |clickhouse.select_query.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "SelectQuery")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse |ClickHouse: New INSERT queries per second |<p>Number of INSERT queries to be interpreted and potentially executed. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries.</p> |DEPENDENT |clickhouse.insert_query.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "InsertQuery")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse |ClickHouse: Delayed insert queries |<p>"Number of INSERT queries that are throttled due to high number of active data parts for partition in a MergeTree table."</p> |DEPENDENT |clickhouse.insert.delay<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "DelayedInserts")].value.first()`</p> |
-|ClickHouse |ClickHouse: Current running queries |<p>Number of executing queries</p> |DEPENDENT |clickhouse.query.current<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "Query")].value.first()`</p> |
-|ClickHouse |ClickHouse: Current running merges |<p>Number of executing background merges</p> |DEPENDENT |clickhouse.merge.current<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "Merge")].value.first()`</p> |
-|ClickHouse |ClickHouse: Inserted bytes per second |<p>The number of uncompressed bytes inserted in all tables.</p> |DEPENDENT |clickhouse.inserted_bytes.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "InsertedBytes")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse |ClickHouse: Read bytes per second |<p>"Number of bytes (the number of bytes before decompression) read from compressed sources (files, network)."</p> |DEPENDENT |clickhouse.read_bytes.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ReadCompressedBytes")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse |ClickHouse: Inserted rows per second |<p>The number of rows inserted in all tables.</p> |DEPENDENT |clickhouse.inserted_rows.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "InsertedRows")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse |ClickHouse: Merged rows per second |<p>Rows read for background merges.</p> |DEPENDENT |clickhouse.merge_rows.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "MergedRows")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse |ClickHouse: Uncompressed bytes merged per second |<p>Uncompressed bytes that were read for background merges</p> |DEPENDENT |clickhouse.merge_bytes.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "MergedUncompressedBytes")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse |ClickHouse: Max count of parts per partition across all tables |<p>"Clickhouse MergeTree table engine split each INSERT query to partitions (PARTITION BY expression) and add one or more PARTS per INSERT inside each partition, </p><p>after that background merge process run."</p> |DEPENDENT |clickhouse.max.part.count.for.partition<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MaxPartCountForPartition")].value.first()`</p> |
-|ClickHouse |ClickHouse: Current TCP connections |<p>Number of connections to TCP server (clients with native interface).</p> |DEPENDENT |clickhouse.connections.tcp<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "TCPConnection")].value.first()`</p> |
-|ClickHouse |ClickHouse: Current HTTP connections |<p>Number of connections to HTTP server.</p> |DEPENDENT |clickhouse.connections.http<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "HTTPConnection")].value.first()`</p> |
-|ClickHouse |ClickHouse: Current distribute connections |<p>Number of connections to remote servers sending data that was INSERTed into Distributed tables.</p> |DEPENDENT |clickhouse.connections.distribute<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "DistributedSend")].value.first()`</p> |
-|ClickHouse |ClickHouse: Current MySQL connections |<p>Number of connections to MySQL server.</p> |DEPENDENT |clickhouse.connections.mysql<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MySQLConnection")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|ClickHouse |ClickHouse: Current Interserver connections |<p>Number of connections from other replicas to fetch parts.</p> |DEPENDENT |clickhouse.connections.interserver<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "InterserverConnection")].value.first()`</p> |
-|ClickHouse |ClickHouse: Network errors per second |<p>Network errors (timeouts and connection failures) during query execution, background pool tasks and DNS cache update.</p> |DEPENDENT |clickhouse.network.error.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "NetworkErrors")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse |ClickHouse: Read syscalls in fly |<p>Number of read (read, pread, io_getevents, etc.) syscalls in fly</p> |DEPENDENT |clickhouse.read<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "Read")].value.first()`</p> |
-|ClickHouse |ClickHouse: Write syscalls in fly |<p>Number of write (write, pwrite, io_getevents, etc.) syscalls in fly</p> |DEPENDENT |clickhouse.write<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "Write")].value.first()`</p> |
-|ClickHouse |ClickHouse: Allocated bytes |<p>"Total number of bytes allocated by the application."</p> |DEPENDENT |clickhouse.jemalloc.allocated<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "jemalloc.allocated")].value.first()`</p> |
-|ClickHouse |ClickHouse: Resident memory |<p>"Maximum number of bytes in physically resident data pages mapped by the allocator, </p><p>comprising all pages dedicated to allocator metadata, pages backing active allocations, </p><p>and unused dirty pages."</p> |DEPENDENT |clickhouse.jemalloc.resident<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "jemalloc.resident")].value.first()`</p> |
-|ClickHouse |ClickHouse: Mapped memory |<p>"Total number of bytes in active extents mapped by the allocator."</p> |DEPENDENT |clickhouse.jemalloc.mapped<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "jemalloc.mapped")].value.first()`</p> |
-|ClickHouse |ClickHouse: Memory used for queries |<p>"Total amount of memory (bytes) allocated in currently executing queries."</p> |DEPENDENT |clickhouse.memory.tracking<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTracking")].value.first()`</p> |
-|ClickHouse |ClickHouse: Memory used for background merges |<p>"Total amount of memory (bytes) allocated in background processing pool (that is dedicated for background merges, mutations and fetches).</p><p> Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks."</p> |DEPENDENT |clickhouse.memory.tracking.background<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTrackingInBackgroundProcessingPool")].value.first()`</p> |
-|ClickHouse |ClickHouse: Memory used for background moves |<p>"Total amount of memory (bytes) allocated in background processing pool (that is dedicated for background moves). Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa.</p><p> This happens naturally due to caches for tables indexes and doesn't indicate memory leaks."</p> |DEPENDENT |clickhouse.memory.tracking.background.moves<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTrackingInBackgroundMoveProcessingPool")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
-|ClickHouse |ClickHouse: Memory used for background schedule pool |<p>"Total amount of memory (bytes) allocated in background schedule pool (that is dedicated for bookkeeping tasks of Replicated tables)."</p> |DEPENDENT |clickhouse.memory.tracking.schedule.pool<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTrackingInBackgroundSchedulePool")].value.first()`</p> |
-|ClickHouse |ClickHouse: Memory used for merges |<p>"Total amount of memory (bytes) allocated for background merges. Included in MemoryTrackingInBackgroundProcessingPool. Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. </p><p>This happens naturally due to caches for tables indexes and doesn't indicate memory leaks."</p> |DEPENDENT |clickhouse.memory.tracking.merges<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTrackingForMerges")].value.first()`</p> |
-|ClickHouse |ClickHouse: Current distributed files to insert |<p>Number of pending files to process for asynchronous insertion into Distributed tables. Number of files for every shard is summed.</p> |DEPENDENT |clickhouse.distributed.files<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "DistributedFilesToInsert")].value.first()`</p> |
-|ClickHouse |ClickHouse: Distributed connection fail with retry per second |<p>Connection retries in replicated DB connection pool</p> |DEPENDENT |clickhouse.distributed.files.retry.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "DistributedConnectionFailTry")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse |ClickHouse: Distributed connection fail with retry per second |<p>"Connection failures after all retries in replicated DB connection pool"</p> |DEPENDENT |clickhouse.distributed.files.fail.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "DistributedConnectionFailAtAll")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse |ClickHouse: Replication lag across all tables |<p>Maximum replica queue delay relative to current time</p> |DEPENDENT |clickhouse.replicas.max.absolute.delay<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "ReplicasMaxAbsoluteDelay")].value.first()`</p> |
-|ClickHouse |ClickHouse: Total replication tasks in queue | |DEPENDENT |clickhouse.replicas.sum.queue.size<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "ReplicasSumQueueSize")].value.first()`</p> |
-|ClickHouse |ClickHouse: Total number read-only Replicas |<p>"Number of Replicated tables that are currently in readonly state </p><p>due to re-initialization after ZooKeeper session loss </p><p>or due to startup without ZooKeeper configured."</p> |DEPENDENT |clickhouse.replicas.readonly.total<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "ReadonlyReplica")].value.first()`</p> |
-|ClickHouse |ClickHouse: {#DB}.{#TABLE}: Bytes |<p>Table size in bytes. Database: {#DB}, table: {#TABLE}</p> |DEPENDENT |clickhouse.table.bytes["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].bytes.first()`</p> |
-|ClickHouse |ClickHouse: {#DB}.{#TABLE}: Parts |<p>Number of parts of the table. Database: {#DB}, table: {#TABLE}</p> |DEPENDENT |clickhouse.table.parts["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].parts.first()`</p> |
-|ClickHouse |ClickHouse: {#DB}.{#TABLE}: Rows |<p>Number of rows in the table. Database: {#DB}, table: {#TABLE}</p> |DEPENDENT |clickhouse.table.rows["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].rows.first()`</p> |
-|ClickHouse |ClickHouse: {#DB}: Bytes |<p>Database size in bytes.</p> |DEPENDENT |clickhouse.db.bytes["{#DB}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}")].bytes.sum()`</p> |
-|ClickHouse |ClickHouse: {#DB}.{#TABLE}: Replica readonly |<p>Whether the replica is in read-only mode.</p><p>This mode is turned on if the config doesn’t have sections with ZooKeeper, if an unknown error occurred when reinitializing sessions in ZooKeeper, and during session reinitialization in ZooKeeper.</p> |DEPENDENT |clickhouse.replica.is_readonly["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].is_readonly.first()`</p> |
-|ClickHouse |ClickHouse: {#DB}.{#TABLE}: Replica session expired |<p>True if the ZooKeeper session expired</p> |DEPENDENT |clickhouse.replica.is_session_expired["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].is_session_expired.first()`</p> |
-|ClickHouse |ClickHouse: {#DB}.{#TABLE}: Replica future parts |<p>Number of data parts that will appear as the result of INSERTs or merges that haven’t been done yet.</p> |DEPENDENT |clickhouse.replica.future_parts["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].future_parts.first()`</p> |
-|ClickHouse |ClickHouse: {#DB}.{#TABLE}: Replica parts to check |<p>Number of data parts in the queue for verification. A part is put in the verification queue if there is suspicion that it might be damaged.</p> |DEPENDENT |clickhouse.replica.parts_to_check["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].parts_to_check.first()`</p> |
-|ClickHouse |ClickHouse: {#DB}.{#TABLE}: Replica queue size |<p>Size of the queue for operations waiting to be performed.</p> |DEPENDENT |clickhouse.replica.queue_size["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].queue_size.first()`</p> |
-|ClickHouse |ClickHouse: {#DB}.{#TABLE}: Replica queue inserts size |<p>Number of inserts of blocks of data that need to be made.</p> |DEPENDENT |clickhouse.replica.inserts_in_queue["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].inserts_in_queue.first()`</p> |
-|ClickHouse |ClickHouse: {#DB}.{#TABLE}: Replica queue merges size |<p>Number of merges waiting to be made. </p> |DEPENDENT |clickhouse.replica.merges_in_queue["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].merges_in_queue.first()`</p> |
-|ClickHouse |ClickHouse: {#DB}.{#TABLE}: Replica log max index |<p>Maximum entry number in the log of general activity. (Have a non-zero value only where there is an active session with ZooKeeper).</p> |DEPENDENT |clickhouse.replica.log_max_index["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].log_max_index.first()`</p> |
-|ClickHouse |ClickHouse: {#DB}.{#TABLE}: Replica log pointer |<p> Maximum entry number in the log of general activity that the replica copied to its execution queue, plus one. (Have a non-zero value only where there is an active session with ZooKeeper).</p> |DEPENDENT |clickhouse.replica.log_pointer["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].log_pointer.first()`</p> |
-|ClickHouse |ClickHouse: {#DB}.{#TABLE}: Total replicas |<p>Total number of known replicas of this table. (Have a non-zero value only where there is an active session with ZooKeeper).</p> |DEPENDENT |clickhouse.replica.total_replicas["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].total_replicas.first()`</p> |
-|ClickHouse |ClickHouse: {#DB}.{#TABLE}: Active replicas |<p>Number of replicas of this table that have a session in ZooKeeper (i.e., the number of functioning replicas). (Have a non-zero value only where there is an active session with ZooKeeper).</p> |DEPENDENT |clickhouse.replica.active_replicas["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].active_replicas.first()`</p> |
-|ClickHouse |ClickHouse: {#DB}.{#TABLE}: Replica lag |<p>Difference between log_max_index and log_pointer</p> |DEPENDENT |clickhouse.replica.lag["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].replica_lag.first()`</p> |
-|ClickHouse |ClickHouse: Dictionary {#NAME}: Bytes allocated |<p>The amount of RAM the dictionary uses.</p> |DEPENDENT |clickhouse.dictionary.bytes_allocated["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#NAME}")].bytes_allocated.first()`</p> |
-|ClickHouse |ClickHouse: Dictionary {#NAME}: Element count |<p>Number of items stored in the dictionary.</p> |DEPENDENT |clickhouse.dictionary.element_count["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#NAME}")].element_count.first()`</p> |
-|ClickHouse |ClickHouse: Dictionary {#NAME}: Load factor |<p>The percentage filled in the dictionary (for a hashed dictionary, the percentage filled in the hash table).</p> |DEPENDENT |clickhouse.dictionary.load_factor["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#NAME}")].bytes_allocated.first()`</p><p>- MULTIPLIER: `100`</p> |
-|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper sessions |<p>Number of sessions (connections) to ZooKeeper. Should be no more than one.</p> |DEPENDENT |clickhouse.zookeper.session<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "ZooKeeperSession")].value.first()`</p> |
-|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper watches |<p>Number of watches (e.g., event subscriptions) in ZooKeeperr.</p> |DEPENDENT |clickhouse.zookeper.watch<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "ZooKeeperWatch")].value.first()`</p> |
-|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper requests |<p>Number of requests to ZooKeeper in progress.</p> |DEPENDENT |clickhouse.zookeper.request<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "ZooKeeperRequest")].value.first()`</p> |
-|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper wait time |<p>Time spent in waiting for ZooKeeper operations.</p> |DEPENDENT |clickhouse.zookeper.wait.time<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ZooKeeperWaitMicroseconds")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- MULTIPLIER: `0.000001`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper exceptions per second |<p>Count of ZooKeeper exceptions that does not belong to user/hardware exceptions.</p> |DEPENDENT |clickhouse.zookeper.exceptions.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ZooKeeperOtherExceptions")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper hardware exceptions per second |<p>Count of ZooKeeper exceptions caused by session moved/expired, connection loss, marshalling error, operation timed out and invalid zhandle state.</p> |DEPENDENT |clickhouse.zookeper.hw_exeptions.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ZooKeeperHardwareExceptions")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper user exceptions per second |<p>Count of ZooKeeper exceptions caused by no znodes, bad version, node exists, node empty and no children for ephemeral.</p> |DEPENDENT |clickhouse.zookeper.user_exeptions.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ZooKeeperUserExceptions")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|Zabbix_raw_items |ClickHouse: Get system.events |<p>Get information about the number of events that have occurred in the system.</p> |HTTP_AGENT |clickhouse.system.events<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p> |
-|Zabbix_raw_items |ClickHouse: Get system.metrics |<p>Get metrics which can be calculated instantly, or have a current value format JSONEachRow</p> |HTTP_AGENT |clickhouse.system.metrics<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p> |
-|Zabbix_raw_items |ClickHouse: Get system.asynchronous_metrics |<p>Get metrics that are calculated periodically in the background</p> |HTTP_AGENT |clickhouse.system.asynchronous_metrics<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p> |
-|Zabbix_raw_items |ClickHouse: Get system.settings |<p>Get information about settings that are currently in use.</p> |HTTP_AGENT |clickhouse.system.settings<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Zabbix_raw_items |ClickHouse: Get replicas info |<p>-</p> |HTTP_AGENT |clickhouse.replicas<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p> |
-|Zabbix_raw_items |ClickHouse: Get tables info |<p>-</p> |HTTP_AGENT |clickhouse.tables<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p> |
-|Zabbix_raw_items |ClickHouse: Get dictionaries info |<p>-</p> |HTTP_AGENT |clickhouse.dictionaries<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|----------------------|----------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| ClickHouse | ClickHouse: Longest currently running query time | <p>Get longest running query.</p> | HTTP_AGENT | clickhouse.process.elapsed |
+| ClickHouse | ClickHouse: Check port availability | <p>-</p> | SIMPLE | net.tcp.service[{$CLICKHOUSE.SCHEME},"{HOST.CONN}","{$CLICKHOUSE.PORT}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| ClickHouse | ClickHouse: Ping | | HTTP_AGENT | clickhouse.ping<p>**Preprocessing**:</p><p>- REGEX: `Ok\. 1`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| ClickHouse | ClickHouse: Version | <p>Version of the server</p> | HTTP_AGENT | clickhouse.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| ClickHouse | ClickHouse: Revision | <p>Revision of the server.</p> | DEPENDENT | clickhouse.revision<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "Revision")].value.first()`</p> |
+| ClickHouse | ClickHouse: Uptime | <p>Number of seconds since ClickHouse server start</p> | DEPENDENT | clickhouse.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "Uptime")].value.first()`</p> |
+| ClickHouse | ClickHouse: New queries per second | <p>Number of queries to be interpreted and potentially executed. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries.</p> | DEPENDENT | clickhouse.query.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.data.event == "Query")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| ClickHouse | ClickHouse: New SELECT queries per second | <p>Number of SELECT queries to be interpreted and potentially executed. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries.</p> | DEPENDENT | clickhouse.select_query.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "SelectQuery")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| ClickHouse | ClickHouse: New INSERT queries per second | <p>Number of INSERT queries to be interpreted and potentially executed. Does not include queries that failed to parse or were rejected due to AST size limits, quota limits or limits on the number of simultaneously running queries. May include internal queries initiated by ClickHouse itself. Does not count subqueries.</p> | DEPENDENT | clickhouse.insert_query.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "InsertQuery")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| ClickHouse | ClickHouse: Delayed insert queries | <p>"Number of INSERT queries that are throttled due to high number of active data parts for partition in a MergeTree table."</p> | DEPENDENT | clickhouse.insert.delay<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "DelayedInserts")].value.first()`</p> |
+| ClickHouse | ClickHouse: Current running queries | <p>Number of executing queries</p> | DEPENDENT | clickhouse.query.current<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "Query")].value.first()`</p> |
+| ClickHouse | ClickHouse: Current running merges | <p>Number of executing background merges</p> | DEPENDENT | clickhouse.merge.current<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "Merge")].value.first()`</p> |
+| ClickHouse | ClickHouse: Inserted bytes per second | <p>The number of uncompressed bytes inserted in all tables.</p> | DEPENDENT | clickhouse.inserted_bytes.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "InsertedBytes")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| ClickHouse | ClickHouse: Read bytes per second | <p>"Number of bytes (the number of bytes before decompression) read from compressed sources (files, network)."</p> | DEPENDENT | clickhouse.read_bytes.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ReadCompressedBytes")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| ClickHouse | ClickHouse: Inserted rows per second | <p>The number of rows inserted in all tables.</p> | DEPENDENT | clickhouse.inserted_rows.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "InsertedRows")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| ClickHouse | ClickHouse: Merged rows per second | <p>Rows read for background merges.</p> | DEPENDENT | clickhouse.merge_rows.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "MergedRows")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| ClickHouse | ClickHouse: Uncompressed bytes merged per second | <p>Uncompressed bytes that were read for background merges</p> | DEPENDENT | clickhouse.merge_bytes.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "MergedUncompressedBytes")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| ClickHouse | ClickHouse: Max count of parts per partition across all tables | <p>"The ClickHouse MergeTree table engine splits each INSERT query into partitions (PARTITION BY expression) and adds one or more parts per INSERT inside each partition; </p><p>after that, the background merge process runs."</p> | DEPENDENT | clickhouse.max.part.count.for.partition<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MaxPartCountForPartition")].value.first()`</p> |
+| ClickHouse | ClickHouse: Current TCP connections | <p>Number of connections to TCP server (clients with native interface).</p> | DEPENDENT | clickhouse.connections.tcp<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "TCPConnection")].value.first()`</p> |
+| ClickHouse | ClickHouse: Current HTTP connections | <p>Number of connections to HTTP server.</p> | DEPENDENT | clickhouse.connections.http<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "HTTPConnection")].value.first()`</p> |
+| ClickHouse | ClickHouse: Current distribute connections | <p>Number of connections to remote servers sending data that was INSERTed into Distributed tables.</p> | DEPENDENT | clickhouse.connections.distribute<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "DistributedSend")].value.first()`</p> |
+| ClickHouse | ClickHouse: Current MySQL connections | <p>Number of connections to MySQL server.</p> | DEPENDENT | clickhouse.connections.mysql<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MySQLConnection")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| ClickHouse | ClickHouse: Current Interserver connections | <p>Number of connections from other replicas to fetch parts.</p> | DEPENDENT | clickhouse.connections.interserver<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "InterserverConnection")].value.first()`</p> |
+| ClickHouse | ClickHouse: Network errors per second | <p>Network errors (timeouts and connection failures) during query execution, background pool tasks and DNS cache update.</p> | DEPENDENT | clickhouse.network.error.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "NetworkErrors")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| ClickHouse | ClickHouse: Read syscalls in fly | <p>Number of read (read, pread, io_getevents, etc.) syscalls in fly</p> | DEPENDENT | clickhouse.read<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "Read")].value.first()`</p> |
+| ClickHouse | ClickHouse: Write syscalls in fly | <p>Number of write (write, pwrite, io_getevents, etc.) syscalls in fly</p> | DEPENDENT | clickhouse.write<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "Write")].value.first()`</p> |
+| ClickHouse | ClickHouse: Allocated bytes | <p>"Total number of bytes allocated by the application."</p> | DEPENDENT | clickhouse.jemalloc.allocated<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "jemalloc.allocated")].value.first()`</p> |
+| ClickHouse | ClickHouse: Resident memory | <p>"Maximum number of bytes in physically resident data pages mapped by the allocator, </p><p>comprising all pages dedicated to allocator metadata, pages backing active allocations, </p><p>and unused dirty pages."</p> | DEPENDENT | clickhouse.jemalloc.resident<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "jemalloc.resident")].value.first()`</p> |
+| ClickHouse | ClickHouse: Mapped memory | <p>"Total number of bytes in active extents mapped by the allocator."</p> | DEPENDENT | clickhouse.jemalloc.mapped<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "jemalloc.mapped")].value.first()`</p> |
+| ClickHouse | ClickHouse: Memory used for queries | <p>"Total amount of memory (bytes) allocated in currently executing queries."</p> | DEPENDENT | clickhouse.memory.tracking<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTracking")].value.first()`</p> |
+| ClickHouse | ClickHouse: Memory used for background merges | <p>"Total amount of memory (bytes) allocated in background processing pool (that is dedicated for background merges, mutations and fetches).</p><p> Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks."</p> | DEPENDENT | clickhouse.memory.tracking.background<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTrackingInBackgroundProcessingPool")].value.first()`</p> |
+| ClickHouse | ClickHouse: Memory used for background moves | <p>"Total amount of memory (bytes) allocated in background processing pool (that is dedicated for background moves). Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa.</p><p> This happens naturally due to caches for tables indexes and doesn't indicate memory leaks."</p> | DEPENDENT | clickhouse.memory.tracking.background.moves<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTrackingInBackgroundMoveProcessingPool")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+| ClickHouse | ClickHouse: Memory used for background schedule pool | <p>"Total amount of memory (bytes) allocated in background schedule pool (that is dedicated for bookkeeping tasks of Replicated tables)."</p> | DEPENDENT | clickhouse.memory.tracking.schedule.pool<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTrackingInBackgroundSchedulePool")].value.first()`</p> |
+| ClickHouse | ClickHouse: Memory used for merges | <p>"Total amount of memory (bytes) allocated for background merges. Included in MemoryTrackingInBackgroundProcessingPool. Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. </p><p>This happens naturally due to caches for tables indexes and doesn't indicate memory leaks."</p> | DEPENDENT | clickhouse.memory.tracking.merges<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTrackingForMerges")].value.first()`</p> |
+| ClickHouse | ClickHouse: Current distributed files to insert | <p>Number of pending files to process for asynchronous insertion into Distributed tables. Number of files for every shard is summed.</p> | DEPENDENT | clickhouse.distributed.files<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "DistributedFilesToInsert")].value.first()`</p> |
+| ClickHouse | ClickHouse: Distributed connection fail with retry per second | <p>Connection retries in replicated DB connection pool</p> | DEPENDENT | clickhouse.distributed.files.retry.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "DistributedConnectionFailTry")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| ClickHouse | ClickHouse: Distributed connection fail at all per second | <p>"Connection failures after all retries in replicated DB connection pool"</p> | DEPENDENT | clickhouse.distributed.files.fail.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "DistributedConnectionFailAtAll")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| ClickHouse | ClickHouse: Replication lag across all tables | <p>Maximum replica queue delay relative to current time</p> | DEPENDENT | clickhouse.replicas.max.absolute.delay<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "ReplicasMaxAbsoluteDelay")].value.first()`</p> |
+| ClickHouse | ClickHouse: Total replication tasks in queue | | DEPENDENT | clickhouse.replicas.sum.queue.size<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "ReplicasSumQueueSize")].value.first()`</p> |
+| ClickHouse | ClickHouse: Total number read-only Replicas | <p>"Number of Replicated tables that are currently in readonly state </p><p>due to re-initialization after ZooKeeper session loss </p><p>or due to startup without ZooKeeper configured."</p> | DEPENDENT | clickhouse.replicas.readonly.total<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "ReadonlyReplica")].value.first()`</p> |
+| ClickHouse | ClickHouse: {#DB}.{#TABLE}: Bytes | <p>Table size in bytes. Database: {#DB}, table: {#TABLE}</p> | DEPENDENT | clickhouse.table.bytes["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].bytes.first()`</p> |
+| ClickHouse | ClickHouse: {#DB}.{#TABLE}: Parts | <p>Number of parts of the table. Database: {#DB}, table: {#TABLE}</p> | DEPENDENT | clickhouse.table.parts["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].parts.first()`</p> |
+| ClickHouse | ClickHouse: {#DB}.{#TABLE}: Rows | <p>Number of rows in the table. Database: {#DB}, table: {#TABLE}</p> | DEPENDENT | clickhouse.table.rows["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].rows.first()`</p> |
+| ClickHouse | ClickHouse: {#DB}: Bytes | <p>Database size in bytes.</p> | DEPENDENT | clickhouse.db.bytes["{#DB}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}")].bytes.sum()`</p> |
+| ClickHouse | ClickHouse: {#DB}.{#TABLE}: Replica readonly | <p>Whether the replica is in read-only mode.</p><p>This mode is turned on if the config doesn’t have sections with ZooKeeper, if an unknown error occurred when reinitializing sessions in ZooKeeper, and during session reinitialization in ZooKeeper.</p> | DEPENDENT | clickhouse.replica.is_readonly["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].is_readonly.first()`</p> |
+| ClickHouse | ClickHouse: {#DB}.{#TABLE}: Replica session expired | <p>True if the ZooKeeper session expired</p> | DEPENDENT | clickhouse.replica.is_session_expired["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].is_session_expired.first()`</p> |
+| ClickHouse | ClickHouse: {#DB}.{#TABLE}: Replica future parts | <p>Number of data parts that will appear as the result of INSERTs or merges that haven’t been done yet.</p> | DEPENDENT | clickhouse.replica.future_parts["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].future_parts.first()`</p> |
+| ClickHouse | ClickHouse: {#DB}.{#TABLE}: Replica parts to check | <p>Number of data parts in the queue for verification. A part is put in the verification queue if there is suspicion that it might be damaged.</p> | DEPENDENT | clickhouse.replica.parts_to_check["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].parts_to_check.first()`</p> |
+| ClickHouse | ClickHouse: {#DB}.{#TABLE}: Replica queue size | <p>Size of the queue for operations waiting to be performed.</p> | DEPENDENT | clickhouse.replica.queue_size["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].queue_size.first()`</p> |
+| ClickHouse | ClickHouse: {#DB}.{#TABLE}: Replica queue inserts size | <p>Number of inserts of blocks of data that need to be made.</p> | DEPENDENT | clickhouse.replica.inserts_in_queue["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].inserts_in_queue.first()`</p> |
+| ClickHouse | ClickHouse: {#DB}.{#TABLE}: Replica queue merges size | <p>Number of merges waiting to be made. </p> | DEPENDENT | clickhouse.replica.merges_in_queue["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].merges_in_queue.first()`</p> |
+| ClickHouse | ClickHouse: {#DB}.{#TABLE}: Replica log max index | <p>Maximum entry number in the log of general activity. (Have a non-zero value only where there is an active session with ZooKeeper).</p> | DEPENDENT | clickhouse.replica.log_max_index["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].log_max_index.first()`</p> |
+| ClickHouse | ClickHouse: {#DB}.{#TABLE}: Replica log pointer | <p> Maximum entry number in the log of general activity that the replica copied to its execution queue, plus one. (Have a non-zero value only where there is an active session with ZooKeeper).</p> | DEPENDENT | clickhouse.replica.log_pointer["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].log_pointer.first()`</p> |
+| ClickHouse | ClickHouse: {#DB}.{#TABLE}: Total replicas | <p>Total number of known replicas of this table. (Have a non-zero value only where there is an active session with ZooKeeper).</p> | DEPENDENT | clickhouse.replica.total_replicas["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].total_replicas.first()`</p> |
+| ClickHouse | ClickHouse: {#DB}.{#TABLE}: Active replicas | <p>Number of replicas of this table that have a session in ZooKeeper (i.e., the number of functioning replicas). (Have a non-zero value only where there is an active session with ZooKeeper).</p> | DEPENDENT | clickhouse.replica.active_replicas["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].active_replicas.first()`</p> |
+| ClickHouse | ClickHouse: {#DB}.{#TABLE}: Replica lag | <p>Difference between log_max_index and log_pointer</p> | DEPENDENT | clickhouse.replica.lag["{#DB}.{#TABLE}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.database == "{#DB}" && @.table == "{#TABLE}")].replica_lag.first()`</p> |
+| ClickHouse | ClickHouse: Dictionary {#NAME}: Bytes allocated | <p>The amount of RAM the dictionary uses.</p> | DEPENDENT | clickhouse.dictionary.bytes_allocated["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#NAME}")].bytes_allocated.first()`</p> |
+| ClickHouse | ClickHouse: Dictionary {#NAME}: Element count | <p>Number of items stored in the dictionary.</p> | DEPENDENT | clickhouse.dictionary.element_count["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#NAME}")].element_count.first()`</p> |
+| ClickHouse | ClickHouse: Dictionary {#NAME}: Load factor | <p>The percentage filled in the dictionary (for a hashed dictionary, the percentage filled in the hash table).</p> | DEPENDENT | clickhouse.dictionary.load_factor["{#NAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.name == "{#NAME}")].bytes_allocated.first()`</p><p>- MULTIPLIER: `100`</p> |
+| ClickHouse_ZooKeeper | ClickHouse: ZooKeeper sessions | <p>Number of sessions (connections) to ZooKeeper. Should be no more than one.</p> | DEPENDENT | clickhouse.zookeper.session<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "ZooKeeperSession")].value.first()`</p> |
+| ClickHouse_ZooKeeper | ClickHouse: ZooKeeper watches | <p>Number of watches (e.g., event subscriptions) in ZooKeeper.</p> | DEPENDENT | clickhouse.zookeper.watch<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "ZooKeeperWatch")].value.first()`</p> |
+| ClickHouse_ZooKeeper | ClickHouse: ZooKeeper requests | <p>Number of requests to ZooKeeper in progress.</p> | DEPENDENT | clickhouse.zookeper.request<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "ZooKeeperRequest")].value.first()`</p> |
+| ClickHouse_ZooKeeper | ClickHouse: ZooKeeper wait time | <p>Time spent in waiting for ZooKeeper operations.</p> | DEPENDENT | clickhouse.zookeper.wait.time<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ZooKeeperWaitMicroseconds")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- MULTIPLIER: `0.000001`</p><p>- CHANGE_PER_SECOND |
+| ClickHouse_ZooKeeper | ClickHouse: ZooKeeper exceptions per second | <p>Count of ZooKeeper exceptions that do not belong to user/hardware exceptions.</p> | DEPENDENT | clickhouse.zookeper.exceptions.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ZooKeeperOtherExceptions")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| ClickHouse_ZooKeeper | ClickHouse: ZooKeeper hardware exceptions per second | <p>Count of ZooKeeper exceptions caused by session moved/expired, connection loss, marshalling error, operation timed out and invalid zhandle state.</p> | DEPENDENT | clickhouse.zookeper.hw_exeptions.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ZooKeeperHardwareExceptions")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| ClickHouse_ZooKeeper | ClickHouse: ZooKeeper user exceptions per second | <p>Count of ZooKeeper exceptions caused by no znodes, bad version, node exists, node empty and no children for ephemeral.</p> | DEPENDENT | clickhouse.zookeper.user_exeptions.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ZooKeeperUserExceptions")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+| Zabbix_raw_items | ClickHouse: Get system.events | <p>Get information about the number of events that have occurred in the system.</p> | HTTP_AGENT | clickhouse.system.events<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p> |
+| Zabbix_raw_items | ClickHouse: Get system.metrics | <p>Get metrics that can be calculated instantly or have a current value (format JSONEachRow).</p> | HTTP_AGENT | clickhouse.system.metrics<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p> |
+| Zabbix_raw_items | ClickHouse: Get system.asynchronous_metrics | <p>Get metrics that are calculated periodically in the background</p> | HTTP_AGENT | clickhouse.system.asynchronous_metrics<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p> |
+| Zabbix_raw_items | ClickHouse: Get system.settings | <p>Get information about settings that are currently in use.</p> | HTTP_AGENT | clickhouse.system.settings<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Zabbix_raw_items | ClickHouse: Get replicas info | <p>-</p> | HTTP_AGENT | clickhouse.replicas<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p> |
+| Zabbix_raw_items | ClickHouse: Get tables info | <p>-</p> | HTTP_AGENT | clickhouse.tables<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p> |
+| Zabbix_raw_items | ClickHouse: Get dictionaries info | <p>-</p> | HTTP_AGENT | clickhouse.dictionaries<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p> |
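+
+All of the `DEPENDENT` items above are parsed out of the bulk `Zabbix_raw_items` collectors: the HTTP agent items fetch whole system tables from the ClickHouse HTTP interface, the `$.data` step keeps only the row array, and each dependent item applies its own JSONPath filter to that array. The sketch below reproduces the same flow outside Zabbix; the host, port and the `FORMAT JSON` query shape are assumptions inferred from the `$.data` preprocessing step, not values taken from the template.
+
+```python
+# Illustrative only -- emulates an HTTP_AGENT master item plus one JSONPATH step.
+import json
+import urllib.parse
+import urllib.request
+
+CLICKHOUSE_URL = "http://localhost:8123"  # assumed default HTTP endpoint
+
+def fetch_rows(query: str) -> list:
+    """Run a query over the ClickHouse HTTP interface and return the 'data' rows,
+    i.e. what the '- JSONPATH: $.data' step of the raw items keeps."""
+    url = CLICKHOUSE_URL + "/?query=" + urllib.parse.quote(query + " FORMAT JSON")
+    with urllib.request.urlopen(url, timeout=5) as resp:
+        return json.loads(resp.read())["data"]
+
+# Equivalent of "ClickHouse: Current running queries" (clickhouse.query.current):
+#   JSONPATH $[?(@.metric == "Query")].value.first() over the system.metrics rows.
+metrics = fetch_rows("SELECT metric, value FROM system.metrics")
+running_queries = next(row["value"] for row in metrics if row["metric"] == "Query")
+print("Currently running queries:", running_queries)
+```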
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|ClickHouse: There are queries running more than {$CLICKHOUSE.QUERY_TIME.MAX.WARN} seconds |<p>-</p> |`{TEMPLATE_NAME:clickhouse.process.elapsed.last()}>{$CLICKHOUSE.QUERY_TIME.MAX.WARN}` |AVERAGE |<p>Manual close: YES</p> |
-|ClickHouse: Port {$CLICKHOUSE.PORT} is unavailable |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service[{$CLICKHOUSE.SCHEME},"{HOST.CONN}","{$CLICKHOUSE.PORT}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|ClickHouse: Service is down |<p>-</p> |`{TEMPLATE_NAME:clickhouse.ping.last()}=0 or {ClickHouse by HTTP:net.tcp.service[{$CLICKHOUSE.SCHEME},"{HOST.CONN}","{$CLICKHOUSE.PORT}"].last()} = 0` |AVERAGE |<p>Manual close: YES</p><p>**Depends on**:</p><p>- ClickHouse: Port {$CLICKHOUSE.PORT} is unavailable</p> |
-|ClickHouse: Version has changed (new version: {ITEM.VALUE}) |<p>ClickHouse version has changed. Ack to close.</p> |`{TEMPLATE_NAME:clickhouse.version.diff()}=1 and {TEMPLATE_NAME:clickhouse.version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|ClickHouse: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:clickhouse.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|ClickHouse: Failed to fetch info data (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes</p> |`{TEMPLATE_NAME:clickhouse.uptime.nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- ClickHouse: Service is down</p> |
-|ClickHouse: Too many throttled insert queries (over {$CLICKHOUSE.DELAYED.INSERTS.MAX.WARN) for 5 min) |<p>Clickhouse have INSERT queries that are throttled due to high number of active data parts for partition in a MergeTree, please decrease INSERT frequency</p> |`{TEMPLATE_NAME:clickhouse.insert.delay.min(5m)}>{$CLICKHOUSE.DELAYED.INSERTS.MAX.WARN}` |WARNING |<p>Manual close: YES</p> |
-|ClickHouse: Too many MergeTree parts (over 90% of {$CLICKHOUSE.PARTS.PER.PARTITION.WARN}) |<p>"Descease INSERT queries frequency.</p><p>Clickhouse MergeTree table engine split each INSERT query to partitions (PARTITION BY expression) </p><p>and add one or more PARTS per INSERT inside each partition, </p><p>after that background merge process run, and when you have too much unmerged parts inside partition, </p><p>SELECT queries performance can significate degrade, so clickhouse try delay insert, or abort it"</p> |`{TEMPLATE_NAME:clickhouse.max.part.count.for.partition.min(5m)}>{$CLICKHOUSE.PARTS.PER.PARTITION.WARN} * 0.9` |WARNING |<p>Manual close: YES</p> |
-|ClickHouse: Too many network errors (over {$CLICKHOUSE.NETWORK.ERRORS.MAX.WARN} in 5m) |<p>Number of errors (timeouts and connection failures) during query execution, background pool tasks and DNS cache update is too high.</p> |`{TEMPLATE_NAME:clickhouse.network.error.rate.min(5m)}>{$CLICKHOUSE.NETWORK.ERRORS.MAX.WARN}` |WARNING | |
-|ClickHouse: Too many distributed files to insert (over {$CLICKHOUSE.DELAYED.FILES.DISTRIBUTED.COUNT.MAX.WARN} for 5 min) |<p>"Clickhouse servers and <remote_servers> in config.xml</p><p>https://clickhouse.tech/docs/en/operations/table_engines/distributed/"</p> |`{TEMPLATE_NAME:clickhouse.distributed.files.min(5m)}>{$CLICKHOUSE.DELAYED.FILES.DISTRIBUTED.COUNT.MAX.WARN}` |WARNING |<p>Manual close: YES</p> |
-|ClickHouse: Replication lag is too high (over {$CLICKHOUSE.REPLICA.MAX.WARN} sec for 5min) |<p>"When replica have too much lag, it can be skipped from Distributed SELECT Queries without errors </p><p>and you will have wrong query results."</p> |`{TEMPLATE_NAME:clickhouse.replicas.max.absolute.delay.min(5m)}>{$CLICKHOUSE.REPLICA.MAX.WARN}` |WARNING |<p>Manual close: YES</p> |
-|ClickHouse: {#DB}.{#TABLE} Replica is readonly |<p>This mode is turned on if the config doesn’t have sections with ZooKeeper, if an unknown error occurred when reinitializing sessions in ZooKeeper, and during session reinitialization in ZooKeeper.</p> |`{TEMPLATE_NAME:clickhouse.replica.is_readonly["{#DB}.{#TABLE}"].min(5m)}=1` |WARNING | |
-|ClickHouse: {#DB}.{#TABLE} Replica session is expired |<p>This mode is turned on if the config doesn’t have sections with ZooKeeper, if an unknown error occurred when reinitializing sessions in ZooKeeper, and during session reinitialization in ZooKeeper.</p> |`{TEMPLATE_NAME:clickhouse.replica.is_session_expired["{#DB}.{#TABLE}"].min(5m)}=1` |WARNING | |
-|ClickHouse: {#DB}.{#TABLE}: Too many operations in queue (over {$CLICKHOUSE.QUEUE.SIZE.MAX.WARN} for 5m) |<p>-</p> |`{TEMPLATE_NAME:clickhouse.replica.queue_size["{#DB}.{#TABLE}"].min(5m)}>{$CLICKHOUSE.QUEUE.SIZE.MAX.WARN:"{#TABLE}"}` |WARNING | |
-|ClickHouse: {#DB}.{#TABLE}: Number of active replicas less than number of total replicas |<p>-</p> |`{TEMPLATE_NAME:clickhouse.replica.active_replicas["{#DB}.{#TABLE}"].max(5m)} < {ClickHouse by HTTP:clickhouse.replica.total_replicas["{#DB}.{#TABLE}"].last()}` |WARNING | |
-|ClickHouse: {#DB}.{#TABLE}: Difference between log_max_index and log_pointer is too high (More than {$CLICKHOUSE.LOG_POSITION.DIFF.MAX.WARN} for 5m) |<p>-</p> |`{TEMPLATE_NAME:clickhouse.replica.lag["{#DB}.{#TABLE}"].min(5m)} > {$CLICKHOUSE.LOG_POSITION.DIFF.MAX.WARN}` |WARNING | |
-|ClickHouse: Too many ZooKeeper sessions opened |<p>"Number of sessions (connections) to ZooKeeper. </p><p>Should be no more than one, because using more than one connection to ZooKeeper may lead to bugs due to lack of linearizability (stale reads) that ZooKeeper consistency model allows."</p> |`{TEMPLATE_NAME:clickhouse.zookeper.session.min(5m)}>1` |WARNING | |
-|ClickHouse: Configuration has been changed |<p>ClickHouse configuration has been changed. Ack to close.</p> |`{TEMPLATE_NAME:clickhouse.system.settings.diff()}=1 and {TEMPLATE_NAME:clickhouse.system.settings.strlen()}>0` |INFO |<p>Manual close: YES</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------------------------------------------------------------|
+| ClickHouse: There are queries running more than {$CLICKHOUSE.QUERY_TIME.MAX.WARN} seconds | <p>-</p> | `{TEMPLATE_NAME:clickhouse.process.elapsed.last()}>{$CLICKHOUSE.QUERY_TIME.MAX.WARN}` | AVERAGE | <p>Manual close: YES</p> |
+| ClickHouse: Port {$CLICKHOUSE.PORT} is unavailable | <p>-</p> | `{TEMPLATE_NAME:net.tcp.service[{$CLICKHOUSE.SCHEME},"{HOST.CONN}","{$CLICKHOUSE.PORT}"].last()}=0` | AVERAGE | <p>Manual close: YES</p> |
+| ClickHouse: Service is down | <p>-</p> | `{TEMPLATE_NAME:clickhouse.ping.last()}=0 or {ClickHouse by HTTP:net.tcp.service[{$CLICKHOUSE.SCHEME},"{HOST.CONN}","{$CLICKHOUSE.PORT}"].last()} = 0` | AVERAGE | <p>Manual close: YES</p><p>**Depends on**:</p><p>- ClickHouse: Port {$CLICKHOUSE.PORT} is unavailable</p> |
+| ClickHouse: Version has changed (new version: {ITEM.VALUE}) | <p>ClickHouse version has changed. Ack to close.</p> | `{TEMPLATE_NAME:clickhouse.version.diff()}=1 and {TEMPLATE_NAME:clickhouse.version.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| ClickHouse: has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:clickhouse.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| ClickHouse: Failed to fetch info data (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes</p> | `{TEMPLATE_NAME:clickhouse.uptime.nodata(30m)}=1` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- ClickHouse: Service is down</p> |
+| ClickHouse: Too many throttled insert queries (over {$CLICKHOUSE.DELAYED.INSERTS.MAX.WARN} for 5 min) | <p>ClickHouse has INSERT queries that are throttled due to a high number of active data parts per partition in a MergeTree table; please decrease the INSERT frequency.</p> | `{TEMPLATE_NAME:clickhouse.insert.delay.min(5m)}>{$CLICKHOUSE.DELAYED.INSERTS.MAX.WARN}` | WARNING | <p>Manual close: YES</p> |
+| ClickHouse: Too many MergeTree parts (over 90% of {$CLICKHOUSE.PARTS.PER.PARTITION.WARN}) | <p>"Decrease INSERT query frequency.</p><p>The ClickHouse MergeTree table engine splits each INSERT query into partitions (PARTITION BY expression) </p><p>and adds one or more parts per INSERT inside each partition; </p><p>after that, the background merge process runs. When a partition has too many unmerged parts, </p><p>SELECT query performance can degrade significantly, so ClickHouse tries to delay or abort the insert."</p> | `{TEMPLATE_NAME:clickhouse.max.part.count.for.partition.min(5m)}>{$CLICKHOUSE.PARTS.PER.PARTITION.WARN} * 0.9` | WARNING | <p>Manual close: YES</p> |
+| ClickHouse: Too many network errors (over {$CLICKHOUSE.NETWORK.ERRORS.MAX.WARN} in 5m) | <p>Number of errors (timeouts and connection failures) during query execution, background pool tasks and DNS cache update is too high.</p> | `{TEMPLATE_NAME:clickhouse.network.error.rate.min(5m)}>{$CLICKHOUSE.NETWORK.ERRORS.MAX.WARN}` | WARNING | |
+| ClickHouse: Too many distributed files to insert (over {$CLICKHOUSE.DELAYED.FILES.DISTRIBUTED.COUNT.MAX.WARN} for 5 min) | <p>"Check the ClickHouse servers and the <remote_servers> section in config.xml.</p><p>https://clickhouse.tech/docs/en/operations/table_engines/distributed/"</p> | `{TEMPLATE_NAME:clickhouse.distributed.files.min(5m)}>{$CLICKHOUSE.DELAYED.FILES.DISTRIBUTED.COUNT.MAX.WARN}` | WARNING | <p>Manual close: YES</p> |
+| ClickHouse: Replication lag is too high (over {$CLICKHOUSE.REPLICA.MAX.WARN} sec for 5min) | <p>"When a replica lags too far behind, it can be silently skipped by Distributed SELECT queries </p><p>and you will get wrong query results."</p> | `{TEMPLATE_NAME:clickhouse.replicas.max.absolute.delay.min(5m)}>{$CLICKHOUSE.REPLICA.MAX.WARN}` | WARNING | <p>Manual close: YES</p> |
+| ClickHouse: {#DB}.{#TABLE} Replica is readonly | <p>This mode is turned on if the config doesn’t have sections with ZooKeeper, if an unknown error occurred when reinitializing sessions in ZooKeeper, and during session reinitialization in ZooKeeper.</p> | `{TEMPLATE_NAME:clickhouse.replica.is_readonly["{#DB}.{#TABLE}"].min(5m)}=1` | WARNING | |
+| ClickHouse: {#DB}.{#TABLE} Replica session is expired | <p>This mode is turned on if the config doesn’t have sections with ZooKeeper, if an unknown error occurred when reinitializing sessions in ZooKeeper, and during session reinitialization in ZooKeeper.</p> | `{TEMPLATE_NAME:clickhouse.replica.is_session_expired["{#DB}.{#TABLE}"].min(5m)}=1` | WARNING | |
+| ClickHouse: {#DB}.{#TABLE}: Too many operations in queue (over {$CLICKHOUSE.QUEUE.SIZE.MAX.WARN} for 5m) | <p>-</p> | `{TEMPLATE_NAME:clickhouse.replica.queue_size["{#DB}.{#TABLE}"].min(5m)}>{$CLICKHOUSE.QUEUE.SIZE.MAX.WARN:"{#TABLE}"}` | WARNING | |
+| ClickHouse: {#DB}.{#TABLE}: Number of active replicas less than number of total replicas | <p>-</p> | `{TEMPLATE_NAME:clickhouse.replica.active_replicas["{#DB}.{#TABLE}"].max(5m)} < {ClickHouse by HTTP:clickhouse.replica.total_replicas["{#DB}.{#TABLE}"].last()}` | WARNING | |
+| ClickHouse: {#DB}.{#TABLE}: Difference between log_max_index and log_pointer is too high (More than {$CLICKHOUSE.LOG_POSITION.DIFF.MAX.WARN} for 5m) | <p>-</p> | `{TEMPLATE_NAME:clickhouse.replica.lag["{#DB}.{#TABLE}"].min(5m)} > {$CLICKHOUSE.LOG_POSITION.DIFF.MAX.WARN}` | WARNING | |
+| ClickHouse: Too many ZooKeeper sessions opened | <p>"Number of sessions (connections) to ZooKeeper. </p><p>Should be no more than one, because using more than one connection to ZooKeeper may lead to bugs due to the lack of linearizability (stale reads) that ZooKeeper's consistency model allows."</p> | `{TEMPLATE_NAME:clickhouse.zookeper.session.min(5m)}>1` | WARNING | |
+| ClickHouse: Configuration has been changed | <p>ClickHouse configuration has been changed. Ack to close.</p> | `{TEMPLATE_NAME:clickhouse.system.settings.diff()}=1 and {TEMPLATE_NAME:clickhouse.system.settings.strlen()}>0` | INFO | <p>Manual close: YES</p> |
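+
+Several of these expressions use Zabbix user macros with context, e.g. `{$CLICKHOUSE.QUEUE.SIZE.MAX.WARN:"{#TABLE}"}` in the queue-size trigger, so the threshold can be raised for a single discovered table while every other table keeps the context-less default. A minimal sketch of that resolution order (the numeric values are placeholders, not the template defaults):
+
+```python
+# Illustrative only -- mirrors the fallback behaviour of user macros with context.
+MACROS = {
+    '{$CLICKHOUSE.QUEUE.SIZE.MAX.WARN}': 20,                # context-less default (assumed value)
+    '{$CLICKHOUSE.QUEUE.SIZE.MAX.WARN:"hot_events"}': 100,  # override for one busy table
+}
+
+def resolve(macro: str, context: str) -> int:
+    """Return the context-specific value if defined, otherwise the plain macro value."""
+    with_context = macro[:-1] + ':"' + context + '"}'
+    return MACROS.get(with_context, MACROS[macro])
+
+# The "Too many operations in queue" trigger compares min(5m) of
+# clickhouse.replica.queue_size["{#DB}.{#TABLE}"] against this threshold.
+print(resolve('{$CLICKHOUSE.QUEUE.SIZE.MAX.WARN}', 'hot_events'))   # -> 100
+print(resolve('{$CLICKHOUSE.QUEUE.SIZE.MAX.WARN}', 'other_table'))  # -> 20
+```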
## Feedback
diff --git a/templates/db/mongodb/template_db_mongodb.yaml b/templates/db/mongodb/template_db_mongodb.yaml
index 52c778dc328..e603e9dffac 100644
--- a/templates/db/mongodb/template_db_mongodb.yaml
+++ b/templates/db/mongodb/template_db_mongodb.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-04-01T10:04:17Z'
+ date: '2021-04-22T11:28:34Z'
groups:
-
name: Templates/Databases
@@ -20,11 +20,6 @@ zabbix_export:
groups:
-
name: Templates/Databases
- applications:
- -
- name: MongoDB
- -
- name: 'Zabbix raw items'
items:
-
name: 'MongoDB: Document: deleted, rate'
@@ -34,9 +29,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Number of documents deleted per second.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -48,6 +40,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Document: inserted, rate'
type: DEPENDENT
@@ -56,9 +52,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Number of documents inserted per second.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -70,6 +63,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Document: returned, rate'
type: DEPENDENT
@@ -78,9 +75,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Number of documents returned by queries per second.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -92,6 +86,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Document: updated, rate'
type: DEPENDENT
@@ -100,9 +98,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Number of documents updated per second.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -114,6 +109,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Active clients: readers'
type: DEPENDENT
@@ -121,9 +120,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The number of the active client connections performing read operations.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -131,6 +127,10 @@ zabbix_export:
- $.globalLock.activeClients.readers
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Active clients: total'
type: DEPENDENT
@@ -138,9 +138,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The total number of internal client connections to the database including system threads as well as queued readers and writers.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -148,6 +145,10 @@ zabbix_export:
- $.globalLock.activeClients.total
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Active clients: writers'
type: DEPENDENT
@@ -155,9 +156,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The number of active client connections performing write operations.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -165,6 +163,10 @@ zabbix_export:
- $.globalLock.activeClients.writers
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Asserts: message, rate'
type: DEPENDENT
@@ -175,9 +177,6 @@ zabbix_export:
description: |
The number of message assertions raised per second.
Check the log file for more information about these messages.
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -189,6 +188,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Asserts: regular, rate'
type: DEPENDENT
@@ -199,9 +202,6 @@ zabbix_export:
description: |
The number of regular assertions raised per second.
Check the log file for more information about these messages.
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -213,6 +213,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Asserts: rollovers, rate'
type: DEPENDENT
@@ -223,9 +227,6 @@ zabbix_export:
description: |
Number of times that the rollover counters roll over per second.
The counters rollover to zero every 2^30 assertions.
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -237,6 +238,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Asserts: user, rate'
type: DEPENDENT
@@ -247,9 +252,6 @@ zabbix_export:
description: |
The number of “user asserts” that have occurred per second.
These are errors that user may generate, such as out of disk space or duplicate key.
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -261,6 +263,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Asserts: warning, rate'
type: DEPENDENT
@@ -269,9 +275,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of warnings raised per second.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -283,6 +286,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Get collections usage stats'
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
@@ -290,9 +297,10 @@ zabbix_export:
trends: '0'
value_type: TEXT
description: 'Returns usage statistics for each collection.'
- applications:
+ tags:
-
- name: 'Zabbix raw items'
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'MongoDB: Connections, active'
type: DEPENDENT
@@ -303,9 +311,6 @@ zabbix_export:
The number of active client connections to the server.
Active client connections refers to client connections that currently have operations in progress.
Available starting in 4.0.7, 0 for older versions.
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -314,6 +319,10 @@ zabbix_export:
error_handler: DISCARD_VALUE
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Connections, available'
type: DEPENDENT
@@ -321,9 +330,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The number of unused incoming connections available.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -331,6 +337,10 @@ zabbix_export:
- $.connections.available
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Connections, current'
type: DEPENDENT
@@ -340,9 +350,6 @@ zabbix_export:
description: |
The number of incoming connections from clients to the database server.
This number includes the current shell session
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -350,6 +357,10 @@ zabbix_export:
- $.connections.current
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: New connections, rate'
type: DEPENDENT
@@ -359,9 +370,6 @@ zabbix_export:
value_type: FLOAT
units: Rps
description: 'Rate of all incoming connections created to the server.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -373,6 +381,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Current queue: readers'
type: DEPENDENT
@@ -382,9 +394,6 @@ zabbix_export:
description: |
The number of operations that are currently queued and waiting for the read lock.
A consistently small read-queue, particularly of shorter operations, should cause no concern.
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -392,6 +401,10 @@ zabbix_export:
- $.globalLock.currentQueue.readers
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Current queue: total'
type: DEPENDENT
@@ -399,9 +412,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The total number of operations queued waiting for the lock.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -409,6 +419,10 @@ zabbix_export:
- $.globalLock.currentQueue.total
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Current queue: writers'
type: DEPENDENT
@@ -418,9 +432,6 @@ zabbix_export:
description: |
The number of operations that are currently queued and waiting for the write lock.
A consistently small write-queue, particularly of shorter operations, is no cause for concern.
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -428,6 +439,10 @@ zabbix_export:
- $.globalLock.currentQueue.writers
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Cursor: open pinned'
type: DEPENDENT
@@ -435,9 +450,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of pinned open cursors.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -445,6 +457,10 @@ zabbix_export:
- $.metrics.cursor.open.pinned
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Cursor: open total'
type: DEPENDENT
@@ -452,9 +468,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of cursors that MongoDB is maintaining for clients.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -462,6 +475,10 @@ zabbix_export:
- $.metrics.cursor.open.total
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
triggers:
-
expression: '{min(5m)}>{$MONGODB.CURSOR.OPEN.MAX.WARN}'
@@ -475,9 +492,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Number of cursors that time out, per second.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -489,6 +503,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
triggers:
-
expression: '{min(5m)}>{$MONGODB.CURSOR.TIMEOUT.MAX.WARN}'
@@ -502,9 +520,6 @@ zabbix_export:
history: 7d
units: bit
description: 'A number, either 64 or 32, that indicates whether the MongoDB instance is compiled for 64-bit or 32-bit architecture.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -516,6 +531,10 @@ zabbix_export:
- 3h
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Memory: mapped'
type: DEPENDENT
@@ -524,9 +543,6 @@ zabbix_export:
history: 7d
units: B
description: 'Amount of mapped memory by the database.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -539,6 +555,10 @@ zabbix_export:
- '1048576'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Memory: mapped with journal'
type: DEPENDENT
@@ -547,9 +567,6 @@ zabbix_export:
history: 7d
units: B
description: 'The amount of mapped memory, including the memory used for journaling.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -562,6 +579,10 @@ zabbix_export:
- '1048576'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Memory: resident'
type: DEPENDENT
@@ -570,9 +591,6 @@ zabbix_export:
history: 7d
units: B
description: 'Amount of memory currently used by the database process.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -584,6 +602,10 @@ zabbix_export:
- '1048576'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Memory: virtual'
type: DEPENDENT
@@ -592,9 +614,6 @@ zabbix_export:
history: 7d
units: B
description: 'Amount of virtual memory used by the mongod process.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -606,6 +625,10 @@ zabbix_export:
- '1048576'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Cursor: open no timeout'
type: DEPENDENT
@@ -613,9 +636,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of open cursors with the option DBQuery.Option.noTimeout set to prevent timeout after a period of inactivity.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -623,6 +643,10 @@ zabbix_export:
- $.metrics.cursor.open.noTimeout
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Bytes in, rate'
type: DEPENDENT
@@ -632,9 +656,6 @@ zabbix_export:
value_type: FLOAT
units: Bps
description: 'The total number of bytes that the server has received over network connections initiated by clients or other mongod/mongos instances per second.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -646,6 +667,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Bytes out, rate'
type: DEPENDENT
@@ -655,9 +680,6 @@ zabbix_export:
value_type: FLOAT
units: Bps
description: 'The total number of bytes that the server has sent over network connections initiated by clients or other mongod/mongos instances per second.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -669,6 +691,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Requests, rate'
type: DEPENDENT
@@ -678,9 +704,6 @@ zabbix_export:
value_type: FLOAT
units: '!Rps'
description: 'Number of distinct requests that the server has received per second'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -692,6 +715,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Operations: command, rate'
type: DEPENDENT
@@ -702,9 +729,6 @@ zabbix_export:
description: |
The number of commands issued to the database by the mongod instance per second.
Counts all commands except the write commands: insert, update, and delete.
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -716,6 +740,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Operations: delete, rate'
type: DEPENDENT
@@ -724,9 +752,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of delete operations received by the mongod instance per second.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -738,6 +763,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Operations: getmore, rate'
type: DEPENDENT
@@ -748,9 +777,6 @@ zabbix_export:
description: |
The number of “getmore” operations received by the mongod instance per second. This counter can be high even if the query count is low.
Secondary nodes send getMore operations as part of the replication process.
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -762,6 +788,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Operations: insert, rate'
type: DEPENDENT
@@ -770,9 +800,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of insert operations received by the mongod instance per second.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -784,6 +811,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Operations: query, rate'
type: DEPENDENT
@@ -792,9 +823,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of queries received by the mongod instance per second.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -806,6 +834,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Operations: update, rate'
type: DEPENDENT
@@ -814,9 +846,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of update operations received by the mongod instance per second.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -828,6 +857,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: Get oplog stats'
key: 'mongodb.oplog.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
@@ -835,18 +868,16 @@ zabbix_export:
trends: '0'
value_type: TEXT
description: 'Returns status of the replica set, using data polled from the oplog.'
- applications:
+ tags:
-
- name: 'Zabbix raw items'
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'MongoDB: Ping'
key: 'mongodb.ping["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
delay: 30s
history: 7d
description: 'Test if a connection is alive or not.'
- applications:
- -
- name: MongoDB
valuemap:
name: 'Service state'
preprocessing:
@@ -854,6 +885,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 30m
+ tags:
+ -
+ tag: Application
+ value: MongoDB
triggers:
-
expression: '{last()}=0'
@@ -867,9 +902,10 @@ zabbix_export:
trends: '0'
value_type: TEXT
description: 'Returns the replica set status from the point of view of the member where the method is run.'
- applications:
+ tags:
-
- name: 'Zabbix raw items'
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'MongoDB: Get server status'
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
@@ -877,9 +913,10 @@ zabbix_export:
trends: '0'
value_type: TEXT
description: 'Returns a database’s state.'
- applications:
+ tags:
-
- name: 'Zabbix raw items'
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'MongoDB: Uptime'
type: DEPENDENT
@@ -889,9 +926,6 @@ zabbix_export:
value_type: FLOAT
units: s
description: 'Number of seconds that the mongod process has been active.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -899,6 +933,10 @@ zabbix_export:
- $.uptime
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
triggers:
-
expression: '{nodata(10m)}=1'
@@ -925,9 +963,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'Version of the MongoDB server.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -939,6 +974,10 @@ zabbix_export:
- 3h
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -972,9 +1011,7 @@ zabbix_export:
value: '{$MONGODB.LLD.FILTER.COLLECTION.NOT_MATCHES}'
operator: NOT_MATCHES_REGEX
formulaid: B
- description: |
- Collect collections metrics.
- Note, depending on the number of DBs and collections this discovery operation may be expensive. Use filters with macros {$MONGODB.LLD.FILTER.DB.MATCHES}, {$MONGODB.LLD.FILTER.DB.NOT_MATCHES}, {$MONGODB.LLD.FILTER.COLLECTION.MATCHES}, {$MONGODB.LLD.FILTER.COLLECTION.NOT_MATCHES}.
+ description: 'Collect collections metrics.'
item_prototypes:
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Objects, avg size'
@@ -985,9 +1022,6 @@ zabbix_export:
value_type: FLOAT
units: B
description: 'The size of the average object in the collection in bytes.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -996,6 +1030,10 @@ zabbix_export:
error_handler: DISCARD_VALUE
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Capped'
type: DEPENDENT
@@ -1003,9 +1041,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Whether or not the collection is capped.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
valuemap:
name: 'MongoDB flag'
preprocessing:
@@ -1023,6 +1058,10 @@ zabbix_export:
- 3h
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Objects, count'
type: DEPENDENT
@@ -1030,9 +1069,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Total number of objects in the collection.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1040,6 +1076,10 @@ zabbix_export:
- $.count
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Capped: max number'
type: DEPENDENT
@@ -1048,9 +1088,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Maximum number of documents that may be present in a capped collection.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1059,6 +1096,10 @@ zabbix_export:
error_handler: DISCARD_VALUE
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Capped: max size'
type: DEPENDENT
@@ -1067,9 +1108,6 @@ zabbix_export:
history: 7d
units: B
description: 'Maximum size of a capped collection in bytes.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1078,6 +1116,10 @@ zabbix_export:
error_handler: DISCARD_VALUE
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Indexes'
type: DEPENDENT
@@ -1085,9 +1127,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Total number of indices on the collection.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1095,6 +1134,10 @@ zabbix_export:
- $.nindexes
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Operations: commands, ms/s'
type: DEPENDENT
@@ -1104,9 +1147,6 @@ zabbix_export:
value_type: FLOAT
units: ms/s
description: 'Fraction of time (ms/s) the mongod has spent on operations.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1118,6 +1158,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Operations: commands, rate'
type: DEPENDENT
@@ -1126,9 +1170,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of operations per second.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1140,6 +1181,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Operations: getmore, ms/s'
type: DEPENDENT
@@ -1149,9 +1194,6 @@ zabbix_export:
value_type: FLOAT
units: ms/s
description: 'Fraction of time (ms/s) the mongod has spent on operations.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1163,6 +1205,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Operations: getmore, rate'
type: DEPENDENT
@@ -1171,9 +1217,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of operations per second.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1185,6 +1228,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Operations: insert, ms/s'
type: DEPENDENT
@@ -1194,9 +1241,6 @@ zabbix_export:
value_type: FLOAT
units: ms/s
description: 'Fraction of time (ms/s) the mongod has spent on operations.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1208,6 +1252,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Operations: insert, rate'
type: DEPENDENT
@@ -1216,9 +1264,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of operations per second.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1230,6 +1275,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Operations: queries, ms/s'
type: DEPENDENT
@@ -1239,9 +1288,6 @@ zabbix_export:
value_type: FLOAT
units: ms/s
description: 'Fraction of time (ms/s) the mongod has spent on operations.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1253,6 +1299,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Operations: queries, rate'
type: DEPENDENT
@@ -1261,9 +1311,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of operations per second.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1275,6 +1322,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Operations: remove, ms/s'
type: DEPENDENT
@@ -1284,9 +1335,6 @@ zabbix_export:
value_type: FLOAT
units: ms/s
description: 'Fraction of time (ms/s) the mongod has spent on operations.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1298,6 +1346,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Operations: remove, rate'
type: DEPENDENT
@@ -1306,9 +1358,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of operations per second.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1320,6 +1369,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Operations: total, ms/s'
type: DEPENDENT
@@ -1329,9 +1382,6 @@ zabbix_export:
value_type: FLOAT
units: ms/s
description: 'Fraction of time (ms/s) the mongod has spent on operations.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1343,6 +1393,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Operations: total, rate'
type: DEPENDENT
@@ -1351,9 +1405,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of operations per second.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1365,6 +1416,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Operations: update, ms/s'
type: DEPENDENT
@@ -1374,9 +1429,6 @@ zabbix_export:
value_type: FLOAT
units: ms/s
description: 'Fraction of time (ms/s) the mongod has spent on operations.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1388,6 +1440,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Operations: update, rate'
type: DEPENDENT
@@ -1396,9 +1452,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of operations per second.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1410,6 +1463,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Read lock, ms/s'
type: DEPENDENT
@@ -1419,9 +1476,6 @@ zabbix_export:
value_type: FLOAT
units: ms/s
description: 'Fraction of time (ms/s) the mongod has spent on operations.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1433,6 +1487,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Read lock, rate'
type: DEPENDENT
@@ -1441,9 +1499,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of operations per second.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1455,6 +1510,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Size'
type: DEPENDENT
@@ -1463,9 +1522,6 @@ zabbix_export:
history: 7d
units: B
description: 'The total size in bytes of the data in the collection plus the size of every index on the collection.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1473,6 +1529,10 @@ zabbix_export:
- $.size
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Get collection stats {#DBNAME}.{#COLLECTION}'
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
@@ -1480,9 +1540,10 @@ zabbix_export:
trends: '0'
value_type: TEXT
description: 'Returns a variety of storage statistics for a given collection.'
- applications:
+ tags:
-
- name: 'Zabbix raw items'
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Storage size'
type: DEPENDENT
@@ -1491,9 +1552,6 @@ zabbix_export:
history: 7d
units: B
description: 'Total storage space allocated to this collection for document storage.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1501,6 +1559,10 @@ zabbix_export:
- $.storageSize
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Write lock, ms/s'
type: DEPENDENT
@@ -1510,9 +1572,6 @@ zabbix_export:
value_type: FLOAT
units: ms/s
description: 'Fraction of time (ms/s) the mongod has spent on operations.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1524,6 +1583,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Write lock, rate'
type: DEPENDENT
@@ -1532,9 +1595,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of operations per second.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -1546,6 +1606,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.collections.usage["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}.{#COLLECTION}'
graph_prototypes:
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Locks'
@@ -1681,9 +1745,7 @@ zabbix_export:
value: '{$MONGODB.LLD.FILTER.DB.NOT_MATCHES}'
operator: NOT_MATCHES_REGEX
formulaid: B
- description: |
- Collect database metrics.
- Note, depending on the number of DBs this discovery operation may be expensive. Use filters with macros {$MONGODB.LLD.FILTER.DB.MATCHES}, {$MONGODB.LLD.FILTER.DB.NOT_MATCHES}.
+ description: 'Collect database metrics.'
item_prototypes:
-
name: 'MongoDB {#DBNAME}: Collections'
@@ -1692,9 +1754,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Contains a count of the number of collections in that database.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}'
preprocessing:
-
type: JSONPATH
@@ -1702,6 +1761,10 @@ zabbix_export:
- $.collections
master_item:
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}'
-
name: 'MongoDB {#DBNAME}: Size, data'
type: DEPENDENT
@@ -1710,9 +1773,6 @@ zabbix_export:
history: 7d
units: B
description: 'Total size of the data held in this database including the padding factor.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}'
preprocessing:
-
type: JSONPATH
@@ -1720,6 +1780,10 @@ zabbix_export:
- $.dataSize
master_item:
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}'
-
name: 'MongoDB {#DBNAME}: Extents'
type: DEPENDENT
@@ -1727,9 +1791,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Contains a count of the number of extents in the database across all collections.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}'
preprocessing:
-
type: JSONPATH
@@ -1738,6 +1799,10 @@ zabbix_export:
error_handler: DISCARD_VALUE
master_item:
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}'
-
name: 'MongoDB {#DBNAME}: Size, file'
type: DEPENDENT
@@ -1746,9 +1811,6 @@ zabbix_export:
history: 7d
units: B
description: 'Total size of the data held in this database including the padding factor (only available with the mmapv1 storage engine).'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}'
preprocessing:
-
type: JSONPATH
@@ -1757,6 +1819,10 @@ zabbix_export:
error_handler: DISCARD_VALUE
master_item:
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}'
-
name: 'MongoDB {#DBNAME}: Size, index'
type: DEPENDENT
@@ -1765,9 +1831,6 @@ zabbix_export:
history: 7d
units: B
description: 'Total size of all indexes created on this database.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}'
preprocessing:
-
type: JSONPATH
@@ -1775,6 +1838,10 @@ zabbix_export:
- $.indexSize
master_item:
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}'
-
name: 'MongoDB {#DBNAME}: Objects, count'
type: DEPENDENT
@@ -1782,9 +1849,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of objects (documents) in the database across all collections.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}'
preprocessing:
-
type: JSONPATH
@@ -1792,6 +1856,10 @@ zabbix_export:
- $.objects
master_item:
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}'
-
name: 'MongoDB {#DBNAME}: Objects, avg size'
type: DEPENDENT
@@ -1801,9 +1869,6 @@ zabbix_export:
value_type: FLOAT
units: B
description: 'The average size of each document in bytes.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}'
preprocessing:
-
type: JSONPATH
@@ -1811,6 +1876,10 @@ zabbix_export:
- $.avgObjSize
master_item:
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}'
-
name: 'MongoDB {#DBNAME}: Get db stats {#DBNAME}'
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
@@ -1818,9 +1887,10 @@ zabbix_export:
trends: '0'
value_type: TEXT
description: 'Returns statistics reflecting the database system’s state.'
- applications:
+ tags:
-
- name: 'Zabbix raw items'
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'MongoDB {#DBNAME}: Size, storage'
type: DEPENDENT
@@ -1829,9 +1899,6 @@ zabbix_export:
history: 7d
units: B
description: 'Total amount of space allocated to collections in this database for document storage.'
- application_prototypes:
- -
- name: 'MongoDB: {#DBNAME}'
preprocessing:
-
type: JSONPATH
@@ -1839,6 +1906,10 @@ zabbix_export:
- $.storageSize
master_item:
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB: {#DBNAME}'
graph_prototypes:
-
name: 'MongoDB {#DBNAME}: Collections stats'
@@ -1896,9 +1967,6 @@ zabbix_export:
value_type: FLOAT
units: ms/s
description: 'Fraction of time (ms/s) the mongod has spent applying operations from the oplog.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -1910,6 +1978,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Apply batches, rate'
type: DEPENDENT
@@ -1918,9 +1990,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Number of batches applied across all databases per second.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -1932,6 +2001,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Apply ops, rate'
type: DEPENDENT
@@ -1940,9 +2013,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Number of oplog operations applied per second.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -1954,6 +2024,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Buffer'
type: DEPENDENT
@@ -1961,9 +2035,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of operations in the oplog buffer.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -1971,6 +2042,10 @@ zabbix_export:
- $.metrics.repl.buffer.count
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Buffer, max size'
type: DEPENDENT
@@ -1979,9 +2054,6 @@ zabbix_export:
history: 7d
units: B
description: 'Maximum size of the buffer.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -1989,6 +2061,10 @@ zabbix_export:
- $.metrics.repl.buffer.maxSizeBytes
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Buffer, size'
type: DEPENDENT
@@ -1997,9 +2073,6 @@ zabbix_export:
history: 7d
units: B
description: 'Current size of the contents of the oplog buffer.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -2007,6 +2080,10 @@ zabbix_export:
- $.metrics.repl.buffer.sizeBytes
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Replication lag'
type: DEPENDENT
@@ -2016,9 +2093,6 @@ zabbix_export:
value_type: FLOAT
units: s
description: 'Delay between a write operation on the primary and its copy to a secondary.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -2026,6 +2100,10 @@ zabbix_export:
- '$.members[?(@.self == "true")].lag.first()'
master_item:
key: 'mongodb.rs.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
trigger_prototypes:
-
expression: '{min(5m)}>{$MONGODB.REPL.LAG.MAX.WARN}'
@@ -2040,9 +2118,6 @@ zabbix_export:
value_type: FLOAT
units: Bps
description: 'Amount of data read from the replication sync source per second.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -2054,6 +2129,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Network getmores, ms/s'
type: DEPENDENT
@@ -2063,9 +2142,6 @@ zabbix_export:
value_type: FLOAT
units: ms/s
description: 'Fraction of time (ms/s) required to collect data from getmore operations.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -2077,6 +2153,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Network getmores, rate'
type: DEPENDENT
@@ -2085,9 +2165,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Number of getmore operations per second.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -2099,6 +2176,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Network ops, rate'
type: DEPENDENT
@@ -2107,9 +2188,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Number of operations read from the replication source per second.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -2121,6 +2199,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Network readers created, rate'
type: DEPENDENT
@@ -2129,9 +2211,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Number of oplog query processes created per second.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -2143,6 +2222,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB {#RS_NAME}: Oplog time diff'
type: DEPENDENT
@@ -2151,9 +2234,6 @@ zabbix_export:
history: 7d
units: s
description: 'Oplog window: difference between the first and last operation in the oplog. Only present if there are entries in the oplog.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -2161,6 +2241,10 @@ zabbix_export:
- $.timediff
master_item:
key: 'mongodb.oplog.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Preload docs, ms/s'
type: DEPENDENT
@@ -2170,9 +2254,6 @@ zabbix_export:
value_type: FLOAT
units: ms/s
description: 'Fraction of time (ms/s) spent loading documents as part of the pre-fetch stage of replication.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -2184,6 +2265,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Preload docs, rate'
type: DEPENDENT
@@ -2192,9 +2277,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Number of documents loaded per second during the pre-fetch stage of replication.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -2206,6 +2288,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Preload indexes, ms/s'
type: DEPENDENT
@@ -2215,9 +2301,6 @@ zabbix_export:
value_type: FLOAT
units: ms/s
description: 'Fraction of time (ms/s) spent loading documents as part of the pre-fetch stage of replication.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -2229,6 +2312,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Preload indexes, rate'
type: DEPENDENT
@@ -2237,9 +2324,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Number of index entries loaded by members before updating documents as part of the pre-fetch stage of replication.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -2251,6 +2335,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Node state'
type: DEPENDENT
@@ -2258,9 +2346,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'An integer between 0 and 10 that represents the replica state of the current member.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
valuemap:
name: 'ReplicaSet node state'
preprocessing:
@@ -2274,6 +2359,10 @@ zabbix_export:
- 1h
master_item:
key: 'mongodb.rs.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
trigger_prototypes:
-
expression: '{change()}=1'
@@ -2290,9 +2379,6 @@ zabbix_export:
history: 7d
discover: NO_DISCOVER
description: 'The number of replicated nodes in the current ReplicaSet.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -2304,6 +2390,10 @@ zabbix_export:
- 1h
master_item:
key: 'mongodb.rs.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Unhealthy replicas'
type: DEPENDENT
@@ -2314,9 +2404,6 @@ zabbix_export:
discover: NO_DISCOVER
value_type: CHAR
description: 'The replicated nodes in the current ReplicaSet with member health value = 0.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -2328,13 +2415,16 @@ zabbix_export:
- |
var value = JSON.parse(value);
return value.length ? JSON.stringify(value) : '';
-
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
master_item:
key: 'mongodb.rs.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
-
name: 'MongoDB: Number of unhealthy replicas'
type: DEPENDENT
@@ -2343,9 +2433,6 @@ zabbix_export:
history: 7d
discover: NO_DISCOVER
description: 'The number of replicated nodes with member health value = 0.'
- application_prototypes:
- -
- name: 'MongoDB Replica Set: {#RS_NAME}'
preprocessing:
-
type: JSONPATH
@@ -2357,6 +2444,10 @@ zabbix_export:
- 1h
master_item:
key: 'mongodb.rs.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB Replica Set: {#RS_NAME}'
trigger_prototypes:
-
expression: '{MongoDB node by Zabbix Agent 2:mongodb.rs.unhealthy_count[{#RS_NAME}].last()}>0 and {MongoDB node by Zabbix Agent 2:mongodb.rs.unhealthy[{#RS_NAME}].strlen()}>0'
@@ -2389,22 +2480,6 @@ zabbix_export:
- 1h
overrides:
-
- name: 'Arbiter metrics'
- step: '2'
- filter:
- conditions:
- -
- macro: '{#NODE_STATE}'
- value: '7'
- formulaid: A
- operations:
- -
- operationobject: ITEM_PROTOTYPE
- operator: LIKE
- value: 'Replication lag'
- status: ENABLED
- discover: NO_DISCOVER
- -
name: 'Primary metrics'
step: '1'
filter:
@@ -2453,9 +2528,6 @@ zabbix_export:
history: 7d
units: B
description: 'Size of the data currently in cache.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2463,6 +2535,10 @@ zabbix_export:
- '$.wiredTiger.cache[''bytes currently in the cache'']'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: WiredTiger cache: bytes, max'
type: DEPENDENT
@@ -2471,9 +2547,6 @@ zabbix_export:
history: 7d
units: B
description: 'Maximum cache size.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2481,6 +2554,10 @@ zabbix_export:
- '$.wiredTiger.cache[''maximum bytes configured'']'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: WiredTiger cache: max page size at eviction'
type: DEPENDENT
@@ -2489,9 +2566,6 @@ zabbix_export:
history: 7d
units: B
description: 'Maximum page size at eviction.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2499,6 +2573,10 @@ zabbix_export:
- '$.wiredTiger.cache[''maximum page size at eviction'']'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: WiredTiger cache: modified pages evicted'
type: DEPENDENT
@@ -2506,9 +2584,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of modified pages evicted from the cache.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2516,6 +2591,10 @@ zabbix_export:
- '$.wiredTiger.cache[''modified pages evicted'']'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: WiredTiger cache: pages evicted by application threads, rate'
type: DEPENDENT
@@ -2524,9 +2603,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Number of pages evicted by application threads per second.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2534,6 +2610,10 @@ zabbix_export:
- '$.wiredTiger.cache.[''pages evicted by application threads'']'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: WiredTiger cache: pages held in cache'
type: DEPENDENT
@@ -2541,9 +2621,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of pages currently held in the cache.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2551,6 +2628,10 @@ zabbix_export:
- '$.wiredTiger.cache[''pages currently held in the cache'']'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: WiredTiger cache: pages read into cache'
type: DEPENDENT
@@ -2558,9 +2639,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of pages read into the cache.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2568,6 +2646,10 @@ zabbix_export:
- '$.wiredTiger.cache[''pages read into cache'']'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: WiredTiger cache: pages written from cache'
type: DEPENDENT
@@ -2575,9 +2657,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of pages written from the cache.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2585,6 +2664,10 @@ zabbix_export:
- '$.wiredTiger.cache[''pages written from cache'']'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: WiredTiger cache: in-memory page splits'
type: DEPENDENT
@@ -2592,9 +2675,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'In-memory page splits.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2602,6 +2682,10 @@ zabbix_export:
- '$.wiredTiger.cache[''in-memory page splits'']'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: WiredTiger cache: tracked dirty bytes in the cache'
type: DEPENDENT
@@ -2610,9 +2694,6 @@ zabbix_export:
history: 7d
units: B
description: 'Size of the dirty data in the cache.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2620,6 +2701,10 @@ zabbix_export:
- '$.wiredTiger.cache.[''tracked dirty bytes in the cache'']'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: WiredTiger cache: unmodified pages evicted'
type: DEPENDENT
@@ -2627,9 +2712,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of unmodified pages evicted from the cache.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2637,6 +2719,10 @@ zabbix_export:
- '$.wiredTiger.cache.[''unmodified pages evicted'']'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: WiredTiger concurrent transactions: read, available'
type: DEPENDENT
@@ -2644,9 +2730,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of available read tickets (concurrent transactions) remaining.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2654,6 +2737,10 @@ zabbix_export:
- $.wiredTiger.concurrentTransactions.read.available
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
trigger_prototypes:
-
expression: '{max(5m)}<{$MONGODB.WIRED_TIGER.TICKETS.AVAILABLE.MIN.WARN}'
@@ -2669,9 +2756,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of read tickets (concurrent transactions) in use.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2679,6 +2763,10 @@ zabbix_export:
- $.wiredTiger.concurrentTransactions.read.out
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: WiredTiger concurrent transactions: read, total tickets'
type: DEPENDENT
@@ -2686,9 +2774,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Total number of read tickets (concurrent transactions) available.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2696,6 +2781,10 @@ zabbix_export:
- $.wiredTiger.concurrentTransactions.read.totalTickets
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: WiredTiger concurrent transactions: write, available'
type: DEPENDENT
@@ -2703,9 +2792,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of available write tickets (concurrent transactions) remaining.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2713,6 +2799,10 @@ zabbix_export:
- $.wiredTiger.concurrentTransactions.write.available
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
trigger_prototypes:
-
expression: '{max(5m)}<{$MONGODB.WIRED_TIGER.TICKETS.AVAILABLE.MIN.WARN}'
@@ -2728,9 +2818,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of write tickets (concurrent transactions) in use.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2738,6 +2825,10 @@ zabbix_export:
- $.wiredTiger.concurrentTransactions.write.out
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
-
name: 'MongoDB: WiredTiger concurrent transactions: write, total tickets'
type: DEPENDENT
@@ -2745,9 +2836,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Total number of write tickets (concurrent transactions) available.'
- applications:
- -
- name: MongoDB
preprocessing:
-
type: JSONPATH
@@ -2755,6 +2843,10 @@ zabbix_export:
- $.wiredTiger.concurrentTransactions.write.totalTickets
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: MongoDB
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
preprocessing:
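Note on the pattern this patch applies to every item above: with the 5.4 template schema the per-item applications / application_prototypes lists are dropped and the former application name is carried instead as an item-level tag named Application. A minimal before/after sketch of a single item (the item name below is illustrative only, not one of the template's items):

  # pre-5.4 schema: grouping via an application name
  -
    name: 'MongoDB: Example metric'
    applications:
      -
        name: MongoDB
  # 5.4 schema, as applied in this patch: the same grouping expressed as a tag
  -
    name: 'MongoDB: Example metric'
    tags:
      -
        tag: Application
        value: MongoDB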
diff --git a/templates/db/mongodb_cluster/template_db_mongodb_cluster.yaml b/templates/db/mongodb_cluster/template_db_mongodb_cluster.yaml
index 6547dca43b0..e3a2339b1c4 100644
--- a/templates/db/mongodb_cluster/template_db_mongodb_cluster.yaml
+++ b/templates/db/mongodb_cluster/template_db_mongodb_cluster.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-04-01T10:04:12Z'
+ date: '2021-04-22T11:28:28Z'
groups:
-
name: Templates/Databases
@@ -22,11 +22,6 @@ zabbix_export:
groups:
-
name: Templates/Databases
- applications:
- -
- name: 'MongoDB sharded cluster'
- -
- name: 'Zabbix raw items'
items:
-
name: 'MongoDB cluster: Configserver heartbeat'
@@ -36,9 +31,6 @@ zabbix_export:
history: 7d
units: s
description: 'Difference between the latest optime of the CSRS primary that the mongos has seen and cluster time.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JAVASCRIPT
@@ -46,9 +38,12 @@ zabbix_export:
- |
data = JSON.parse(value)
return (data["$clusterTime"].clusterTime-data.sharding.lastSeenConfigServerOpTime.ts)/Math.pow(2,32);
-
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Connections, active'
type: DEPENDENT
@@ -59,9 +54,6 @@ zabbix_export:
"The number of active client connections to the server.
Active client connections refers to client connections that currently have operations in progress.
Available starting in 4.0.7, 0 for older versions."
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -71,6 +63,10 @@ zabbix_export:
error_handler_params: '0'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Connections, available'
type: DEPENDENT
@@ -78,9 +74,6 @@ zabbix_export:
delay: '0'
history: 7d
description: '"The number of unused incoming connections available."'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -88,6 +81,10 @@ zabbix_export:
- $.connections.available
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
triggers:
-
expression: '{max(5m)}<{$MONGODB.CONNS.AVAILABLE.MIN.WARN}'
@@ -105,9 +102,6 @@ zabbix_export:
description: |
"The number of incoming connections from clients to the database server.
This number includes the current shell session"
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -115,6 +109,10 @@ zabbix_export:
- $.connections.current
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: New connections, rate'
type: DEPENDENT
@@ -123,9 +121,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: '"Rate of all incoming connections created to the server."'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -137,6 +132,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Connection pool: available'
type: DEPENDENT
@@ -144,9 +143,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The total number of available outgoing connections from the current mongos instance to other members of the sharded cluster.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -154,6 +150,10 @@ zabbix_export:
- $.totalAvailable
master_item:
key: 'mongodb.connpool.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Connection pool: client connections'
type: DEPENDENT
@@ -161,9 +161,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The number of active and stored outgoing synchronous connections from the current mongos instance to other members of the sharded cluster.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -171,6 +168,10 @@ zabbix_export:
- $.numClientConnections
master_item:
key: 'mongodb.connpool.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Connection pool: created, rate'
type: DEPENDENT
@@ -179,9 +180,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The total number of outgoing connections created per second by the current mongos instance to other members of the sharded cluster.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -193,6 +191,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.connpool.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Connection pool: in use'
type: DEPENDENT
@@ -200,9 +202,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Reports the total number of outgoing connections from the current mongos instance to other members of the sharded cluster set that are currently in use.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -210,6 +209,10 @@ zabbix_export:
- $.totalInUse
master_item:
key: 'mongodb.connpool.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Connection pool: refreshing'
type: DEPENDENT
@@ -217,9 +220,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Reports the total number of outgoing connections from the current mongos instance to other members of the sharded cluster that are currently being refreshed.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -227,6 +227,10 @@ zabbix_export:
- $.totalRefreshing
master_item:
key: 'mongodb.connpool.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Connection pool: scoped'
type: DEPENDENT
@@ -234,9 +238,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of active and stored outgoing scoped synchronous connections from the current mongos instance to other members of the sharded cluster.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -244,6 +245,10 @@ zabbix_export:
- $.numAScopedConnections
master_item:
key: 'mongodb.connpool.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Get mongodb.connpool.stats'
key: 'mongodb.connpool.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
@@ -251,9 +256,10 @@ zabbix_export:
trends: '0'
value_type: TEXT
description: 'Returns current info about connpool.stats.'
- applications:
+ tags:
-
- name: 'Zabbix raw items'
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'MongoDB cluster: Cursor: open pinned'
type: DEPENDENT
@@ -261,9 +267,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of pinned open cursors.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -271,6 +274,10 @@ zabbix_export:
- $.metrics.cursor.open.pinned
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Cursor: open total'
type: DEPENDENT
@@ -278,9 +285,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of cursors that MongoDB is maintaining for clients.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -288,6 +292,10 @@ zabbix_export:
- $.metrics.cursor.open.total
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
triggers:
-
expression: '{min(5m)}>{$MONGODB.CURSOR.OPEN.MAX.WARN}'
@@ -301,9 +309,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Number of cursors that time out, per second.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -315,6 +320,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
triggers:
-
expression: '{min(5m)}>{$MONGODB.CURSOR.TIMEOUT.MAX.WARN}'
@@ -325,9 +334,10 @@ zabbix_export:
key: 'mongodb.jumbo_chunks.count["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
history: 7d
description: 'Total number of ''jumbo'' chunks in the mongo cluster.'
- applications:
+ tags:
-
- name: 'MongoDB sharded cluster'
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Last seen configserver'
type: DEPENDENT
@@ -336,9 +346,6 @@ zabbix_export:
history: 7d
units: unixtime
description: 'The latest optime of the CSRS primary that the mongos has seen.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JAVASCRIPT
@@ -346,9 +353,12 @@ zabbix_export:
- |
data = JSON.parse(value)
return data.sharding.lastSeenConfigServerOpTime.ts/Math.pow(2,32)
-
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Architecture'
type: DEPENDENT
@@ -357,9 +367,6 @@ zabbix_export:
history: 7d
units: bit
description: 'A number, either 64 or 32, that indicates whether the MongoDB instance is compiled for 64-bit or 32-bit architecture.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -371,6 +378,10 @@ zabbix_export:
- 3h
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Memory: resident'
type: DEPENDENT
@@ -379,9 +390,6 @@ zabbix_export:
history: 7d
units: B
description: 'Amount of memory currently used by the database process.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -393,6 +401,10 @@ zabbix_export:
- '1048576'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Memory: virtual'
type: DEPENDENT
@@ -401,9 +413,6 @@ zabbix_export:
history: 7d
units: B
description: 'Amount of virtual memory used by the mongos process.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -415,6 +424,10 @@ zabbix_export:
- '1048576'
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Cursor: open no timeout'
type: DEPENDENT
@@ -422,9 +435,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of open cursors with the option DBQuery.Option.noTimeout set to prevent timeout after a period of inactivity.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -432,6 +442,10 @@ zabbix_export:
- $.metrics.cursor.open.noTimeout
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Bytes in, rate'
type: DEPENDENT
@@ -440,9 +454,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The total number of bytes that the server has received over network connections initiated by clients or other mongod/mongos instances per second.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -454,6 +465,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Bytes out, rate'
type: DEPENDENT
@@ -463,9 +478,6 @@ zabbix_export:
value_type: FLOAT
units: Bps
description: 'The total number of bytes that the server has sent over network connections initiated by clients or other mongod/mongos instances per second.'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -477,6 +489,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Requests, rate'
type: DEPENDENT
@@ -486,9 +502,6 @@ zabbix_export:
value_type: FLOAT
units: '!Rps'
description: 'Number of distinct requests that the server has received per second'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -500,6 +513,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Operations: command'
type: DEPENDENT
@@ -510,9 +527,6 @@ zabbix_export:
description: |
"The number of commands issued to the database per second.
Counts all commands except the write commands: insert, update, and delete."
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -524,6 +538,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Operations: delete'
type: DEPENDENT
@@ -532,9 +550,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: '"The number of delete operations the mongos instance per second."'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -546,6 +561,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Operations: getmore, rate'
type: DEPENDENT
@@ -556,9 +575,6 @@ zabbix_export:
description: |
"The number of “getmore” operations the mongos per second. This counter can be high even if the query count is low.
Secondary nodes send getMore operations as part of the replication process."
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -570,6 +586,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Operations: insert, rate'
type: DEPENDENT
@@ -578,9 +598,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: '"The number of insert operations received the mongos instance per second."'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -592,6 +609,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Operations: query, rate'
type: DEPENDENT
@@ -600,9 +621,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: '"The number of queries received the mongos instance per second."'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -614,6 +632,10 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Operations: update, rate'
type: DEPENDENT
@@ -622,9 +644,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: '"The number of update operations the mongos instance per second."'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -636,15 +655,16 @@ zabbix_export:
- ''
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
-
name: 'MongoDB cluster: Ping'
key: 'mongodb.ping["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
delay: 30s
history: 7d
description: 'Test if a connection is alive or not.'
- applications:
- -
- name: 'MongoDB sharded cluster'
valuemap:
name: 'Service state'
preprocessing:
@@ -652,6 +672,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 30m
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
triggers:
-
expression: '{last()}=0'
@@ -665,9 +689,10 @@ zabbix_export:
trends: '0'
value_type: TEXT
description: 'The mongos statistics.'
- applications:
+ tags:
-
- name: 'Zabbix raw items'
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'MongoDB cluster: Uptime'
type: DEPENDENT
@@ -676,9 +701,6 @@ zabbix_export:
history: 7d
units: s
description: 'Number of seconds since Mongos server start'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -686,6 +708,10 @@ zabbix_export:
- $.uptime
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
triggers:
-
expression: '{nodata(10m)}=1'
@@ -712,9 +738,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'Version of the Mongos server'
- applications:
- -
- name: 'MongoDB sharded cluster'
preprocessing:
-
type: JSONPATH
@@ -726,6 +749,10 @@ zabbix_export:
- 3h
master_item:
key: 'mongodb.server.status["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster'
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -783,9 +810,7 @@ zabbix_export:
value: '{$MONGODB.LLD.FILTER.COLLECTION.NOT_MATCHES}'
operator: NOT_MATCHES_REGEX
formulaid: B
- description: |
- Collect collections metrics.
- Note, depending on the number of DBs and collections this discovery operation may be expensive. Use filters with macros {$MONGODB.LLD.FILTER.DB.MATCHES}, {$MONGODB.LLD.FILTER.DB.NOT_MATCHES}, {$MONGODB.LLD.FILTER.COLLECTION.MATCHES}, {$MONGODB.LLD.FILTER.COLLECTION.NOT_MATCHES}.
+ description: 'Collect collections metrics.'
item_prototypes:
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Objects, avg size'
@@ -796,9 +821,6 @@ zabbix_export:
value_type: FLOAT
units: B
description: 'The size of the average object in the collection in bytes.'
- application_prototypes:
- -
- name: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -807,6 +829,10 @@ zabbix_export:
error_handler: DISCARD_VALUE
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Capped'
type: DEPENDENT
@@ -816,9 +842,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'Whether or not the collection is capped.'
- application_prototypes:
- -
- name: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
valuemap:
name: 'MongoDB flag'
preprocessing:
@@ -836,6 +859,10 @@ zabbix_export:
- 3h
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Objects, count'
type: DEPENDENT
@@ -843,9 +870,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Total number of objects in the collection.'
- application_prototypes:
- -
- name: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -853,6 +877,10 @@ zabbix_export:
- $.count
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Capped, max number'
type: DEPENDENT
@@ -861,9 +889,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Maximum number of documents in a capped collection.'
- application_prototypes:
- -
- name: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -872,6 +897,10 @@ zabbix_export:
error_handler: DISCARD_VALUE
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Capped, max size'
type: DEPENDENT
@@ -880,9 +909,6 @@ zabbix_export:
history: 7d
units: B
description: 'Maximum size of a capped collection in bytes.'
- application_prototypes:
- -
- name: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -891,6 +917,10 @@ zabbix_export:
error_handler: DISCARD_VALUE
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Indexes'
type: DEPENDENT
@@ -898,9 +928,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Total number of indices on the collection.'
- application_prototypes:
- -
- name: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -908,6 +935,10 @@ zabbix_export:
- $.nindexes
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Size'
type: DEPENDENT
@@ -916,9 +947,6 @@ zabbix_export:
history: 7d
units: B
description: 'The total size in bytes of the data in the collection plus the size of every index on the collection.'
- application_prototypes:
- -
- name: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -926,6 +954,10 @@ zabbix_export:
- $.size
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Get collection stats {#DBNAME}.{#COLLECTION}'
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
@@ -933,9 +965,10 @@ zabbix_export:
trends: '0'
value_type: TEXT
description: 'Returns a variety of storage statistics for a given collection.'
- applications:
+ tags:
-
- name: 'Zabbix raw items'
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'MongoDB {#DBNAME}.{#COLLECTION}: Storage size'
type: DEPENDENT
@@ -944,9 +977,6 @@ zabbix_export:
history: 7d
units: B
description: 'Total storage space allocated to this collection for document storage.'
- application_prototypes:
- -
- name: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
preprocessing:
-
type: JSONPATH
@@ -954,6 +984,10 @@ zabbix_export:
- $.storageSize
master_item:
key: 'mongodb.collection.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}","{#COLLECTION}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster: {#DBNAME}.{#COLLECTION}'
-
name: 'Database discovery'
key: 'mongodb.db.discovery["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}"]'
@@ -970,9 +1004,7 @@ zabbix_export:
value: '{$MONGODB.LLD.FILTER.DB.NOT_MATCHES}'
operator: NOT_MATCHES_REGEX
formulaid: B
- description: |
- Collect database metrics.
- Note, depending on the number of DBs this discovery operation may be expensive. Use filters with macros {$MONGODB.LLD.FILTER.DB.MATCHES}, {$MONGODB.LLD.FILTER.DB.NOT_MATCHES}.
+ description: 'Collect database metrics.'
item_prototypes:
-
name: 'MongoDB {#DBNAME}: Size, data'
@@ -982,9 +1014,6 @@ zabbix_export:
history: 7d
units: B
description: 'Total size of the data held in this database including the padding factor.'
- application_prototypes:
- -
- name: 'MongoDB sharded cluster: {#DBNAME}'
preprocessing:
-
type: JSONPATH
@@ -992,6 +1021,10 @@ zabbix_export:
- $.dataSize
master_item:
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster: {#DBNAME}'
-
name: 'MongoDB {#DBNAME}: Extents'
type: DEPENDENT
@@ -999,9 +1032,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Contains a count of the number of extents in the database across all collections.'
- application_prototypes:
- -
- name: 'MongoDB sharded cluster: {#DBNAME}'
preprocessing:
-
type: JSONPATH
@@ -1010,6 +1040,10 @@ zabbix_export:
error_handler: DISCARD_VALUE
master_item:
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster: {#DBNAME}'
-
name: 'MongoDB {#DBNAME}: Size, file'
type: DEPENDENT
@@ -1018,9 +1052,6 @@ zabbix_export:
history: 7d
units: B
description: 'Total size of the data held in this database including the padding factor (only available with the mmapv1 storage engine).'
- application_prototypes:
- -
- name: 'MongoDB sharded cluster: {#DBNAME}'
preprocessing:
-
type: JSONPATH
@@ -1029,6 +1060,10 @@ zabbix_export:
error_handler: DISCARD_VALUE
master_item:
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster: {#DBNAME}'
-
name: 'MongoDB {#DBNAME}: Size, index'
type: DEPENDENT
@@ -1037,9 +1072,6 @@ zabbix_export:
history: 7d
units: B
description: 'Total size of all indexes created on this database.'
- application_prototypes:
- -
- name: 'MongoDB sharded cluster: {#DBNAME}'
preprocessing:
-
type: JSONPATH
@@ -1047,6 +1079,10 @@ zabbix_export:
- $.indexSize
master_item:
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster: {#DBNAME}'
-
name: 'MongoDB {#DBNAME}: Objects, count'
type: DEPENDENT
@@ -1054,9 +1090,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Number of objects (documents) in the database across all collections.'
- application_prototypes:
- -
- name: 'MongoDB sharded cluster: {#DBNAME}'
preprocessing:
-
type: JSONPATH
@@ -1064,6 +1097,10 @@ zabbix_export:
- $.objects
master_item:
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster: {#DBNAME}'
-
name: 'MongoDB {#DBNAME}: Objects, avg size'
type: DEPENDENT
@@ -1073,9 +1110,6 @@ zabbix_export:
value_type: FLOAT
units: B
description: 'The average size of each document in bytes.'
- application_prototypes:
- -
- name: 'MongoDB sharded cluster: {#DBNAME}'
preprocessing:
-
type: JSONPATH
@@ -1083,6 +1117,10 @@ zabbix_export:
- $.avgObjSize
master_item:
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster: {#DBNAME}'
-
name: 'MongoDB {#DBNAME}: Get db stats {#DBNAME}'
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
@@ -1090,9 +1128,10 @@ zabbix_export:
trends: '0'
value_type: TEXT
description: 'Returns statistics reflecting the database system’s state.'
- applications:
+ tags:
-
- name: 'Zabbix raw items'
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'MongoDB {#DBNAME}: Size, storage'
type: DEPENDENT
@@ -1101,9 +1140,6 @@ zabbix_export:
history: 7d
units: B
description: 'Total amount of space allocated to collections in this database for document storage.'
- application_prototypes:
- -
- name: 'MongoDB sharded cluster: {#DBNAME}'
preprocessing:
-
type: JSONPATH
@@ -1111,6 +1147,10 @@ zabbix_export:
- $.storageSize
master_item:
key: 'mongodb.db.stats["{$MONGODB.CONNSTRING}","{$MONGODB.USER}","{$MONGODB.PASSWORD}","{#DBNAME}"]'
+ tags:
+ -
+ tag: Application
+ value: 'MongoDB sharded cluster: {#DBNAME}'
graph_prototypes:
-
name: 'MongoDB {#DBNAME}: Disk usage'
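The hunks above all apply one mechanical change: each item's `applications` block (and each prototype's `application_prototypes` block) is removed, and the same application name is re-attached as an item-level `Application` tag, the tagging scheme used from Zabbix 5.4 onward. A minimal sketch of the pattern, using a hypothetical item name as a stand-in for the MongoDB items above:

```yaml
# Old form (Zabbix <= 5.2): item grouped via applications
-
  name: 'Example item'                 # hypothetical name, for illustration only
  applications:
    -
      name: 'MongoDB sharded cluster'

# New form (Zabbix 5.4): the same grouping expressed as an item tag
-
  name: 'Example item'
  tags:
    -
      tag: Application
      value: 'MongoDB sharded cluster'
```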
diff --git a/templates/db/mysql_agent/template_db_mysql_agent.yaml b/templates/db/mysql_agent/template_db_mysql_agent.yaml
index b9ef4763ee1..9cbbd3c7d63 100644
--- a/templates/db/mysql_agent/template_db_mysql_agent.yaml
+++ b/templates/db/mysql_agent/template_db_mysql_agent.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:04:57Z'
+ date: '2021-04-22T11:28:39Z'
groups:
-
name: Templates/Databases
@@ -1515,104 +1515,106 @@ zabbix_export:
dashboards:
-
name: 'MySQL performance'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: Operations'
- host: 'MySQL by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: Queries'
- host: 'MySQL by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: Connections'
- host: 'MySQL by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: Operations'
+ host: 'MySQL by Zabbix agent'
-
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: Bandwidth'
- host: 'MySQL by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- 'y': '10'
- width: '12'
- height: '5'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: Queries'
+ host: 'MySQL by Zabbix agent'
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: Connections'
+ host: 'MySQL by Zabbix agent'
-
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: InnoDB buffer pool'
- host: 'MySQL by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '10'
- width: '12'
- height: '5'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: Bandwidth'
+ host: 'MySQL by Zabbix agent'
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: InnoDB buffer pool'
+ host: 'MySQL by Zabbix agent'
-
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: Threads'
- host: 'MySQL by Zabbix agent'
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: Threads'
+ host: 'MySQL by Zabbix agent'
valuemaps:
-
name: 'Service state'
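The dashboard hunk above (and the matching hunks in the agent 2, ODBC, and PostgreSQL templates that follow) restructures dashboards for multi-page support: the former top-level `widgets` list is wrapped inside a `pages` list, and each page carries its own `widgets`. The widget definitions themselves only gain one level of indentation. A condensed sketch of the resulting shape, trimmed to a single widget:

```yaml
dashboards:
  -
    name: 'MySQL performance'
    pages:
      -
        # first (and here only) page; further pages would be appended to this list
        widgets:
          -
            type: GRAPH_CLASSIC
            width: '12'
            height: '5'
            fields:
              -
                type: GRAPH
                name: graphid
                value:
                  name: 'MySQL: Operations'
                  host: 'MySQL by Zabbix agent'
```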
diff --git a/templates/db/mysql_agent2/template_db_mysql_agent2.yaml b/templates/db/mysql_agent2/template_db_mysql_agent2.yaml
index b33344ccef6..2590a3b96d9 100644
--- a/templates/db/mysql_agent2/template_db_mysql_agent2.yaml
+++ b/templates/db/mysql_agent2/template_db_mysql_agent2.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:04:56Z'
+ date: '2021-04-22T11:28:36Z'
groups:
-
name: Templates/Databases
@@ -1503,104 +1503,106 @@ zabbix_export:
dashboards:
-
name: 'MySQL performance'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: Operations'
- host: 'MySQL by Zabbix agent 2'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: Queries'
- host: 'MySQL by Zabbix agent 2'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: Connections'
- host: 'MySQL by Zabbix agent 2'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: Operations'
+ host: 'MySQL by Zabbix agent 2'
-
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: Bandwidth'
- host: 'MySQL by Zabbix agent 2'
- -
- type: GRAPH_CLASSIC
- 'y': '10'
- width: '12'
- height: '5'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: Queries'
+ host: 'MySQL by Zabbix agent 2'
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: Connections'
+ host: 'MySQL by Zabbix agent 2'
-
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: InnoDB buffer pool'
- host: 'MySQL by Zabbix agent 2'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '10'
- width: '12'
- height: '5'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: Bandwidth'
+ host: 'MySQL by Zabbix agent 2'
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: InnoDB buffer pool'
+ host: 'MySQL by Zabbix agent 2'
-
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: Threads'
- host: 'MySQL by Zabbix agent 2'
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: Threads'
+ host: 'MySQL by Zabbix agent 2'
valuemaps:
-
name: 'Service state'
diff --git a/templates/db/mysql_odbc/template_db_mysql_odbc.yaml b/templates/db/mysql_odbc/template_db_mysql_odbc.yaml
index 601bf59040a..864136ea258 100644
--- a/templates/db/mysql_odbc/template_db_mysql_odbc.yaml
+++ b/templates/db/mysql_odbc/template_db_mysql_odbc.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:04:54Z'
+ date: '2021-04-22T11:28:26Z'
groups:
-
name: Templates/Databases
@@ -1520,104 +1520,106 @@ zabbix_export:
dashboards:
-
name: 'MySQL performance'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: Operations'
- host: 'MySQL by ODBC'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: Queries'
- host: 'MySQL by ODBC'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: Connections'
- host: 'MySQL by ODBC'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: Operations'
+ host: 'MySQL by ODBC'
-
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: Bandwidth'
- host: 'MySQL by ODBC'
- -
- type: GRAPH_CLASSIC
- 'y': '10'
- width: '12'
- height: '5'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: Queries'
+ host: 'MySQL by ODBC'
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: Connections'
+ host: 'MySQL by ODBC'
-
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: InnoDB buffer pool'
- host: 'MySQL by ODBC'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '10'
- width: '12'
- height: '5'
- fields:
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: Bandwidth'
+ host: 'MySQL by ODBC'
-
- type: INTEGER
- name: source_type
- value: '0'
+ type: GRAPH_CLASSIC
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: InnoDB buffer pool'
+ host: 'MySQL by ODBC'
-
- type: GRAPH
- name: graphid
- value:
- name: 'MySQL: Threads'
- host: 'MySQL by ODBC'
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'MySQL: Threads'
+ host: 'MySQL by ODBC'
valuemaps:
-
name: 'Service state'
diff --git a/templates/db/postgresql/template_db_postgresql.yaml b/templates/db/postgresql/template_db_postgresql.yaml
index 4c23bf644ec..d75f00be354 100644
--- a/templates/db/postgresql/template_db_postgresql.yaml
+++ b/templates/db/postgresql/template_db_postgresql.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-29T14:21:23Z'
+ date: '2021-04-22T11:26:37Z'
groups:
-
name: Templates/Databases
@@ -1686,270 +1686,274 @@ zabbix_export:
dashboards:
-
name: 'PostgreSQL databases'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'DB {#DBNAME}: Tuples'
- host: PostgreSQL
- -
- type: GRAPH_PROTOTYPE
- x: '12'
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'DB {#DBNAME}: Tuples'
+ host: PostgreSQL
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'DB {#DBNAME}: Events'
- host: PostgreSQL
- -
- type: GRAPH_PROTOTYPE
- 'y': '12'
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ x: '12'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'DB {#DBNAME}: Events'
+ host: PostgreSQL
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'DB {#DBNAME}: Block hit/read'
- host: PostgreSQL
- -
- type: GRAPH_PROTOTYPE
- x: '12'
- 'y': '12'
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ 'y': '12'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'DB {#DBNAME}: Block hit/read'
+ host: PostgreSQL
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'DB {#DBNAME}: Temp files'
- host: PostgreSQL
- -
- type: GRAPH_PROTOTYPE
- 'y': '24'
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ x: '12'
+ 'y': '12'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'DB {#DBNAME}: Temp files'
+ host: PostgreSQL
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'DB {#DBNAME}: Locks'
- host: PostgreSQL
- -
- type: GRAPH_PROTOTYPE
- x: '12'
- 'y': '24'
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ 'y': '24'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'DB {#DBNAME}: Locks'
+ host: PostgreSQL
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'DB {#DBNAME}: Database size'
- host: PostgreSQL
- -
- type: GRAPH_PROTOTYPE
- 'y': '36'
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ x: '12'
+ 'y': '24'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'DB {#DBNAME}: Database size'
+ host: PostgreSQL
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'DB {#DBNAME}: Queries'
- host: PostgreSQL
- -
- type: GRAPH_PROTOTYPE
- x: '12'
- 'y': '36'
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ 'y': '36'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'DB {#DBNAME}: Queries'
+ host: PostgreSQL
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'DB {#DBNAME}: Slow queries'
- host: PostgreSQL
+ x: '12'
+ 'y': '36'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'DB {#DBNAME}: Slow queries'
+ host: PostgreSQL
-
name: 'PostgreSQL stat'
- widgets:
- -
- type: GRAPH_CLASSIC
- width: '12'
- height: '6'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'PostgreSQL connections'
- host: PostgreSQL
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '6'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'PostgreSQL transactions'
- host: PostgreSQL
- -
- type: GRAPH_CLASSIC
- 'y': '6'
- width: '12'
- height: '6'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'PostgreSQL ping'
- host: PostgreSQL
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '6'
- width: '12'
- height: '6'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'PostgreSQL uptime'
- host: PostgreSQL
- -
- type: GRAPH_CLASSIC
- 'y': '12'
- width: '12'
- height: '6'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'PostgreSQL replication lag'
- host: PostgreSQL
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '12'
- width: '12'
- height: '6'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'PostgreSQL WAL'
- host: PostgreSQL
- -
- type: GRAPH_CLASSIC
- 'y': '18'
- width: '12'
- height: '6'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'PostgreSQL bgwriter'
- host: PostgreSQL
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '18'
- width: '12'
- height: '6'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'PostgreSQL checkpoints'
- host: PostgreSQL
+ pages:
+ -
+ widgets:
+ -
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'PostgreSQL connections'
+ host: PostgreSQL
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'PostgreSQL transactions'
+ host: PostgreSQL
+ -
+ type: GRAPH_CLASSIC
+ 'y': '6'
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'PostgreSQL ping'
+ host: PostgreSQL
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '6'
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'PostgreSQL uptime'
+ host: PostgreSQL
+ -
+ type: GRAPH_CLASSIC
+ 'y': '12'
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'PostgreSQL replication lag'
+ host: PostgreSQL
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '12'
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'PostgreSQL WAL'
+ host: PostgreSQL
+ -
+ type: GRAPH_CLASSIC
+ 'y': '18'
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'PostgreSQL bgwriter'
+ host: PostgreSQL
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '18'
+ width: '12'
+ height: '6'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'PostgreSQL checkpoints'
+ host: PostgreSQL
valuemaps:
-
name: 'PostgreSQL recovery role'
@@ -2111,14 +2115,13 @@ zabbix_export:
ymin_type_1: FIXED
graph_items:
-
- sortorder: '1'
drawtype: GRADIENT_LINE
color: A5D6A7
item:
host: PostgreSQL
key: 'pgsql.ping["{$PG.HOST}","{$PG.PORT}","{$PG.USER}","{$PG.DB}"]'
-
- sortorder: '2'
+ sortorder: '1'
color: 039BE5
yaxisside: RIGHT
item:
diff --git a/templates/db/postgresql_agent2/template_db_postgresql_agent2.yaml b/templates/db/postgresql_agent2/template_db_postgresql_agent2.yaml
index 549c15be09c..648ecd6ea18 100644
--- a/templates/db/postgresql_agent2/template_db_postgresql_agent2.yaml
+++ b/templates/db/postgresql_agent2/template_db_postgresql_agent2.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-02T19:42:24Z'
+ date: '2021-04-22T11:28:35Z'
groups:
-
name: Templates/Databases
@@ -2078,87 +2078,89 @@ zabbix_export:
dashboards:
-
name: 'PostgreSQL databases'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'DB {#DBNAME}: pg_stat_database metrics'
- host: 'PostgreSQL Agent 2'
- -
- type: GRAPH_PROTOTYPE
- x: '12'
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'DB {#DBNAME}: pg_stat_database metrics'
+ host: 'PostgreSQL Agent 2'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'DB {#DBNAME}: Locks'
- host: 'PostgreSQL Agent 2'
- -
- type: GRAPH_PROTOTYPE
- 'y': '12'
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ x: '12'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'DB {#DBNAME}: Locks'
+ host: 'PostgreSQL Agent 2'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'DB {#DBNAME}: Size'
- host: 'PostgreSQL Agent 2'
- -
- type: GRAPH_PROTOTYPE
- x: '12'
- 'y': '12'
- width: '12'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ 'y': '12'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'DB {#DBNAME}: Size'
+ host: 'PostgreSQL Agent 2'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'DB {#DBNAME}: Number of bloating tables'
- host: 'PostgreSQL Agent 2'
+ x: '12'
+ 'y': '12'
+ width: '12'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'DB {#DBNAME}: Number of bloating tables'
+ host: 'PostgreSQL Agent 2'
valuemaps:
-
name: 'PostgreSQL recovery role'
diff --git a/templates/db/redis/README.md b/templates/db/redis/README.md
index 29a17b0240c..7b8f4ef001b 100644
--- a/templates/db/redis/README.md
+++ b/templates/db/redis/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
The template to monitor Redis server by Zabbix that works without any external scripts.
Most of the metrics are collected in one go, thanks to Zabbix bulk data collection.
@@ -17,7 +17,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent2) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent2) for basic instructions.
Set up and configure zabbix-agent2 compiled with the Redis monitoring plugin (ZBXNEXT-5428-4.3).
@@ -30,18 +30,18 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$REDIS.CLIENTS.PRC.MAX.WARN} |<p>Maximum percentage of connected clients</p> |`80` |
-|{$REDIS.CONN.URI} |<p>Connection string in the URI format (password is not used). This param overwrites a value configured in the "Server" option of the configuration file (if it's set), otherwise, the plugin's default value is used: "tcp://localhost:6379"</p> |`tcp://localhost:6379` |
-|{$REDIS.LLD.FILTER.DB.MATCHES} |<p>Filter of discoverable databases</p> |`.*` |
-|{$REDIS.LLD.FILTER.DB.NOT_MATCHES} |<p>Filter to exclude discovered databases</p> |`CHANGE_IF_NEEDED` |
-|{$REDIS.LLD.PROCESS_NAME} |<p>Redis server process name for LLD</p> |`redis-server` |
-|{$REDIS.MEM.FRAG_RATIO.MAX.WARN} |<p>Maximum memory fragmentation ratio</p> |`1.5` |
-|{$REDIS.MEM.PUSED.MAX.WARN} |<p>Maximum percentage of memory used</p> |`90` |
-|{$REDIS.PROCESS_NAME} |<p>Redis server process name</p> |`redis-server` |
-|{$REDIS.REPL.LAG.MAX.WARN} |<p>Maximum replication lag in seconds</p> |`30s` |
-|{$REDIS.SLOWLOG.COUNT.MAX.WARN} |<p>Maximum number of slowlog entries per second</p> |`1` |
+| Name | Description | Default |
+|------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------|
+| {$REDIS.CLIENTS.PRC.MAX.WARN} | <p>Maximum percentage of connected clients</p> | `80` |
+| {$REDIS.CONN.URI} | <p>Connection string in the URI format (password is not used). This param overwrites a value configured in the "Server" option of the configuration file (if it's set), otherwise, the plugin's default value is used: "tcp://localhost:6379"</p> | `tcp://localhost:6379` |
+| {$REDIS.LLD.FILTER.DB.MATCHES} | <p>Filter of discoverable databases</p> | `.*` |
+| {$REDIS.LLD.FILTER.DB.NOT_MATCHES} | <p>Filter to exclude discovered databases</p> | `CHANGE_IF_NEEDED` |
+| {$REDIS.LLD.PROCESS_NAME} | <p>Redis server process name for LLD</p> | `redis-server` |
+| {$REDIS.MEM.FRAG_RATIO.MAX.WARN} | <p>Maximum memory fragmentation ratio</p> | `1.5` |
+| {$REDIS.MEM.PUSED.MAX.WARN} | <p>Maximum percentage of memory used</p> | `90` |
+| {$REDIS.PROCESS_NAME} | <p>Redis server process name</p> | `redis-server` |
+| {$REDIS.REPL.LAG.MAX.WARN} | <p>Maximum replication lag in seconds</p> | `30s` |
+| {$REDIS.SLOWLOG.COUNT.MAX.WARN} | <p>Maximum number of slowlog entries per second</p> | `1` |
## Template links
@@ -49,166 +49,166 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Keyspace discovery |<p>Individual keyspace metrics</p> |DEPENDENT |redis.keyspace.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>**Filter**:</p>AND <p>- A: {#DB} MATCHES_REGEX `{$REDIS.LLD.FILTER.DB.MATCHES}`</p><p>- B: {#DB} NOT_MATCHES_REGEX `{$REDIS.LLD.FILTER.DB.NOT_MATCHES}`</p> |
-|AOF metrics discovery |<p>If AOF is activated, additional metrics will be added</p> |DEPENDENT |redis.persistence.aof.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Slave metrics discovery |<p>If the instance is a replica, additional metrics are provided</p> |DEPENDENT |redis.replication.slave.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Replication metrics discovery |<p>If the instance is the master and the slaves are connected, additional metrics are provided</p> |DEPENDENT |redis.replication.master.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Process metrics discovery |<p>Collect metrics by Zabbix agent if it exists</p> |ZABBIX_PASSIVE |proc.num["{$REDIS.LLD.PROCESS_NAME}"]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return JSON.stringify(value > 0 ? [{'{#SINGLETON}': ''}] : []);`</p> |
-|Version 4+ metrics discovery |<p>Additional metrics for versions 4+</p> |DEPENDENT |redis.metrics.v4.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.redis_version`</p><p>- JAVASCRIPT: `return JSON.stringify(parseInt(value.split('.')[0]) >= 4 ? [{'{#SINGLETON}': ''}] : []);`</p> |
-|Version 5+ metrics discovery |<p>Additional metrics for versions 5+</p> |DEPENDENT |redis.metrics.v5.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.redis_version`</p><p>- JAVASCRIPT: `return JSON.stringify(parseInt(value.split('.')[0]) >= 5 ? [{'{#SINGLETON}': ''}] : []);`</p> |
+| Name | Description | Type | Key and additional info |
+|-------------------------------|----------------------------------------------------------------------------------------------------|----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Keyspace discovery | <p>Individual keyspace metrics</p> | DEPENDENT | redis.keyspace.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>**Filter**:</p>AND <p>- A: {#DB} MATCHES_REGEX `{$REDIS.LLD.FILTER.DB.MATCHES}`</p><p>- B: {#DB} NOT_MATCHES_REGEX `{$REDIS.LLD.FILTER.DB.NOT_MATCHES}`</p> |
+| AOF metrics discovery | <p>If AOF is activated, additional metrics will be added</p> | DEPENDENT | redis.persistence.aof.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Slave metrics discovery | <p>If the instance is a replica, additional metrics are provided</p> | DEPENDENT | redis.replication.slave.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Replication metrics discovery | <p>If the instance is the master and the slaves are connected, additional metrics are provided</p> | DEPENDENT | redis.replication.master.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Process metrics discovery | <p>Collect metrics by Zabbix agent if it exists</p> | ZABBIX_PASSIVE | proc.num["{$REDIS.LLD.PROCESS_NAME}"]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return JSON.stringify(value > 0 ? [{'{#SINGLETON}': ''}] : []);`</p> |
+| Version 4+ metrics discovery | <p>Additional metrics for versions 4+</p> | DEPENDENT | redis.metrics.v4.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.redis_version`</p><p>- JAVASCRIPT: `return JSON.stringify(parseInt(value.split('.')[0]) >= 4 ? [{'{#SINGLETON}': ''}] : []);`</p> |
+| Version 5+ metrics discovery | <p>Additional metrics for versions 5+</p> | DEPENDENT | redis.metrics.v5.discovery<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.redis_version`</p><p>- JAVASCRIPT: `return JSON.stringify(parseInt(value.split('.')[0]) >= 5 ? [{'{#SINGLETON}': ''}] : []);`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Redis |Redis: Ping | |ZABBIX_PASSIVE |redis.ping["{$REDIS.CONN.URI}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
-|Redis |Redis: Slowlog entries per second | |ZABBIX_PASSIVE |redis.slowlog.count["{$REDIS.CONN.URI}"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Redis |Redis: CPU sys |<p>System CPU consumed by the Redis server</p> |DEPENDENT |redis.cpu.sys<p>**Preprocessing**:</p><p>- JSONPATH: `$.CPU.used_cpu_sys`</p> |
-|Redis |Redis: CPU sys children |<p>System CPU consumed by the background processes</p> |DEPENDENT |redis.cpu.sys_children<p>**Preprocessing**:</p><p>- JSONPATH: `$.CPU.used_cpu_sys_children`</p> |
-|Redis |Redis: CPU user |<p>User CPU consumed by the Redis server</p> |DEPENDENT |redis.cpu.user<p>**Preprocessing**:</p><p>- JSONPATH: `$.CPU.used_cpu_user`</p> |
-|Redis |Redis: CPU user children |<p>User CPU consumed by the background processes</p> |DEPENDENT |redis.cpu.user_children<p>**Preprocessing**:</p><p>- JSONPATH: `$.CPU.used_cpu_user_children`</p> |
-|Redis |Redis: Blocked clients |<p>The number of connections waiting on a blocking call</p> |DEPENDENT |redis.clients.blocked<p>**Preprocessing**:</p><p>- JSONPATH: `$.Clients.blocked_clients`</p> |
-|Redis |Redis: Max input buffer |<p>The biggest input buffer among current client connections</p> |DEPENDENT |redis.clients.max_input_buffer<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Redis |Redis: Max output buffer |<p>The biggest output buffer among current client connections</p> |DEPENDENT |redis.clients.max_output_buffer<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Redis |Redis: Connected clients |<p>The number of connected clients</p> |DEPENDENT |redis.clients.connected<p>**Preprocessing**:</p><p>- JSONPATH: `$.Clients.connected_clients`</p> |
-|Redis |Redis: Cluster enabled |<p>Indicate Redis cluster is enabled</p> |DEPENDENT |redis.cluster.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.Cluster.cluster_enabled`</p> |
-|Redis |Redis: Memory used |<p>Total number of bytes allocated by Redis using its allocator</p> |DEPENDENT |redis.memory.used_memory<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory`</p> |
-|Redis |Redis: Memory used Lua |<p>Amount of memory used by the Lua engine</p> |DEPENDENT |redis.memory.used_memory_lua<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_lua`</p> |
-|Redis |Redis: Memory used peak |<p>Peak memory consumed by Redis (in bytes)</p> |DEPENDENT |redis.memory.used_memory_peak<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_peak`</p> |
-|Redis |Redis: Memory used RSS |<p>Number of bytes that Redis allocated as seen by the operating system</p> |DEPENDENT |redis.memory.used_memory_rss<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_rss`</p> |
-|Redis |Redis: Memory fragmentation ratio |<p>This ratio is an indication of memory mapping efficiency:</p><p> — Value over 1.0 indicate that memory fragmentation is very likely. Consider restarting the Redis server so the operating system can recover fragmented memory, especially with a ratio over 1.5.</p><p> — Value under 1.0 indicate that Redis likely has insufficient memory available. Consider optimizing memory usage or adding more RAM.</p><p>Note: If your peak memory usage is much higher than your current memory usage, the memory fragmentation ratio may be unreliable.</p><p>https://redis.io/topics/memory-optimization</p> |DEPENDENT |redis.memory.fragmentation_ratio<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.mem_fragmentation_ratio`</p> |
-|Redis |Redis: AOF current rewrite time sec |<p>Duration of the on-going AOF rewrite operation if any</p> |DEPENDENT |redis.persistence.aof_current_rewrite_time_sec<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_current_rewrite_time_sec`</p> |
-|Redis |Redis: AOF enabled |<p>Flag indicating AOF logging is activated</p> |DEPENDENT |redis.persistence.aof_enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_enabled`</p> |
-|Redis |Redis: AOF last bgrewrite status |<p>Status of the last AOF rewrite operation</p> |DEPENDENT |redis.persistence.aof_last_bgrewrite_status<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_last_bgrewrite_status`</p><p>- BOOL_TO_DECIMAL |
-|Redis |Redis: AOF last rewrite time sec |<p>Duration of the last AOF rewrite</p> |DEPENDENT |redis.persistence.aof_last_rewrite_time_sec<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_last_rewrite_time_sec`</p> |
-|Redis |Redis: AOF last write status |<p>Status of the last write operation to the AOF</p> |DEPENDENT |redis.persistence.aof_last_write_status<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_last_write_status`</p><p>- BOOL_TO_DECIMAL |
-|Redis |Redis: AOF rewrite in progress |<p>Flag indicating a AOF rewrite operation is on-going</p> |DEPENDENT |redis.persistence.aof_rewrite_in_progress<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_rewrite_in_progress`</p> |
-|Redis |Redis: AOF rewrite scheduled |<p>Flag indicating an AOF rewrite operation will be scheduled once the on-going RDB save is complete</p> |DEPENDENT |redis.persistence.aof_rewrite_scheduled<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_rewrite_scheduled`</p> |
-|Redis |Redis: Dump loading |<p>Flag indicating if the load of a dump file is on-going</p> |DEPENDENT |redis.persistence.loading<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.loading`</p> |
-|Redis |Redis: RDB bgsave in progress |<p>"1" if bgsave is in progress and "0" otherwise</p> |DEPENDENT |redis.persistence.rdb_bgsave_in_progress<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.rdb_bgsave_in_progress`</p> |
-|Redis |Redis: RDB changes since last save |<p>Number of changes since the last background save</p> |DEPENDENT |redis.persistence.rdb_changes_since_last_save<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.rdb_changes_since_last_save`</p> |
-|Redis |Redis: RDB current bgsave time sec |<p>Duration of the on-going RDB save operation if any</p> |DEPENDENT |redis.persistence.rdb_current_bgsave_time_sec<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.rdb_current_bgsave_time_sec`</p> |
-|Redis |Redis: RDB last bgsave status |<p>Status of the last RDB save operation</p> |DEPENDENT |redis.persistence.rdb_last_bgsave_status<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.rdb_last_bgsave_status`</p><p>- BOOL_TO_DECIMAL |
-|Redis |Redis: RDB last bgsave time sec |<p>Duration of the last bg_save operation</p> |DEPENDENT |redis.persistence.rdb_last_bgsave_time_sec<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.rdb_last_bgsave_time_sec`</p> |
-|Redis |Redis: RDB last save time |<p>Epoch-based timestamp of last successful RDB save</p> |DEPENDENT |redis.persistence.rdb_last_save_time<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.rdb_last_save_time`</p> |
-|Redis |Redis: Connected slaves |<p>Number of connected slaves</p> |DEPENDENT |redis.replication.connected_slaves<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.connected_slaves`</p> |
-|Redis |Redis: Replication backlog active |<p>Flag indicating replication backlog is active</p> |DEPENDENT |redis.replication.repl_backlog_active<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.repl_backlog_active`</p> |
-|Redis |Redis: Replication backlog first byte offset |<p>The master offset of the replication backlog buffer</p> |DEPENDENT |redis.replication.repl_backlog_first_byte_offset<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.repl_backlog_first_byte_offset`</p> |
-|Redis |Redis: Replication backlog history length |<p>Amount of data in the backlog sync buffer</p> |DEPENDENT |redis.replication.repl_backlog_histlen<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.repl_backlog_histlen`</p> |
-|Redis |Redis: Replication backlog size |<p>Total size in bytes of the replication backlog buffer</p> |DEPENDENT |redis.replication.repl_backlog_size<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.repl_backlog_size`</p> |
-|Redis |Redis: Replication role |<p>Value is "master" if the instance is replica of no one, or "slave" if the instance is a replica of some master instance. Note that a replica can be master of another replica (chained replication).</p> |DEPENDENT |redis.replication.role<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.role`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Redis |Redis: Master replication offset |<p>Replication offset reported by the master</p> |DEPENDENT |redis.replication.master_repl_offset<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.master_repl_offset`</p> |
-|Redis |Redis: Process id |<p>PID of the server process</p> |DEPENDENT |redis.server.process_id<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.process_id`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Redis |Redis: Redis mode |<p>The server's mode ("standalone", "sentinel" or "cluster")</p> |DEPENDENT |redis.server.redis_mode<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.redis_mode`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Redis |Redis: Redis version |<p>Version of the Redis server</p> |DEPENDENT |redis.server.redis_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.redis_version`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Redis |Redis: TCP port |<p>TCP/IP listen port</p> |DEPENDENT |redis.server.tcp_port<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.tcp_port`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Redis |Redis: Uptime |<p>Number of seconds since Redis server start</p> |DEPENDENT |redis.server.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.uptime_in_seconds`</p> |
-|Redis |Redis: Evicted keys |<p>Number of evicted keys due to maxmemory limit</p> |DEPENDENT |redis.stats.evicted_keys<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.evicted_keys`</p> |
-|Redis |Redis: Expired keys |<p>Total number of key expiration events</p> |DEPENDENT |redis.stats.expired_keys<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.expired_keys`</p> |
-|Redis |Redis: Instantaneous input bytes per second |<p>The network's read rate per second in KB/sec</p> |DEPENDENT |redis.stats.instantaneous_input.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.instantaneous_input_kbps`</p><p>- MULTIPLIER: `1024`</p> |
-|Redis |Redis: Instantaneous operations per sec |<p>Number of commands processed per second</p> |DEPENDENT |redis.stats.instantaneous_ops.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.instantaneous_ops_per_sec`</p> |
-|Redis |Redis: Instantaneous output bytes per second |<p>The network's write rate per second in KB/sec</p> |DEPENDENT |redis.stats.instantaneous_output.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.instantaneous_output_kbps`</p><p>- MULTIPLIER: `1024`</p> |
-|Redis |Redis: Keyspace hits |<p>Number of successful lookup of keys in the main dictionary</p> |DEPENDENT |redis.stats.keyspace_hits<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.keyspace_hits`</p> |
-|Redis |Redis: Keyspace misses |<p>Number of failed lookup of keys in the main dictionary</p> |DEPENDENT |redis.stats.keyspace_misses<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.keyspace_misses`</p> |
-|Redis |Redis: Latest fork usec |<p>Duration of the latest fork operation in microseconds</p> |DEPENDENT |redis.stats.latest_fork_usec<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.latest_fork_usec`</p><p>- MULTIPLIER: `1.0E-5`</p> |
-|Redis |Redis: Migrate cached sockets |<p>The number of sockets open for MIGRATE purposes</p> |DEPENDENT |redis.stats.migrate_cached_sockets<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.migrate_cached_sockets`</p> |
-|Redis |Redis: Pubsub channels |<p>Global number of pub/sub channels with client subscriptions</p> |DEPENDENT |redis.stats.pubsub_channels<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.pubsub_channels`</p> |
-|Redis |Redis: Pubsub patterns |<p>Global number of pub/sub pattern with client subscriptions</p> |DEPENDENT |redis.stats.pubsub_patterns<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.pubsub_patterns`</p> |
-|Redis |Redis: Rejected connections |<p>Number of connections rejected because of maxclients limit</p> |DEPENDENT |redis.stats.rejected_connections<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.rejected_connections`</p> |
-|Redis |Redis: Sync full |<p>The number of full resyncs with replicas</p> |DEPENDENT |redis.stats.sync_full<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.sync_full`</p> |
-|Redis |Redis: Sync partial err |<p>The number of denied partial resync requests</p> |DEPENDENT |redis.stats.sync_partial_err<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.sync_partial_err`</p> |
-|Redis |Redis: Sync partial ok |<p>The number of accepted partial resync requests</p> |DEPENDENT |redis.stats.sync_partial_ok<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.sync_partial_ok`</p> |
-|Redis |Redis: Total commands processed |<p>Total number of commands processed by the server</p> |DEPENDENT |redis.stats.total_commands_processed<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.total_commands_processed`</p> |
-|Redis |Redis: Total connections received |<p>Total number of connections accepted by the server</p> |DEPENDENT |redis.stats.total_connections_received<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.total_connections_received`</p> |
-|Redis |Redis: Total net input bytes |<p>The total number of bytes read from the network</p> |DEPENDENT |redis.stats.total_net_input_bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.total_net_input_bytes`</p> |
-|Redis |Redis: Total net output bytes |<p>The total number of bytes written to the network</p> |DEPENDENT |redis.stats.total_net_output_bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.total_net_output_bytes`</p> |
-|Redis |Redis: Max clients |<p>Max number of connected clients at the same time.</p><p>Once the limit is reached Redis will close all the new connections sending an error "max number of clients reached".</p> |DEPENDENT |redis.config.maxclients<p>**Preprocessing**:</p><p>- JSONPATH: `$.maxclients`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `30m`</p> |
-|Redis |DB {#DB}: Average TTL |<p>Average TTL</p> |DEPENDENT |redis.db.avg_ttl["{#DB}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Keyspace["{#DB}"].avg_ttl`</p><p>- MULTIPLIER: `0.001`</p> |
-|Redis |DB {#DB}: Expires |<p>Number of keys with an expiration</p> |DEPENDENT |redis.db.expires["{#DB}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Keyspace["{#DB}"].expires`</p> |
-|Redis |DB {#DB}: Keys |<p>Total number of keys</p> |DEPENDENT |redis.db.keys["{#DB}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Keyspace["{#DB}"].keys`</p> |
-|Redis |Redis: AOF current size{#SINGLETON} |<p>AOF current file size</p> |DEPENDENT |redis.persistence.aof_current_size[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_current_size`</p> |
-|Redis |Redis: AOF base size{#SINGLETON} |<p>AOF file size on latest startup or rewrite</p> |DEPENDENT |redis.persistence.aof_base_size[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_base_size`</p> |
-|Redis |Redis: AOF pending rewrite{#SINGLETON} |<p>Flag indicating an AOF rewrite operation will</p> |DEPENDENT |redis.persistence.aof_pending_rewrite[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_pending_rewrite`</p> |
-|Redis |Redis: AOF buffer length{#SINGLETON} |<p>Size of the AOF buffer</p> |DEPENDENT |redis.persistence.aof_buffer_length[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_buffer_length`</p> |
-|Redis |Redis: AOF rewrite buffer length{#SINGLETON} |<p>Size of the AOF rewrite buffer</p> |DEPENDENT |redis.persistence.aof_rewrite_buffer_length[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_rewrite_buffer_length`</p> |
-|Redis |Redis: AOF pending background I/O fsync{#SINGLETON} |<p>Number of fsync pending jobs in background I/O queue</p> |DEPENDENT |redis.persistence.aof_pending_bio_fsync[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_pending_bio_fsync`</p> |
-|Redis |Redis: AOF delayed fsync{#SINGLETON} |<p>Delayed fsync counter</p> |DEPENDENT |redis.persistence.aof_delayed_fsync[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_delayed_fsync`</p> |
-|Redis |Redis: Master host{#SINGLETON} |<p>Host or IP address of the master</p> |DEPENDENT |redis.replication.master_host[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.master_host`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Redis |Redis: Master port{#SINGLETON} |<p>Master listening TCP port</p> |DEPENDENT |redis.replication.master_port[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.master_port`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Redis |Redis: Master link status{#SINGLETON} |<p>Status of the link (up/down)</p> |DEPENDENT |redis.replication.master_link_status[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.master_link_status`</p><p>- BOOL_TO_DECIMAL |
-|Redis |Redis: Master last I/O seconds ago{#SINGLETON} |<p>Number of seconds since the last interaction with master</p> |DEPENDENT |redis.replication.master_last_io_seconds_ago[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.master_last_io_seconds_ago`</p> |
-|Redis |Redis: Master sync in progress{#SINGLETON} |<p>Indicate the master is syncing to the replica</p> |DEPENDENT |redis.replication.master_sync_in_progress[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.master_sync_in_progress`</p> |
-|Redis |Redis: Slave replication offset{#SINGLETON} |<p>The replication offset of the replica instance</p> |DEPENDENT |redis.replication.slave_repl_offset[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.slave_repl_offset`</p> |
-|Redis |Redis: Slave priority{#SINGLETON} |<p>The priority of the instance as a candidate for failover</p> |DEPENDENT |redis.replication.slave_priority[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.slave_priority`</p> |
-|Redis |Redis: Slave priority{#SINGLETON} |<p>Flag indicating if the replica is read-only</p> |DEPENDENT |redis.replication.slave_read_only[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.slave_read_only`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Redis |Redis slave {#SLAVE_IP}:{#SLAVE_PORT}: Replication lag in bytes |<p>Replication lag in bytes</p> |DEPENDENT |redis.replication.lag_bytes["{#SLAVE_IP}:{#SLAVE_PORT}"]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Redis |Redis: Number of processes running |<p>-</p> |ZABBIX_PASSIVE |proc.num["{$REDIS.PROCESS_NAME}{#SINGLETON}"] |
-|Redis |Redis: Memory usage (rss) |<p>Resident set size memory used by process in bytes.</p> |ZABBIX_PASSIVE |proc.mem["{$REDIS.PROCESS_NAME}{#SINGLETON}",,,,rss] |
-|Redis |Redis: Memory usage (vsize) |<p>Virtual memory size used by process in bytes.</p> |ZABBIX_PASSIVE |proc.mem["{$REDIS.PROCESS_NAME}{#SINGLETON}",,,,vsize] |
-|Redis |Redis: CPU utilization |<p>Process CPU utilization percentage.</p> |ZABBIX_PASSIVE |proc.cpu.util["{$REDIS.PROCESS_NAME}{#SINGLETON}"] |
-|Redis |Redis: Executable path{#SINGLETON} |<p>The path to the server's executable</p> |DEPENDENT |redis.server.executable[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.executable`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Redis |Redis: Memory used peak %{#SINGLETON} |<p>The percentage of used_memory_peak out of used_memory</p> |DEPENDENT |redis.memory.used_memory_peak_perc[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_peak_perc`</p><p>- REGEX: `(.+)% \1`</p> |
-|Redis |Redis: Memory used overhead{#SINGLETON} |<p>The sum in bytes of all overheads that the server allocated for managing its internal data structures</p> |DEPENDENT |redis.memory.used_memory_overhead[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_overhead`</p> |
-|Redis |Redis: Memory used startup{#SINGLETON} |<p>Initial amount of memory consumed by Redis at startup in bytes</p> |DEPENDENT |redis.memory.used_memory_startup[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_startup`</p> |
-|Redis |Redis: Memory used dataset{#SINGLETON} |<p>The size in bytes of the dataset</p> |DEPENDENT |redis.memory.used_memory_dataset[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_dataset`</p> |
-|Redis |Redis: Memory used dataset %{#SINGLETON} |<p>The percentage of used_memory_dataset out of the net memory usage (used_memory minus used_memory_startup)</p> |DEPENDENT |redis.memory.used_memory_dataset_perc[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_dataset_perc`</p><p>- REGEX: `(.+)% \1`</p> |
-|Redis |Redis: Total system memory{#SINGLETON} |<p>The total amount of memory that the Redis host has</p> |DEPENDENT |redis.memory.total_system_memory[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.total_system_memory`</p> |
-|Redis |Redis: Max memory{#SINGLETON} |<p>Maximum amount of memory allocated to the Redisdb system</p> |DEPENDENT |redis.memory.maxmemory[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.maxmemory`</p> |
-|Redis |Redis: Max memory policy{#SINGLETON} |<p>The value of the maxmemory-policy configuration directive</p> |DEPENDENT |redis.memory.maxmemory_policy[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.maxmemory_policy`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Redis |Redis: Active defrag running{#SINGLETON} |<p>Flag indicating if active defragmentation is active</p> |DEPENDENT |redis.memory.active_defrag_running[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.active_defrag_running`</p> |
-|Redis |Redis: Lazyfree pending objects{#SINGLETON} |<p>The number of objects waiting to be freed (as a result of calling UNLINK, or FLUSHDB and FLUSHALL with the ASYNC option)</p> |DEPENDENT |redis.memory.lazyfree_pending_objects[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.lazyfree_pending_objects`</p> |
-|Redis |Redis: RDB last CoW size{#SINGLETON} |<p>The size in bytes of copy-on-write allocations during the last RDB save operation</p> |DEPENDENT |redis.persistence.rdb_last_cow_size[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.rdb_last_cow_size`</p> |
-|Redis |Redis: AOF last CoW size{#SINGLETON} |<p>The size in bytes of copy-on-write allocations during the last AOF rewrite operation</p> |DEPENDENT |redis.persistence.aof_last_cow_size[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_last_cow_size`</p> |
-|Redis |Redis: Expired stale %{#SINGLETON} | |DEPENDENT |redis.stats.expired_stale_perc[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.expired_stale_perc`</p> |
-|Redis |Redis: Expired time cap reached count{#SINGLETON} | |DEPENDENT |redis.stats.expired_time_cap_reached_count[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.expired_time_cap_reached_count`</p> |
-|Redis |Redis: Slave expires tracked keys{#SINGLETON} |<p>The number of keys tracked for expiry purposes (applicable only to writable replicas)</p> |DEPENDENT |redis.stats.slave_expires_tracked_keys[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.slave_expires_tracked_keys`</p> |
-|Redis |Redis: Active defrag hits{#SINGLETON} |<p>Number of value reallocations performed by active the defragmentation process</p> |DEPENDENT |redis.stats.active_defrag_hits[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.active_defrag_hits`</p> |
-|Redis |Redis: Active defrag misses{#SINGLETON} |<p>Number of aborted value reallocations started by the active defragmentation process</p> |DEPENDENT |redis.stats.active_defrag_misses[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.active_defrag_misses`</p> |
-|Redis |Redis: Active defrag key hits{#SINGLETON} |<p>Number of keys that were actively defragmented</p> |DEPENDENT |redis.stats.active_defrag_key_hits[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.active_defrag_key_hits`</p> |
-|Redis |Redis: Active defrag key misses{#SINGLETON} |<p>Number of keys that were skipped by the active defragmentation process</p> |DEPENDENT |redis.stats.active_defrag_key_misses[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.active_defrag_key_misses`</p> |
-|Redis |Redis: Replication second offset{#SINGLETON} |<p>Offset up to which replication IDs are accepted</p> |DEPENDENT |redis.replication.second_repl_offset[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.second_repl_offset`</p> |
-|Redis |Redis: Allocator active{#SINGLETON} | |DEPENDENT |redis.memory.allocator_active[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.allocator_active`</p> |
-|Redis |Redis: Allocator allocated{#SINGLETON} | |DEPENDENT |redis.memory.allocator_allocated[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.allocator_allocated`</p> |
-|Redis |Redis: Allocator resident{#SINGLETON} | |DEPENDENT |redis.memory.allocator_resident[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.allocator_resident`</p> |
-|Redis |Redis: Memory used scripts{#SINGLETON} | |DEPENDENT |redis.memory.used_memory_scripts[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_scripts`</p> |
-|Redis |Redis: Memory number of cached scripts{#SINGLETON} | |DEPENDENT |redis.memory.number_of_cached_scripts[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.number_of_cached_scripts`</p> |
-|Redis |Redis: Allocator fragmentation bytes{#SINGLETON} | |DEPENDENT |redis.memory.allocator_frag_bytes[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.allocator_frag_bytes`</p> |
-|Redis |Redis: Allocator fragmentation ratio{#SINGLETON} | |DEPENDENT |redis.memory.allocator_frag_ratio[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.allocator_frag_ratio`</p> |
-|Redis |Redis: Allocator RSS bytes{#SINGLETON} | |DEPENDENT |redis.memory.allocator_rss_bytes[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.allocator_rss_bytes`</p> |
-|Redis |Redis: Allocator RSS ratio{#SINGLETON} | |DEPENDENT |redis.memory.allocator_rss_ratio[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.allocator_rss_ratio`</p> |
-|Redis |Redis: Memory RSS overhead bytes{#SINGLETON} | |DEPENDENT |redis.memory.rss_overhead_bytes[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.rss_overhead_bytes`</p> |
-|Redis |Redis: Memory RSS overhead ratio{#SINGLETON} | |DEPENDENT |redis.memory.rss_overhead_ratio[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.rss_overhead_ratio`</p> |
-|Redis |Redis: Memory fragmentation bytes{#SINGLETON} | |DEPENDENT |redis.memory.fragmentation_bytes[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.mem_fragmentation_bytes`</p> |
-|Redis |Redis: Memory not counted for evict{#SINGLETON} | |DEPENDENT |redis.memory.not_counted_for_evict[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.mem_not_counted_for_evict`</p> |
-|Redis |Redis: Memory replication backlog{#SINGLETON} | |DEPENDENT |redis.memory.replication_backlog[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.mem_replication_backlog`</p> |
-|Redis |Redis: Memory clients normal{#SINGLETON} | |DEPENDENT |redis.memory.mem_clients_normal[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.mem_clients_normal`</p> |
-|Redis |Redis: Memory clients slaves{#SINGLETON} | |DEPENDENT |redis.memory.mem_clients_slaves[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.mem_clients_slaves`</p> |
-|Redis |Redis: Memory AOF buffer{#SINGLETON} |<p>Size of the AOF buffer</p> |DEPENDENT |redis.memory.mem_aof_buffer[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.mem_aof_buffer`</p> |
-|Zabbix_raw_items |Redis: Get info | |ZABBIX_PASSIVE |redis.info["{$REDIS.CONN.URI}"] |
-|Zabbix_raw_items |Redis: Get config | |ZABBIX_PASSIVE |redis.config["{$REDIS.CONN.URI}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|-----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Redis | Redis: Ping | | ZABBIX_PASSIVE | redis.ping["{$REDIS.CONN.URI}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `10m`</p> |
+| Redis            | Redis: Slowlog entries per second                                 |  | ZABBIX_PASSIVE | redis.slowlog.count["{$REDIS.CONN.URI}"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND</p> |
+| Redis | Redis: CPU sys | <p>System CPU consumed by the Redis server</p> | DEPENDENT | redis.cpu.sys<p>**Preprocessing**:</p><p>- JSONPATH: `$.CPU.used_cpu_sys`</p> |
+| Redis | Redis: CPU sys children | <p>System CPU consumed by the background processes</p> | DEPENDENT | redis.cpu.sys_children<p>**Preprocessing**:</p><p>- JSONPATH: `$.CPU.used_cpu_sys_children`</p> |
+| Redis | Redis: CPU user | <p>User CPU consumed by the Redis server</p> | DEPENDENT | redis.cpu.user<p>**Preprocessing**:</p><p>- JSONPATH: `$.CPU.used_cpu_user`</p> |
+| Redis | Redis: CPU user children | <p>User CPU consumed by the background processes</p> | DEPENDENT | redis.cpu.user_children<p>**Preprocessing**:</p><p>- JSONPATH: `$.CPU.used_cpu_user_children`</p> |
+| Redis | Redis: Blocked clients | <p>The number of connections waiting on a blocking call</p> | DEPENDENT | redis.clients.blocked<p>**Preprocessing**:</p><p>- JSONPATH: `$.Clients.blocked_clients`</p> |
+| Redis | Redis: Max input buffer | <p>The biggest input buffer among current client connections</p> | DEPENDENT | redis.clients.max_input_buffer<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Redis | Redis: Max output buffer | <p>The biggest output buffer among current client connections</p> | DEPENDENT | redis.clients.max_output_buffer<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Redis | Redis: Connected clients | <p>The number of connected clients</p> | DEPENDENT | redis.clients.connected<p>**Preprocessing**:</p><p>- JSONPATH: `$.Clients.connected_clients`</p> |
+| Redis            | Redis: Cluster enabled                                            | <p>Indicates whether Redis cluster is enabled</p> | DEPENDENT      | redis.cluster.enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.Cluster.cluster_enabled`</p> |
+| Redis | Redis: Memory used | <p>Total number of bytes allocated by Redis using its allocator</p> | DEPENDENT | redis.memory.used_memory<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory`</p> |
+| Redis | Redis: Memory used Lua | <p>Amount of memory used by the Lua engine</p> | DEPENDENT | redis.memory.used_memory_lua<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_lua`</p> |
+| Redis | Redis: Memory used peak | <p>Peak memory consumed by Redis (in bytes)</p> | DEPENDENT | redis.memory.used_memory_peak<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_peak`</p> |
+| Redis | Redis: Memory used RSS | <p>Number of bytes that Redis allocated as seen by the operating system</p> | DEPENDENT | redis.memory.used_memory_rss<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_rss`</p> |
+| Redis            | Redis: Memory fragmentation ratio                                 | <p>This ratio is an indication of memory mapping efficiency:</p><p> — A value over 1.0 indicates that memory fragmentation is very likely. Consider restarting the Redis server so the operating system can recover fragmented memory, especially with a ratio over 1.5.</p><p> — A value under 1.0 indicates that Redis likely has insufficient memory available. Consider optimizing memory usage or adding more RAM.</p><p>Note: If your peak memory usage is much higher than your current memory usage, the memory fragmentation ratio may be unreliable.</p><p>https://redis.io/topics/memory-optimization</p> | DEPENDENT      | redis.memory.fragmentation_ratio<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.mem_fragmentation_ratio`</p> |
+| Redis | Redis: AOF current rewrite time sec | <p>Duration of the on-going AOF rewrite operation if any</p> | DEPENDENT | redis.persistence.aof_current_rewrite_time_sec<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_current_rewrite_time_sec`</p> |
+| Redis | Redis: AOF enabled | <p>Flag indicating AOF logging is activated</p> | DEPENDENT | redis.persistence.aof_enabled<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_enabled`</p> |
+| Redis            | Redis: AOF last bgrewrite status                                  | <p>Status of the last AOF rewrite operation</p> | DEPENDENT      | redis.persistence.aof_last_bgrewrite_status<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_last_bgrewrite_status`</p><p>- BOOL_TO_DECIMAL</p> |
+| Redis | Redis: AOF last rewrite time sec | <p>Duration of the last AOF rewrite</p> | DEPENDENT | redis.persistence.aof_last_rewrite_time_sec<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_last_rewrite_time_sec`</p> |
+| Redis            | Redis: AOF last write status                                      | <p>Status of the last write operation to the AOF</p> | DEPENDENT      | redis.persistence.aof_last_write_status<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_last_write_status`</p><p>- BOOL_TO_DECIMAL</p> |
+| Redis            | Redis: AOF rewrite in progress                                    | <p>Flag indicating an AOF rewrite operation is on-going</p> | DEPENDENT      | redis.persistence.aof_rewrite_in_progress<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_rewrite_in_progress`</p> |
+| Redis | Redis: AOF rewrite scheduled | <p>Flag indicating an AOF rewrite operation will be scheduled once the on-going RDB save is complete</p> | DEPENDENT | redis.persistence.aof_rewrite_scheduled<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_rewrite_scheduled`</p> |
+| Redis | Redis: Dump loading | <p>Flag indicating if the load of a dump file is on-going</p> | DEPENDENT | redis.persistence.loading<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.loading`</p> |
+| Redis | Redis: RDB bgsave in progress | <p>"1" if bgsave is in progress and "0" otherwise</p> | DEPENDENT | redis.persistence.rdb_bgsave_in_progress<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.rdb_bgsave_in_progress`</p> |
+| Redis | Redis: RDB changes since last save | <p>Number of changes since the last background save</p> | DEPENDENT | redis.persistence.rdb_changes_since_last_save<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.rdb_changes_since_last_save`</p> |
+| Redis | Redis: RDB current bgsave time sec | <p>Duration of the on-going RDB save operation if any</p> | DEPENDENT | redis.persistence.rdb_current_bgsave_time_sec<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.rdb_current_bgsave_time_sec`</p> |
+| Redis            | Redis: RDB last bgsave status                                     | <p>Status of the last RDB save operation</p> | DEPENDENT      | redis.persistence.rdb_last_bgsave_status<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.rdb_last_bgsave_status`</p><p>- BOOL_TO_DECIMAL</p> |
+| Redis | Redis: RDB last bgsave time sec | <p>Duration of the last bg_save operation</p> | DEPENDENT | redis.persistence.rdb_last_bgsave_time_sec<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.rdb_last_bgsave_time_sec`</p> |
+| Redis | Redis: RDB last save time | <p>Epoch-based timestamp of last successful RDB save</p> | DEPENDENT | redis.persistence.rdb_last_save_time<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.rdb_last_save_time`</p> |
+| Redis | Redis: Connected slaves | <p>Number of connected slaves</p> | DEPENDENT | redis.replication.connected_slaves<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.connected_slaves`</p> |
+| Redis | Redis: Replication backlog active | <p>Flag indicating replication backlog is active</p> | DEPENDENT | redis.replication.repl_backlog_active<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.repl_backlog_active`</p> |
+| Redis | Redis: Replication backlog first byte offset | <p>The master offset of the replication backlog buffer</p> | DEPENDENT | redis.replication.repl_backlog_first_byte_offset<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.repl_backlog_first_byte_offset`</p> |
+| Redis | Redis: Replication backlog history length | <p>Amount of data in the backlog sync buffer</p> | DEPENDENT | redis.replication.repl_backlog_histlen<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.repl_backlog_histlen`</p> |
+| Redis | Redis: Replication backlog size | <p>Total size in bytes of the replication backlog buffer</p> | DEPENDENT | redis.replication.repl_backlog_size<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.repl_backlog_size`</p> |
+| Redis            | Redis: Replication role                                           | <p>Value is "master" if the instance is a replica of no one, or "slave" if the instance is a replica of some master instance. Note that a replica can be master of another replica (chained replication).</p> | DEPENDENT      | redis.replication.role<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.role`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Redis | Redis: Master replication offset | <p>Replication offset reported by the master</p> | DEPENDENT | redis.replication.master_repl_offset<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.master_repl_offset`</p> |
+| Redis | Redis: Process id | <p>PID of the server process</p> | DEPENDENT | redis.server.process_id<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.process_id`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Redis | Redis: Redis mode | <p>The server's mode ("standalone", "sentinel" or "cluster")</p> | DEPENDENT | redis.server.redis_mode<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.redis_mode`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Redis | Redis: Redis version | <p>Version of the Redis server</p> | DEPENDENT | redis.server.redis_version<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.redis_version`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Redis | Redis: TCP port | <p>TCP/IP listen port</p> | DEPENDENT | redis.server.tcp_port<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.tcp_port`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Redis | Redis: Uptime | <p>Number of seconds since Redis server start</p> | DEPENDENT | redis.server.uptime<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.uptime_in_seconds`</p> |
+| Redis | Redis: Evicted keys | <p>Number of evicted keys due to maxmemory limit</p> | DEPENDENT | redis.stats.evicted_keys<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.evicted_keys`</p> |
+| Redis | Redis: Expired keys | <p>Total number of key expiration events</p> | DEPENDENT | redis.stats.expired_keys<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.expired_keys`</p> |
+| Redis | Redis: Instantaneous input bytes per second | <p>The network's read rate per second in KB/sec</p> | DEPENDENT | redis.stats.instantaneous_input.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.instantaneous_input_kbps`</p><p>- MULTIPLIER: `1024`</p> |
+| Redis | Redis: Instantaneous operations per sec | <p>Number of commands processed per second</p> | DEPENDENT | redis.stats.instantaneous_ops.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.instantaneous_ops_per_sec`</p> |
+| Redis | Redis: Instantaneous output bytes per second | <p>The network's write rate per second in KB/sec</p> | DEPENDENT | redis.stats.instantaneous_output.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.instantaneous_output_kbps`</p><p>- MULTIPLIER: `1024`</p> |
+| Redis            | Redis: Keyspace hits                                              | <p>Number of successful lookups of keys in the main dictionary</p> | DEPENDENT      | redis.stats.keyspace_hits<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.keyspace_hits`</p> |
+| Redis            | Redis: Keyspace misses                                            | <p>Number of failed lookups of keys in the main dictionary</p> | DEPENDENT      | redis.stats.keyspace_misses<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.keyspace_misses`</p> |
+| Redis | Redis: Latest fork usec | <p>Duration of the latest fork operation in microseconds</p> | DEPENDENT | redis.stats.latest_fork_usec<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.latest_fork_usec`</p><p>- MULTIPLIER: `1.0E-5`</p> |
+| Redis | Redis: Migrate cached sockets | <p>The number of sockets open for MIGRATE purposes</p> | DEPENDENT | redis.stats.migrate_cached_sockets<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.migrate_cached_sockets`</p> |
+| Redis | Redis: Pubsub channels | <p>Global number of pub/sub channels with client subscriptions</p> | DEPENDENT | redis.stats.pubsub_channels<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.pubsub_channels`</p> |
+| Redis | Redis: Pubsub patterns | <p>Global number of pub/sub pattern with client subscriptions</p> | DEPENDENT | redis.stats.pubsub_patterns<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.pubsub_patterns`</p> |
+| Redis | Redis: Rejected connections | <p>Number of connections rejected because of maxclients limit</p> | DEPENDENT | redis.stats.rejected_connections<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.rejected_connections`</p> |
+| Redis | Redis: Sync full | <p>The number of full resyncs with replicas</p> | DEPENDENT | redis.stats.sync_full<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.sync_full`</p> |
+| Redis | Redis: Sync partial err | <p>The number of denied partial resync requests</p> | DEPENDENT | redis.stats.sync_partial_err<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.sync_partial_err`</p> |
+| Redis | Redis: Sync partial ok | <p>The number of accepted partial resync requests</p> | DEPENDENT | redis.stats.sync_partial_ok<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.sync_partial_ok`</p> |
+| Redis | Redis: Total commands processed | <p>Total number of commands processed by the server</p> | DEPENDENT | redis.stats.total_commands_processed<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.total_commands_processed`</p> |
+| Redis | Redis: Total connections received | <p>Total number of connections accepted by the server</p> | DEPENDENT | redis.stats.total_connections_received<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.total_connections_received`</p> |
+| Redis | Redis: Total net input bytes | <p>The total number of bytes read from the network</p> | DEPENDENT | redis.stats.total_net_input_bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.total_net_input_bytes`</p> |
+| Redis | Redis: Total net output bytes | <p>The total number of bytes written to the network</p> | DEPENDENT | redis.stats.total_net_output_bytes<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.total_net_output_bytes`</p> |
+| Redis | Redis: Max clients | <p>Max number of connected clients at the same time.</p><p>Once the limit is reached Redis will close all the new connections sending an error "max number of clients reached".</p> | DEPENDENT | redis.config.maxclients<p>**Preprocessing**:</p><p>- JSONPATH: `$.maxclients`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `30m`</p> |
+| Redis | DB {#DB}: Average TTL | <p>Average TTL</p> | DEPENDENT | redis.db.avg_ttl["{#DB}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Keyspace["{#DB}"].avg_ttl`</p><p>- MULTIPLIER: `0.001`</p> |
+| Redis | DB {#DB}: Expires | <p>Number of keys with an expiration</p> | DEPENDENT | redis.db.expires["{#DB}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Keyspace["{#DB}"].expires`</p> |
+| Redis | DB {#DB}: Keys | <p>Total number of keys</p> | DEPENDENT | redis.db.keys["{#DB}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Keyspace["{#DB}"].keys`</p> |
+| Redis | Redis: AOF current size{#SINGLETON} | <p>AOF current file size</p> | DEPENDENT | redis.persistence.aof_current_size[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_current_size`</p> |
+| Redis | Redis: AOF base size{#SINGLETON} | <p>AOF file size on latest startup or rewrite</p> | DEPENDENT | redis.persistence.aof_base_size[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_base_size`</p> |
+| Redis            | Redis: AOF pending rewrite{#SINGLETON}                            | <p>Flag indicating an AOF rewrite operation will be scheduled once the on-going RDB save is complete</p> | DEPENDENT      | redis.persistence.aof_pending_rewrite[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_pending_rewrite`</p> |
+| Redis | Redis: AOF buffer length{#SINGLETON} | <p>Size of the AOF buffer</p> | DEPENDENT | redis.persistence.aof_buffer_length[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_buffer_length`</p> |
+| Redis | Redis: AOF rewrite buffer length{#SINGLETON} | <p>Size of the AOF rewrite buffer</p> | DEPENDENT | redis.persistence.aof_rewrite_buffer_length[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_rewrite_buffer_length`</p> |
+| Redis | Redis: AOF pending background I/O fsync{#SINGLETON} | <p>Number of fsync pending jobs in background I/O queue</p> | DEPENDENT | redis.persistence.aof_pending_bio_fsync[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_pending_bio_fsync`</p> |
+| Redis | Redis: AOF delayed fsync{#SINGLETON} | <p>Delayed fsync counter</p> | DEPENDENT | redis.persistence.aof_delayed_fsync[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_delayed_fsync`</p> |
+| Redis | Redis: Master host{#SINGLETON} | <p>Host or IP address of the master</p> | DEPENDENT | redis.replication.master_host[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.master_host`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Redis | Redis: Master port{#SINGLETON} | <p>Master listening TCP port</p> | DEPENDENT | redis.replication.master_port[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.master_port`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Redis            | Redis: Master link status{#SINGLETON}                             | <p>Status of the link (up/down)</p> | DEPENDENT      | redis.replication.master_link_status[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.master_link_status`</p><p>- BOOL_TO_DECIMAL</p> |
+| Redis | Redis: Master last I/O seconds ago{#SINGLETON} | <p>Number of seconds since the last interaction with master</p> | DEPENDENT | redis.replication.master_last_io_seconds_ago[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.master_last_io_seconds_ago`</p> |
+| Redis            | Redis: Master sync in progress{#SINGLETON}                        | <p>Indicates that the master is syncing to the replica</p> | DEPENDENT      | redis.replication.master_sync_in_progress[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.master_sync_in_progress`</p> |
+| Redis | Redis: Slave replication offset{#SINGLETON} | <p>The replication offset of the replica instance</p> | DEPENDENT | redis.replication.slave_repl_offset[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.slave_repl_offset`</p> |
+| Redis | Redis: Slave priority{#SINGLETON} | <p>The priority of the instance as a candidate for failover</p> | DEPENDENT | redis.replication.slave_priority[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.slave_priority`</p> |
+| Redis            | Redis: Slave read only{#SINGLETON}                                | <p>Flag indicating if the replica is read-only</p> | DEPENDENT      | redis.replication.slave_read_only[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.slave_read_only`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Redis | Redis slave {#SLAVE_IP}:{#SLAVE_PORT}: Replication lag in bytes | <p>Replication lag in bytes</p> | DEPENDENT | redis.replication.lag_bytes["{#SLAVE_IP}:{#SLAVE_PORT}"]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Redis | Redis: Number of processes running | <p>-</p> | ZABBIX_PASSIVE | proc.num["{$REDIS.PROCESS_NAME}{#SINGLETON}"] |
+| Redis | Redis: Memory usage (rss) | <p>Resident set size memory used by process in bytes.</p> | ZABBIX_PASSIVE | proc.mem["{$REDIS.PROCESS_NAME}{#SINGLETON}",,,,rss] |
+| Redis | Redis: Memory usage (vsize) | <p>Virtual memory size used by process in bytes.</p> | ZABBIX_PASSIVE | proc.mem["{$REDIS.PROCESS_NAME}{#SINGLETON}",,,,vsize] |
+| Redis | Redis: CPU utilization | <p>Process CPU utilization percentage.</p> | ZABBIX_PASSIVE | proc.cpu.util["{$REDIS.PROCESS_NAME}{#SINGLETON}"] |
+| Redis | Redis: Executable path{#SINGLETON} | <p>The path to the server's executable</p> | DEPENDENT | redis.server.executable[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Server.executable`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Redis | Redis: Memory used peak %{#SINGLETON} | <p>The percentage of used_memory_peak out of used_memory</p> | DEPENDENT | redis.memory.used_memory_peak_perc[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_peak_perc`</p><p>- REGEX: `(.+)% \1`</p> |
+| Redis | Redis: Memory used overhead{#SINGLETON} | <p>The sum in bytes of all overheads that the server allocated for managing its internal data structures</p> | DEPENDENT | redis.memory.used_memory_overhead[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_overhead`</p> |
+| Redis | Redis: Memory used startup{#SINGLETON} | <p>Initial amount of memory consumed by Redis at startup in bytes</p> | DEPENDENT | redis.memory.used_memory_startup[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_startup`</p> |
+| Redis | Redis: Memory used dataset{#SINGLETON} | <p>The size in bytes of the dataset</p> | DEPENDENT | redis.memory.used_memory_dataset[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_dataset`</p> |
+| Redis | Redis: Memory used dataset %{#SINGLETON} | <p>The percentage of used_memory_dataset out of the net memory usage (used_memory minus used_memory_startup)</p> | DEPENDENT | redis.memory.used_memory_dataset_perc[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_dataset_perc`</p><p>- REGEX: `(.+)% \1`</p> |
+| Redis | Redis: Total system memory{#SINGLETON} | <p>The total amount of memory that the Redis host has</p> | DEPENDENT | redis.memory.total_system_memory[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.total_system_memory`</p> |
+| Redis            | Redis: Max memory{#SINGLETON}                                     | <p>Maximum amount of memory that Redis is allowed to allocate (the value of the maxmemory configuration directive)</p> | DEPENDENT      | redis.memory.maxmemory[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.maxmemory`</p> |
+| Redis | Redis: Max memory policy{#SINGLETON} | <p>The value of the maxmemory-policy configuration directive</p> | DEPENDENT | redis.memory.maxmemory_policy[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.maxmemory_policy`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Redis | Redis: Active defrag running{#SINGLETON} | <p>Flag indicating if active defragmentation is active</p> | DEPENDENT | redis.memory.active_defrag_running[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.active_defrag_running`</p> |
+| Redis | Redis: Lazyfree pending objects{#SINGLETON} | <p>The number of objects waiting to be freed (as a result of calling UNLINK, or FLUSHDB and FLUSHALL with the ASYNC option)</p> | DEPENDENT | redis.memory.lazyfree_pending_objects[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.lazyfree_pending_objects`</p> |
+| Redis | Redis: RDB last CoW size{#SINGLETON} | <p>The size in bytes of copy-on-write allocations during the last RDB save operation</p> | DEPENDENT | redis.persistence.rdb_last_cow_size[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.rdb_last_cow_size`</p> |
+| Redis | Redis: AOF last CoW size{#SINGLETON} | <p>The size in bytes of copy-on-write allocations during the last AOF rewrite operation</p> | DEPENDENT | redis.persistence.aof_last_cow_size[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Persistence.aof_last_cow_size`</p> |
+| Redis | Redis: Expired stale %{#SINGLETON} | | DEPENDENT | redis.stats.expired_stale_perc[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.expired_stale_perc`</p> |
+| Redis | Redis: Expired time cap reached count{#SINGLETON} | | DEPENDENT | redis.stats.expired_time_cap_reached_count[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.expired_time_cap_reached_count`</p> |
+| Redis | Redis: Slave expires tracked keys{#SINGLETON} | <p>The number of keys tracked for expiry purposes (applicable only to writable replicas)</p> | DEPENDENT | redis.stats.slave_expires_tracked_keys[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.slave_expires_tracked_keys`</p> |
+| Redis            | Redis: Active defrag hits{#SINGLETON}                             | <p>Number of value reallocations performed by the active defragmentation process</p> | DEPENDENT      | redis.stats.active_defrag_hits[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.active_defrag_hits`</p> |
+| Redis | Redis: Active defrag misses{#SINGLETON} | <p>Number of aborted value reallocations started by the active defragmentation process</p> | DEPENDENT | redis.stats.active_defrag_misses[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.active_defrag_misses`</p> |
+| Redis | Redis: Active defrag key hits{#SINGLETON} | <p>Number of keys that were actively defragmented</p> | DEPENDENT | redis.stats.active_defrag_key_hits[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.active_defrag_key_hits`</p> |
+| Redis | Redis: Active defrag key misses{#SINGLETON} | <p>Number of keys that were skipped by the active defragmentation process</p> | DEPENDENT | redis.stats.active_defrag_key_misses[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Stats.active_defrag_key_misses`</p> |
+| Redis | Redis: Replication second offset{#SINGLETON} | <p>Offset up to which replication IDs are accepted</p> | DEPENDENT | redis.replication.second_repl_offset[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Replication.second_repl_offset`</p> |
+| Redis | Redis: Allocator active{#SINGLETON} | | DEPENDENT | redis.memory.allocator_active[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.allocator_active`</p> |
+| Redis | Redis: Allocator allocated{#SINGLETON} | | DEPENDENT | redis.memory.allocator_allocated[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.allocator_allocated`</p> |
+| Redis | Redis: Allocator resident{#SINGLETON} | | DEPENDENT | redis.memory.allocator_resident[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.allocator_resident`</p> |
+| Redis | Redis: Memory used scripts{#SINGLETON} | | DEPENDENT | redis.memory.used_memory_scripts[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.used_memory_scripts`</p> |
+| Redis | Redis: Memory number of cached scripts{#SINGLETON} | | DEPENDENT | redis.memory.number_of_cached_scripts[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.number_of_cached_scripts`</p> |
+| Redis | Redis: Allocator fragmentation bytes{#SINGLETON} | | DEPENDENT | redis.memory.allocator_frag_bytes[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.allocator_frag_bytes`</p> |
+| Redis | Redis: Allocator fragmentation ratio{#SINGLETON} | | DEPENDENT | redis.memory.allocator_frag_ratio[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.allocator_frag_ratio`</p> |
+| Redis | Redis: Allocator RSS bytes{#SINGLETON} | | DEPENDENT | redis.memory.allocator_rss_bytes[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.allocator_rss_bytes`</p> |
+| Redis | Redis: Allocator RSS ratio{#SINGLETON} | | DEPENDENT | redis.memory.allocator_rss_ratio[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.allocator_rss_ratio`</p> |
+| Redis | Redis: Memory RSS overhead bytes{#SINGLETON} | | DEPENDENT | redis.memory.rss_overhead_bytes[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.rss_overhead_bytes`</p> |
+| Redis | Redis: Memory RSS overhead ratio{#SINGLETON} | | DEPENDENT | redis.memory.rss_overhead_ratio[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.rss_overhead_ratio`</p> |
+| Redis | Redis: Memory fragmentation bytes{#SINGLETON} | | DEPENDENT | redis.memory.fragmentation_bytes[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.mem_fragmentation_bytes`</p> |
+| Redis | Redis: Memory not counted for evict{#SINGLETON} | | DEPENDENT | redis.memory.not_counted_for_evict[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.mem_not_counted_for_evict`</p> |
+| Redis | Redis: Memory replication backlog{#SINGLETON} | | DEPENDENT | redis.memory.replication_backlog[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.mem_replication_backlog`</p> |
+| Redis | Redis: Memory clients normal{#SINGLETON} | | DEPENDENT | redis.memory.mem_clients_normal[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.mem_clients_normal`</p> |
+| Redis | Redis: Memory clients slaves{#SINGLETON} | | DEPENDENT | redis.memory.mem_clients_slaves[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.mem_clients_slaves`</p> |
+| Redis | Redis: Memory AOF buffer{#SINGLETON} | <p>Size of the AOF buffer</p> | DEPENDENT | redis.memory.mem_aof_buffer[{#SINGLETON}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.Memory.mem_aof_buffer`</p> |
+| Zabbix_raw_items | Redis: Get info | | ZABBIX_PASSIVE | redis.info["{$REDIS.CONN.URI}"] |
+| Zabbix_raw_items | Redis: Get config | | ZABBIX_PASSIVE | redis.config["{$REDIS.CONN.URI}"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
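Most items in the table above are DEPENDENT items: the single master item `redis.info["{$REDIS.CONN.URI}"]` fetches all the data at once, and each dependent item extracts its value with the listed JSONPATH step, optionally followed by a MULTIPLIER step. The sketch below is purely illustrative and is not part of the template: it assumes the master item returns a JSON document keyed by INFO section, and the helper names are invented for this example.

```python
# Illustrative sketch only -- not taken from the template.
# Assumption: the master item returns JSON grouped by INFO section, e.g.
# {"Memory": {"used_memory": 1048576}, "Stats": {"instantaneous_input_kbps": 12.5}}
import json


def jsonpath_get(doc, path):
    """Resolve a simple dot-only JSONPath such as '$.Memory.used_memory'."""
    value = doc
    for part in path.lstrip("$.").split("."):
        value = value[part]
    return value


def preprocess(raw_info, path, multiplier=None):
    """Mimic a JSONPATH preprocessing step, optionally followed by a MULTIPLIER step."""
    value = jsonpath_get(json.loads(raw_info), path)
    return value if multiplier is None else value * multiplier


raw_info = json.dumps({
    "Memory": {"used_memory": 1048576},
    "Stats": {"instantaneous_input_kbps": 12.5},
})

# redis.memory.used_memory: JSONPATH only
print(preprocess(raw_info, "$.Memory.used_memory"))                    # 1048576
# redis.stats.instantaneous_input.rate: JSONPATH + MULTIPLIER 1024 (KB/s -> B/s)
print(preprocess(raw_info, "$.Stats.instantaneous_input_kbps", 1024))  # 12800.0
```

The extraction paths and multipliers are exactly those listed in the "Key and additional info" column; only the surrounding Python scaffolding here is an assumption made for illustration.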
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Redis: Service is down |<p>-</p> |`{TEMPLATE_NAME:redis.ping["{$REDIS.CONN.URI}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|Redis: Too many entries in the slowlog (over {$REDIS.SLOWLOG.COUNT.MAX.WARN} per second in 5m) |<p>-</p> |`{TEMPLATE_NAME:redis.slowlog.count["{$REDIS.CONN.URI}"].min(5m)}>{$REDIS.SLOWLOG.COUNT.MAX.WARN}` |INFO | |
-|Redis: Total number of connected clients is too high (over {$REDIS.CLIENTS.PRC.MAX.WARN}% in 5m) |<p>When the number of clients reaches the value of the "maxclients" parameter, new connections will be rejected.</p><p>https://redis.io/topics/clients#maximum-number-of-clients</p> |`{TEMPLATE_NAME:redis.clients.connected.min(5m)}/{Redis:redis.config.maxclients.last()}*100>{$REDIS.CLIENTS.PRC.MAX.WARN}` |WARNING | |
-|Redis: Memory fragmentation ratio is too high (over {$REDIS.MEM.FRAG_RATIO.MAX.WARN} in 15m) |<p>This ratio is an indication of memory mapping efficiency:</p><p> — Value over 1.0 indicate that memory fragmentation is very likely. Consider restarting the Redis server so the operating system can recover fragmented memory, especially with a ratio over 1.5.</p><p> — Value under 1.0 indicate that Redis likely has insufficient memory available. Consider optimizing memory usage or adding more RAM.</p><p>Note: If your peak memory usage is much higher than your current memory usage, the memory fragmentation ratio may be unreliable.</p><p>https://redis.io/topics/memory-optimization</p> |`{TEMPLATE_NAME:redis.memory.fragmentation_ratio.min(15m)}>{$REDIS.MEM.FRAG_RATIO.MAX.WARN}` |WARNING | |
-|Redis: Last AOF write operation failed |<p>Detailed information about persistence: https://redis.io/topics/persistence</p> |`{TEMPLATE_NAME:redis.persistence.aof_last_write_status.last()}=0` |WARNING | |
-|Redis: Last RDB save operation failed |<p>Detailed information about persistence: https://redis.io/topics/persistence</p> |`{TEMPLATE_NAME:redis.persistence.rdb_last_bgsave_status.last()}=0` |WARNING | |
-|Redis: Number of slaves has changed |<p>Redis number of slaves has changed. Ack to close.</p> |`{TEMPLATE_NAME:redis.replication.connected_slaves.diff()}=1` |INFO |<p>Manual close: YES</p> |
-|Redis: Replication role has changed (new role: {ITEM.VALUE}) |<p>Redis replication role has changed. Ack to close.</p> |`{TEMPLATE_NAME:redis.replication.role.diff()}=1 and {TEMPLATE_NAME:redis.replication.role.strlen()}>0` |WARNING |<p>Manual close: YES</p> |
-|Redis: Version has changed (new version: {ITEM.VALUE}) |<p>Redis version has changed. Ack to close.</p> |`{TEMPLATE_NAME:redis.server.redis_version.diff()}=1 and {TEMPLATE_NAME:redis.server.redis_version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Redis: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:redis.server.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|Redis: Connections are rejected |<p>The number of connections has reached the value of "maxclients".</p><p>https://redis.io/topics/clients</p> |`{TEMPLATE_NAME:redis.stats.rejected_connections.last()}>0` |HIGH | |
-|Redis: Replication lag with master is too high (over {$REDIS.REPL.LAG.MAX.WARN} in 5m) |<p>-</p> |`{TEMPLATE_NAME:redis.replication.master_last_io_seconds_ago[{#SINGLETON}].min(5m)}>{$REDIS.REPL.LAG.MAX.WARN}` |WARNING | |
-|Redis: Process is not running |<p>-</p> |`{TEMPLATE_NAME:proc.num["{$REDIS.PROCESS_NAME}{#SINGLETON}"].last()}=0` |HIGH | |
-|Redis: Memory usage is too high (over {$REDIS.MEM.PUSED.MAX.WARN}% in 5m) |<p>-</p> |`{TEMPLATE_NAME:redis.memory.used_memory.last()}/{TEMPLATE_NAME:redis.memory.maxmemory[{#SINGLETON}].min(5m)}*100>{$REDIS.MEM.PUSED.MAX.WARN}` |WARNING | |
-|Redis: Failed to fetch info data (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes</p> |`{TEMPLATE_NAME:redis.info["{$REDIS.CONN.URI}"].nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Redis: Service is down</p> |
-|Redis: Configuration has changed |<p>Redis configuration has changed. Ack to close.</p> |`{TEMPLATE_NAME:redis.config["{$REDIS.CONN.URI}"].diff()}=1 and {TEMPLATE_NAME:redis.config["{$REDIS.CONN.URI}"].strlen()}>0` |INFO |<p>Manual close: YES</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|--------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------|
+| Redis: Service is down | <p>-</p> | `{TEMPLATE_NAME:redis.ping["{$REDIS.CONN.URI}"].last()}=0` | AVERAGE | <p>Manual close: YES</p> |
+| Redis: Too many entries in the slowlog (over {$REDIS.SLOWLOG.COUNT.MAX.WARN} per second in 5m) | <p>-</p> | `{TEMPLATE_NAME:redis.slowlog.count["{$REDIS.CONN.URI}"].min(5m)}>{$REDIS.SLOWLOG.COUNT.MAX.WARN}` | INFO | |
+| Redis: Total number of connected clients is too high (over {$REDIS.CLIENTS.PRC.MAX.WARN}% in 5m) | <p>When the number of clients reaches the value of the "maxclients" parameter, new connections will be rejected.</p><p>https://redis.io/topics/clients#maximum-number-of-clients</p> | `{TEMPLATE_NAME:redis.clients.connected.min(5m)}/{Redis:redis.config.maxclients.last()}*100>{$REDIS.CLIENTS.PRC.MAX.WARN}` | WARNING | |
+| Redis: Memory fragmentation ratio is too high (over {$REDIS.MEM.FRAG_RATIO.MAX.WARN} in 15m)      | <p>This ratio is an indication of memory mapping efficiency:</p><p> - A value over 1.0 indicates that memory fragmentation is very likely. Consider restarting the Redis server so the operating system can recover fragmented memory, especially with a ratio over 1.5.</p><p> - A value under 1.0 indicates that Redis likely has insufficient memory available. Consider optimizing memory usage or adding more RAM.</p><p>Note: If your peak memory usage is much higher than your current memory usage, the memory fragmentation ratio may be unreliable.</p><p>https://redis.io/topics/memory-optimization</p> | `{TEMPLATE_NAME:redis.memory.fragmentation_ratio.min(15m)}>{$REDIS.MEM.FRAG_RATIO.MAX.WARN}` | WARNING  | |
+| Redis: Last AOF write operation failed | <p>Detailed information about persistence: https://redis.io/topics/persistence</p> | `{TEMPLATE_NAME:redis.persistence.aof_last_write_status.last()}=0` | WARNING | |
+| Redis: Last RDB save operation failed | <p>Detailed information about persistence: https://redis.io/topics/persistence</p> | `{TEMPLATE_NAME:redis.persistence.rdb_last_bgsave_status.last()}=0` | WARNING | |
+| Redis: Number of slaves has changed | <p>Redis number of slaves has changed. Ack to close.</p> | `{TEMPLATE_NAME:redis.replication.connected_slaves.diff()}=1` | INFO | <p>Manual close: YES</p> |
+| Redis: Replication role has changed (new role: {ITEM.VALUE}) | <p>Redis replication role has changed. Ack to close.</p> | `{TEMPLATE_NAME:redis.replication.role.diff()}=1 and {TEMPLATE_NAME:redis.replication.role.strlen()}>0` | WARNING | <p>Manual close: YES</p> |
+| Redis: Version has changed (new version: {ITEM.VALUE}) | <p>Redis version has changed. Ack to close.</p> | `{TEMPLATE_NAME:redis.server.redis_version.diff()}=1 and {TEMPLATE_NAME:redis.server.redis_version.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Redis: has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:redis.server.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| Redis: Connections are rejected | <p>The number of connections has reached the value of "maxclients".</p><p>https://redis.io/topics/clients</p> | `{TEMPLATE_NAME:redis.stats.rejected_connections.last()}>0` | HIGH | |
+| Redis: Replication lag with master is too high (over {$REDIS.REPL.LAG.MAX.WARN} in 5m) | <p>-</p> | `{TEMPLATE_NAME:redis.replication.master_last_io_seconds_ago[{#SINGLETON}].min(5m)}>{$REDIS.REPL.LAG.MAX.WARN}` | WARNING | |
+| Redis: Process is not running | <p>-</p> | `{TEMPLATE_NAME:proc.num["{$REDIS.PROCESS_NAME}{#SINGLETON}"].last()}=0` | HIGH | |
+| Redis: Memory usage is too high (over {$REDIS.MEM.PUSED.MAX.WARN}% in 5m) | <p>-</p> | `{TEMPLATE_NAME:redis.memory.used_memory.last()}/{TEMPLATE_NAME:redis.memory.maxmemory[{#SINGLETON}].min(5m)}*100>{$REDIS.MEM.PUSED.MAX.WARN}` | WARNING | |
+| Redis: Failed to fetch info data (or no data for 30m) | <p>Zabbix has not received data for items for the last 30 minutes</p> | `{TEMPLATE_NAME:redis.info["{$REDIS.CONN.URI}"].nodata(30m)}=1` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Redis: Service is down</p> |
+| Redis: Configuration has changed | <p>Redis configuration has changed. Ack to close.</p> | `{TEMPLATE_NAME:redis.config["{$REDIS.CONN.URI}"].diff()}=1 and {TEMPLATE_NAME:redis.config["{$REDIS.CONN.URI}"].strlen()}>0` | INFO | <p>Manual close: YES</p> |
## Feedback
diff --git a/templates/db/redis/template_db_redis.yaml b/templates/db/redis/template_db_redis.yaml
index d2ffe4379d3..a93104af7be 100644
--- a/templates/db/redis/template_db_redis.yaml
+++ b/templates/db/redis/template_db_redis.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:05:39Z'
+ date: '2021-04-22T11:26:38Z'
groups:
-
name: Templates/Databases
@@ -2746,228 +2746,232 @@ zabbix_export:
dashboards:
-
name: 'Redis overview'
- widgets:
- -
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Redis: Clients'
- host: Redis
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Redis: Keyspace'
- host: Redis
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Redis: Commands'
- host: Redis
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Redis: Expired keys'
- host: Redis
- -
- type: GRAPH_CLASSIC
- 'y': '10'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Redis: Persistence'
- host: Redis
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '10'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Redis: Slaves'
- host: Redis
- -
- type: GRAPH_CLASSIC
- 'y': '15'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Redis: Slowlog'
- host: Redis
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '15'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Redis: Uptime'
- host: Redis
- -
- type: GRAPH_PROTOTYPE
- 'y': '20'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
+ pages:
+ -
+ widgets:
+ -
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Redis: Clients'
+ host: Redis
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Redis: Keyspace'
+ host: Redis
+ -
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Redis: Commands'
+ host: Redis
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Redis: Expired keys'
+ host: Redis
+ -
+ type: GRAPH_CLASSIC
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Redis: Persistence'
+ host: Redis
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '10'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Redis: Slaves'
+ host: Redis
+ -
+ type: GRAPH_CLASSIC
+ 'y': '15'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Redis: Slowlog'
+ host: Redis
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '15'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Redis: Uptime'
+ host: Redis
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Redis: Replication lag time{#SINGLETON}'
- host: Redis
+ 'y': '20'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Redis: Replication lag time{#SINGLETON}'
+ host: Redis
-
name: 'Redis performance'
- widgets:
- -
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Redis: CPU'
- host: Redis
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Redis: Network'
- host: Redis
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Redis: Memory'
- host: Redis
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Redis: Memory fragmentation'
- host: Redis
+ pages:
+ -
+ widgets:
+ -
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Redis: CPU'
+ host: Redis
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Redis: Network'
+ host: Redis
+ -
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Redis: Memory'
+ host: Redis
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Redis: Memory fragmentation'
+ host: Redis
valuemaps:
-
name: 'Redis bgsave time'
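
Note on the dashboard hunks above: the change is purely structural. Each dashboard's flat `widgets` list now sits one level deeper, inside a `pages` list with one entry per dashboard page; the widget definitions themselves are unchanged. A minimal sketch of the new shape, using only keys and values that appear in the hunks above (a full export carries more widgets and fields per page):

```yaml
dashboards:
  -
    name: 'Redis overview'
    pages:                      # new level introduced by this commit: one entry per dashboard page
      -
        widgets:                # previously this list sat directly under the dashboard name
          -
            type: GRAPH_CLASSIC
            width: '12'
            height: '5'
            fields:
              -
                type: INTEGER
                name: source_type
                value: '0'
              -
                type: GRAPH
                name: graphid
                value:
                  name: 'Redis: Clients'
                  host: Redis
```
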
diff --git a/templates/db/tidb_http/tidb_pd_http/template_db_tidb_pd_http.yaml b/templates/db/tidb_http/tidb_pd_http/template_db_tidb_pd_http.yaml
index e53fccf2695..3f0442aea37 100644
--- a/templates/db/tidb_http/tidb_pd_http/template_db_tidb_pd_http.yaml
+++ b/templates/db/tidb_http/tidb_pd_http/template_db_tidb_pd_http.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-04-08T09:02:39Z'
+ date: '2021-04-22T12:58:27Z'
groups:
-
name: Templates/Databases
@@ -21,13 +21,6 @@ zabbix_export:
groups:
-
name: Templates/Databases
- applications:
- -
- name: 'PD instance'
- -
- name: 'TiDB cluster'
- -
- name: 'Zabbix raw items'
items:
-
name: 'PD: Get instance metrics'
@@ -37,9 +30,6 @@ zabbix_export:
trends: '0'
value_type: TEXT
description: 'Get TiDB PD instance metrics.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: CHECK_NOT_SUPPORTED
@@ -50,6 +40,10 @@ zabbix_export:
parameters:
- ''
url: '{$PD.URL}:{$PD.PORT}/metrics'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'PD: Get instance status'
type: HTTP_AGENT
@@ -58,9 +52,6 @@ zabbix_export:
trends: '0'
value_type: TEXT
description: 'Get TiDB PD instance status info.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: CHECK_NOT_SUPPORTED
@@ -69,6 +60,10 @@ zabbix_export:
error_handler: CUSTOM_VALUE
error_handler_params: '{"status": "0"}'
url: '{$PD.URL}:{$PD.PORT}/pd/api/v1/status'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'PD: GRPC Commands total, rate'
type: DEPENDENT
@@ -77,9 +72,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The rate at which gRPC commands are completed.'
- applications:
- -
- name: 'PD instance'
preprocessing:
-
type: JSONPATH
@@ -92,6 +84,10 @@ zabbix_export:
- ''
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'PD instance'
-
name: 'PD: Status'
type: DEPENDENT
@@ -101,9 +97,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'Status of PD instance.'
- applications:
- -
- name: 'PD instance'
valuemap:
name: 'Service state'
preprocessing:
@@ -119,6 +112,10 @@ zabbix_export:
- 1h
master_item:
key: pd.get_status
+ tags:
+ -
+ tag: Application
+ value: 'PD instance'
triggers:
-
expression: '{last()}=0'
@@ -133,9 +130,6 @@ zabbix_export:
value_type: FLOAT
units: uptime
description: 'The runtime of each PD instance.'
- applications:
- -
- name: 'PD instance'
preprocessing:
-
type: JSONPATH
@@ -149,6 +143,10 @@ zabbix_export:
return (Math.floor(Date.now()/1000)-Number(value));
master_item:
key: pd.get_status
+ tags:
+ -
+ tag: Application
+ value: 'PD instance'
triggers:
-
expression: '{last()}<10m'
@@ -165,9 +163,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'Version of the PD instance.'
- applications:
- -
- name: 'PD instance'
preprocessing:
-
type: JSONPATH
@@ -179,6 +174,10 @@ zabbix_export:
- 3h
master_item:
key: pd.get_status
+ tags:
+ -
+ tag: Application
+ value: 'PD instance'
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -201,9 +200,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The total count of cluster Regions.'
- applications:
- -
- name: 'TiDB cluster'
preprocessing:
-
type: JSONPATH
@@ -211,6 +207,10 @@ zabbix_export:
- '$[?(@.name == "pd_cluster_status" && @.labels.type == "leader_count")].value.first()'
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB cluster'
-
name: 'TiDB cluster: Current peer count'
type: DEPENDENT
@@ -218,9 +218,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The current count of all cluster peers.'
- applications:
- -
- name: 'TiDB cluster'
preprocessing:
-
type: JSONPATH
@@ -228,6 +225,10 @@ zabbix_export:
- '$[?(@.name == "pd_cluster_status" && @.labels.type == "region_count")].value.first()'
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB cluster'
-
name: 'TiDB cluster: Storage capacity'
type: DEPENDENT
@@ -237,9 +238,6 @@ zabbix_export:
value_type: FLOAT
units: B
description: 'The total storage capacity for this TiDB cluster.'
- applications:
- -
- name: 'TiDB cluster'
preprocessing:
-
type: JSONPATH
@@ -251,6 +249,10 @@ zabbix_export:
- 1h
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB cluster'
-
name: 'TiDB cluster: Storage size'
type: DEPENDENT
@@ -260,9 +262,6 @@ zabbix_export:
value_type: FLOAT
units: B
description: 'The storage size that is currently used by the TiDB cluster.'
- applications:
- -
- name: 'TiDB cluster'
preprocessing:
-
type: JSONPATH
@@ -270,6 +269,10 @@ zabbix_export:
- '$[?(@.name == "pd_cluster_status" && @.labels.type == "storage_size")].value.first()'
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB cluster'
-
name: 'TiDB cluster: Disconnect stores'
type: DEPENDENT
@@ -277,9 +280,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The count of disconnected stores.'
- applications:
- -
- name: 'TiDB cluster'
preprocessing:
-
type: JSONPATH
@@ -291,6 +291,10 @@ zabbix_export:
- 1h
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB cluster'
trigger_prototypes:
-
expression: '{last()}>0'
@@ -304,9 +308,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The count of down stores.'
- applications:
- -
- name: 'TiDB cluster'
preprocessing:
-
type: JSONPATH
@@ -318,6 +319,10 @@ zabbix_export:
- 1h
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB cluster'
trigger_prototypes:
-
expression: '{last()}>0'
@@ -331,9 +336,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The count of low space stores.'
- applications:
- -
- name: 'TiDB cluster'
preprocessing:
-
type: JSONPATH
@@ -345,6 +347,10 @@ zabbix_export:
- 1h
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB cluster'
trigger_prototypes:
-
expression: '{last()}>0'
@@ -357,9 +363,6 @@ zabbix_export:
key: 'pd.cluster_status.store_offline[{#SINGLETON}]'
delay: '0'
history: 7d
- applications:
- -
- name: 'TiDB cluster'
preprocessing:
-
type: JSONPATH
@@ -371,6 +374,10 @@ zabbix_export:
- 1h
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB cluster'
-
name: 'TiDB cluster: Tombstone stores'
type: DEPENDENT
@@ -378,9 +385,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The count of tombstone stores.'
- applications:
- -
- name: 'TiDB cluster'
preprocessing:
-
type: JSONPATH
@@ -392,6 +396,10 @@ zabbix_export:
- 1h
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB cluster'
-
name: 'TiDB cluster: Unhealth stores'
type: DEPENDENT
@@ -399,9 +407,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The count of unhealthy stores.'
- applications:
- -
- name: 'TiDB cluster'
preprocessing:
-
type: JSONPATH
@@ -413,6 +418,10 @@ zabbix_export:
- 1h
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB cluster'
-
name: 'TiDB cluster: Normal stores'
type: DEPENDENT
@@ -420,9 +429,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The count of healthy storage instances.'
- applications:
- -
- name: 'TiDB cluster'
preprocessing:
-
type: JSONPATH
@@ -434,6 +440,10 @@ zabbix_export:
- 1h
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB cluster'
trigger_prototypes:
-
expression: '{TiDB PD by HTTP:pd.cluster_status.storage_size[{#SINGLETON}].min(5m)}/{TiDB PD by HTTP:pd.cluster_status.storage_capacity[{#SINGLETON}].last()}*100>{$PD.STORAGE_USAGE.MAX.WARN}'
@@ -487,9 +497,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The rate per command type at which gRPC commands are completed.'
- applications:
- -
- name: 'PD instance'
preprocessing:
-
type: JSONPATH
@@ -501,6 +508,10 @@ zabbix_export:
- ''
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'PD instance'
master_item:
key: pd.get_metrics
preprocessing:
@@ -544,9 +555,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The count of heartbeats with the error status per second.'
- application_prototypes:
- -
- name: 'TiDB Store [{#STORE_ADDRESS}]'
preprocessing:
-
type: JSONPATH
@@ -560,6 +568,10 @@ zabbix_export:
- ''
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB Store [{#STORE_ADDRESS}]'
-
name: 'PD: Region heartbeat: active, rate'
type: DEPENDENT
@@ -568,9 +580,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The count of heartbeats with the ok status per second.'
- application_prototypes:
- -
- name: 'TiDB Store [{#STORE_ADDRESS}]'
preprocessing:
-
type: JSONPATH
@@ -584,6 +593,10 @@ zabbix_export:
- ''
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB Store [{#STORE_ADDRESS}]'
-
name: 'PD: Region schedule push: total, rate'
type: DEPENDENT
@@ -591,9 +604,6 @@ zabbix_export:
delay: '0'
history: 7d
value_type: FLOAT
- application_prototypes:
- -
- name: 'TiDB Store [{#STORE_ADDRESS}]'
preprocessing:
-
type: JSONPATH
@@ -607,6 +617,10 @@ zabbix_export:
- ''
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB Store [{#STORE_ADDRESS}]'
-
name: 'PD: Region heartbeat: total, rate'
type: DEPENDENT
@@ -615,9 +629,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The count of heartbeats reported to PD per instance per second.'
- application_prototypes:
- -
- name: 'TiDB Store [{#STORE_ADDRESS}]'
preprocessing:
-
type: JSONPATH
@@ -631,6 +642,10 @@ zabbix_export:
- ''
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB Store [{#STORE_ADDRESS}]'
master_item:
key: pd.get_metrics
preprocessing:
@@ -674,9 +689,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of Regions in different label levels.'
- applications:
- -
- name: 'TiDB cluster'
preprocessing:
-
type: JSONPATH
@@ -684,6 +696,10 @@ zabbix_export:
- '$[?(@.name == "pd_regions_label_level" && @.labels.type == "{#TYPE}")].value.first()'
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB cluster'
master_item:
key: pd.get_metrics
preprocessing:
@@ -720,9 +736,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The health status of Regions indicated via the count of unusual Regions including pending peers, down peers, extra peers, offline peers, missing peers, learner peers and incorrect namespaces.'
- applications:
- -
- name: 'TiDB cluster'
preprocessing:
-
type: JSONPATH
@@ -730,6 +743,10 @@ zabbix_export:
- '$[?(@.name == "pd_regions_status" && @.labels.type == "{#TYPE}")].value.first()'
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB cluster'
trigger_prototypes:
-
expression: '{min(5m)}>0'
@@ -812,9 +829,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The current running schedulers.'
- applications:
- -
- name: 'TiDB cluster'
preprocessing:
-
type: JSONPATH
@@ -824,6 +838,10 @@ zabbix_export:
error_handler_params: '0'
master_item:
key: pd.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB cluster'
master_item:
key: pd.get_metrics
preprocessing:
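
The item-level change repeated throughout this template (and the two TiDB templates that follow) is the same: the per-item `applications` and `application_prototypes` lists are removed, and each item instead carries a `tags` list with a single `Application` tag whose value is the former application name. A minimal sketch assembled from the hunks above; the item key and the remaining item fields are omitted here:

```yaml
items:
  -
    name: 'PD: Get instance status'
    type: HTTP_AGENT
    url: '{$PD.URL}:{$PD.PORT}/pd/api/v1/status'
    # replaces the removed block:
    #   applications:
    #     -
    #       name: 'Zabbix raw items'
    tags:
      -
        tag: Application
        value: 'Zabbix raw items'
```
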
diff --git a/templates/db/tidb_http/tidb_tidb_http/template_db_tidb_tidb_http.yaml b/templates/db/tidb_http/tidb_tidb_http/template_db_tidb_tidb_http.yaml
index 32fe1a7cc51..2d6808ce59e 100644
--- a/templates/db/tidb_http/tidb_tidb_http/template_db_tidb_tidb_http.yaml
+++ b/templates/db/tidb_http/tidb_tidb_http/template_db_tidb_tidb_http.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-04-08T09:02:36Z'
+ date: '2021-04-22T12:58:24Z'
groups:
-
name: Templates/Databases
@@ -21,11 +21,6 @@ zabbix_export:
groups:
-
name: Templates/Databases
- applications:
- -
- name: 'TiDB node'
- -
- name: 'Zabbix raw items'
items:
-
name: 'TiDB: CPU'
@@ -36,9 +31,6 @@ zabbix_export:
value_type: FLOAT
units: '%'
description: 'Total user and system CPU usage ratio.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -54,6 +46,10 @@ zabbix_export:
- '100'
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: DDL waiting jobs'
type: DEPENDENT
@@ -62,9 +58,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of TiDB operations that resolve locks per second. When TiDB''s read or write request encounters a lock, it tries to resolve the lock.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -72,6 +65,10 @@ zabbix_export:
- '$[?(@.name=="tidb_ddl_waiting_jobs")].value.sum()'
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
triggers:
-
expression: '{min(5m)}>{$TIDB.DDL.WAITING.MAX.WARN}'
@@ -85,9 +82,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The total number of failures to reload the latest schema information in TiDB per second.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -100,6 +94,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
triggers:
-
expression: '{min(5m)}>{$TIDB.SCHEMA_LOAD_ERRORS.MAX.WARN}'
@@ -113,9 +111,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The statistics of the schemas that TiDB obtains from TiKV per second.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -127,6 +122,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Failed Query, rate'
type: DEPENDENT
@@ -135,9 +134,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of error occurred when executing SQL statements per second (such as syntax errors and primary key conflicts).'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -150,6 +146,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Get instance metrics'
type: HTTP_AGENT
@@ -158,9 +158,6 @@ zabbix_export:
trends: '0'
value_type: TEXT
description: 'Get TiDB instance metrics.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: CHECK_NOT_SUPPORTED
@@ -171,6 +168,10 @@ zabbix_export:
parameters:
- ''
url: '{$TIDB.URL}:{$TIDB.PORT}/metrics'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'TiDB: Get instance status'
type: HTTP_AGENT
@@ -179,9 +180,6 @@ zabbix_export:
trends: '0'
value_type: TEXT
description: 'Get TiDB instance status info.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: CHECK_NOT_SUPPORTED
@@ -190,6 +188,10 @@ zabbix_export:
error_handler: CUSTOM_VALUE
error_handler_params: '{"status": "0"}'
url: '{$TIDB.URL}:{$TIDB.PORT}/status'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'TiDB: Goroutine count'
type: DEPENDENT
@@ -197,9 +199,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The number of Goroutines on TiDB instance.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -207,6 +206,10 @@ zabbix_export:
- '$[?(@.name=="go_goroutines")].value.first()'
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Heap memory usage'
type: DEPENDENT
@@ -216,9 +219,6 @@ zabbix_export:
value_type: FLOAT
units: B
description: 'Number of heap bytes that are in use.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -226,6 +226,10 @@ zabbix_export:
- '$[?(@.name=="go_memstats_heap_inuse_bytes")].value.first()'
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
triggers:
-
expression: '{min(5m)}>{$TIDB.HEAP.USAGE.MAX.WARN}'
@@ -240,9 +244,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'The number of times that the metrics are refreshed on TiDB instance per minute.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -255,6 +256,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
triggers:
-
expression: '{max(5m)}<{$TIDB.MONITOR_KEEP_ALIVE.MAX.WARN}'
@@ -270,9 +275,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'The number of times that the operating system rewinds every second.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -284,6 +286,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
triggers:
-
expression: '{min(5m)}>{$TIDB.TIME_JUMP_BACK.MAX.WARN}'
@@ -298,9 +304,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'The number of TSO commands that TiDB obtains from PD per second.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -312,6 +315,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: PD TSO requests, rate'
type: DEPENDENT
@@ -321,9 +328,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'The number of TSO requests that TiDB obtains from PD per second.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -335,6 +339,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Open file descriptors, max'
type: DEPENDENT
@@ -343,9 +351,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Maximum number of open file descriptors.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -353,6 +358,10 @@ zabbix_export:
- '$[?(@.name=="process_max_fds")].value.first()'
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Open file descriptors'
type: DEPENDENT
@@ -361,9 +370,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Number of open file descriptors.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -371,6 +377,10 @@ zabbix_export:
- '$[?(@.name=="process_open_fds")].value.first()'
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: RSS memory usage'
type: DEPENDENT
@@ -380,9 +390,6 @@ zabbix_export:
value_type: FLOAT
units: B
description: 'Resident memory size in bytes.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -390,6 +397,10 @@ zabbix_export:
- '$[?(@.name=="process_resident_memory_bytes")].value.first()'
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Total "error" server query, rate'
type: DEPENDENT
@@ -399,9 +410,6 @@ zabbix_export:
value_type: FLOAT
units: Qps
description: 'The number of queries on TiDB instance per second with failure of command execution results.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -413,6 +421,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Total "ok" server query, rate'
type: DEPENDENT
@@ -422,9 +434,6 @@ zabbix_export:
value_type: FLOAT
units: Qps
description: 'The number of queries on TiDB instance per second with success of command execution results.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -436,6 +445,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Total server query, rate'
type: DEPENDENT
@@ -445,9 +458,6 @@ zabbix_export:
value_type: FLOAT
units: Qps
description: 'The number of queries per second on TiDB instance.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -459,6 +469,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Schema lease "change" errors, rate'
type: DEPENDENT
@@ -469,9 +483,6 @@ zabbix_export:
description: |
The number of schema lease errors per second.
"change" means that the schema has changed
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -484,6 +495,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Schema lease "outdate" errors , rate'
type: DEPENDENT
@@ -494,9 +509,6 @@ zabbix_export:
description: |
The number of schema lease errors per second.
"outdate" errors means that the schema cannot be updated, which is a more serious error and triggers an alert.
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -509,6 +521,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
triggers:
-
expression: '{min(5m)}>{$TIDB.SCHEMA_LEASE_ERRORS.MAX.WARN}'
@@ -523,9 +539,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The total number of SQL statements executed per second.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -537,6 +550,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Status'
type: DEPENDENT
@@ -546,9 +563,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'Status of PD instance.'
- applications:
- -
- name: 'TiDB node'
valuemap:
name: 'Service state'
preprocessing:
@@ -564,6 +578,10 @@ zabbix_export:
- 1h
master_item:
key: tidb.get_status
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
triggers:
-
expression: '{last()}=0'
@@ -576,9 +594,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The connection number of current TiDB instance.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -586,6 +601,10 @@ zabbix_export:
- '$[?(@.name=="tidb_server_connections")].value.first()'
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Server critical error, rate'
type: DEPENDENT
@@ -594,9 +613,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of critical errors occurred in TiDB per second.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -608,6 +624,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Server panic, rate'
type: DEPENDENT
@@ -616,9 +636,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of panics occurred in TiDB per second.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -631,6 +648,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
triggers:
-
expression: '{last()}>0'
@@ -646,9 +667,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'The number of errors returned by TiKV.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -661,6 +679,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Lock resolves, rate'
type: DEPENDENT
@@ -670,9 +692,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'The number of DDL tasks that are waiting.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -684,6 +703,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: TiClient region errors, rate'
type: DEPENDENT
@@ -693,9 +716,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'The number of region related errors returned by TiKV per second.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -707,6 +727,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
triggers:
-
expression: '{min(5m)}>{$TIDB.REGION_ERROR.MAX.WARN}'
@@ -721,9 +745,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'The number of executed KV commands per second.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -735,6 +756,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Uptime'
type: DEPENDENT
@@ -744,9 +769,6 @@ zabbix_export:
value_type: FLOAT
units: uptime
description: 'The runtime of each TiDB instance.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -760,6 +782,10 @@ zabbix_export:
return (Math.floor(Date.now()/1000)-Number(value));
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
triggers:
-
expression: '{last()}<10m'
@@ -776,9 +802,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'Version of the TiDB instance.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -790,6 +813,10 @@ zabbix_export:
- 3h
master_item:
key: tidb.get_status
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -814,9 +841,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'The number of executed KV commands per second.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -828,6 +852,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
master_item:
key: tidb.get_metrics
preprocessing:
@@ -864,9 +892,6 @@ zabbix_export:
value_type: FLOAT
units: Qps
description: 'The number of queries on TiDB instance per second with failure of command execution results.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -878,6 +903,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
-
name: 'TiDB: Server query "OK": {#TYPE}, rate'
type: DEPENDENT
@@ -887,9 +916,6 @@ zabbix_export:
value_type: FLOAT
units: Qps
description: 'The number of queries on TiDB instance per second with success of command execution results.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -901,6 +927,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
master_item:
key: tidb.get_metrics
preprocessing:
@@ -943,9 +973,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The number of SQL statements executed per second.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -957,6 +984,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
master_item:
key: tidb.get_metrics
preprocessing:
@@ -993,9 +1024,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'The number of TiDB operations that resolve locks per second. When TiDB''s read or write request encounters a lock, it tries to resolve the lock.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -1007,6 +1035,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
master_item:
key: tidb.get_metrics
preprocessing:
@@ -1044,9 +1076,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'The number of results of GC-related operations per second.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -1058,6 +1087,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
trigger_prototypes:
-
expression: '{min(5m)}>{$TIDB.GC_ACTIONS.ERRORS.MAX.WARN}'
@@ -1118,9 +1151,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'The number of TiDB operations that resolve locks per second. When TiDB''s read or write request encounters a lock, it tries to resolve the lock.'
- applications:
- -
- name: 'TiDB node'
preprocessing:
-
type: JSONPATH
@@ -1132,6 +1162,10 @@ zabbix_export:
- ''
master_item:
key: tidb.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiDB node'
master_item:
key: tidb.get_metrics
preprocessing:
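
Most metrics in these TiDB templates are wired the same way: a single HTTP agent item ('TiDB: Get instance metrics') fetches the instance metrics, and each dependent item extracts one value from it with a JSONPATH preprocessing step. A minimal sketch of that wiring, reusing the 'TiDB: Goroutine count' fields visible in the hunks above; the item key is an illustrative placeholder (it is not shown in the diff), and the master item's own preprocessing is likewise not reproduced here:

```yaml
items:
  -
    name: 'TiDB: Goroutine count'
    type: DEPENDENT
    key: tidb.goroutines        # placeholder key for this sketch; the real key is not shown in the hunk
    delay: '0'                  # dependent items are evaluated when the master item receives a value
    history: 7d
    description: 'The number of Goroutines on TiDB instance.'
    master_item:
      key: tidb.get_metrics     # assumed to be the key of the 'TiDB: Get instance metrics' HTTP agent item
    preprocessing:
      -
        type: JSONPATH
        parameters:
          - '$[?(@.name=="go_goroutines")].value.first()'
    tags:
      -
        tag: Application
        value: 'TiDB node'
```
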
diff --git a/templates/db/tidb_http/tidb_tikv_http/template_db_tidb_tikv_http.yaml b/templates/db/tidb_http/tidb_tikv_http/template_db_tidb_tikv_http.yaml
index 74ae7c37684..c724cc76d64 100644
--- a/templates/db/tidb_http/tidb_tikv_http/template_db_tidb_tikv_http.yaml
+++ b/templates/db/tidb_http/tidb_tikv_http/template_db_tidb_tikv_http.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-04-08T09:02:42Z'
+ date: '2021-04-22T12:58:29Z'
groups:
-
name: Templates/Databases
@@ -21,11 +21,6 @@ zabbix_export:
groups:
-
name: Templates/Databases
- applications:
- -
- name: 'TiKV node'
- -
- name: 'Zabbix raw items'
items:
-
name: 'TiKV: Scheduler: High priority commands total, rate'
@@ -35,9 +30,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Total count of high priority commands per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -49,6 +41,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Scheduler: Low priority commands total, rate'
type: DEPENDENT
@@ -57,9 +53,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Total count of low priority commands per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -71,6 +64,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Scheduler: Normal priority commands total, rate'
type: DEPENDENT
@@ -79,9 +76,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Total count of normal priority commands per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -93,6 +87,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Coprocessor: Requests, rate'
type: DEPENDENT
@@ -102,9 +100,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'Total number of coprocessor requests per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -116,6 +111,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Coprocessor: Errors, rate'
type: DEPENDENT
@@ -125,9 +124,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'Total number of push down request error per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -140,6 +136,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
triggers:
-
expression: '{min(5m)}>{$TIKV.COPOCESSOR.ERRORS.MAX.WARN}'
@@ -154,9 +154,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'Total number of RocksDB internal operations from PerfContext per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -168,6 +165,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Coprocessor: Response size, rate'
type: DEPENDENT
@@ -177,9 +178,6 @@ zabbix_export:
value_type: FLOAT
units: Bps
description: 'The total size of coprocessor response per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -191,6 +189,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: CPU util'
type: DEPENDENT
@@ -200,9 +202,6 @@ zabbix_export:
value_type: FLOAT
units: '%'
description: 'The CPU usage ratio on TiKV instance.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -218,6 +217,10 @@ zabbix_export:
- '100'
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Bytes read'
type: DEPENDENT
@@ -227,9 +230,6 @@ zabbix_export:
value_type: FLOAT
units: Bps
description: 'The total bytes of read in TiKV instance.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -237,6 +237,10 @@ zabbix_export:
- '$[?(@.name == "tikv_engine_flow_bytes" && @.labels.db == "kv" && @.labels.type =~ "bytes_read|iter_bytes_read")].value.sum()'
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Bytes write'
type: DEPENDENT
@@ -246,9 +250,6 @@ zabbix_export:
value_type: FLOAT
units: Bps
description: 'The total bytes of write in TiKV instance.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -256,6 +257,10 @@ zabbix_export:
- '$[?(@.name == "tikv_engine_flow_bytes" && @.labels.db == "kv" && @.labels.type == "wal_file_bytes")].value.first()'
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Store size'
type: DEPENDENT
@@ -265,9 +270,6 @@ zabbix_export:
value_type: FLOAT
units: B
description: 'The storage size of TiKV instance.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -275,6 +277,10 @@ zabbix_export:
- '$[?(@.name == "tikv_engine_size_bytes")].value.sum()'
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Get instance metrics'
type: HTTP_AGENT
@@ -283,9 +289,6 @@ zabbix_export:
trends: '0'
value_type: TEXT
description: 'Get TiKV instance metrics.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: CHECK_NOT_SUPPORTED
@@ -296,6 +299,10 @@ zabbix_export:
parameters:
- ''
url: '{$TIKV.URL}:{$TIKV.PORT}/metrics'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'TiKV: Total query, rate'
type: DEPENDENT
@@ -305,9 +312,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'The total QPS in TiKV instance.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -319,6 +323,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Total query errors, rate'
type: DEPENDENT
@@ -328,9 +336,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'The total number of gRPC message handling failure per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -343,6 +348,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Server: failure messages total, rate'
type: DEPENDENT
@@ -351,9 +360,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Total number of reporting failure messages per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -366,6 +372,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Regions, count'
type: DEPENDENT
@@ -373,9 +383,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The number of regions collected in TiKV instance.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -383,6 +390,10 @@ zabbix_export:
- '$[?(@.name == "tikv_raftstore_region_count" && @.labels.type == "region" )].value.first()'
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Regions, leader'
type: DEPENDENT
@@ -390,9 +401,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The number of leaders in TiKV instance.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -400,6 +408,10 @@ zabbix_export:
- '$[?(@.name == "tikv_raftstore_region_count" && @.labels.type == "leader" )].value.first()'
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: RSS memory usage'
type: DEPENDENT
@@ -409,9 +421,6 @@ zabbix_export:
value_type: FLOAT
units: B
description: 'Resident memory size in bytes.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -419,6 +428,10 @@ zabbix_export:
- '$[?(@.name == "process_resident_memory_bytes")].value.first()'
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Scheduler: Commands total, rate'
type: DEPENDENT
@@ -427,9 +440,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Total number of commands per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -443,6 +453,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Snapshot: Pending tasks'
type: DEPENDENT
@@ -450,9 +464,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The number of tasks currently running by the worker or pending.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -460,6 +471,10 @@ zabbix_export:
- '$[?(@.name == "tikv_worker_pending_task_total")].value.first()'
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
triggers:
-
expression: '{min(5m)}>{$TIKV.PENDING_COMMANDS.MAX.WARN}'
@@ -477,9 +492,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'The total count of too busy schedulers per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -492,6 +504,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Snapshot: Applying'
type: DEPENDENT
@@ -499,9 +515,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The total amount of raftstore snapshot traffic.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -509,6 +522,10 @@ zabbix_export:
- '$[?(@.name == "tikv_raftstore_snapshot_traffic_total" && @.labels.type == "applying")].value.first()'
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Snapshot: Receiving'
type: DEPENDENT
@@ -516,9 +533,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The total amount of raftstore snapshot traffic.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -526,6 +540,10 @@ zabbix_export:
- '$[?(@.name == "tikv_raftstore_snapshot_traffic_total" && @.labels.type == "receiving")].value.first()'
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Snapshot: Sending'
type: DEPENDENT
@@ -533,9 +551,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'The total amount of raftstore snapshot traffic.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -543,6 +558,10 @@ zabbix_export:
- '$[?(@.name == "tikv_raftstore_snapshot_traffic_total" && @.labels.type == "sending")].value.first()'
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Storage: commands total, rate'
type: DEPENDENT
@@ -551,9 +570,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Total number of commands received per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -565,6 +581,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Available size'
type: DEPENDENT
@@ -574,9 +594,6 @@ zabbix_export:
value_type: FLOAT
units: B
description: 'The available capacity of TiKV instance.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -584,6 +601,10 @@ zabbix_export:
- '$[?(@.name == "tikv_store_size_bytes" && @.labels.type == "available")].value.first()'
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Capacity size'
type: DEPENDENT
@@ -593,9 +614,6 @@ zabbix_export:
value_type: FLOAT
units: B
description: 'The capacity size of TiKV instance.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -603,6 +621,10 @@ zabbix_export:
- '$[?(@.name == "tikv_store_size_bytes" && @.labels.type == "capacity")].value.first()'
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Uptime'
type: DEPENDENT
@@ -612,9 +634,6 @@ zabbix_export:
value_type: FLOAT
units: uptime
description: 'The runtime of each TiKV instance.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -628,6 +647,10 @@ zabbix_export:
return (Math.floor(Date.now()/1000)-Number(value));
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
triggers:
-
expression: '{last()}<10m'
@@ -652,9 +675,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'Total number of coprocessor requests per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -666,6 +686,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Coprocessor: {#REQ_TYPE} errors, rate'
type: DEPENDENT
@@ -675,9 +699,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'Total number of push down request error per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -690,6 +711,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Coprocessor: {#REQ_TYPE} RocksDB ops, rate'
type: DEPENDENT
@@ -699,9 +724,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'Total number of RocksDB internal operations from PerfContext per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -713,6 +735,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
-
name: 'TiKV: Coprocessor: {#REQ_TYPE} scan keys, rate'
type: DEPENDENT
@@ -722,9 +748,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'Total number of scan keys observed per request per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -736,6 +759,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
master_item:
key: tikv.get_metrics
preprocessing:
@@ -772,9 +799,6 @@ zabbix_export:
value_type: FLOAT
units: Ops
description: 'The QPS per command in TiKV instance.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -783,6 +807,10 @@ zabbix_export:
error_handler: CUSTOM_VALUE
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
master_item:
key: tikv.get_metrics
preprocessing:
@@ -818,9 +846,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Total number of commands on each stage per second.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -834,6 +859,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
master_item:
key: tikv.get_metrics
preprocessing:
@@ -876,9 +905,6 @@ zabbix_export:
history: 7d
value_type: FLOAT
description: 'Total number of reporting failure messages. The metric has two labels: type and store_id. type represents the failure type, and store_id represents the destination peer store id.'
- applications:
- -
- name: 'TiKV node'
preprocessing:
-
type: JSONPATH
@@ -890,6 +916,10 @@ zabbix_export:
- ''
master_item:
key: tikv.get_metrics
+ tags:
+ -
+ tag: Application
+ value: 'TiKV node'
trigger_prototypes:
-
expression: '{min(5m)}>{$TIKV.STORE.ERRORS.MAX.WARN}'
diff --git a/templates/module/00icmp_ping/README.md b/templates/module/00icmp_ping/README.md
index 90965691c6e..4dcb47bca95 100644
--- a/templates/module/00icmp_ping/README.md
+++ b/templates/module/00icmp_ping/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,10 +15,10 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$ICMP_LOSS_WARN} |<p>-</p> |`20` |
-|{$ICMP_RESPONSE_TIME_WARN} |<p>-</p> |`0.15` |
+| Name | Description | Default |
+|----------------------------|-------------|---------|
+| {$ICMP_LOSS_WARN} | <p>-</p> | `20` |
+| {$ICMP_RESPONSE_TIME_WARN} | <p>-</p> | `0.15` |
## Template links
@@ -29,19 +29,19 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Status |ICMP ping |<p>-</p> |SIMPLE |icmpping |
-|Status |ICMP loss |<p>-</p> |SIMPLE |icmppingloss |
-|Status |ICMP response time |<p>-</p> |SIMPLE |icmppingsec |
+| Group | Name | Description | Type | Key and additional info |
+|--------|--------------------|-------------|--------|-------------------------|
+| Status | ICMP ping | <p>-</p> | SIMPLE | icmpping |
+| Status | ICMP loss | <p>-</p> | SIMPLE | icmppingloss |
+| Status | ICMP response time | <p>-</p> | SIMPLE | icmppingsec |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Unavailable by ICMP ping |<p>Last three attempts returned timeout. Please check device connectivity.</p> |`{TEMPLATE_NAME:icmpping.max(#3)}=0` |HIGH | |
-|High ICMP ping loss |<p>-</p> |`{TEMPLATE_NAME:icmppingloss.min(5m)}>{$ICMP_LOSS_WARN} and {TEMPLATE_NAME:icmppingloss.min(5m)}<100` |WARNING |<p>**Depends on**:</p><p>- Unavailable by ICMP ping</p> |
-|High ICMP ping response time |<p>-</p> |`{TEMPLATE_NAME:icmppingsec.avg(5m)}>{$ICMP_RESPONSE_TIME_WARN}` |WARNING |<p>**Depends on**:</p><p>- High ICMP ping loss</p><p>- Unavailable by ICMP ping</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|------------------------------|---------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------|
+| Unavailable by ICMP ping | <p>Last three attempts returned timeout. Please check device connectivity.</p> | `{TEMPLATE_NAME:icmpping.max(#3)}=0` | HIGH | |
+| High ICMP ping loss | <p>-</p> | `{TEMPLATE_NAME:icmppingloss.min(5m)}>{$ICMP_LOSS_WARN} and {TEMPLATE_NAME:icmppingloss.min(5m)}<100` | WARNING | <p>**Depends on**:</p><p>- Unavailable by ICMP ping</p> |
+| High ICMP ping response time | <p>-</p> | `{TEMPLATE_NAME:icmppingsec.avg(5m)}>{$ICMP_RESPONSE_TIME_WARN}` | WARNING | <p>**Depends on**:</p><p>- High ICMP ping loss</p><p>- Unavailable by ICMP ping</p> |
## Feedback
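
Both thresholds in the macro table above are plain user macros, so they are normally tuned per host rather than edited in the template. A minimal sketch of a host-level override in the same YAML export format, assuming the macro names documented above; the values are examples only:

```yaml
# Hypothetical host-level overrides for the ICMP thresholds documented
# in this README; values are illustrative, not recommendations.
macros:
  -
    macro: '{$ICMP_LOSS_WARN}'
    value: '10'
  -
    macro: '{$ICMP_RESPONSE_TIME_WARN}'
    value: '0.25'
```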
diff --git a/templates/module/ether_like_snmp/README.md b/templates/module/ether_like_snmp/README.md
index efa8d057e40..007415ef19e 100644
--- a/templates/module/ether_like_snmp/README.md
+++ b/templates/module/ether_like_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -20,21 +20,21 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|EtherLike-MIB Discovery |<p>Discovering interfaces from IF-MIB and EtherLike-MIB. Interfaces with up(1) Operational Status are discovered.</p> |SNMP |net.if.duplex.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>**Filter**:</p>AND <p>- A: {#IFOPERSTATUS} MATCHES_REGEX `1`</p><p>- B: {#SNMPVALUE} MATCHES_REGEX `(2|3)`</p> |
+| Name | Description | Type | Key and additional info |
+|-------------------------|-----------------------------------------------------------------------------------------------------------------------|------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| EtherLike-MIB Discovery | <p>Discovering interfaces from IF-MIB and EtherLike-MIB. Interfaces with up(1) Operational Status are discovered.</p> | SNMP | net.if.duplex.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>**Filter**:</p>AND <p>- A: {#IFOPERSTATUS} MATCHES_REGEX `1`</p><p>- B: {#SNMPVALUE} MATCHES_REGEX `(2|3)`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Duplex status |<p>MIB: EtherLike-MIB</p><p>The current mode of operation of the MAC</p><p>entity. 'unknown' indicates that the current</p><p>duplex mode could not be determined.</p><p>Management control of the duplex mode is</p><p>accomplished through the MAU MIB. When</p><p>an interface does not support autonegotiation,</p><p>or when autonegotiation is not enabled, the</p><p>duplex mode is controlled using</p><p>ifMauDefaultType. When autonegotiation is</p><p>supported and enabled, duplex mode is controlled</p><p>using ifMauAutoNegAdvertisedBits. In either</p><p>case, the currently operating duplex mode is</p><p>reflected both in this object and in ifMauType.</p><p>Note that this object provides redundant</p><p>information with ifMauType. Normally, redundant</p><p>objects are discouraged. However, in this</p><p>instance, it allows a management application to</p><p>determine the duplex status of an interface</p><p>without having to know every possible value of</p><p>ifMauType. This was felt to be sufficiently</p><p>valuable to justify the redundancy.</p><p>Reference: [IEEE 802.3 Std.], 30.3.1.1.32,aDuplexStatus.</p> |SNMP |net.if.duplex[dot3StatsDuplexStatus.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|--------------------|------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|---------------------------------------------------|
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Duplex status | <p>MIB: EtherLike-MIB</p><p>The current mode of operation of the MAC</p><p>entity. 'unknown' indicates that the current</p><p>duplex mode could not be determined.</p><p>Management control of the duplex mode is</p><p>accomplished through the MAU MIB. When</p><p>an interface does not support autonegotiation,</p><p>or when autonegotiation is not enabled, the</p><p>duplex mode is controlled using</p><p>ifMauDefaultType. When autonegotiation is</p><p>supported and enabled, duplex mode is controlled</p><p>using ifMauAutoNegAdvertisedBits. In either</p><p>case, the currently operating duplex mode is</p><p>reflected both in this object and in ifMauType.</p><p>Note that this object provides redundant</p><p>information with ifMauType. Normally, redundant</p><p>objects are discouraged. However, in this</p><p>instance, it allows a management application to</p><p>determine the duplex status of an interface</p><p>without having to know every possible value of</p><p>ifMauType. This was felt to be sufficiently</p><p>valuable to justify the redundancy.</p><p>Reference: [IEEE 802.3 Std.], 30.3.1.1.32,aDuplexStatus.</p> | SNMP | net.if.duplex[dot3StatsDuplexStatus.{#SNMPINDEX}] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Interface {#IFNAME}({#IFALIAS}): In half-duplex mode |<p>Please check autonegotiation settings and cabling</p> |`{TEMPLATE_NAME:net.if.duplex[dot3StatsDuplexStatus.{#SNMPINDEX}].last()}=2` |WARNING |<p>Manual close: YES</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|------------------------------------------------------|----------------------------------------------------------|------------------------------------------------------------------------------|----------|----------------------------------|
+| Interface {#IFNAME}({#IFALIAS}): In half-duplex mode | <p>Please check autonegotiation settings and cabling</p> | `{TEMPLATE_NAME:net.if.duplex[dot3StatsDuplexStatus.{#SNMPINDEX}].last()}=2` | WARNING | <p>Manual close: YES</p> |
## Feedback
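
The discovery rule above keeps an interface only when both regex conditions match. Sketched as it might look in the YAML export (only the filter is shown, the surrounding discovery-rule keys are omitted, and MATCHES_REGEX is the default operator, so it is not spelled out):

```yaml
# Filter sketch for net.if.duplex.discovery: keep interfaces whose
# operational status is up(1) and whose duplex value is half(2) or full(3).
filter:
  evaltype: AND
  conditions:
    -
      macro: '{#IFOPERSTATUS}'
      value: '1'
      formulaid: A
    -
      macro: '{#SNMPVALUE}'
      value: '(2|3)'
      formulaid: B
```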
diff --git a/templates/module/generic_snmp_snmp/README.md b/templates/module/generic_snmp_snmp/README.md
index 6127166e00f..e59660d5205 100644
--- a/templates/module/generic_snmp_snmp/README.md
+++ b/templates/module/generic_snmp_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,24 +15,24 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$SNMP.TIMEOUT} |<p>-</p> |`5m` |
+| Name | Description | Default |
+|-----------------|-------------|---------|
+| {$SNMP.TIMEOUT} | <p>-</p> | `5m` |
## Template links
-|Name|
-|----|
-|ICMP Ping |
+| Name |
+|-----------|
+| ICMP Ping |
## Discovery rules
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|General |SNMP traps (fallback) |<p>Item is used to collect all SNMP traps unmatched by other snmptrap items</p> |SNMP_TRAP |snmptrap.fallback |
+| Group | Name | Description | Type | Key and additional info |
+|---------|-----------------------|---------------------------------------------------------------------------------|-----------|-------------------------|
+| General | SNMP traps (fallback) | <p>Item is used to collect all SNMP traps unmatched by other snmptrap items</p> | SNMP_TRAP | snmptrap.fallback |
|General |System location |<p>MIB: SNMPv2-MIB</p><p>The physical location of this node (e.g., `telephone closet, 3rd floor'). If the location is unknown, the value is the zero-length string.</p> |SNMP |system.location[sysLocation.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
|General |System contact details |<p>MIB: SNMPv2-MIB</p><p>The textual identification of the contact person for this managed node, together with information on how to contact this person. If no contact information is known, the value is the zero-length string.</p> |SNMP |system.contact[sysContact.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|General |System object ID |<p>MIB: SNMPv2-MIB</p><p>The vendor's authoritative identification of the network management subsystem contained in the entity. This value is allocated within the SMI enterprises subtree (1.3.6.1.4.1) and provides an easy and unambiguous means for determining`what kind of box' is being managed. For example, if vendor`Flintstones, Inc.' was assigned the subtree1.3.6.1.4.1.4242, it could assign the identifier 1.3.6.1.4.1.4242.1.1 to its `Fred Router'.</p> |SNMP |system.objectid[sysObjectID.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
@@ -43,11 +43,11 @@ No specific Zabbix configuration is required.
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|System name has changed (new name: {ITEM.VALUE}) |<p>System name has changed. Ack to close.</p> |`{TEMPLATE_NAME:system.name.diff()}=1 and {TEMPLATE_NAME:system.name.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{HOST.NAME} has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:system.uptime[sysUpTime.0].last()}<10m` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- No SNMP data collection</p> |
-|No SNMP data collection |<p>SNMP is not available for polling. Please check device connectivity and SNMP settings.</p> |`{TEMPLATE_NAME:zabbix[host,snmp,available].max({$SNMP.TIMEOUT})}=0` |WARNING |<p>**Depends on**:</p><p>- Unavailable by ICMP ping</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|--------------------------------------------------|-----------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------|
+| System name has changed (new name: {ITEM.VALUE}) | <p>System name has changed. Ack to close.</p> | `{TEMPLATE_NAME:system.name.diff()}=1 and {TEMPLATE_NAME:system.name.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| {HOST.NAME} has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:system.uptime[sysUpTime.0].last()}<10m` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- No SNMP data collection</p> |
+| No SNMP data collection | <p>SNMP is not available for polling. Please check device connectivity and SNMP settings.</p> | `{TEMPLATE_NAME:zabbix[host,snmp,available].max({$SNMP.TIMEOUT})}=0` | WARNING | <p>**Depends on**:</p><p>- Unavailable by ICMP ping</p> |
## Feedback
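
The "No SNMP data collection" trigger in the table is built on the internal item `zabbix[host,snmp,available]` and the `{$SNMP.TIMEOUT}` macro. A minimal sketch in the pre-5.4 expression syntax used throughout these READMEs; "Generic SNMP" stands in for the real template name, which this hunk does not show:

```yaml
# Sketch only: availability trigger as it could appear in the export.
triggers:
  -
    expression: '{Generic SNMP:zabbix[host,snmp,available].max({$SNMP.TIMEOUT})}=0'
    name: 'No SNMP data collection'
    priority: WARNING
    description: 'SNMP is not available for polling. Please check device connectivity and SNMP settings.'
```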
diff --git a/templates/module/host_resources_snmp/README.md b/templates/module/host_resources_snmp/README.md
index b569f3235a8..706005f73f5 100644
--- a/templates/module/host_resources_snmp/README.md
+++ b/templates/module/host_resources_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,14 +15,14 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$VFS.FS.FSNAME.MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level.</p> |`.+` |
-|{$VFS.FS.FSNAME.NOT_MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level.</p> |`^(/dev|/sys|/run|/proc|.+/shm$)` |
-|{$VFS.FS.FSTYPE.MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level.</p> |`.*(\.4|\.9|hrStorageFixedDisk|hrStorageFlashMemory)$` |
-|{$VFS.FS.FSTYPE.NOT_MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level.</p> |`CHANGE_IF_NEEDED` |
-|{$VFS.FS.PUSED.MAX.CRIT} |<p>-</p> |`90` |
-|{$VFS.FS.PUSED.MAX.WARN} |<p>-</p> |`80` |
+| Name | Description | Default |
+|------------------------------|-------------------------------------------------------------------------------------------------------------|--------------------------------------------------------|
+| {$VFS.FS.FSNAME.MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level.</p> | `.+` |
+| {$VFS.FS.FSNAME.NOT_MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level.</p> | `^(/dev|/sys|/run|/proc|.+/shm$)` |
+| {$VFS.FS.FSTYPE.MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level.</p> | `.*(\.4|\.9|hrStorageFixedDisk|hrStorageFlashMemory)$` |
+| {$VFS.FS.FSTYPE.NOT_MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level.</p> | `CHANGE_IF_NEEDED` |
+| {$VFS.FS.PUSED.MAX.CRIT} | <p>-</p> | `90` |
+| {$VFS.FS.PUSED.MAX.WARN} | <p>-</p> | `80` |
## Template links
@@ -30,24 +30,24 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Storage discovery |<p>HOST-RESOURCES-MIB::hrStorage discovery with storage filter.</p> |SNMP |vfs.fs.discovery[snmp]<p>**Filter**:</p>AND <p>- A: {#FSTYPE} MATCHES_REGEX `{$VFS.FS.FSTYPE.MATCHES}`</p><p>- B: {#FSTYPE} NOT_MATCHES_REGEX `{$VFS.FS.FSTYPE.NOT_MATCHES}`</p><p>- C: {#FSNAME} MATCHES_REGEX `{$VFS.FS.FSNAME.MATCHES}`</p><p>- D: {#FSNAME} NOT_MATCHES_REGEX `{$VFS.FS.FSNAME.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|-------------------|---------------------------------------------------------------------|------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Storage discovery | <p>HOST-RESOURCES-MIB::hrStorage discovery with storage filter.</p> | SNMP | vfs.fs.discovery[snmp]<p>**Filter**:</p>AND <p>- A: {#FSTYPE} MATCHES_REGEX `{$VFS.FS.FSTYPE.MATCHES}`</p><p>- B: {#FSTYPE} NOT_MATCHES_REGEX `{$VFS.FS.FSTYPE.NOT_MATCHES}`</p><p>- C: {#FSNAME} MATCHES_REGEX `{$VFS.FS.FSNAME.MATCHES}`</p><p>- D: {#FSNAME} NOT_MATCHES_REGEX `{$VFS.FS.FSNAME.NOT_MATCHES}`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Storage |{#FSNAME}: Used space |<p>MIB: HOST-RESOURCES-MIB</p><p>The amount of the storage represented by this entry that is allocated, in units of hrStorageAllocationUnits.</p> |SNMP |vfs.fs.used[hrStorageUsed.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `{#ALLOC_UNITS}`</p> |
-|Storage |{#FSNAME}: Total space |<p>MIB: HOST-RESOURCES-MIB</p><p>The size of the storage represented by this entry, in units of hrStorageAllocationUnits.</p><p>This object is writable to allow remote configuration of the size of the storage area in those cases where such an operation makes sense and is possible on the underlying system.</p><p>For example, the amount of main storage allocated to a buffer pool might be modified or the amount of disk space allocated to virtual storage might be modified.</p> |SNMP |vfs.fs.total[hrStorageSize.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `{#ALLOC_UNITS}`</p> |
-|Storage |{#FSNAME}: Space utilization |<p>Space utilization in % for {#FSNAME}</p> |CALCULATED |vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}]<p>**Expression**:</p>`(last("vfs.fs.used[hrStorageUsed.{#SNMPINDEX}]")/last("vfs.fs.total[hrStorageSize.{#SNMPINDEX}]"))*100` |
+| Group | Name | Description | Type | Key and additional info |
+|---------|------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Storage | {#FSNAME}: Used space | <p>MIB: HOST-RESOURCES-MIB</p><p>The amount of the storage represented by this entry that is allocated, in units of hrStorageAllocationUnits.</p> | SNMP | vfs.fs.used[hrStorageUsed.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `{#ALLOC_UNITS}`</p> |
+| Storage | {#FSNAME}: Total space | <p>MIB: HOST-RESOURCES-MIB</p><p>The size of the storage represented by this entry, in units of hrStorageAllocationUnits.</p><p>This object is writable to allow remote configuration of the size of the storage area in those cases where such an operation makes sense and is possible on the underlying system.</p><p>For example, the amount of main storage allocated to a buffer pool might be modified or the amount of disk space allocated to virtual storage might be modified.</p> | SNMP | vfs.fs.total[hrStorageSize.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `{#ALLOC_UNITS}`</p> |
+| Storage | {#FSNAME}: Space utilization | <p>Space utilization in % for {#FSNAME}</p> | CALCULATED | vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}]<p>**Expression**:</p>`(last("vfs.fs.used[hrStorageUsed.{#SNMPINDEX}]")/last("vfs.fs.total[hrStorageSize.{#SNMPINDEX}]"))*100` |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%) |<p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 5G.</p><p> - The disk will be full in less than 24 hours.</p> |`{TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].last()}>{$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"} and (({HOST-RESOURCES-MIB storage SNMP:vfs.fs.total[hrStorageSize.{#SNMPINDEX}].last()}-{HOST-RESOURCES-MIB storage SNMP:vfs.fs.used[hrStorageUsed.{#SNMPINDEX}].last()})<5G or {TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].timeleft(1h,,100)}<1d)` |AVERAGE |<p>Manual close: YES</p> |
-|{#FSNAME}: Disk space is low (used > {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}%) |<p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 10G.</p><p> - The disk will be full in less than 24 hours.</p> |`{TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].last()}>{$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"} and (({HOST-RESOURCES-MIB storage SNMP:vfs.fs.total[hrStorageSize.{#SNMPINDEX}].last()}-{HOST-RESOURCES-MIB storage SNMP:vfs.fs.used[hrStorageUsed.{#SNMPINDEX}].last()})<10G or {TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].timeleft(1h,,100)}<1d)` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- {#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%)</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|----------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------------------------------------------------------------------------------------------------|
+| {#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%) | <p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 5G.</p><p> - The disk will be full in less than 24 hours.</p> | `{TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].last()}>{$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"} and (({HOST-RESOURCES-MIB storage SNMP:vfs.fs.total[hrStorageSize.{#SNMPINDEX}].last()}-{HOST-RESOURCES-MIB storage SNMP:vfs.fs.used[hrStorageUsed.{#SNMPINDEX}].last()})<5G or {TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].timeleft(1h,,100)}<1d)` | AVERAGE | <p>Manual close: YES</p> |
+| {#FSNAME}: Disk space is low (used > {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}%) | <p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 10G.</p><p> - The disk will be full in less than 24 hours.</p> | `{TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].last()}>{$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"} and (({HOST-RESOURCES-MIB storage SNMP:vfs.fs.total[hrStorageSize.{#SNMPINDEX}].last()}-{HOST-RESOURCES-MIB storage SNMP:vfs.fs.used[hrStorageUsed.{#SNMPINDEX}].last()})<10G or {TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].timeleft(1h,,100)}<1d)` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- {#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%)</p> |
## Feedback
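
The "Space utilization" row above is a CALCULATED item derived from the two SNMP items in the same table. A minimal sketch of such an item prototype in export YAML, assuming the key and expression from the table; the units and value type below are illustrative additions:

```yaml
# Calculated percentage of used storage for each discovered {#FSNAME};
# the formula of a calculated item goes into 'params'.
item_prototypes:
  -
    name: '{#FSNAME}: Space utilization'
    type: CALCULATED
    key: 'vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}]'
    value_type: FLOAT
    units: '%'
    params: '(last("vfs.fs.used[hrStorageUsed.{#SNMPINDEX}]")/last("vfs.fs.total[hrStorageSize.{#SNMPINDEX}]"))*100'
    description: 'Space utilization in % for {#FSNAME}'
```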
@@ -57,7 +57,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -69,13 +69,13 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$MEMORY.NAME.MATCHES} |<p>This macro is used in memory discovery. Can be overridden on the host or linked template level.</p> |`.*` |
-|{$MEMORY.NAME.NOT_MATCHES} |<p>This macro is used in memory discovery. Can be overridden on the host or linked template level if you need to filter out results.</p> |`CHANGE_IF_NEEDED` |
-|{$MEMORY.TYPE.MATCHES} |<p>This macro is used in memory discovery. Can be overridden on the host or linked template level.</p> |`.*(\.2|hrStorageRam)$` |
-|{$MEMORY.TYPE.NOT_MATCHES} |<p>This macro is used in memory discovery. Can be overridden on the host or linked template level if you need to filter out results.</p> |`CHANGE_IF_NEEDED` |
-|{$MEMORY.UTIL.MAX} |<p>The warning threshold of the "Physical memory: Memory utilization" item.</p> |`90` |
+| Name | Description | Default |
+|----------------------------|------------------------------------------------------------------------------------------------------------------------------------------|-------------------------|
+| {$MEMORY.NAME.MATCHES} | <p>This macro is used in memory discovery. Can be overridden on the host or linked template level.</p> | `.*` |
+| {$MEMORY.NAME.NOT_MATCHES} | <p>This macro is used in memory discovery. Can be overridden on the host or linked template level if you need to filter out results.</p> | `CHANGE_IF_NEEDED` |
+| {$MEMORY.TYPE.MATCHES} | <p>This macro is used in memory discovery. Can be overridden on the host or linked template level.</p> | `.*(\.2|hrStorageRam)$` |
+| {$MEMORY.TYPE.NOT_MATCHES} | <p>This macro is used in memory discovery. Can be overridden on the host or linked template level if you need to filter out results.</p> | `CHANGE_IF_NEEDED` |
+| {$MEMORY.UTIL.MAX} | <p>The warning threshold of the "Physical memory: Memory utilization" item.</p> | `90` |
## Template links
@@ -83,23 +83,23 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Memory discovery |<p>HOST-RESOURCES-MIB::hrStorage discovery with memory filter</p> |SNMP |vm.memory.discovery<p>**Filter**:</p>AND <p>- A: {#MEMTYPE} MATCHES_REGEX `{$MEMORY.TYPE.MATCHES}`</p><p>- B: {#MEMTYPE} NOT_MATCHES_REGEX `{$MEMORY.TYPE.NOT_MATCHES}`</p><p>- C: {#MEMNAME} MATCHES_REGEX `{$MEMORY.NAME.MATCHES}`</p><p>- D: {#MEMNAME} NOT_MATCHES_REGEX `{$MEMORY.NAME.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|------------------|-------------------------------------------------------------------|------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Memory discovery | <p>HOST-RESOURCES-MIB::hrStorage discovery with memory filter</p> | SNMP | vm.memory.discovery<p>**Filter**:</p>AND <p>- A: {#MEMTYPE} MATCHES_REGEX `{$MEMORY.TYPE.MATCHES}`</p><p>- B: {#MEMTYPE} NOT_MATCHES_REGEX `{$MEMORY.TYPE.NOT_MATCHES}`</p><p>- C: {#MEMNAME} MATCHES_REGEX `{$MEMORY.NAME.MATCHES}`</p><p>- D: {#MEMNAME} NOT_MATCHES_REGEX `{$MEMORY.NAME.NOT_MATCHES}`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Memory |{#MEMNAME}: Used memory |<p>MIB: HOST-RESOURCES-MIB</p><p>The amount of the storage represented by this entry that is allocated, in units of hrStorageAllocationUnits.</p> |SNMP |vm.memory.used[hrStorageUsed.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `{#ALLOC_UNITS}`</p> |
-|Memory |{#MEMNAME}: Total memory |<p>MIB: HOST-RESOURCES-MIB</p><p>The size of the storage represented by this entry, in units of hrStorageAllocationUnits.</p><p>This object is writable to allow remote configuration of the size of the storage area in those cases where such an operation makes sense and is possible on the underlying system.</p><p>For example, the amount of main memory allocated to a buffer pool might be modified or the amount of disk space allocated to virtual memory might be modified.</p> |SNMP |vm.memory.total[hrStorageSize.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `{#ALLOC_UNITS}`</p> |
-|Memory |{#MEMNAME}: Memory utilization |<p>Memory utilization in %</p> |CALCULATED |vm.memory.util[memoryUsedPercentage.{#SNMPINDEX}]<p>**Expression**:</p>`last("vm.memory.used[hrStorageUsed.{#SNMPINDEX}]")/last("vm.memory.total[hrStorageSize.{#SNMPINDEX}]")*100` |
+| Group | Name | Description | Type | Key and additional info |
+|--------|--------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Memory | {#MEMNAME}: Used memory | <p>MIB: HOST-RESOURCES-MIB</p><p>The amount of the storage represented by this entry that is allocated, in units of hrStorageAllocationUnits.</p> | SNMP | vm.memory.used[hrStorageUsed.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `{#ALLOC_UNITS}`</p> |
+| Memory | {#MEMNAME}: Total memory | <p>MIB: HOST-RESOURCES-MIB</p><p>The size of the storage represented by this entry, in units of hrStorageAllocationUnits.</p><p>This object is writable to allow remote configuration of the size of the storage area in those cases where such an operation makes sense and is possible on the underlying system.</p><p>For example, the amount of main memory allocated to a buffer pool might be modified or the amount of disk space allocated to virtual memory might be modified.</p> | SNMP | vm.memory.total[hrStorageSize.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `{#ALLOC_UNITS}`</p> |
+| Memory | {#MEMNAME}: Memory utilization | <p>Memory utilization in %</p> | CALCULATED | vm.memory.util[memoryUsedPercentage.{#SNMPINDEX}]<p>**Expression**:</p>`last("vm.memory.used[hrStorageUsed.{#SNMPINDEX}]")/last("vm.memory.total[hrStorageSize.{#SNMPINDEX}]")*100` |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#MEMNAME}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[memoryUsedPercentage.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------|--------------------------------------------------|------------------------------------------------------------------------------------------------|----------|----------------------------------|
+| {#MEMNAME}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[memoryUsedPercentage.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
## Feedback
@@ -109,7 +109,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -121,9 +121,9 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
+| Name | Description | Default |
+|------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
## Template links
@@ -134,15 +134,15 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |CPU utilization |<p>MIB: HOST-RESOURCES-MIB</p><p>The average, over the last minute, of the percentage of time that processors was not idle.</p><p>Implementations may approximate this one minute smoothing period if necessary.</p> |SNMP |system.cpu.util<p>**Preprocessing**:</p><p>- JSONPATH: `$..['{#CPU.UTIL}'].avg()`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|-------|-----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|---------------------------------------------------------------------------------------|
+| CPU | CPU utilization | <p>MIB: HOST-RESOURCES-MIB</p><p>The average, over the last minute, of the percentage of time that processors was not idle.</p><p>Implementations may approximate this one minute smoothing period if necessary.</p> | SNMP | system.cpu.util<p>**Preprocessing**:</p><p>- JSONPATH: `$..['{#CPU.UTIL}'].avg()`</p> |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util.min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|------------------------------------------------------|--------------------------------------------------------------------------|------------------------------------------------------------|----------|----------------------------------|
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util.min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
## Feedback
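
The CPU item above collapses the per-processor values into a single figure with one JSONPATH preprocessing step. A sketch of that preprocessing block as it could appear in the export, assuming the JSONPath shown in the table:

```yaml
# Average the per-core utilization values; '{#CPU.UTIL}' is the key
# under which each core reports its value in the walked JSON.
preprocessing:
  -
    type: JSONPATH
    parameters:
      - '$..[''{#CPU.UTIL}''].avg()'
```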
@@ -152,7 +152,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -165,11 +165,11 @@ No specific Zabbix configuration is required.
## Template links
-|Name|
-|----|
-|HOST-RESOURCES-MIB CPU SNMP |
-|HOST-RESOURCES-MIB memory SNMP |
-|HOST-RESOURCES-MIB storage SNMP |
+| Name |
+|---------------------------------|
+| HOST-RESOURCES-MIB CPU SNMP |
+| HOST-RESOURCES-MIB memory SNMP |
+| HOST-RESOURCES-MIB storage SNMP |
## Discovery rules
diff --git a/templates/module/host_resources_snmp/template_module_host_resources_snmp.yaml b/templates/module/host_resources_snmp/template_module_host_resources_snmp.yaml
index f3495c6c320..6ea8c931bd8 100644
--- a/templates/module/host_resources_snmp/template_module_host_resources_snmp.yaml
+++ b/templates/module/host_resources_snmp/template_module_host_resources_snmp.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-02T19:42:25Z'
+ date: '2021-04-22T11:28:23Z'
groups:
-
name: Templates/Modules
@@ -204,58 +204,60 @@ zabbix_export:
dashboards:
-
name: 'System performance'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '24'
- height: '5'
- fields:
+ widgets:
-
- type: GRAPH
- name: graphid
- value:
- name: 'CPU utilization'
- host: 'HOST-RESOURCES-MIB SNMP'
- -
- type: GRAPH_PROTOTYPE
- 'y': '5'
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ type: GRAPH_CLASSIC
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU utilization'
+ host: 'HOST-RESOURCES-MIB SNMP'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#MEMNAME}: Memory utilization'
- host: 'HOST-RESOURCES-MIB SNMP'
- -
- type: GRAPH_PROTOTYPE
- 'y': '17'
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ 'y': '5'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#MEMNAME}: Memory utilization'
+ host: 'HOST-RESOURCES-MIB SNMP'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#FSNAME}: Disk space usage'
- host: 'HOST-RESOURCES-MIB SNMP'
+ 'y': '17'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#FSNAME}: Disk space usage'
+ host: 'HOST-RESOURCES-MIB SNMP'
-
template: 'HOST-RESOURCES-MIB storage SNMP'
name: 'HOST-RESOURCES-MIB storage SNMP'
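
This dashboard hunk is the core of the commit: the flat `widgets:` list under each template dashboard is wrapped in a `pages:` list, so a 5.4 dashboard can hold more than one page of widgets. A minimal sketch of the new nesting; the second page and its name are hypothetical (the template above keeps everything on a single page), and the widget bodies are abbreviated from the hunk:

```yaml
# 5.4-style template dashboard: widgets now live under pages.
dashboards:
  -
    name: 'System performance'
    pages:
      -
        widgets:
          -
            type: GRAPH_CLASSIC
            width: '24'
            height: '5'
            fields:
              -
                type: GRAPH
                name: graphid
                value:
                  name: 'CPU utilization'
                  host: 'HOST-RESOURCES-MIB SNMP'
      -
        # Hypothetical second page, just to show the multi-page layout.
        name: 'Storage'
        widgets:
          -
            type: GRAPH_PROTOTYPE
            width: '24'
            height: '12'
            fields:
              -
                type: GRAPH_PROTOTYPE
                name: graphid
                value:
                  name: '{#FSNAME}: Disk space usage'
                  host: 'HOST-RESOURCES-MIB SNMP'
```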
diff --git a/templates/module/interfaces_simple_snmp/README.md b/templates/module/interfaces_simple_snmp/README.md
index ff2d30256b3..16561603635 100644
--- a/templates/module/interfaces_simple_snmp/README.md
+++ b/templates/module/interfaces_simple_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,21 +15,21 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$IF.ERRORS.WARN} |<p>-</p> |`2` |
-|{$IF.UTIL.MAX} |<p>-</p> |`95` |
-|{$IFCONTROL} |<p>-</p> |`1` |
-|{$NET.IF.IFADMINSTATUS.MATCHES} |<p>Ignore notPresent(6)</p> |`^.*` |
-|{$NET.IF.IFADMINSTATUS.NOT_MATCHES} |<p>Ignore down(2) administrative status</p> |`^2$` |
-|{$NET.IF.IFDESCR.MATCHES} |<p>-</p> |`.*` |
-|{$NET.IF.IFDESCR.NOT_MATCHES} |<p>-</p> |`CHANGE_IF_NEEDED` |
-|{$NET.IF.IFNAME.MATCHES} |<p>-</p> |`^.*$` |
-|{$NET.IF.IFNAME.NOT_MATCHES} |<p>Filter out loopbacks, nulls, docker veth links and docker0 bridge by default</p> |`(^Software Loopback Interface|^NULL[0-9.]*$|^[Ll]o[0-9.]*$|^[Ss]ystem$|^Nu[0-9.]*$|^veth[0-9a-z]+$|docker[0-9]+|br-[a-z0-9]{12})` |
-|{$NET.IF.IFOPERSTATUS.MATCHES} |<p>-</p> |`^.*$` |
-|{$NET.IF.IFOPERSTATUS.NOT_MATCHES} |<p>Ignore notPresent(6)</p> |`^6$` |
-|{$NET.IF.IFTYPE.MATCHES} |<p>-</p> |`.*` |
-|{$NET.IF.IFTYPE.NOT_MATCHES} |<p>-</p> |`CHANGE_IF_NEEDED` |
+| Name | Description | Default |
+|-------------------------------------|-------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------|
+| {$IF.ERRORS.WARN} | <p>-</p> | `2` |
+| {$IF.UTIL.MAX} | <p>-</p> | `95` |
+| {$IFCONTROL} | <p>-</p> | `1` |
+| {$NET.IF.IFADMINSTATUS.MATCHES} | <p>Ignore notPresent(6)</p> | `^.*` |
+| {$NET.IF.IFADMINSTATUS.NOT_MATCHES} | <p>Ignore down(2) administrative status</p> | `^2$` |
+| {$NET.IF.IFDESCR.MATCHES} | <p>-</p> | `.*` |
+| {$NET.IF.IFDESCR.NOT_MATCHES} | <p>-</p> | `CHANGE_IF_NEEDED` |
+| {$NET.IF.IFNAME.MATCHES} | <p>-</p> | `^.*$` |
+| {$NET.IF.IFNAME.NOT_MATCHES} | <p>Filter out loopbacks, nulls, docker veth links and docker0 bridge by default</p> | `(^Software Loopback Interface|^NULL[0-9.]*$|^[Ll]o[0-9.]*$|^[Ss]ystem$|^Nu[0-9.]*$|^veth[0-9a-z]+$|docker[0-9]+|br-[a-z0-9]{12})` |
+| {$NET.IF.IFOPERSTATUS.MATCHES} | <p>-</p> | `^.*$` |
+| {$NET.IF.IFOPERSTATUS.NOT_MATCHES} | <p>Ignore notPresent(6)</p> | `^6$` |
+| {$NET.IF.IFTYPE.MATCHES} | <p>-</p> | `.*` |
+| {$NET.IF.IFTYPE.NOT_MATCHES} | <p>-</p> | `CHANGE_IF_NEEDED` |
## Template links
@@ -37,32 +37,32 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Network interfaces discovery |<p>Discovering interfaces from IF-MIB.</p> |SNMP |net.if.discovery<p>**Filter**:</p>AND <p>- A: {#IFADMINSTATUS} MATCHES_REGEX `{$NET.IF.IFADMINSTATUS.MATCHES}`</p><p>- B: {#IFADMINSTATUS} NOT_MATCHES_REGEX `{$NET.IF.IFADMINSTATUS.NOT_MATCHES}`</p><p>- C: {#IFOPERSTATUS} MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.MATCHES}`</p><p>- D: {#IFOPERSTATUS} NOT_MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.NOT_MATCHES}`</p><p>- E: {#IFNAME} MATCHES_REGEX `{$NET.IF.IFNAME.MATCHES}`</p><p>- F: {#IFNAME} NOT_MATCHES_REGEX `{$NET.IF.IFNAME.NOT_MATCHES}`</p><p>- G: {#IFDESCR} MATCHES_REGEX `{$NET.IF.IFDESCR.MATCHES}`</p><p>- H: {#IFDESCR} NOT_MATCHES_REGEX `{$NET.IF.IFDESCR.NOT_MATCHES}`</p><p>- K: {#IFTYPE} MATCHES_REGEX `{$NET.IF.IFTYPE.MATCHES}`</p><p>- L: {#IFTYPE} NOT_MATCHES_REGEX `{$NET.IF.IFTYPE.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|------------------------------|--------------------------------------------|------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Network interfaces discovery | <p>Discovering interfaces from IF-MIB.</p> | SNMP | net.if.discovery<p>**Filter**:</p>AND <p>- A: {#IFADMINSTATUS} MATCHES_REGEX `{$NET.IF.IFADMINSTATUS.MATCHES}`</p><p>- B: {#IFADMINSTATUS} NOT_MATCHES_REGEX `{$NET.IF.IFADMINSTATUS.NOT_MATCHES}`</p><p>- C: {#IFOPERSTATUS} MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.MATCHES}`</p><p>- D: {#IFOPERSTATUS} NOT_MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.NOT_MATCHES}`</p><p>- E: {#IFNAME} MATCHES_REGEX `{$NET.IF.IFNAME.MATCHES}`</p><p>- F: {#IFNAME} NOT_MATCHES_REGEX `{$NET.IF.IFNAME.NOT_MATCHES}`</p><p>- G: {#IFDESCR} MATCHES_REGEX `{$NET.IF.IFDESCR.MATCHES}`</p><p>- H: {#IFDESCR} NOT_MATCHES_REGEX `{$NET.IF.IFDESCR.NOT_MATCHES}`</p><p>- K: {#IFTYPE} MATCHES_REGEX `{$NET.IF.IFTYPE.MATCHES}`</p><p>- L: {#IFTYPE} NOT_MATCHES_REGEX `{$NET.IF.IFTYPE.NOT_MATCHES}`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Network_interfaces |Interface {#IFDESCR}: Operational status |<p>MIB: IF-MIB</p><p>The current operational state of the interface.</p><p>- The testing(3) state indicates that no operational packet scan be passed</p><p>- If ifAdminStatus is down(2) then ifOperStatus should be down(2)</p><p>- If ifAdminStatus is changed to up(1) then ifOperStatus should change to up(1) if the interface is ready to transmit and receive network traffic</p><p>- It should change todormant(5) if the interface is waiting for external actions (such as a serial line waiting for an incoming connection)</p><p>- It should remain in the down(2) state if and only if there is a fault that prevents it from going to the up(1) state</p><p>- It should remain in the notPresent(6) state if the interface has missing(typically, hardware) components.</p> |SNMP |net.if.status[ifOperStatus.{#SNMPINDEX}] |
-|Network_interfaces |Interface {#IFDESCR}: Bits received |<p>MIB: IF-MIB</p><p>The total number of octets received on the interface,including framing characters. Discontinuities in the value of this counter can occurat re-initialization of the management system, and atother times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.in[ifInOctets.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
-|Network_interfaces |Interface {#IFDESCR}: Bits sent |<p>MIB: IF-MIB</p><p>The total number of octets transmitted out of the interface, including framing characters. Discontinuities in the value of this counter can occurat re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.out[ifOutOctets.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
-|Network_interfaces |Interface {#IFDESCR}: Inbound packets with errors |<p>MIB: IF-MIB</p><p>For packet-oriented interfaces, the number of inbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of inbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.in.errors[ifInErrors.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFDESCR}: Outbound packets with errors |<p>MIB: IF-MIB</p><p>For packet-oriented interfaces, the number of outbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of outbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.out.errors[ifOutErrors.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFDESCR}: Outbound packets discarded |<p>MIB: IF-MIB</p><p>The number of outbound packets which were chosen to be discarded</p><p>even though no errors had been detected to prevent their being deliverable to a higher-layer protocol.</p><p>One possible reason for discarding such a packet could be to free up buffer space.</p><p>Discontinuities in the value of this counter can occur at re-initialization of the management system,</p><p>and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.out.discards[ifOutDiscards.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFDESCR}: Inbound packets discarded |<p>MIB: IF-MIB</p><p>The number of inbound packets which were chosen to be discarded</p><p>even though no errors had been detected to prevent their being deliverable to a higher-layer protocol.</p><p>One possible reason for discarding such a packet could be to free up buffer space.</p><p>Discontinuities in the value of this counter can occur at re-initialization of the management system,</p><p>and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.in.discards[ifInDiscards.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFDESCR}: Interface type |<p>MIB: IF-MIB</p><p>The type of interface.</p><p>Additional values for ifType are assigned by the Internet Assigned NumbersAuthority (IANA),</p><p>through updating the syntax of the IANAifType textual convention.</p> |SNMP |net.if.type[ifType.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Network_interfaces |Interface {#IFDESCR}: Speed |<p>MIB: IF-MIB</p><p>An estimate of the interface's current bandwidth in bits per second.</p><p>For interfaces which do not vary in bandwidth or for those where no accurate estimation can be made,</p><p>this object should contain the nominal bandwidth.</p><p>If the bandwidth of the interface is greater than the maximum value reportable by this object then</p><p>this object should report its maximum value (4,294,967,295) and ifHighSpeed must be used to report the interace's speed.</p><p>For a sub-layer which has no concept of bandwidth, this object should be zero.</p> |SNMP |net.if.speed[ifSpeed.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|--------------------|----------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------------------------------------------------------------------------------------------|
+| Network_interfaces | Interface {#IFDESCR}: Operational status | <p>MIB: IF-MIB</p><p>The current operational state of the interface.</p><p>- The testing(3) state indicates that no operational packet scan be passed</p><p>- If ifAdminStatus is down(2) then ifOperStatus should be down(2)</p><p>- If ifAdminStatus is changed to up(1) then ifOperStatus should change to up(1) if the interface is ready to transmit and receive network traffic</p><p>- It should change todormant(5) if the interface is waiting for external actions (such as a serial line waiting for an incoming connection)</p><p>- It should remain in the down(2) state if and only if there is a fault that prevents it from going to the up(1) state</p><p>- It should remain in the notPresent(6) state if the interface has missing(typically, hardware) components.</p> | SNMP | net.if.status[ifOperStatus.{#SNMPINDEX}] |
+| Network_interfaces | Interface {#IFDESCR}: Bits received | <p>MIB: IF-MIB</p><p>The total number of octets received on the interface,including framing characters. Discontinuities in the value of this counter can occurat re-initialization of the management system, and atother times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.in[ifInOctets.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
+| Network_interfaces | Interface {#IFDESCR}: Bits sent | <p>MIB: IF-MIB</p><p>The total number of octets transmitted out of the interface, including framing characters. Discontinuities in the value of this counter can occurat re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.out[ifOutOctets.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
+| Network_interfaces | Interface {#IFDESCR}: Inbound packets with errors | <p>MIB: IF-MIB</p><p>For packet-oriented interfaces, the number of inbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of inbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.in.errors[ifInErrors.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFDESCR}: Outbound packets with errors | <p>MIB: IF-MIB</p><p>For packet-oriented interfaces, the number of outbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of outbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.out.errors[ifOutErrors.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFDESCR}: Outbound packets discarded | <p>MIB: IF-MIB</p><p>The number of outbound packets which were chosen to be discarded</p><p>even though no errors had been detected to prevent their being deliverable to a higher-layer protocol.</p><p>One possible reason for discarding such a packet could be to free up buffer space.</p><p>Discontinuities in the value of this counter can occur at re-initialization of the management system,</p><p>and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.out.discards[ifOutDiscards.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFDESCR}: Inbound packets discarded | <p>MIB: IF-MIB</p><p>The number of inbound packets which were chosen to be discarded</p><p>even though no errors had been detected to prevent their being deliverable to a higher-layer protocol.</p><p>One possible reason for discarding such a packet could be to free up buffer space.</p><p>Discontinuities in the value of this counter can occur at re-initialization of the management system,</p><p>and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.in.discards[ifInDiscards.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFDESCR}: Interface type | <p>MIB: IF-MIB</p><p>The type of interface.</p><p>Additional values for ifType are assigned by the Internet Assigned NumbersAuthority (IANA),</p><p>through updating the syntax of the IANAifType textual convention.</p> | SNMP | net.if.type[ifType.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Network_interfaces | Interface {#IFDESCR}: Speed | <p>MIB: IF-MIB</p><p>An estimate of the interface's current bandwidth in bits per second.</p><p>For interfaces which do not vary in bandwidth or for those where no accurate estimation can be made,</p><p>this object should contain the nominal bandwidth.</p><p>If the bandwidth of the interface is greater than the maximum value reportable by this object then</p><p>this object should report its maximum value (4,294,967,295) and ifHighSpeed must be used to report the interace's speed.</p><p>For a sub-layer which has no concept of bandwidth, this object should be zero.</p> | SNMP | net.if.speed[ifSpeed.{#SNMPINDEX}] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Interface {#IFDESCR}: Link down |<p>This trigger expression works as follows:</p><p>1. Can be triggered if operations status is down.</p><p>2. {$IFCONTROL:"{#IFNAME}"}=1 - user can redefine Context macro to value - 0. That marks this interface as not important. No new trigger will be fired if this interface is down.</p><p>3. {TEMPLATE_NAME:METRIC.diff()}=1) - trigger fires only if operational status was up(1) sometime before. (So, do not fire 'ethernal off' interfaces.)</p><p>WARNING: if closed manually - won't fire again on next poll, because of .diff.</p> |`{$IFCONTROL:"{#IFNAME}"}=1 and ({TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}=2 and {TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].diff()}=1)`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}<>2 or {$IFCONTROL:"{#IFNAME}"}=0` |AVERAGE |<p>Manual close: YES</p> |
-|Interface {#IFDESCR}: High bandwidth usage (> {$IF.UTIL.MAX:"{#IFNAME}"}% ) |<p>The network interface utilization is close to its estimated maximum bandwidth.</p> |`({TEMPLATE_NAME:net.if.in[ifInOctets.{#SNMPINDEX}].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Interfaces Simple SNMP:net.if.speed[ifSpeed.{#SNMPINDEX}].last()} or {Interfaces Simple SNMP:net.if.out[ifOutOctets.{#SNMPINDEX}].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Interfaces Simple SNMP:net.if.speed[ifSpeed.{#SNMPINDEX}].last()}) and {Interfaces Simple SNMP:net.if.speed[ifSpeed.{#SNMPINDEX}].last()}>0`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in[ifInOctets.{#SNMPINDEX}].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Interfaces Simple SNMP:net.if.speed[ifSpeed.{#SNMPINDEX}].last()} and {Interfaces Simple SNMP:net.if.out[ifOutOctets.{#SNMPINDEX}].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Interfaces Simple SNMP:net.if.speed[ifSpeed.{#SNMPINDEX}].last()}` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFDESCR}: Link down</p> |
-|Interface {#IFDESCR}: High error rate (> {$IF.ERRORS.WARN:"{#IFNAME}"} for 5m) |<p>Recovers when below 80% of {$IF.ERRORS.WARN:"{#IFNAME}"} threshold</p> |`{TEMPLATE_NAME:net.if.in.errors[ifInErrors.{#SNMPINDEX}].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"} or {Interfaces Simple SNMP:net.if.out.errors[ifOutErrors.{#SNMPINDEX}].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in.errors[ifInErrors.{#SNMPINDEX}].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8 and {Interfaces Simple SNMP:net.if.out.errors[ifOutErrors.{#SNMPINDEX}].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFDESCR}: Link down</p> |
-|Interface {#IFDESCR}: Ethernet has changed to lower speed than it was before |<p>This Ethernet connection has transitioned down from its known maximum speed. This might be a sign of autonegotiation issues. Ack to close.</p> |`{TEMPLATE_NAME:net.if.speed[ifSpeed.{#SNMPINDEX}].change()}<0 and {TEMPLATE_NAME:net.if.speed[ifSpeed.{#SNMPINDEX}].last()}>0 and ( {Interfaces Simple SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=6 or {Interfaces Simple SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=7 or {Interfaces Simple SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=11 or {Interfaces Simple SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=62 or {Interfaces Simple SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=69 or {Interfaces Simple SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=117 ) and ({Interfaces Simple SNMP:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}<>2)`<p>Recovery expression:</p>`({TEMPLATE_NAME:net.if.speed[ifSpeed.{#SNMPINDEX}].change()}>0 and {TEMPLATE_NAME:net.if.speed[ifSpeed.{#SNMPINDEX}].prev()}>0) or ({Interfaces Simple SNMP:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}=2)` |INFO |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFDESCR}: Link down</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|--------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------------|
+| Interface {#IFDESCR}: Link down | <p>This trigger expression works as follows:</p><p>1. Can be triggered if the operational status is down.</p><p>2. {$IFCONTROL:"{#IFNAME}"}=1 - the user can redefine the context macro to the value 0, which marks this interface as not important. No new trigger will be fired if this interface is down.</p><p>3. {TEMPLATE_NAME:METRIC.diff()}=1 - the trigger fires only if the operational status was up(1) sometime before (so it does not fire for 'eternally off' interfaces).</p><p>WARNING: if closed manually, it won't fire again on the next poll because of .diff.</p> | `{$IFCONTROL:"{#IFNAME}"}=1 and ({TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}=2 and {TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].diff()}=1)`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}<>2 or {$IFCONTROL:"{#IFNAME}"}=0` | AVERAGE | <p>Manual close: YES</p> |
+| Interface {#IFDESCR}: High bandwidth usage (> {$IF.UTIL.MAX:"{#IFNAME}"}% ) | <p>The network interface utilization is close to its estimated maximum bandwidth.</p> | `({TEMPLATE_NAME:net.if.in[ifInOctets.{#SNMPINDEX}].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Interfaces Simple SNMP:net.if.speed[ifSpeed.{#SNMPINDEX}].last()} or {Interfaces Simple SNMP:net.if.out[ifOutOctets.{#SNMPINDEX}].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Interfaces Simple SNMP:net.if.speed[ifSpeed.{#SNMPINDEX}].last()}) and {Interfaces Simple SNMP:net.if.speed[ifSpeed.{#SNMPINDEX}].last()}>0`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in[ifInOctets.{#SNMPINDEX}].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Interfaces Simple SNMP:net.if.speed[ifSpeed.{#SNMPINDEX}].last()} and {Interfaces Simple SNMP:net.if.out[ifOutOctets.{#SNMPINDEX}].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Interfaces Simple SNMP:net.if.speed[ifSpeed.{#SNMPINDEX}].last()}` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFDESCR}: Link down</p> |
+| Interface {#IFDESCR}: High error rate (> {$IF.ERRORS.WARN:"{#IFNAME}"} for 5m) | <p>Recovers when below 80% of {$IF.ERRORS.WARN:"{#IFNAME}"} threshold</p> | `{TEMPLATE_NAME:net.if.in.errors[ifInErrors.{#SNMPINDEX}].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"} or {Interfaces Simple SNMP:net.if.out.errors[ifOutErrors.{#SNMPINDEX}].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in.errors[ifInErrors.{#SNMPINDEX}].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8 and {Interfaces Simple SNMP:net.if.out.errors[ifOutErrors.{#SNMPINDEX}].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFDESCR}: Link down</p> |
+| Interface {#IFDESCR}: Ethernet has changed to lower speed than it was before | <p>This Ethernet connection has transitioned down from its known maximum speed. This might be a sign of autonegotiation issues. Ack to close.</p> | `{TEMPLATE_NAME:net.if.speed[ifSpeed.{#SNMPINDEX}].change()}<0 and {TEMPLATE_NAME:net.if.speed[ifSpeed.{#SNMPINDEX}].last()}>0 and ( {Interfaces Simple SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=6 or {Interfaces Simple SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=7 or {Interfaces Simple SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=11 or {Interfaces Simple SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=62 or {Interfaces Simple SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=69 or {Interfaces Simple SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=117 ) and ({Interfaces Simple SNMP:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}<>2)`<p>Recovery expression:</p>`({TEMPLATE_NAME:net.if.speed[ifSpeed.{#SNMPINDEX}].change()}>0 and {TEMPLATE_NAME:net.if.speed[ifSpeed.{#SNMPINDEX}].prev()}>0) or ({Interfaces Simple SNMP:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}=2)` | INFO | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFDESCR}: Link down</p> |
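The Link down trigger above is gated by the {$IFCONTROL} context macro, so individual interfaces can be excluded from alerting without touching the template. A minimal sketch of a host-level override in the same YAML export format, assuming a hypothetical interface named `eth2`:

```yaml
# Hypothetical host-level macro override: {$IFCONTROL} resolved with the
# context "eth2" returns 0, so "Interface eth2: Link down" never fires,
# while all other interfaces keep the template default of 1.
macros:
  -
    macro: '{$IFCONTROL:"eth2"}'
    value: '0'
```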
## Feedback
diff --git a/templates/module/interfaces_simple_snmp/template_module_interfaces_simple_snmp.yaml b/templates/module/interfaces_simple_snmp/template_module_interfaces_simple_snmp.yaml
index df5d9a1877b..95fd7d9c8af 100644
--- a/templates/module/interfaces_simple_snmp/template_module_interfaces_simple_snmp.yaml
+++ b/templates/module/interfaces_simple_snmp/template_module_interfaces_simple_snmp.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-02T19:42:23Z'
+ date: '2021-04-22T11:28:22Z'
groups:
-
name: Templates/Modules
@@ -451,26 +451,28 @@ zabbix_export:
dashboards:
-
name: 'Network interfaces'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Interface {#IFDESCR}: Network traffic'
- host: 'Interfaces Simple SNMP'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Interface {#IFDESCR}: Network traffic'
+ host: 'Interfaces Simple SNMP'
valuemaps:
-
name: 'IF-MIB::ifOperStatus'
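The reindented hunk above boils down to wrapping the dashboard's existing widget list in a new `pages` array, which is what the multi-page dashboard change in this commit introduces. Reassembled from the added lines, the resulting fragment of the export looks roughly like this (indentation relative to the enclosing template is omitted):

```yaml
dashboards:
  -
    name: 'Network interfaces'
    pages:
      -
        widgets:
          -
            type: GRAPH_PROTOTYPE
            width: '24'
            height: '12'
            fields:
              -
                type: INTEGER
                name: columns
                value: '1'
              -
                type: INTEGER
                name: rows
                value: '3'
              -
                type: GRAPH_PROTOTYPE
                name: graphid
                value:
                  name: 'Interface {#IFDESCR}: Network traffic'
                  host: 'Interfaces Simple SNMP'
```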
diff --git a/templates/module/interfaces_snmp/README.md b/templates/module/interfaces_snmp/README.md
index 8df5160b40e..d187da89803 100644
--- a/templates/module/interfaces_snmp/README.md
+++ b/templates/module/interfaces_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,23 +15,23 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$IF.ERRORS.WARN} |<p>-</p> |`2` |
-|{$IF.UTIL.MAX} |<p>-</p> |`90` |
-|{$IFCONTROL} |<p>-</p> |`1` |
-|{$NET.IF.IFADMINSTATUS.MATCHES} |<p>Ignore notPresent(6)</p> |`^.*` |
-|{$NET.IF.IFADMINSTATUS.NOT_MATCHES} |<p>Ignore down(2) administrative status</p> |`^2$` |
-|{$NET.IF.IFALIAS.MATCHES} |<p>-</p> |`.*` |
-|{$NET.IF.IFALIAS.NOT_MATCHES} |<p>-</p> |`CHANGE_IF_NEEDED` |
-|{$NET.IF.IFDESCR.MATCHES} |<p>-</p> |`.*` |
-|{$NET.IF.IFDESCR.NOT_MATCHES} |<p>-</p> |`CHANGE_IF_NEEDED` |
-|{$NET.IF.IFNAME.MATCHES} |<p>-</p> |`^.*$` |
-|{$NET.IF.IFNAME.NOT_MATCHES} |<p>Filter out loopbacks, nulls, docker veth links and docker0 bridge by default</p> |`(^Software Loopback Interface|^NULL[0-9.]*$|^[Ll]o[0-9.]*$|^[Ss]ystem$|^Nu[0-9.]*$|^veth[0-9a-z]+$|docker[0-9]+|br-[a-z0-9]{12})` |
-|{$NET.IF.IFOPERSTATUS.MATCHES} |<p>-</p> |`^.*$` |
-|{$NET.IF.IFOPERSTATUS.NOT_MATCHES} |<p>Ignore notPresent(6)</p> |`^6$` |
-|{$NET.IF.IFTYPE.MATCHES} |<p>-</p> |`.*` |
-|{$NET.IF.IFTYPE.NOT_MATCHES} |<p>-</p> |`CHANGE_IF_NEEDED` |
+| Name | Description | Default |
+|-------------------------------------|-------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------|
+| {$IF.ERRORS.WARN} | <p>-</p> | `2` |
+| {$IF.UTIL.MAX} | <p>-</p> | `90` |
+| {$IFCONTROL} | <p>-</p> | `1` |
+| {$NET.IF.IFADMINSTATUS.MATCHES} | <p>Ignore notPresent(6)</p> | `^.*` |
+| {$NET.IF.IFADMINSTATUS.NOT_MATCHES} | <p>Ignore down(2) administrative status</p> | `^2$` |
+| {$NET.IF.IFALIAS.MATCHES} | <p>-</p> | `.*` |
+| {$NET.IF.IFALIAS.NOT_MATCHES} | <p>-</p> | `CHANGE_IF_NEEDED` |
+| {$NET.IF.IFDESCR.MATCHES} | <p>-</p> | `.*` |
+| {$NET.IF.IFDESCR.NOT_MATCHES} | <p>-</p> | `CHANGE_IF_NEEDED` |
+| {$NET.IF.IFNAME.MATCHES} | <p>-</p> | `^.*$` |
+| {$NET.IF.IFNAME.NOT_MATCHES} | <p>Filter out loopbacks, nulls, docker veth links and docker0 bridge by default</p> | `(^Software Loopback Interface|^NULL[0-9.]*$|^[Ll]o[0-9.]*$|^[Ss]ystem$|^Nu[0-9.]*$|^veth[0-9a-z]+$|docker[0-9]+|br-[a-z0-9]{12})` |
+| {$NET.IF.IFOPERSTATUS.MATCHES} | <p>-</p> | `^.*$` |
+| {$NET.IF.IFOPERSTATUS.NOT_MATCHES} | <p>Ignore notPresent(6)</p> | `^6$` |
+| {$NET.IF.IFTYPE.MATCHES} | <p>-</p> | `.*` |
+| {$NET.IF.IFTYPE.NOT_MATCHES} | <p>-</p> | `CHANGE_IF_NEEDED` |
## Template links
@@ -39,32 +39,32 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Network interfaces discovery |<p>Discovering interfaces from IF-MIB.</p> |SNMP |net.if.discovery<p>**Filter**:</p>AND <p>- A: {#IFADMINSTATUS} MATCHES_REGEX `{$NET.IF.IFADMINSTATUS.MATCHES}`</p><p>- B: {#IFADMINSTATUS} NOT_MATCHES_REGEX `{$NET.IF.IFADMINSTATUS.NOT_MATCHES}`</p><p>- C: {#IFOPERSTATUS} MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.MATCHES}`</p><p>- D: {#IFOPERSTATUS} NOT_MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.NOT_MATCHES}`</p><p>- E: {#IFNAME} MATCHES_REGEX `{$NET.IF.IFNAME.MATCHES}`</p><p>- F: {#IFNAME} NOT_MATCHES_REGEX `{$NET.IF.IFNAME.NOT_MATCHES}`</p><p>- G: {#IFDESCR} MATCHES_REGEX `{$NET.IF.IFDESCR.MATCHES}`</p><p>- H: {#IFDESCR} NOT_MATCHES_REGEX `{$NET.IF.IFDESCR.NOT_MATCHES}`</p><p>- I: {#IFALIAS} MATCHES_REGEX `{$NET.IF.IFALIAS.MATCHES}`</p><p>- J: {#IFALIAS} NOT_MATCHES_REGEX `{$NET.IF.IFALIAS.NOT_MATCHES}`</p><p>- K: {#IFTYPE} MATCHES_REGEX `{$NET.IF.IFTYPE.MATCHES}`</p><p>- L: {#IFTYPE} NOT_MATCHES_REGEX `{$NET.IF.IFTYPE.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|------------------------------|--------------------------------------------|------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Network interfaces discovery | <p>Discovering interfaces from IF-MIB.</p> | SNMP | net.if.discovery<p>**Filter**:</p>AND <p>- A: {#IFADMINSTATUS} MATCHES_REGEX `{$NET.IF.IFADMINSTATUS.MATCHES}`</p><p>- B: {#IFADMINSTATUS} NOT_MATCHES_REGEX `{$NET.IF.IFADMINSTATUS.NOT_MATCHES}`</p><p>- C: {#IFOPERSTATUS} MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.MATCHES}`</p><p>- D: {#IFOPERSTATUS} NOT_MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.NOT_MATCHES}`</p><p>- E: {#IFNAME} MATCHES_REGEX `{$NET.IF.IFNAME.MATCHES}`</p><p>- F: {#IFNAME} NOT_MATCHES_REGEX `{$NET.IF.IFNAME.NOT_MATCHES}`</p><p>- G: {#IFDESCR} MATCHES_REGEX `{$NET.IF.IFDESCR.MATCHES}`</p><p>- H: {#IFDESCR} NOT_MATCHES_REGEX `{$NET.IF.IFDESCR.NOT_MATCHES}`</p><p>- I: {#IFALIAS} MATCHES_REGEX `{$NET.IF.IFALIAS.MATCHES}`</p><p>- J: {#IFALIAS} NOT_MATCHES_REGEX `{$NET.IF.IFALIAS.NOT_MATCHES}`</p><p>- K: {#IFTYPE} MATCHES_REGEX `{$NET.IF.IFTYPE.MATCHES}`</p><p>- L: {#IFTYPE} NOT_MATCHES_REGEX `{$NET.IF.IFTYPE.NOT_MATCHES}`</p> |
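The discovery filter listed above is a chain of AND-ed regular-expression conditions, one MATCHES/NOT_MATCHES pair per LLD macro, each driven by a user macro so it can be overridden per host. In the export YAML each condition takes roughly this shape (only the two {#IFNAME} conditions are shown; key names follow the usual Zabbix LLD export layout and should be verified against the template file itself):

```yaml
filter:
  evaltype: AND
  conditions:
    -
      macro: '{#IFNAME}'
      value: '{$NET.IF.IFNAME.MATCHES}'
      formulaid: E
    -
      macro: '{#IFNAME}'
      value: '{$NET.IF.IFNAME.NOT_MATCHES}'
      operator: NOT_MATCHES_REGEX
      formulaid: F
```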
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Operational status |<p>MIB: IF-MIB</p><p>The current operational state of the interface.</p><p>- The testing(3) state indicates that no operational packet scan be passed</p><p>- If ifAdminStatus is down(2) then ifOperStatus should be down(2)</p><p>- If ifAdminStatus is changed to up(1) then ifOperStatus should change to up(1) if the interface is ready to transmit and receive network traffic</p><p>- It should change todormant(5) if the interface is waiting for external actions (such as a serial line waiting for an incoming connection)</p><p>- It should remain in the down(2) state if and only if there is a fault that prevents it from going to the up(1) state</p><p>- It should remain in the notPresent(6) state if the interface has missing(typically, hardware) components.</p> |SNMP |net.if.status[ifOperStatus.{#SNMPINDEX}] |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Bits received |<p>MIB: IF-MIB</p><p>The total number of octets received on the interface, including framing characters. This object is a 64-bit version of ifInOctets. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.in[ifHCInOctets.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Bits sent |<p>MIB: IF-MIB</p><p>The total number of octets transmitted out of the interface, including framing characters. This object is a 64-bit version of ifOutOctets.Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.out[ifHCOutOctets.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Inbound packets with errors |<p>MIB: IF-MIB</p><p>For packet-oriented interfaces, the number of inbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of inbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.in.errors[ifInErrors.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Outbound packets with errors |<p>MIB: IF-MIB</p><p>For packet-oriented interfaces, the number of outbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of outbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.out.errors[ifOutErrors.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Outbound packets discarded |<p>MIB: IF-MIB</p><p>The number of outbound packets which were chosen to be discarded</p><p>even though no errors had been detected to prevent their being deliverable to a higher-layer protocol.</p><p>One possible reason for discarding such a packet could be to free up buffer space.</p><p>Discontinuities in the value of this counter can occur at re-initialization of the management system,</p><p>and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.out.discards[ifOutDiscards.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Inbound packets discarded |<p>MIB: IF-MIB</p><p>The number of inbound packets which were chosen to be discarded</p><p>even though no errors had been detected to prevent their being deliverable to a higher-layer protocol.</p><p>One possible reason for discarding such a packet could be to free up buffer space.</p><p>Discontinuities in the value of this counter can occur at re-initialization of the management system,</p><p>and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.in.discards[ifInDiscards.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Interface type |<p>MIB: IF-MIB</p><p>The type of interface.</p><p>Additional values for ifType are assigned by the Internet Assigned NumbersAuthority (IANA),</p><p>through updating the syntax of the IANAifType textual convention.</p> |SNMP |net.if.type[ifType.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|--------------------|---------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|---------------------------------------------------------------------------------------------------------------|
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Operational status | <p>MIB: IF-MIB</p><p>The current operational state of the interface.</p><p>- The testing(3) state indicates that no operational packets can be passed</p><p>- If ifAdminStatus is down(2) then ifOperStatus should be down(2)</p><p>- If ifAdminStatus is changed to up(1) then ifOperStatus should change to up(1) if the interface is ready to transmit and receive network traffic</p><p>- It should change to dormant(5) if the interface is waiting for external actions (such as a serial line waiting for an incoming connection)</p><p>- It should remain in the down(2) state if and only if there is a fault that prevents it from going to the up(1) state</p><p>- It should remain in the notPresent(6) state if the interface has missing (typically, hardware) components.</p> | SNMP | net.if.status[ifOperStatus.{#SNMPINDEX}] |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Bits received | <p>MIB: IF-MIB</p><p>The total number of octets received on the interface, including framing characters. This object is a 64-bit version of ifInOctets. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.in[ifHCInOctets.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Bits sent | <p>MIB: IF-MIB</p><p>The total number of octets transmitted out of the interface, including framing characters. This object is a 64-bit version of ifOutOctets. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.out[ifHCOutOctets.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Inbound packets with errors | <p>MIB: IF-MIB</p><p>For packet-oriented interfaces, the number of inbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of inbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.in.errors[ifInErrors.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Outbound packets with errors | <p>MIB: IF-MIB</p><p>For packet-oriented interfaces, the number of outbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of outbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.out.errors[ifOutErrors.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Outbound packets discarded | <p>MIB: IF-MIB</p><p>The number of outbound packets which were chosen to be discarded</p><p>even though no errors had been detected to prevent their being deliverable to a higher-layer protocol.</p><p>One possible reason for discarding such a packet could be to free up buffer space.</p><p>Discontinuities in the value of this counter can occur at re-initialization of the management system,</p><p>and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.out.discards[ifOutDiscards.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Inbound packets discarded | <p>MIB: IF-MIB</p><p>The number of inbound packets which were chosen to be discarded</p><p>even though no errors had been detected to prevent their being deliverable to a higher-layer protocol.</p><p>One possible reason for discarding such a packet could be to free up buffer space.</p><p>Discontinuities in the value of this counter can occur at re-initialization of the management system,</p><p>and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.in.discards[ifInDiscards.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Interface type | <p>MIB: IF-MIB</p><p>The type of interface.</p><p>Additional values for ifType are assigned by the Internet Assigned Numbers Authority (IANA),</p><p>through updating the syntax of the IANAifType textual convention.</p> | SNMP | net.if.type[ifType.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
 |Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Speed |<p>MIB: IF-MIB</p><p>An estimate of the interface's current bandwidth in units of 1,000,000 bits per second. If this object reports a value of `n' then the speed of the interface is somewhere in the range of `n-500,000' to `n+499,999'. For interfaces which do not vary in bandwidth or for those where no accurate estimation can be made, this object should contain the nominal bandwidth. For a sub-layer which has no concept of bandwidth, this object should be zero.</p> |SNMP |net.if.speed[ifHighSpeed.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1000000`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
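The Bits received/sent items in the table above derive a bits-per-second rate from the raw SNMP octet counters via two preprocessing steps. A sketch of how that pair of steps is typically expressed in the template YAML (item definition trimmed to the preprocessing block; check the template file for the exact layout):

```yaml
# Preprocessing chain behind the ifHCInOctets/ifHCOutOctets items:
# 1) turn the monotonically growing octet counter into a per-second delta,
# 2) multiply by 8 to convert octets/s into bits/s.
preprocessing:
  -
    type: CHANGE_PER_SECOND
  -
    type: MULTIPLIER
    parameters:
      - '8'
```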
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Interface {#IFNAME}({#IFALIAS}): Link down |<p>This trigger expression works as follows:</p><p>1. Can be triggered if operations status is down.</p><p>2. {$IFCONTROL:"{#IFNAME}"}=1 - user can redefine Context macro to value - 0. That marks this interface as not important. No new trigger will be fired if this interface is down.</p><p>3. {TEMPLATE_NAME:METRIC.diff()}=1) - trigger fires only if operational status was up(1) sometime before. (So, do not fire 'ethernal off' interfaces.)</p><p>WARNING: if closed manually - won't fire again on next poll, because of .diff.</p> |`{$IFCONTROL:"{#IFNAME}"}=1 and ({TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}=2 and {TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].diff()}=1)`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}<>2 or {$IFCONTROL:"{#IFNAME}"}=0` |AVERAGE |<p>Manual close: YES</p> |
-|Interface {#IFNAME}({#IFALIAS}): High bandwidth usage (> {$IF.UTIL.MAX:"{#IFNAME}"}% ) |<p>The network interface utilization is close to its estimated maximum bandwidth.</p> |`({TEMPLATE_NAME:net.if.in[ifHCInOctets.{#SNMPINDEX}].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Interfaces SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()} or {Interfaces SNMP:net.if.out[ifHCOutOctets.{#SNMPINDEX}].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Interfaces SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}) and {Interfaces SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}>0`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in[ifHCInOctets.{#SNMPINDEX}].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Interfaces SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()} and {Interfaces SNMP:net.if.out[ifHCOutOctets.{#SNMPINDEX}].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Interfaces SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
-|Interface {#IFNAME}({#IFALIAS}): High error rate (> {$IF.ERRORS.WARN:"{#IFNAME}"} for 5m) |<p>Recovers when below 80% of {$IF.ERRORS.WARN:"{#IFNAME}"} threshold</p> |`{TEMPLATE_NAME:net.if.in.errors[ifInErrors.{#SNMPINDEX}].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"} or {Interfaces SNMP:net.if.out.errors[ifOutErrors.{#SNMPINDEX}].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in.errors[ifInErrors.{#SNMPINDEX}].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8 and {Interfaces SNMP:net.if.out.errors[ifOutErrors.{#SNMPINDEX}].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
-|Interface {#IFNAME}({#IFALIAS}): Ethernet has changed to lower speed than it was before |<p>This Ethernet connection has transitioned down from its known maximum speed. This might be a sign of autonegotiation issues. Ack to close.</p> |`{TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].change()}<0 and {TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}>0 and ( {Interfaces SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=6 or {Interfaces SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=7 or {Interfaces SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=11 or {Interfaces SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=62 or {Interfaces SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=69 or {Interfaces SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=117 ) and ({Interfaces SNMP:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}<>2)`<p>Recovery expression:</p>`({TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].change()}>0 and {TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].prev()}>0) or ({Interfaces SNMP:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}=2)` |INFO |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------------------------------------------------------------------------------------|
+| Interface {#IFNAME}({#IFALIAS}): Link down | <p>This trigger expression works as follows:</p><p>1. Can be triggered if the operational status is down.</p><p>2. {$IFCONTROL:"{#IFNAME}"}=1 - the user can redefine the context macro to the value 0, which marks this interface as not important. No new trigger will be fired if this interface is down.</p><p>3. {TEMPLATE_NAME:METRIC.diff()}=1 - the trigger fires only if the operational status was up(1) sometime before (so it does not fire for 'eternally off' interfaces).</p><p>WARNING: if closed manually, it won't fire again on the next poll because of .diff.</p> | `{$IFCONTROL:"{#IFNAME}"}=1 and ({TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}=2 and {TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].diff()}=1)`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}<>2 or {$IFCONTROL:"{#IFNAME}"}=0` | AVERAGE | <p>Manual close: YES</p> |
+| Interface {#IFNAME}({#IFALIAS}): High bandwidth usage (> {$IF.UTIL.MAX:"{#IFNAME}"}% ) | <p>The network interface utilization is close to its estimated maximum bandwidth.</p> | `({TEMPLATE_NAME:net.if.in[ifHCInOctets.{#SNMPINDEX}].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Interfaces SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()} or {Interfaces SNMP:net.if.out[ifHCOutOctets.{#SNMPINDEX}].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Interfaces SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}) and {Interfaces SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}>0`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in[ifHCInOctets.{#SNMPINDEX}].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Interfaces SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()} and {Interfaces SNMP:net.if.out[ifHCOutOctets.{#SNMPINDEX}].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Interfaces SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
+| Interface {#IFNAME}({#IFALIAS}): High error rate (> {$IF.ERRORS.WARN:"{#IFNAME}"} for 5m) | <p>Recovers when below 80% of {$IF.ERRORS.WARN:"{#IFNAME}"} threshold</p> | `{TEMPLATE_NAME:net.if.in.errors[ifInErrors.{#SNMPINDEX}].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"} or {Interfaces SNMP:net.if.out.errors[ifOutErrors.{#SNMPINDEX}].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in.errors[ifInErrors.{#SNMPINDEX}].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8 and {Interfaces SNMP:net.if.out.errors[ifOutErrors.{#SNMPINDEX}].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
+| Interface {#IFNAME}({#IFALIAS}): Ethernet has changed to lower speed than it was before | <p>This Ethernet connection has transitioned down from its known maximum speed. This might be a sign of autonegotiation issues. Ack to close.</p> | `{TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].change()}<0 and {TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}>0 and ( {Interfaces SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=6 or {Interfaces SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=7 or {Interfaces SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=11 or {Interfaces SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=62 or {Interfaces SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=69 or {Interfaces SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=117 ) and ({Interfaces SNMP:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}<>2)`<p>Recovery expression:</p>`({TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].change()}>0 and {TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].prev()}>0) or ({Interfaces SNMP:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}=2)` | INFO | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
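The high bandwidth trigger above compares the 15-minute average of the in/out bit rate against {$IF.UTIL.MAX}% of the interface speed and recovers 3 percentage points lower: with the default of 90, a 1 Gbps port fires above 900 Mbps and recovers below 870 Mbps. The threshold can be relaxed per interface with a context macro; a hypothetical host-level override in the export YAML:

```yaml
# Hypothetical override: let the uplink "eth0" reach 95% utilization before
# "High bandwidth usage" fires; all other interfaces keep the default of 90.
macros:
  -
    macro: '{$IF.UTIL.MAX:"eth0"}'
    value: '95'
```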
## Feedback
diff --git a/templates/module/interfaces_snmp/template_module_interfaces_snmp.yaml b/templates/module/interfaces_snmp/template_module_interfaces_snmp.yaml
index e1f05a60be9..904195d2176 100644
--- a/templates/module/interfaces_snmp/template_module_interfaces_snmp.yaml
+++ b/templates/module/interfaces_snmp/template_module_interfaces_snmp.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-02T19:42:24Z'
+ date: '2021-04-22T11:28:21Z'
groups:
-
name: Templates/Modules
@@ -466,26 +466,28 @@ zabbix_export:
dashboards:
-
name: 'Network interfaces'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
- host: 'Interfaces SNMP'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
+ host: 'Interfaces SNMP'
valuemaps:
-
name: 'IF-MIB::ifOperStatus'
diff --git a/templates/module/interfaces_win_snmp/README.md b/templates/module/interfaces_win_snmp/README.md
index b9db00c54e0..5cac4ca0bc4 100644
--- a/templates/module/interfaces_win_snmp/README.md
+++ b/templates/module/interfaces_win_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
 Special version of the interfaces template that is required for Windows OS, since MS doesn't support 64-bit counters but does support ifAlias and ifHighSpeed.
## Setup
@@ -16,23 +16,23 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$IF.ERRORS.WARN} |<p>-</p> |`2` |
-|{$IF.UTIL.MAX} |<p>-</p> |`90` |
-|{$IFCONTROL} |<p>-</p> |`1` |
-|{$NET.IF.IFADMINSTATUS.MATCHES} |<p>-</p> |`^.*$` |
-|{$NET.IF.IFADMINSTATUS.NOT_MATCHES} |<p>Ignore down(2) administrative status</p> |`^2$` |
-|{$NET.IF.IFALIAS.MATCHES} |<p>-</p> |`.*` |
-|{$NET.IF.IFALIAS.NOT_MATCHES} |<p>-</p> |`CHANGE_IF_NEEDED` |
-|{$NET.IF.IFDESCR.MATCHES} |<p>-</p> |`.*` |
-|{$NET.IF.IFDESCR.NOT_MATCHES} |<p>-</p> |`Miniport|Virtual|Teredo|Kernel|Loopback|Bluetooth|HTTPS|6to4|QoS|Layer|isatap|ISATAP` |
-|{$NET.IF.IFNAME.MATCHES} |<p>-</p> |`^.*$` |
-|{$NET.IF.IFNAME.NOT_MATCHES} |<p>Filter out loopbacks, nulls, docker veth links and docker0 bridge by default</p> |`(^Software Loopback Interface|^NULL[0-9.]*$|^[Ll]o[0-9.]*$|^[Ss]ystem$|^Nu[0-9.]*$|^veth[0-9a-z]+$|docker[0-9]+|br-[a-z0-9]{12})` |
-|{$NET.IF.IFOPERSTATUS.MATCHES} |<p>-</p> |`^.*$` |
-|{$NET.IF.IFOPERSTATUS.NOT_MATCHES} |<p>Ignore notPresent(6)</p> |`^6$` |
-|{$NET.IF.IFTYPE.MATCHES} |<p>-</p> |`.*` |
-|{$NET.IF.IFTYPE.NOT_MATCHES} |<p>-</p> |`CHANGE_IF_NEEDED` |
+| Name | Description | Default |
+|-------------------------------------|-------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------|
+| {$IF.ERRORS.WARN} | <p>-</p> | `2` |
+| {$IF.UTIL.MAX} | <p>-</p> | `90` |
+| {$IFCONTROL} | <p>-</p> | `1` |
+| {$NET.IF.IFADMINSTATUS.MATCHES} | <p>-</p> | `^.*$` |
+| {$NET.IF.IFADMINSTATUS.NOT_MATCHES} | <p>Ignore down(2) administrative status</p> | `^2$` |
+| {$NET.IF.IFALIAS.MATCHES} | <p>-</p> | `.*` |
+| {$NET.IF.IFALIAS.NOT_MATCHES} | <p>-</p> | `CHANGE_IF_NEEDED` |
+| {$NET.IF.IFDESCR.MATCHES} | <p>-</p> | `.*` |
+| {$NET.IF.IFDESCR.NOT_MATCHES} | <p>-</p> | `Miniport|Virtual|Teredo|Kernel|Loopback|Bluetooth|HTTPS|6to4|QoS|Layer|isatap|ISATAP` |
+| {$NET.IF.IFNAME.MATCHES} | <p>-</p> | `^.*$` |
+| {$NET.IF.IFNAME.NOT_MATCHES} | <p>Filter out loopbacks, nulls, docker veth links and docker0 bridge by default</p> | `(^Software Loopback Interface|^NULL[0-9.]*$|^[Ll]o[0-9.]*$|^[Ss]ystem$|^Nu[0-9.]*$|^veth[0-9a-z]+$|docker[0-9]+|br-[a-z0-9]{12})` |
+| {$NET.IF.IFOPERSTATUS.MATCHES} | <p>-</p> | `^.*$` |
+| {$NET.IF.IFOPERSTATUS.NOT_MATCHES} | <p>Ignore notPresent(6)</p> | `^6$` |
+| {$NET.IF.IFTYPE.MATCHES} | <p>-</p> | `.*` |
+| {$NET.IF.IFTYPE.NOT_MATCHES} | <p>-</p> | `CHANGE_IF_NEEDED` |
## Template links
@@ -40,32 +40,32 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Network interfaces discovery |<p>Discovering interfaces from IF-MIB.</p> |SNMP |net.if.discovery<p>**Filter**:</p>AND <p>- A: {#IFADMINSTATUS} MATCHES_REGEX `{$NET.IF.IFADMINSTATUS.MATCHES}`</p><p>- B: {#IFADMINSTATUS} NOT_MATCHES_REGEX `{$NET.IF.IFADMINSTATUS.NOT_MATCHES}`</p><p>- C: {#IFOPERSTATUS} MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.MATCHES}`</p><p>- D: {#IFOPERSTATUS} NOT_MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.NOT_MATCHES}`</p><p>- E: {#IFNAME} MATCHES_REGEX `{$NET.IF.IFNAME.MATCHES}`</p><p>- F: {#IFNAME} NOT_MATCHES_REGEX `{$NET.IF.IFNAME.NOT_MATCHES}`</p><p>- G: {#IFDESCR} MATCHES_REGEX `{$NET.IF.IFDESCR.MATCHES}`</p><p>- H: {#IFDESCR} NOT_MATCHES_REGEX `{$NET.IF.IFDESCR.NOT_MATCHES}`</p><p>- I: {#IFALIAS} MATCHES_REGEX `{$NET.IF.IFALIAS.MATCHES}`</p><p>- J: {#IFALIAS} NOT_MATCHES_REGEX `{$NET.IF.IFALIAS.NOT_MATCHES}`</p><p>- K: {#IFTYPE} MATCHES_REGEX `{$NET.IF.IFTYPE.MATCHES}`</p><p>- L: {#IFTYPE} NOT_MATCHES_REGEX `{$NET.IF.IFTYPE.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|------------------------------|--------------------------------------------|------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Network interfaces discovery | <p>Discovering interfaces from IF-MIB.</p> | SNMP | net.if.discovery<p>**Filter**:</p>AND <p>- A: {#IFADMINSTATUS} MATCHES_REGEX `{$NET.IF.IFADMINSTATUS.MATCHES}`</p><p>- B: {#IFADMINSTATUS} NOT_MATCHES_REGEX `{$NET.IF.IFADMINSTATUS.NOT_MATCHES}`</p><p>- C: {#IFOPERSTATUS} MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.MATCHES}`</p><p>- D: {#IFOPERSTATUS} NOT_MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.NOT_MATCHES}`</p><p>- E: {#IFNAME} MATCHES_REGEX `{$NET.IF.IFNAME.MATCHES}`</p><p>- F: {#IFNAME} NOT_MATCHES_REGEX `{$NET.IF.IFNAME.NOT_MATCHES}`</p><p>- G: {#IFDESCR} MATCHES_REGEX `{$NET.IF.IFDESCR.MATCHES}`</p><p>- H: {#IFDESCR} NOT_MATCHES_REGEX `{$NET.IF.IFDESCR.NOT_MATCHES}`</p><p>- I: {#IFALIAS} MATCHES_REGEX `{$NET.IF.IFALIAS.MATCHES}`</p><p>- J: {#IFALIAS} NOT_MATCHES_REGEX `{$NET.IF.IFALIAS.NOT_MATCHES}`</p><p>- K: {#IFTYPE} MATCHES_REGEX `{$NET.IF.IFTYPE.MATCHES}`</p><p>- L: {#IFTYPE} NOT_MATCHES_REGEX `{$NET.IF.IFTYPE.NOT_MATCHES}`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Operational status |<p>MIB: IF-MIB</p><p>The current operational state of the interface.</p><p>- The testing(3) state indicates that no operational packet scan be passed</p><p>- If ifAdminStatus is down(2) then ifOperStatus should be down(2)</p><p>- If ifAdminStatus is changed to up(1) then ifOperStatus should change to up(1) if the interface is ready to transmit and receive network traffic</p><p>- It should change todormant(5) if the interface is waiting for external actions (such as a serial line waiting for an incoming connection)</p><p>- It should remain in the down(2) state if and only if there is a fault that prevents it from going to the up(1) state</p><p>- It should remain in the notPresent(6) state if the interface has missing(typically, hardware) components.</p> |SNMP |net.if.status[ifOperStatus.{#SNMPINDEX}] |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Bits received |<p>MIB: IF-MIB</p><p>The total number of octets received on the interface,including framing characters. Discontinuities in the value of this counter can occur at re-initialization of the management system, and atother times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.in[ifInOctets.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Bits sent |<p>MIB: IF-MIB</p><p>The total number of octets transmitted out of the interface, including framing characters. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.out[ifOutOctets.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Inbound packets with errors |<p>MIB: IF-MIB</p><p>For packet-oriented interfaces, the number of inbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of inbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.in.errors[ifInErrors.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Outbound packets with errors |<p>MIB: IF-MIB</p><p>For packet-oriented interfaces, the number of outbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of outbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.out.errors[ifOutErrors.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Outbound packets discarded |<p>MIB: IF-MIB</p><p>The number of outbound packets which were chosen to be discarded</p><p>even though no errors had been detected to prevent their being deliverable to a higher-layer protocol.</p><p>One possible reason for discarding such a packet could be to free up buffer space.</p><p>Discontinuities in the value of this counter can occur at re-initialization of the management system,</p><p>and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.out.discards[ifOutDiscards.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Inbound packets discarded |<p>MIB: IF-MIB</p><p>The number of inbound packets which were chosen to be discarded</p><p>even though no errors had been detected to prevent their being deliverable to a higher-layer protocol.</p><p>One possible reason for discarding such a packet could be to free up buffer space.</p><p>Discontinuities in the value of this counter can occur at re-initialization of the management system,</p><p>and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> |SNMP |net.if.in.discards[ifInDiscards.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Interface type |<p>MIB: IF-MIB</p><p>The type of interface.</p><p>Additional values for ifType are assigned by the Internet Assigned NumbersAuthority (IANA),</p><p>through updating the syntax of the IANAifType textual convention.</p> |SNMP |net.if.type[ifType.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|--------------------|---------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------------------------------------------------------------------------------------------|
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Operational status | <p>MIB: IF-MIB</p><p>The current operational state of the interface.</p><p>- The testing(3) state indicates that no operational packets can be passed</p><p>- If ifAdminStatus is down(2) then ifOperStatus should be down(2)</p><p>- If ifAdminStatus is changed to up(1) then ifOperStatus should change to up(1) if the interface is ready to transmit and receive network traffic</p><p>- It should change to dormant(5) if the interface is waiting for external actions (such as a serial line waiting for an incoming connection)</p><p>- It should remain in the down(2) state if and only if there is a fault that prevents it from going to the up(1) state</p><p>- It should remain in the notPresent(6) state if the interface has missing (typically, hardware) components.</p> | SNMP | net.if.status[ifOperStatus.{#SNMPINDEX}] |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Bits received | <p>MIB: IF-MIB</p><p>The total number of octets received on the interface, including framing characters. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.in[ifInOctets.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Bits sent | <p>MIB: IF-MIB</p><p>The total number of octets transmitted out of the interface, including framing characters. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.out[ifOutOctets.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Inbound packets with errors | <p>MIB: IF-MIB</p><p>For packet-oriented interfaces, the number of inbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of inbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.in.errors[ifInErrors.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Outbound packets with errors | <p>MIB: IF-MIB</p><p>For packet-oriented interfaces, the number of outbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of outbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.out.errors[ifOutErrors.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Outbound packets discarded | <p>MIB: IF-MIB</p><p>The number of outbound packets which were chosen to be discarded</p><p>even though no errors had been detected to prevent their being deliverable to a higher-layer protocol.</p><p>One possible reason for discarding such a packet could be to free up buffer space.</p><p>Discontinuities in the value of this counter can occur at re-initialization of the management system,</p><p>and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.out.discards[ifOutDiscards.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Inbound packets discarded | <p>MIB: IF-MIB</p><p>The number of inbound packets which were chosen to be discarded</p><p>even though no errors had been detected to prevent their being deliverable to a higher-layer protocol.</p><p>One possible reason for discarding such a packet could be to free up buffer space.</p><p>Discontinuities in the value of this counter can occur at re-initialization of the management system,</p><p>and at other times as indicated by the value of ifCounterDiscontinuityTime.</p> | SNMP | net.if.in.discards[ifInDiscards.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Interface type | <p>MIB: IF-MIB</p><p>The type of interface.</p><p>Additional values for ifType are assigned by the Internet Assigned Numbers Authority (IANA),</p><p>through updating the syntax of the IANAifType textual convention.</p> | SNMP | net.if.type[ifType.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
 |Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Speed |<p>MIB: IF-MIB</p><p>An estimate of the interface's current bandwidth in units of 1,000,000 bits per second. If this object reports a value of `n' then the speed of the interface is somewhere in the range of `n-500,000' to `n+499,999'. For interfaces which do not vary in bandwidth or for those where no accurate estimation can be made, this object should contain the nominal bandwidth. For a sub-layer which has no concept of bandwidth, this object should be zero.</p> |SNMP |net.if.speed[ifHighSpeed.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1000000`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Interface {#IFNAME}({#IFALIAS}): Link down |<p>This trigger expression works as follows:</p><p>1. Can be triggered if operations status is down.</p><p>2. {$IFCONTROL:"{#IFNAME}"}=1 - user can redefine Context macro to value - 0. That marks this interface as not important. No new trigger will be fired if this interface is down.</p><p>3. {TEMPLATE_NAME:METRIC.diff()}=1) - trigger fires only if operational status was up(1) sometime before. (So, do not fire 'ethernal off' interfaces.)</p><p>WARNING: if closed manually - won't fire again on next poll, because of .diff.</p> |`{$IFCONTROL:"{#IFNAME}"}=1 and ({TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}=2 and {TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].diff()}=1)`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}<>2 or {$IFCONTROL:"{#IFNAME}"}=0` |AVERAGE |<p>Manual close: YES</p> |
-|Interface {#IFNAME}({#IFALIAS}): High bandwidth usage (> {$IF.UTIL.MAX:"{#IFNAME}"}% ) |<p>The network interface utilization is close to its estimated maximum bandwidth.</p> |`({TEMPLATE_NAME:net.if.in[ifInOctets.{#SNMPINDEX}].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Interfaces Windows SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()} or {Interfaces Windows SNMP:net.if.out[ifOutOctets.{#SNMPINDEX}].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Interfaces Windows SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}) and {Interfaces Windows SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}>0`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in[ifInOctets.{#SNMPINDEX}].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Interfaces Windows SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()} and {Interfaces Windows SNMP:net.if.out[ifOutOctets.{#SNMPINDEX}].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Interfaces Windows SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
-|Interface {#IFNAME}({#IFALIAS}): High error rate (> {$IF.ERRORS.WARN:"{#IFNAME}"} for 5m) |<p>Recovers when below 80% of {$IF.ERRORS.WARN:"{#IFNAME}"} threshold</p> |`{TEMPLATE_NAME:net.if.in.errors[ifInErrors.{#SNMPINDEX}].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"} or {Interfaces Windows SNMP:net.if.out.errors[ifOutErrors.{#SNMPINDEX}].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in.errors[ifInErrors.{#SNMPINDEX}].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8 and {Interfaces Windows SNMP:net.if.out.errors[ifOutErrors.{#SNMPINDEX}].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
-|Interface {#IFNAME}({#IFALIAS}): Ethernet has changed to lower speed than it was before |<p>This Ethernet connection has transitioned down from its known maximum speed. This might be a sign of autonegotiation issues. Ack to close.</p> |`{TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].change()}<0 and {TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}>0 and ( {Interfaces Windows SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=6 or {Interfaces Windows SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=7 or {Interfaces Windows SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=11 or {Interfaces Windows SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=62 or {Interfaces Windows SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=69 or {Interfaces Windows SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=117 ) and ({Interfaces Windows SNMP:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}<>2)`<p>Recovery expression:</p>`({TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].change()}>0 and {TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].prev()}>0) or ({Interfaces Windows SNMP:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}=2)` |INFO |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------------------------------------------------------------------------------------|
+| Interface {#IFNAME}({#IFALIAS}): Link down | <p>This trigger expression works as follows:</p><p>1. It can be triggered if the operational status is down.</p><p>2. {$IFCONTROL:"{#IFNAME}"}=1 - the user can redefine the context macro to 0, which marks this interface as not important; no new trigger will be fired if this interface is down.</p><p>3. {TEMPLATE_NAME:METRIC.diff()}=1 - the trigger fires only if the operational status was up(1) at some point before (so it does not fire for interfaces that have always been down).</p><p>WARNING: if closed manually, it won't fire again on the next poll because of .diff.</p> | `{$IFCONTROL:"{#IFNAME}"}=1 and ({TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}=2 and {TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].diff()}=1)`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}<>2 or {$IFCONTROL:"{#IFNAME}"}=0` | AVERAGE | <p>Manual close: YES</p> |
+| Interface {#IFNAME}({#IFALIAS}): High bandwidth usage (> {$IF.UTIL.MAX:"{#IFNAME}"}% ) | <p>The network interface utilization is close to its estimated maximum bandwidth.</p> | `({TEMPLATE_NAME:net.if.in[ifInOctets.{#SNMPINDEX}].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Interfaces Windows SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()} or {Interfaces Windows SNMP:net.if.out[ifOutOctets.{#SNMPINDEX}].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Interfaces Windows SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}) and {Interfaces Windows SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}>0`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in[ifInOctets.{#SNMPINDEX}].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Interfaces Windows SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()} and {Interfaces Windows SNMP:net.if.out[ifOutOctets.{#SNMPINDEX}].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Interfaces Windows SNMP:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
+| Interface {#IFNAME}({#IFALIAS}): High error rate (> {$IF.ERRORS.WARN:"{#IFNAME}"} for 5m) | <p>Recovers when below 80% of {$IF.ERRORS.WARN:"{#IFNAME}"} threshold</p> | `{TEMPLATE_NAME:net.if.in.errors[ifInErrors.{#SNMPINDEX}].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"} or {Interfaces Windows SNMP:net.if.out.errors[ifOutErrors.{#SNMPINDEX}].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in.errors[ifInErrors.{#SNMPINDEX}].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8 and {Interfaces Windows SNMP:net.if.out.errors[ifOutErrors.{#SNMPINDEX}].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
+| Interface {#IFNAME}({#IFALIAS}): Ethernet has changed to lower speed than it was before | <p>This Ethernet connection has transitioned down from its known maximum speed. This might be a sign of autonegotiation issues. Ack to close.</p> | `{TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].change()}<0 and {TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].last()}>0 and ( {Interfaces Windows SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=6 or {Interfaces Windows SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=7 or {Interfaces Windows SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=11 or {Interfaces Windows SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=62 or {Interfaces Windows SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=69 or {Interfaces Windows SNMP:net.if.type[ifType.{#SNMPINDEX}].last()}=117 ) and ({Interfaces Windows SNMP:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}<>2)`<p>Recovery expression:</p>`({TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].change()}>0 and {TEMPLATE_NAME:net.if.speed[ifHighSpeed.{#SNMPINDEX}].prev()}>0) or ({Interfaces Windows SNMP:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}=2)` | INFO | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
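As a rough illustration of how the "Link down" row above maps onto a trigger prototype in the export, here is a minimal sketch. `TEMPLATE_NAME` is the placeholder this README uses for the template name, and the `recovery_mode`, `recovery_expression` and `priority` field names are assumed from the standard export format, not taken from this patch.

```yaml
trigger_prototypes:
  -
    name: 'Interface {#IFNAME}({#IFALIAS}): Link down'
    # Fires only when alerting is enabled for the interface ({$IFCONTROL}=1),
    # the interface is currently down (ifOperStatus=2) and it has just changed state (diff()=1).
    expression: '{$IFCONTROL:"{#IFNAME}"}=1 and ({TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}=2 and {TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].diff()}=1)'
    recovery_mode: RECOVERY_EXPRESSION    # assumption: export keyword for expression-based recovery
    # Recovers as soon as the interface is no longer down or alerting is disabled for it.
    recovery_expression: '{TEMPLATE_NAME:net.if.status[ifOperStatus.{#SNMPINDEX}].last()}<>2 or {$IFCONTROL:"{#IFNAME}"}=0'
    priority: AVERAGE                     # assumption: severity field name in exports
    manual_close: 'YES'
```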
## Feedback
diff --git a/templates/module/interfaces_win_snmp/template_module_interfaces_win_snmp.yaml b/templates/module/interfaces_win_snmp/template_module_interfaces_win_snmp.yaml
index cbd1b0d4aff..ecb85038b27 100644
--- a/templates/module/interfaces_win_snmp/template_module_interfaces_win_snmp.yaml
+++ b/templates/module/interfaces_win_snmp/template_module_interfaces_win_snmp.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-02T19:42:22Z'
+ date: '2021-04-22T11:28:23Z'
groups:
-
name: Templates/Modules
@@ -469,26 +469,28 @@ zabbix_export:
dashboards:
-
name: 'Network interfaces'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
- host: 'Interfaces Windows SNMP'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
+ host: 'Interfaces Windows SNMP'
valuemaps:
-
name: 'IF-MIB::ifOperStatus'
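The hunk above is the core of [ZBXNEXT-6327] for this template: the flat `widgets:` list under a dashboard is replaced by a `pages:` list, each page carrying its own `widgets:`. Consolidated, the new layout looks roughly like this; the second page is a hypothetical addition to illustrate why the extra nesting level exists, since this template ships with a single page.

```yaml
dashboards:
  -
    name: 'Network interfaces'
    pages:
      -                                   # first (and, in this template, only) page
        widgets:
          -
            type: GRAPH_PROTOTYPE
            width: '24'
            height: '12'
            fields:
              -
                type: INTEGER
                name: columns
                value: '1'
              -
                type: INTEGER
                name: rows
                value: '3'
              -
                type: GRAPH_PROTOTYPE
                name: graphid
                value:
                  name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
                  host: 'Interfaces Windows SNMP'
      -                                   # hypothetical second page, not part of this patch
        name: 'Errors'                    # assumption: pages may carry their own name
        widgets: []
```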
diff --git a/templates/module/smart_agent2/template_module_smart_agent2.yaml b/templates/module/smart_agent2/template_module_smart_agent2.yaml
index 5deea55c732..0ed48f3ccc5 100644
--- a/templates/module/smart_agent2/template_module_smart_agent2.yaml
+++ b/templates/module/smart_agent2/template_module_smart_agent2.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-25T19:10:31Z'
+ date: '2021-04-22T11:28:23Z'
groups:
-
name: Templates/Modules
@@ -17,9 +17,6 @@ zabbix_export:
groups:
-
name: Templates/Modules
- applications:
- -
- name: 'Zabbix raw items'
items:
-
name: 'SMART: Get attributes'
@@ -27,9 +24,10 @@ zabbix_export:
history: '0'
trends: '0'
value_type: TEXT
- applications:
+ tags:
-
- name: 'Zabbix raw items'
+ tag: Application
+ value: 'Zabbix raw items'
discovery_rules:
-
name: 'Attribute discovery'
@@ -43,9 +41,6 @@ zabbix_export:
key: 'smart.disk.error[{#NAME},{#ID}]'
delay: '0'
history: 7d
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -57,6 +52,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
trigger_prototypes:
-
expression: '{last()} <= {#THRESH}'
@@ -96,9 +95,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'This field indicates critical warnings for the state of the controller.'
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -110,6 +106,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
-
name: 'SMART [{#NAME}]: Power on hours'
type: DEPENDENT
@@ -118,9 +118,6 @@ zabbix_export:
history: 7d
units: s
description: 'Count of hours in power-on state. The raw value of this attribute shows total count of hours (or minutes, or seconds, depending on manufacturer) in power-on state. "By default, the total expected lifetime of a hard disk in perfect condition is defined as 5 years (running every day and night on all days). This is equal to 1825 days in 24/7 mode or 43800 hours." On some pre-2005 drives, this raw value may advance erratically and/or "wrap around" (reset to zero periodically). https://en.wikipedia.org/wiki/S.M.A.R.T.#Known_ATA_S.M.A.R.T._attributes'
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -128,6 +125,10 @@ zabbix_export:
- '$[?(@.disk_name==''{#NAME}'')].power_on_time.hours.first()'
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
-
name: 'SMART [{#NAME}]: Media errors'
type: DEPENDENT
@@ -135,9 +136,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Contains the number of occurrences where the controller detected an unrecovered data integrity error. Errors such as uncorrectable ECC, CRC checksum failure, or LBA tag mismatch are included in this field.'
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -149,6 +147,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
-
name: 'SMART [{#NAME}]: Device model'
type: DEPENDENT
@@ -157,9 +159,6 @@ zabbix_export:
history: 7d
trends: '0'
value_type: CHAR
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -171,6 +170,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
-
name: 'SMART [{#NAME}]: Percentage used'
type: DEPENDENT
@@ -179,9 +182,6 @@ zabbix_export:
history: 7d
units: '%'
description: 'Contains a vendor specific estimate of the percentage of NVM subsystem life used based on the actual usage and the manufacturer’s prediction of NVM life. A value of 100 indicates that the estimated endurance of the NVM in the NVM subsystem has been consumed, but may not indicate an NVM subsystem failure. The value is allowed to exceed 100. Percentages greater than 254 shall be represented as 255. This value shall be updated once per power-on hour (when the controller is not in a sleep state).'
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -193,6 +193,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
trigger_prototypes:
-
expression: '{last()}>90'
@@ -206,9 +210,6 @@ zabbix_export:
history: 7d
trends: '0'
value_type: CHAR
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -220,6 +221,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
trigger_prototypes:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -235,9 +240,6 @@ zabbix_export:
history: 7d
units: °C
description: 'Current drive temperature.'
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -249,6 +251,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
trigger_prototypes:
-
expression: '{avg(5m)}>{$SMART.TEMPERATURE.MAX.CRIT}'
@@ -271,9 +277,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'The disk is passed the SMART self-test or not.'
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -285,6 +288,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
trigger_prototypes:
-
expression: '{last()}="false"'
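The repeated hunks in this file all perform the same mechanical conversion: per-item `applications` and `application_prototypes` blocks are dropped and replaced by item-level `tags` carrying an `Application` tag. In condensed form, the change per item prototype is roughly:

```yaml
# Before (5.2-style export):
#   application_prototypes:
#     -
#       name: '{#DISKTYPE} {#NAME}'
#
# After (5.4-style export):
tags:
  -
    tag: Application
    value: '{#DISKTYPE} {#NAME}'
```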
diff --git a/templates/module/smart_agent2_active/template_module_smart_agent2_active.yaml b/templates/module/smart_agent2_active/template_module_smart_agent2_active.yaml
index 4345d8f4832..928725c1e57 100644
--- a/templates/module/smart_agent2_active/template_module_smart_agent2_active.yaml
+++ b/templates/module/smart_agent2_active/template_module_smart_agent2_active.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-25T19:10:37Z'
+ date: '2021-04-22T11:28:22Z'
groups:
-
name: Templates/Modules
@@ -17,9 +17,6 @@ zabbix_export:
groups:
-
name: Templates/Modules
- applications:
- -
- name: 'Zabbix raw items'
items:
-
name: 'SMART: Get attributes'
@@ -28,9 +25,10 @@ zabbix_export:
history: '0'
trends: '0'
value_type: TEXT
- applications:
+ tags:
-
- name: 'Zabbix raw items'
+ tag: Application
+ value: 'Zabbix raw items'
discovery_rules:
-
name: 'Attribute discovery'
@@ -45,9 +43,6 @@ zabbix_export:
key: 'smart.disk.error[{#NAME},{#ID}]'
delay: '0'
history: 7d
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -59,6 +54,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
trigger_prototypes:
-
expression: '{last()} <= {#THRESH}'
@@ -99,9 +98,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'This field indicates critical warnings for the state of the controller.'
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -113,6 +109,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
-
name: 'SMART [{#NAME}]: Power on hours'
type: DEPENDENT
@@ -121,9 +121,6 @@ zabbix_export:
history: 7d
units: s
description: 'Count of hours in power-on state. The raw value of this attribute shows total count of hours (or minutes, or seconds, depending on manufacturer) in power-on state. "By default, the total expected lifetime of a hard disk in perfect condition is defined as 5 years (running every day and night on all days). This is equal to 1825 days in 24/7 mode or 43800 hours." On some pre-2005 drives, this raw value may advance erratically and/or "wrap around" (reset to zero periodically). https://en.wikipedia.org/wiki/S.M.A.R.T.#Known_ATA_S.M.A.R.T._attributes'
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -131,6 +128,10 @@ zabbix_export:
- '$[?(@.disk_name==''{#NAME}'')].power_on_time.hours.first()'
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
-
name: 'SMART [{#NAME}]: Media errors'
type: DEPENDENT
@@ -138,9 +139,6 @@ zabbix_export:
delay: '0'
history: 7d
description: 'Contains the number of occurrences where the controller detected an unrecovered data integrity error. Errors such as uncorrectable ECC, CRC checksum failure, or LBA tag mismatch are included in this field.'
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -152,6 +150,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
-
name: 'SMART [{#NAME}]: Device model'
type: DEPENDENT
@@ -160,9 +162,6 @@ zabbix_export:
history: 7d
trends: '0'
value_type: CHAR
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -174,6 +173,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
-
name: 'SMART [{#NAME}]: Percentage used'
type: DEPENDENT
@@ -182,9 +185,6 @@ zabbix_export:
history: 7d
units: '%'
description: 'Contains a vendor specific estimate of the percentage of NVM subsystem life used based on the actual usage and the manufacturer’s prediction of NVM life. A value of 100 indicates that the estimated endurance of the NVM in the NVM subsystem has been consumed, but may not indicate an NVM subsystem failure. The value is allowed to exceed 100. Percentages greater than 254 shall be represented as 255. This value shall be updated once per power-on hour (when the controller is not in a sleep state).'
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -196,6 +196,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
trigger_prototypes:
-
expression: '{last()}>90'
@@ -209,9 +213,6 @@ zabbix_export:
history: 7d
trends: '0'
value_type: CHAR
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -223,6 +224,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
trigger_prototypes:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -238,9 +243,6 @@ zabbix_export:
history: 7d
units: °C
description: 'Current drive temperature.'
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -252,6 +254,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
trigger_prototypes:
-
expression: '{avg(5m)}>{$SMART.TEMPERATURE.MAX.CRIT}'
@@ -274,9 +280,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'The disk is passed the SMART self-test or not.'
- application_prototypes:
- -
- name: '{#DISKTYPE} {#NAME}'
preprocessing:
-
type: JSONPATH
@@ -288,6 +291,10 @@ zabbix_export:
- 6h
master_item:
key: smart.disk.get
+ tags:
+ -
+ tag: Application
+ value: '{#DISKTYPE} {#NAME}'
trigger_prototypes:
-
expression: '{last()}="false"'
diff --git a/templates/module/zabbix_agent/README.md b/templates/module/zabbix_agent/README.md
index cebc912ea5f..39ad94c62a4 100644
--- a/templates/module/zabbix_agent/README.md
+++ b/templates/module/zabbix_agent/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,9 +15,9 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$AGENT.TIMEOUT} |<p>Timeout after which agent is considered unavailable. Works only for agents reachable from Zabbix server/proxy (passive mode).</p> |`3m` |
+| Name | Description | Default |
+|------------------|--------------------------------------------------------------------------------------------------------------------------------------|---------|
+| {$AGENT.TIMEOUT} | <p>Timeout after which agent is considered unavailable. Works only for agents reachable from Zabbix server/proxy (passive mode).</p> | `3m` |
## Template links
@@ -28,19 +28,19 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Monitoring_agent |Version of Zabbix agent running |<p>-</p> |ZABBIX_PASSIVE |agent.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Monitoring_agent |Host name of Zabbix agent running |<p>-</p> |ZABBIX_PASSIVE |agent.hostname<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Monitoring_agent |Version of Zabbix agent running |<p>-</p> |ZABBIX_PASSIVE |agent.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Monitoring_agent |Zabbix agent ping |<p>The agent always returns 1 for this item. It could be used in combination with nodata() for availability check.</p> |ZABBIX_PASSIVE |agent.ping |
-|Status |Zabbix agent availability |<p>Monitoring agent availability status</p> |INTERNAL |zabbix[host,agent,available] |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|-----------------------------------|------------------------------------------------------------------------------------------------------------------------|----------------|-----------------------------------------------------------------------------------|
+| Monitoring_agent | Version of Zabbix agent running | <p>-</p> | ZABBIX_PASSIVE | agent.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Monitoring_agent | Host name of Zabbix agent running | <p>-</p> | ZABBIX_PASSIVE | agent.hostname<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Monitoring_agent | Zabbix agent ping | <p>The agent always returns 1 for this item. It could be used in combination with nodata() for an availability check.</p> | ZABBIX_PASSIVE | agent.ping |
+| Status | Zabbix agent availability | <p>Monitoring agent availability status</p> | INTERNAL | zabbix[host,agent,available] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Zabbix agent is not available (for {$AGENT.TIMEOUT}) |<p>For passive only agents, host availability is used with {$AGENT.TIMEOUT} as time threshold.</p> |`{TEMPLATE_NAME:zabbix[host,agent,available].max({$AGENT.TIMEOUT})}=0` |AVERAGE |<p>Manual close: YES</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|------------------------------------------------------|----------------------------------------------------------------------------------------------------|------------------------------------------------------------------------|----------|----------------------------------|
+| Zabbix agent is not available (for {$AGENT.TIMEOUT}) | <p>For passive only agents, host availability is used with {$AGENT.TIMEOUT} as time threshold.</p> | `{TEMPLATE_NAME:zabbix[host,agent,available].max({$AGENT.TIMEOUT})}=0` | AVERAGE | <p>Manual close: YES</p> |
## Feedback
@@ -50,7 +50,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -62,9 +62,9 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$AGENT.NODATA_TIMEOUT} |<p>No data timeout for active agents. Consider to keep it relatively high.</p> |`30m` |
+| Name | Description | Default |
+|-------------------------|--------------------------------------------------------------------------------|---------|
+| {$AGENT.NODATA_TIMEOUT} | <p>No data timeout for active agents. Consider keeping it relatively high.</p> | `30m` |
## Template links
@@ -75,18 +75,18 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Monitoring_agent |Version of Zabbix agent running |<p>-</p> |ZABBIX_ACTIVE |agent.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Monitoring_agent |Host name of Zabbix agent running |<p>-</p> |ZABBIX_ACTIVE |agent.hostname<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Monitoring_agent |Version of Zabbix agent running |<p>-</p> |ZABBIX_ACTIVE |agent.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Status |Zabbix agent ping |<p>The agent always returns 1 for this item. It could be used in combination with nodata() for availability check.</p> |ZABBIX_ACTIVE |agent.ping |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|-----------------------------------|------------------------------------------------------------------------------------------------------------------------|---------------|-----------------------------------------------------------------------------------|
+| Monitoring_agent | Version of Zabbix agent running | <p>-</p> | ZABBIX_ACTIVE | agent.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Monitoring_agent | Host name of Zabbix agent running | <p>-</p> | ZABBIX_ACTIVE | agent.hostname<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Status | Zabbix agent ping | <p>The agent always returns 1 for this item. It could be used in combination with nodata() for an availability check.</p> | ZABBIX_ACTIVE | agent.ping |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Zabbix agent is not available (or nodata for {$AGENT.NODATA_TIMEOUT}) |<p>For active agents, nodata() with agent.ping is used with {$AGENT.NODATA_TIMEOUT} as time threshold.</p> |`{TEMPLATE_NAME:agent.ping.nodata({$AGENT.NODATA_TIMEOUT})}=1` |AVERAGE |<p>Manual close: YES</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-----------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------|----------|----------------------------------|
+| Zabbix agent is not available (or nodata for {$AGENT.NODATA_TIMEOUT}) | <p>For active agents, nodata() with agent.ping is used with {$AGENT.NODATA_TIMEOUT} as time threshold.</p> | `{TEMPLATE_NAME:agent.ping.nodata({$AGENT.NODATA_TIMEOUT})}=1` | AVERAGE | <p>Manual close: YES</p> |
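Both availability triggers above are tuned purely through user macros, so they can be adjusted per host without touching the templates. A minimal host-level override sketch follows; the `hosts:`/`macros:` block layout is assumed from the standard export format, and the host name is purely illustrative.

```yaml
# Hypothetical host-level macro overrides - relax the thresholds for a flaky site.
hosts:
  -
    host: 'example-branch-office-host'    # illustrative name only
    macros:
      -
        macro: '{$AGENT.NODATA_TIMEOUT}'
        value: 1h                         # active agent: allow up to 1h without data
      -
        macro: '{$AGENT.TIMEOUT}'
        value: 10m                        # passive agent: availability threshold
```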
## Feedback
diff --git a/templates/net/alcatel_timetra_snmp/README.md b/templates/net/alcatel_timetra_snmp/README.md
index b6daacd472d..5ba185d2490 100644
--- a/templates/net/alcatel_timetra_snmp/README.md
+++ b/templates/net/alcatel_timetra_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,64 +15,64 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$FAN_CRIT_STATUS} |<p>-</p> |`4` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
-|{$PSU_CRIT_STATUS} |<p>-</p> |`4` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`75` |
-|{$TEMP_WARN} |<p>-</p> |`65` |
+| Name | Description | Default |
+|--------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$FAN_CRIT_STATUS} | <p>-</p> | `4` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
+| {$PSU_CRIT_STATUS} | <p>-</p> | `4` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `75` |
+| {$TEMP_WARN} | <p>-</p> | `65` |
## Template links
-|Name|
-|----|
-|EtherLike-MIB SNMP |
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|--------------------|
+| EtherLike-MIB SNMP |
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Temperature Discovery |<p>-</p> |SNMP |temperature.discovery<p>**Filter**:</p>AND_OR <p>- A: {#TEMP_SENSOR} MATCHES_REGEX `1`</p> |
-|FAN Discovery |<p>-</p> |SNMP |fan.discovery<p>**Filter**:</p>AND_OR <p>- A: {#SNMPVALUE} MATCHES_REGEX `[^1]`</p> |
-|PSU Discovery |<p>-</p> |SNMP |psu.discovery |
-|Entity Serial Numbers Discovery |<p>-</p> |SNMP |entity_sn.discovery<p>**Filter**:</p>AND <p>- B: {#ENT_SN} MATCHES_REGEX `.+`</p> |
+| Name | Description | Type | Key and additional info |
+|---------------------------------|-------------|------|--------------------------------------------------------------------------------------------|
+| Temperature Discovery | <p>-</p> | SNMP | temperature.discovery<p>**Filter**:</p>AND_OR <p>- A: {#TEMP_SENSOR} MATCHES_REGEX `1`</p> |
+| FAN Discovery | <p>-</p> | SNMP | fan.discovery<p>**Filter**:</p>AND_OR <p>- A: {#SNMPVALUE} MATCHES_REGEX `[^1]`</p> |
+| PSU Discovery | <p>-</p> | SNMP | psu.discovery |
+| Entity Serial Numbers Discovery | <p>-</p> | SNMP | entity_sn.discovery<p>**Filter**:</p>AND <p>- B: {#ENT_SN} MATCHES_REGEX `.+`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |CPU utilization |<p>MIB: TIMETRA-SYSTEM-MIB</p><p>The value of sgiCpuUsage indicates the current CPU utilization for the system.</p> |SNMP |system.cpu.util[sgiCpuUsage.0] |
-|Fans |#{#SNMPINDEX}: Fan status |<p>MIB: TIMETRA-SYSTEM-MIB</p><p>Current status of the Fan tray.</p> |SNMP |sensor.fan.status[tmnxChassisFanOperStatus.{#SNMPINDEX}] |
-|Inventory |Hardware model name |<p>MIB: SNMPv2-MIB</p> |SNMP |system.hw.model<p>**Preprocessing**:</p><p>- REGEX: `^(\w|-|\.|/)+ (\w|-|\.|/)+ (.+) Copyright \3`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Operating system |<p>MIB: SNMPv2-MIB</p> |SNMP |system.sw.os[sysDescr.0]<p>**Preprocessing**:</p><p>- REGEX: `^((\w|-|\.|/)+) \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |{#ENT_NAME}: Hardware serial number |<p>MIB: TIMETRA-CHASSIS-MIB</p> |SNMP |system.hw.serialnumber[tmnxHwSerialNumber.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |Used memory |<p>MIB: TIMETRA-SYSTEM-MIB</p><p>The value of sgiKbMemoryUsed indicates the total pre-allocated pool memory, in kilobytes, currently in use on the system.</p> |SNMP |vm.memory.used[sgiKbMemoryUsed.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |Available memory |<p>MIB: TIMETRA-SYSTEM-MIB</p><p>The value of sgiKbMemoryAvailable indicates the amount of free memory, in kilobytes, in the overall system that is not allocated to memory pools, but is available in case a memory pool needs to grow.</p> |SNMP |vm.memory.available[sgiKbMemoryAvailable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |Total memory |<p>Total memory in Bytes</p> |CALCULATED |vm.memory.total[snmp]<p>**Expression**:</p>`last("vm.memory.available[sgiKbMemoryAvailable.0]")+last("vm.memory.used[sgiKbMemoryUsed.0]")` |
-|Memory |Memory utilization |<p>Memory utilization in %</p> |CALCULATED |vm.memory.util[vm.memory.util.0]<p>**Expression**:</p>`last("vm.memory.used[sgiKbMemoryUsed.0]")/(last("vm.memory.available[sgiKbMemoryAvailable.0]")+last("vm.memory.used[sgiKbMemoryUsed.0]"))*100` |
-|Power_supply |#{#SNMPINDEX}: Power supply status |<p>MIB: TIMETRA-SYSTEM-MIB</p><p>The overall status of an equipped power supply. </p><p>For AC multiple powersupplies, this represents the overall status of the first power supplyin the tray (or shelf).</p><p>For any other type, this represents the overall status of the power supply.</p><p>If tmnxChassisPowerSupply1Status is'deviceStateOk', then all monitored statuses are 'deviceStateOk'.</p><p>A value of 'deviceStateFailed' represents a condition where at least one monitored status is in a failed state.</p> |SNMP |sensor.psu.status[tmnxChassisPowerSupply1Status.{#SNMPINDEX}] |
-|Power_supply |#{#SNMPINDEX}: Power supply status |<p>MIB: TIMETRA-SYSTEM-MIB</p><p>The overall status of an equipped power supply.</p><p>For AC multiple powersupplies, this represents the overall status of the second power supplyin the tray (or shelf).</p><p>For any other type, this field is unused and set to 'deviceNotEquipped'.</p><p>If tmnxChassisPowerSupply2Status is 'deviceStateOk', then all monitored statuses are 'deviceStateOk'.</p><p>A value of 'deviceStateFailed' represents a condition where at least one monitored status is in a failed state.</p> |SNMP |sensor.psu.status[tmnxChassisPowerSupply2Status.{#SNMPINDEX}] |
-|Temperature |{#SNMPVALUE}: Temperature |<p>MIB: TIMETRA-SYSTEM-MIB</p><p>The current temperature reading in degrees celsius from this hardware component's temperature sensor. If this component does not contain a temperature sensor, then the value -1 is returned.</p> |SNMP |sensor.temp.value[tmnxHwTemperature.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|--------------|-------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CPU | CPU utilization | <p>MIB: TIMETRA-SYSTEM-MIB</p><p>The value of sgiCpuUsage indicates the current CPU utilization for the system.</p> | SNMP | system.cpu.util[sgiCpuUsage.0] |
+| Fans | #{#SNMPINDEX}: Fan status | <p>MIB: TIMETRA-SYSTEM-MIB</p><p>Current status of the Fan tray.</p> | SNMP | sensor.fan.status[tmnxChassisFanOperStatus.{#SNMPINDEX}] |
+| Inventory | Hardware model name | <p>MIB: SNMPv2-MIB</p> | SNMP | system.hw.model<p>**Preprocessing**:</p><p>- REGEX: `^(\w|-|\.|/)+ (\w|-|\.|/)+ (.+) Copyright \3`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Operating system | <p>MIB: SNMPv2-MIB</p> | SNMP | system.sw.os[sysDescr.0]<p>**Preprocessing**:</p><p>- REGEX: `^((\w|-|\.|/)+) \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | {#ENT_NAME}: Hardware serial number | <p>MIB: TIMETRA-CHASSIS-MIB</p> | SNMP | system.hw.serialnumber[tmnxHwSerialNumber.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory | Used memory | <p>MIB: TIMETRA-SYSTEM-MIB</p><p>The value of sgiKbMemoryUsed indicates the total pre-allocated pool memory, in kilobytes, currently in use on the system.</p> | SNMP | vm.memory.used[sgiKbMemoryUsed.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | Available memory | <p>MIB: TIMETRA-SYSTEM-MIB</p><p>The value of sgiKbMemoryAvailable indicates the amount of free memory, in kilobytes, in the overall system that is not allocated to memory pools, but is available in case a memory pool needs to grow.</p> | SNMP | vm.memory.available[sgiKbMemoryAvailable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | Total memory | <p>Total memory in Bytes</p> | CALCULATED | vm.memory.total[snmp]<p>**Expression**:</p>`last("vm.memory.available[sgiKbMemoryAvailable.0]")+last("vm.memory.used[sgiKbMemoryUsed.0]")` |
+| Memory | Memory utilization | <p>Memory utilization in %</p> | CALCULATED | vm.memory.util[vm.memory.util.0]<p>**Expression**:</p>`last("vm.memory.used[sgiKbMemoryUsed.0]")/(last("vm.memory.available[sgiKbMemoryAvailable.0]")+last("vm.memory.used[sgiKbMemoryUsed.0]"))*100` |
+| Power_supply | #{#SNMPINDEX}: Power supply status | <p>MIB: TIMETRA-SYSTEM-MIB</p><p>The overall status of an equipped power supply.</p><p>For AC multiple power supplies, this represents the overall status of the first power supply in the tray (or shelf).</p><p>For any other type, this represents the overall status of the power supply.</p><p>If tmnxChassisPowerSupply1Status is 'deviceStateOk', then all monitored statuses are 'deviceStateOk'.</p><p>A value of 'deviceStateFailed' represents a condition where at least one monitored status is in a failed state.</p> | SNMP | sensor.psu.status[tmnxChassisPowerSupply1Status.{#SNMPINDEX}] |
+| Power_supply | #{#SNMPINDEX}: Power supply status | <p>MIB: TIMETRA-SYSTEM-MIB</p><p>The overall status of an equipped power supply.</p><p>For AC multiple power supplies, this represents the overall status of the second power supply in the tray (or shelf).</p><p>For any other type, this field is unused and set to 'deviceNotEquipped'.</p><p>If tmnxChassisPowerSupply2Status is 'deviceStateOk', then all monitored statuses are 'deviceStateOk'.</p><p>A value of 'deviceStateFailed' represents a condition where at least one monitored status is in a failed state.</p> | SNMP | sensor.psu.status[tmnxChassisPowerSupply2Status.{#SNMPINDEX}] |
+| Temperature | {#SNMPVALUE}: Temperature | <p>MIB: TIMETRA-SYSTEM-MIB</p><p>The current temperature reading in degrees celsius from this hardware component's temperature sensor. If this component does not contain a temperature sensor, then the value -1 is returned.</p> | SNMP | sensor.temp.value[tmnxHwTemperature.{#SNMPINDEX}] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[sgiCpuUsage.0].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|#{#SNMPINDEX}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[tmnxChassisFanOperStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|Operating system description has changed |<p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> |`{TEMPLATE_NAME:system.sw.os[sysDescr.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[sysDescr.0].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#ENT_NAME}: Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber[tmnxHwSerialNumber.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[tmnxHwSerialNumber.{#SNMPINDEX}].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[vm.memory.util.0].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
-|#{#SNMPINDEX}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[tmnxChassisPowerSupply1Status.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|#{#SNMPINDEX}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[tmnxChassisPowerSupply2Status.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|{#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[tmnxHwTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tmnxHwTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|{#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[tmnxHwTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tmnxHwTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|{#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[tmnxHwTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tmnxHwTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------|
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[sgiCpuUsage.0].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| #{#SNMPINDEX}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[tmnxChassisFanOperStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| Operating system description has changed | <p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> | `{TEMPLATE_NAME:system.sw.os[sysDescr.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[sysDescr.0].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| {#ENT_NAME}: Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber[tmnxHwSerialNumber.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[tmnxHwSerialNumber.{#SNMPINDEX}].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[vm.memory.util.0].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
+| #{#SNMPINDEX}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[tmnxChassisPowerSupply1Status.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| #{#SNMPINDEX}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[tmnxChassisPowerSupply2Status.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| {#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[tmnxHwTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tmnxHwTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[tmnxHwTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tmnxHwTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| {#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[tmnxHwTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tmnxHwTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
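The two CALCULATED items in the table above derive the total and the utilization from the raw TIMETRA-SYSTEM-MIB counters. A minimal sketch of the utilization item is shown below, assuming the calculated-item formula lives in the `params` field of the export (the field name is an assumption; the expression is copied from the table).

```yaml
items:
  -
    name: 'Memory utilization'
    type: CALCULATED
    key: 'vm.memory.util[vm.memory.util.0]'
    units: '%'
    # used / (available + used) * 100, built from the two SNMP memory items above
    params: 'last("vm.memory.used[sgiKbMemoryUsed.0]")/(last("vm.memory.available[sgiKbMemoryAvailable.0]")+last("vm.memory.used[sgiKbMemoryUsed.0]"))*100'
```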
## Feedback
diff --git a/templates/net/arista_snmp/README.md b/templates/net/arista_snmp/README.md
index 19e79bafdec..fbb3c36b8c1 100644
--- a/templates/net/arista_snmp/README.md
+++ b/templates/net/arista_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
This template was tested on:
@@ -19,66 +19,66 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$FAN_CRIT_STATUS} |<p>-</p> |`3` |
-|{$MEMORY.NAME.NOT_MATCHES} |<p>Filter is overridden to ignore RAM(Cache) and RAM(Buffers) memory objects.</p> |`(Buffer|Cache)` |
-|{$PSU_CRIT_STATUS} |<p>-</p> |`2` |
-|{$VFS.FS.PUSED.MAX.CRIT} |<p>-</p> |`95` |
-|{$VFS.FS.PUSED.MAX.WARN} |<p>-</p> |`90` |
+| Name | Description | Default |
+|----------------------------|-----------------------------------------------------------------------------------|------------------|
+| {$FAN_CRIT_STATUS} | <p>-</p> | `3` |
+| {$MEMORY.NAME.NOT_MATCHES} | <p>Filter is overridden to ignore RAM(Cache) and RAM(Buffers) memory objects.</p> | `(Buffer|Cache)` |
+| {$PSU_CRIT_STATUS} | <p>-</p> | `2` |
+| {$VFS.FS.PUSED.MAX.CRIT} | <p>-</p> | `95` |
+| {$VFS.FS.PUSED.MAX.WARN} | <p>-</p> | `90` |
## Template links
-|Name|
-|----|
-|EtherLike-MIB SNMP |
-|Generic SNMP |
-|HOST-RESOURCES-MIB SNMP |
-|Interfaces SNMP |
+| Name |
+|-------------------------|
+| EtherLike-MIB SNMP |
+| Generic SNMP |
+| HOST-RESOURCES-MIB SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Temperature discovery |<p>ENTITY-SENSORS-MIB::EntitySensorDataType discovery with celsius filter</p> |DEPENDENT |temp.discovery<p>**Filter**:</p>AND <p>- B: {#SENSOR_TYPE} MATCHES_REGEX `8`</p><p>- B: {#SENSOR_PRECISION} MATCHES_REGEX `1`</p> |
-|Fan discovery |<p>ENTITY-SENSORS-MIB::EntitySensorDataType discovery with rpm filter</p> |DEPENDENT |fan.discovery<p>**Filter**:</p>OR <p>- B: {#SENSOR_TYPE} MATCHES_REGEX `10`</p> |
-|Voltage discovery |<p>ENTITY-SENSORS-MIB::EntitySensorDataType discovery with volts filter</p> |DEPENDENT |voltage.discovery<p>**Filter**:</p>OR <p>- B: {#SENSOR_TYPE} MATCHES_REGEX `3|4`</p> |
-|Entity discovery |<p>-</p> |SNMP |entity.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `3`</p> |
-|PSU discovery |<p>-</p> |SNMP |psu.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `6`</p> |
+| Name | Description | Type | Key and additional info |
+|-----------------------|-------------------------------------------------------------------------------|-----------|-----------------------------------------------------------------------------------------------------------------------------------|
+| Temperature discovery | <p>ENTITY-SENSORS-MIB::EntitySensorDataType discovery with celsius filter</p> | DEPENDENT | temp.discovery<p>**Filter**:</p>AND <p>- B: {#SENSOR_TYPE} MATCHES_REGEX `8`</p><p>- B: {#SENSOR_PRECISION} MATCHES_REGEX `1`</p> |
+| Fan discovery | <p>ENTITY-SENSORS-MIB::EntitySensorDataType discovery with rpm filter</p> | DEPENDENT | fan.discovery<p>**Filter**:</p>OR <p>- B: {#SENSOR_TYPE} MATCHES_REGEX `10`</p> |
+| Voltage discovery | <p>ENTITY-SENSORS-MIB::EntitySensorDataType discovery with volts filter</p> | DEPENDENT | voltage.discovery<p>**Filter**:</p>OR <p>- B: {#SENSOR_TYPE} MATCHES_REGEX `3|4`</p> |
+| Entity discovery | <p>-</p> | SNMP | entity.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `3`</p> |
+| PSU discovery | <p>-</p> | SNMP | psu.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `6`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Fans |{#SENSOR_INFO}: Fan speed |<p>MIB: ENTITY-SENSORS-MIB</p><p>The most recent measurement obtained by the agent for this sensor.</p><p>To correctly interpret the value of this object, the associated entPhySensorType,</p><p>entPhySensorScale, and entPhySensorPrecision objects must also be examined.</p> |SNMP |sensor.fan.speed[entPhySensorValue.{#SNMPINDEX}] |
-|Fans |{#SENSOR_INFO}: Fan status |<p>MIB: ENTITY-SENSORS-MIB</p><p>The operational status of the sensor {#SENSOR_INFO}</p> |SNMP |sensor.fan.status[entPhySensorOperStatus.{#SNMPINDEX}] |
-|Inventory |{#ENT_NAME}: Hardware model name |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.model[entPhysicalModelName.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Inventory |{#ENT_NAME}: Hardware serial number |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
-|Power_supply |{#ENT_NAME}: Power supply status |<p>MIB: ENTITY-STATE-MIB</p> |SNMP |sensor.psu.status[entStateOper.{#SNMPINDEX}] |
-|Temperature |{#SENSOR_INFO}: Temperature |<p>MIB: ENTITY-SENSORS-MIB</p><p>The most recent measurement obtained by the agent for this sensor.</p><p>To correctly interpret the value of this object, the associated entPhySensorType,</p><p>entPhySensorScale, and entPhySensorPrecision objects must also be examined.</p> |SNMP |sensor.temp.value[entPhySensorValue.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
-|Temperature |{#SENSOR_INFO}: Temperature status |<p>MIB: ENTITY-SENSORS-MIB</p><p>The operational status of the sensor {#SENSOR_INFO}</p> |SNMP |sensor.temp.status[entPhySensorOperStatus.{#SNMPINDEX}] |
-|Voltage |{#SENSOR_INFO}: Voltage |<p>MIB: ENTITY-SENSORS-MIB</p><p>The most recent measurement obtained by the agent for this sensor.</p><p>To correctly interpret the value of this object, the associated entPhySensorType,</p><p>entPhySensorScale, and entPhySensorPrecision objects must also be examined.</p> |SNMP |sensor.voltage.value[entPhySensorValue.{#SNMPINDEX}] |
-|Zabbix_raw_items |Get sensors |<p>Gets sensors with type, description, and thresholds.</p> |SNMP |sensors.get<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|-------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|------------------------------------------------------------------------------------------------------------------------------|
+| Fans | {#SENSOR_INFO}: Fan speed | <p>MIB: ENTITY-SENSORS-MIB</p><p>The most recent measurement obtained by the agent for this sensor.</p><p>To correctly interpret the value of this object, the associated entPhySensorType,</p><p>entPhySensorScale, and entPhySensorPrecision objects must also be examined.</p> | SNMP | sensor.fan.speed[entPhySensorValue.{#SNMPINDEX}] |
+| Fans | {#SENSOR_INFO}: Fan status | <p>MIB: ENTITY-SENSORS-MIB</p><p>The operational status of the sensor {#SENSOR_INFO}</p> | SNMP | sensor.fan.status[entPhySensorOperStatus.{#SNMPINDEX}] |
+| Inventory | {#ENT_NAME}: Hardware model name | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.model[entPhysicalModelName.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+| Inventory | {#ENT_NAME}: Hardware serial number | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
+| Power_supply | {#ENT_NAME}: Power supply status | <p>MIB: ENTITY-STATE-MIB</p> | SNMP | sensor.psu.status[entStateOper.{#SNMPINDEX}] |
+| Temperature | {#SENSOR_INFO}: Temperature | <p>MIB: ENTITY-SENSORS-MIB</p><p>The most recent measurement obtained by the agent for this sensor.</p><p>To correctly interpret the value of this object, the associated entPhySensorType,</p><p>entPhySensorScale, and entPhySensorPrecision objects must also be examined.</p> | SNMP | sensor.temp.value[entPhySensorValue.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
+| Temperature | {#SENSOR_INFO}: Temperature status | <p>MIB: ENTITY-SENSORS-MIB</p><p>The operational status of the sensor {#SENSOR_INFO}</p> | SNMP | sensor.temp.status[entPhySensorOperStatus.{#SNMPINDEX}] |
+| Voltage | {#SENSOR_INFO}: Voltage | <p>MIB: ENTITY-SENSORS-MIB</p><p>The most recent measurement obtained by the agent for this sensor.</p><p>To correctly interpret the value of this object, the associated entPhySensorType,</p><p>entPhySensorScale, and entPhySensorPrecision objects must also be examined.</p> | SNMP | sensor.voltage.value[entPhySensorValue.{#SNMPINDEX}] |
+| Zabbix_raw_items | Get sensors | <p>Gets sensors with type, description, and thresholds.</p> | SNMP | sensors.get<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
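The sensor discovery rules above are DEPENDENT rules driven by the single `sensors.get` raw item, with LLD filters selecting the sensor class. A rough sketch of the temperature rule follows, assuming the usual export layout for `filter` conditions; the `evaltype`, `conditions` and `formulaid` field names are assumptions, the filter values come from the Discovery rules table, and the formula IDs are illustrative.

```yaml
discovery_rules:
  -
    name: 'Temperature discovery'
    type: DEPENDENT
    key: temp.discovery
    delay: '0'                      # dependent rules run whenever the master item receives data
    master_item:
      key: sensors.get
    filter:
      evaltype: AND                 # assumption: evaluation type keyword
      conditions:
        -
          macro: '{#SENSOR_TYPE}'
          value: '8'                # ENTITY-SENSORS-MIB sensor type 8 = celsius
          formulaid: A
        -
          macro: '{#SENSOR_PRECISION}'
          value: '1'
          formulaid: B
```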
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#SENSOR_INFO}: Fan speed is below the warning threshold of {#THRESHOLD_LO_WARN}rpm for 5m |<p>This trigger uses fan sensor values defined in the device.</p> |`{TEMPLATE_NAME:sensor.fan.speed[entPhySensorValue.{#SNMPINDEX}].max(5m)} < {#THRESHOLD_LO_WARN}` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Fan is in critical state</p><p>- {#SENSOR_INFO}: Fan speed is below the critical threshold of {#THRESHOLD_LO_CRIT}rpm for 5m</p> |
-|{#SENSOR_INFO}: Fan speed is below the critical threshold of {#THRESHOLD_LO_CRIT}rpm for 5m |<p>This trigger uses fan sensor values defined in the device.</p> |`{TEMPLATE_NAME:sensor.fan.speed[entPhySensorValue.{#SNMPINDEX}].max(5m)} < {#THRESHOLD_LO_CRIT}` |HIGH |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Fan is in critical state</p> |
-|{#SENSOR_INFO}: Fan speed is above the warning threshold of {#THRESHOLD_HI_WARN}rpm for 5m |<p>This trigger uses fan sensor values defined in the device.</p> |`{TEMPLATE_NAME:sensor.fan.speed[entPhySensorValue.{#SNMPINDEX}].min(5m)} > {#THRESHOLD_HI_WARN}` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Fan is in critical state</p><p>- {#SENSOR_INFO}: Fan speed is above the critical threshold of {#THRESHOLD_HI_CRIT}rpm for 5m</p> |
-|{#SENSOR_INFO}: Fan speed is above the critical threshold of {#THRESHOLD_HI_CRIT}rpm for 5m |<p>This trigger uses fan sensor values defined in the device.</p> |`{TEMPLATE_NAME:sensor.fan.speed[entPhySensorValue.{#SNMPINDEX}].min(5m)} > {#THRESHOLD_HI_CRIT}` |HIGH |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Fan is in critical state</p> |
-|{#SENSOR_INFO}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[entPhySensorOperStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|{#ENT_NAME}: Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#ENT_NAME}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[entStateOper.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|{#SENSOR_INFO}: Temperature is below the warning threshold of {#THRESHOLD_LO_WARN}°C for 5m |<p>This trigger uses temperature sensor values defined in the device.</p> |`{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].max(5m)} < {#THRESHOLD_LO_WARN}` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Temperature is below the critical threshold of {#THRESHOLD_LO_CRIT}°C for 5m</p> |
-|{#SENSOR_INFO}: Temperature is below the critical threshold of {#THRESHOLD_LO_CRIT}°C for 5m |<p>This trigger uses temperature sensor values defined in the device.</p> |`{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].max(5m)} < {#THRESHOLD_LO_CRIT}` |HIGH | |
-|{#SENSOR_INFO}: Temperature is above the warning threshold of {#THRESHOLD_HI_WARN}°C for 5m |<p>This trigger uses temperature sensor values defined in the device.</p> |`{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].min(5m)} > {#THRESHOLD_HI_WARN}` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Temperature is above the critical threshold of {#THRESHOLD_HI_CRIT}°C for 5m</p> |
-|{#SENSOR_INFO}: Temperature is above the critical threshold of {#THRESHOLD_HI_CRIT}°C for 5m |<p>This trigger uses temperature sensor values defined in the device.</p> |`{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].min(5m)} > {#THRESHOLD_HI_CRIT}` |HIGH | |
-|{#SENSOR_INFO}: Voltage is below the warning threshold of {#THRESHOLD_LO_WARN}V for 5m |<p>This trigger uses voltage sensor values defined in the device.</p> |`{TEMPLATE_NAME:sensor.voltage.value[entPhySensorValue.{#SNMPINDEX}].max(5m)} < {#THRESHOLD_LO_WARN}` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Voltage is below the critical threshold of {#THRESHOLD_LO_CRIT}V for 5m</p> |
-|{#SENSOR_INFO}: Voltage is below the critical threshold of {#THRESHOLD_LO_CRIT}V for 5m |<p>This trigger uses voltage sensor values defined in the device.</p> |`{TEMPLATE_NAME:sensor.voltage.value[entPhySensorValue.{#SNMPINDEX}].max(5m)} < {#THRESHOLD_LO_CRIT}` |HIGH | |
-|{#SENSOR_INFO}: Voltage is above the warning threshold of {#THRESHOLD_HI_WARN}V for 5m |<p>This trigger uses voltage sensor values defined in the device.</p> |`{TEMPLATE_NAME:sensor.voltage.value[entPhySensorValue.{#SNMPINDEX}].min(5m)} > {#THRESHOLD_HI_WARN}` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Voltage is above the critical threshold of {#THRESHOLD_HI_CRIT}V for 5m</p> |
-|{#SENSOR_INFO}: Voltage is above the critical threshold of {#THRESHOLD_HI_CRIT}V for 5m |<p>This trigger uses voltage sensor values defined in the device.</p> |`{TEMPLATE_NAME:sensor.voltage.value[entPhySensorValue.{#SNMPINDEX}].min(5m)} > {#THRESHOLD_HI_CRIT}` |HIGH | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|----------------------------------------------------------------------------------------------|---------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| {#SENSOR_INFO}: Fan speed is below the warning threshold of {#THRESHOLD_LO_WARN}rpm for 5m | <p>This trigger uses fan sensor values defined in the device.</p> | `{TEMPLATE_NAME:sensor.fan.speed[entPhySensorValue.{#SNMPINDEX}].max(5m)} < {#THRESHOLD_LO_WARN}` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Fan is in critical state</p><p>- {#SENSOR_INFO}: Fan speed is below the critical threshold of {#THRESHOLD_LO_CRIT}rpm for 5m</p> |
+| {#SENSOR_INFO}: Fan speed is below the critical threshold of {#THRESHOLD_LO_CRIT}rpm for 5m | <p>This trigger uses fan sensor values defined in the device.</p> | `{TEMPLATE_NAME:sensor.fan.speed[entPhySensorValue.{#SNMPINDEX}].max(5m)} < {#THRESHOLD_LO_CRIT}` | HIGH | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Fan is in critical state</p> |
+| {#SENSOR_INFO}: Fan speed is above the warning threshold of {#THRESHOLD_HI_WARN}rpm for 5m | <p>This trigger uses fan sensor values defined in the device.</p> | `{TEMPLATE_NAME:sensor.fan.speed[entPhySensorValue.{#SNMPINDEX}].min(5m)} > {#THRESHOLD_HI_WARN}` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Fan is in critical state</p><p>- {#SENSOR_INFO}: Fan speed is above the critical threshold of {#THRESHOLD_HI_CRIT}rpm for 5m</p> |
+| {#SENSOR_INFO}: Fan speed is above the critical threshold of {#THRESHOLD_HI_CRIT}rpm for 5m | <p>This trigger uses fan sensor values defined in the device.</p> | `{TEMPLATE_NAME:sensor.fan.speed[entPhySensorValue.{#SNMPINDEX}].min(5m)} > {#THRESHOLD_HI_CRIT}` | HIGH | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Fan is in critical state</p> |
+| {#SENSOR_INFO}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[entPhySensorOperStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| {#ENT_NAME}: Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| {#ENT_NAME}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[entStateOper.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| {#SENSOR_INFO}: Temperature is below the warning threshold of {#THRESHOLD_LO_WARN}°C for 5m | <p>This trigger uses temperature sensor values defined in the device.</p> | `{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].max(5m)} < {#THRESHOLD_LO_WARN}` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Temperature is below the critical threshold of {#THRESHOLD_LO_CRIT}°C for 5m</p> |
+| {#SENSOR_INFO}: Temperature is below the critical threshold of {#THRESHOLD_LO_CRIT}°C for 5m | <p>This trigger uses temperature sensor values defined in the device.</p> | `{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].max(5m)} < {#THRESHOLD_LO_CRIT}` | HIGH | |
+| {#SENSOR_INFO}: Temperature is above the warning threshold of {#THRESHOLD_HI_WARN}°C for 5m | <p>This trigger uses temperature sensor values defined in the device.</p> | `{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].min(5m)} > {#THRESHOLD_HI_WARN}` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Temperature is above the critical threshold of {#THRESHOLD_HI_CRIT}°C for 5m</p> |
+| {#SENSOR_INFO}: Temperature is above the critical threshold of {#THRESHOLD_HI_CRIT}°C for 5m | <p>This trigger uses temperature sensor values defined in the device.</p> | `{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].min(5m)} > {#THRESHOLD_HI_CRIT}` | HIGH | |
+| {#SENSOR_INFO}: Voltage is below the warning threshold of {#THRESHOLD_LO_WARN}V for 5m | <p>This trigger uses voltage sensor values defined in the device.</p> | `{TEMPLATE_NAME:sensor.voltage.value[entPhySensorValue.{#SNMPINDEX}].max(5m)} < {#THRESHOLD_LO_WARN}` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Voltage is below the critical threshold of {#THRESHOLD_LO_CRIT}V for 5m</p> |
+| {#SENSOR_INFO}: Voltage is below the critical threshold of {#THRESHOLD_LO_CRIT}V for 5m | <p>This trigger uses voltage sensor values defined in the device.</p> | `{TEMPLATE_NAME:sensor.voltage.value[entPhySensorValue.{#SNMPINDEX}].max(5m)} < {#THRESHOLD_LO_CRIT}` | HIGH | |
+| {#SENSOR_INFO}: Voltage is above the warning threshold of {#THRESHOLD_HI_WARN}V for 5m | <p>This trigger uses voltage sensor values defined in the device.</p> | `{TEMPLATE_NAME:sensor.voltage.value[entPhySensorValue.{#SNMPINDEX}].min(5m)} > {#THRESHOLD_HI_WARN}` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Voltage is above the critical threshold of {#THRESHOLD_HI_CRIT}V for 5m</p> |
+| {#SENSOR_INFO}: Voltage is above the critical threshold of {#THRESHOLD_HI_CRIT}V for 5m | <p>This trigger uses voltage sensor values defined in the device.</p> | `{TEMPLATE_NAME:sensor.voltage.value[entPhySensorValue.{#SNMPINDEX}].min(5m)} > {#THRESHOLD_HI_CRIT}` | HIGH | |
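The fan, temperature, and voltage triggers above come in warning/critical pairs, with the warning trigger depending on the critical one (and, for fans, on the sensor status trigger) so that only the most severe alert fires. A condensed sketch of how such a pair could be defined as trigger prototypes in the YAML export, using the fan low-speed thresholds from the table; TEMPLATE_NAME stands for the template's name, as in the table, and the layout is an assumption based on the usual export schema:

```yaml
trigger_prototypes:
  -
    expression: '{TEMPLATE_NAME:sensor.fan.speed[entPhySensorValue.{#SNMPINDEX}].max(5m)}<{#THRESHOLD_LO_CRIT}'
    name: '{#SENSOR_INFO}: Fan speed is below the critical threshold of {#THRESHOLD_LO_CRIT}rpm for 5m'
    priority: HIGH
  -
    expression: '{TEMPLATE_NAME:sensor.fan.speed[entPhySensorValue.{#SNMPINDEX}].max(5m)}<{#THRESHOLD_LO_WARN}'
    name: '{#SENSOR_INFO}: Fan speed is below the warning threshold of {#THRESHOLD_LO_WARN}rpm for 5m'
    priority: WARNING
    # The warning-level trigger is suppressed while the critical one is active.
    dependencies:
      -
        name: '{#SENSOR_INFO}: Fan speed is below the critical threshold of {#THRESHOLD_LO_CRIT}rpm for 5m'
        expression: '{TEMPLATE_NAME:sensor.fan.speed[entPhySensorValue.{#SNMPINDEX}].max(5m)}<{#THRESHOLD_LO_CRIT}'
```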
## Feedback
diff --git a/templates/net/brocade_fc_sw_snmp/README.md b/templates/net/brocade_fc_sw_snmp/README.md
index 55b115db98c..fe010ed71ec 100644
--- a/templates/net/brocade_fc_sw_snmp/README.md
+++ b/templates/net/brocade_fc_sw_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
https://community.brocade.com/dtscp75322/attachments/dtscp75322/fibre/25235/1/FOS_MIB_Reference_v740.pdf
This template was tested on:
@@ -22,69 +22,69 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$FAN_CRIT_STATUS} |<p>-</p> |`2` |
-|{$FAN_OK_STATUS} |<p>-</p> |`4` |
-|{$HEALTH_CRIT_STATUS} |<p>-</p> |`4` |
-|{$HEALTH_WARN_STATUS:"offline"} |<p>-</p> |`2` |
-|{$HEALTH_WARN_STATUS:"testing"} |<p>-</p> |`3` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
-|{$PSU_CRIT_STATUS} |<p>-</p> |`2` |
-|{$PSU_OK_STATUS} |<p>-</p> |`4` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`75` |
-|{$TEMP_WARN_STATUS} |<p>-</p> |`5` |
-|{$TEMP_WARN} |<p>-</p> |`65` |
+| Name | Description | Default |
+|---------------------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$FAN_CRIT_STATUS} | <p>-</p> | `2` |
+| {$FAN_OK_STATUS} | <p>-</p> | `4` |
+| {$HEALTH_CRIT_STATUS} | <p>-</p> | `4` |
+| {$HEALTH_WARN_STATUS:"offline"} | <p>-</p> | `2` |
+| {$HEALTH_WARN_STATUS:"testing"} | <p>-</p> | `3` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
+| {$PSU_CRIT_STATUS} | <p>-</p> | `2` |
+| {$PSU_OK_STATUS} | <p>-</p> | `4` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `75` |
+| {$TEMP_WARN_STATUS} | <p>-</p> | `5` |
+| {$TEMP_WARN} | <p>-</p> | `65` |
## Template links
-|Name|
-|----|
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|-----------------|
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Temperature Discovery |<p>-</p> |SNMP |temperature.discovery<p>**Filter**:</p>AND_OR <p>- A: {#SENSOR_TYPE} MATCHES_REGEX `1`</p> |
-|PSU Discovery |<p>-</p> |SNMP |psu.discovery<p>**Filter**:</p>AND_OR <p>- A: {#SENSOR_TYPE} MATCHES_REGEX `3`</p> |
-|FAN Discovery |<p>-</p> |SNMP |fan.discovery<p>**Filter**:</p>AND_OR <p>- A: {#SENSOR_TYPE} MATCHES_REGEX `2`</p> |
+| Name | Description | Type | Key and additional info |
+|-----------------------|-------------|------|--------------------------------------------------------------------------------------------|
+| Temperature Discovery | <p>-</p> | SNMP | temperature.discovery<p>**Filter**:</p>AND_OR <p>- A: {#SENSOR_TYPE} MATCHES_REGEX `1`</p> |
+| PSU Discovery | <p>-</p> | SNMP | psu.discovery<p>**Filter**:</p>AND_OR <p>- A: {#SENSOR_TYPE} MATCHES_REGEX `3`</p> |
+| FAN Discovery | <p>-</p> | SNMP | fan.discovery<p>**Filter**:</p>AND_OR <p>- A: {#SENSOR_TYPE} MATCHES_REGEX `2`</p> |
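All three discovery rules above walk the same sensor table and differ only in the {#SENSOR_TYPE} value they keep (1 for temperature, 3 for PSU, 2 for fan). A rough sketch of how one such filter could be written in the template YAML; the rule's SNMP OID is omitted, and the layout is an assumption based on the usual export schema:

```yaml
discovery_rules:
  -
    name: 'Temperature Discovery'
    type: SNMP_AGENT
    key: temperature.discovery
    # Keep only discovered rows whose {#SENSOR_TYPE} LLD macro matches the regexp "1".
    filter:
      evaltype: AND_OR
      conditions:
        -
          macro: '{#SENSOR_TYPE}'
          value: '1'
          formulaid: A
```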
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |CPU utilization |<p>MIB: SW-MIB</p><p>System's CPU usage.</p> |SNMP |system.cpu.util[swCpuUsage.0] |
-|Fans |{#SENSOR_INFO}: Fan status |<p>MIB: SW-MIB</p> |SNMP |sensor.fan.status[swSensorStatus.{#SNMPINDEX}] |
-|Fans |{#SENSOR_INFO}: Fan speed |<p>MIB: SW-MIB</p><p>The current value (reading) of the sensor.</p><p>The value, -2147483648, represents an unknown quantity.</p><p>The fan value will be in RPM(revolution per minute)</p> |SNMP |sensor.fan.speed[swSensorValue.{#SNMPINDEX}] |
-|Inventory |Hardware serial number |<p>MIB: SW-MIB</p> |SNMP |system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Firmware version |<p>MIB: SW-MIB</p> |SNMP |system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |Memory utilization |<p>MIB: SW-MIB</p><p>Memory utilization in %</p> |SNMP |vm.memory.util[swMemUsage.0] |
-|Power_supply |{#SENSOR_INFO}: Power supply status |<p>MIB: SW-MIB</p> |SNMP |sensor.psu.status[swSensorStatus.{#SNMPINDEX}] |
-|Status |Overall system health status |<p>MIB: SW-MIB</p><p>The current operational status of the switch.The states are as follow:</p><p>online(1) means the switch is accessible by an external Fibre Channel port</p><p>offline(2) means the switch is not accessible</p><p>testing(3) means the switch is in a built-in test mode and is not accessible by an external Fibre Channel port</p><p>faulty(4) means the switch is not operational.</p> |SNMP |system.status[swOperStatus.0] |
-|Temperature |{#SENSOR_INFO}: Temperature |<p>MIB: SW-MIB</p><p>Temperature readings of testpoint: {#SENSOR_INFO}</p> |SNMP |sensor.temp.value[swSensorValue.{#SNMPINDEX}] |
-|Temperature |{#SENSOR_INFO}: Temperature status |<p>MIB: SW-MIB</p><p>Temperature status of testpoint: {#SENSOR_INFO}</p> |SNMP |sensor.temp.status[swSensorStatus.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|--------------|-------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------------------------------------------------------------------------|
+| CPU | CPU utilization | <p>MIB: SW-MIB</p><p>System's CPU usage.</p> | SNMP | system.cpu.util[swCpuUsage.0] |
+| Fans | {#SENSOR_INFO}: Fan status | <p>MIB: SW-MIB</p> | SNMP | sensor.fan.status[swSensorStatus.{#SNMPINDEX}] |
+| Fans | {#SENSOR_INFO}: Fan speed | <p>MIB: SW-MIB</p><p>The current value (reading) of the sensor.</p><p>The value, -2147483648, represents an unknown quantity.</p><p>The fan value will be in RPM(revolution per minute)</p> | SNMP | sensor.fan.speed[swSensorValue.{#SNMPINDEX}] |
+| Inventory | Hardware serial number | <p>MIB: SW-MIB</p> | SNMP | system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Firmware version | <p>MIB: SW-MIB</p> | SNMP | system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory | Memory utilization | <p>MIB: SW-MIB</p><p>Memory utilization in %</p> | SNMP | vm.memory.util[swMemUsage.0] |
+| Power_supply | {#SENSOR_INFO}: Power supply status | <p>MIB: SW-MIB</p> | SNMP | sensor.psu.status[swSensorStatus.{#SNMPINDEX}] |
+| Status       | Overall system health status        | <p>MIB: SW-MIB</p><p>The current operational status of the switch. The states are as follows:</p><p>online(1) means the switch is accessible by an external Fibre Channel port</p><p>offline(2) means the switch is not accessible</p><p>testing(3) means the switch is in a built-in test mode and is not accessible by an external Fibre Channel port</p><p>faulty(4) means the switch is not operational.</p> | SNMP | system.status[swOperStatus.0]                                                               |
+| Temperature | {#SENSOR_INFO}: Temperature | <p>MIB: SW-MIB</p><p>Temperature readings of testpoint: {#SENSOR_INFO}</p> | SNMP | sensor.temp.value[swSensorValue.{#SNMPINDEX}] |
+| Temperature | {#SENSOR_INFO}: Temperature status | <p>MIB: SW-MIB</p><p>Temperature status of testpoint: {#SENSOR_INFO}</p> | SNMP | sensor.temp.status[swSensorStatus.{#SNMPINDEX}] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[swCpuUsage.0].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|{#SENSOR_INFO}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[swSensorStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|{#SENSOR_INFO}: Fan is not in normal state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[swSensorStatus.{#SNMPINDEX}].count(#1,{$FAN_OK_STATUS},ne)}=1` |INFO |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Fan is in critical state</p> |
-|Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Firmware has changed |<p>Firmware version has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[swMemUsage.0].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
-|{#SENSOR_INFO}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[swSensorStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|{#SENSOR_INFO}: Power supply is not in normal state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[swSensorStatus.{#SNMPINDEX}].count(#1,{$PSU_OK_STATUS},ne)}=1` |INFO |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Power supply is in critical state</p> |
-|System status is in critical state |<p>Please check the device for errors</p> |`{TEMPLATE_NAME:system.status[swOperStatus.0].count(#1,{$HEALTH_CRIT_STATUS},eq)}=1` |HIGH | |
-|System status is in warning state |<p>Please check the device for warnings</p> |`{TEMPLATE_NAME:system.status[swOperStatus.0].count(#1,{$HEALTH_WARN_STATUS:"offline"},eq)}=1 or {TEMPLATE_NAME:system.status[swOperStatus.0].count(#1,{$HEALTH_WARN_STATUS:"testing"},eq)}=1` |WARNING |<p>**Depends on**:</p><p>- System status is in critical state</p> |
-|{#SENSOR_INFO}: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[swSensorValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""} or {Brocade FC SNMP:sensor.temp.status[swSensorStatus.{#SNMPINDEX}].last(0)}={$TEMP_WARN_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[swSensorValue.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|{#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[swSensorValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[swSensorValue.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|{#SENSOR_INFO}: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[swSensorValue.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[swSensorValue.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|---------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------------------------------|
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[swCpuUsage.0].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| {#SENSOR_INFO}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[swSensorStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| {#SENSOR_INFO}: Fan is not in normal state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[swSensorStatus.{#SNMPINDEX}].count(#1,{$FAN_OK_STATUS},ne)}=1` | INFO | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Fan is in critical state</p> |
+| Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Firmware has changed | <p>Firmware version has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[swMemUsage.0].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
+| {#SENSOR_INFO}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[swSensorStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| {#SENSOR_INFO}: Power supply is not in normal state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[swSensorStatus.{#SNMPINDEX}].count(#1,{$PSU_OK_STATUS},ne)}=1` | INFO | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Power supply is in critical state</p> |
+| System status is in critical state | <p>Please check the device for errors</p> | `{TEMPLATE_NAME:system.status[swOperStatus.0].count(#1,{$HEALTH_CRIT_STATUS},eq)}=1` | HIGH | |
+| System status is in warning state | <p>Please check the device for warnings</p> | `{TEMPLATE_NAME:system.status[swOperStatus.0].count(#1,{$HEALTH_WARN_STATUS:"offline"},eq)}=1 or {TEMPLATE_NAME:system.status[swOperStatus.0].count(#1,{$HEALTH_WARN_STATUS:"testing"},eq)}=1` | WARNING | <p>**Depends on**:</p><p>- System status is in critical state</p> |
+| {#SENSOR_INFO}: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[swSensorValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""} or {Brocade FC SNMP:sensor.temp.status[swSensorStatus.{#SNMPINDEX}].last(0)}={$TEMP_WARN_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[swSensorValue.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| {#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[swSensorValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[swSensorValue.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| {#SENSOR_INFO}: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[swSensorValue.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[swSensorValue.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
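The temperature triggers above pair the problem expression with a separate recovery expression three degrees below (or, for the low-temperature trigger, above) the threshold, which gives the trigger hysteresis and prevents flapping. A simplified sketch of the warning-level trigger prototype in YAML form, dropping the sensor-status clause shown in the table and assuming the usual export schema:

```yaml
trigger_prototypes:
  -
    expression: '{TEMPLATE_NAME:sensor.temp.value[swSensorValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}'
    # Recover only once the 5-minute maximum has fallen 3 degrees below the threshold.
    recovery_mode: RECOVERY_EXPRESSION
    recovery_expression: '{TEMPLATE_NAME:sensor.temp.value[swSensorValue.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3'
    name: '{#SENSOR_INFO}: Temperature is above warning threshold: >{$TEMP_WARN:""}'
    priority: WARNING
```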
## Feedback
diff --git a/templates/net/brocade_foundry_sw_snmp/README.md b/templates/net/brocade_foundry_sw_snmp/README.md
index dfbbf9c017d..99f0ccb241b 100644
--- a/templates/net/brocade_foundry_sw_snmp/README.md
+++ b/templates/net/brocade_foundry_sw_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,10 +15,10 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
+| Name | Description | Default |
+|--------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
## Template links
@@ -29,17 +29,17 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |CPU utilization |<p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>The statistics collection of 1 minute CPU utilization.</p> |SNMP |system.cpu.util[snAgGblCpuUtil1MinAvg.0] |
-|Memory |Memory utilization |<p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>The system dynamic memory utilization, in unit of percentage.</p><p>Deprecated: Refer to snAgSystemDRAMUtil.</p><p>For NI platforms, refer to snAgentBrdMemoryUtil100thPercent</p> |SNMP |vm.memory.util[snAgGblDynMemUtil.0] |
+| Group | Name | Description | Type | Key and additional info |
+|--------|--------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|------------------------------------------|
+| CPU | CPU utilization | <p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>The statistics collection of 1 minute CPU utilization.</p> | SNMP | system.cpu.util[snAgGblCpuUtil1MinAvg.0] |
+| Memory | Memory utilization | <p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>The system dynamic memory utilization, in unit of percentage.</p><p>Deprecated: Refer to snAgSystemDRAMUtil.</p><p>For NI platforms, refer to snAgentBrdMemoryUtil100thPercent</p> | SNMP | vm.memory.util[snAgGblDynMemUtil.0] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[snAgGblCpuUtil1MinAvg.0].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[snAgGblDynMemUtil.0].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------|--------------------------------------------------------------------------|-------------------------------------------------------------------------------------|----------|----------------------------------|
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[snAgGblCpuUtil1MinAvg.0].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[snAgGblDynMemUtil.0].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
## Feedback
@@ -49,7 +49,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
For devices (old Foundry devices, MLXe, and so on) that don't support the stackable SNMP tables snChasFan2Table, snChasPwrSupply2Table, and snAgentTemp2Table, the FOUNDRY-SN-AGENT-MIB tables snChasFanTable, snChasPwrSupplyTable, and snAgentTempTable are used instead.
For example:
@@ -73,60 +73,60 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$FAN_CRIT_STATUS} |<p>-</p> |`3` |
-|{$FAN_OK_STATUS} |<p>-</p> |`2` |
-|{$PSU_CRIT_STATUS} |<p>-</p> |`3` |
-|{$PSU_OK_STATUS} |<p>-</p> |`2` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`75` |
-|{$TEMP_WARN} |<p>-</p> |`65` |
+| Name | Description | Default |
+|--------------------|-------------|---------|
+| {$FAN_CRIT_STATUS} | <p>-</p> | `3` |
+| {$FAN_OK_STATUS} | <p>-</p> | `2` |
+| {$PSU_CRIT_STATUS} | <p>-</p> | `3` |
+| {$PSU_OK_STATUS} | <p>-</p> | `2` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `75` |
+| {$TEMP_WARN} | <p>-</p> | `65` |
## Template links
-|Name|
-|----|
-|Brocade_Foundry Performance SNMP |
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|----------------------------------|
+| Brocade_Foundry Performance SNMP |
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|PSU Discovery |<p>snChasPwrSupplyTable: A table of each power supply information. Only installed power supply appears in a table row.</p> |SNMP |psu.discovery |
-|FAN Discovery |<p>snChasFanTable: A table of each fan information. Only installed fan appears in a table row.</p> |SNMP |fan.discovery |
-|Temperature Discovery |<p>snAgentTempTable:Table to list temperatures of the modules in the device. This table is applicable to only those modules with temperature sensors.</p> |SNMP |temp.discovery |
-|Temperature Discovery Chassis |<p>Since temperature of the chassis is not available on all Brocade/Foundry hardware, this LLD is here to avoid unsupported items.</p> |SNMP |temp.chassis.discovery |
+| Name | Description | Type | Key and additional info |
+|-------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------|
+| PSU Discovery | <p>snChasPwrSupplyTable: A table of each power supply information. Only installed power supply appears in a table row.</p> | SNMP | psu.discovery |
+| FAN Discovery | <p>snChasFanTable: A table of each fan information. Only installed fan appears in a table row.</p> | SNMP | fan.discovery |
+| Temperature Discovery         | <p>snAgentTempTable: Table to list temperatures of the modules in the device. This table is applicable to only those modules with temperature sensors.</p>  | SNMP | temp.discovery           |
+| Temperature Discovery Chassis | <p>Since temperature of the chassis is not available on all Brocade/Foundry hardware, this LLD is here to avoid unsupported items.</p> | SNMP | temp.chassis.discovery |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Fans |Fan {#FAN_INDEX}: Fan status |<p>MIB: FOUNDRY-SN-AGENT-MIB</p> |SNMP |sensor.fan.status[snChasFanOperStatus.{#SNMPINDEX}] |
-|Inventory |Hardware serial number |<p>MIB: FOUNDRY-SN-AGENT-MIB</p> |SNMP |system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Firmware version |<p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>The version of the running software in the form'major.minor.maintenance[letters]'</p> |SNMP |system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Power_supply |PSU {#PSU_INDEX}: Power supply status |<p>MIB: FOUNDRY-SN-AGENT-MIB</p> |SNMP |sensor.psu.status[snChasPwrSupplyOperStatus.{#SNMPINDEX}] |
-|Temperature |{#SENSOR_DESCR}: Temperature |<p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>Temperature of the sensor represented by this row. Each unit is 0.5 degrees Celsius.</p> |SNMP |sensor.temp.value[snAgentTempValue.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.5`</p> |
-|Temperature |Chassis #{#SNMPINDEX}: Temperature |<p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>Temperature of the chassis. Each unit is 0.5 degrees Celsius.</p><p>Only management module built with temperature sensor hardware is applicable.</p><p>For those non-applicable management module, it returns no-such-name.</p> |SNMP |sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.5`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|--------------|---------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|------------------------------------------------------------------------------------------------------------|
+| Fans | Fan {#FAN_INDEX}: Fan status | <p>MIB: FOUNDRY-SN-AGENT-MIB</p> | SNMP | sensor.fan.status[snChasFanOperStatus.{#SNMPINDEX}] |
+| Inventory | Hardware serial number | <p>MIB: FOUNDRY-SN-AGENT-MIB</p> | SNMP | system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory    | Firmware version                      | <p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>The version of the running software in the form 'major.minor.maintenance[letters]'</p>                                                                                                                                            | SNMP | system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p>                                        |
+| Power_supply | PSU {#PSU_INDEX}: Power supply status | <p>MIB: FOUNDRY-SN-AGENT-MIB</p> | SNMP | sensor.psu.status[snChasPwrSupplyOperStatus.{#SNMPINDEX}] |
+| Temperature | {#SENSOR_DESCR}: Temperature | <p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>Temperature of the sensor represented by this row. Each unit is 0.5 degrees Celsius.</p> | SNMP | sensor.temp.value[snAgentTempValue.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.5`</p> |
+| Temperature | Chassis #{#SNMPINDEX}: Temperature | <p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>Temperature of the chassis. Each unit is 0.5 degrees Celsius.</p><p>Only management module built with temperature sensor hardware is applicable.</p><p>For those non-applicable management module, it returns no-such-name.</p> | SNMP | sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.5`</p> |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Fan {#FAN_INDEX}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[snChasFanOperStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|Fan {#FAN_INDEX}: Fan is not in normal state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[snChasFanOperStatus.{#SNMPINDEX}].count(#1,{$FAN_OK_STATUS},ne)}=1` |INFO |<p>**Depends on**:</p><p>- Fan {#FAN_INDEX}: Fan is in critical state</p> |
-|Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Firmware has changed |<p>Firmware version has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|PSU {#PSU_INDEX}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[snChasPwrSupplyOperStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|PSU {#PSU_INDEX}: Power supply is not in normal state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[snChasPwrSupplyOperStatus.{#SNMPINDEX}].count(#1,{$PSU_OK_STATUS},ne)}=1` |INFO |<p>**Depends on**:</p><p>- PSU {#PSU_INDEX}: Power supply is in critical state</p> |
-|{#SENSOR_DESCR}: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[snAgentTempValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snAgentTempValue.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_DESCR}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|{#SENSOR_DESCR}: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[snAgentTempValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snAgentTempValue.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|{#SENSOR_DESCR}: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[snAgentTempValue.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snAgentTempValue.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
-|Chassis #{#SNMPINDEX}: Temperature is above warning threshold: >{$TEMP_WARN:"Chassis"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Chassis"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Chassis"}-3` |WARNING |<p>**Depends on**:</p><p>- Chassis #{#SNMPINDEX}: Temperature is above critical threshold: >{$TEMP_CRIT:"Chassis"}</p> |
-|Chassis #{#SNMPINDEX}: Temperature is above critical threshold: >{$TEMP_CRIT:"Chassis"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Chassis"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Chassis"}-3` |HIGH | |
-|Chassis #{#SNMPINDEX}: Temperature is too low: <{$TEMP_CRIT_LOW:"Chassis"} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Chassis"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Chassis"}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-----------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------------|
+| Fan {#FAN_INDEX}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[snChasFanOperStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| Fan {#FAN_INDEX}: Fan is not in normal state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[snChasFanOperStatus.{#SNMPINDEX}].count(#1,{$FAN_OK_STATUS},ne)}=1` | INFO | <p>**Depends on**:</p><p>- Fan {#FAN_INDEX}: Fan is in critical state</p> |
+| Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Firmware has changed | <p>Firmware version has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| PSU {#PSU_INDEX}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[snChasPwrSupplyOperStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| PSU {#PSU_INDEX}: Power supply is not in normal state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[snChasPwrSupplyOperStatus.{#SNMPINDEX}].count(#1,{$PSU_OK_STATUS},ne)}=1` | INFO | <p>**Depends on**:</p><p>- PSU {#PSU_INDEX}: Power supply is in critical state</p> |
+| {#SENSOR_DESCR}: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[snAgentTempValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snAgentTempValue.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_DESCR}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| {#SENSOR_DESCR}: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[snAgentTempValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snAgentTempValue.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| {#SENSOR_DESCR}: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[snAgentTempValue.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snAgentTempValue.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
+| Chassis #{#SNMPINDEX}: Temperature is above warning threshold: >{$TEMP_WARN:"Chassis"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Chassis"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Chassis"}-3` | WARNING | <p>**Depends on**:</p><p>- Chassis #{#SNMPINDEX}: Temperature is above critical threshold: >{$TEMP_CRIT:"Chassis"}</p> |
+| Chassis #{#SNMPINDEX}: Temperature is above critical threshold: >{$TEMP_CRIT:"Chassis"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Chassis"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Chassis"}-3` | HIGH | |
+| Chassis #{#SNMPINDEX}: Temperature is too low: <{$TEMP_CRIT_LOW:"Chassis"} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Chassis"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Chassis"}+3` | AVERAGE | |
## Feedback
@@ -136,7 +136,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
For devices (most of the IronWare Brocade devices) that support the stackable SNMP tables in FOUNDRY-SN-AGENT-MIB (snChasFan2Table, snChasPwrSupply2Table, snAgentTemp2Table), objects from all stack members are provided.
This template was tested on:
@@ -158,58 +158,58 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$FAN_CRIT_STATUS} |<p>-</p> |`3` |
-|{$FAN_OK_STATUS} |<p>-</p> |`2` |
-|{$PSU_CRIT_STATUS} |<p>-</p> |`3` |
-|{$PSU_OK_STATUS} |<p>-</p> |`2` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`75` |
-|{$TEMP_WARN} |<p>-</p> |`65` |
+| Name | Description | Default |
+|--------------------|-------------|---------|
+| {$FAN_CRIT_STATUS} | <p>-</p> | `3` |
+| {$FAN_OK_STATUS} | <p>-</p> | `2` |
+| {$PSU_CRIT_STATUS} | <p>-</p> | `3` |
+| {$PSU_OK_STATUS} | <p>-</p> | `2` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `75` |
+| {$TEMP_WARN} | <p>-</p> | `65` |
## Template links
-|Name|
-|----|
-|Brocade_Foundry Performance SNMP |
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|----------------------------------|
+| Brocade_Foundry Performance SNMP |
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|PSU Discovery |<p>snChasPwrSupply2Table: A table of each power supply information for each unit. Only installed power supply appears in a table row.</p> |SNMP |psu.discovery |
-|FAN Discovery |<p>snChasFan2Table: A table of each fan information for each unit. Only installed fan appears in a table row.</p> |SNMP |fan.discovery |
-|Temperature Discovery |<p>snAgentTemp2Table:Table to list temperatures of the modules in the device for each unit. This table is applicable to only those modules with temperature sensors.</p> |SNMP |temp.discovery |
-|Stack Discovery |<p>Discovering snStackingConfigUnitTable for Model names</p> |SNMP |stack.discovery |
-|Chassis Discovery |<p>snChasUnitIndex: The index to chassis table.</p> |SNMP |chassis.discovery |
+| Name | Description | Type | Key and additional info |
+|-----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------|
+| PSU Discovery | <p>snChasPwrSupply2Table: A table of each power supply information for each unit. Only installed power supply appears in a table row.</p> | SNMP | psu.discovery |
+| FAN Discovery | <p>snChasFan2Table: A table of each fan information for each unit. Only installed fan appears in a table row.</p> | SNMP | fan.discovery |
+| Temperature Discovery | <p>snAgentTemp2Table: Table to list temperatures of the modules in the device for each unit. This table is applicable to only those modules with temperature sensors.</p> | SNMP | temp.discovery          |
+| Stack Discovery | <p>Discovering snStackingConfigUnitTable for Model names</p> | SNMP | stack.discovery |
+| Chassis Discovery | <p>snChasUnitIndex: The index to chassis table.</p> | SNMP | chassis.discovery |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Fans |Unit {#FAN_UNIT} Fan {#FAN_INDEX}: Fan status |<p>MIB: FOUNDRY-SN-AGENT-MIB</p> |SNMP |sensor.fan.status[snChasFan2OperStatus.{#SNMPINDEX}] |
-|Inventory |Firmware version |<p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>The version of the running software in the form 'major.minor.maintenance[letters]'</p> |SNMP |system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Unit {#SNMPINDEX}: Hardware model name |<p>MIB: FOUNDRY-SN-STACKING-MIB</p><p>A description of the configured/active system type for each unit.</p> |SNMP |system.hw.model[snStackingConfigUnitType.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Unit {#SNMPVALUE}: Hardware serial number |<p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>The serial number of the chassis for each unit. If the serial number is unknown or unavailable then the value should be a zero length string.</p> |SNMP |system.hw.serialnumber[snChasUnitSerNum.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Power_supply |Unit {#PSU_UNIT} PSU {#PSU_INDEX}: Power supply status |<p>MIB: FOUNDRY-SN-AGENT-MIB</p> |SNMP |sensor.psu.status[snChasPwrSupply2OperStatus.{#SNMPINDEX}] |
-|Temperature |{#SENSOR_DESCR}: Temperature |<p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>Temperature of the sensor represented by this row. Each unit is 0.5 degrees Celsius.</p> |SNMP |sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.5`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|--------------|--------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|---------------------------------------------------------------------------------------------------------------------------|
+| Fans | Unit {#FAN_UNIT} Fan {#FAN_INDEX}: Fan status | <p>MIB: FOUNDRY-SN-AGENT-MIB</p> | SNMP | sensor.fan.status[snChasFan2OperStatus.{#SNMPINDEX}] |
+| Inventory | Firmware version | <p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>The version of the running software in the form 'major.minor.maintenance[letters]'</p> | SNMP | system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Unit {#SNMPINDEX}: Hardware model name | <p>MIB: FOUNDRY-SN-STACKING-MIB</p><p>A description of the configured/active system type for each unit.</p> | SNMP | system.hw.model[snStackingConfigUnitType.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Unit {#SNMPVALUE}: Hardware serial number | <p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>The serial number of the chassis for each unit. If the serial number is unknown or unavailable then the value should be a zero length string.</p> | SNMP | system.hw.serialnumber[snChasUnitSerNum.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Power_supply | Unit {#PSU_UNIT} PSU {#PSU_INDEX}: Power supply status | <p>MIB: FOUNDRY-SN-AGENT-MIB</p> | SNMP | sensor.psu.status[snChasPwrSupply2OperStatus.{#SNMPINDEX}] |
+| Temperature | {#SENSOR_DESCR}: Temperature | <p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>Temperature of the sensor represented by this row. Each unit is 0.5 degrees Celsius.</p> | SNMP | sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.5`</p> |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Unit {#FAN_UNIT} Fan {#FAN_INDEX}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[snChasFan2OperStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|Unit {#FAN_UNIT} Fan {#FAN_INDEX}: Fan is not in normal state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[snChasFan2OperStatus.{#SNMPINDEX}].count(#1,{$FAN_OK_STATUS},ne)}=1` |INFO |<p>**Depends on**:</p><p>- Unit {#FAN_UNIT} Fan {#FAN_INDEX}: Fan is in critical state</p> |
-|Firmware has changed |<p>Firmware version has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Unit {#SNMPVALUE}: Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber[snChasUnitSerNum.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[snChasUnitSerNum.{#SNMPINDEX}].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Unit {#PSU_UNIT} PSU {#PSU_INDEX}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[snChasPwrSupply2OperStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|Unit {#PSU_UNIT} PSU {#PSU_INDEX}: Power supply is not in normal state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[snChasPwrSupply2OperStatus.{#SNMPINDEX}].count(#1,{$PSU_OK_STATUS},ne)}=1` |INFO |<p>**Depends on**:</p><p>- Unit {#PSU_UNIT} PSU {#PSU_INDEX}: Power supply is in critical state</p> |
-|{#SENSOR_DESCR}: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_DESCR}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|{#SENSOR_DESCR}: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|{#SENSOR_DESCR}: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|----------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------------------------------------------------------------|
+| Unit {#FAN_UNIT} Fan {#FAN_INDEX}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[snChasFan2OperStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| Unit {#FAN_UNIT} Fan {#FAN_INDEX}: Fan is not in normal state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[snChasFan2OperStatus.{#SNMPINDEX}].count(#1,{$FAN_OK_STATUS},ne)}=1` | INFO | <p>**Depends on**:</p><p>- Unit {#FAN_UNIT} Fan {#FAN_INDEX}: Fan is in critical state</p> |
+| Firmware has changed | <p>Firmware version has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Unit {#SNMPVALUE}: Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber[snChasUnitSerNum.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[snChasUnitSerNum.{#SNMPINDEX}].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Unit {#PSU_UNIT} PSU {#PSU_INDEX}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[snChasPwrSupply2OperStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| Unit {#PSU_UNIT} PSU {#PSU_INDEX}: Power supply is not in normal state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[snChasPwrSupply2OperStatus.{#SNMPINDEX}].count(#1,{$PSU_OK_STATUS},ne)}=1` | INFO | <p>**Depends on**:</p><p>- Unit {#PSU_UNIT} PSU {#PSU_INDEX}: Power supply is in critical state</p> |
+| {#SENSOR_DESCR}: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_DESCR}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| {#SENSOR_DESCR}: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| {#SENSOR_DESCR}: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
## Feedback
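
The reflowed trigger table above preserves the hysteresis used by the temperature rows: a trigger fires when the 5-minute average crosses its threshold and only recovers once the 5-minute max (or min, for the low-temperature row) has moved 3 degrees back past it. A minimal sketch of how such a pair is typically expressed in template YAML follows; the `recovery_mode`/`recovery_expression` field names reflect the usual 5.x export layout and are shown for illustration, not copied from this commit, and `TEMPLATE_NAME` is the README placeholder for the actual template name.

```yaml
# Illustrative sketch only: warning-level temperature trigger prototype with a
# separate recovery expression, mirroring the expressions quoted in the table above.
trigger_prototypes:
  -
    expression: '{TEMPLATE_NAME:sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}'
    recovery_mode: RECOVERY_EXPRESSION
    recovery_expression: '{TEMPLATE_NAME:sensor.temp.value[snAgentTemp2Value.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3'
    name: '{#SENSOR_DESCR}: Temperature is above warning threshold: >{$TEMP_WARN:""}'
    priority: WARNING
```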
diff --git a/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_24fs_snmp/template_net_cisco_catalyst_3750_24fs_snmp.yaml b/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_24fs_snmp/template_net_cisco_catalyst_3750_24fs_snmp.yaml
index 40a1a17ea11..0671c8969fc 100644
--- a/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_24fs_snmp/template_net_cisco_catalyst_3750_24fs_snmp.yaml
+++ b/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_24fs_snmp/template_net_cisco_catalyst_3750_24fs_snmp.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-04T00:00:00Z'
+ date: '2021-04-22T11:27:09Z'
groups:
-
name: Templates/Modules
@@ -24,36 +24,18 @@ zabbix_export:
groups:
-
name: Templates/Modules
- applications:
- -
- name: CPU
- -
- name: Fans
- -
- name: General
- -
- name: Inventory
- -
- name: Memory
- -
- name: 'Network interfaces'
- -
- name: 'Power supply'
- -
- name: Status
- -
- name: Temperature
items:
-
name: 'ICMP ping'
type: SIMPLE
key: icmpping
history: 7d
- applications:
- -
- name: Status
valuemap:
name: 'Service state'
+ tags:
+ -
+ tag: Application
+ value: Status
triggers:
-
expression: '{max(#3)}=0'
@@ -67,9 +49,10 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '%'
- applications:
+ tags:
-
- name: Status
+ tag: Application
+ value: Status
triggers:
-
expression: '{min(5m)}>{$ICMP_LOSS_WARN} and {min(5m)}<100'
@@ -87,9 +70,10 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: s
- applications:
+ tags:
-
- name: Status
+ tag: Application
+ value: Status
triggers:
-
expression: '{avg(5m)}>{$ICMP_RESPONSE_TIME_WARN}'
@@ -111,10 +95,11 @@ zabbix_export:
trends: '0'
value_type: LOG
description: 'Item is used to collect all SNMP traps unmatched by other snmptrap items'
- applications:
- -
- name: General
logtimefmt: 'hh:mm:sszyyyy/MM/dd'
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'System contact details'
type: SNMP_AGENT
@@ -128,14 +113,15 @@ zabbix_export:
MIB: SNMPv2-MIB
The textual identification of the contact person for this managed node, together with information on how to contact this person. If no contact information is known, the value is the zero-length string.
inventory_link: CONTACT
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'System description'
type: SNMP_AGENT
@@ -150,14 +136,15 @@ zabbix_export:
A textual description of the entity. This value should
include the full name and version identification of the system's hardware type, software operating-system, and
networking software.
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Hardware model name'
type: SNMP_AGENT
@@ -169,14 +156,15 @@ zabbix_export:
value_type: CHAR
description: 'MIB: ENTITY-MIB'
inventory_link: MODEL
- applications:
- -
- name: Inventory
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
-
name: 'Hardware serial number'
type: SNMP_AGENT
@@ -188,14 +176,15 @@ zabbix_export:
value_type: CHAR
description: 'MIB: ENTITY-MIB'
inventory_link: SERIALNO_A
- applications:
- -
- name: Inventory
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -216,14 +205,15 @@ zabbix_export:
MIB: SNMPv2-MIB
The physical location of this node (e.g., `telephone closet, 3rd floor'). If the location is unknown, the value is the zero-length string.
inventory_link: LOCATION
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'System name'
type: SNMP_AGENT
@@ -237,14 +227,15 @@ zabbix_export:
MIB: SNMPv2-MIB
An administratively-assigned name for this managed node.By convention, this is the node's fully-qualified domain name. If the name is unknown, the value is the zero-length string.
inventory_link: NAME
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: General
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -263,14 +254,15 @@ zabbix_export:
description: |
MIB: SNMPv2-MIB
The vendor's authoritative identification of the network management subsystem contained in the entity. This value is allocated within the SMI enterprises subtree (1.3.6.1.4.1) and provides an easy and unambiguous means for determining`what kind of box' is being managed. For example, if vendor`Flintstones, Inc.' was assigned the subtree1.3.6.1.4.1.4242, it could assign the identifier 1.3.6.1.4.1.4242.1.1 to its `Fred Router'.
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Operating system'
type: SNMP_AGENT
@@ -282,9 +274,6 @@ zabbix_export:
value_type: CHAR
description: 'MIB: SNMPv2-MIB'
inventory_link: OS
- applications:
- -
- name: Inventory
preprocessing:
-
type: REGEX
@@ -295,6 +284,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -318,14 +311,15 @@ zabbix_export:
description: |
MIB: SNMPv2-MIB
The time (in hundredths of a second) since the network management portion of the system was last re-initialized.
- applications:
- -
- name: Status
preprocessing:
-
type: MULTIPLIER
parameters:
- '0.01'
+ tags:
+ -
+ tag: Application
+ value: Status
triggers:
-
expression: '{last()}<10m'
@@ -338,11 +332,12 @@ zabbix_export:
type: INTERNAL
key: 'zabbix[host,snmp,available]'
history: 7d
- applications:
- -
- name: Status
valuemap:
name: zabbix.host.available
+ tags:
+ -
+ tag: Application
+ value: Status
triggers:
-
expression: '{max({$SNMP.TIMEOUT})}=0'
@@ -378,9 +373,10 @@ zabbix_export:
Object name: cpmCPUTotal5minRev
The cpmCPUTotal5minRev MIB object provides a more accurate view of the performance of the router over time than the MIB objects cpmCPUTotal1minRev and cpmCPUTotal5secRev . These MIB objects are not accurate because they look at CPU at one minute and five second intervals, respectively. These MIBs enable you to monitor the trends and plan the capacity of your network. The recommended baseline rising threshold for cpmCPUTotal5minRev is 90 percent. Depending on the platform, some routers that run at 90 percent, for example, 2500s, can exhibit performance degradation versus a high-end router, for example, the 7500 series, which can operate fine.
Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15215-collect-cpu-util-snmp.html
- applications:
+ tags:
-
- name: CPU
+ tag: Application
+ value: CPU
trigger_prototypes:
-
expression: '{min(5m)}>{$CPU.UTIL.CRIT}'
@@ -430,14 +426,15 @@ zabbix_export:
description: |
MIB: ENTITY-MIB
Object name: entPhysicalSerialNum
- applications:
- -
- name: Inventory
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
trigger_prototypes:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -461,11 +458,12 @@ zabbix_export:
description: |
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonFanState
- applications:
- -
- name: Fans
valuemap:
name: 'CISCO-ENVMON-MIB::CiscoEnvMonState'
+ tags:
+ -
+ tag: Application
+ value: Fans
trigger_prototypes:
-
expression: '{last()}=3 or {last()}=4'
@@ -505,9 +503,10 @@ zabbix_export:
Object name: ciscoMemoryPoolFree
Indicates the number of bytes from the memory pool that are currently unused on the managed device. Note that the sum of ciscoMemoryPoolUsed and ciscoMemoryPoolFree is the total amount of memory in the pool
Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html
- applications:
+ tags:
-
- name: Memory
+ tag: Application
+ value: Memory
-
name: '{#SNMPVALUE}: Used memory'
type: SNMP_AGENT
@@ -520,9 +519,10 @@ zabbix_export:
Object name: ciscoMemoryPoolUsed
Indicates the number of bytes from the memory pool that are currently in use by applications on the managed device.
Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html
- applications:
+ tags:
-
- name: Memory
+ tag: Application
+ value: Memory
-
name: '{#SNMPVALUE}: Memory utilization'
type: CALCULATED
@@ -532,9 +532,10 @@ zabbix_export:
units: '%'
params: 'last("vm.memory.used[{#SNMPINDEX}]")/(last("vm.memory.free[{#SNMPINDEX}]")+last("vm.memory.used[{#SNMPINDEX}]"))*100'
description: 'Memory utilization in %'
- applications:
+ tags:
-
- name: Memory
+ tag: Application
+ value: Memory
trigger_prototypes:
-
expression: '{min(5m)}>{$MEMORY.UTIL.MAX}'
@@ -631,14 +632,15 @@ zabbix_export:
One possible reason for discarding such a packet could be to free up buffer space.
Discontinuities in the value of this counter can occur at re-initialization of the management system,
and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Inbound packets with errors'
type: SNMP_AGENT
@@ -648,14 +650,15 @@ zabbix_export:
description: |
MIB: IF-MIB
For packet-oriented interfaces, the number of inbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of inbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}'
@@ -679,9 +682,6 @@ zabbix_export:
description: |
MIB: IF-MIB
The total number of octets received on the interface, including framing characters. This object is a 64-bit version of ifInOctets. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
@@ -691,6 +691,10 @@ zabbix_export:
type: MULTIPLIER
parameters:
- '8'
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Outbound packets discarded'
type: SNMP_AGENT
@@ -704,14 +708,15 @@ zabbix_export:
One possible reason for discarding such a packet could be to free up buffer space.
Discontinuities in the value of this counter can occur at re-initialization of the management system,
and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Outbound packets with errors'
type: SNMP_AGENT
@@ -721,14 +726,15 @@ zabbix_export:
description: |
MIB: IF-MIB
For packet-oriented interfaces, the number of outbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of outbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}'
@@ -752,9 +758,6 @@ zabbix_export:
description: |
MIB: IF-MIB
The total number of octets transmitted out of the interface, including framing characters. This object is a 64-bit version of ifOutOctets.Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
@@ -764,6 +767,10 @@ zabbix_export:
type: MULTIPLIER
parameters:
- '8'
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Speed'
type: SNMP_AGENT
@@ -774,9 +781,6 @@ zabbix_export:
description: |
MIB: IF-MIB
An estimate of the interface's current bandwidth in units of 1,000,000 bits per second. If this object reports a value of `n' then the speed of the interface is somewhere in the range of `n-500,000' to`n+499,999'. For interfaces which do not vary in bandwidth or for those where no accurate estimation can be made, this object should contain the nominal bandwidth. For a sub-layer which has no concept of bandwidth, this object should be zero.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: MULTIPLIER
@@ -786,6 +790,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Operational status'
type: SNMP_AGENT
@@ -801,9 +809,6 @@ zabbix_export:
- It should change todormant(5) if the interface is waiting for external actions (such as a serial line waiting for an incoming connection)
- It should remain in the down(2) state if and only if there is a fault that prevents it from going to the up(1) state
- It should remain in the notPresent(6) state if the interface has missing(typically, hardware) components.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
valuemap:
name: 'IF-MIB::ifOperStatus'
preprocessing:
@@ -811,6 +816,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{$IFCONTROL:"{#IFNAME}"}=1 and ({last()}=2)'
@@ -832,9 +841,6 @@ zabbix_export:
The type of interface.
Additional values for ifType are assigned by the Internet Assigned NumbersAuthority (IANA),
through updating the syntax of the IANAifType textual convention.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
valuemap:
name: 'IF-MIB::ifType'
preprocessing:
@@ -842,6 +848,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: |
@@ -993,11 +1003,12 @@ zabbix_export:
ifMauType. This was felt to be sufficiently
valuable to justify the redundancy.
Reference: [IEEE 802.3 Std.], 30.3.1.1.32,aDuplexStatus.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
valuemap:
name: 'EtherLike-MIB::dot3StatsDuplexStatus'
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{last()}=2'
@@ -1021,11 +1032,12 @@ zabbix_export:
description: |
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonSupplyState
- applications:
- -
- name: 'Power supply'
valuemap:
name: 'CISCO-ENVMON-MIB::CiscoEnvMonState'
+ tags:
+ -
+ tag: Application
+ value: 'Power supply'
trigger_prototypes:
-
expression: '{last()}=3 or {last(4)}=4'
@@ -1063,11 +1075,12 @@ zabbix_export:
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonTemperatureState
The current state of the test point being instrumented.
- applications:
- -
- name: Temperature
valuemap:
name: 'CISCO-ENVMON-MIB::CiscoEnvMonState'
+ tags:
+ -
+ tag: Application
+ value: Temperature
trigger_prototypes:
-
expression: '{last()}=3 or {last()}=4'
@@ -1097,9 +1110,10 @@ zabbix_export:
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonTemperatureValue
The current measurement of the test point being instrumented.
- applications:
+ tags:
-
- name: Temperature
+ tag: Application
+ value: Temperature
trigger_prototypes:
-
expression: '{avg(5m)}>{$TEMP_CRIT:"{#SNMPVALUE}"}'
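
The hunks above, and the near-identical Catalyst 3750 24PS/24TS/48PS diffs that follow, all apply one mechanical change: each item's `applications` list is removed and the grouping is re-expressed as a `tags` entry with the tag name `Application`. Shown in isolation, with `Status` standing in for whichever application name an item used, the pattern is:

```yaml
# Old layout: the item is grouped through an application entry.
# applications:
#   -
#     name: Status

# New layout introduced by this commit: the same grouping as an item-level tag.
tags:
  -
    tag: Application
    value: Status
```

Application prototypes on discovered items get the same treatment; the tag value simply keeps the LLD macros, for example `Interface {#IFNAME}({#IFALIAS})`.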
diff --git a/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_24ps_snmp/template_net_cisco_catalyst_3750_24ps_snmp.yaml b/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_24ps_snmp/template_net_cisco_catalyst_3750_24ps_snmp.yaml
index 1dabceddb79..ee2c1e3ff95 100644
--- a/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_24ps_snmp/template_net_cisco_catalyst_3750_24ps_snmp.yaml
+++ b/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_24ps_snmp/template_net_cisco_catalyst_3750_24ps_snmp.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-04T00:00:00Z'
+ date: '2021-04-22T11:27:07Z'
groups:
-
name: Templates/Modules
@@ -24,36 +24,18 @@ zabbix_export:
groups:
-
name: Templates/Modules
- applications:
- -
- name: CPU
- -
- name: Fans
- -
- name: General
- -
- name: Inventory
- -
- name: Memory
- -
- name: 'Network interfaces'
- -
- name: 'Power supply'
- -
- name: Status
- -
- name: Temperature
items:
-
name: 'ICMP ping'
type: SIMPLE
key: icmpping
history: 7d
- applications:
- -
- name: Status
valuemap:
name: 'Service state'
+ tags:
+ -
+ tag: Application
+ value: Status
triggers:
-
expression: '{max(#3)}=0'
@@ -67,9 +49,10 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '%'
- applications:
+ tags:
-
- name: Status
+ tag: Application
+ value: Status
triggers:
-
expression: '{min(5m)}>{$ICMP_LOSS_WARN} and {min(5m)}<100'
@@ -87,9 +70,10 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: s
- applications:
+ tags:
-
- name: Status
+ tag: Application
+ value: Status
triggers:
-
expression: '{avg(5m)}>{$ICMP_RESPONSE_TIME_WARN}'
@@ -111,10 +95,11 @@ zabbix_export:
trends: '0'
value_type: LOG
description: 'Item is used to collect all SNMP traps unmatched by other snmptrap items'
- applications:
- -
- name: General
logtimefmt: 'hh:mm:sszyyyy/MM/dd'
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'System contact details'
type: SNMP_AGENT
@@ -128,14 +113,15 @@ zabbix_export:
MIB: SNMPv2-MIB
The textual identification of the contact person for this managed node, together with information on how to contact this person. If no contact information is known, the value is the zero-length string.
inventory_link: CONTACT
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'System description'
type: SNMP_AGENT
@@ -150,14 +136,15 @@ zabbix_export:
A textual description of the entity. This value should
include the full name and version identification of the system's hardware type, software operating-system, and
networking software.
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Hardware model name'
type: SNMP_AGENT
@@ -169,14 +156,15 @@ zabbix_export:
value_type: CHAR
description: 'MIB: ENTITY-MIB'
inventory_link: MODEL
- applications:
- -
- name: Inventory
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
-
name: 'Hardware serial number'
type: SNMP_AGENT
@@ -188,14 +176,15 @@ zabbix_export:
value_type: CHAR
description: 'MIB: ENTITY-MIB'
inventory_link: SERIALNO_A
- applications:
- -
- name: Inventory
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -216,14 +205,15 @@ zabbix_export:
MIB: SNMPv2-MIB
The physical location of this node (e.g., `telephone closet, 3rd floor'). If the location is unknown, the value is the zero-length string.
inventory_link: LOCATION
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'System name'
type: SNMP_AGENT
@@ -237,14 +227,15 @@ zabbix_export:
MIB: SNMPv2-MIB
An administratively-assigned name for this managed node.By convention, this is the node's fully-qualified domain name. If the name is unknown, the value is the zero-length string.
inventory_link: NAME
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: General
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -263,14 +254,15 @@ zabbix_export:
description: |
MIB: SNMPv2-MIB
The vendor's authoritative identification of the network management subsystem contained in the entity. This value is allocated within the SMI enterprises subtree (1.3.6.1.4.1) and provides an easy and unambiguous means for determining`what kind of box' is being managed. For example, if vendor`Flintstones, Inc.' was assigned the subtree1.3.6.1.4.1.4242, it could assign the identifier 1.3.6.1.4.1.4242.1.1 to its `Fred Router'.
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Operating system'
type: SNMP_AGENT
@@ -282,9 +274,6 @@ zabbix_export:
value_type: CHAR
description: 'MIB: SNMPv2-MIB'
inventory_link: OS
- applications:
- -
- name: Inventory
preprocessing:
-
type: REGEX
@@ -295,6 +284,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -318,14 +311,15 @@ zabbix_export:
description: |
MIB: SNMPv2-MIB
The time (in hundredths of a second) since the network management portion of the system was last re-initialized.
- applications:
- -
- name: Status
preprocessing:
-
type: MULTIPLIER
parameters:
- '0.01'
+ tags:
+ -
+ tag: Application
+ value: Status
triggers:
-
expression: '{last()}<10m'
@@ -338,11 +332,12 @@ zabbix_export:
type: INTERNAL
key: 'zabbix[host,snmp,available]'
history: 7d
- applications:
- -
- name: Status
valuemap:
name: zabbix.host.available
+ tags:
+ -
+ tag: Application
+ value: Status
triggers:
-
expression: '{max({$SNMP.TIMEOUT})}=0'
@@ -378,9 +373,10 @@ zabbix_export:
Object name: cpmCPUTotal5minRev
The cpmCPUTotal5minRev MIB object provides a more accurate view of the performance of the router over time than the MIB objects cpmCPUTotal1minRev and cpmCPUTotal5secRev . These MIB objects are not accurate because they look at CPU at one minute and five second intervals, respectively. These MIBs enable you to monitor the trends and plan the capacity of your network. The recommended baseline rising threshold for cpmCPUTotal5minRev is 90 percent. Depending on the platform, some routers that run at 90 percent, for example, 2500s, can exhibit performance degradation versus a high-end router, for example, the 7500 series, which can operate fine.
Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15215-collect-cpu-util-snmp.html
- applications:
+ tags:
-
- name: CPU
+ tag: Application
+ value: CPU
trigger_prototypes:
-
expression: '{min(5m)}>{$CPU.UTIL.CRIT}'
@@ -430,14 +426,15 @@ zabbix_export:
description: |
MIB: ENTITY-MIB
Object name: entPhysicalSerialNum
- applications:
- -
- name: Inventory
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
trigger_prototypes:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -461,11 +458,12 @@ zabbix_export:
description: |
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonFanState
- applications:
- -
- name: Fans
valuemap:
name: 'CISCO-ENVMON-MIB::CiscoEnvMonState'
+ tags:
+ -
+ tag: Application
+ value: Fans
trigger_prototypes:
-
expression: '{last()}=3 or {last()}=4'
@@ -505,9 +503,10 @@ zabbix_export:
Object name: ciscoMemoryPoolFree
Indicates the number of bytes from the memory pool that are currently unused on the managed device. Note that the sum of ciscoMemoryPoolUsed and ciscoMemoryPoolFree is the total amount of memory in the pool
Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html
- applications:
+ tags:
-
- name: Memory
+ tag: Application
+ value: Memory
-
name: '{#SNMPVALUE}: Used memory'
type: SNMP_AGENT
@@ -520,9 +519,10 @@ zabbix_export:
Object name: ciscoMemoryPoolUsed
Indicates the number of bytes from the memory pool that are currently in use by applications on the managed device.
Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html
- applications:
+ tags:
-
- name: Memory
+ tag: Application
+ value: Memory
-
name: '{#SNMPVALUE}: Memory utilization'
type: CALCULATED
@@ -532,9 +532,10 @@ zabbix_export:
units: '%'
params: 'last("vm.memory.used[{#SNMPINDEX}]")/(last("vm.memory.free[{#SNMPINDEX}]")+last("vm.memory.used[{#SNMPINDEX}]"))*100'
description: 'Memory utilization in %'
- applications:
+ tags:
-
- name: Memory
+ tag: Application
+ value: Memory
trigger_prototypes:
-
expression: '{min(5m)}>{$MEMORY.UTIL.MAX}'
@@ -631,14 +632,15 @@ zabbix_export:
One possible reason for discarding such a packet could be to free up buffer space.
Discontinuities in the value of this counter can occur at re-initialization of the management system,
and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Inbound packets with errors'
type: SNMP_AGENT
@@ -648,14 +650,15 @@ zabbix_export:
description: |
MIB: IF-MIB
For packet-oriented interfaces, the number of inbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of inbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}'
@@ -679,9 +682,6 @@ zabbix_export:
description: |
MIB: IF-MIB
The total number of octets received on the interface, including framing characters. This object is a 64-bit version of ifInOctets. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
@@ -691,6 +691,10 @@ zabbix_export:
type: MULTIPLIER
parameters:
- '8'
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Outbound packets discarded'
type: SNMP_AGENT
@@ -704,14 +708,15 @@ zabbix_export:
One possible reason for discarding such a packet could be to free up buffer space.
Discontinuities in the value of this counter can occur at re-initialization of the management system,
and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Outbound packets with errors'
type: SNMP_AGENT
@@ -721,14 +726,15 @@ zabbix_export:
description: |
MIB: IF-MIB
For packet-oriented interfaces, the number of outbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of outbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}'
@@ -752,9 +758,6 @@ zabbix_export:
description: |
MIB: IF-MIB
The total number of octets transmitted out of the interface, including framing characters. This object is a 64-bit version of ifOutOctets.Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
@@ -764,6 +767,10 @@ zabbix_export:
type: MULTIPLIER
parameters:
- '8'
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Speed'
type: SNMP_AGENT
@@ -774,9 +781,6 @@ zabbix_export:
description: |
MIB: IF-MIB
An estimate of the interface's current bandwidth in units of 1,000,000 bits per second. If this object reports a value of `n' then the speed of the interface is somewhere in the range of `n-500,000' to`n+499,999'. For interfaces which do not vary in bandwidth or for those where no accurate estimation can be made, this object should contain the nominal bandwidth. For a sub-layer which has no concept of bandwidth, this object should be zero.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: MULTIPLIER
@@ -786,6 +790,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Operational status'
type: SNMP_AGENT
@@ -801,9 +809,6 @@ zabbix_export:
- It should change todormant(5) if the interface is waiting for external actions (such as a serial line waiting for an incoming connection)
- It should remain in the down(2) state if and only if there is a fault that prevents it from going to the up(1) state
- It should remain in the notPresent(6) state if the interface has missing(typically, hardware) components.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
valuemap:
name: 'IF-MIB::ifOperStatus'
preprocessing:
@@ -811,6 +816,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{$IFCONTROL:"{#IFNAME}"}=1 and ({last()}=2)'
@@ -832,9 +841,6 @@ zabbix_export:
The type of interface.
Additional values for ifType are assigned by the Internet Assigned NumbersAuthority (IANA),
through updating the syntax of the IANAifType textual convention.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
valuemap:
name: 'IF-MIB::ifType'
preprocessing:
@@ -842,6 +848,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: |
@@ -993,11 +1003,12 @@ zabbix_export:
ifMauType. This was felt to be sufficiently
valuable to justify the redundancy.
Reference: [IEEE 802.3 Std.], 30.3.1.1.32,aDuplexStatus.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
valuemap:
name: 'EtherLike-MIB::dot3StatsDuplexStatus'
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{last()}=2'
@@ -1021,11 +1032,12 @@ zabbix_export:
description: |
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonSupplyState
- applications:
- -
- name: 'Power supply'
valuemap:
name: 'CISCO-ENVMON-MIB::CiscoEnvMonState'
+ tags:
+ -
+ tag: Application
+ value: 'Power supply'
trigger_prototypes:
-
expression: '{last()}=3 or {last(4)}=4'
@@ -1063,11 +1075,12 @@ zabbix_export:
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonTemperatureState
The current state of the test point being instrumented.
- applications:
- -
- name: Temperature
valuemap:
name: 'CISCO-ENVMON-MIB::CiscoEnvMonState'
+ tags:
+ -
+ tag: Application
+ value: Temperature
trigger_prototypes:
-
expression: '{last()}=3 or {last()}=4'
@@ -1097,9 +1110,10 @@ zabbix_export:
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonTemperatureValue
The current measurement of the test point being instrumented.
- applications:
+ tags:
-
- name: Temperature
+ tag: Application
+ value: Temperature
trigger_prototypes:
-
expression: '{avg(5m)}>{$TEMP_CRIT:"{#SNMPVALUE}"}'
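
The interface traffic items ('Bits received'/'Bits sent') in these templates keep their two-step preprocessing: `CHANGE_PER_SECOND` turns the raw 64-bit octet counters into a per-second rate, and a `MULTIPLIER` of 8 converts octets per second into bits per second. The chain as it appears in the hunks, with a worked sample in the comment:

```yaml
# Sketch of the preprocessing chain used for the traffic items.
# Example: if the counter advanced by 1,250,000 octets over a 10 s polling
# interval, CHANGE_PER_SECOND yields 125,000 octets/s and MULTIPLIER 8
# reports 1,000,000 bps.
preprocessing:
  -
    type: CHANGE_PER_SECOND
    parameters:
      - ''
  -
    type: MULTIPLIER
    parameters:
      - '8'
```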
diff --git a/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_24ts_snmp/template_net_cisco_catalyst_3750_24ts_snmp.yaml b/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_24ts_snmp/template_net_cisco_catalyst_3750_24ts_snmp.yaml
index 5929e89cf1d..c5aae8027bb 100644
--- a/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_24ts_snmp/template_net_cisco_catalyst_3750_24ts_snmp.yaml
+++ b/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_24ts_snmp/template_net_cisco_catalyst_3750_24ts_snmp.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-04T00:00:00Z'
+ date: '2021-04-22T11:27:07Z'
groups:
-
name: Templates/Modules
@@ -24,36 +24,18 @@ zabbix_export:
groups:
-
name: Templates/Modules
- applications:
- -
- name: CPU
- -
- name: Fans
- -
- name: General
- -
- name: Inventory
- -
- name: Memory
- -
- name: 'Network interfaces'
- -
- name: 'Power supply'
- -
- name: Status
- -
- name: Temperature
items:
-
name: 'ICMP ping'
type: SIMPLE
key: icmpping
history: 7d
- applications:
- -
- name: Status
valuemap:
name: 'Service state'
+ tags:
+ -
+ tag: Application
+ value: Status
triggers:
-
expression: '{max(#3)}=0'
@@ -67,9 +49,10 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '%'
- applications:
+ tags:
-
- name: Status
+ tag: Application
+ value: Status
triggers:
-
expression: '{min(5m)}>{$ICMP_LOSS_WARN} and {min(5m)}<100'
@@ -87,9 +70,10 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: s
- applications:
+ tags:
-
- name: Status
+ tag: Application
+ value: Status
triggers:
-
expression: '{avg(5m)}>{$ICMP_RESPONSE_TIME_WARN}'
@@ -111,10 +95,11 @@ zabbix_export:
trends: '0'
value_type: LOG
description: 'Item is used to collect all SNMP traps unmatched by other snmptrap items'
- applications:
- -
- name: General
logtimefmt: 'hh:mm:sszyyyy/MM/dd'
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'System contact details'
type: SNMP_AGENT
@@ -128,14 +113,15 @@ zabbix_export:
MIB: SNMPv2-MIB
The textual identification of the contact person for this managed node, together with information on how to contact this person. If no contact information is known, the value is the zero-length string.
inventory_link: CONTACT
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'System description'
type: SNMP_AGENT
@@ -150,14 +136,15 @@ zabbix_export:
A textual description of the entity. This value should
include the full name and version identification of the system's hardware type, software operating-system, and
networking software.
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Hardware model name'
type: SNMP_AGENT
@@ -169,14 +156,15 @@ zabbix_export:
value_type: CHAR
description: 'MIB: ENTITY-MIB'
inventory_link: MODEL
- applications:
- -
- name: Inventory
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
-
name: 'Hardware serial number'
type: SNMP_AGENT
@@ -188,14 +176,15 @@ zabbix_export:
value_type: CHAR
description: 'MIB: ENTITY-MIB'
inventory_link: SERIALNO_A
- applications:
- -
- name: Inventory
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -216,14 +205,15 @@ zabbix_export:
MIB: SNMPv2-MIB
The physical location of this node (e.g., `telephone closet, 3rd floor'). If the location is unknown, the value is the zero-length string.
inventory_link: LOCATION
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'System name'
type: SNMP_AGENT
@@ -237,14 +227,15 @@ zabbix_export:
MIB: SNMPv2-MIB
An administratively-assigned name for this managed node.By convention, this is the node's fully-qualified domain name. If the name is unknown, the value is the zero-length string.
inventory_link: NAME
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: General
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -263,14 +254,15 @@ zabbix_export:
description: |
MIB: SNMPv2-MIB
The vendor's authoritative identification of the network management subsystem contained in the entity. This value is allocated within the SMI enterprises subtree (1.3.6.1.4.1) and provides an easy and unambiguous means for determining`what kind of box' is being managed. For example, if vendor`Flintstones, Inc.' was assigned the subtree1.3.6.1.4.1.4242, it could assign the identifier 1.3.6.1.4.1.4242.1.1 to its `Fred Router'.
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Operating system'
type: SNMP_AGENT
@@ -282,9 +274,6 @@ zabbix_export:
value_type: CHAR
description: 'MIB: SNMPv2-MIB'
inventory_link: OS
- applications:
- -
- name: Inventory
preprocessing:
-
type: REGEX
@@ -295,6 +284,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -318,14 +311,15 @@ zabbix_export:
description: |
MIB: SNMPv2-MIB
The time (in hundredths of a second) since the network management portion of the system was last re-initialized.
- applications:
- -
- name: Status
preprocessing:
-
type: MULTIPLIER
parameters:
- '0.01'
+ tags:
+ -
+ tag: Application
+ value: Status
triggers:
-
expression: '{last()}<10m'
@@ -338,11 +332,12 @@ zabbix_export:
type: INTERNAL
key: 'zabbix[host,snmp,available]'
history: 7d
- applications:
- -
- name: Status
valuemap:
name: zabbix.host.available
+ tags:
+ -
+ tag: Application
+ value: Status
triggers:
-
expression: '{max({$SNMP.TIMEOUT})}=0'
@@ -378,9 +373,10 @@ zabbix_export:
Object name: cpmCPUTotal5minRev
The cpmCPUTotal5minRev MIB object provides a more accurate view of the performance of the router over time than the MIB objects cpmCPUTotal1minRev and cpmCPUTotal5secRev . These MIB objects are not accurate because they look at CPU at one minute and five second intervals, respectively. These MIBs enable you to monitor the trends and plan the capacity of your network. The recommended baseline rising threshold for cpmCPUTotal5minRev is 90 percent. Depending on the platform, some routers that run at 90 percent, for example, 2500s, can exhibit performance degradation versus a high-end router, for example, the 7500 series, which can operate fine.
Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15215-collect-cpu-util-snmp.html
- applications:
+ tags:
-
- name: CPU
+ tag: Application
+ value: CPU
trigger_prototypes:
-
expression: '{min(5m)}>{$CPU.UTIL.CRIT}'
@@ -430,14 +426,15 @@ zabbix_export:
description: |
MIB: ENTITY-MIB
Object name: entPhysicalSerialNum
- applications:
- -
- name: Inventory
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
trigger_prototypes:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -461,11 +458,12 @@ zabbix_export:
description: |
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonFanState
- applications:
- -
- name: Fans
valuemap:
name: 'CISCO-ENVMON-MIB::CiscoEnvMonState'
+ tags:
+ -
+ tag: Application
+ value: Fans
trigger_prototypes:
-
expression: '{last()}=3 or {last()}=4'
@@ -505,9 +503,10 @@ zabbix_export:
Object name: ciscoMemoryPoolFree
Indicates the number of bytes from the memory pool that are currently unused on the managed device. Note that the sum of ciscoMemoryPoolUsed and ciscoMemoryPoolFree is the total amount of memory in the pool
Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html
- applications:
+ tags:
-
- name: Memory
+ tag: Application
+ value: Memory
-
name: '{#SNMPVALUE}: Used memory'
type: SNMP_AGENT
@@ -520,9 +519,10 @@ zabbix_export:
Object name: ciscoMemoryPoolUsed
Indicates the number of bytes from the memory pool that are currently in use by applications on the managed device.
Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html
- applications:
+ tags:
-
- name: Memory
+ tag: Application
+ value: Memory
-
name: '{#SNMPVALUE}: Memory utilization'
type: CALCULATED
@@ -532,9 +532,10 @@ zabbix_export:
units: '%'
params: 'last("vm.memory.used[{#SNMPINDEX}]")/(last("vm.memory.free[{#SNMPINDEX}]")+last("vm.memory.used[{#SNMPINDEX}]"))*100'
description: 'Memory utilization in %'
- applications:
+ tags:
-
- name: Memory
+ tag: Application
+ value: Memory
trigger_prototypes:
-
expression: '{min(5m)}>{$MEMORY.UTIL.MAX}'
@@ -631,14 +632,15 @@ zabbix_export:
One possible reason for discarding such a packet could be to free up buffer space.
Discontinuities in the value of this counter can occur at re-initialization of the management system,
and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Inbound packets with errors'
type: SNMP_AGENT
@@ -648,14 +650,15 @@ zabbix_export:
description: |
MIB: IF-MIB
For packet-oriented interfaces, the number of inbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of inbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}'
@@ -679,9 +682,6 @@ zabbix_export:
description: |
MIB: IF-MIB
The total number of octets received on the interface, including framing characters. This object is a 64-bit version of ifInOctets. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
@@ -691,6 +691,10 @@ zabbix_export:
type: MULTIPLIER
parameters:
- '8'
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Outbound packets discarded'
type: SNMP_AGENT
@@ -704,14 +708,15 @@ zabbix_export:
One possible reason for discarding such a packet could be to free up buffer space.
Discontinuities in the value of this counter can occur at re-initialization of the management system,
and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Outbound packets with errors'
type: SNMP_AGENT
@@ -721,14 +726,15 @@ zabbix_export:
description: |
MIB: IF-MIB
For packet-oriented interfaces, the number of outbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of outbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}'
@@ -752,9 +758,6 @@ zabbix_export:
description: |
MIB: IF-MIB
The total number of octets transmitted out of the interface, including framing characters. This object is a 64-bit version of ifOutOctets.Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
@@ -764,6 +767,10 @@ zabbix_export:
type: MULTIPLIER
parameters:
- '8'
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Speed'
type: SNMP_AGENT
@@ -774,9 +781,6 @@ zabbix_export:
description: |
MIB: IF-MIB
An estimate of the interface's current bandwidth in units of 1,000,000 bits per second. If this object reports a value of `n' then the speed of the interface is somewhere in the range of `n-500,000' to`n+499,999'. For interfaces which do not vary in bandwidth or for those where no accurate estimation can be made, this object should contain the nominal bandwidth. For a sub-layer which has no concept of bandwidth, this object should be zero.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: MULTIPLIER
@@ -786,6 +790,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Operational status'
type: SNMP_AGENT
@@ -801,9 +809,6 @@ zabbix_export:
- It should change todormant(5) if the interface is waiting for external actions (such as a serial line waiting for an incoming connection)
- It should remain in the down(2) state if and only if there is a fault that prevents it from going to the up(1) state
- It should remain in the notPresent(6) state if the interface has missing(typically, hardware) components.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
valuemap:
name: 'IF-MIB::ifOperStatus'
preprocessing:
@@ -811,6 +816,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{$IFCONTROL:"{#IFNAME}"}=1 and ({last()}=2)'
@@ -832,9 +841,6 @@ zabbix_export:
The type of interface.
Additional values for ifType are assigned by the Internet Assigned NumbersAuthority (IANA),
through updating the syntax of the IANAifType textual convention.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
valuemap:
name: 'IF-MIB::ifType'
preprocessing:
@@ -842,6 +848,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: |
@@ -993,11 +1003,12 @@ zabbix_export:
ifMauType. This was felt to be sufficiently
valuable to justify the redundancy.
Reference: [IEEE 802.3 Std.], 30.3.1.1.32,aDuplexStatus.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
valuemap:
name: 'EtherLike-MIB::dot3StatsDuplexStatus'
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{last()}=2'
@@ -1021,11 +1032,12 @@ zabbix_export:
description: |
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonSupplyState
- applications:
- -
- name: 'Power supply'
valuemap:
name: 'CISCO-ENVMON-MIB::CiscoEnvMonState'
+ tags:
+ -
+ tag: Application
+ value: 'Power supply'
trigger_prototypes:
-
expression: '{last()}=3 or {last(4)}=4'
@@ -1063,11 +1075,12 @@ zabbix_export:
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonTemperatureState
The current state of the test point being instrumented.
- applications:
- -
- name: Temperature
valuemap:
name: 'CISCO-ENVMON-MIB::CiscoEnvMonState'
+ tags:
+ -
+ tag: Application
+ value: Temperature
trigger_prototypes:
-
expression: '{last()}=3 or {last()}=4'
@@ -1097,9 +1110,10 @@ zabbix_export:
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonTemperatureValue
The current measurement of the test point being instrumented.
- applications:
+ tags:
-
- name: Temperature
+ tag: Application
+ value: Temperature
trigger_prototypes:
-
expression: '{avg(5m)}>{$TEMP_CRIT:"{#SNMPVALUE}"}'
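
Across all of these Catalyst 3750 variants, the '{#SNMPVALUE}: Memory utilization' prototype remains a CALCULATED item: the percentage is derived from the discovered used/free memory-pool items instead of being polled from a separate OID. A minimal sketch of that prototype, reusing the `params` formula that the hunks above carry over unchanged (the item key here is assumed for illustration, not taken from the diff):

```yaml
# Sketch of the calculated memory-utilization item prototype.
# Formula (unchanged by this commit): used / (free + used) * 100.
-
  name: '{#SNMPVALUE}: Memory utilization'
  type: CALCULATED
  key: 'vm.memory.util[{#SNMPINDEX}]'   # assumed key, for illustration only
  units: '%'
  params: 'last("vm.memory.used[{#SNMPINDEX}]")/(last("vm.memory.free[{#SNMPINDEX}]")+last("vm.memory.used[{#SNMPINDEX}]"))*100'
  description: 'Memory utilization in %'
  tags:
    -
      tag: Application
      value: Memory
```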
diff --git a/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_48ps_snmp/template_net_cisco_catalyst_3750_48ps_snmp.yaml b/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_48ps_snmp/template_net_cisco_catalyst_3750_48ps_snmp.yaml
index 7a4d70acd07..05170a4e3c8 100644
--- a/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_48ps_snmp/template_net_cisco_catalyst_3750_48ps_snmp.yaml
+++ b/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_48ps_snmp/template_net_cisco_catalyst_3750_48ps_snmp.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-04T00:00:00Z'
+ date: '2021-04-22T13:00:43Z'
groups:
-
name: Templates/Modules
@@ -24,36 +24,18 @@ zabbix_export:
groups:
-
name: Templates/Modules
- applications:
- -
- name: CPU
- -
- name: Fans
- -
- name: General
- -
- name: Inventory
- -
- name: Memory
- -
- name: 'Network interfaces'
- -
- name: 'Power supply'
- -
- name: Status
- -
- name: Temperature
items:
-
name: 'ICMP ping'
type: SIMPLE
key: icmpping
history: 7d
- applications:
- -
- name: Status
valuemap:
name: 'Service state'
+ tags:
+ -
+ tag: Application
+ value: Status
triggers:
-
expression: '{max(#3)}=0'
@@ -67,9 +49,10 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '%'
- applications:
+ tags:
-
- name: Status
+ tag: Application
+ value: Status
triggers:
-
expression: '{min(5m)}>{$ICMP_LOSS_WARN} and {min(5m)}<100'
@@ -87,9 +70,10 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: s
- applications:
+ tags:
-
- name: Status
+ tag: Application
+ value: Status
triggers:
-
expression: '{avg(5m)}>{$ICMP_RESPONSE_TIME_WARN}'
@@ -111,10 +95,11 @@ zabbix_export:
trends: '0'
value_type: LOG
description: 'Item is used to collect all SNMP traps unmatched by other snmptrap items'
- applications:
- -
- name: General
logtimefmt: 'hh:mm:sszyyyy/MM/dd'
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'System contact details'
type: SNMP_AGENT
@@ -128,14 +113,15 @@ zabbix_export:
MIB: SNMPv2-MIB
The textual identification of the contact person for this managed node, together with information on how to contact this person. If no contact information is known, the value is the zero-length string.
inventory_link: CONTACT
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'System description'
type: SNMP_AGENT
@@ -150,14 +136,15 @@ zabbix_export:
A textual description of the entity. This value should
include the full name and version identification of the system's hardware type, software operating-system, and
networking software.
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Hardware model name'
type: SNMP_AGENT
@@ -169,14 +156,15 @@ zabbix_export:
value_type: CHAR
description: 'MIB: ENTITY-MIB'
inventory_link: MODEL
- applications:
- -
- name: Inventory
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
-
name: 'Hardware serial number'
type: SNMP_AGENT
@@ -188,14 +176,15 @@ zabbix_export:
value_type: CHAR
description: 'MIB: ENTITY-MIB'
inventory_link: SERIALNO_A
- applications:
- -
- name: Inventory
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -216,14 +205,15 @@ zabbix_export:
MIB: SNMPv2-MIB
The physical location of this node (e.g., `telephone closet, 3rd floor'). If the location is unknown, the value is the zero-length string.
inventory_link: LOCATION
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'System name'
type: SNMP_AGENT
@@ -237,14 +227,15 @@ zabbix_export:
MIB: SNMPv2-MIB
An administratively-assigned name for this managed node.By convention, this is the node's fully-qualified domain name. If the name is unknown, the value is the zero-length string.
inventory_link: NAME
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: General
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -263,14 +254,15 @@ zabbix_export:
description: |
MIB: SNMPv2-MIB
The vendor's authoritative identification of the network management subsystem contained in the entity. This value is allocated within the SMI enterprises subtree (1.3.6.1.4.1) and provides an easy and unambiguous means for determining`what kind of box' is being managed. For example, if vendor`Flintstones, Inc.' was assigned the subtree1.3.6.1.4.1.4242, it could assign the identifier 1.3.6.1.4.1.4242.1.1 to its `Fred Router'.
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Operating system'
type: SNMP_AGENT
@@ -282,9 +274,6 @@ zabbix_export:
value_type: CHAR
description: 'MIB: SNMPv2-MIB'
inventory_link: OS
- applications:
- -
- name: Inventory
preprocessing:
-
type: REGEX
@@ -295,6 +284,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -318,14 +311,15 @@ zabbix_export:
description: |
MIB: SNMPv2-MIB
The time (in hundredths of a second) since the network management portion of the system was last re-initialized.
- applications:
- -
- name: Status
preprocessing:
-
type: MULTIPLIER
parameters:
- '0.01'
+ tags:
+ -
+ tag: Application
+ value: Status
triggers:
-
expression: '{last()}<10m'
@@ -338,11 +332,12 @@ zabbix_export:
type: INTERNAL
key: 'zabbix[host,snmp,available]'
history: 7d
- applications:
- -
- name: Status
valuemap:
name: zabbix.host.available
+ tags:
+ -
+ tag: Application
+ value: Status
triggers:
-
expression: '{max({$SNMP.TIMEOUT})}=0'
@@ -378,9 +373,10 @@ zabbix_export:
Object name: cpmCPUTotal5minRev
The cpmCPUTotal5minRev MIB object provides a more accurate view of the performance of the router over time than the MIB objects cpmCPUTotal1minRev and cpmCPUTotal5secRev . These MIB objects are not accurate because they look at CPU at one minute and five second intervals, respectively. These MIBs enable you to monitor the trends and plan the capacity of your network. The recommended baseline rising threshold for cpmCPUTotal5minRev is 90 percent. Depending on the platform, some routers that run at 90 percent, for example, 2500s, can exhibit performance degradation versus a high-end router, for example, the 7500 series, which can operate fine.
Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15215-collect-cpu-util-snmp.html
- applications:
+ tags:
-
- name: CPU
+ tag: Application
+ value: CPU
trigger_prototypes:
-
expression: '{min(5m)}>{$CPU.UTIL.CRIT}'
@@ -430,14 +426,15 @@ zabbix_export:
description: |
MIB: ENTITY-MIB
Object name: entPhysicalSerialNum
- applications:
- -
- name: Inventory
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
trigger_prototypes:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -461,11 +458,12 @@ zabbix_export:
description: |
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonFanState
- applications:
- -
- name: Fans
valuemap:
name: 'CISCO-ENVMON-MIB::CiscoEnvMonState'
+ tags:
+ -
+ tag: Application
+ value: Fans
trigger_prototypes:
-
expression: '{last()}=3 or {last()}=4'
@@ -505,9 +503,10 @@ zabbix_export:
Object name: ciscoMemoryPoolFree
Indicates the number of bytes from the memory pool that are currently unused on the managed device. Note that the sum of ciscoMemoryPoolUsed and ciscoMemoryPoolFree is the total amount of memory in the pool
Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html
- applications:
+ tags:
-
- name: Memory
+ tag: Application
+ value: Memory
-
name: '{#SNMPVALUE}: Used memory'
type: SNMP_AGENT
@@ -520,9 +519,10 @@ zabbix_export:
Object name: ciscoMemoryPoolUsed
Indicates the number of bytes from the memory pool that are currently in use by applications on the managed device.
Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html
- applications:
+ tags:
-
- name: Memory
+ tag: Application
+ value: Memory
-
name: '{#SNMPVALUE}: Memory utilization'
type: CALCULATED
@@ -532,9 +532,10 @@ zabbix_export:
units: '%'
params: 'last("vm.memory.used[{#SNMPINDEX}]")/(last("vm.memory.free[{#SNMPINDEX}]")+last("vm.memory.used[{#SNMPINDEX}]"))*100'
description: 'Memory utilization in %'
- applications:
+ tags:
-
- name: Memory
+ tag: Application
+ value: Memory
trigger_prototypes:
-
expression: '{min(5m)}>{$MEMORY.UTIL.MAX}'
@@ -631,14 +632,15 @@ zabbix_export:
One possible reason for discarding such a packet could be to free up buffer space.
Discontinuities in the value of this counter can occur at re-initialization of the management system,
and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Inbound packets with errors'
type: SNMP_AGENT
@@ -648,14 +650,15 @@ zabbix_export:
description: |
MIB: IF-MIB
For packet-oriented interfaces, the number of inbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of inbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}'
@@ -679,9 +682,6 @@ zabbix_export:
description: |
MIB: IF-MIB
The total number of octets received on the interface, including framing characters. This object is a 64-bit version of ifInOctets. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
@@ -691,6 +691,10 @@ zabbix_export:
type: MULTIPLIER
parameters:
- '8'
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Outbound packets discarded'
type: SNMP_AGENT
@@ -704,14 +708,15 @@ zabbix_export:
One possible reason for discarding such a packet could be to free up buffer space.
Discontinuities in the value of this counter can occur at re-initialization of the management system,
and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Outbound packets with errors'
type: SNMP_AGENT
@@ -721,14 +726,15 @@ zabbix_export:
description: |
MIB: IF-MIB
For packet-oriented interfaces, the number of outbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of outbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}'
@@ -752,9 +758,6 @@ zabbix_export:
description: |
MIB: IF-MIB
The total number of octets transmitted out of the interface, including framing characters. This object is a 64-bit version of ifOutOctets.Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
@@ -764,6 +767,10 @@ zabbix_export:
type: MULTIPLIER
parameters:
- '8'
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Speed'
type: SNMP_AGENT
@@ -774,9 +781,6 @@ zabbix_export:
description: |
MIB: IF-MIB
An estimate of the interface's current bandwidth in units of 1,000,000 bits per second. If this object reports a value of `n' then the speed of the interface is somewhere in the range of `n-500,000' to`n+499,999'. For interfaces which do not vary in bandwidth or for those where no accurate estimation can be made, this object should contain the nominal bandwidth. For a sub-layer which has no concept of bandwidth, this object should be zero.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: MULTIPLIER
@@ -786,6 +790,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Operational status'
type: SNMP_AGENT
@@ -801,9 +809,6 @@ zabbix_export:
- It should change todormant(5) if the interface is waiting for external actions (such as a serial line waiting for an incoming connection)
- It should remain in the down(2) state if and only if there is a fault that prevents it from going to the up(1) state
- It should remain in the notPresent(6) state if the interface has missing(typically, hardware) components.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
valuemap:
name: 'IF-MIB::ifOperStatus'
preprocessing:
@@ -811,6 +816,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{$IFCONTROL:"{#IFNAME}"}=1 and ({last()}=2)'
@@ -832,9 +841,6 @@ zabbix_export:
The type of interface.
Additional values for ifType are assigned by the Internet Assigned NumbersAuthority (IANA),
through updating the syntax of the IANAifType textual convention.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
valuemap:
name: 'IF-MIB::ifType'
preprocessing:
@@ -842,6 +848,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: |
@@ -993,11 +1003,12 @@ zabbix_export:
ifMauType. This was felt to be sufficiently
valuable to justify the redundancy.
Reference: [IEEE 802.3 Std.], 30.3.1.1.32,aDuplexStatus.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
valuemap:
name: 'EtherLike-MIB::dot3StatsDuplexStatus'
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{last()}=2'
@@ -1021,11 +1032,12 @@ zabbix_export:
description: |
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonSupplyState
- applications:
- -
- name: 'Power supply'
valuemap:
name: 'CISCO-ENVMON-MIB::CiscoEnvMonState'
+ tags:
+ -
+ tag: Application
+ value: 'Power supply'
trigger_prototypes:
-
expression: '{last()}=3 or {last(4)}=4'
@@ -1063,11 +1075,12 @@ zabbix_export:
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonTemperatureState
The current state of the test point being instrumented.
- applications:
- -
- name: Temperature
valuemap:
name: 'CISCO-ENVMON-MIB::CiscoEnvMonState'
+ tags:
+ -
+ tag: Application
+ value: Temperature
trigger_prototypes:
-
expression: '{last()}=3 or {last()}=4'
@@ -1097,9 +1110,10 @@ zabbix_export:
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonTemperatureValue
The current measurement of the test point being instrumented.
- applications:
+ tags:
-
- name: Temperature
+ tag: Application
+ value: Temperature
trigger_prototypes:
-
expression: '{avg(5m)}>{$TEMP_CRIT:"{#SNMPVALUE}"}'
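All of the hunks above apply one mechanical change: each item's `applications` (or, for item prototypes, `application_prototypes`) block is removed and an equivalent `tags` entry with `tag: Application` is added among the item's properties. The sketch below illustrates that before/after pattern using the `icmpping` item that appears in these hunks; it is not copied verbatim from the export (indentation and surrounding keys are trimmed for illustration):

```yaml
# Before (Zabbix 5.2-style export): grouping via an application
- name: 'ICMP ping'
  type: SIMPLE
  key: icmpping
  history: 7d
  applications:
    - name: Status

# After (Zabbix 5.4-style export): the same grouping as an item-level tag
- name: 'ICMP ping'
  type: SIMPLE
  key: icmpping
  history: 7d
  tags:
    - tag: Application
      value: Status
```

For discovered items, `application_prototypes` such as `Interface {#IFNAME}({#IFALIAS})` become tag values that keep the same LLD macros, so per-interface grouping is preserved after the conversion.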
diff --git a/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_48ts_snmp/template_net_cisco_catalyst_3750_48ts_snmp.yaml b/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_48ts_snmp/template_net_cisco_catalyst_3750_48ts_snmp.yaml
index acada896573..7e222e2f882 100644
--- a/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_48ts_snmp/template_net_cisco_catalyst_3750_48ts_snmp.yaml
+++ b/templates/net/cisco_catalyst_3750/cisco_catalyst_3750_48ts_snmp/template_net_cisco_catalyst_3750_48ts_snmp.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-04T00:00:00Z'
+ date: '2021-04-22T11:27:08Z'
groups:
-
name: Templates/Modules
@@ -24,36 +24,18 @@ zabbix_export:
groups:
-
name: Templates/Modules
- applications:
- -
- name: CPU
- -
- name: Fans
- -
- name: General
- -
- name: Inventory
- -
- name: Memory
- -
- name: 'Network interfaces'
- -
- name: 'Power supply'
- -
- name: Status
- -
- name: Temperature
items:
-
name: 'ICMP ping'
type: SIMPLE
key: icmpping
history: 7d
- applications:
- -
- name: Status
valuemap:
name: 'Service state'
+ tags:
+ -
+ tag: Application
+ value: Status
triggers:
-
expression: '{max(#3)}=0'
@@ -67,9 +49,10 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '%'
- applications:
+ tags:
-
- name: Status
+ tag: Application
+ value: Status
triggers:
-
expression: '{min(5m)}>{$ICMP_LOSS_WARN} and {min(5m)}<100'
@@ -87,9 +70,10 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: s
- applications:
+ tags:
-
- name: Status
+ tag: Application
+ value: Status
triggers:
-
expression: '{avg(5m)}>{$ICMP_RESPONSE_TIME_WARN}'
@@ -111,10 +95,11 @@ zabbix_export:
trends: '0'
value_type: LOG
description: 'Item is used to collect all SNMP traps unmatched by other snmptrap items'
- applications:
- -
- name: General
logtimefmt: 'hh:mm:sszyyyy/MM/dd'
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'System contact details'
type: SNMP_AGENT
@@ -128,14 +113,15 @@ zabbix_export:
MIB: SNMPv2-MIB
The textual identification of the contact person for this managed node, together with information on how to contact this person. If no contact information is known, the value is the zero-length string.
inventory_link: CONTACT
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'System description'
type: SNMP_AGENT
@@ -150,14 +136,15 @@ zabbix_export:
A textual description of the entity. This value should
include the full name and version identification of the system's hardware type, software operating-system, and
networking software.
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Hardware model name'
type: SNMP_AGENT
@@ -169,14 +156,15 @@ zabbix_export:
value_type: CHAR
description: 'MIB: ENTITY-MIB'
inventory_link: MODEL
- applications:
- -
- name: Inventory
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
-
name: 'Hardware serial number'
type: SNMP_AGENT
@@ -188,14 +176,15 @@ zabbix_export:
value_type: CHAR
description: 'MIB: ENTITY-MIB'
inventory_link: SERIALNO_A
- applications:
- -
- name: Inventory
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -216,14 +205,15 @@ zabbix_export:
MIB: SNMPv2-MIB
The physical location of this node (e.g., `telephone closet, 3rd floor'). If the location is unknown, the value is the zero-length string.
inventory_link: LOCATION
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'System name'
type: SNMP_AGENT
@@ -237,14 +227,15 @@ zabbix_export:
MIB: SNMPv2-MIB
An administratively-assigned name for this managed node.By convention, this is the node's fully-qualified domain name. If the name is unknown, the value is the zero-length string.
inventory_link: NAME
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: General
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -263,14 +254,15 @@ zabbix_export:
description: |
MIB: SNMPv2-MIB
The vendor's authoritative identification of the network management subsystem contained in the entity. This value is allocated within the SMI enterprises subtree (1.3.6.1.4.1) and provides an easy and unambiguous means for determining`what kind of box' is being managed. For example, if vendor`Flintstones, Inc.' was assigned the subtree1.3.6.1.4.1.4242, it could assign the identifier 1.3.6.1.4.1.4242.1.1 to its `Fred Router'.
- applications:
- -
- name: General
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Operating system'
type: SNMP_AGENT
@@ -282,9 +274,6 @@ zabbix_export:
value_type: CHAR
description: 'MIB: SNMPv2-MIB'
inventory_link: OS
- applications:
- -
- name: Inventory
preprocessing:
-
type: REGEX
@@ -295,6 +284,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -318,14 +311,15 @@ zabbix_export:
description: |
MIB: SNMPv2-MIB
The time (in hundredths of a second) since the network management portion of the system was last re-initialized.
- applications:
- -
- name: Status
preprocessing:
-
type: MULTIPLIER
parameters:
- '0.01'
+ tags:
+ -
+ tag: Application
+ value: Status
triggers:
-
expression: '{last()}<10m'
@@ -338,11 +332,12 @@ zabbix_export:
type: INTERNAL
key: 'zabbix[host,snmp,available]'
history: 7d
- applications:
- -
- name: Status
valuemap:
name: zabbix.host.available
+ tags:
+ -
+ tag: Application
+ value: Status
triggers:
-
expression: '{max({$SNMP.TIMEOUT})}=0'
@@ -378,9 +373,10 @@ zabbix_export:
Object name: cpmCPUTotal5minRev
The cpmCPUTotal5minRev MIB object provides a more accurate view of the performance of the router over time than the MIB objects cpmCPUTotal1minRev and cpmCPUTotal5secRev . These MIB objects are not accurate because they look at CPU at one minute and five second intervals, respectively. These MIBs enable you to monitor the trends and plan the capacity of your network. The recommended baseline rising threshold for cpmCPUTotal5minRev is 90 percent. Depending on the platform, some routers that run at 90 percent, for example, 2500s, can exhibit performance degradation versus a high-end router, for example, the 7500 series, which can operate fine.
Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15215-collect-cpu-util-snmp.html
- applications:
+ tags:
-
- name: CPU
+ tag: Application
+ value: CPU
trigger_prototypes:
-
expression: '{min(5m)}>{$CPU.UTIL.CRIT}'
@@ -430,14 +426,15 @@ zabbix_export:
description: |
MIB: ENTITY-MIB
Object name: entPhysicalSerialNum
- applications:
- -
- name: Inventory
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
+ tags:
+ -
+ tag: Application
+ value: Inventory
trigger_prototypes:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -462,11 +459,12 @@ zabbix_export:
description: |
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonFanState
- applications:
- -
- name: Fans
valuemap:
name: 'CISCO-ENVMON-MIB::CiscoEnvMonState'
+ tags:
+ -
+ tag: Application
+ value: Fans
trigger_prototypes:
-
expression: '{last()}=3 or {last()}=4'
@@ -506,9 +504,10 @@ zabbix_export:
Object name: ciscoMemoryPoolFree
Indicates the number of bytes from the memory pool that are currently unused on the managed device. Note that the sum of ciscoMemoryPoolUsed and ciscoMemoryPoolFree is the total amount of memory in the pool
Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html
- applications:
+ tags:
-
- name: Memory
+ tag: Application
+ value: Memory
-
name: '{#SNMPVALUE}: Used memory'
type: SNMP_AGENT
@@ -521,9 +520,10 @@ zabbix_export:
Object name: ciscoMemoryPoolUsed
Indicates the number of bytes from the memory pool that are currently in use by applications on the managed device.
Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html
- applications:
+ tags:
-
- name: Memory
+ tag: Application
+ value: Memory
-
name: '{#SNMPVALUE}: Memory utilization'
type: CALCULATED
@@ -533,9 +533,10 @@ zabbix_export:
units: '%'
params: 'last("vm.memory.used[{#SNMPINDEX}]")/(last("vm.memory.free[{#SNMPINDEX}]")+last("vm.memory.used[{#SNMPINDEX}]"))*100'
description: 'Memory utilization in %'
- applications:
+ tags:
-
- name: Memory
+ tag: Application
+ value: Memory
trigger_prototypes:
-
expression: '{min(5m)}>{$MEMORY.UTIL.MAX}'
@@ -632,14 +633,15 @@ zabbix_export:
One possible reason for discarding such a packet could be to free up buffer space.
Discontinuities in the value of this counter can occur at re-initialization of the management system,
and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Inbound packets with errors'
type: SNMP_AGENT
@@ -649,14 +651,15 @@ zabbix_export:
description: |
MIB: IF-MIB
For packet-oriented interfaces, the number of inbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of inbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}'
@@ -681,9 +684,6 @@ zabbix_export:
description: |
MIB: IF-MIB
The total number of octets received on the interface, including framing characters. This object is a 64-bit version of ifInOctets. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
@@ -693,6 +693,10 @@ zabbix_export:
type: MULTIPLIER
parameters:
- '8'
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Outbound packets discarded'
type: SNMP_AGENT
@@ -706,14 +710,15 @@ zabbix_export:
One possible reason for discarding such a packet could be to free up buffer space.
Discontinuities in the value of this counter can occur at re-initialization of the management system,
and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Outbound packets with errors'
type: SNMP_AGENT
@@ -723,14 +728,15 @@ zabbix_export:
description: |
MIB: IF-MIB
For packet-oriented interfaces, the number of outbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. For character-oriented or fixed-length interfaces, the number of outbound transmission units that contained errors preventing them from being deliverable to a higher-layer protocol. Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
parameters:
- ''
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}'
@@ -755,9 +761,6 @@ zabbix_export:
description: |
MIB: IF-MIB
The total number of octets transmitted out of the interface, including framing characters. This object is a 64-bit version of ifOutOctets.Discontinuities in the value of this counter can occur at re-initialization of the management system, and at other times as indicated by the value of ifCounterDiscontinuityTime.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: CHANGE_PER_SECOND
@@ -767,6 +770,10 @@ zabbix_export:
type: MULTIPLIER
parameters:
- '8'
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Speed'
type: SNMP_AGENT
@@ -777,9 +784,6 @@ zabbix_export:
description: |
MIB: IF-MIB
An estimate of the interface's current bandwidth in units of 1,000,000 bits per second. If this object reports a value of `n' then the speed of the interface is somewhere in the range of `n-500,000' to`n+499,999'. For interfaces which do not vary in bandwidth or for those where no accurate estimation can be made, this object should contain the nominal bandwidth. For a sub-layer which has no concept of bandwidth, this object should be zero.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
preprocessing:
-
type: MULTIPLIER
@@ -789,6 +793,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1h
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
-
name: 'Interface {#IFNAME}({#IFALIAS}): Operational status'
type: SNMP_AGENT
@@ -804,9 +812,6 @@ zabbix_export:
- It should change todormant(5) if the interface is waiting for external actions (such as a serial line waiting for an incoming connection)
- It should remain in the down(2) state if and only if there is a fault that prevents it from going to the up(1) state
- It should remain in the notPresent(6) state if the interface has missing(typically, hardware) components.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
valuemap:
name: 'IF-MIB::ifOperStatus'
preprocessing:
@@ -814,6 +819,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{$IFCONTROL:"{#IFNAME}"}=1 and ({last()}=2)'
@@ -836,9 +845,6 @@ zabbix_export:
The type of interface.
Additional values for ifType are assigned by the Internet Assigned NumbersAuthority (IANA),
through updating the syntax of the IANAifType textual convention.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
valuemap:
name: 'IF-MIB::ifType'
preprocessing:
@@ -846,6 +852,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: |
@@ -1000,11 +1010,12 @@ zabbix_export:
ifMauType. This was felt to be sufficiently
valuable to justify the redundancy.
Reference: [IEEE 802.3 Std.], 30.3.1.1.32,aDuplexStatus.
- application_prototypes:
- -
- name: 'Interface {#IFNAME}({#IFALIAS})'
valuemap:
name: 'EtherLike-MIB::dot3StatsDuplexStatus'
+ tags:
+ -
+ tag: Application
+ value: 'Interface {#IFNAME}({#IFALIAS})'
trigger_prototypes:
-
expression: '{last()}=2'
@@ -1029,11 +1040,12 @@ zabbix_export:
description: |
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonSupplyState
- applications:
- -
- name: 'Power supply'
valuemap:
name: 'CISCO-ENVMON-MIB::CiscoEnvMonState'
+ tags:
+ -
+ tag: Application
+ value: 'Power supply'
trigger_prototypes:
-
expression: '{last()}=3 or {last(4)}=4'
@@ -1071,11 +1083,12 @@ zabbix_export:
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonTemperatureState
The current state of the test point being instrumented.
- applications:
- -
- name: Temperature
valuemap:
name: 'CISCO-ENVMON-MIB::CiscoEnvMonState'
+ tags:
+ -
+ tag: Application
+ value: Temperature
trigger_prototypes:
-
expression: '{last()}=3 or {last()}=4'
@@ -1105,9 +1118,10 @@ zabbix_export:
MIB: CISCO-ENVMON-MIB
Object name: ciscoEnvMonTemperatureValue
The current measurement of the test point being instrumented.
- applications:
+ tags:
-
- name: Temperature
+ tag: Application
+ value: Temperature
trigger_prototypes:
-
expression: '{avg(5m)}>{$TEMP_CRIT:"{#SNMPVALUE}"}'
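The `{#SNMPVALUE}: Memory utilization` item prototype touched in the hunks above (and documented in the README diff that follows) is a CALCULATED item: it takes the last values of the used and free pool counters and computes used / (free + used) * 100. A minimal sketch of that prototype under the new tag format; only the `params` line is taken from the hunks above, while the item key and the omitted fields are abbreviated for illustration:

```yaml
- name: '{#SNMPVALUE}: Memory utilization'
  type: CALCULATED
  key: 'vm.memory.util[{#SNMPINDEX}]'   # illustrative key; the README below lists the fully qualified form
  units: '%'
  # used / (free + used) * 100 -- the share of the memory pool currently in use
  params: 'last("vm.memory.used[{#SNMPINDEX}]")/(last("vm.memory.free[{#SNMPINDEX}]")+last("vm.memory.used[{#SNMPINDEX}]"))*100'
  tags:
    - tag: Application
      value: Memory
```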
diff --git a/templates/net/cisco_snmp/README.md b/templates/net/cisco_snmp/README.md
index fe6063b5375..09cca4a2e02 100644
--- a/templates/net/cisco_snmp/README.md
+++ b/templates/net/cisco_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,9 +15,9 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
+| Name | Description | Default |
+|--------------------|-------------|---------|
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
## Template links
@@ -25,23 +25,23 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Memory Discovery |<p>Discovery of ciscoMemoryPoolTable, a table of memory pool monitoring entries.</p><p>http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html</p> |SNMP |memory.discovery |
+| Name | Description | Type | Key and additional info |
+|------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------|
+| Memory Discovery | <p>Discovery of ciscoMemoryPoolTable, a table of memory pool monitoring entries.</p><p>http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html</p> | SNMP | memory.discovery |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Memory |{#SNMPVALUE}: Used memory |<p>MIB: CISCO-MEMORY-POOL-MIB</p><p>Indicates the number of bytes from the memory pool that are currently in use by applications on the managed device.</p><p>Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html</p> |SNMP |vm.memory.used[ciscoMemoryPoolUsed.{#SNMPINDEX}] |
-|Memory |{#SNMPVALUE}: Free memory |<p>MIB: CISCO-MEMORY-POOL-MIB</p><p>Indicates the number of bytes from the memory pool that are currently unused on the managed device. Note that the sum of ciscoMemoryPoolUsed and ciscoMemoryPoolFree is the total amount of memory in the pool</p><p>Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html</p> |SNMP |vm.memory.free[ciscoMemoryPoolFree.{#SNMPINDEX}] |
-|Memory |{#SNMPVALUE}: Memory utilization |<p>Memory utilization in %</p> |CALCULATED |vm.memory.util[vm.memory.util.{#SNMPINDEX}]<p>**Expression**:</p>`last("vm.memory.used[ciscoMemoryPoolUsed.{#SNMPINDEX}]")/(last("vm.memory.free[ciscoMemoryPoolFree.{#SNMPINDEX}]")+last("vm.memory.used[ciscoMemoryPoolUsed.{#SNMPINDEX}]"))*100` |
+| Group | Name | Description | Type | Key and additional info |
+|--------|----------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Memory | {#SNMPVALUE}: Used memory | <p>MIB: CISCO-MEMORY-POOL-MIB</p><p>Indicates the number of bytes from the memory pool that are currently in use by applications on the managed device.</p><p>Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html</p> | SNMP | vm.memory.used[ciscoMemoryPoolUsed.{#SNMPINDEX}] |
+| Memory | {#SNMPVALUE}: Free memory | <p>MIB: CISCO-MEMORY-POOL-MIB</p><p>Indicates the number of bytes from the memory pool that are currently unused on the managed device. Note that the sum of ciscoMemoryPoolUsed and ciscoMemoryPoolFree is the total amount of memory in the pool</p><p>Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15216-contiguous-memory.html</p> | SNMP | vm.memory.free[ciscoMemoryPoolFree.{#SNMPINDEX}] |
+| Memory | {#SNMPVALUE}: Memory utilization | <p>Memory utilization in %</p> | CALCULATED | vm.memory.util[vm.memory.util.{#SNMPINDEX}]<p>**Expression**:</p>`last("vm.memory.used[ciscoMemoryPoolUsed.{#SNMPINDEX}]")/(last("vm.memory.free[ciscoMemoryPoolFree.{#SNMPINDEX}]")+last("vm.memory.used[ciscoMemoryPoolUsed.{#SNMPINDEX}]"))*100` |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#SNMPVALUE}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[vm.memory.util.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|---------------------------------------------------------------------|--------------------------------------------------|------------------------------------------------------------------------------------------|----------|----------------------------------|
+| {#SNMPVALUE}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[vm.memory.util.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
## Feedback
@@ -51,7 +51,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -63,9 +63,9 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
+| Name | Description | Default |
+|------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
## Template links
@@ -73,21 +73,21 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|CPU Discovery |<p>If your IOS device has several CPUs, you must use CISCO-PROCESS-MIB and its object cpmCPUTotal5minRev from the table called cpmCPUTotalTable ,</p><p>indexed with cpmCPUTotalIndex .</p><p>This table allows CISCO-PROCESS-MIB to keep CPU statistics for different physical entities in the router,</p><p>like different CPU chips, group of CPUs, or CPUs in different modules/cards.</p><p>In case of a single CPU, cpmCPUTotalTable has only one entry.</p> |SNMP |cpu.discovery |
+| Name | Description | Type | Key and additional info |
+|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------|
+| CPU Discovery | <p>If your IOS device has several CPUs, you must use CISCO-PROCESS-MIB and its object cpmCPUTotal5minRev from the table called cpmCPUTotalTable ,</p><p>indexed with cpmCPUTotalIndex .</p><p>This table allows CISCO-PROCESS-MIB to keep CPU statistics for different physical entities in the router,</p><p>like different CPU chips, group of CPUs, or CPUs in different modules/cards.</p><p>In case of a single CPU, cpmCPUTotalTable has only one entry.</p> | SNMP | cpu.discovery |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |#{#SNMPINDEX}: CPU utilization |<p>MIB: CISCO-PROCESS-MIB</p><p>The cpmCPUTotal5minRev MIB object provides a more accurate view of the performance of the router over time than the MIB objects cpmCPUTotal1minRev and cpmCPUTotal5secRev . These MIB objects are not accurate because they look at CPU at one minute and five second intervals, respectively. These MIBs enable you to monitor the trends and plan the capacity of your network. The recommended baseline rising threshold for cpmCPUTotal5minRev is 90 percent. Depending on the platform, some routers that run at 90 percent, for example, 2500s, can exhibit performance degradation versus a high-end router, for example, the 7500 series, which can operate fine.</p><p>Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15215-collect-cpu-util-snmp.html</p> |SNMP |system.cpu.util[cpmCPUTotal5minRev.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|-------|--------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|--------------------------------------------------|
+| CPU | #{#SNMPINDEX}: CPU utilization | <p>MIB: CISCO-PROCESS-MIB</p><p>The cpmCPUTotal5minRev MIB object provides a more accurate view of the performance of the router over time than the MIB objects cpmCPUTotal1minRev and cpmCPUTotal5secRev . These MIB objects are not accurate because they look at CPU at one minute and five second intervals, respectively. These MIBs enable you to monitor the trends and plan the capacity of your network. The recommended baseline rising threshold for cpmCPUTotal5minRev is 90 percent. Depending on the platform, some routers that run at 90 percent, for example, 2500s, can exhibit performance degradation versus a high-end router, for example, the 7500 series, which can operate fine.</p><p>Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15215-collect-cpu-util-snmp.html</p> | SNMP | system.cpu.util[cpmCPUTotal5minRev.{#SNMPINDEX}] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|#{#SNMPINDEX}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[cpmCPUTotal5minRev.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|---------------------------------------------------------------------|--------------------------------------------------------------------------|---------------------------------------------------------------------------------------------|----------|----------------------------------|
+| #{#SNMPINDEX}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[cpmCPUTotal5minRev.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
## Feedback
@@ -97,7 +97,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -109,9 +109,9 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
+| Name | Description | Default |
+|------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
## Template links
@@ -119,21 +119,21 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|CPU Discovery |<p>If your IOS device has several CPUs, you must use CISCO-PROCESS-MIB and its object cpmCPUTotal5minRev from the table called cpmCPUTotalTable ,</p><p>indexed with cpmCPUTotalIndex .</p><p>This table allows CISCO-PROCESS-MIB to keep CPU statistics for different physical entities in the router,</p><p>like different CPU chips, group of CPUs, or CPUs in different modules/cards.</p><p>In case of a single CPU, cpmCPUTotalTable has only one entry.</p> |SNMP |cpu.discovery |
+| Name | Description | Type | Key and additional info |
+|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------|
+| CPU Discovery | <p>If your IOS device has several CPUs, you must use CISCO-PROCESS-MIB and its object cpmCPUTotal5minRev from the table called cpmCPUTotalTable ,</p><p>indexed with cpmCPUTotalIndex .</p><p>This table allows CISCO-PROCESS-MIB to keep CPU statistics for different physical entities in the router,</p><p>like different CPU chips, group of CPUs, or CPUs in different modules/cards.</p><p>In case of a single CPU, cpmCPUTotalTable has only one entry.</p> | SNMP | cpu.discovery |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |{#SNMPVALUE}: CPU utilization |<p>MIB: CISCO-PROCESS-MIB</p><p>The overall CPU busy percentage in the last 5 minute</p><p>period. This object deprecates the avgBusy5 object from</p><p>the OLD-CISCO-SYSTEM-MIB. This object is deprecated</p><p>by cpmCPUTotal5minRev which has the changed range</p><p>of value (0..100)</p><p>Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15215-collect-cpu-util-snmp.html</p> |SNMP |system.cpu.util[cpmCPUTotal5min.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|-------|-------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-----------------------------------------------|
+| CPU | {#SNMPVALUE}: CPU utilization | <p>MIB: CISCO-PROCESS-MIB</p><p>The overall CPU busy percentage in the last 5 minute</p><p>period. This object deprecates the avgBusy5 object from</p><p>the OLD-CISCO-SYSTEM-MIB. This object is deprecated</p><p>by cpmCPUTotal5minRev which has the changed range</p><p>of value (0..100)</p><p>Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15215-collect-cpu-util-snmp.html</p> | SNMP | system.cpu.util[cpmCPUTotal5min.{#SNMPINDEX}] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#SNMPVALUE}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[cpmCPUTotal5min.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|--------------------------------------------------------------------|--------------------------------------------------------------------------|------------------------------------------------------------------------------------------|----------|----------------------------------|
+| {#SNMPVALUE}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[cpmCPUTotal5min.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
## Feedback
@@ -143,7 +143,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -155,9 +155,9 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
+| Name | Description | Default |
+|------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
## Template links
@@ -168,15 +168,15 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |CPU utilization |<p>MIB: OLD-CISCO-CPU-MIB</p><p>5 minute exponentially-decayed moving average of the CPU busy percentage.</p><p>Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15215-collect-cpu-util-snmp.html</p> |SNMP |system.cpu.util[avgBusy5] |
+| Group | Name | Description | Type | Key and additional info |
+|-------|-----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|---------------------------|
+| CPU | CPU utilization | <p>MIB: OLD-CISCO-CPU-MIB</p><p>5 minute exponentially-decayed moving average of the CPU busy percentage.</p><p>Reference: http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/15215-collect-cpu-util-snmp.html</p> | SNMP | system.cpu.util[avgBusy5] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[avgBusy5].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|------------------------------------------------------|--------------------------------------------------------------------------|----------------------------------------------------------------------|----------|----------------------------------|
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[avgBusy5].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
## Feedback
@@ -186,7 +186,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -203,26 +203,26 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Entity Serial Numbers Discovery |<p>-</p> |SNMP |entity_sn.discovery<p>**Filter**:</p>AND <p>- B: {#ENT_SN} MATCHES_REGEX `.+`</p><p>- A: {#ENT_CLASS} MATCHES_REGEX `[^3]`</p> |
+| Name | Description | Type | Key and additional info |
+|---------------------------------|-------------|------|--------------------------------------------------------------------------------------------------------------------------------|
+| Entity Serial Numbers Discovery | <p>-</p> | SNMP | entity_sn.discovery<p>**Filter**:</p>AND <p>- B: {#ENT_SN} MATCHES_REGEX `.+`</p><p>- A: {#ENT_CLASS} MATCHES_REGEX `[^3]`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Inventory |Hardware model name |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware serial number |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Operating system |<p>MIB: SNMPv2-MIB</p> |SNMP |system.sw.os[sysDescr.0]<p>**Preprocessing**:</p><p>- REGEX: `Version (.+), RELEASE \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |{#ENT_NAME}: Hardware serial number |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|-----------|-------------------------------------|------------------------|------|---------------------------------------------------------------------------------------------------------------------------------------|
+| Inventory | Hardware model name | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware serial number | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Operating system | <p>MIB: SNMPv2-MIB</p> | SNMP | system.sw.os[sysDescr.0]<p>**Preprocessing**:</p><p>- REGEX: `Version (.+), RELEASE \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | {#ENT_NAME}: Hardware serial number | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Operating system description has changed |<p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> |`{TEMPLATE_NAME:system.sw.os[sysDescr.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[sysDescr.0].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#ENT_NAME}: Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].strlen()}>0` |INFO |<p>Manual close: YES</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|--------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------|
+| Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Operating system description has changed | <p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> | `{TEMPLATE_NAME:system.sw.os[sysDescr.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[sysDescr.0].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| {#ENT_NAME}: Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].strlen()}>0` | INFO | <p>Manual close: YES</p> |
## Feedback
@@ -232,7 +232,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -244,24 +244,24 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$FAN_CRIT_STATUS:"critical"} |<p>-</p> |`3` |
-|{$FAN_CRIT_STATUS:"shutdown"} |<p>-</p> |`4` |
-|{$FAN_WARN_STATUS:"notFunctioning"} |<p>-</p> |`6` |
-|{$FAN_WARN_STATUS:"warning"} |<p>-</p> |`2` |
-|{$PSU_CRIT_STATUS:"critical"} |<p>-</p> |`3` |
-|{$PSU_CRIT_STATUS:"shutdown"} |<p>-</p> |`4` |
-|{$PSU_WARN_STATUS:"notFunctioning"} |<p>-</p> |`6` |
-|{$PSU_WARN_STATUS:"warning"} |<p>-</p> |`2` |
-|{$TEMP_CRIT:"CPU"} |<p>-</p> |`75` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT_STATUS} |<p>-</p> |`3` |
-|{$TEMP_CRIT} |<p>-</p> |`60` |
-|{$TEMP_DISASTER_STATUS} |<p>-</p> |`4` |
-|{$TEMP_WARN:"CPU"} |<p>-</p> |`70` |
-|{$TEMP_WARN_STATUS} |<p>-</p> |`2` |
-|{$TEMP_WARN} |<p>-</p> |`50` |
+| Name | Description | Default |
+|-------------------------------------|-------------|---------|
+| {$FAN_CRIT_STATUS:"critical"} | <p>-</p> | `3` |
+| {$FAN_CRIT_STATUS:"shutdown"} | <p>-</p> | `4` |
+| {$FAN_WARN_STATUS:"notFunctioning"} | <p>-</p> | `6` |
+| {$FAN_WARN_STATUS:"warning"} | <p>-</p> | `2` |
+| {$PSU_CRIT_STATUS:"critical"} | <p>-</p> | `3` |
+| {$PSU_CRIT_STATUS:"shutdown"} | <p>-</p> | `4` |
+| {$PSU_WARN_STATUS:"notFunctioning"} | <p>-</p> | `6` |
+| {$PSU_WARN_STATUS:"warning"} | <p>-</p> | `2` |
+| {$TEMP_CRIT:"CPU"} | <p>-</p> | `75` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT_STATUS} | <p>-</p> | `3` |
+| {$TEMP_CRIT} | <p>-</p> | `60` |
+| {$TEMP_DISASTER_STATUS} | <p>-</p> | `4` |
+| {$TEMP_WARN:"CPU"} | <p>-</p> | `70` |
+| {$TEMP_WARN_STATUS} | <p>-</p> | `2` |
+| {$TEMP_WARN} | <p>-</p> | `50` |
## Template links
@@ -269,32 +269,32 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Temperature Discovery |<p>Discovery of ciscoEnvMonTemperatureTable (ciscoEnvMonTemperatureDescr), a table of ambient temperature status</p><p>maintained by the environmental monitor.</p> |SNMP |temperature.discovery |
-|PSU Discovery |<p>The table of power supply status maintained by the environmental monitor card.</p> |SNMP |psu.discovery |
-|FAN Discovery |<p>The table of fan status maintained by the environmental monitor.</p> |SNMP |fan.discovery |
+| Name | Description | Type | Key and additional info |
+|-----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------|
+| Temperature Discovery | <p>Discovery of ciscoEnvMonTemperatureTable (ciscoEnvMonTemperatureDescr), a table of ambient temperature status</p><p>maintained by the environmental monitor.</p> | SNMP | temperature.discovery |
+| PSU Discovery | <p>The table of power supply status maintained by the environmental monitor card.</p> | SNMP | psu.discovery |
+| FAN Discovery | <p>The table of fan status maintained by the environmental monitor.</p> | SNMP | fan.discovery |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Fans |{#SENSOR_INFO}: Fan status |<p>MIB: CISCO-ENVMON-MIB</p> |SNMP |sensor.fan.status[ciscoEnvMonFanState.{#SNMPINDEX}] |
-|Power_supply |{#SENSOR_INFO}: Power supply status |<p>MIB: CISCO-ENVMON-MIB</p> |SNMP |sensor.psu.status[ciscoEnvMonSupplyState.{#SNMPINDEX}] |
-|Temperature |{#SNMPVALUE}: Temperature |<p>MIB: CISCO-ENVMON-MIB</p><p>The current measurement of the test point being instrumented.</p> |SNMP |sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}] |
-|Temperature |{#SNMPVALUE}: Temperature status |<p>MIB: CISCO-ENVMON-MIB</p><p>The current state of the test point being instrumented.</p> |SNMP |sensor.temp.status[ciscoEnvMonTemperatureState.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|--------------|-------------------------------------|--------------------------------------------------------------------------------------------------|------|--------------------------------------------------------------|
+| Fans | {#SENSOR_INFO}: Fan status | <p>MIB: CISCO-ENVMON-MIB</p> | SNMP | sensor.fan.status[ciscoEnvMonFanState.{#SNMPINDEX}] |
+| Power_supply | {#SENSOR_INFO}: Power supply status | <p>MIB: CISCO-ENVMON-MIB</p> | SNMP | sensor.psu.status[ciscoEnvMonSupplyState.{#SNMPINDEX}] |
+| Temperature | {#SNMPVALUE}: Temperature | <p>MIB: CISCO-ENVMON-MIB</p><p>The current measurement of the test point being instrumented.</p> | SNMP | sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}] |
+| Temperature | {#SNMPVALUE}: Temperature status | <p>MIB: CISCO-ENVMON-MIB</p><p>The current state of the test point being instrumented.</p> | SNMP | sensor.temp.status[ciscoEnvMonTemperatureState.{#SNMPINDEX}] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#SENSOR_INFO}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[ciscoEnvMonFanState.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"critical"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[ciscoEnvMonFanState.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"shutdown"},eq)}=1` |AVERAGE | |
-|{#SENSOR_INFO}: Fan is in warning state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[ciscoEnvMonFanState.{#SNMPINDEX}].count(#1,{$FAN_WARN_STATUS:"warning"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[ciscoEnvMonFanState.{#SNMPINDEX}].count(#1,{$FAN_WARN_STATUS:"notFunctioning"},eq)}=1` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Fan is in critical state</p> |
-|{#SENSOR_INFO}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[ciscoEnvMonSupplyState.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"critical"},eq)}=1 or {TEMPLATE_NAME:sensor.psu.status[ciscoEnvMonSupplyState.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"shutdown"},eq)}=1` |AVERAGE | |
-|{#SENSOR_INFO}: Power supply is in warning state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[ciscoEnvMonSupplyState.{#SNMPINDEX}].count(#1,{$PSU_WARN_STATUS:"warning"},eq)}=1 or {TEMPLATE_NAME:sensor.psu.status[ciscoEnvMonSupplyState.{#SNMPINDEX}].count(#1,{$PSU_WARN_STATUS:"notFunctioning"},eq)}=1` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Power supply is in critical state</p> |
-|{#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:"{#SNMPVALUE}"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"{#SNMPVALUE}"} or {Cisco CISCO-ENVMON-MIB SNMP:sensor.temp.status[ciscoEnvMonTemperatureState.{#SNMPINDEX}].last()}={$TEMP_WARN_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"{#SNMPVALUE}"}-3` |WARNING |<p>**Depends on**:</p><p>- {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:"{#SNMPVALUE}"}</p> |
-|{#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:"{#SNMPVALUE}"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"{#SNMPVALUE}"} or {Cisco CISCO-ENVMON-MIB SNMP:sensor.temp.status[ciscoEnvMonTemperatureState.{#SNMPINDEX}].last()}={$TEMP_CRIT_STATUS} or {Cisco CISCO-ENVMON-MIB SNMP:sensor.temp.status[ciscoEnvMonTemperatureState.{#SNMPINDEX}].last()}={$TEMP_DISASTER_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"{#SNMPVALUE}"}-3` |HIGH | |
-|{#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:"{#SNMPVALUE}"} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"{#SNMPVALUE}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"{#SNMPVALUE}"}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------|
+| {#SENSOR_INFO}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[ciscoEnvMonFanState.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"critical"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[ciscoEnvMonFanState.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"shutdown"},eq)}=1` | AVERAGE | |
+| {#SENSOR_INFO}: Fan is in warning state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[ciscoEnvMonFanState.{#SNMPINDEX}].count(#1,{$FAN_WARN_STATUS:"warning"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[ciscoEnvMonFanState.{#SNMPINDEX}].count(#1,{$FAN_WARN_STATUS:"notFunctioning"},eq)}=1` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Fan is in critical state</p> |
+| {#SENSOR_INFO}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[ciscoEnvMonSupplyState.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"critical"},eq)}=1 or {TEMPLATE_NAME:sensor.psu.status[ciscoEnvMonSupplyState.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"shutdown"},eq)}=1` | AVERAGE | |
+| {#SENSOR_INFO}: Power supply is in warning state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[ciscoEnvMonSupplyState.{#SNMPINDEX}].count(#1,{$PSU_WARN_STATUS:"warning"},eq)}=1 or {TEMPLATE_NAME:sensor.psu.status[ciscoEnvMonSupplyState.{#SNMPINDEX}].count(#1,{$PSU_WARN_STATUS:"notFunctioning"},eq)}=1` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Power supply is in critical state</p> |
+| {#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:"{#SNMPVALUE}"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"{#SNMPVALUE}"} or {Cisco CISCO-ENVMON-MIB SNMP:sensor.temp.status[ciscoEnvMonTemperatureState.{#SNMPINDEX}].last()}={$TEMP_WARN_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"{#SNMPVALUE}"}-3` | WARNING | <p>**Depends on**:</p><p>- {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:"{#SNMPVALUE}"}</p> |
+| {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:"{#SNMPVALUE}"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"{#SNMPVALUE}"} or {Cisco CISCO-ENVMON-MIB SNMP:sensor.temp.status[ciscoEnvMonTemperatureState.{#SNMPINDEX}].last()}={$TEMP_CRIT_STATUS} or {Cisco CISCO-ENVMON-MIB SNMP:sensor.temp.status[ciscoEnvMonTemperatureState.{#SNMPINDEX}].last()}={$TEMP_DISASTER_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"{#SNMPVALUE}"}-3` | HIGH | |
+| {#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:"{#SNMPVALUE}"} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"{#SNMPVALUE}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"{#SNMPVALUE}"}+3` | AVERAGE | |
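+
+The temperature triggers above pair the problem expression with a separate recovery expression, so the problem only closes once the reading falls 3 degrees below the threshold, giving the trigger simple hysteresis. As an illustration (not a quote from the shipped template, and simplified to the sensor-value part of the expression; field names follow the usual export layout), such a trigger prototype could look like this in YAML:
+
+```yaml
+trigger_prototypes:
+  - name: '{#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:"{#SNMPVALUE}"}'
+    expression: '{Cisco CISCO-ENVMON-MIB SNMP:sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"{#SNMPVALUE}"}'
+    # the problem closes only after the 5-minute maximum drops 3 degrees below the threshold
+    recovery_mode: RECOVERY_EXPRESSION
+    recovery_expression: '{Cisco CISCO-ENVMON-MIB SNMP:sensor.temp.value[ciscoEnvMonTemperatureValue.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"{#SNMPVALUE}"}-3'
+    priority: WARNING
+```
+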
## Feedback
@@ -304,7 +304,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -317,15 +317,15 @@ No specific Zabbix configuration is required.
## Template links
-|Name|
-|----|
-|Cisco CISCO-ENVMON-MIB SNMP |
-|Cisco CISCO-MEMORY-POOL-MIB SNMP |
-|Cisco CISCO-PROCESS-MIB SNMP |
-|Cisco Inventory SNMP |
-|EtherLike-MIB SNMP |
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|----------------------------------|
+| Cisco CISCO-ENVMON-MIB SNMP |
+| Cisco CISCO-MEMORY-POOL-MIB SNMP |
+| Cisco CISCO-PROCESS-MIB SNMP |
+| Cisco Inventory SNMP |
+| EtherLike-MIB SNMP |
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
@@ -354,7 +354,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -367,14 +367,14 @@ No specific Zabbix configuration is required.
## Template links
-|Name|
-|----|
-|Cisco CISCO-ENVMON-MIB SNMP |
-|Cisco CISCO-MEMORY-POOL-MIB SNMP |
-|Cisco CISCO-PROCESS-MIB IOS versions 12.0_3_T-12.2_3.5 SNMP |
-|Cisco Inventory SNMP |
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|-------------------------------------------------------------|
+| Cisco CISCO-ENVMON-MIB SNMP |
+| Cisco CISCO-MEMORY-POOL-MIB SNMP |
+| Cisco CISCO-PROCESS-MIB IOS versions 12.0_3_T-12.2_3.5 SNMP |
+| Cisco Inventory SNMP |
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
@@ -397,7 +397,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -410,13 +410,13 @@ No specific Zabbix configuration is required.
## Template links
-|Name|
-|----|
-|Cisco CISCO-ENVMON-MIB SNMP |
-|Cisco CISCO-MEMORY-POOL-MIB SNMP |
-|Cisco Inventory SNMP |
-|Cisco OLD-CISCO-CPU-MIB SNMP |
-|Generic SNMP |
+| Name |
+|----------------------------------|
+| Cisco CISCO-ENVMON-MIB SNMP |
+| Cisco CISCO-MEMORY-POOL-MIB SNMP |
+| Cisco Inventory SNMP |
+| Cisco OLD-CISCO-CPU-MIB SNMP |
+| Generic SNMP |
## Discovery rules
diff --git a/templates/net/dell_force_s_series_snmp/README.md b/templates/net/dell_force_s_series_snmp/README.md
index 11ad4d5a3f9..226ff08a60e 100644
--- a/templates/net/dell_force_s_series_snmp/README.md
+++ b/templates/net/dell_force_s_series_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,64 +15,64 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$FAN_CRIT_STATUS} |<p>-</p> |`2` |
-|{$FAN_OK_STATUS} |<p>-</p> |`1` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
-|{$PSU_CRIT_STATUS} |<p>-</p> |`2` |
-|{$PSU_OK_STATUS} |<p>-</p> |`1` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`65` |
-|{$TEMP_WARN} |<p>-</p> |`55` |
+| Name | Description | Default |
+|--------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$FAN_CRIT_STATUS} | <p>-</p> | `2` |
+| {$FAN_OK_STATUS} | <p>-</p> | `1` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
+| {$PSU_CRIT_STATUS} | <p>-</p> | `2` |
+| {$PSU_OK_STATUS} | <p>-</p> | `1` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `65` |
+| {$TEMP_WARN} | <p>-</p> | `55` |
## Template links
-|Name|
-|----|
-|EtherLike-MIB SNMP |
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|--------------------|
+| EtherLike-MIB SNMP |
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|CPU and Memory and Flash Discovery |<p>-</p> |SNMP |module.discovery |
-|PSU Discovery |<p>A list of power supply residents in the S-series chassis.</p> |SNMP |psu.discovery |
-|FAN Discovery |<p>-</p> |SNMP |fan.discovery |
-|Stack Unit Discovery |<p>-</p> |SNMP |stack.discovery |
+| Name | Description | Type | Key and additional info |
+|------------------------------------|------------------------------------------------------------------|------|-------------------------|
+| CPU and Memory and Flash Discovery | <p>-</p> | SNMP | module.discovery |
+| PSU Discovery                       | <p>A list of the power supplies resident in the S-series chassis.</p> | SNMP | psu.discovery            |
+| FAN Discovery | <p>-</p> | SNMP | fan.discovery |
+| Stack Unit Discovery | <p>-</p> | SNMP | stack.discovery |
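+
+All four rules above are SNMP low-level discovery: the rule walks a table column and turns every row into a set of LLD macros ({#SNMPINDEX}, {#SNMPVALUE}, ...) that the item and trigger prototypes in the next sections reuse. A rough sketch of the shape such a rule takes in the template YAML; the OID below is a deliberately fake placeholder, not the one used by this template:
+
+```yaml
+discovery_rules:
+  - name: 'PSU Discovery'
+    type: SNMP_AGENT
+    key: psu.discovery
+    # each discovered row yields {#SNMPINDEX} plus the walked value as {#SNMPVALUE}
+    snmp_oid: 'discovery[{#SNMPVALUE},1.3.6.1.4.1.99999.1.1]'
+    delay: 1h
+```
+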
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |#{#SNMPINDEX}: CPU utilization |<p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>CPU utilization in percentage for last 1 minute.</p> |SNMP |system.cpu.util[chStackUnitCpuUtil1Min.{#SNMPINDEX}] |
-|Fans |Fan {#SNMPVALUE}: Fan status |<p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>The status of the fan tray {#SNMPVALUE}.</p> |SNMP |sensor.fan.status[chSysFanTrayOperStatus.{#SNMPINDEX}] |
-|Inventory |#{#SNMPVALUE}: Hardware model name |<p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>The plugged-in model ID for this unit.</p> |SNMP |system.hw.model[chStackUnitModelID.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |#{#SNMPVALUE}: Hardware serial number |<p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>The unit's serial number.</p> |SNMP |system.hw.serialnumber[chStackUnitSerialNumber.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |#{#SNMPVALUE}: Hardware version(revision) |<p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>The unit manufacturer's product revision</p> |SNMP |system.hw.version[chStackUnitProductRev.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |#{#SNMPVALUE}: Operating system |<p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>Current code version of this unit.</p> |SNMP |system.sw.os[chStackUnitCodeVersion.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |#{#SNMPINDEX}: Memory utilization |<p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>Total memory usage in percentage.</p> |SNMP |vm.memory.util[chStackUnitMemUsageUtil.{#SNMPINDEX}] |
-|Power_supply |PSU {#SNMPVALUE}: Power supply status |<p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>The status of the power supply {#SNMPVALUE}</p> |SNMP |sensor.psu.status[chSysPowerSupplyOperStatus.{#SNMPINDEX}] |
-|Temperature |Device {#SNMPVALUE}: Temperature |<p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>The temperature of the unit.</p> |SNMP |sensor.temp.value[chStackUnitTemp.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|--------------|-------------------------------------------|---------------------------------------------------------------------------------------------|------|---------------------------------------------------------------------------------------------------------------------------------|
+| CPU | #{#SNMPINDEX}: CPU utilization | <p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>CPU utilization in percentage for last 1 minute.</p> | SNMP | system.cpu.util[chStackUnitCpuUtil1Min.{#SNMPINDEX}] |
+| Fans | Fan {#SNMPVALUE}: Fan status | <p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>The status of the fan tray {#SNMPVALUE}.</p> | SNMP | sensor.fan.status[chSysFanTrayOperStatus.{#SNMPINDEX}] |
+| Inventory | #{#SNMPVALUE}: Hardware model name | <p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>The plugged-in model ID for this unit.</p> | SNMP | system.hw.model[chStackUnitModelID.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | #{#SNMPVALUE}: Hardware serial number | <p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>The unit's serial number.</p> | SNMP | system.hw.serialnumber[chStackUnitSerialNumber.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | #{#SNMPVALUE}: Hardware version(revision) | <p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>The unit manufacturer's product revision</p> | SNMP | system.hw.version[chStackUnitProductRev.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | #{#SNMPVALUE}: Operating system | <p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>Current code version of this unit.</p> | SNMP | system.sw.os[chStackUnitCodeVersion.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory | #{#SNMPINDEX}: Memory utilization | <p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>Total memory usage in percentage.</p> | SNMP | vm.memory.util[chStackUnitMemUsageUtil.{#SNMPINDEX}] |
+| Power_supply | PSU {#SNMPVALUE}: Power supply status | <p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>The status of the power supply {#SNMPVALUE}</p> | SNMP | sensor.psu.status[chSysPowerSupplyOperStatus.{#SNMPINDEX}] |
+| Temperature | Device {#SNMPVALUE}: Temperature | <p>MIB: F10-S-SERIES-CHASSIS-MIB</p><p>The temperature of the unit.</p> | SNMP | sensor.temp.value[chStackUnitTemp.{#SNMPINDEX}] |
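+
+The inventory items above share the `DISCARD_UNCHANGED_HEARTBEAT: 1d` preprocessing step: a new value is stored only when it differs from the previous one or when a day has passed, which keeps history small for data that almost never changes (model, serial number, OS version). A minimal sketch of that step, assuming the usual export layout:
+
+```yaml
+item_prototypes:
+  - name: '#{#SNMPVALUE}: Hardware serial number'
+    type: SNMP_AGENT
+    key: 'system.hw.serialnumber[chStackUnitSerialNumber.{#SNMPINDEX}]'
+    preprocessing:
+      # drop repeated identical values, but store at least one sample per day
+      - type: DISCARD_UNCHANGED_HEARTBEAT
+        parameters:
+          - 1d
+```
+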
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|#{#SNMPINDEX}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[chStackUnitCpuUtil1Min.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|Fan {#SNMPVALUE}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[chSysFanTrayOperStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|Fan {#SNMPVALUE}: Fan is not in normal state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[chSysFanTrayOperStatus.{#SNMPINDEX}].count(#1,{$FAN_OK_STATUS},ne)}=1` |INFO |<p>**Depends on**:</p><p>- Fan {#SNMPVALUE}: Fan is in critical state</p> |
-|#{#SNMPVALUE}: Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber[chStackUnitSerialNumber.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[chStackUnitSerialNumber.{#SNMPINDEX}].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|#{#SNMPVALUE}: Operating system description has changed |<p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> |`{TEMPLATE_NAME:system.sw.os[chStackUnitCodeVersion.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.sw.os[chStackUnitCodeVersion.{#SNMPINDEX}].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|#{#SNMPINDEX}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[chStackUnitMemUsageUtil.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
-|PSU {#SNMPVALUE}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[chSysPowerSupplyOperStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|PSU {#SNMPVALUE}: Power supply is not in normal state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[chSysPowerSupplyOperStatus.{#SNMPINDEX}].count(#1,{$PSU_OK_STATUS},ne)}=1` |INFO |<p>**Depends on**:</p><p>- PSU {#SNMPVALUE}: Power supply is in critical state</p> |
-|Device {#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[chStackUnitTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[chStackUnitTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- Device {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|Device {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[chStackUnitTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[chStackUnitTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|Device {#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[chStackUnitTemp.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[chStackUnitTemp.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|--------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------------------------------------------------------------------------------------------------|
+| #{#SNMPINDEX}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[chStackUnitCpuUtil1Min.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| Fan {#SNMPVALUE}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[chSysFanTrayOperStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| Fan {#SNMPVALUE}: Fan is not in normal state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[chSysFanTrayOperStatus.{#SNMPINDEX}].count(#1,{$FAN_OK_STATUS},ne)}=1` | INFO | <p>**Depends on**:</p><p>- Fan {#SNMPVALUE}: Fan is in critical state</p> |
+| #{#SNMPVALUE}: Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber[chStackUnitSerialNumber.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[chStackUnitSerialNumber.{#SNMPINDEX}].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| #{#SNMPVALUE}: Operating system description has changed                 | <p>Operating system description has changed. A possible reason is that the system has been updated or replaced. Ack to close.</p> | `{TEMPLATE_NAME:system.sw.os[chStackUnitCodeVersion.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.sw.os[chStackUnitCodeVersion.{#SNMPINDEX}].strlen()}>0` | INFO     | <p>Manual close: YES</p> |
+| #{#SNMPINDEX}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[chStackUnitMemUsageUtil.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
+| PSU {#SNMPVALUE}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[chSysPowerSupplyOperStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| PSU {#SNMPVALUE}: Power supply is not in normal state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[chSysPowerSupplyOperStatus.{#SNMPINDEX}].count(#1,{$PSU_OK_STATUS},ne)}=1` | INFO | <p>**Depends on**:</p><p>- PSU {#SNMPVALUE}: Power supply is in critical state</p> |
+| Device {#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[chStackUnitTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[chStackUnitTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- Device {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| Device {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[chStackUnitTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[chStackUnitTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| Device {#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[chStackUnitTemp.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[chStackUnitTemp.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
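+
+The two change-detection triggers above ("Device has been replaced", "Operating system description has changed") follow a common pattern: `diff()` fires on any change, `strlen()>0` guards against reacting to an empty value, and the problem is closed manually after acknowledgement. A hedged sketch of such a trigger prototype (`{TEMPLATE_NAME}` stands for the actual template name, exactly as in the tables above; key names follow the usual export layout):
+
+```yaml
+trigger_prototypes:
+  - name: '#{#SNMPVALUE}: Device has been replaced (new serial number received)'
+    expression: '{TEMPLATE_NAME:system.hw.serialnumber[chStackUnitSerialNumber.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[chStackUnitSerialNumber.{#SNMPINDEX}].strlen()}>0'
+    priority: INFO
+    manual_close: 'YES'
+```
+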
## Feedback
diff --git a/templates/net/dlink_des7200_snmp/README.md b/templates/net/dlink_des7200_snmp/README.md
index 6abae7af0d2..8e35c9a0875 100644
--- a/templates/net/dlink_des7200_snmp/README.md
+++ b/templates/net/dlink_des7200_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,59 +15,59 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$FAN_CRIT_STATUS} |<p>-</p> |`5` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
-|{$PSU_CRIT_STATUS} |<p>-</p> |`5` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`75` |
-|{$TEMP_WARN} |<p>-</p> |`65` |
+| Name | Description | Default |
+|--------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$FAN_CRIT_STATUS} | <p>-</p> | `5` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
+| {$PSU_CRIT_STATUS} | <p>-</p> | `5` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `75` |
+| {$TEMP_WARN} | <p>-</p> | `65` |
## Template links
-|Name|
-|----|
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|-----------------|
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Memory Discovery |<p>-</p> |SNMP |memory.discovery |
-|Temperature Discovery |<p>-</p> |SNMP |temperature.discovery |
-|PSU Discovery |<p>-</p> |SNMP |psu.discovery |
-|FAN Discovery |<p>-</p> |SNMP |fan.discovery |
+| Name | Description | Type | Key and additional info |
+|-----------------------|-------------|------|-------------------------|
+| Memory Discovery | <p>-</p> | SNMP | memory.discovery |
+| Temperature Discovery | <p>-</p> | SNMP | temperature.discovery |
+| PSU Discovery | <p>-</p> | SNMP | psu.discovery |
+| FAN Discovery | <p>-</p> | SNMP | fan.discovery |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |CPU utilization |<p>MIB: MY-PROCESS-MIB</p><p>CPU utilization in %</p> |SNMP |system.cpu.util[myCPUUtilization5Min.0] |
-|Fans |{#SNMPVALUE}: Fan status |<p>MIB: MY-SYSTEM-MIB</p> |SNMP |sensor.fan.status[mySystemFanIsNormal.{#SNMPINDEX}] |
-|Inventory |Hardware model name |<p>MIB: SNMPv2-MIB</p><p>A textual description of the entity. This value should</p><p>include the full name and version identification of the system's hardware type, software operating-system, and</p><p>networking software.</p> |SNMP |system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Firmware version |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware version(revision) |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Operating system |<p>MIB: MY-SYSTEM-MIB</p> |SNMP |system.sw.os[mySystemSwVersion.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |{#SNMPINDEX}: Memory utilization |<p>MIB: MY-MEMORY-MIB</p><p>This is the memory pool utilization currently.</p> |SNMP |vm.memory.util[myMemoryPoolCurrentUtilization.{#SNMPINDEX}] |
-|Power_supply |{#SNMPVALUE}: Power supply status |<p>MIB: MY-SYSTEM-MIB</p> |SNMP |sensor.psu.status[mySystemElectricalSourceIsNormal.{#SNMPINDEX}] |
-|Temperature |{#SNMPVALUE}: Temperature |<p>MIB: MY-SYSTEM-MIB</p><p>Return the current temperature of the FastSwitch.The temperature display is not supported for the current temperature returns to 0.</p> |SNMP |sensor.temp.value[mySystemTemperatureCurrent.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|--------------|-----------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|------------------------------------------------------------------------------------------------------|
+| CPU | CPU utilization | <p>MIB: MY-PROCESS-MIB</p><p>CPU utilization in %</p> | SNMP | system.cpu.util[myCPUUtilization5Min.0] |
+| Fans | {#SNMPVALUE}: Fan status | <p>MIB: MY-SYSTEM-MIB</p> | SNMP | sensor.fan.status[mySystemFanIsNormal.{#SNMPINDEX}] |
+| Inventory    | Hardware model name               | <p>MIB: SNMPv2-MIB</p><p>A textual description of the entity. This value should include the full name and version identification of the system's hardware type, software operating-system, and networking software.</p> | SNMP | system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Firmware version | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware version(revision) | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Operating system | <p>MIB: MY-SYSTEM-MIB</p> | SNMP | system.sw.os[mySystemSwVersion.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory | {#SNMPINDEX}: Memory utilization | <p>MIB: MY-MEMORY-MIB</p><p>This is the memory pool utilization currently.</p> | SNMP | vm.memory.util[myMemoryPoolCurrentUtilization.{#SNMPINDEX}] |
+| Power_supply | {#SNMPVALUE}: Power supply status | <p>MIB: MY-SYSTEM-MIB</p> | SNMP | sensor.psu.status[mySystemElectricalSourceIsNormal.{#SNMPINDEX}] |
+| Temperature  | {#SNMPVALUE}: Temperature         | <p>MIB: MY-SYSTEM-MIB</p><p>Returns the current temperature of the FastSwitch. If the temperature display is not supported, the current temperature returns 0.</p> | SNMP | sensor.temp.value[mySystemTemperatureCurrent.{#SNMPINDEX}] |
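+
+Each discovered sensor above ends up as a separate item because the {#SNMPINDEX} macro from discovery is embedded both in the OID being polled and in the item key, keeping keys unique per sensor. A rough sketch of one such item prototype (the symbolic OID form is illustrative; the shipped template may use a numeric OID):
+
+```yaml
+item_prototypes:
+  - name: '{#SNMPVALUE}: Fan status'
+    type: SNMP_AGENT
+    # {#SNMPINDEX} comes from fan.discovery and selects the row to poll
+    snmp_oid: 'MY-SYSTEM-MIB::mySystemFanIsNormal.{#SNMPINDEX}'
+    key: 'sensor.fan.status[mySystemFanIsNormal.{#SNMPINDEX}]'
+```
+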
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[myCPUUtilization5Min.0].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|{#SNMPVALUE}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[mySystemFanIsNormal.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|Firmware has changed |<p>Firmware version has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Operating system description has changed |<p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> |`{TEMPLATE_NAME:system.sw.os[mySystemSwVersion.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[mySystemSwVersion.0].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#SNMPINDEX}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[myMemoryPoolCurrentUtilization.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
-|{#SNMPVALUE}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[mySystemElectricalSourceIsNormal.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|{#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[mySystemTemperatureCurrent.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mySystemTemperatureCurrent.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|{#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[mySystemTemperatureCurrent.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mySystemTemperatureCurrent.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|{#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[mySystemTemperatureCurrent.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mySystemTemperatureCurrent.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------|
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[myCPUUtilization5Min.0].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| {#SNMPVALUE}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[mySystemFanIsNormal.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| Firmware has changed | <p>Firmware version has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Operating system description has changed                                 | <p>Operating system description has changed. A possible reason is that the system has been updated or replaced. Ack to close.</p> | `{TEMPLATE_NAME:system.sw.os[mySystemSwVersion.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[mySystemSwVersion.0].strlen()}>0` | INFO     | <p>Manual close: YES</p> |
+| {#SNMPINDEX}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[myMemoryPoolCurrentUtilization.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
+| {#SNMPVALUE}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[mySystemElectricalSourceIsNormal.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| {#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[mySystemTemperatureCurrent.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mySystemTemperatureCurrent.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[mySystemTemperatureCurrent.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mySystemTemperatureCurrent.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| {#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[mySystemTemperatureCurrent.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mySystemTemperatureCurrent.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
## Feedback
diff --git a/templates/net/dlink_des_snmp/README.md b/templates/net/dlink_des_snmp/README.md
index c298aca8b7c..bbb7afc91cb 100644
--- a/templates/net/dlink_des_snmp/README.md
+++ b/templates/net/dlink_des_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,60 +15,60 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$FAN_CRIT_STATUS} |<p>-</p> |`2` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
-|{$PSU_CRIT_STATUS} |<p>-</p> |`4` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`75` |
-|{$TEMP_WARN} |<p>-</p> |`65` |
+| Name | Description | Default |
+|--------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$FAN_CRIT_STATUS} | <p>-</p> | `2` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
+| {$PSU_CRIT_STATUS} | <p>-</p> | `4` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `75` |
+| {$TEMP_WARN} | <p>-</p> | `65` |
## Template links
-|Name|
-|----|
-|EtherLike-MIB SNMP |
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|--------------------|
+| EtherLike-MIB SNMP |
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Memory Discovery |<p>-</p> |SNMP |memory.discovery |
-|Temperature Discovery |<p>-</p> |SNMP |temperature.discovery |
-|PSU Discovery |<p>swPowerID of EQUIPMENT-MIB::swPowerTable</p> |SNMP |psu.discovery<p>**Filter**:</p>AND_OR <p>- A: {#STATUS} MATCHES_REGEX `[^0]`</p> |
-|FAN Discovery |<p>swFanID of EQUIPMENT-MIB::swFanTable</p> |SNMP |fan.discovery<p>**Filter**:</p>AND_OR <p>- A: {#STATUS} MATCHES_REGEX `[^0]`</p> |
+| Name | Description | Type | Key and additional info |
+|-----------------------|-------------------------------------------------|------|----------------------------------------------------------------------------------|
+| Memory Discovery | <p>-</p> | SNMP | memory.discovery |
+| Temperature Discovery | <p>-</p> | SNMP | temperature.discovery |
+| PSU Discovery | <p>swPowerID of EQUIPMENT-MIB::swPowerTable</p> | SNMP | psu.discovery<p>**Filter**:</p>AND_OR <p>- A: {#STATUS} MATCHES_REGEX `[^0]`</p> |
+| FAN Discovery | <p>swFanID of EQUIPMENT-MIB::swFanTable</p> | SNMP | fan.discovery<p>**Filter**:</p>AND_OR <p>- A: {#STATUS} MATCHES_REGEX `[^0]`</p> |
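+
+Unlike the other rules, PSU and FAN discovery above filter the rows they return: only entries whose {#STATUS} macro matches the regular expression `[^0]` (that is, a status other than plain `0`, so empty slots are skipped) are turned into items. A hedged sketch of that filter in the template YAML, assuming the usual export layout:
+
+```yaml
+discovery_rules:
+  - name: 'PSU Discovery'
+    type: SNMP_AGENT
+    key: psu.discovery
+    filter:
+      evaltype: AND_OR
+      conditions:
+        # keep only rows whose {#STATUS} value is non-zero
+        - macro: '{#STATUS}'
+          value: '[^0]'
+          formulaid: A
+```
+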
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |CPU utilization |<p>MIB: DLINK-AGENT-MIB</p><p>The unit of time is 1 minute. The value will be between 0% (idle) and 100%(very busy).</p> |SNMP |system.cpu.util[agentCPUutilizationIn1min.0] |
-|Fans |#{#SNMPVALUE}: Fan status |<p>MIB: EQUIPMENT-MIB</p><p>Indicates the current fan status.</p><p>speed-0 : If the fan function is normal and the fan does not spin due to the temperature not reaching the threshold, the status of the fan is speed 0.</p><p>speed-low : Fan spin using the lowest speed.</p><p>speed-middle: Fan spin using the middle speed.</p><p>speed-high : Fan spin using the highest speed.</p> |SNMP |sensor.fan.status[swFanStatus.{#SNMPINDEX}] |
-|Inventory |Hardware model name |<p>MIB: SNMPv2-MIB</p><p>A textual description of the entity. This value should</p><p>include the full name and version identification of the system's hardware type, software operating-system, and</p><p>networking software.</p> |SNMP |system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware serial number |<p>MIB: DLINK-AGENT-MIB</p><p>A text string containing the serial number of this device.</p> |SNMP |system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Firmware version |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware version(revision) |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |#{#SNMPVALUE}: Memory utilization |<p>MIB: DLINK-AGENT-MIB</p><p>The percentage of used DRAM memory of the total DRAM memory available.The value will be between 0%(idle) and 100%(very busy)</p> |SNMP |vm.memory.util[agentDRAMutilization.{#SNMPINDEX}] |
-|Power_supply |#{#SNMPVALUE}: Power supply status |<p>MIB: EQUIPMENT-MIB</p><p>Indicates the current power status.</p><p>lowVoltage : The voltage of the power unit is too low.</p><p>overCurrent: The current of the power unit is too high.</p><p>working : The power unit is working normally.</p><p>fail : The power unit has failed.</p><p>connect : The power unit is connected but not powered on.</p><p>disconnect : The power unit is not connected.</p> |SNMP |sensor.psu.status[swPowerStatus.{#SNMPINDEX}] |
-|Temperature |#{#SNMPVALUE}: Temperature |<p>MIB: EQUIPMENT-MIB</p><p>The shelf current temperature.</p> |SNMP |sensor.temp.value[swTemperatureCurrent.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|--------------|------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------------------------------------------------------------------------|
+| CPU          | CPU utilization                    | <p>MIB: DLINK-AGENT-MIB</p><p>The unit of time is 1 minute. The value will be between 0% (idle) and 100% (very busy).</p> | SNMP | system.cpu.util[agentCPUutilizationIn1min.0] |
+| Fans | #{#SNMPVALUE}: Fan status | <p>MIB: EQUIPMENT-MIB</p><p>Indicates the current fan status.</p><p>speed-0 : If the fan function is normal and the fan does not spin due to the temperature not reaching the threshold, the status of the fan is speed 0.</p><p>speed-low : Fan spin using the lowest speed.</p><p>speed-middle: Fan spin using the middle speed.</p><p>speed-high : Fan spin using the highest speed.</p> | SNMP | sensor.fan.status[swFanStatus.{#SNMPINDEX}] |
+| Inventory    | Hardware model name                | <p>MIB: SNMPv2-MIB</p><p>A textual description of the entity. This value should include the full name and version identification of the system's hardware type, software operating-system, and networking software.</p> | SNMP | system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware serial number | <p>MIB: DLINK-AGENT-MIB</p><p>A text string containing the serial number of this device.</p> | SNMP | system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Firmware version | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware version(revision) | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory       | #{#SNMPVALUE}: Memory utilization  | <p>MIB: DLINK-AGENT-MIB</p><p>The percentage of used DRAM memory out of the total DRAM available. The value will be between 0% (idle) and 100% (very busy).</p> | SNMP | vm.memory.util[agentDRAMutilization.{#SNMPINDEX}] |
+| Power_supply | #{#SNMPVALUE}: Power supply status | <p>MIB: EQUIPMENT-MIB</p><p>Indicates the current power status.</p><p>lowVoltage : The voltage of the power unit is too low.</p><p>overCurrent: The current of the power unit is too high.</p><p>working : The power unit is working normally.</p><p>fail : The power unit has failed.</p><p>connect : The power unit is connected but not powered on.</p><p>disconnect : The power unit is not connected.</p> | SNMP | sensor.psu.status[swPowerStatus.{#SNMPINDEX}] |
+| Temperature | #{#SNMPVALUE}: Temperature | <p>MIB: EQUIPMENT-MIB</p><p>The shelf current temperature.</p> | SNMP | sensor.temp.value[swTemperatureCurrent.{#SNMPINDEX}] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[agentCPUutilizationIn1min.0].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|#{#SNMPVALUE}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[swFanStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Firmware has changed |<p>Firmware version has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|#{#SNMPVALUE}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[agentDRAMutilization.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
-|#{#SNMPVALUE}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[swPowerStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|#{#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[swTemperatureCurrent.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[swTemperatureCurrent.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- #{#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|#{#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[swTemperatureCurrent.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[swTemperatureCurrent.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|#{#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[swTemperatureCurrent.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[swTemperatureCurrent.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|--------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------------------------------------------------------------------------------------------|
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[agentCPUutilizationIn1min.0].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| #{#SNMPVALUE}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[swFanStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Firmware has changed | <p>Firmware version has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| #{#SNMPVALUE}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[agentDRAMutilization.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
+| #{#SNMPVALUE}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[swPowerStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| #{#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[swTemperatureCurrent.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[swTemperatureCurrent.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- #{#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| #{#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[swTemperatureCurrent.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[swTemperatureCurrent.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| #{#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[swTemperatureCurrent.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[swTemperatureCurrent.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
## Feedback
diff --git a/templates/net/extreme_snmp/README.md b/templates/net/extreme_snmp/README.md
index 00179a37bc0..9055c58a143 100644
--- a/templates/net/extreme_snmp/README.md
+++ b/templates/net/extreme_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,66 +15,66 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$FAN_CRIT_STATUS} |<p>-</p> |`2` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
-|{$PSU_CRIT_STATUS} |<p>-</p> |`3` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT_STATUS} |<p>-</p> |`1` |
-|{$TEMP_CRIT} |<p>-</p> |`65` |
-|{$TEMP_WARN} |<p>-</p> |`55` |
+| Name | Description | Default |
+|---------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$FAN_CRIT_STATUS} | <p>-</p> | `2` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
+| {$PSU_CRIT_STATUS} | <p>-</p> | `3` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT_STATUS} | <p>-</p> | `1` |
+| {$TEMP_CRIT} | <p>-</p> | `65` |
+| {$TEMP_WARN} | <p>-</p> | `55` |
## Template links
-|Name|
-|----|
-|EtherLike-MIB SNMP |
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|--------------------|
+| EtherLike-MIB SNMP |
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Memory Discovery |<p>-</p> |SNMP |memory.discovery |
-|PSU Discovery |<p>Table of status of all power supplies in the system.</p> |SNMP |psu.discovery |
-|FAN Discovery |<p>-</p> |SNMP |fan.discovery |
+| Name | Description | Type | Key and additional info |
+|------------------|-------------------------------------------------------------|------|-------------------------|
+| Memory Discovery | <p>-</p> | SNMP | memory.discovery |
+| PSU Discovery | <p>Table of status of all power supplies in the system.</p> | SNMP | psu.discovery |
+| FAN Discovery | <p>-</p> | SNMP | fan.discovery |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |CPU utilization |<p>MIB: EXTREME-SOFTWARE-MONITOR-MIB</p><p>Total CPU utlization (percentage) as of last sampling.</p> |SNMP |system.cpu.util[extremeCpuMonitorTotalUtilization.0] |
-|Fans |Fan {#SNMPVALUE}: Fan status |<p>MIB: EXTREME-SYSTEM-MIB</p><p>Operational status of a cooling fan.</p> |SNMP |sensor.fan.status[extremeFanOperational.{#SNMPINDEX}] |
-|Fans |Fan {#SNMPVALUE}: Fan speed |<p>MIB: EXTREME-SYSTEM-MIB</p><p>The speed (RPM) of a cooling fan in the fantray {#SNMPVALUE}</p> |SNMP |sensor.fan.speed[extremeFanSpeed.{#SNMPINDEX}] |
-|Inventory |Hardware model name |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware serial number |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Firmware version |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware version(revision) |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Operating system |<p>MIB: EXTREME-SYSTEM-MIB</p><p>The software revision of the primary image stored in this device.</p><p>This string will have a zero length if the revision is unknown, invalid or not present.</p><p>This will also be reported in RMON2 probeSoftwareRev if this is the software image currently running in the device.</p> |SNMP |system.sw.os[extremePrimarySoftwareRev.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |#{#SNMPVALUE}: Available memory |<p>MIB: EXTREME-SOFTWARE-MONITOR-MIB</p><p>Total amount of free memory in Kbytes in the system.</p> |SNMP |vm.memory.available[extremeMemoryMonitorSystemFree.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |#{#SNMPVALUE}: Total memory |<p>MIB: EXTREME-SOFTWARE-MONITOR-MIB</p><p>Total amount of DRAM in Kbytes in the system.</p> |SNMP |vm.memory.total[extremeMemoryMonitorSystemTotal.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |#{#SNMPVALUE}: Memory utilization |<p>Memory utilization in %</p> |CALCULATED |vm.memory.util[{#SNMPVALUE}]<p>**Expression**:</p>`(last("vm.memory.total[extremeMemoryMonitorSystemTotal.{#SNMPINDEX}]") - last("vm.memory.available[extremeMemoryMonitorSystemFree.{#SNMPINDEX}]")) / last("vm.memory.total[extremeMemoryMonitorSystemTotal.{#SNMPINDEX}]") * 100` |
-|Power_supply |PSU {#SNMPVALUE}: Power supply status |<p>MIB: EXTREME-SYSTEM-MIB</p><p>Status of the power supply {#SNMPVALUE}</p> |SNMP |sensor.psu.status[extremePowerSupplyStatus.{#SNMPINDEX}] |
-|Temperature |Device: Temperature |<p>MIB: EXTREME-SYSTEM-MIB</p><p>Temperature readings of testpoint: Device</p><p>Reference: https://gtacknowledge.extremenetworks.com/articles/Q_A/Does-EXOS-support-temperature-polling-via-SNMP-on-all-nodes-in-a-stack</p> |SNMP |sensor.temp.value[extremeCurrentTemperature.0] |
-|Temperature |Device: Temperature status |<p>MIB: EXTREME-SYSTEM-MIB</p><p>Temperature status of testpoint: Device</p> |SNMP |sensor.temp.status[extremeOverTemperatureAlarm.0] |
+| Group | Name | Description | Type | Key and additional info |
+|--------------|---------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CPU          | CPU utilization                       | <p>MIB: EXTREME-SOFTWARE-MONITOR-MIB</p><p>Total CPU utilization (percentage) as of last sampling.</p>                                                                                                                                                                                                                         | SNMP       | system.cpu.util[extremeCpuMonitorTotalUtilization.0]                                                                                                                                                                                                                                    |
+| Fans | Fan {#SNMPVALUE}: Fan status | <p>MIB: EXTREME-SYSTEM-MIB</p><p>Operational status of a cooling fan.</p> | SNMP | sensor.fan.status[extremeFanOperational.{#SNMPINDEX}] |
+| Fans | Fan {#SNMPVALUE}: Fan speed | <p>MIB: EXTREME-SYSTEM-MIB</p><p>The speed (RPM) of a cooling fan in the fantray {#SNMPVALUE}</p> | SNMP | sensor.fan.speed[extremeFanSpeed.{#SNMPINDEX}] |
+| Inventory | Hardware model name | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware serial number | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Firmware version | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware version(revision) | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Operating system | <p>MIB: EXTREME-SYSTEM-MIB</p><p>The software revision of the primary image stored in this device.</p><p>This string will have a zero length if the revision is unknown, invalid or not present.</p><p>This will also be reported in RMON2 probeSoftwareRev if this is the software image currently running in the device.</p> | SNMP | system.sw.os[extremePrimarySoftwareRev.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory | #{#SNMPVALUE}: Available memory | <p>MIB: EXTREME-SOFTWARE-MONITOR-MIB</p><p>Total amount of free memory in Kbytes in the system.</p> | SNMP | vm.memory.available[extremeMemoryMonitorSystemFree.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | #{#SNMPVALUE}: Total memory | <p>MIB: EXTREME-SOFTWARE-MONITOR-MIB</p><p>Total amount of DRAM in Kbytes in the system.</p> | SNMP | vm.memory.total[extremeMemoryMonitorSystemTotal.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | #{#SNMPVALUE}: Memory utilization | <p>Memory utilization in %</p> | CALCULATED | vm.memory.util[{#SNMPVALUE}]<p>**Expression**:</p>`(last("vm.memory.total[extremeMemoryMonitorSystemTotal.{#SNMPINDEX}]") - last("vm.memory.available[extremeMemoryMonitorSystemFree.{#SNMPINDEX}]")) / last("vm.memory.total[extremeMemoryMonitorSystemTotal.{#SNMPINDEX}]") * 100` |
+| Power_supply | PSU {#SNMPVALUE}: Power supply status | <p>MIB: EXTREME-SYSTEM-MIB</p><p>Status of the power supply {#SNMPVALUE}</p> | SNMP | sensor.psu.status[extremePowerSupplyStatus.{#SNMPINDEX}] |
+| Temperature | Device: Temperature | <p>MIB: EXTREME-SYSTEM-MIB</p><p>Temperature readings of testpoint: Device</p><p>Reference: https://gtacknowledge.extremenetworks.com/articles/Q_A/Does-EXOS-support-temperature-polling-via-SNMP-on-all-nodes-in-a-stack</p> | SNMP | sensor.temp.value[extremeCurrentTemperature.0] |
+| Temperature | Device: Temperature status | <p>MIB: EXTREME-SYSTEM-MIB</p><p>Temperature status of testpoint: Device</p> | SNMP | sensor.temp.status[extremeOverTemperatureAlarm.0] |
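+
+A note on the "Memory utilization" row above: it is a calculated item, not a direct SNMP read, and it derives a percentage from the total and free memory items. A minimal sketch of that arithmetic, using made-up byte counts (the real items return bytes after the `MULTIPLIER: 1024` preprocessing, since the MIB reports Kbytes):
+
+```python
+# Illustrative values only - assume the two SNMP items returned these byte
+# counts after the MULTIPLIER: 1024 preprocessing step.
+total_bytes = 512 * 1024 * 1024   # extremeMemoryMonitorSystemTotal
+free_bytes = 128 * 1024 * 1024    # extremeMemoryMonitorSystemFree
+
+# Same formula as the calculated item's expression:
+# (last(total) - last(free)) / last(total) * 100
+memory_util_pct = (total_bytes - free_bytes) / total_bytes * 100
+print(f"Memory utilization: {memory_util_pct:.1f}%")  # 75.0%
+```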
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[extremeCpuMonitorTotalUtilization.0].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|Fan {#SNMPVALUE}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[extremeFanOperational.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Firmware has changed |<p>Firmware version has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Operating system description has changed |<p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> |`{TEMPLATE_NAME:system.sw.os[extremePrimarySoftwareRev.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[extremePrimarySoftwareRev.0].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|#{#SNMPVALUE}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[{#SNMPVALUE}].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
-|PSU {#SNMPVALUE}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[extremePowerSupplyStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|Device: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[extremeCurrentTemperature.0].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[extremeCurrentTemperature.0].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- Device: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|Device: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[extremeCurrentTemperature.0].avg(5m)}>{$TEMP_CRIT:""} or {Extreme EXOS SNMP:sensor.temp.status[extremeOverTemperatureAlarm.0].last()}={$TEMP_CRIT_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[extremeCurrentTemperature.0].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|Device: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[extremeCurrentTemperature.0].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[extremeCurrentTemperature.0].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|----------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------|
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[extremeCpuMonitorTotalUtilization.0].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| Fan {#SNMPVALUE}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[extremeFanOperational.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Firmware has changed | <p>Firmware version has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Operating system description has changed                              | <p>Operating system description has changed. Possible reasons: the system has been updated or replaced. Ack to close.</p>  | `{TEMPLATE_NAME:system.sw.os[extremePrimarySoftwareRev.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[extremePrimarySoftwareRev.0].strlen()}>0`                                                                                                                                                                                       | INFO     | <p>Manual close: YES</p>                                                                           |
+| #{#SNMPVALUE}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[{#SNMPVALUE}].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
+| PSU {#SNMPVALUE}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[extremePowerSupplyStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| Device: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[extremeCurrentTemperature.0].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[extremeCurrentTemperature.0].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- Device: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| Device: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[extremeCurrentTemperature.0].avg(5m)}>{$TEMP_CRIT:""} or {Extreme EXOS SNMP:sensor.temp.status[extremeOverTemperatureAlarm.0].last()}={$TEMP_CRIT_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[extremeCurrentTemperature.0].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| Device: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[extremeCurrentTemperature.0].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[extremeCurrentTemperature.0].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
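+
+The two temperature triggers above pair each firing threshold with a recovery expression 3 degrees lower, which keeps the trigger from flapping while the sensor hovers around the threshold. A rough sketch of that hysteresis logic (the helper below is hypothetical, not Zabbix code; the sample readings are made up, and the threshold matches the `{$TEMP_WARN}` default of `55` shown above):
+
+```python
+TEMP_WARN = 55  # default of {$TEMP_WARN} in this template
+
+def warning_state(active: bool, avg_5m: float, max_5m: float) -> bool:
+    """Mimic the trigger/recovery pair:
+    problem:  avg(5m) > TEMP_WARN
+    recovery: max(5m) < TEMP_WARN - 3
+    Between the two thresholds the previous state is kept (hysteresis)."""
+    if not active:
+        return avg_5m > TEMP_WARN
+    return not (max_5m < TEMP_WARN - 3)
+
+state = False
+for avg, mx in [(54, 56), (56, 57), (54, 55), (50, 51)]:
+    state = warning_state(state, avg, mx)
+    print(f"avg={avg} max={mx} -> {'PROBLEM' if state else 'OK'}")
+```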
## Feedback
diff --git a/templates/net/generic_snmp/README.md b/templates/net/generic_snmp/README.md
index 52b261d44a6..d3030c9ad49 100644
--- a/templates/net/generic_snmp/README.md
+++ b/templates/net/generic_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
Use this template if you can't find the template for specific vendor or device family.
## Setup
@@ -17,11 +17,11 @@ No specific Zabbix configuration is required.
## Template links
-|Name|
-|----|
-|EtherLike-MIB SNMP |
-|Generic SNMP |
-|Interfaces Simple SNMP |
+| Name |
+|------------------------|
+| EtherLike-MIB SNMP |
+| Generic SNMP |
+| Interfaces Simple SNMP |
## Discovery rules
diff --git a/templates/net/hp_hh3c_snmp/README.md b/templates/net/hp_hh3c_snmp/README.md
index a38bffcf4b2..20d50bc389b 100644
--- a/templates/net/hp_hh3c_snmp/README.md
+++ b/templates/net/hp_hh3c_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
http://certifiedgeek.weebly.com/blog/hp-comware-snmp-mib-for-cpu-memory-and-temperature
http://www.h3c.com.hk/products___solutions/technology/system_management/configuration_example/200912/656451_57_0.htm
@@ -22,66 +22,66 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$FAN_CRIT_STATUS:"fanError"} |<p>-</p> |`41` |
-|{$FAN_CRIT_STATUS:"hardwareFaulty"} |<p>-</p> |`91` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
-|{$PSU_CRIT_STATUS:"hardwareFaulty"} |<p>-</p> |`91` |
-|{$PSU_CRIT_STATUS:"psuError"} |<p>-</p> |`51` |
-|{$PSU_CRIT_STATUS:"rpsError"} |<p>-</p> |`61` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`60` |
-|{$TEMP_WARN} |<p>-</p> |`50` |
+| Name | Description | Default |
+|-------------------------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$FAN_CRIT_STATUS:"fanError"} | <p>-</p> | `41` |
+| {$FAN_CRIT_STATUS:"hardwareFaulty"} | <p>-</p> | `91` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
+| {$PSU_CRIT_STATUS:"hardwareFaulty"} | <p>-</p> | `91` |
+| {$PSU_CRIT_STATUS:"psuError"} | <p>-</p> | `51` |
+| {$PSU_CRIT_STATUS:"rpsError"} | <p>-</p> | `61` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `60` |
+| {$TEMP_WARN} | <p>-</p> | `50` |
## Template links
-|Name|
-|----|
-|EtherLike-MIB SNMP |
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|--------------------|
+| EtherLike-MIB SNMP |
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Module Discovery |<p>Filter limits results to 'Module level1' or Fabric Modules</p> |SNMP |module.discovery<p>**Filter**:</p>OR <p>- A: {#SNMPVALUE} MATCHES_REGEX `^(MODULE|Module) (LEVEL|level)1$`</p><p>- A: {#SNMPVALUE} MATCHES_REGEX `(Fabric|FABRIC) (.+) (Module|MODULE)`</p> |
-|Temperature Discovery |<p>Discovering modules temperature (same filter as in Module Discovery) plus and temperature sensors</p> |SNMP |temp.discovery<p>**Filter**:</p>OR <p>- A: {#SNMPVALUE} MATCHES_REGEX `^(MODULE|Module) (LEVEL|level)1$`</p><p>- A: {#SNMPVALUE} MATCHES_REGEX `(Fabric|FABRIC) (.+) (Module|MODULE)`</p><p>- A: {#SNMPVALUE} MATCHES_REGEX `(T|t)emperature.*(s|S)ensor`</p> |
-|FAN Discovery |<p>Discovering all entities of PhysicalClass - 7: fan(7)</p> |SNMP |fan.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `7`</p> |
-|PSU Discovery |<p>Discovering all entities of PhysicalClass - 6: powerSupply(6)</p> |SNMP |psu.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `6`</p> |
-|Entity Discovery |<p>-</p> |SNMP |entity.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `3`</p> |
+| Name | Description | Type | Key and additional info |
+|-----------------------|----------------------------------------------------------------------------------------------------------|------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Module Discovery | <p>Filter limits results to 'Module level1' or Fabric Modules</p> | SNMP | module.discovery<p>**Filter**:</p>OR <p>- A: {#SNMPVALUE} MATCHES_REGEX `^(MODULE|Module) (LEVEL|level)1$`</p><p>- A: {#SNMPVALUE} MATCHES_REGEX `(Fabric|FABRIC) (.+) (Module|MODULE)`</p> |
+| Temperature Discovery | <p>Discovering module temperatures (same filter as in Module Discovery) plus temperature sensors</p>      | SNMP | temp.discovery<p>**Filter**:</p>OR <p>- A: {#SNMPVALUE} MATCHES_REGEX `^(MODULE|Module) (LEVEL|level)1$`</p><p>- A: {#SNMPVALUE} MATCHES_REGEX `(Fabric|FABRIC) (.+) (Module|MODULE)`</p><p>- A: {#SNMPVALUE} MATCHES_REGEX `(T|t)emperature.*(s|S)ensor`</p>   |
+| FAN Discovery | <p>Discovering all entities of PhysicalClass - 7: fan(7)</p> | SNMP | fan.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `7`</p> |
+| PSU Discovery | <p>Discovering all entities of PhysicalClass - 6: powerSupply(6)</p> | SNMP | psu.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `6`</p> |
+| Entity Discovery | <p>-</p> | SNMP | entity.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `3`</p> |
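+
+Module Discovery and Temperature Discovery above keep only entities whose {#SNMPVALUE} matches one of the listed regular expressions (the filter type is OR). A small sketch of how the Module Discovery patterns behave, using made-up entity names that are illustrative only:
+
+```python
+import re
+
+# Patterns copied from the Module Discovery filter rows above.
+patterns = [
+    re.compile(r"^(MODULE|Module) (LEVEL|level)1$"),
+    re.compile(r"(Fabric|FABRIC) (.+) (Module|MODULE)"),
+]
+
+# Hypothetical {#SNMPVALUE} strings - not taken from a real device.
+for name in ["Module level1", "FABRIC card Module", "Port 1/0/1"]:
+    discovered = any(p.search(name) for p in patterns)  # OR of the two filter rows
+    print(f"{name!r}: {'discovered' if discovered else 'filtered out'}")
+```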
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |{#MODULE_NAME}: CPU utilization |<p>MIB: HH3C-ENTITY-EXT-MIB</p><p>The CPU usage for this entity. Generally, the CPU usage</p><p>will calculate the overall CPU usage on the entity, and it</p><p>is not sensible with the number of CPU on the entity</p> |SNMP |system.cpu.util[hh3cEntityExtCpuUsage.{#SNMPINDEX}] |
-|Fans |{#ENT_NAME}: Fan status |<p>MIB: HH3C-ENTITY-EXT-MIB</p><p>Indicate the error state of this entity object.</p><p>fanError(41) means that the fan stops working.</p> |SNMP |sensor.fan.status[hh3cEntityExtErrorStatus.{#SNMPINDEX}] |
-|Inventory |{#ENT_NAME}: Hardware model name |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.model[entPhysicalDescr.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |{#ENT_NAME}: Hardware serial number |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |{#ENT_NAME}: Firmware version |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.firmware[entPhysicalFirmwareRev.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |{#ENT_NAME}: Hardware version(revision) |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.version[entPhysicalHardwareRev.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |{#ENT_NAME}: Operating system |<p>MIB: ENTITY-MIB</p> |SNMP |system.sw.os[entPhysicalSoftwareRev.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |{#MODULE_NAME}: Memory utilization |<p>MIB: HH3C-ENTITY-EXT-MIB</p><p>The memory usage for the entity. This object indicates what</p><p>percent of memory are used.</p> |SNMP |vm.memory.util[hh3cEntityExtMemUsage.{#SNMPINDEX}] |
-|Power_supply |{#ENT_NAME}: Power supply status |<p>MIB: HH3C-ENTITY-EXT-MIB</p><p>Indicate the error state of this entity object.</p><p>psuError(51) means that the Power Supply Unit is in the state of fault.</p><p>rpsError(61) means the Redundant Power Supply is in the state of fault.</p> |SNMP |sensor.psu.status[hh3cEntityExtErrorStatus.{#SNMPINDEX}] |
-|Temperature |{#SNMPVALUE}: Temperature |<p>MIB: HH3C-ENTITY-EXT-MIB</p><p>The temperature for the {#SNMPVALUE}.</p> |SNMP |sensor.temp.value[hh3cEntityExtTemperature.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|--------------|-----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|------------------------------------------------------------------------------------------------------------------------------|
+| CPU | {#MODULE_NAME}: CPU utilization | <p>MIB: HH3C-ENTITY-EXT-MIB</p><p>The CPU usage for this entity. Generally, the CPU usage</p><p>will calculate the overall CPU usage on the entity, and it</p><p>is not sensible with the number of CPU on the entity</p> | SNMP | system.cpu.util[hh3cEntityExtCpuUsage.{#SNMPINDEX}] |
+| Fans | {#ENT_NAME}: Fan status | <p>MIB: HH3C-ENTITY-EXT-MIB</p><p>Indicate the error state of this entity object.</p><p>fanError(41) means that the fan stops working.</p> | SNMP | sensor.fan.status[hh3cEntityExtErrorStatus.{#SNMPINDEX}] |
+| Inventory | {#ENT_NAME}: Hardware model name | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.model[entPhysicalDescr.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | {#ENT_NAME}: Hardware serial number | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | {#ENT_NAME}: Firmware version | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.firmware[entPhysicalFirmwareRev.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | {#ENT_NAME}: Hardware version(revision) | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.version[entPhysicalHardwareRev.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | {#ENT_NAME}: Operating system | <p>MIB: ENTITY-MIB</p> | SNMP | system.sw.os[entPhysicalSoftwareRev.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory | {#MODULE_NAME}: Memory utilization | <p>MIB: HH3C-ENTITY-EXT-MIB</p><p>The memory usage for the entity. This object indicates what</p><p>percent of memory are used.</p> | SNMP | vm.memory.util[hh3cEntityExtMemUsage.{#SNMPINDEX}] |
+| Power_supply | {#ENT_NAME}: Power supply status | <p>MIB: HH3C-ENTITY-EXT-MIB</p><p>Indicate the error state of this entity object.</p><p>psuError(51) means that the Power Supply Unit is in the state of fault.</p><p>rpsError(61) means the Redundant Power Supply is in the state of fault.</p> | SNMP | sensor.psu.status[hh3cEntityExtErrorStatus.{#SNMPINDEX}] |
+| Temperature | {#SNMPVALUE}: Temperature | <p>MIB: HH3C-ENTITY-EXT-MIB</p><p>The temperature for the {#SNMPVALUE}.</p> | SNMP | sensor.temp.value[hh3cEntityExtTemperature.{#SNMPINDEX}] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#MODULE_NAME}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[hh3cEntityExtCpuUsage.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|{#ENT_NAME}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[hh3cEntityExtErrorStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"fanError"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[hh3cEntityExtErrorStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"hardwareFaulty"},eq)}=1` |AVERAGE | |
-|{#ENT_NAME}: Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#ENT_NAME}: Firmware has changed |<p>Firmware version has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.firmware[entPhysicalFirmwareRev.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.firmware[entPhysicalFirmwareRev.{#SNMPINDEX}].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#ENT_NAME}: Operating system description has changed |<p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> |`{TEMPLATE_NAME:system.sw.os[entPhysicalSoftwareRev.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.sw.os[entPhysicalSoftwareRev.{#SNMPINDEX}].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#MODULE_NAME}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[hh3cEntityExtMemUsage.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
-|{#ENT_NAME}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[hh3cEntityExtErrorStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"psuError"},eq)}=1 or {TEMPLATE_NAME:sensor.psu.status[hh3cEntityExtErrorStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"rpsError"},eq)}=1 or {TEMPLATE_NAME:sensor.psu.status[hh3cEntityExtErrorStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"hardwareFaulty"},eq)}=1` |AVERAGE | |
-|{#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[hh3cEntityExtTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[hh3cEntityExtTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|{#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[hh3cEntityExtTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[hh3cEntityExtTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|{#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[hh3cEntityExtTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[hh3cEntityExtTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------|
+| {#MODULE_NAME}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[hh3cEntityExtCpuUsage.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| {#ENT_NAME}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[hh3cEntityExtErrorStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"fanError"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[hh3cEntityExtErrorStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"hardwareFaulty"},eq)}=1` | AVERAGE | |
+| {#ENT_NAME}: Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| {#ENT_NAME}: Firmware has changed | <p>Firmware version has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.firmware[entPhysicalFirmwareRev.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.firmware[entPhysicalFirmwareRev.{#SNMPINDEX}].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| {#ENT_NAME}: Operating system description has changed                   | <p>Operating system description has changed. Possible reasons: the system has been updated or replaced. Ack to close.</p> | `{TEMPLATE_NAME:system.sw.os[entPhysicalSoftwareRev.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.sw.os[entPhysicalSoftwareRev.{#SNMPINDEX}].strlen()}>0`                                                                                                                                                                                                                                                                                                       | INFO     | <p>Manual close: YES</p>                                                                                 |
+| {#MODULE_NAME}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[hh3cEntityExtMemUsage.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
+| {#ENT_NAME}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[hh3cEntityExtErrorStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"psuError"},eq)}=1 or {TEMPLATE_NAME:sensor.psu.status[hh3cEntityExtErrorStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"rpsError"},eq)}=1 or {TEMPLATE_NAME:sensor.psu.status[hh3cEntityExtErrorStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"hardwareFaulty"},eq)}=1` | AVERAGE | |
+| {#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[hh3cEntityExtTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[hh3cEntityExtTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[hh3cEntityExtTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[hh3cEntityExtTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| {#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[hh3cEntityExtTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[hh3cEntityExtTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
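+
+The fan and power-supply triggers above OR several `count(#1, ..., eq)=1` checks, one per context-specific macro, so a problem is raised when the last hh3cEntityExtErrorStatus value equals any of the listed fault codes. A compact sketch of that comparison (hypothetical helper, with fault codes taken from the macro defaults in this README):
+
+```python
+# Fault codes from the {$PSU_CRIT_STATUS:"..."} macro defaults above.
+PSU_CRIT_STATUS = {"psuError": 51, "rpsError": 61, "hardwareFaulty": 91}
+
+def psu_in_critical_state(last_status: int) -> bool:
+    # Equivalent to OR-ing the three count(#1, {$PSU_CRIT_STATUS:"..."}, eq)=1 checks:
+    # the most recent collected value equals one of the critical codes.
+    return last_status in PSU_CRIT_STATUS.values()
+
+print(psu_in_critical_state(51))  # True  (psuError)
+print(psu_in_critical_state(2))   # False (any value outside the fault set)
+```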
## Feedback
diff --git a/templates/net/hp_hpn_snmp/README.md b/templates/net/hp_hpn_snmp/README.md
index a3d14cce29d..2ac9ae38e66 100644
--- a/templates/net/hp_hpn_snmp/README.md
+++ b/templates/net/hp_hpn_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
This template was tested on:
@@ -20,70 +20,70 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$FAN_CRIT_STATUS:"bad"} |<p>-</p> |`2` |
-|{$FAN_WARN_STATUS:"warning"} |<p>-</p> |`3` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
-|{$PSU_CRIT_STATUS:"bad"} |<p>-</p> |`2` |
-|{$PSU_WARN_STATUS:"warning"} |<p>-</p> |`3` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`60` |
-|{$TEMP_WARN} |<p>-</p> |`50` |
+| Name | Description | Default |
+|------------------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$FAN_CRIT_STATUS:"bad"} | <p>-</p> | `2` |
+| {$FAN_WARN_STATUS:"warning"} | <p>-</p> | `3` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
+| {$PSU_CRIT_STATUS:"bad"} | <p>-</p> | `2` |
+| {$PSU_WARN_STATUS:"warning"} | <p>-</p> | `3` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `60` |
+| {$TEMP_WARN} | <p>-</p> | `50` |
## Template links
-|Name|
-|----|
-|EtherLike-MIB SNMP |
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|--------------------|
+| EtherLike-MIB SNMP |
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Temperature Discovery |<p>ENTITY-SENSORS-MIB::EntitySensorDataType discovery with celsius filter</p> |SNMP |temp.precision0.discovery<p>**Filter**:</p>AND <p>- B: {#SENSOR_TYPE} MATCHES_REGEX `8`</p><p>- B: {#SENSOR_PRECISION} MATCHES_REGEX `0`</p> |
-|Memory Discovery |<p>Discovery of NETSWITCH-MIB::hpLocalMemTable, A table that contains information on all the local memory for each slot.</p> |SNMP |memory.discovery |
-|FAN Discovery |<p>Discovering all entities of hpicfSensorObjectId that ends with: 11.2.3.7.8.3.2 - fans and are present</p> |SNMP |fan.discovery<p>**Filter**:</p>AND <p>- A: {#ENT_CLASS} MATCHES_REGEX `.+8.3.2$`</p><p>- A: {#ENT_STATUS} MATCHES_REGEX `(1|2|3|4)`</p> |
-|PSU Discovery |<p>Discovering all entities of hpicfSensorObjectId that ends with: 11.2.3.7.8.3.1 - power supplies and are present</p> |SNMP |psu.discovery<p>**Filter**:</p>AND <p>- A: {#ENT_CLASS} MATCHES_REGEX `.+8.3.1$`</p><p>- A: {#ENT_STATUS} MATCHES_REGEX `(1|2|3|4)`</p> |
-|Temp Status Discovery |<p>Discovering all entities of hpicfSensorObjectId that ends with: 11.2.3.7.8.3.3 - over temp status and are present</p> |SNMP |temp.status.discovery<p>**Filter**:</p>AND <p>- A: {#ENT_CLASS} MATCHES_REGEX `.+8.3.3$`</p><p>- A: {#ENT_STATUS} MATCHES_REGEX `(1|2|3|4)`</p> |
-|Entity Discovery |<p>-</p> |SNMP |entity.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `3`</p> |
+| Name | Description | Type | Key and additional info |
+|-----------------------|------------------------------------------------------------------------------------------------------------------------------|------|-------------------------------------------------------------------------------------------------------------------------------------------------|
+| Temperature Discovery | <p>ENTITY-SENSORS-MIB::EntitySensorDataType discovery with celsius filter</p> | SNMP | temp.precision0.discovery<p>**Filter**:</p>AND <p>- B: {#SENSOR_TYPE} MATCHES_REGEX `8`</p><p>- B: {#SENSOR_PRECISION} MATCHES_REGEX `0`</p> |
+| Memory Discovery      | <p>Discovery of NETSWITCH-MIB::hpLocalMemTable, a table that contains information on all the local memory for each slot.</p>  | SNMP | memory.discovery                                                                                                                                  |
+| FAN Discovery | <p>Discovering all entities of hpicfSensorObjectId that ends with: 11.2.3.7.8.3.2 - fans and are present</p> | SNMP | fan.discovery<p>**Filter**:</p>AND <p>- A: {#ENT_CLASS} MATCHES_REGEX `.+8.3.2$`</p><p>- A: {#ENT_STATUS} MATCHES_REGEX `(1|2|3|4)`</p> |
+| PSU Discovery | <p>Discovering all entities of hpicfSensorObjectId that ends with: 11.2.3.7.8.3.1 - power supplies and are present</p> | SNMP | psu.discovery<p>**Filter**:</p>AND <p>- A: {#ENT_CLASS} MATCHES_REGEX `.+8.3.1$`</p><p>- A: {#ENT_STATUS} MATCHES_REGEX `(1|2|3|4)`</p> |
+| Temp Status Discovery | <p>Discovering all entities of hpicfSensorObjectId that ends with: 11.2.3.7.8.3.3 - over temp status and are present</p> | SNMP | temp.status.discovery<p>**Filter**:</p>AND <p>- A: {#ENT_CLASS} MATCHES_REGEX `.+8.3.3$`</p><p>- A: {#ENT_STATUS} MATCHES_REGEX `(1|2|3|4)`</p> |
+| Entity Discovery | <p>-</p> | SNMP | entity.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `3`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |CPU utilization |<p>MIB: STATISTICS-MIB</p><p>The CPU utilization in percent(%).</p><p>Reference: http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-c02597344&sp4ts.oid=51079</p> |SNMP |system.cpu.util[hpSwitchCpuStat.0] |
-|Fans |{#ENT_DESCR}: Fan status |<p>MIB: HP-ICF-CHASSIS</p><p>Actual status indicated by the sensor: {#ENT_DESCR}</p> |SNMP |sensor.fan.status[hpicfSensorStatus.{#SNMPINDEX}] |
-|Inventory |Hardware serial number |<p>MIB: SEMI-MIB</p> |SNMP |system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Firmware version |<p>MIB: NETSWITCH-MIB</p><p>Contains the operating code version number (also known as software or firmware).</p><p>For example, a software version such as A.08.01 is described as follows:</p><p>A the function set available in your router</p><p>08 the common release number</p><p>01 updates to the current common release</p> |SNMP |system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |{#ENT_NAME}: Hardware model name |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.model[entPhysicalDescr.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |{#ENT_NAME}: Hardware version(revision) |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.version[entPhysicalHardwareRev.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |#{#SNMPVALUE}: Used memory |<p>MIB: NETSWITCH-MIB</p><p>The number of currently allocated bytes.</p> |SNMP |vm.memory.used[hpLocalMemAllocBytes.{#SNMPINDEX}] |
-|Memory |#{#SNMPVALUE}: Available memory |<p>MIB: NETSWITCH-MIB</p><p>The number of available (unallocated) bytes.</p> |SNMP |vm.memory.available[hpLocalMemFreeBytes.{#SNMPINDEX}] |
-|Memory |#{#SNMPVALUE}: Total memory |<p>MIB: NETSWITCH-MIB</p><p>The number of currently installed bytes.</p> |SNMP |vm.memory.total[hpLocalMemTotalBytes.{#SNMPINDEX}] |
-|Memory |#{#SNMPVALUE}: Memory utilization |<p>Memory utilization in %</p> |CALCULATED |vm.memory.util[snmp.{#SNMPINDEX}]<p>**Expression**:</p>`last("vm.memory.used[hpLocalMemAllocBytes.{#SNMPINDEX}]")/last("vm.memory.total[hpLocalMemTotalBytes.{#SNMPINDEX}]")*100` |
-|Power_supply |{#ENT_DESCR}: Power supply status |<p>MIB: HP-ICF-CHASSIS</p><p>Actual status indicated by the sensor: {#ENT_DESCR}</p> |SNMP |sensor.psu.status[hpicfSensorStatus.{#SNMPINDEX}] |
-|Temperature |{#SENSOR_INFO}: Temperature |<p>MIB: ENTITY-SENSORS-MIB</p><p>The most recent measurement obtained by the agent for this sensor.</p><p>To correctly interpret the value of this object, the associated entPhySensorType,</p><p>entPhySensorScale, and entPhySensorPrecision objects must also be examined.</p> |SNMP |sensor.temp.value[entPhySensorValue.{#SNMPINDEX}] |
-|Temperature |{#ENT_DESCR}: Temperature status |<p>MIB: HP-ICF-CHASSIS</p><p>Actual status indicated by the sensor: {#ENT_DESCR}</p> |SNMP |sensor.temp.status[hpicfSensorStatus.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|--------------|-----------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CPU | CPU utilization | <p>MIB: STATISTICS-MIB</p><p>The CPU utilization in percent(%).</p><p>Reference: http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-c02597344&sp4ts.oid=51079</p> | SNMP | system.cpu.util[hpSwitchCpuStat.0] |
+| Fans | {#ENT_DESCR}: Fan status | <p>MIB: HP-ICF-CHASSIS</p><p>Actual status indicated by the sensor: {#ENT_DESCR}</p> | SNMP | sensor.fan.status[hpicfSensorStatus.{#SNMPINDEX}] |
+| Inventory | Hardware serial number | <p>MIB: SEMI-MIB</p> | SNMP | system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Firmware version | <p>MIB: NETSWITCH-MIB</p><p>Contains the operating code version number (also known as software or firmware).</p><p>For example, a software version such as A.08.01 is described as follows:</p><p>A the function set available in your router</p><p>08 the common release number</p><p>01 updates to the current common release</p> | SNMP | system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | {#ENT_NAME}: Hardware model name | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.model[entPhysicalDescr.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | {#ENT_NAME}: Hardware version(revision) | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.version[entPhysicalHardwareRev.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory | #{#SNMPVALUE}: Used memory | <p>MIB: NETSWITCH-MIB</p><p>The number of currently allocated bytes.</p> | SNMP | vm.memory.used[hpLocalMemAllocBytes.{#SNMPINDEX}] |
+| Memory | #{#SNMPVALUE}: Available memory | <p>MIB: NETSWITCH-MIB</p><p>The number of available (unallocated) bytes.</p> | SNMP | vm.memory.available[hpLocalMemFreeBytes.{#SNMPINDEX}] |
+| Memory | #{#SNMPVALUE}: Total memory | <p>MIB: NETSWITCH-MIB</p><p>The number of currently installed bytes.</p> | SNMP | vm.memory.total[hpLocalMemTotalBytes.{#SNMPINDEX}] |
+| Memory | #{#SNMPVALUE}: Memory utilization | <p>Memory utilization in %</p> | CALCULATED | vm.memory.util[snmp.{#SNMPINDEX}]<p>**Expression**:</p>`last("vm.memory.used[hpLocalMemAllocBytes.{#SNMPINDEX}]")/last("vm.memory.total[hpLocalMemTotalBytes.{#SNMPINDEX}]")*100` |
+| Power_supply | {#ENT_DESCR}: Power supply status | <p>MIB: HP-ICF-CHASSIS</p><p>Actual status indicated by the sensor: {#ENT_DESCR}</p> | SNMP | sensor.psu.status[hpicfSensorStatus.{#SNMPINDEX}] |
+| Temperature | {#SENSOR_INFO}: Temperature | <p>MIB: ENTITY-SENSORS-MIB</p><p>The most recent measurement obtained by the agent for this sensor.</p><p>To correctly interpret the value of this object, the associated entPhySensorType,</p><p>entPhySensorScale, and entPhySensorPrecision objects must also be examined.</p> | SNMP | sensor.temp.value[entPhySensorValue.{#SNMPINDEX}] |
+| Temperature | {#ENT_DESCR}: Temperature status | <p>MIB: HP-ICF-CHASSIS</p><p>Actual status indicated by the sensor: {#ENT_DESCR}</p> | SNMP | sensor.temp.status[hpicfSensorStatus.{#SNMPINDEX}] |
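+
+The "{#SENSOR_INFO}: Temperature" item above returns the raw entPhySensorValue, and the quoted MIB text notes that entPhySensorScale and entPhySensorPrecision must also be examined to interpret it. A simplified reading of those semantics (the Temperature Discovery rule above already filters on precision `0` and a celsius sensor type, so in practice the raw value is usually the temperature itself; the helper and sample values are illustrative only):
+
+```python
+def sensor_value(raw: int, scale_exponent: int, precision: int) -> float:
+    """Simplified ENTITY-SENSORS-MIB interpretation: scale the raw integer by
+    10^scale_exponent (0 for 'units', -3 for 'milli', ...) and shift it right
+    by `precision` decimal digits (entPhySensorPrecision)."""
+    return raw * 10.0 ** scale_exponent / 10 ** precision
+
+# With this template's filter (precision 0, celsius sensor type) the value is direct:
+print(sensor_value(47, 0, 0))  # 47.0 degrees Celsius
+```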
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[hpSwitchCpuStat.0].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|{#ENT_DESCR}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[hpicfSensorStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"bad"},eq)}=1` |AVERAGE | |
-|{#ENT_DESCR}: Fan is in warning state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[hpicfSensorStatus.{#SNMPINDEX}].count(#1,{$FAN_WARN_STATUS:"warning"},eq)}=1` |WARNING |<p>**Depends on**:</p><p>- {#ENT_DESCR}: Fan is in critical state</p> |
-|Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Firmware has changed |<p>Firmware version has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|#{#SNMPVALUE}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[snmp.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
-|{#ENT_DESCR}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[hpicfSensorStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"bad"},eq)}=1` |AVERAGE | |
-|{#ENT_DESCR}: Power supply is in warning state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[hpicfSensorStatus.{#SNMPINDEX}].count(#1,{$PSU_WARN_STATUS:"warning"},eq)}=1` |WARNING |<p>**Depends on**:</p><p>- {#ENT_DESCR}: Power supply is in critical state</p> |
-|{#SENSOR_INFO}: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|{#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|{#SENSOR_INFO}: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|---------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------------------------------|
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[hpSwitchCpuStat.0].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| {#ENT_DESCR}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[hpicfSensorStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"bad"},eq)}=1` | AVERAGE | |
+| {#ENT_DESCR}: Fan is in warning state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[hpicfSensorStatus.{#SNMPINDEX}].count(#1,{$FAN_WARN_STATUS:"warning"},eq)}=1` | WARNING | <p>**Depends on**:</p><p>- {#ENT_DESCR}: Fan is in critical state</p> |
+| Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Firmware has changed | <p>Firmware version has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| #{#SNMPVALUE}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[snmp.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
+| {#ENT_DESCR}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[hpicfSensorStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"bad"},eq)}=1` | AVERAGE | |
+| {#ENT_DESCR}: Power supply is in warning state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[hpicfSensorStatus.{#SNMPINDEX}].count(#1,{$PSU_WARN_STATUS:"warning"},eq)}=1` | WARNING | <p>**Depends on**:</p><p>- {#ENT_DESCR}: Power supply is in critical state</p> |
+| {#SENSOR_INFO}: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| {#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| {#SENSOR_INFO}: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[entPhySensorValue.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
## Feedback
diff --git a/templates/net/huawei_snmp/README.md b/templates/net/huawei_snmp/README.md
index ac5defa1df9..921ddf75513 100644
--- a/templates/net/huawei_snmp/README.md
+++ b/templates/net/huawei_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
Reference: https://www.slideshare.net/Huanetwork/huawei-s5700-naming-conventions-and-port-numbering-conventions
Reference: http://support.huawei.com/enterprise/KnowledgebaseReadAction.action?contentId=KB1000090234
@@ -17,56 +17,56 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$FAN_CRIT_STATUS} |<p>-</p> |`2` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`60` |
-|{$TEMP_WARN} |<p>-</p> |`50` |
+| Name | Description | Default |
+|--------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$FAN_CRIT_STATUS} | <p>-</p> | `2` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `60` |
+| {$TEMP_WARN} | <p>-</p> | `50` |
## Template links
-|Name|
-|----|
-|EtherLike-MIB SNMP |
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|--------------------|
+| EtherLike-MIB SNMP |
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|MPU Discovery |<p>http://support.huawei.com/enterprise/KnowledgebaseReadAction.action?contentId=KB1000090234. Filter limits results to Main Processing Units</p> |SNMP |mpu.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_NAME} MATCHES_REGEX `MPU.*`</p> |
-|Entity Discovery |<p>-</p> |SNMP |entity.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `3`</p> |
-|FAN Discovery |<p>-</p> |SNMP |discovery.fans |
+| Name | Description | Type | Key and additional info |
+|------------------|---------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------------------------------------------------------------------|
+| MPU Discovery | <p>http://support.huawei.com/enterprise/KnowledgebaseReadAction.action?contentId=KB1000090234. Filter limits results to Main Processing Units</p> | SNMP | mpu.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_NAME} MATCHES_REGEX `MPU.*`</p> |
+| Entity Discovery | <p>-</p> | SNMP | entity.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `3`</p> |
+| FAN Discovery | <p>-</p> | SNMP | discovery.fans |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |{#ENT_NAME}: CPU utilization |<p>MIB: HUAWEI-ENTITY-EXTENT-MIB</p><p>The CPU usage for this entity. Generally, the CPU usage will calculate the overall CPU usage on the entity, and itis not sensible with the number of CPU on the entity.</p><p>Reference: http://support.huawei.com/enterprise/KnowledgebaseReadAction.action?contentId=KB1000090234</p> |SNMP |system.cpu.util[hwEntityCpuUsage.{#SNMPINDEX}] |
-|Fans |#{#SNMPVALUE}: Fan status |<p>MIB: HUAWEI-ENTITY-EXTENT-MIB</p> |SNMP |sensor.fan.status[hwEntityFanState.{#SNMPINDEX}] |
-|Inventory |{#ENT_NAME}: Hardware serial number |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |{#ENT_NAME}: Hardware version(revision) |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.version[entPhysicalHardwareRev.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |{#ENT_NAME}: Operating system |<p>MIB: ENTITY-MIB</p> |SNMP |system.sw.os[entPhysicalSoftwareRev.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |{#ENT_NAME}: Hardware model name |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.model[entPhysicalDescr.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |{#ENT_NAME}: Memory utilization |<p>MIB: HUAWEI-ENTITY-EXTENT-MIB</p><p>The memory usage for the entity. This object indicates what percent of memory are used.</p><p>Reference: http://support.huawei.com/enterprise/KnowledgebaseReadAction.action?contentId=KB1000090234</p> |SNMP |vm.memory.util[hwEntityMemUsage.{#SNMPINDEX}] |
-|Temperature |{#ENT_NAME}: Temperature |<p>MIB: HUAWEI-ENTITY-EXTENT-MIB</p><p>The temperature for the {#SNMPVALUE}.</p> |SNMP |sensor.temp.value[hwEntityTemperature.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|-------------|-----------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|------------------------------------------------------------------------------------------------------------------------------|
+| CPU         | {#ENT_NAME}: CPU utilization            | <p>MIB: HUAWEI-ENTITY-EXTENT-MIB</p><p>The CPU usage for this entity. Generally, the CPU usage will calculate the overall CPU usage on the entity, and it is not sensible with the number of CPU on the entity.</p><p>Reference: http://support.huawei.com/enterprise/KnowledgebaseReadAction.action?contentId=KB1000090234</p>   | SNMP | system.cpu.util[hwEntityCpuUsage.{#SNMPINDEX}]                                                                                 |
+| Fans | #{#SNMPVALUE}: Fan status | <p>MIB: HUAWEI-ENTITY-EXTENT-MIB</p> | SNMP | sensor.fan.status[hwEntityFanState.{#SNMPINDEX}] |
+| Inventory | {#ENT_NAME}: Hardware serial number | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | {#ENT_NAME}: Hardware version(revision) | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.version[entPhysicalHardwareRev.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | {#ENT_NAME}: Operating system | <p>MIB: ENTITY-MIB</p> | SNMP | system.sw.os[entPhysicalSoftwareRev.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | {#ENT_NAME}: Hardware model name | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.model[entPhysicalDescr.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory | {#ENT_NAME}: Memory utilization | <p>MIB: HUAWEI-ENTITY-EXTENT-MIB</p><p>The memory usage for the entity. This object indicates what percent of memory are used.</p><p>Reference: http://support.huawei.com/enterprise/KnowledgebaseReadAction.action?contentId=KB1000090234</p> | SNMP | vm.memory.util[hwEntityMemUsage.{#SNMPINDEX}] |
+| Temperature | {#ENT_NAME}: Temperature | <p>MIB: HUAWEI-ENTITY-EXTENT-MIB</p><p>The temperature for the {#SNMPVALUE}.</p> | SNMP | sensor.temp.value[hwEntityTemperature.{#SNMPINDEX}] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#ENT_NAME}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[hwEntityCpuUsage.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|#{#SNMPVALUE}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[hwEntityFanState.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|{#ENT_NAME}: Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#ENT_NAME}: Operating system description has changed |<p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> |`{TEMPLATE_NAME:system.sw.os[entPhysicalSoftwareRev.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.sw.os[entPhysicalSoftwareRev.{#SNMPINDEX}].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#ENT_NAME}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[hwEntityMemUsage.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
-|{#ENT_NAME}: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[hwEntityTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[hwEntityTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- {#ENT_NAME}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|{#ENT_NAME}: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[hwEntityTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[hwEntityTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|{#ENT_NAME}: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[hwEntityTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[hwEntityTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------------------------|
+| {#ENT_NAME}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[hwEntityCpuUsage.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| #{#SNMPVALUE}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[hwEntityFanState.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| {#ENT_NAME}: Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[entPhysicalSerialNum.{#SNMPINDEX}].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| {#ENT_NAME}: Operating system description has changed                  | <p>Operating system description has changed. Possible reasons: the system has been updated or replaced. Ack to close.</p>  | `{TEMPLATE_NAME:system.sw.os[entPhysicalSoftwareRev.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.sw.os[entPhysicalSoftwareRev.{#SNMPINDEX}].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| {#ENT_NAME}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[hwEntityMemUsage.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
+| {#ENT_NAME}: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[hwEntityTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[hwEntityTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- {#ENT_NAME}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| {#ENT_NAME}: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[hwEntityTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[hwEntityTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| {#ENT_NAME}: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[hwEntityTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[hwEntityTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
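A note on how the status-based triggers above evaluate: `count(#1,{$FAN_CRIT_STATUS},eq)` counts matches among the last single value, so `=1` simply means "the most recent fan state equals the critical status code". A minimal sketch of such a trigger prototype in the template's YAML export form (field names follow the export format used elsewhere in this diff; the template reference is left as the `{TEMPLATE_NAME}` placeholder used in the table):

```yaml
# Sketch only: the fan trigger from the table above, expressed as a trigger
# prototype. count(#1, X, eq) over the last 1 value returns 1 exactly when the
# latest hwEntityFanState reading equals {$FAN_CRIT_STATUS}.
trigger_prototypes:
  -
    name: '#{#SNMPVALUE}: Fan is in critical state'
    expression: '{TEMPLATE_NAME:sensor.fan.status[hwEntityFanState.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1'
    priority: AVERAGE
    description: 'Please check the fan unit'
```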
## Feedback
diff --git a/templates/net/intel_qlogic_infiniband_snmp/README.md b/templates/net/intel_qlogic_infiniband_snmp/README.md
index cc983b2aad1..3ac31e5a01c 100644
--- a/templates/net/intel_qlogic_infiniband_snmp/README.md
+++ b/templates/net/intel_qlogic_infiniband_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,57 +15,57 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$FAN_CRIT_STATUS} |<p>-</p> |`3` |
-|{$PSU_CRIT_STATUS} |<p>-</p> |`3` |
-|{$PSU_WARN_STATUS} |<p>-</p> |`4` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT_STATUS} |<p>-</p> |`3` |
-|{$TEMP_CRIT} |<p>-</p> |`60` |
-|{$TEMP_WARN_STATUS} |<p>-</p> |`2` |
-|{$TEMP_WARN} |<p>-</p> |`50` |
+| Name | Description | Default |
+|---------------------|-------------|---------|
+| {$FAN_CRIT_STATUS} | <p>-</p> | `3` |
+| {$PSU_CRIT_STATUS} | <p>-</p> | `3` |
+| {$PSU_WARN_STATUS} | <p>-</p> | `4` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT_STATUS} | <p>-</p> | `3` |
+| {$TEMP_CRIT} | <p>-</p> | `60` |
+| {$TEMP_WARN_STATUS} | <p>-</p> | `2` |
+| {$TEMP_WARN} | <p>-</p> | `50` |
## Template links
-|Name|
-|----|
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|-----------------|
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Temperature Discovery |<p>Discovering sensor's table with temperature filter</p> |SNMP |temp.discovery<p>**Filter**:</p>AND <p>- B: {#SENSOR_TYPE} MATCHES_REGEX `2`</p> |
-|Unit Discovery |<p>-</p> |SNMP |unit.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `2`</p> |
-|PSU Discovery |<p>A textual description of the power supply, that can be assigned by the administrator.</p> |SNMP |psu.discovery |
-|FAN Discovery |<p>icsChassisFanDescription of icsChassisFanTable</p> |SNMP |fan.discovery |
+| Name | Description | Type | Key and additional info |
+|-----------------------|----------------------------------------------------------------------------------------------|------|-----------------------------------------------------------------------------------|
+| Temperature Discovery | <p>Discovering sensor's table with temperature filter</p> | SNMP | temp.discovery<p>**Filter**:</p>AND <p>- B: {#SENSOR_TYPE} MATCHES_REGEX `2`</p> |
+| Unit Discovery | <p>-</p> | SNMP | unit.discovery<p>**Filter**:</p>AND_OR <p>- A: {#ENT_CLASS} MATCHES_REGEX `2`</p> |
+| PSU Discovery | <p>A textual description of the power supply, that can be assigned by the administrator.</p> | SNMP | psu.discovery |
+| FAN Discovery | <p>icsChassisFanDescription of icsChassisFanTable</p> | SNMP | fan.discovery |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Fans |{#SNMPVALUE}: Fan status |<p>MIB: ICS-CHASSIS-MIB</p><p>The operational status of the fan unit.</p> |SNMP |sensor.fan.status[icsChassisFanOperStatus.{#SNMPINDEX}] |
-|Inventory |Hardware model name |<p>MIB: ICS-CHASSIS-MIB</p> |SNMP |system.hw.model<p>**Preprocessing**:</p><p>- REGEX: `(.+) - Firmware \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Firmware version |<p>MIB: ICS-CHASSIS-MIB</p> |SNMP |system.hw.firmware<p>**Preprocessing**:</p><p>- REGEX: `Firmware Version: ([0-9.]+), \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |{#ENT_NAME}: Hardware serial number |<p>MIB: ICS-CHASSIS-MIB</p><p>The serial number of the FRU. If not available, this value is a zero-length string.</p> |SNMP |system.hw.serialnumber[icsChassisSystemUnitFruSerialNumber.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Power_supply |{#SNMPVALUE}: Power supply status |<p>MIB: ICS-CHASSIS-MIB</p><p>Actual status of the power supply:</p><p>(1) unknown: status not known.</p><p>(2) disabled: power supply is disabled.</p><p>(3) failed - power supply is unable to supply power due to failure.</p><p>(4) warning - power supply is supplying power, but an output or sensor is bad or warning.</p><p>(5) standby - power supply believed usable,but not supplying power.</p><p>(6) engaged - power supply is supplying power.</p><p>(7) redundant - power supply is supplying power, but not needed.</p><p>(8) notPresent - power supply is supplying power is not present.</p> |SNMP |sensor.psu.status[icsChassisPowerSupplyEntry.{#SNMPINDEX}] |
-|Temperature |{#SENSOR_INFO}: Temperature |<p>MIB: ICS-CHASSIS-MIB</p><p>The current value read from the sensor.</p> |SNMP |sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}] |
-|Temperature |{#SENSOR_INFO}: Temperature status |<p>MIB: ICS-CHASSIS-MIB</p><p>The operational status of the sensor.</p> |SNMP |sensor.temp.status[icsChassisSensorSlotOperStatus.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|--------------|-------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|---------------------------------------------------------------------------------------------------------------------------------------------|
+| Fans | {#SNMPVALUE}: Fan status | <p>MIB: ICS-CHASSIS-MIB</p><p>The operational status of the fan unit.</p> | SNMP | sensor.fan.status[icsChassisFanOperStatus.{#SNMPINDEX}] |
+| Inventory | Hardware model name | <p>MIB: ICS-CHASSIS-MIB</p> | SNMP | system.hw.model<p>**Preprocessing**:</p><p>- REGEX: `(.+) - Firmware \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Firmware version | <p>MIB: ICS-CHASSIS-MIB</p> | SNMP | system.hw.firmware<p>**Preprocessing**:</p><p>- REGEX: `Firmware Version: ([0-9.]+), \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | {#ENT_NAME}: Hardware serial number | <p>MIB: ICS-CHASSIS-MIB</p><p>The serial number of the FRU. If not available, this value is a zero-length string.</p> | SNMP | system.hw.serialnumber[icsChassisSystemUnitFruSerialNumber.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Power_supply | {#SNMPVALUE}: Power supply status   | <p>MIB: ICS-CHASSIS-MIB</p><p>Actual status of the power supply:</p><p>(1) unknown - status not known.</p><p>(2) disabled - power supply is disabled.</p><p>(3) failed - power supply is unable to supply power due to failure.</p><p>(4) warning - power supply is supplying power, but an output or sensor is bad or warning.</p><p>(5) standby - power supply believed usable, but not supplying power.</p><p>(6) engaged - power supply is supplying power.</p><p>(7) redundant - power supply is supplying power, but not needed.</p><p>(8) notPresent - power supply is not present.</p> | SNMP | sensor.psu.status[icsChassisPowerSupplyEntry.{#SNMPINDEX}] |
+| Temperature | {#SENSOR_INFO}: Temperature | <p>MIB: ICS-CHASSIS-MIB</p><p>The current value read from the sensor.</p> | SNMP | sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}] |
+| Temperature | {#SENSOR_INFO}: Temperature status | <p>MIB: ICS-CHASSIS-MIB</p><p>The operational status of the sensor.</p> | SNMP | sensor.temp.status[icsChassisSensorSlotOperStatus.{#SNMPINDEX}] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#SNMPVALUE}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[icsChassisFanOperStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|Firmware has changed |<p>Firmware version has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#ENT_NAME}: Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber[icsChassisSystemUnitFruSerialNumber.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[icsChassisSystemUnitFruSerialNumber.{#SNMPINDEX}].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#SNMPVALUE}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[icsChassisPowerSupplyEntry.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|{#SNMPVALUE}: Power supply is in warning state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[icsChassisPowerSupplyEntry.{#SNMPINDEX}].count(#1,{$PSU_WARN_STATUS},eq)}=1` |WARNING |<p>**Depends on**:</p><p>- {#SNMPVALUE}: Power supply is in critical state</p> |
-|{#SENSOR_INFO}: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""} or {Intel_Qlogic Infiniband SNMP:sensor.temp.status[icsChassisSensorSlotOperStatus.{#SNMPINDEX}].last(0)}={$TEMP_WARN_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|{#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""} or {Intel_Qlogic Infiniband SNMP:sensor.temp.status[icsChassisSensorSlotOperStatus.{#SNMPINDEX}].last(0)}={$TEMP_CRIT_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|{#SENSOR_INFO}: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|---------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------------------------------|
+| {#SNMPVALUE}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[icsChassisFanOperStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| Firmware has changed | <p>Firmware version has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| {#ENT_NAME}: Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber[icsChassisSystemUnitFruSerialNumber.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[icsChassisSystemUnitFruSerialNumber.{#SNMPINDEX}].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| {#SNMPVALUE}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[icsChassisPowerSupplyEntry.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| {#SNMPVALUE}: Power supply is in warning state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[icsChassisPowerSupplyEntry.{#SNMPINDEX}].count(#1,{$PSU_WARN_STATUS},eq)}=1` | WARNING | <p>**Depends on**:</p><p>- {#SNMPVALUE}: Power supply is in critical state</p> |
+| {#SENSOR_INFO}: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""} or {Intel_Qlogic Infiniband SNMP:sensor.temp.status[icsChassisSensorSlotOperStatus.{#SNMPINDEX}].last(0)}={$TEMP_WARN_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| {#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""} or {Intel_Qlogic Infiniband SNMP:sensor.temp.status[icsChassisSensorSlotOperStatus.{#SNMPINDEX}].last(0)}={$TEMP_CRIT_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| {#SENSOR_INFO}: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
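The paired problem/recovery expressions above implement hysteresis: a temperature trigger fires when the 5-minute average crosses the threshold and recovers only once the 5-minute maximum falls 3 degrees below it, which keeps a sensor hovering around the threshold from flapping. With the defaults from the macros table ({$TEMP_WARN} = 50), the warning pair resolves roughly as follows (a sketch with values substituted for readability; the additional sensor-status condition from the table is omitted):

```yaml
# Illustration only: the warning temperature trigger with macro defaults filled in.
# Problem at avg(5m) > 50, recovery only when max(5m) < 47 (3-degree hysteresis band).
expression: '{TEMPLATE_NAME:sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}].avg(5m)}>50'
recovery_expression: '{TEMPLATE_NAME:sensor.temp.value[icsChassisSensorSlotValue.{#SNMPINDEX}].max(5m)}<47'
```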
## Feedback
diff --git a/templates/net/juniper_snmp/README.md b/templates/net/juniper_snmp/README.md
index f3708ccec35..57bd5149189 100644
--- a/templates/net/juniper_snmp/README.md
+++ b/templates/net/juniper_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,64 +15,64 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$FAN_CRIT_STATUS} |<p>-</p> |`6` |
-|{$HEALTH_CRIT_STATUS} |<p>-</p> |`3` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
-|{$PSU_CRIT_STATUS} |<p>-</p> |`6` |
-|{$TEMP_CRIT:"Routing Engine"} |<p>-</p> |`80` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`60` |
-|{$TEMP_WARN:"Routing Engine"} |<p>-</p> |`70` |
-|{$TEMP_WARN} |<p>-</p> |`50` |
+| Name | Description | Default |
+|-------------------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$FAN_CRIT_STATUS} | <p>-</p> | `6` |
+| {$HEALTH_CRIT_STATUS} | <p>-</p> | `3` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
+| {$PSU_CRIT_STATUS} | <p>-</p> | `6` |
+| {$TEMP_CRIT:"Routing Engine"} | <p>-</p> | `80` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `60` |
+| {$TEMP_WARN:"Routing Engine"} | <p>-</p> | `70` |
+| {$TEMP_WARN} | <p>-</p> | `50` |
## Template links
-|Name|
-|----|
-|EtherLike-MIB SNMP |
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|--------------------|
+| EtherLike-MIB SNMP |
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|CPU and Memory Discovery |<p>Scanning JUNIPER-MIB::jnxOperatingTable for CPU and Memory</p><p>http://kb.juniper.net/InfoCenter/index?page=content&id=KB17526&actp=search. Filter limits results to Routing Engines</p> |SNMP |jnxOperatingTable.discovery<p>**Filter**:</p>AND_OR <p>- A: {#SNMPVALUE} MATCHES_REGEX `Routing Engine.*`</p> |
-|Temperature discovery |<p>Scanning JUNIPER-MIB::jnxOperatingTable for Temperature</p><p>http://kb.juniper.net/InfoCenter/index?page=content&id=KB17526&actp=search. Filter limits results to Routing Engines</p> |SNMP |jnxOperatingTable.discovery.temp<p>**Filter**:</p>AND_OR <p>- A: {#SNMPVALUE} MATCHES_REGEX `[^0]+`</p> |
-|FAN Discovery |<p>Scanning JUNIPER-MIB::jnxOperatingTable for Fans</p> |SNMP |jnxOperatingTable.discovery.fans |
-|PSU Discovery |<p>Scanning JUNIPER-MIB::jnxOperatingTable for Power Supplies</p> |SNMP |jnxOperatingTable.discovery.psu |
+| Name | Description | Type | Key and additional info |
+|--------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|---------------------------------------------------------------------------------------------------------------|
+| CPU and Memory Discovery | <p>Scanning JUNIPER-MIB::jnxOperatingTable for CPU and Memory</p><p>http://kb.juniper.net/InfoCenter/index?page=content&id=KB17526&actp=search. Filter limits results to Routing Engines</p> | SNMP | jnxOperatingTable.discovery<p>**Filter**:</p>AND_OR <p>- A: {#SNMPVALUE} MATCHES_REGEX `Routing Engine.*`</p> |
+| Temperature discovery | <p>Scanning JUNIPER-MIB::jnxOperatingTable for Temperature</p><p>http://kb.juniper.net/InfoCenter/index?page=content&id=KB17526&actp=search. Filter limits results to Routing Engines</p> | SNMP | jnxOperatingTable.discovery.temp<p>**Filter**:</p>AND_OR <p>- A: {#SNMPVALUE} MATCHES_REGEX `[^0]+`</p> |
+| FAN Discovery | <p>Scanning JUNIPER-MIB::jnxOperatingTable for Fans</p> | SNMP | jnxOperatingTable.discovery.fans |
+| PSU Discovery | <p>Scanning JUNIPER-MIB::jnxOperatingTable for Power Supplies</p> | SNMP | jnxOperatingTable.discovery.psu |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |{#SNMPVALUE}: CPU utilization |<p>MIB: JUNIPER-MIB</p><p>The CPU utilization in percentage of this subject. Zero if unavailable or inapplicable.</p><p>Reference: http://kb.juniper.net/library/CUSTOMERSERVICE/GLOBAL_JTAC/BK26199/SRX%20SNMP%20Monitoring%20Guide_v1.1.pdf</p> |SNMP |system.cpu.util[jnxOperatingCPU.{#SNMPINDEX}] |
-|Fans |{#SNMPVALUE}: Fan status |<p>MIB: JUNIPER-MIB</p> |SNMP |sensor.fan.status[jnxOperatingState.4.{#SNMPINDEX}] |
-|Inventory |Hardware serial number |<p>MIB: JUNIPER-MIB</p><p>The serial number of this subject, blank if unknown or unavailable.</p> |SNMP |system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware model name |<p>MIB: JUNIPER-MIB</p><p>The name, model, or detailed description of the box,indicating which product the box is about, for example 'M40'.</p> |SNMP |system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Operating system |<p>MIB: SNMPv2-MIB</p> |SNMP |system.sw.os[sysDescr.0]<p>**Preprocessing**:</p><p>- REGEX: `kernel (JUNOS [0-9a-zA-Z\.\-]+) \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |{#SNMPVALUE}: Memory utilization |<p>MIB: JUNIPER-MIB</p><p>The buffer pool utilization in percentage of this subject. Zero if unavailable or inapplicable.</p><p>Reference: http://kb.juniper.net/library/CUSTOMERSERVICE/GLOBAL_JTAC/BK26199/SRX%20SNMP%20Monitoring%20Guide_v1.1.pdf</p> |SNMP |vm.memory.util[jnxOperatingBuffer.{#SNMPINDEX}] |
-|Power_supply |{#SNMPVALUE}: Power supply status |<p>MIB: JUNIPER-MIB</p><p>If they are using DC power supplies there is a known issue on PR 1064039 where the fans do not detect the temperature correctly and fail to cool the power supply causing the shutdown to occur.</p><p>This is fixed in Junos 13.3R7 https://forums.juniper.net/t5/Routing/PEM-0-not-OK-MX104/m-p/289644#M14122</p> |SNMP |sensor.psu.status[jnxOperatingState.2.{#SNMPINDEX}] |
-|Status |Overall system health status |<p>MIB: JUNIPER-ALARM-MIB</p><p>The red alarm indication on the craft interface panel.</p><p>The red alarm is on when there is some system</p><p>failure or power supply failure or the system</p><p>is experiencing a hardware malfunction or some</p><p>threshold is being exceeded.</p><p>This red alarm state could be turned off by the</p><p>ACO/LT (Alarm Cut Off / Lamp Test) button on the</p><p>front panel module.</p> |SNMP |system.status[jnxRedAlarmState.0] |
-|Temperature |{#SENSOR_INFO}: Temperature |<p>MIB: JUNIPER-MIB</p><p>The temperature in Celsius (degrees C) of {#SENSOR_INFO}</p> |SNMP |sensor.temp.value[jnxOperatingTemp.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|--------------|-----------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------------------------------------------------------------------------------------------------------------------------------|
+| CPU | {#SNMPVALUE}: CPU utilization | <p>MIB: JUNIPER-MIB</p><p>The CPU utilization in percentage of this subject. Zero if unavailable or inapplicable.</p><p>Reference: http://kb.juniper.net/library/CUSTOMERSERVICE/GLOBAL_JTAC/BK26199/SRX%20SNMP%20Monitoring%20Guide_v1.1.pdf</p> | SNMP | system.cpu.util[jnxOperatingCPU.{#SNMPINDEX}] |
+| Fans | {#SNMPVALUE}: Fan status | <p>MIB: JUNIPER-MIB</p> | SNMP | sensor.fan.status[jnxOperatingState.4.{#SNMPINDEX}] |
+| Inventory | Hardware serial number | <p>MIB: JUNIPER-MIB</p><p>The serial number of this subject, blank if unknown or unavailable.</p> | SNMP | system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory    | Hardware model name               | <p>MIB: JUNIPER-MIB</p><p>The name, model, or detailed description of the box, indicating which product the box is about, for example 'M40'.</p> | SNMP | system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Operating system | <p>MIB: SNMPv2-MIB</p> | SNMP | system.sw.os[sysDescr.0]<p>**Preprocessing**:</p><p>- REGEX: `kernel (JUNOS [0-9a-zA-Z\.\-]+) \1`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory | {#SNMPVALUE}: Memory utilization | <p>MIB: JUNIPER-MIB</p><p>The buffer pool utilization in percentage of this subject. Zero if unavailable or inapplicable.</p><p>Reference: http://kb.juniper.net/library/CUSTOMERSERVICE/GLOBAL_JTAC/BK26199/SRX%20SNMP%20Monitoring%20Guide_v1.1.pdf</p> | SNMP | vm.memory.util[jnxOperatingBuffer.{#SNMPINDEX}] |
+| Power_supply | {#SNMPVALUE}: Power supply status | <p>MIB: JUNIPER-MIB</p><p>If DC power supplies are used, there is a known issue (PR 1064039) where the fans do not detect the temperature correctly and fail to cool the power supply, causing a shutdown.</p><p>This is fixed in Junos 13.3R7: https://forums.juniper.net/t5/Routing/PEM-0-not-OK-MX104/m-p/289644#M14122</p> | SNMP | sensor.psu.status[jnxOperatingState.2.{#SNMPINDEX}] |
+| Status | Overall system health status | <p>MIB: JUNIPER-ALARM-MIB</p><p>The red alarm indication on the craft interface panel.</p><p>The red alarm is on when there is some system</p><p>failure or power supply failure or the system</p><p>is experiencing a hardware malfunction or some</p><p>threshold is being exceeded.</p><p>This red alarm state could be turned off by the</p><p>ACO/LT (Alarm Cut Off / Lamp Test) button on the</p><p>front panel module.</p> | SNMP | system.status[jnxRedAlarmState.0] |
+| Temperature | {#SENSOR_INFO}: Temperature | <p>MIB: JUNIPER-MIB</p><p>The temperature in Celsius (degrees C) of {#SENSOR_INFO}</p> | SNMP | sensor.temp.value[jnxOperatingTemp.{#SNMPINDEX}] |
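The Operating system item above stores only the JUNOS release string thanks to its REGEX preprocessing step; the single backticked value in the table flattens the step's two parameters (pattern, then output template). A minimal sketch of how that step might look in the YAML export (the parameter split and the sample sysDescr value are assumptions for illustration):

```yaml
# Sketch: REGEX preprocessing of sysDescr.0 down to the JUNOS version string.
# First parameter is the pattern, second is the output template (\1 = group 1).
preprocessing:
  -
    type: REGEX
    parameters:
      - 'kernel (JUNOS [0-9a-zA-Z\.\-]+) '
      - \1
# e.g. a sysDescr ending in '... kernel JUNOS 18.4R3.3 #0 ...' would be stored as 'JUNOS 18.4R3.3'
```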
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#SNMPVALUE}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[jnxOperatingCPU.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|{#SNMPVALUE}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[jnxOperatingState.4.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Operating system description has changed |<p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> |`{TEMPLATE_NAME:system.sw.os[sysDescr.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[sysDescr.0].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#SNMPVALUE}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[jnxOperatingBuffer.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
-|{#SNMPVALUE}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[jnxOperatingState.2.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|System status is in critical state |<p>Please check the device for errors</p> |`{TEMPLATE_NAME:system.status[jnxRedAlarmState.0].count(#1,{$HEALTH_CRIT_STATUS},eq)}=1` |HIGH | |
-|{#SENSOR_INFO}: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[jnxOperatingTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[jnxOperatingTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|{#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[jnxOperatingTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[jnxOperatingTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|{#SENSOR_INFO}: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[jnxOperatingTemp.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[jnxOperatingTemp.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|---------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------------------------------|
+| {#SNMPVALUE}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[jnxOperatingCPU.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| {#SNMPVALUE}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[jnxOperatingState.4.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Operating system description has changed                                   | <p>Operating system description has changed. Possible reasons: the system has been updated or replaced. Ack to close.</p>  | `{TEMPLATE_NAME:system.sw.os[sysDescr.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[sysDescr.0].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| {#SNMPVALUE}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[jnxOperatingBuffer.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
+| {#SNMPVALUE}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[jnxOperatingState.2.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| System status is in critical state | <p>Please check the device for errors</p> | `{TEMPLATE_NAME:system.status[jnxRedAlarmState.0].count(#1,{$HEALTH_CRIT_STATUS},eq)}=1` | HIGH | |
+| {#SENSOR_INFO}: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[jnxOperatingTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[jnxOperatingTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| {#SENSOR_INFO}: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[jnxOperatingTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[jnxOperatingTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| {#SENSOR_INFO}: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[jnxOperatingTemp.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[jnxOperatingTemp.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
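The Routing Engine rows in the macros table rely on macro context: a context-specific macro such as {$TEMP_CRIT:"Routing Engine"} overrides the plain {$TEMP_CRIT} wherever a trigger's macro context matches that string (in this template the context appears to come from the discovered sensor), so Routing Engine sensors tolerate 80 while everything else alerts at 60. A short sketch of that resolution (macro layout as in the YAML export):

```yaml
# Sketch: context macro resolution for the temperature thresholds above.
# A macro whose context matches wins; otherwise the plain macro is the fallback.
macros:
  -
    macro: '{$TEMP_CRIT}'
    value: '60'    # default critical threshold
  -
    macro: '{$TEMP_CRIT:"Routing Engine"}'
    value: '80'    # Routing Engines are allowed to run hotter
# {$TEMP_CRIT:"Routing Engine"} -> 80; any other context -> 60 (falls back to the plain macro)
```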
## Feedback
diff --git a/templates/net/mellanox_snmp/template_net_mellanox_snmp.yaml b/templates/net/mellanox_snmp/template_net_mellanox_snmp.yaml
index 7f318cc6348..150da6481af 100644
--- a/templates/net/mellanox_snmp/template_net_mellanox_snmp.yaml
+++ b/templates/net/mellanox_snmp/template_net_mellanox_snmp.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-16T09:53:43Z'
+ date: '2021-04-22T12:40:12Z'
groups:
-
name: 'Templates/Network devices'
@@ -1356,96 +1356,100 @@ zabbix_export:
dashboards:
-
name: 'Network interfaces'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
- host: 'Mellanox SNMP'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
+ host: 'Mellanox SNMP'
-
name: 'System performance'
- widgets:
- -
- type: GRAPH_CLASSIC
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'CPU utilization'
- host: 'Mellanox SNMP'
- -
- type: GRAPH_PROTOTYPE
- 'y': '5'
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
+ pages:
+ -
+ widgets:
+ -
+ type: GRAPH_CLASSIC
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU utilization'
+ host: 'Mellanox SNMP'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#MEMNAME}: Memory utilization'
- host: 'Mellanox SNMP'
- -
- type: GRAPH_PROTOTYPE
- 'y': '10'
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
+ 'y': '5'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#MEMNAME}: Memory utilization'
+ host: 'Mellanox SNMP'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#FSNAME}: Disk space usage'
- host: 'Mellanox SNMP'
+ 'y': '10'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#FSNAME}: Disk space usage'
+ host: 'Mellanox SNMP'
valuemaps:
-
name: 'ENTITY-SENSORS-MIB::EntitySensorStatus'
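The hunk above is almost entirely re-indentation: the dashboard's widgets are unchanged but are now nested under a new `pages` list, matching the multi-page dashboard structure of the `version: '5.4'` export shown at the top of the file. Stripped to a skeleton, the change is (values trimmed, not a literal excerpt):

```yaml
# Before (widgets attached directly to the dashboard):
dashboards:
  -
    name: 'System performance'
    widgets:
      -
        type: GRAPH_CLASSIC
        width: '24'
        height: '5'

# After (the same widgets wrapped in a dashboard page):
dashboards:
  -
    name: 'System performance'
    pages:
      -
        widgets:
          -
            type: GRAPH_CLASSIC
            width: '24'
            height: '5'
```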
diff --git a/templates/net/mikrotik_snmp/README.md b/templates/net/mikrotik_snmp/README.md
index c301ebe00bc..b8e00862390 100644
--- a/templates/net/mikrotik_snmp/README.md
+++ b/templates/net/mikrotik_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,96 +15,96 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$IFNAME.LTEMODEM.MATCHES} |<p>This macro is used in LTE modem discovery. It can be overridden on the host.</p> |`^lte` |
-|{$IFNAME.WIFI.MATCHES} |<p>This macro is used in CAPsMAN AP channel discovery. It can be overridden on the host level.</p> |`WIFI` |
-|{$LTEMODEM.RSRP.MIN.WARN} |<p>The LTE modem RSRP minimum value for warning trigger expression.</p> |`-100` |
-|{$LTEMODEM.RSRQ.MIN.WARN} |<p>The LTE modem RSRQ minimum value for warning trigger expression.</p> |`-20` |
-|{$LTEMODEM.RSSI.MIN.WARN} |<p>The LTE modem RSSI minimum value for warning trigger expression.</p> |`-100` |
-|{$LTEMODEM.SINR.MIN.WARN} |<p>The LTE modem SINR minimum value for warning trigger expression.</p> |`0` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
-|{$TEMP_CRIT:"CPU"} |<p>-</p> |`75` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`60` |
-|{$TEMP_WARN:"CPU"} |<p>-</p> |`70` |
-|{$TEMP_WARN} |<p>-</p> |`50` |
-|{$VFS.FS.PUSED.MAX.CRIT} |<p>-</p> |`90` |
-|{$VFS.FS.PUSED.MAX.WARN} |<p>-</p> |`80` |
+| Name | Description | Default |
+|----------------------------|----------------------------------------------------------------------------------------------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$IFNAME.LTEMODEM.MATCHES} | <p>This macro is used in LTE modem discovery. It can be overridden on the host.</p> | `^lte` |
+| {$IFNAME.WIFI.MATCHES} | <p>This macro is used in CAPsMAN AP channel discovery. It can be overridden on the host level.</p> | `WIFI` |
+| {$LTEMODEM.RSRP.MIN.WARN} | <p>The LTE modem RSRP minimum value for warning trigger expression.</p> | `-100` |
+| {$LTEMODEM.RSRQ.MIN.WARN} | <p>The LTE modem RSRQ minimum value for warning trigger expression.</p> | `-20` |
+| {$LTEMODEM.RSSI.MIN.WARN} | <p>The LTE modem RSSI minimum value for warning trigger expression.</p> | `-100` |
+| {$LTEMODEM.SINR.MIN.WARN} | <p>The LTE modem SINR minimum value for warning trigger expression.</p> | `0` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
+| {$TEMP_CRIT:"CPU"} | <p>-</p> | `75` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `60` |
+| {$TEMP_WARN:"CPU"} | <p>-</p> | `70` |
+| {$TEMP_WARN} | <p>-</p> | `50` |
+| {$VFS.FS.PUSED.MAX.CRIT} | <p>-</p> | `90` |
+| {$VFS.FS.PUSED.MAX.WARN} | <p>-</p> | `80` |
## Template links
-|Name|
-|----|
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|-----------------|
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|CPU discovery |<p>HOST-RESOURCES-MIB::hrProcessorTable discovery</p> |SNMP |hrProcessorLoad.discovery |
-|Temperature CPU discovery |<p>MIKROTIK-MIB::mtxrHlProcessorTemperature</p><p>Since temperature of CPU is not available on all Mikrotik hardware, this is done to avoid unsupported items.</p> |SNMP |mtxrHlProcessorTemperature.discovery |
-|Temperature sensor discovery |<p>MIKROTIK-MIB::mtxrHlTemperature</p><p>Since temperature sensor is not available on all Mikrotik hardware,</p><p>this is done to avoid unsupported items.</p> |SNMP |mtxrHlTemperature.discovery |
-|LTE modem discovery |<p>MIKROTIK-MIB::mtxrLTEModemInterfaceIndex</p> |SNMP |mtxrLTEModem.discovery<p>**Filter**:</p>AND <p>- A: {#IFTYPE} MATCHES_REGEX `^1$`</p><p>- B: {#IFNAME} MATCHES_REGEX `{$IFNAME.LTEMODEM.MATCHES}`</p> |
-|AP channel discovery |<p>MIKROTIK-MIB::mtxrWlAp</p> |SNMP |mtxrWlAp.discovery<p>**Filter**:</p>AND <p>- A: {#IFTYPE} MATCHES_REGEX `^71$`</p><p>- B: {#IFADMINSTATUS} MATCHES_REGEX `^1$`</p> |
-|CAPsMAN AP channel discovery |<p>MIKROTIK-MIB::mtxrWlCMChannel</p> |SNMP |mtxrWlCMChannel.discovery<p>**Filter**:</p>AND <p>- A: {#IFTYPE} MATCHES_REGEX `^1$`</p><p>- B: {#IFNAME} MATCHES_REGEX `{$IFNAME.WIFI.MATCHES}`</p> |
-|Storage discovery |<p>HOST-RESOURCES-MIB::hrStorage discovery with storage filter</p> |SNMP |storage.discovery<p>**Filter**:</p>OR <p>- B: {#STORAGE_TYPE} MATCHES_REGEX `.+4$`</p><p>- A: {#STORAGE_TYPE} MATCHES_REGEX `.+hrStorageFixedDisk`</p> |
+| Name | Description | Type | Key and additional info |
+|------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|--------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CPU discovery | <p>HOST-RESOURCES-MIB::hrProcessorTable discovery</p> | SNMP | hrProcessorLoad.discovery |
+| Temperature CPU discovery | <p>MIKROTIK-MIB::mtxrHlProcessorTemperature</p><p>Since temperature of CPU is not available on all Mikrotik hardware, this is done to avoid unsupported items.</p> | SNMP | mtxrHlProcessorTemperature.discovery |
+| Temperature sensor discovery | <p>MIKROTIK-MIB::mtxrHlTemperature</p><p>Since temperature sensor is not available on all Mikrotik hardware,</p><p>this is done to avoid unsupported items.</p> | SNMP | mtxrHlTemperature.discovery |
+| LTE modem discovery | <p>MIKROTIK-MIB::mtxrLTEModemInterfaceIndex</p> | SNMP | mtxrLTEModem.discovery<p>**Filter**:</p>AND <p>- A: {#IFTYPE} MATCHES_REGEX `^1$`</p><p>- B: {#IFNAME} MATCHES_REGEX `{$IFNAME.LTEMODEM.MATCHES}`</p> |
+| AP channel discovery | <p>MIKROTIK-MIB::mtxrWlAp</p> | SNMP | mtxrWlAp.discovery<p>**Filter**:</p>AND <p>- A: {#IFTYPE} MATCHES_REGEX `^71$`</p><p>- B: {#IFADMINSTATUS} MATCHES_REGEX `^1$`</p> |
+| CAPsMAN AP channel discovery | <p>MIKROTIK-MIB::mtxrWlCMChannel</p> | SNMP | mtxrWlCMChannel.discovery<p>**Filter**:</p>AND <p>- A: {#IFTYPE} MATCHES_REGEX `^1$`</p><p>- B: {#IFNAME} MATCHES_REGEX `{$IFNAME.WIFI.MATCHES}`</p> |
+| Storage discovery | <p>HOST-RESOURCES-MIB::hrStorage discovery with storage filter</p> | SNMP | storage.discovery<p>**Filter**:</p>OR <p>- B: {#STORAGE_TYPE} MATCHES_REGEX `.+4$`</p><p>- A: {#STORAGE_TYPE} MATCHES_REGEX `.+hrStorageFixedDisk`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |#{#SNMPINDEX}: CPU utilization |<p>MIB: HOST-RESOURCES-MIB</p><p>The average, over the last minute, of the percentage of time that this processor was not idle. Implementations may approximate this one minute smoothing period if necessary.</p> |SNMP |system.cpu.util[hrProcessorLoad.{#SNMPINDEX}] |
-|Inventory |Operating system |<p>MIB: MIKROTIK-MIB</p><p>Software version.</p> |SNMP |system.sw.os[mtxrLicVersion.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware model name |<p>-</p> |SNMP |system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware serial number |<p>MIB: MIKROTIK-MIB</p><p>RouterBOARD serial number.</p> |SNMP |system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Firmware version |<p>MIB: MIKROTIK-MIB</p><p>Current firmware version.</p> |SNMP |system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |Used memory |<p>MIB: HOST-RESOURCES-MIB</p><p>The amount of the storage represented by this entry that is allocated, in units of hrStorageAllocationUnits.</p> |SNMP |vm.memory.used[hrStorageUsed.Memory]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |Total memory |<p>MIB: HOST-RESOURCES-MIB</p><p>The size of the storage represented by this entry, in</p><p>units of hrStorageAllocationUnits. This object is</p><p>writable to allow remote configuration of the size of</p><p>the storage area in those cases where such an</p><p>operation makes sense and is possible on the</p><p>underlying system. For example, the amount of main</p><p>memory allocated to a buffer pool might be modified or</p><p>the amount of disk space allocated to virtual memory</p><p>might be modified.</p> |SNMP |vm.memory.total[hrStorageSize.Memory]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |Memory utilization |<p>Memory utilization in %</p> |CALCULATED |vm.memory.util[memoryUsedPercentage.Memory]<p>**Expression**:</p>`last("vm.memory.used[hrStorageUsed.Memory]")/last("vm.memory.total[hrStorageSize.Memory]")*100` |
-|Storage |Disk-{#SNMPINDEX}: Used space |<p>MIB: HOST-RESOURCES-MIB</p><p>The amount of the storage represented by this entry that is allocated, in units of hrStorageAllocationUnits.</p> |SNMP |vfs.fs.used[hrStorageSize.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Storage |Disk-{#SNMPINDEX}: Total space |<p>MIB: HOST-RESOURCES-MIB</p><p>The size of the storage represented by this entry, in</p><p>units of hrStorageAllocationUnits. This object is</p><p>writable to allow remote configuration of the size of</p><p>the storage area in those cases where such an</p><p>operation makes sense and is possible on the</p><p>underlying system. For example, the amount of main</p><p>memory allocated to a buffer pool might be modified or</p><p>the amount of disk space allocated to virtual memory</p><p>might be modified.</p> |SNMP |vfs.fs.total[hrStorageSize.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Storage |Disk-{#SNMPINDEX}: Space utilization |<p>Space utilization in % for Disk-{#SNMPINDEX}</p> |CALCULATED |vfs.fs.pused[hrStorageSize.{#SNMPINDEX}]<p>**Expression**:</p>`(last("vfs.fs.used[hrStorageSize.{#SNMPINDEX}]")/last("vfs.fs.total[hrStorageSize.{#SNMPINDEX}]"))*100` |
-|Temperature |CPU: Temperature |<p>MIB: MIKROTIK-MIB</p><p>mtxrHlProcessorTemperature Processor temperature in Celsius (degrees C).</p><p>Might be missing in entry models (RB750, RB450G..).</p> |SNMP |sensor.temp.value[mtxrHlProcessorTemperature.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
-|Temperature |Device: Temperature |<p>MIB: MIKROTIK-MIB</p><p>mtxrHlTemperature Device temperature in Celsius (degrees C).</p><p>Might be missing in entry models (RB750, RB450G..).</p><p>Reference: http://wiki.mikrotik.com/wiki/Manual:SNMP</p> |SNMP |sensor.temp.value[mtxrHlTemperature.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
-|Wireless |Interface {#IFNAME}({#IFALIAS}): LTE modem RSSI |<p>MIB: MIKROTIK-MIB</p><p>mtxrLTEModemSignalRSSI Received Signal Strength Indicator.</p> |SNMP |lte.modem.rssi[mtxrLTEModemSignalRSSI.{#SNMPINDEX}] |
-|Wireless |Interface {#IFNAME}({#IFALIAS}): LTE modem RSRP |<p>MIB: MIKROTIK-MIB</p><p>mtxrLTEModemSignalRSRP Reference Signal Received Power.</p> |SNMP |lte.modem.rsrp[mtxrLTEModemSignalRSRP.{#SNMPINDEX}] |
-|Wireless |Interface {#IFNAME}({#IFALIAS}): LTE modem RSRQ |<p>MIB: MIKROTIK-MIB</p><p>mtxrLTEModemSignalRSRQ Reference Signal Received Quality.</p> |SNMP |lte.modem.rsrq[mtxrLTEModemSignalRSRQ.{#SNMPINDEX}] |
-|Wireless |Interface {#IFNAME}({#IFALIAS}): LTE modem SINR |<p>MIB: MIKROTIK-MIB</p><p>mtxrLTEModemSignalSINR Signal to Interference & Noise Ratio.</p> |SNMP |lte.modem.sinr[mtxrLTEModemSignalSINR.{#SNMPINDEX}] |
-|Wireless |Interface {#IFNAME}({#IFALIAS}): SSID |<p>MIB: MIKROTIK-MIB</p><p>mtxrWlApSsid Service Set Identifier.</p> |SNMP |ssid.name[mtxrWlApSsid.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Wireless |Interface {#IFNAME}({#IFALIAS}): AP band |<p>MIB: MIKROTIK-MIB</p><p>mtxrWlApBand</p> |SNMP |ssid.band[mtxrWlApBand.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Wireless |Interface {#IFNAME}({#IFALIAS}): AP noise floor |<p>MIB: MIKROTIK-MIB</p><p>mtxrWlApNoiseFloor</p> |SNMP |ssid.noise[mtxrWlApNoiseFloor.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `15m`</p> |
-|Wireless |Interface {#IFNAME}({#IFALIAS}): AP registered clients |<p>MIB: MIKROTIK-MIB</p><p>mtxrWlApClientCount Client established connection to AP, but didn't finish all authetncation procedures for full connection.</p> |SNMP |ssid.regclient[mtxrWlApClientCount.{#SNMPINDEX}] |
-|Wireless |Interface {#IFNAME}({#IFALIAS}): AP authenticated clients |<p>MIB: MIKROTIK-MIB</p><p>mtxrWlApAuthClientCount Number of authentication clients.</p> |SNMP |ssid.authclient[mtxrWlApAuthClientCount.{#SNMPINDEX}] |
-|Wireless |Interface {#IFNAME}({#IFALIAS}): AP channel |<p>MIB: MIKROTIK-MIB</p><p>mtxrWlCMChannel</p> |SNMP |ssid.channel[mtxrWlCMChannel.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Wireless |Interface {#IFNAME}({#IFALIAS}): AP state |<p>MIB: MIKROTIK-MIB</p><p>mtxrWlCMState Wireless interface state.</p> |SNMP |ssid.state[mtxrWlCMState.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Wireless |Interface {#IFNAME}({#IFALIAS}): AP registered clients |<p>MIB: MIKROTIK-MIB</p><p>mtxrWlCMRegClientCount Client established connection to AP, but didn't finish all authetncation procedures for full connection.</p> |SNMP |ssid.regclient[mtxrWlCMRegClientCount.{#SNMPINDEX}] |
-|Wireless |Interface {#IFNAME}({#IFALIAS}): AP authenticated clients |<p>MIB: MIKROTIK-MIB</p><p>mtxrWlCMAuthClientCount Number of authentication clients.</p> |SNMP |ssid.authclient[mtxrWlCMAuthClientCount.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|-------------|-----------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CPU | #{#SNMPINDEX}: CPU utilization | <p>MIB: HOST-RESOURCES-MIB</p><p>The average, over the last minute, of the percentage of time that this processor was not idle. Implementations may approximate this one minute smoothing period if necessary.</p> | SNMP | system.cpu.util[hrProcessorLoad.{#SNMPINDEX}] |
+| Inventory | Operating system | <p>MIB: MIKROTIK-MIB</p><p>Software version.</p> | SNMP | system.sw.os[mtxrLicVersion.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware model name | <p>-</p> | SNMP | system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware serial number | <p>MIB: MIKROTIK-MIB</p><p>RouterBOARD serial number.</p> | SNMP | system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Firmware version | <p>MIB: MIKROTIK-MIB</p><p>Current firmware version.</p> | SNMP | system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory | Used memory | <p>MIB: HOST-RESOURCES-MIB</p><p>The amount of the storage represented by this entry that is allocated, in units of hrStorageAllocationUnits.</p> | SNMP | vm.memory.used[hrStorageUsed.Memory]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | Total memory | <p>MIB: HOST-RESOURCES-MIB</p><p>The size of the storage represented by this entry, in</p><p>units of hrStorageAllocationUnits. This object is</p><p>writable to allow remote configuration of the size of</p><p>the storage area in those cases where such an</p><p>operation makes sense and is possible on the</p><p>underlying system. For example, the amount of main</p><p>memory allocated to a buffer pool might be modified or</p><p>the amount of disk space allocated to virtual memory</p><p>might be modified.</p> | SNMP | vm.memory.total[hrStorageSize.Memory]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | Memory utilization | <p>Memory utilization in %</p> | CALCULATED | vm.memory.util[memoryUsedPercentage.Memory]<p>**Expression**:</p>`last("vm.memory.used[hrStorageUsed.Memory]")/last("vm.memory.total[hrStorageSize.Memory]")*100` |
+| Storage | Disk-{#SNMPINDEX}: Used space | <p>MIB: HOST-RESOURCES-MIB</p><p>The amount of the storage represented by this entry that is allocated, in units of hrStorageAllocationUnits.</p> | SNMP | vfs.fs.used[hrStorageSize.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Storage | Disk-{#SNMPINDEX}: Total space | <p>MIB: HOST-RESOURCES-MIB</p><p>The size of the storage represented by this entry, in</p><p>units of hrStorageAllocationUnits. This object is</p><p>writable to allow remote configuration of the size of</p><p>the storage area in those cases where such an</p><p>operation makes sense and is possible on the</p><p>underlying system. For example, the amount of main</p><p>memory allocated to a buffer pool might be modified or</p><p>the amount of disk space allocated to virtual memory</p><p>might be modified.</p> | SNMP | vfs.fs.total[hrStorageSize.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Storage | Disk-{#SNMPINDEX}: Space utilization | <p>Space utilization in % for Disk-{#SNMPINDEX}</p> | CALCULATED | vfs.fs.pused[hrStorageSize.{#SNMPINDEX}]<p>**Expression**:</p>`(last("vfs.fs.used[hrStorageSize.{#SNMPINDEX}]")/last("vfs.fs.total[hrStorageSize.{#SNMPINDEX}]"))*100` |
+| Temperature | CPU: Temperature                                           | <p>MIB: MIKROTIK-MIB</p><p>mtxrHlProcessorTemperature Processor temperature in Celsius (degrees C).</p><p>Might be missing in entry-level models (RB750, RB450G, ...).</p> | SNMP | sensor.temp.value[mtxrHlProcessorTemperature.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
+| Temperature | Device: Temperature                                        | <p>MIB: MIKROTIK-MIB</p><p>mtxrHlTemperature Device temperature in Celsius (degrees C).</p><p>Might be missing in entry-level models (RB750, RB450G, ...).</p><p>Reference: http://wiki.mikrotik.com/wiki/Manual:SNMP</p> | SNMP | sensor.temp.value[mtxrHlTemperature.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
+| Wireless | Interface {#IFNAME}({#IFALIAS}): LTE modem RSSI | <p>MIB: MIKROTIK-MIB</p><p>mtxrLTEModemSignalRSSI Received Signal Strength Indicator.</p> | SNMP | lte.modem.rssi[mtxrLTEModemSignalRSSI.{#SNMPINDEX}] |
+| Wireless | Interface {#IFNAME}({#IFALIAS}): LTE modem RSRP | <p>MIB: MIKROTIK-MIB</p><p>mtxrLTEModemSignalRSRP Reference Signal Received Power.</p> | SNMP | lte.modem.rsrp[mtxrLTEModemSignalRSRP.{#SNMPINDEX}] |
+| Wireless | Interface {#IFNAME}({#IFALIAS}): LTE modem RSRQ | <p>MIB: MIKROTIK-MIB</p><p>mtxrLTEModemSignalRSRQ Reference Signal Received Quality.</p> | SNMP | lte.modem.rsrq[mtxrLTEModemSignalRSRQ.{#SNMPINDEX}] |
+| Wireless | Interface {#IFNAME}({#IFALIAS}): LTE modem SINR | <p>MIB: MIKROTIK-MIB</p><p>mtxrLTEModemSignalSINR Signal to Interference & Noise Ratio.</p> | SNMP | lte.modem.sinr[mtxrLTEModemSignalSINR.{#SNMPINDEX}] |
+| Wireless | Interface {#IFNAME}({#IFALIAS}): SSID | <p>MIB: MIKROTIK-MIB</p><p>mtxrWlApSsid Service Set Identifier.</p> | SNMP | ssid.name[mtxrWlApSsid.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Wireless | Interface {#IFNAME}({#IFALIAS}): AP band | <p>MIB: MIKROTIK-MIB</p><p>mtxrWlApBand</p> | SNMP | ssid.band[mtxrWlApBand.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Wireless | Interface {#IFNAME}({#IFALIAS}): AP noise floor | <p>MIB: MIKROTIK-MIB</p><p>mtxrWlApNoiseFloor</p> | SNMP | ssid.noise[mtxrWlApNoiseFloor.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `15m`</p> |
+| Wireless | Interface {#IFNAME}({#IFALIAS}): AP registered clients | <p>MIB: MIKROTIK-MIB</p><p>mtxrWlApClientCount Clients that have established a connection to the AP but have not yet finished all authentication procedures for a full connection.</p> | SNMP | ssid.regclient[mtxrWlApClientCount.{#SNMPINDEX}] |
+| Wireless | Interface {#IFNAME}({#IFALIAS}): AP authenticated clients | <p>MIB: MIKROTIK-MIB</p><p>mtxrWlApAuthClientCount Number of authenticated clients.</p> | SNMP | ssid.authclient[mtxrWlApAuthClientCount.{#SNMPINDEX}] |
+| Wireless | Interface {#IFNAME}({#IFALIAS}): AP channel | <p>MIB: MIKROTIK-MIB</p><p>mtxrWlCMChannel</p> | SNMP | ssid.channel[mtxrWlCMChannel.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Wireless | Interface {#IFNAME}({#IFALIAS}): AP state | <p>MIB: MIKROTIK-MIB</p><p>mtxrWlCMState Wireless interface state.</p> | SNMP | ssid.state[mtxrWlCMState.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Wireless | Interface {#IFNAME}({#IFALIAS}): AP registered clients | <p>MIB: MIKROTIK-MIB</p><p>mtxrWlCMRegClientCount Clients that have established a connection to the AP but have not yet finished all authentication procedures for a full connection.</p> | SNMP | ssid.regclient[mtxrWlCMRegClientCount.{#SNMPINDEX}] |
+| Wireless | Interface {#IFNAME}({#IFALIAS}): AP authenticated clients | <p>MIB: MIKROTIK-MIB</p><p>mtxrWlCMAuthClientCount Number of authenticated clients.</p> | SNMP | ssid.authclient[mtxrWlCMAuthClientCount.{#SNMPINDEX}] |
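
The two utilization rows above (memory and per-disk space) are CALCULATED items built from the same `last(used)/last(total)*100` formula. The snippet below is a minimal Python sketch of that arithmetic only; the byte counts are hypothetical and do not come from a real device.

```python
# Minimal sketch of the CALCULATED utilization items above.
# Byte counts are hypothetical stand-ins for the last() values of the
# corresponding SNMP items; only the arithmetic mirrors the template.
used_bytes = 96 * 1024 * 1024     # e.g. vm.memory.used[hrStorageUsed.Memory]
total_bytes = 128 * 1024 * 1024   # e.g. vm.memory.total[hrStorageSize.Memory]

utilization_pct = used_bytes / total_bytes * 100  # last(used)/last(total)*100
print(f"{utilization_pct:.1f}%")  # -> 75.0%
```
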
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|#{#SNMPINDEX}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[hrProcessorLoad.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|Operating system description has changed |<p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> |`{TEMPLATE_NAME:system.sw.os[mtxrLicVersion.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[mtxrLicVersion.0].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Firmware has changed |<p>Firmware version has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[memoryUsedPercentage.Memory].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
-|Disk-{#SNMPINDEX}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"Disk-{#SNMPINDEX}"}%) |<p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.CRIT:"Disk-{#SNMPINDEX}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 5G.</p><p> - The disk will be full in less than 24 hours.</p> |`{TEMPLATE_NAME:vfs.fs.pused[hrStorageSize.{#SNMPINDEX}].last()}>{$VFS.FS.PUSED.MAX.CRIT:"Disk-{#SNMPINDEX}"} and (({Mikrotik SNMP:vfs.fs.total[hrStorageSize.{#SNMPINDEX}].last()}-{Mikrotik SNMP:vfs.fs.used[hrStorageSize.{#SNMPINDEX}].last()})<5G or {TEMPLATE_NAME:vfs.fs.pused[hrStorageSize.{#SNMPINDEX}].timeleft(1h,,100)}<1d)` |AVERAGE |<p>Manual close: YES</p> |
-|Disk-{#SNMPINDEX}: Disk space is low (used > {$VFS.FS.PUSED.MAX.WARN:"Disk-{#SNMPINDEX}"}%) |<p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.WARN:"Disk-{#SNMPINDEX}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 10G.</p><p> - The disk will be full in less than 24 hours.</p> |`{TEMPLATE_NAME:vfs.fs.pused[hrStorageSize.{#SNMPINDEX}].last()}>{$VFS.FS.PUSED.MAX.WARN:"Disk-{#SNMPINDEX}"} and (({Mikrotik SNMP:vfs.fs.total[hrStorageSize.{#SNMPINDEX}].last()}-{Mikrotik SNMP:vfs.fs.used[hrStorageSize.{#SNMPINDEX}].last()})<10G or {TEMPLATE_NAME:vfs.fs.pused[hrStorageSize.{#SNMPINDEX}].timeleft(1h,,100)}<1d)` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Disk-{#SNMPINDEX}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"Disk-{#SNMPINDEX}"}%)</p> |
-|CPU: Temperature is above warning threshold: >{$TEMP_WARN:"CPU"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[mtxrHlProcessorTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mtxrHlProcessorTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"CPU"}-3` |WARNING |<p>**Depends on**:</p><p>- CPU: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"}</p> |
-|CPU: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[mtxrHlProcessorTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mtxrHlProcessorTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"CPU"}-3` |HIGH | |
-|CPU: Temperature is too low: <{$TEMP_CRIT_LOW:"CPU"} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[mtxrHlProcessorTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mtxrHlProcessorTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"CPU"}+3` |AVERAGE | |
-|Device: Temperature is above warning threshold: >{$TEMP_WARN:"Device"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[mtxrHlTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Device"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mtxrHlTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Device"}-3` |WARNING |<p>**Depends on**:</p><p>- Device: Temperature is above critical threshold: >{$TEMP_CRIT:"Device"}</p> |
-|Device: Temperature is above critical threshold: >{$TEMP_CRIT:"Device"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[mtxrHlTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Device"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mtxrHlTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Device"}-3` |HIGH | |
-|Device: Temperature is too low: <{$TEMP_CRIT_LOW:"Device"} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[mtxrHlTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Device"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mtxrHlTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Device"}+3` |AVERAGE | |
-|Interface {#IFNAME}({#IFALIAS}): LTE modem RSSI is low (below {$LTEMODEM.RSSI.MIN.WARN}dbm for 5m) |<p>-</p> |`{TEMPLATE_NAME:lte.modem.rssi[mtxrLTEModemSignalRSSI.{#SNMPINDEX}].max(5m)} < {$LTEMODEM.RSSI.MIN.WARN}` |WARNING | |
-|Interface {#IFNAME}({#IFALIAS}): LTE modem RSRP is low (below {$LTEMODEM.RSRP.MIN.WARN}dbm for 5m) |<p>-</p> |`{TEMPLATE_NAME:lte.modem.rsrp[mtxrLTEModemSignalRSRP.{#SNMPINDEX}].max(5m)} < {$LTEMODEM.RSRP.MIN.WARN}` |WARNING | |
-|Interface {#IFNAME}({#IFALIAS}): LTE modem RSRQ is low (below {$LTEMODEM.RSRQ.MIN.WARN}db for 5m) |<p>-</p> |`{TEMPLATE_NAME:lte.modem.rsrq[mtxrLTEModemSignalRSRQ.{#SNMPINDEX}].max(5m)} < {$LTEMODEM.RSRQ.MIN.WARN}` |WARNING | |
-|Interface {#IFNAME}({#IFALIAS}): LTE modem SINR is low (below {$LTEMODEM.SINR.MIN.WARN}db for 5m) |<p>-</p> |`{TEMPLATE_NAME:lte.modem.sinr[mtxrLTEModemSignalSINR.{#SNMPINDEX}].max(5m)} < {$LTEMODEM.SINR.MIN.WARN}` |WARNING | |
-|Interface {#IFNAME}({#IFALIAS}): AP interface {#IFNAME}({#IFALIAS}) is not running |<p>Access point interface can be not running by different reasons - disabled interface, power off, network link down.</p> |`{TEMPLATE_NAME:ssid.state[mtxrWlCMState.{#SNMPINDEX}].last()}<>"running-ap"` |WARNING | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|--------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| #{#SNMPINDEX}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[hrProcessorLoad.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| Operating system description has changed | <p>Operating system description has changed. Possible reasons: the system has been updated or replaced. Ack to close.</p> | `{TEMPLATE_NAME:system.sw.os[mtxrLicVersion.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[mtxrLicVersion.0].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close.</p> | `{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Firmware has changed | <p>Firmware version has changed. Ack to close.</p> | `{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[memoryUsedPercentage.Memory].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
+| Disk-{#SNMPINDEX}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"Disk-{#SNMPINDEX}"}%) | <p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.CRIT:"Disk-{#SNMPINDEX}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 5G.</p><p> - The disk will be full in less than 24 hours.</p> | `{TEMPLATE_NAME:vfs.fs.pused[hrStorageSize.{#SNMPINDEX}].last()}>{$VFS.FS.PUSED.MAX.CRIT:"Disk-{#SNMPINDEX}"} and (({Mikrotik SNMP:vfs.fs.total[hrStorageSize.{#SNMPINDEX}].last()}-{Mikrotik SNMP:vfs.fs.used[hrStorageSize.{#SNMPINDEX}].last()})<5G or {TEMPLATE_NAME:vfs.fs.pused[hrStorageSize.{#SNMPINDEX}].timeleft(1h,,100)}<1d)` | AVERAGE | <p>Manual close: YES</p> |
+| Disk-{#SNMPINDEX}: Disk space is low (used > {$VFS.FS.PUSED.MAX.WARN:"Disk-{#SNMPINDEX}"}%) | <p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.WARN:"Disk-{#SNMPINDEX}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 10G.</p><p> - The disk will be full in less than 24 hours.</p> | `{TEMPLATE_NAME:vfs.fs.pused[hrStorageSize.{#SNMPINDEX}].last()}>{$VFS.FS.PUSED.MAX.WARN:"Disk-{#SNMPINDEX}"} and (({Mikrotik SNMP:vfs.fs.total[hrStorageSize.{#SNMPINDEX}].last()}-{Mikrotik SNMP:vfs.fs.used[hrStorageSize.{#SNMPINDEX}].last()})<10G or {TEMPLATE_NAME:vfs.fs.pused[hrStorageSize.{#SNMPINDEX}].timeleft(1h,,100)}<1d)` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Disk-{#SNMPINDEX}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"Disk-{#SNMPINDEX}"}%)</p> |
+| CPU: Temperature is above warning threshold: >{$TEMP_WARN:"CPU"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[mtxrHlProcessorTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mtxrHlProcessorTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"CPU"}-3` | WARNING | <p>**Depends on**:</p><p>- CPU: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"}</p> |
+| CPU: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[mtxrHlProcessorTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mtxrHlProcessorTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"CPU"}-3` | HIGH | |
+| CPU: Temperature is too low: <{$TEMP_CRIT_LOW:"CPU"} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[mtxrHlProcessorTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mtxrHlProcessorTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"CPU"}+3` | AVERAGE | |
+| Device: Temperature is above warning threshold: >{$TEMP_WARN:"Device"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[mtxrHlTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Device"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mtxrHlTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Device"}-3` | WARNING | <p>**Depends on**:</p><p>- Device: Temperature is above critical threshold: >{$TEMP_CRIT:"Device"}</p> |
+| Device: Temperature is above critical threshold: >{$TEMP_CRIT:"Device"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[mtxrHlTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Device"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mtxrHlTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Device"}-3` | HIGH | |
+| Device: Temperature is too low: <{$TEMP_CRIT_LOW:"Device"} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[mtxrHlTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Device"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[mtxrHlTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Device"}+3` | AVERAGE | |
+| Interface {#IFNAME}({#IFALIAS}): LTE modem RSSI is low (below {$LTEMODEM.RSSI.MIN.WARN}dBm for 5m) | <p>-</p> | `{TEMPLATE_NAME:lte.modem.rssi[mtxrLTEModemSignalRSSI.{#SNMPINDEX}].max(5m)} < {$LTEMODEM.RSSI.MIN.WARN}` | WARNING | |
+| Interface {#IFNAME}({#IFALIAS}): LTE modem RSRP is low (below {$LTEMODEM.RSRP.MIN.WARN}dBm for 5m) | <p>-</p> | `{TEMPLATE_NAME:lte.modem.rsrp[mtxrLTEModemSignalRSRP.{#SNMPINDEX}].max(5m)} < {$LTEMODEM.RSRP.MIN.WARN}` | WARNING | |
+| Interface {#IFNAME}({#IFALIAS}): LTE modem RSRQ is low (below {$LTEMODEM.RSRQ.MIN.WARN}dB for 5m) | <p>-</p> | `{TEMPLATE_NAME:lte.modem.rsrq[mtxrLTEModemSignalRSRQ.{#SNMPINDEX}].max(5m)} < {$LTEMODEM.RSRQ.MIN.WARN}` | WARNING | |
+| Interface {#IFNAME}({#IFALIAS}): LTE modem SINR is low (below {$LTEMODEM.SINR.MIN.WARN}dB for 5m) | <p>-</p> | `{TEMPLATE_NAME:lte.modem.sinr[mtxrLTEModemSignalSINR.{#SNMPINDEX}].max(5m)} < {$LTEMODEM.SINR.MIN.WARN}` | WARNING | |
+| Interface {#IFNAME}({#IFALIAS}): AP interface {#IFNAME}({#IFALIAS}) is not running | <p>The access point interface may not be running for various reasons: a disabled interface, power off, or a network link that is down.</p> | `{TEMPLATE_NAME:ssid.state[mtxrWlCMState.{#SNMPINDEX}].last()}<>"running-ap"` | WARNING | |
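
As a reading aid for the old-style trigger syntax above, the disk-space triggers combine a utilization threshold with either a free-space floor or a time-to-full forecast. The sketch below restates that logic in plain Python; every value and threshold in it is hypothetical and stands in for the corresponding macro or history function.

```python
# Hypothetical restatement of the "Disk space is low" trigger logic above.
# The variables stand in for Zabbix macros and history functions
# (last(), timeleft()); none of the numbers come from a real device.
GiB = 1024 ** 3

pused_last = 87.0             # last value of vfs.fs.pused, in %
total_last = 64 * GiB         # last value of vfs.fs.total, in bytes
used_last = 55.7 * GiB        # last value of vfs.fs.used, in bytes
seconds_to_full = 30 * 3600   # timeleft(1h,,100) forecast, in seconds

PUSED_MAX_WARN = 80           # {$VFS.FS.PUSED.MAX.WARN:"Disk-{#SNMPINDEX}"}

free_bytes = total_last - used_last
fires = pused_last > PUSED_MAX_WARN and (
    free_bytes < 10 * GiB or seconds_to_full < 24 * 3600
)
print(fires)  # True: utilization is above 80% and less than 10G is free
```
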
## Feedback
diff --git a/templates/net/morningstar_snmp/prostar_mppt_snmp/README.md b/templates/net/morningstar_snmp/prostar_mppt_snmp/README.md
index cacc1c79e34..e5dc10db6c9 100644
--- a/templates/net/morningstar_snmp/prostar_mppt_snmp/README.md
+++ b/templates/net/morningstar_snmp/prostar_mppt_snmp/README.md
@@ -3,11 +3,11 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
Refer to the vendor documentation.
@@ -17,23 +17,23 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$BATTERY.TEMP.MAX.CRIT} |<p>Battery high temperature critical value</p> |`60` |
-|{$BATTERY.TEMP.MAX.WARN} |<p>Battery high temperature warning value</p> |`45` |
-|{$BATTERY.TEMP.MIN.CRIT} |<p>Battery low temperature critical value</p> |`-20` |
-|{$BATTERY.TEMP.MIN.WARN} |<p>Battery low temperature warning value</p> |`0` |
-|{$CHARGE.STATE.CRIT} |<p>fault</p> |`4` |
-|{$CHARGE.STATE.WARN} |<p>disconnect</p> |`2` |
-|{$LOAD.STATE.CRIT:"fault"} |<p>fault</p> |`4` |
-|{$LOAD.STATE.CRIT:"lvd"} |<p>lvd</p> |`3` |
-|{$LOAD.STATE.WARN:"disconnect"} |<p>disconnect</p> |`5` |
-|{$LOAD.STATE.WARN:"lvdWarning"} |<p>lvdWarning</p> |`2` |
-|{$LOAD.STATE.WARN:"override"} |<p>override</p> |`7` |
-|{$VOLTAGE.MAX.CRIT} | |`` |
-|{$VOLTAGE.MAX.WARN} | |`` |
-|{$VOLTAGE.MIN.CRIT} | |`` |
-|{$VOLTAGE.MIN.WARN} | |`` |
+| Name | Description | Default |
+|---------------------------------|------------------------------------------------|---------|
+| {$BATTERY.TEMP.MAX.CRIT} | <p>Battery high temperature critical value</p> | `60` |
+| {$BATTERY.TEMP.MAX.WARN} | <p>Battery high temperature warning value</p> | `45` |
+| {$BATTERY.TEMP.MIN.CRIT} | <p>Battery low temperature critical value</p> | `-20` |
+| {$BATTERY.TEMP.MIN.WARN} | <p>Battery low temperature warning value</p> | `0` |
+| {$CHARGE.STATE.CRIT} | <p>fault</p> | `4` |
+| {$CHARGE.STATE.WARN} | <p>disconnect</p> | `2` |
+| {$LOAD.STATE.CRIT:"fault"} | <p>fault</p> | `4` |
+| {$LOAD.STATE.CRIT:"lvd"} | <p>lvd</p> | `3` |
+| {$LOAD.STATE.WARN:"disconnect"} | <p>disconnect</p> | `5` |
+| {$LOAD.STATE.WARN:"lvdWarning"} | <p>lvdWarning</p> | `2` |
+| {$LOAD.STATE.WARN:"override"} | <p>override</p> | `7` |
+| {$VOLTAGE.MAX.CRIT} | | `` |
+| {$VOLTAGE.MAX.WARN} | | `` |
+| {$VOLTAGE.MIN.CRIT} | | `` |
+| {$VOLTAGE.MIN.WARN} | | `` |
## Template links
@@ -41,100 +41,100 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Battery voltage discovery |<p>Discovery for battery voltage triggers</p> |DEPENDENT |battery.voltage.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Name | Description | Type | Key and additional info |
+|---------------------------|-----------------------------------------------|-----------|---------------------------------------------------------------------------------------------------------------------|
+| Battery voltage discovery | <p>Discovery for battery voltage triggers</p> | DEPENDENT | battery.voltage.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Array |Array: Voltage |<p>MIB: PROSTAR-MPPT</p><p>Array Voltage</p><p> Description:Array Voltage</p><p> Scaling Factor:1.0</p><p> Units:V</p><p> Range:[0, 80]</p><p> Modbus address:0x0013</p> |SNMP |array.voltage[arrayVoltage.0] |
-|Array |Array: Sweep Vmp |<p>MIB: PROSTAR-MPPT</p><p>Array Vmp</p><p> Description:Array Max. Power Point Voltage</p><p> Scaling Factor:1.0</p><p> Units:V</p><p> Range:[0.0, 5000.0]</p><p> Modbus address:0x003D</p> |SNMP |array.sweep_vmp[arrayVmp.0] |
-|Array |Array: Sweep Voc |<p>MIB: PROSTAR-MPPT</p><p>Array Voc</p><p> Description:Array Open Circuit Voltage</p><p> Scaling Factor:1.0</p><p> Units:V</p><p> Range:[0.0, 80.0]</p><p> Modbus address:0x003F</p> |SNMP |array.sweep_voc[arrayVoc.0] |
-|Array |Array: Sweep Pmax |<p>MIB: PROSTAR-MPPT</p><p>Array Max. Power (sweep)</p><p> Description:Array Max. Power (last sweep)</p><p> Scaling Factor:1.0</p><p> Units:W</p><p> Range:[0.0, 500]</p><p> Modbus address:0x003E</p> |SNMP |array.sweep_pmax[arrayMaxPowerSweep.0] |
-|Battery |Battery: Charge State |<p>MIB: PROSTAR-MPPT</p><p>Charge State</p><p> Description:Control State</p><p> Modbus address:0x0021</p><p> 0: Start</p><p> 1: NightCheck</p><p> 2: Disconnect</p><p> 3: Night</p><p> 4: Fault</p><p> 5: BulkMppt</p><p> 6: Absorption</p><p> 7: Float</p><p> 8: Equalize</p><p> 9: Slave</p><p> 10: Fixed</p> |SNMP |charge.state[chargeState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Battery |Battery: Target Voltage |<p>MIB: PROSTAR-MPPT</p><p>Target Voltage</p><p> Description:Target Regulation Voltage</p><p> Scaling Factor:1.0</p><p> Units:V</p><p> Range:[0.0, 80.0]</p><p> Modbus address:0x0024</p> |SNMP |target.voltage[targetVoltage.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Battery |Battery: Charge Current |<p>MIB: PROSTAR-MPPT</p><p>Charge Current</p><p> Description:Charge Current</p><p> Scaling Factor:1.0</p><p> Units:A</p><p> Range:[0, 40]</p><p> Modbus address:0x0010</p> |SNMP |charge.current[chargeCurrent.0] |
-|Battery |Battery: Voltage{#SINGLETON} |<p>MIB: PROSTAR-MPPT</p><p>Battery Terminal Voltage</p><p>Description:Battery Terminal Voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0012</p> |SNMP |battery.voltage[batteryTerminalVoltage.0{#SINGLETON}] |
-|Counter |Counter: Charge Amp-hours |<p>MIB: PROSTAR-MPPT</p><p>Ah Charge (Resettable)</p><p> Description:Ah Charge (Resettable)</p><p> Scaling Factor:0.1</p><p> Units:Ah</p><p> Range:[0.0, 4294967294]</p><p> Modbus addresses:H=0x0026 L=0x0027</p> |SNMP |counter.charge_amp_hours[ahChargeResettable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
-|Counter |Counter: Charge KW-hours |<p>MIB: PROSTAR-MPPT</p><p>kWh Charge (Resettable)</p><p>Description:Kilowatt Hours Charge (Resettable)</p><p>Scaling Factor:1.0</p><p>Units:kWh</p><p>Range:[0.0, 65535]</p><p>Modbus address:0x002A</p> |SNMP |counter.charge_kw_hours[kwhChargeResettable.0] |
-|Counter |Counter: Load Amp-hours |<p>MIB: PROSTAR-MPPT</p><p>Description:Ah Load (Resettable)</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 4294967294]</p><p>Modbus addresses:H=0x0032 L=0x0033</p> |SNMP |counter.load_amp_hours[ahLoadResettable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
-|Load |Load: State |<p>MIB: PROSTAR-MPPT</p><p>Load State</p><p> Description:Load State</p><p> Modbus address:0x002E</p><p> 0: Start</p><p>1: Normal</p><p>2: LvdWarning</p><p>3: Lvd</p><p>4: Fault</p><p>5: Disconnect</p><p>6: NormalOff</p><p>7: Override</p><p>8: NotUsed</p> |SNMP |load.state[loadState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Load |Load: Voltage |<p>MIB: PROSTAR-MPPT</p><p>Load Voltage</p><p> Description:Load Voltage</p><p> Scaling Factor:1.0</p><p> Units:V</p><p> Range:[0, 80]</p><p> Modbus address:0x0014</p> |SNMP |load.voltage[loadVoltage.0] |
-|Load |Load: Current |<p>MIB: PROSTAR-MPPT</p><p>Load Current</p><p> Description:Load Current</p><p> Scaling Factor:1.0</p><p> Units:A</p><p> Range:[0, 60]</p><p> Modbus address:0x0016</p> |SNMP |load.current[loadCurrent.0] |
-|Status |Status: Uptime |<p>Device uptime in seconds</p> |SNMP |status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
-|Status |Status: Array Faults |<p>MIB: PROSTAR-MPPT</p><p>Description:Array Faults</p><p>Modbus address:0x0022</p> |SNMP |status.array_faults[arrayFaults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Status |Status: Load Faults |<p>MIB: PROSTAR-MPPT</p><p>Description:Array Faults</p><p>Modbus address:0x0022</p> |SNMP |status.load_faults[loadFaults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Status |Status: Alarms |<p>MIB: PROSTAR-MPPT</p><p>Description:Alarms</p><p>Modbus addresses:H=0x0038 L=0x0039</p> |SNMP |status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Temperature |Temperature: Ambient |<p>MIB: PROSTAR-MPPT</p><p>Ambient Temperature</p><p> Description:Ambient Temperature</p><p> Scaling Factor:1.0</p><p> Units:deg C</p><p> Range:[-128, 127]</p><p> Modbus address:0x001C</p> |SNMP |temp.ambient[ambientTemperature.0] |
-|Temperature |Temperature: Battery |<p>MIB: PROSTAR-MPPT</p><p>Battery Temperature</p><p> Description:Battery Temperature</p><p> Scaling Factor:1.0</p><p> Units:deg C</p><p> Range:[-128, 127]</p><p> Modbus address:0x001B</p> |SNMP |temp.battery[batteryTemperature.0] |
-|Temperature |Temperature: Heatsink |<p>MIB: PROSTAR-MPPT</p><p>Heatsink Temperature</p><p> Description:Heatsink Temperature</p><p> Scaling Factor:1.0</p><p> Units:deg C</p><p> Range:[-128, 127]</p><p> Modbus address:0x001A</p> |SNMP |temp.heatsink[heatsinkTemperature.0] |
-|Zabbix_raw_items |Battery: Battery Voltage discovery |<p>MIB: PROSTAR-MPPT</p> |SNMP |battery.voltage.discovery[batteryTerminalVoltage.0] |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Array | Array: Voltage | <p>MIB: PROSTAR-MPPT</p><p>Array Voltage</p><p> Description:Array Voltage</p><p> Scaling Factor:1.0</p><p> Units:V</p><p> Range:[0, 80]</p><p> Modbus address:0x0013</p> | SNMP | array.voltage[arrayVoltage.0] |
+| Array | Array: Sweep Vmp | <p>MIB: PROSTAR-MPPT</p><p>Array Vmp</p><p> Description:Array Max. Power Point Voltage</p><p> Scaling Factor:1.0</p><p> Units:V</p><p> Range:[0.0, 5000.0]</p><p> Modbus address:0x003D</p> | SNMP | array.sweep_vmp[arrayVmp.0] |
+| Array | Array: Sweep Voc | <p>MIB: PROSTAR-MPPT</p><p>Array Voc</p><p> Description:Array Open Circuit Voltage</p><p> Scaling Factor:1.0</p><p> Units:V</p><p> Range:[0.0, 80.0]</p><p> Modbus address:0x003F</p> | SNMP | array.sweep_voc[arrayVoc.0] |
+| Array | Array: Sweep Pmax | <p>MIB: PROSTAR-MPPT</p><p>Array Max. Power (sweep)</p><p> Description:Array Max. Power (last sweep)</p><p> Scaling Factor:1.0</p><p> Units:W</p><p> Range:[0.0, 500]</p><p> Modbus address:0x003E</p> | SNMP | array.sweep_pmax[arrayMaxPowerSweep.0] |
+| Battery | Battery: Charge State | <p>MIB: PROSTAR-MPPT</p><p>Charge State</p><p> Description:Control State</p><p> Modbus address:0x0021</p><p> 0: Start</p><p> 1: NightCheck</p><p> 2: Disconnect</p><p> 3: Night</p><p> 4: Fault</p><p> 5: BulkMppt</p><p> 6: Absorption</p><p> 7: Float</p><p> 8: Equalize</p><p> 9: Slave</p><p> 10: Fixed</p> | SNMP | charge.state[chargeState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Battery | Battery: Target Voltage | <p>MIB: PROSTAR-MPPT</p><p>Target Voltage</p><p> Description:Target Regulation Voltage</p><p> Scaling Factor:1.0</p><p> Units:V</p><p> Range:[0.0, 80.0]</p><p> Modbus address:0x0024</p> | SNMP | target.voltage[targetVoltage.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Battery | Battery: Charge Current | <p>MIB: PROSTAR-MPPT</p><p>Charge Current</p><p> Description:Charge Current</p><p> Scaling Factor:1.0</p><p> Units:A</p><p> Range:[0, 40]</p><p> Modbus address:0x0010</p> | SNMP | charge.current[chargeCurrent.0] |
+| Battery | Battery: Voltage{#SINGLETON} | <p>MIB: PROSTAR-MPPT</p><p>Battery Terminal Voltage</p><p>Description:Battery Terminal Voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0012</p> | SNMP | battery.voltage[batteryTerminalVoltage.0{#SINGLETON}] |
+| Counter | Counter: Charge Amp-hours | <p>MIB: PROSTAR-MPPT</p><p>Ah Charge (Resettable)</p><p> Description:Ah Charge (Resettable)</p><p> Scaling Factor:0.1</p><p> Units:Ah</p><p> Range:[0.0, 4294967294]</p><p> Modbus addresses:H=0x0026 L=0x0027</p> | SNMP | counter.charge_amp_hours[ahChargeResettable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
+| Counter | Counter: Charge KW-hours | <p>MIB: PROSTAR-MPPT</p><p>kWh Charge (Resettable)</p><p>Description:Kilowatt Hours Charge (Resettable)</p><p>Scaling Factor:1.0</p><p>Units:kWh</p><p>Range:[0.0, 65535]</p><p>Modbus address:0x002A</p> | SNMP | counter.charge_kw_hours[kwhChargeResettable.0] |
+| Counter | Counter: Load Amp-hours | <p>MIB: PROSTAR-MPPT</p><p>Description:Ah Load (Resettable)</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 4294967294]</p><p>Modbus addresses:H=0x0032 L=0x0033</p> | SNMP | counter.load_amp_hours[ahLoadResettable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
+| Load | Load: State | <p>MIB: PROSTAR-MPPT</p><p>Load State</p><p> Description:Load State</p><p> Modbus address:0x002E</p><p> 0: Start</p><p>1: Normal</p><p>2: LvdWarning</p><p>3: Lvd</p><p>4: Fault</p><p>5: Disconnect</p><p>6: NormalOff</p><p>7: Override</p><p>8: NotUsed</p> | SNMP | load.state[loadState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Load | Load: Voltage | <p>MIB: PROSTAR-MPPT</p><p>Load Voltage</p><p> Description:Load Voltage</p><p> Scaling Factor:1.0</p><p> Units:V</p><p> Range:[0, 80]</p><p> Modbus address:0x0014</p> | SNMP | load.voltage[loadVoltage.0] |
+| Load | Load: Current | <p>MIB: PROSTAR-MPPT</p><p>Load Current</p><p> Description:Load Current</p><p> Scaling Factor:1.0</p><p> Units:A</p><p> Range:[0, 60]</p><p> Modbus address:0x0016</p> | SNMP | load.current[loadCurrent.0] |
+| Status | Status: Uptime | <p>Device uptime in seconds</p> | SNMP | status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
+| Status | Status: Array Faults | <p>MIB: PROSTAR-MPPT</p><p>Description:Array Faults</p><p>Modbus address:0x0022</p> | SNMP | status.array_faults[arrayFaults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Status | Status: Load Faults | <p>MIB: PROSTAR-MPPT</p><p>Description:Load Faults</p><p>Modbus address:0x0022</p> | SNMP | status.load_faults[loadFaults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Status | Status: Alarms | <p>MIB: PROSTAR-MPPT</p><p>Description:Alarms</p><p>Modbus addresses:H=0x0038 L=0x0039</p> | SNMP | status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Temperature | Temperature: Ambient | <p>MIB: PROSTAR-MPPT</p><p>Ambient Temperature</p><p> Description:Ambient Temperature</p><p> Scaling Factor:1.0</p><p> Units:deg C</p><p> Range:[-128, 127]</p><p> Modbus address:0x001C</p> | SNMP | temp.ambient[ambientTemperature.0] |
+| Temperature | Temperature: Battery | <p>MIB: PROSTAR-MPPT</p><p>Battery Temperature</p><p> Description:Battery Temperature</p><p> Scaling Factor:1.0</p><p> Units:deg C</p><p> Range:[-128, 127]</p><p> Modbus address:0x001B</p> | SNMP | temp.battery[batteryTemperature.0] |
+| Temperature | Temperature: Heatsink | <p>MIB: PROSTAR-MPPT</p><p>Heatsink Temperature</p><p> Description:Heatsink Temperature</p><p> Scaling Factor:1.0</p><p> Units:deg C</p><p> Range:[-128, 127]</p><p> Modbus address:0x001A</p> | SNMP | temp.heatsink[heatsinkTemperature.0] |
+| Zabbix_raw_items | Battery: Battery Voltage discovery | <p>MIB: PROSTAR-MPPT</p> | SNMP | battery.voltage.discovery[batteryTerminalVoltage.0] |
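
Several items above carry a MULTIPLIER preprocessing step because the controller reports scaled integers: the Ah counters use a 0.1 scaling factor and uptime arrives in hundredths of a second. The sketch below shows that conversion with made-up raw readings; only the scaling factors match the table.

```python
# Sketch of the MULTIPLIER preprocessing used by the items above.
# Raw readings are hypothetical; only the scaling factors match the table.
raw_ah_charge = 12345        # ahChargeResettable, reported in tenths of Ah
raw_uptime_ticks = 8640000   # uptime, reported in hundredths of a second

ah_charge = raw_ah_charge * 0.1           # MULTIPLIER: 0.1  -> 1234.5 Ah
uptime_seconds = raw_uptime_ticks * 0.01  # MULTIPLIER: 0.01 -> 86400 s (1 day)
print(ah_charge, uptime_seconds)
```
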
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Battery: Device charge in warning state |<p>-</p> |`{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Device charge in critical state</p> |
-|Battery: Device charge in critical state |<p>-</p> |`{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.CRIT}` |HIGH | |
-|Battery: Low battery voltage (below {#VOLTAGE.MIN.WARN}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m)</p> |
-|Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.CRIT}` |HIGH | |
-|Battery: High battery voltage (over {#VOLTAGE.MAX.WARN}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m)</p> |
-|Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.CRIT}` |HIGH | |
-|Load: Device load in warning state |<p>-</p> |`{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"lvdWarning"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"override"}` |WARNING |<p>**Depends on**:</p><p>- Load: Device load in critical state</p> |
-|Load: Device load in critical state |<p>-</p> |`{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"lvd"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"fault"}` |HIGH | |
-|Status: Device has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:status.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|Status: Failed to fetch data (or no data for 5m) |<p>Zabbix has not received data for items for the last 5 minutes</p> |`{TEMPLATE_NAME:status.uptime.nodata(5m)}=1` |WARNING |<p>Manual close: YES</p> |
-|Status: Device has "overcurrent" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"overcurrent","like")}=2` |HIGH | |
-|Status: Device has "mosfetSShorted" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"mosfetSShorted","like")}=2` |HIGH | |
-|Status: Device has "software" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"software","like")}=2` |HIGH | |
-|Status: Device has "batteryHvd" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"batteryHvd","like")}=2` |HIGH | |
-|Status: Device has "arrayHvd" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"arrayHvd","like")}=2` |HIGH | |
-|Status: Device has "customSettingsEdit" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"customSettingsEdit","like")}=2` |HIGH | |
-|Status: Device has "rtsShorted" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"rtsShorted","like")}=2` |HIGH | |
-|Status: Device has "rtsNoLongerValid" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"rtsNoLongerValid","like")}=2` |HIGH | |
-|Status: Device has "localTempSensorDamaged" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"localTempSensorDamaged","like")}=2` |HIGH | |
-|Status: Device has "batteryLowVoltageDisconnect" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"batteryLowVoltageDisconnect","like")}=2` |HIGH | |
-|Status: Device has "slaveTimeout" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"slaveTimeout","like")}=2` |HIGH | |
-|Status: Device has "dipSwitchChanged" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"dipSwitchChanged","like")}=2` |HIGH | |
-|Status: Device has "externalShortCircuit" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"externalShortCircuit","like")}=2` |HIGH | |
-|Status: Device has "overcurrent" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"overcurrent","like")}=2` |HIGH | |
-|Status: Device has "mosfetShorted" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"mosfetShorted","like")}=2` |HIGH | |
-|Status: Device has "software" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"software","like")}=2` |HIGH | |
-|Status: Device has "loadHvd" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"loadHvd","like")}=2` |HIGH | |
-|Status: Device has "highTempDisconnect" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"highTempDisconnect","like")}=2` |HIGH | |
-|Status: Device has "dipSwitchChanged" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"dipSwitchChanged","like")}=2` |HIGH | |
-|Status: Device has "customSettingsEdit" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"customSettingsEdit","like")}=2` |HIGH | |
-|Status: Device has "rtsShorted" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsShorted","like")}=2` |WARNING | |
-|Status: Device has "rtsDisconnected" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsDisconnected","like")}=2` |WARNING | |
-|Status: Device has "heatsinkTempSensorOpen" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorOpen","like")}=2` |WARNING | |
-|Status: Device has "heatsinkTempSensorShorted" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorShorted","like")}=2` |WARNING | |
-|Status: Device has "heatsinkTempLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempLimit","like")}=2` |WARNING | |
-|Status: Device has "inductorTempSensorOpen" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"inductorTempSensorOpen","like")}=2` |WARNING | |
-|Status: Device has "inductorTempSensorShorted" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"inductorTempSensorShorted","like")}=2` |WARNING | |
-|Status: Device has "inductorTempLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"inductorTempLimit","like")}=2` |WARNING | |
-|Status: Device has "currentLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentLimit","like")}=2` |WARNING | |
-|Status: Device has "currentMeasurementError" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentMeasurementError","like")}=2` |WARNING | |
-|Status: Device has "batterySenseOutOfRange" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySenseOutOfRange","like")}=2` |WARNING | |
-|Status: Device has "batterySenseDisconnected" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySenseDisconnected","like")}=2` |WARNING | |
-|Status: Device has "uncalibrated" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"uncalibrated","like")}=2` |WARNING | |
-|Status: Device has "tb5v" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"tb5v","like")}=2` |WARNING | |
-|Status: Device has "fp10SupplyOutOfRange" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"fp10SupplyOutOfRange","like")}=2` |WARNING | |
-|Status: Device has "mosfetOpen" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"mosfetOpen","like")}=2` |WARNING | |
-|Status: Device has "arrayCurrentOffset" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"arrayCurrentOffset","like")}=2` |WARNING | |
-|Status: Device has "loadCurrentOffset" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"loadCurrentOffset","like")}=2` |WARNING | |
-|Status: Device has "p33SupplyOutOfRange" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p33SupplyOutOfRange","like")}=2` |WARNING | |
-|Status: Device has "p12SupplyOutOfRange" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p12SupplyOutOfRange","like")}=2` |WARNING | |
-|Status: Device has "hightInputVoltageLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"hightInputVoltageLimit","like")}=2` |WARNING | |
-|Status: Device has "controllerReset" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"controllerReset","like")}=2` |WARNING | |
-|Status: Device has "loadLvd" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"loadLvd","like")}=2` |WARNING | |
-|Status: Device has "logTimeout" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"logTimeout","like")}=2` |WARNING | |
-|Status: Device has "eepromAccessFailure" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"eepromAccessFailure","like")}=2` |WARNING | |
-|Temperature: Low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.WARN}` |WARNING |<p>**Depends on**:</p><p>- Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m)</p> |
-|Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.CRIT}` |HIGH | |
-|Temperature: High battery temperature (over {$BATTERY.TEMP.MAX.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m)</p> |
-|Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.CRIT}` |HIGH | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|------------------------------------------------------------------------------------------|----------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------------------------------------------|
+| Battery: Device charge in warning state | <p>-</p> | `{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Device charge in critical state</p> |
+| Battery: Device charge in critical state | <p>-</p> | `{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.CRIT}` | HIGH | |
+| Battery: Low battery voltage (below {#VOLTAGE.MIN.WARN}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m)</p> |
+| Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.CRIT}` | HIGH | |
+| Battery: High battery voltage (over {#VOLTAGE.MAX.WARN}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m)</p> |
+| Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.CRIT}` | HIGH | |
+| Load: Device load in warning state | <p>-</p> | `{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"lvdWarning"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"override"}` | WARNING | <p>**Depends on**:</p><p>- Load: Device load in critical state</p> |
+| Load: Device load in critical state | <p>-</p> | `{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"lvd"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"fault"}` | HIGH | |
+| Status: Device has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:status.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| Status: Failed to fetch data (or no data for 5m) | <p>Zabbix has not received data for items for the last 5 minutes</p> | `{TEMPLATE_NAME:status.uptime.nodata(5m)}=1` | WARNING | <p>Manual close: YES</p> |
+| Status: Device has "overcurrent" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"overcurrent","like")}=2` | HIGH | |
+| Status: Device has "mosfetSShorted" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"mosfetSShorted","like")}=2` | HIGH | |
+| Status: Device has "software" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"software","like")}=2` | HIGH | |
+| Status: Device has "batteryHvd" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"batteryHvd","like")}=2` | HIGH | |
+| Status: Device has "arrayHvd" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"arrayHvd","like")}=2` | HIGH | |
+| Status: Device has "customSettingsEdit" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"customSettingsEdit","like")}=2` | HIGH | |
+| Status: Device has "rtsShorted" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"rtsShorted","like")}=2` | HIGH | |
+| Status: Device has "rtsNoLongerValid" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"rtsNoLongerValid","like")}=2` | HIGH | |
+| Status: Device has "localTempSensorDamaged" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"localTempSensorDamaged","like")}=2` | HIGH | |
+| Status: Device has "batteryLowVoltageDisconnect" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"batteryLowVoltageDisconnect","like")}=2` | HIGH | |
+| Status: Device has "slaveTimeout" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"slaveTimeout","like")}=2` | HIGH | |
+| Status: Device has "dipSwitchChanged" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"dipSwitchChanged","like")}=2` | HIGH | |
+| Status: Device has "externalShortCircuit" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"externalShortCircuit","like")}=2` | HIGH | |
+| Status: Device has "overcurrent" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"overcurrent","like")}=2` | HIGH | |
+| Status: Device has "mosfetShorted" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"mosfetShorted","like")}=2` | HIGH | |
+| Status: Device has "software" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"software","like")}=2` | HIGH | |
+| Status: Device has "loadHvd" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"loadHvd","like")}=2` | HIGH | |
+| Status: Device has "highTempDisconnect" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"highTempDisconnect","like")}=2` | HIGH | |
+| Status: Device has "dipSwitchChanged" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"dipSwitchChanged","like")}=2` | HIGH | |
+| Status: Device has "customSettingsEdit" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"customSettingsEdit","like")}=2` | HIGH | |
+| Status: Device has "rtsShorted" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsShorted","like")}=2` | WARNING | |
+| Status: Device has "rtsDisconnected" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsDisconnected","like")}=2` | WARNING | |
+| Status: Device has "heatsinkTempSensorOpen" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorOpen","like")}=2` | WARNING | |
+| Status: Device has "heatsinkTempSensorShorted" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorShorted","like")}=2` | WARNING | |
+| Status: Device has "heatsinkTempLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempLimit","like")}=2` | WARNING | |
+| Status: Device has "inductorTempSensorOpen" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"inductorTempSensorOpen","like")}=2` | WARNING | |
+| Status: Device has "inductorTempSensorShorted" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"inductorTempSensorShorted","like")}=2` | WARNING | |
+| Status: Device has "inductorTempLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"inductorTempLimit","like")}=2` | WARNING | |
+| Status: Device has "currentLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentLimit","like")}=2` | WARNING | |
+| Status: Device has "currentMeasurementError" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentMeasurementError","like")}=2` | WARNING | |
+| Status: Device has "batterySenseOutOfRange" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySenseOutOfRange","like")}=2` | WARNING | |
+| Status: Device has "batterySenseDisconnected" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySenseDisconnected","like")}=2` | WARNING | |
+| Status: Device has "uncalibrated" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"uncalibrated","like")}=2` | WARNING | |
+| Status: Device has "tb5v" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"tb5v","like")}=2` | WARNING | |
+| Status: Device has "fp10SupplyOutOfRange" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"fp10SupplyOutOfRange","like")}=2` | WARNING | |
+| Status: Device has "mosfetOpen" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"mosfetOpen","like")}=2` | WARNING | |
+| Status: Device has "arrayCurrentOffset" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"arrayCurrentOffset","like")}=2` | WARNING | |
+| Status: Device has "loadCurrentOffset" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"loadCurrentOffset","like")}=2` | WARNING | |
+| Status: Device has "p33SupplyOutOfRange" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p33SupplyOutOfRange","like")}=2` | WARNING | |
+| Status: Device has "p12SupplyOutOfRange" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p12SupplyOutOfRange","like")}=2` | WARNING | |
+| Status: Device has "hightInputVoltageLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"hightInputVoltageLimit","like")}=2` | WARNING | |
+| Status: Device has "controllerReset" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"controllerReset","like")}=2` | WARNING | |
+| Status: Device has "loadLvd" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"loadLvd","like")}=2` | WARNING | |
+| Status: Device has "logTimeout" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"logTimeout","like")}=2` | WARNING | |
+| Status: Device has "eepromAccessFailure" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"eepromAccessFailure","like")}=2` | WARNING | |
+| Temperature: Low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.WARN}` | WARNING | <p>**Depends on**:</p><p>- Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.CRIT}C for 5m)</p> |
+| Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.CRIT}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.CRIT}` | HIGH | |
+| Temperature: High battery temperature (over {$BATTERY.TEMP.MAX.WARN}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m)</p> |
+| Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.CRIT}` | HIGH | |
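
The long list of fault and alarm triggers above all rely on the same `count(#3,"<flag>","like")=2` idiom: the trigger fires once the flag text is present in two of the last three collected values. The sketch below is a hypothetical Python equivalent, with substring matching standing in for the `like` operator and an invented item history.

```python
# Hypothetical history of the status.array_faults item. The pattern
# count(#3,"overcurrent","like")=2 asks whether the flag text appears in
# two of the last three collected values.
last_three = [
    "none",
    "overcurrent, rtsShorted",
    "overcurrent",
]

flag = "overcurrent"
matches = sum(flag in value for value in last_three)  # "like" ~ substring match
print(matches == 2)  # True -> the corresponding HIGH trigger would fire
```
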
## Feedback
diff --git a/templates/net/morningstar_snmp/prostar_pwm_snmp/README.md b/templates/net/morningstar_snmp/prostar_pwm_snmp/README.md
index 9d6c56ea01d..069455a04ee 100644
--- a/templates/net/morningstar_snmp/prostar_pwm_snmp/README.md
+++ b/templates/net/morningstar_snmp/prostar_pwm_snmp/README.md
@@ -3,11 +3,11 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
Refer to the vendor documentation.
@@ -17,23 +17,23 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$BATTERY.TEMP.MAX.CRIT} |<p>Battery high temperature critical value</p> |`60` |
-|{$BATTERY.TEMP.MAX.WARN} |<p>Battery high temperature warning value</p> |`45` |
-|{$BATTERY.TEMP.MIN.CRIT} |<p>Battery low temperature critical value</p> |`-20` |
-|{$BATTERY.TEMP.MIN.WARN} |<p>Battery low temperature warning value</p> |`0` |
-|{$CHARGE.STATE.CRIT} |<p>fault</p> |`4` |
-|{$CHARGE.STATE.WARN} |<p>disconnect</p> |`2` |
-|{$LOAD.STATE.CRIT:"fault"} |<p>fault</p> |`4` |
-|{$LOAD.STATE.CRIT:"lvd"} |<p>lvd</p> |`3` |
-|{$LOAD.STATE.WARN:"disconnect"} |<p>disconnect</p> |`5` |
-|{$LOAD.STATE.WARN:"lvdWarning"} |<p>lvdWarning</p> |`2` |
-|{$LOAD.STATE.WARN:"override"} |<p>override</p> |`7` |
-|{$VOLTAGE.MAX.CRIT} | |`` |
-|{$VOLTAGE.MAX.WARN} | |`` |
-|{$VOLTAGE.MIN.CRIT} | |`` |
-|{$VOLTAGE.MIN.WARN} | |`` |
+| Name | Description | Default |
+|---------------------------------|------------------------------------------------|---------|
+| {$BATTERY.TEMP.MAX.CRIT} | <p>Battery high temperature critical value</p> | `60` |
+| {$BATTERY.TEMP.MAX.WARN} | <p>Battery high temperature warning value</p> | `45` |
+| {$BATTERY.TEMP.MIN.CRIT} | <p>Battery low temperature critical value</p> | `-20` |
+| {$BATTERY.TEMP.MIN.WARN} | <p>Battery low temperature warning value</p> | `0` |
+| {$CHARGE.STATE.CRIT} | <p>fault</p> | `4` |
+| {$CHARGE.STATE.WARN} | <p>disconnect</p> | `2` |
+| {$LOAD.STATE.CRIT:"fault"} | <p>fault</p> | `4` |
+| {$LOAD.STATE.CRIT:"lvd"} | <p>lvd</p> | `3` |
+| {$LOAD.STATE.WARN:"disconnect"} | <p>disconnect</p> | `5` |
+| {$LOAD.STATE.WARN:"lvdWarning"} | <p>lvdWarning</p> | `2` |
+| {$LOAD.STATE.WARN:"override"} | <p>override</p> | `7` |
+| {$VOLTAGE.MAX.CRIT} | | `` |
+| {$VOLTAGE.MAX.WARN} | | `` |
+| {$VOLTAGE.MIN.CRIT} | | `` |
+| {$VOLTAGE.MIN.WARN} | | `` |
## Template links
@@ -41,96 +41,96 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Battery voltage discovery |<p>Discovery for battery voltage triggers</p> |DEPENDENT |battery.voltage.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Name | Description | Type | Key and additional info |
+|---------------------------|-----------------------------------------------|-----------|---------------------------------------------------------------------------------------------------------------------|
+| Battery voltage discovery | <p>Discovery for battery voltage triggers</p> | DEPENDENT | battery.voltage.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
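+
+The body of the JAVASCRIPT preprocessing step is not reproduced in this README; it ships inside the template YAML. As a rough, hypothetical sketch of the idea only: a battery-voltage discovery of this kind returns a one-element LLD array whose macros ({#SINGLETON}, {#VOLTAGE.MIN.WARN} and so on) feed the voltage triggers listed below. The threshold logic and numbers in the sketch are illustrative assumptions, not the shipped script; the {$VOLTAGE.*} user macros above are presumably the intended place to set real limits.
+
+```js
+// Hypothetical illustration only - the real preprocessing body ships inside the template YAML.
+// Zabbix wraps this code in a function that receives the master item's value in `value`.
+function batteryVoltageDiscovery(value) {
+    var voltage = parseFloat(value);                      // latest battery terminal voltage, V
+    var scale = voltage > 36 ? 4 : voltage > 18 ? 2 : 1;  // crude 12/24/48 V system guess (assumption)
+
+    var row = {
+        '{#SINGLETON}': '',                               // one "instance", so one set of voltage triggers
+        '{#VOLTAGE.MIN.WARN}': 11.5 * scale,              // made-up example thresholds for the sketch only
+        '{#VOLTAGE.MIN.CRIT}': 11.0 * scale,
+        '{#VOLTAGE.MAX.WARN}': 14.6 * scale,
+        '{#VOLTAGE.MAX.CRIT}': 15.0 * scale
+    };
+
+    return JSON.stringify([row]);                         // LLD output: a JSON array of macro objects
+}
+```
+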
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Array |Array: Voltage |<p>MIB: PROSTAR-PWM</p><p>Description:Array Voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[0, 80]</p><p>Modbus address:0x0013</p> |SNMP |array.voltage[arrayVoltage.0] |
-|Battery |Battery: Charge State |<p>MIB: PROSTAR-PWM</p><p>Description:Control State</p><p>Modbus address:0x0021</p><p>0: Start</p><p>1: NightCheck</p><p>2: Disconnect</p><p>3: Night</p><p>4: Fault</p><p>5: Bulk</p><p>6: Pwm</p><p>7: Float</p><p>8: Equalize</p> |SNMP |charge.state[chargeState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Battery |Battery: Target Voltage |<p>MIB: PROSTAR-PWM</p><p>Description:Target Regulation Voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0024</p> |SNMP |target.voltage[targetVoltage.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Battery |Battery: Charge Current |<p>MIB: PROSTAR-PWM</p><p>Description:Charge Current</p><p>Scaling Factor:1.0</p><p>Units:A</p><p>Range:[0, 40]</p><p>Modbus address:0x0011</p> |SNMP |charge.current[chargeCurrent.0] |
-|Battery |Battery: Voltage{#SINGLETON} |<p>MIB: PROSTAR-PWM</p><p>Description:Control State</p><p>Modbus address:0x0021</p> |SNMP |battery.voltage[batteryTerminalVoltage.0{#SINGLETON}] |
-|Counter |Counter: Charge Amp-hours |<p>MIB: PROSTAR-PWM</p><p>Description:Ah Charge (Resettable)</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 4294967294]</p><p>Modbus addresses:H=0x0026 L=0x0027</p> |SNMP |counter.charge_amp_hours[ahChargeResettable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
-|Counter |Counter: Charge KW-hours |<p>MIB: PROSTAR-PWM</p><p>Description:Kilowatt Hours Charge (Resettable)</p><p>Scaling Factor:1.0</p><p>Units:kWh</p><p>Range:[0.0, 65535]</p><p>Modbus address:0x002A</p> |SNMP |counter.charge_kw_hours[kwhChargeResettable.0] |
-|Counter |Counter: Load Amp-hours |<p>MIB: PROSTAR-PWM</p><p>Description:Ah Load (Resettable)</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 4294967294]</p><p>Modbus addresses:H=0x0032 L=0x0033</p> |SNMP |counter.load_amp_hours[ahLoadResettable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
-|Load |Load: State |<p>MIB: PROSTAR-PWM</p><p>Description:Load State</p><p>Modbus address:0x002E</p><p>0: Start</p><p>1: Normal</p><p>2: LvdWarning</p><p>3: Lvd</p><p>4: Fault</p><p>5: Disconnect</p><p>6: NormalOff</p><p>7: Override</p><p>8: NotUsed</p> |SNMP |load.state[loadState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Load |Load: Voltage |<p>MIB: PROSTAR-PWM</p><p>Description:Load Voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[0, 80]</p><p>Modbus address:0x0014</p> |SNMP |load.voltage[loadVoltage.0] |
-|Load |Load: Current |<p>MIB: PROSTAR-PWM</p><p>Description:Load Current</p><p>Scaling Factor:1.0</p><p>Units:A</p><p>Range:[0, 60]</p><p>Modbus address:0x0016</p> |SNMP |load.current[loadCurrent.0] |
-|Status |Status: Uptime |<p>Device uptime in seconds</p> |SNMP |status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
-|Status |Status: Array Faults |<p>MIB: PROSTAR-PWM</p><p>Description:Array Faults</p><p>Modbus address:0x0022</p> |SNMP |status.array_faults[arrayFaults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Status |Status: Load Faults |<p>MIB: PROSTAR-PWM</p><p>Description:Load Faults</p><p>Modbus address:0x002F</p> |SNMP |status.load_faults[loadFaults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Status |Status: Alarms |<p>MIB: PROSTAR-PWM</p><p>Description:Alarms</p><p>Modbus addresses:H=0x0038 L=0x0039</p> |SNMP |status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Temperature |Temperature: Ambient |<p>MIB: PROSTAR-PWM</p><p>Description:Ambient Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-128, 127]</p><p>Modbus address:0x001C</p> |SNMP |temp.ambient[ambientTemperature.0] |
-|Temperature |Temperature: Battery |<p>MIB: PROSTAR-PWM</p><p>Description:Battery Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-128, 127]</p><p>Modbus address:0x001B</p> |SNMP |temp.battery[batteryTemperature.0] |
-|Temperature |Temperature: Heatsink |<p>MIB: PROSTAR-PWM</p><p>Description:Heatsink Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-128, 127]</p><p>Modbus address:0x001A</p> |SNMP |temp.heatsink[heatsinkTemperature.0] |
-|Zabbix_raw_items |Battery: Battery Voltage discovery |<p>MIB: PROSTAR-PWM</p> |SNMP |battery.voltage.discovery[batteryTerminalVoltage.0] |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Array | Array: Voltage | <p>MIB: PROSTAR-PWM</p><p>Description:Array Voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[0, 80]</p><p>Modbus address:0x0013</p> | SNMP | array.voltage[arrayVoltage.0] |
+| Battery | Battery: Charge State | <p>MIB: PROSTAR-PWM</p><p>Description:Control State</p><p>Modbus address:0x0021</p><p>0: Start</p><p>1: NightCheck</p><p>2: Disconnect</p><p>3: Night</p><p>4: Fault</p><p>5: Bulk</p><p>6: Pwm</p><p>7: Float</p><p>8: Equalize</p> | SNMP | charge.state[chargeState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Battery | Battery: Target Voltage | <p>MIB: PROSTAR-PWM</p><p>Description:Target Regulation Voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0024</p> | SNMP | target.voltage[targetVoltage.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Battery | Battery: Charge Current | <p>MIB: PROSTAR-PWM</p><p>Description:Charge Current</p><p>Scaling Factor:1.0</p><p>Units:A</p><p>Range:[0, 40]</p><p>Modbus address:0x0011</p> | SNMP | charge.current[chargeCurrent.0] |
+| Battery | Battery: Voltage{#SINGLETON} | <p>MIB: PROSTAR-PWM</p><p>Description:Control State</p><p>Modbus address:0x0021</p> | SNMP | battery.voltage[batteryTerminalVoltage.0{#SINGLETON}] |
+| Counter | Counter: Charge Amp-hours | <p>MIB: PROSTAR-PWM</p><p>Description:Ah Charge (Resettable)</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 4294967294]</p><p>Modbus addresses:H=0x0026 L=0x0027</p> | SNMP | counter.charge_amp_hours[ahChargeResettable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
+| Counter | Counter: Charge KW-hours | <p>MIB: PROSTAR-PWM</p><p>Description:Kilowatt Hours Charge (Resettable)</p><p>Scaling Factor:1.0</p><p>Units:kWh</p><p>Range:[0.0, 65535]</p><p>Modbus address:0x002A</p> | SNMP | counter.charge_kw_hours[kwhChargeResettable.0] |
+| Counter | Counter: Load Amp-hours | <p>MIB: PROSTAR-PWM</p><p>Description:Ah Load (Resettable)</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 4294967294]</p><p>Modbus addresses:H=0x0032 L=0x0033</p> | SNMP | counter.load_amp_hours[ahLoadResettable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
+| Load | Load: State | <p>MIB: PROSTAR-PWM</p><p>Description:Load State</p><p>Modbus address:0x002E</p><p>0: Start</p><p>1: Normal</p><p>2: LvdWarning</p><p>3: Lvd</p><p>4: Fault</p><p>5: Disconnect</p><p>6: NormalOff</p><p>7: Override</p><p>8: NotUsed</p> | SNMP | load.state[loadState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Load | Load: Voltage | <p>MIB: PROSTAR-PWM</p><p>Description:Load Voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[0, 80]</p><p>Modbus address:0x0014</p> | SNMP | load.voltage[loadVoltage.0] |
+| Load | Load: Current | <p>MIB: PROSTAR-PWM</p><p>Description:Load Current</p><p>Scaling Factor:1.0</p><p>Units:A</p><p>Range:[0, 60]</p><p>Modbus address:0x0016</p> | SNMP | load.current[loadCurrent.0] |
+| Status | Status: Uptime | <p>Device uptime in seconds</p> | SNMP | status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
+| Status | Status: Array Faults | <p>MIB: PROSTAR-PWM</p><p>Description:Array Faults</p><p>Modbus address:0x0022</p> | SNMP | status.array_faults[arrayFaults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Status | Status: Load Faults | <p>MIB: PROSTAR-PWM</p><p>Description:Load Faults</p><p>Modbus address:0x002F</p> | SNMP | status.load_faults[loadFaults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Status | Status: Alarms | <p>MIB: PROSTAR-PWM</p><p>Description:Alarms</p><p>Modbus addresses:H=0x0038 L=0x0039</p> | SNMP | status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Temperature | Temperature: Ambient | <p>MIB: PROSTAR-PWM</p><p>Description:Ambient Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-128, 127]</p><p>Modbus address:0x001C</p> | SNMP | temp.ambient[ambientTemperature.0] |
+| Temperature | Temperature: Battery | <p>MIB: PROSTAR-PWM</p><p>Description:Battery Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-128, 127]</p><p>Modbus address:0x001B</p> | SNMP | temp.battery[batteryTemperature.0] |
+| Temperature | Temperature: Heatsink | <p>MIB: PROSTAR-PWM</p><p>Description:Heatsink Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-128, 127]</p><p>Modbus address:0x001A</p> | SNMP | temp.heatsink[heatsinkTemperature.0] |
+| Zabbix_raw_items | Battery: Battery Voltage discovery | <p>MIB: PROSTAR-PWM</p> | SNMP | battery.voltage.discovery[batteryTerminalVoltage.0] |
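+
+The Alarms, Array Faults and Load Faults items above store a decoded, human-readable list of flag names rather than the raw register value; the JAVASCRIPT preprocessing step that does this is only summarized in the table ("Text is too long. Please see the template."). Below is a minimal sketch of the technique, assuming a simple bitmask-to-name mapping; the real bit order and full name list live in the template YAML.
+
+```js
+// Illustrative sketch only - not the shipped preprocessing script.
+// The SNMP value is a fault/alarm bitmask; expanding it into names lets the
+// triggers below match substrings such as "rtsShorted" or "loadLvd".
+function decodeFlags(value) {
+    var names = ['rtsShorted', 'rtsDisconnected', 'heatsinkTempSensorOpen',
+                 'heatsinkTempSensorShorted', 'heatsinkTempLimit', 'currentLimit'];  // example subset, assumed bit order
+    var mask = parseInt(value, 10);
+    var set = [];
+    for (var bit = 0; bit < names.length; bit++) {
+        if (mask & (1 << bit)) {
+            set.push(names[bit]);
+        }
+    }
+    return set.join(', ');   // e.g. "rtsShorted, currentLimit"
+}
+```
+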
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Battery: Device charge in warning state |<p>-</p> |`{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Device charge in critical state</p> |
-|Battery: Device charge in critical state |<p>-</p> |`{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.CRIT}` |HIGH | |
-|Battery: Low battery voltage (below {#VOLTAGE.MIN.WARN}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m)</p> |
-|Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.CRIT}` |HIGH | |
-|Battery: High battery voltage (over {#VOLTAGE.MAX.WARN}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m)</p> |
-|Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.CRIT}` |HIGH | |
-|Load: Device load in warning state |<p>-</p> |`{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"lvdWarning"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"override"}` |WARNING |<p>**Depends on**:</p><p>- Load: Device load in critical state</p> |
-|Load: Device load in critical state |<p>-</p> |`{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"lvd"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"fault"}` |HIGH | |
-|Status: Device has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:status.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|Status: Failed to fetch data (or no data for 5m) |<p>Zabbix has not received data for items for the last 5 minutes</p> |`{TEMPLATE_NAME:status.uptime.nodata(5m)}=1` |WARNING |<p>Manual close: YES</p> |
-|Status: Device has "overcurrent" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"overcurrent","like")}=2` |HIGH | |
-|Status: Device has "mosfetSShorted" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"mosfetSShorted","like")}=2` |HIGH | |
-|Status: Device has "software" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"software","like")}=2` |HIGH | |
-|Status: Device has "batteryHvd" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"batteryHvd","like")}=2` |HIGH | |
-|Status: Device has "arrayHvd" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"arrayHvd","like")}=2` |HIGH | |
-|Status: Device has "customSettingsEdit" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"customSettingsEdit","like")}=2` |HIGH | |
-|Status: Device has "rtsShorted" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"rtsShorted","like")}=2` |HIGH | |
-|Status: Device has "rtsNoLongerValid" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"rtsNoLongerValid","like")}=2` |HIGH | |
-|Status: Device has "localTempSensorDamaged" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"localTempSensorDamaged","like")}=2` |HIGH | |
-|Status: Device has "batteryLowVoltageDisconnect" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"batteryLowVoltageDisconnect","like")}=2` |HIGH | |
-|Status: Device has "slaveTimeout" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"slaveTimeout","like")}=2` |HIGH | |
-|Status: Device has "dipSwitchChanged" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"dipSwitchChanged","like")}=2` |HIGH | |
-|Status: Device has "p3Fault" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"p3Fault","like")}=2` |HIGH | |
-|Status: Device has "externalShortCircuit" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"externalShortCircuit","like")}=2` |HIGH | |
-|Status: Device has "overcurrent" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"overcurrent","like")}=2` |HIGH | |
-|Status: Device has "mosfetShorted" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"mosfetShorted","like")}=2` |HIGH | |
-|Status: Device has "software" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"software","like")}=2` |HIGH | |
-|Status: Device has "loadHvd" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"loadHvd","like")}=2` |HIGH | |
-|Status: Device has "highTempDisconnect" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"highTempDisconnect","like")}=2` |HIGH | |
-|Status: Device has "dipSwitchChanged" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"dipSwitchChanged","like")}=2` |HIGH | |
-|Status: Device has "customSettingsEdit" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"customSettingsEdit","like")}=2` |HIGH | |
-|Status: Device has "p3Fault" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"p3Fault","like")}=2` |HIGH | |
-|Status: Device has "rtsShorted" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsShorted","like")}=2` |WARNING | |
-|Status: Device has "rtsDisconnected" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsDisconnected","like")}=2` |WARNING | |
-|Status: Device has "heatsinkTempSensorOpen" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorOpen","like")}=2` |WARNING | |
-|Status: Device has "heatsinkTempSensorShorted" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorShorted","like")}=2` |WARNING | |
-|Status: Device has "heatsinkTempLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempLimit","like")}=2` |WARNING | |
-|Status: Device has "currentLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentLimit","like")}=2` |WARNING | |
-|Status: Device has "currentMeasurementError" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentMeasurementError","like")}=2` |WARNING | |
-|Status: Device has "batterySenseOutOfRange" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySenseOutOfRange","like")}=2` |WARNING | |
-|Status: Device has "batterySenseDisconnected" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySenseDisconnected","like")}=2` |WARNING | |
-|Status: Device has "uncalibrated" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"uncalibrated","like")}=2` |WARNING | |
-|Status: Device has "batteryTempOutOfRange" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batteryTempOutOfRange","like")}=2` |WARNING | |
-|Status: Device has "fp10SupplyOutOfRange" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"fp10SupplyOutOfRange","like")}=2` |WARNING | |
-|Status: Device has "mosfetOpen" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"mosfetOpen","like")}=2` |WARNING | |
-|Status: Device has "arrayCurrentOffset" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"arrayCurrentOffset","like")}=2` |WARNING | |
-|Status: Device has "loadCurrentOffset" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"loadCurrentOffset","like")}=2` |WARNING | |
-|Status: Device has "p33SupplyOutOfRange" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p33SupplyOutOfRange","like")}=2` |WARNING | |
-|Status: Device has "p12SupplyOutOfRange" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p12SupplyOutOfRange","like")}=2` |WARNING | |
-|Status: Device has "hightInputVoltageLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"hightInputVoltageLimit","like")}=2` |WARNING | |
-|Status: Device has "controllerReset" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"controllerReset","like")}=2` |WARNING | |
-|Status: Device has "loadLvd" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"loadLvd","like")}=2` |WARNING | |
-|Status: Device has "logTimeout" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"logTimeout","like")}=2` |WARNING | |
-|Status: Device has "eepromAccessFailure" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"eepromAccessFailure","like")}=2` |WARNING | |
-|Temperature: Low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.WARN}` |WARNING |<p>**Depends on**:</p><p>- Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m)</p> |
-|Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.CRIT}` |HIGH | |
-|Temperature: High battery temperature (over {$BATTERY.TEMP.MAX.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m)</p> |
-|Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.CRIT}` |HIGH | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|------------------------------------------------------------------------------------------|----------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------------------------------------------|
+| Battery: Device charge in warning state | <p>-</p> | `{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Device charge in critical state</p> |
+| Battery: Device charge in critical state | <p>-</p> | `{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.CRIT}` | HIGH | |
+| Battery: Low battery voltage (below {#VOLTAGE.MIN.WARN}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m)</p> |
+| Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.CRIT}` | HIGH | |
+| Battery: High battery voltage (over {#VOLTAGE.MAX.WARN}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m)</p> |
+| Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryTerminalVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.CRIT}` | HIGH | |
+| Load: Device load in warning state | <p>-</p> | `{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"lvdWarning"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"override"}` | WARNING | <p>**Depends on**:</p><p>- Load: Device load in critical state</p> |
+| Load: Device load in critical state | <p>-</p> | `{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"lvd"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"fault"}` | HIGH | |
+| Status: Device has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:status.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| Status: Failed to fetch data (or no data for 5m) | <p>Zabbix has not received data for items for the last 5 minutes</p> | `{TEMPLATE_NAME:status.uptime.nodata(5m)}=1` | WARNING | <p>Manual close: YES</p> |
+| Status: Device has "overcurrent" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"overcurrent","like")}=2` | HIGH | |
+| Status: Device has "mosfetSShorted" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"mosfetSShorted","like")}=2` | HIGH | |
+| Status: Device has "software" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"software","like")}=2` | HIGH | |
+| Status: Device has "batteryHvd" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"batteryHvd","like")}=2` | HIGH | |
+| Status: Device has "arrayHvd" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"arrayHvd","like")}=2` | HIGH | |
+| Status: Device has "customSettingsEdit" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"customSettingsEdit","like")}=2` | HIGH | |
+| Status: Device has "rtsShorted" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"rtsShorted","like")}=2` | HIGH | |
+| Status: Device has "rtsNoLongerValid" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"rtsNoLongerValid","like")}=2` | HIGH | |
+| Status: Device has "localTempSensorDamaged" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"localTempSensorDamaged","like")}=2` | HIGH | |
+| Status: Device has "batteryLowVoltageDisconnect" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"batteryLowVoltageDisconnect","like")}=2` | HIGH | |
+| Status: Device has "slaveTimeout" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"slaveTimeout","like")}=2` | HIGH | |
+| Status: Device has "dipSwitchChanged" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"dipSwitchChanged","like")}=2` | HIGH | |
+| Status: Device has "p3Fault" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"p3Fault","like")}=2` | HIGH | |
+| Status: Device has "externalShortCircuit" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"externalShortCircuit","like")}=2` | HIGH | |
+| Status: Device has "overcurrent" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"overcurrent","like")}=2` | HIGH | |
+| Status: Device has "mosfetShorted" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"mosfetShorted","like")}=2` | HIGH | |
+| Status: Device has "software" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"software","like")}=2` | HIGH | |
+| Status: Device has "loadHvd" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"loadHvd","like")}=2` | HIGH | |
+| Status: Device has "highTempDisconnect" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"highTempDisconnect","like")}=2` | HIGH | |
+| Status: Device has "dipSwitchChanged" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"dipSwitchChanged","like")}=2` | HIGH | |
+| Status: Device has "customSettingsEdit" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"customSettingsEdit","like")}=2` | HIGH | |
+| Status: Device has "p3Fault" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"p3Fault","like")}=2` | HIGH | |
+| Status: Device has "rtsShorted" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsShorted","like")}=2` | WARNING | |
+| Status: Device has "rtsDisconnected" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsDisconnected","like")}=2` | WARNING | |
+| Status: Device has "heatsinkTempSensorOpen" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorOpen","like")}=2` | WARNING | |
+| Status: Device has "heatsinkTempSensorShorted" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorShorted","like")}=2` | WARNING | |
+| Status: Device has "heatsinkTempLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempLimit","like")}=2` | WARNING | |
+| Status: Device has "currentLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentLimit","like")}=2` | WARNING | |
+| Status: Device has "currentMeasurementError" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentMeasurementError","like")}=2` | WARNING | |
+| Status: Device has "batterySenseOutOfRange" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySenseOutOfRange","like")}=2` | WARNING | |
+| Status: Device has "batterySenseDisconnected" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySenseDisconnected","like")}=2` | WARNING | |
+| Status: Device has "uncalibrated" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"uncalibrated","like")}=2` | WARNING | |
+| Status: Device has "batteryTempOutOfRange" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batteryTempOutOfRange","like")}=2` | WARNING | |
+| Status: Device has "fp10SupplyOutOfRange" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"fp10SupplyOutOfRange","like")}=2` | WARNING | |
+| Status: Device has "mosfetOpen" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"mosfetOpen","like")}=2` | WARNING | |
+| Status: Device has "arrayCurrentOffset" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"arrayCurrentOffset","like")}=2` | WARNING | |
+| Status: Device has "loadCurrentOffset" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"loadCurrentOffset","like")}=2` | WARNING | |
+| Status: Device has "p33SupplyOutOfRange" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p33SupplyOutOfRange","like")}=2` | WARNING | |
+| Status: Device has "p12SupplyOutOfRange" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p12SupplyOutOfRange","like")}=2` | WARNING | |
+| Status: Device has "hightInputVoltageLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"hightInputVoltageLimit","like")}=2` | WARNING | |
+| Status: Device has "controllerReset" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"controllerReset","like")}=2` | WARNING | |
+| Status: Device has "loadLvd" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"loadLvd","like")}=2` | WARNING | |
+| Status: Device has "logTimeout" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"logTimeout","like")}=2` | WARNING | |
+| Status: Device has "eepromAccessFailure" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"eepromAccessFailure","like")}=2` | WARNING | |
+| Temperature: Low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.WARN}` | WARNING | <p>**Depends on**:</p><p>- Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.CRIT}C for 5m)</p> |
+| Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.CRIT}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.CRIT}` | HIGH | |
+| Temperature: High battery temperature (over {$BATTERY.TEMP.MAX.WARN}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m)</p> |
+| Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.CRIT}` | HIGH | |
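+
+The flag triggers above all follow the pattern `count(#3,"<flag>","like")}=2`: they count how many of the three most recently collected values contain the flag name as a substring and compare that count with 2, so a single noisy poll does not raise an alert on its own. A small model of that evaluation, for illustration only:
+
+```js
+// Rough model of `count(#3,"loadLvd","like")=2` over the decoded alarm strings.
+function flagTriggerFires(lastThreeValues, flag) {
+    var hits = lastThreeValues.filter(function (v) {
+        return v.indexOf(flag) !== -1;   // "like" = substring match
+    }).length;
+    return hits === 2;                   // the expressions compare the count with 2
+}
+
+// Flag present in two of the last three polls -> the trigger fires:
+flagTriggerFires(['', 'loadLvd', 'loadLvd, rtsShorted'], 'loadLvd');   // true
+```
+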
## Feedback
diff --git a/templates/net/morningstar_snmp/sunsaver_mppt_snmp/README.md b/templates/net/morningstar_snmp/sunsaver_mppt_snmp/README.md
index bd8fbf562c9..94ba6c095e7 100644
--- a/templates/net/morningstar_snmp/sunsaver_mppt_snmp/README.md
+++ b/templates/net/morningstar_snmp/sunsaver_mppt_snmp/README.md
@@ -3,11 +3,11 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
Refer to the vendor documentation.
@@ -17,23 +17,23 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$BATTERY.TEMP.MAX.CRIT} |<p>Battery high temperature critical value</p> |`60` |
-|{$BATTERY.TEMP.MAX.WARN} |<p>Battery high temperature warning value</p> |`45` |
-|{$BATTERY.TEMP.MIN.CRIT} |<p>Battery low temperature critical value</p> |`-20` |
-|{$BATTERY.TEMP.MIN.WARN} |<p>Battery low temperature warning value</p> |`0` |
-|{$CHARGE.STATE.CRIT} |<p>fault</p> |`4` |
-|{$CHARGE.STATE.WARN} |<p>disconnect</p> |`2` |
-|{$LOAD.STATE.CRIT:"fault"} |<p>fault</p> |`4` |
-|{$LOAD.STATE.CRIT:"lvd"} |<p>lvd</p> |`3` |
-|{$LOAD.STATE.WARN:"disconnect"} |<p>disconnect</p> |`5` |
-|{$LOAD.STATE.WARN:"lvdWarning"} |<p>lvdWarning</p> |`2` |
-|{$LOAD.STATE.WARN:"override"} |<p>override</p> |`7` |
-|{$VOLTAGE.MAX.CRIT} | |`` |
-|{$VOLTAGE.MAX.WARN} | |`` |
-|{$VOLTAGE.MIN.CRIT} | |`` |
-|{$VOLTAGE.MIN.WARN} | |`` |
+| Name | Description | Default |
+|---------------------------------|------------------------------------------------|---------|
+| {$BATTERY.TEMP.MAX.CRIT} | <p>Battery high temperature critical value</p> | `60` |
+| {$BATTERY.TEMP.MAX.WARN} | <p>Battery high temperature warning value</p> | `45` |
+| {$BATTERY.TEMP.MIN.CRIT} | <p>Battery low temperature critical value</p> | `-20` |
+| {$BATTERY.TEMP.MIN.WARN} | <p>Battery low temperature warning value</p> | `0` |
+| {$CHARGE.STATE.CRIT} | <p>fault</p> | `4` |
+| {$CHARGE.STATE.WARN} | <p>disconnect</p> | `2` |
+| {$LOAD.STATE.CRIT:"fault"} | <p>fault</p> | `4` |
+| {$LOAD.STATE.CRIT:"lvd"} | <p>lvd</p> | `3` |
+| {$LOAD.STATE.WARN:"disconnect"} | <p>disconnect</p> | `5` |
+| {$LOAD.STATE.WARN:"lvdWarning"} | <p>lvdWarning</p> | `2` |
+| {$LOAD.STATE.WARN:"override"} | <p>override</p> | `7` |
+| {$VOLTAGE.MAX.CRIT} | | `` |
+| {$VOLTAGE.MAX.WARN} | | `` |
+| {$VOLTAGE.MIN.CRIT} | | `` |
+| {$VOLTAGE.MIN.WARN} | | `` |
## Template links
@@ -41,85 +41,85 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Battery voltage discovery |<p>Discovery for battery voltage triggers</p> |DEPENDENT |battery.voltage.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Name | Description | Type | Key and additional info |
+|---------------------------|-----------------------------------------------|-----------|---------------------------------------------------------------------------------------------------------------------|
+| Battery voltage discovery | <p>Discovery for battery voltage triggers</p> | DEPENDENT | battery.voltage.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Array |Array: Voltage |<p>MIB: SUNSAVER-MPPT</p><p>Description:Array Voltage</p><p>Scaling Factor:0.0030517578125</p><p>Units:V</p><p>Range:[0, 80]</p><p>Modbus address:0x0009</p> |SNMP |array.voltage[arrayVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.003051757813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Array |Array: Sweep Vmp |<p>MIB: SUNSAVER-MPPT</p><p>Description:Array Max. Power Point Voltage</p><p>Scaling Factor:0.0030517578125</p><p>Units:V</p><p>Range:[0.0, 5000.0]</p><p>Modbus address:0x0028</p> |SNMP |array.sweep_vmp[arrayVmp.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.003051757813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Array |Array: Sweep Voc |<p>MIB: SUNSAVER-MPPT</p><p>Description:Array Open Circuit Voltage</p><p>Scaling Factor:0.0030517578125</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x002A</p> |SNMP |array.sweep_voc[arrayVoc.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.003051757813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Array |Array: Sweep Pmax |<p>MIB: SUNSAVER-MPPT</p><p>Description:Array Open Circuit Voltage</p><p>Scaling Factor:0.0030517578125</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x002A</p> |SNMP |array.sweep_pmax[arrayMaxPowerSweep.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01509857178`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Battery |Battery: Charge State |<p>MIB: SUNSAVER-MPPT</p><p>Description:Control State</p><p>Modbus address:0x0011</p><p>0: Start</p><p>1: NightCheck</p><p>2: Disconnect</p><p>3: Night</p><p>4: Fault</p><p>5: BulkMppt</p><p>6: Pwm</p><p>7: Float</p><p>8: Equalize</p> |SNMP |charge.state[chargeState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Battery |Battery: Target Voltage |<p>MIB: SUNSAVER-MPPT</p><p>Description:Target Regulation Voltage</p><p>Scaling Factor:0.0030517578125</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0014</p> |SNMP |target.voltage[targetVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.003051757813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Battery |Battery: Charge Current |<p>MIB: SUNSAVER-MPPT</p><p>Description:Target Regulation Voltage</p><p>Scaling Factor:0.0030517578125</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0014</p> |SNMP |charge.current[chargeCurrent.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.002415771484`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Battery |Battery: Voltage{#SINGLETON} |<p>MIB: SUNSAVER-MPPT</p><p>Description:Control State</p><p>Modbus address:0x0011</p> |SNMP |battery.voltage[batteryVoltage.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.003051757813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Counter |Counter: Charge Amp-hours |<p>MIB: SUNSAVER-MPPT</p><p>Description:Ah Charge(Resettable)</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 4294967294]</p><p>Modbus addresses:H=0x0015 L=0x0016</p> |SNMP |counter.charge_amp_hours[ahChargeResettable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
-|Counter |Counter: Charge KW-hours |<p>MIB: SUNSAVER-MPPT</p> |SNMP |counter.charge_kw_hours[kwhCharge.0] |
-|Counter |Counter: Load Amp-hours |<p>MIB: SUNSAVER-MPPT</p><p>Description:Ah Load(Resettable)</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 4294967294]</p><p>Modbus addresses:H=0x001D L=0x001E</p> |SNMP |counter.load_amp_hours[ahLoadResettable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
-|Load |Load: State |<p>MIB: SUNSAVER-MPPT</p><p>Description:Load State</p><p>Modbus address:0x001A</p><p>0: Start</p><p>1: Normal</p><p>2: LvdWarning</p><p>3: Lvd</p><p>4: Fault</p><p>5: Disconnect</p><p>6: NormalOff</p><p>7: Override</p><p>8: NotUsed</p> |SNMP |load.state[loadState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Load |Load: Voltage |<p>MIB: SUNSAVER-MPPT</p><p>Description:Load Voltage</p><p>Scaling Factor:0.0030517578125</p><p>Units:V</p><p>Range:[0, 80]</p><p>Modbus address:0x000A</p> |SNMP |load.voltage[loadVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.003051757813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Load |Load: Current |<p>MIB: SUNSAVER-MPPT</p><p>Description:Load Current</p><p>Scaling Factor:0.002415771484375</p><p>Units:A</p><p>Range:[0, 60]</p><p>Modbus address:0x000C</p> |SNMP |load.current[loadCurrent.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.002415771484`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Status |Status: Uptime |<p>Device uptime in seconds</p> |SNMP |status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
-|Status |Status: Array Faults |<p>MIB: SUNSAVER-MPPT</p><p>Description:Array Faults</p><p>Modbus address:0x0012</p> |SNMP |status.array_faults[arrayFaults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Status |Status: Load Faults |<p>MIB: SUNSAVER-MPPT</p><p>Description:Array Faults</p><p>Modbus address:0x0012</p> |SNMP |status.load_faults[loadFaults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Status |Status: Alarms |<p>MIB: SUNSAVER-MPPT</p><p>Description:Alarms</p><p>Modbus addresses:H=0x0023 L=0x0024</p> |SNMP |status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Temperature |Temperature: Ambient |<p>MIB: SUNSAVER-MPPT</p><p>Description:Ambient Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-128, 127]</p><p>Modbus address:0x000F</p> |SNMP |temp.ambient[ambientTemperature.0] |
-|Temperature |Temperature: Battery |<p>MIB: SUNSAVER-MPPT</p><p>Description:Heatsink Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-128, 127]</p><p>Modbus address:0x000D</p> |SNMP |temp.battery[batteryTemperature.0] |
-|Temperature |Temperature: Heatsink |<p>MIB: SUNSAVER-MPPT</p><p>Description:Battery Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-128, 127]</p><p>Modbus address:0x000E</p> |SNMP |temp.heatsink[heatsinkTemperature.0] |
-|Zabbix_raw_items |Battery: Battery Voltage discovery |<p>MIB: SUNSAVER-MPPT</p> |SNMP |battery.voltage.discovery[batteryVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.003051757813`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Array | Array: Voltage | <p>MIB: SUNSAVER-MPPT</p><p>Description:Array Voltage</p><p>Scaling Factor:0.0030517578125</p><p>Units:V</p><p>Range:[0, 80]</p><p>Modbus address:0x0009</p> | SNMP | array.voltage[arrayVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.003051757813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Array | Array: Sweep Vmp | <p>MIB: SUNSAVER-MPPT</p><p>Description:Array Max. Power Point Voltage</p><p>Scaling Factor:0.0030517578125</p><p>Units:V</p><p>Range:[0.0, 5000.0]</p><p>Modbus address:0x0028</p> | SNMP | array.sweep_vmp[arrayVmp.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.003051757813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Array | Array: Sweep Voc | <p>MIB: SUNSAVER-MPPT</p><p>Description:Array Open Circuit Voltage</p><p>Scaling Factor:0.0030517578125</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x002A</p> | SNMP | array.sweep_voc[arrayVoc.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.003051757813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Array | Array: Sweep Pmax | <p>MIB: SUNSAVER-MPPT</p><p>Description:Array Open Circuit Voltage</p><p>Scaling Factor:0.0030517578125</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x002A</p> | SNMP | array.sweep_pmax[arrayMaxPowerSweep.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01509857178`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Battery | Battery: Charge State | <p>MIB: SUNSAVER-MPPT</p><p>Description:Control State</p><p>Modbus address:0x0011</p><p>0: Start</p><p>1: NightCheck</p><p>2: Disconnect</p><p>3: Night</p><p>4: Fault</p><p>5: BulkMppt</p><p>6: Pwm</p><p>7: Float</p><p>8: Equalize</p> | SNMP | charge.state[chargeState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Battery | Battery: Target Voltage | <p>MIB: SUNSAVER-MPPT</p><p>Description:Target Regulation Voltage</p><p>Scaling Factor:0.0030517578125</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0014</p> | SNMP | target.voltage[targetVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.003051757813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Battery | Battery: Charge Current | <p>MIB: SUNSAVER-MPPT</p><p>Description:Target Regulation Voltage</p><p>Scaling Factor:0.0030517578125</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0014</p> | SNMP | charge.current[chargeCurrent.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.002415771484`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Battery | Battery: Voltage{#SINGLETON} | <p>MIB: SUNSAVER-MPPT</p><p>Description:Control State</p><p>Modbus address:0x0011</p> | SNMP | battery.voltage[batteryVoltage.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.003051757813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Counter | Counter: Charge Amp-hours | <p>MIB: SUNSAVER-MPPT</p><p>Description:Ah Charge(Resettable)</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 4294967294]</p><p>Modbus addresses:H=0x0015 L=0x0016</p> | SNMP | counter.charge_amp_hours[ahChargeResettable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
+| Counter | Counter: Charge KW-hours | <p>MIB: SUNSAVER-MPPT</p> | SNMP | counter.charge_kw_hours[kwhCharge.0] |
+| Counter | Counter: Load Amp-hours | <p>MIB: SUNSAVER-MPPT</p><p>Description:Ah Load(Resettable)</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 4294967294]</p><p>Modbus addresses:H=0x001D L=0x001E</p> | SNMP | counter.load_amp_hours[ahLoadResettable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
+| Load | Load: State | <p>MIB: SUNSAVER-MPPT</p><p>Description:Load State</p><p>Modbus address:0x001A</p><p>0: Start</p><p>1: Normal</p><p>2: LvdWarning</p><p>3: Lvd</p><p>4: Fault</p><p>5: Disconnect</p><p>6: NormalOff</p><p>7: Override</p><p>8: NotUsed</p> | SNMP | load.state[loadState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Load | Load: Voltage | <p>MIB: SUNSAVER-MPPT</p><p>Description:Load Voltage</p><p>Scaling Factor:0.0030517578125</p><p>Units:V</p><p>Range:[0, 80]</p><p>Modbus address:0x000A</p> | SNMP | load.voltage[loadVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.003051757813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Load | Load: Current | <p>MIB: SUNSAVER-MPPT</p><p>Description:Load Current</p><p>Scaling Factor:0.002415771484375</p><p>Units:A</p><p>Range:[0, 60]</p><p>Modbus address:0x000C</p> | SNMP | load.current[loadCurrent.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.002415771484`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Status | Status: Uptime | <p>Device uptime in seconds</p> | SNMP | status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
+| Status | Status: Array Faults | <p>MIB: SUNSAVER-MPPT</p><p>Description:Array Faults</p><p>Modbus address:0x0012</p> | SNMP | status.array_faults[arrayFaults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Status | Status: Load Faults | <p>MIB: SUNSAVER-MPPT</p><p>Description:Array Faults</p><p>Modbus address:0x0012</p> | SNMP | status.load_faults[loadFaults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Status | Status: Alarms | <p>MIB: SUNSAVER-MPPT</p><p>Description:Alarms</p><p>Modbus addresses:H=0x0023 L=0x0024</p> | SNMP | status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Temperature | Temperature: Ambient | <p>MIB: SUNSAVER-MPPT</p><p>Description:Ambient Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-128, 127]</p><p>Modbus address:0x000F</p> | SNMP | temp.ambient[ambientTemperature.0] |
+| Temperature | Temperature: Battery | <p>MIB: SUNSAVER-MPPT</p><p>Description:Heatsink Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-128, 127]</p><p>Modbus address:0x000D</p> | SNMP | temp.battery[batteryTemperature.0] |
+| Temperature | Temperature: Heatsink | <p>MIB: SUNSAVER-MPPT</p><p>Description:Battery Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-128, 127]</p><p>Modbus address:0x000E</p> | SNMP | temp.heatsink[heatsinkTemperature.0] |
+| Zabbix_raw_items | Battery: Battery Voltage discovery | <p>MIB: SUNSAVER-MPPT</p> | SNMP | battery.voltage.discovery[batteryVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.003051757813`</p> |
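+
+Most voltage and current items above pair a MULTIPLIER step equal to the documented scaling factor (for example 0.0030517578125, i.e. 100 / 2^15, for voltages) with a REGEX step `^(\d+)(\.\d{1,2})? \1\2`, whose `\1\2` output keeps at most two decimal places of the scaled value. A short worked sketch of those two steps on an assumed raw register value:
+
+```js
+// Worked example of the MULTIPLIER + REGEX preprocessing on an assumed raw register value.
+function scaleAndTrim(raw) {
+    var volts = raw * 0.0030517578125;                   // scaling factor from the table (100 / 2^15)
+    // REGEX step: pattern ^(\d+)(\.\d{1,2})? with output \1\2 keeps at most two decimals.
+    var m = String(volts).match(/^(\d+)(\.\d{1,2})?/);
+    return m[1] + (m[2] || '');
+}
+
+scaleAndTrim(4096);   // "12.5"   (4096 * 100 / 32768 = 12.5 V)
+scaleAndTrim(4100);   // "12.51"  (12.5122... trimmed to two decimal places)
+```
+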
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Battery: Device charge in warning state |<p>-</p> |`{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Device charge in critical state</p> |
-|Battery: Device charge in critical state |<p>-</p> |`{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.CRIT}` |HIGH | |
-|Battery: Low battery voltage (below {#VOLTAGE.MIN.WARN}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m)</p> |
-|Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.CRIT}` |HIGH | |
-|Battery: High battery voltage (over {#VOLTAGE.MAX.WARN}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m)</p> |
-|Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.CRIT}` |HIGH | |
-|Load: Device load in warning state |<p>-</p> |`{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"lvdWarning"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"override"}` |WARNING |<p>**Depends on**:</p><p>- Load: Device load in critical state</p> |
-|Load: Device load in critical state |<p>-</p> |`{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"lvd"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"fault"}` |HIGH | |
-|Status: Device has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:status.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|Status: Failed to fetch data (or no data for 5m) |<p>Zabbix has not received data for items for the last 5 minutes</p> |`{TEMPLATE_NAME:status.uptime.nodata(5m)}=1` |WARNING |<p>Manual close: YES</p> |
-|Status: Device has "overcurrent" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"overcurrent","like")}=2` |HIGH | |
-|Status: Device has "mosfetSShorted" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"mosfetSShorted","like")}=2` |HIGH | |
-|Status: Device has "softwareFault" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"softwareFault","like")}=2` |HIGH | |
-|Status: Device has "batteryHvd" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"batteryHvd","like")}=2` |HIGH | |
-|Status: Device has "arrayHvd" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"arrayHvd","like")}=2` |HIGH | |
-|Status: Device has "customSettingsEdit" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"customSettingsEdit","like")}=2` |HIGH | |
-|Status: Device has "rtsShorted" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"rtsShorted","like")}=2` |HIGH | |
-|Status: Device has "rtsNoLongerValid" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"rtsNoLongerValid","like")}=2` |HIGH | |
-|Status: Device has "localTempSensorDamaged" array faults flag |<p>-</p> |`{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"localTempSensorDamaged","like")}=2` |HIGH | |
-|Status: Device has "externalShortCircuit" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"externalShortCircuit","like")}=2` |HIGH | |
-|Status: Device has "overcurrent" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"overcurrent","like")}=2` |HIGH | |
-|Status: Device has "mosfetShorted" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"mosfetShorted","like")}=2` |HIGH | |
-|Status: Device has "software" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"software","like")}=2` |HIGH | |
-|Status: Device has "loadHvd" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"loadHvd","like")}=2` |HIGH | |
-|Status: Device has "highTempDisconnect" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"highTempDisconnect","like")}=2` |HIGH | |
-|Status: Device has "customSettingsEdit" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"customSettingsEdit","like")}=2` |HIGH | |
-|Status: Device has "unknownLoadFault" load faults flag |<p>-</p> |`{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"unknownLoadFault","like")}=2` |HIGH | |
-|Status: Device has "rtsShorted" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsShorted","like")}=2` |WARNING | |
-|Status: Device has "rtsDisconnected" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsDisconnected","like")}=2` |WARNING | |
-|Status: Device has "heatsinkTempSensorOpen" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorOpen","like")}=2` |WARNING | |
-|Status: Device has "heatsinkTempSensorShorted" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorShorted","like")}=2` |WARNING | |
-|Status: Device has "sspptHot" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"sspptHot","like")}=2` |WARNING | |
-|Status: Device has "currentLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentLimit","like")}=2` |WARNING | |
-|Status: Device has "currentOffset" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentOffset","like")}=2` |WARNING | |
-|Status: Device has "uncalibrated" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"uncalibrated","like")}=2` |WARNING | |
-|Status: Device has "rtsMiswire" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsMiswire","like")}=2` |WARNING | |
-|Status: Device has "systemMiswire" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"systemMiswire","like")}=2` |WARNING | |
-|Status: Device has "mosfetSOpen" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"mosfetSOpen","like")}=2` |WARNING | |
-|Status: Device has "p12VoltageReferenceOff" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p12VoltageReferenceOff","like")}=2` |WARNING | |
-|Status: Device has "highVaCurrentLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highVaCurrentLimit","like")}=2` |WARNING | |
-|Temperature: Low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.WARN}` |WARNING |<p>**Depends on**:</p><p>- Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m)</p> |
-|Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.CRIT}` |HIGH | |
-|Temperature: High battery temperature (over {$BATTERY.TEMP.MAX.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m)</p> |
-|Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.CRIT}` |HIGH | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|------------------------------------------------------------------------------------------|----------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------------------------------------------|
+| Battery: Device charge in warning state | <p>-</p> | `{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Device charge in critical state</p> |
+| Battery: Device charge in critical state | <p>-</p> | `{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.CRIT}` | HIGH | |
+| Battery: Low battery voltage (below {#VOLTAGE.MIN.WARN}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m)</p> |
+| Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.CRIT}` | HIGH | |
+| Battery: High battery voltage (over {#VOLTAGE.MAX.WARN}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m)</p> |
+| Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.CRIT}` | HIGH | |
+| Load: Device load in warning state | <p>-</p> | `{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"lvdWarning"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"override"}` | WARNING | <p>**Depends on**:</p><p>- Load: Device load in critical state</p> |
+| Load: Device load in critical state | <p>-</p> | `{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"lvd"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"fault"}` | HIGH | |
+| Status: Device has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:status.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| Status: Failed to fetch data (or no data for 5m) | <p>Zabbix has not received data for items for the last 5 minutes</p> | `{TEMPLATE_NAME:status.uptime.nodata(5m)}=1` | WARNING | <p>Manual close: YES</p> |
+| Status: Device has "overcurrent" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"overcurrent","like")}=2` | HIGH | |
+| Status: Device has "mosfetSShorted" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"mosfetSShorted","like")}=2` | HIGH | |
+| Status: Device has "softwareFault" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"softwareFault","like")}=2` | HIGH | |
+| Status: Device has "batteryHvd" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"batteryHvd","like")}=2` | HIGH | |
+| Status: Device has "arrayHvd" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"arrayHvd","like")}=2` | HIGH | |
+| Status: Device has "customSettingsEdit" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"customSettingsEdit","like")}=2` | HIGH | |
+| Status: Device has "rtsShorted" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"rtsShorted","like")}=2` | HIGH | |
+| Status: Device has "rtsNoLongerValid" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"rtsNoLongerValid","like")}=2` | HIGH | |
+| Status: Device has "localTempSensorDamaged" array faults flag | <p>-</p> | `{TEMPLATE_NAME:status.array_faults[arrayFaults.0].count(#3,"localTempSensorDamaged","like")}=2` | HIGH | |
+| Status: Device has "externalShortCircuit" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"externalShortCircuit","like")}=2` | HIGH | |
+| Status: Device has "overcurrent" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"overcurrent","like")}=2` | HIGH | |
+| Status: Device has "mosfetShorted" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"mosfetShorted","like")}=2` | HIGH | |
+| Status: Device has "software" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"software","like")}=2` | HIGH | |
+| Status: Device has "loadHvd" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"loadHvd","like")}=2` | HIGH | |
+| Status: Device has "highTempDisconnect" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"highTempDisconnect","like")}=2` | HIGH | |
+| Status: Device has "customSettingsEdit" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"customSettingsEdit","like")}=2` | HIGH | |
+| Status: Device has "unknownLoadFault" load faults flag | <p>-</p> | `{TEMPLATE_NAME:status.load_faults[loadFaults.0].count(#3,"unknownLoadFault","like")}=2` | HIGH | |
+| Status: Device has "rtsShorted" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsShorted","like")}=2` | WARNING | |
+| Status: Device has "rtsDisconnected" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsDisconnected","like")}=2` | WARNING | |
+| Status: Device has "heatsinkTempSensorOpen" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorOpen","like")}=2` | WARNING | |
+| Status: Device has "heatsinkTempSensorShorted" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorShorted","like")}=2` | WARNING | |
+| Status: Device has "sspptHot" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"sspptHot","like")}=2` | WARNING | |
+| Status: Device has "currentLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentLimit","like")}=2` | WARNING | |
+| Status: Device has "currentOffset" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentOffset","like")}=2` | WARNING | |
+| Status: Device has "uncalibrated" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"uncalibrated","like")}=2` | WARNING | |
+| Status: Device has "rtsMiswire" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsMiswire","like")}=2` | WARNING | |
+| Status: Device has "systemMiswire" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"systemMiswire","like")}=2` | WARNING | |
+| Status: Device has "mosfetSOpen" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"mosfetSOpen","like")}=2` | WARNING | |
+| Status: Device has "p12VoltageReferenceOff" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p12VoltageReferenceOff","like")}=2` | WARNING | |
+| Status: Device has "highVaCurrentLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highVaCurrentLimit","like")}=2` | WARNING | |
+| Temperature: Low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.WARN}` | WARNING | <p>**Depends on**:</p><p>- Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m)</p> |
+| Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.CRIT}` | HIGH | |
+| Temperature: High battery temperature (over {$BATTERY.TEMP.MAX.WARN}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m)</p> |
+| Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.CRIT}` | HIGH | |
## Feedback
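All of the fault- and alarm-flag triggers above follow one pattern: the preprocessed item stores a text list of the currently active flags, and `count(#3,"<flag>","like")=2` fires when exactly two of the last three stored values contain the flag substring. A minimal Python sketch of that evaluation (the flag name and sample history are illustrative, not taken from the template):

```python
# Sketch of the count(#3, "<flag>", "like") = 2 pattern used by the flag triggers above:
# "like" is a substring match, #3 limits the check to the last three stored values.

def count_like(history, needle, n=3):
    """Approximate Zabbix count(#n, needle, "like") for a text item."""
    return sum(1 for value in history[-n:] if needle in value)

# Illustrative history of a preprocessed faults item (oldest first, newest last).
history = ["", "overcurrent", "overcurrent"]

# Trigger condition from the table: count(#3,"overcurrent","like") = 2
print(count_like(history, "overcurrent") == 2)  # True -> trigger goes into PROBLEM
```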
diff --git a/templates/net/morningstar_snmp/suresine_snmp/README.md b/templates/net/morningstar_snmp/suresine_snmp/README.md
index 6c7831d4891..1803d39a9ff 100644
--- a/templates/net/morningstar_snmp/suresine_snmp/README.md
+++ b/templates/net/morningstar_snmp/suresine_snmp/README.md
@@ -3,11 +3,11 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
Refer to the vendor documentation.
@@ -17,23 +17,23 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$BATTERY.TEMP.MAX.CRIT} |<p>Battery high temperature critical value</p> |`60` |
-|{$BATTERY.TEMP.MAX.WARN} |<p>Battery high temperature warning value</p> |`45` |
-|{$BATTERY.TEMP.MIN.CRIT} |<p>Battery low temperature critical value</p> |`-20` |
-|{$BATTERY.TEMP.MIN.WARN} |<p>Battery low temperature warning value</p> |`0` |
-|{$CHARGE.STATE.CRIT} |<p>fault</p> |`4` |
-|{$CHARGE.STATE.WARN} |<p>disconnect</p> |`2` |
-|{$LOAD.STATE.CRIT:"fault"} |<p>fault</p> |`4` |
-|{$LOAD.STATE.CRIT:"lvd"} |<p>lvd</p> |`3` |
-|{$LOAD.STATE.WARN:"disconnect"} |<p>disconnect</p> |`5` |
-|{$LOAD.STATE.WARN:"lvdWarning"} |<p>lvdWarning</p> |`2` |
-|{$LOAD.STATE.WARN:"override"} |<p>override</p> |`7` |
-|{$VOLTAGE.MAX.CRIT} | |`` |
-|{$VOLTAGE.MAX.WARN} | |`` |
-|{$VOLTAGE.MIN.CRIT} | |`` |
-|{$VOLTAGE.MIN.WARN} | |`` |
+| Name | Description | Default |
+|---------------------------------|------------------------------------------------|---------|
+| {$BATTERY.TEMP.MAX.CRIT} | <p>Battery high temperature critical value</p> | `60` |
+| {$BATTERY.TEMP.MAX.WARN} | <p>Battery high temperature warning value</p> | `45` |
+| {$BATTERY.TEMP.MIN.CRIT} | <p>Battery low temperature critical value</p> | `-20` |
+| {$BATTERY.TEMP.MIN.WARN} | <p>Battery low temperature warning value</p> | `0` |
+| {$CHARGE.STATE.CRIT} | <p>fault</p> | `4` |
+| {$CHARGE.STATE.WARN} | <p>disconnect</p> | `2` |
+| {$LOAD.STATE.CRIT:"fault"} | <p>fault</p> | `4` |
+| {$LOAD.STATE.CRIT:"lvd"} | <p>lvd</p> | `3` |
+| {$LOAD.STATE.WARN:"disconnect"} | <p>disconnect</p> | `5` |
+| {$LOAD.STATE.WARN:"lvdWarning"} | <p>lvdWarning</p> | `2` |
+| {$LOAD.STATE.WARN:"override"} | <p>override</p> | `7` |
+| {$VOLTAGE.MAX.CRIT} | | `` |
+| {$VOLTAGE.MAX.WARN} | | `` |
+| {$VOLTAGE.MIN.CRIT} | | `` |
+| {$VOLTAGE.MIN.WARN} | | `` |
## Template links
@@ -41,47 +41,47 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Battery voltage discovery |<p>Discovery for battery voltage triggers</p> |DEPENDENT |battery.voltage.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Name | Description | Type | Key and additional info |
+|---------------------------|-----------------------------------------------|-----------|---------------------------------------------------------------------------------------------------------------------|
+| Battery voltage discovery | <p>Discovery for battery voltage triggers</p> | DEPENDENT | battery.voltage.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Battery |Battery: Voltage{#SINGLETON} |<p>MIB: SURESINE</p><p>Description:Battery Voltage(slow)</p><p>Scaling Factor:0.0002581787109375</p><p>Units:V</p><p>Range:[0.0, 17.0]</p><p>Modbus address:0x0004</p> |SNMP |battery.voltage[batteryVoltageSlow.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `2.581787109375E-4`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Load |Load: State |<p>MIB: SURESINE</p><p>Description:Load State</p><p>Modbus address:0x000B</p><p> 0: Start</p><p>1: LoadOn</p><p>2: LvdWarning</p><p>3: LowVoltageDisconnect</p><p>4: Fault</p><p>5: Disconnect</p><p>6: NormalOff</p><p>7: UnknownState</p><p>8: Standby</p> |SNMP |load.state[loadState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Load |Load: A/C Current |<p>MIB: SURESINE</p><p>Description:AC Output Current</p><p>Scaling Factor:0.0001953125</p><p>Units:A</p><p>Range:[0.0, 17]</p><p>Modbus address:0x0005</p> |SNMP |load.ac_current[acCurrent.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1.953125E-4`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Status |Status: Uptime |<p>Device uptime in seconds</p> |SNMP |status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
-|Status |Status: Faults |<p>MIB: SURESINE</p><p>Description:Faults</p><p>Modbus address:0x0007</p> |SNMP |status.faults[faults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Status |Status: Alarms |<p>MIB: SURESINE</p><p>Description:Faults</p><p>Modbus address:0x0007</p> |SNMP |status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Temperature |Temperature: Heatsink |<p>MIB: SURESINE</p><p>Description:Heatsink Temperature</p><p>Scaling Factor:1</p><p>Units:C</p><p>Range:[-128, 127]</p><p>Modbus address:0x0006</p> |SNMP |temp.heatsink[heatsinkTemperature.0] |
-|Zabbix_raw_items |Battery: Battery Voltage discovery |<p>MIB: SURESINE</p> |SNMP |battery.voltage.discovery[batteryVoltageSlow.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `2.581787109375E-4`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Battery | Battery: Voltage{#SINGLETON} | <p>MIB: SURESINE</p><p>Description:Battery Voltage(slow)</p><p>Scaling Factor:0.0002581787109375</p><p>Units:V</p><p>Range:[0.0, 17.0]</p><p>Modbus address:0x0004</p> | SNMP | battery.voltage[batteryVoltageSlow.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `2.581787109375E-4`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Load | Load: State | <p>MIB: SURESINE</p><p>Description:Load State</p><p>Modbus address:0x000B</p><p> 0: Start</p><p>1: LoadOn</p><p>2: LvdWarning</p><p>3: LowVoltageDisconnect</p><p>4: Fault</p><p>5: Disconnect</p><p>6: NormalOff</p><p>7: UnknownState</p><p>8: Standby</p> | SNMP | load.state[loadState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Load | Load: A/C Current | <p>MIB: SURESINE</p><p>Description:AC Output Current</p><p>Scaling Factor:0.0001953125</p><p>Units:A</p><p>Range:[0.0, 17]</p><p>Modbus address:0x0005</p> | SNMP | load.ac_current[acCurrent.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1.953125E-4`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Status | Status: Uptime | <p>Device uptime in seconds</p> | SNMP | status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
+| Status | Status: Faults | <p>MIB: SURESINE</p><p>Description:Faults</p><p>Modbus address:0x0007</p> | SNMP | status.faults[faults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Status | Status: Alarms | <p>MIB: SURESINE</p><p>Description:Faults</p><p>Modbus address:0x0007</p> | SNMP | status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Temperature | Temperature: Heatsink | <p>MIB: SURESINE</p><p>Description:Heatsink Temperature</p><p>Scaling Factor:1</p><p>Units:C</p><p>Range:[-128, 127]</p><p>Modbus address:0x0006</p> | SNMP | temp.heatsink[heatsinkTemperature.0] |
+| Zabbix_raw_items | Battery: Battery Voltage discovery | <p>MIB: SURESINE</p> | SNMP | battery.voltage.discovery[batteryVoltageSlow.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `2.581787109375E-4`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
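The battery-voltage item above is a raw device register read over SNMP; its MULTIPLIER step applies the documented scaling factor (0.0002581787109375 V per count) and the REGEX step (pattern `^(\d+)(\.\d{1,2})?`, output `\1\2`) keeps at most two decimal places, while uptime is converted from hundredths of a second with a 0.01 multiplier. A rough Python equivalent of that preprocessing chain (the raw register value below is made up for illustration):

```python
import re

# Rough equivalent of the preprocessing on battery.voltage[batteryVoltageSlow.0...]:
# MULTIPLIER (scaling factor from the item description), then REGEX output "\1\2".
SCALE = 2.581787109375e-4

def preprocess_voltage(raw_register):
    scaled = str(raw_register * SCALE)              # MULTIPLIER step
    match = re.match(r"(\d+)(\.\d{1,2})?", scaled)  # REGEX step: keep <= 2 decimals
    return float(match.group(1) + (match.group(2) or ""))

print(preprocess_voltage(46500))  # ~12.0 V (illustrative raw value)

# Uptime preprocessing: the device reports hundredths of a second, MULTIPLIER 0.01
# turns them into seconds.
print(123456 * 0.01)              # 1234.56 s
```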
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Battery: Low battery voltage (below {#VOLTAGE.MIN.WARN}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltageSlow.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m)</p> |
-|Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltageSlow.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.CRIT}` |HIGH | |
-|Battery: High battery voltage (over {#VOLTAGE.MAX.WARN}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltageSlow.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m)</p> |
-|Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltageSlow.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.CRIT}` |HIGH | |
-|Load: Device load in warning state |<p>-</p> |`{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"lvdWarning"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"override"}` |WARNING |<p>**Depends on**:</p><p>- Load: Device load in critical state</p> |
-|Load: Device load in critical state |<p>-</p> |`{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"lvd"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"fault"}` |HIGH | |
-|Status: Device has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:status.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|Status: Failed to fetch data (or no data for 5m) |<p>Zabbix has not received data for items for the last 5 minutes</p> |`{TEMPLATE_NAME:status.uptime.nodata(5m)}=1` |WARNING |<p>Manual close: YES</p> |
-|Status: Device has "reset" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"reset","like")}=2` |HIGH | |
-|Status: Device has "overcurrent" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"overcurrent","like")}=2` |HIGH | |
-|Status: Device has "unknownFault" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"unknownFault","like")}=2` |HIGH | |
-|Status: Device has "software" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"software","like")}=2` |HIGH | |
-|Status: Device has "highVoltageDisconnect" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"highVoltageDisconnect","like")}=2` |HIGH | |
-|Status: Device has "suresineHot" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"suresineHot","like")}=2` |HIGH | |
-|Status: Device has "dipSwitchChanged" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"dipSwitchChanged","like")}=2` |HIGH | |
-|Status: Device has "customSettingsEdit" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"customSettingsEdit","like")}=2` |HIGH | |
-|Status: Device has "heatsinkTempSensorOpen" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorOpen","like")}=2` |WARNING | |
-|Status: Device has "heatsinkTempSensorShort" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorShort","like")}=2` |WARNING | |
-|Status: Device has "unknownAlarm" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"unknownAlarm","like")}=2` |WARNING | |
-|Status: Device has "suresineHot" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"suresineHot","like")}=2` |WARNING | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-----------------------------------------------------------------------------|----------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------|
+| Battery: Low battery voltage (below {#VOLTAGE.MIN.WARN}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltageSlow.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m)</p> |
+| Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltageSlow.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.CRIT}` | HIGH | |
+| Battery: High battery voltage (over {#VOLTAGE.MAX.WARN}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltageSlow.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m)</p> |
+| Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltageSlow.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.CRIT}` | HIGH | |
+| Load: Device load in warning state | <p>-</p> | `{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"lvdWarning"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.WARN:"override"}` | WARNING | <p>**Depends on**:</p><p>- Load: Device load in critical state</p> |
+| Load: Device load in critical state | <p>-</p> | `{TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"lvd"} or {TEMPLATE_NAME:load.state[loadState.0].last()}={$LOAD.STATE.CRIT:"fault"}` | HIGH | |
+| Status: Device has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:status.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| Status: Failed to fetch data (or no data for 5m) | <p>Zabbix has not received data for items for the last 5 minutes</p> | `{TEMPLATE_NAME:status.uptime.nodata(5m)}=1` | WARNING | <p>Manual close: YES</p> |
+| Status: Device has "reset" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"reset","like")}=2` | HIGH | |
+| Status: Device has "overcurrent" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"overcurrent","like")}=2` | HIGH | |
+| Status: Device has "unknownFault" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"unknownFault","like")}=2` | HIGH | |
+| Status: Device has "software" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"software","like")}=2` | HIGH | |
+| Status: Device has "highVoltageDisconnect" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"highVoltageDisconnect","like")}=2` | HIGH | |
+| Status: Device has "suresineHot" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"suresineHot","like")}=2` | HIGH | |
+| Status: Device has "dipSwitchChanged" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"dipSwitchChanged","like")}=2` | HIGH | |
+| Status: Device has "customSettingsEdit" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"customSettingsEdit","like")}=2` | HIGH | |
+| Status: Device has "heatsinkTempSensorOpen" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorOpen","like")}=2` | WARNING | |
+| Status: Device has "heatsinkTempSensorShort" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorShort","like")}=2` | WARNING | |
+| Status: Device has "unknownAlarm" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"unknownAlarm","like")}=2` | WARNING | |
+| Status: Device has "suresineHot" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"suresineHot","like")}=2` | WARNING | |
## Feedback
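The two load-state triggers in the table above compare the last value of `load.state[loadState.0]` against user macros with context: `{$LOAD.STATE.WARN:"lvdWarning"}` (2), `{$LOAD.STATE.WARN:"override"}` (7), `{$LOAD.STATE.CRIT:"lvd"}` (3) and `{$LOAD.STATE.CRIT:"fault"}` (4), with the warning trigger depending on the critical one. A small Python sketch of that logic using only the defaults from the macros table (the dictionaries and function name are illustrative):

```python
# Default values of the context macros from the "Macros used" table.
LOAD_STATE_WARN = {"lvdWarning": 2, "disconnect": 5, "override": 7}
LOAD_STATE_CRIT = {"lvd": 3, "fault": 4}

def load_state_severity(last_value):
    """Mirror the two load-state triggers; the warning depends on the critical."""
    if last_value in (LOAD_STATE_CRIT["lvd"], LOAD_STATE_CRIT["fault"]):
        return "HIGH"      # Load: Device load in critical state
    if last_value in (LOAD_STATE_WARN["lvdWarning"], LOAD_STATE_WARN["override"]):
        return "WARNING"   # Load: Device load in warning state
    return "OK"

print(load_state_severity(3))  # HIGH
print(load_state_severity(2))  # WARNING
print(load_state_severity(1))  # OK (loadOn in the Load: State enumeration above)
```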
diff --git a/templates/net/morningstar_snmp/tristar_mppt_600V_snmp/README.md b/templates/net/morningstar_snmp/tristar_mppt_600V_snmp/README.md
index 09f04abdc5e..9203c875ffe 100644
--- a/templates/net/morningstar_snmp/tristar_mppt_600V_snmp/README.md
+++ b/templates/net/morningstar_snmp/tristar_mppt_600V_snmp/README.md
@@ -3,11 +3,11 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
Refer to the vendor documentation.
@@ -17,23 +17,23 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$BATTERY.TEMP.MAX.CRIT} |<p>Battery high temperature critical value</p> |`60` |
-|{$BATTERY.TEMP.MAX.WARN} |<p>Battery high temperature warning value</p> |`45` |
-|{$BATTERY.TEMP.MIN.CRIT} |<p>Battery low temperature critical value</p> |`-20` |
-|{$BATTERY.TEMP.MIN.WARN} |<p>Battery low temperature warning value</p> |`0` |
-|{$CHARGE.STATE.CRIT} |<p>fault</p> |`4` |
-|{$CHARGE.STATE.WARN} |<p>disconnect</p> |`2` |
-|{$LOAD.STATE.CRIT:"fault"} |<p>fault</p> |`4` |
-|{$LOAD.STATE.CRIT:"lvd"} |<p>lvd</p> |`3` |
-|{$LOAD.STATE.WARN:"disconnect"} |<p>disconnect</p> |`5` |
-|{$LOAD.STATE.WARN:"lvdWarning"} |<p>lvdWarning</p> |`2` |
-|{$LOAD.STATE.WARN:"override"} |<p>override</p> |`7` |
-|{$VOLTAGE.MAX.CRIT} | |`` |
-|{$VOLTAGE.MAX.WARN} | |`` |
-|{$VOLTAGE.MIN.CRIT} | |`` |
-|{$VOLTAGE.MIN.WARN} | |`` |
+| Name | Description | Default |
+|---------------------------------|------------------------------------------------|---------|
+| {$BATTERY.TEMP.MAX.CRIT} | <p>Battery high temperature critical value</p> | `60` |
+| {$BATTERY.TEMP.MAX.WARN} | <p>Battery high temperature warning value</p> | `45` |
+| {$BATTERY.TEMP.MIN.CRIT} | <p>Battery low temperature critical value</p> | `-20` |
+| {$BATTERY.TEMP.MIN.WARN} | <p>Battery low temperature warning value</p> | `0` |
+| {$CHARGE.STATE.CRIT} | <p>fault</p> | `4` |
+| {$CHARGE.STATE.WARN} | <p>disconnect</p> | `2` |
+| {$LOAD.STATE.CRIT:"fault"} | <p>fault</p> | `4` |
+| {$LOAD.STATE.CRIT:"lvd"} | <p>lvd</p> | `3` |
+| {$LOAD.STATE.WARN:"disconnect"} | <p>disconnect</p> | `5` |
+| {$LOAD.STATE.WARN:"lvdWarning"} | <p>lvdWarning</p> | `2` |
+| {$LOAD.STATE.WARN:"override"} | <p>override</p> | `7` |
+| {$VOLTAGE.MAX.CRIT} | | `` |
+| {$VOLTAGE.MAX.WARN} | | `` |
+| {$VOLTAGE.MIN.CRIT} | | `` |
+| {$VOLTAGE.MIN.WARN} | | `` |
## Template links
@@ -41,106 +41,106 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Battery voltage discovery |<p>Discovery for battery voltage triggers</p> |DEPENDENT |battery.voltage.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Name | Description | Type | Key and additional info |
+|---------------------------|-----------------------------------------------|-----------|---------------------------------------------------------------------------------------------------------------------|
+| Battery voltage discovery | <p>Discovery for battery voltage triggers</p> | DEPENDENT | battery.voltage.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
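The discovery rule above is a DEPENDENT item whose JavaScript preprocessing is elided in this README ("Text is too long"); judging by the {#SINGLETON} and {#VOLTAGE.MIN/MAX.*} macros used in the discovered triggers further down, it presumably emits a single low-level-discovery row carrying per-host voltage thresholds, with the empty {$VOLTAGE.*} user macros in the table above acting as optional overrides. A purely hypothetical Python sketch of the shape of such LLD output; the threshold derivation and numbers are invented, only the JSON structure follows the usual Zabbix LLD format:

```python
import json

# Hypothetical shape of the "Battery voltage discovery" LLD output: one row whose
# macros fill {#SINGLETON} and the {#VOLTAGE.*} thresholds used by the discovered
# battery-voltage triggers. The derivation below is illustrative, not the template's.
def discover_battery_voltage(nominal_voltage=12.0):
    row = {
        "{#SINGLETON}": "",
        "{#VOLTAGE.MIN.CRIT}": round(nominal_voltage * 0.90, 2),
        "{#VOLTAGE.MIN.WARN}": round(nominal_voltage * 0.95, 2),
        "{#VOLTAGE.MAX.WARN}": round(nominal_voltage * 1.20, 2),
        "{#VOLTAGE.MAX.CRIT}": round(nominal_voltage * 1.25, 2),
    }
    return json.dumps([row])

print(discover_battery_voltage())
```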
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Array |Array: Voltage |<p>MIB: TRISTAR-MPPT</p><p>Description:Array Voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[-10, 650]</p><p>Modbus address:0x001b</p> |SNMP |array.voltage[arrayVoltage.0] |
-|Array |Array: Array Current |<p>MIB: TRISTAR-MPPT</p><p>Description:Array Current</p><p>Scaling Factor:1.0</p><p>Units:A</p><p>Range:[-10, 80]</p><p>Modbus address:0x001d</p> |SNMP |array.current[arrayCurrent.0] |
-|Array |Array: Sweep Vmp |<p>MIB: TRISTAR-MPPT</p><p>Description:Vmp (last sweep)</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[-10, 650.0]</p><p>Modbus address:0x003d</p> |SNMP |array.sweep_vmp[arrayVmpLastSweep.0] |
-|Array |Array: Sweep Voc |<p>MIB: TRISTAR-MPPT</p><p>Description:Voc (last sweep)</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[-10, 650.0]</p><p>Modbus address:0x003e</p> |SNMP |array.sweep_voc[arrayVocLastSweep.0] |
-|Array |Array: Sweep Pmax |<p>MIB: TRISTAR-MPPT</p><p>Description:Pmax (last sweep)</p><p>Scaling Factor:1.0</p><p>Units:W</p><p>Range:[-10, 5000]</p><p>Modbus address:0x003c</p> |SNMP |array.sweep_pmax[arrayPmaxLastSweep.0] |
-|Battery |Battery: Charge State |<p>MIB: TRISTAR-MPPT</p><p>Description:Charge State</p><p>Modbus address:0x0032</p><p>0: Start</p><p>1: NightCheck</p><p>2: Disconnect</p><p>3: Night</p><p>4: Fault</p><p>5: Mppt</p><p>6: Absorption</p><p>7: Float</p><p>8: Equalize</p><p>9: Slave</p><p>10: Fixed</p> |SNMP |charge.state[chargeState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Battery |Battery: Target Voltage |<p>MIB: TRISTAR-MPPT</p><p>Description:Target Voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[-10, 650.0]</p><p>Modbus address:0x0033</p> |SNMP |target.voltage[targetRegulationVoltage.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Battery |Battery: Charge Current |<p>MIB: TRISTAR-MPPT</p><p>Description:Battery Current</p><p>Scaling Factor:1.0</p><p>Units:A</p><p>Range:[-10, 80]</p><p>Modbus address:0x001c</p> |SNMP |charge.current[batteryCurrent.0] |
-|Battery |Battery: Output Power |<p>MIB: TRISTAR-MPPT</p><p>Description:Output Power</p><p>Scaling Factor:1.0</p><p>Units:W</p><p>Range:[-10, 4000]</p><p>Modbus address:0x003a</p> |SNMP |charge.output_power[ outputPower.0] |
-|Battery |Battery: Voltage{#SINGLETON} |<p>MIB: TRISTAR-MPPT</p><p>Description:Battery voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[-10, 80]</p><p>Modbus address:0x0018</p> |SNMP |battery.voltage[batteryVoltage.0{#SINGLETON}] |
-|Counter |Counter: Charge Amp-hours |<p>MIB: TRISTAR-MPPT</p><p>Description:Ah Charge Resettable</p><p>Scaling Factor:1.0</p><p>Units:Ah</p><p>Range:[0.0, 5000]</p><p>Modbus addresses:H=0x0034 L=0x0035</p> |SNMP |counter.charge_amp_hours[ahChargeResetable.0] |
-|Counter |Counter: Charge KW-hours |<p>MIB: TRISTAR-MPPT</p><p>Description:kWh Charge Resettable</p><p>Scaling Factor:1.0</p><p>Units:kWh</p><p>Range:[0.0, 65535.0]</p><p>Modbus address:0x0038</p> |SNMP |counter.charge_kw_hours[kwhChargeResetable.0] |
-|Status |Status: Uptime |<p>Device uptime in seconds</p> |SNMP |status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
-|Status |Status: Faults |<p>MIB: TRISTAR-MPPT</p><p>Description:Faults</p><p>Modbus addresses:H=0x002c L=0x002d</p> |SNMP |status.faults[faults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Status |Status: Alarms |<p>MIB: TRISTAR-MPPT</p><p>Description:Alarms</p><p>Modbus addresses:H=0x002e L=0x002f</p> |SNMP |status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Temperature |Temperature: Battery |<p>MIB: TRISTAR-MPPT</p><p>Description:Batt. Temp</p><p>Scaling Factor:1.0</p><p>Units:C</p><p>Range:[-40, 80]</p><p>Modbus address:0x0025</p> |SNMP |temp.battery[batteryTemperature.0] |
-|Temperature |Temperature: Heatsink |<p>MIB: TRISTAR-MPPT</p><p>Description:HS Temp</p><p>Scaling Factor:1.0</p><p>Units:C</p><p>Range:[-40, 80]</p><p>Modbus address:0x0023</p> |SNMP |temp.heatsink[heatsinkTemperature.0] |
-|Zabbix_raw_items |Battery: Battery Voltage discovery |<p>MIB: TRISTAR-MPPT</p><p>Description:Battery voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[-10, 80]</p><p>Modbus address:0x0018</p> |SNMP |battery.voltage.discovery[batteryVoltage.0] |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Array | Array: Voltage | <p>MIB: TRISTAR-MPPT</p><p>Description:Array Voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[-10, 650]</p><p>Modbus address:0x001b</p> | SNMP | array.voltage[arrayVoltage.0] |
+| Array | Array: Array Current | <p>MIB: TRISTAR-MPPT</p><p>Description:Array Current</p><p>Scaling Factor:1.0</p><p>Units:A</p><p>Range:[-10, 80]</p><p>Modbus address:0x001d</p> | SNMP | array.current[arrayCurrent.0] |
+| Array | Array: Sweep Vmp | <p>MIB: TRISTAR-MPPT</p><p>Description:Vmp (last sweep)</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[-10, 650.0]</p><p>Modbus address:0x003d</p> | SNMP | array.sweep_vmp[arrayVmpLastSweep.0] |
+| Array | Array: Sweep Voc | <p>MIB: TRISTAR-MPPT</p><p>Description:Voc (last sweep)</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[-10, 650.0]</p><p>Modbus address:0x003e</p> | SNMP | array.sweep_voc[arrayVocLastSweep.0] |
+| Array | Array: Sweep Pmax | <p>MIB: TRISTAR-MPPT</p><p>Description:Pmax (last sweep)</p><p>Scaling Factor:1.0</p><p>Units:W</p><p>Range:[-10, 5000]</p><p>Modbus address:0x003c</p> | SNMP | array.sweep_pmax[arrayPmaxLastSweep.0] |
+| Battery | Battery: Charge State | <p>MIB: TRISTAR-MPPT</p><p>Description:Charge State</p><p>Modbus address:0x0032</p><p>0: Start</p><p>1: NightCheck</p><p>2: Disconnect</p><p>3: Night</p><p>4: Fault</p><p>5: Mppt</p><p>6: Absorption</p><p>7: Float</p><p>8: Equalize</p><p>9: Slave</p><p>10: Fixed</p> | SNMP | charge.state[chargeState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Battery | Battery: Target Voltage | <p>MIB: TRISTAR-MPPT</p><p>Description:Target Voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[-10, 650.0]</p><p>Modbus address:0x0033</p> | SNMP | target.voltage[targetRegulationVoltage.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Battery | Battery: Charge Current | <p>MIB: TRISTAR-MPPT</p><p>Description:Battery Current</p><p>Scaling Factor:1.0</p><p>Units:A</p><p>Range:[-10, 80]</p><p>Modbus address:0x001c</p> | SNMP | charge.current[batteryCurrent.0] |
+| Battery | Battery: Output Power | <p>MIB: TRISTAR-MPPT</p><p>Description:Output Power</p><p>Scaling Factor:1.0</p><p>Units:W</p><p>Range:[-10, 4000]</p><p>Modbus address:0x003a</p> | SNMP | charge.output_power[ outputPower.0] |
+| Battery | Battery: Voltage{#SINGLETON} | <p>MIB: TRISTAR-MPPT</p><p>Description:Battery voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[-10, 80]</p><p>Modbus address:0x0018</p> | SNMP | battery.voltage[batteryVoltage.0{#SINGLETON}] |
+| Counter | Counter: Charge Amp-hours | <p>MIB: TRISTAR-MPPT</p><p>Description:Ah Charge Resettable</p><p>Scaling Factor:1.0</p><p>Units:Ah</p><p>Range:[0.0, 5000]</p><p>Modbus addresses:H=0x0034 L=0x0035</p> | SNMP | counter.charge_amp_hours[ahChargeResetable.0] |
+| Counter | Counter: Charge KW-hours | <p>MIB: TRISTAR-MPPT</p><p>Description:kWh Charge Resettable</p><p>Scaling Factor:1.0</p><p>Units:kWh</p><p>Range:[0.0, 65535.0]</p><p>Modbus address:0x0038</p> | SNMP | counter.charge_kw_hours[kwhChargeResetable.0] |
+| Status | Status: Uptime | <p>Device uptime in seconds</p> | SNMP | status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
+| Status | Status: Faults | <p>MIB: TRISTAR-MPPT</p><p>Description:Faults</p><p>Modbus addresses:H=0x002c L=0x002d</p> | SNMP | status.faults[faults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Status | Status: Alarms | <p>MIB: TRISTAR-MPPT</p><p>Description:Alarms</p><p>Modbus addresses:H=0x002e L=0x002f</p> | SNMP | status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Temperature | Temperature: Battery | <p>MIB: TRISTAR-MPPT</p><p>Description:Batt. Temp</p><p>Scaling Factor:1.0</p><p>Units:C</p><p>Range:[-40, 80]</p><p>Modbus address:0x0025</p> | SNMP | temp.battery[batteryTemperature.0] |
+| Temperature | Temperature: Heatsink | <p>MIB: TRISTAR-MPPT</p><p>Description:HS Temp</p><p>Scaling Factor:1.0</p><p>Units:C</p><p>Range:[-40, 80]</p><p>Modbus address:0x0023</p> | SNMP | temp.heatsink[heatsinkTemperature.0] |
+| Zabbix_raw_items | Battery: Battery Voltage discovery | <p>MIB: TRISTAR-MPPT</p><p>Description:Battery voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[-10, 80]</p><p>Modbus address:0x0018</p> | SNMP | battery.voltage.discovery[batteryVoltage.0] |
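Several items above (charge state, target voltage, faults, alarms) throttle their history with `DISCARD_UNCHANGED_HEARTBEAT: 1h`: a new value is stored only if it differs from the previously stored one or if an hour has passed since that value was stored. A compact Python sketch of this throttling rule (class name and sample values are illustrative):

```python
from datetime import datetime, timedelta

HEARTBEAT = timedelta(hours=1)  # the "1h" parameter of the preprocessing step

class DiscardUnchangedHeartbeat:
    """Keep a value only if it changed, or if the heartbeat interval elapsed."""
    def __init__(self):
        self.last_value = None
        self.last_stored = None

    def accept(self, value, now):
        if (self.last_value != value
                or self.last_stored is None
                or now - self.last_stored >= HEARTBEAT):
            self.last_value, self.last_stored = value, now
            return True   # store the value
        return False      # discard it

throttle = DiscardUnchangedHeartbeat()
t0 = datetime(2021, 4, 22, 12, 0)
print(throttle.accept("Mppt", t0))                         # True  (first value)
print(throttle.accept("Mppt", t0 + timedelta(minutes=1)))  # False (unchanged, <1h)
print(throttle.accept("Mppt", t0 + timedelta(hours=2)))    # True  (heartbeat elapsed)
```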
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Battery: Device charge in warning state |<p>-</p> |`{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Device charge in critical state</p> |
-|Battery: Device charge in critical state |<p>-</p> |`{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.CRIT}` |HIGH | |
-|Battery: Low battery voltage (below {#VOLTAGE.MIN.WARN}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m)</p> |
-|Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.CRIT}` |HIGH | |
-|Battery: High battery voltage (over {#VOLTAGE.MAX.WARN}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m)</p> |
-|Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.CRIT}` |HIGH | |
-|Status: Device has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:status.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|Status: Failed to fetch data (or no data for 5m) |<p>Zabbix has not received data for items for the last 5 minutes</p> |`{TEMPLATE_NAME:status.uptime.nodata(5m)}=1` |WARNING |<p>Manual close: YES</p> |
-|Status: Device has "overcurrent" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"overcurrent","like")}=2` |HIGH | |
-|Status: Device has "fetShort" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fetShort","like")}=2` |HIGH | |
-|Status: Device has "softwareFault" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"softwareFault","like")}=2` |HIGH | |
-|Status: Device has "batteryHvd" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"batteryHvd","like")}=2` |HIGH | |
-|Status: Device has "arrayHvd" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"arrayHvd","like")}=2` |HIGH | |
-|Status: Device has "dipSwitchChange" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"dipSwitchChange","like")}=2` |HIGH | |
-|Status: Device has "customSettingsEdit" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"customSettingsEdit","like")}=2` |HIGH | |
-|Status: Device has "rtsShorted" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"rtsShorted","like")}=2` |HIGH | |
-|Status: Device has "rtsDisconnected" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"rtsDisconnected","like")}=2` |HIGH | |
-|Status: Device has "eepromRetryLimit" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"eepromRetryLimit","like")}=2` |HIGH | |
-|Status: Device has "controllerWasReset" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"controllerWasReset","like")}=2` |HIGH | |
-|Status: Device has "chargeSlaveControlTimeout" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"chargeSlaveControlTimeout","like")}=2` |HIGH | |
-|Status: Device has "rs232SerialToMeterBridge" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"rs232SerialToMeterBridge","like")}=2` |HIGH | |
-|Status: Device has "batteryLvd" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"batteryLvd","like")}=2` |HIGH | |
-|Status: Device has "powerboardCommunicationFault" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"powerboardCommunicationFault","like")}=2` |HIGH | |
-|Status: Device has "fault16Software" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fault16Software","like")}=2` |HIGH | |
-|Status: Device has "fault17Software" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fault17Software","like")}=2` |HIGH | |
-|Status: Device has "fault18Software" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fault18Software","like")}=2` |HIGH | |
-|Status: Device has "fault19Software" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fault19Software","like")}=2` |HIGH | |
-|Status: Device has "fault20Software" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fault20Software","like")}=2` |HIGH | |
-|Status: Device has "fault21Software" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fault21Software","like")}=2` |HIGH | |
-|Status: Device has "fpgaVersion" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fpgaVersion","like")}=2` |HIGH | |
-|Status: Device has "currentSensorReferenceOutOfRange" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"currentSensorReferenceOutOfRange","like")}=2` |HIGH | |
-|Status: Device has "ia-refSlaveModeTimeout" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"ia-refSlaveModeTimeout","like")}=2` |HIGH | |
-|Status: Device has "blockbusBoot" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"blockbusBoot","like")}=2` |HIGH | |
-|Status: Device has "hscommMaster" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"hscommMaster","like")}=2` |HIGH | |
-|Status: Device has "hscomm" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"hscomm","like")}=2` |HIGH | |
-|Status: Device has "slave" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"slave","like")}=2` |HIGH | |
-|Status: Device has "rtsShorted" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsShorted","like")}=2` |WARNING | |
-|Status: Device has "rtsDisconnected" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsDisconnected","like")}=2` |WARNING | |
-|Status: Device has "heatsinkTempSensorOpen" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorOpen","like")}=2` |WARNING | |
-|Status: Device has "heatsinkTempSensorShorted" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorShorted","like")}=2` |WARNING | |
-|Status: Device has "highTemperatureCurrentLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highTemperatureCurrentLimit","like")}=2` |WARNING | |
-|Status: Device has "currentLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentLimit","like")}=2` |WARNING | |
-|Status: Device has "currentOffset" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentOffset","like")}=2` |WARNING | |
-|Status: Device has "batterySense" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySense","like")}=2` |WARNING | |
-|Status: Device has "batterySenseDisconnected" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySenseDisconnected","like")}=2` |WARNING | |
-|Status: Device has "uncalibrated" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"uncalibrated","like")}=2` |WARNING | |
-|Status: Device has "rtsMiswire" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsMiswire","like")}=2` |WARNING | |
-|Status: Device has "highVoltageDisconnect" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highVoltageDisconnect","like")}=2` |WARNING | |
-|Status: Device has "systemMiswire" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"systemMiswire","like")}=2` |WARNING | |
-|Status: Device has "mosfetSOpen" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"mosfetSOpen","like")}=2` |WARNING | |
-|Status: Device has "p12VoltageOutOfRange" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p12VoltageOutOfRange","like")}=2` |WARNING | |
-|Status: Device has "highArrayVCurrentLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highArrayVCurrentLimit","like")}=2` |WARNING | |
-|Status: Device has "maxAdcValueReached" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"maxAdcValueReached","like")}=2` |WARNING | |
-|Status: Device has "controllerWasReset" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"controllerWasReset","like")}=2` |WARNING | |
-|Status: Device has "alarm21Internal" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"alarm21Internal","like")}=2` |WARNING | |
-|Status: Device has "p3VoltageOutOfRange" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p3VoltageOutOfRange","like")}=2` |WARNING | |
-|Status: Device has "derateLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"derateLimit","like")}=2` |WARNING | |
-|Status: Device has "arrayCurrentOffset" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"arrayCurrentOffset","like")}=2` |WARNING | |
-|Status: Device has "ee-i2cRetryLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"ee-i2cRetryLimit","like")}=2` |WARNING | |
-|Status: Device has "ethernetAlarm" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"ethernetAlarm","like")}=2` |WARNING | |
-|Status: Device has "lvd" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"lvd","like")}=2` |WARNING | |
-|Status: Device has "software" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"software","like")}=2` |WARNING | |
-|Status: Device has "fp12VoltageOutOfRange" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"fp12VoltageOutOfRange","like")}=2` |WARNING | |
-|Status: Device has "extflashFault" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"extflashFault","like")}=2` |WARNING | |
-|Status: Device has "slaveControlFault" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"slaveControlFault","like")}=2` |WARNING | |
-|Temperature: Low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.WARN}` |WARNING |<p>**Depends on**:</p><p>- Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m)</p> |
-|Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.CRIT}` |HIGH | |
-|Temperature: High battery temperature (over {$BATTERY.TEMP.MAX.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m)</p> |
-|Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.CRIT}` |HIGH | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|------------------------------------------------------------------------------------------|----------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------------------------------------------|
+| Battery: Device charge in warning state | <p>-</p> | `{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Device charge in critical state</p> |
+| Battery: Device charge in critical state | <p>-</p> | `{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.CRIT}` | HIGH | |
+| Battery: Low battery voltage (below {#VOLTAGE.MIN.WARN}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m)</p> |
+| Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.CRIT}` | HIGH | |
+| Battery: High battery voltage (over {#VOLTAGE.MAX.WARN}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m)</p> |
+| Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.CRIT}` | HIGH | |
+| Status: Device has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:status.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| Status: Failed to fetch data (or no data for 5m) | <p>Zabbix has not received data for items for the last 5 minutes</p> | `{TEMPLATE_NAME:status.uptime.nodata(5m)}=1` | WARNING | <p>Manual close: YES</p> |
+| Status: Device has "overcurrent" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"overcurrent","like")}=2` | HIGH | |
+| Status: Device has "fetShort" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fetShort","like")}=2` | HIGH | |
+| Status: Device has "softwareFault" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"softwareFault","like")}=2` | HIGH | |
+| Status: Device has "batteryHvd" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"batteryHvd","like")}=2` | HIGH | |
+| Status: Device has "arrayHvd" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"arrayHvd","like")}=2` | HIGH | |
+| Status: Device has "dipSwitchChange" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"dipSwitchChange","like")}=2` | HIGH | |
+| Status: Device has "customSettingsEdit" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"customSettingsEdit","like")}=2` | HIGH | |
+| Status: Device has "rtsShorted" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"rtsShorted","like")}=2` | HIGH | |
+| Status: Device has "rtsDisconnected" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"rtsDisconnected","like")}=2` | HIGH | |
+| Status: Device has "eepromRetryLimit" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"eepromRetryLimit","like")}=2` | HIGH | |
+| Status: Device has "controllerWasReset" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"controllerWasReset","like")}=2` | HIGH | |
+| Status: Device has "chargeSlaveControlTimeout" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"chargeSlaveControlTimeout","like")}=2` | HIGH | |
+| Status: Device has "rs232SerialToMeterBridge" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"rs232SerialToMeterBridge","like")}=2` | HIGH | |
+| Status: Device has "batteryLvd" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"batteryLvd","like")}=2` | HIGH | |
+| Status: Device has "powerboardCommunicationFault" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"powerboardCommunicationFault","like")}=2` | HIGH | |
+| Status: Device has "fault16Software" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fault16Software","like")}=2` | HIGH | |
+| Status: Device has "fault17Software" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fault17Software","like")}=2` | HIGH | |
+| Status: Device has "fault18Software" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fault18Software","like")}=2` | HIGH | |
+| Status: Device has "fault19Software" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fault19Software","like")}=2` | HIGH | |
+| Status: Device has "fault20Software" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fault20Software","like")}=2` | HIGH | |
+| Status: Device has "fault21Software" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fault21Software","like")}=2` | HIGH | |
+| Status: Device has "fpgaVersion" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fpgaVersion","like")}=2` | HIGH | |
+| Status: Device has "currentSensorReferenceOutOfRange" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"currentSensorReferenceOutOfRange","like")}=2` | HIGH | |
+| Status: Device has "ia-refSlaveModeTimeout" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"ia-refSlaveModeTimeout","like")}=2` | HIGH | |
+| Status: Device has "blockbusBoot" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"blockbusBoot","like")}=2` | HIGH | |
+| Status: Device has "hscommMaster" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"hscommMaster","like")}=2` | HIGH | |
+| Status: Device has "hscomm" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"hscomm","like")}=2` | HIGH | |
+| Status: Device has "slave" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"slave","like")}=2` | HIGH | |
+| Status: Device has "rtsShorted" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsShorted","like")}=2` | WARNING | |
+| Status: Device has "rtsDisconnected" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsDisconnected","like")}=2` | WARNING | |
+| Status: Device has "heatsinkTempSensorOpen" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorOpen","like")}=2` | WARNING | |
+| Status: Device has "heatsinkTempSensorShorted" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorShorted","like")}=2` | WARNING | |
+| Status: Device has "highTemperatureCurrentLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highTemperatureCurrentLimit","like")}=2` | WARNING | |
+| Status: Device has "currentLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentLimit","like")}=2` | WARNING | |
+| Status: Device has "currentOffset" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentOffset","like")}=2` | WARNING | |
+| Status: Device has "batterySense" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySense","like")}=2` | WARNING | |
+| Status: Device has "batterySenseDisconnected" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySenseDisconnected","like")}=2` | WARNING | |
+| Status: Device has "uncalibrated" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"uncalibrated","like")}=2` | WARNING | |
+| Status: Device has "rtsMiswire" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsMiswire","like")}=2` | WARNING | |
+| Status: Device has "highVoltageDisconnect" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highVoltageDisconnect","like")}=2` | WARNING | |
+| Status: Device has "systemMiswire" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"systemMiswire","like")}=2` | WARNING | |
+| Status: Device has "mosfetSOpen" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"mosfetSOpen","like")}=2` | WARNING | |
+| Status: Device has "p12VoltageOutOfRange" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p12VoltageOutOfRange","like")}=2` | WARNING | |
+| Status: Device has "highArrayVCurrentLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highArrayVCurrentLimit","like")}=2` | WARNING | |
+| Status: Device has "maxAdcValueReached" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"maxAdcValueReached","like")}=2` | WARNING | |
+| Status: Device has "controllerWasReset" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"controllerWasReset","like")}=2` | WARNING | |
+| Status: Device has "alarm21Internal" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"alarm21Internal","like")}=2` | WARNING | |
+| Status: Device has "p3VoltageOutOfRange" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p3VoltageOutOfRange","like")}=2` | WARNING | |
+| Status: Device has "derateLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"derateLimit","like")}=2` | WARNING | |
+| Status: Device has "arrayCurrentOffset" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"arrayCurrentOffset","like")}=2` | WARNING | |
+| Status: Device has "ee-i2cRetryLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"ee-i2cRetryLimit","like")}=2` | WARNING | |
+| Status: Device has "ethernetAlarm" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"ethernetAlarm","like")}=2` | WARNING | |
+| Status: Device has "lvd" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"lvd","like")}=2` | WARNING | |
+| Status: Device has "software" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"software","like")}=2` | WARNING | |
+| Status: Device has "fp12VoltageOutOfRange" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"fp12VoltageOutOfRange","like")}=2` | WARNING | |
+| Status: Device has "extflashFault" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"extflashFault","like")}=2` | WARNING | |
+| Status: Device has "slaveControlFault" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"slaveControlFault","like")}=2` | WARNING | |
+| Temperature: Low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m)             | <p>-</p>                                                              | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.WARN}`            | WARNING  | <p>**Depends on**:</p><p>- Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.CRIT}C for 5m)</p>                  |
+| Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.CRIT}C for 5m)  | <p>-</p>                                                              | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.CRIT}`            | HIGH     |                                                                                                                                           |
+| Temperature: High battery temperature (over {$BATTERY.TEMP.MAX.WARN}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m)</p> |
+| Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.CRIT}` | HIGH | |
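+
+The low/high battery voltage triggers above come in warning/critical pairs: both are evaluated over a 5-minute window (`max(5m)`/`min(5m)`), and the warning trigger depends on its critical counterpart, so only the most severe of the two is raised. A minimal JavaScript sketch of that logic (illustrative only, not part of the template; the thresholds stand in for `{#VOLTAGE.MIN.WARN}` and `{#VOLTAGE.MIN.CRIT}`):
+
+```js
+// Sketch of the paired low-voltage triggers. "samples" stands for the battery
+// voltage values received in the last 5 minutes.
+function lowVoltageSeverity(samples, minWarn, minCrit) {
+    var max5m = Math.max.apply(null, samples);  // corresponds to max(5m) in the expression
+    if (max5m < minCrit) {
+        return 'HIGH';       // critical trigger fires
+    }
+    if (max5m < minWarn) {
+        return 'WARNING';    // warning fires only while the critical trigger is not active
+    }
+    return 'OK';
+}
+// Example for a 12 V system with hypothetical thresholds 11.5 V (warn) and 11.0 V (crit):
+// lowVoltageSeverity([11.2, 11.3, 11.1], 11.5, 11.0) -> 'WARNING'
+```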
## Feedback
diff --git a/templates/net/morningstar_snmp/tristar_mppt_snmp/README.md b/templates/net/morningstar_snmp/tristar_mppt_snmp/README.md
index 976d44df503..6c63bf61fe9 100644
--- a/templates/net/morningstar_snmp/tristar_mppt_snmp/README.md
+++ b/templates/net/morningstar_snmp/tristar_mppt_snmp/README.md
@@ -3,11 +3,11 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
Refer to the vendor documentation.
@@ -17,23 +17,23 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$BATTERY.TEMP.MAX.CRIT} |<p>Battery high temperature critical value</p> |`60` |
-|{$BATTERY.TEMP.MAX.WARN} |<p>Battery high temperature warning value</p> |`45` |
-|{$BATTERY.TEMP.MIN.CRIT} |<p>Battery low temperature critical value</p> |`-20` |
-|{$BATTERY.TEMP.MIN.WARN} |<p>Battery low temperature warning value</p> |`0` |
-|{$CHARGE.STATE.CRIT} |<p>fault</p> |`4` |
-|{$CHARGE.STATE.WARN} |<p>disconnect</p> |`2` |
-|{$LOAD.STATE.CRIT:"fault"} |<p>fault</p> |`4` |
-|{$LOAD.STATE.CRIT:"lvd"} |<p>lvd</p> |`3` |
-|{$LOAD.STATE.WARN:"disconnect"} |<p>disconnect</p> |`5` |
-|{$LOAD.STATE.WARN:"lvdWarning"} |<p>lvdWarning</p> |`2` |
-|{$LOAD.STATE.WARN:"override"} |<p>override</p> |`7` |
-|{$VOLTAGE.MAX.CRIT} | |`` |
-|{$VOLTAGE.MAX.WARN} | |`` |
-|{$VOLTAGE.MIN.CRIT} | |`` |
-|{$VOLTAGE.MIN.WARN} | |`` |
+| Name | Description | Default |
+|---------------------------------|------------------------------------------------|---------|
+| {$BATTERY.TEMP.MAX.CRIT} | <p>Battery high temperature critical value</p> | `60` |
+| {$BATTERY.TEMP.MAX.WARN} | <p>Battery high temperature warning value</p> | `45` |
+| {$BATTERY.TEMP.MIN.CRIT} | <p>Battery low temperature critical value</p> | `-20` |
+| {$BATTERY.TEMP.MIN.WARN} | <p>Battery low temperature warning value</p> | `0` |
+| {$CHARGE.STATE.CRIT} | <p>fault</p> | `4` |
+| {$CHARGE.STATE.WARN} | <p>disconnect</p> | `2` |
+| {$LOAD.STATE.CRIT:"fault"} | <p>fault</p> | `4` |
+| {$LOAD.STATE.CRIT:"lvd"} | <p>lvd</p> | `3` |
+| {$LOAD.STATE.WARN:"disconnect"} | <p>disconnect</p> | `5` |
+| {$LOAD.STATE.WARN:"lvdWarning"} | <p>lvdWarning</p> | `2` |
+| {$LOAD.STATE.WARN:"override"} | <p>override</p> | `7` |
+| {$VOLTAGE.MAX.CRIT} | | `` |
+| {$VOLTAGE.MAX.WARN} | | `` |
+| {$VOLTAGE.MIN.CRIT} | | `` |
+| {$VOLTAGE.MIN.WARN} | | `` |
## Template links
@@ -41,78 +41,78 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Battery voltage discovery |<p>Discovery for battery voltage triggers</p> |DEPENDENT |battery.voltage.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Name | Description | Type | Key and additional info |
+|---------------------------|-----------------------------------------------|-----------|---------------------------------------------------------------------------------------------------------------------|
+| Battery voltage discovery | <p>Discovery for battery voltage triggers</p> | DEPENDENT | battery.voltage.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Array |Array: Voltage |<p>MIB: TRISTAR-MPPT</p><p>Description:Array Voltage</p><p>Scaling Factor:0.0054931640625</p><p>Units:V</p><p>Range:[-10, 180]</p><p>Modbus address:0x001b</p> |SNMP |array.voltage[arrayVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.005493164063`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Array |Array: Array Current |<p>MIB: TRISTAR-MPPT</p><p>Description:Array Current</p><p>Scaling Factor:0.00244140625</p><p>Units:A</p><p>Range:[-10, 80]</p><p>Modbus address:0x001d</p> |SNMP |array.current[arrayCurrent.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.00244140625`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Array |Array: Sweep Vmp |<p>MIB: TRISTAR-MPPT</p><p>Description:Vmp (last sweep)</p><p>Scaling Factor:0.0054931640625</p><p>Units:V</p><p>Range:[-10, 180.0]</p><p>Modbus address:0x003d</p> |SNMP |array.sweep_vmp[arrayVmpLastSweep.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.005493164063`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Array |Array: Sweep Voc |<p>MIB: TRISTAR-MPPT</p><p>Description:Voc (last sweep)</p><p>Scaling Factor:0.0054931640625</p><p>Units:V</p><p>Range:[-10, 180.0]</p><p>Modbus address:0x003e</p> |SNMP |array.sweep_voc[arrayVocLastSweep.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.005493164063`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Array |Array: Sweep Pmax |<p>MIB: TRISTAR-MPPT</p><p>Description:Pmax (last sweep)</p><p>Scaling Factor:0.10986328125</p><p>Units:W</p><p>Range:[-10, 5000]</p><p>Modbus address:0x003c</p> |SNMP |array.sweep_pmax[arrayPmaxLastSweep.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1098632813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Battery |Battery: Charge State |<p>MIB: TRISTAR-MPPT</p><p>Description:Charge State</p><p>Modbus address:0x0032</p><p>0: Start</p><p>1: NightCheck</p><p>2: Disconnect</p><p>3: Night</p><p>4: Fault</p><p>5: Mppt</p><p>6: Absorption</p><p>7: Float</p><p>8: Equalize</p><p>9: Slave</p> |SNMP |charge.state[chargeState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Battery |Battery: Target Voltage |<p>MIB: TRISTAR-MPPT</p><p>Description:Target Voltage</p><p>Scaling Factor:0.0054931640625</p><p>Units:V</p><p>Range:[-10, 180.0]</p><p>Modbus address:0x0033</p> |SNMP |target.voltage[targetRegulationVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.005493164063`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Battery |Battery: Charge Current |<p>MIB: TRISTAR-MPPT</p><p>Description:Battery Current</p><p>Scaling Factor:0.00244140625</p><p>Units:A</p><p>Range:[-10, 80]</p><p>Modbus address:0x001c</p> |SNMP |charge.current[batteryCurrent.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.00244140625`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Battery |Battery: Output Power |<p>MIB: TRISTAR-MPPT</p><p>Description:Output Power</p><p>Scaling Factor:0.10986328125</p><p>Units:W</p><p>Range:[-10, 5000]</p><p>Modbus address:0x003a</p> |SNMP |charge.output_power[ outputPower.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1098632813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Battery |Battery: Voltage{#SINGLETON} |<p>MIB: TRISTAR-MPPT</p><p>Description:Battery voltage</p><p>Scaling Factor:0.0054931640625</p><p>Units:V</p><p>Range:[-10, 180.0]</p><p>Modbus address:0x0018</p> |SNMP |battery.voltage[batteryVoltage.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.005493164063`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Counter |Counter: Charge Amp-hours |<p>MIB: TRISTAR-MPPT</p><p>Description:Ah Charge Resettable</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 5000]</p><p>Modbus addresses:H=0x0034 L=0x0035</p> |SNMP |counter.charge_amp_hours[ahChargeResetable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
-|Counter |Counter: Charge KW-hours |<p>MIB: TRISTAR-MPPT</p><p>Description:kWh Charge Resettable</p><p>Scaling Factor:0.1</p><p>Units:kWh</p><p>Range:[0.0, 65535.0]</p><p>Modbus address:0x0038</p> |SNMP |counter.charge_kw_hours[kwhChargeResetable.0] |
-|Status |Status: Uptime |<p>Device uptime in seconds</p> |SNMP |status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
-|Status |Status: Faults |<p>MIB: TRISTAR-MPPT</p><p>Description:Faults</p><p>Modbus address:0x002c</p> |SNMP |status.faults[faults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Status |Status: Alarms |<p>MIB: TRISTAR-MPPT</p><p>Description:Faults</p><p>Modbus address:0x002c</p> |SNMP |status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Temperature |Temperature: Battery |<p>MIB: TRISTAR-MPPT</p><p>Description:Batt. Temp</p><p>Scaling Factor:1.0</p><p>Units:C</p><p>Range:[-40, 80]</p><p>Modbus address:0x0025</p> |SNMP |temp.battery[batteryTemperature.0] |
-|Temperature |Temperature: Heatsink |<p>MIB: TRISTAR-MPPT</p><p>Description:HS Temp</p><p>Scaling Factor:1.0</p><p>Units:C</p><p>Range:[-40, 80]</p><p>Modbus address:0x0023</p> |SNMP |temp.heatsink[heatsinkTemperature.0] |
-|Zabbix_raw_items |Battery: Battery Voltage discovery |<p>MIB: TRISTAR-MPPT</p> |SNMP |battery.voltage.discovery[batteryVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.005493164063`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Array | Array: Voltage | <p>MIB: TRISTAR-MPPT</p><p>Description:Array Voltage</p><p>Scaling Factor:0.0054931640625</p><p>Units:V</p><p>Range:[-10, 180]</p><p>Modbus address:0x001b</p> | SNMP | array.voltage[arrayVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.005493164063`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Array | Array: Array Current | <p>MIB: TRISTAR-MPPT</p><p>Description:Array Current</p><p>Scaling Factor:0.00244140625</p><p>Units:A</p><p>Range:[-10, 80]</p><p>Modbus address:0x001d</p> | SNMP | array.current[arrayCurrent.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.00244140625`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Array | Array: Sweep Vmp | <p>MIB: TRISTAR-MPPT</p><p>Description:Vmp (last sweep)</p><p>Scaling Factor:0.0054931640625</p><p>Units:V</p><p>Range:[-10, 180.0]</p><p>Modbus address:0x003d</p> | SNMP | array.sweep_vmp[arrayVmpLastSweep.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.005493164063`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Array | Array: Sweep Voc | <p>MIB: TRISTAR-MPPT</p><p>Description:Voc (last sweep)</p><p>Scaling Factor:0.0054931640625</p><p>Units:V</p><p>Range:[-10, 180.0]</p><p>Modbus address:0x003e</p> | SNMP | array.sweep_voc[arrayVocLastSweep.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.005493164063`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Array | Array: Sweep Pmax | <p>MIB: TRISTAR-MPPT</p><p>Description:Pmax (last sweep)</p><p>Scaling Factor:0.10986328125</p><p>Units:W</p><p>Range:[-10, 5000]</p><p>Modbus address:0x003c</p> | SNMP | array.sweep_pmax[arrayPmaxLastSweep.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1098632813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Battery | Battery: Charge State | <p>MIB: TRISTAR-MPPT</p><p>Description:Charge State</p><p>Modbus address:0x0032</p><p>0: Start</p><p>1: NightCheck</p><p>2: Disconnect</p><p>3: Night</p><p>4: Fault</p><p>5: Mppt</p><p>6: Absorption</p><p>7: Float</p><p>8: Equalize</p><p>9: Slave</p> | SNMP | charge.state[chargeState.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Battery | Battery: Target Voltage | <p>MIB: TRISTAR-MPPT</p><p>Description:Target Voltage</p><p>Scaling Factor:0.0054931640625</p><p>Units:V</p><p>Range:[-10, 180.0]</p><p>Modbus address:0x0033</p> | SNMP | target.voltage[targetRegulationVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.005493164063`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Battery | Battery: Charge Current | <p>MIB: TRISTAR-MPPT</p><p>Description:Battery Current</p><p>Scaling Factor:0.00244140625</p><p>Units:A</p><p>Range:[-10, 80]</p><p>Modbus address:0x001c</p> | SNMP | charge.current[batteryCurrent.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.00244140625`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Battery | Battery: Output Power | <p>MIB: TRISTAR-MPPT</p><p>Description:Output Power</p><p>Scaling Factor:0.10986328125</p><p>Units:W</p><p>Range:[-10, 5000]</p><p>Modbus address:0x003a</p> | SNMP | charge.output_power[ outputPower.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1098632813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Battery | Battery: Voltage{#SINGLETON} | <p>MIB: TRISTAR-MPPT</p><p>Description:Battery voltage</p><p>Scaling Factor:0.0054931640625</p><p>Units:V</p><p>Range:[-10, 180.0]</p><p>Modbus address:0x0018</p> | SNMP | battery.voltage[batteryVoltage.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.005493164063`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Counter | Counter: Charge Amp-hours | <p>MIB: TRISTAR-MPPT</p><p>Description:Ah Charge Resettable</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 5000]</p><p>Modbus addresses:H=0x0034 L=0x0035</p> | SNMP | counter.charge_amp_hours[ahChargeResetable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
+| Counter | Counter: Charge KW-hours | <p>MIB: TRISTAR-MPPT</p><p>Description:kWh Charge Resettable</p><p>Scaling Factor:0.1</p><p>Units:kWh</p><p>Range:[0.0, 65535.0]</p><p>Modbus address:0x0038</p> | SNMP | counter.charge_kw_hours[kwhChargeResetable.0] |
+| Status | Status: Uptime | <p>Device uptime in seconds</p> | SNMP | status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
+| Status | Status: Faults | <p>MIB: TRISTAR-MPPT</p><p>Description:Faults</p><p>Modbus address:0x002c</p> | SNMP | status.faults[faults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Status | Status: Alarms | <p>MIB: TRISTAR-MPPT</p><p>Description:Faults</p><p>Modbus address:0x002c</p> | SNMP | status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Temperature | Temperature: Battery | <p>MIB: TRISTAR-MPPT</p><p>Description:Batt. Temp</p><p>Scaling Factor:1.0</p><p>Units:C</p><p>Range:[-40, 80]</p><p>Modbus address:0x0025</p> | SNMP | temp.battery[batteryTemperature.0] |
+| Temperature | Temperature: Heatsink | <p>MIB: TRISTAR-MPPT</p><p>Description:HS Temp</p><p>Scaling Factor:1.0</p><p>Units:C</p><p>Range:[-40, 80]</p><p>Modbus address:0x0023</p> | SNMP | temp.heatsink[heatsinkTemperature.0] |
+| Zabbix_raw_items | Battery: Battery Voltage discovery | <p>MIB: TRISTAR-MPPT</p> | SNMP | battery.voltage.discovery[batteryVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.005493164063`</p> |
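+
+Most of the analog items above share the same two preprocessing steps: a MULTIPLIER step applies the Modbus scaling factor, and a REGEX step (pattern `^(\d+)(\.\d{1,2})?`, output `\1\2`) keeps at most two decimal places of the result. A rough JavaScript equivalent of that chain, for illustration only (the raw register value in the example is hypothetical):
+
+```js
+// Illustrative re-implementation of the MULTIPLIER + REGEX preprocessing chain.
+function preprocess(rawRegisterValue, scalingFactor) {
+    var scaled = String(rawRegisterValue * scalingFactor);   // MULTIPLIER step
+    var m = scaled.match(/^(\d+)(\.\d{1,2})?/);              // REGEX pattern, output \1\2
+    return m ? m[1] + (m[2] || '') : scaled;                 // this sketch passes non-matching values through unchanged
+}
+// Example with the arrayVoltage.0 scaling factor:
+// preprocess(21845, 0.005493164063) -> '119.99'
+```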
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Battery: Device charge in warning state |<p>-</p> |`{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Device charge in critical state</p> |
-|Battery: Device charge in critical state |<p>-</p> |`{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.CRIT}` |HIGH | |
-|Battery: Low battery voltage (below {#VOLTAGE.MIN.WARN}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m)</p> |
-|Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.CRIT}` |HIGH | |
-|Battery: High battery voltage (over {#VOLTAGE.MAX.WARN}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m)</p> |
-|Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.CRIT}` |HIGH | |
-|Status: Device has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:status.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|Status: Failed to fetch data (or no data for 5m) |<p>Zabbix has not received data for items for the last 5 minutes</p> |`{TEMPLATE_NAME:status.uptime.nodata(5m)}=1` |WARNING |<p>Manual close: YES</p> |
-|Status: Device has "overcurrent" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"overcurrent","like")}=2` |HIGH | |
-|Status: Device has "fetShort" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fetShort","like")}=2` |HIGH | |
-|Status: Device has "softwareFault" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"softwareFault","like")}=2` |HIGH | |
-|Status: Device has "batteryHvd" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"batteryHvd","like")}=2` |HIGH | |
-|Status: Device has "arrayHvd" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"arrayHvd","like")}=2` |HIGH | |
-|Status: Device has "dipSwitchChange" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"dipSwitchChange","like")}=2` |HIGH | |
-|Status: Device has "customSettingsEdit" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"customSettingsEdit","like")}=2` |HIGH | |
-|Status: Device has "rtsShorted" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"rtsShorted","like")}=2` |HIGH | |
-|Status: Device has "rtsDisconnected" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"rtsDisconnected","like")}=2` |HIGH | |
-|Status: Device has "eepromRetryLimit" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"eepromRetryLimit","like")}=2` |HIGH | |
-|Status: Device has "slaveControlTimeout" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"slaveControlTimeout","like")}=2` |HIGH | |
-|Status: Device has "rtsShorted" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsShorted","like")}=2` |WARNING | |
-|Status: Device has "rtsDisconnected" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsDisconnected","like")}=2` |WARNING | |
-|Status: Device has "heatsinkTempSensorOpen" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorOpen","like")}=2` |WARNING | |
-|Status: Device has "heatsinkTempSensorShorted" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorShorted","like")}=2` |WARNING | |
-|Status: Device has "highTemperatureCurrentLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highTemperatureCurrentLimit","like")}=2` |WARNING | |
-|Status: Device has "currentLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentLimit","like")}=2` |WARNING | |
-|Status: Device has "currentOffset" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentOffset","like")}=2` |WARNING | |
-|Status: Device has "batterySense" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySense","like")}=2` |WARNING | |
-|Status: Device has "batterySenseDisconnected" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySenseDisconnected","like")}=2` |WARNING | |
-|Status: Device has "uncalibrated" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"uncalibrated","like")}=2` |WARNING | |
-|Status: Device has "rtsMiswire" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsMiswire","like")}=2` |WARNING | |
-|Status: Device has "highVoltageDisconnect" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highVoltageDisconnect","like")}=2` |WARNING | |
-|Status: Device has "systemMiswire" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"systemMiswire","like")}=2` |WARNING | |
-|Status: Device has "mosfetSOpen" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"mosfetSOpen","like")}=2` |WARNING | |
-|Status: Device has "p12VoltageReferenceOff" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p12VoltageReferenceOff","like")}=2` |WARNING | |
-|Status: Device has "highArrayVCurrentLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highArrayVCurrentLimit","like")}=2` |WARNING | |
-|Status: Device has "maxAdcValueReached" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"maxAdcValueReached","like")}=2` |WARNING | |
-|Status: Device has "controllerWasReset" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"controllerWasReset","like")}=2` |WARNING | |
-|Temperature: Low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.WARN}` |WARNING |<p>**Depends on**:</p><p>- Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m)</p> |
-|Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.CRIT}` |HIGH | |
-|Temperature: High battery temperature (over {$BATTERY.TEMP.MAX.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m)</p> |
-|Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.CRIT}` |HIGH | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|------------------------------------------------------------------------------------------|----------------------------------------------------------------------|---------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------------------------------------------|
+| Battery: Device charge in warning state | <p>-</p> | `{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Device charge in critical state</p> |
+| Battery: Device charge in critical state | <p>-</p> | `{TEMPLATE_NAME:charge.state[chargeState.0].last()}={$CHARGE.STATE.CRIT}` | HIGH | |
+| Battery: Low battery voltage (below {#VOLTAGE.MIN.WARN}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m)</p> |
+| Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.CRIT}` | HIGH | |
+| Battery: High battery voltage (over {#VOLTAGE.MAX.WARN}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m)</p> |
+| Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.CRIT}` | HIGH | |
+| Status: Device has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:status.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| Status: Failed to fetch data (or no data for 5m) | <p>Zabbix has not received data for items for the last 5 minutes</p> | `{TEMPLATE_NAME:status.uptime.nodata(5m)}=1` | WARNING | <p>Manual close: YES</p> |
+| Status: Device has "overcurrent" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"overcurrent","like")}=2` | HIGH | |
+| Status: Device has "fetShort" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"fetShort","like")}=2` | HIGH | |
+| Status: Device has "softwareFault" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"softwareFault","like")}=2` | HIGH | |
+| Status: Device has "batteryHvd" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"batteryHvd","like")}=2` | HIGH | |
+| Status: Device has "arrayHvd" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"arrayHvd","like")}=2` | HIGH | |
+| Status: Device has "dipSwitchChange" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"dipSwitchChange","like")}=2` | HIGH | |
+| Status: Device has "customSettingsEdit" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"customSettingsEdit","like")}=2` | HIGH | |
+| Status: Device has "rtsShorted" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"rtsShorted","like")}=2` | HIGH | |
+| Status: Device has "rtsDisconnected" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"rtsDisconnected","like")}=2` | HIGH | |
+| Status: Device has "eepromRetryLimit" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"eepromRetryLimit","like")}=2` | HIGH | |
+| Status: Device has "slaveControlTimeout" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"slaveControlTimeout","like")}=2` | HIGH | |
+| Status: Device has "rtsShorted" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsShorted","like")}=2` | WARNING | |
+| Status: Device has "rtsDisconnected" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsDisconnected","like")}=2` | WARNING | |
+| Status: Device has "heatsinkTempSensorOpen" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorOpen","like")}=2` | WARNING | |
+| Status: Device has "heatsinkTempSensorShorted" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorShorted","like")}=2` | WARNING | |
+| Status: Device has "highTemperatureCurrentLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highTemperatureCurrentLimit","like")}=2` | WARNING | |
+| Status: Device has "currentLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentLimit","like")}=2` | WARNING | |
+| Status: Device has "currentOffset" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentOffset","like")}=2` | WARNING | |
+| Status: Device has "batterySense" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySense","like")}=2` | WARNING | |
+| Status: Device has "batterySenseDisconnected" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySenseDisconnected","like")}=2` | WARNING | |
+| Status: Device has "uncalibrated" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"uncalibrated","like")}=2` | WARNING | |
+| Status: Device has "rtsMiswire" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsMiswire","like")}=2` | WARNING | |
+| Status: Device has "highVoltageDisconnect" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highVoltageDisconnect","like")}=2` | WARNING | |
+| Status: Device has "systemMiswire" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"systemMiswire","like")}=2` | WARNING | |
+| Status: Device has "mosfetSOpen" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"mosfetSOpen","like")}=2` | WARNING | |
+| Status: Device has "p12VoltageReferenceOff" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p12VoltageReferenceOff","like")}=2` | WARNING | |
+| Status: Device has "highArrayVCurrentLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highArrayVCurrentLimit","like")}=2` | WARNING | |
+| Status: Device has "maxAdcValueReached" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"maxAdcValueReached","like")}=2` | WARNING | |
+| Status: Device has "controllerWasReset" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"controllerWasReset","like")}=2` | WARNING | |
+| Temperature: Low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m)             | <p>-</p>                                                              | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.WARN}`        | WARNING  | <p>**Depends on**:</p><p>- Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.CRIT}C for 5m)</p>  |
+| Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.CRIT}C for 5m)  | <p>-</p>                                                              | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.CRIT}`        | HIGH     |                                                                                                                           |
+| Temperature: High battery temperature (over {$BATTERY.TEMP.MAX.WARN}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m)</p> |
+| Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.CRIT}` | HIGH | |
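+
+The `Status: Device has "..." faults/alarm flag` triggers all follow one pattern. The JAVASCRIPT preprocessing on the `status.faults[faults.0]` and `status.alarms[alarms.0]` items (shown as `Text is too long. Please see the template.`) decodes the raw register so that the item value contains the active flag names; each trigger then checks whether its flag name occurs in 2 of the last 3 received values (`count(#3,"<flag>","like")=2`). An illustrative JavaScript sketch of the trigger condition:
+
+```js
+// Sketch of the flag-trigger condition; lastThreeValues stands for the last
+// three values of the (already preprocessed) faults or alarms item.
+function flagTriggerFires(lastThreeValues, flagName) {
+    var hits = lastThreeValues.filter(function (value) {
+        return value.indexOf(flagName) !== -1;   // count(#3,"<flag>","like")
+    }).length;
+    return hits === 2;                           // ...=2
+}
+// flagTriggerFires(['rtsShorted', 'rtsShorted', 'none'], 'rtsShorted') -> true
+```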
## Feedback
diff --git a/templates/net/morningstar_snmp/tristar_pwm_snmp/README.md b/templates/net/morningstar_snmp/tristar_pwm_snmp/README.md
index e1afa59198a..02fe7cda8cb 100644
--- a/templates/net/morningstar_snmp/tristar_pwm_snmp/README.md
+++ b/templates/net/morningstar_snmp/tristar_pwm_snmp/README.md
@@ -3,11 +3,11 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/zabbix_agent) for basic instructions.
Refer to the vendor documentation.
@@ -17,23 +17,23 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$BATTERY.TEMP.MAX.CRIT} |<p>Battery high temperature critical value</p> |`60` |
-|{$BATTERY.TEMP.MAX.WARN} |<p>Battery high temperature warning value</p> |`45` |
-|{$BATTERY.TEMP.MIN.CRIT} |<p>Battery low temperature critical value</p> |`-20` |
-|{$BATTERY.TEMP.MIN.WARN} |<p>Battery low temperature warning value</p> |`0` |
-|{$CHARGE.STATE.CRIT} |<p>fault</p> |`4` |
-|{$CHARGE.STATE.WARN} |<p>disconnect</p> |`2` |
-|{$LOAD.STATE.CRIT:"fault"} |<p>fault</p> |`4` |
-|{$LOAD.STATE.CRIT:"lvd"} |<p>lvd</p> |`3` |
-|{$LOAD.STATE.WARN:"disconnect"} |<p>disconnect</p> |`5` |
-|{$LOAD.STATE.WARN:"lvdWarning"} |<p>lvdWarning</p> |`2` |
-|{$LOAD.STATE.WARN:"override"} |<p>override</p> |`7` |
-|{$VOLTAGE.MAX.CRIT} | |`` |
-|{$VOLTAGE.MAX.WARN} | |`` |
-|{$VOLTAGE.MIN.CRIT} | |`` |
-|{$VOLTAGE.MIN.WARN} | |`` |
+| Name | Description | Default |
+|---------------------------------|------------------------------------------------|---------|
+| {$BATTERY.TEMP.MAX.CRIT} | <p>Battery high temperature critical value</p> | `60` |
+| {$BATTERY.TEMP.MAX.WARN} | <p>Battery high temperature warning value</p> | `45` |
+| {$BATTERY.TEMP.MIN.CRIT} | <p>Battery low temperature critical value</p> | `-20` |
+| {$BATTERY.TEMP.MIN.WARN} | <p>Battery low temperature warning value</p> | `0` |
+| {$CHARGE.STATE.CRIT} | <p>fault</p> | `4` |
+| {$CHARGE.STATE.WARN} | <p>disconnect</p> | `2` |
+| {$LOAD.STATE.CRIT:"fault"} | <p>fault</p> | `4` |
+| {$LOAD.STATE.CRIT:"lvd"} | <p>lvd</p> | `3` |
+| {$LOAD.STATE.WARN:"disconnect"} | <p>disconnect</p> | `5` |
+| {$LOAD.STATE.WARN:"lvdWarning"} | <p>lvdWarning</p> | `2` |
+| {$LOAD.STATE.WARN:"override"} | <p>override</p> | `7` |
+| {$VOLTAGE.MAX.CRIT} | | `` |
+| {$VOLTAGE.MAX.WARN} | | `` |
+| {$VOLTAGE.MIN.CRIT} | | `` |
+| {$VOLTAGE.MIN.WARN} | | `` |
## Template links
@@ -41,85 +41,85 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Battery voltage discovery |<p>Discovery for battery voltage triggers</p> |DEPENDENT |battery.voltage.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Charge mode discovery |<p>Discovery for device in charge mode</p> |DEPENDENT |controlmode.charge.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return JSON.stringify(parseInt(value) === 0 ? [{'{#SINGLETON}': ''}] : []);`</p> |
-|Load mode discovery |<p>Discovery for device in load mode</p> |DEPENDENT |controlmode.load.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return JSON.stringify(parseInt(value) === 1 ? [{'{#SINGLETON}': ''}] : []);`</p> |
-|Diversion mode discovery |<p>Discovery for device in diversion mode</p> |DEPENDENT |controlmode.diversion.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return JSON.stringify(parseInt(value) === 2 ? [{'{#SINGLETON}': ''}] : []);`</p> |
-|Charge + Diversion mode discovery |<p>Discovery for device in charge and diversion modes</p> |DEPENDENT |controlmode.charge_diversion.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Load + Diversion mode discovery |<p>Discovery for device in load and diversion modes</p> |DEPENDENT |controlmode.load_diversion.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Name | Description | Type | Key and additional info |
+|-----------------------------------|-----------------------------------------------------------|-----------|------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Battery voltage discovery | <p>Discovery for battery voltage triggers</p> | DEPENDENT | battery.voltage.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Charge mode discovery | <p>Discovery for device in charge mode</p> | DEPENDENT | controlmode.charge.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return JSON.stringify(parseInt(value) === 0 ? [{'{#SINGLETON}': ''}] : []);`</p> |
+| Load mode discovery | <p>Discovery for device in load mode</p> | DEPENDENT | controlmode.load.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return JSON.stringify(parseInt(value) === 1 ? [{'{#SINGLETON}': ''}] : []);`</p> |
+| Diversion mode discovery | <p>Discovery for device in diversion mode</p> | DEPENDENT | controlmode.diversion.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return JSON.stringify(parseInt(value) === 2 ? [{'{#SINGLETON}': ''}] : []);`</p> |
+| Charge + Diversion mode discovery | <p>Discovery for device in charge and diversion modes</p> | DEPENDENT | controlmode.charge_diversion.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Load + Diversion mode discovery | <p>Discovery for device in load and diversion modes</p> | DEPENDENT | controlmode.load_diversion.discovery<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
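+
+The charge/load/diversion discovery rules above are variations of the single-line JAVASCRIPT preprocessing shown in the table: they read `controlMode.0` and emit either one LLD row (so the item and trigger prototypes for that mode get created) or an empty list. For clarity, here is a standalone version of the "Charge mode discovery" one-liner, where `value` is the control mode item value:
+
+```js
+// Same expression as in the table, wrapped in a function.
+function chargeModeDiscovery(value) {
+    return JSON.stringify(parseInt(value) === 0 ? [{'{#SINGLETON}': ''}] : []);
+}
+// chargeModeDiscovery('0') -> '[{"{#SINGLETON}":""}]'   (charge mode: prototypes are created)
+// chargeModeDiscovery('1') -> '[]'                      (any other mode: nothing is discovered)
+```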
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Array |Array: Voltage{#SINGLETON} |<p>MIB: TRISTAR</p><p>Description:Array/Load Voltage</p><p>Scaling Factor:0.00424652099609375</p><p>Units:V</p><p>Range:[0, 80]</p><p>Modbus address:0x000A</p> |SNMP |array.voltage[arrayloadVoltage.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.004246520996`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Battery |Battery: Voltage{#SINGLETON} |<p>MIB: TRISTAR</p><p>Description:Battery voltage</p><p>Scaling Factor:0.002950042724609375</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0008</p> |SNMP |battery.voltage[batteryVoltage.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.002950042725`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Battery |Battery: Charge Current{#SINGLETON} |<p>MIB: TRISTAR</p><p>Description:Charge Current</p><p>Scaling Factor:0.002034515380859375</p><p>Units:A</p><p>Range:[0, 60]</p><p>Modbus address:0x000B</p> |SNMP |charge.current[chargeCurrent.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.002034515381`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Battery |Battery: Charge State{#SINGLETON} |<p>MIB: TRISTAR</p><p>Description:Control State</p><p>Modbus address:0x001B</p> |SNMP |charge.state[controlState.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Battery |Battery: Target Voltage{#SINGLETON} |<p>MIB: TRISTAR</p><p>Description:Target Regulation Voltage</p><p>Scaling Factor:0.002950042724609375</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0010</p> |SNMP |target.voltage[targetVoltage.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.002950042725`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Counter |Counter: KW-hours |<p>MIB: TRISTAR</p><p>Description:Kilowatt Hours</p><p>Scaling Factor:1.0</p><p>Units:kWh</p><p>Range:[0.0, 5000.0]</p><p>Modbus address:0x001E</p> |SNMP |counter.charge_kw_hours[kilowattHours.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Counter |Counter: Amp-hours |<p>MIB: TRISTAR</p><p>Description:Ah (Resettable)</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 50000.0]</p><p>Modbus addresses:H=0x0011 L=0x0012</p> |SNMP |counter.charge_amp_hours[ahResettable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
-|Load |Load: State{#SINGLETON} |<p>MIB: TRISTAR</p><p>Description:Load State</p><p>Modbus address:0x001B</p><p>0: Start</p><p>1: Normal</p><p>2: LvdWarning</p><p>3: Lvd</p><p>4: Fault</p><p>5: Disconnect</p><p>6: LvdWarning1</p><p>7: OverrideLvd</p><p>8: Equalize</p> |SNMP |load.state[loadState.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Load |Load: PWM Duty Cycle{#SINGLETON} |<p>MIB: TRISTAR</p><p>Description:PWM Duty Cycle</p><p>Scaling Factor:0.392156862745098</p><p>Units:%</p><p>Range:[0.0, 100.0]</p><p>Modbus address:0x001C</p> |SNMP |diversion.pwm_duty_cycle[pwmDutyCycle.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.3921568627`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Load |Load: Current{#SINGLETON} |<p>MIB: TRISTAR</p><p>Description:Load Current</p><p>Scaling Factor:0.00966400146484375</p><p>Units:A</p><p>Range:[0, 60]</p><p>Modbus address:0x000C</p> |SNMP |load.current[loadCurrent.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.009664001465`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Load |Load: Voltage{#SINGLETON} |<p>MIB: TRISTAR</p><p>Description:Array/Load Voltage</p><p>Scaling Factor:0.00424652099609375</p><p>Units:V</p><p>Range:[0, 80]</p><p>Modbus address:0x000A</p> |SNMP |load.voltage[arrayloadVoltage.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.004246520996`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Status |Status: Uptime |<p>Device uptime in seconds</p> |SNMP |status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
-|Status |Status: Control Mode |<p>MIB: TRISTAR</p><p>Description:Control Mode</p><p>Modbus address:0x001A</p><p>0: charge</p><p>1: loadControl</p><p>2: diversion</p><p>3: lighting</p> |SNMP |control.mode[controlMode.0] |
-|Status |Status: Faults |<p>MIB: TRISTAR</p><p>Description:Battery voltage</p><p>Scaling Factor:0.002950042724609375</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0008</p> |SNMP |status.faults[faults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Status |Status: Alarms |<p>MIB: TRISTAR</p><p>Description:Alarms</p><p>Modbus addresses:H=0x001D L=0x0017</p> |SNMP |status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Temperature |Temperature: Battery |<p>MIB: TRISTAR</p><p>Description:Battery Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-40, 120]</p><p>Modbus address:0x000F</p> |SNMP |temp.battery[batteryTemperature.0] |
-|Temperature |Temperature: Heatsink |<p>MIB: TRISTAR</p><p>Description:Heatsink Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-40, 120]</p><p>Modbus address:0x000E</p> |SNMP |temp.heatsink[heatsinkTemperature.0] |
-|Zabbix_raw_items |Battery: Battery Voltage discovery |<p>MIB: TRISTAR</p><p>Description:Battery voltage</p><p>Scaling Factor:0.002950042724609375</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0008</p> |SNMP |battery.voltage.discovery[batteryVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.002950042725`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|-------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Array | Array: Voltage{#SINGLETON} | <p>MIB: TRISTAR</p><p>Description:Array/Load Voltage</p><p>Scaling Factor:0.00424652099609375</p><p>Units:V</p><p>Range:[0, 80]</p><p>Modbus address:0x000A</p> | SNMP | array.voltage[arrayloadVoltage.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.004246520996`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Battery | Battery: Voltage{#SINGLETON} | <p>MIB: TRISTAR</p><p>Description:Battery voltage</p><p>Scaling Factor:0.002950042724609375</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0008</p> | SNMP | battery.voltage[batteryVoltage.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.002950042725`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Battery | Battery: Charge Current{#SINGLETON} | <p>MIB: TRISTAR</p><p>Description:Charge Current</p><p>Scaling Factor:0.002034515380859375</p><p>Units:A</p><p>Range:[0, 60]</p><p>Modbus address:0x000B</p> | SNMP | charge.current[chargeCurrent.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.002034515381`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Battery | Battery: Charge State{#SINGLETON} | <p>MIB: TRISTAR</p><p>Description:Control State</p><p>Modbus address:0x001B</p> | SNMP | charge.state[controlState.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Battery | Battery: Target Voltage{#SINGLETON} | <p>MIB: TRISTAR</p><p>Description:Target Regulation Voltage</p><p>Scaling Factor:0.002950042724609375</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0010</p> | SNMP | target.voltage[targetVoltage.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.002950042725`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Counter | Counter: KW-hours | <p>MIB: TRISTAR</p><p>Description:Kilowatt Hours</p><p>Scaling Factor:1.0</p><p>Units:kWh</p><p>Range:[0.0, 5000.0]</p><p>Modbus address:0x001E</p> | SNMP | counter.charge_kw_hours[kilowattHours.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.001`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Counter | Counter: Amp-hours | <p>MIB: TRISTAR</p><p>Description:Ah (Resettable)</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 50000.0]</p><p>Modbus addresses:H=0x0011 L=0x0012</p> | SNMP | counter.charge_amp_hours[ahResettable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
+| Load | Load: State{#SINGLETON} | <p>MIB: TRISTAR</p><p>Description:Load State</p><p>Modbus address:0x001B</p><p>0: Start</p><p>1: Normal</p><p>2: LvdWarning</p><p>3: Lvd</p><p>4: Fault</p><p>5: Disconnect</p><p>6: LvdWarning1</p><p>7: OverrideLvd</p><p>8: Equalize</p> | SNMP | load.state[loadState.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Load | Load: PWM Duty Cycle{#SINGLETON} | <p>MIB: TRISTAR</p><p>Description:PWM Duty Cycle</p><p>Scaling Factor:0.392156862745098</p><p>Units:%</p><p>Range:[0.0, 100.0]</p><p>Modbus address:0x001C</p> | SNMP | diversion.pwm_duty_cycle[pwmDutyCycle.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.3921568627`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Load | Load: Current{#SINGLETON} | <p>MIB: TRISTAR</p><p>Description:Load Current</p><p>Scaling Factor:0.00966400146484375</p><p>Units:A</p><p>Range:[0, 60]</p><p>Modbus address:0x000C</p> | SNMP | load.current[loadCurrent.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.009664001465`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Load | Load: Voltage{#SINGLETON} | <p>MIB: TRISTAR</p><p>Description:Array/Load Voltage</p><p>Scaling Factor:0.00424652099609375</p><p>Units:V</p><p>Range:[0, 80]</p><p>Modbus address:0x000A</p> | SNMP | load.voltage[arrayloadVoltage.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.004246520996`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
+| Status | Status: Uptime | <p>Device uptime in seconds</p> | SNMP | status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
+| Status | Status: Control Mode | <p>MIB: TRISTAR</p><p>Description:Control Mode</p><p>Modbus address:0x001A</p><p>0: charge</p><p>1: loadControl</p><p>2: diversion</p><p>3: lighting</p> | SNMP | control.mode[controlMode.0] |
+| Status | Status: Faults | <p>MIB: TRISTAR</p><p>Description:Battery voltage</p><p>Scaling Factor:0.002950042724609375</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0008</p> | SNMP | status.faults[faults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Status | Status: Alarms | <p>MIB: TRISTAR</p><p>Description:Alarms</p><p>Modbus addresses:H=0x001D L=0x0017</p> | SNMP | status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Temperature | Temperature: Battery | <p>MIB: TRISTAR</p><p>Description:Battery Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-40, 120]</p><p>Modbus address:0x000F</p> | SNMP | temp.battery[batteryTemperature.0] |
+| Temperature | Temperature: Heatsink | <p>MIB: TRISTAR</p><p>Description:Heatsink Temperature</p><p>Scaling Factor:1.0</p><p>Units:deg C</p><p>Range:[-40, 120]</p><p>Modbus address:0x000E</p> | SNMP | temp.heatsink[heatsinkTemperature.0] |
+| Zabbix_raw_items | Battery: Battery Voltage discovery | <p>MIB: TRISTAR</p><p>Description:Battery voltage</p><p>Scaling Factor:0.002950042724609375</p><p>Units:V</p><p>Range:[0.0, 80.0]</p><p>Modbus address:0x0008</p> | SNMP | battery.voltage.discovery[batteryVoltage.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.002950042725`</p> |
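+
+A note on `Status: Uptime`: the raw SNMP value is in timeticks (hundredths of a second), so the MULTIPLIER `0.01` step converts it to seconds, which is what the "Device has been restarted (uptime < 10m)" trigger below compares against. A small illustrative sketch:
+
+```js
+// MULTIPLIER 0.01: SNMP timeticks (1/100 s) -> seconds.
+function uptimeSeconds(timeticks) {
+    return timeticks * 0.01;
+}
+// The restart trigger fires while uptime is below 10 minutes (600 s).
+function restartedRecently(timeticks) {
+    return uptimeSeconds(timeticks) < 600;
+}
+// restartedRecently(45000) -> true   (450 s, i.e. the device came up ~7.5 minutes ago)
+```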
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Battery: Low battery voltage (below {#VOLTAGE.MIN.WARN}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m)</p> |
-|Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.CRIT}` |HIGH | |
-|Battery: High battery voltage (over {#VOLTAGE.MAX.WARN}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m)</p> |
-|Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m) |<p>-</p> |`{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.CRIT}` |HIGH | |
-|Battery: Device charge in warning state |<p>-</p> |`{TEMPLATE_NAME:charge.state[controlState.0{#SINGLETON}].last()}={$CHARGE.STATE.WARN}` |WARNING |<p>**Depends on**:</p><p>- Battery: Device charge in critical state</p> |
-|Battery: Device charge in critical state |<p>-</p> |`{TEMPLATE_NAME:charge.state[controlState.0{#SINGLETON}].last()}={$CHARGE.STATE.CRIT}` |HIGH | |
-|Load: Device load in warning state |<p>-</p> |`{TEMPLATE_NAME:load.state[loadState.0{#SINGLETON}].last()}={$LOAD.STATE.WARN:"lvdWarning"} or {TEMPLATE_NAME:load.state[loadState.0{#SINGLETON}].last()}={$LOAD.STATE.WARN:"override"}` |WARNING |<p>**Depends on**:</p><p>- Load: Device load in critical state</p> |
-|Load: Device load in critical state |<p>-</p> |`{TEMPLATE_NAME:load.state[loadState.0{#SINGLETON}].last()}={$LOAD.STATE.CRIT:"lvd"} or {TEMPLATE_NAME:load.state[loadState.0{#SINGLETON}].last()}={$LOAD.STATE.CRIT:"fault"}` |HIGH | |
-|Status: Device has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:status.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
-|Status: Failed to fetch data (or no data for 5m) |<p>Zabbix has not received data for items for the last 5 minutes</p> |`{TEMPLATE_NAME:status.uptime.nodata(5m)}=1` |WARNING |<p>Manual close: YES</p> |
-|Status: Device has "externalShort" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"externalShort","like")}=2` |HIGH | |
-|Status: Device has "overcurrent" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"overcurrent","like")}=2` |HIGH | |
-|Status: Device has "mosfetSShorted" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"mosfetSShorted","like")}=2` |HIGH | |
-|Status: Device has "softwareFault" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"softwareFault","like")}=2` |HIGH | |
-|Status: Device has "highVoltageDisconnect" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"highVoltageDisconnect","like")}=2` |HIGH | |
-|Status: Device has "tristarHot" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"tristarHot","like")}=2` |HIGH | |
-|Status: Device has "dipSwitchChange" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"dipSwitchChange","like")}=2` |HIGH | |
-|Status: Device has "customSettingsEdit" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"customSettingsEdit","like")}=2` |HIGH | |
-|Status: Device has "reset" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"reset","like")}=2` |HIGH | |
-|Status: Device has "systemMiswire" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"systemMiswire","like")}=2` |HIGH | |
-|Status: Device has "rtsShorted" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"rtsShorted","like")}=2` |HIGH | |
-|Status: Device has "rtsDisconnected" faults flag |<p>-</p> |`{TEMPLATE_NAME:status.faults[faults.0].count(#3,"rtsDisconnected","like")}=2` |HIGH | |
-|Status: Device has "rtsShorted" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsShorted","like")}=2` |WARNING | |
-|Status: Device has "rtsDisconnected" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsDisconnected","like")}=2` |WARNING | |
-|Status: Device has "heatsinkTempSensorOpen" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorOpen","like")}=2` |WARNING | |
-|Status: Device has "heatsinkTempSensorShorted" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorShorted","like")}=2` |WARNING | |
-|Status: Device has "tristarHot" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"tristarHot","like")}=2` |WARNING | |
-|Status: Device has "currentLimit" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentLimit","like")}=2` |WARNING | |
-|Status: Device has "currentOffset" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentOffset","like")}=2` |WARNING | |
-|Status: Device has "batterySense" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySense","like")}=2` |WARNING | |
-|Status: Device has "batterySenseDisconnected" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySenseDisconnected","like")}=2` |WARNING | |
-|Status: Device has "uncalibrated" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"uncalibrated","like")}=2` |WARNING | |
-|Status: Device has "rtsMiswire" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsMiswire","like")}=2` |WARNING | |
-|Status: Device has "highVoltageDisconnect" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highVoltageDisconnect","like")}=2` |WARNING | |
-|Status: Device has "diversionLoadNearMax" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"diversionLoadNearMax","like")}=2` |WARNING | |
-|Status: Device has "systemMiswire" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"systemMiswire","like")}=2` |WARNING | |
-|Status: Device has "mosfetSOpen" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"mosfetSOpen","like")}=2` |WARNING | |
-|Status: Device has "p12VoltageReferenceOff" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p12VoltageReferenceOff","like")}=2` |WARNING | |
-|Status: Device has "loadDisconnectState" alarm flag |<p>-</p> |`{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"loadDisconnectState","like")}=2` |WARNING | |
-|Temperature: Low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.WARN}` |WARNING |<p>**Depends on**:</p><p>- Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m)</p> |
-|Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.CRIT}` |HIGH | |
-|Temperature: High battery temperature (over {$BATTERY.TEMP.MAX.WARN}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m)</p> |
-|Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m) |<p>-</p> |`{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.CRIT}` |HIGH | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|------------------------------------------------------------------------------------------|----------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------------------------------------------|
+| Battery: Low battery voltage (below {#VOLTAGE.MIN.WARN}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m)</p> |
+| Battery: Critically low battery voltage (below {#VOLTAGE.MIN.CRIT}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].max(5m)}<{#VOLTAGE.MIN.CRIT}` | HIGH | |
+| Battery: High battery voltage (over {#VOLTAGE.MAX.WARN}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m)</p> |
+| Battery: Critically high battery voltage (over {#VOLTAGE.MAX.CRIT}V for 5m) | <p>-</p> | `{TEMPLATE_NAME:battery.voltage[batteryVoltage.0{#SINGLETON}].min(5m)}>{#VOLTAGE.MAX.CRIT}` | HIGH | |
+| Battery: Device charge in warning state | <p>-</p> | `{TEMPLATE_NAME:charge.state[controlState.0{#SINGLETON}].last()}={$CHARGE.STATE.WARN}` | WARNING | <p>**Depends on**:</p><p>- Battery: Device charge in critical state</p> |
+| Battery: Device charge in critical state | <p>-</p> | `{TEMPLATE_NAME:charge.state[controlState.0{#SINGLETON}].last()}={$CHARGE.STATE.CRIT}` | HIGH | |
+| Load: Device load in warning state | <p>-</p> | `{TEMPLATE_NAME:load.state[loadState.0{#SINGLETON}].last()}={$LOAD.STATE.WARN:"lvdWarning"} or {TEMPLATE_NAME:load.state[loadState.0{#SINGLETON}].last()}={$LOAD.STATE.WARN:"override"}` | WARNING | <p>**Depends on**:</p><p>- Load: Device load in critical state</p> |
+| Load: Device load in critical state | <p>-</p> | `{TEMPLATE_NAME:load.state[loadState.0{#SINGLETON}].last()}={$LOAD.STATE.CRIT:"lvd"} or {TEMPLATE_NAME:load.state[loadState.0{#SINGLETON}].last()}={$LOAD.STATE.CRIT:"fault"}` | HIGH | |
+| Status: Device has been restarted (uptime < 10m) | <p>Uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:status.uptime.last()}<10m` | INFO | <p>Manual close: YES</p> |
+| Status: Failed to fetch data (or no data for 5m) | <p>Zabbix has not received data for items for the last 5 minutes</p> | `{TEMPLATE_NAME:status.uptime.nodata(5m)}=1` | WARNING | <p>Manual close: YES</p> |
+| Status: Device has "externalShort" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"externalShort","like")}=2` | HIGH | |
+| Status: Device has "overcurrent" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"overcurrent","like")}=2` | HIGH | |
+| Status: Device has "mosfetSShorted" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"mosfetSShorted","like")}=2` | HIGH | |
+| Status: Device has "softwareFault" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"softwareFault","like")}=2` | HIGH | |
+| Status: Device has "highVoltageDisconnect" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"highVoltageDisconnect","like")}=2` | HIGH | |
+| Status: Device has "tristarHot" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"tristarHot","like")}=2` | HIGH | |
+| Status: Device has "dipSwitchChange" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"dipSwitchChange","like")}=2` | HIGH | |
+| Status: Device has "customSettingsEdit" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"customSettingsEdit","like")}=2` | HIGH | |
+| Status: Device has "reset" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"reset","like")}=2` | HIGH | |
+| Status: Device has "systemMiswire" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"systemMiswire","like")}=2` | HIGH | |
+| Status: Device has "rtsShorted" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"rtsShorted","like")}=2` | HIGH | |
+| Status: Device has "rtsDisconnected" faults flag | <p>-</p> | `{TEMPLATE_NAME:status.faults[faults.0].count(#3,"rtsDisconnected","like")}=2` | HIGH | |
+| Status: Device has "rtsShorted" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsShorted","like")}=2` | WARNING | |
+| Status: Device has "rtsDisconnected" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsDisconnected","like")}=2` | WARNING | |
+| Status: Device has "heatsinkTempSensorOpen" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorOpen","like")}=2` | WARNING | |
+| Status: Device has "heatsinkTempSensorShorted" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"heatsinkTempSensorShorted","like")}=2` | WARNING | |
+| Status: Device has "tristarHot" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"tristarHot","like")}=2` | WARNING | |
+| Status: Device has "currentLimit" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentLimit","like")}=2` | WARNING | |
+| Status: Device has "currentOffset" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"currentOffset","like")}=2` | WARNING | |
+| Status: Device has "batterySense" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySense","like")}=2` | WARNING | |
+| Status: Device has "batterySenseDisconnected" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"batterySenseDisconnected","like")}=2` | WARNING | |
+| Status: Device has "uncalibrated" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"uncalibrated","like")}=2` | WARNING | |
+| Status: Device has "rtsMiswire" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"rtsMiswire","like")}=2` | WARNING | |
+| Status: Device has "highVoltageDisconnect" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"highVoltageDisconnect","like")}=2` | WARNING | |
+| Status: Device has "diversionLoadNearMax" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"diversionLoadNearMax","like")}=2` | WARNING | |
+| Status: Device has "systemMiswire" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"systemMiswire","like")}=2` | WARNING | |
+| Status: Device has "mosfetSOpen" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"mosfetSOpen","like")}=2` | WARNING | |
+| Status: Device has "p12VoltageReferenceOff" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"p12VoltageReferenceOff","like")}=2` | WARNING | |
+| Status: Device has "loadDisconnectState" alarm flag | <p>-</p> | `{TEMPLATE_NAME:status.alarms[alarms.0].count(#3,"loadDisconnectState","like")}=2` | WARNING | |
+| Temperature: Low battery temperature (below {$BATTERY.TEMP.MIN.WARN}C for 5m)             | <p>-</p>                                                              | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.WARN}`                                                                                                       | WARNING  | <p>**Depends on**:</p><p>- Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.CRIT}C for 5m)</p>  |
+| Temperature: Critically low battery temperature (below {$BATTERY.TEMP.MIN.CRIT}C for 5m)  | <p>-</p>                                                              | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].max(5m)}<{$BATTERY.TEMP.MIN.CRIT}`                                                                                                       | HIGH     |                                                                                                                           |
+| Temperature: High battery temperature (over {$BATTERY.TEMP.MAX.WARN}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.WARN}` | WARNING | <p>**Depends on**:</p><p>- Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m)</p> |
+| Temperature: Critically high battery temperature (over {$BATTERY.TEMP.MAX.CRIT}C for 5m) | <p>-</p> | `{TEMPLATE_NAME:temp.battery[batteryTemperature.0].min(5m)}>{$BATTERY.TEMP.MAX.CRIT}` | HIGH | |
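The fault and alarm rows above all follow one pattern: `count(#3,"<flag>","like")}=2` fires when the flag substring appears in exactly two of the last three received values, so a single spurious sample does not raise a problem. A rough Python sketch of that check, using hypothetical item history (the real items hold the JavaScript-decoded flag lists):

```python
def flag_trigger_fires(last_three_values, flag):
    # count(#3, flag, "like") counts values containing the substring;
    # the trigger requires that count to equal 2.
    return sum(flag in value for value in last_three_values) == 2

history = ["overcurrent", "overcurrent", "noFaults"]   # hypothetical, newest first
print(flag_trigger_fires(history, "overcurrent"))      # True  -> problem
print(flag_trigger_fires(history, "externalShort"))    # False -> no problem
```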
## Feedback
diff --git a/templates/net/netgear_snmp/README.md b/templates/net/netgear_snmp/README.md
index bc9650468f6..2464f839235 100644
--- a/templates/net/netgear_snmp/README.md
+++ b/templates/net/netgear_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
https://kb.netgear.com/24352/MIBs-for-Smart-switches
This template was tested on:
@@ -20,62 +20,62 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$FAN_CRIT_STATUS:"failed"} |<p>-</p> |`2` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
-|{$PSU_CRIT_STATUS:"failed"} |<p>-</p> |`2` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT_STATUS} |<p>-</p> |`3` |
-|{$TEMP_CRIT} |<p>-</p> |`60` |
-|{$TEMP_WARN_STATUS} |<p>-</p> |`2` |
-|{$TEMP_WARN} |<p>-</p> |`50` |
+| Name | Description | Default |
+|-----------------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$FAN_CRIT_STATUS:"failed"} | <p>-</p> | `2` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
+| {$PSU_CRIT_STATUS:"failed"} | <p>-</p> | `2` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT_STATUS} | <p>-</p> | `3` |
+| {$TEMP_CRIT} | <p>-</p> | `60` |
+| {$TEMP_WARN_STATUS} | <p>-</p> | `2` |
+| {$TEMP_WARN} | <p>-</p> | `50` |
## Template links
-|Name|
-|----|
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|-----------------|
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Temperature Discovery |<p>FASTPATH-BOXSERVICES-PRIVATE-MIB::boxServicesTempSensorsTable</p> |SNMP |temp.discovery |
-|FAN Discovery |<p>FASTPATH-BOXSERVICES-PRIVATE-MIB::1.3.6.1.4.1.4526.10.43.1.6.1.1</p> |SNMP |fan.discovery |
-|PSU Discovery |<p>FASTPATH-BOXSERVICES-PRIVATE-MIB::boxServicesPowSupplyIndex</p> |SNMP |psu.discovery |
+| Name | Description | Type | Key and additional info |
+|-----------------------|-------------------------------------------------------------------------|------|-------------------------|
+| Temperature Discovery | <p>FASTPATH-BOXSERVICES-PRIVATE-MIB::boxServicesTempSensorsTable</p> | SNMP | temp.discovery |
+| FAN Discovery | <p>FASTPATH-BOXSERVICES-PRIVATE-MIB::1.3.6.1.4.1.4526.10.43.1.6.1.1</p> | SNMP | fan.discovery |
+| PSU Discovery | <p>FASTPATH-BOXSERVICES-PRIVATE-MIB::boxServicesPowSupplyIndex</p> | SNMP | psu.discovery |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |CPU utilization |<p>MIB: FASTPATH-SWITCHING-MIB</p><p>CPU utilization in %</p> |SNMP |system.cpu.util[agentSwitchCpuProcessTotalUtilization.0]<p>**Preprocessing**:</p><p>- REGEX: `60 Secs \( ([0-9\.]+)%\).+300 Secs \1`</p> |
-|Fans |#{#SNMPVALUE}: Fan status |<p>MIB: FASTPATH-BOXSERVICES-PRIVATE-MIB</p><p>The status of fan</p> |SNMP |sensor.fan.status[boxServicesFanItemState.{#SNMPINDEX}] |
-|Inventory |Operating system |<p>MIB: FASTPATH-SWITCHING-MIB</p><p>Operating System running on this unit</p> |SNMP |system.sw.os[agentInventoryOperatingSystem.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware model name |<p>MIB: FASTPATH-SWITCHING-MIB</p> |SNMP |system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware serial number |<p>MIB: FASTPATH-SWITCHING-MIB</p><p>Serial number of the switch</p> |SNMP |system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |Available memory |<p>MIB: FASTPATH-SWITCHING-MIB</p><p>The total memory freed for utilization.</p> |SNMP |vm.memory.available[agentSwitchCpuProcessMemFree.0] |
-|Memory |Total memory |<p>MIB: FASTPATH-SWITCHING-MIB</p><p>The total Memory allocated for the tasks</p> |SNMP |vm.memory.total[agentSwitchCpuProcessMemAvailable.0] |
-|Memory |Memory utilization |<p>Memory utilization in %</p> |CALCULATED |vm.memory.util[memoryUsedPercentage.0]<p>**Expression**:</p>`(last("vm.memory.total[agentSwitchCpuProcessMemAvailable.0]")-last("vm.memory.available[agentSwitchCpuProcessMemFree.0]"))/last("vm.memory.total[agentSwitchCpuProcessMemAvailable.0]")*100` |
-|Power_supply |#{#SNMPVALUE}: Power supply status |<p>MIB: FASTPATH-BOXSERVICES-PRIVATE-MIB</p><p>The status of power supply</p> |SNMP |sensor.psu.status[boxServicesPowSupplyItemState.{#SNMPINDEX}] |
-|Temperature |#{#SNMPVALUE}: Temperature |<p>MIB: FASTPATH-BOXSERVICES-PRIVATE-MIB</p><p>The temperature value reported by sensor</p> |SNMP |sensor.temp.value[boxServicesTempSensorTemperature.{#SNMPINDEX}] |
-|Temperature |#{#SNMPVALUE}: Temperature status |<p>MIB: FASTPATH-BOXSERVICES-PRIVATE-MIB</p><p>The state of temperature sensor</p> |SNMP |sensor.temp.status[boxServicesTempSensorState.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|--------------|------------------------------------|---------------------------------------------------------------------------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CPU | CPU utilization | <p>MIB: FASTPATH-SWITCHING-MIB</p><p>CPU utilization in %</p> | SNMP | system.cpu.util[agentSwitchCpuProcessTotalUtilization.0]<p>**Preprocessing**:</p><p>- REGEX: `60 Secs \( ([0-9\.]+)%\).+300 Secs \1`</p> |
+| Fans | #{#SNMPVALUE}: Fan status | <p>MIB: FASTPATH-BOXSERVICES-PRIVATE-MIB</p><p>The status of fan</p> | SNMP | sensor.fan.status[boxServicesFanItemState.{#SNMPINDEX}] |
+| Inventory | Operating system | <p>MIB: FASTPATH-SWITCHING-MIB</p><p>Operating System running on this unit</p> | SNMP | system.sw.os[agentInventoryOperatingSystem.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware model name | <p>MIB: FASTPATH-SWITCHING-MIB</p> | SNMP | system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware serial number | <p>MIB: FASTPATH-SWITCHING-MIB</p><p>Serial number of the switch</p> | SNMP | system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory | Available memory | <p>MIB: FASTPATH-SWITCHING-MIB</p><p>The total memory freed for utilization.</p> | SNMP | vm.memory.available[agentSwitchCpuProcessMemFree.0] |
+| Memory | Total memory | <p>MIB: FASTPATH-SWITCHING-MIB</p><p>The total Memory allocated for the tasks</p> | SNMP | vm.memory.total[agentSwitchCpuProcessMemAvailable.0] |
+| Memory | Memory utilization | <p>Memory utilization in %</p> | CALCULATED | vm.memory.util[memoryUsedPercentage.0]<p>**Expression**:</p>`(last("vm.memory.total[agentSwitchCpuProcessMemAvailable.0]")-last("vm.memory.available[agentSwitchCpuProcessMemFree.0]"))/last("vm.memory.total[agentSwitchCpuProcessMemAvailable.0]")*100` |
+| Power_supply | #{#SNMPVALUE}: Power supply status | <p>MIB: FASTPATH-BOXSERVICES-PRIVATE-MIB</p><p>The status of power supply</p> | SNMP | sensor.psu.status[boxServicesPowSupplyItemState.{#SNMPINDEX}] |
+| Temperature | #{#SNMPVALUE}: Temperature | <p>MIB: FASTPATH-BOXSERVICES-PRIVATE-MIB</p><p>The temperature value reported by sensor</p> | SNMP | sensor.temp.value[boxServicesTempSensorTemperature.{#SNMPINDEX}] |
+| Temperature | #{#SNMPVALUE}: Temperature status | <p>MIB: FASTPATH-BOXSERVICES-PRIVATE-MIB</p><p>The state of temperature sensor</p> | SNMP | sensor.temp.status[boxServicesTempSensorState.{#SNMPINDEX}] |
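Two of the items above do simple post-processing that is easy to misread in table form: the CPU item keeps only the 60-second figure out of the utilization string (pattern `60 Secs \( ([0-9\.]+)%\).+300 Secs`, output `\1`, as the rendered step reads), and the CALCULATED memory item computes `(total - available) / total * 100`. A small Python sketch with hypothetical readings:

```python
import re

# Hypothetical agentSwitchCpuProcessTotalUtilization.0 string.
raw = "5 Secs ( 11.22%)   60 Secs ( 9.87%)   300 Secs ( 8.50%)"
cpu_util = float(re.search(r"60 Secs \( ([0-9.]+)%\).+300 Secs", raw).group(1))

def memory_utilization(total_bytes, available_bytes):
    # vm.memory.util[memoryUsedPercentage.0]: (total - available) / total * 100
    return (total_bytes - available_bytes) / total_bytes * 100.0

print(cpu_util)                                  # 9.87
print(memory_utilization(268435456, 67108864))   # 75.0
```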
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[agentSwitchCpuProcessTotalUtilization.0].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|#{#SNMPVALUE}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[boxServicesFanItemState.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"failed"},eq)}=1` |AVERAGE | |
-|Operating system description has changed |<p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> |`{TEMPLATE_NAME:system.sw.os[agentInventoryOperatingSystem.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[agentInventoryOperatingSystem.0].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[memoryUsedPercentage.0].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
-|#{#SNMPVALUE}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[boxServicesPowSupplyItemState.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"failed"},eq)}=1` |AVERAGE | |
-|#{#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[boxServicesTempSensorTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""} or {Netgear Fastpath SNMP:sensor.temp.status[boxServicesTempSensorState.{#SNMPINDEX}].last()}={$TEMP_WARN_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[boxServicesTempSensorTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- #{#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|#{#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[boxServicesTempSensorTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""} or {Netgear Fastpath SNMP:sensor.temp.status[boxServicesTempSensorState.{#SNMPINDEX}].last()}={$TEMP_CRIT_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[boxServicesTempSensorTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|#{#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[boxServicesTempSensorTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[boxServicesTempSensorTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|--------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------------------------------------------------------------------------------------------|
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[agentSwitchCpuProcessTotalUtilization.0].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| #{#SNMPVALUE}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[boxServicesFanItemState.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"failed"},eq)}=1` | AVERAGE | |
+| Operating system description has changed | <p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> | `{TEMPLATE_NAME:system.sw.os[agentInventoryOperatingSystem.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[agentInventoryOperatingSystem.0].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[memoryUsedPercentage.0].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
+| #{#SNMPVALUE}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[boxServicesPowSupplyItemState.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"failed"},eq)}=1` | AVERAGE | |
+| #{#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[boxServicesTempSensorTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""} or {Netgear Fastpath SNMP:sensor.temp.status[boxServicesTempSensorState.{#SNMPINDEX}].last()}={$TEMP_WARN_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[boxServicesTempSensorTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- #{#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| #{#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[boxServicesTempSensorTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""} or {Netgear Fastpath SNMP:sensor.temp.status[boxServicesTempSensorState.{#SNMPINDEX}].last()}={$TEMP_CRIT_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[boxServicesTempSensorTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| #{#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[boxServicesTempSensorTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[boxServicesTempSensorTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
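The temperature triggers above use a recovery expression as hysteresis: the warning fires once avg(5m) exceeds {$TEMP_WARN} (or the sensor status reports warning) and only clears after max(5m) drops 3 degrees below the threshold. A simplified Python sketch of the value-based part only, using the default {$TEMP_WARN} of 50 and ignoring the sensor-status condition:

```python
def temp_warning_state(avg_5m, max_5m, in_problem, temp_warn=50.0):
    # Problem when avg(5m) > {$TEMP_WARN}; once in problem, it recovers
    # only when max(5m) < {$TEMP_WARN} - 3 (hysteresis against flapping).
    if not in_problem:
        return avg_5m > temp_warn
    return not (max_5m < temp_warn - 3)

print(temp_warning_state(51.0, 52.0, in_problem=False))  # True  -> fires
print(temp_warning_state(48.0, 49.0, in_problem=True))   # True  -> still active
print(temp_warning_state(45.0, 46.5, in_problem=True))   # False -> recovered
```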
## Feedback
diff --git a/templates/net/qtech_snmp/README.md b/templates/net/qtech_snmp/README.md
index e732bf4dd58..a00f2ef227c 100644
--- a/templates/net/qtech_snmp/README.md
+++ b/templates/net/qtech_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,62 +15,62 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$FAN_CRIT_STATUS} |<p>-</p> |`1` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
-|{$PSU_CRIT_STATUS} |<p>-</p> |`1` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`75` |
-|{$TEMP_WARN} |<p>-</p> |`65` |
+| Name | Description | Default |
+|--------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$FAN_CRIT_STATUS} | <p>-</p> | `1` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
+| {$PSU_CRIT_STATUS} | <p>-</p> | `1` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `75` |
+| {$TEMP_WARN} | <p>-</p> | `65` |
## Template links
-|Name|
-|----|
-|EtherLike-MIB SNMP |
-|Generic SNMP |
-|Interfaces SNMP |
+| Name |
+|--------------------|
+| EtherLike-MIB SNMP |
+| Generic SNMP |
+| Interfaces SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|PSU Discovery |<p>-</p> |SNMP |psu.discovery |
-|FAN Discovery |<p>-</p> |SNMP |fan.discovery |
+| Name | Description | Type | Key and additional info |
+|---------------|-------------|------|-------------------------|
+| PSU Discovery | <p>-</p> | SNMP | psu.discovery |
+| FAN Discovery | <p>-</p> | SNMP | fan.discovery |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |CPU utilization |<p>MIB: QTECH-MIB</p><p>CPU utilization in %</p> |SNMP |system.cpu.util[switchCpuUsage.0] |
-|Fans |{#SNMPINDEX}: Fan status |<p>MIB: QTECH-MIB</p> |SNMP |sensor.fan.status[sysFanStatus.{#SNMPINDEX}] |
-|Inventory |Hardware model name |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware serial number |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Firmware version |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware version(revision) |<p>MIB: ENTITY-MIB</p> |SNMP |system.hw.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Operating system |<p>MIB: QTECH-MIB</p> |SNMP |system.sw.os[sysSoftwareVersion.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |Used memory |<p>MIB: QTECH-MIB</p><p>Used memory in Bytes</p> |SNMP |vm.memory.used[switchMemoryBusy.0] |
-|Memory |Total memory |<p>MIB: QTECH-MIB</p><p>Total memory in Bytes</p> |SNMP |vm.memory.total[switchMemorySize.0] |
-|Memory |Memory utilization |<p>Memory utilization in %</p> |CALCULATED |vm.memory.util[vm.memory.util.0]<p>**Expression**:</p>`last("vm.memory.used[switchMemoryBusy.0]")/last("vm.memory.total[switchMemorySize.0]")*100` |
-|Power_supply |{#SNMPINDEX}: Power supply status |<p>MIB: QTECH-MIB</p> |SNMP |sensor.psu.status[sysPowerStatus.{#SNMPINDEX}] |
-|Temperature |Temperature |<p>MIB: QTECH-MIB</p><p>Temperature readings of testpoint: __RESOURCE__</p> |SNMP |sensor.temp.value[switchTemperature.0] |
+| Group | Name | Description | Type | Key and additional info |
+|--------------|-----------------------------------|-----------------------------------------------------------------------------|------------|----------------------------------------------------------------------------------------------------------------------------------------------------|
+| CPU | CPU utilization | <p>MIB: QTECH-MIB</p><p>CPU utilization in %</p> | SNMP | system.cpu.util[switchCpuUsage.0] |
+| Fans | {#SNMPINDEX}: Fan status | <p>MIB: QTECH-MIB</p> | SNMP | sensor.fan.status[sysFanStatus.{#SNMPINDEX}] |
+| Inventory | Hardware model name | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware serial number | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Firmware version | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware version(revision) | <p>MIB: ENTITY-MIB</p> | SNMP | system.hw.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Operating system | <p>MIB: QTECH-MIB</p> | SNMP | system.sw.os[sysSoftwareVersion.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory | Used memory | <p>MIB: QTECH-MIB</p><p>Used memory in Bytes</p> | SNMP | vm.memory.used[switchMemoryBusy.0] |
+| Memory | Total memory | <p>MIB: QTECH-MIB</p><p>Total memory in Bytes</p> | SNMP | vm.memory.total[switchMemorySize.0] |
+| Memory | Memory utilization | <p>Memory utilization in %</p> | CALCULATED | vm.memory.util[vm.memory.util.0]<p>**Expression**:</p>`last("vm.memory.used[switchMemoryBusy.0]")/last("vm.memory.total[switchMemorySize.0]")*100` |
+| Power_supply | {#SNMPINDEX}: Power supply status | <p>MIB: QTECH-MIB</p> | SNMP | sensor.psu.status[sysPowerStatus.{#SNMPINDEX}] |
+| Temperature | Temperature | <p>MIB: QTECH-MIB</p><p>Temperature readings of testpoint: __RESOURCE__</p> | SNMP | sensor.temp.value[switchTemperature.0] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[switchCpuUsage.0].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|{#SNMPINDEX}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[sysFanStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Firmware has changed |<p>Firmware version has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Operating system description has changed |<p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> |`{TEMPLATE_NAME:system.sw.os[sysSoftwareVersion.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[sysSoftwareVersion.0].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[vm.memory.util.0].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
-|{#SNMPINDEX}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[sysPowerStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[switchTemperature.0].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[switchTemperature.0].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[switchTemperature.0].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[switchTemperature.0].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[switchTemperature.0].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[switchTemperature.0].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-----------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|------------------------------------------------------------------------------------------|
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[switchCpuUsage.0].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| {#SNMPINDEX}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[sysFanStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Firmware has changed | <p>Firmware version has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Operating system description has changed | <p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> | `{TEMPLATE_NAME:system.sw.os[sysSoftwareVersion.0].diff()}=1 and {TEMPLATE_NAME:system.sw.os[sysSoftwareVersion.0].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[vm.memory.util.0].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
+| {#SNMPINDEX}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[sysPowerStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[switchTemperature.0].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[switchTemperature.0].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[switchTemperature.0].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[switchTemperature.0].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[switchTemperature.0].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[switchTemperature.0].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
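The fan and power-supply triggers above compare only the most recent sample against a status code: `count(#1,{$PSU_CRIT_STATUS},eq)}=1` is true when the last received value equals the critical status (default `1` in this template's macros). A trivial sketch of that condition, with a hypothetical latest value:

```python
def status_trigger_fires(latest_value, crit_status=1):
    # count(#1, crit_status, eq) over one sample is 1 exactly when the
    # newest value equals the critical status code.
    return int(latest_value == crit_status) == 1

print(status_trigger_fires(1))  # True  -> AVERAGE severity problem
print(status_trigger_fires(2))  # False
```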
## Feedback
diff --git a/templates/net/tplink_snmp/README.md b/templates/net/tplink_snmp/README.md
index e7af55e8f2e..5f6e5d82ce0 100644
--- a/templates/net/tplink_snmp/README.md
+++ b/templates/net/tplink_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
Link to MIBs: https://www.tp-link.com/en/support/download/t2600g-28ts/#MIBs_Files
Sample device overview page: https://www.tp-link.com/en/business-networking/managed-switch/t2600g-28ts/#overview
Emulation page (web): https://emulator.tp-link.com/T2600G-28TS(UN)_1.0/Index.htm
@@ -23,44 +23,44 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
+| Name | Description | Default |
+|--------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
## Template links
-|Name|
-|----|
-|Generic SNMP |
-|Interfaces Simple SNMP |
+| Name |
+|------------------------|
+| Generic SNMP |
+| Interfaces Simple SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|CPU Discovery |<p>Discovering TPLINK-SYSMONITOR-MIB::tpSysMonitorCpuTable, displays the CPU utilization of all UNITs.</p> |SNMP |cpu.discovery |
-|Memory Discovery |<p>Discovering TPLINK-SYSMONITOR-MIB::tpSysMonitorMemoryTable, displays the memory utilization of all UNITs.</p> |SNMP |memory.discovery |
+| Name | Description | Type | Key and additional info |
+|------------------|------------------------------------------------------------------------------------------------------------------|------|-------------------------|
+| CPU Discovery | <p>Discovering TPLINK-SYSMONITOR-MIB::tpSysMonitorCpuTable, displays the CPU utilization of all UNITs.</p> | SNMP | cpu.discovery |
+| Memory Discovery | <p>Discovering TPLINK-SYSMONITOR-MIB::tpSysMonitorMemoryTable, displays the memory utilization of all UNITs.</p> | SNMP | memory.discovery |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |#{#SNMPVALUE}: CPU utilization |<p>MIB: TPLINK-SYSMONITOR-MIB</p><p>Displays the CPU utilization in 1 minute.</p><p>Reference: http://www.tp-link.com/faq-1330.html</p> |SNMP |system.cpu.util[tpSysMonitorCpu1Minute.{#SNMPINDEX}] |
-|Inventory |Hardware model name |<p>MIB: TPLINK-SYSINFO-MIB</p><p>The hardware version of the product.</p> |SNMP |system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware serial number |<p>MIB: TPLINK-SYSINFO-MIB</p><p>The Serial number of the product.</p> |SNMP |system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Firmware version |<p>MIB: TPLINK-SYSINFO-MIB</p><p>The software version of the product.</p> |SNMP |system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware version(revision) |<p>MIB: TPLINK-SYSINFO-MIB</p><p>The hardware version of the product.</p> |SNMP |system.hw.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |#{#SNMPVALUE}: Memory utilization |<p>MIB: TPLINK-SYSMONITOR-MIB</p><p>Displays the memory utilization.</p><p>Reference: http://www.tp-link.com/faq-1330.html</p> |SNMP |vm.memory.util[tpSysMonitorMemoryUtilization.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|-----------|-----------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------|------|-------------------------------------------------------------------------------------------|
+| CPU | #{#SNMPVALUE}: CPU utilization | <p>MIB: TPLINK-SYSMONITOR-MIB</p><p>Displays the CPU utilization in 1 minute.</p><p>Reference: http://www.tp-link.com/faq-1330.html</p> | SNMP | system.cpu.util[tpSysMonitorCpu1Minute.{#SNMPINDEX}] |
+| Inventory | Hardware model name | <p>MIB: TPLINK-SYSINFO-MIB</p><p>The hardware version of the product.</p> | SNMP | system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware serial number | <p>MIB: TPLINK-SYSINFO-MIB</p><p>The Serial number of the product.</p> | SNMP | system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Firmware version | <p>MIB: TPLINK-SYSINFO-MIB</p><p>The software version of the product.</p> | SNMP | system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware version(revision) | <p>MIB: TPLINK-SYSINFO-MIB</p><p>The hardware version of the product.</p> | SNMP | system.hw.version<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory | #{#SNMPVALUE}: Memory utilization | <p>MIB: TPLINK-SYSMONITOR-MIB</p><p>Displays the memory utilization.</p><p>Reference: http://www.tp-link.com/faq-1330.html</p> | SNMP | vm.memory.util[tpSysMonitorMemoryUtilization.{#SNMPINDEX}] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|#{#SNMPVALUE}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[tpSysMonitorCpu1Minute.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Firmware has changed |<p>Firmware version has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|#{#SNMPVALUE}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[tpSysMonitorMemoryUtilization.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|----------------------------------------------------------------------|--------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------|----------|----------------------------------|
+| #{#SNMPVALUE}: High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[tpSysMonitorCpu1Minute.{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Firmware has changed | <p>Firmware version has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| #{#SNMPVALUE}: High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[tpSysMonitorMemoryUtilization.{#SNMPINDEX}].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
## Feedback
diff --git a/templates/net/ubiquiti_airos_snmp/README.md b/templates/net/ubiquiti_airos_snmp/README.md
index d8859a76640..180ed47f2d1 100644
--- a/templates/net/ubiquiti_airos_snmp/README.md
+++ b/templates/net/ubiquiti_airos_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,41 +15,41 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
+| Name | Description | Default |
+|--------------------|-------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
## Template links
-|Name|
-|----|
-|Generic SNMP |
-|Interfaces Simple SNMP |
+| Name |
+|------------------------|
+| Generic SNMP |
+| Interfaces Simple SNMP |
## Discovery rules
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |CPU utilization |<p>MIB: FROGFOOT-RESOURCES-MIB</p><p>5 minute load average of processor load.</p> |SNMP |system.cpu.util[loadValue.2] |
-|Inventory |Hardware model name |<p>MIB: IEEE802dot11-MIB</p><p>A printable string used to identify the manufacturer's product name of the resource. Maximum string length is 128 octets.</p> |SNMP |system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Firmware version |<p>MIB: IEEE802dot11-MIB</p><p>Printable string used to identify the manufacturer's product version of the resource. Maximum string length is 128 octets.</p> |SNMP |system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |Free memory |<p>MIB: FROGFOOT-RESOURCES-MIB</p> |SNMP |vm.memory.free[memFree.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |Total memory |<p>MIB: FROGFOOT-RESOURCES-MIB</p><p>Total memory in Bytes</p> |SNMP |vm.memory.total[memTotal.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |Memory (buffers) |<p>MIB: FROGFOOT-RESOURCES-MIB</p><p>Memory used by kernel buffers (Buffers in /proc/meminfo)</p> |SNMP |vm.memory.buffers[memBuffer.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |Memory (cached) |<p>MIB: FROGFOOT-RESOURCES-MIB</p><p>Memory used by the page cache and slabs (Cached and Slab in /proc/meminfo)</p> |SNMP |vm.memory.cached[memCache.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |Memory utilization |<p>Memory utilization in %</p> |CALCULATED |vm.memory.util[memoryUsedPercentage]<p>**Expression**:</p>`(last("vm.memory.total[memTotal.0]")-(last("vm.memory.free[memFree.0]")+last("vm.memory.buffers[memBuffer.0]")+last("vm.memory.cached[memCache.0]")))/last("vm.memory.total[memTotal.0]")*100` |
+| Group | Name | Description | Type | Key and additional info |
+|-----------|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CPU | CPU utilization | <p>MIB: FROGFOOT-RESOURCES-MIB</p><p>5 minute load average of processor load.</p> | SNMP | system.cpu.util[loadValue.2] |
+| Inventory | Hardware model name | <p>MIB: IEEE802dot11-MIB</p><p>A printable string used to identify the manufacturer's product name of the resource. Maximum string length is 128 octets.</p> | SNMP | system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Firmware version | <p>MIB: IEEE802dot11-MIB</p><p>Printable string used to identify the manufacturer's product version of the resource. Maximum string length is 128 octets.</p> | SNMP | system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory | Free memory | <p>MIB: FROGFOOT-RESOURCES-MIB</p> | SNMP | vm.memory.free[memFree.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | Total memory | <p>MIB: FROGFOOT-RESOURCES-MIB</p><p>Total memory in Bytes</p> | SNMP | vm.memory.total[memTotal.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | Memory (buffers) | <p>MIB: FROGFOOT-RESOURCES-MIB</p><p>Memory used by kernel buffers (Buffers in /proc/meminfo)</p> | SNMP | vm.memory.buffers[memBuffer.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | Memory (cached) | <p>MIB: FROGFOOT-RESOURCES-MIB</p><p>Memory used by the page cache and slabs (Cached and Slab in /proc/meminfo)</p> | SNMP | vm.memory.cached[memCache.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | Memory utilization | <p>Memory utilization in %</p> | CALCULATED | vm.memory.util[memoryUsedPercentage]<p>**Expression**:</p>`(last("vm.memory.total[memTotal.0]")-(last("vm.memory.free[memFree.0]")+last("vm.memory.buffers[memBuffer.0]")+last("vm.memory.cached[memCache.0]")))/last("vm.memory.total[memTotal.0]")*100` |
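The FROGFOOT-RESOURCES-MIB memory counters above are reported in kibibytes (hence the MULTIPLIER of 1024), and the CALCULATED utilization item treats buffers and cache as reclaimable, Linux-style: `(total - (free + buffers + cached)) / total * 100`. A short Python sketch with hypothetical readings:

```python
KIB = 1024  # the MULTIPLIER step converts kibibyte counters to bytes

def airos_memory_util(total_kib, free_kib, buffers_kib, cached_kib):
    # vm.memory.util[memoryUsedPercentage]:
    # (total - (free + buffers + cached)) / total * 100
    total = total_kib * KIB
    reclaimable_or_free = (free_kib + buffers_kib + cached_kib) * KIB
    return (total - reclaimable_or_free) / total * 100.0

print(airos_memory_util(65536, 8192, 2048, 16384))  # 59.375 (%)
```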
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[loadValue.2].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
-|Firmware has changed |<p>Firmware version has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[memoryUsedPercentage].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------|--------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|----------|----------------------------------|
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[loadValue.2].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
+| Firmware has changed | <p>Firmware version has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[memoryUsedPercentage].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | |
## Feedback
diff --git a/templates/os/linux/README.md b/templates/os/linux/README.md
index 1427e5eef7f..03a21e8f0f9 100644
--- a/templates/os/linux/README.md
+++ b/templates/os/linux/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,10 +15,10 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$LOAD_AVG_PER_CPU.MAX.WARN} |<p>Load per CPU considered sustainable. Tune if needed.</p> |`1.5` |
+| Name | Description | Default |
+|------------------------------|-------------------------------------------------------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$LOAD_AVG_PER_CPU.MAX.WARN} | <p>Load per CPU considered sustainable. Tune if needed.</p> | `1.5` |
## Template links
@@ -29,32 +29,32 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |Number of CPUs |<p>-</p> |ZABBIX_PASSIVE |system.cpu.num<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|CPU |Load average (1m avg) |<p>-</p> |ZABBIX_PASSIVE |system.cpu.load[all,avg1] |
-|CPU |Load average (5m avg) |<p>-</p> |ZABBIX_PASSIVE |system.cpu.load[all,avg5] |
-|CPU |Load average (15m avg) |<p>-</p> |ZABBIX_PASSIVE |system.cpu.load[all,avg15] |
-|CPU |CPU utilization |<p>CPU utilization in %</p> |DEPENDENT |system.cpu.util<p>**Preprocessing**:</p><p>- JAVASCRIPT: `//Calculate utilization return (100 - value)`</p> |
-|CPU |CPU idle time |<p>The time the CPU has spent doing nothing.</p> |ZABBIX_PASSIVE |system.cpu.util[,idle] |
-|CPU |CPU system time |<p>The time the CPU has spent running the kernel and its processes.</p> |ZABBIX_PASSIVE |system.cpu.util[,system] |
-|CPU |CPU user time |<p>The time the CPU has spent running users' processes that are not niced.</p> |ZABBIX_PASSIVE |system.cpu.util[,user] |
-|CPU |CPU nice time |<p>The time the CPU has spent running users' processes that have been niced.</p> |ZABBIX_PASSIVE |system.cpu.util[,nice] |
-|CPU |CPU iowait time |<p>Amount of time the CPU has been waiting for I/O to complete.</p> |ZABBIX_PASSIVE |system.cpu.util[,iowait] |
-|CPU |CPU steal time |<p>The amount of CPU 'stolen' from this virtual machine by the hypervisor for other tasks (such as running another virtual machine).</p> |ZABBIX_PASSIVE |system.cpu.util[,steal] |
-|CPU |CPU interrupt time |<p>The amount of time the CPU has been servicing hardware interrupts.</p> |ZABBIX_PASSIVE |system.cpu.util[,interrupt] |
-|CPU |CPU softirq time |<p>The amount of time the CPU has been servicing software interrupts.</p> |ZABBIX_PASSIVE |system.cpu.util[,softirq] |
-|CPU |CPU guest time |<p>Guest time (time spent running a virtual CPU for a guest operating system)</p> |ZABBIX_PASSIVE |system.cpu.util[,guest] |
-|CPU |CPU guest nice time |<p>Time spent running a niced guest (virtual CPU for guest operating systems under the control of the Linux kernel)</p> |ZABBIX_PASSIVE |system.cpu.util[,guest_nice] |
-|CPU |Context switches per second |<p>-</p> |ZABBIX_PASSIVE |system.cpu.switches<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|CPU |Interrupts per second |<p>-</p> |ZABBIX_PASSIVE |system.cpu.intr<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Group | Name | Description | Type | Key and additional info |
+|-------|-----------------------------|------------------------------------------------------------------------------------------------------------------------------------------|----------------|-------------------------------------------------------------------------------------------------------------|
+| CPU | Number of CPUs | <p>-</p> | ZABBIX_PASSIVE | system.cpu.num<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| CPU | Load average (1m avg) | <p>-</p> | ZABBIX_PASSIVE | system.cpu.load[all,avg1] |
+| CPU | Load average (5m avg) | <p>-</p> | ZABBIX_PASSIVE | system.cpu.load[all,avg5] |
+| CPU | Load average (15m avg) | <p>-</p> | ZABBIX_PASSIVE | system.cpu.load[all,avg15] |
+| CPU | CPU utilization | <p>CPU utilization in %</p> | DEPENDENT | system.cpu.util<p>**Preprocessing**:</p><p>- JAVASCRIPT: `//Calculate utilization return (100 - value)`</p> |
+| CPU | CPU idle time | <p>The time the CPU has spent doing nothing.</p> | ZABBIX_PASSIVE | system.cpu.util[,idle] |
+| CPU | CPU system time | <p>The time the CPU has spent running the kernel and its processes.</p> | ZABBIX_PASSIVE | system.cpu.util[,system] |
+| CPU | CPU user time | <p>The time the CPU has spent running users' processes that are not niced.</p> | ZABBIX_PASSIVE | system.cpu.util[,user] |
+| CPU | CPU nice time | <p>The time the CPU has spent running users' processes that have been niced.</p> | ZABBIX_PASSIVE | system.cpu.util[,nice] |
+| CPU | CPU iowait time | <p>Amount of time the CPU has been waiting for I/O to complete.</p> | ZABBIX_PASSIVE | system.cpu.util[,iowait] |
+| CPU | CPU steal time | <p>The amount of CPU 'stolen' from this virtual machine by the hypervisor for other tasks (such as running another virtual machine).</p> | ZABBIX_PASSIVE | system.cpu.util[,steal] |
+| CPU | CPU interrupt time | <p>The amount of time the CPU has been servicing hardware interrupts.</p> | ZABBIX_PASSIVE | system.cpu.util[,interrupt] |
+| CPU | CPU softirq time | <p>The amount of time the CPU has been servicing software interrupts.</p> | ZABBIX_PASSIVE | system.cpu.util[,softirq] |
+| CPU | CPU guest time | <p>Guest time (time spent running a virtual CPU for a guest operating system)</p> | ZABBIX_PASSIVE | system.cpu.util[,guest] |
+| CPU | CPU guest nice time | <p>Time spent running a niced guest (virtual CPU for guest operating systems under the control of the Linux kernel)</p> | ZABBIX_PASSIVE | system.cpu.util[,guest_nice] |
+| CPU | Context switches per second | <p>-</p> | ZABBIX_PASSIVE | system.cpu.switches<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| CPU | Interrupts per second | <p>-</p> | ZABBIX_PASSIVE | system.cpu.intr<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
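
The `CPU utilization` item is DEPENDENT, and its JavaScript preprocessing step is flattened into a single table cell above (a `//Calculate utilization` comment followed by a `return`). A minimal sketch of what that step does, assuming the master item reports the non-busy percentage:

```javascript
// Zabbix JavaScript preprocessing receives the master item value as a string
// named `value`; it is wrapped in a function here so it can be run standalone.
function calculateUtilization(value) {
  // Calculate utilization
  return 100 - Number(value);
}

console.log(calculateUtilization('75')); // 25 -> 25% of CPU time was busy
```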
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Load average is too high (per CPU load over {$LOAD_AVG_PER_CPU.MAX.WARN} for 5m) |<p>Per CPU load average is too high. Your system may be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.load[all,avg1].min(5m)}/{Linux CPU by Zabbix agent:system.cpu.num.last()}>{$LOAD_AVG_PER_CPU.MAX.WARN} and {Linux CPU by Zabbix agent:system.cpu.load[all,avg5].last()}>0 and {Linux CPU by Zabbix agent:system.cpu.load[all,avg15].last()}>0` |AVERAGE | |
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util.min(5m)}>{$CPU.UTIL.CRIT}` |WARNING |<p>**Depends on**:</p><p>- Load average is too high (per CPU load over {$LOAD_AVG_PER_CPU.MAX.WARN} for 5m)</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|----------------------------------------------------------------------------------|------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------------------------------------------------------------------|
+| Load average is too high (per CPU load over {$LOAD_AVG_PER_CPU.MAX.WARN} for 5m) | <p>Per CPU load average is too high. Your system may be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.load[all,avg1].min(5m)}/{Linux CPU by Zabbix agent:system.cpu.num.last()}>{$LOAD_AVG_PER_CPU.MAX.WARN} and {Linux CPU by Zabbix agent:system.cpu.load[all,avg5].last()}>0 and {Linux CPU by Zabbix agent:system.cpu.load[all,avg15].last()}>0` | AVERAGE | |
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util.min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | <p>**Depends on**:</p><p>- Load average is too high (per CPU load over {$LOAD_AVG_PER_CPU.MAX.WARN} for 5m)</p> |
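
The load-average trigger above normalises the 1-minute load by the number of CPUs and additionally requires the 5- and 15-minute averages to be non-zero. A small sketch of that condition with hypothetical inputs (in the real trigger the first argument is `min(5m)` over history, not a single sample):

```javascript
// Sketch of: load1/cpuNum > {$LOAD_AVG_PER_CPU.MAX.WARN} and load5 > 0 and load15 > 0
function loadTooHigh(load1, load5, load15, cpuNum, maxWarnPerCpu) {
  return (load1 / cpuNum) > maxWarnPerCpu && load5 > 0 && load15 > 0;
}

console.log(loadTooHigh(13.2, 10.5, 9.8, 8, 1.5));  // true  (1.65 per CPU)
console.log(loadTooHigh(13.2, 10.5, 9.8, 16, 1.5)); // false (0.825 per CPU)
```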
## Feedback
@@ -64,7 +64,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -76,16 +76,16 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$VFS.FS.FSNAME.MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`.+` |
-|{$VFS.FS.FSNAME.NOT_MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`^(/dev|/sys|/run|/proc|.+/shm$)` |
-|{$VFS.FS.FSTYPE.MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`^(btrfs|ext2|ext3|ext4|reiser|xfs|ffs|ufs|jfs|jfs2|vxfs|hfs|apfs|refs|ntfs|fat32|zfs)$` |
-|{$VFS.FS.FSTYPE.NOT_MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`^\s$` |
-|{$VFS.FS.INODE.PFREE.MIN.CRIT} |<p>-</p> |`10` |
-|{$VFS.FS.INODE.PFREE.MIN.WARN} |<p>-</p> |`20` |
-|{$VFS.FS.PUSED.MAX.CRIT} |<p>-</p> |`90` |
-|{$VFS.FS.PUSED.MAX.WARN} |<p>-</p> |`80` |
+| Name | Description | Default |
+|--------------------------------|------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------|
+| {$VFS.FS.FSNAME.MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `.+` |
+| {$VFS.FS.FSNAME.NOT_MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `^(/dev|/sys|/run|/proc|.+/shm$)` |
+| {$VFS.FS.FSTYPE.MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `^(btrfs|ext2|ext3|ext4|reiser|xfs|ffs|ufs|jfs|jfs2|vxfs|hfs|apfs|refs|ntfs|fat32|zfs)$` |
+| {$VFS.FS.FSTYPE.NOT_MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `^\s$` |
+| {$VFS.FS.INODE.PFREE.MIN.CRIT} | <p>-</p> | `10` |
+| {$VFS.FS.INODE.PFREE.MIN.WARN} | <p>-</p> | `20` |
+| {$VFS.FS.PUSED.MAX.CRIT} | <p>-</p> | `90` |
+| {$VFS.FS.PUSED.MAX.WARN} | <p>-</p> | `80` |
## Template links
@@ -93,27 +93,27 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Mounted filesystem discovery |<p>Discovery of file systems of different types.</p> |ZABBIX_PASSIVE |vfs.fs.discovery<p>**Filter**:</p>AND <p>- A: {#FSTYPE} MATCHES_REGEX `{$VFS.FS.FSTYPE.MATCHES}`</p><p>- B: {#FSTYPE} NOT_MATCHES_REGEX `{$VFS.FS.FSTYPE.NOT_MATCHES}`</p><p>- C: {#FSNAME} MATCHES_REGEX `{$VFS.FS.FSNAME.MATCHES}`</p><p>- D: {#FSNAME} NOT_MATCHES_REGEX `{$VFS.FS.FSNAME.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|------------------------------|------------------------------------------------------|----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Mounted filesystem discovery | <p>Discovery of file systems of different types.</p> | ZABBIX_PASSIVE | vfs.fs.discovery<p>**Filter**:</p>AND <p>- A: {#FSTYPE} MATCHES_REGEX `{$VFS.FS.FSTYPE.MATCHES}`</p><p>- B: {#FSTYPE} NOT_MATCHES_REGEX `{$VFS.FS.FSTYPE.NOT_MATCHES}`</p><p>- C: {#FSNAME} MATCHES_REGEX `{$VFS.FS.FSNAME.MATCHES}`</p><p>- D: {#FSNAME} NOT_MATCHES_REGEX `{$VFS.FS.FSNAME.NOT_MATCHES}`</p> |
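
The discovery filter is an AND of four regular-expression conditions built from the macros above. A sketch of how the default values classify a few hypothetical mount points (the helper function and the sample list are illustrative only):

```javascript
// Default macro values copied from the "Macros used" table above.
var FSNAME_MATCHES     = /.+/;
var FSNAME_NOT_MATCHES = /^(\/dev|\/sys|\/run|\/proc|.+\/shm$)/;
var FSTYPE_MATCHES     = /^(btrfs|ext2|ext3|ext4|reiser|xfs|ffs|ufs|jfs|jfs2|vxfs|hfs|apfs|refs|ntfs|fat32|zfs)$/;
var FSTYPE_NOT_MATCHES = /^\s$/;

// AND of conditions A-D from the discovery rule filter.
function isDiscovered(fsname, fstype) {
  return FSTYPE_MATCHES.test(fstype) && !FSTYPE_NOT_MATCHES.test(fstype) &&
         FSNAME_MATCHES.test(fsname) && !FSNAME_NOT_MATCHES.test(fsname);
}

console.log(isDiscovered('/', 'ext4'));         // true
console.log(isDiscovered('/home', 'xfs'));      // true
console.log(isDiscovered('/proc', 'proc'));     // false (both name and type are filtered out)
console.log(isDiscovered('/dev/shm', 'tmpfs')); // false
```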
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Filesystems |{#FSNAME}: Used space |<p>Used storage in Bytes</p> |ZABBIX_PASSIVE |vfs.fs.size[{#FSNAME},used] |
-|Filesystems |{#FSNAME}: Total space |<p>Total space in Bytes</p> |ZABBIX_PASSIVE |vfs.fs.size[{#FSNAME},total] |
-|Filesystems |{#FSNAME}: Space utilization |<p>Space utilization in % for {#FSNAME}</p> |ZABBIX_PASSIVE |vfs.fs.size[{#FSNAME},pused] |
-|Filesystems |{#FSNAME}: Free inodes in % |<p>-</p> |ZABBIX_PASSIVE |vfs.fs.inode[{#FSNAME},pfree] |
+| Group | Name | Description | Type | Key and additional info |
+|-------------|------------------------------|---------------------------------------------|----------------|-------------------------------|
+| Filesystems | {#FSNAME}: Used space | <p>Used storage in Bytes</p> | ZABBIX_PASSIVE | vfs.fs.size[{#FSNAME},used] |
+| Filesystems | {#FSNAME}: Total space | <p>Total space in Bytes</p> | ZABBIX_PASSIVE | vfs.fs.size[{#FSNAME},total] |
+| Filesystems | {#FSNAME}: Space utilization | <p>Space utilization in % for {#FSNAME}</p> | ZABBIX_PASSIVE | vfs.fs.size[{#FSNAME},pused] |
+| Filesystems | {#FSNAME}: Free inodes in % | <p>-</p> | ZABBIX_PASSIVE | vfs.fs.inode[{#FSNAME},pfree] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%) |<p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 5G.</p><p> - The disk will be full in less than 24 hours.</p> |`{TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].last()}>{$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"} and (({Linux filesystems by Zabbix agent:vfs.fs.size[{#FSNAME},total].last()}-{Linux filesystems by Zabbix agent:vfs.fs.size[{#FSNAME},used].last()})<5G or {TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].timeleft(1h,,100)}<1d)` |AVERAGE |<p>Manual close: YES</p> |
-|{#FSNAME}: Disk space is low (used > {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}%) |<p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 10G.</p><p> - The disk will be full in less than 24 hours.</p> |`{TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].last()}>{$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"} and (({Linux filesystems by Zabbix agent:vfs.fs.size[{#FSNAME},total].last()}-{Linux filesystems by Zabbix agent:vfs.fs.size[{#FSNAME},used].last()})<10G or {TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].timeleft(1h,,100)}<1d)` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- {#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%)</p> |
-|{#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%) |<p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> |`{TEMPLATE_NAME:vfs.fs.inode[{#FSNAME},pfree].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}` |AVERAGE | |
-|{#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}%) |<p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> |`{TEMPLATE_NAME:vfs.fs.inode[{#FSNAME},pfree].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}` |WARNING |<p>**Depends on**:</p><p>- {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%)</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|--------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------------------------------------------------------------------------------------------------|
+| {#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%) | <p>Two conditions should match: first, space utilization should be above {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}.</p><p>The second condition should be one of the following:</p><p>- The disk free space is less than 5G.</p><p>- The disk will be full in less than 24 hours.</p> | `{TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].last()}>{$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"} and (({Linux filesystems by Zabbix agent:vfs.fs.size[{#FSNAME},total].last()}-{Linux filesystems by Zabbix agent:vfs.fs.size[{#FSNAME},used].last()})<5G or {TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].timeleft(1h,,100)}<1d)` | AVERAGE | <p>Manual close: YES</p> |
+| {#FSNAME}: Disk space is low (used > {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}%) | <p>Two conditions should match: first, space utilization should be above {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}.</p><p>The second condition should be one of the following:</p><p>- The disk free space is less than 10G.</p><p>- The disk will be full in less than 24 hours.</p> | `{TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].last()}>{$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"} and (({Linux filesystems by Zabbix agent:vfs.fs.size[{#FSNAME},total].last()}-{Linux filesystems by Zabbix agent:vfs.fs.size[{#FSNAME},used].last()})<10G or {TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].timeleft(1h,,100)}<1d)` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- {#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%)</p> |
+| {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%) | <p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> | `{TEMPLATE_NAME:vfs.fs.inode[{#FSNAME},pfree].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}` | AVERAGE | |
+| {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}%) | <p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> | `{TEMPLATE_NAME:vfs.fs.inode[{#FSNAME},pfree].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}` | WARNING | <p>**Depends on**:</p><p>- {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%)</p> |
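
Both disk-space triggers above combine a utilization threshold with either an absolute free-space floor or a time-to-full forecast. A simplified sketch of the critical condition; the real trigger works on `last()` and `timeleft()` over item history, and the numbers below are hypothetical:

```javascript
// pused > {$VFS.FS.PUSED.MAX.CRIT} and (free < 5G or time-to-full < 1 day)
var PUSED_MAX_CRIT  = 90;                      // default macro value, in %
var FREE_FLOOR      = 5 * 1024 * 1024 * 1024;  // 5G
var ONE_DAY_SECONDS = 24 * 3600;

function diskSpaceCriticallyLow(pused, totalBytes, usedBytes, secondsUntilFull) {
  var freeBytes = totalBytes - usedBytes;
  return pused > PUSED_MAX_CRIT &&
         (freeBytes < FREE_FLOOR || secondsUntilFull < ONE_DAY_SECONDS);
}

// 96 of 100 GiB used (~4 GiB free), roughly 3 days until full: fires on the free-space floor.
var GiB = 1024 * 1024 * 1024;
console.log(diskSpaceCriticallyLow(96, 100 * GiB, 96 * GiB, 3 * ONE_DAY_SECONDS)); // true
```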
## Feedback
@@ -123,7 +123,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -135,11 +135,11 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$MEMORY.AVAILABLE.MIN} |<p>This macro is used as a threshold in memory available trigger.</p> |`20M` |
-|{$MEMORY.UTIL.MAX} |<p>This macro is used as a threshold in memory utilization trigger.</p> |`90` |
-|{$SWAP.PFREE.MIN.WARN} |<p>-</p> |`50` |
+| Name | Description | Default |
+|-------------------------|-------------------------------------------------------------------------|---------|
+| {$MEMORY.AVAILABLE.MIN} | <p>This macro is used as a threshold in memory available trigger.</p> | `20M` |
+| {$MEMORY.UTIL.MAX} | <p>This macro is used as a threshold in memory utilization trigger.</p> | `90` |
+| {$SWAP.PFREE.MIN.WARN} | <p>-</p> | `50` |
## Template links
@@ -150,23 +150,23 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Memory |Memory utilization |<p>Memory used percentage is calculated as (100-pavailable)</p> |DEPENDENT |vm.memory.utilization<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return (100-value);`</p> |
-|Memory |Available memory in % |<p>Available memory as percentage of total. See also: https://www.zabbix.com/documentation/5.4/manual/appendix/items/vm.memory.size_params</p> |ZABBIX_PASSIVE |vm.memory.size[pavailable] |
-|Memory |Total memory |<p>Total memory in Bytes</p> |ZABBIX_PASSIVE |vm.memory.size[total] |
-|Memory |Available memory |<p>Available memory, in Linux, available = free + buffers + cache. On other platforms calculation may vary. See also: https://www.zabbix.com/documentation/5.4/manual/appendix/items/vm.memory.size_params</p> |ZABBIX_PASSIVE |vm.memory.size[available] |
-|Memory |Total swap space |<p>The total space of swap volume/file in bytes.</p> |ZABBIX_PASSIVE |system.swap.size[,total] |
-|Memory |Free swap space |<p>The free space of swap volume/file in bytes.</p> |ZABBIX_PASSIVE |system.swap.size[,free] |
-|Memory |Free swap space in % |<p>The free space of swap volume/file in percent.</p> |ZABBIX_PASSIVE |system.swap.size[,pfree] |
+| Group | Name | Description | Type | Key and additional info |
+|--------|-----------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|------------------------------------------------------------------------------------------|
+| Memory | Memory utilization | <p>Memory used percentage is calculated as (100-pavailable)</p> | DEPENDENT | vm.memory.utilization<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return (100-value);`</p> |
+| Memory | Available memory in % | <p>Available memory as percentage of total. See also: https://www.zabbix.com/documentation/5.4/manual/appendix/items/vm.memory.size_params</p> | ZABBIX_PASSIVE | vm.memory.size[pavailable] |
+| Memory | Total memory | <p>Total memory in Bytes</p> | ZABBIX_PASSIVE | vm.memory.size[total] |
+| Memory | Available memory      | <p>Available memory. On Linux, available = free + buffers + cache; on other platforms the calculation may vary. See also: https://www.zabbix.com/documentation/5.4/manual/appendix/items/vm.memory.size_params</p> | ZABBIX_PASSIVE | vm.memory.size[available]  |
+| Memory | Total swap space | <p>The total space of swap volume/file in bytes.</p> | ZABBIX_PASSIVE | system.swap.size[,total] |
+| Memory | Free swap space | <p>The free space of swap volume/file in bytes.</p> | ZABBIX_PASSIVE | system.swap.size[,free] |
+| Memory | Free swap space in % | <p>The free space of swap volume/file in percent.</p> | ZABBIX_PASSIVE | system.swap.size[,pfree] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.utilization.min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE |<p>**Depends on**:</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p> |
-|Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2}) |<p>-</p> |`{TEMPLATE_NAME:vm.memory.size[available].min(5m)}<{$MEMORY.AVAILABLE.MIN} and {Linux memory by Zabbix agent:vm.memory.size[total].last()}>0` |AVERAGE | |
-|High swap space usage (less than {$SWAP.PFREE.MIN.WARN}% free) |<p>This trigger is ignored, if there is no swap configured</p> |`{TEMPLATE_NAME:system.swap.size[,pfree].min(5m)}<{$SWAP.PFREE.MIN.WARN} and {Linux memory by Zabbix agent:system.swap.size[,total].last()}>0` |WARNING |<p>**Depends on**:</p><p>- High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m)</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-----------------------------------------------------------------------|----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.utilization.min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | <p>**Depends on**:</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p> |
+| Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2}) | <p>-</p> | `{TEMPLATE_NAME:vm.memory.size[available].min(5m)}<{$MEMORY.AVAILABLE.MIN} and {Linux memory by Zabbix agent:vm.memory.size[total].last()}>0` | AVERAGE | |
+| High swap space usage (less than {$SWAP.PFREE.MIN.WARN}% free)         | <p>This trigger is ignored if there is no swap configured.</p> | `{TEMPLATE_NAME:system.swap.size[,pfree].min(5m)}<{$SWAP.PFREE.MIN.WARN} and {Linux memory by Zabbix agent:system.swap.size[,total].last()}>0`  | WARNING  | <p>**Depends on**:</p><p>- High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m)</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p> |
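
The swap trigger above is guarded by the total swap size, so hosts without swap never alert. A one-function sketch of that guard (names and sample values are illustrative):

```javascript
// pfree < {$SWAP.PFREE.MIN.WARN} and total > 0
function swapSpaceLow(pfreePercent, totalBytes, minWarnPercent) {
  return pfreePercent < minWarnPercent && totalBytes > 0;
}

console.log(swapSpaceLow(35, 0, 50));                      // false: no swap configured
console.log(swapSpaceLow(35, 4 * 1024 * 1024 * 1024, 50)); // true: 4 GiB swap, only 35% free
```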
## Feedback
@@ -176,7 +176,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -188,12 +188,12 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$VFS.DEV.DEVNAME.MATCHES} |<p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> |`.+` |
-|{$VFS.DEV.DEVNAME.NOT_MATCHES} |<p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> |`^(loop[0-9]*|sd[a-z][0-9]+|nbd[0-9]+|sr[0-9]+|fd[0-9]+|dm-[0-9]+|ram[0-9]+|ploop[a-z0-9]+|md[0-9]*|hcp[0-9]*|zram[0-9]*)` |
-|{$VFS.DEV.READ.AWAIT.WARN} |<p>Disk read average response time (in ms) before the trigger would fire</p> |`20` |
-|{$VFS.DEV.WRITE.AWAIT.WARN} |<p>Disk write average response time (in ms) before the trigger would fire</p> |`20` |
+| Name | Description | Default |
+|--------------------------------|--------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------|
+| {$VFS.DEV.DEVNAME.MATCHES} | <p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> | `.+` |
+| {$VFS.DEV.DEVNAME.NOT_MATCHES} | <p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> | `^(loop[0-9]*|sd[a-z][0-9]+|nbd[0-9]+|sr[0-9]+|fd[0-9]+|dm-[0-9]+|ram[0-9]+|ploop[a-z0-9]+|md[0-9]*|hcp[0-9]*|zram[0-9]*)` |
+| {$VFS.DEV.READ.AWAIT.WARN} | <p>Disk read average response time (in ms) before the trigger would fire</p> | `20` |
+| {$VFS.DEV.WRITE.AWAIT.WARN} | <p>Disk write average response time (in ms) before the trigger would fire</p> | `20` |
## Template links
@@ -201,29 +201,29 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Block devices discovery |<p>-</p> |ZABBIX_PASSIVE |vfs.dev.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Filter**:</p>AND <p>- A: {#DEVTYPE} MATCHES_REGEX `disk`</p><p>- B: {#DEVNAME} MATCHES_REGEX `{$VFS.DEV.DEVNAME.MATCHES}`</p><p>- C: {#DEVNAME} NOT_MATCHES_REGEX `{$VFS.DEV.DEVNAME.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|-------------------------|-------------|----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Block devices discovery | <p>-</p> | ZABBIX_PASSIVE | vfs.dev.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Filter**:</p>AND <p>- A: {#DEVTYPE} MATCHES_REGEX `disk`</p><p>- B: {#DEVNAME} MATCHES_REGEX `{$VFS.DEV.DEVNAME.MATCHES}`</p><p>- C: {#DEVNAME} NOT_MATCHES_REGEX `{$VFS.DEV.DEVNAME.NOT_MATCHES}`</p> |
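
The block-device discovery keeps whole disks and drops partitions, loop/ram/zram devices, device-mapper nodes and similar via `{$VFS.DEV.DEVNAME.NOT_MATCHES}` (the `{#DEVTYPE} MATCHES_REGEX disk` condition is not repeated here). A quick check of the default pattern against a few hypothetical device names:

```javascript
// Default {$VFS.DEV.DEVNAME.NOT_MATCHES} from the "Macros used" table above.
var DEVNAME_NOT_MATCHES = /^(loop[0-9]*|sd[a-z][0-9]+|nbd[0-9]+|sr[0-9]+|fd[0-9]+|dm-[0-9]+|ram[0-9]+|ploop[a-z0-9]+|md[0-9]*|hcp[0-9]*|zram[0-9]*)/;

['sda', 'sda1', 'nvme0n1', 'loop0', 'dm-3'].forEach(function (dev) {
  console.log(dev + ': ' + (DEVNAME_NOT_MATCHES.test(dev) ? 'filtered out' : 'discovered'));
});
// sda: discovered, sda1: filtered out, nvme0n1: discovered,
// loop0: filtered out, dm-3: filtered out
```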
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Storage |{#DEVNAME}: Disk read rate |<p>r/s. The number (after merges) of read requests completed per second for the device.</p> |DEPENDENT |vfs.dev.read.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[0]`</p><p>- CHANGE_PER_SECOND |
-|Storage |{#DEVNAME}: Disk write rate |<p>w/s. The number (after merges) of write requests completed per second for the device.</p> |DEPENDENT |vfs.dev.write.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[4]`</p><p>- CHANGE_PER_SECOND |
-|Storage |{#DEVNAME}: Disk read request avg waiting time (r_await) |<p>This formula contains two boolean expressions that evaluates to 1 or 0 in order to set calculated metric to zero and to avoid division by zero exception.</p> |CALCULATED |vfs.dev.read.await[{#DEVNAME}]<p>**Expression**:</p>`(last("vfs.dev.read.time.rate[{#DEVNAME}]")/(last("vfs.dev.read.rate[{#DEVNAME}]")+(last("vfs.dev.read.rate[{#DEVNAME}]")=0)))*1000*(last("vfs.dev.read.rate[{#DEVNAME}]") > 0)` |
-|Storage |{#DEVNAME}: Disk write request avg waiting time (w_await) |<p>This formula contains two boolean expressions that evaluates to 1 or 0 in order to set calculated metric to zero and to avoid division by zero exception.</p> |CALCULATED |vfs.dev.write.await[{#DEVNAME}]<p>**Expression**:</p>`(last("vfs.dev.write.time.rate[{#DEVNAME}]")/(last("vfs.dev.write.rate[{#DEVNAME}]")+(last("vfs.dev.write.rate[{#DEVNAME}]")=0)))*1000*(last("vfs.dev.write.rate[{#DEVNAME}]") > 0)` |
-|Storage |{#DEVNAME}: Disk average queue size (avgqu-sz) |<p>Current average disk queue, the number of requests outstanding on the disk at the time the performance data is collected.</p> |DEPENDENT |vfs.dev.queue_size[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[10]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.001`</p> |
-|Storage |{#DEVNAME}: Disk utilization |<p>This item is the percentage of elapsed time that the selected disk drive was busy servicing read or writes requests.</p> |DEPENDENT |vfs.dev.util[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[9]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.1`</p> |
-|Zabbix_raw_items |{#DEVNAME}: Get stats |<p>Get contents of /sys/block/{#DEVNAME}/stat for disk stats.</p> |ZABBIX_PASSIVE |vfs.file.contents[/sys/block/{#DEVNAME}/stat]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return JSON.stringify(value.trim().split(/ +/));`</p> |
-|Zabbix_raw_items |{#DEVNAME}: Disk read time (rate) |<p>Rate of total read time counter. Used in r_await calculation</p> |DEPENDENT |vfs.dev.read.time.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[3]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.001`</p> |
-|Zabbix_raw_items |{#DEVNAME}: Disk write time (rate) |<p>Rate of total write time counter. Used in w_await calculation</p> |DEPENDENT |vfs.dev.write.time.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[7]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.001`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|-----------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Storage | {#DEVNAME}: Disk read rate | <p>r/s. The number (after merges) of read requests completed per second for the device.</p> | DEPENDENT | vfs.dev.read.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[0]`</p><p>- CHANGE_PER_SECOND |
+| Storage | {#DEVNAME}: Disk write rate | <p>w/s. The number (after merges) of write requests completed per second for the device.</p> | DEPENDENT | vfs.dev.write.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[4]`</p><p>- CHANGE_PER_SECOND |
+| Storage          | {#DEVNAME}: Disk read request avg waiting time (r_await)  | <p>This formula contains two boolean expressions that evaluate to 1 or 0 in order to set the calculated metric to zero and avoid a division-by-zero exception.</p> | CALCULATED     | vfs.dev.read.await[{#DEVNAME}]<p>**Expression**:</p>`(last("vfs.dev.read.time.rate[{#DEVNAME}]")/(last("vfs.dev.read.rate[{#DEVNAME}]")+(last("vfs.dev.read.rate[{#DEVNAME}]")=0)))*1000*(last("vfs.dev.read.rate[{#DEVNAME}]") > 0)` |
+| Storage          | {#DEVNAME}: Disk write request avg waiting time (w_await) | <p>This formula contains two boolean expressions that evaluate to 1 or 0 in order to set the calculated metric to zero and avoid a division-by-zero exception.</p> | CALCULATED     | vfs.dev.write.await[{#DEVNAME}]<p>**Expression**:</p>`(last("vfs.dev.write.time.rate[{#DEVNAME}]")/(last("vfs.dev.write.rate[{#DEVNAME}]")+(last("vfs.dev.write.rate[{#DEVNAME}]")=0)))*1000*(last("vfs.dev.write.rate[{#DEVNAME}]") > 0)` |
+| Storage | {#DEVNAME}: Disk average queue size (avgqu-sz) | <p>Current average disk queue, the number of requests outstanding on the disk at the time the performance data is collected.</p> | DEPENDENT | vfs.dev.queue_size[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[10]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.001`</p> |
+| Storage          | {#DEVNAME}: Disk utilization                               | <p>This item is the percentage of elapsed time that the selected disk drive was busy servicing read or write requests.</p>                                         | DEPENDENT      | vfs.dev.util[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[9]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.1`</p> |
+| Zabbix_raw_items | {#DEVNAME}: Get stats | <p>Get contents of /sys/block/{#DEVNAME}/stat for disk stats.</p> | ZABBIX_PASSIVE | vfs.file.contents[/sys/block/{#DEVNAME}/stat]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return JSON.stringify(value.trim().split(/ +/));`</p> |
+| Zabbix_raw_items | {#DEVNAME}: Disk read time (rate) | <p>Rate of total read time counter. Used in r_await calculation</p> | DEPENDENT | vfs.dev.read.time.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[3]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.001`</p> |
+| Zabbix_raw_items | {#DEVNAME}: Disk write time (rate) | <p>Rate of total write time counter. Used in w_await calculation</p> | DEPENDENT | vfs.dev.write.time.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[7]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.001`</p> |
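
The raw-item chain above works as follows: the agent reads `/sys/block/{#DEVNAME}/stat` as text, the JavaScript preprocessing step turns it into a JSON array, the dependent items pick columns by JSONPATH (`$[0]` reads, `$[3]` read time, `$[4]` writes, `$[7]` write time), and the r_await/w_await calculated items divide a time rate by a request rate with a guard against division by zero. A sketch of the two non-trivial pieces; the sample stat line and the rate figures are hypothetical:

```javascript
// 1) Preprocessing on the "Get stats" master item, as shown in the table.
function splitStat(value) {
  return JSON.stringify(value.trim().split(/ +/));
}

// 2) Mirrors the calculated r_await/w_await expression:
//    (timeRate / (reqRate + (reqRate = 0))) * 1000 * (reqRate > 0),
//    where the boolean terms keep an idle disk at 0 instead of NaN.
function avgAwaitMs(timeRatePerSec, reqRatePerSec) {
  return (timeRatePerSec / (reqRatePerSec + (reqRatePerSec === 0 ? 1 : 0))) *
         1000 * (reqRatePerSec > 0 ? 1 : 0);
}

var stat = '1290000 53000 3800000 980000 240000 61000 5100000 310000 0 570000 1290000';
console.log(splitStat(stat));                 // ["1290000","53000","3800000",...]

console.log(avgAwaitMs(0.4, 120).toFixed(2)); // "3.33" ms per read request
console.log(avgAwaitMs(0, 0));                // 0 -> idle disk, no division by zero
```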
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#DEVNAME}: Disk read/write request responses are too high (read > {$VFS.DEV.READ.AWAIT.WARN:"{#DEVNAME}"} ms for 15m or write > {$VFS.DEV.WRITE.AWAIT.WARN:"{#DEVNAME}"} ms for 15m) |<p>This trigger might indicate disk {#DEVNAME} saturation.</p> |`{TEMPLATE_NAME:vfs.dev.read.await[{#DEVNAME}].min(15m)} > {$VFS.DEV.READ.AWAIT.WARN:"{#DEVNAME}"} or {Linux block devices by Zabbix agent:vfs.dev.write.await[{#DEVNAME}].min(15m)} > {$VFS.DEV.WRITE.AWAIT.WARN:"{#DEVNAME}"}` |WARNING |<p>Manual close: YES</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------|
+| {#DEVNAME}: Disk read/write request responses are too high (read > {$VFS.DEV.READ.AWAIT.WARN:"{#DEVNAME}"} ms for 15m or write > {$VFS.DEV.WRITE.AWAIT.WARN:"{#DEVNAME}"} ms for 15m) | <p>This trigger might indicate disk {#DEVNAME} saturation.</p> | `{TEMPLATE_NAME:vfs.dev.read.await[{#DEVNAME}].min(15m)} > {$VFS.DEV.READ.AWAIT.WARN:"{#DEVNAME}"} or {Linux block devices by Zabbix agent:vfs.dev.write.await[{#DEVNAME}].min(15m)} > {$VFS.DEV.WRITE.AWAIT.WARN:"{#DEVNAME}"}` | WARNING | <p>Manual close: YES</p> |
## Feedback
@@ -233,7 +233,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -245,12 +245,12 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$IF.ERRORS.WARN} |<p>-</p> |`2` |
-|{$IFCONTROL} |<p>-</p> |`1` |
-|{$NET.IF.IFNAME.MATCHES} |<p>-</p> |`^.*$` |
-|{$NET.IF.IFNAME.NOT_MATCHES} |<p>Filter out loopbacks, nulls, docker veth links and docker0 bridge by default</p> |`(^Software Loopback Interface|^NULL[0-9.]*$|^[Ll]o[0-9.]*$|^[Ss]ystem$|^Nu[0-9.]*$|^veth[0-9a-z]+$|docker[0-9]+|br-[a-z0-9]{12})` |
+| Name | Description | Default |
+|------------------------------|-------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------|
+| {$IF.ERRORS.WARN} | <p>-</p> | `2` |
+| {$IFCONTROL} | <p>-</p> | `1` |
+| {$NET.IF.IFNAME.MATCHES} | <p>-</p> | `^.*$` |
+| {$NET.IF.IFNAME.NOT_MATCHES} | <p>Filter out loopbacks, nulls, docker veth links and docker0 bridge by default</p> | `(^Software Loopback Interface|^NULL[0-9.]*$|^[Ll]o[0-9.]*$|^[Ss]ystem$|^Nu[0-9.]*$|^veth[0-9a-z]+$|docker[0-9]+|br-[a-z0-9]{12})` |
## Template links
@@ -258,30 +258,30 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Network interface discovery |<p>Discovery of network interfaces.</p> |ZABBIX_PASSIVE |net.if.discovery<p>**Filter**:</p>AND <p>- A: {#IFNAME} MATCHES_REGEX `{$NET.IF.IFNAME.MATCHES}`</p><p>- B: {#IFNAME} NOT_MATCHES_REGEX `{$NET.IF.IFNAME.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|-----------------------------|-----------------------------------------|----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Network interface discovery | <p>Discovery of network interfaces.</p> | ZABBIX_PASSIVE | net.if.discovery<p>**Filter**:</p>AND <p>- A: {#IFNAME} MATCHES_REGEX `{$NET.IF.IFNAME.MATCHES}`</p><p>- B: {#IFNAME} NOT_MATCHES_REGEX `{$NET.IF.IFNAME.NOT_MATCHES}`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Network_interfaces |Interface {#IFNAME}: Bits received | |ZABBIX_PASSIVE |net.if.in["{#IFNAME}"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
-|Network_interfaces |Interface {#IFNAME}: Bits sent | |ZABBIX_PASSIVE |net.if.out["{#IFNAME}"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
-|Network_interfaces |Interface {#IFNAME}: Outbound packets with errors | |ZABBIX_PASSIVE |net.if.out["{#IFNAME}",errors]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}: Inbound packets with errors | |ZABBIX_PASSIVE |net.if.in["{#IFNAME}",errors]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}: Outbound packets discarded | |ZABBIX_PASSIVE |net.if.out["{#IFNAME}",dropped]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}: Inbound packets discarded | |ZABBIX_PASSIVE |net.if.in["{#IFNAME}",dropped]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}: Operational status |<p>Indicates the interface RFC2863 operational state as a string.</p><p>Possible values are:"unknown", "notpresent", "down", "lowerlayerdown", "testing","dormant", "up".</p><p>Reference: https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-class-net</p> |ZABBIX_PASSIVE |vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Network_interfaces |Interface {#IFNAME}: Interface type |<p>Indicates the interface protocol type as a decimal value.</p><p>See include/uapi/linux/if_arp.h for all possible values.</p><p>Reference: https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-class-net</p> |ZABBIX_PASSIVE |vfs.file.contents["/sys/class/net/{#IFNAME}/type"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|--------------------|---------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|---------------------------------------------------------------------------------------------------------------------------------------------------|
+| Network_interfaces | Interface {#IFNAME}: Bits received | | ZABBIX_PASSIVE | net.if.in["{#IFNAME}"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
+| Network_interfaces | Interface {#IFNAME}: Bits sent | | ZABBIX_PASSIVE | net.if.out["{#IFNAME}"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
+| Network_interfaces | Interface {#IFNAME}: Outbound packets with errors | | ZABBIX_PASSIVE | net.if.out["{#IFNAME}",errors]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}: Inbound packets with errors | | ZABBIX_PASSIVE | net.if.in["{#IFNAME}",errors]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}: Outbound packets discarded | | ZABBIX_PASSIVE | net.if.out["{#IFNAME}",dropped]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}: Inbound packets discarded | | ZABBIX_PASSIVE | net.if.in["{#IFNAME}",dropped]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}: Operational status            | <p>Indicates the interface RFC2863 operational state as a string.</p><p>Possible values are: "unknown", "notpresent", "down", "lowerlayerdown", "testing", "dormant", "up".</p><p>Reference: https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-class-net</p> | ZABBIX_PASSIVE | vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Network_interfaces | Interface {#IFNAME}: Interface type | <p>Indicates the interface protocol type as a decimal value.</p><p>See include/uapi/linux/if_arp.h for all possible values.</p><p>Reference: https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-class-net</p> | ZABBIX_PASSIVE | vfs.file.contents["/sys/class/net/{#IFNAME}/type"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
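
The traffic items above turn raw octet counters into bit rates with two preprocessing steps: `CHANGE_PER_SECOND` and `MULTIPLIER: 8`. A tiny sketch of that conversion with made-up counter samples:

```javascript
// CHANGE_PER_SECOND -> (counter delta) / (time delta); MULTIPLIER 8 -> bytes to bits.
function bitsPerSecond(prevCounter, prevClock, counter, clock) {
  var bytesPerSecond = (counter - prevCounter) / (clock - prevClock);
  return bytesPerSecond * 8;
}

// Hypothetical samples 60 s apart, 150 MB transferred -> 20 Mbps.
console.log(bitsPerSecond(9000000000, 1000, 9150000000, 1060)); // 20000000
```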
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Interface {#IFNAME}: High error rate (> {$IF.ERRORS.WARN:"{#IFNAME}"} for 5m) |<p>Recovers when below 80% of {$IF.ERRORS.WARN:"{#IFNAME}"} threshold</p> |`{TEMPLATE_NAME:net.if.in["{#IFNAME}",errors].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"} or {Linux network interfaces by Zabbix agent:net.if.out["{#IFNAME}",errors].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in["{#IFNAME}",errors].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8 and {Linux network interfaces by Zabbix agent:net.if.out["{#IFNAME}",errors].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}: Link down</p> |
-|Interface {#IFNAME}: Link down |<p>This trigger expression works as follows:</p><p>1. Can be triggered if operations status is down.</p><p>2. {$IFCONTROL:"{#IFNAME}"}=1 - user can redefine Context macro to value - 0. That marks this interface as not important. No new trigger will be fired if this interface is down.</p><p>3. {TEMPLATE_NAME:METRIC.diff()}=1) - trigger fires only if operational status was up(1) sometime before. (So, do not fire 'ethernal off' interfaces.)</p><p>WARNING: if closed manually - won't fire again on next poll, because of .diff.</p> |`{$IFCONTROL:"{#IFNAME}"}=1 and ({TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}=2 and {TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].diff()}=1)`<p>Recovery expression:</p>`{TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}<>2 or {$IFCONTROL:"{#IFNAME}"}=0` |AVERAGE |<p>Manual close: YES</p> |
-|Interface {#IFNAME}: Ethernet has changed to lower speed than it was before |<p>This Ethernet connection has transitioned down from its known maximum speed. This might be a sign of autonegotiation issues. Ack to close.</p> |`{TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].change()}<0 and {TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].last()}>0 and ({Linux network interfaces by Zabbix agent:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].last()}=6 or {Linux network interfaces by Zabbix agent:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].last()}=1) and ({Linux network interfaces by Zabbix agent:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}<>2)`<p>Recovery expression:</p>`({TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].change()}>0 and {TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].prev()}>0) or ({Linux network interfaces by Zabbix agent:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}=2)` |INFO |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}: Link down</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------------------------------------------------------------------------|
+| Interface {#IFNAME}: High error rate (> {$IF.ERRORS.WARN:"{#IFNAME}"} for 5m) | <p>Recovers when below 80% of {$IF.ERRORS.WARN:"{#IFNAME}"} threshold</p> | `{TEMPLATE_NAME:net.if.in["{#IFNAME}",errors].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"} or {Linux network interfaces by Zabbix agent:net.if.out["{#IFNAME}",errors].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in["{#IFNAME}",errors].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8 and {Linux network interfaces by Zabbix agent:net.if.out["{#IFNAME}",errors].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}: Link down</p> |
+| Interface {#IFNAME}: Link down                                                 | <p>This trigger expression works as follows:</p><p>1. It can fire if the operational status is down.</p><p>2. {$IFCONTROL:"{#IFNAME}"}=1 - the user can redefine the context macro to 0, which marks this interface as not important; no new trigger will fire if this interface is down.</p><p>3. {TEMPLATE_NAME:METRIC.diff()}=1 - the trigger fires only if the operational status was up(1) at some point before, so it does not fire for interfaces that have always been down.</p><p>WARNING: if closed manually, it won't fire again on the next poll because of .diff().</p> | `{$IFCONTROL:"{#IFNAME}"}=1 and ({TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}=2 and {TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].diff()}=1)`<p>Recovery expression:</p>`{TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}<>2 or {$IFCONTROL:"{#IFNAME}"}=0` | AVERAGE  | <p>Manual close: YES</p>                                                                |
+| Interface {#IFNAME}: Ethernet has changed to lower speed than it was before | <p>This Ethernet connection has transitioned down from its known maximum speed. This might be a sign of autonegotiation issues. Ack to close.</p> | `{TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].change()}<0 and {TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].last()}>0 and ({Linux network interfaces by Zabbix agent:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].last()}=6 or {Linux network interfaces by Zabbix agent:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].last()}=1) and ({Linux network interfaces by Zabbix agent:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}<>2)`<p>Recovery expression:</p>`({TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].change()}>0 and {TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].prev()}>0) or ({Linux network interfaces by Zabbix agent:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}=2)` | INFO | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}: Link down</p> |
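
The error-rate trigger above uses hysteresis: the problem expression fires on the 5-minute minimum of the error rate, and the separate recovery expression only clears it once the 5-minute maximum drops below 80% of the threshold. A simplified single-direction sketch (the real trigger checks inbound and outbound errors together):

```javascript
// Returns the new firing state given the current one, mirroring a trigger
// with a separate recovery expression.
function errorTriggerFiring(currentlyFiring, minErrors5m, maxErrors5m, warnThreshold) {
  if (!currentlyFiring) {
    return minErrors5m > warnThreshold;        // problem expression
  }
  return !(maxErrors5m < warnThreshold * 0.8); // stays in problem until recovery is true
}

console.log(errorTriggerFiring(false, 3, 5, 2));  // true:  starts firing
console.log(errorTriggerFiring(true, 0, 1.9, 2)); // true:  1.9 is still above 80% of 2
console.log(errorTriggerFiring(true, 0, 1.5, 2)); // false: recovered below 1.6
```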
## Feedback
@@ -295,7 +295,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -307,11 +307,11 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$KERNEL.MAXFILES.MIN} |<p>-</p> |`256` |
-|{$KERNEL.MAXPROC.MIN} |<p>-</p> |`1024` |
-|{$SYSTEM.FUZZYTIME.MAX} |<p>-</p> |`60` |
+| Name | Description | Default |
+|-------------------------|-------------|---------|
+| {$KERNEL.MAXFILES.MIN} | <p>-</p> | `256` |
+| {$KERNEL.MAXPROC.MIN} | <p>-</p> | `1024` |
+| {$SYSTEM.FUZZYTIME.MAX} | <p>-</p> | `60` |
## Template links
@@ -322,35 +322,35 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|General |System boot time |<p>-</p> |ZABBIX_PASSIVE |system.boottime<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|General |System local time |<p>System local time of the host.</p> |ZABBIX_PASSIVE |system.localtime |
-|General |System name |<p>System host name.</p> |ZABBIX_PASSIVE |system.hostname<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|General |System description |<p>The information as normally returned by 'uname -a'.</p> |ZABBIX_PASSIVE |system.uname<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|General |Number of logged in users |<p>Number of users who are currently logged in.</p> |ZABBIX_PASSIVE |system.users.num |
-|General |Maximum number of open file descriptors |<p>It could be increased by using sysctrl utility or modifying file /etc/sysctl.conf.</p> |ZABBIX_PASSIVE |kernel.maxfiles<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|General |Maximum number of processes |<p>It could be increased by using sysctrl utility or modifying file /etc/sysctl.conf.</p> |ZABBIX_PASSIVE |kernel.maxproc<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|General |Number of processes |<p>-</p> |ZABBIX_PASSIVE |proc.num |
-|General |Number of running processes |<p>-</p> |ZABBIX_PASSIVE |proc.num[,,run] |
-|Inventory |Operating system |<p>-</p> |ZABBIX_PASSIVE |system.sw.os<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Operating system architecture |<p>Operating system architecture of the host.</p> |ZABBIX_PASSIVE |system.sw.arch<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Software installed |<p>-</p> |ZABBIX_PASSIVE |system.sw.packages<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Security |Checksum of /etc/passwd |<p>-</p> |ZABBIX_PASSIVE |vfs.file.cksum[/etc/passwd]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Status |System uptime |<p>System uptime in 'N days, hh:mm:ss' format.</p> |ZABBIX_PASSIVE |system.uptime |
+| Group | Name | Description | Type | Key and additional info |
+|-----------|-----------------------------------------|-------------------------------------------------------------------------------------------|----------------|------------------------------------------------------------------------------------------------|
+| General | System boot time | <p>-</p> | ZABBIX_PASSIVE | system.boottime<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| General | System local time | <p>System local time of the host.</p> | ZABBIX_PASSIVE | system.localtime |
+| General | System name | <p>System host name.</p> | ZABBIX_PASSIVE | system.hostname<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| General | System description | <p>The information as normally returned by 'uname -a'.</p> | ZABBIX_PASSIVE | system.uname<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| General | Number of logged in users | <p>Number of users who are currently logged in.</p> | ZABBIX_PASSIVE | system.users.num |
+| General   | Maximum number of open file descriptors | <p>It can be increased by using the sysctl utility or by modifying /etc/sysctl.conf.</p>   | ZABBIX_PASSIVE | kernel.maxfiles<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p>              |
+| General   | Maximum number of processes             | <p>It can be increased by using the sysctl utility or by modifying /etc/sysctl.conf.</p>   | ZABBIX_PASSIVE | kernel.maxproc<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p>               |
+| General | Number of processes | <p>-</p> | ZABBIX_PASSIVE | proc.num |
+| General | Number of running processes | <p>-</p> | ZABBIX_PASSIVE | proc.num[,,run] |
+| Inventory | Operating system | <p>-</p> | ZABBIX_PASSIVE | system.sw.os<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Operating system architecture | <p>Operating system architecture of the host.</p> | ZABBIX_PASSIVE | system.sw.arch<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Software installed | <p>-</p> | ZABBIX_PASSIVE | system.sw.packages<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Security | Checksum of /etc/passwd | <p>-</p> | ZABBIX_PASSIVE | vfs.file.cksum[/etc/passwd]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Status | System uptime | <p>System uptime in 'N days, hh:mm:ss' format.</p> | ZABBIX_PASSIVE | system.uptime |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|System time is out of sync (diff with Zabbix server > {$SYSTEM.FUZZYTIME.MAX}s) |<p>The host system time is different from the Zabbix server time.</p> |`{TEMPLATE_NAME:system.localtime.fuzzytime({$SYSTEM.FUZZYTIME.MAX})}=0` |WARNING |<p>Manual close: YES</p> |
-|System name has changed (new name: {ITEM.VALUE}) |<p>System name has changed. Ack to close.</p> |`{TEMPLATE_NAME:system.hostname.diff()}=1 and {TEMPLATE_NAME:system.hostname.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Configured max number of open filedescriptors is too low (< {$KERNEL.MAXFILES.MIN}) |<p>-</p> |`{TEMPLATE_NAME:kernel.maxfiles.last()}<{$KERNEL.MAXFILES.MIN}` |INFO | |
-|Configured max number of processes is too low (< {$KERNEL.MAXPROC.MIN}) |<p>-</p> |`{TEMPLATE_NAME:kernel.maxproc.last()}<{$KERNEL.MAXPROC.MIN}` |INFO |<p>**Depends on**:</p><p>- Getting closer to process limit (over 80% used)</p> |
-|Getting closer to process limit (over 80% used) |<p>-</p> |`{TEMPLATE_NAME:proc.num.last()}/{Linux generic by Zabbix agent:kernel.maxproc.last()}*100>80` |WARNING | |
-|Operating system description has changed |<p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> |`{TEMPLATE_NAME:system.sw.os.diff()}=1 and {TEMPLATE_NAME:system.sw.os.strlen()}>0` |INFO |<p>Manual close: YES</p><p>**Depends on**:</p><p>- System name has changed (new name: {ITEM.VALUE})</p> |
-|/etc/passwd has been changed |<p>-</p> |`{TEMPLATE_NAME:vfs.file.cksum[/etc/passwd].diff()}>0` |INFO |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Operating system description has changed</p><p>- System name has changed (new name: {ITEM.VALUE})</p> |
-|{HOST.NAME} has been restarted (uptime < 10m) |<p>The host uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:system.uptime.last()}<10m` |WARNING |<p>Manual close: YES</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------|
+| System time is out of sync (diff with Zabbix server > {$SYSTEM.FUZZYTIME.MAX}s) | <p>The host system time is different from the Zabbix server time.</p> | `{TEMPLATE_NAME:system.localtime.fuzzytime({$SYSTEM.FUZZYTIME.MAX})}=0` | WARNING | <p>Manual close: YES</p> |
+| System name has changed (new name: {ITEM.VALUE}) | <p>System name has changed. Ack to close.</p> | `{TEMPLATE_NAME:system.hostname.diff()}=1 and {TEMPLATE_NAME:system.hostname.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Configured max number of open filedescriptors is too low (< {$KERNEL.MAXFILES.MIN}) | <p>-</p> | `{TEMPLATE_NAME:kernel.maxfiles.last()}<{$KERNEL.MAXFILES.MIN}` | INFO | |
+| Configured max number of processes is too low (< {$KERNEL.MAXPROC.MIN}) | <p>-</p> | `{TEMPLATE_NAME:kernel.maxproc.last()}<{$KERNEL.MAXPROC.MIN}` | INFO | <p>**Depends on**:</p><p>- Getting closer to process limit (over 80% used)</p> |
+| Getting closer to process limit (over 80% used) | <p>-</p> | `{TEMPLATE_NAME:proc.num.last()}/{Linux generic by Zabbix agent:kernel.maxproc.last()}*100>80` | WARNING | |
+| Operating system description has changed | <p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> | `{TEMPLATE_NAME:system.sw.os.diff()}=1 and {TEMPLATE_NAME:system.sw.os.strlen()}>0` | INFO | <p>Manual close: YES</p><p>**Depends on**:</p><p>- System name has changed (new name: {ITEM.VALUE})</p> |
+| /etc/passwd has been changed | <p>-</p> | `{TEMPLATE_NAME:vfs.file.cksum[/etc/passwd].diff()}>0` | INFO | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Operating system description has changed</p><p>- System name has changed (new name: {ITEM.VALUE})</p> |
+| {HOST.NAME} has been restarted (uptime < 10m) | <p>The host uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:system.uptime.last()}<10m` | WARNING | <p>Manual close: YES</p> |
## Feedback
@@ -360,7 +360,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
New official Linux template. Requires agent of Zabbix 3.0.14, 3.4.5 and 4.0.0 or newer.
## Setup
@@ -374,15 +374,15 @@ No specific Zabbix configuration is required.
## Template links
-|Name|
-|----|
-|Linux CPU by Zabbix agent |
-|Linux block devices by Zabbix agent |
-|Linux filesystems by Zabbix agent |
-|Linux generic by Zabbix agent |
-|Linux memory by Zabbix agent |
-|Linux network interfaces by Zabbix agent |
-|Zabbix agent |
+| Name |
+|------------------------------------------|
+| Linux CPU by Zabbix agent |
+| Linux block devices by Zabbix agent |
+| Linux filesystems by Zabbix agent |
+| Linux generic by Zabbix agent |
+| Linux memory by Zabbix agent |
+| Linux network interfaces by Zabbix agent |
+| Zabbix agent |
## Discovery rules
diff --git a/templates/os/linux/template_os_linux.yaml b/templates/os/linux/template_os_linux.yaml
index 8c15a50a421..d4e86e420be 100644
--- a/templates/os/linux/template_os_linux.yaml
+++ b/templates/os/linux/template_os_linux.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-02T19:42:24Z'
+ date: '2021-04-22T11:28:49Z'
groups:
-
name: Templates/Modules
@@ -345,151 +345,153 @@ zabbix_export:
dashboards:
-
name: 'System performance'
- widgets:
- -
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'System load'
- host: 'Linux by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'CPU usage'
- host: 'Linux by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Memory usage'
- host: 'Linux by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Swap usage'
- host: 'Linux by Zabbix agent'
- -
- type: GRAPH_PROTOTYPE
- 'y': '10'
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
- -
- type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#FSNAME}: Disk space usage'
- host: 'Linux by Zabbix agent'
+ pages:
-
- type: GRAPH_PROTOTYPE
- 'y': '22'
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
+ widgets:
-
- type: INTEGER
- name: rows
- value: '3'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'System load'
+ host: 'Linux by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU usage'
+ host: 'Linux by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Memory usage'
+ host: 'Linux by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Swap usage'
+ host: 'Linux by Zabbix agent'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#DEVNAME}: Disk read/write rates'
- host: 'Linux by Zabbix agent'
- -
- type: GRAPH_PROTOTYPE
- 'y': '34'
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ 'y': '10'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#FSNAME}: Disk space usage'
+ host: 'Linux by Zabbix agent'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#DEVNAME}: Disk average waiting time'
- host: 'Linux by Zabbix agent'
- -
- type: GRAPH_PROTOTYPE
- 'y': '46'
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ 'y': '22'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#DEVNAME}: Disk read/write rates'
+ host: 'Linux by Zabbix agent'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#DEVNAME}: Disk utilization and queue'
- host: 'Linux by Zabbix agent'
- -
- type: GRAPH_PROTOTYPE
- 'y': '58'
- width: '24'
- height: '5'
- fields:
+ 'y': '34'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#DEVNAME}: Disk average waiting time'
+ host: 'Linux by Zabbix agent'
-
- type: INTEGER
- name: columns
- value: '1'
+ type: GRAPH_PROTOTYPE
+ 'y': '46'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#DEVNAME}: Disk utilization and queue'
+ host: 'Linux by Zabbix agent'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Interface {#IFNAME}: Network traffic'
- host: 'Linux by Zabbix agent'
+ 'y': '58'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Interface {#IFNAME}: Network traffic'
+ host: 'Linux by Zabbix agent'
-
template: 'Linux CPU by Zabbix agent'
name: 'Linux CPU by Zabbix agent'
@@ -1592,26 +1594,28 @@ zabbix_export:
dashboards:
-
name: 'Network interfaces'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Interface {#IFNAME}: Network traffic'
- host: 'Linux network interfaces by Zabbix agent'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Interface {#IFNAME}: Network traffic'
+ host: 'Linux network interfaces by Zabbix agent'
valuemaps:
-
name: 'IF-MIB::ifOperStatus'
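The YAML hunks above are the substance of this commit: widget definitions that previously sat directly under a dashboard are now nested inside a `pages` list, the multi-page dashboard structure supported by the 5.4 export format. Reduced to a single widget, the new layout looks roughly like this (a sketch, not a verbatim excerpt):

```yaml
# Sketch of the new multi-page dashboard layout used above; the real
# template defines several widgets per page, omitted here for brevity.
dashboards:
  -
    name: 'System performance'
    pages:
      -
        widgets:
          -
            type: GRAPH_CLASSIC
            width: '12'
            height: '5'
            fields:
              -
                type: GRAPH
                name: graphid
                value:
                  name: 'System load'
                  host: 'Linux by Zabbix agent'
```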
diff --git a/templates/os/linux_active/README.md b/templates/os/linux_active/README.md
index b0e16198b82..788b7c9e632 100644
--- a/templates/os/linux_active/README.md
+++ b/templates/os/linux_active/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,10 +15,10 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$LOAD_AVG_PER_CPU.MAX.WARN} |<p>Load per CPU considered sustainable. Tune if needed.</p> |`1.5` |
+| Name | Description | Default |
+|------------------------------|-------------------------------------------------------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$LOAD_AVG_PER_CPU.MAX.WARN} | <p>Load per CPU considered sustainable. Tune if needed.</p> | `1.5` |
## Template links
@@ -29,32 +29,32 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |Number of CPUs |<p>-</p> |ZABBIX_ACTIVE |system.cpu.num<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|CPU |Load average (1m avg) |<p>-</p> |ZABBIX_ACTIVE |system.cpu.load[all,avg1] |
-|CPU |Load average (5m avg) |<p>-</p> |ZABBIX_ACTIVE |system.cpu.load[all,avg5] |
-|CPU |Load average (15m avg) |<p>-</p> |ZABBIX_ACTIVE |system.cpu.load[all,avg15] |
-|CPU |CPU utilization |<p>CPU utilization in %</p> |DEPENDENT |system.cpu.util<p>**Preprocessing**:</p><p>- JAVASCRIPT: `//Calculate utilization return (100 - value)`</p> |
-|CPU |CPU idle time |<p>The time the CPU has spent doing nothing.</p> |ZABBIX_ACTIVE |system.cpu.util[,idle] |
-|CPU |CPU system time |<p>The time the CPU has spent running the kernel and its processes.</p> |ZABBIX_ACTIVE |system.cpu.util[,system] |
-|CPU |CPU user time |<p>The time the CPU has spent running users' processes that are not niced.</p> |ZABBIX_ACTIVE |system.cpu.util[,user] |
-|CPU |CPU nice time |<p>The time the CPU has spent running users' processes that have been niced.</p> |ZABBIX_ACTIVE |system.cpu.util[,nice] |
-|CPU |CPU iowait time |<p>Amount of time the CPU has been waiting for I/O to complete.</p> |ZABBIX_ACTIVE |system.cpu.util[,iowait] |
-|CPU |CPU steal time |<p>The amount of CPU 'stolen' from this virtual machine by the hypervisor for other tasks (such as running another virtual machine).</p> |ZABBIX_ACTIVE |system.cpu.util[,steal] |
-|CPU |CPU interrupt time |<p>The amount of time the CPU has been servicing hardware interrupts.</p> |ZABBIX_ACTIVE |system.cpu.util[,interrupt] |
-|CPU |CPU softirq time |<p>The amount of time the CPU has been servicing software interrupts.</p> |ZABBIX_ACTIVE |system.cpu.util[,softirq] |
-|CPU |CPU guest time |<p>Guest time (time spent running a virtual CPU for a guest operating system)</p> |ZABBIX_ACTIVE |system.cpu.util[,guest] |
-|CPU |CPU guest nice time |<p>Time spent running a niced guest (virtual CPU for guest operating systems under the control of the Linux kernel)</p> |ZABBIX_ACTIVE |system.cpu.util[,guest_nice] |
-|CPU |Context switches per second |<p>-</p> |ZABBIX_ACTIVE |system.cpu.switches<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|CPU |Interrupts per second |<p>-</p> |ZABBIX_ACTIVE |system.cpu.intr<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Group | Name | Description | Type | Key and additional info |
+|-------|-----------------------------|------------------------------------------------------------------------------------------------------------------------------------------|---------------|-------------------------------------------------------------------------------------------------------------|
+| CPU | Number of CPUs | <p>-</p> | ZABBIX_ACTIVE | system.cpu.num<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| CPU | Load average (1m avg) | <p>-</p> | ZABBIX_ACTIVE | system.cpu.load[all,avg1] |
+| CPU | Load average (5m avg) | <p>-</p> | ZABBIX_ACTIVE | system.cpu.load[all,avg5] |
+| CPU | Load average (15m avg) | <p>-</p> | ZABBIX_ACTIVE | system.cpu.load[all,avg15] |
+| CPU | CPU utilization | <p>CPU utilization in %</p> | DEPENDENT | system.cpu.util<p>**Preprocessing**:</p><p>- JAVASCRIPT: `//Calculate utilization return (100 - value)`</p> |
+| CPU | CPU idle time | <p>The time the CPU has spent doing nothing.</p> | ZABBIX_ACTIVE | system.cpu.util[,idle] |
+| CPU | CPU system time | <p>The time the CPU has spent running the kernel and its processes.</p> | ZABBIX_ACTIVE | system.cpu.util[,system] |
+| CPU | CPU user time | <p>The time the CPU has spent running users' processes that are not niced.</p> | ZABBIX_ACTIVE | system.cpu.util[,user] |
+| CPU | CPU nice time | <p>The time the CPU has spent running users' processes that have been niced.</p> | ZABBIX_ACTIVE | system.cpu.util[,nice] |
+| CPU | CPU iowait time | <p>Amount of time the CPU has been waiting for I/O to complete.</p> | ZABBIX_ACTIVE | system.cpu.util[,iowait] |
+| CPU | CPU steal time | <p>The amount of CPU 'stolen' from this virtual machine by the hypervisor for other tasks (such as running another virtual machine).</p> | ZABBIX_ACTIVE | system.cpu.util[,steal] |
+| CPU | CPU interrupt time | <p>The amount of time the CPU has been servicing hardware interrupts.</p> | ZABBIX_ACTIVE | system.cpu.util[,interrupt] |
+| CPU | CPU softirq time | <p>The amount of time the CPU has been servicing software interrupts.</p> | ZABBIX_ACTIVE | system.cpu.util[,softirq] |
+| CPU | CPU guest time | <p>Guest time (time spent running a virtual CPU for a guest operating system)</p> | ZABBIX_ACTIVE | system.cpu.util[,guest] |
+| CPU | CPU guest nice time | <p>Time spent running a niced guest (virtual CPU for guest operating systems under the control of the Linux kernel)</p> | ZABBIX_ACTIVE | system.cpu.util[,guest_nice] |
+| CPU | Context switches per second | <p>-</p> | ZABBIX_ACTIVE | system.cpu.switches<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| CPU | Interrupts per second | <p>-</p> | ZABBIX_ACTIVE | system.cpu.intr<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
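The 'CPU utilization' row above is the only DEPENDENT item in this table: it takes an idle-time reading and inverts it with a one-line JavaScript preprocessing step. A minimal sketch of that wiring, assuming `system.cpu.util[,idle]` as the master item (an assumption; the README does not show the master item key):

```yaml
# Sketch of a dependent item computing utilization as (100 - idle);
# the master item key is an assumption, only the JS step is shown above.
-
  name: 'CPU utilization'
  type: DEPENDENT
  key: system.cpu.util
  delay: '0'
  value_type: FLOAT
  units: '%'
  preprocessing:
    -
      type: JAVASCRIPT
      parameters:
        - |
          //Calculate utilization
          return (100 - value)
  master_item:
    key: 'system.cpu.util[,idle]'
```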
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Load average is too high (per CPU load over {$LOAD_AVG_PER_CPU.MAX.WARN} for 5m) |<p>Per CPU load average is too high. Your system may be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.load[all,avg1].min(5m)}/{Linux CPU by Zabbix agent active:system.cpu.num.last()}>{$LOAD_AVG_PER_CPU.MAX.WARN} and {Linux CPU by Zabbix agent active:system.cpu.load[all,avg5].last()}>0 and {Linux CPU by Zabbix agent active:system.cpu.load[all,avg15].last()}>0` |AVERAGE | |
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util.min(5m)}>{$CPU.UTIL.CRIT}` |WARNING |<p>**Depends on**:</p><p>- Load average is too high (per CPU load over {$LOAD_AVG_PER_CPU.MAX.WARN} for 5m)</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|----------------------------------------------------------------------------------|------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------------------------------------------------------------------|
+| Load average is too high (per CPU load over {$LOAD_AVG_PER_CPU.MAX.WARN} for 5m) | <p>Per CPU load average is too high. Your system may be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.load[all,avg1].min(5m)}/{Linux CPU by Zabbix agent active:system.cpu.num.last()}>{$LOAD_AVG_PER_CPU.MAX.WARN} and {Linux CPU by Zabbix agent active:system.cpu.load[all,avg5].last()}>0 and {Linux CPU by Zabbix agent active:system.cpu.load[all,avg15].last()}>0` | AVERAGE | |
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util.min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | <p>**Depends on**:</p><p>- Load average is too high (per CPU load over {$LOAD_AVG_PER_CPU.MAX.WARN} for 5m)</p> |
## Feedback
@@ -64,7 +64,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -76,16 +76,16 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$VFS.FS.FSNAME.MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`.+` |
-|{$VFS.FS.FSNAME.NOT_MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`^(/dev|/sys|/run|/proc|.+/shm$)` |
-|{$VFS.FS.FSTYPE.MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`^(btrfs|ext2|ext3|ext4|reiser|xfs|ffs|ufs|jfs|jfs2|vxfs|hfs|apfs|refs|ntfs|fat32|zfs)$` |
-|{$VFS.FS.FSTYPE.NOT_MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`^\s$` |
-|{$VFS.FS.INODE.PFREE.MIN.CRIT} |<p>-</p> |`10` |
-|{$VFS.FS.INODE.PFREE.MIN.WARN} |<p>-</p> |`20` |
-|{$VFS.FS.PUSED.MAX.CRIT} |<p>-</p> |`90` |
-|{$VFS.FS.PUSED.MAX.WARN} |<p>-</p> |`80` |
+| Name | Description | Default |
+|--------------------------------|------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------|
+| {$VFS.FS.FSNAME.MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `.+` |
+| {$VFS.FS.FSNAME.NOT_MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `^(/dev|/sys|/run|/proc|.+/shm$)` |
+| {$VFS.FS.FSTYPE.MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `^(btrfs|ext2|ext3|ext4|reiser|xfs|ffs|ufs|jfs|jfs2|vxfs|hfs|apfs|refs|ntfs|fat32|zfs)$` |
+| {$VFS.FS.FSTYPE.NOT_MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `^\s$` |
+| {$VFS.FS.INODE.PFREE.MIN.CRIT} | <p>-</p> | `10` |
+| {$VFS.FS.INODE.PFREE.MIN.WARN} | <p>-</p> | `20` |
+| {$VFS.FS.PUSED.MAX.CRIT} | <p>-</p> | `90` |
+| {$VFS.FS.PUSED.MAX.WARN} | <p>-</p> | `80` |
## Template links
@@ -93,27 +93,27 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Mounted filesystem discovery |<p>Discovery of file systems of different types.</p> |ZABBIX_ACTIVE |vfs.fs.discovery<p>**Filter**:</p>AND <p>- A: {#FSTYPE} MATCHES_REGEX `{$VFS.FS.FSTYPE.MATCHES}`</p><p>- B: {#FSTYPE} NOT_MATCHES_REGEX `{$VFS.FS.FSTYPE.NOT_MATCHES}`</p><p>- C: {#FSNAME} MATCHES_REGEX `{$VFS.FS.FSNAME.MATCHES}`</p><p>- D: {#FSNAME} NOT_MATCHES_REGEX `{$VFS.FS.FSNAME.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|------------------------------|------------------------------------------------------|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Mounted filesystem discovery | <p>Discovery of file systems of different types.</p> | ZABBIX_ACTIVE | vfs.fs.discovery<p>**Filter**:</p>AND <p>- A: {#FSTYPE} MATCHES_REGEX `{$VFS.FS.FSTYPE.MATCHES}`</p><p>- B: {#FSTYPE} NOT_MATCHES_REGEX `{$VFS.FS.FSTYPE.NOT_MATCHES}`</p><p>- C: {#FSNAME} MATCHES_REGEX `{$VFS.FS.FSNAME.MATCHES}`</p><p>- D: {#FSNAME} NOT_MATCHES_REGEX `{$VFS.FS.FSNAME.NOT_MATCHES}`</p> |
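The filter column above is where the `{$VFS.FS.*}` macros from the Macros section take effect: four ANDed conditions match the `{#FSTYPE}` and `{#FSNAME}` LLD macros against the user-macro regexes. A hedged sketch of how such a filter is written in the template YAML (structure follows the export format; only the two FSTYPE conditions are shown):

```yaml
# Illustrative LLD filter; values are the user macros documented above,
# the exact export layout may differ slightly.
filter:
  evaltype: AND
  conditions:
    -
      macro: '{#FSTYPE}'
      value: '{$VFS.FS.FSTYPE.MATCHES}'
      formulaid: A
    -
      macro: '{#FSTYPE}'
      operator: NOT_MATCHES_REGEX
      value: '{$VFS.FS.FSTYPE.NOT_MATCHES}'
      formulaid: B
```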
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Filesystems |{#FSNAME}: Used space |<p>Used storage in Bytes</p> |ZABBIX_ACTIVE |vfs.fs.size[{#FSNAME},used] |
-|Filesystems |{#FSNAME}: Total space |<p>Total space in Bytes</p> |ZABBIX_ACTIVE |vfs.fs.size[{#FSNAME},total] |
-|Filesystems |{#FSNAME}: Space utilization |<p>Space utilization in % for {#FSNAME}</p> |ZABBIX_ACTIVE |vfs.fs.size[{#FSNAME},pused] |
-|Filesystems |{#FSNAME}: Free inodes in % |<p>-</p> |ZABBIX_ACTIVE |vfs.fs.inode[{#FSNAME},pfree] |
+| Group | Name | Description | Type | Key and additional info |
+|-------------|------------------------------|---------------------------------------------|---------------|-------------------------------|
+| Filesystems | {#FSNAME}: Used space | <p>Used storage in Bytes</p> | ZABBIX_ACTIVE | vfs.fs.size[{#FSNAME},used] |
+| Filesystems | {#FSNAME}: Total space | <p>Total space in Bytes</p> | ZABBIX_ACTIVE | vfs.fs.size[{#FSNAME},total] |
+| Filesystems | {#FSNAME}: Space utilization | <p>Space utilization in % for {#FSNAME}</p> | ZABBIX_ACTIVE | vfs.fs.size[{#FSNAME},pused] |
+| Filesystems | {#FSNAME}: Free inodes in % | <p>-</p> | ZABBIX_ACTIVE | vfs.fs.inode[{#FSNAME},pfree] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%) |<p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 5G.</p><p> - The disk will be full in less than 24 hours.</p> |`{TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].last()}>{$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"} and (({Linux filesystems by Zabbix agent active:vfs.fs.size[{#FSNAME},total].last()}-{Linux filesystems by Zabbix agent active:vfs.fs.size[{#FSNAME},used].last()})<5G or {TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].timeleft(1h,,100)}<1d)` |AVERAGE |<p>Manual close: YES</p> |
-|{#FSNAME}: Disk space is low (used > {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}%) |<p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 10G.</p><p> - The disk will be full in less than 24 hours.</p> |`{TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].last()}>{$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"} and (({Linux filesystems by Zabbix agent active:vfs.fs.size[{#FSNAME},total].last()}-{Linux filesystems by Zabbix agent active:vfs.fs.size[{#FSNAME},used].last()})<10G or {TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].timeleft(1h,,100)}<1d)` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- {#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%)</p> |
-|{#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%) |<p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> |`{TEMPLATE_NAME:vfs.fs.inode[{#FSNAME},pfree].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}` |AVERAGE | |
-|{#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}%) |<p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> |`{TEMPLATE_NAME:vfs.fs.inode[{#FSNAME},pfree].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}` |WARNING |<p>**Depends on**:</p><p>- {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%)</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|--------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------------------------------------------------------------------------------------------------|
+| {#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%) | <p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 5G.</p><p> - The disk will be full in less than 24 hours.</p> | `{TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].last()}>{$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"} and (({Linux filesystems by Zabbix agent active:vfs.fs.size[{#FSNAME},total].last()}-{Linux filesystems by Zabbix agent active:vfs.fs.size[{#FSNAME},used].last()})<5G or {TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].timeleft(1h,,100)}<1d)` | AVERAGE | <p>Manual close: YES</p> |
+| {#FSNAME}: Disk space is low (used > {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}%) | <p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 10G.</p><p> - The disk will be full in less than 24 hours.</p> | `{TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].last()}>{$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"} and (({Linux filesystems by Zabbix agent active:vfs.fs.size[{#FSNAME},total].last()}-{Linux filesystems by Zabbix agent active:vfs.fs.size[{#FSNAME},used].last()})<10G or {TEMPLATE_NAME:vfs.fs.size[{#FSNAME},pused].timeleft(1h,,100)}<1d)` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- {#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%)</p> |
+| {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%) | <p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> | `{TEMPLATE_NAME:vfs.fs.inode[{#FSNAME},pfree].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}` | AVERAGE | |
+| {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}%) | <p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> | `{TEMPLATE_NAME:vfs.fs.inode[{#FSNAME},pfree].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}` | WARNING | <p>**Depends on**:</p><p>- {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%)</p> |
## Feedback
@@ -123,7 +123,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -135,11 +135,11 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$MEMORY.AVAILABLE.MIN} |<p>This macro is used as a threshold in memory available trigger.</p> |`20M` |
-|{$MEMORY.UTIL.MAX} |<p>This macro is used as a threshold in memory utilization trigger.</p> |`90` |
-|{$SWAP.PFREE.MIN.WARN} |<p>-</p> |`50` |
+| Name | Description | Default |
+|-------------------------|-------------------------------------------------------------------------|---------|
+| {$MEMORY.AVAILABLE.MIN} | <p>This macro is used as a threshold in memory available trigger.</p> | `20M` |
+| {$MEMORY.UTIL.MAX} | <p>This macro is used as a threshold in memory utilization trigger.</p> | `90` |
+| {$SWAP.PFREE.MIN.WARN} | <p>-</p> | `50` |
## Template links
@@ -150,23 +150,23 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Memory |Memory utilization |<p>Memory used percentage is calculated as (100-pavailable)</p> |DEPENDENT |vm.memory.utilization<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return (100-value);`</p> |
-|Memory |Available memory in % |<p>Available memory as percentage of total. See also: https://www.zabbix.com/documentation/5.4/manual/appendix/items/vm.memory.size_params</p> |ZABBIX_ACTIVE |vm.memory.size[pavailable] |
-|Memory |Total memory |<p>Total memory in Bytes</p> |ZABBIX_ACTIVE |vm.memory.size[total] |
-|Memory |Available memory |<p>Available memory, in Linux, available = free + buffers + cache. On other platforms calculation may vary. See also: https://www.zabbix.com/documentation/5.4/manual/appendix/items/vm.memory.size_params</p> |ZABBIX_ACTIVE |vm.memory.size[available] |
-|Memory |Total swap space |<p>The total space of swap volume/file in bytes.</p> |ZABBIX_ACTIVE |system.swap.size[,total] |
-|Memory |Free swap space |<p>The free space of swap volume/file in bytes.</p> |ZABBIX_ACTIVE |system.swap.size[,free] |
-|Memory |Free swap space in % |<p>The free space of swap volume/file in percent.</p> |ZABBIX_ACTIVE |system.swap.size[,pfree] |
+| Group | Name | Description | Type | Key and additional info |
+|--------|-----------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|------------------------------------------------------------------------------------------|
+| Memory | Memory utilization | <p>Memory used percentage is calculated as (100-pavailable)</p> | DEPENDENT | vm.memory.utilization<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return (100-value);`</p> |
+| Memory | Available memory in % | <p>Available memory as percentage of total. See also: https://www.zabbix.com/documentation/5.4/manual/appendix/items/vm.memory.size_params</p> | ZABBIX_ACTIVE | vm.memory.size[pavailable] |
+| Memory | Total memory | <p>Total memory in Bytes</p> | ZABBIX_ACTIVE | vm.memory.size[total] |
+| Memory | Available memory | <p>Available memory, in Linux, available = free + buffers + cache. On other platforms calculation may vary. See also: https://www.zabbix.com/documentation/5.4/manual/appendix/items/vm.memory.size_params</p> | ZABBIX_ACTIVE | vm.memory.size[available] |
+| Memory | Total swap space | <p>The total space of swap volume/file in bytes.</p> | ZABBIX_ACTIVE | system.swap.size[,total] |
+| Memory | Free swap space | <p>The free space of swap volume/file in bytes.</p> | ZABBIX_ACTIVE | system.swap.size[,free] |
+| Memory | Free swap space in % | <p>The free space of swap volume/file in percent.</p> | ZABBIX_ACTIVE | system.swap.size[,pfree] |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.utilization.min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE |<p>**Depends on**:</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p> |
-|Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2}) |<p>-</p> |`{TEMPLATE_NAME:vm.memory.size[available].min(5m)}<{$MEMORY.AVAILABLE.MIN} and {Linux memory by Zabbix agent active:vm.memory.size[total].last()}>0` |AVERAGE | |
-|High swap space usage (less than {$SWAP.PFREE.MIN.WARN}% free) |<p>This trigger is ignored, if there is no swap configured</p> |`{TEMPLATE_NAME:system.swap.size[,pfree].min(5m)}<{$SWAP.PFREE.MIN.WARN} and {Linux memory by Zabbix agent active:system.swap.size[,total].last()}>0` |WARNING |<p>**Depends on**:</p><p>- High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m)</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-----------------------------------------------------------------------|----------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.utilization.min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | <p>**Depends on**:</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p> |
+| Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2}) | <p>-</p> | `{TEMPLATE_NAME:vm.memory.size[available].min(5m)}<{$MEMORY.AVAILABLE.MIN} and {Linux memory by Zabbix agent active:vm.memory.size[total].last()}>0` | AVERAGE | |
+| High swap space usage (less than {$SWAP.PFREE.MIN.WARN}% free) | <p>This trigger is ignored, if there is no swap configured</p> | `{TEMPLATE_NAME:system.swap.size[,pfree].min(5m)}<{$SWAP.PFREE.MIN.WARN} and {Linux memory by Zabbix agent active:system.swap.size[,total].last()}>0` | WARNING | <p>**Depends on**:</p><p>- High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m)</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p> |
## Feedback
@@ -176,7 +176,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -188,12 +188,12 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$VFS.DEV.DEVNAME.MATCHES} |<p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> |`.+` |
-|{$VFS.DEV.DEVNAME.NOT_MATCHES} |<p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> |`^(loop[0-9]*|sd[a-z][0-9]+|nbd[0-9]+|sr[0-9]+|fd[0-9]+|dm-[0-9]+|ram[0-9]+|ploop[a-z0-9]+|md[0-9]*|hcp[0-9]*|zram[0-9]*)` |
-|{$VFS.DEV.READ.AWAIT.WARN} |<p>Disk read average response time (in ms) before the trigger would fire</p> |`20` |
-|{$VFS.DEV.WRITE.AWAIT.WARN} |<p>Disk write average response time (in ms) before the trigger would fire</p> |`20` |
+| Name | Description | Default |
+|--------------------------------|--------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------|
+| {$VFS.DEV.DEVNAME.MATCHES} | <p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> | `.+` |
+| {$VFS.DEV.DEVNAME.NOT_MATCHES} | <p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> | `^(loop[0-9]*|sd[a-z][0-9]+|nbd[0-9]+|sr[0-9]+|fd[0-9]+|dm-[0-9]+|ram[0-9]+|ploop[a-z0-9]+|md[0-9]*|hcp[0-9]*|zram[0-9]*)` |
+| {$VFS.DEV.READ.AWAIT.WARN} | <p>Disk read average response time (in ms) before the trigger would fire</p> | `20` |
+| {$VFS.DEV.WRITE.AWAIT.WARN} | <p>Disk write average response time (in ms) before the trigger would fire</p> | `20` |
## Template links
@@ -201,29 +201,29 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Block devices discovery |<p>-</p> |ZABBIX_ACTIVE |vfs.dev.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Filter**:</p>AND <p>- A: {#DEVTYPE} MATCHES_REGEX `disk`</p><p>- B: {#DEVNAME} MATCHES_REGEX `{$VFS.DEV.DEVNAME.MATCHES}`</p><p>- C: {#DEVNAME} NOT_MATCHES_REGEX `{$VFS.DEV.DEVNAME.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|-------------------------|-------------|---------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Block devices discovery | <p>-</p> | ZABBIX_ACTIVE | vfs.dev.discovery<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>**Filter**:</p>AND <p>- A: {#DEVTYPE} MATCHES_REGEX `disk`</p><p>- B: {#DEVNAME} MATCHES_REGEX `{$VFS.DEV.DEVNAME.MATCHES}`</p><p>- C: {#DEVNAME} NOT_MATCHES_REGEX `{$VFS.DEV.DEVNAME.NOT_MATCHES}`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Storage |{#DEVNAME}: Disk read rate |<p>r/s. The number (after merges) of read requests completed per second for the device.</p> |DEPENDENT |vfs.dev.read.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[0]`</p><p>- CHANGE_PER_SECOND |
-|Storage |{#DEVNAME}: Disk write rate |<p>w/s. The number (after merges) of write requests completed per second for the device.</p> |DEPENDENT |vfs.dev.write.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[4]`</p><p>- CHANGE_PER_SECOND |
-|Storage |{#DEVNAME}: Disk read request avg waiting time (r_await) |<p>This formula contains two boolean expressions that evaluates to 1 or 0 in order to set calculated metric to zero and to avoid division by zero exception.</p> |CALCULATED |vfs.dev.read.await[{#DEVNAME}]<p>**Expression**:</p>`(last("vfs.dev.read.time.rate[{#DEVNAME}]")/(last("vfs.dev.read.rate[{#DEVNAME}]")+(last("vfs.dev.read.rate[{#DEVNAME}]")=0)))*1000*(last("vfs.dev.read.rate[{#DEVNAME}]") > 0)` |
-|Storage |{#DEVNAME}: Disk write request avg waiting time (w_await) |<p>This formula contains two boolean expressions that evaluates to 1 or 0 in order to set calculated metric to zero and to avoid division by zero exception.</p> |CALCULATED |vfs.dev.write.await[{#DEVNAME}]<p>**Expression**:</p>`(last("vfs.dev.write.time.rate[{#DEVNAME}]")/(last("vfs.dev.write.rate[{#DEVNAME}]")+(last("vfs.dev.write.rate[{#DEVNAME}]")=0)))*1000*(last("vfs.dev.write.rate[{#DEVNAME}]") > 0)` |
-|Storage |{#DEVNAME}: Disk average queue size (avgqu-sz) |<p>Current average disk queue, the number of requests outstanding on the disk at the time the performance data is collected.</p> |DEPENDENT |vfs.dev.queue_size[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[10]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.001`</p> |
-|Storage |{#DEVNAME}: Disk utilization |<p>This item is the percentage of elapsed time that the selected disk drive was busy servicing read or writes requests.</p> |DEPENDENT |vfs.dev.util[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[9]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.1`</p> |
-|Zabbix_raw_items |{#DEVNAME}: Get stats |<p>Get contents of /sys/block/{#DEVNAME}/stat for disk stats.</p> |ZABBIX_ACTIVE |vfs.file.contents[/sys/block/{#DEVNAME}/stat]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return JSON.stringify(value.trim().split(/ +/));`</p> |
-|Zabbix_raw_items |{#DEVNAME}: Disk read time (rate) |<p>Rate of total read time counter. Used in r_await calculation</p> |DEPENDENT |vfs.dev.read.time.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[3]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.001`</p> |
-|Zabbix_raw_items |{#DEVNAME}: Disk write time (rate) |<p>Rate of total write time counter. Used in w_await calculation</p> |DEPENDENT |vfs.dev.write.time.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[7]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.001`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|-----------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Storage | {#DEVNAME}: Disk read rate | <p>r/s. The number (after merges) of read requests completed per second for the device.</p> | DEPENDENT | vfs.dev.read.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[0]`</p><p>- CHANGE_PER_SECOND |
+| Storage | {#DEVNAME}: Disk write rate | <p>w/s. The number (after merges) of write requests completed per second for the device.</p> | DEPENDENT | vfs.dev.write.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[4]`</p><p>- CHANGE_PER_SECOND |
+| Storage | {#DEVNAME}: Disk read request avg waiting time (r_await) | <p>This formula contains two boolean expressions that evaluates to 1 or 0 in order to set calculated metric to zero and to avoid division by zero exception.</p> | CALCULATED | vfs.dev.read.await[{#DEVNAME}]<p>**Expression**:</p>`(last("vfs.dev.read.time.rate[{#DEVNAME}]")/(last("vfs.dev.read.rate[{#DEVNAME}]")+(last("vfs.dev.read.rate[{#DEVNAME}]")=0)))*1000*(last("vfs.dev.read.rate[{#DEVNAME}]") > 0)` |
+| Storage | {#DEVNAME}: Disk write request avg waiting time (w_await) | <p>This formula contains two boolean expressions that evaluates to 1 or 0 in order to set calculated metric to zero and to avoid division by zero exception.</p> | CALCULATED | vfs.dev.write.await[{#DEVNAME}]<p>**Expression**:</p>`(last("vfs.dev.write.time.rate[{#DEVNAME}]")/(last("vfs.dev.write.rate[{#DEVNAME}]")+(last("vfs.dev.write.rate[{#DEVNAME}]")=0)))*1000*(last("vfs.dev.write.rate[{#DEVNAME}]") > 0)` |
+| Storage | {#DEVNAME}: Disk average queue size (avgqu-sz) | <p>Current average disk queue, the number of requests outstanding on the disk at the time the performance data is collected.</p> | DEPENDENT | vfs.dev.queue_size[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[10]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.001`</p> |
+| Storage | {#DEVNAME}: Disk utilization | <p>This item is the percentage of elapsed time that the selected disk drive was busy servicing read or writes requests.</p> | DEPENDENT | vfs.dev.util[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[9]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.1`</p> |
+| Zabbix_raw_items | {#DEVNAME}: Get stats | <p>Get contents of /sys/block/{#DEVNAME}/stat for disk stats.</p> | ZABBIX_ACTIVE | vfs.file.contents[/sys/block/{#DEVNAME}/stat]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return JSON.stringify(value.trim().split(/ +/));`</p> |
+| Zabbix_raw_items | {#DEVNAME}: Disk read time (rate) | <p>Rate of total read time counter. Used in r_await calculation</p> | DEPENDENT | vfs.dev.read.time.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[3]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.001`</p> |
+| Zabbix_raw_items | {#DEVNAME}: Disk write time (rate) | <p>Rate of total write time counter. Used in w_await calculation</p> | DEPENDENT | vfs.dev.write.time.rate[{#DEVNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$[7]`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `0.001`</p> |
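The Storage rows above form a master/dependent chain: a single ZABBIX_ACTIVE item reads `/sys/block/{#DEVNAME}/stat`, a JavaScript step turns the whitespace-separated counters into a JSON array, and each metric is a DEPENDENT item that picks its column with JSONPATH and rates it with CHANGE_PER_SECOND. A condensed, illustrative sketch of that chain (the real definitions are item prototypes under the block devices discovery rule and carry more fields):

```yaml
# Condensed, illustrative excerpt of the master/dependent pattern
# documented in the table above; not a verbatim template excerpt.
-
  name: '{#DEVNAME}: Get stats'
  type: ZABBIX_ACTIVE
  key: 'vfs.file.contents[/sys/block/{#DEVNAME}/stat]'
  value_type: TEXT
  preprocessing:
    -
      # Split the stat line on whitespace and serialize it as a JSON array.
      type: JAVASCRIPT
      parameters:
        - 'return JSON.stringify(value.trim().split(/ +/));'
-
  name: '{#DEVNAME}: Disk read rate'
  type: DEPENDENT
  key: 'vfs.dev.read.rate[{#DEVNAME}]'
  delay: '0'
  value_type: FLOAT
  preprocessing:
    -
      # Column 0 of the stat array holds the completed reads counter.
      type: JSONPATH
      parameters:
        - '$[0]'
    -
      type: CHANGE_PER_SECOND
  master_item:
    key: 'vfs.file.contents[/sys/block/{#DEVNAME}/stat]'
```

The r_await/w_await calculated items then guard against division by zero: when the read (or write) rate is zero, the `+(last(...)=0)` term keeps the denominator at 1, and the trailing `*(last(...)>0)` factor forces the result to 0.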
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#DEVNAME}: Disk read/write request responses are too high (read > {$VFS.DEV.READ.AWAIT.WARN:"{#DEVNAME}"} ms for 15m or write > {$VFS.DEV.WRITE.AWAIT.WARN:"{#DEVNAME}"} ms for 15m) |<p>This trigger might indicate disk {#DEVNAME} saturation.</p> |`{TEMPLATE_NAME:vfs.dev.read.await[{#DEVNAME}].min(15m)} > {$VFS.DEV.READ.AWAIT.WARN:"{#DEVNAME}"} or {Linux block devices by Zabbix agent active:vfs.dev.write.await[{#DEVNAME}].min(15m)} > {$VFS.DEV.WRITE.AWAIT.WARN:"{#DEVNAME}"}` |WARNING |<p>Manual close: YES</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------|
+| {#DEVNAME}: Disk read/write request responses are too high (read > {$VFS.DEV.READ.AWAIT.WARN:"{#DEVNAME}"} ms for 15m or write > {$VFS.DEV.WRITE.AWAIT.WARN:"{#DEVNAME}"} ms for 15m) | <p>This trigger might indicate disk {#DEVNAME} saturation.</p> | `{TEMPLATE_NAME:vfs.dev.read.await[{#DEVNAME}].min(15m)} > {$VFS.DEV.READ.AWAIT.WARN:"{#DEVNAME}"} or {Linux block devices by Zabbix agent active:vfs.dev.write.await[{#DEVNAME}].min(15m)} > {$VFS.DEV.WRITE.AWAIT.WARN:"{#DEVNAME}"}` | WARNING | <p>Manual close: YES</p> |
## Feedback
@@ -233,7 +233,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -245,12 +245,12 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$IF.ERRORS.WARN} |<p>-</p> |`2` |
-|{$IFCONTROL} |<p>-</p> |`1` |
-|{$NET.IF.IFNAME.MATCHES} |<p>-</p> |`^.*$` |
-|{$NET.IF.IFNAME.NOT_MATCHES} |<p>Filter out loopbacks, nulls, docker veth links and docker0 bridge by default</p> |`(^Software Loopback Interface|^NULL[0-9.]*$|^[Ll]o[0-9.]*$|^[Ss]ystem$|^Nu[0-9.]*$|^veth[0-9a-z]+$|docker[0-9]+|br-[a-z0-9]{12})` |
+| Name | Description | Default |
+|------------------------------|-------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------|
+| {$IF.ERRORS.WARN} | <p>-</p> | `2` |
+| {$IFCONTROL} | <p>-</p> | `1` |
+| {$NET.IF.IFNAME.MATCHES} | <p>-</p> | `^.*$` |
+| {$NET.IF.IFNAME.NOT_MATCHES} | <p>Filter out loopbacks, nulls, docker veth links and docker0 bridge by default</p> | `(^Software Loopback Interface|^NULL[0-9.]*$|^[Ll]o[0-9.]*$|^[Ss]ystem$|^Nu[0-9.]*$|^veth[0-9a-z]+$|docker[0-9]+|br-[a-z0-9]{12})` |
## Template links
@@ -258,30 +258,30 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Network interface discovery |<p>Discovery of network interfaces.</p> |ZABBIX_ACTIVE |net.if.discovery<p>**Filter**:</p>AND <p>- A: {#IFNAME} MATCHES_REGEX `{$NET.IF.IFNAME.MATCHES}`</p><p>- B: {#IFNAME} NOT_MATCHES_REGEX `{$NET.IF.IFNAME.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|-----------------------------|-----------------------------------------|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Network interface discovery | <p>Discovery of network interfaces.</p> | ZABBIX_ACTIVE | net.if.discovery<p>**Filter**:</p>AND <p>- A: {#IFNAME} MATCHES_REGEX `{$NET.IF.IFNAME.MATCHES}`</p><p>- B: {#IFNAME} NOT_MATCHES_REGEX `{$NET.IF.IFNAME.NOT_MATCHES}`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Network_interfaces |Interface {#IFNAME}: Bits received | |ZABBIX_ACTIVE |net.if.in["{#IFNAME}"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
-|Network_interfaces |Interface {#IFNAME}: Bits sent | |ZABBIX_ACTIVE |net.if.out["{#IFNAME}"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
-|Network_interfaces |Interface {#IFNAME}: Outbound packets with errors | |ZABBIX_ACTIVE |net.if.out["{#IFNAME}",errors]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}: Inbound packets with errors | |ZABBIX_ACTIVE |net.if.in["{#IFNAME}",errors]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}: Outbound packets discarded | |ZABBIX_ACTIVE |net.if.out["{#IFNAME}",dropped]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}: Inbound packets discarded | |ZABBIX_ACTIVE |net.if.in["{#IFNAME}",dropped]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}: Operational status |<p>Indicates the interface RFC2863 operational state as a string.</p><p>Possible values are:"unknown", "notpresent", "down", "lowerlayerdown", "testing","dormant", "up".</p><p>Reference: https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-class-net</p> |ZABBIX_ACTIVE |vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Network_interfaces |Interface {#IFNAME}: Interface type |<p>Indicates the interface protocol type as a decimal value.</p><p>See include/uapi/linux/if_arp.h for all possible values.</p><p>Reference: https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-class-net</p> |ZABBIX_ACTIVE |vfs.file.contents["/sys/class/net/{#IFNAME}/type"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|--------------------|---------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|---------------------------------------------------------------------------------------------------------------------------------------------------|
+| Network_interfaces | Interface {#IFNAME}: Bits received | | ZABBIX_ACTIVE | net.if.in["{#IFNAME}"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
+| Network_interfaces | Interface {#IFNAME}: Bits sent | | ZABBIX_ACTIVE | net.if.out["{#IFNAME}"]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
+| Network_interfaces | Interface {#IFNAME}: Outbound packets with errors | | ZABBIX_ACTIVE | net.if.out["{#IFNAME}",errors]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}: Inbound packets with errors | | ZABBIX_ACTIVE | net.if.in["{#IFNAME}",errors]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}: Outbound packets discarded | | ZABBIX_ACTIVE | net.if.out["{#IFNAME}",dropped]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}: Inbound packets discarded | | ZABBIX_ACTIVE | net.if.in["{#IFNAME}",dropped]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}: Operational status | <p>Indicates the interface RFC2863 operational state as a string.</p><p>Possible values are:"unknown", "notpresent", "down", "lowerlayerdown", "testing","dormant", "up".</p><p>Reference: https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-class-net</p> | ZABBIX_ACTIVE | vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Network_interfaces | Interface {#IFNAME}: Interface type | <p>Indicates the interface protocol type as a decimal value.</p><p>See include/uapi/linux/if_arp.h for all possible values.</p><p>Reference: https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-class-net</p> | ZABBIX_ACTIVE | vfs.file.contents["/sys/class/net/{#IFNAME}/type"]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Interface {#IFNAME}: High error rate (> {$IF.ERRORS.WARN:"{#IFNAME}"} for 5m) |<p>Recovers when below 80% of {$IF.ERRORS.WARN:"{#IFNAME}"} threshold</p> |`{TEMPLATE_NAME:net.if.in["{#IFNAME}",errors].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"} or {Linux network interfaces by Zabbix agent active:net.if.out["{#IFNAME}",errors].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in["{#IFNAME}",errors].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8 and {Linux network interfaces by Zabbix agent active:net.if.out["{#IFNAME}",errors].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}: Link down</p> |
-|Interface {#IFNAME}: Link down |<p>This trigger expression works as follows:</p><p>1. Can be triggered if operations status is down.</p><p>2. {$IFCONTROL:"{#IFNAME}"}=1 - user can redefine Context macro to value - 0. That marks this interface as not important. No new trigger will be fired if this interface is down.</p><p>3. {TEMPLATE_NAME:METRIC.diff()}=1) - trigger fires only if operational status was up(1) sometime before. (So, do not fire 'ethernal off' interfaces.)</p><p>WARNING: if closed manually - won't fire again on next poll, because of .diff.</p> |`{$IFCONTROL:"{#IFNAME}"}=1 and ({TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}=2 and {TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].diff()}=1)`<p>Recovery expression:</p>`{TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}<>2 or {$IFCONTROL:"{#IFNAME}"}=0` |AVERAGE |<p>Manual close: YES</p> |
-|Interface {#IFNAME}: Ethernet has changed to lower speed than it was before |<p>This Ethernet connection has transitioned down from its known maximum speed. This might be a sign of autonegotiation issues. Ack to close.</p> |`{TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].change()}<0 and {TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].last()}>0 and ({Linux network interfaces by Zabbix agent active:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].last()}=6 or {Linux network interfaces by Zabbix agent active:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].last()}=1) and ({Linux network interfaces by Zabbix agent active:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}<>2)`<p>Recovery expression:</p>`({TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].change()}>0 and {TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].prev()}>0) or ({Linux network interfaces by Zabbix agent active:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}=2)` |INFO |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}: Link down</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------------------------------------------------------------------------|
+| Interface {#IFNAME}: High error rate (> {$IF.ERRORS.WARN:"{#IFNAME}"} for 5m) | <p>Recovers when below 80% of {$IF.ERRORS.WARN:"{#IFNAME}"} threshold</p> | `{TEMPLATE_NAME:net.if.in["{#IFNAME}",errors].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"} or {Linux network interfaces by Zabbix agent active:net.if.out["{#IFNAME}",errors].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in["{#IFNAME}",errors].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8 and {Linux network interfaces by Zabbix agent active:net.if.out["{#IFNAME}",errors].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}: Link down</p> |
+| Interface {#IFNAME}: Link down | <p>This trigger expression works as follows:</p><p>1. It can be triggered if the operational status is down.</p><p>2. {$IFCONTROL:"{#IFNAME}"}=1 - the user can redefine this context macro to 0, which marks the interface as not important; no new trigger will be fired if this interface is down.</p><p>3. {TEMPLATE_NAME:METRIC.diff()}=1 - the trigger fires only if the operational status was up(1) at some point before, so it does not fire for 'eternally off' interfaces.</p><p>WARNING: if closed manually, it won't fire again on the next poll because of .diff().</p> | `{$IFCONTROL:"{#IFNAME}"}=1 and ({TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}=2 and {TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].diff()}=1)`<p>Recovery expression:</p>`{TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}<>2 or {$IFCONTROL:"{#IFNAME}"}=0` | AVERAGE | <p>Manual close: YES</p> |
+| Interface {#IFNAME}: Ethernet has changed to lower speed than it was before | <p>This Ethernet connection has transitioned down from its known maximum speed. This might be a sign of autonegotiation issues. Ack to close.</p> | `{TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].change()}<0 and {TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].last()}>0 and ({Linux network interfaces by Zabbix agent active:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].last()}=6 or {Linux network interfaces by Zabbix agent active:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].last()}=1) and ({Linux network interfaces by Zabbix agent active:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}<>2)`<p>Recovery expression:</p>`({TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].change()}>0 and {TEMPLATE_NAME:vfs.file.contents["/sys/class/net/{#IFNAME}/type"].prev()}>0) or ({Linux network interfaces by Zabbix agent active:vfs.file.contents["/sys/class/net/{#IFNAME}/operstate"].last()}=2)` | INFO | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}: Link down</p> |
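The 'Operational status' item in the previous table hides its JAVASCRIPT preprocessing step behind "Text is too long. Please see the template." That step translates the /sys/class/net/&lt;iface&gt;/operstate string into a numeric code so that the 'Link down' trigger can compare the last value against 2 (down). Below is a hedged sketch of such a step; it is not the exact script shipped in the template, and only the up=1, down=2 and notpresent=7 codes are taken from this document:

```yaml
# Illustrative JAVASCRIPT preprocessing step for the operstate item.
# The real template ships its own script; this mapping only keeps the
# codes that this README itself implies and collapses everything else to 0.
preprocessing:
  -
    type: JAVASCRIPT
    parameters:
      - |
        var v = value.trim().toLowerCase();
        if (v === 'up')         { return 1; }  // operational
        if (v === 'down')       { return 2; }  // matches the Link down trigger (last()=2)
        if (v === 'notpresent') { return 7; }  // notPresent(7), as in the node_exporter macros
        return 0;                              // anything else: treated as unknown in this sketch
```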
## Feedback
@@ -295,7 +295,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -307,11 +307,11 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$KERNEL.MAXFILES.MIN} |<p>-</p> |`256` |
-|{$KERNEL.MAXPROC.MIN} |<p>-</p> |`1024` |
-|{$SYSTEM.FUZZYTIME.MAX} |<p>-</p> |`60` |
+| Name | Description | Default |
+|-------------------------|-------------|---------|
+| {$KERNEL.MAXFILES.MIN} | <p>-</p> | `256` |
+| {$KERNEL.MAXPROC.MIN} | <p>-</p> | `1024` |
+| {$SYSTEM.FUZZYTIME.MAX} | <p>-</p> | `60` |
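In the template YAML itself these defaults are carried in a macros block; a minimal sketch with the values from the table above (description fields are omitted, as they are '-' here). A macro overridden on the host or a linked template takes precedence over these template-level defaults.

```yaml
macros:
  -
    macro: '{$KERNEL.MAXFILES.MIN}'
    value: '256'
  -
    macro: '{$KERNEL.MAXPROC.MIN}'
    value: '1024'
  -
    macro: '{$SYSTEM.FUZZYTIME.MAX}'
    value: '60'
```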
## Template links
@@ -322,35 +322,35 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|General |System boot time |<p>-</p> |ZABBIX_ACTIVE |system.boottime<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|General |System local time |<p>System local time of the host.</p> |ZABBIX_ACTIVE |system.localtime |
-|General |System name |<p>System host name.</p> |ZABBIX_ACTIVE |system.hostname<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|General |System description |<p>The information as normally returned by 'uname -a'.</p> |ZABBIX_ACTIVE |system.uname<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|General |Number of logged in users |<p>Number of users who are currently logged in.</p> |ZABBIX_ACTIVE |system.users.num |
-|General |Maximum number of open file descriptors |<p>It could be increased by using sysctrl utility or modifying file /etc/sysctl.conf.</p> |ZABBIX_ACTIVE |kernel.maxfiles<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|General |Maximum number of processes |<p>It could be increased by using sysctrl utility or modifying file /etc/sysctl.conf.</p> |ZABBIX_ACTIVE |kernel.maxproc<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|General |Number of processes |<p>-</p> |ZABBIX_ACTIVE |proc.num |
-|General |Number of running processes |<p>-</p> |ZABBIX_ACTIVE |proc.num[,,run] |
-|Inventory |Operating system |<p>-</p> |ZABBIX_ACTIVE |system.sw.os<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Operating system architecture |<p>Operating system architecture of the host.</p> |ZABBIX_ACTIVE |system.sw.arch<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Software installed |<p>-</p> |ZABBIX_ACTIVE |system.sw.packages<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Security |Checksum of /etc/passwd |<p>-</p> |ZABBIX_ACTIVE |vfs.file.cksum[/etc/passwd]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Status |System uptime |<p>System uptime in 'N days, hh:mm:ss' format.</p> |ZABBIX_ACTIVE |system.uptime |
+| Group | Name | Description | Type | Key and additional info |
+|-----------|-----------------------------------------|-------------------------------------------------------------------------------------------|---------------|------------------------------------------------------------------------------------------------|
+| General | System boot time | <p>-</p> | ZABBIX_ACTIVE | system.boottime<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| General | System local time | <p>System local time of the host.</p> | ZABBIX_ACTIVE | system.localtime |
+| General | System name | <p>System host name.</p> | ZABBIX_ACTIVE | system.hostname<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| General | System description | <p>The information as normally returned by 'uname -a'.</p> | ZABBIX_ACTIVE | system.uname<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| General | Number of logged in users | <p>Number of users who are currently logged in.</p> | ZABBIX_ACTIVE | system.users.num |
+| General   | Maximum number of open file descriptors | <p>It can be increased by using the sysctl utility or by modifying /etc/sysctl.conf.</p> | ZABBIX_ACTIVE | kernel.maxfiles<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| General   | Maximum number of processes | <p>It can be increased by using the sysctl utility or by modifying /etc/sysctl.conf.</p> | ZABBIX_ACTIVE | kernel.maxproc<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| General | Number of processes | <p>-</p> | ZABBIX_ACTIVE | proc.num |
+| General | Number of running processes | <p>-</p> | ZABBIX_ACTIVE | proc.num[,,run] |
+| Inventory | Operating system | <p>-</p> | ZABBIX_ACTIVE | system.sw.os<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Operating system architecture | <p>Operating system architecture of the host.</p> | ZABBIX_ACTIVE | system.sw.arch<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Software installed | <p>-</p> | ZABBIX_ACTIVE | system.sw.packages<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Security | Checksum of /etc/passwd | <p>-</p> | ZABBIX_ACTIVE | vfs.file.cksum[/etc/passwd]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Status | System uptime | <p>System uptime in 'N days, hh:mm:ss' format.</p> | ZABBIX_ACTIVE | system.uptime |
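To illustrate the DISCARD_UNCHANGED_HEARTBEAT steps used throughout this table, here is a hedged YAML sketch of the /etc/passwd checksum item (the key and the 1h heartbeat come from the table; the polling interval is a placeholder):

```yaml
items:
  -
    name: 'Checksum of /etc/passwd'
    type: ZABBIX_ACTIVE
    key: 'vfs.file.cksum[/etc/passwd]'
    delay: 1h                # placeholder interval
    preprocessing:
      -
        # Consecutive identical checksums are discarded, but at least one
        # value per hour is still stored so the item never appears stale.
        type: DISCARD_UNCHANGED_HEARTBEAT
        parameters:
          - 1h
```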
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|System time is out of sync (diff with Zabbix server > {$SYSTEM.FUZZYTIME.MAX}s) |<p>The host system time is different from the Zabbix server time.</p> |`{TEMPLATE_NAME:system.localtime.fuzzytime({$SYSTEM.FUZZYTIME.MAX})}=0` |WARNING |<p>Manual close: YES</p> |
-|System name has changed (new name: {ITEM.VALUE}) |<p>System name has changed. Ack to close.</p> |`{TEMPLATE_NAME:system.hostname.diff()}=1 and {TEMPLATE_NAME:system.hostname.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Configured max number of open filedescriptors is too low (< {$KERNEL.MAXFILES.MIN}) |<p>-</p> |`{TEMPLATE_NAME:kernel.maxfiles.last()}<{$KERNEL.MAXFILES.MIN}` |INFO | |
-|Configured max number of processes is too low (< {$KERNEL.MAXPROC.MIN}) |<p>-</p> |`{TEMPLATE_NAME:kernel.maxproc.last()}<{$KERNEL.MAXPROC.MIN}` |INFO |<p>**Depends on**:</p><p>- Getting closer to process limit (over 80% used)</p> |
-|Getting closer to process limit (over 80% used) |<p>-</p> |`{TEMPLATE_NAME:proc.num.last()}/{Linux generic by Zabbix agent active:kernel.maxproc.last()}*100>80` |WARNING | |
-|Operating system description has changed |<p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> |`{TEMPLATE_NAME:system.sw.os.diff()}=1 and {TEMPLATE_NAME:system.sw.os.strlen()}>0` |INFO |<p>Manual close: YES</p><p>**Depends on**:</p><p>- System name has changed (new name: {ITEM.VALUE})</p> |
-|/etc/passwd has been changed |<p>-</p> |`{TEMPLATE_NAME:vfs.file.cksum[/etc/passwd].diff()}>0` |INFO |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Operating system description has changed</p><p>- System name has changed (new name: {ITEM.VALUE})</p> |
-|{HOST.NAME} has been restarted (uptime < 10m) |<p>The host uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:system.uptime.last()}<10m` |WARNING |<p>Manual close: YES</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------|
+| System time is out of sync (diff with Zabbix server > {$SYSTEM.FUZZYTIME.MAX}s) | <p>The host system time is different from the Zabbix server time.</p> | `{TEMPLATE_NAME:system.localtime.fuzzytime({$SYSTEM.FUZZYTIME.MAX})}=0` | WARNING | <p>Manual close: YES</p> |
+| System name has changed (new name: {ITEM.VALUE}) | <p>System name has changed. Ack to close.</p> | `{TEMPLATE_NAME:system.hostname.diff()}=1 and {TEMPLATE_NAME:system.hostname.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Configured max number of open filedescriptors is too low (< {$KERNEL.MAXFILES.MIN}) | <p>-</p> | `{TEMPLATE_NAME:kernel.maxfiles.last()}<{$KERNEL.MAXFILES.MIN}` | INFO | |
+| Configured max number of processes is too low (< {$KERNEL.MAXPROC.MIN}) | <p>-</p> | `{TEMPLATE_NAME:kernel.maxproc.last()}<{$KERNEL.MAXPROC.MIN}` | INFO | <p>**Depends on**:</p><p>- Getting closer to process limit (over 80% used)</p> |
+| Getting closer to process limit (over 80% used) | <p>-</p> | `{TEMPLATE_NAME:proc.num.last()}/{Linux generic by Zabbix agent active:kernel.maxproc.last()}*100>80` | WARNING | |
+| Operating system description has changed | <p>The operating system description has changed. Possible reasons: the system has been updated or replaced. Ack to close.</p> | `{TEMPLATE_NAME:system.sw.os.diff()}=1 and {TEMPLATE_NAME:system.sw.os.strlen()}>0` | INFO | <p>Manual close: YES</p><p>**Depends on**:</p><p>- System name has changed (new name: {ITEM.VALUE})</p> |
+| /etc/passwd has been changed | <p>-</p> | `{TEMPLATE_NAME:vfs.file.cksum[/etc/passwd].diff()}>0` | INFO | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Operating system description has changed</p><p>- System name has changed (new name: {ITEM.VALUE})</p> |
+| {HOST.NAME} has been restarted (uptime < 10m) | <p>The host uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:system.uptime.last()}<10m` | WARNING | <p>Manual close: YES</p> |
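As a worked example of the process-limit trigger above: with kernel.maxproc at 4096 and 3400 running processes, 3400/4096*100 ≈ 83%, which exceeds the 80% threshold and fires the WARNING. A hedged sketch of the same trigger in export YAML (name, severity and expression taken from the table, keeping the old-style expression syntax used throughout this README; the surrounding layout is approximate):

```yaml
triggers:
  -
    name: 'Getting closer to process limit (over 80% used)'
    priority: WARNING
    # Current number of processes as a percentage of the configured maximum.
    expression: '{Linux generic by Zabbix agent active:proc.num.last()}/{Linux generic by Zabbix agent active:kernel.maxproc.last()}*100>80'
```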
## Feedback
@@ -360,7 +360,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
New official Linux template. Requires Zabbix agent 3.0.14, 3.4.5, 4.0.0 or newer.
## Setup
@@ -374,15 +374,15 @@ No specific Zabbix configuration is required.
## Template links
-|Name|
-|----|
-|Linux CPU by Zabbix agent active |
-|Linux block devices by Zabbix agent active |
-|Linux filesystems by Zabbix agent active |
-|Linux generic by Zabbix agent active |
-|Linux memory by Zabbix agent active |
-|Linux network interfaces by Zabbix agent active |
-|Zabbix agent |
+| Name |
+|-------------------------------------------------|
+| Linux CPU by Zabbix agent active |
+| Linux block devices by Zabbix agent active |
+| Linux filesystems by Zabbix agent active |
+| Linux generic by Zabbix agent active |
+| Linux memory by Zabbix agent active |
+| Linux network interfaces by Zabbix agent active |
+| Zabbix agent |
## Discovery rules
diff --git a/templates/os/linux_active/template_os_linux_active.yaml b/templates/os/linux_active/template_os_linux_active.yaml
index 3ad9c8572b2..17ddb58b48c 100644
--- a/templates/os/linux_active/template_os_linux_active.yaml
+++ b/templates/os/linux_active/template_os_linux_active.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-02T19:42:21Z'
+ date: '2021-04-22T11:28:50Z'
groups:
-
name: Templates/Modules
@@ -347,151 +347,153 @@ zabbix_export:
dashboards:
-
name: 'System performance'
- widgets:
- -
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'System load'
- host: 'Linux by Zabbix agent active'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'CPU usage'
- host: 'Linux by Zabbix agent active'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Memory usage'
- host: 'Linux by Zabbix agent active'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Swap usage'
- host: 'Linux by Zabbix agent active'
- -
- type: GRAPH_PROTOTYPE
- 'y': '10'
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
- -
- type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#FSNAME}: Disk space usage'
- host: 'Linux by Zabbix agent active'
+ pages:
-
- type: GRAPH_PROTOTYPE
- 'y': '22'
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
+ widgets:
-
- type: INTEGER
- name: rows
- value: '3'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'System load'
+ host: 'Linux by Zabbix agent active'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU usage'
+ host: 'Linux by Zabbix agent active'
+ -
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Memory usage'
+ host: 'Linux by Zabbix agent active'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Swap usage'
+ host: 'Linux by Zabbix agent active'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#DEVNAME}: Disk read/write rates'
- host: 'Linux by Zabbix agent active'
- -
- type: GRAPH_PROTOTYPE
- 'y': '34'
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ 'y': '10'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#FSNAME}: Disk space usage'
+ host: 'Linux by Zabbix agent active'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#DEVNAME}: Disk average waiting time'
- host: 'Linux by Zabbix agent active'
- -
- type: GRAPH_PROTOTYPE
- 'y': '46'
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ 'y': '22'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#DEVNAME}: Disk read/write rates'
+ host: 'Linux by Zabbix agent active'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#DEVNAME}: Disk utilization and queue'
- host: 'Linux by Zabbix agent active'
- -
- type: GRAPH_PROTOTYPE
- 'y': '58'
- width: '24'
- height: '5'
- fields:
+ 'y': '34'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#DEVNAME}: Disk average waiting time'
+ host: 'Linux by Zabbix agent active'
-
- type: INTEGER
- name: columns
- value: '1'
+ type: GRAPH_PROTOTYPE
+ 'y': '46'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#DEVNAME}: Disk utilization and queue'
+ host: 'Linux by Zabbix agent active'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Interface {#IFNAME}: Network traffic'
- host: 'Linux by Zabbix agent active'
+ 'y': '58'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Interface {#IFNAME}: Network traffic'
+ host: 'Linux by Zabbix agent active'
-
template: 'Linux CPU by Zabbix agent active'
name: 'Linux CPU by Zabbix agent active'
@@ -1644,26 +1646,28 @@ zabbix_export:
dashboards:
-
name: 'Network interfaces'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Interface {#IFNAME}: Network traffic'
- host: 'Linux network interfaces by Zabbix agent active'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Interface {#IFNAME}: Network traffic'
+ host: 'Linux network interfaces by Zabbix agent active'
valuemaps:
-
name: 'IF-MIB::ifOperStatus'
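The YAML hunks above are the substance of this commit for the Linux active templates: dashboard widgets are no longer listed directly under a dashboard but are wrapped in a pages list, which is what enables multi-page dashboards. Stripped of the column/row fields, the converted 'Network interfaces' dashboard has this shape:

```yaml
dashboards:
  -
    name: 'Network interfaces'
    pages:                        # new level introduced by this commit
      -
        widgets:                  # widgets now belong to a page, not to the dashboard
          -
            type: GRAPH_PROTOTYPE
            width: '24'
            height: '12'
            fields:
              -
                type: GRAPH_PROTOTYPE
                name: graphid
                value:
                  name: 'Interface {#IFNAME}: Network traffic'
                  host: 'Linux network interfaces by Zabbix agent active'
```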
diff --git a/templates/os/linux_prom/README.md b/templates/os/linux_prom/README.md
index 1a14caba0b1..3e40a79cf24 100644
--- a/templates/os/linux_prom/README.md
+++ b/templates/os/linux_prom/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
This template collects Linux metrics from node_exporter 0.18 and above. Support for older node_exporter versions is provided as 'best effort'.
This template was tested on:
@@ -21,39 +21,39 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$IF.ERRORS.WARN} |<p>-</p> |`2` |
-|{$IF.UTIL.MAX} |<p>-</p> |`90` |
-|{$IFCONTROL} |<p>-</p> |`1` |
-|{$KERNEL.MAXFILES.MIN} |<p>-</p> |`256` |
-|{$LOAD_AVG_PER_CPU.MAX.WARN} |<p>Load per CPU considered sustainable. Tune if needed.</p> |`1.5` |
-|{$MEMORY.AVAILABLE.MIN} |<p>-</p> |`20M` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
-|{$NET.IF.IFALIAS.MATCHES} |<p>-</p> |`^.*$` |
-|{$NET.IF.IFALIAS.NOT_MATCHES} |<p>-</p> |`CHANGE_IF_NEEDED` |
-|{$NET.IF.IFNAME.MATCHES} |<p>-</p> |`^.*$` |
-|{$NET.IF.IFNAME.NOT_MATCHES} |<p>Filter out loopbacks, nulls, docker veth links and docker0 bridge by default</p> |`(^Software Loopback Interface|^NULL[0-9.]*$|^[Ll]o[0-9.]*$|^[Ss]ystem$|^Nu[0-9.]*$|^veth[0-9a-z]+$|docker[0-9]+|br-[a-z0-9]{12})` |
-|{$NET.IF.IFOPERSTATUS.MATCHES} |<p>-</p> |`^.*$` |
-|{$NET.IF.IFOPERSTATUS.NOT_MATCHES} |<p>Ignore notPresent(7)</p> |`^7$` |
-|{$NODE_EXPORTER_PORT} |<p>TCP Port node_exporter is listening on.</p> |`9100` |
-|{$SWAP.PFREE.MIN.WARN} |<p>-</p> |`50` |
-|{$SYSTEM.FUZZYTIME.MAX} |<p>-</p> |`60` |
-|{$VFS.DEV.DEVNAME.MATCHES} |<p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> |`.+` |
-|{$VFS.DEV.DEVNAME.NOT_MATCHES} |<p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> |`^(loop[0-9]*|sd[a-z][0-9]+|nbd[0-9]+|sr[0-9]+|fd[0-9]+|dm-[0-9]+|ram[0-9]+|ploop[a-z0-9]+|md[0-9]*|hcp[0-9]*|zram[0-9]*)` |
-|{$VFS.DEV.READ.AWAIT.WARN} |<p>Disk read average response time (in ms) before the trigger would fire</p> |`20` |
-|{$VFS.DEV.WRITE.AWAIT.WARN} |<p>Disk write average response time (in ms) before the trigger would fire</p> |`20` |
-|{$VFS.FS.FSDEVICE.MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`^.+$` |
-|{$VFS.FS.FSDEVICE.NOT_MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`^\s$` |
-|{$VFS.FS.FSNAME.MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`.+` |
-|{$VFS.FS.FSNAME.NOT_MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`^(/dev|/sys|/run|/proc|.+/shm$)` |
-|{$VFS.FS.FSTYPE.MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`^(btrfs|ext2|ext3|ext4|reiser|xfs|ffs|ufs|jfs|jfs2|vxfs|hfs|apfs|refs|ntfs|fat32|zfs)$` |
-|{$VFS.FS.FSTYPE.NOT_MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`^\s$` |
-|{$VFS.FS.INODE.PFREE.MIN.CRIT} |<p>-</p> |`10` |
-|{$VFS.FS.INODE.PFREE.MIN.WARN} |<p>-</p> |`20` |
-|{$VFS.FS.PUSED.MAX.CRIT} |<p>-</p> |`90` |
-|{$VFS.FS.PUSED.MAX.WARN} |<p>-</p> |`80` |
+| Name | Description | Default |
+|------------------------------------|--------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$IF.ERRORS.WARN} | <p>-</p> | `2` |
+| {$IF.UTIL.MAX} | <p>-</p> | `90` |
+| {$IFCONTROL} | <p>-</p> | `1` |
+| {$KERNEL.MAXFILES.MIN} | <p>-</p> | `256` |
+| {$LOAD_AVG_PER_CPU.MAX.WARN} | <p>Load per CPU considered sustainable. Tune if needed.</p> | `1.5` |
+| {$MEMORY.AVAILABLE.MIN} | <p>-</p> | `20M` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
+| {$NET.IF.IFALIAS.MATCHES} | <p>-</p> | `^.*$` |
+| {$NET.IF.IFALIAS.NOT_MATCHES} | <p>-</p> | `CHANGE_IF_NEEDED` |
+| {$NET.IF.IFNAME.MATCHES} | <p>-</p> | `^.*$` |
+| {$NET.IF.IFNAME.NOT_MATCHES} | <p>Filter out loopbacks, nulls, docker veth links and docker0 bridge by default</p> | `(^Software Loopback Interface|^NULL[0-9.]*$|^[Ll]o[0-9.]*$|^[Ss]ystem$|^Nu[0-9.]*$|^veth[0-9a-z]+$|docker[0-9]+|br-[a-z0-9]{12})` |
+| {$NET.IF.IFOPERSTATUS.MATCHES} | <p>-</p> | `^.*$` |
+| {$NET.IF.IFOPERSTATUS.NOT_MATCHES} | <p>Ignore notPresent(7)</p> | `^7$` |
+| {$NODE_EXPORTER_PORT} | <p>TCP Port node_exporter is listening on.</p> | `9100` |
+| {$SWAP.PFREE.MIN.WARN} | <p>-</p> | `50` |
+| {$SYSTEM.FUZZYTIME.MAX} | <p>-</p> | `60` |
+| {$VFS.DEV.DEVNAME.MATCHES} | <p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> | `.+` |
+| {$VFS.DEV.DEVNAME.NOT_MATCHES} | <p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> | `^(loop[0-9]*|sd[a-z][0-9]+|nbd[0-9]+|sr[0-9]+|fd[0-9]+|dm-[0-9]+|ram[0-9]+|ploop[a-z0-9]+|md[0-9]*|hcp[0-9]*|zram[0-9]*)` |
+| {$VFS.DEV.READ.AWAIT.WARN} | <p>Disk read average response time (in ms) before the trigger would fire</p> | `20` |
+| {$VFS.DEV.WRITE.AWAIT.WARN} | <p>Disk write average response time (in ms) before the trigger would fire</p> | `20` |
+| {$VFS.FS.FSDEVICE.MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `^.+$` |
+| {$VFS.FS.FSDEVICE.NOT_MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `^\s$` |
+| {$VFS.FS.FSNAME.MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `.+` |
+| {$VFS.FS.FSNAME.NOT_MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `^(/dev|/sys|/run|/proc|.+/shm$)` |
+| {$VFS.FS.FSTYPE.MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `^(btrfs|ext2|ext3|ext4|reiser|xfs|ffs|ufs|jfs|jfs2|vxfs|hfs|apfs|refs|ntfs|fat32|zfs)$` |
+| {$VFS.FS.FSTYPE.NOT_MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `^\s$` |
+| {$VFS.FS.INODE.PFREE.MIN.CRIT} | <p>-</p> | `10` |
+| {$VFS.FS.INODE.PFREE.MIN.WARN} | <p>-</p> | `20` |
+| {$VFS.FS.PUSED.MAX.CRIT} | <p>-</p> | `90` |
+| {$VFS.FS.PUSED.MAX.WARN} | <p>-</p> | `80` |
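The {$NODE_EXPORTER_PORT} macro is consumed by the HTTP agent master item ('Get node_exporter metrics', key node_exporter.get) that the DEPENDENT items below hang off. A hedged sketch, assuming the conventional /metrics endpoint and the {HOST.CONN} interface macro (the exact URL and interval in the template may differ):

```yaml
items:
  -
    name: 'Get node_exporter metrics'
    type: HTTP_AGENT
    key: node_exporter.get
    # Assumed URL layout: node_exporter conventionally exposes Prometheus
    # metrics at /metrics on {$NODE_EXPORTER_PORT} (default 9100).
    url: 'http://{HOST.CONN}:{$NODE_EXPORTER_PORT}/metrics'
    delay: 1m                 # placeholder interval
```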
## Template links
@@ -61,99 +61,99 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Network interface discovery |<p>Discovery of network interfaces. Requires node_exporter v0.18 and up.</p> |DEPENDENT |net.if.discovery[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_network_info$"}`</p><p>**Filter**:</p>AND <p>- A: {#IFNAME} MATCHES_REGEX `{$NET.IF.IFNAME.MATCHES}`</p><p>- B: {#IFNAME} NOT_MATCHES_REGEX `{$NET.IF.IFNAME.NOT_MATCHES}`</p><p>- C: {#IFALIAS} MATCHES_REGEX `{$NET.IF.IFALIAS.MATCHES}`</p><p>- D: {#IFALIAS} NOT_MATCHES_REGEX `{$NET.IF.IFALIAS.NOT_MATCHES}`</p><p>- E: {#IFOPERSTATUS} MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.MATCHES}`</p><p>- F: {#IFOPERSTATUS} NOT_MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.NOT_MATCHES}`</p> |
-|Mounted filesystem discovery |<p>Discovery of file systems of different types.</p> |DEPENDENT |vfs.fs.discovery[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_filesystem_size(?:_bytes)?$", mountpoint=~".+"}`</p><p>**Filter**:</p>AND <p>- A: {#FSTYPE} MATCHES_REGEX `{$VFS.FS.FSTYPE.MATCHES}`</p><p>- B: {#FSTYPE} NOT_MATCHES_REGEX `{$VFS.FS.FSTYPE.NOT_MATCHES}`</p><p>- C: {#FSNAME} MATCHES_REGEX `{$VFS.FS.FSNAME.MATCHES}`</p><p>- D: {#FSNAME} NOT_MATCHES_REGEX `{$VFS.FS.FSNAME.NOT_MATCHES}`</p><p>- E: {#FSNAME} MATCHES_REGEX `{$VFS.FS.FSDEVICE.MATCHES}`</p><p>- F: {#FSDEVICE} NOT_MATCHES_REGEX `{$VFS.FS.FSDEVICE.NOT_MATCHES}`</p> |
-|Block devices discovery |<p>-</p> |DEPENDENT |vfs.dev.discovery[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `node_disk_io_now{device=~".+"}`</p><p>**Filter**:</p>AND <p>- A: {#DEVNAME} MATCHES_REGEX `{$VFS.DEV.DEVNAME.MATCHES}`</p><p>- B: {#DEVNAME} NOT_MATCHES_REGEX `{$VFS.DEV.DEVNAME.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|------------------------------|------------------------------------------------------------------------------|-----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Network interface discovery | <p>Discovery of network interfaces. Requires node_exporter v0.18 and up.</p> | DEPENDENT | net.if.discovery[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_network_info$"}`</p><p>**Filter**:</p>AND <p>- A: {#IFNAME} MATCHES_REGEX `{$NET.IF.IFNAME.MATCHES}`</p><p>- B: {#IFNAME} NOT_MATCHES_REGEX `{$NET.IF.IFNAME.NOT_MATCHES}`</p><p>- C: {#IFALIAS} MATCHES_REGEX `{$NET.IF.IFALIAS.MATCHES}`</p><p>- D: {#IFALIAS} NOT_MATCHES_REGEX `{$NET.IF.IFALIAS.NOT_MATCHES}`</p><p>- E: {#IFOPERSTATUS} MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.MATCHES}`</p><p>- F: {#IFOPERSTATUS} NOT_MATCHES_REGEX `{$NET.IF.IFOPERSTATUS.NOT_MATCHES}`</p> |
+| Mounted filesystem discovery | <p>Discovery of file systems of different types.</p> | DEPENDENT | vfs.fs.discovery[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_filesystem_size(?:_bytes)?$", mountpoint=~".+"}`</p><p>**Filter**:</p>AND <p>- A: {#FSTYPE} MATCHES_REGEX `{$VFS.FS.FSTYPE.MATCHES}`</p><p>- B: {#FSTYPE} NOT_MATCHES_REGEX `{$VFS.FS.FSTYPE.NOT_MATCHES}`</p><p>- C: {#FSNAME} MATCHES_REGEX `{$VFS.FS.FSNAME.MATCHES}`</p><p>- D: {#FSNAME} NOT_MATCHES_REGEX `{$VFS.FS.FSNAME.NOT_MATCHES}`</p><p>- E: {#FSNAME} MATCHES_REGEX `{$VFS.FS.FSDEVICE.MATCHES}`</p><p>- F: {#FSDEVICE} NOT_MATCHES_REGEX `{$VFS.FS.FSDEVICE.NOT_MATCHES}`</p> |
+| Block devices discovery | <p>-</p> | DEPENDENT | vfs.dev.discovery[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `node_disk_io_now{device=~".+"}`</p><p>**Filter**:</p>AND <p>- A: {#DEVNAME} MATCHES_REGEX `{$VFS.DEV.DEVNAME.MATCHES}`</p><p>- B: {#DEVNAME} NOT_MATCHES_REGEX `{$VFS.DEV.DEVNAME.NOT_MATCHES}`</p> |
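A hedged sketch of how the network interface discovery rule above fits together in export YAML: the PROMETHEUS_TO_JSON step reshapes matching node_exporter lines into JSON for LLD, and the filter conditions apply the {$NET.IF.*} macros. Only two of the six conditions (A and B) are shown; the key, master item, Prometheus pattern and macros are taken from this README, while the LLD macro path and the rest of the layout are approximate:

```yaml
discovery_rules:
  -
    name: 'Network interface discovery'
    type: DEPENDENT
    key: 'net.if.discovery[node_exporter]'
    master_item:
      key: node_exporter.get
    preprocessing:
      -
        # Turn node_network_info samples into JSON so LLD macros such as
        # {#IFNAME} can be derived from the metric labels.
        type: PROMETHEUS_TO_JSON
        parameters:
          - '{__name__=~"^node_network_info$"}'
    lld_macro_paths:
      -
        lld_macro: '{#IFNAME}'
        path: $.labels.device     # assumed label name; node_network_info carries a device label
    filter:
      evaltype: AND
      conditions:
        -
          macro: '{#IFNAME}'
          value: '{$NET.IF.IFNAME.MATCHES}'
          formulaid: A
        -
          macro: '{#IFNAME}'
          value: '{$NET.IF.IFNAME.NOT_MATCHES}'
          operator: NOT_MATCHES_REGEX
          formulaid: B
```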
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |Load average (1m avg) |<p>-</p> |DEPENDENT |system.cpu.load.avg1[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_load1 `</p> |
-|CPU |Load average (5m avg) |<p>-</p> |DEPENDENT |system.cpu.load.avg5[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_load5 `</p> |
-|CPU |Load average (15m avg) |<p>-</p> |DEPENDENT |system.cpu.load.avg15[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_load15 `</p> |
-|CPU |Number of CPUs |<p>-</p> |DEPENDENT |system.cpu.num[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="idle"}`</p><p>- JAVASCRIPT: `//count the number of cores return JSON.parse(value).length `</p> |
-|CPU |CPU utilization |<p>CPU utilization in %</p> |DEPENDENT |system.cpu.util[node_exporter]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `//Calculate utilization return (100 - value)`</p> |
-|CPU |CPU idle time |<p>The time the CPU has spent doing nothing.</p> |DEPENDENT |system.cpu.idle[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="idle"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
-|CPU |CPU system time |<p>The time the CPU has spent running the kernel and its processes.</p> |DEPENDENT |system.cpu.system[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="system"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
-|CPU |CPU user time |<p>The time the CPU has spent running users' processes that are not niced.</p> |DEPENDENT |system.cpu.user[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="user"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
-|CPU |CPU steal time |<p>The amount of CPU 'stolen' from this virtual machine by the hypervisor for other tasks (such as running another virtual machine).</p> |DEPENDENT |system.cpu.steal[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="steal"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
-|CPU |CPU softirq time |<p>The amount of time the CPU has been servicing software interrupts.</p> |DEPENDENT |system.cpu.softirq[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="softirq"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
-|CPU |CPU nice time |<p>The time the CPU has spent running users' processes that have been niced.</p> |DEPENDENT |system.cpu.nice[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="nice"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
-|CPU |CPU iowait time |<p>Amount of time the CPU has been waiting for I/O to complete.</p> |DEPENDENT |system.cpu.iowait[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="iowait"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
-|CPU |CPU interrupt time |<p>The amount of time the CPU has been servicing hardware interrupts.</p> |DEPENDENT |system.cpu.interrupt[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="irq"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
-|CPU |CPU guest time |<p>Guest time (time spent running a virtual CPU for a guest operating system)</p> |DEPENDENT |system.cpu.guest[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_guest_seconds_total)?$",cpu=~".+",mode=~"^(?:user|guest)$"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
-|CPU |CPU guest nice time |<p>Time spent running a niced guest (virtual CPU for guest operating systems under the control of the Linux kernel)</p> |DEPENDENT |system.cpu.guest_nice[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_guest_seconds_total)?$",cpu=~".+",mode=~"^(?:nice|guest_nice)$"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
-|CPU |Interrupts per second |<p>-</p> |DEPENDENT |system.cpu.intr[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"node_intr"} `</p><p>- CHANGE_PER_SECOND |
-|CPU |Context switches per second |<p>-</p> |DEPENDENT |system.cpu.switches[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"node_context_switches"} `</p><p>- CHANGE_PER_SECOND |
-|General |System boot time |<p>-</p> |DEPENDENT |system.boottime[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"^node_boot_time(?:_seconds)?$"} `</p> |
-|General |System local time |<p>System local time of the host.</p> |DEPENDENT |system.localtime[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"^node_time(?:_seconds)?$"} `</p> |
-|General |System name |<p>System host name.</p> |DEPENDENT |system.name[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_uname_info nodename`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|General |System description |<p>Labeled system information as provided by the uname system call.</p> |DEPENDENT |system.descr[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `node_uname_info`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|General |Maximum number of open file descriptors |<p>It could be increased by using sysctrl utility or modifying file /etc/sysctl.conf.</p> |DEPENDENT |kernel.maxfiles[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_filefd_maximum `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|General |Number of open file descriptors |<p>-</p> |DEPENDENT |fd.open[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_filefd_allocated `</p> |
-|Inventory |Operating system |<p>-</p> |DEPENDENT |system.sw.os[node_exporter]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Operating system architecture |<p>Operating system architecture of the host.</p> |DEPENDENT |system.sw.arch[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_uname_info machine`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Memory |Memory utilization |<p>Memory used percentage is calculated as (total-available)/total*100</p> |CALCULATED |vm.memory.util[node_exporter]<p>**Expression**:</p>`(last("vm.memory.total[node_exporter]")-last("vm.memory.available[node_exporter]"))/last("vm.memory.total[node_exporter]")*100` |
-|Memory |Total memory |<p>Total memory in Bytes</p> |DEPENDENT |vm.memory.total[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"node_memory_MemTotal"} `</p> |
-|Memory |Available memory |<p>Available memory, in Linux, available = free + buffers + cache. On other platforms calculation may vary. See also: https://www.zabbix.com/documentation/5.4/manual/appendix/items/vm.memory.size_params</p> |DEPENDENT |vm.memory.available[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"node_memory_MemAvailable"} `</p> |
-|Memory |Total swap space |<p>The total space of swap volume/file in bytes.</p> |DEPENDENT |system.swap.total[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"node_memory_SwapTotal"} `</p> |
-|Memory |Free swap space |<p>The free space of swap volume/file in bytes.</p> |DEPENDENT |system.swap.free[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"node_memory_SwapFree"} `</p> |
-|Memory |Free swap space in % |<p>The free space of swap volume/file in percent.</p> |CALCULATED |system.swap.pfree[node_exporter]<p>**Expression**:</p>`last("system.swap.free[node_exporter]")/last("system.swap.total[node_exporter]")*100` |
-|Monitoring_agent |Version of node_exporter running |<p>-</p> |DEPENDENT |agent.version[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_exporter_build_info version`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Bits received | |DEPENDENT |net.if.in[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_receive_bytes_total{device="{#IFNAME}"} `</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Bits sent | |DEPENDENT |net.if.out[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_transmit_bytes_total{device="{#IFNAME}"} `</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Outbound packets with errors | |DEPENDENT |net.if.out.errors[node_exporter"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_transmit_errs_total{device="{#IFNAME}"} `</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Inbound packets with errors | |DEPENDENT |net.if.in.errors[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_receive_errs_total{device="{#IFNAME}"} `</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Inbound packets discarded | |DEPENDENT |net.if.in.discards[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_receive_drop_total{device="{#IFNAME}"} `</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Outbound packets discarded | |DEPENDENT |net.if.out.discards[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_transmit_drop_total{device="{#IFNAME}"} `</p><p>- CHANGE_PER_SECOND |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Speed |<p>Sets value to 0 if metric is missing in node_exporter output.</p> |DEPENDENT |net.if.speed[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_speed_bytes{device="{#IFNAME}"} `</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- MULTIPLIER: `8`</p> |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Interface type |<p>node_network_protocol_type protocol_type value of /sys/class/net/<iface>.</p> |DEPENDENT |net.if.type[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_protocol_type{device="{#IFNAME}"} `</p> |
-|Network_interfaces |Interface {#IFNAME}({#IFALIAS}): Operational status |<p>Indicates the interface RFC2863 operational state as a string.</p><p>Possible values are:"unknown", "notpresent", "down", "lowerlayerdown", "testing","dormant", "up".</p><p>Reference: https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-class-net</p> |DEPENDENT |net.if.status[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_info{device="{#IFNAME}"} operstate`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Status |System uptime |<p>System uptime in 'N days, hh:mm:ss' format.</p> |DEPENDENT |system.uptime[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"^node_boot_time(?:_seconds)?$"} `</p><p>- JAVASCRIPT: `//use boottime to calculate uptime return (Math.floor(Date.now()/1000)-Number(value));`</p> |
-|Storage |{#FSNAME}: Free space |<p>-</p> |DEPENDENT |vfs.fs.free[node_exporter,"{#FSNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"^node_filesystem_avail(?:_bytes)?$", mountpoint="{#FSNAME}"} `</p> |
-|Storage |{#FSNAME}: Total space |<p>Total space in Bytes</p> |DEPENDENT |vfs.fs.total[node_exporter,"{#FSNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"^node_filesystem_size(?:_bytes)?$", mountpoint="{#FSNAME}"} `</p> |
-|Storage |{#FSNAME}: Used space |<p>Used storage in Bytes</p> |CALCULATED |vfs.fs.used[node_exporter,"{#FSNAME}"]<p>**Expression**:</p>`(last("vfs.fs.total[node_exporter,\"{#FSNAME}\"]")-last("vfs.fs.free[node_exporter,\"{#FSNAME}\"]"))` |
-|Storage |{#FSNAME}: Space utilization |<p>Space utilization in % for {#FSNAME}</p> |CALCULATED |vfs.fs.pused[node_exporter,"{#FSNAME}"]<p>**Expression**:</p>`(last("vfs.fs.used[node_exporter,\"{#FSNAME}\"]")/last("vfs.fs.total[node_exporter,\"{#FSNAME}\"]"))*100` |
-|Storage |{#FSNAME}: Free inodes in % |<p>-</p> |DEPENDENT |vfs.fs.inode.pfree[node_exporter,"{#FSNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"node_filesystem_files.*",mountpoint="{#FSNAME}"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
-|Storage |{#DEVNAME}: Disk read rate |<p>r/s. The number (after merges) of read requests completed per second for the device.</p> |DEPENDENT |vfs.dev.read.rate[node_exporter,"{#DEVNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_disk_reads_completed_total{device="{#DEVNAME}"} `</p><p>- CHANGE_PER_SECOND |
-|Storage |{#DEVNAME}: Disk write rate |<p>w/s. The number (after merges) of write requests completed per second for the device.</p> |DEPENDENT |vfs.dev.write.rate[node_exporter,"{#DEVNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_disk_writes_completed_total{device="{#DEVNAME}"} `</p><p>- CHANGE_PER_SECOND |
-|Storage |{#DEVNAME}: Disk read request avg waiting time (r_await) |<p>This formula contains two boolean expressions that evaluates to 1 or 0 in order to set calculated metric to zero and to avoid division by zero exception.</p> |CALCULATED |vfs.dev.read.await[node_exporter,"{#DEVNAME}"]<p>**Expression**:</p>`(last("vfs.dev.read.time.rate[node_exporter,\"{#DEVNAME}\"]")/(last("vfs.dev.read.rate[node_exporter,\"{#DEVNAME}\"]")+(last("vfs.dev.read.rate[node_exporter,\"{#DEVNAME}\"]")=0)))*1000*(last("vfs.dev.read.rate[node_exporter,\"{#DEVNAME}\"]") > 0)` |
-|Storage |{#DEVNAME}: Disk write request avg waiting time (w_await) |<p>This formula contains two boolean expressions that evaluates to 1 or 0 in order to set calculated metric to zero and to avoid division by zero exception.</p> |CALCULATED |vfs.dev.write.await[node_exporter,"{#DEVNAME}"]<p>**Expression**:</p>`(last("vfs.dev.write.time.rate[node_exporter,\"{#DEVNAME}\"]")/(last("vfs.dev.write.rate[node_exporter,\"{#DEVNAME}\"]")+(last("vfs.dev.write.rate[node_exporter,\"{#DEVNAME}\"]")=0)))*1000*(last("vfs.dev.write.rate[node_exporter,\"{#DEVNAME}\"]") > 0)` |
-|Storage |{#DEVNAME}: Disk average queue size (avgqu-sz) |<p>Current average disk queue, the number of requests outstanding on the disk at the time the performance data is collected.</p> |DEPENDENT |vfs.dev.queue_size[node_exporter,"{#DEVNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_disk_io_time_weighted_seconds_total{device="{#DEVNAME}"} `</p><p>- CHANGE_PER_SECOND |
-|Storage |{#DEVNAME}: Disk utilization |<p>This item is the percentage of elapsed time that the selected disk drive was busy servicing read or writes requests.</p> |DEPENDENT |vfs.dev.util[node_exporter,"{#DEVNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_disk_io_time_seconds_total{device="{#DEVNAME}"} `</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
-|Zabbix_raw_items |Get node_exporter metrics |<p>-</p> |HTTP_AGENT |node_exporter.get |
-|Zabbix_raw_items |{#DEVNAME}: Disk read time (rate) |<p>Rate of total read time counter. Used in r_await calculation</p> |DEPENDENT |vfs.dev.read.time.rate[node_exporter,"{#DEVNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_disk_read_time_seconds_total{device="{#DEVNAME}"} `</p><p>- CHANGE_PER_SECOND |
-|Zabbix_raw_items |{#DEVNAME}: Disk write time (rate) |<p>Rate of total write time counter. Used in w_await calculation</p> |DEPENDENT |vfs.dev.write.time.rate[node_exporter,"{#DEVNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_disk_write_time_seconds_total{device="{#DEVNAME}"} `</p><p>- CHANGE_PER_SECOND |
+| Group | Name | Description | Type | Key and additional info |
+|--------------------|---------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CPU | Load average (1m avg) | <p>-</p> | DEPENDENT | system.cpu.load.avg1[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_load1 `</p> |
+| CPU | Load average (5m avg) | <p>-</p> | DEPENDENT | system.cpu.load.avg5[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_load5 `</p> |
+| CPU | Load average (15m avg) | <p>-</p> | DEPENDENT | system.cpu.load.avg15[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_load15 `</p> |
+| CPU | Number of CPUs | <p>-</p> | DEPENDENT | system.cpu.num[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="idle"}`</p><p>- JAVASCRIPT: `//count the number of cores return JSON.parse(value).length `</p> |
+| CPU | CPU utilization | <p>CPU utilization in %</p> | DEPENDENT | system.cpu.util[node_exporter]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `//Calculate utilization return (100 - value)`</p> |
+| CPU | CPU idle time | <p>The time the CPU has spent doing nothing.</p> | DEPENDENT | system.cpu.idle[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="idle"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
+| CPU | CPU system time | <p>The time the CPU has spent running the kernel and its processes.</p> | DEPENDENT | system.cpu.system[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="system"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
+| CPU | CPU user time | <p>The time the CPU has spent running users' processes that are not niced.</p> | DEPENDENT | system.cpu.user[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="user"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
+| CPU | CPU steal time | <p>The amount of CPU 'stolen' from this virtual machine by the hypervisor for other tasks (such as running another virtual machine).</p> | DEPENDENT | system.cpu.steal[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="steal"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
+| CPU | CPU softirq time | <p>The amount of time the CPU has been servicing software interrupts.</p> | DEPENDENT | system.cpu.softirq[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="softirq"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
+| CPU | CPU nice time | <p>The time the CPU has spent running users' processes that have been niced.</p> | DEPENDENT | system.cpu.nice[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="nice"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
+| CPU | CPU iowait time | <p>Amount of time the CPU has been waiting for I/O to complete.</p> | DEPENDENT | system.cpu.iowait[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="iowait"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
+| CPU | CPU interrupt time | <p>The amount of time the CPU has been servicing hardware interrupts.</p> | DEPENDENT | system.cpu.interrupt[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="irq"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
+| CPU | CPU guest time | <p>Guest time (time spent running a virtual CPU for a guest operating system)</p> | DEPENDENT | system.cpu.guest[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_guest_seconds_total)?$",cpu=~".+",mode=~"^(?:user|guest)$"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
+| CPU | CPU guest nice time | <p>Time spent running a niced guest (virtual CPU for guest operating systems under the control of the Linux kernel)</p> | DEPENDENT | system.cpu.guest_nice[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"^node_cpu(?:_guest_seconds_total)?$",cpu=~".+",mode=~"^(?:nice|guest_nice)$"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
+| CPU | Interrupts per second | <p>-</p> | DEPENDENT | system.cpu.intr[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"node_intr"} `</p><p>- CHANGE_PER_SECOND |
+| CPU | Context switches per second | <p>-</p> | DEPENDENT | system.cpu.switches[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"node_context_switches"} `</p><p>- CHANGE_PER_SECOND |
+| General | System boot time | <p>-</p> | DEPENDENT | system.boottime[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"^node_boot_time(?:_seconds)?$"} `</p> |
+| General | System local time | <p>System local time of the host.</p> | DEPENDENT | system.localtime[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"^node_time(?:_seconds)?$"} `</p> |
+| General | System name | <p>System host name.</p> | DEPENDENT | system.name[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_uname_info nodename`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| General | System description | <p>Labeled system information as provided by the uname system call.</p> | DEPENDENT | system.descr[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `node_uname_info`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| General | Maximum number of open file descriptors | <p>It can be increased by using the sysctl utility or by modifying /etc/sysctl.conf.</p> | DEPENDENT | kernel.maxfiles[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_filefd_maximum `</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| General | Number of open file descriptors | <p>-</p> | DEPENDENT | fd.open[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_filefd_allocated `</p> |
+| Inventory | Operating system | <p>-</p> | DEPENDENT | system.sw.os[node_exporter]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Operating system architecture | <p>Operating system architecture of the host.</p> | DEPENDENT | system.sw.arch[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_uname_info machine`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Memory | Memory utilization | <p>Memory used percentage is calculated as (total-available)/total*100</p> | CALCULATED | vm.memory.util[node_exporter]<p>**Expression**:</p>`(last("vm.memory.total[node_exporter]")-last("vm.memory.available[node_exporter]"))/last("vm.memory.total[node_exporter]")*100` |
+| Memory | Total memory | <p>Total memory in Bytes</p> | DEPENDENT | vm.memory.total[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"node_memory_MemTotal"} `</p> |
+| Memory | Available memory | <p>Available memory, in Linux, available = free + buffers + cache. On other platforms calculation may vary. See also: https://www.zabbix.com/documentation/5.4/manual/appendix/items/vm.memory.size_params</p> | DEPENDENT | vm.memory.available[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"node_memory_MemAvailable"} `</p> |
+| Memory | Total swap space | <p>The total space of swap volume/file in bytes.</p> | DEPENDENT | system.swap.total[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"node_memory_SwapTotal"} `</p> |
+| Memory | Free swap space | <p>The free space of swap volume/file in bytes.</p> | DEPENDENT | system.swap.free[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"node_memory_SwapFree"} `</p> |
+| Memory | Free swap space in % | <p>The free space of swap volume/file in percent.</p> | CALCULATED | system.swap.pfree[node_exporter]<p>**Expression**:</p>`last("system.swap.free[node_exporter]")/last("system.swap.total[node_exporter]")*100` |
+| Monitoring_agent | Version of node_exporter running | <p>-</p> | DEPENDENT | agent.version[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_exporter_build_info version`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Bits received | | DEPENDENT | net.if.in[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_receive_bytes_total{device="{#IFNAME}"} `</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Bits sent | | DEPENDENT | net.if.out[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_transmit_bytes_total{device="{#IFNAME}"} `</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `8`</p> |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Outbound packets with errors | | DEPENDENT | net.if.out.errors[node_exporter"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_transmit_errs_total{device="{#IFNAME}"} `</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Inbound packets with errors | | DEPENDENT | net.if.in.errors[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_receive_errs_total{device="{#IFNAME}"} `</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Inbound packets discarded | | DEPENDENT | net.if.in.discards[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_receive_drop_total{device="{#IFNAME}"} `</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Outbound packets discarded | | DEPENDENT | net.if.out.discards[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_transmit_drop_total{device="{#IFNAME}"} `</p><p>- CHANGE_PER_SECOND |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Speed | <p>Sets value to 0 if metric is missing in node_exporter output.</p> | DEPENDENT | net.if.speed[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_speed_bytes{device="{#IFNAME}"} `</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- MULTIPLIER: `8`</p> |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Interface type | <p>node_network_protocol_type: the protocol_type value of /sys/class/net/<iface>.</p> | DEPENDENT | net.if.type[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_protocol_type{device="{#IFNAME}"} `</p> |
+| Network_interfaces | Interface {#IFNAME}({#IFALIAS}): Operational status | <p>Indicates the interface RFC2863 operational state as a string.</p><p>Possible values are: "unknown", "notpresent", "down", "lowerlayerdown", "testing", "dormant", "up".</p><p>Reference: https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-class-net</p> | DEPENDENT | net.if.status[node_exporter,"{#IFNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_network_info{device="{#IFNAME}"} operstate`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Status | System uptime | <p>System uptime in 'N days, hh:mm:ss' format.</p> | DEPENDENT | system.uptime[node_exporter]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"^node_boot_time(?:_seconds)?$"} `</p><p>- JAVASCRIPT: `//use boottime to calculate uptime return (Math.floor(Date.now()/1000)-Number(value));`</p> |
+| Storage | {#FSNAME}: Free space | <p>-</p> | DEPENDENT | vfs.fs.free[node_exporter,"{#FSNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"^node_filesystem_avail(?:_bytes)?$", mountpoint="{#FSNAME}"} `</p> |
+| Storage | {#FSNAME}: Total space | <p>Total space in Bytes</p> | DEPENDENT | vfs.fs.total[node_exporter,"{#FSNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `{__name__=~"^node_filesystem_size(?:_bytes)?$", mountpoint="{#FSNAME}"} `</p> |
+| Storage | {#FSNAME}: Used space | <p>Used storage in Bytes</p> | CALCULATED | vfs.fs.used[node_exporter,"{#FSNAME}"]<p>**Expression**:</p>`(last("vfs.fs.total[node_exporter,\"{#FSNAME}\"]")-last("vfs.fs.free[node_exporter,\"{#FSNAME}\"]"))` |
+| Storage | {#FSNAME}: Space utilization | <p>Space utilization in % for {#FSNAME}</p> | CALCULATED | vfs.fs.pused[node_exporter,"{#FSNAME}"]<p>**Expression**:</p>`(last("vfs.fs.used[node_exporter,\"{#FSNAME}\"]")/last("vfs.fs.total[node_exporter,\"{#FSNAME}\"]"))*100` |
+| Storage | {#FSNAME}: Free inodes in % | <p>-</p> | DEPENDENT | vfs.fs.inode.pfree[node_exporter,"{#FSNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_TO_JSON: `{__name__=~"node_filesystem_files.*",mountpoint="{#FSNAME}"}`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Storage | {#DEVNAME}: Disk read rate | <p>r/s. The number (after merges) of read requests completed per second for the device.</p> | DEPENDENT | vfs.dev.read.rate[node_exporter,"{#DEVNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_disk_reads_completed_total{device="{#DEVNAME}"} `</p><p>- CHANGE_PER_SECOND |
+| Storage | {#DEVNAME}: Disk write rate | <p>w/s. The number (after merges) of write requests completed per second for the device.</p> | DEPENDENT | vfs.dev.write.rate[node_exporter,"{#DEVNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_disk_writes_completed_total{device="{#DEVNAME}"} `</p><p>- CHANGE_PER_SECOND |
+| Storage | {#DEVNAME}: Disk read request avg waiting time (r_await) | <p>This formula contains two boolean expressions that evaluate to 1 or 0 in order to set the calculated metric to zero and to avoid a division-by-zero exception.</p> | CALCULATED | vfs.dev.read.await[node_exporter,"{#DEVNAME}"]<p>**Expression**:</p>`(last("vfs.dev.read.time.rate[node_exporter,\"{#DEVNAME}\"]")/(last("vfs.dev.read.rate[node_exporter,\"{#DEVNAME}\"]")+(last("vfs.dev.read.rate[node_exporter,\"{#DEVNAME}\"]")=0)))*1000*(last("vfs.dev.read.rate[node_exporter,\"{#DEVNAME}\"]") > 0)` |
+| Storage | {#DEVNAME}: Disk write request avg waiting time (w_await) | <p>This formula contains two boolean expressions that evaluate to 1 or 0 in order to set the calculated metric to zero and to avoid a division-by-zero exception.</p> | CALCULATED | vfs.dev.write.await[node_exporter,"{#DEVNAME}"]<p>**Expression**:</p>`(last("vfs.dev.write.time.rate[node_exporter,\"{#DEVNAME}\"]")/(last("vfs.dev.write.rate[node_exporter,\"{#DEVNAME}\"]")+(last("vfs.dev.write.rate[node_exporter,\"{#DEVNAME}\"]")=0)))*1000*(last("vfs.dev.write.rate[node_exporter,\"{#DEVNAME}\"]") > 0)` |
+| Storage | {#DEVNAME}: Disk average queue size (avgqu-sz) | <p>Current average disk queue, the number of requests outstanding on the disk at the time the performance data is collected.</p> | DEPENDENT | vfs.dev.queue_size[node_exporter,"{#DEVNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_disk_io_time_weighted_seconds_total{device="{#DEVNAME}"} `</p><p>- CHANGE_PER_SECOND |
+| Storage | {#DEVNAME}: Disk utilization | <p>This item is the percentage of elapsed time that the selected disk drive was busy servicing read or write requests.</p> | DEPENDENT | vfs.dev.util[node_exporter,"{#DEVNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_disk_io_time_seconds_total{device="{#DEVNAME}"} `</p><p>- CHANGE_PER_SECOND<p>- MULTIPLIER: `100`</p> |
+| Zabbix_raw_items | Get node_exporter metrics | <p>-</p> | HTTP_AGENT | node_exporter.get |
+| Zabbix_raw_items | {#DEVNAME}: Disk read time (rate) | <p>Rate of total read time counter. Used in r_await calculation</p> | DEPENDENT | vfs.dev.read.time.rate[node_exporter,"{#DEVNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_disk_read_time_seconds_total{device="{#DEVNAME}"} `</p><p>- CHANGE_PER_SECOND |
+| Zabbix_raw_items | {#DEVNAME}: Disk write time (rate) | <p>Rate of total write time counter. Used in w_await calculation</p> | DEPENDENT | vfs.dev.write.time.rate[node_exporter,"{#DEVNAME}"]<p>**Preprocessing**:</p><p>- PROMETHEUS_PATTERN: `node_disk_write_time_seconds_total{device="{#DEVNAME}"} `</p><p>- CHANGE_PER_SECOND |
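
Several of the per-mode CPU items above abbreviate their JavaScript step as "Text is too long. Please see the template." The snippet below is a hypothetical reconstruction, not a verbatim excerpt from template_os_linux_prom.yaml: it shows how a dependent item of this kind is typically declared, assuming the script simply averages the per-CPU counters returned by PROMETHEUS_TO_JSON so that CHANGE_PER_SECOND and the 100x multiplier produce a percentage. The template YAML remains the authoritative source.

```yaml
# Illustrative sketch only -- field names follow the Zabbix 5.4 export format,
# but this is not a verbatim excerpt from the template.
-
  name: 'CPU softirq time'
  type: DEPENDENT
  key: 'system.cpu.softirq[node_exporter]'
  delay: '0'
  value_type: FLOAT
  units: '%'
  master_item:
    key: node_exporter.get
  preprocessing:
    -
      type: PROMETHEUS_TO_JSON
      parameters:
        - '{__name__=~"^node_cpu(?:_seconds_total)?$",cpu=~".+",mode="softirq"}'
    -
      type: JAVASCRIPT
      parameters:
        - |
          // Assumed behaviour: average the per-CPU counters so the value is
          // "seconds spent in softirq per CPU". The real script may differ.
          var samples = JSON.parse(value);   // array of {name, value, labels, ...}
          var total = 0;
          samples.forEach(function (s) {
              total += Number(s.value);
          });
          return total / samples.length;
    -
      type: CHANGE_PER_SECOND
    -
      type: MULTIPLIER
      parameters:
        - '100'
```

The same pipeline (Prometheus filter, JavaScript aggregation, CHANGE_PER_SECOND, multiplier) applies to the other per-mode CPU items in the table above.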
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Load average is too high (per CPU load over {$LOAD_AVG_PER_CPU.MAX.WARN} for 5m) |<p>Per CPU load average is too high. Your system may be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.load.avg1[node_exporter].min(5m)}/{Linux by Prom:system.cpu.num[node_exporter].last()}>{$LOAD_AVG_PER_CPU.MAX.WARN} and {Linux by Prom:system.cpu.load.avg5[node_exporter].last()}>0 and {Linux by Prom:system.cpu.load.avg15[node_exporter].last()}>0` |AVERAGE | |
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[node_exporter].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING |<p>**Depends on**:</p><p>- Load average is too high (per CPU load over {$LOAD_AVG_PER_CPU.MAX.WARN} for 5m)</p> |
-|System time is out of sync (diff with Zabbix server > {$SYSTEM.FUZZYTIME.MAX}s) |<p>The host system time is different from the Zabbix server time.</p> |`{TEMPLATE_NAME:system.localtime[node_exporter].fuzzytime({$SYSTEM.FUZZYTIME.MAX})}=0` |WARNING |<p>Manual close: YES</p> |
-|System name has changed (new name: {ITEM.VALUE}) |<p>System name has changed. Ack to close.</p> |`{TEMPLATE_NAME:system.name[node_exporter].diff()}=1 and {TEMPLATE_NAME:system.name[node_exporter].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Configured max number of open filedescriptors is too low (< {$KERNEL.MAXFILES.MIN}) |<p>-</p> |`{TEMPLATE_NAME:kernel.maxfiles[node_exporter].last()}<{$KERNEL.MAXFILES.MIN}` |INFO |<p>**Depends on**:</p><p>- Running out of file descriptors (less than < 20% free)</p> |
-|Running out of file descriptors (less than < 20% free) |<p>-</p> |`{TEMPLATE_NAME:fd.open[node_exporter].last()}/{Linux by Prom:kernel.maxfiles[node_exporter].last()}*100>80` |WARNING | |
-|Operating system description has changed |<p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> |`{TEMPLATE_NAME:system.sw.os[node_exporter].diff()}=1 and {TEMPLATE_NAME:system.sw.os[node_exporter].strlen()}>0` |INFO |<p>Manual close: YES</p><p>**Depends on**:</p><p>- System name has changed (new name: {ITEM.VALUE})</p> |
-|High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[node_exporter].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE |<p>**Depends on**:</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p> |
-|Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2}) |<p>-</p> |`{TEMPLATE_NAME:vm.memory.available[node_exporter].min(5m)}<{$MEMORY.AVAILABLE.MIN} and {Linux by Prom:vm.memory.total[node_exporter].last()}>0` |AVERAGE | |
-|High swap space usage (less than {$SWAP.PFREE.MIN.WARN}% free) |<p>This trigger is ignored, if there is no swap configured</p> |`{TEMPLATE_NAME:system.swap.pfree[node_exporter].min(5m)}<{$SWAP.PFREE.MIN.WARN} and {Linux by Prom:system.swap.total[node_exporter].last()}>0` |WARNING |<p>**Depends on**:</p><p>- High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m)</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p> |
-|Interface {#IFNAME}({#IFALIAS}): High bandwidth usage (> {$IF.UTIL.MAX:"{#IFNAME}"}% ) |<p>The network interface utilization is close to its estimated maximum bandwidth.</p> |`({TEMPLATE_NAME:net.if.in[node_exporter,"{#IFNAME}"].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Linux by Prom:net.if.speed[node_exporter,"{#IFNAME}"].last()} or {Linux by Prom:net.if.out[node_exporter,"{#IFNAME}"].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Linux by Prom:net.if.speed[node_exporter,"{#IFNAME}"].last()}) and {Linux by Prom:net.if.speed[node_exporter,"{#IFNAME}"].last()}>0`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in[node_exporter,"{#IFNAME}"].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Linux by Prom:net.if.speed[node_exporter,"{#IFNAME}"].last()} and {Linux by Prom:net.if.out[node_exporter,"{#IFNAME}"].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Linux by Prom:net.if.speed[node_exporter,"{#IFNAME}"].last()}` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
-|Interface {#IFNAME}({#IFALIAS}): High error rate (> {$IF.ERRORS.WARN:"{#IFNAME}"} for 5m) |<p>Recovers when below 80% of {$IF.ERRORS.WARN:"{#IFNAME}"} threshold</p> |`{TEMPLATE_NAME:net.if.in.errors[node_exporter,"{#IFNAME}"].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"} or {Linux by Prom:net.if.out.errors[node_exporter"{#IFNAME}"].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in.errors[node_exporter,"{#IFNAME}"].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8 and {Linux by Prom:net.if.out.errors[node_exporter"{#IFNAME}"].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
-|Interface {#IFNAME}({#IFALIAS}): Ethernet has changed to lower speed than it was before |<p>This Ethernet connection has transitioned down from its known maximum speed. This might be a sign of autonegotiation issues. Ack to close.</p> |`{TEMPLATE_NAME:net.if.speed[node_exporter,"{#IFNAME}"].change()}<0 and {TEMPLATE_NAME:net.if.speed[node_exporter,"{#IFNAME}"].last()}>0 and ({Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=6 or {Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=7 or {Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=11 or {Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=62 or {Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=69 or {Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=117 ) and ({Linux by Prom:net.if.status[node_exporter,"{#IFNAME}"].last()}<>2)`<p>Recovery expression:</p>`({TEMPLATE_NAME:net.if.speed[node_exporter,"{#IFNAME}"].change()}>0 and {TEMPLATE_NAME:net.if.speed[node_exporter,"{#IFNAME}"].prev()}>0) or ({Linux by Prom:net.if.status[node_exporter,"{#IFNAME}"].last()}=2)` |INFO |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
-|Interface {#IFNAME}({#IFALIAS}): Ethernet has changed to lower speed than it was before |<p>This Ethernet connection has transitioned down from its known maximum speed. This might be a sign of autonegotiation issues. Ack to close.</p> |`{TEMPLATE_NAME:net.if.type[node_exporter,"{#IFNAME}"].change()}<0 and {TEMPLATE_NAME:net.if.type[node_exporter,"{#IFNAME}"].last()}>0 and ({Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=6 or {Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=1) and ({Linux by Prom:net.if.status[node_exporter,"{#IFNAME}"].last()}<>2)`<p>Recovery expression:</p>`({TEMPLATE_NAME:net.if.type[node_exporter,"{#IFNAME}"].change()}>0 and {TEMPLATE_NAME:net.if.type[node_exporter,"{#IFNAME}"].prev()}>0) or ({Linux by Prom:net.if.status[node_exporter,"{#IFNAME}"].last()}=2)` |INFO |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
-|Interface {#IFNAME}({#IFALIAS}): Link down |<p>This trigger expression works as follows:</p><p>1. Can be triggered if operations status is down.</p><p>2. {$IFCONTROL:"{#IFNAME}"}=1 - user can redefine Context macro to value - 0. That marks this interface as not important. No new trigger will be fired if this interface is down.</p><p>3. {TEMPLATE_NAME:METRIC.diff()}=1) - trigger fires only if operational status was up(1) sometime before. (So, do not fire 'ethernal off' interfaces.)</p><p>WARNING: if closed manually - won't fire again on next poll, because of .diff.</p> |`{$IFCONTROL:"{#IFNAME}"}=1 and ({TEMPLATE_NAME:net.if.status[node_exporter,"{#IFNAME}"].last()}=2 and {TEMPLATE_NAME:net.if.status[node_exporter,"{#IFNAME}"].diff()}=1)`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.status[node_exporter,"{#IFNAME}"].last()}<>2 or {$IFCONTROL:"{#IFNAME}"}=0` |AVERAGE |<p>Manual close: YES</p> |
-|{HOST.NAME} has been restarted (uptime < 10m) |<p>The device uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:system.uptime[node_exporter].last()}<10m` |WARNING |<p>Manual close: YES</p> |
-|{#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%) |<p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 5G.</p><p> - The disk will be full in less than 24 hours.</p> |`{TEMPLATE_NAME:vfs.fs.pused[node_exporter,"{#FSNAME}"].last()}>{$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"} and (({Linux by Prom:vfs.fs.total[node_exporter,"{#FSNAME}"].last()}-{Linux by Prom:vfs.fs.used[node_exporter,"{#FSNAME}"].last()})<5G or {TEMPLATE_NAME:vfs.fs.pused[node_exporter,"{#FSNAME}"].timeleft(1h,,100)}<1d)` |AVERAGE |<p>Manual close: YES</p> |
-|{#FSNAME}: Disk space is low (used > {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}%) |<p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 10G.</p><p> - The disk will be full in less than 24 hours.</p> |`{TEMPLATE_NAME:vfs.fs.pused[node_exporter,"{#FSNAME}"].last()}>{$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"} and (({Linux by Prom:vfs.fs.total[node_exporter,"{#FSNAME}"].last()}-{Linux by Prom:vfs.fs.used[node_exporter,"{#FSNAME}"].last()})<10G or {TEMPLATE_NAME:vfs.fs.pused[node_exporter,"{#FSNAME}"].timeleft(1h,,100)}<1d)` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- {#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%)</p> |
-|{#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%) |<p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> |`{TEMPLATE_NAME:vfs.fs.inode.pfree[node_exporter,"{#FSNAME}"].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}` |AVERAGE | |
-|{#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}%) |<p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> |`{TEMPLATE_NAME:vfs.fs.inode.pfree[node_exporter,"{#FSNAME}"].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}` |WARNING |<p>**Depends on**:</p><p>- {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%)</p> |
-|{#DEVNAME}: Disk read/write request responses are too high (read > {$VFS.DEV.READ.AWAIT.WARN:"{#DEVNAME}"} ms for 15m or write > {$VFS.DEV.WRITE.AWAIT.WARN:"{#DEVNAME}"} ms for 15m) |<p>This trigger might indicate disk {#DEVNAME} saturation.</p> |`{TEMPLATE_NAME:vfs.dev.read.await[node_exporter,"{#DEVNAME}"].min(15m)} > {$VFS.DEV.READ.AWAIT.WARN:"{#DEVNAME}"} or {Linux by Prom:vfs.dev.write.await[node_exporter,"{#DEVNAME}"].min(15m)} > {$VFS.DEV.WRITE.AWAIT.WARN:"{#DEVNAME}"}` |WARNING |<p>Manual close: YES</p> |
-|node_exporter is not available (or no data for 30m) |<p>Failed to fetch system metrics from node_exporter in time.</p> |`{TEMPLATE_NAME:node_exporter.get.nodata(30m)}=1` |WARNING |<p>Manual close: YES</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Load average is too high (per CPU load over {$LOAD_AVG_PER_CPU.MAX.WARN} for 5m) | <p>Per CPU load average is too high. Your system may be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.load.avg1[node_exporter].min(5m)}/{Linux by Prom:system.cpu.num[node_exporter].last()}>{$LOAD_AVG_PER_CPU.MAX.WARN} and {Linux by Prom:system.cpu.load.avg5[node_exporter].last()}>0 and {Linux by Prom:system.cpu.load.avg15[node_exporter].last()}>0` | AVERAGE | |
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[node_exporter].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | <p>**Depends on**:</p><p>- Load average is too high (per CPU load over {$LOAD_AVG_PER_CPU.MAX.WARN} for 5m)</p> |
+| System time is out of sync (diff with Zabbix server > {$SYSTEM.FUZZYTIME.MAX}s) | <p>The host system time is different from the Zabbix server time.</p> | `{TEMPLATE_NAME:system.localtime[node_exporter].fuzzytime({$SYSTEM.FUZZYTIME.MAX})}=0` | WARNING | <p>Manual close: YES</p> |
+| System name has changed (new name: {ITEM.VALUE}) | <p>System name has changed. Ack to close.</p> | `{TEMPLATE_NAME:system.name[node_exporter].diff()}=1 and {TEMPLATE_NAME:system.name[node_exporter].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Configured max number of open filedescriptors is too low (< {$KERNEL.MAXFILES.MIN}) | <p>-</p> | `{TEMPLATE_NAME:kernel.maxfiles[node_exporter].last()}<{$KERNEL.MAXFILES.MIN}` | INFO | <p>**Depends on**:</p><p>- Running out of file descriptors (less than < 20% free)</p> |
+| Running out of file descriptors (less than < 20% free) | <p>-</p> | `{TEMPLATE_NAME:fd.open[node_exporter].last()}/{Linux by Prom:kernel.maxfiles[node_exporter].last()}*100>80` | WARNING | |
+| Operating system description has changed                                                                                                                                               | <p>Operating system description has changed. Possible reasons: the system has been updated or replaced. Ack to close.</p>                                                                                                                                                                                                                                                                                                                                                                                                                             | `{TEMPLATE_NAME:system.sw.os[node_exporter].diff()}=1 and {TEMPLATE_NAME:system.sw.os[node_exporter].strlen()}>0`                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              | INFO     | <p>Manual close: YES</p><p>**Depends on**:</p><p>- System name has changed (new name: {ITEM.VALUE})</p>                                                             |
+| High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[node_exporter].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | <p>**Depends on**:</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p> |
+| Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2}) | <p>-</p> | `{TEMPLATE_NAME:vm.memory.available[node_exporter].min(5m)}<{$MEMORY.AVAILABLE.MIN} and {Linux by Prom:vm.memory.total[node_exporter].last()}>0` | AVERAGE | |
+| High swap space usage (less than {$SWAP.PFREE.MIN.WARN}% free)                                                                                                                          | <p>This trigger is ignored if there is no swap configured.</p>                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        | `{TEMPLATE_NAME:system.swap.pfree[node_exporter].min(5m)}<{$SWAP.PFREE.MIN.WARN} and {Linux by Prom:system.swap.total[node_exporter].last()}>0`                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               | WARNING  | <p>**Depends on**:</p><p>- High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m)</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p>   |
+| Interface {#IFNAME}({#IFALIAS}): High bandwidth usage (> {$IF.UTIL.MAX:"{#IFNAME}"}% ) | <p>The network interface utilization is close to its estimated maximum bandwidth.</p> | `({TEMPLATE_NAME:net.if.in[node_exporter,"{#IFNAME}"].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Linux by Prom:net.if.speed[node_exporter,"{#IFNAME}"].last()} or {Linux by Prom:net.if.out[node_exporter,"{#IFNAME}"].avg(15m)}>({$IF.UTIL.MAX:"{#IFNAME}"}/100)*{Linux by Prom:net.if.speed[node_exporter,"{#IFNAME}"].last()}) and {Linux by Prom:net.if.speed[node_exporter,"{#IFNAME}"].last()}>0`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in[node_exporter,"{#IFNAME}"].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Linux by Prom:net.if.speed[node_exporter,"{#IFNAME}"].last()} and {Linux by Prom:net.if.out[node_exporter,"{#IFNAME}"].avg(15m)}<(({$IF.UTIL.MAX:"{#IFNAME}"}-3)/100)*{Linux by Prom:net.if.speed[node_exporter,"{#IFNAME}"].last()}` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
+| Interface {#IFNAME}({#IFALIAS}): High error rate (> {$IF.ERRORS.WARN:"{#IFNAME}"} for 5m) | <p>Recovers when below 80% of {$IF.ERRORS.WARN:"{#IFNAME}"} threshold</p> | `{TEMPLATE_NAME:net.if.in.errors[node_exporter,"{#IFNAME}"].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"} or {Linux by Prom:net.if.out.errors[node_exporter"{#IFNAME}"].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.in.errors[node_exporter,"{#IFNAME}"].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8 and {Linux by Prom:net.if.out.errors[node_exporter"{#IFNAME}"].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
+| Interface {#IFNAME}({#IFALIAS}): Ethernet has changed to lower speed than it was before | <p>This Ethernet connection has transitioned down from its known maximum speed. This might be a sign of autonegotiation issues. Ack to close.</p> | `{TEMPLATE_NAME:net.if.speed[node_exporter,"{#IFNAME}"].change()}<0 and {TEMPLATE_NAME:net.if.speed[node_exporter,"{#IFNAME}"].last()}>0 and ({Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=6 or {Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=7 or {Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=11 or {Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=62 or {Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=69 or {Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=117 ) and ({Linux by Prom:net.if.status[node_exporter,"{#IFNAME}"].last()}<>2)`<p>Recovery expression:</p>`({TEMPLATE_NAME:net.if.speed[node_exporter,"{#IFNAME}"].change()}>0 and {TEMPLATE_NAME:net.if.speed[node_exporter,"{#IFNAME}"].prev()}>0) or ({Linux by Prom:net.if.status[node_exporter,"{#IFNAME}"].last()}=2)` | INFO | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
+| Interface {#IFNAME}({#IFALIAS}): Ethernet has changed to lower speed than it was before | <p>This Ethernet connection has transitioned down from its known maximum speed. This might be a sign of autonegotiation issues. Ack to close.</p> | `{TEMPLATE_NAME:net.if.type[node_exporter,"{#IFNAME}"].change()}<0 and {TEMPLATE_NAME:net.if.type[node_exporter,"{#IFNAME}"].last()}>0 and ({Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=6 or {Linux by Prom:net.if.type[node_exporter,"{#IFNAME}"].last()}=1) and ({Linux by Prom:net.if.status[node_exporter,"{#IFNAME}"].last()}<>2)`<p>Recovery expression:</p>`({TEMPLATE_NAME:net.if.type[node_exporter,"{#IFNAME}"].change()}>0 and {TEMPLATE_NAME:net.if.type[node_exporter,"{#IFNAME}"].prev()}>0) or ({Linux by Prom:net.if.status[node_exporter,"{#IFNAME}"].last()}=2)` | INFO | <p>Manual close: YES</p><p>**Depends on**:</p><p>- Interface {#IFNAME}({#IFALIAS}): Link down</p> |
+| Interface {#IFNAME}({#IFALIAS}): Link down                                                                                                                                              | <p>This trigger expression works as follows:</p><p>1. It can be triggered if the operational status is down.</p><p>2. {$IFCONTROL:"{#IFNAME}"}=1 - the user can redefine this context macro to 0, marking the interface as not important; no new trigger will be fired if it is down.</p><p>3. {TEMPLATE_NAME:METRIC.diff()}=1 - the trigger fires only if the operational status was up(1) at some point before, so 'eternally off' interfaces do not fire.</p><p>WARNING: if closed manually, it won't fire again on the next poll because of .diff.</p> | `{$IFCONTROL:"{#IFNAME}"}=1 and ({TEMPLATE_NAME:net.if.status[node_exporter,"{#IFNAME}"].last()}=2 and {TEMPLATE_NAME:net.if.status[node_exporter,"{#IFNAME}"].diff()}=1)`<p>Recovery expression:</p>`{TEMPLATE_NAME:net.if.status[node_exporter,"{#IFNAME}"].last()}<>2 or {$IFCONTROL:"{#IFNAME}"}=0`                                                                                                                                                                                                                                                                                      | AVERAGE  | <p>Manual close: YES</p>                                                                                                                                             |
+| {HOST.NAME} has been restarted (uptime < 10m) | <p>The device uptime is less than 10 minutes</p> | `{TEMPLATE_NAME:system.uptime[node_exporter].last()}<10m` | WARNING | <p>Manual close: YES</p> |
+| {#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%) | <p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 5G.</p><p> - The disk will be full in less than 24 hours.</p> | `{TEMPLATE_NAME:vfs.fs.pused[node_exporter,"{#FSNAME}"].last()}>{$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"} and (({Linux by Prom:vfs.fs.total[node_exporter,"{#FSNAME}"].last()}-{Linux by Prom:vfs.fs.used[node_exporter,"{#FSNAME}"].last()})<5G or {TEMPLATE_NAME:vfs.fs.pused[node_exporter,"{#FSNAME}"].timeleft(1h,,100)}<1d)` | AVERAGE | <p>Manual close: YES</p> |
+| {#FSNAME}: Disk space is low (used > {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}%) | <p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 10G.</p><p> - The disk will be full in less than 24 hours.</p> | `{TEMPLATE_NAME:vfs.fs.pused[node_exporter,"{#FSNAME}"].last()}>{$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"} and (({Linux by Prom:vfs.fs.total[node_exporter,"{#FSNAME}"].last()}-{Linux by Prom:vfs.fs.used[node_exporter,"{#FSNAME}"].last()})<10G or {TEMPLATE_NAME:vfs.fs.pused[node_exporter,"{#FSNAME}"].timeleft(1h,,100)}<1d)` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- {#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%)</p> |
+| {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%) | <p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> | `{TEMPLATE_NAME:vfs.fs.inode.pfree[node_exporter,"{#FSNAME}"].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}` | AVERAGE | |
+| {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}%) | <p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> | `{TEMPLATE_NAME:vfs.fs.inode.pfree[node_exporter,"{#FSNAME}"].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}` | WARNING | <p>**Depends on**:</p><p>- {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%)</p> |
+| {#DEVNAME}: Disk read/write request responses are too high (read > {$VFS.DEV.READ.AWAIT.WARN:"{#DEVNAME}"} ms for 15m or write > {$VFS.DEV.WRITE.AWAIT.WARN:"{#DEVNAME}"} ms for 15m) | <p>This trigger might indicate disk {#DEVNAME} saturation.</p> | `{TEMPLATE_NAME:vfs.dev.read.await[node_exporter,"{#DEVNAME}"].min(15m)} > {$VFS.DEV.READ.AWAIT.WARN:"{#DEVNAME}"} or {Linux by Prom:vfs.dev.write.await[node_exporter,"{#DEVNAME}"].min(15m)} > {$VFS.DEV.WRITE.AWAIT.WARN:"{#DEVNAME}"}` | WARNING | <p>Manual close: YES</p> |
+| node_exporter is not available (or no data for 30m) | <p>Failed to fetch system metrics from node_exporter in time.</p> | `{TEMPLATE_NAME:node_exporter.get.nodata(30m)}=1` | WARNING | <p>Manual close: YES</p> |
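
Several triggers above pair the problem expression with a separate recovery expression to get hysteresis; for example, the "High error rate" trigger only recovers once the error rate drops below 80% of its threshold. The snippet below is a rough, hypothetical sketch of how such a pair could be declared in the template YAML, reusing the expressions from the table above and assuming the 5.4 export fields recovery_mode / recovery_expression; it is not a verbatim excerpt from the template.

```yaml
# Hypothetical sketch of a trigger prototype with a recovery expression;
# expressions copied from the table above, other fields are assumptions.
-
  name: 'Interface {#IFNAME}({#IFALIAS}): High error rate (> {$IF.ERRORS.WARN:"{#IFNAME}"} for 5m)'
  priority: WARNING
  manual_close: 'YES'
  expression: '{Linux by Prom:net.if.in.errors[node_exporter,"{#IFNAME}"].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"} or {Linux by Prom:net.if.out.errors[node_exporter"{#IFNAME}"].min(5m)}>{$IF.ERRORS.WARN:"{#IFNAME}"}'
  recovery_mode: RECOVERY_EXPRESSION
  recovery_expression: '{Linux by Prom:net.if.in.errors[node_exporter,"{#IFNAME}"].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8 and {Linux by Prom:net.if.out.errors[node_exporter"{#IFNAME}"].max(5m)}<{$IF.ERRORS.WARN:"{#IFNAME}"}*0.8'
```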
## Feedback
diff --git a/templates/os/linux_prom/template_os_linux_prom.yaml b/templates/os/linux_prom/template_os_linux_prom.yaml
index d28c1fd255f..5ec427242a5 100644
--- a/templates/os/linux_prom/template_os_linux_prom.yaml
+++ b/templates/os/linux_prom/template_os_linux_prom.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-02T19:42:20Z'
+ date: '2021-04-22T11:28:48Z'
groups:
-
name: 'Templates/Operating systems'
@@ -2039,173 +2039,177 @@ zabbix_export:
dashboards:
-
name: 'Network interfaces'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
- host: 'Linux by Prom'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
+ host: 'Linux by Prom'
-
name: 'System performance'
- widgets:
- -
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'System load'
- host: 'Linux by Prom'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'CPU usage'
- host: 'Linux by Prom'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Memory usage'
- host: 'Linux by Prom'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Swap usage'
- host: 'Linux by Prom'
+ pages:
-
- type: GRAPH_PROTOTYPE
- 'y': '10'
- width: '24'
- height: '12'
- fields:
+ widgets:
-
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'System load'
+ host: 'Linux by Prom'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU usage'
+ host: 'Linux by Prom'
+ -
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Memory usage'
+ host: 'Linux by Prom'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Swap usage'
+ host: 'Linux by Prom'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#FSNAME}: Disk space usage'
- host: 'Linux by Prom'
- -
- type: GRAPH_PROTOTYPE
- 'y': '22'
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ 'y': '10'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#FSNAME}: Disk space usage'
+ host: 'Linux by Prom'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#DEVNAME}: Disk read/write rates'
- host: 'Linux by Prom'
- -
- type: GRAPH_PROTOTYPE
- 'y': '34'
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ 'y': '22'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#DEVNAME}: Disk read/write rates'
+ host: 'Linux by Prom'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#DEVNAME}: Disk average waiting time'
- host: 'Linux by Prom'
- -
- type: GRAPH_PROTOTYPE
- 'y': '46'
- width: '24'
- height: '12'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '3'
+ 'y': '34'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#DEVNAME}: Disk average waiting time'
+ host: 'Linux by Prom'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#DEVNAME}: Disk utilization and queue'
- host: 'Linux by Prom'
- -
- type: GRAPH_PROTOTYPE
- 'y': '58'
- width: '24'
- height: '6'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
+ 'y': '46'
+ width: '24'
+ height: '12'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '3'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#DEVNAME}: Disk utilization and queue'
+ host: 'Linux by Prom'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
- host: 'Linux by Prom'
+ 'y': '58'
+ width: '24'
+ height: '6'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
+ host: 'Linux by Prom'
valuemaps:
-
name: 'IF-MIB::ifOperStatus'
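
The dashboard hunk above is the point of this commit for the Linux-by-Prom template: the dashboard schema gains a `pages` level, so each block of widgets is re-indented under a page entry instead of hanging directly off the dashboard. Condensed, the new shape looks roughly like this (abridged from the diff above, not a verbatim excerpt):

```yaml
dashboards:
  -
    name: 'System performance'
    pages:                 # dashboards now hold a list of pages
      -
        widgets:           # the former top-level widget list, one level deeper
          -
            type: GRAPH_CLASSIC
            width: '12'
            height: '5'
            fields:
              -
                type: GRAPH
                name: graphid
                value:
                  name: 'CPU usage'
                  host: 'Linux by Prom'
```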
diff --git a/templates/os/linux_snmp_snmp/README.md b/templates/os/linux_snmp_snmp/README.md
index 131823dddfe..9e5cc8d4a4d 100644
--- a/templates/os/linux_snmp_snmp/README.md
+++ b/templates/os/linux_snmp_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -15,11 +15,11 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$MEMORY.AVAILABLE.MIN} |<p>-</p> |`20M` |
-|{$MEMORY.UTIL.MAX} |<p>-</p> |`90` |
-|{$SWAP.PFREE.MIN.WARN} |<p>-</p> |`50` |
+| Name | Description | Default |
+|-------------------------|-------------|---------|
+| {$MEMORY.AVAILABLE.MIN} | <p>-</p> | `20M` |
+| {$MEMORY.UTIL.MAX} | <p>-</p> | `90` |
+| {$SWAP.PFREE.MIN.WARN} | <p>-</p> | `50` |
## Template links
@@ -30,25 +30,25 @@ There are no template links in this template.
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Memory |Memory utilization |<p>Please note that memory utilization is a rough estimate, since memory available is calculated as free+buffers+cached, which is not 100% accurate, but the best we can get using SNMP.</p> |CALCULATED |vm.memory.util[snmp]<p>**Expression**:</p>`(last("vm.memory.total[memTotalReal.0]")-(last("vm.memory.free[memAvailReal.0]")+last("vm.memory.buffers[memBuffer.0]")+last("vm.memory.cached[memCached.0]")))/last("vm.memory.total[memTotalReal.0]")*100` |
-|Memory |Free memory |<p>MIB: UCD-SNMP-MIB</p> |SNMP |vm.memory.free[memAvailReal.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |Memory (buffers) |<p>MIB: UCD-SNMP-MIB</p><p>Memory used by kernel buffers (Buffers in /proc/meminfo)</p> |SNMP |vm.memory.buffers[memBuffer.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |Memory (cached) |<p>MIB: UCD-SNMP-MIB</p><p>Memory used by the page cache and slabs (Cached and Slab in /proc/meminfo)</p> |SNMP |vm.memory.cached[memCached.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |Total memory |<p>MIB: UCD-SNMP-MIB</p><p>Total memory in Bytes</p> |SNMP |vm.memory.total[memTotalReal.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |Available memory |<p>Please note that memory utilization is a rough estimate, since memory available is calculated as free+buffers+cached, which is not 100% accurate, but the best we can get using SNMP.</p> |CALCULATED |vm.memory.available[snmp]<p>**Expression**:</p>`last("vm.memory.free[memAvailReal.0]")+last("vm.memory.buffers[memBuffer.0]")+last("vm.memory.cached[memCached.0]")` |
-|Memory |Total swap space |<p>MIB: UCD-SNMP-MIB</p><p>The total amount of swap space configured for this host.</p> |SNMP |system.swap.total[memTotalSwap.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |Free swap space |<p>MIB: UCD-SNMP-MIB</p><p>The amount of swap space currently unused or available.</p> |SNMP |system.swap.free[memAvailSwap.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
-|Memory |Free swap space in % |<p>The free space of swap volume/file in percent.</p> |CALCULATED |system.swap.pfree[snmp]<p>**Expression**:</p>`last("system.swap.free[memAvailSwap.0]")/last("system.swap.total[memTotalSwap.0]")*100` |
+| Group | Name | Description | Type | Key and additional info |
+|--------|----------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Memory | Memory utilization | <p>Please note that memory utilization is a rough estimate, since memory available is calculated as free+buffers+cached, which is not 100% accurate, but the best we can get using SNMP.</p> | CALCULATED | vm.memory.util[snmp]<p>**Expression**:</p>`(last("vm.memory.total[memTotalReal.0]")-(last("vm.memory.free[memAvailReal.0]")+last("vm.memory.buffers[memBuffer.0]")+last("vm.memory.cached[memCached.0]")))/last("vm.memory.total[memTotalReal.0]")*100` |
+| Memory | Free memory | <p>MIB: UCD-SNMP-MIB</p> | SNMP | vm.memory.free[memAvailReal.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | Memory (buffers) | <p>MIB: UCD-SNMP-MIB</p><p>Memory used by kernel buffers (Buffers in /proc/meminfo)</p> | SNMP | vm.memory.buffers[memBuffer.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | Memory (cached) | <p>MIB: UCD-SNMP-MIB</p><p>Memory used by the page cache and slabs (Cached and Slab in /proc/meminfo)</p> | SNMP | vm.memory.cached[memCached.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | Total memory | <p>MIB: UCD-SNMP-MIB</p><p>Total memory in Bytes</p> | SNMP | vm.memory.total[memTotalReal.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | Available memory | <p>Please note that memory utilization is a rough estimate, since memory available is calculated as free+buffers+cached, which is not 100% accurate, but the best we can get using SNMP.</p> | CALCULATED | vm.memory.available[snmp]<p>**Expression**:</p>`last("vm.memory.free[memAvailReal.0]")+last("vm.memory.buffers[memBuffer.0]")+last("vm.memory.cached[memCached.0]")` |
+| Memory | Total swap space | <p>MIB: UCD-SNMP-MIB</p><p>The total amount of swap space configured for this host.</p> | SNMP | system.swap.total[memTotalSwap.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | Free swap space | <p>MIB: UCD-SNMP-MIB</p><p>The amount of swap space currently unused or available.</p> | SNMP | system.swap.free[memAvailSwap.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1024`</p> |
+| Memory | Free swap space in % | <p>The free space of swap volume/file in percent.</p> | CALCULATED | system.swap.pfree[snmp]<p>**Expression**:</p>`last("system.swap.free[memAvailSwap.0]")/last("system.swap.total[memTotalSwap.0]")*100` |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) |<p>The system is running out of free memory.</p> |`{TEMPLATE_NAME:vm.memory.util[snmp].min(5m)}>{$MEMORY.UTIL.MAX}` |AVERAGE |<p>**Depends on**:</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p> |
-|Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2}) |<p>-</p> |`{TEMPLATE_NAME:vm.memory.available[snmp].min(5m)}<{$MEMORY.AVAILABLE.MIN} and {Linux memory SNMP:vm.memory.total[memTotalReal.0].last()}>0` |AVERAGE | |
-|High swap space usage (less than {$SWAP.PFREE.MIN.WARN}% free) |<p>This trigger is ignored, if there is no swap configured</p> |`{TEMPLATE_NAME:system.swap.pfree[snmp].min(5m)}<{$SWAP.PFREE.MIN.WARN} and {Linux memory SNMP:system.swap.total[memTotalSwap.0].last()}>0` |WARNING |<p>**Depends on**:</p><p>- High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m)</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-----------------------------------------------------------------------|----------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m) | <p>The system is running out of free memory.</p> | `{TEMPLATE_NAME:vm.memory.util[snmp].min(5m)}>{$MEMORY.UTIL.MAX}` | AVERAGE | <p>**Depends on**:</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p> |
+| Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2}) | <p>-</p> | `{TEMPLATE_NAME:vm.memory.available[snmp].min(5m)}<{$MEMORY.AVAILABLE.MIN} and {Linux memory SNMP:vm.memory.total[memTotalReal.0].last()}>0` | AVERAGE | |
+| High swap space usage (less than {$SWAP.PFREE.MIN.WARN}% free)         | <p>This trigger is ignored if there is no swap configured.</p>  | `{TEMPLATE_NAME:system.swap.pfree[snmp].min(5m)}<{$SWAP.PFREE.MIN.WARN} and {Linux memory SNMP:system.swap.total[memTotalSwap.0].last()}>0`    | WARNING  | <p>**Depends on**:</p><p>- High memory utilization (>{$MEMORY.UTIL.MAX}% for 5m)</p><p>- Lack of available memory (< {$MEMORY.AVAILABLE.MIN} of {ITEM.VALUE2})</p>  |
## Feedback
@@ -62,7 +62,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -74,10 +74,10 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$VFS.DEV.DEVNAME.MATCHES} |<p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> |`.+` |
-|{$VFS.DEV.DEVNAME.NOT_MATCHES} |<p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> |`^(loop[0-9]*|sd[a-z][0-9]+|nbd[0-9]+|sr[0-9]+|fd[0-9]+|dm-[0-9]+|ram[0-9]+|ploop[a-z0-9]+|md[0-9]*|hcp[0-9]*|zram[0-9]*)` |
+| Name | Description | Default |
+|--------------------------------|--------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------|
+| {$VFS.DEV.DEVNAME.MATCHES} | <p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> | `.+` |
+| {$VFS.DEV.DEVNAME.NOT_MATCHES} | <p>This macro is used in block devices discovery. Can be overridden on the host or linked template level</p> | `^(loop[0-9]*|sd[a-z][0-9]+|nbd[0-9]+|sr[0-9]+|fd[0-9]+|dm-[0-9]+|ram[0-9]+|ploop[a-z0-9]+|md[0-9]*|hcp[0-9]*|zram[0-9]*)` |
## Template links
@@ -85,17 +85,17 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Block devices discovery |<p>Block devices are discovered from UCD-DISKIO-MIB::diskIOTable (http://net-snmp.sourceforge.net/docs/mibs/ucdDiskIOMIB.html#diskIOTable)</p> |SNMP |vfs.dev.discovery[snmp]<p>**Filter**:</p>AND <p>- A: {#DEVNAME} MATCHES_REGEX `{$VFS.DEV.DEVNAME.MATCHES}`</p><p>- B: {#DEVNAME} NOT_MATCHES_REGEX `{$VFS.DEV.DEVNAME.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|-------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Block devices discovery | <p>Block devices are discovered from UCD-DISKIO-MIB::diskIOTable (http://net-snmp.sourceforge.net/docs/mibs/ucdDiskIOMIB.html#diskIOTable)</p> | SNMP | vfs.dev.discovery[snmp]<p>**Filter**:</p>AND <p>- A: {#DEVNAME} MATCHES_REGEX `{$VFS.DEV.DEVNAME.MATCHES}`</p><p>- B: {#DEVNAME} NOT_MATCHES_REGEX `{$VFS.DEV.DEVNAME.NOT_MATCHES}`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Storage |{#DEVNAME}: Disk read rate |<p>MIB: UCD-DISKIO-MIB</p><p>The number of read accesses from this device since boot.</p> |SNMP |vfs.dev.read.rate[diskIOReads.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Storage |{#DEVNAME}: Disk write rate |<p>MIB: UCD-DISKIO-MIB</p><p>The number of write accesses from this device since boot.</p> |SNMP |vfs.dev.write.rate[diskIOWrites.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|Storage |{#DEVNAME}: Disk utilization |<p>MIB: UCD-DISKIO-MIB</p><p>The 1 minute average load of disk (%)</p> |SNMP |vfs.dev.util[diskIOLA1.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|---------|------------------------------|--------------------------------------------------------------------------------------------|------|----------------------------------------------------------------------------------------------|
+| Storage | {#DEVNAME}: Disk read rate | <p>MIB: UCD-DISKIO-MIB</p><p>The number of read accesses from this device since boot.</p> | SNMP | vfs.dev.read.rate[diskIOReads.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Storage | {#DEVNAME}: Disk write rate | <p>MIB: UCD-DISKIO-MIB</p><p>The number of write accesses from this device since boot.</p> | SNMP | vfs.dev.write.rate[diskIOWrites.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| Storage | {#DEVNAME}: Disk utilization | <p>MIB: UCD-DISKIO-MIB</p><p>The 1 minute average load of disk (%)</p> | SNMP | vfs.dev.util[diskIOLA1.{#SNMPINDEX}] |
## Triggers
@@ -110,7 +110,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -122,10 +122,10 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$CPU.UTIL.CRIT} |<p>-</p> |`90` |
-|{$LOAD_AVG_PER_CPU.MAX.WARN} |<p>Load per CPU considered sustainable. Tune if needed.</p> |`1.5` |
+| Name | Description | Default |
+|------------------------------|-------------------------------------------------------------|---------|
+| {$CPU.UTIL.CRIT} | <p>-</p> | `90` |
+| {$LOAD_AVG_PER_CPU.MAX.WARN} | <p>Load per CPU considered sustainable. Tune if needed.</p> | `1.5` |
## Template links
@@ -133,38 +133,38 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|CPU discovery |<p>This discovery will create set of per core CPU metrics from UCD-SNMP-MIB, using {#CPU.COUNT} in preprocessing. That's the only reason why LLD is used.</p> |DEPENDENT |cpu.discovery[snmp]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
+| Name | Description | Type | Key and additional info |
+|---------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|---------------------------------------------------------------------------------------------------------------|
+| CPU discovery | <p>This discovery creates a set of per-core CPU metrics from UCD-SNMP-MIB, using {#CPU.COUNT} in preprocessing. That is the only reason LLD is used (see the sketch after this table).</p>  | DEPENDENT | cpu.discovery[snmp]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
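
The only purpose of this LLD rule is to hand a {#CPU.COUNT} macro to the per-core items, whose own JavaScript step divides the raw counter by it (see the `return value/{#CPU.COUNT}` preprocessing in the items table below). The discovery script itself is abbreviated in the table, so the following is a purely hypothetical sketch of what such a step could look like; the master item key and the row layout are assumptions, and the real script in the template is authoritative.

```yaml
# Hypothetical sketch of the dependent LLD rule; not a verbatim excerpt.
-
  name: 'CPU discovery'
  type: DEPENDENT
  key: 'cpu.discovery[snmp]'
  delay: '0'
  master_item:
    key: 'system.cpu.num[snmp]'   # assumed master item; the real rule may depend on a different key
  preprocessing:
    -
      type: JAVASCRIPT
      parameters:
        - |
          // Assumed behaviour: emit a single LLD row that carries the core
          // count, so item prototypes can divide by {#CPU.COUNT}.
          var cores = JSON.parse(value);   // e.g. a JSON array of discovered cores
          return JSON.stringify([{ '{#CPU.COUNT}': cores.length, '{#SNMPINDEX}': 0 }]);
```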
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|CPU |Load average (1m avg) |<p>MIB: UCD-SNMP-MIB</p> |SNMP |system.cpu.load.avg1[laLoad.1] |
-|CPU |Load average (5m avg) |<p>MIB: UCD-SNMP-MIB</p> |SNMP |system.cpu.load.avg5[laLoad.2] |
-|CPU |Load average (15m avg) |<p>MIB: UCD-SNMP-MIB</p> |SNMP |system.cpu.load.avg15[laLoad.3] |
-|CPU |Number of CPUs |<p>MIB: HOST-RESOURCES-MIB</p><p>Count the number of CPU cores by counting number of cores discovered in hrProcessorTable using LLD</p> |SNMP |system.cpu.num[snmp]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `//count the number of cores return JSON.parse(value).length; `</p> |
-|CPU |Interrupts per second |<p>-</p> |SNMP |system.cpu.intr[ssRawInterrupts.0]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|CPU |Context switches per second |<p>-</p> |SNMP |system.cpu.switches[ssRawContexts.0]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
-|CPU |CPU idle time |<p>MIB: UCD-SNMP-MIB</p><p>The time the CPU has spent doing nothing.</p> |SNMP |system.cpu.idle[ssCpuRawIdle.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
-|CPU |CPU system time |<p>MIB: UCD-SNMP-MIB</p><p>The time the CPU has spent running the kernel and its processes.</p> |SNMP |system.cpu.system[ssCpuRawSystem.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
-|CPU |CPU user time |<p>MIB: UCD-SNMP-MIB</p><p>The time the CPU has spent running users' processes that are not niced.</p> |SNMP |system.cpu.user[ssCpuRawUser.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
-|CPU |CPU steal time |<p>MIB: UCD-SNMP-MIB</p><p>The amount of CPU 'stolen' from this virtual machine by the hypervisor for other tasks (such as running another virtual machine).</p> |SNMP |system.cpu.steal[ssCpuRawSteal.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
-|CPU |CPU softirq time |<p>MIB: UCD-SNMP-MIB</p><p>The amount of time the CPU has been servicing software interrupts.</p> |SNMP |system.cpu.softirq[ssCpuRawSoftIRQ.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
-|CPU |CPU nice time |<p>MIB: UCD-SNMP-MIB</p><p>The time the CPU has spent running users' processes that have been niced.</p> |SNMP |system.cpu.nice[ssCpuRawNice.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
-|CPU |CPU iowait time |<p>MIB: UCD-SNMP-MIB</p><p>Amount of time the CPU has been waiting for I/O to complete.</p> |SNMP |system.cpu.iowait[ssCpuRawWait.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
-|CPU |CPU interrupt time |<p>MIB: UCD-SNMP-MIB</p><p>The amount of time the CPU has been servicing hardware interrupts.</p> |SNMP |system.cpu.interrupt[ssCpuRawInterrupt.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
-|CPU |CPU guest time |<p>MIB: UCD-SNMP-MIB</p><p>Guest time (time spent running a virtual CPU for a guest operating system)</p> |SNMP |system.cpu.guest[ssCpuRawGuest.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
-|CPU |CPU guest nice time |<p>MIB: UCD-SNMP-MIB</p><p>Time spent running a niced guest (virtual CPU for guest operating systems under the control of the Linux kernel)</p> |SNMP |system.cpu.guest_nice[ssCpuRawGuestNice.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
-|CPU |CPU utilization |<p>CPU utilization in %</p> |DEPENDENT |system.cpu.util[snmp,{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `//Calculate utilization return (100 - value) `</p> |
+| Group | Name | Description | Type | Key and additional info |
+|-------|-----------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| CPU | Load average (1m avg) | <p>MIB: UCD-SNMP-MIB</p> | SNMP | system.cpu.load.avg1[laLoad.1] |
+| CPU | Load average (5m avg) | <p>MIB: UCD-SNMP-MIB</p> | SNMP | system.cpu.load.avg5[laLoad.2] |
+| CPU | Load average (15m avg) | <p>MIB: UCD-SNMP-MIB</p> | SNMP | system.cpu.load.avg15[laLoad.3] |
+| CPU | Number of CPUs | <p>MIB: HOST-RESOURCES-MIB</p><p>Count of CPU cores, based on the number of cores discovered in hrProcessorTable using LLD</p> | SNMP | system.cpu.num[snmp]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `//count the number of cores return JSON.parse(value).length; `</p> |
+| CPU | Interrupts per second | <p>-</p> | SNMP | system.cpu.intr[ssRawInterrupts.0]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| CPU | Context switches per second | <p>-</p> | SNMP | system.cpu.switches[ssRawContexts.0]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND |
+| CPU | CPU idle time | <p>MIB: UCD-SNMP-MIB</p><p>The time the CPU has spent doing nothing.</p> | SNMP | system.cpu.idle[ssCpuRawIdle.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
+| CPU | CPU system time | <p>MIB: UCD-SNMP-MIB</p><p>The time the CPU has spent running the kernel and its processes.</p> | SNMP | system.cpu.system[ssCpuRawSystem.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
+| CPU | CPU user time | <p>MIB: UCD-SNMP-MIB</p><p>The time the CPU has spent running users' processes that are not niced.</p> | SNMP | system.cpu.user[ssCpuRawUser.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
+| CPU | CPU steal time | <p>MIB: UCD-SNMP-MIB</p><p>The amount of CPU 'stolen' from this virtual machine by the hypervisor for other tasks (such as running another virtual machine).</p> | SNMP | system.cpu.steal[ssCpuRawSteal.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
+| CPU | CPU softirq time | <p>MIB: UCD-SNMP-MIB</p><p>The amount of time the CPU has been servicing software interrupts.</p> | SNMP | system.cpu.softirq[ssCpuRawSoftIRQ.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
+| CPU | CPU nice time | <p>MIB: UCD-SNMP-MIB</p><p>The time the CPU has spent running users' processes that have been niced.</p> | SNMP | system.cpu.nice[ssCpuRawNice.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
+| CPU | CPU iowait time | <p>MIB: UCD-SNMP-MIB</p><p>Amount of time the CPU has been waiting for I/O to complete.</p> | SNMP | system.cpu.iowait[ssCpuRawWait.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
+| CPU | CPU interrupt time | <p>MIB: UCD-SNMP-MIB</p><p>The amount of time the CPU has been servicing hardware interrupts.</p> | SNMP | system.cpu.interrupt[ssCpuRawInterrupt.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
+| CPU | CPU guest time | <p>MIB: UCD-SNMP-MIB</p><p>Guest time (time spent running a virtual CPU for a guest operating system)</p> | SNMP | system.cpu.guest[ssCpuRawGuest.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
+| CPU | CPU guest nice time | <p>MIB: UCD-SNMP-MIB</p><p>Time spent running a niced guest (virtual CPU for guest operating systems under the control of the Linux kernel)</p> | SNMP | system.cpu.guest_nice[ssCpuRawGuestNice.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- CHANGE_PER_SECOND<p>- JAVASCRIPT: `//to get utilization in %, divide by N, where N is number of cores. return value/{#CPU.COUNT} `</p> |
+| CPU | CPU utilization | <p>CPU utilization in %</p> | DEPENDENT | system.cpu.util[snmp,{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `//Calculate utilization return (100 - value) `</p> |
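
Each raw ssCpuRaw* counter above is first converted to a per-second rate and then divided by the discovered core count, and the utilization item is simply 100 minus the normalized idle time. A minimal sketch of that chain in template YAML (keys and preprocessing follow the table; the surrounding layout and the SNMP OID reference are illustrative):

```yaml
# Sketch of the normalization chain described in the table (not a verbatim excerpt).
item_prototypes:
  -
    name: 'CPU idle time'
    type: SNMP_AGENT
    snmp_oid: 'UCD-SNMP-MIB::ssCpuRawIdle.{#SNMPINDEX}'   # illustrative OID reference
    key: 'system.cpu.idle[ssCpuRawIdle.{#SNMPINDEX}]'
    value_type: FLOAT
    units: '%'
    preprocessing:
      -
        type: CHANGE_PER_SECOND                 # raw counter -> ticks per second
      -
        type: JAVASCRIPT
        parameters:
          - 'return value/{#CPU.COUNT};'        # normalize by the number of cores
  -
    name: 'CPU utilization'
    type: DEPENDENT
    key: 'system.cpu.util[snmp,{#SNMPINDEX}]'
    master_item:
      key: 'system.cpu.idle[ssCpuRawIdle.{#SNMPINDEX}]'
    value_type: FLOAT
    units: '%'
    preprocessing:
      -
        type: JAVASCRIPT
        parameters:
          - 'return (100 - value);'             # utilization = 100% - idle%
```
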
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|Load average is too high (per CPU load over {$LOAD_AVG_PER_CPU.MAX.WARN} for 5m) |<p>Per CPU load average is too high. Your system may be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.load.avg1[laLoad.1].min(5m)}/{Linux CPU SNMP:system.cpu.num[snmp].last()}>{$LOAD_AVG_PER_CPU.MAX.WARN} and {Linux CPU SNMP:system.cpu.load.avg5[laLoad.2].last()}>0 and {Linux CPU SNMP:system.cpu.load.avg15[laLoad.3].last()}>0` |AVERAGE | |
-|High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) |<p>CPU utilization is too high. The system might be slow to respond.</p> |`{TEMPLATE_NAME:system.cpu.util[snmp,{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` |WARNING | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|----------------------------------------------------------------------------------|------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------|
+| Load average is too high (per CPU load over {$LOAD_AVG_PER_CPU.MAX.WARN} for 5m) | <p>Per CPU load average is too high. Your system may be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.load.avg1[laLoad.1].min(5m)}/{Linux CPU SNMP:system.cpu.num[snmp].last()}>{$LOAD_AVG_PER_CPU.MAX.WARN} and {Linux CPU SNMP:system.cpu.load.avg5[laLoad.2].last()}>0 and {Linux CPU SNMP:system.cpu.load.avg15[laLoad.3].last()}>0` | AVERAGE | |
+| High CPU utilization (over {$CPU.UTIL.CRIT}% for 5m) | <p>CPU utilization is too high. The system might be slow to respond.</p> | `{TEMPLATE_NAME:system.cpu.util[snmp,{#SNMPINDEX}].min(5m)}>{$CPU.UTIL.CRIT}` | WARNING | |
## Feedback
@@ -174,7 +174,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -186,16 +186,16 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$VFS.FS.FSNAME.MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`.+` |
-|{$VFS.FS.FSNAME.NOT_MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`^(/dev|/sys|/run|/proc|.+/shm$)` |
-|{$VFS.FS.FSTYPE.MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`.*(\.4|\.9|hrStorageFixedDisk|hrStorageFlashMemory)$` |
-|{$VFS.FS.FSTYPE.NOT_MATCHES} |<p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> |`^\s$` |
-|{$VFS.FS.INODE.PFREE.MIN.CRIT} |<p>-</p> |`10` |
-|{$VFS.FS.INODE.PFREE.MIN.WARN} |<p>-</p> |`20` |
-|{$VFS.FS.PUSED.MAX.CRIT} |<p>-</p> |`90` |
-|{$VFS.FS.PUSED.MAX.WARN} |<p>-</p> |`80` |
+| Name | Description | Default |
+|--------------------------------|------------------------------------------------------------------------------------------------------------|--------------------------------------------------------|
+| {$VFS.FS.FSNAME.MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `.+` |
+| {$VFS.FS.FSNAME.NOT_MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `^(/dev|/sys|/run|/proc|.+/shm$)` |
+| {$VFS.FS.FSTYPE.MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `.*(\.4|\.9|hrStorageFixedDisk|hrStorageFlashMemory)$` |
+| {$VFS.FS.FSTYPE.NOT_MATCHES} | <p>This macro is used in filesystems discovery. Can be overridden on the host or linked template level</p> | `^\s$` |
+| {$VFS.FS.INODE.PFREE.MIN.CRIT} | <p>-</p> | `10` |
+| {$VFS.FS.INODE.PFREE.MIN.WARN} | <p>-</p> | `20` |
+| {$VFS.FS.PUSED.MAX.CRIT} | <p>-</p> | `90` |
+| {$VFS.FS.PUSED.MAX.WARN} | <p>-</p> | `80` |
## Template links
@@ -203,27 +203,27 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Mounted filesystem discovery |<p>HOST-RESOURCES-MIB::hrStorage discovery with storage filter</p> |SNMP |vfs.fs.discovery[snmp]<p>**Filter**:</p>AND <p>- A: {#FSTYPE} MATCHES_REGEX `{$VFS.FS.FSTYPE.MATCHES}`</p><p>- B: {#FSTYPE} NOT_MATCHES_REGEX `{$VFS.FS.FSTYPE.NOT_MATCHES}`</p><p>- C: {#FSNAME} MATCHES_REGEX `{$VFS.FS.FSNAME.MATCHES}`</p><p>- D: {#FSNAME} NOT_MATCHES_REGEX `{$VFS.FS.FSNAME.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|------------------------------|--------------------------------------------------------------------|------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Mounted filesystem discovery | <p>HOST-RESOURCES-MIB::hrStorage discovery with storage filter</p> | SNMP | vfs.fs.discovery[snmp]<p>**Filter**:</p>AND <p>- A: {#FSTYPE} MATCHES_REGEX `{$VFS.FS.FSTYPE.MATCHES}`</p><p>- B: {#FSTYPE} NOT_MATCHES_REGEX `{$VFS.FS.FSTYPE.NOT_MATCHES}`</p><p>- C: {#FSNAME} MATCHES_REGEX `{$VFS.FS.FSNAME.MATCHES}`</p><p>- D: {#FSNAME} NOT_MATCHES_REGEX `{$VFS.FS.FSNAME.NOT_MATCHES}`</p> |
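
In the rule above, the four {$VFS.FS.*} macros are combined with AND, so a filesystem is discovered only when both its type and its name pass their MATCHES/NOT_MATCHES pair. Expressed as a filter block (a sketch following conditions A-D in the table; the real export may differ in detail):

```yaml
# Sketch of the LLD filter built from the {$VFS.FS.*} macros (conditions A-D above).
filter:
  evaltype: AND
  conditions:
    -
      macro: '{#FSTYPE}'
      value: '{$VFS.FS.FSTYPE.MATCHES}'
      formulaid: A
    -
      macro: '{#FSTYPE}'
      value: '{$VFS.FS.FSTYPE.NOT_MATCHES}'
      operator: NOT_MATCHES_REGEX
      formulaid: B
    -
      macro: '{#FSNAME}'
      value: '{$VFS.FS.FSNAME.MATCHES}'
      formulaid: C
    -
      macro: '{#FSNAME}'
      value: '{$VFS.FS.FSNAME.NOT_MATCHES}'
      operator: NOT_MATCHES_REGEX
      formulaid: D
```
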
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Storage |{#FSNAME}: Used space |<p>MIB: HOST-RESOURCES-MIB</p><p>The amount of the storage represented by this entry that is allocated, in units of hrStorageAllocationUnits.</p> |SNMP |vfs.fs.used[hrStorageUsed.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `{#ALLOC_UNITS}`</p> |
-|Storage |{#FSNAME}: Total space |<p>MIB: HOST-RESOURCES-MIB</p><p>The size of the storage represented by this entry, in units of hrStorageAllocationUnits.</p><p>This object is writable to allow remote configuration of the size of the storage area in those cases where such an operation makes sense and is possible on the underlying system.</p><p>For example, the amount of main storage allocated to a buffer pool might be modified or the amount of disk space allocated to virtual storage might be modified.</p> |SNMP |vfs.fs.total[hrStorageSize.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `{#ALLOC_UNITS}`</p> |
-|Storage |{#FSNAME}: Space utilization |<p>Space utilization in % for {#FSNAME}</p> |CALCULATED |vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}]<p>**Expression**:</p>`(last("vfs.fs.used[hrStorageUsed.{#SNMPINDEX}]")/last("vfs.fs.total[hrStorageSize.{#SNMPINDEX}]"))*100` |
-|Storage |{#FSNAME}: Free inodes in % |<p>MIB: UCD-SNMP-MIB</p><p>If having problems collecting this item make sure access to UCD-SNMP-MIB is allowed.</p> |SNMP |vfs.fs.inode.pfree[dskPercentNode.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return (100-value);`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|---------|------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Storage | {#FSNAME}: Used space | <p>MIB: HOST-RESOURCES-MIB</p><p>The amount of the storage represented by this entry that is allocated, in units of hrStorageAllocationUnits.</p> | SNMP | vfs.fs.used[hrStorageUsed.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `{#ALLOC_UNITS}`</p> |
+| Storage | {#FSNAME}: Total space | <p>MIB: HOST-RESOURCES-MIB</p><p>The size of the storage represented by this entry, in units of hrStorageAllocationUnits.</p><p>This object is writable to allow remote configuration of the size of the storage area in those cases where such an operation makes sense and is possible on the underlying system.</p><p>For example, the amount of main storage allocated to a buffer pool might be modified or the amount of disk space allocated to virtual storage might be modified.</p> | SNMP | vfs.fs.total[hrStorageSize.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `{#ALLOC_UNITS}`</p> |
+| Storage | {#FSNAME}: Space utilization | <p>Space utilization in % for {#FSNAME}</p> | CALCULATED | vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}]<p>**Expression**:</p>`(last("vfs.fs.used[hrStorageUsed.{#SNMPINDEX}]")/last("vfs.fs.total[hrStorageSize.{#SNMPINDEX}]"))*100` |
+| Storage | {#FSNAME}: Free inodes in % | <p>MIB: UCD-SNMP-MIB</p><p>If you have problems collecting this item, make sure that access to UCD-SNMP-MIB is allowed.</p> | SNMP | vfs.fs.inode.pfree[dskPercentNode.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- JAVASCRIPT: `return (100-value);`</p> |
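
Note that the space-utilization item is CALCULATED rather than polled: it divides the used-space item by the total-space item on the server side. A sketch of such a prototype, using the expression quoted in the table (the surrounding layout is illustrative):

```yaml
# Sketch of the calculated space-utilization prototype (expression taken from the table).
item_prototypes:
  -
    name: '{#FSNAME}: Space utilization'
    type: CALCULATED
    key: 'vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}]'
    value_type: FLOAT
    units: '%'
    params: '(last("vfs.fs.used[hrStorageUsed.{#SNMPINDEX}]")/last("vfs.fs.total[hrStorageSize.{#SNMPINDEX}]"))*100'
    description: 'Space utilization in % for {#FSNAME}'
```
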
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%) |<p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 5G.</p><p> - The disk will be full in less than 24 hours.</p> |`{TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].last()}>{$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"} and (({Linux filesystems SNMP:vfs.fs.total[hrStorageSize.{#SNMPINDEX}].last()}-{Linux filesystems SNMP:vfs.fs.used[hrStorageUsed.{#SNMPINDEX}].last()})<5G or {TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].timeleft(1h,,100)}<1d)` |AVERAGE |<p>Manual close: YES</p> |
-|{#FSNAME}: Disk space is low (used > {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}%) |<p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 10G.</p><p> - The disk will be full in less than 24 hours.</p> |`{TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].last()}>{$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"} and (({Linux filesystems SNMP:vfs.fs.total[hrStorageSize.{#SNMPINDEX}].last()}-{Linux filesystems SNMP:vfs.fs.used[hrStorageUsed.{#SNMPINDEX}].last()})<10G or {TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].timeleft(1h,,100)}<1d)` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- {#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%)</p> |
-|{#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%) |<p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> |`{TEMPLATE_NAME:vfs.fs.inode.pfree[dskPercentNode.{#SNMPINDEX}].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}` |AVERAGE | |
-|{#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}%) |<p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> |`{TEMPLATE_NAME:vfs.fs.inode.pfree[dskPercentNode.{#SNMPINDEX}].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}` |WARNING |<p>**Depends on**:</p><p>- {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%)</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|--------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------------------------------------------------------------------------------------------------|
+| {#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%) | <p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 5G.</p><p> - The disk will be full in less than 24 hours.</p> | `{TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].last()}>{$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"} and (({Linux filesystems SNMP:vfs.fs.total[hrStorageSize.{#SNMPINDEX}].last()}-{Linux filesystems SNMP:vfs.fs.used[hrStorageUsed.{#SNMPINDEX}].last()})<5G or {TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].timeleft(1h,,100)}<1d)` | AVERAGE | <p>Manual close: YES</p> |
+| {#FSNAME}: Disk space is low (used > {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}%) | <p>Two conditions should match: First, space utilization should be above {$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"}.</p><p> Second condition should be one of the following:</p><p> - The disk free space is less than 10G.</p><p> - The disk will be full in less than 24 hours.</p> | `{TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].last()}>{$VFS.FS.PUSED.MAX.WARN:"{#FSNAME}"} and (({Linux filesystems SNMP:vfs.fs.total[hrStorageSize.{#SNMPINDEX}].last()}-{Linux filesystems SNMP:vfs.fs.used[hrStorageUsed.{#SNMPINDEX}].last()})<10G or {TEMPLATE_NAME:vfs.fs.pused[storageUsedPercentage.{#SNMPINDEX}].timeleft(1h,,100)}<1d)` | WARNING | <p>Manual close: YES</p><p>**Depends on**:</p><p>- {#FSNAME}: Disk space is critically low (used > {$VFS.FS.PUSED.MAX.CRIT:"{#FSNAME}"}%)</p> |
+| {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%) | <p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> | `{TEMPLATE_NAME:vfs.fs.inode.pfree[dskPercentNode.{#SNMPINDEX}].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}` | AVERAGE | |
+| {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}%) | <p>It may become impossible to write to disk if there are no index nodes left.</p><p>As symptoms, 'No space left on device' or 'Disk is full' errors may be seen even though free space is available.</p> | `{TEMPLATE_NAME:vfs.fs.inode.pfree[dskPercentNode.{#SNMPINDEX}].min(5m)}<{$VFS.FS.INODE.PFREE.MIN.WARN:"{#FSNAME}"}` | WARNING | <p>**Depends on**:</p><p>- {#FSNAME}: Running out of free inodes (free < {$VFS.FS.INODE.PFREE.MIN.CRIT:"{#FSNAME}"}%)</p> |
## Feedback
@@ -233,7 +233,7 @@ Please report any issues with the template at https://support.zabbix.com
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -263,15 +263,15 @@ No specific Zabbix configuration is required.
## Template links
-|Name|
-|----|
-|EtherLike-MIB SNMP |
-|Generic SNMP |
-|Interfaces SNMP |
-|Linux CPU SNMP |
-|Linux block devices SNMP |
-|Linux filesystems SNMP |
-|Linux memory SNMP |
+| Name |
+|--------------------------|
+| EtherLike-MIB SNMP |
+| Generic SNMP |
+| Interfaces SNMP |
+| Linux CPU SNMP |
+| Linux block devices SNMP |
+| Linux filesystems SNMP |
+| Linux memory SNMP |
## Discovery rules
diff --git a/templates/os/linux_snmp_snmp/template_os_linux_snmp_snmp.yaml b/templates/os/linux_snmp_snmp/template_os_linux_snmp_snmp.yaml
index 124c0754ac0..eaf53e38cf4 100644
--- a/templates/os/linux_snmp_snmp/template_os_linux_snmp_snmp.yaml
+++ b/templates/os/linux_snmp_snmp/template_os_linux_snmp_snmp.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-23T11:04:58Z'
+ date: '2021-04-22T11:28:46Z'
groups:
-
name: Templates/Modules
@@ -1041,175 +1041,177 @@ zabbix_export:
dashboards:
-
name: 'System performance'
- widgets:
+ pages:
-
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'System load'
- host: 'Linux SNMP'
- -
- type: GRAPH_PROTOTYPE
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
+ widgets:
-
- type: INTEGER
- name: source_type
- value: '2'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'System load'
+ host: 'Linux SNMP'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'CPU usage{#SINGLETON}'
- host: 'Linux SNMP'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Memory usage'
- host: 'Linux SNMP'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Swap usage'
- host: 'Linux SNMP'
- -
- type: GRAPH_PROTOTYPE
- 'y': '10'
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'CPU usage{#SINGLETON}'
+ host: 'Linux SNMP'
+ -
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Memory usage'
+ host: 'Linux SNMP'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Swap usage'
+ host: 'Linux SNMP'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#FSNAME}: Disk space usage'
- host: 'Linux SNMP'
- -
- type: GRAPH_PROTOTYPE
- 'y': '15'
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
+ 'y': '10'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#FSNAME}: Disk space usage'
+ host: 'Linux SNMP'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#DEVNAME}: Disk read/write rates'
- host: 'Linux SNMP'
- -
- type: GRAPH_PROTOTYPE
- 'y': '20'
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '3'
- -
- type: ITEM_PROTOTYPE
- name: itemid
- value:
- key: 'vfs.dev.util[diskIOLA1.{#SNMPINDEX}]'
- host: 'Linux SNMP'
- -
- type: GRAPH_PROTOTYPE
- 'y': '25'
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
+ 'y': '15'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#DEVNAME}: Disk read/write rates'
+ host: 'Linux SNMP'
-
- type: INTEGER
- name: source_type
- value: '2'
+ type: GRAPH_PROTOTYPE
+ 'y': '20'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '3'
+ -
+ type: ITEM_PROTOTYPE
+ name: itemid
+ value:
+ key: 'vfs.dev.util[diskIOLA1.{#SNMPINDEX}]'
+ host: 'Linux SNMP'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
- host: 'Linux SNMP'
+ 'y': '25'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
+ host: 'Linux SNMP'
triggers:
-
expression: '{Linux memory SNMP:system.swap.pfree[snmp].min(5m)}<{$SWAP.PFREE.MIN.WARN} and {Linux memory SNMP:system.swap.total[memTotalSwap.0].last()}>0'
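
The hunk above is the core of this commit for the OS templates: a dashboard no longer carries a flat widgets: list, but a pages: array whose entries each hold their own widgets:, which is why most of the diff is a one-level indentation shift. Stripped down to the structural change only (a schematic sketch, not an excerpt from the template):

```yaml
# Before: single-page dashboard export (widgets attached directly to the dashboard)
dashboards:
  -
    name: 'System performance'
    widgets:
      -
        type: GRAPH_CLASSIC
        width: '12'
        height: '5'
---
# After: multi-page dashboard export (each page owns its widgets)
dashboards:
  -
    name: 'System performance'
    pages:
      -
        widgets:
          -
            type: GRAPH_CLASSIC
            width: '12'
            height: '5'
```
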
diff --git a/templates/os/windows_agent/template_os_windows_agent.yaml b/templates/os/windows_agent/template_os_windows_agent.yaml
index 86b4536626f..c6858f5fc41 100644
--- a/templates/os/windows_agent/template_os_windows_agent.yaml
+++ b/templates/os/windows_agent/template_os_windows_agent.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-02T19:42:26Z'
+ date: '2021-04-22T11:28:48Z'
groups:
-
name: Templates/Modules
@@ -40,167 +40,169 @@ zabbix_export:
dashboards:
-
name: 'System performance'
- widgets:
- -
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'CPU usage'
- host: 'Windows by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '1'
- -
- type: ITEM
- name: itemid
- value:
- key: 'perf_counter_en["\System\Processor Queue Length"]'
- host: 'Windows by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Memory utilization'
- host: 'Windows by Zabbix agent'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Swap usage'
- host: 'Windows by Zabbix agent'
- -
- type: GRAPH_PROTOTYPE
- 'y': '10'
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
- -
- type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#FSNAME}: Disk space usage'
- host: 'Windows by Zabbix agent'
+ pages:
-
- type: GRAPH_PROTOTYPE
- 'y': '15'
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
+ widgets:
-
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU usage'
+ host: 'Windows by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '1'
+ -
+ type: ITEM
+ name: itemid
+ value:
+ key: 'perf_counter_en["\System\Processor Queue Length"]'
+ host: 'Windows by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Memory utilization'
+ host: 'Windows by Zabbix agent'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Swap usage'
+ host: 'Windows by Zabbix agent'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#DEVNAME}: Disk read/write rates'
- host: 'Windows by Zabbix agent'
- -
- type: GRAPH_PROTOTYPE
- 'y': '20'
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
+ 'y': '10'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#FSNAME}: Disk space usage'
+ host: 'Windows by Zabbix agent'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#DEVNAME}: Disk utilization and queue'
- host: 'Windows by Zabbix agent'
- -
- type: GRAPH_PROTOTYPE
- 'y': '25'
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
+ 'y': '15'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#DEVNAME}: Disk read/write rates'
+ host: 'Windows by Zabbix agent'
-
- type: INTEGER
- name: source_type
- value: '2'
+ type: GRAPH_PROTOTYPE
+ 'y': '20'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#DEVNAME}: Disk utilization and queue'
+ host: 'Windows by Zabbix agent'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
- host: 'Windows by Zabbix agent'
+ 'y': '25'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
+ host: 'Windows by Zabbix agent'
-
template: 'Windows CPU by Zabbix agent'
name: 'Windows CPU by Zabbix agent'
@@ -1267,30 +1269,32 @@ zabbix_export:
dashboards:
-
name: 'Network interfaces'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
- host: 'Windows network by Zabbix agent'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
+ host: 'Windows network by Zabbix agent'
valuemaps:
-
name: 'Win32_NetworkAdapter::AdapterTypeId'
diff --git a/templates/os/windows_agent_active/template_os_windows_agent_active.yaml b/templates/os/windows_agent_active/template_os_windows_agent_active.yaml
index 6fdc2dd7104..e1f0548f28d 100644
--- a/templates/os/windows_agent_active/template_os_windows_agent_active.yaml
+++ b/templates/os/windows_agent_active/template_os_windows_agent_active.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-02T19:42:25Z'
+ date: '2021-04-22T11:28:50Z'
groups:
-
name: Templates/Modules
@@ -40,167 +40,169 @@ zabbix_export:
dashboards:
-
name: 'System performance'
- widgets:
- -
- type: GRAPH_CLASSIC
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'CPU usage'
- host: 'Windows by Zabbix agent active'
- -
- type: GRAPH_CLASSIC
- x: '12'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '1'
- -
- type: ITEM
- name: itemid
- value:
- key: 'perf_counter_en["\System\Processor Queue Length"]'
- host: 'Windows by Zabbix agent active'
- -
- type: GRAPH_CLASSIC
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Memory utilization'
- host: 'Windows by Zabbix agent active'
- -
- type: GRAPH_CLASSIC
- x: '12'
- 'y': '5'
- width: '12'
- height: '5'
- fields:
- -
- type: INTEGER
- name: source_type
- value: '0'
- -
- type: GRAPH
- name: graphid
- value:
- name: 'Swap usage'
- host: 'Windows by Zabbix agent active'
- -
- type: GRAPH_PROTOTYPE
- 'y': '10'
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
- -
- type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#FSNAME}: Disk space usage'
- host: 'Windows by Zabbix agent active'
+ pages:
-
- type: GRAPH_PROTOTYPE
- 'y': '15'
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
+ widgets:
-
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
+ type: GRAPH_CLASSIC
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'CPU usage'
+ host: 'Windows by Zabbix agent active'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '1'
+ -
+ type: ITEM
+ name: itemid
+ value:
+ key: 'perf_counter_en["\System\Processor Queue Length"]'
+ host: 'Windows by Zabbix agent active'
+ -
+ type: GRAPH_CLASSIC
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Memory utilization'
+ host: 'Windows by Zabbix agent active'
+ -
+ type: GRAPH_CLASSIC
+ x: '12'
+ 'y': '5'
+ width: '12'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: source_type
+ value: '0'
+ -
+ type: GRAPH
+ name: graphid
+ value:
+ name: 'Swap usage'
+ host: 'Windows by Zabbix agent active'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#DEVNAME}: Disk read/write rates'
- host: 'Windows by Zabbix agent active'
- -
- type: GRAPH_PROTOTYPE
- 'y': '20'
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
+ 'y': '10'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#FSNAME}: Disk space usage'
+ host: 'Windows by Zabbix agent active'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: '{#DEVNAME}: Disk utilization and queue'
- host: 'Windows by Zabbix agent active'
- -
- type: GRAPH_PROTOTYPE
- 'y': '25'
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
+ 'y': '15'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#DEVNAME}: Disk read/write rates'
+ host: 'Windows by Zabbix agent active'
-
- type: INTEGER
- name: source_type
- value: '2'
+ type: GRAPH_PROTOTYPE
+ 'y': '20'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: '{#DEVNAME}: Disk utilization and queue'
+ host: 'Windows by Zabbix agent active'
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
- host: 'Windows by Zabbix agent active'
+ 'y': '25'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
+ host: 'Windows by Zabbix agent active'
-
template: 'Windows CPU by Zabbix agent active'
name: 'Windows CPU by Zabbix agent active'
@@ -1302,30 +1304,32 @@ zabbix_export:
dashboards:
-
name: 'Network interfaces'
- widgets:
+ pages:
-
- type: GRAPH_PROTOTYPE
- width: '24'
- height: '5'
- fields:
- -
- type: INTEGER
- name: columns
- value: '1'
- -
- type: INTEGER
- name: rows
- value: '1'
- -
- type: INTEGER
- name: source_type
- value: '2'
+ widgets:
-
type: GRAPH_PROTOTYPE
- name: graphid
- value:
- name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
- host: 'Windows network by Zabbix agent active'
+ width: '24'
+ height: '5'
+ fields:
+ -
+ type: INTEGER
+ name: columns
+ value: '1'
+ -
+ type: INTEGER
+ name: rows
+ value: '1'
+ -
+ type: INTEGER
+ name: source_type
+ value: '2'
+ -
+ type: GRAPH_PROTOTYPE
+ name: graphid
+ value:
+ name: 'Interface {#IFNAME}({#IFALIAS}): Network traffic'
+ host: 'Windows network by Zabbix agent active'
valuemaps:
-
name: 'Win32_NetworkAdapter::AdapterTypeId'
diff --git a/templates/os/windows_snmp/README.md b/templates/os/windows_snmp/README.md
index 805ddd937bd..cbde060a391 100644
--- a/templates/os/windows_snmp/README.md
+++ b/templates/os/windows_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
## Setup
@@ -16,11 +16,11 @@ No specific Zabbix configuration is required.
## Template links
-|Name|
-|----|
-|Generic SNMP |
-|HOST-RESOURCES-MIB SNMP |
-|Interfaces Windows SNMP |
+| Name |
+|-------------------------|
+| Generic SNMP |
+| HOST-RESOURCES-MIB SNMP |
+| Interfaces Windows SNMP |
## Discovery rules
diff --git a/templates/san/huawei_5300v5_snmp/template_san_huawei_5300v5_snmp.yaml b/templates/san/huawei_5300v5_snmp/template_san_huawei_5300v5_snmp.yaml
index e34600edff1..448373cd6f4 100644
--- a/templates/san/huawei_5300v5_snmp/template_san_huawei_5300v5_snmp.yaml
+++ b/templates/san/huawei_5300v5_snmp/template_san_huawei_5300v5_snmp.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-02-24T11:15:15Z'
+ date: '2021-04-22T11:28:44Z'
groups:
-
name: Templates/SAN
@@ -20,11 +20,6 @@ zabbix_export:
groups:
-
name: Templates/SAN
- applications:
- -
- name: CPU
- -
- name: Huawei
items:
-
name: 'OceanStor 5300 V5: Status'
@@ -33,14 +28,15 @@ zabbix_export:
key: 'huawei.5300.v5[status]'
history: 7d
description: 'System running status.'
- applications:
- -
- name: Huawei
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: Huawei
-
name: 'OceanStor 5300 V5: Capacity total'
type: SNMP_AGENT
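
In the SAN template the change is different: the pre-5.4 applications: and application_prototypes: blocks are dropped, and each item instead gets a tags: block whose Application tag carries the former application name. The pattern, reduced to one item from this file (a sketch, not a verbatim excerpt):

```yaml
# Before: grouping via an application
items:
  -
    name: 'OceanStor 5300 V5: Status'
    key: 'huawei.5300.v5[status]'
    applications:
      -
        name: Huawei
---
# After: the same grouping expressed as an item-level tag
items:
  -
    name: 'OceanStor 5300 V5: Status'
    key: 'huawei.5300.v5[status]'
    tags:
      -
        tag: Application
        value: Huawei
```
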
@@ -49,9 +45,6 @@ zabbix_export:
history: 7d
units: B
description: 'Total capacity of a device.'
- applications:
- -
- name: Huawei
preprocessing:
-
type: MULTIPLIER
@@ -61,6 +54,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 10m
+ tags:
+ -
+ tag: Application
+ value: Huawei
-
name: 'OceanStor 5300 V5: Capacity used'
type: SNMP_AGENT
@@ -69,14 +66,15 @@ zabbix_export:
history: 7d
units: B
description: 'Used capacity of a device.'
- applications:
- -
- name: Huawei
preprocessing:
-
type: MULTIPLIER
parameters:
- '1048576'
+ tags:
+ -
+ tag: Application
+ value: Huawei
-
name: 'OceanStor 5300 V5: Version'
type: SNMP_AGENT
@@ -86,14 +84,15 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'The device version.'
- applications:
- -
- name: Huawei
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: Huawei
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -119,9 +118,6 @@ zabbix_export:
description: |
Health status of a BBU. For details, see definition of Enum Values (HEALTH_STATUS_E).
https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference
- application_prototypes:
- -
- name: 'BBU {#ID}'
valuemap:
name: 'Huawei storage: Health status'
preprocessing:
@@ -129,6 +125,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'BBU {#ID}'
trigger_prototypes:
-
expression: '{last()}<>1'
@@ -143,9 +143,6 @@ zabbix_export:
description: |
Running status of a BBU. For details, see definition of Enum Values (RUNNING_STATUS_E).
https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference
- application_prototypes:
- -
- name: 'BBU {#ID}'
valuemap:
name: 'Huawei storage: Running status'
preprocessing:
@@ -153,6 +150,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'BBU {#ID}'
trigger_prototypes:
-
expression: '{last()}<>2'
@@ -175,9 +176,10 @@ zabbix_export:
value_type: FLOAT
units: '%'
description: 'CPU usage of a controller {#ID}.'
- application_prototypes:
+ tags:
-
- name: 'Controller {#ID}'
+ tag: Application
+ value: 'Controller {#ID}'
trigger_prototypes:
-
expression: '{min(5m)}>{$CPU.UTIL.CRIT}'
@@ -194,9 +196,6 @@ zabbix_export:
description: |
Controller health status. For details, see definition of Enum Values (HEALTH_STATUS_E).
https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference
- application_prototypes:
- -
- name: 'Controller {#ID}'
valuemap:
name: 'Huawei storage: Health status'
preprocessing:
@@ -204,6 +203,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Controller {#ID}'
trigger_prototypes:
-
expression: '{last()}<>1'
@@ -217,9 +220,10 @@ zabbix_export:
history: 7d
units: '%'
description: 'Memory usage of a controller {#ID}.'
- application_prototypes:
+ tags:
-
- name: 'Controller {#ID}'
+ tag: Application
+ value: 'Controller {#ID}'
trigger_prototypes:
-
expression: '{min({$HUAWEI.5300.MEM.MAX.TIME})}>{$HUAWEI.5300.MEM.MAX.WARN}'
@@ -232,9 +236,6 @@ zabbix_export:
key: 'huawei.5300.v5[hwInfoControllerRole, "{#ID}"]'
history: 7d
description: 'Controller role.'
- application_prototypes:
- -
- name: 'Controller {#ID}'
valuemap:
name: 'Huawei storage: Controller role'
preprocessing:
@@ -242,6 +243,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Controller {#ID}'
trigger_prototypes:
-
expression: '{diff()}=1'
@@ -257,9 +262,6 @@ zabbix_export:
description: |
Controller running status. For details, see definition of Enum Values (RUNNING_STATUS_E).
https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference
- application_prototypes:
- -
- name: 'Controller {#ID}'
valuemap:
name: 'Huawei storage: Running status'
preprocessing:
@@ -267,6 +269,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Controller {#ID}'
trigger_prototypes:
-
expression: '{last()}<>27'
@@ -299,14 +305,15 @@ zabbix_export:
key: 'huawei.5300.v5[hwInfoDiskHealthMark, "{#ID}"]'
history: 7d
description: 'Health score of a disk. A value of 255 indicates that the score is invalid.'
- application_prototypes:
- -
- name: 'Disk {#MODEL}'
preprocessing:
-
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Disk {#MODEL}'
-
name: 'Disk {#MODEL} on {#LOCATION}: Health status'
type: SNMP_AGENT
@@ -316,9 +323,6 @@ zabbix_export:
description: |
Disk health status. For details, see definition of Enum Values (HEALTH_STATUS_E).
https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference
- application_prototypes:
- -
- name: 'Disk {#NAME}'
valuemap:
name: 'Huawei storage: Health status'
preprocessing:
@@ -326,6 +330,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Disk {#NAME}'
trigger_prototypes:
-
expression: '{last()}<>1'
@@ -340,9 +348,6 @@ zabbix_export:
description: |
Disk running status. For details, see definition of Enum Values (RUNNING_STATUS_E).
https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference
- application_prototypes:
- -
- name: 'Disk {#MODEL}'
valuemap:
name: 'Huawei storage: Running status'
preprocessing:
@@ -350,6 +355,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Disk {#MODEL}'
trigger_prototypes:
-
expression: '{last()}<>27'
@@ -363,9 +372,10 @@ zabbix_export:
history: 7d
units: °C
description: 'Disk temperature.'
- application_prototypes:
+ tags:
-
- name: 'Disk {#MODEL}'
+ tag: Application
+ value: 'Disk {#MODEL}'
trigger_prototypes:
-
expression: '{min({$HUAWEI.5300.DISK.TEMP.MAX.TIME})}>{$HUAWEI.5300.DISK.TEMP.MAX.WARN:"{#MODEL}"}'
@@ -389,9 +399,6 @@ zabbix_export:
description: |
Enclosure health status. For details, see definition of Enum Values (HEALTH_STATUS_E).
https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference
- application_prototypes:
- -
- name: 'Enclosure {#NAME}'
valuemap:
name: 'Huawei storage: Health status'
preprocessing:
@@ -399,6 +406,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Enclosure {#NAME}'
trigger_prototypes:
-
expression: '{last()}<>1'
@@ -413,9 +424,6 @@ zabbix_export:
description: |
Enclosure running status. For details, see definition of Enum Values (RUNNING_STATUS_E).
https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference
- application_prototypes:
- -
- name: 'Enclosure {#NAME}'
valuemap:
name: 'Huawei storage: Running status'
preprocessing:
@@ -423,6 +431,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Enclosure {#NAME}'
trigger_prototypes:
-
expression: '{last()}<>27'
@@ -436,9 +448,10 @@ zabbix_export:
history: 7d
units: °C
description: 'Enclosure temperature.'
- application_prototypes:
+ tags:
-
- name: 'Enclosure {#NAME}'
+ tag: Application
+ value: 'Enclosure {#NAME}'
trigger_prototypes:
-
expression: '{min({$HUAWEI.5300.TEMP.MAX.TIME})}>{$HUAWEI.5300.TEMP.MAX.WARN}'
@@ -462,9 +475,6 @@ zabbix_export:
description: |
Health status of a fan. For details, see definition of Enum Values (HEALTH_STATUS_E).
https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference
- application_prototypes:
- -
- name: 'FAN {#ID}'
valuemap:
name: 'Huawei storage: Health status'
preprocessing:
@@ -472,6 +482,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'FAN {#ID}'
trigger_prototypes:
-
expression: '{last()}<>1'
@@ -486,9 +500,6 @@ zabbix_export:
description: |
Operating status of a fan. For details, see definition of Enum Values (RUNNING_STATUS_E).
https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference
- application_prototypes:
- -
- name: 'FAN {#ID}'
valuemap:
name: 'Huawei storage: Running status'
preprocessing:
@@ -496,6 +507,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'FAN {#ID}'
trigger_prototypes:
-
expression: '{last()}<>2'
@@ -517,14 +532,15 @@ zabbix_export:
history: 7d
units: '!ms'
description: 'Average I/O latency of the node in milliseconds.'
- application_prototypes:
- -
- name: 'LUN {#NAME}'
preprocessing:
-
type: MULTIPLIER
parameters:
- '0.001'
+ tags:
+ -
+ tag: Application
+ value: 'LUN {#NAME}'
trigger_prototypes:
-
expression: '{min({$HUAWEI.5300.LUN.IO.TIME.MAX.TIME})}>{$HUAWEI.5300.LUN.IO.TIME.MAX.WARN}'
@@ -538,14 +554,15 @@ zabbix_export:
history: 7d
units: '!ms'
description: 'Average read I/O response time in milliseconds.'
- application_prototypes:
- -
- name: 'LUN {#NAME}'
preprocessing:
-
type: MULTIPLIER
parameters:
- '0.001'
+ tags:
+ -
+ tag: Application
+ value: 'LUN {#NAME}'
-
name: 'LUN {#NAME}: Average write I/O latency'
type: SNMP_AGENT
@@ -554,14 +571,15 @@ zabbix_export:
history: 7d
units: '!ms'
description: 'Average write I/O response time in milliseconds.'
- application_prototypes:
- -
- name: 'LUN {#NAME}'
preprocessing:
-
type: MULTIPLIER
parameters:
- '0.001'
+ tags:
+ -
+ tag: Application
+ value: 'LUN {#NAME}'
-
name: 'LUN {#NAME}: Read operations per second'
type: SNMP_AGENT
@@ -570,9 +588,10 @@ zabbix_export:
history: 7d
units: '!iops'
description: 'Read IOPS of the LUN.'
- application_prototypes:
+ tags:
-
- name: 'LUN {#NAME}'
+ tag: Application
+ value: 'LUN {#NAME}'
-
name: 'LUN {#NAME}: Read traffic per second'
type: SNMP_AGENT
@@ -581,14 +600,15 @@ zabbix_export:
history: 7d
units: Bps
description: 'Current read bandwidth for the LUN.'
- application_prototypes:
- -
- name: 'LUN {#NAME}'
preprocessing:
-
type: MULTIPLIER
parameters:
- '1048576'
+ tags:
+ -
+ tag: Application
+ value: 'LUN {#NAME}'
-
name: 'LUN {#NAME}: Total I/O per second'
type: SNMP_AGENT
@@ -597,9 +617,10 @@ zabbix_export:
history: 7d
units: '!iops'
description: 'Current IOPS of the LUN.'
- application_prototypes:
+ tags:
-
- name: 'LUN {#NAME}'
+ tag: Application
+ value: 'LUN {#NAME}'
-
name: 'LUN {#NAME}: Total traffic per second'
type: SNMP_AGENT
@@ -608,14 +629,15 @@ zabbix_export:
history: 7d
units: Bps
description: 'Current total bandwidth for the LUN.'
- application_prototypes:
- -
- name: 'LUN {#NAME}'
preprocessing:
-
type: MULTIPLIER
parameters:
- '1048576'
+ tags:
+ -
+ tag: Application
+ value: 'LUN {#NAME}'
-
name: 'LUN {#NAME}: Write operations per second'
type: SNMP_AGENT
@@ -624,9 +646,10 @@ zabbix_export:
history: 7d
units: '!iops'
description: 'Write IOPS of the LUN.'
- application_prototypes:
+ tags:
-
- name: 'LUN {#NAME}'
+ tag: Application
+ value: 'LUN {#NAME}'
-
name: 'LUN {#NAME}: Write traffic per second'
type: SNMP_AGENT
@@ -635,14 +658,15 @@ zabbix_export:
history: 7d
units: Bps
description: 'Current write bandwidth for the LUN.'
- application_prototypes:
- -
- name: 'LUN {#NAME}'
preprocessing:
-
type: MULTIPLIER
parameters:
- '1048576'
+ tags:
+ -
+ tag: Application
+ value: 'LUN {#NAME}'
-
name: 'LUN {#NAME}: Capacity'
type: SNMP_AGENT
@@ -651,9 +675,6 @@ zabbix_export:
history: 7d
units: B
description: 'Capacity of the LUN.'
- application_prototypes:
- -
- name: 'LUN {#NAME}'
preprocessing:
-
type: MULTIPLIER
@@ -663,6 +684,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'LUN {#NAME}'
-
name: 'LUN {#NAME}: Status'
type: SNMP_AGENT
@@ -670,9 +695,6 @@ zabbix_export:
key: 'huawei.5300.v5[hwStorageLunStatus, "{#NAME}"]'
history: 7d
description: 'Status of the LUN.'
- application_prototypes:
- -
- name: 'LUN {#NAME}'
valuemap:
name: 'Huawei storage: LUN status'
preprocessing:
@@ -680,6 +702,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'LUN {#NAME}'
trigger_prototypes:
-
expression: '{last()}<>1'
@@ -763,9 +789,10 @@ zabbix_export:
value_type: FLOAT
units: '%'
description: 'CPU usage of the node {#NODE}.'
- application_prototypes:
+ tags:
-
- name: 'Node {#NODE}'
+ tag: Application
+ value: 'Node {#NODE}'
trigger_prototypes:
-
expression: '{min(5m)}>{$CPU.UTIL.CRIT}'
@@ -782,9 +809,10 @@ zabbix_export:
value_type: FLOAT
units: '!ms'
description: 'Average I/O latency of the node.'
- application_prototypes:
+ tags:
-
- name: 'Node {#NODE}'
+ tag: Application
+ value: 'Node {#NODE}'
trigger_prototypes:
-
expression: '{min({$HUAWEI.5300.NODE.IO.DELAY.MAX.TIME})}>{$HUAWEI.5300.NODE.IO.DELAY.MAX.WARN}'
@@ -798,9 +826,10 @@ zabbix_export:
history: 7d
units: '!iops'
description: 'Read IOPS of the node.'
- application_prototypes:
+ tags:
-
- name: 'Node {#NODE}'
+ tag: Application
+ value: 'Node {#NODE}'
-
name: 'Node {#NODE}: Read traffic per second'
type: SNMP_AGENT
@@ -809,14 +838,15 @@ zabbix_export:
history: 7d
units: Bps
description: 'Read bandwidth for the node.'
- application_prototypes:
- -
- name: 'Node {#NODE}'
preprocessing:
-
type: MULTIPLIER
parameters:
- '1048576'
+ tags:
+ -
+ tag: Application
+ value: 'Node {#NODE}'
-
name: 'Node {#NODE}: Total I/O per second'
type: SNMP_AGENT
@@ -825,9 +855,10 @@ zabbix_export:
history: 7d
units: '!iops'
description: 'Total IOPS of the node.'
- application_prototypes:
+ tags:
-
- name: 'Node {#NODE}'
+ tag: Application
+ value: 'Node {#NODE}'
-
name: 'Node {#NODE}: Total traffic per second'
type: SNMP_AGENT
@@ -836,14 +867,15 @@ zabbix_export:
history: 7d
units: Bps
description: 'Total bandwidth for the node.'
- application_prototypes:
- -
- name: 'Node {#NODE}'
preprocessing:
-
type: MULTIPLIER
parameters:
- '1048576'
+ tags:
+ -
+ tag: Application
+ value: 'Node {#NODE}'
-
name: 'Node {#NODE}: Write operations per second'
type: SNMP_AGENT
@@ -852,9 +884,10 @@ zabbix_export:
history: 7d
units: '!iops'
description: 'Write IOPS of the node.'
- application_prototypes:
+ tags:
-
- name: 'Node {#NODE}'
+ tag: Application
+ value: 'Node {#NODE}'
-
name: 'Node {#NODE}: Write traffic per second'
type: SNMP_AGENT
@@ -863,14 +896,15 @@ zabbix_export:
history: 7d
units: Bps
description: 'Write bandwidth for the node.'
- application_prototypes:
- -
- name: 'Node {#NODE}'
preprocessing:
-
type: MULTIPLIER
parameters:
- '1048576'
+ tags:
+ -
+ tag: Application
+ value: 'Node {#NODE}'
graph_prototypes:
-
name: 'Node {#NODE}: CPU utilization'
@@ -939,14 +973,15 @@ zabbix_export:
history: 7d
units: B
description: 'Available capacity of a storage pool.'
- application_prototypes:
- -
- name: 'Pool {#MODEL}'
preprocessing:
-
type: MULTIPLIER
parameters:
- '1048576'
+ tags:
+ -
+ tag: Application
+ value: 'Pool {#MODEL}'
-
name: 'Pool {#NAME}: Capacity used percentage'
type: CALCULATED
@@ -955,9 +990,10 @@ zabbix_export:
units: '%'
params: 'last("huawei.5300.v5[hwInfoStoragePoolSubscribedCapacity, \"{#NAME}\"]")/last("huawei.5300.v5[hwInfoStoragePoolTotalCapacity, \"{#NAME}\"]")*100'
description: 'Used capacity of a storage pool in percents.'
- application_prototypes:
+ tags:
-
- name: 'Pool {#MODEL}'
+ tag: Application
+ value: 'Pool {#MODEL}'
trigger_prototypes:
-
expression: '{min({$HUAWEI.5300.POOL.CAPACITY.THRESH.TIME})}>{#THRESHOLD}'
@@ -972,9 +1008,6 @@ zabbix_export:
description: |
Health status of a storage pool. For details, see definition of Enum Values (HEALTH_STATUS_E).
https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference
- application_prototypes:
- -
- name: 'Pool {#NAME}'
valuemap:
name: 'Huawei storage: Health status'
preprocessing:
@@ -982,6 +1015,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Pool {#NAME}'
trigger_prototypes:
-
expression: '{last()}<>1'
@@ -996,9 +1033,6 @@ zabbix_export:
description: |
Operating status of a storage pool. For details, see definition of Enum Values (RUNNING_STATUS_E).
https://support.huawei.com/enterprise/en/centralized-storage/oceanstor-5300-v5-pid-22462029?category=reference-guides&subcategory=mib-reference
- application_prototypes:
- -
- name: 'Pool {#MODEL}'
valuemap:
name: 'Huawei storage: Running status'
preprocessing:
@@ -1006,6 +1040,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 6h
+ tags:
+ -
+ tag: Application
+ value: 'Pool {#MODEL}'
trigger_prototypes:
-
expression: '{last()}<>27'
@@ -1019,14 +1057,15 @@ zabbix_export:
history: 7d
units: B
description: 'Used capacity of a storage pool.'
- application_prototypes:
- -
- name: 'Pool {#MODEL}'
preprocessing:
-
type: MULTIPLIER
parameters:
- '1048576'
+ tags:
+ -
+ tag: Application
+ value: 'Pool {#MODEL}'
-
name: 'Pool {#NAME}: Capacity total'
type: SNMP_AGENT
@@ -1035,9 +1074,6 @@ zabbix_export:
history: 7d
units: B
description: 'Total capacity of a storage pool.'
- application_prototypes:
- -
- name: 'Pool {#MODEL}'
preprocessing:
-
type: MULTIPLIER
@@ -1047,6 +1083,10 @@ zabbix_export:
type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 10m
+ tags:
+ -
+ tag: Application
+ value: 'Pool {#MODEL}'
graph_prototypes:
-
name: 'Pool {#NAME}: Capacity'
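Every hunk above follows the same mechanical pattern: the per-item `applications` / `application_prototypes` list is removed and an `Application` tag carrying the old application name is added instead. For anyone migrating custom templates the same way, here is a minimal conversion sketch, assuming PyYAML and the 5.4 export layout shown in this diff; the input file name is hypothetical, and the key order in the output will differ from the hand-edited files above.

```python
import yaml

def apps_to_tags(entity: dict) -> None:
    """Replace 'applications'/'application_prototypes' with Application tags, in place."""
    for key in ("applications", "application_prototypes"):
        for app in entity.pop(key, None) or []:
            entity.setdefault("tags", []).append(
                {"tag": "Application", "value": app["name"]}
            )

# Hypothetical file name; point this at your own 5.2-style export.
with open("template_custom_export.yaml") as fh:
    export = yaml.safe_load(fh)

for template in export["zabbix_export"]["templates"]:
    for item in template.get("items", []):
        apps_to_tags(item)
    for rule in template.get("discovery_rules", []):
        for proto in rule.get("item_prototypes", []):
            apps_to_tags(proto)

with open("template_custom_export_tags.yaml", "w") as fh:
    yaml.safe_dump(export, fh, sort_keys=False)
```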
diff --git a/templates/san/netapp_aff_a700_http/template_san_netapp_aff_a700_http.yaml b/templates/san/netapp_aff_a700_http/template_san_netapp_aff_a700_http.yaml
index 4deab99300e..9cd746ec62f 100644
--- a/templates/san/netapp_aff_a700_http/template_san_netapp_aff_a700_http.yaml
+++ b/templates/san/netapp_aff_a700_http/template_san_netapp_aff_a700_http.yaml
@@ -1,6 +1,6 @@
zabbix_export:
version: '5.4'
- date: '2021-03-05T15:51:28Z'
+ date: '2021-04-22T11:28:43Z'
groups:
-
name: Templates/SAN
@@ -17,11 +17,6 @@ zabbix_export:
groups:
-
name: Templates/SAN
- applications:
- -
- name: General
- -
- name: 'Zabbix raw items'
items:
-
name: 'Get chassis'
@@ -33,11 +28,12 @@ zabbix_export:
authtype: BASIC
username: '{$USERNAME}'
password: '{$PASSWORD}'
- applications:
- -
- name: 'Zabbix raw items'
timeout: '{$HTTP.AGENT.TIMEOUT}'
url: '{$URL}/api/cluster/chassis?fields=id,state'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Get cluster'
type: HTTP_AGENT
@@ -48,11 +44,12 @@ zabbix_export:
authtype: BASIC
username: '{$USERNAME}'
password: '{$PASSWORD}'
- applications:
- -
- name: 'Zabbix raw items'
timeout: '{$HTTP.AGENT.TIMEOUT}'
url: '{$URL}/api/cluster'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Cluster location'
type: DEPENDENT
@@ -62,9 +59,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'The location of the cluster.'
- applications:
- -
- name: General
preprocessing:
-
type: JSONPATH
@@ -76,6 +70,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Cluster name'
type: DEPENDENT
@@ -85,9 +83,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'The name of the cluster.'
- applications:
- -
- name: General
preprocessing:
-
type: JSONPATH
@@ -99,6 +94,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Cluster IOPS, other rate'
type: DEPENDENT
@@ -108,9 +107,6 @@ zabbix_export:
value_type: FLOAT
units: '!iops'
description: 'The number of I/O operations observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.'
- applications:
- -
- name: General
preprocessing:
-
type: JSONPATH
@@ -122,6 +118,10 @@ zabbix_export:
- ''
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Cluster IOPS, read rate'
type: DEPENDENT
@@ -131,9 +131,6 @@ zabbix_export:
value_type: FLOAT
units: '!iops'
description: 'The number of I/O operations observed at the storage object. Performance metric for read I/O operations.'
- applications:
- -
- name: General
preprocessing:
-
type: JSONPATH
@@ -145,6 +142,10 @@ zabbix_export:
- ''
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Cluster IOPS, total rate'
type: DEPENDENT
@@ -154,9 +155,6 @@ zabbix_export:
value_type: FLOAT
units: '!iops'
description: 'The number of I/O operations observed at the storage object. Performance metric aggregated over all types of I/O operations.'
- applications:
- -
- name: General
preprocessing:
-
type: JSONPATH
@@ -168,6 +166,10 @@ zabbix_export:
- ''
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Cluster IOPS, write rate'
type: DEPENDENT
@@ -176,10 +178,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '!iops'
- description: 'The number of I/O operations observed at the storage object. Performance metric for write I/O operations.'
- applications:
- -
- name: General
+ description: 'The number of I/O operations observed at the storage object. Performance metric for write I/O operations.'
preprocessing:
-
type: JSONPATH
@@ -191,6 +190,10 @@ zabbix_export:
- ''
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Cluster IOPS raw, other'
type: DEPENDENT
@@ -199,9 +202,6 @@ zabbix_export:
history: 7d
units: '!iops'
description: 'The number of I/O operations observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: JSONPATH
@@ -209,6 +209,10 @@ zabbix_export:
- $.statistics.iops_raw.other
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Cluster IOPS raw, read'
type: DEPENDENT
@@ -217,9 +221,6 @@ zabbix_export:
history: 7d
units: '!iops'
description: 'The number of I/O operations observed at the storage object. Performance metric for read I/O operations.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: JSONPATH
@@ -227,6 +228,10 @@ zabbix_export:
- $.statistics.iops_raw.read
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Cluster IOPS raw, total'
type: DEPENDENT
@@ -235,9 +240,6 @@ zabbix_export:
history: 7d
units: '!iops'
description: 'The number of I/O operations observed at the storage object. Performance metric aggregated over all types of I/O operations.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: JSONPATH
@@ -245,6 +247,10 @@ zabbix_export:
- $.statistics.iops_raw.total
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Cluster IOPS raw, write'
type: DEPENDENT
@@ -252,10 +258,7 @@ zabbix_export:
delay: '0'
history: 7d
units: '!iops'
- description: 'The number of I/O operations observed at the storage object. Performance metric for write I/O operations.'
- applications:
- -
- name: 'Zabbix raw items'
+ description: 'The number of I/O operations observed at the storage object. Performance metric for write I/O operations.'
preprocessing:
-
type: JSONPATH
@@ -263,6 +266,10 @@ zabbix_export:
- $.statistics.iops_raw.write
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Cluster latency, other'
type: CALCULATED
@@ -275,9 +282,10 @@ zabbix_export:
(last(netapp.cluster.statistics.iops_raw.other) - prev(netapp.cluster.statistics.iops_raw.other) +
(last(netapp.cluster.statistics.iops_raw.other) - prev(netapp.cluster.statistics.iops_raw.other) = 0) ) * 0.001
description: 'The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.'
- applications:
+ tags:
-
- name: General
+ tag: Application
+ value: General
-
name: 'Cluster latency, read'
type: CALCULATED
@@ -290,9 +298,10 @@ zabbix_export:
( last(netapp.cluster.statistics.iops_raw.read) - prev(netapp.cluster.statistics.iops_raw.read) +
(last(netapp.cluster.statistics.iops_raw.read) - prev(netapp.cluster.statistics.iops_raw.read) = 0) ) * 0.001
description: 'The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for read I/O operations.'
- applications:
+ tags:
-
- name: General
+ tag: Application
+ value: General
-
name: 'Cluster latency, total'
type: CALCULATED
@@ -305,9 +314,10 @@ zabbix_export:
( last(netapp.cluster.statistics.iops_raw.total) - prev(netapp.cluster.statistics.iops_raw.total) +
(last(netapp.cluster.statistics.iops_raw.total) - prev(netapp.cluster.statistics.iops_raw.total) = 0) ) * 0.001
description: 'The average latency per I/O operation in milliseconds observed at the storage object. Performance metric aggregated over all types of I/O operations.'
- applications:
+ tags:
-
- name: General
+ tag: Application
+ value: General
-
name: 'Cluster latency, write'
type: CALCULATED
@@ -319,10 +329,11 @@ zabbix_export:
(last(netapp.cluster.statistics.latency_raw.write) - prev(netapp.cluster.statistics.latency_raw.write)) /
( last(netapp.cluster.statistics.iops_raw.write) - prev(netapp.cluster.statistics.iops_raw.write) +
(last(netapp.cluster.statistics.iops_raw.write) - prev(netapp.cluster.statistics.iops_raw.write) = 0) ) * 0.001
- description: 'The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for write I/O operations.'
- applications:
+ description: 'The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for write I/O operations.'
+ tags:
-
- name: General
+ tag: Application
+ value: General
-
name: 'Cluster latency raw, other'
type: DEPENDENT
@@ -331,9 +342,6 @@ zabbix_export:
history: 7d
units: '!mcs'
description: 'The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: JSONPATH
@@ -341,6 +349,10 @@ zabbix_export:
- $.statistics.latency_raw.other
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Cluster latency raw, read'
type: DEPENDENT
@@ -349,9 +361,6 @@ zabbix_export:
history: 7d
units: '!mcs'
description: 'The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Performance metric for read I/O operations.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: JSONPATH
@@ -359,6 +368,10 @@ zabbix_export:
- $.statistics.latency_raw.read
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Cluster latency raw, total'
type: DEPENDENT
@@ -367,9 +380,6 @@ zabbix_export:
history: 7d
units: '!mcs'
description: 'The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Performance metric aggregated over all types of I/O operations.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: JSONPATH
@@ -377,6 +387,10 @@ zabbix_export:
- $.statistics.latency_raw.total
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Cluster latency raw, write'
type: DEPENDENT
@@ -384,10 +398,7 @@ zabbix_export:
delay: '0'
history: 7d
units: '!mcs'
- description: 'The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Performance metric for write I/O operations.'
- applications:
- -
- name: 'Zabbix raw items'
+ description: 'The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Performance metric for write I/O operations.'
preprocessing:
-
type: JSONPATH
@@ -395,6 +406,10 @@ zabbix_export:
- $.statistics.latency_raw.write
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Cluster throughput, other rate'
type: DEPENDENT
@@ -404,9 +419,6 @@ zabbix_export:
value_type: FLOAT
units: Bps
description: 'Throughput bytes observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.'
- applications:
- -
- name: General
preprocessing:
-
type: JSONPATH
@@ -418,6 +430,10 @@ zabbix_export:
- ''
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Cluster throughput, read rate'
type: DEPENDENT
@@ -427,9 +443,6 @@ zabbix_export:
value_type: FLOAT
units: Bps
description: 'Throughput bytes observed at the storage object. Performance metric for read I/O operations.'
- applications:
- -
- name: General
preprocessing:
-
type: JSONPATH
@@ -441,6 +454,10 @@ zabbix_export:
- ''
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Cluster throughput, total rate'
type: DEPENDENT
@@ -450,9 +467,6 @@ zabbix_export:
value_type: FLOAT
units: Bps
description: 'Throughput bytes observed at the storage object. Performance metric aggregated over all types of I/O operations.'
- applications:
- -
- name: General
preprocessing:
-
type: JSONPATH
@@ -464,6 +478,10 @@ zabbix_export:
- ''
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Cluster throughput, write rate'
type: DEPENDENT
@@ -472,10 +490,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: Bps
- description: 'Throughput bytes observed at the storage object. Performance metric for write I/O operations.'
- applications:
- -
- name: General
+ description: 'Throughput bytes observed at the storage object. Performance metric for write I/O operations.'
preprocessing:
-
type: JSONPATH
@@ -487,6 +502,10 @@ zabbix_export:
- ''
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: General
-
name: 'Cluster status'
type: DEPENDENT
@@ -496,9 +515,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'The status of the cluster: ok, error, partial_no_data, partial_no_response, partial_other_error, negative_delta, backfilled_data, inconsistent_delta_time, inconsistent_old_data.'
- applications:
- -
- name: General
preprocessing:
-
type: JSONPATH
@@ -510,6 +526,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: General
triggers:
-
expression: '({last()}<>"ok")'
@@ -525,9 +545,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'This returns the cluster version information. When the cluster has more than one node, the cluster version is equivalent to the lowest of generation, major, and minor versions on all nodes.'
- applications:
- -
- name: General
preprocessing:
-
type: JSONPATH
@@ -539,6 +556,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.cluster.get
+ tags:
+ -
+ tag: Application
+ value: General
triggers:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -556,11 +577,12 @@ zabbix_export:
authtype: BASIC
username: '{$USERNAME}'
password: '{$PASSWORD}'
- applications:
- -
- name: 'Zabbix raw items'
timeout: '{$HTTP.AGENT.TIMEOUT}'
url: '{$URL}/api/storage/disks?fields=state,node.name'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Get FRUs'
type: HTTP_AGENT
@@ -571,9 +593,6 @@ zabbix_export:
authtype: BASIC
username: '{$USERNAME}'
password: '{$PASSWORD}'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: JAVASCRIPT
@@ -591,6 +610,10 @@ zabbix_export:
return JSON.stringify(result);
timeout: '{$HTTP.AGENT.TIMEOUT}'
url: '{$URL}/api/cluster/chassis?fields=id,frus.id,frus.state'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Get LUNs'
type: HTTP_AGENT
@@ -601,11 +624,12 @@ zabbix_export:
authtype: BASIC
username: '{$USERNAME}'
password: '{$PASSWORD}'
- applications:
- -
- name: 'Zabbix raw items'
timeout: '{$HTTP.AGENT.TIMEOUT}'
url: '{$URL}/api/storage/luns?fields=name,svm.name,space.size,space.used,status.state,status.container_state'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Get nodes'
type: HTTP_AGENT
@@ -616,13 +640,14 @@ zabbix_export:
authtype: BASIC
username: '{$USERNAME}'
password: '{$PASSWORD}'
- applications:
- -
- name: 'Zabbix raw items'
timeout: '{$HTTP.AGENT.TIMEOUT}'
url: '{$URL}/api/cluster/nodes?fields=*'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
- name: 'Get ethernet ports'
+ name: 'Get ethernet ports'
type: HTTP_AGENT
key: netapp.ports.eth.get
history: '0'
@@ -631,11 +656,12 @@ zabbix_export:
authtype: BASIC
username: '{$USERNAME}'
password: '{$PASSWORD}'
- applications:
- -
- name: 'Zabbix raw items'
timeout: '{$HTTP.AGENT.TIMEOUT}'
url: '{$URL}/api/network/ethernet/ports?fields=name,type,node.name,broadcast_domain.name,enabled,state,mtu,speed'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Get FC ports'
type: HTTP_AGENT
@@ -646,11 +672,12 @@ zabbix_export:
authtype: BASIC
username: '{$USERNAME}'
password: '{$PASSWORD}'
- applications:
- -
- name: 'Zabbix raw items'
timeout: '{$HTTP.AGENT.TIMEOUT}'
url: '{$URL}/api/network/fc/ports?fields=name,node.name,description,enabled,fabric.switch_port,state'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Get SVMs'
type: HTTP_AGENT
@@ -661,11 +688,12 @@ zabbix_export:
authtype: BASIC
username: '{$USERNAME}'
password: '{$PASSWORD}'
- applications:
- -
- name: 'Zabbix raw items'
timeout: '{$HTTP.AGENT.TIMEOUT}'
url: '{$URL}/api/svm/svms?fields=name,state,comment'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: 'Get volumes'
type: HTTP_AGENT
@@ -676,11 +704,12 @@ zabbix_export:
authtype: BASIC
username: '{$USERNAME}'
password: '{$PASSWORD}'
- applications:
- -
- name: 'Zabbix raw items'
timeout: '{$HTTP.AGENT.TIMEOUT}'
url: '{$URL}/api/storage/volumes?fields=name,comment,state,type,svm.name,space.size,space.available,space.used,statistics'
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
discovery_rules:
-
name: 'Chassis discovery'
@@ -700,9 +729,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'The chassis state: ok, error.'
- application_prototypes:
- -
- name: 'Chassis "{#ID}"'
preprocessing:
-
type: JSONPATH
@@ -714,6 +740,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.chassis.get
+ tags:
+ -
+ tag: Application
+ value: 'Chassis "{#ID}"'
trigger_prototypes:
-
expression: '({diff()}=1 and {last()}="error")'
@@ -755,9 +785,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'The state of the disk. Possible values: broken, copy, maintenance, partner, pending, present, reconstructing, removed, spare, unfail, zeroing'
- application_prototypes:
- -
- name: 'Node "{#NODENAME}" disks'
preprocessing:
-
type: JSONPATH
@@ -769,6 +796,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.disks.get
+ tags:
+ -
+ tag: Application
+ value: 'Node "{#NODENAME}" disks'
trigger_prototypes:
-
expression: '({diff()}=1 and {last()}<>"present")'
@@ -807,9 +838,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'The FRU state: ok, error.'
- application_prototypes:
- -
- name: 'Chassis "{#CHASSISID}"'
preprocessing:
-
type: JSONPATH
@@ -821,6 +849,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.frus.get
+ tags:
+ -
+ tag: Application
+ value: 'Chassis "{#CHASSISID}"'
trigger_prototypes:
-
expression: '({diff()}=1 and {last()}="error")'
@@ -865,9 +897,6 @@ zabbix_export:
history: 7d
units: B
description: 'The total provisioned size of the LUN.'
- application_prototypes:
- -
- name: 'SVM "{#SVMNAME}"'
preprocessing:
-
type: JSONPATH
@@ -879,6 +908,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.luns.get
+ tags:
+ -
+ tag: Application
+ value: 'SVM "{#SVMNAME}"'
-
name: '{#LUNNAME}: Space used'
type: DEPENDENT
@@ -887,9 +920,6 @@ zabbix_export:
history: 7d
units: B
description: 'The amount of space consumed by the main data stream of the LUN.'
- application_prototypes:
- -
- name: 'SVM "{#SVMNAME}"'
preprocessing:
-
type: JSONPATH
@@ -901,6 +931,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.luns.get
+ tags:
+ -
+ tag: Application
+ value: 'SVM "{#SVMNAME}"'
-
name: '{#LUNNAME}: Container state'
type: DEPENDENT
@@ -910,9 +944,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'The state of the volume and aggregate that contain the LUN: online, aggregate_offline, volume_offline. LUNs are only available when their containers are available.'
- application_prototypes:
- -
- name: 'SVM "{#SVMNAME}"'
preprocessing:
-
type: JSONPATH
@@ -924,6 +955,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.luns.get
+ tags:
+ -
+ tag: Application
+ value: 'SVM "{#SVMNAME}"'
trigger_prototypes:
-
expression: '({diff()}=1 and {last()}<>"online")'
@@ -942,9 +977,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'The state of the LUN. Normal states for a LUN are online and offline. Other states indicate errors. Possible values: foreign_lun_error, nvfail, offline, online, space_error.'
- application_prototypes:
- -
- name: 'SVM "{#SVMNAME}"'
preprocessing:
-
type: JSONPATH
@@ -956,6 +988,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.luns.get
+ tags:
+ -
+ tag: Application
+ value: 'SVM "{#SVMNAME}"'
trigger_prototypes:
-
expression: '({diff()}=1 and {last()}<>"online")'
@@ -997,9 +1033,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'This returns the cluster version information. When the cluster has more than one node, the cluster version is equivalent to the lowest of generation, major, and minor versions on all nodes.'
- application_prototypes:
- -
- name: 'Node "{#NODENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1011,6 +1044,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.nodes.get
+ tags:
+ -
+ tag: Application
+ value: 'Node "{#NODENAME}"'
trigger_prototypes:
-
expression: '{diff()}=1 and {strlen()}>0'
@@ -1027,9 +1064,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'Specifies whether the hardware is currently operating outside of its recommended temperature range. The hardware shuts down if the temperature exceeds critical thresholds. Possible values: over, normal'
- application_prototypes:
- -
- name: 'Node "{#NODENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1041,6 +1075,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.nodes.get
+ tags:
+ -
+ tag: Application
+ value: 'Node "{#NODENAME}"'
trigger_prototypes:
-
expression: '({last()}<>"normal")'
@@ -1056,9 +1094,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'The location of the node.'
- application_prototypes:
- -
- name: 'Node "{#NODENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1070,6 +1105,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.nodes.get
+ tags:
+ -
+ tag: Application
+ value: 'Node "{#NODENAME}"'
-
name: '{#NODENAME}: Membership'
type: DEPENDENT
@@ -1083,9 +1122,6 @@ zabbix_export:
available - If a node is available, this means it is detected on the internal cluster network and can be added to the cluster. Nodes that have a membership of “available” are not returned when a GET request is called when the cluster exists. A query on the “membership” property for available must be provided to scan for nodes on the cluster network. Nodes that have a membership of “available” are returned automatically before a cluster is created.
joining - Joining nodes are in the process of being added to the cluster. The node may be progressing through the steps to become a member or might have failed. The job to add the node or create the cluster provides details on the current progress of the node.
member - Nodes that are members have successfully joined the cluster.
- application_prototypes:
- -
- name: 'Node "{#NODENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1097,6 +1133,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.nodes.get
+ tags:
+ -
+ tag: Application
+ value: 'Node "{#NODENAME}"'
-
name: '{#NODENAME}: State'
type: DEPENDENT
@@ -1114,9 +1154,6 @@ zabbix_export:
waiting_for_giveback - Node has been taken over by its HA partner and is waiting for the HA partner to giveback disks.
degraded - Node has one or more critical services offline.
unknown - Node or its HA partner cannot be contacted and there is no information on the node’s state.
- application_prototypes:
- -
- name: 'Node "{#NODENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1128,6 +1165,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.nodes.get
+ tags:
+ -
+ tag: Application
+ value: 'Node "{#NODENAME}"'
trigger_prototypes:
-
expression: '({last()}<>"up")'
@@ -1149,9 +1190,6 @@ zabbix_export:
history: 7d
units: s
description: 'The total time, in seconds, that the node has been up.'
- application_prototypes:
- -
- name: 'Node "{#NODENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1159,6 +1197,10 @@ zabbix_export:
- '$.records[?(@.name==''{#NODENAME}'')].uptime.first()'
master_item:
key: netapp.nodes.get
+ tags:
+ -
+ tag: Application
+ value: 'Node "{#NODENAME}"'
trigger_prototypes:
-
expression: '{last()}<10m'
@@ -1198,9 +1240,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'The operational state of the port. Possible values: up, down.'
- application_prototypes:
- -
- name: 'Node "{#NODENAME}" Ethernet ports'
preprocessing:
-
type: JSONPATH
@@ -1212,6 +1251,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.ports.eth.get
+ tags:
+ -
+ tag: Application
+ value: 'Node "{#NODENAME}" Ethernet ports'
trigger_prototypes:
-
expression: '({diff()}=1 and {last()}="down")'
@@ -1253,9 +1296,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'A description of the FC port.'
- application_prototypes:
- -
- name: 'Node "{#NODENAME}" FC ports'
preprocessing:
-
type: JSONPATH
@@ -1267,6 +1307,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.ports.fc.get
+ tags:
+ -
+ tag: Application
+ value: 'Node "{#NODENAME}" FC ports'
-
name: '{#FCPORTNAME}: State'
type: DEPENDENT
@@ -1284,9 +1328,6 @@ zabbix_export:
offlined_by_user - The port is administratively disabled.
offlined_by_system - The port is set to offline by the system. This happens when the port encounters too many errors.
node_offline - The state information for the port cannot be retrieved. The node is offline or inaccessible.
- application_prototypes:
- -
- name: 'Node "{#NODENAME}" FC ports'
preprocessing:
-
type: JSONPATH
@@ -1298,6 +1339,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.ports.fc.get
+ tags:
+ -
+ tag: Application
+ value: 'Node "{#NODENAME}" FC ports'
trigger_prototypes:
-
expression: '({diff()}=1 and {last()}<>"online")'
@@ -1339,9 +1384,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'The comment for the SVM.'
- application_prototypes:
- -
- name: 'SVM "{#SVMNAME}"'
preprocessing:
-
type: JSONPATH
@@ -1353,6 +1395,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.svms.get
+ tags:
+ -
+ tag: Application
+ value: 'SVM "{#SVMNAME}"'
-
name: '{#SVMNAME}: State'
type: DEPENDENT
@@ -1362,9 +1408,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'SVM state: starting, running, stopping, stopped, deleting.'
- application_prototypes:
- -
- name: 'SVM "{#SVMNAME}"'
preprocessing:
-
type: JSONPATH
@@ -1376,6 +1419,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.svms.get
+ tags:
+ -
+ tag: Application
+ value: 'SVM "{#SVMNAME}"'
trigger_prototypes:
-
expression: '({diff()}=1 and {last()}<>"running")'
@@ -1417,9 +1464,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'A comment for the volume.'
- application_prototypes:
- -
- name: 'Volume "{#VOLUMENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1431,6 +1475,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: Available size'
type: DEPENDENT
@@ -1439,9 +1487,6 @@ zabbix_export:
history: 7d
units: B
description: 'The available space, in bytes.'
- application_prototypes:
- -
- name: 'Volume "{#VOLUMENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1453,6 +1498,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: Space size'
type: DEPENDENT
@@ -1461,9 +1510,6 @@ zabbix_export:
history: 7d
units: B
description: 'Total provisioned size. The default size is equal to the minimum size of 20MB, in bytes.'
- application_prototypes:
- -
- name: 'Volume "{#VOLUMENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1475,6 +1521,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: Used size'
type: DEPENDENT
@@ -1483,9 +1533,6 @@ zabbix_export:
history: 7d
units: B
description: 'The virtual space used (includes volume reserves) before storage efficiency, in bytes.'
- application_prototypes:
- -
- name: 'Volume "{#VOLUMENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1497,6 +1544,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: State'
type: DEPENDENT
@@ -1506,9 +1557,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'Volume state. A volume can only be brought online if it is offline. Taking a volume offline removes its junction path. The ‘mixed’ state applies to FlexGroup volumes only and cannot be specified as a target state. An ‘error’ state implies that the volume is not in a state to serve data.'
- application_prototypes:
- -
- name: 'Volume "{#VOLUMENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1520,6 +1568,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
trigger_prototypes:
-
expression: '({diff()}=1 and {last()}<>"online")'
@@ -1538,9 +1590,6 @@ zabbix_export:
value_type: FLOAT
units: '!iops'
description: 'The number of I/O operations observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.'
- application_prototypes:
- -
- name: 'Volume "{#VOLUMENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1552,6 +1601,10 @@ zabbix_export:
- ''
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: Volume IOPS, read rate'
type: DEPENDENT
@@ -1561,9 +1614,6 @@ zabbix_export:
value_type: FLOAT
units: '!iops'
description: 'The number of I/O operations observed at the storage object. Performance metric for read I/O operations.'
- application_prototypes:
- -
- name: 'Volume "{#VOLUMENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1575,6 +1625,10 @@ zabbix_export:
- ''
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: Volume IOPS, total rate'
type: DEPENDENT
@@ -1584,9 +1638,6 @@ zabbix_export:
value_type: FLOAT
units: '!iops'
description: 'The number of I/O operations observed at the storage object. Performance metric aggregated over all types of I/O operations.'
- application_prototypes:
- -
- name: 'Volume "{#VOLUMENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1598,6 +1649,10 @@ zabbix_export:
- ''
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: Volume IOPS, write rate'
type: DEPENDENT
@@ -1606,10 +1661,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '!iops'
- description: 'The number of I/O operations observed at the storage object. Performance metric for write I/O operations.'
- application_prototypes:
- -
- name: 'Volume "{#VOLUMENAME}"'
+ description: 'The number of I/O operations observed at the storage object. Performance metric for write I/O operations.'
preprocessing:
-
type: JSONPATH
@@ -1621,6 +1673,10 @@ zabbix_export:
- ''
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: Volume IOPS raw, other'
type: DEPENDENT
@@ -1629,9 +1685,6 @@ zabbix_export:
history: 7d
units: '!iops'
description: 'The number of I/O operations observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: JSONPATH
@@ -1639,6 +1692,10 @@ zabbix_export:
- '$.records[?(@.name==''{#VOLUMENAME}'')].statistics.iops_raw.other.first()'
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: '{#VOLUMENAME}: Volume IOPS raw, read'
type: DEPENDENT
@@ -1647,9 +1704,6 @@ zabbix_export:
history: 7d
units: '!iops'
description: 'The number of I/O operations observed at the storage object. Performance metric for read I/O operations.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: JSONPATH
@@ -1657,6 +1711,10 @@ zabbix_export:
- '$.records[?(@.name==''{#VOLUMENAME}'')].statistics.iops_raw.read.first()'
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: '{#VOLUMENAME}: Volume IOPS raw, total'
type: DEPENDENT
@@ -1665,9 +1723,6 @@ zabbix_export:
history: 7d
units: '!iops'
description: 'The number of I/O operations observed at the storage object. Performance metric aggregated over all types of I/O operations.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: JSONPATH
@@ -1675,6 +1730,10 @@ zabbix_export:
- '$.records[?(@.name==''{#VOLUMENAME}'')].statistics.iops_raw.total.first()'
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: '{#VOLUMENAME}: Volume IOPS raw, write'
type: DEPENDENT
@@ -1682,10 +1741,7 @@ zabbix_export:
delay: '0'
history: 7d
units: '!iops'
- description: 'The number of I/O operations observed at the storage object. Performance metric for write I/O operations.'
- applications:
- -
- name: 'Zabbix raw items'
+ description: 'The number of I/O operations observed at the storage object. Performance metric for write I/O operations.'
preprocessing:
-
type: JSONPATH
@@ -1693,6 +1749,10 @@ zabbix_export:
- '$.records[?(@.name==''{#VOLUMENAME}'')].statistics.iops_raw.write.first()'
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: '{#VOLUMENAME}: Volume latency, other'
type: CALCULATED
@@ -1705,9 +1765,10 @@ zabbix_export:
( last(netapp.volume.statistics.iops_raw.other[{#VOLUMENAME}]) - prev(netapp.volume.statistics.iops_raw.other[{#VOLUMENAME}]) +
(last(netapp.volume.statistics.iops_raw.other[{#VOLUMENAME}]) - prev(netapp.volume.statistics.iops_raw.other[{#VOLUMENAME}]) = 0) ) * 0.001
description: 'The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.'
- application_prototypes:
+ tags:
-
- name: 'Volume "{#VOLUMENAME}"'
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: Volume latency, read'
type: CALCULATED
@@ -1720,9 +1781,10 @@ zabbix_export:
( last(netapp.volume.statistics.iops_raw.read[{#VOLUMENAME}]) - prev(netapp.volume.statistics.iops_raw.read[{#VOLUMENAME}]) +
(last(netapp.volume.statistics.iops_raw.read[{#VOLUMENAME}]) - prev(netapp.volume.statistics.iops_raw.read[{#VOLUMENAME}]) = 0)) * 0.001
description: 'The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for read I/O operations.'
- application_prototypes:
+ tags:
-
- name: 'Volume "{#VOLUMENAME}"'
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: Volume latency, total'
type: CALCULATED
@@ -1735,9 +1797,10 @@ zabbix_export:
( last(netapp.volume.statistics.iops_raw.total[{#VOLUMENAME}]) - prev(netapp.volume.statistics.iops_raw.total[{#VOLUMENAME}]) +
(last(netapp.volume.statistics.iops_raw.total[{#VOLUMENAME}]) - prev(netapp.volume.statistics.iops_raw.total[{#VOLUMENAME}]) = 0) ) * 0.001
description: 'The average latency per I/O operation in milliseconds observed at the storage object. Performance metric aggregated over all types of I/O operations.'
- application_prototypes:
+ tags:
-
- name: 'Volume "{#VOLUMENAME}"'
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: Volume latency, write'
type: CALCULATED
@@ -1750,9 +1813,10 @@ zabbix_export:
( last(netapp.volume.statistics.iops_raw.write[{#VOLUMENAME}]) - prev(netapp.volume.statistics.iops_raw.write[{#VOLUMENAME}]) +
(last(netapp.volume.statistics.iops_raw.write[{#VOLUMENAME}]) - prev(netapp.volume.statistics.iops_raw.write[{#VOLUMENAME}]) = 0) ) * 0.001
description: 'The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for write I/O operations.'
- application_prototypes:
+ tags:
-
- name: 'Volume "{#VOLUMENAME}"'
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: Volume latency raw, other'
type: DEPENDENT
@@ -1761,9 +1825,6 @@ zabbix_export:
history: 7d
units: '!mcs'
description: 'The raw latency in microseconds observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: JSONPATH
@@ -1771,6 +1832,10 @@ zabbix_export:
- '$.records[?(@.name==''{#VOLUMENAME}'')].statistics.latency_raw.other.first()'
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: '{#VOLUMENAME}: Volume latency raw, read'
type: DEPENDENT
@@ -1779,9 +1844,6 @@ zabbix_export:
history: 7d
units: '!mcs'
description: 'The raw latency in microseconds observed at the storage object. Performance metric for read I/O operations.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: JSONPATH
@@ -1789,6 +1851,10 @@ zabbix_export:
- '$.records[?(@.name==''{#VOLUMENAME}'')].statistics.latency_raw.read.first()'
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: '{#VOLUMENAME}: Volume latency raw, total'
type: DEPENDENT
@@ -1797,9 +1863,6 @@ zabbix_export:
history: 7d
units: '!mcs'
description: 'The raw latency in microseconds observed at the storage object. Performance metric aggregated over all types of I/O operations.'
- applications:
- -
- name: 'Zabbix raw items'
preprocessing:
-
type: JSONPATH
@@ -1807,6 +1870,10 @@ zabbix_export:
- '$.records[?(@.name==''{#VOLUMENAME}'')].statistics.latency_raw.total.first()'
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: '{#VOLUMENAME}: Volume latency raw, write'
type: DEPENDENT
@@ -1814,10 +1881,7 @@ zabbix_export:
delay: '0'
history: 7d
units: '!mcs'
- description: 'The raw latency in microseconds observed at the storage object. Performance metric for write I/O operations.'
- applications:
- -
- name: 'Zabbix raw items'
+ description: 'The raw latency in microseconds observed at the storage object. Performance metric for write I/O operations.'
preprocessing:
-
type: JSONPATH
@@ -1825,6 +1889,10 @@ zabbix_export:
- '$.records[?(@.name==''{#VOLUMENAME}'')].statistics.latency_raw.write.first()'
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Zabbix raw items'
-
name: '{#VOLUMENAME}: Volume throughput, other rate'
type: DEPENDENT
@@ -1834,9 +1902,6 @@ zabbix_export:
value_type: FLOAT
units: Bps
description: 'Throughput bytes observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.'
- application_prototypes:
- -
- name: 'Volume "{#VOLUMENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1848,6 +1913,10 @@ zabbix_export:
- ''
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: Volume throughput, read rate'
type: DEPENDENT
@@ -1857,9 +1926,6 @@ zabbix_export:
value_type: FLOAT
units: Bps
description: 'Throughput bytes observed at the storage object. Performance metric for read I/O operations.'
- application_prototypes:
- -
- name: 'Volume "{#VOLUMENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1871,6 +1937,10 @@ zabbix_export:
- ''
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: Volume throughput, total rate'
type: DEPENDENT
@@ -1880,9 +1950,6 @@ zabbix_export:
value_type: FLOAT
units: Bps
description: 'Throughput bytes observed at the storage object. Performance metric aggregated over all types of I/O operations.'
- application_prototypes:
- -
- name: 'Volume "{#VOLUMENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1894,6 +1961,10 @@ zabbix_export:
- ''
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: Volume throughput, write rate'
type: DEPENDENT
@@ -1902,10 +1973,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: Bps
- description: 'Throughput bytes observed at the storage object. Performance metric for write I/O operations.'
- application_prototypes:
- -
- name: 'Volume "{#VOLUMENAME}"'
+ description: 'Throughput bytes observed at the storage object. Performance metric for write I/O operations.'
preprocessing:
-
type: JSONPATH
@@ -1917,6 +1985,10 @@ zabbix_export:
- ''
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: SVM name'
type: DEPENDENT
@@ -1926,9 +1998,6 @@ zabbix_export:
trends: '0'
value_type: CHAR
description: 'The volume belongs to this SVM.'
- application_prototypes:
- -
- name: 'Volume "{#VOLUMENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1940,6 +2009,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
-
name: '{#VOLUMENAME}: Type'
type: DEPENDENT
@@ -1953,9 +2026,6 @@ zabbix_export:
rw ‐ read-write volume.
dp ‐ data-protection volume.
ls ‐ load-sharing dp volume.
- application_prototypes:
- -
- name: 'Volume "{#VOLUMENAME}"'
preprocessing:
-
type: JSONPATH
@@ -1967,6 +2037,10 @@ zabbix_export:
- 6h
master_item:
key: netapp.volumes.get
+ tags:
+ -
+ tag: Application
+ value: 'Volume "{#VOLUMENAME}"'
graph_prototypes:
-
name: '{#VOLUMENAME}: Volume latency'
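As an aside on the CALCULATED latency items above (both the cluster-level and the per-volume ones): they divide the delta of the raw latency counter by the delta of the raw IOPS counter, add `(delta = 0)` to the denominator so an idle interval does not divide by zero, and multiply by 0.001 to convert microseconds to milliseconds. A small illustrative sketch of that arithmetic, with invented sample values:

```python
def avg_latency_ms(latency_raw: tuple[int, int], iops_raw: tuple[int, int]) -> float:
    """latency_raw and iops_raw are (previous, last) samples of the raw counters."""
    d_latency = latency_raw[1] - latency_raw[0]   # microseconds spent on I/O in the interval
    d_iops = iops_raw[1] - iops_raw[0]            # operations completed in the interval
    d_iops += (d_iops == 0)                       # same guard as '+ (last - prev = 0)' in the params
    return d_latency / d_iops * 0.001             # microseconds per op -> milliseconds per op

# 500,000 extra microseconds over 1,000 new operations -> 0.5 ms per operation
assert avg_latency_ms((1_000_000, 1_500_000), (10_000, 11_000)) == 0.5
```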
diff --git a/templates/server/chassis_ipmi/README.md b/templates/server/chassis_ipmi/README.md
index fcf436eb731..e0fcd0b6a27 100644
--- a/templates/server/chassis_ipmi/README.md
+++ b/templates/server/chassis_ipmi/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
Template for monitoring servers with BMC over IPMI that work without any external scripts.
All metrics are collected at once, thanks to Zabbix's bulk data collection. The template is available starting from Zabbix version 5.0.
It collects metrics by polling BMC remotely using an IPMI agent.
@@ -15,7 +15,7 @@ This template was tested on:
## Setup
-> See [Zabbix template operation](https://www.zabbix.com/documentation/5.2/manual/config/templates_out_of_the_box/ipmi) for basic instructions.
+> See [Zabbix template operation](https://www.zabbix.com/documentation/5.4/manual/config/templates_out_of_the_box/ipmi) for basic instructions.
You can set {$IPMI.USER} and {$IPMI.PASSWORD} macros in the template for using on the host level.
@@ -26,12 +26,12 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$IPMI.PASSWORD} |<p>This macro is used for access to BMC. It can be overridden on the host or linked template level.</p> |`` |
-|{$IPMI.SENSOR_TYPE.MATCHES} |<p>This macro is used in sensors discovery. It can be overridden on the host or linked template level.</p> |`.*` |
-|{$IPMI.SENSOR_TYPE.NOT_MATCHES} |<p>This macro is used in sensors discovery. It can be overridden on the host or linked template level.</p> |`invalid` |
-|{$IPMI.USER} |<p>This macro is used for access to BMC. It can be overridden on the host or linked template level.</p> |`` |
+| Name | Description | Default |
+|---------------------------------|------------------------------------------------------------------------------------------------------------|-----------|
+| {$IPMI.PASSWORD} | <p>This macro is used for access to BMC. It can be overridden on the host or linked template level.</p> | `` |
+| {$IPMI.SENSOR_TYPE.MATCHES} | <p>This macro is used in sensors discovery. It can be overridden on the host or linked template level.</p> | `.*` |
+| {$IPMI.SENSOR_TYPE.NOT_MATCHES} | <p>This macro is used in sensors discovery. It can be overridden on the host or linked template level.</p> | `invalid` |
+| {$IPMI.USER} | <p>This macro is used for access to BMC. It can be overridden on the host or linked template level.</p> | `` |
## Template links
@@ -39,30 +39,30 @@ There are no template links in this template.
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Discrete sensors discovery |<p>Discovery of the discrete IPMI sensors.</p> |DEPENDENT |ipmi.discrete.discovery<p>**Filter**:</p>AND <p>- A: {#SENSOR_READING_TYPE} NOT_MATCHES_REGEX `threshold`</p><p>- B: {#SENSOR_TYPE} MATCHES_REGEX `{$IPMI.SENSOR_TYPE.MATCHES}`</p><p>- C: {#SENSOR_TYPE} NOT_MATCHES_REGEX `{$IPMI.SENSOR_TYPE.NOT_MATCHES}`</p> |
-|Threshold sensors discovery |<p>Discovery of the threshold IPMI sensors.</p> |DEPENDENT |ipmi.sensors.discovery<p>**Filter**:</p>AND <p>- A: {#SENSOR_READING_TYPE} MATCHES_REGEX `threshold`</p><p>- B: {#SENSOR_TYPE} MATCHES_REGEX `{$IPMI.SENSOR_TYPE.MATCHES}`</p><p>- C: {#SENSOR_TYPE} NOT_MATCHES_REGEX `{$IPMI.SENSOR_TYPE.NOT_MATCHES}`</p> |
+| Name | Description | Type | Key and additional info |
+|-----------------------------|-------------------------------------------------|-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| Discrete sensors discovery | <p>Discovery of the discrete IPMI sensors.</p> | DEPENDENT | ipmi.discrete.discovery<p>**Filter**:</p>AND <p>- A: {#SENSOR_READING_TYPE} NOT_MATCHES_REGEX `threshold`</p><p>- B: {#SENSOR_TYPE} MATCHES_REGEX `{$IPMI.SENSOR_TYPE.MATCHES}`</p><p>- C: {#SENSOR_TYPE} NOT_MATCHES_REGEX `{$IPMI.SENSOR_TYPE.NOT_MATCHES}`</p> |
+| Threshold sensors discovery | <p>Discovery of the threshold IPMI sensors.</p> | DEPENDENT | ipmi.sensors.discovery<p>**Filter**:</p>AND <p>- A: {#SENSOR_READING_TYPE} MATCHES_REGEX `threshold`</p><p>- B: {#SENSOR_TYPE} MATCHES_REGEX `{$IPMI.SENSOR_TYPE.MATCHES}`</p><p>- C: {#SENSOR_TYPE} NOT_MATCHES_REGEX `{$IPMI.SENSOR_TYPE.NOT_MATCHES}`</p> |
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|General |IPMI: {#SENSOR_ID} |<p>It is a state of the discrete IPMI sensor.</p> |DEPENDENT |ipmi.state_text[{#SENSOR_ID}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.id=='{#SENSOR_ID}')].state.text.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|General |IPMI: {#SENSOR_ID}, {#SENSOR_UNIT} |<p>It is a state of the threshold IPMI sensor.</p> |DEPENDENT |ipmi.value[{#SENSOR_ID}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.id=='{#SENSOR_ID}')].value.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
-|Zabbix_raw_items |Get IPMI sensors |<p>The master item that receives all sensors with values for LLD and dependent elements from BMC.</p> |IPMI |ipmi.get |
+| Group | Name | Description | Type | Key and additional info |
+|------------------|------------------------------------|-------------------------------------------------------------------------------------------------------|-----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| General | IPMI: {#SENSOR_ID} | <p>It is a state of the discrete IPMI sensor.</p> | DEPENDENT | ipmi.state_text[{#SENSOR_ID}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.id=='{#SENSOR_ID}')].state.text.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| General | IPMI: {#SENSOR_ID}, {#SENSOR_UNIT} | <p>It is a state of the threshold IPMI sensor.</p> | DEPENDENT | ipmi.value[{#SENSOR_ID}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.[?(@.id=='{#SENSOR_ID}')].value.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p> |
+| Zabbix_raw_items | Get IPMI sensors | <p>The master item that receives all sensors with values for LLD and dependent elements from BMC.</p> | IPMI | ipmi.get |
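For orientation, each dependent item in the table above extracts a single sensor from the JSON array returned by the `ipmi.get` master item. Roughly, in Python, with an invented sample payload (the real structure depends on the BMC):

```python
import json

# Invented sample of an ipmi.get result: an array of sensor objects with an id
# plus either a numeric value (threshold sensors) or a discrete state.
ipmi_get = json.dumps([
    {"id": "Ambient Temp", "value": 24.0},
    {"id": "PS1 Status", "state": {"text": "Presence detected"}},
])

def sensor_value(payload: str, sensor_id: str) -> float:
    # Equivalent of the JSONPATH step: $.[?(@.id=='{#SENSOR_ID}')].value.first()
    return next(s["value"] for s in json.loads(payload) if s["id"] == sensor_id)

print(sensor_value(ipmi_get, "Ambient Temp"))  # -> 24.0
```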
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|IPMI: {#SENSOR_ID} value has changed |<p>The trigger is informing about changes in a state of the discrete IPMI sensor. A problem generated by this trigger can be manually closed.</p> |`{TEMPLATE_NAME:ipmi.state_text[{#SENSOR_ID}].diff()}=1` |INFO |<p>Manual close: YES</p> |
-|IPMI: {#SENSOR_ID} value is below non-critical low (less than {#SENSOR_LO_WARN} for 5m) |<p>The trigger is informing that a value less than the lower non-critical threshold has been reached.</p> |`{TEMPLATE_NAME:ipmi.value[{#SENSOR_ID}].min(5m)}<{#SENSOR_LO_WARN}` |WARNING |<p>**Depends on**:</p><p>- IPMI: {#SENSOR_ID} value is below critical low (less than {#SENSOR_LO_CRIT} for 5m)</p><p>- IPMI: {#SENSOR_ID} value is below non-recoverable low (less than {#SENSOR_LO_DISAST} for 5m)</p> |
-|IPMI: {#SENSOR_ID} value is below critical low (less than {#SENSOR_LO_CRIT} for 5m) |<p>The trigger is informing that a value less than the lower critical threshold has been reached.</p> |`{TEMPLATE_NAME:ipmi.value[{#SENSOR_ID}].min(5m)}<{#SENSOR_LO_CRIT}` |HIGH |<p>**Depends on**:</p><p>- IPMI: {#SENSOR_ID} value is below non-recoverable low (less than {#SENSOR_LO_DISAST} for 5m)</p> |
-|IPMI: {#SENSOR_ID} value is below non-recoverable low (less than {#SENSOR_LO_DISAST} for 5m) |<p>The trigger is informing that a value less than the lower non-recoverable threshold has been reached.</p> |`{TEMPLATE_NAME:ipmi.value[{#SENSOR_ID}].min(5m)}<{#SENSOR_LO_DISAST}` |DISASTER | |
-|IPMI: {#SENSOR_ID} value is above non-critical high (greater than {#SENSOR_HI_WARN} for 5m) |<p>The trigger is informing that a value higher than the upper non-critical threshold has been reached.</p> |`{TEMPLATE_NAME:ipmi.value[{#SENSOR_ID}].min(5m)}>{#SENSOR_HI_WARN}` |WARNING |<p>**Depends on**:</p><p>- IPMI: {#SENSOR_ID} value is above critical high (greater than {#SENSOR_HI_CRIT} for 5m)</p><p>- IPMI: {#SENSOR_ID} value is above non-recoverable high (greater than {#SENSOR_HI_DISAST} for 5m)</p> |
-|IPMI: {#SENSOR_ID} value is above critical high (greater than {#SENSOR_HI_CRIT} for 5m) |<p>The trigger is informing that a value higher than the upper critical threshold has been reached.</p> |`{TEMPLATE_NAME:ipmi.value[{#SENSOR_ID}].min(5m)}>{#SENSOR_HI_CRIT}` |HIGH |<p>**Depends on**:</p><p>- IPMI: {#SENSOR_ID} value is above non-recoverable high (greater than {#SENSOR_HI_DISAST} for 5m)</p> |
-|IPMI: {#SENSOR_ID} value is above non-recoverable high (greater than {#SENSOR_HI_DISAST} for 5m) |<p>The trigger is informing that a value higher than the upper non-recoverable threshold has been reached.</p> |`{TEMPLATE_NAME:ipmi.value[{#SENSOR_ID}].min(5m)}>{#SENSOR_HI_DISAST}` |DISASTER | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|--------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| IPMI: {#SENSOR_ID} value has changed | <p>The trigger is informing about changes in a state of the discrete IPMI sensor. A problem generated by this trigger can be manually closed.</p> | `{TEMPLATE_NAME:ipmi.state_text[{#SENSOR_ID}].diff()}=1` | INFO | <p>Manual close: YES</p> |
+| IPMI: {#SENSOR_ID} value is below non-critical low (less than {#SENSOR_LO_WARN} for 5m) | <p>The trigger is informing that a value less than the lower non-critical threshold has been reached.</p> | `{TEMPLATE_NAME:ipmi.value[{#SENSOR_ID}].min(5m)}<{#SENSOR_LO_WARN}` | WARNING | <p>**Depends on**:</p><p>- IPMI: {#SENSOR_ID} value is below critical low (less than {#SENSOR_LO_CRIT} for 5m)</p><p>- IPMI: {#SENSOR_ID} value is below non-recoverable low (less than {#SENSOR_LO_DISAST} for 5m)</p> |
+| IPMI: {#SENSOR_ID} value is below critical low (less than {#SENSOR_LO_CRIT} for 5m) | <p>The trigger is informing that a value less than the lower critical threshold has been reached.</p> | `{TEMPLATE_NAME:ipmi.value[{#SENSOR_ID}].min(5m)}<{#SENSOR_LO_CRIT}` | HIGH | <p>**Depends on**:</p><p>- IPMI: {#SENSOR_ID} value is below non-recoverable low (less than {#SENSOR_LO_DISAST} for 5m)</p> |
+| IPMI: {#SENSOR_ID} value is below non-recoverable low (less than {#SENSOR_LO_DISAST} for 5m) | <p>The trigger is informing that a value less than the lower non-recoverable threshold has been reached.</p> | `{TEMPLATE_NAME:ipmi.value[{#SENSOR_ID}].min(5m)}<{#SENSOR_LO_DISAST}` | DISASTER | |
+| IPMI: {#SENSOR_ID} value is above non-critical high (greater than {#SENSOR_HI_WARN} for 5m) | <p>The trigger is informing that a value higher than the upper non-critical threshold has been reached.</p> | `{TEMPLATE_NAME:ipmi.value[{#SENSOR_ID}].min(5m)}>{#SENSOR_HI_WARN}` | WARNING | <p>**Depends on**:</p><p>- IPMI: {#SENSOR_ID} value is above critical high (greater than {#SENSOR_HI_CRIT} for 5m)</p><p>- IPMI: {#SENSOR_ID} value is above non-recoverable high (greater than {#SENSOR_HI_DISAST} for 5m)</p> |
+| IPMI: {#SENSOR_ID} value is above critical high (greater than {#SENSOR_HI_CRIT} for 5m) | <p>The trigger is informing that a value higher than the upper critical threshold has been reached.</p> | `{TEMPLATE_NAME:ipmi.value[{#SENSOR_ID}].min(5m)}>{#SENSOR_HI_CRIT}` | HIGH | <p>**Depends on**:</p><p>- IPMI: {#SENSOR_ID} value is above non-recoverable high (greater than {#SENSOR_HI_DISAST} for 5m)</p> |
+| IPMI: {#SENSOR_ID} value is above non-recoverable high (greater than {#SENSOR_HI_DISAST} for 5m) | <p>The trigger is informing that a value higher than the upper non-recoverable threshold has been reached.</p> | `{TEMPLATE_NAME:ipmi.value[{#SENSOR_ID}].min(5m)}>{#SENSOR_HI_DISAST}` | DISASTER | |
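The low and high thresholds above are tiered (non-critical, critical, non-recoverable) and chained with trigger dependencies, so only the most severe breached tier stays visible as a problem. Below is a minimal Python sketch of that suppression logic for the "low" side, assuming the usual ordering {#SENSOR_LO_DISAST} < {#SENSOR_LO_CRIT} < {#SENSOR_LO_WARN} and purely illustrative threshold numbers; it is not how Zabbix evaluates triggers internally.

```python
def classify_low(min_5m, lo_warn, lo_crit, lo_disast):
    """Return the single severity left visible after trigger dependencies
    suppress the less severe problems for the same sensor."""
    if min_5m < lo_disast:
        return "DISASTER"   # below non-recoverable low
    if min_5m < lo_crit:
        return "HIGH"       # below critical low
    if min_5m < lo_warn:
        return "WARNING"    # below non-critical low
    return "OK"

# Hypothetical voltage-sensor thresholds, for illustration only.
print(classify_low(min_5m=11.8, lo_warn=12.0, lo_crit=11.5, lo_disast=11.0))  # WARNING
```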
## Feedback
diff --git a/templates/server/cisco_ucs_snmp/README.md b/templates/server/cisco_ucs_snmp/README.md
index 0ec8857b345..cb2962c7690 100644
--- a/templates/server/cisco_ucs_snmp/README.md
+++ b/templates/server/cisco_ucs_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
for Cisco UCS via Integrated Management Controller
This template was tested on:
@@ -20,115 +20,115 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$DISK_ARRAY_CACHE_BATTERY_CRIT_STATUS} |<p>-</p> |`2` |
-|{$DISK_ARRAY_CACHE_BATTERY_OK_STATUS} |<p>-</p> |`1` |
-|{$DISK_ARRAY_CRIT_STATUS:"inoperable"} |<p>-</p> |`2` |
-|{$DISK_ARRAY_OK_STATUS:"operable"} |<p>-</p> |`1` |
-|{$DISK_ARRAY_WARN_STATUS:"degraded"} |<p>-</p> |`3` |
-|{$DISK_CRIT_STATUS:"bad"} |<p>-</p> |`16` |
-|{$DISK_CRIT_STATUS:"predictiveFailure"} |<p>-</p> |`11` |
-|{$DISK_FAIL_STATUS:"failed"} |<p>-</p> |`9` |
-|{$FAN_CRIT_STATUS:"inoperable"} |<p>-</p> |`2` |
-|{$FAN_WARN_STATUS:"degraded"} |<p>-</p> |`3` |
-|{$HEALTH_CRIT_STATUS:"computeFailed"} |<p>-</p> |`30` |
-|{$HEALTH_CRIT_STATUS:"configFailure"} |<p>-</p> |`33` |
-|{$HEALTH_CRIT_STATUS:"inoperable"} |<p>-</p> |`60` |
-|{$HEALTH_CRIT_STATUS:"unconfigFailure"} |<p>-</p> |`34` |
-|{$HEALTH_WARN_STATUS:"diagnosticsFailed"} |<p>-</p> |`204` |
-|{$HEALTH_WARN_STATUS:"powerProblem"} |<p>-</p> |`62` |
-|{$HEALTH_WARN_STATUS:"testFailed"} |<p>-</p> |`35` |
-|{$HEALTH_WARN_STATUS:"thermalProblem"} |<p>-</p> |`60` |
-|{$HEALTH_WARN_STATUS:"voltageProblem"} |<p>-</p> |`62` |
-|{$PSU_CRIT_STATUS:"inoperable"} |<p>-</p> |`2` |
-|{$PSU_WARN_STATUS:"degraded"} |<p>-</p> |`3` |
-|{$TEMP_CRIT:"Ambient"} |<p>-</p> |`35` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`60` |
-|{$TEMP_WARN:"Ambient"} |<p>-</p> |`30` |
-|{$TEMP_WARN} |<p>-</p> |`50` |
-|{$VDISK_OK_STATUS:"equipped"} |<p>-</p> |`10` |
+| Name | Description | Default |
+|-------------------------------------------|-------------|---------|
+| {$DISK_ARRAY_CACHE_BATTERY_CRIT_STATUS} | <p>-</p> | `2` |
+| {$DISK_ARRAY_CACHE_BATTERY_OK_STATUS} | <p>-</p> | `1` |
+| {$DISK_ARRAY_CRIT_STATUS:"inoperable"} | <p>-</p> | `2` |
+| {$DISK_ARRAY_OK_STATUS:"operable"} | <p>-</p> | `1` |
+| {$DISK_ARRAY_WARN_STATUS:"degraded"} | <p>-</p> | `3` |
+| {$DISK_CRIT_STATUS:"bad"} | <p>-</p> | `16` |
+| {$DISK_CRIT_STATUS:"predictiveFailure"} | <p>-</p> | `11` |
+| {$DISK_FAIL_STATUS:"failed"} | <p>-</p> | `9` |
+| {$FAN_CRIT_STATUS:"inoperable"} | <p>-</p> | `2` |
+| {$FAN_WARN_STATUS:"degraded"} | <p>-</p> | `3` |
+| {$HEALTH_CRIT_STATUS:"computeFailed"} | <p>-</p> | `30` |
+| {$HEALTH_CRIT_STATUS:"configFailure"} | <p>-</p> | `33` |
+| {$HEALTH_CRIT_STATUS:"inoperable"} | <p>-</p> | `60` |
+| {$HEALTH_CRIT_STATUS:"unconfigFailure"} | <p>-</p> | `34` |
+| {$HEALTH_WARN_STATUS:"diagnosticsFailed"} | <p>-</p> | `204` |
+| {$HEALTH_WARN_STATUS:"powerProblem"} | <p>-</p> | `62` |
+| {$HEALTH_WARN_STATUS:"testFailed"} | <p>-</p> | `35` |
+| {$HEALTH_WARN_STATUS:"thermalProblem"} | <p>-</p> | `60` |
+| {$HEALTH_WARN_STATUS:"voltageProblem"} | <p>-</p> | `62` |
+| {$PSU_CRIT_STATUS:"inoperable"} | <p>-</p> | `2` |
+| {$PSU_WARN_STATUS:"degraded"} | <p>-</p> | `3` |
+| {$TEMP_CRIT:"Ambient"} | <p>-</p> | `35` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `60` |
+| {$TEMP_WARN:"Ambient"} | <p>-</p> | `30` |
+| {$TEMP_WARN} | <p>-</p> | `50` |
+| {$VDISK_OK_STATUS:"equipped"} | <p>-</p> | `10` |
## Template links
-|Name|
-|----|
-|Generic SNMP |
+| Name |
+|--------------|
+| Generic SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Temperature Discovery |<p>-</p> |SNMP |temp.discovery |
-|Temperature CPU Discovery |<p>-</p> |SNMP |temp.cpu.discovery |
-|PSU Discovery |<p>-</p> |SNMP |psu.discovery |
-|Unit Discovery |<p>-</p> |SNMP |unit.discovery |
-|FAN Discovery |<p>-</p> |SNMP |fan.discovery |
-|Physical Disk Discovery |<p>Scanning table of physical drive entries CISCO-UNIFIED-COMPUTING-STORAGE-MIB::cucsStorageLocalDiskTable.</p> |SNMP |physicalDisk.discovery |
-|Virtual Disk Discovery |<p>CISCO-UNIFIED-COMPUTING-STORAGE-MIB::cucsStorageLocalLunTable</p> |SNMP |virtualdisk.discovery |
-|Array Controller Discovery |<p>Scanning table of Array controllers: CISCO-UNIFIED-COMPUTING-STORAGE-MIB::cucsStorageControllerTable.</p> |SNMP |array.discovery |
-|Array Controller Cache Discovery |<p>Scanning table of Array controllers: CISCO-UNIFIED-COMPUTING-STORAGE-MIB::cucsStorageControllerTable.</p> |SNMP |array.cache.discovery |
+| Name | Description | Type | Key and additional info |
+|----------------------------------|-----------------------------------------------------------------------------------------------------------------|------|-------------------------|
+| Temperature Discovery | <p>-</p> | SNMP | temp.discovery |
+| Temperature CPU Discovery | <p>-</p> | SNMP | temp.cpu.discovery |
+| PSU Discovery | <p>-</p> | SNMP | psu.discovery |
+| Unit Discovery | <p>-</p> | SNMP | unit.discovery |
+| FAN Discovery | <p>-</p> | SNMP | fan.discovery |
+| Physical Disk Discovery | <p>Scanning table of physical drive entries CISCO-UNIFIED-COMPUTING-STORAGE-MIB::cucsStorageLocalDiskTable.</p> | SNMP | physicalDisk.discovery |
+| Virtual Disk Discovery | <p>CISCO-UNIFIED-COMPUTING-STORAGE-MIB::cucsStorageLocalLunTable</p> | SNMP | virtualdisk.discovery |
+| Array Controller Discovery | <p>Scanning table of Array controllers: CISCO-UNIFIED-COMPUTING-STORAGE-MIB::cucsStorageControllerTable.</p> | SNMP | array.discovery |
+| Array Controller Cache Discovery | <p>Scanning table of Array controllers: CISCO-UNIFIED-COMPUTING-STORAGE-MIB::cucsStorageControllerTable.</p> | SNMP | array.cache.discovery |
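Each discovery rule walks the listed MIB table and emits low-level discovery (LLD) rows whose macros (such as `{#SNMPINDEX}` and `{#DISK_LOCATION}`) feed the item and trigger prototypes in the next sections. The short sketch below shows the kind of LLD payload such a rule produces; the walked values are hypothetical, and the real work is done by the SNMP discovery rule itself.

```python
import json

# Hypothetical result of walking cucsStorageLocalDiskTable: index -> disk location.
walked = {
    "1": "sys/rack-unit-1/board/storage-SAS-1/pd-1",
    "2": "sys/rack-unit-1/board/storage-SAS-1/pd-2",
}

lld = [{"{#SNMPINDEX}": idx, "{#DISK_LOCATION}": loc} for idx, loc in walked.items()]
print(json.dumps(lld, indent=2))
```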
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Disk_arrays |{#DISKARRAY_LOCATION}: Disk array controller status |<p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p> |SNMP |system.hw.diskarray.status[cucsStorageControllerOperState.{#SNMPINDEX}] |
-|Disk_arrays |{#DISKARRAY_LOCATION}: Disk array controller model |<p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p> |SNMP |system.hw.diskarray.model[cucsStorageControllerModel.{#SNMPINDEX}] |
-|Disk_arrays |{#DISKARRAY_CACHE_LOCATION}: Disk array cache controller battery status |<p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p> |SNMP |system.hw.diskarray.cache.battery.status[cucsStorageRaidBatteryOperability.{#SNMPINDEX}] |
-|Fans |{#FAN_LOCATION}: Fan status |<p>MIB: CISCO-UNIFIED-COMPUTING-EQUIPMENT-MIB</p><p>Cisco UCS equipment:Fan:operState managed object property</p> |SNMP |sensor.fan.status[cucsEquipmentFanOperState.{#SNMPINDEX}] |
-|Inventory |{#UNIT_LOCATION}: Hardware model name |<p>MIB: CISCO-UNIFIED-COMPUTING-COMPUTE-MIB</p><p>Cisco UCS compute:RackUnit:model managed object property</p> |SNMP |system.hw.model[cucsComputeRackUnitModel.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |{#UNIT_LOCATION}: Hardware serial number |<p>MIB: CISCO-UNIFIED-COMPUTING-COMPUTE-MIB</p><p>Cisco UCS compute:RackUnit:serial managed object property</p> |SNMP |system.hw.serialnumber[cucsComputeRackUnitSerial.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Physical_disks |{#DISK_LOCATION}: Physical disk status |<p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p><p>Cisco UCS storage:LocalDisk:diskState managed object property.</p> |SNMP |system.hw.physicaldisk.status[cucsStorageLocalDiskDiskState.{#SNMPINDEX}] |
-|Physical_disks |{#DISK_LOCATION}: Physical disk model name |<p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p><p>Cisco UCS storage:LocalDisk:serial managed object property. Actually returns part number code</p> |SNMP |system.hw.physicaldisk.model[cucsStorageLocalDiskSerial.{#SNMPINDEX}] |
-|Physical_disks |{#DISK_LOCATION}: Physical disk media type |<p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p><p>Cisco UCS storage:LocalDisk:model managed object property. Actually returns 'HDD' or 'SSD'</p> |SNMP |system.hw.physicaldisk.media_type[cucsStorageLocalDiskModel.{#SNMPINDEX}] |
-|Physical_disks |{#DISK_LOCATION}: Disk size |<p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p><p>Cisco UCS storage:LocalDisk:size managed object property. In MB.</p> |SNMP |system.hw.physicaldisk.size[cucsStorageLocalDiskSize.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1048576`</p> |
-|Power_supply |{#PSU_LOCATION}: Power supply status |<p>MIB: CISCO-UNIFIED-COMPUTING-EQUIPMENT-MIB</p><p>Cisco UCS equipment:Psu:operState managed object property</p> |SNMP |sensor.psu.status[cucsEquipmentPsuOperState.{#SNMPINDEX}] |
-|Status |{#UNIT_LOCATION}: Overall system health status |<p>MIB: CISCO-UNIFIED-COMPUTING-COMPUTE-MIB</p><p>Cisco UCS compute:RackUnit:operState managed object property</p> |SNMP |system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}] |
-|Temperature |{#SENSOR_LOCATION}.Ambient: Temperature |<p>MIB: CISCO-UNIFIED-COMPUTING-COMPUTE-MIB</p><p>Temperature readings of testpoint: {#SENSOR_LOCATION}.Ambient</p> |SNMP |sensor.temp.value[cucsComputeRackUnitMbTempStatsAmbientTemp.{#SNMPINDEX}] |
-|Temperature |{#SENSOR_LOCATION}.Front: Temperature |<p>MIB: CISCO-UNIFIED-COMPUTING-COMPUTE-MIB</p><p>Cisco UCS compute:RackUnitMbTempStats:frontTemp managed object property</p> |SNMP |sensor.temp.value[cucsComputeRackUnitMbTempStatsFrontTemp.{#SNMPINDEX}] |
-|Temperature |{#SENSOR_LOCATION}.Rear: Temperature |<p>MIB: CISCO-UNIFIED-COMPUTING-COMPUTE-MIB</p><p>Cisco UCS compute:RackUnitMbTempStats:rearTemp managed object property</p> |SNMP |sensor.temp.value[cucsComputeRackUnitMbTempStatsRearTemp.{#SNMPINDEX}] |
-|Temperature |{#SENSOR_LOCATION}.IOH: Temperature |<p>MIB: CISCO-UNIFIED-COMPUTING-COMPUTE-MIB</p><p>Cisco UCS compute:RackUnitMbTempStats:ioh1Temp managed object property</p> |SNMP |sensor.temp.value[cucsComputeRackUnitMbTempSltatsIoh1Temp.{#SNMPINDEX}] |
-|Temperature |{#SENSOR_LOCATION}: Temperature |<p>MIB: CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB</p><p>Cisco UCS processor:EnvStats:temperature managed object property</p> |SNMP |sensor.temp.value[cucsProcessorEnvStatsTemperature.{#SNMPINDEX}] |
-|Virtual_disks |{#VDISK_LOCATION}: Status |<p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p><p>Cisco UCS storage:LocalLun:presence managed object property</p> |SNMP |system.hw.virtualdisk.status[cucsStorageLocalLunPresence.{#SNMPINDEX}] |
-|Virtual_disks |{#VDISK_LOCATION}: Layout type |<p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p><p>Cisco UCS storage:LocalLun:type managed object property</p> |SNMP |system.hw.virtualdisk.layout[cucsStorageLocalLunType.{#SNMPINDEX}] |
-|Virtual_disks |{#VDISK_LOCATION}: Disk size |<p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p><p>Cisco UCS storage:LocalLun:size managed object property in MB.</p> |SNMP |system.hw.virtualdisk.size[cucsStorageLocalLunSize.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1048576`</p> |
+| Group | Name | Description | Type | Key and additional info |
+|----------------|-------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|------|-----------------------------------------------------------------------------------------------------------------------------------|
+| Disk_arrays | {#DISKARRAY_LOCATION}: Disk array controller status | <p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p> | SNMP | system.hw.diskarray.status[cucsStorageControllerOperState.{#SNMPINDEX}] |
+| Disk_arrays | {#DISKARRAY_LOCATION}: Disk array controller model | <p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p> | SNMP | system.hw.diskarray.model[cucsStorageControllerModel.{#SNMPINDEX}] |
+| Disk_arrays | {#DISKARRAY_CACHE_LOCATION}: Disk array cache controller battery status | <p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p> | SNMP | system.hw.diskarray.cache.battery.status[cucsStorageRaidBatteryOperability.{#SNMPINDEX}] |
+| Fans | {#FAN_LOCATION}: Fan status | <p>MIB: CISCO-UNIFIED-COMPUTING-EQUIPMENT-MIB</p><p>Cisco UCS equipment:Fan:operState managed object property</p> | SNMP | sensor.fan.status[cucsEquipmentFanOperState.{#SNMPINDEX}] |
+| Inventory | {#UNIT_LOCATION}: Hardware model name | <p>MIB: CISCO-UNIFIED-COMPUTING-COMPUTE-MIB</p><p>Cisco UCS compute:RackUnit:model managed object property</p> | SNMP | system.hw.model[cucsComputeRackUnitModel.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | {#UNIT_LOCATION}: Hardware serial number | <p>MIB: CISCO-UNIFIED-COMPUTING-COMPUTE-MIB</p><p>Cisco UCS compute:RackUnit:serial managed object property</p> | SNMP | system.hw.serialnumber[cucsComputeRackUnitSerial.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Physical_disks | {#DISK_LOCATION}: Physical disk status | <p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p><p>Cisco UCS storage:LocalDisk:diskState managed object property.</p> | SNMP | system.hw.physicaldisk.status[cucsStorageLocalDiskDiskState.{#SNMPINDEX}] |
+| Physical_disks | {#DISK_LOCATION}: Physical disk model name | <p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p><p>Cisco UCS storage:LocalDisk:serial managed object property. Actually returns part number code</p> | SNMP | system.hw.physicaldisk.model[cucsStorageLocalDiskSerial.{#SNMPINDEX}] |
+| Physical_disks | {#DISK_LOCATION}: Physical disk media type | <p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p><p>Cisco UCS storage:LocalDisk:model managed object property. Actually returns 'HDD' or 'SSD'</p> | SNMP | system.hw.physicaldisk.media_type[cucsStorageLocalDiskModel.{#SNMPINDEX}] |
+| Physical_disks | {#DISK_LOCATION}: Disk size | <p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p><p>Cisco UCS storage:LocalDisk:size managed object property. In MB.</p> | SNMP | system.hw.physicaldisk.size[cucsStorageLocalDiskSize.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1048576`</p> |
+| Power_supply | {#PSU_LOCATION}: Power supply status | <p>MIB: CISCO-UNIFIED-COMPUTING-EQUIPMENT-MIB</p><p>Cisco UCS equipment:Psu:operState managed object property</p> | SNMP | sensor.psu.status[cucsEquipmentPsuOperState.{#SNMPINDEX}] |
+| Status | {#UNIT_LOCATION}: Overall system health status | <p>MIB: CISCO-UNIFIED-COMPUTING-COMPUTE-MIB</p><p>Cisco UCS compute:RackUnit:operState managed object property</p> | SNMP | system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}] |
+| Temperature | {#SENSOR_LOCATION}.Ambient: Temperature | <p>MIB: CISCO-UNIFIED-COMPUTING-COMPUTE-MIB</p><p>Temperature readings of testpoint: {#SENSOR_LOCATION}.Ambient</p> | SNMP | sensor.temp.value[cucsComputeRackUnitMbTempStatsAmbientTemp.{#SNMPINDEX}] |
+| Temperature | {#SENSOR_LOCATION}.Front: Temperature | <p>MIB: CISCO-UNIFIED-COMPUTING-COMPUTE-MIB</p><p>Cisco UCS compute:RackUnitMbTempStats:frontTemp managed object property</p> | SNMP | sensor.temp.value[cucsComputeRackUnitMbTempStatsFrontTemp.{#SNMPINDEX}] |
+| Temperature | {#SENSOR_LOCATION}.Rear: Temperature | <p>MIB: CISCO-UNIFIED-COMPUTING-COMPUTE-MIB</p><p>Cisco UCS compute:RackUnitMbTempStats:rearTemp managed object property</p> | SNMP | sensor.temp.value[cucsComputeRackUnitMbTempStatsRearTemp.{#SNMPINDEX}] |
+| Temperature | {#SENSOR_LOCATION}.IOH: Temperature | <p>MIB: CISCO-UNIFIED-COMPUTING-COMPUTE-MIB</p><p>Cisco UCS compute:RackUnitMbTempStats:ioh1Temp managed object property</p> | SNMP | sensor.temp.value[cucsComputeRackUnitMbTempSltatsIoh1Temp.{#SNMPINDEX}] |
+| Temperature | {#SENSOR_LOCATION}: Temperature | <p>MIB: CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB</p><p>Cisco UCS processor:EnvStats:temperature managed object property</p> | SNMP | sensor.temp.value[cucsProcessorEnvStatsTemperature.{#SNMPINDEX}] |
+| Virtual_disks | {#VDISK_LOCATION}: Status | <p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p><p>Cisco UCS storage:LocalLun:presence managed object property</p> | SNMP | system.hw.virtualdisk.status[cucsStorageLocalLunPresence.{#SNMPINDEX}] |
+| Virtual_disks | {#VDISK_LOCATION}: Layout type | <p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p><p>Cisco UCS storage:LocalLun:type managed object property</p> | SNMP | system.hw.virtualdisk.layout[cucsStorageLocalLunType.{#SNMPINDEX}] |
+| Virtual_disks | {#VDISK_LOCATION}: Disk size | <p>MIB: CISCO-UNIFIED-COMPUTING-STORAGE-MIB</p><p>Cisco UCS storage:LocalLun:size managed object property in MB.</p> | SNMP | system.hw.virtualdisk.size[cucsStorageLocalLunSize.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1048576`</p> |
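Two preprocessing steps recur in this table: `MULTIPLIER: 1048576` converts the raw size reported in MB into bytes, and `DISCARD_UNCHANGED_HEARTBEAT: 1d` stores inventory values (model, serial number) only when they change or once per day. A minimal sketch of both follows, as an approximation rather than the server's actual preprocessing pipeline.

```python
def multiplier(raw_mb, factor=1048576):
    """MB -> bytes, as done by the MULTIPLIER step."""
    return raw_mb * factor

def discard_unchanged(value, last_value, seconds_since_last_store, heartbeat=86400):
    """Return None (discard) when the value is unchanged and the heartbeat
    interval has not elapsed yet; otherwise keep the value."""
    if value == last_value and seconds_since_last_store < heartbeat:
        return None
    return value

print(multiplier(100))                                          # 104857600 bytes
print(discard_unchanged("UCSC-C220-M4", "UCSC-C220-M4", 3600))  # None (hypothetical model string)
```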
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#DISKARRAY_LOCATION}: Disk array controller is in critical state |<p>Please check the device for faults</p> |`{TEMPLATE_NAME:system.hw.diskarray.status[cucsStorageControllerOperState.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_CRIT_STATUS:"inoperable"},eq)}=1` |HIGH | |
-|{#DISKARRAY_LOCATION}: Disk array controller is in warning state |<p>Please check the device for faults</p> |`{TEMPLATE_NAME:system.hw.diskarray.status[cucsStorageControllerOperState.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_WARN_STATUS:"degraded"},eq)}=1` |AVERAGE |<p>**Depends on**:</p><p>- {#DISKARRAY_LOCATION}: Disk array controller is in critical state</p> |
-|{#DISKARRAY_LOCATION}: Disk array controller is not in optimal state |<p>Please check the device for faults</p> |`{TEMPLATE_NAME:system.hw.diskarray.status[cucsStorageControllerOperState.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_OK_STATUS:"operable"},ne)}=1` |WARNING |<p>**Depends on**:</p><p>- {#DISKARRAY_LOCATION}: Disk array controller is in critical state</p><p>- {#DISKARRAY_LOCATION}: Disk array controller is in warning state</p> |
-|{#DISKARRAY_CACHE_LOCATION}: Disk array cache controller battery is in critical state! |<p>Please check the device for faults</p> |`{TEMPLATE_NAME:system.hw.diskarray.cache.battery.status[cucsStorageRaidBatteryOperability.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_CACHE_BATTERY_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|{#DISKARRAY_CACHE_LOCATION}: Disk array cache controller battery is not in optimal state |<p>Please check the device for faults</p> |`{TEMPLATE_NAME:system.hw.diskarray.cache.battery.status[cucsStorageRaidBatteryOperability.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_CACHE_BATTERY_OK_STATUS},ne)}=1` |WARNING |<p>**Depends on**:</p><p>- {#DISKARRAY_CACHE_LOCATION}: Disk array cache controller battery is in critical state!</p> |
-|{#FAN_LOCATION}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[cucsEquipmentFanOperState.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"inoperable"},eq)}=1` |AVERAGE | |
-|{#FAN_LOCATION}: Fan is in warning state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[cucsEquipmentFanOperState.{#SNMPINDEX}].count(#1,{$FAN_WARN_STATUS:"degraded"},eq)}=1` |WARNING |<p>**Depends on**:</p><p>- {#FAN_LOCATION}: Fan is in critical state</p> |
-|{#UNIT_LOCATION}: Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber[cucsComputeRackUnitSerial.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[cucsComputeRackUnitSerial.{#SNMPINDEX}].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#DISK_LOCATION}: Physical disk failed |<p>Please check physical disk for warnings or errors</p> |`{TEMPLATE_NAME:system.hw.physicaldisk.status[cucsStorageLocalDiskDiskState.{#SNMPINDEX}].count(#1,{$DISK_FAIL_STATUS:"failed"},eq)}=1` |HIGH | |
-|{#DISK_LOCATION}: Physical disk error |<p>Please check physical disk for warnings or errors</p> |`{TEMPLATE_NAME:system.hw.physicaldisk.status[cucsStorageLocalDiskDiskState.{#SNMPINDEX}].count(#1,{$DISK_CRIT_STATUS:"bad"},eq)}=1 or {TEMPLATE_NAME:system.hw.physicaldisk.status[cucsStorageLocalDiskDiskState.{#SNMPINDEX}].count(#1,{$DISK_CRIT_STATUS:"predictiveFailure"},eq)}=1` |AVERAGE |<p>**Depends on**:</p><p>- {#DISK_LOCATION}: Physical disk failed</p> |
-|{#PSU_LOCATION}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[cucsEquipmentPsuOperState.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"inoperable"},eq)}=1` |AVERAGE | |
-|{#PSU_LOCATION}: Power supply is in warning state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[cucsEquipmentPsuOperState.{#SNMPINDEX}].count(#1,{$PSU_WARN_STATUS:"degraded"},eq)}=1` |WARNING |<p>**Depends on**:</p><p>- {#PSU_LOCATION}: Power supply is in critical state</p> |
-|{#UNIT_LOCATION}: System status is in critical state |<p>Please check the device for errors</p> |`{TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_CRIT_STATUS:"computeFailed"},eq)}=1 or {TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_CRIT_STATUS:"configFailure"},eq)}=1 or {TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_CRIT_STATUS:"unconfigFailure"},eq)}=1 or {TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_CRIT_STATUS:"inoperable"},eq)}=1` |HIGH | |
-|{#UNIT_LOCATION}: System status is in warning state |<p>Please check the device for warnings</p> |`{TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_WARN_STATUS:"testFailed"},eq)}=1 or {TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_WARN_STATUS:"thermalProblem"},eq)}=1 or {TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_WARN_STATUS:"powerProblem"},eq)}=1 or {TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_WARN_STATUS:"voltageProblem"},eq)}=1 or {TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_WARN_STATUS:"diagnosticsFailed"},eq)}=1` |WARNING |<p>**Depends on**:</p><p>- {#UNIT_LOCATION}: System status is in critical state</p> |
-|{#SENSOR_LOCATION}.Ambient: Temperature is above warning threshold: >{$TEMP_WARN:"Ambient"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsAmbientTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsAmbientTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Ambient"}-3` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_LOCATION}.Ambient: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"}</p> |
-|{#SENSOR_LOCATION}.Ambient: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsAmbientTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsAmbientTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Ambient"}-3` |HIGH | |
-|{#SENSOR_LOCATION}.Ambient: Temperature is too low: <{$TEMP_CRIT_LOW:"Ambient"} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsAmbientTemp.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsAmbientTemp.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Ambient"}+3` |AVERAGE | |
-|{#SENSOR_LOCATION}.Front: Temperature is above warning threshold: >{$TEMP_WARN:"Ambient"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsFrontTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsFrontTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Ambient"}-3` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_LOCATION}.Front: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"}</p> |
-|{#SENSOR_LOCATION}.Front: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsFrontTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsFrontTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Ambient"}-3` |HIGH | |
-|{#SENSOR_LOCATION}.Front: Temperature is too low: <{$TEMP_CRIT_LOW:"Ambient"} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsFrontTemp.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsFrontTemp.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Ambient"}+3` |AVERAGE | |
-|{#SENSOR_LOCATION}.Rear: Temperature is above warning threshold: >{$TEMP_WARN:"Ambient"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsRearTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsRearTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Ambient"}-3` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_LOCATION}.Rear: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"}</p> |
-|{#SENSOR_LOCATION}.Rear: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsRearTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsRearTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Ambient"}-3` |HIGH | |
-|{#SENSOR_LOCATION}.Rear: Temperature is too low: <{$TEMP_CRIT_LOW:"Ambient"} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsRearTemp.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsRearTemp.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Ambient"}+3` |AVERAGE | |
-|{#SENSOR_LOCATION}.IOH: Temperature is above warning threshold: >{$TEMP_WARN:"Ambient"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempSltatsIoh1Temp.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempSltatsIoh1Temp.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Ambient"}-3` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_LOCATION}.IOH: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"}</p> |
-|{#SENSOR_LOCATION}.IOH: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempSltatsIoh1Temp.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempSltatsIoh1Temp.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Ambient"}-3` |HIGH | |
-|{#SENSOR_LOCATION}.IOH: Temperature is too low: <{$TEMP_CRIT_LOW:"Ambient"} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempSltatsIoh1Temp.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempSltatsIoh1Temp.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Ambient"}+3` |AVERAGE | |
-|{#SENSOR_LOCATION}: Temperature is above warning threshold: >{$TEMP_WARN:"CPU"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[cucsProcessorEnvStatsTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsProcessorEnvStatsTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"CPU"}-3` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_LOCATION}: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"}</p> |
-|{#SENSOR_LOCATION}: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[cucsProcessorEnvStatsTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsProcessorEnvStatsTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"CPU"}-3` |HIGH | |
-|{#SENSOR_LOCATION}: Temperature is too low: <{$TEMP_CRIT_LOW:"CPU"} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[cucsProcessorEnvStatsTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsProcessorEnvStatsTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"CPU"}+3` |AVERAGE | |
-|{#VDISK_LOCATION}: Virtual disk is not in OK state |<p>Please check virtual disk for warnings or errors</p> |`{TEMPLATE_NAME:system.hw.virtualdisk.status[cucsStorageLocalLunPresence.{#SNMPINDEX}].count(#1,{$VDISK_OK_STATUS:"equipped"},ne)}=1` |WARNING | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|----------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| {#DISKARRAY_LOCATION}: Disk array controller is in critical state | <p>Please check the device for faults</p> | `{TEMPLATE_NAME:system.hw.diskarray.status[cucsStorageControllerOperState.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_CRIT_STATUS:"inoperable"},eq)}=1` | HIGH | |
+| {#DISKARRAY_LOCATION}: Disk array controller is in warning state | <p>Please check the device for faults</p> | `{TEMPLATE_NAME:system.hw.diskarray.status[cucsStorageControllerOperState.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_WARN_STATUS:"degraded"},eq)}=1` | AVERAGE | <p>**Depends on**:</p><p>- {#DISKARRAY_LOCATION}: Disk array controller is in critical state</p> |
+| {#DISKARRAY_LOCATION}: Disk array controller is not in optimal state | <p>Please check the device for faults</p> | `{TEMPLATE_NAME:system.hw.diskarray.status[cucsStorageControllerOperState.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_OK_STATUS:"operable"},ne)}=1` | WARNING | <p>**Depends on**:</p><p>- {#DISKARRAY_LOCATION}: Disk array controller is in critical state</p><p>- {#DISKARRAY_LOCATION}: Disk array controller is in warning state</p> |
+| {#DISKARRAY_CACHE_LOCATION}: Disk array cache controller battery is in critical state! | <p>Please check the device for faults</p> | `{TEMPLATE_NAME:system.hw.diskarray.cache.battery.status[cucsStorageRaidBatteryOperability.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_CACHE_BATTERY_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| {#DISKARRAY_CACHE_LOCATION}: Disk array cache controller battery is not in optimal state | <p>Please check the device for faults</p> | `{TEMPLATE_NAME:system.hw.diskarray.cache.battery.status[cucsStorageRaidBatteryOperability.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_CACHE_BATTERY_OK_STATUS},ne)}=1` | WARNING | <p>**Depends on**:</p><p>- {#DISKARRAY_CACHE_LOCATION}: Disk array cache controller battery is in critical state!</p> |
+| {#FAN_LOCATION}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[cucsEquipmentFanOperState.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"inoperable"},eq)}=1` | AVERAGE | |
+| {#FAN_LOCATION}: Fan is in warning state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[cucsEquipmentFanOperState.{#SNMPINDEX}].count(#1,{$FAN_WARN_STATUS:"degraded"},eq)}=1` | WARNING | <p>**Depends on**:</p><p>- {#FAN_LOCATION}: Fan is in critical state</p> |
+| {#UNIT_LOCATION}: Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber[cucsComputeRackUnitSerial.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber[cucsComputeRackUnitSerial.{#SNMPINDEX}].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| {#DISK_LOCATION}: Physical disk failed | <p>Please check physical disk for warnings or errors</p> | `{TEMPLATE_NAME:system.hw.physicaldisk.status[cucsStorageLocalDiskDiskState.{#SNMPINDEX}].count(#1,{$DISK_FAIL_STATUS:"failed"},eq)}=1` | HIGH | |
+| {#DISK_LOCATION}: Physical disk error | <p>Please check physical disk for warnings or errors</p> | `{TEMPLATE_NAME:system.hw.physicaldisk.status[cucsStorageLocalDiskDiskState.{#SNMPINDEX}].count(#1,{$DISK_CRIT_STATUS:"bad"},eq)}=1 or {TEMPLATE_NAME:system.hw.physicaldisk.status[cucsStorageLocalDiskDiskState.{#SNMPINDEX}].count(#1,{$DISK_CRIT_STATUS:"predictiveFailure"},eq)}=1` | AVERAGE | <p>**Depends on**:</p><p>- {#DISK_LOCATION}: Physical disk failed</p> |
+| {#PSU_LOCATION}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[cucsEquipmentPsuOperState.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"inoperable"},eq)}=1` | AVERAGE | |
+| {#PSU_LOCATION}: Power supply is in warning state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[cucsEquipmentPsuOperState.{#SNMPINDEX}].count(#1,{$PSU_WARN_STATUS:"degraded"},eq)}=1` | WARNING | <p>**Depends on**:</p><p>- {#PSU_LOCATION}: Power supply is in critical state</p> |
+| {#UNIT_LOCATION}: System status is in critical state | <p>Please check the device for errors</p> | `{TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_CRIT_STATUS:"computeFailed"},eq)}=1 or {TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_CRIT_STATUS:"configFailure"},eq)}=1 or {TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_CRIT_STATUS:"unconfigFailure"},eq)}=1 or {TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_CRIT_STATUS:"inoperable"},eq)}=1` | HIGH | |
+| {#UNIT_LOCATION}: System status is in warning state | <p>Please check the device for warnings</p> | `{TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_WARN_STATUS:"testFailed"},eq)}=1 or {TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_WARN_STATUS:"thermalProblem"},eq)}=1 or {TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_WARN_STATUS:"powerProblem"},eq)}=1 or {TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_WARN_STATUS:"voltageProblem"},eq)}=1 or {TEMPLATE_NAME:system.status[cucsComputeRackUnitOperState.{#SNMPINDEX}].count(#1,{$HEALTH_WARN_STATUS:"diagnosticsFailed"},eq)}=1` | WARNING | <p>**Depends on**:</p><p>- {#UNIT_LOCATION}: System status is in critical state</p> |
+| {#SENSOR_LOCATION}.Ambient: Temperature is above warning threshold: >{$TEMP_WARN:"Ambient"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsAmbientTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsAmbientTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Ambient"}-3` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_LOCATION}.Ambient: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"}</p> |
+| {#SENSOR_LOCATION}.Ambient: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsAmbientTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsAmbientTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Ambient"}-3` | HIGH | |
+| {#SENSOR_LOCATION}.Ambient: Temperature is too low: <{$TEMP_CRIT_LOW:"Ambient"} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsAmbientTemp.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsAmbientTemp.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Ambient"}+3` | AVERAGE | |
+| {#SENSOR_LOCATION}.Front: Temperature is above warning threshold: >{$TEMP_WARN:"Ambient"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsFrontTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsFrontTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Ambient"}-3` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_LOCATION}.Front: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"}</p> |
+| {#SENSOR_LOCATION}.Front: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsFrontTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsFrontTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Ambient"}-3` | HIGH | |
+| {#SENSOR_LOCATION}.Front: Temperature is too low: <{$TEMP_CRIT_LOW:"Ambient"} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsFrontTemp.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsFrontTemp.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Ambient"}+3` | AVERAGE | |
+| {#SENSOR_LOCATION}.Rear: Temperature is above warning threshold: >{$TEMP_WARN:"Ambient"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsRearTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsRearTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Ambient"}-3` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_LOCATION}.Rear: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"}</p> |
+| {#SENSOR_LOCATION}.Rear: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsRearTemp.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsRearTemp.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Ambient"}-3` | HIGH | |
+| {#SENSOR_LOCATION}.Rear: Temperature is too low: <{$TEMP_CRIT_LOW:"Ambient"} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsRearTemp.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempStatsRearTemp.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Ambient"}+3` | AVERAGE | |
+| {#SENSOR_LOCATION}.IOH: Temperature is above warning threshold: >{$TEMP_WARN:"Ambient"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempSltatsIoh1Temp.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempSltatsIoh1Temp.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Ambient"}-3` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_LOCATION}.IOH: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"}</p> |
+| {#SENSOR_LOCATION}.IOH: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempSltatsIoh1Temp.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempSltatsIoh1Temp.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Ambient"}-3` | HIGH | |
+| {#SENSOR_LOCATION}.IOH: Temperature is too low: <{$TEMP_CRIT_LOW:"Ambient"} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempSltatsIoh1Temp.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsComputeRackUnitMbTempSltatsIoh1Temp.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Ambient"}+3` | AVERAGE | |
+| {#SENSOR_LOCATION}: Temperature is above warning threshold: >{$TEMP_WARN:"CPU"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[cucsProcessorEnvStatsTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsProcessorEnvStatsTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"CPU"}-3` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_LOCATION}: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"}</p> |
+| {#SENSOR_LOCATION}: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[cucsProcessorEnvStatsTemperature.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsProcessorEnvStatsTemperature.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"CPU"}-3` | HIGH | |
+| {#SENSOR_LOCATION}: Temperature is too low: <{$TEMP_CRIT_LOW:"CPU"} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[cucsProcessorEnvStatsTemperature.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[cucsProcessorEnvStatsTemperature.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"CPU"}+3` | AVERAGE | |
+| {#VDISK_LOCATION}: Virtual disk is not in OK state | <p>Please check virtual disk for warnings or errors</p> | `{TEMPLATE_NAME:system.hw.virtualdisk.status[cucsStorageLocalLunPresence.{#SNMPINDEX}].count(#1,{$VDISK_OK_STATUS:"equipped"},ne)}=1` | WARNING | |
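The temperature triggers above pair a problem expression (`avg(5m)` above the threshold) with a recovery expression (`max(5m)` below the threshold minus 3), which adds hysteresis so a sensor hovering around the limit does not flap. A small sketch of that state machine, using the `{$TEMP_WARN:"Ambient"}` default of 30 purely as an example:

```python
def next_state(in_problem, avg_5m, max_5m, threshold=30, gap=3):
    """True = problem active. Fires on avg(5m) > threshold, recovers only
    once max(5m) < threshold - gap, mirroring the recovery expressions above."""
    if not in_problem:
        return avg_5m > threshold
    return not (max_5m < threshold - gap)

print(next_state(False, avg_5m=31, max_5m=33))  # True  -> problem fires
print(next_state(True,  avg_5m=28, max_5m=29))  # True  -> still active (29 >= 27)
print(next_state(True,  avg_5m=25, max_5m=26))  # False -> recovered (26 < 27)
```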
## Feedback
diff --git a/templates/server/dell_idrac_snmp/README.md b/templates/server/dell_idrac_snmp/README.md
index a2f1348b9a2..f2acf567f67 100644
--- a/templates/server/dell_idrac_snmp/README.md
+++ b/templates/server/dell_idrac_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
for Dell servers with iDRAC controllers
http://www.dell.com/support/manuals/us/en/19/dell-openmanage-server-administrator-v8.3/snmp_idrac8/idrac-mib?guid=guid-e686536d-bc8e-4e09-8e8b-de8eb052efee
Supported systems: http://www.dell.com/support/manuals/us/en/04/dell-openmanage-server-administrator-v8.3/snmp_idrac8/supported-systems?guid=guid-f72b75ba-e686-4e8a-b8c5-ca11c7c21381
@@ -24,128 +24,128 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$DISK_ARRAY_CACHE_BATTERY_CRIT_STATUS} |<p>-</p> |`3` |
-|{$DISK_ARRAY_CACHE_BATTERY_OK_STATUS} |<p>-</p> |`2` |
-|{$DISK_ARRAY_CACHE_BATTERY_WARN_STATUS} |<p>-</p> |`4` |
-|{$DISK_ARRAY_CRIT_STATUS:"critical"} |<p>-</p> |`5` |
-|{$DISK_ARRAY_FAIL_STATUS:"nonRecoverable"} |<p>-</p> |`6` |
-|{$DISK_ARRAY_WARN_STATUS:"nonCritical"} |<p>-</p> |`4` |
-|{$DISK_FAIL_STATUS:"critical"} |<p>-</p> |`5` |
-|{$DISK_FAIL_STATUS:"nonRecoverable"} |<p>-</p> |`6` |
-|{$DISK_SMART_FAIL_STATUS} |<p>-</p> |`1` |
-|{$DISK_WARN_STATUS:"nonCritical"} |<p>-</p> |`4` |
-|{$FAN_CRIT_STATUS:"criticalLower"} |<p>-</p> |`8` |
-|{$FAN_CRIT_STATUS:"criticalUpper"} |<p>-</p> |`5` |
-|{$FAN_CRIT_STATUS:"failed"} |<p>-</p> |`10` |
-|{$FAN_CRIT_STATUS:"nonRecoverableLower"} |<p>-</p> |`9` |
-|{$FAN_CRIT_STATUS:"nonRecoverableUpper"} |<p>-</p> |`6` |
-|{$FAN_WARN_STATUS:"nonCriticalLower"} |<p>-</p> |`7` |
-|{$FAN_WARN_STATUS:"nonCriticalUpper"} |<p>-</p> |`4` |
-|{$HEALTH_CRIT_STATUS} |<p>-</p> |`5` |
-|{$HEALTH_DISASTER_STATUS} |<p>-</p> |`6` |
-|{$HEALTH_WARN_STATUS} |<p>-</p> |`4` |
-|{$PSU_CRIT_STATUS:"critical"} |<p>-</p> |`5` |
-|{$PSU_CRIT_STATUS:"nonRecoverable"} |<p>-</p> |`6` |
-|{$PSU_WARN_STATUS:"nonCritical"} |<p>-</p> |`4` |
-|{$TEMP_CRIT:"Ambient"} |<p>-</p> |`35` |
-|{$TEMP_CRIT:"CPU"} |<p>-</p> |`75` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT_STATUS} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`60` |
-|{$TEMP_DISASTER_STATUS} |<p>-</p> |`6` |
-|{$TEMP_WARN:"Ambient"} |<p>-</p> |`30` |
-|{$TEMP_WARN:"CPU"} |<p>-</p> |`70` |
-|{$TEMP_WARN_STATUS} |<p>-</p> |`4` |
-|{$TEMP_WARN} |<p>-</p> |`50` |
-|{$VDISK_CRIT_STATUS:"failed"} |<p>-</p> |`3` |
-|{$VDISK_WARN_STATUS:"degraded"} |<p>-</p> |`4` |
+| Name | Description | Default |
+|--------------------------------------------|-------------|---------|
+| {$DISK_ARRAY_CACHE_BATTERY_CRIT_STATUS} | <p>-</p> | `3` |
+| {$DISK_ARRAY_CACHE_BATTERY_OK_STATUS} | <p>-</p> | `2` |
+| {$DISK_ARRAY_CACHE_BATTERY_WARN_STATUS} | <p>-</p> | `4` |
+| {$DISK_ARRAY_CRIT_STATUS:"critical"} | <p>-</p> | `5` |
+| {$DISK_ARRAY_FAIL_STATUS:"nonRecoverable"} | <p>-</p> | `6` |
+| {$DISK_ARRAY_WARN_STATUS:"nonCritical"} | <p>-</p> | `4` |
+| {$DISK_FAIL_STATUS:"critical"} | <p>-</p> | `5` |
+| {$DISK_FAIL_STATUS:"nonRecoverable"} | <p>-</p> | `6` |
+| {$DISK_SMART_FAIL_STATUS} | <p>-</p> | `1` |
+| {$DISK_WARN_STATUS:"nonCritical"} | <p>-</p> | `4` |
+| {$FAN_CRIT_STATUS:"criticalLower"} | <p>-</p> | `8` |
+| {$FAN_CRIT_STATUS:"criticalUpper"} | <p>-</p> | `5` |
+| {$FAN_CRIT_STATUS:"failed"} | <p>-</p> | `10` |
+| {$FAN_CRIT_STATUS:"nonRecoverableLower"} | <p>-</p> | `9` |
+| {$FAN_CRIT_STATUS:"nonRecoverableUpper"} | <p>-</p> | `6` |
+| {$FAN_WARN_STATUS:"nonCriticalLower"} | <p>-</p> | `7` |
+| {$FAN_WARN_STATUS:"nonCriticalUpper"} | <p>-</p> | `4` |
+| {$HEALTH_CRIT_STATUS} | <p>-</p> | `5` |
+| {$HEALTH_DISASTER_STATUS} | <p>-</p> | `6` |
+| {$HEALTH_WARN_STATUS} | <p>-</p> | `4` |
+| {$PSU_CRIT_STATUS:"critical"} | <p>-</p> | `5` |
+| {$PSU_CRIT_STATUS:"nonRecoverable"} | <p>-</p> | `6` |
+| {$PSU_WARN_STATUS:"nonCritical"} | <p>-</p> | `4` |
+| {$TEMP_CRIT:"Ambient"} | <p>-</p> | `35` |
+| {$TEMP_CRIT:"CPU"} | <p>-</p> | `75` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT_STATUS} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `60` |
+| {$TEMP_DISASTER_STATUS} | <p>-</p> | `6` |
+| {$TEMP_WARN:"Ambient"} | <p>-</p> | `30` |
+| {$TEMP_WARN:"CPU"} | <p>-</p> | `70` |
+| {$TEMP_WARN_STATUS} | <p>-</p> | `4` |
+| {$TEMP_WARN} | <p>-</p> | `50` |
+| {$VDISK_CRIT_STATUS:"failed"} | <p>-</p> | `3` |
+| {$VDISK_WARN_STATUS:"degraded"} | <p>-</p> | `4` |
## Template links
-|Name|
-|----|
-|Generic SNMP |
+| Name |
+|--------------|
+| Generic SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Temperature CPU Discovery |<p>Scanning table of Temperature Probe Table IDRAC-MIB-SMIv2::temperatureProbeTable</p> |SNMP |temp.cpu.discovery<p>**Filter**:</p>AND_OR <p>- A: {#SENSOR_LOCALE} MATCHES_REGEX `.*CPU.*`</p> |
-|Temperature Ambient Discovery |<p>Scanning table of Temperature Probe Table IDRAC-MIB-SMIv2::temperatureProbeTable</p> |SNMP |temp.ambient.discovery<p>**Filter**:</p>AND_OR <p>- A: {#SENSOR_LOCALE} MATCHES_REGEX `.*Inlet Temp.*`</p> |
-|PSU Discovery |<p>IDRAC-MIB-SMIv2::powerSupplyTable</p> |SNMP |psu.discovery |
-|FAN Discovery |<p>IDRAC-MIB-SMIv2::coolingDeviceTable</p> |SNMP |fan.discovery<p>**Filter**:</p>AND_OR <p>- A: {#TYPE} MATCHES_REGEX `3`</p> |
-|Physical Disk Discovery |<p>IDRAC-MIB-SMIv2::physicalDiskTable</p> |SNMP |physicaldisk.discovery |
-|Virtual Disk Discovery |<p>IDRAC-MIB-SMIv2::virtualDiskTable</p> |SNMP |virtualdisk.discovery |
-|Array Controller Discovery |<p>IDRAC-MIB-SMIv2::controllerTable</p> |SNMP |physicaldisk.arr.discovery |
-|Array Controller Cache Discovery |<p>IDRAC-MIB-SMIv2::batteryTable</p> |SNMP |array.cache.discovery |
+| Name | Description | Type | Key and additional info |
+|----------------------------------|-----------------------------------------------------------------------------------------|------|------------------------------------------------------------------------------------------------------------|
+| Temperature CPU Discovery | <p>Scanning table of Temperature Probe Table IDRAC-MIB-SMIv2::temperatureProbeTable</p> | SNMP | temp.cpu.discovery<p>**Filter**:</p>AND_OR <p>- A: {#SENSOR_LOCALE} MATCHES_REGEX `.*CPU.*`</p> |
+| Temperature Ambient Discovery | <p>Scanning table of Temperature Probe Table IDRAC-MIB-SMIv2::temperatureProbeTable</p> | SNMP | temp.ambient.discovery<p>**Filter**:</p>AND_OR <p>- A: {#SENSOR_LOCALE} MATCHES_REGEX `.*Inlet Temp.*`</p> |
+| PSU Discovery | <p>IDRAC-MIB-SMIv2::powerSupplyTable</p> | SNMP | psu.discovery |
+| FAN Discovery | <p>IDRAC-MIB-SMIv2::coolingDeviceTable</p> | SNMP | fan.discovery<p>**Filter**:</p>AND_OR <p>- A: {#TYPE} MATCHES_REGEX `3`</p> |
+| Physical Disk Discovery | <p>IDRAC-MIB-SMIv2::physicalDiskTable</p> | SNMP | physicaldisk.discovery |
+| Virtual Disk Discovery | <p>IDRAC-MIB-SMIv2::virtualDiskTable</p> | SNMP | virtualdisk.discovery |
+| Array Controller Discovery | <p>IDRAC-MIB-SMIv2::controllerTable</p> | SNMP | physicaldisk.arr.discovery |
+| Array Controller Cache Discovery | <p>IDRAC-MIB-SMIv2::batteryTable</p> | SNMP | array.cache.discovery |
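Unlike the Cisco UCS template, several of these discovery rules apply an LLD filter, for example keeping only temperature probes whose `{#SENSOR_LOCALE}` matches `.*CPU.*` or `.*Inlet Temp.*`, and only cooling devices whose `{#TYPE}` matches `3`. A short illustrative pass over hypothetical discovered rows shows what such a MATCHES_REGEX filter keeps:

```python
import re

# Hypothetical rows from IDRAC-MIB-SMIv2::temperatureProbeTable.
rows = [
    {"{#SNMPINDEX}": "1", "{#SENSOR_LOCALE}": "CPU1 Temp"},
    {"{#SNMPINDEX}": "2", "{#SENSOR_LOCALE}": "System Board Inlet Temp"},
]

cpu_probes = [r for r in rows if re.search(".*CPU.*", r["{#SENSOR_LOCALE}"])]
print(cpu_probes)  # only the CPU probe passes the Temperature CPU Discovery filter
```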
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Disk_arrays |{#CNTLR_NAME}: Disk array controller status |<p>MIB: IDRAC-MIB-SMIv2</p><p>The status of the controller itself without the propagation of any contained component status.</p><p>Possible values:</p><p>1: Other</p><p>2: Unknown</p><p>3: OK</p><p>4: Non-critical</p><p>5: Critical</p><p>6: Non-recoverable</p><p> </p> |SNMP |system.hw.diskarray.status[controllerComponentStatus.{#SNMPINDEX}] |
-|Disk_arrays |{#CNTLR_NAME}: Disk array controller model |<p>MIB: IDRAC-MIB-SMIv2</p><p>The controller's name as represented in Storage Management.</p> |SNMP |system.hw.diskarray.model[controllerName.{#SNMPINDEX}] |
-|Disk_arrays |Battery {#BATTERY_NUM}: Disk array cache controller battery status |<p>MIB: IDRAC-MIB-SMIv2</p><p>Current state of battery.</p><p>Possible values:</p><p>1: The current state could not be determined.</p><p>2: The battery is operating normally.</p><p>3: The battery has failed and needs to be replaced.</p><p>4: The battery temperature is high or charge level is depleting.</p><p>5: The battery is missing or not detected.</p><p>6: The battery is undergoing the re-charge phase.</p><p>7: The battery voltage or charge level is below the threshold.</p><p> </p> |SNMP |system.hw.diskarray.cache.battery.status[batteryState.{#SNMPINDEX}] |
-|Fans |{#FAN_DESCR}: Fan status |<p>MIB: IDRAC-MIB-SMIv2</p><p>0700.0012.0001.0005 This attribute defines the probe status of the cooling device.</p> |SNMP |sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}] |
-|Fans |{#FAN_DESCR}: Fan speed |<p>MIB: IDRAC-MIB-SMIv2</p><p>0700.0012.0001.0006 This attribute defines the reading for a cooling device</p><p>of subtype other than coolingDeviceSubTypeIsDiscrete. When the value</p><p>for coolingDeviceSubType is other than coolingDeviceSubTypeIsDiscrete, the</p><p>value returned for this attribute is the speed in RPM or the OFF/ON value</p><p>of the cooling device. When the value for coolingDeviceSubType is</p><p>coolingDeviceSubTypeIsDiscrete, a value is not returned for this attribute.</p> |SNMP |sensor.fan.speed[coolingDeviceReading.{#SNMPINDEX}] |
-|Inventory |Hardware model name |<p>MIB: IDRAC-MIB-SMIv2</p><p>This attribute defines the model name of the system.</p> |SNMP |system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Operating system |<p>MIB: IDRAC-MIB-SMIv2</p><p>This attribute defines the name of the operating system that the hostis running.</p> |SNMP |system.sw.os[systemOSName]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware serial number |<p>MIB: IDRAC-MIB-SMIv2</p><p>This attribute defines the service tag of the system.</p> |SNMP |system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Firmware version |<p>MIB: IDRAC-MIB-SMIv2</p><p>This attribute defines the firmware version of a remote access card.</p> |SNMP |system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Physical_disks |{#DISK_NAME}: Physical disk status |<p>MIB: IDRAC-MIB-SMIv2</p><p>The status of the physical disk itself without the propagation of any contained component status.</p><p>Possible values:</p><p>1: Other</p><p>2: Unknown</p><p>3: OK</p><p>4: Non-critical</p><p>5: Critical</p><p>6: Non-recoverable</p> |SNMP |system.hw.physicaldisk.status[physicalDiskComponentStatus.{#SNMPINDEX}] |
-|Physical_disks |{#DISK_NAME}: Physical disk serial number |<p>MIB: IDRAC-MIB-SMIv2</p><p>The physical disk's unique identification number from the manufacturer.</p> |SNMP |system.hw.physicaldisk.serialnumber[physicalDiskSerialNo.{#SNMPINDEX}] |
-|Physical_disks |{#DISK_NAME}: Physical disk S.M.A.R.T. status |<p>MIB: IDRAC-MIB-SMIv2</p><p>Indicates whether the physical disk has received a predictive failure alert.</p> |SNMP |system.hw.physicaldisk.smart_status[physicalDiskSmartAlertIndication.{#SNMPINDEX}] |
-|Physical_disks |{#DISK_NAME}: Physical disk model name |<p>MIB: IDRAC-MIB-SMIv2</p><p>The model number of the physical disk.</p> |SNMP |system.hw.physicaldisk.model[physicalDiskProductID.{#SNMPINDEX}] |
-|Physical_disks |{#DISK_NAME}: Physical disk part number |<p>MIB: IDRAC-MIB-SMIv2</p><p>The part number of the disk.</p> |SNMP |system.hw.physicaldisk.part_number[physicalDiskPartNumber.{#SNMPINDEX}] |
-|Physical_disks |{#DISK_NAME}: Physical disk media type |<p>MIB: IDRAC-MIB-SMIv2</p><p>The media type of the physical disk. Possible Values:</p><p>1: The media type could not be determined.</p><p>2: Hard Disk Drive (HDD).</p><p>3: Solid State Drive (SSD).</p> |SNMP |system.hw.physicaldisk.media_type[physicalDiskMediaType.{#SNMPINDEX}] |
-|Physical_disks |{#DISK_NAME}: Disk size |<p>MIB: IDRAC-MIB-SMIv2</p><p>The size of the physical disk in megabytes.</p> |SNMP |system.hw.physicaldisk.size[physicalDiskCapacityInMB.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1048576`</p> |
-|Power_supply |{#PSU_DESCR}: Power supply status |<p>MIB: IDRAC-MIB-SMIv2</p><p>0600.0012.0001.0005 This attribute defines the status of the power supply.</p> |SNMP |sensor.psu.status[powerSupplyStatus.{#SNMPINDEX}] |
-|Status |Overall system health status |<p>MIB: IDRAC-MIB-SMIv2</p><p>This attribute defines the overall rollup status of all components in the system being monitored by the remote access card. Includes system, storage, IO devices, iDRAC, CPU, memory, etc.</p> |SNMP |system.status[globalSystemStatus.0] |
-|Temperature |{#SENSOR_LOCALE}: Temperature |<p>MIB: IDRAC-MIB-SMIv2</p><p>0700.0020.0001.0006 This attribute defines the reading for a temperature probe of type other than temperatureProbeTypeIsDiscrete. When the value for temperatureProbeType is other than temperatureProbeTypeIsDiscrete,the value returned for this attribute is the temperature that the probeis reading in tenths of degrees Centigrade. When the value for temperatureProbeType is temperatureProbeTypeIsDiscrete, a value is not returned for this attribute.</p> |SNMP |sensor.temp.value[temperatureProbeReading.CPU.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
-|Temperature |{#SENSOR_LOCALE}: Temperature status |<p>MIB: IDRAC-MIB-SMIv2</p><p>0700.0020.0001.0005 This attribute defines the probe status of the temperature probe.</p> |SNMP |sensor.temp.status[temperatureProbeStatus.CPU.{#SNMPINDEX}] |
-|Temperature |{#SENSOR_LOCALE}: Temperature |<p>MIB: IDRAC-MIB-SMIv2</p><p>0700.0020.0001.0006 This attribute defines the reading for a temperature probe of type other than temperatureProbeTypeIsDiscrete. When the value for temperatureProbeType is other than temperatureProbeTypeIsDiscrete,the value returned for this attribute is the temperature that the probeis reading in tenths of degrees Centigrade. When the value for temperatureProbeType is temperatureProbeTypeIsDiscrete, a value is not returned for this attribute.</p> |SNMP |sensor.temp.value[temperatureProbeReading.Ambient.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
-|Temperature |{#SENSOR_LOCALE}: Temperature status |<p>MIB: IDRAC-MIB-SMIv2</p><p>0700.0020.0001.0005 This attribute defines the probe status of the temperature probe.</p> |SNMP |sensor.temp.status[temperatureProbeStatus.Ambient.{#SNMPINDEX}] |
-|Virtual_disks |Disk {#SNMPVALUE}({#DISK_NAME}): Layout type |<p>MIB: IDRAC-MIB-SMIv2</p><p>The virtual disk's RAID type.</p><p>Possible values:</p><p>1: Not one of the following</p><p>2: RAID-0</p><p>3: RAID-1</p><p>4: RAID-5</p><p>5: RAID-6</p><p>6: RAID-10</p><p>7: RAID-50</p><p>8: RAID-60</p><p>9: Concatenated RAID 1</p><p>10: Concatenated RAID 5</p> |SNMP |system.hw.virtualdisk.layout[virtualDiskLayout.{#SNMPINDEX}] |
-|Virtual_disks |Disk {#SNMPVALUE}({#DISK_NAME}): Current state |<p>MIB: IDRAC-MIB-SMIv2</p><p>The state of the virtual disk when there are progressive operations ongoing.</p><p>Possible values:</p><p>1: There is no active operation running.</p><p>2: The virtual disk configuration has changed. The physical disks included in the virtual disk are being modified to support the new configuration.</p><p>3: A Consistency Check (CC) is being performed on the virtual disk.</p><p>4: The virtual disk is being initialized.</p><p>5: BackGround Initialization (BGI) is being performed on the virtual disk.</p> |SNMP |system.hw.virtualdisk.state[virtualDiskOperationalState.{#SNMPINDEX}] |
-|Virtual_disks |Disk {#SNMPVALUE}({#DISK_NAME}): Read policy |<p>MIB: IDRAC-MIB-SMIv2</p><p>The read policy used by the controller for read operations on this virtual disk.</p><p>Possible values:</p><p>1: No Read Ahead.</p><p>2: Read Ahead.</p><p>3: Adaptive Read Ahead.</p> |SNMP |system.hw.virtualdisk.readpolicy[virtualDiskReadPolicy.{#SNMPINDEX}] |
-|Virtual_disks |Disk {#SNMPVALUE}({#DISK_NAME}): Write policy |<p>MIB: IDRAC-MIB-SMIv2</p><p>The write policy used by the controller for write operations on this virtual disk.</p><p>Possible values:</p><p>1: Write Through.</p><p>2: Write Back.</p><p>3: Force Write Back.</p> |SNMP |system.hw.virtualdisk.writepolicy[virtualDiskWritePolicy.{#SNMPINDEX}] |
-|Virtual_disks |Disk {#SNMPVALUE}({#DISK_NAME}): Disk size |<p>MIB: IDRAC-MIB-SMIv2</p><p>The size of the virtual disk in megabytes.</p> |SNMP |system.hw.virtualdisk.size[virtualDiskSizeInMB.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1048576`</p> |
-|Virtual_disks |Disk {#SNMPVALUE}({#DISK_NAME}): Status |<p>MIB: IDRAC-MIB-SMIv2</p><p>The current state of this virtual disk (which includes any member physical disks.)</p><p>Possible states:</p><p>1: The current state could not be determined.</p><p>2: The virtual disk is operating normally or optimally.</p><p>3: The virtual disk has encountered a failure. The data on disk is lost or is about to be lost.</p><p>4: The virtual disk encountered a failure with one or all of the constituent redundant physical disks.</p><p>The data on the virtual disk might no longer be fault tolerant.</p> |SNMP |system.hw.virtualdisk.status[virtualDiskState.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|----------------|--------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|---------------------------------------------------------------------------------------------------------------------------|
+| Disk_arrays | {#CNTLR_NAME}: Disk array controller status | <p>MIB: IDRAC-MIB-SMIv2</p><p>The status of the controller itself without the propagation of any contained component status.</p><p>Possible values:</p><p>1: Other</p><p>2: Unknown</p><p>3: OK</p><p>4: Non-critical</p><p>5: Critical</p><p>6: Non-recoverable</p><p> </p> | SNMP | system.hw.diskarray.status[controllerComponentStatus.{#SNMPINDEX}] |
+| Disk_arrays | {#CNTLR_NAME}: Disk array controller model | <p>MIB: IDRAC-MIB-SMIv2</p><p>The controller's name as represented in Storage Management.</p> | SNMP | system.hw.diskarray.model[controllerName.{#SNMPINDEX}] |
+| Disk_arrays | Battery {#BATTERY_NUM}: Disk array cache controller battery status | <p>MIB: IDRAC-MIB-SMIv2</p><p>Current state of battery.</p><p>Possible values:</p><p>1: The current state could not be determined.</p><p>2: The battery is operating normally.</p><p>3: The battery has failed and needs to be replaced.</p><p>4: The battery temperature is high or charge level is depleting.</p><p>5: The battery is missing or not detected.</p><p>6: The battery is undergoing the re-charge phase.</p><p>7: The battery voltage or charge level is below the threshold.</p><p> </p> | SNMP | system.hw.diskarray.cache.battery.status[batteryState.{#SNMPINDEX}] |
+| Fans | {#FAN_DESCR}: Fan status | <p>MIB: IDRAC-MIB-SMIv2</p><p>0700.0012.0001.0005 This attribute defines the probe status of the cooling device.</p> | SNMP | sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}] |
+| Fans           | {#FAN_DESCR}: Fan speed | <p>MIB: IDRAC-MIB-SMIv2</p><p>0700.0012.0001.0006 This attribute defines the reading for a cooling device of subtype other than coolingDeviceSubTypeIsDiscrete. When the value for coolingDeviceSubType is other than coolingDeviceSubTypeIsDiscrete, the value returned for this attribute is the speed in RPM or the OFF/ON value of the cooling device. When the value for coolingDeviceSubType is coolingDeviceSubTypeIsDiscrete, a value is not returned for this attribute.</p> | SNMP | sensor.fan.speed[coolingDeviceReading.{#SNMPINDEX}] |
+| Inventory | Hardware model name | <p>MIB: IDRAC-MIB-SMIv2</p><p>This attribute defines the model name of the system.</p> | SNMP | system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory      | Operating system | <p>MIB: IDRAC-MIB-SMIv2</p><p>This attribute defines the name of the operating system that the host is running.</p> | SNMP | system.sw.os[systemOSName]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware serial number | <p>MIB: IDRAC-MIB-SMIv2</p><p>This attribute defines the service tag of the system.</p> | SNMP | system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Firmware version | <p>MIB: IDRAC-MIB-SMIv2</p><p>This attribute defines the firmware version of a remote access card.</p> | SNMP | system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Physical_disks | {#DISK_NAME}: Physical disk status | <p>MIB: IDRAC-MIB-SMIv2</p><p>The status of the physical disk itself without the propagation of any contained component status.</p><p>Possible values:</p><p>1: Other</p><p>2: Unknown</p><p>3: OK</p><p>4: Non-critical</p><p>5: Critical</p><p>6: Non-recoverable</p> | SNMP | system.hw.physicaldisk.status[physicalDiskComponentStatus.{#SNMPINDEX}] |
+| Physical_disks | {#DISK_NAME}: Physical disk serial number | <p>MIB: IDRAC-MIB-SMIv2</p><p>The physical disk's unique identification number from the manufacturer.</p> | SNMP | system.hw.physicaldisk.serialnumber[physicalDiskSerialNo.{#SNMPINDEX}] |
+| Physical_disks | {#DISK_NAME}: Physical disk S.M.A.R.T. status | <p>MIB: IDRAC-MIB-SMIv2</p><p>Indicates whether the physical disk has received a predictive failure alert.</p> | SNMP | system.hw.physicaldisk.smart_status[physicalDiskSmartAlertIndication.{#SNMPINDEX}] |
+| Physical_disks | {#DISK_NAME}: Physical disk model name | <p>MIB: IDRAC-MIB-SMIv2</p><p>The model number of the physical disk.</p> | SNMP | system.hw.physicaldisk.model[physicalDiskProductID.{#SNMPINDEX}] |
+| Physical_disks | {#DISK_NAME}: Physical disk part number | <p>MIB: IDRAC-MIB-SMIv2</p><p>The part number of the disk.</p> | SNMP | system.hw.physicaldisk.part_number[physicalDiskPartNumber.{#SNMPINDEX}] |
+| Physical_disks | {#DISK_NAME}: Physical disk media type | <p>MIB: IDRAC-MIB-SMIv2</p><p>The media type of the physical disk. Possible Values:</p><p>1: The media type could not be determined.</p><p>2: Hard Disk Drive (HDD).</p><p>3: Solid State Drive (SSD).</p> | SNMP | system.hw.physicaldisk.media_type[physicalDiskMediaType.{#SNMPINDEX}] |
+| Physical_disks | {#DISK_NAME}: Disk size | <p>MIB: IDRAC-MIB-SMIv2</p><p>The size of the physical disk in megabytes.</p> | SNMP | system.hw.physicaldisk.size[physicalDiskCapacityInMB.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1048576`</p> |
+| Power_supply | {#PSU_DESCR}: Power supply status | <p>MIB: IDRAC-MIB-SMIv2</p><p>0600.0012.0001.0005 This attribute defines the status of the power supply.</p> | SNMP | sensor.psu.status[powerSupplyStatus.{#SNMPINDEX}] |
+| Status | Overall system health status | <p>MIB: IDRAC-MIB-SMIv2</p><p>This attribute defines the overall rollup status of all components in the system being monitored by the remote access card. Includes system, storage, IO devices, iDRAC, CPU, memory, etc.</p> | SNMP | system.status[globalSystemStatus.0] |
+| Temperature    | {#SENSOR_LOCALE}: Temperature | <p>MIB: IDRAC-MIB-SMIv2</p><p>0700.0020.0001.0006 This attribute defines the reading for a temperature probe of type other than temperatureProbeTypeIsDiscrete. When the value for temperatureProbeType is other than temperatureProbeTypeIsDiscrete, the value returned for this attribute is the temperature that the probe is reading in tenths of degrees Centigrade. When the value for temperatureProbeType is temperatureProbeTypeIsDiscrete, a value is not returned for this attribute.</p> | SNMP | sensor.temp.value[temperatureProbeReading.CPU.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
+| Temperature | {#SENSOR_LOCALE}: Temperature status | <p>MIB: IDRAC-MIB-SMIv2</p><p>0700.0020.0001.0005 This attribute defines the probe status of the temperature probe.</p> | SNMP | sensor.temp.status[temperatureProbeStatus.CPU.{#SNMPINDEX}] |
+| Temperature    | {#SENSOR_LOCALE}: Temperature | <p>MIB: IDRAC-MIB-SMIv2</p><p>0700.0020.0001.0006 This attribute defines the reading for a temperature probe of type other than temperatureProbeTypeIsDiscrete. When the value for temperatureProbeType is other than temperatureProbeTypeIsDiscrete, the value returned for this attribute is the temperature that the probe is reading in tenths of degrees Centigrade. When the value for temperatureProbeType is temperatureProbeTypeIsDiscrete, a value is not returned for this attribute.</p> | SNMP | sensor.temp.value[temperatureProbeReading.Ambient.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
+| Temperature | {#SENSOR_LOCALE}: Temperature status | <p>MIB: IDRAC-MIB-SMIv2</p><p>0700.0020.0001.0005 This attribute defines the probe status of the temperature probe.</p> | SNMP | sensor.temp.status[temperatureProbeStatus.Ambient.{#SNMPINDEX}] |
+| Virtual_disks | Disk {#SNMPVALUE}({#DISK_NAME}): Layout type | <p>MIB: IDRAC-MIB-SMIv2</p><p>The virtual disk's RAID type.</p><p>Possible values:</p><p>1: Not one of the following</p><p>2: RAID-0</p><p>3: RAID-1</p><p>4: RAID-5</p><p>5: RAID-6</p><p>6: RAID-10</p><p>7: RAID-50</p><p>8: RAID-60</p><p>9: Concatenated RAID 1</p><p>10: Concatenated RAID 5</p> | SNMP | system.hw.virtualdisk.layout[virtualDiskLayout.{#SNMPINDEX}] |
+| Virtual_disks | Disk {#SNMPVALUE}({#DISK_NAME}): Current state | <p>MIB: IDRAC-MIB-SMIv2</p><p>The state of the virtual disk when there are progressive operations ongoing.</p><p>Possible values:</p><p>1: There is no active operation running.</p><p>2: The virtual disk configuration has changed. The physical disks included in the virtual disk are being modified to support the new configuration.</p><p>3: A Consistency Check (CC) is being performed on the virtual disk.</p><p>4: The virtual disk is being initialized.</p><p>5: BackGround Initialization (BGI) is being performed on the virtual disk.</p> | SNMP | system.hw.virtualdisk.state[virtualDiskOperationalState.{#SNMPINDEX}] |
+| Virtual_disks | Disk {#SNMPVALUE}({#DISK_NAME}): Read policy | <p>MIB: IDRAC-MIB-SMIv2</p><p>The read policy used by the controller for read operations on this virtual disk.</p><p>Possible values:</p><p>1: No Read Ahead.</p><p>2: Read Ahead.</p><p>3: Adaptive Read Ahead.</p> | SNMP | system.hw.virtualdisk.readpolicy[virtualDiskReadPolicy.{#SNMPINDEX}] |
+| Virtual_disks | Disk {#SNMPVALUE}({#DISK_NAME}): Write policy | <p>MIB: IDRAC-MIB-SMIv2</p><p>The write policy used by the controller for write operations on this virtual disk.</p><p>Possible values:</p><p>1: Write Through.</p><p>2: Write Back.</p><p>3: Force Write Back.</p> | SNMP | system.hw.virtualdisk.writepolicy[virtualDiskWritePolicy.{#SNMPINDEX}] |
+| Virtual_disks | Disk {#SNMPVALUE}({#DISK_NAME}): Disk size | <p>MIB: IDRAC-MIB-SMIv2</p><p>The size of the virtual disk in megabytes.</p> | SNMP | system.hw.virtualdisk.size[virtualDiskSizeInMB.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1048576`</p> |
+| Virtual_disks | Disk {#SNMPVALUE}({#DISK_NAME}): Status | <p>MIB: IDRAC-MIB-SMIv2</p><p>The current state of this virtual disk (which includes any member physical disks.)</p><p>Possible states:</p><p>1: The current state could not be determined.</p><p>2: The virtual disk is operating normally or optimally.</p><p>3: The virtual disk has encountered a failure. The data on disk is lost or is about to be lost.</p><p>4: The virtual disk encountered a failure with one or all of the constituent redundant physical disks.</p><p>The data on the virtual disk might no longer be fault tolerant.</p> | SNMP | system.hw.virtualdisk.status[virtualDiskState.{#SNMPINDEX}] |
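The preprocessing steps in this table are plain value transformations: `MULTIPLIER: 0.1` converts temperature probe readings from tenths of degrees Centigrade into degrees, and `MULTIPLIER: 1048576` converts disk sizes reported in megabytes into bytes. A minimal Python sketch of these documented conversions (illustration only, not part of the template):

```python
# Illustration only, not part of the template: the effect of the
# MULTIPLIER preprocessing steps documented in the table above.

def apply_multiplier(raw_value, factor):
    """Zabbix-style MULTIPLIER step: multiply the collected value by a constant."""
    return float(raw_value) * factor

# temperatureProbeReading is reported in tenths of degrees Centigrade.
print(apply_multiplier("285", 0.1))        # 28.5 degrees

# physicalDiskCapacityInMB / virtualDiskSizeInMB are reported in megabytes.
print(apply_multiplier("953869", 1048576)) # size in bytes
```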
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#CNTLR_NAME}: Disk array controller is in unrecoverable state! |<p>Please check the device for faults</p> |`{TEMPLATE_NAME:system.hw.diskarray.status[controllerComponentStatus.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_FAIL_STATUS:"nonRecoverable"},eq)}=1` |DISASTER | |
-|{#CNTLR_NAME}: Disk array controller is in critical state |<p>Please check the device for faults</p> |`{TEMPLATE_NAME:system.hw.diskarray.status[controllerComponentStatus.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_CRIT_STATUS:"critical"},eq)}=1` |HIGH |<p>**Depends on**:</p><p>- {#CNTLR_NAME}: Disk array controller is in unrecoverable state!</p> |
-|{#CNTLR_NAME}: Disk array controller is in warning state |<p>Please check the device for faults</p> |`{TEMPLATE_NAME:system.hw.diskarray.status[controllerComponentStatus.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_WARN_STATUS:"nonCritical"},eq)}=1` |AVERAGE |<p>**Depends on**:</p><p>- {#CNTLR_NAME}: Disk array controller is in critical state</p><p>- {#CNTLR_NAME}: Disk array controller is in unrecoverable state!</p> |
-|Battery {#BATTERY_NUM}: Disk array cache controller battery is in warning state |<p>Please check the device for faults</p> |`{TEMPLATE_NAME:system.hw.diskarray.cache.battery.status[batteryState.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_CACHE_BATTERY_WARN_STATUS},eq)}=1` |WARNING |<p>**Depends on**:</p><p>- Battery {#BATTERY_NUM}: Disk array cache controller battery is in critical state!</p> |
-|Battery {#BATTERY_NUM}: Disk array cache controller battery is not in optimal state |<p>Please check the device for faults</p> |`{TEMPLATE_NAME:system.hw.diskarray.cache.battery.status[batteryState.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_CACHE_BATTERY_OK_STATUS},ne)}=1` |WARNING |<p>**Depends on**:</p><p>- Battery {#BATTERY_NUM}: Disk array cache controller battery is in critical state!</p><p>- Battery {#BATTERY_NUM}: Disk array cache controller battery is in warning state</p> |
-|Battery {#BATTERY_NUM}: Disk array cache controller battery is in critical state! |<p>Please check the device for faults</p> |`{TEMPLATE_NAME:system.hw.diskarray.cache.battery.status[batteryState.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_CACHE_BATTERY_CRIT_STATUS},eq)}=1` |AVERAGE | |
-|{#FAN_DESCR}: Fan is in critical state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"criticalUpper"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"nonRecoverableUpper"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"criticalLower"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"nonRecoverableLower"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"failed"},eq)}=1` |AVERAGE | |
-|{#FAN_DESCR}: Fan is in warning state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}].count(#1,{$FAN_WARN_STATUS:"nonCriticalUpper"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}].count(#1,{$FAN_WARN_STATUS:"nonCriticalLower"},eq)}=1` |WARNING |<p>**Depends on**:</p><p>- {#FAN_DESCR}: Fan is in critical state</p> |
-|Operating system description has changed |<p>Operating system description has changed. Possible reasons that system has been updated or replaced. Ack to close.</p> |`{TEMPLATE_NAME:system.sw.os[systemOSName].diff()}=1 and {TEMPLATE_NAME:system.sw.os[systemOSName].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|Firmware has changed |<p>Firmware version has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#DISK_NAME}: Physical disk failed |<p>Please check physical disk for warnings or errors</p> |`{TEMPLATE_NAME:system.hw.physicaldisk.status[physicalDiskComponentStatus.{#SNMPINDEX}].count(#1,{$DISK_FAIL_STATUS:"critical"},eq)}=1 or {TEMPLATE_NAME:system.hw.physicaldisk.status[physicalDiskComponentStatus.{#SNMPINDEX}].count(#1,{$DISK_FAIL_STATUS:"nonRecoverable"},eq)}=1` |HIGH | |
-|{#DISK_NAME}: Physical disk is in warning state |<p>Please check physical disk for warnings or errors</p> |`{TEMPLATE_NAME:system.hw.physicaldisk.status[physicalDiskComponentStatus.{#SNMPINDEX}].count(#1,{$DISK_WARN_STATUS:"nonCritical"},eq)}=1` |WARNING |<p>**Depends on**:</p><p>- {#DISK_NAME}: Physical disk failed</p> |
-|{#DISK_NAME}: Disk has been replaced (new serial number received) |<p>Disk serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.physicaldisk.serialnumber[physicalDiskSerialNo.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.physicaldisk.serialnumber[physicalDiskSerialNo.{#SNMPINDEX}].strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#DISK_NAME}: Physical disk S.M.A.R.T. failed |<p>Disk probably requires replacement.</p> |`{TEMPLATE_NAME:system.hw.physicaldisk.smart_status[physicalDiskSmartAlertIndication.{#SNMPINDEX}].count(#1,{$DISK_SMART_FAIL_STATUS},eq)}=1` |HIGH |<p>**Depends on**:</p><p>- {#DISK_NAME}: Physical disk failed</p> |
-|{#PSU_DESCR}: Power supply is in critical state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[powerSupplyStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"critical"},eq)}=1 or {TEMPLATE_NAME:sensor.psu.status[powerSupplyStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"nonRecoverable"},eq)}=1` |AVERAGE | |
-|{#PSU_DESCR}: Power supply is in warning state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[powerSupplyStatus.{#SNMPINDEX}].count(#1,{$PSU_WARN_STATUS:"nonCritical"},eq)}=1` |WARNING |<p>**Depends on**:</p><p>- {#PSU_DESCR}: Power supply is in critical state</p> |
-|System is in unrecoverable state! |<p>Please check the device for faults</p> |`{TEMPLATE_NAME:system.status[globalSystemStatus.0].count(#1,{$HEALTH_DISASTER_STATUS},eq)}=1` |HIGH | |
-|System status is in critical state |<p>Please check the device for errors</p> |`{TEMPLATE_NAME:system.status[globalSystemStatus.0].count(#1,{$HEALTH_CRIT_STATUS},eq)}=1` |HIGH |<p>**Depends on**:</p><p>- System is in unrecoverable state!</p> |
-|System status is in warning state |<p>Please check the device for warnings</p> |`{TEMPLATE_NAME:system.status[globalSystemStatus.0].count(#1,{$HEALTH_WARN_STATUS},eq)}=1` |WARNING |<p>**Depends on**:</p><p>- System is in unrecoverable state!</p><p>- System status is in critical state</p> |
-|{#SENSOR_LOCALE}: Temperature is above warning threshold: >{$TEMP_WARN:"CPU"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.CPU.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"CPU"} or {Dell iDRAC SNMP:sensor.temp.status[temperatureProbeStatus.CPU.{#SNMPINDEX}].last()}={$TEMP_WARN_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.CPU.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"CPU"}-3` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_LOCALE}: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"}</p> |
-|{#SENSOR_LOCALE}: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.CPU.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"CPU"} or {Dell iDRAC SNMP:sensor.temp.status[temperatureProbeStatus.CPU.{#SNMPINDEX}].last()}={$TEMP_CRIT_STATUS} or {Dell iDRAC SNMP:sensor.temp.status[temperatureProbeStatus.CPU.{#SNMPINDEX}].last()}={$TEMP_DISASTER_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.CPU.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"CPU"}-3` |HIGH | |
-|{#SENSOR_LOCALE}: Temperature is too low: <{$TEMP_CRIT_LOW:"CPU"} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.CPU.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.CPU.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"CPU"}+3` |AVERAGE | |
-|{#SENSOR_LOCALE}: Temperature is above warning threshold: >{$TEMP_WARN:"Ambient"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.Ambient.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Ambient"} or {Dell iDRAC SNMP:sensor.temp.status[temperatureProbeStatus.Ambient.{#SNMPINDEX}].last()}={$TEMP_WARN_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.Ambient.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Ambient"}-3` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_LOCALE}: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"}</p> |
-|{#SENSOR_LOCALE}: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.Ambient.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Ambient"} or {Dell iDRAC SNMP:sensor.temp.status[temperatureProbeStatus.Ambient.{#SNMPINDEX}].last()}={$TEMP_CRIT_STATUS} or {Dell iDRAC SNMP:sensor.temp.status[temperatureProbeStatus.Ambient.{#SNMPINDEX}].last()}={$TEMP_DISASTER_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.Ambient.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Ambient"}-3` |HIGH | |
-|{#SENSOR_LOCALE}: Temperature is too low: <{$TEMP_CRIT_LOW:"Ambient"} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.Ambient.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.Ambient.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Ambient"}+3` |AVERAGE | |
-|Disk {#SNMPVALUE}({#DISK_NAME}): Virtual disk failed |<p>Please check virtual disk for warnings or errors</p> |`{TEMPLATE_NAME:system.hw.virtualdisk.status[virtualDiskState.{#SNMPINDEX}].count(#1,{$VDISK_CRIT_STATUS:"failed"},eq)}=1` |HIGH | |
-|Disk {#SNMPVALUE}({#DISK_NAME}): Virtual disk is in warning state |<p>Please check virtual disk for warnings or errors</p> |`{TEMPLATE_NAME:system.hw.virtualdisk.status[virtualDiskState.{#SNMPINDEX}].count(#1,{$VDISK_WARN_STATUS:"degraded"},eq)}=1` |AVERAGE |<p>**Depends on**:</p><p>- Disk {#SNMPVALUE}({#DISK_NAME}): Virtual disk failed</p> |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|-------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| {#CNTLR_NAME}: Disk array controller is in unrecoverable state! | <p>Please check the device for faults</p> | `{TEMPLATE_NAME:system.hw.diskarray.status[controllerComponentStatus.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_FAIL_STATUS:"nonRecoverable"},eq)}=1` | DISASTER | |
+| {#CNTLR_NAME}: Disk array controller is in critical state | <p>Please check the device for faults</p> | `{TEMPLATE_NAME:system.hw.diskarray.status[controllerComponentStatus.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_CRIT_STATUS:"critical"},eq)}=1` | HIGH | <p>**Depends on**:</p><p>- {#CNTLR_NAME}: Disk array controller is in unrecoverable state!</p> |
+| {#CNTLR_NAME}: Disk array controller is in warning state | <p>Please check the device for faults</p> | `{TEMPLATE_NAME:system.hw.diskarray.status[controllerComponentStatus.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_WARN_STATUS:"nonCritical"},eq)}=1` | AVERAGE | <p>**Depends on**:</p><p>- {#CNTLR_NAME}: Disk array controller is in critical state</p><p>- {#CNTLR_NAME}: Disk array controller is in unrecoverable state!</p> |
+| Battery {#BATTERY_NUM}: Disk array cache controller battery is in warning state | <p>Please check the device for faults</p> | `{TEMPLATE_NAME:system.hw.diskarray.cache.battery.status[batteryState.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_CACHE_BATTERY_WARN_STATUS},eq)}=1` | WARNING | <p>**Depends on**:</p><p>- Battery {#BATTERY_NUM}: Disk array cache controller battery is in critical state!</p> |
+| Battery {#BATTERY_NUM}: Disk array cache controller battery is not in optimal state | <p>Please check the device for faults</p> | `{TEMPLATE_NAME:system.hw.diskarray.cache.battery.status[batteryState.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_CACHE_BATTERY_OK_STATUS},ne)}=1` | WARNING | <p>**Depends on**:</p><p>- Battery {#BATTERY_NUM}: Disk array cache controller battery is in critical state!</p><p>- Battery {#BATTERY_NUM}: Disk array cache controller battery is in warning state</p> |
+| Battery {#BATTERY_NUM}: Disk array cache controller battery is in critical state! | <p>Please check the device for faults</p> | `{TEMPLATE_NAME:system.hw.diskarray.cache.battery.status[batteryState.{#SNMPINDEX}].count(#1,{$DISK_ARRAY_CACHE_BATTERY_CRIT_STATUS},eq)}=1` | AVERAGE | |
+| {#FAN_DESCR}: Fan is in critical state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"criticalUpper"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"nonRecoverableUpper"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"criticalLower"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"nonRecoverableLower"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}].count(#1,{$FAN_CRIT_STATUS:"failed"},eq)}=1` | AVERAGE | |
+| {#FAN_DESCR}: Fan is in warning state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}].count(#1,{$FAN_WARN_STATUS:"nonCriticalUpper"},eq)}=1 or {TEMPLATE_NAME:sensor.fan.status[coolingDeviceStatus.{#SNMPINDEX}].count(#1,{$FAN_WARN_STATUS:"nonCriticalLower"},eq)}=1` | WARNING | <p>**Depends on**:</p><p>- {#FAN_DESCR}: Fan is in critical state</p> |
+| Operating system description has changed | <p>Operating system description has changed. Possible reasons: the system has been updated or replaced. Ack to close.</p> | `{TEMPLATE_NAME:system.sw.os[systemOSName].diff()}=1 and {TEMPLATE_NAME:system.sw.os[systemOSName].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| Firmware has changed | <p>Firmware version has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.firmware.diff()}=1 and {TEMPLATE_NAME:system.hw.firmware.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| {#DISK_NAME}: Physical disk failed | <p>Please check physical disk for warnings or errors</p> | `{TEMPLATE_NAME:system.hw.physicaldisk.status[physicalDiskComponentStatus.{#SNMPINDEX}].count(#1,{$DISK_FAIL_STATUS:"critical"},eq)}=1 or {TEMPLATE_NAME:system.hw.physicaldisk.status[physicalDiskComponentStatus.{#SNMPINDEX}].count(#1,{$DISK_FAIL_STATUS:"nonRecoverable"},eq)}=1` | HIGH | |
+| {#DISK_NAME}: Physical disk is in warning state | <p>Please check physical disk for warnings or errors</p> | `{TEMPLATE_NAME:system.hw.physicaldisk.status[physicalDiskComponentStatus.{#SNMPINDEX}].count(#1,{$DISK_WARN_STATUS:"nonCritical"},eq)}=1` | WARNING | <p>**Depends on**:</p><p>- {#DISK_NAME}: Physical disk failed</p> |
+| {#DISK_NAME}: Disk has been replaced (new serial number received) | <p>Disk serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.physicaldisk.serialnumber[physicalDiskSerialNo.{#SNMPINDEX}].diff()}=1 and {TEMPLATE_NAME:system.hw.physicaldisk.serialnumber[physicalDiskSerialNo.{#SNMPINDEX}].strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| {#DISK_NAME}: Physical disk S.M.A.R.T. failed | <p>Disk probably requires replacement.</p> | `{TEMPLATE_NAME:system.hw.physicaldisk.smart_status[physicalDiskSmartAlertIndication.{#SNMPINDEX}].count(#1,{$DISK_SMART_FAIL_STATUS},eq)}=1` | HIGH | <p>**Depends on**:</p><p>- {#DISK_NAME}: Physical disk failed</p> |
+| {#PSU_DESCR}: Power supply is in critical state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[powerSupplyStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"critical"},eq)}=1 or {TEMPLATE_NAME:sensor.psu.status[powerSupplyStatus.{#SNMPINDEX}].count(#1,{$PSU_CRIT_STATUS:"nonRecoverable"},eq)}=1` | AVERAGE | |
+| {#PSU_DESCR}: Power supply is in warning state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[powerSupplyStatus.{#SNMPINDEX}].count(#1,{$PSU_WARN_STATUS:"nonCritical"},eq)}=1` | WARNING | <p>**Depends on**:</p><p>- {#PSU_DESCR}: Power supply is in critical state</p> |
+| System is in unrecoverable state! | <p>Please check the device for faults</p> | `{TEMPLATE_NAME:system.status[globalSystemStatus.0].count(#1,{$HEALTH_DISASTER_STATUS},eq)}=1` | HIGH | |
+| System status is in critical state | <p>Please check the device for errors</p> | `{TEMPLATE_NAME:system.status[globalSystemStatus.0].count(#1,{$HEALTH_CRIT_STATUS},eq)}=1` | HIGH | <p>**Depends on**:</p><p>- System is in unrecoverable state!</p> |
+| System status is in warning state | <p>Please check the device for warnings</p> | `{TEMPLATE_NAME:system.status[globalSystemStatus.0].count(#1,{$HEALTH_WARN_STATUS},eq)}=1` | WARNING | <p>**Depends on**:</p><p>- System is in unrecoverable state!</p><p>- System status is in critical state</p> |
+| {#SENSOR_LOCALE}: Temperature is above warning threshold: >{$TEMP_WARN:"CPU"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.CPU.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"CPU"} or {Dell iDRAC SNMP:sensor.temp.status[temperatureProbeStatus.CPU.{#SNMPINDEX}].last()}={$TEMP_WARN_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.CPU.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"CPU"}-3` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_LOCALE}: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"}</p> |
+| {#SENSOR_LOCALE}: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.CPU.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"CPU"} or {Dell iDRAC SNMP:sensor.temp.status[temperatureProbeStatus.CPU.{#SNMPINDEX}].last()}={$TEMP_CRIT_STATUS} or {Dell iDRAC SNMP:sensor.temp.status[temperatureProbeStatus.CPU.{#SNMPINDEX}].last()}={$TEMP_DISASTER_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.CPU.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"CPU"}-3` | HIGH | |
+| {#SENSOR_LOCALE}: Temperature is too low: <{$TEMP_CRIT_LOW:"CPU"} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.CPU.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.CPU.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"CPU"}+3` | AVERAGE | |
+| {#SENSOR_LOCALE}: Temperature is above warning threshold: >{$TEMP_WARN:"Ambient"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.Ambient.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Ambient"} or {Dell iDRAC SNMP:sensor.temp.status[temperatureProbeStatus.Ambient.{#SNMPINDEX}].last()}={$TEMP_WARN_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.Ambient.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Ambient"}-3` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_LOCALE}: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"}</p> |
+| {#SENSOR_LOCALE}: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.Ambient.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Ambient"} or {Dell iDRAC SNMP:sensor.temp.status[temperatureProbeStatus.Ambient.{#SNMPINDEX}].last()}={$TEMP_CRIT_STATUS} or {Dell iDRAC SNMP:sensor.temp.status[temperatureProbeStatus.Ambient.{#SNMPINDEX}].last()}={$TEMP_DISASTER_STATUS}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.Ambient.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Ambient"}-3` | HIGH | |
+| {#SENSOR_LOCALE}: Temperature is too low: <{$TEMP_CRIT_LOW:"Ambient"} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.Ambient.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[temperatureProbeReading.Ambient.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Ambient"}+3` | AVERAGE | |
+| Disk {#SNMPVALUE}({#DISK_NAME}): Virtual disk failed | <p>Please check virtual disk for warnings or errors</p> | `{TEMPLATE_NAME:system.hw.virtualdisk.status[virtualDiskState.{#SNMPINDEX}].count(#1,{$VDISK_CRIT_STATUS:"failed"},eq)}=1` | HIGH | |
+| Disk {#SNMPVALUE}({#DISK_NAME}): Virtual disk is in warning state | <p>Please check virtual disk for warnings or errors</p> | `{TEMPLATE_NAME:system.hw.virtualdisk.status[virtualDiskState.{#SNMPINDEX}].count(#1,{$VDISK_WARN_STATUS:"degraded"},eq)}=1` | AVERAGE | <p>**Depends on**:</p><p>- Disk {#SNMPVALUE}({#DISK_NAME}): Virtual disk failed</p> |
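Most trigger expressions above share one pattern: `count(#1,{$MACRO},eq)=1` checks whether the latest collected status equals the problem value held in the macro. The temperature triggers additionally pair a problem expression (`avg(5m)` above the threshold) with a recovery expression (`max(5m)` at least 3 degrees below it) so the trigger does not flap around the boundary. A rough Python sketch of that hysteresis logic (illustration only):

```python
# Illustration only: the hysteresis pattern used by the temperature triggers.
# The problem fires when the 5-minute average exceeds the threshold and
# recovers only when the 5-minute maximum drops 3 degrees below it.

def temperature_problem(values, threshold, problem_active):
    avg_5m = sum(values) / len(values)
    max_5m = max(values)
    if not problem_active:
        return avg_5m > threshold           # problem: avg(5m) > {$TEMP_WARN:"CPU"}
    return not (max_5m < threshold - 3)     # recovery: max(5m) < {$TEMP_WARN:"CPU"} - 3

print(temperature_problem([49, 51, 52], 50, problem_active=False))  # True  - raised
print(temperature_problem([48, 49, 48], 50, problem_active=True))   # True  - not yet recovered
print(temperature_problem([45, 46, 44], 50, problem_active=True))   # False - recovered
```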
## Feedback
diff --git a/templates/server/ibm_imm_snmp/README.md b/templates/server/ibm_imm_snmp/README.md
index f00aa2124ec..e63d17a99db 100644
--- a/templates/server/ibm_imm_snmp/README.md
+++ b/templates/server/ibm_imm_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
for IMM2 and IMM1 IBM serverX hardware
This template was tested on:
@@ -23,73 +23,73 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$DISK_OK_STATUS} |<p>-</p> |`Normal` |
-|{$FAN_OK_STATUS} |<p>-</p> |`Normal` |
-|{$HEALTH_CRIT_STATUS} |<p>-</p> |`2` |
-|{$HEALTH_DISASTER_STATUS} |<p>-</p> |`0` |
-|{$HEALTH_WARN_STATUS} |<p>-</p> |`4` |
-|{$PSU_OK_STATUS} |<p>-</p> |`Normal` |
-|{$TEMP_CRIT:"Ambient"} |<p>-</p> |`35` |
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`60` |
-|{$TEMP_WARN:"Ambient"} |<p>-</p> |`30` |
-|{$TEMP_WARN} |<p>-</p> |`50` |
+| Name | Description | Default |
+|---------------------------|-------------|----------|
+| {$DISK_OK_STATUS} | <p>-</p> | `Normal` |
+| {$FAN_OK_STATUS} | <p>-</p> | `Normal` |
+| {$HEALTH_CRIT_STATUS} | <p>-</p> | `2` |
+| {$HEALTH_DISASTER_STATUS} | <p>-</p> | `0` |
+| {$HEALTH_WARN_STATUS} | <p>-</p> | `4` |
+| {$PSU_OK_STATUS} | <p>-</p> | `Normal` |
+| {$TEMP_CRIT:"Ambient"} | <p>-</p> | `35` |
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `60` |
+| {$TEMP_WARN:"Ambient"} | <p>-</p> | `30` |
+| {$TEMP_WARN} | <p>-</p> | `50` |
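Macros defined with a context, such as `{$TEMP_CRIT:"Ambient"}`, override the plain macro only for the matching sensor; where no context-specific value is defined, the plain `{$TEMP_CRIT}` default applies. A small Python sketch of that resolution order (illustration only, values taken from the table above):

```python
# Illustration only: how the context macros in the table above resolve.
MACROS = {
    ("{$TEMP_CRIT}", None): 60,
    ("{$TEMP_CRIT}", "Ambient"): 35,
    ("{$TEMP_WARN}", None): 50,
    ("{$TEMP_WARN}", "Ambient"): 30,
}

def resolve(name, context=None):
    """Return the context-specific value if defined, else the plain macro."""
    return MACROS.get((name, context), MACROS[(name, None)])

print(resolve("{$TEMP_CRIT}", "Ambient"))  # 35 - context-specific override
print(resolve("{$TEMP_CRIT}", "CPU"))      # 60 - falls back to {$TEMP_CRIT}
```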
## Template links
-|Name|
-|----|
-|Generic SNMP |
+| Name |
+|--------------|
+| Generic SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Temperature Discovery |<p>Scanning IMM-MIB::tempTable</p> |SNMP |tempDescr.discovery<p>**Filter**:</p>AND_OR <p>- B: {#SNMPVALUE} MATCHES_REGEX `(DIMM|PSU|PCH|RAID|RR|PCI).*`</p> |
-|Temperature Discovery Ambient |<p>Scanning IMM-MIB::tempTable with Ambient filter</p> |SNMP |tempDescr.discovery.ambient<p>**Filter**:</p>AND_OR <p>- B: {#SNMPVALUE} MATCHES_REGEX `Ambient.*`</p> |
-|Temperature Discovery CPU |<p>Scanning IMM-MIB::tempTable with CPU filter</p> |SNMP |tempDescr.discovery.cpu<p>**Filter**:</p>AND_OR <p>- B: {#SNMPVALUE} MATCHES_REGEX `CPU [0-9]* Temp`</p> |
-|PSU Discovery |<p>IMM-MIB::powerFruName</p> |SNMP |psu.discovery |
-|FAN Discovery |<p>IMM-MIB::fanDescr</p> |SNMP |fan.discovery |
-|Physical Disk Discovery |<p>-</p> |SNMP |physicalDisk.discovery |
+| Name | Description | Type | Key and additional info |
+|-------------------------------|--------------------------------------------------------|------|-------------------------------------------------------------------------------------------------------------------|
+| Temperature Discovery | <p>Scanning IMM-MIB::tempTable</p> | SNMP | tempDescr.discovery<p>**Filter**:</p>AND_OR <p>- B: {#SNMPVALUE} MATCHES_REGEX `(DIMM|PSU|PCH|RAID|RR|PCI).*`</p> |
+| Temperature Discovery Ambient | <p>Scanning IMM-MIB::tempTable with Ambient filter</p> | SNMP | tempDescr.discovery.ambient<p>**Filter**:</p>AND_OR <p>- B: {#SNMPVALUE} MATCHES_REGEX `Ambient.*`</p> |
+| Temperature Discovery CPU | <p>Scanning IMM-MIB::tempTable with CPU filter</p> | SNMP | tempDescr.discovery.cpu<p>**Filter**:</p>AND_OR <p>- B: {#SNMPVALUE} MATCHES_REGEX `CPU [0-9]* Temp`</p> |
+| PSU Discovery | <p>IMM-MIB::powerFruName</p> | SNMP | psu.discovery |
+| FAN Discovery | <p>IMM-MIB::fanDescr</p> | SNMP | fan.discovery |
+| Physical Disk Discovery | <p>-</p> | SNMP | physicalDisk.discovery |
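All three temperature discovery rules walk the same IMM-MIB::tempTable and differ only in the `MATCHES_REGEX` filter applied to `{#SNMPVALUE}`, so each discovered sensor ends up under the rule whose pattern matches its name. A short Python sketch of how those filters classify a few assumed sensor names (the names are examples, not taken from a real device):

```python
# Illustration only: how the discovery filters above split sensor names
# returned by IMM-MIB::tempTable between the three rules.
import re

FILTERS = {
    "Temperature Discovery":         r"(DIMM|PSU|PCH|RAID|RR|PCI).*",
    "Temperature Discovery Ambient": r"Ambient.*",
    "Temperature Discovery CPU":     r"CPU [0-9]* Temp",
}

sensors = ["Ambient Temp", "CPU 1 Temp", "DIMM 3 Temp", "PCH Temp"]  # assumed names
for name in sensors:
    matched = [rule for rule, rx in FILTERS.items() if re.match(rx, name)]
    print(name, "->", matched)
```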
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Fans |{#FAN_DESCR}: Fan status |<p>MIB: IMM-MIB</p><p>A description of the fan component status.</p> |SNMP |sensor.fan.status[fanHealthStatus.{#SNMPINDEX}] |
-|Fans |{#FAN_DESCR}: Fan speed, % |<p>MIB: IMM-MIB</p><p>Fan speed expressed in percent(%) of maximum RPM.</p><p>An octet string expressed as 'ddd% of maximum' where:d is a decimal digit or blank space for a leading zero.</p><p>If the fan is determined not to be running or the fan speed cannot be determined, the string will indicate 'Offline'.</p> |SNMP |sensor.fan.speed.percentage[fanSpeed.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- REGEX: `(\d{1,3}) *%( of maximum)? \1`</p> |
-|Inventory |Hardware model name |<p>MIB: IMM-MIB</p> |SNMP |system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Inventory |Hardware serial number |<p>MIB: IMM-MIB</p><p>Machine serial number VPD information</p> |SNMP |system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
-|Physical_disks |{#SNMPINDEX}: Physical disk status |<p>MIB: IMM-MIB</p> |SNMP |system.hw.physicaldisk.status[diskHealthStatus.{#SNMPINDEX}] |
-|Physical_disks |{#SNMPINDEX}: Physical disk part number |<p>MIB: IMM-MIB</p><p>disk module FRU name.</p> |SNMP |system.hw.physicaldisk.part_number[diskFruName.{#SNMPINDEX}] |
-|Power_supply |{#PSU_DESCR}: Power supply status |<p>MIB: IMM-MIB</p><p>A description of the power module status.</p> |SNMP |sensor.psu.status[powerHealthStatus.{#SNMPINDEX}] |
-|Status |Overall system health status |<p>MIB: IMM-MIB</p><p>Indicates status of system health for the system in which the IMM resides. Value of 'nonRecoverable' indicates a severe error has occurred and the system may not be functioning. A value of 'critical' indicates that a error has occurred but the system is currently functioning properly. A value of 'nonCritical' indicates that a condition has occurred that may change the state of the system in the future but currently the system is working properly. A value of 'normal' indicates that the system is operating normally.</p> |SNMP |system.status[systemHealthStat.0] |
-|Temperature |{#SNMPVALUE}: Temperature |<p>MIB: IMM-MIB</p><p>Temperature readings of testpoint: {#SNMPVALUE}</p> |SNMP |sensor.temp.value[tempReading.{#SNMPINDEX}] |
-|Temperature |Ambient: Temperature |<p>MIB: IMM-MIB</p><p>Temperature readings of testpoint: Ambient</p> |SNMP |sensor.temp.value[tempReading.Ambient.{#SNMPINDEX}] |
-|Temperature |CPU: Temperature |<p>MIB: IMM-MIB</p><p>Temperature readings of testpoint: CPU</p> |SNMP |sensor.temp.value[tempReading.CPU.{#SNMPINDEX}] |
+| Group | Name | Description | Type | Key and additional info |
+|----------------|-----------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------|----------------------------------------------------------------------------------------------------------------------------|
+| Fans | {#FAN_DESCR}: Fan status | <p>MIB: IMM-MIB</p><p>A description of the fan component status.</p> | SNMP | sensor.fan.status[fanHealthStatus.{#SNMPINDEX}] |
+| Fans           | {#FAN_DESCR}: Fan speed, % | <p>MIB: IMM-MIB</p><p>Fan speed expressed in percent (%) of maximum RPM.</p><p>An octet string expressed as 'ddd% of maximum', where d is a decimal digit or a blank space for a leading zero.</p><p>If the fan is determined not to be running or the fan speed cannot be determined, the string will indicate 'Offline'.</p> | SNMP | sensor.fan.speed.percentage[fanSpeed.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- REGEX: `(\d{1,3}) *%( of maximum)? \1`</p> |
+| Inventory | Hardware model name | <p>MIB: IMM-MIB</p> | SNMP | system.hw.model<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Inventory | Hardware serial number | <p>MIB: IMM-MIB</p><p>Machine serial number VPD information</p> | SNMP | system.hw.serialnumber<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
+| Physical_disks | {#SNMPINDEX}: Physical disk status | <p>MIB: IMM-MIB</p> | SNMP | system.hw.physicaldisk.status[diskHealthStatus.{#SNMPINDEX}] |
+| Physical_disks | {#SNMPINDEX}: Physical disk part number | <p>MIB: IMM-MIB</p><p>disk module FRU name.</p> | SNMP | system.hw.physicaldisk.part_number[diskFruName.{#SNMPINDEX}] |
+| Power_supply | {#PSU_DESCR}: Power supply status | <p>MIB: IMM-MIB</p><p>A description of the power module status.</p> | SNMP | sensor.psu.status[powerHealthStatus.{#SNMPINDEX}] |
+| Status         | Overall system health status | <p>MIB: IMM-MIB</p><p>Indicates the status of system health for the system in which the IMM resides. A value of 'nonRecoverable' indicates a severe error has occurred and the system may not be functioning. A value of 'critical' indicates that an error has occurred but the system is currently functioning properly. A value of 'nonCritical' indicates that a condition has occurred that may change the state of the system in the future but currently the system is working properly. A value of 'normal' indicates that the system is operating normally.</p> | SNMP | system.status[systemHealthStat.0] |
+| Temperature | {#SNMPVALUE}: Temperature | <p>MIB: IMM-MIB</p><p>Temperature readings of testpoint: {#SNMPVALUE}</p> | SNMP | sensor.temp.value[tempReading.{#SNMPINDEX}] |
+| Temperature | Ambient: Temperature | <p>MIB: IMM-MIB</p><p>Temperature readings of testpoint: Ambient</p> | SNMP | sensor.temp.value[tempReading.Ambient.{#SNMPINDEX}] |
+| Temperature | CPU: Temperature | <p>MIB: IMM-MIB</p><p>Temperature readings of testpoint: CPU</p> | SNMP | sensor.temp.value[tempReading.CPU.{#SNMPINDEX}] |
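The fan speed item keeps only the numeric part of the reading: the `REGEX` preprocessing step captures the leading digits from strings such as "75% of maximum" and outputs capture group `\1`, so a reading of "Offline" does not produce a value for this item. A Python sketch of the same extraction (illustration only):

```python
# Illustration only: the REGEX preprocessing applied to fanSpeed readings.
import re

PATTERN = r"(\d{1,3}) *%( of maximum)?"

def extract_percent(raw):
    """Return the numeric fan speed percentage, or None if the reading has no number."""
    m = re.search(PATTERN, raw)
    return m.group(1) if m else None

print(extract_percent("75% of maximum"))  # '75'
print(extract_percent("Offline"))         # None - no numeric speed reported
```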
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#FAN_DESCR}: Fan is not in normal state |<p>Please check the fan unit</p> |`{TEMPLATE_NAME:sensor.fan.status[fanHealthStatus.{#SNMPINDEX}].count(#1,{$FAN_OK_STATUS},ne)}=1` |INFO | |
-|Device has been replaced (new serial number received) |<p>Device serial number has changed. Ack to close</p> |`{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` |INFO |<p>Manual close: YES</p> |
-|{#SNMPINDEX}: Physical disk is not in OK state |<p>Please check physical disk for warnings or errors</p> |`{TEMPLATE_NAME:system.hw.physicaldisk.status[diskHealthStatus.{#SNMPINDEX}].count(#1,{$DISK_OK_STATUS},ne)}=1` |WARNING | |
-|{#PSU_DESCR}: Power supply is not in normal state |<p>Please check the power supply unit for errors</p> |`{TEMPLATE_NAME:sensor.psu.status[powerHealthStatus.{#SNMPINDEX}].count(#1,{$PSU_OK_STATUS},ne)}=1` |INFO | |
-|System is in unrecoverable state! |<p>Please check the device for faults</p> |`{TEMPLATE_NAME:system.status[systemHealthStat.0].count(#1,{$HEALTH_DISASTER_STATUS},eq)}=1` |HIGH | |
-|System status is in critical state |<p>Please check the device for errors</p> |`{TEMPLATE_NAME:system.status[systemHealthStat.0].count(#1,{$HEALTH_CRIT_STATUS},eq)}=1` |HIGH |<p>**Depends on**:</p><p>- System is in unrecoverable state!</p> |
-|System status is in warning state |<p>Please check the device for warnings</p> |`{TEMPLATE_NAME:system.status[systemHealthStat.0].count(#1,{$HEALTH_WARN_STATUS},eq)}=1` |WARNING |<p>**Depends on**:</p><p>- System is in unrecoverable state!</p><p>- System status is in critical state</p> |
-|{#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[tempReading.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|{#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[tempReading.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|{#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[tempReading.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
-|Ambient: Temperature is above warning threshold: >{$TEMP_WARN:"Ambient"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[tempReading.Ambient.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.Ambient.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Ambient"}-3` |WARNING |<p>**Depends on**:</p><p>- Ambient: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"}</p> |
-|Ambient: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[tempReading.Ambient.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.Ambient.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Ambient"}-3` |HIGH | |
-|Ambient: Temperature is too low: <{$TEMP_CRIT_LOW:"Ambient"} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[tempReading.Ambient.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.Ambient.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Ambient"}+3` |AVERAGE | |
-|CPU: Temperature is above warning threshold: >{$TEMP_WARN:"CPU"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[tempReading.CPU.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.CPU.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"CPU"}-3` |WARNING |<p>**Depends on**:</p><p>- CPU: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"}</p> |
-|CPU: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[tempReading.CPU.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.CPU.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"CPU"}-3` |HIGH | |
-|CPU: Temperature is too low: <{$TEMP_CRIT_LOW:"CPU"} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[tempReading.CPU.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.CPU.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"CPU"}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|---------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-------------------------------------------------------------------------------------------------------------|
+| {#FAN_DESCR}: Fan is not in normal state | <p>Please check the fan unit</p> | `{TEMPLATE_NAME:sensor.fan.status[fanHealthStatus.{#SNMPINDEX}].count(#1,{$FAN_OK_STATUS},ne)}=1` | INFO | |
+| Device has been replaced (new serial number received) | <p>Device serial number has changed. Ack to close</p> | `{TEMPLATE_NAME:system.hw.serialnumber.diff()}=1 and {TEMPLATE_NAME:system.hw.serialnumber.strlen()}>0` | INFO | <p>Manual close: YES</p> |
+| {#SNMPINDEX}: Physical disk is not in OK state | <p>Please check physical disk for warnings or errors</p> | `{TEMPLATE_NAME:system.hw.physicaldisk.status[diskHealthStatus.{#SNMPINDEX}].count(#1,{$DISK_OK_STATUS},ne)}=1` | WARNING | |
+| {#PSU_DESCR}: Power supply is not in normal state | <p>Please check the power supply unit for errors</p> | `{TEMPLATE_NAME:sensor.psu.status[powerHealthStatus.{#SNMPINDEX}].count(#1,{$PSU_OK_STATUS},ne)}=1` | INFO | |
+| System is in unrecoverable state! | <p>Please check the device for faults</p> | `{TEMPLATE_NAME:system.status[systemHealthStat.0].count(#1,{$HEALTH_DISASTER_STATUS},eq)}=1` | HIGH | |
+| System status is in critical state | <p>Please check the device for errors</p> | `{TEMPLATE_NAME:system.status[systemHealthStat.0].count(#1,{$HEALTH_CRIT_STATUS},eq)}=1` | HIGH | <p>**Depends on**:</p><p>- System is in unrecoverable state!</p> |
+| System status is in warning state | <p>Please check the device for warnings</p> | `{TEMPLATE_NAME:system.status[systemHealthStat.0].count(#1,{$HEALTH_WARN_STATUS},eq)}=1` | WARNING | <p>**Depends on**:</p><p>- System is in unrecoverable state!</p><p>- System status is in critical state</p> |
+| {#SNMPVALUE}: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[tempReading.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| {#SNMPVALUE}: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[tempReading.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| {#SNMPVALUE}: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[tempReading.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
+| Ambient: Temperature is above warning threshold: >{$TEMP_WARN:"Ambient"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[tempReading.Ambient.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.Ambient.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"Ambient"}-3` | WARNING | <p>**Depends on**:</p><p>- Ambient: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"}</p> |
+| Ambient: Temperature is above critical threshold: >{$TEMP_CRIT:"Ambient"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[tempReading.Ambient.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.Ambient.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"Ambient"}-3` | HIGH | |
+| Ambient: Temperature is too low: <{$TEMP_CRIT_LOW:"Ambient"} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[tempReading.Ambient.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"Ambient"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.Ambient.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"Ambient"}+3` | AVERAGE | |
+| CPU: Temperature is above warning threshold: >{$TEMP_WARN:"CPU"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[tempReading.CPU.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.CPU.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"CPU"}-3` | WARNING | <p>**Depends on**:</p><p>- CPU: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"}</p> |
+| CPU: Temperature is above critical threshold: >{$TEMP_CRIT:"CPU"} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[tempReading.CPU.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.CPU.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:"CPU"}-3` | HIGH | |
+| CPU: Temperature is too low: <{$TEMP_CRIT_LOW:"CPU"} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[tempReading.CPU.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:"CPU"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[tempReading.CPU.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:"CPU"}+3` | AVERAGE | |
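+
+As a reading aid, each temperature trigger above pairs its problem expression with a recovery expression that requires the reading to move a few degrees past the threshold before the problem closes, which avoids flapping around the limit. Taking the CPU warning trigger exactly as written in the table (with `TEMPLATE_NAME` standing in for the template name, as elsewhere in this README):
+
+```
+Problem expression:
+  {TEMPLATE_NAME:sensor.temp.value[tempReading.CPU.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:"CPU"}
+Recovery expression:
+  {TEMPLATE_NAME:sensor.temp.value[tempReading.CPU.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:"CPU"}-3
+```
+
+The trigger fires when the 5-minute average exceeds `{$TEMP_WARN:"CPU"}` and recovers only once the 5-minute maximum has stayed at least 3 degrees below that threshold; the critical and low-temperature triggers follow the same pattern with `{$TEMP_CRIT}` and `{$TEMP_CRIT_LOW}` (the low-temperature trigger mirrored, recovering at threshold+3).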
## Feedback
diff --git a/templates/server/supermicro_aten_snmp/README.md b/templates/server/supermicro_aten_snmp/README.md
index a9f645fda52..e7bd4942059 100644
--- a/templates/server/supermicro_aten_snmp/README.md
+++ b/templates/server/supermicro_aten_snmp/README.md
@@ -3,7 +3,7 @@
## Overview
-For Zabbix version: 5.2 and higher
+For Zabbix version: 5.4 and higher
for BMC ATEN IPMI controllers of Supermicro servers
https://www.supermicro.com/solutions/IPMI.cfm
@@ -21,39 +21,39 @@ No specific Zabbix configuration is required.
### Macros used
-|Name|Description|Default|
-|----|-----------|-------|
-|{$TEMP_CRIT_LOW} |<p>-</p> |`5` |
-|{$TEMP_CRIT} |<p>-</p> |`60` |
-|{$TEMP_WARN} |<p>-</p> |`50` |
+| Name | Description | Default |
+|------------------|-------------|---------|
+| {$TEMP_CRIT_LOW} | <p>-</p> | `5` |
+| {$TEMP_CRIT} | <p>-</p> | `60` |
+| {$TEMP_WARN} | <p>-</p> | `50` |
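+
+These thresholds are ordinary user macros, so they can be overridden at host level without editing the template; where a trigger references a macro with a context (for example `{$TEMP_CRIT:"Ambient"}` in the template earlier in this diff), a matching context-specific macro takes precedence over the bare one. A minimal sketch of host-level overrides (the values are illustrative only, not recommendations):
+
+```
+{$TEMP_WARN}             55   # replaces the template default of 50 for every temperature trigger on this host
+{$TEMP_CRIT:"Ambient"}   35   # hypothetical context override; used only by expressions written as {$TEMP_CRIT:"Ambient"}
+{$TEMP_CRIT_LOW}         10   # replaces the template default of 5
+```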
## Template links
-|Name|
-|----|
-|Generic SNMP |
+| Name |
+|--------------|
+| Generic SNMP |
## Discovery rules
-|Name|Description|Type|Key and additional info|
-|----|-----------|----|----|
-|Temperature Discovery |<p>Scanning ATEN-IPMI-MIB::sensorTable with filter: not connected temp sensors (Value = 0)</p> |SNMP |tempDescr.discovery<p>**Filter**:</p>AND <p>- B: {#SNMPVALUE} MATCHES_REGEX `[1-9]+`</p><p>- A: {#SENSOR_DESCR} MATCHES_REGEX `.*Temp.*`</p> |
-|FAN Discovery |<p>Scanning ATEN-IPMI-MIB::sensorTable with filter: not connected FAN sensors (Value = 0)</p> |SNMP |fan.discovery<p>**Filter**:</p>AND <p>- B: {#SNMPVALUE} MATCHES_REGEX `[1-9]+`</p><p>- A: {#SENSOR_DESCR} MATCHES_REGEX `FAN.*`</p> |
+| Name | Description | Type | Key and additional info |
+|-----------------------|------------------------------------------------------------------------------------------------|------|----------------------------------------------------------------------------------------------------------------------------------------------|
+| Temperature Discovery | <p>Scanning ATEN-IPMI-MIB::sensorTable, skipping temperature sensors that are not connected (reading = 0)</p> | SNMP | tempDescr.discovery<p>**Filter**:</p>AND <p>- B: {#SNMPVALUE} MATCHES_REGEX `[1-9]+`</p><p>- A: {#SENSOR_DESCR} MATCHES_REGEX `.*Temp.*`</p> |
+| FAN Discovery         | <p>Scanning ATEN-IPMI-MIB::sensorTable, skipping FAN sensors that are not connected (reading = 0)</p>         | SNMP | fan.discovery<p>**Filter**:</p>AND <p>- B: {#SNMPVALUE} MATCHES_REGEX `[1-9]+`</p><p>- A: {#SENSOR_DESCR} MATCHES_REGEX `FAN.*`</p> |
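+
+The two filter conditions in each discovery rule are combined with AND, so a sensor row produces items and triggers only if its description matches the pattern and its current reading contains a non-zero digit (disconnected sensors report 0). A hypothetical illustration (sensor names and values are examples, not data from a real device):
+
+```
+{#SENSOR_DESCR} = "CPU Temp"     {#SNMPVALUE} = 48   -> discovered by Temperature Discovery
+{#SENSOR_DESCR} = "System Temp"  {#SNMPVALUE} = 0    -> skipped, reading is 0 (sensor not connected)
+{#SENSOR_DESCR} = "FAN1"         {#SNMPVALUE} = 0    -> skipped by FAN Discovery for the same reason
+```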
## Items collected
-|Group|Name|Description|Type|Key and additional info|
-|-----|----|-----------|----|---------------------|
-|Fans |{#SENSOR_DESCR}: Fan speed, % |<p>MIB: ATEN-IPMI-MIB</p><p>A textual string containing information about the interface.</p><p>This string should include the name of the manufacturer, the product name and the version of the interface hardware/software.</p> |SNMP |sensor.fan.speed.percentage[sensorReading.{#SNMPINDEX}] |
-|Temperature |{#SENSOR_DESCR}: Temperature |<p>MIB: ATEN-IPMI-MIB</p><p>A textual string containing information about the interface.</p><p>This string should include the name of the manufacturer, the product name and the version of the interface hardware/software.</p> |SNMP |sensor.temp.value[sensorReading.{#SNMPINDEX}] |
+| Group       | Name                          | Description                                                                                                 | Type | Key and additional info                                   |
+|-------------|-------------------------------|-------------------------------------------------------------------------------------------------------------|------|-----------------------------------------------------------|
+| Fans        | {#SENSOR_DESCR}: Fan speed, % | <p>MIB: ATEN-IPMI-MIB</p><p>Fan speed reported by sensorReading for the discovered sensor, in percent.</p>  | SNMP | sensor.fan.speed.percentage[sensorReading.{#SNMPINDEX}]   |
+| Temperature | {#SENSOR_DESCR}: Temperature  | <p>MIB: ATEN-IPMI-MIB</p><p>Temperature reported by sensorReading for the discovered sensor.</p>            | SNMP | sensor.temp.value[sensorReading.{#SNMPINDEX}]             |
## Triggers
-|Name|Description|Expression|Severity|Dependencies and additional info|
-|----|-----------|----|----|----|
-|{#SENSOR_DESCR}: Temperature is above warning threshold: >{$TEMP_WARN:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[sensorReading.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[sensorReading.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` |WARNING |<p>**Depends on**:</p><p>- {#SENSOR_DESCR}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
-|{#SENSOR_DESCR}: Temperature is above critical threshold: >{$TEMP_CRIT:""} |<p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> |`{TEMPLATE_NAME:sensor.temp.value[sensorReading.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[sensorReading.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` |HIGH | |
-|{#SENSOR_DESCR}: Temperature is too low: <{$TEMP_CRIT_LOW:""} |<p>-</p> |`{TEMPLATE_NAME:sensor.temp.value[sensorReading.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[sensorReading.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` |AVERAGE | |
+| Name | Description | Expression | Severity | Dependencies and additional info |
+|----------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|-----------------------------------------------------------------------------------------------------------|
+| {#SENSOR_DESCR}: Temperature is above warning threshold: >{$TEMP_WARN:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[sensorReading.{#SNMPINDEX}].avg(5m)}>{$TEMP_WARN:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[sensorReading.{#SNMPINDEX}].max(5m)}<{$TEMP_WARN:""}-3` | WARNING | <p>**Depends on**:</p><p>- {#SENSOR_DESCR}: Temperature is above critical threshold: >{$TEMP_CRIT:""}</p> |
+| {#SENSOR_DESCR}: Temperature is above critical threshold: >{$TEMP_CRIT:""} | <p>This trigger uses temperature sensor values as well as temperature sensor status if available</p> | `{TEMPLATE_NAME:sensor.temp.value[sensorReading.{#SNMPINDEX}].avg(5m)}>{$TEMP_CRIT:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[sensorReading.{#SNMPINDEX}].max(5m)}<{$TEMP_CRIT:""}-3` | HIGH | |
+| {#SENSOR_DESCR}: Temperature is too low: <{$TEMP_CRIT_LOW:""} | <p>-</p> | `{TEMPLATE_NAME:sensor.temp.value[sensorReading.{#SNMPINDEX}].avg(5m)}<{$TEMP_CRIT_LOW:""}`<p>Recovery expression:</p>`{TEMPLATE_NAME:sensor.temp.value[sensorReading.{#SNMPINDEX}].min(5m)}>{$TEMP_CRIT_LOW:""}+3` | AVERAGE | |
## Feedback