
github.com/zabbix/zabbix.git
author     Alexander Bakaldin <alexander.bakaldin@zabbix.com>  2021-04-05 14:46:12 +0300
committer  Alexander Bakaldin <alexander.bakaldin@zabbix.com>  2021-04-05 14:46:12 +0300
commit     bf1d5fe45b944865387f1864a928b44791d63f9c (patch)
tree       f7b21579718d31d74723af3608c7038d1e309244 /templates
parent     faab58ed3424ef5def34155063807cc21a926b83 (diff)
parent     4c6645521de87b3ecf7cf97fccbd9d0eae31c68c (diff)

.........T [ZBX-18667] fixed code spelling in templates

* commit '4c6645521de87b3ecf7cf97fccbd9d0eae31c68c':
  .........T [ZBX-18667] fixed code spelling in templates
  .........T [ZBX-18667] fixed code spelling in templates
  .........T [ZBX-18667] further fixes
  .........T [ZBX-18667] fixed code spelling in templates
Diffstat (limited to 'templates')
-rw-r--r--  templates/app/activemq_jmx/README.md                                              |  8
-rw-r--r--  templates/app/activemq_jmx/template_app_activemq_jmx.yaml                         | 14
-rw-r--r--  templates/app/generic_java_jmx/README.md                                          |  2
-rw-r--r--  templates/app/generic_java_jmx/template_app_generic_java_jmx.yaml                 |  2
-rw-r--r--  templates/app/gitlab_http/README.md                                               |  2
-rw-r--r--  templates/app/gitlab_http/template_app_gitlab_http.yaml                           |  2
-rw-r--r--  templates/app/jenkins/README.md                                                   |  4
-rw-r--r--  templates/app/jenkins/template_app_jenkins.yaml                                    |  4
-rw-r--r--  templates/app/memcached/README.md                                                 |  2
-rw-r--r--  templates/app/memcached/template_app_memcached.yaml                               |  2
-rw-r--r--  templates/app/squid_snmp/README.md                                                |  6
-rw-r--r--  templates/app/squid_snmp/template_app_squid_snmp.yaml                             |  6
-rw-r--r--  templates/app/tomcat_jmx/README.md                                                |  2
-rw-r--r--  templates/app/zookeeper_http/README.md                                            |  4
-rw-r--r--  templates/app/zookeeper_http/template_app_zookeeper_http.yaml                     |  2
-rw-r--r--  templates/db/clickhouse_http/README.md                                            | 14
-rw-r--r--  templates/db/clickhouse_http/template_db_clickhouse_http.yaml                     | 18
-rw-r--r--  templates/db/ignite_jmx/README.md                                                 |  2
-rw-r--r--  templates/db/ignite_jmx/template_db_ignite_jmx.yaml                               |  2
-rw-r--r--  templates/db/oracle_agent2/README.md                                              |  2
-rw-r--r--  templates/db/oracle_agent2/template_db_oracle_agent2.yaml                         |  2
-rw-r--r--  templates/db/oracle_odbc/README.md                                                |  2
-rw-r--r--  templates/db/oracle_odbc/template_db_oracle_odbc.yaml                             |  2
-rw-r--r--  templates/db/postgresql/README.md                                                 |  2
-rw-r--r--  templates/db/postgresql_agent2/README.md                                          |  2
-rw-r--r--  templates/db/postgresql_agent2/template_db_postgresql_agent2.yaml                 |  2
-rw-r--r--  templates/net/arista_snmp/README.md                                               |  2
-rw-r--r--  templates/net/arista_snmp/template_net_arista_snmp.yaml                           |  2
-rw-r--r--  templates/net/brocade_foundry_sw_snmp/README.md                                   |  2
-rw-r--r--  templates/net/brocade_foundry_sw_snmp/template_net_brocade_foundry_sw_snmp.yaml   |  2
-rw-r--r--  templates/net/morningstar_snmp/tristar_mppt_600V_snmp/README.md                   |  4
-rw-r--r--  templates/net/morningstar_snmp/tristar_mppt_600V_snmp/tristar_mppt_600V_snmp.yaml |  4
-rw-r--r--  templates/net/morningstar_snmp/tristar_mppt_snmp/README.md                        |  4
-rw-r--r--  templates/net/morningstar_snmp/tristar_mppt_snmp/tristar_mppt_snmp.yaml           |  4
-rw-r--r--  templates/san/huawei_5300v5_snmp/README.md                                        |  2
-rw-r--r--  templates/san/huawei_5300v5_snmp/template_san_huawei_5300v5_snmp.yaml             |  2
-rw-r--r--  templates/san/netapp_aff_a700_http/README.md                                      | 20
-rw-r--r--  templates/san/netapp_aff_a700_http/template_san_netapp_aff_a700_http.yaml         | 20
-rw-r--r--  templates/server/dell_idrac_snmp/README.md                                        |  2
-rw-r--r--  templates/server/dell_idrac_snmp/template_server_dell_idrac_snmp.yaml             |  2
-rw-r--r--  templates/tel/asterisk_http/README.md                                             |  2
-rw-r--r--  templates/tel/asterisk_http/template_tel_asterisk_http.yaml                       |  2

42 files changed, 94 insertions(+), 94 deletions(-)
diff --git a/templates/app/activemq_jmx/README.md b/templates/app/activemq_jmx/README.md
index 88f16fcc77c..b414ae3eab4 100644
--- a/templates/app/activemq_jmx/README.md
+++ b/templates/app/activemq_jmx/README.md
@@ -112,16 +112,16 @@ There are no template links in this template.
|Broker {#JMXBROKERNAME}: Storage usage is too high (over {$ACTIVEMQ.STORE.MAX.HIGH:"{#JMXBROKERNAME}"}%) |<p>-</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},StorePercentUsage].min({$ACTIVEMQ.STORE.TIME:"{#JMXBROKERNAME}"})}>{$ACTIVEMQ.STORE.MAX.HIGH:"{#JMXBROKERNAME}"}` |HIGH | |
|Broker {#JMXBROKERNAME}: Temp usage is too high (over {$ACTIVEMQ.TEMP.MAX.WARN:"{#JMXBROKERNAME}"}%) |<p>-</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},TempPercentUsage].min({$ACTIVEMQ.TEMP.TIME:"{#JMXBROKERNAME}"})}>{$ACTIVEMQ.TEMP.MAX.WARN}` |AVERAGE |<p>**Depends on**:</p><p>- Broker {#JMXBROKERNAME}: Temp usage is too high (over {$ACTIVEMQ.TEMP.MAX.WARN:"{#JMXBROKERNAME}"}%)</p> |
|Broker {#JMXBROKERNAME}: Temp usage is too high (over {$ACTIVEMQ.TEMP.MAX.WARN:"{#JMXBROKERNAME}"}%) |<p>-</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},TempPercentUsage].min({$ACTIVEMQ.TEMP.TIME:"{#JMXBROKERNAME}"})}>{$ACTIVEMQ.TEMP.MAX.HIGH}` |HIGH | |
-|Broker {#JMXBROKERNAME}: Message enqueue rate is higer than dequeue rate for {$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXBROKERNAME}"} |<p>Enqueue rate is higer than dequeue rate. It may indicate performance problems.</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},TotalEnqueueCount].avg({$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXBROKERNAME}"})}>{Apache ActiveMQ by JMX:jmx[{#JMXOBJ},TotalEnqueueCount].avg({$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXBROKERNAME}"})}` |AVERAGE | |
+|Broker {#JMXBROKERNAME}: Message enqueue rate is higher than dequeue rate for {$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXBROKERNAME}"} |<p>Enqueue rate is higher than dequeue rate. It may indicate performance problems.</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},TotalEnqueueCount].avg({$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXBROKERNAME}"})}>{Apache ActiveMQ by JMX:jmx[{#JMXOBJ},TotalEnqueueCount].avg({$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXBROKERNAME}"})}` |AVERAGE | |
|Broker {#JMXBROKERNAME}: Consumers count is too low (below {$ACTIVEMQ.BROKER.CONSUMERS.MIN.HIGH:"{#JMXBROKERNAME}"} for {$ACTIVEMQ.BROKER.CONSUMERS.MIN.TIME:"{#JMXBROKERNAME}"}) |<p>-</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},TotalConsumerCount].max({$ACTIVEMQ.BROKER.CONSUMERS.MIN.TIME:"{#JMXBROKERNAME}"})}<{$ACTIVEMQ.BROKER.CONSUMERS.MIN.HIGH:"{#JMXBROKERNAME}"}` |HIGH | |
|Broker {#JMXBROKERNAME}: Producers count is too low (below {$ACTIVEMQ.BROKER.PRODUCERS.MIN.HIGH:"{#JMXBROKERNAME}"} for {$ACTIVEMQ.BROKER.PRODUCERS.MIN.TIME:"{#JMXBROKERNAME}"}) |<p>-</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},TotalProducerCount].max({$ACTIVEMQ.BROKER.PRODUCERS.MIN.TIME:"{#JMXBROKERNAME}"})}<{$ACTIVEMQ.BROKER.PRODUCERS.MIN.HIGH:"{#JMXBROKERNAME}"}` |HIGH | |
|{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Consumers count is too low (below {$ACTIVEMQ.DESTINATION.CONSUMERS.MIN.HIGH:"{#JMXDESTINATIONNAME}"} for {$ACTIVEMQ.DESTINATION.CONSUMERS.MIN.TIME:"{#JMXDESTINATIONNAME}"}) |<p>-</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},ConsumerCount].max({$ACTIVEMQ.DESTINATION.CONSUMERS.MIN.TIME:"{#JMXDESTINATIONNAME}"})}<{$ACTIVEMQ.DESTINATION.CONSUMERS.MIN.HIGH:"{#JMXDESTINATIONNAME}"} and {Apache ActiveMQ by JMX:jmx["org.apache.activemq:type=Broker,brokerName={#JMXBROKERNAME}",{$ACTIVEMQ.TOTAL.CONSUMERS.COUNT: "{#JMXDESTINATIONNAME}"}].last()}>{$ACTIVEMQ.BROKER.CONSUMERS.MIN.HIGH:"{#JMXBROKERNAME}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:jmx[{#JMXOBJ},ConsumerCount].min({$ACTIVEMQ.DESTINATION.CONSUMERS.MIN.TIME:"{#JMXDESTINATIONNAME}"})}>={$ACTIVEMQ.DESTINATION.CONSUMERS.MIN.HIGH:"{#JMXDESTINATIONNAME}"}` |AVERAGE |<p>Manual close: YES</p> |
|{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Producers count is too low (below {$ACTIVEMQ.DESTINATION.PRODUCERS.MIN.HIGH:"{#JMXDESTINATIONNAME}"} for {$ACTIVEMQ.DESTINATION.PRODUCERS.MIN.TIME:"{#JMXDESTINATIONNAME}"}) |<p>-</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},ProducerCount].max({$ACTIVEMQ.DESTINATION.PRODUCERS.MIN.TIME:"{#JMXDESTINATIONNAME}"})}<{$ACTIVEMQ.DESTINATION.PRODUCERS.MIN.HIGH:"{#JMXDESTINATIONNAME}"} and {Apache ActiveMQ by JMX:jmx["org.apache.activemq:type=Broker,brokerName={#JMXBROKERNAME}",{$ACTIVEMQ.TOTAL.PRODUCERS.COUNT: "{#JMXDESTINATIONNAME}"}].last()}>{$ACTIVEMQ.BROKER.PRODUCERS.MIN.HIGH:"{#JMXBROKERNAME}"}`<p>Recovery expression:</p>`{TEMPLATE_NAME:jmx[{#JMXOBJ},ProducerCount].min({$ACTIVEMQ.DESTINATION.PRODUCERS.MIN.TIME:"{#JMXDESTINATIONNAME}"})}>={$ACTIVEMQ.DESTINATION.PRODUCERS.MIN.HIGH:"{#JMXDESTINATIONNAME}"}` |AVERAGE |<p>Manual close: YES</p> |
|{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Memory usage is too high (over {$ACTIVEMQ.MEM.MAX.WARN:"{#JMXDESTINATIONNAME}"}%) |<p>-</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},MemoryPercentUsage].last()}>{$ACTIVEMQ.MEM.MAX.WARN:"{#JMXDESTINATIONNAME}"}` |AVERAGE | |
|{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Memory usage is too high (over {$ACTIVEMQ.MEM.MAX.HIGH:"{#JMXDESTINATIONNAME}"}%) |<p>-</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},MemoryPercentUsage].last()}>{$ACTIVEMQ.MEM.MAX.HIGH:"{#JMXDESTINATIONNAME}"}` |HIGH | |
-|{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Message enqueue rate is higer than dequeue rate for {$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXDESTINATIONNAME}"} |<p>Enqueue rate is higer than dequeue rate. It may indicate performance problems.</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},EnqueueCount].avg({$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXDESTINATIONNAME}"})}>{Apache ActiveMQ by JMX:jmx[{#JMXOBJ},DequeueCount].avg({$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXDESTINATIONNAME}"})}` |AVERAGE | |
-|{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Queue size higer than {$ACTIVEMQ.QUEUE.WARN:"{#JMXDESTINATIONNAME}"} for {$ACTIVEMQ.QUEUE.TIME:"{#JMXDESTINATIONNAME}"} |<p>Queue size is higer than treshold. It may indicate performance problems.</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},QueueSize].min({$ACTIVEMQ.QUEUE.TIME:"{#JMXDESTINATIONNAME}"})}>{$ACTIVEMQ.QUEUE.WARN:"{#JMXDESTINATIONNAME}"} and {$ACTIVEMQ.QUEUE.ENABLED:"{#JMXDESTINATIONNAME}"}=1` |AVERAGE | |
-|{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Expired messages count higer than {$ACTIVEMQ.EXPIRIED.WARN:"{#JMXDESTINATIONNAME}"} |<p>This metric represents the number of messages that expired before they could be delivered. If you expect all messages to be delivered and acknowledged within a certain amount of time, you can set an expiration for each message, and investigate if your ExpiredCount metric rises above zero.</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},ExpiredCount].last()}>{$ACTIVEMQ.EXPIRIED.WARN:"{#JMXDESTINATIONNAME}"}` |AVERAGE | |
+|{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Message enqueue rate is higher than dequeue rate for {$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXDESTINATIONNAME}"} |<p>Enqueue rate is higher than dequeue rate. It may indicate performance problems.</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},EnqueueCount].avg({$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXDESTINATIONNAME}"})}>{Apache ActiveMQ by JMX:jmx[{#JMXOBJ},DequeueCount].avg({$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXDESTINATIONNAME}"})}` |AVERAGE | |
+|{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Queue size higher than {$ACTIVEMQ.QUEUE.WARN:"{#JMXDESTINATIONNAME}"} for {$ACTIVEMQ.QUEUE.TIME:"{#JMXDESTINATIONNAME}"} |<p>Queue size is higher than threshold. It may indicate performance problems.</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},QueueSize].min({$ACTIVEMQ.QUEUE.TIME:"{#JMXDESTINATIONNAME}"})}>{$ACTIVEMQ.QUEUE.WARN:"{#JMXDESTINATIONNAME}"} and {$ACTIVEMQ.QUEUE.ENABLED:"{#JMXDESTINATIONNAME}"}=1` |AVERAGE | |
+|{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Expired messages count higher than {$ACTIVEMQ.EXPIRIED.WARN:"{#JMXDESTINATIONNAME}"} |<p>This metric represents the number of messages that expired before they could be delivered. If you expect all messages to be delivered and acknowledged within a certain amount of time, you can set an expiration for each message, and investigate if your ExpiredCount metric rises above zero.</p> |`{TEMPLATE_NAME:jmx[{#JMXOBJ},ExpiredCount].last()}>{$ACTIVEMQ.EXPIRIED.WARN:"{#JMXDESTINATIONNAME}"}` |AVERAGE | |
## Feedback
diff --git a/templates/app/activemq_jmx/template_app_activemq_jmx.yaml b/templates/app/activemq_jmx/template_app_activemq_jmx.yaml
index 5adbd2b5971..2ea6ea226c4 100644
--- a/templates/app/activemq_jmx/template_app_activemq_jmx.yaml
+++ b/templates/app/activemq_jmx/template_app_activemq_jmx.yaml
@@ -260,9 +260,9 @@ zabbix_export:
trigger_prototypes:
-
expression: '{avg({$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXBROKERNAME}"})}>{avg({$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXBROKERNAME}"})}'
- name: 'Broker {#JMXBROKERNAME}: Message enqueue rate is higer than dequeue rate for {$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXBROKERNAME}"}'
+ name: 'Broker {#JMXBROKERNAME}: Message enqueue rate is higher than dequeue rate for {$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXBROKERNAME}"}'
priority: AVERAGE
- description: 'Enqueue rate is higer than dequeue rate. It may indicate performance problems.'
+ description: 'Enqueue rate is higher than dequeue rate. It may indicate performance problems.'
-
name: 'Broker {#JMXBROKERNAME}: Producers count total'
type: JMX
@@ -500,7 +500,7 @@ zabbix_export:
trigger_prototypes:
-
expression: '{last()}>{$ACTIVEMQ.EXPIRIED.WARN:"{#JMXDESTINATIONNAME}"}'
- name: '{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Expired messages count higer than {$ACTIVEMQ.EXPIRIED.WARN:"{#JMXDESTINATIONNAME}"}'
+ name: '{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Expired messages count higher than {$ACTIVEMQ.EXPIRIED.WARN:"{#JMXDESTINATIONNAME}"}'
priority: AVERAGE
description: 'This metric represents the number of messages that expired before they could be delivered. If you expect all messages to be delivered and acknowledged within a certain amount of time, you can set an expiration for each message, and investigate if your ExpiredCount metric rises above zero.'
-
@@ -552,9 +552,9 @@ zabbix_export:
trigger_prototypes:
-
expression: '{min({$ACTIVEMQ.QUEUE.TIME:"{#JMXDESTINATIONNAME}"})}>{$ACTIVEMQ.QUEUE.WARN:"{#JMXDESTINATIONNAME}"} and {$ACTIVEMQ.QUEUE.ENABLED:"{#JMXDESTINATIONNAME}"}=1'
- name: '{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Queue size higer than {$ACTIVEMQ.QUEUE.WARN:"{#JMXDESTINATIONNAME}"} for {$ACTIVEMQ.QUEUE.TIME:"{#JMXDESTINATIONNAME}"}'
+ name: '{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Queue size higher than {$ACTIVEMQ.QUEUE.WARN:"{#JMXDESTINATIONNAME}"} for {$ACTIVEMQ.QUEUE.TIME:"{#JMXDESTINATIONNAME}"}'
priority: AVERAGE
- description: 'Queue size is higer than treshold. It may indicate performance problems.'
+ description: 'Queue size is higher than threshold. It may indicate performance problems.'
trigger_prototypes:
-
expression: '{Apache ActiveMQ by JMX:jmx[{#JMXOBJ},ConsumerCount].max({$ACTIVEMQ.DESTINATION.CONSUMERS.MIN.TIME:"{#JMXDESTINATIONNAME}"})}<{$ACTIVEMQ.DESTINATION.CONSUMERS.MIN.HIGH:"{#JMXDESTINATIONNAME}"} and {Apache ActiveMQ by JMX:jmx["org.apache.activemq:type=Broker,brokerName={#JMXBROKERNAME}",{$ACTIVEMQ.TOTAL.CONSUMERS.COUNT: "{#JMXDESTINATIONNAME}"}].last()}>{$ACTIVEMQ.BROKER.CONSUMERS.MIN.HIGH:"{#JMXBROKERNAME}"}'
@@ -565,9 +565,9 @@ zabbix_export:
manual_close: 'YES'
-
expression: '{Apache ActiveMQ by JMX:jmx[{#JMXOBJ},EnqueueCount].avg({$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXDESTINATIONNAME}"})}>{Apache ActiveMQ by JMX:jmx[{#JMXOBJ},DequeueCount].avg({$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXDESTINATIONNAME}"})}'
- name: '{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Message enqueue rate is higer than dequeue rate for {$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXDESTINATIONNAME}"}'
+ name: '{#JMXBROKERNAME}: {#JMXDESTINATIONTYPE} {#JMXDESTINATIONNAME}: Message enqueue rate is higher than dequeue rate for {$ACTIVEMQ.MSG.RATE.WARN.TIME:"{#JMXDESTINATIONNAME}"}'
priority: AVERAGE
- description: 'Enqueue rate is higer than dequeue rate. It may indicate performance problems.'
+ description: 'Enqueue rate is higher than dequeue rate. It may indicate performance problems.'
-
expression: '{Apache ActiveMQ by JMX:jmx[{#JMXOBJ},ProducerCount].max({$ACTIVEMQ.DESTINATION.PRODUCERS.MIN.TIME:"{#JMXDESTINATIONNAME}"})}<{$ACTIVEMQ.DESTINATION.PRODUCERS.MIN.HIGH:"{#JMXDESTINATIONNAME}"} and {Apache ActiveMQ by JMX:jmx["org.apache.activemq:type=Broker,brokerName={#JMXBROKERNAME}",{$ACTIVEMQ.TOTAL.PRODUCERS.COUNT: "{#JMXDESTINATIONNAME}"}].last()}>{$ACTIVEMQ.BROKER.PRODUCERS.MIN.HIGH:"{#JMXBROKERNAME}"}'
recovery_mode: RECOVERY_EXPRESSION
diff --git a/templates/app/generic_java_jmx/README.md b/templates/app/generic_java_jmx/README.md
index a8c83ab7cf9..db07e4efb4b 100644
--- a/templates/app/generic_java_jmx/README.md
+++ b/templates/app/generic_java_jmx/README.md
@@ -26,7 +26,7 @@ No specific Zabbix configuration is required.
|{$JMX.FILE.DESCRIPTORS.TIME} |<p>The time during which the file descriptors count may exceed the threshold.</p> |`3m` |
|{$JMX.HEAP.MEM.USAGE.MAX} |<p>A threshold in percent for Heap memory utilization trigger.</p> |`85` |
|{$JMX.HEAP.MEM.USAGE.TIME} |<p>The time during which the Heap memory utilization may exceed the threshold.</p> |`10m` |
-|{$JMX.MP.USAGE.MAX} |<p>A threshold in percent for memory pools utilization trigger. Use a context to change the treshold for a specific pool.</p> |`85` |
+|{$JMX.MP.USAGE.MAX} |<p>A threshold in percent for memory pools utilization trigger. Use a context to change the threshold for a specific pool.</p> |`85` |
|{$JMX.MP.USAGE.TIME} |<p>The time during which the memory pools utilization may exceed the threshold.</p> |`10m` |
|{$JMX.NONHEAP.MEM.USAGE.MAX} |<p>A threshold in percent for Non-heap memory utilization trigger.</p> |`85` |
|{$JMX.NONHEAP.MEM.USAGE.TIME} |<p>The time during which the Non-heap memory utilization may exceed the threshold.</p> |`10m` |
diff --git a/templates/app/generic_java_jmx/template_app_generic_java_jmx.yaml b/templates/app/generic_java_jmx/template_app_generic_java_jmx.yaml
index f32412bcd95..d7d5b088a8b 100644
--- a/templates/app/generic_java_jmx/template_app_generic_java_jmx.yaml
+++ b/templates/app/generic_java_jmx/template_app_generic_java_jmx.yaml
@@ -879,7 +879,7 @@ zabbix_export:
-
macro: '{$JMX.MP.USAGE.MAX}'
value: '85'
- description: 'A threshold in percent for memory pools utilization trigger. Use a context to change the treshold for a specific pool.'
+ description: 'A threshold in percent for memory pools utilization trigger. Use a context to change the threshold for a specific pool.'
-
macro: '{$JMX.MP.USAGE.TIME}'
value: 10m
diff --git a/templates/app/gitlab_http/README.md b/templates/app/gitlab_http/README.md
index 7f739da3d7a..8429f309d9d 100644
--- a/templates/app/gitlab_http/README.md
+++ b/templates/app/gitlab_http/README.md
@@ -130,7 +130,7 @@ There are no template links in this template.
|GitLab: Failed to fetch info data (or no data for 30m) |<p>Zabbix has not received data for metrics for the last 30 minutes</p> |`{TEMPLATE_NAME:gitlab.ruby.threads_running.nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- GitLab: Liveness check was failed</p> |
|GitLab: Current number of open files is too high (over {$GITLAB.OPEN.FDS.MAX.WARN}% for 5m) |<p>-</p> |`{TEMPLATE_NAME:gitlab.ruby.file_descriptors.max.min(5m)}/{GitLab by HTTP:gitlab.ruby.process_max_fds.last()}*100>{$GITLAB.OPEN.FDS.MAX.WARN}` |WARNING | |
|GitLab: Too many HTTP requests failures (over {$GITLAB.HTTP.FAIL.MAX.WARN} for 5m)' |<p>"Too many requests failed on GitLab instance with 5xx HTTP code"</p> |`{TEMPLATE_NAME:gitlab.http.requests.5xx.rate.min(5m)}>{$GITLAB.HTTP.FAIL.MAX.WARN}` |WARNING | |
-|GitLab: Puma instance thread utilization is too hight (over {$GITLAB.PUMA.UTILIZATION.MAX.WARN}% for 5m) |<p>-</p> |`{TEMPLATE_NAME:gitlab.puma.active_connections[{#SINGLETON}].min(5m)}/{GitLab by HTTP:gitlab.puma.max_threads[{#SINGLETON}].last()}*100>{$GITLAB.PUMA.UTILIZATION.MAX.WARN}` |WARNING | |
+|GitLab: Puma instance thread utilization is too high (over {$GITLAB.PUMA.UTILIZATION.MAX.WARN}% for 5m) |<p>-</p> |`{TEMPLATE_NAME:gitlab.puma.active_connections[{#SINGLETON}].min(5m)}/{GitLab by HTTP:gitlab.puma.max_threads[{#SINGLETON}].last()}*100>{$GITLAB.PUMA.UTILIZATION.MAX.WARN}` |WARNING | |
|GitLab: Puma is queueing requests (over {$GITLAB.PUMA.QUEUE.MAX.WARN}% for 15m) |<p>-</p> |`{TEMPLATE_NAME:gitlab.puma.queued_connections[{#SINGLETON}].min(15m)}>{$GITLAB.PUMA.QUEUE.MAX.WARN}` |WARNING | |
|GitLab: Unicorn worker utilization is too high (over {$GITLAB.UNICORN.UTILIZATION.MAX.WARN}% for 5m) |<p>-</p> |`{TEMPLATE_NAME:gitlab.unicorn.active_connections[{#SINGLETON}].min(5m)}/{GitLab by HTTP:gitlab.unicorn.unicorn_workers[{#SINGLETON}].last()}*100>{$GITLAB.UNICORN.UTILIZATION.MAX.WARN}` |WARNING | |
|GitLab: Unicorn is queueing requests (over {$GITLAB.UNICORN.QUEUE.MAX.WARN}% for 5m) |<p>-</p> |`{TEMPLATE_NAME:gitlab.unicorn.queued_connections[{#SINGLETON}].min(5m)}>{$GITLAB.UNICORN.QUEUE.MAX.WARN}` |WARNING | |
diff --git a/templates/app/gitlab_http/template_app_gitlab_http.yaml b/templates/app/gitlab_http/template_app_gitlab_http.yaml
index fc5a1ff2bfa..4b62558671a 100644
--- a/templates/app/gitlab_http/template_app_gitlab_http.yaml
+++ b/templates/app/gitlab_http/template_app_gitlab_http.yaml
@@ -1164,7 +1164,7 @@ zabbix_export:
trigger_prototypes:
-
expression: '{GitLab by HTTP:gitlab.puma.active_connections[{#SINGLETON}].min(5m)}/{GitLab by HTTP:gitlab.puma.max_threads[{#SINGLETON}].last()}*100>{$GITLAB.PUMA.UTILIZATION.MAX.WARN}'
- name: 'GitLab: Puma instance thread utilization is too hight (over {$GITLAB.PUMA.UTILIZATION.MAX.WARN}% for 5m)'
+ name: 'GitLab: Puma instance thread utilization is too high (over {$GITLAB.PUMA.UTILIZATION.MAX.WARN}% for 5m)'
priority: WARNING
url: '{$GITLAB.URL}:{$GITLAB.PORT}/-/metrics'
preprocessing:
diff --git a/templates/app/jenkins/README.md b/templates/app/jenkins/README.md
index 575f9aa2800..7733db22955 100644
--- a/templates/app/jenkins/README.md
+++ b/templates/app/jenkins/README.md
@@ -89,8 +89,8 @@ There are no template links in this template.
|Jenkins |Jenkins: Job building duration, median |<p>The amount of time which jobs spend building.</p> |DEPENDENT |jenkins.job.building.duration.p50<p>**Preprocessing**:</p><p>- JSONPATH: `$.timers.['jenkins.job.building.duration'].p50`</p> |
|Jenkins |Jenkins: Job buildable, m1 rate |<p>The rate at which jobs in the build queue enter the buildable state.</p> |DEPENDENT |jenkins.job.buildable.m1.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.timers.['jenkins.job.buildable.duration'].m1_rate`</p> |
|Jenkins |Jenkins: Job buildable, m5 rate |<p>The rate at which jobs in the build queue enter the buildable state.</p> |DEPENDENT |jenkins.job.buildable.m5.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.timers.['jenkins.job.buildable.duration'].m5_rate`</p> |
-|Jenkins |Jenkins: Job buildable duration, p95 |<p>The amount of time which jobs spend inthe buildable state.</p> |DEPENDENT |jenkins.job.buildable.duration.p95<p>**Preprocessing**:</p><p>- JSONPATH: `$.timers.['jenkins.job.buildable.duration'].p95`</p> |
-|Jenkins |Jenkins: Job buildable duration, median |<p>The amount of time which jobs spend inthe buildable state.</p> |DEPENDENT |jenkins.job.buildable.duration.p50<p>**Preprocessing**:</p><p>- JSONPATH: `$.timers.['jenkins.job.buildable.duration'].p50`</p> |
+|Jenkins |Jenkins: Job buildable duration, p95 |<p>The amount of time which jobs spend in the buildable state.</p> |DEPENDENT |jenkins.job.buildable.duration.p95<p>**Preprocessing**:</p><p>- JSONPATH: `$.timers.['jenkins.job.buildable.duration'].p95`</p> |
+|Jenkins |Jenkins: Job buildable duration, median |<p>The amount of time which jobs spend in the buildable state.</p> |DEPENDENT |jenkins.job.buildable.duration.p50<p>**Preprocessing**:</p><p>- JSONPATH: `$.timers.['jenkins.job.buildable.duration'].p50`</p> |
|Jenkins |Jenkins: Job queuing, m1 rate |<p>The rate at which jobs are queued.</p> |DEPENDENT |jenkins.job.queuing.m1.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.timers.['jenkins.job.queuing.duration'].m1_rate`</p> |
|Jenkins |Jenkins: Job queuing, m5 rate |<p>The rate at which jobs are queued.</p> |DEPENDENT |jenkins.job.queuing.m5.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.timers.['jenkins.job.queuing.duration'].m5_rate`</p> |
|Jenkins |Jenkins: Job queuing duration, p95 |<p>The total time which jobs spend in the build queue.</p> |DEPENDENT |jenkins.job.queuing.duration.p95<p>**Preprocessing**:</p><p>- JSONPATH: `$.timers.['jenkins.job.queuing.duration'].p95`</p> |
diff --git a/templates/app/jenkins/template_app_jenkins.yaml b/templates/app/jenkins/template_app_jenkins.yaml
index 2e0839d1130..ff614ce941f 100644
--- a/templates/app/jenkins/template_app_jenkins.yaml
+++ b/templates/app/jenkins/template_app_jenkins.yaml
@@ -611,7 +611,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: s
- description: 'The amount of time which jobs spend inthe buildable state.'
+ description: 'The amount of time which jobs spend in the buildable state.'
applications:
-
name: Jenkins
@@ -630,7 +630,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: s
- description: 'The amount of time which jobs spend inthe buildable state.'
+ description: 'The amount of time which jobs spend in the buildable state.'
applications:
-
name: Jenkins
diff --git a/templates/app/memcached/README.md b/templates/app/memcached/README.md
index fe91816072c..4c7c1bd68e2 100644
--- a/templates/app/memcached/README.md
+++ b/templates/app/memcached/README.md
@@ -83,7 +83,7 @@ There are no template links in this template.
|Memcached: Service is down |<p>-</p> |`{TEMPLATE_NAME:memcached.ping["{$MEMCACHED.CONN.URI}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
|Memcached: Failed to fetch info data (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes</p> |`{TEMPLATE_NAME:memcached.cpu.sys.nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- Memcached: Service is down</p> |
|Memcached: Too many queued connections (over {$MEMCACHED.CONN.QUEUED.MAX.WARN} in 5m) |<p>The max number of connections is reachedand and a new connection had to wait in the queue as a result.</p> |`{TEMPLATE_NAME:memcached.connections.queued.rate.min(5m)}>{$MEMCACHED.CONN.QUEUED.MAX.WARN}` |WARNING | |
-|Memcached: Too many throttled connections (over {$MEMCACHED.CONN.THROTTLED.MAX.WARN} in 5m) |<p>Number of times a client connection was throttled is too hight.</p><p>When sending GETs in batch mode and the connection contains too many requests (limited by -R parameter) the connection might be throttled to prevent starvation.</p> |`{TEMPLATE_NAME:memcached.connections.throttled.rate.min(5m)}>{$MEMCACHED.CONN.THROTTLED.MAX.WARN}` |WARNING | |
+|Memcached: Too many throttled connections (over {$MEMCACHED.CONN.THROTTLED.MAX.WARN} in 5m) |<p>Number of times a client connection was throttled is too high.</p><p>When sending GETs in batch mode and the connection contains too many requests (limited by -R parameter) the connection might be throttled to prevent starvation.</p> |`{TEMPLATE_NAME:memcached.connections.throttled.rate.min(5m)}>{$MEMCACHED.CONN.THROTTLED.MAX.WARN}` |WARNING | |
|Memcached: Total number of connected clients is too high (over {$MEMCACHED.CONN.PRC.MAX.WARN}% in 5m) |<p>When the number of connections reaches the value of the "max_connections" parameter, new connections will be rejected.</p> |`{TEMPLATE_NAME:memcached.connections.current.min(5m)}/{Memcached:memcached.connections.max.last()}*100>{$MEMCACHED.CONN.PRC.MAX.WARN}` |WARNING | |
|Memcached: Version has changed (new version: {ITEM.VALUE}) |<p>Memcached version has changed. Ack to close.</p> |`{TEMPLATE_NAME:memcached.version.diff()}=1 and {TEMPLATE_NAME:memcached.version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
|Memcached: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:memcached.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
diff --git a/templates/app/memcached/template_app_memcached.yaml b/templates/app/memcached/template_app_memcached.yaml
index ab20380a3a1..d5c366aacb3 100644
--- a/templates/app/memcached/template_app_memcached.yaml
+++ b/templates/app/memcached/template_app_memcached.yaml
@@ -244,7 +244,7 @@ zabbix_export:
name: 'Memcached: Too many throttled connections (over {$MEMCACHED.CONN.THROTTLED.MAX.WARN} in 5m)'
priority: WARNING
description: |
- Number of times a client connection was throttled is too hight.
+ Number of times a client connection was throttled is too high.
When sending GETs in batch mode and the connection contains too many requests (limited by -R parameter) the connection might be throttled to prevent starvation.
-
name: 'Memcached: CPU sys'
diff --git a/templates/app/squid_snmp/README.md b/templates/app/squid_snmp/README.md
index 95284bef1be..c9a89579aaf 100644
--- a/templates/app/squid_snmp/README.md
+++ b/templates/app/squid_snmp/README.md
@@ -38,9 +38,9 @@ No specific Zabbix configuration is required.
|Name|Description|Default|
|----|-----------|-------|
-|{$SQUID.FILE.DESC.WARN.MIN} |<p>The threshold for minimum number of avaliable file descriptors</p> |`100` |
+|{$SQUID.FILE.DESC.WARN.MIN} |<p>The threshold for minimum number of available file descriptors</p> |`100` |
|{$SQUID.HTTP.PORT} |<p>http_port configured in squid.conf (Default: 3128)</p> |`3128` |
-|{$SQUID.PAGE.FAULT.WARN} |<p>The threshold for sys page faults rate in percent of recieved HTTP requests</p> |`90` |
+|{$SQUID.PAGE.FAULT.WARN} |<p>The threshold for sys page faults rate in percent of received HTTP requests</p> |`90` |
|{$SQUID.SNMP.COMMUNITY} |<p>SNMP community allowed by ACL in squid.conf</p> |`public` |
|{$SQUID.SNMP.PORT} |<p>snmp_port configured in squid.conf (Default: 3401)</p> |`3401` |
@@ -120,7 +120,7 @@ There are no template links in this template.
|Squid: Swap usage is more than low watermark (>{ITEM.VALUE2}%) |<p>-</p> |`{TEMPLATE_NAME:squid[cacheCurrentSwapSize].last()}>{Squid SNMP:squid[cacheSwapLowWM].last()}*{Squid SNMP:squid[cacheSwapMaxSize].last()}/100` |WARNING | |
|Squid: Swap usage is more than high watermark (>{ITEM.VALUE2}%) |<p>-</p> |`{TEMPLATE_NAME:squid[cacheCurrentSwapSize].last()}>{Squid SNMP:squid[cacheSwapHighWM].last()}*{Squid SNMP:squid[cacheSwapMaxSize].last()}/100` |HIGH | |
|Squid: Squid is running out of file descriptors (<{$SQUID.FILE.DESC.WARN.MIN}) |<p>-</p> |`{TEMPLATE_NAME:squid[cacheCurrentUnusedFDescrCnt].last()}<{$SQUID.FILE.DESC.WARN.MIN}` |WARNING | |
-|Squid: High sys page faults rate (>{$SQUID.PAGE.FAULT.WARN}% of recieved HTTP requests) |<p>-</p> |`{TEMPLATE_NAME:squid[cacheSysPageFaults].avg(5m)}>{Squid SNMP:squid[cacheProtoClientHttpRequests].avg(5m)}/100*{$SQUID.PAGE.FAULT.WARN}` |WARNING | |
+|Squid: High sys page faults rate (>{$SQUID.PAGE.FAULT.WARN}% of received HTTP requests) |<p>-</p> |`{TEMPLATE_NAME:squid[cacheSysPageFaults].avg(5m)}>{Squid SNMP:squid[cacheProtoClientHttpRequests].avg(5m)}/100*{$SQUID.PAGE.FAULT.WARN}` |WARNING | |
## Feedback
diff --git a/templates/app/squid_snmp/template_app_squid_snmp.yaml b/templates/app/squid_snmp/template_app_squid_snmp.yaml
index 49270d5c466..67941d9beb8 100644
--- a/templates/app/squid_snmp/template_app_squid_snmp.yaml
+++ b/templates/app/squid_snmp/template_app_squid_snmp.yaml
@@ -881,7 +881,7 @@ zabbix_export:
-
macro: '{$SQUID.FILE.DESC.WARN.MIN}'
value: '100'
- description: 'The threshold for minimum number of avaliable file descriptors'
+ description: 'The threshold for minimum number of available file descriptors'
-
macro: '{$SQUID.HTTP.PORT}'
value: '3128'
@@ -889,7 +889,7 @@ zabbix_export:
-
macro: '{$SQUID.PAGE.FAULT.WARN}'
value: '90'
- description: 'The threshold for sys page faults rate in percent of recieved HTTP requests'
+ description: 'The threshold for sys page faults rate in percent of received HTTP requests'
-
macro: '{$SQUID.SNMP.COMMUNITY}'
value: public
@@ -911,7 +911,7 @@ zabbix_export:
triggers:
-
expression: '{Squid SNMP:squid[cacheSysPageFaults].avg(5m)}>{Squid SNMP:squid[cacheProtoClientHttpRequests].avg(5m)}/100*{$SQUID.PAGE.FAULT.WARN}'
- name: 'Squid: High sys page faults rate (>{$SQUID.PAGE.FAULT.WARN}% of recieved HTTP requests)'
+ name: 'Squid: High sys page faults rate (>{$SQUID.PAGE.FAULT.WARN}% of received HTTP requests)'
priority: WARNING
-
expression: '{Squid SNMP:squid[cacheCurrentSwapSize].last()}>{Squid SNMP:squid[cacheSwapHighWM].last()}*{Squid SNMP:squid[cacheSwapMaxSize].last()}/100'
diff --git a/templates/app/tomcat_jmx/README.md b/templates/app/tomcat_jmx/README.md
index 757c2b8c741..2337c8b5fb5 100644
--- a/templates/app/tomcat_jmx/README.md
+++ b/templates/app/tomcat_jmx/README.md
@@ -20,7 +20,7 @@ Metrics are collected by JMX.
1. Enable and configure JMX access to Apache Tomcat.
See documentation for [instructions](https://tomcat.apache.org/tomcat-10.0-doc/monitoring.html#Enabling_JMX_Remote) (chose your version).
-2. If your Tomcat installation require authentification for JMX, set values in host macros {$TOMCAT.USERNAME} and {$TOMCAT.PASSWORD}.
+2. If your Tomcat installation require authentication for JMX, set values in host macros {$TOMCAT.USERNAME} and {$TOMCAT.PASSWORD}.
3. You can set custom macro values and add macros with context for specific metrics following macro description.
diff --git a/templates/app/zookeeper_http/README.md b/templates/app/zookeeper_http/README.md
index e9570c3c0b0..89a0472fa93 100644
--- a/templates/app/zookeeper_http/README.md
+++ b/templates/app/zookeeper_http/README.md
@@ -19,7 +19,7 @@ This template was tested on:
This template works with standalone and cluster instances. Metrics are collected from each Zookeper node by requests to [AdminServer](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_adminserver).
By default AdminServer is enabled and listens on port 8080.
-You can еnable or configure AdminServer parameters according [official documentations](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_adminserver_config).
+You can enable or configure AdminServer parameters according [official documentations](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_adminserver_config).
Don't forget to change macros {$ZOOKEEPER.COMMAND_URL}, {$ZOOKEEPER.PORT}, {$ZOOKEEPER.SCHEME}.
@@ -32,7 +32,7 @@ No specific Zabbix configuration is required.
|Name|Description|Default|
|----|-----------|-------|
|{$ZOOKEEPER.COMMAND_URL} |<p>The URL for listing and issuing commands relative to the root URL (admin.commandURL).</p> |`commands` |
-|{$ZOOKEEPER.FILE_DESCRIPTORS.MAX.WARN} |<p>Maximum percentage of file descriptors usage alert treshold (for trigger expression).</p> |`85` |
+|{$ZOOKEEPER.FILE_DESCRIPTORS.MAX.WARN} |<p>Maximum percentage of file descriptors usage alert threshold (for trigger expression).</p> |`85` |
|{$ZOOKEEPER.OUTSTANDING_REQ.MAX.WARN} |<p>Maximum number of outstanding requests (for trigger expression).</p> |`10` |
|{$ZOOKEEPER.PENDING_SYNCS.MAX.WARN} |<p>Maximum number of pending syncs from the followers (for trigger expression).</p> |`10` |
|{$ZOOKEEPER.PORT} |<p>The port the embedded Jetty server listens on (admin.serverPort).</p> |`8080` |
diff --git a/templates/app/zookeeper_http/template_app_zookeeper_http.yaml b/templates/app/zookeeper_http/template_app_zookeeper_http.yaml
index 56108ebe5f9..093c2a4d9da 100644
--- a/templates/app/zookeeper_http/template_app_zookeeper_http.yaml
+++ b/templates/app/zookeeper_http/template_app_zookeeper_http.yaml
@@ -1098,7 +1098,7 @@ zabbix_export:
-
macro: '{$ZOOKEEPER.FILE_DESCRIPTORS.MAX.WARN}'
value: '85'
- description: 'Maximum percentage of file descriptors usage alert treshold (for trigger expression).'
+ description: 'Maximum percentage of file descriptors usage alert threshold (for trigger expression).'
-
macro: '{$ZOOKEEPER.OUTSTANDING_REQ.MAX.WARN}'
value: '10'
diff --git a/templates/db/clickhouse_http/README.md b/templates/db/clickhouse_http/README.md
index bfb932fc575..5ee34e5b384 100644
--- a/templates/db/clickhouse_http/README.md
+++ b/templates/db/clickhouse_http/README.md
@@ -115,8 +115,8 @@ There are no template links in this template.
|ClickHouse |ClickHouse: Resident memory |<p>"Maximum number of bytes in physically resident data pages mapped by the allocator, </p><p>comprising all pages dedicated to allocator metadata, pages backing active allocations, </p><p>and unused dirty pages."</p> |DEPENDENT |clickhouse.jemalloc.resident<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "jemalloc.resident")].value.first()`</p> |
|ClickHouse |ClickHouse: Mapped memory |<p>"Total number of bytes in active extents mapped by the allocator."</p> |DEPENDENT |clickhouse.jemalloc.mapped<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "jemalloc.mapped")].value.first()`</p> |
|ClickHouse |ClickHouse: Memory used for queries |<p>"Total amount of memory (bytes) allocated in currently executing queries."</p> |DEPENDENT |clickhouse.memory.tracking<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTracking")].value.first()`</p> |
-|ClickHouse |ClickHouse: Memory used for background merges |<p>"Total amount of memory (bytes) allocated in background processing pool (that is dedicated for backround merges, mutations and fetches).</p><p> Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks."</p> |DEPENDENT |clickhouse.memory.tracking.background<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTrackingInBackgroundProcessingPool")].value.first()`</p> |
-|ClickHouse |ClickHouse: Memory used for backround moves |<p>"Total amount of memory (bytes) allocated in background processing pool (that is dedicated for backround moves). Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa.</p><p> This happens naturally due to caches for tables indexes and doesn't indicate memory leaks."</p> |DEPENDENT |clickhouse.memory.tracking.background.moves<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTrackingInBackgroundMoveProcessingPool")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
+|ClickHouse |ClickHouse: Memory used for background merges |<p>"Total amount of memory (bytes) allocated in background processing pool (that is dedicated for background merges, mutations and fetches).</p><p> Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks."</p> |DEPENDENT |clickhouse.memory.tracking.background<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTrackingInBackgroundProcessingPool")].value.first()`</p> |
+|ClickHouse |ClickHouse: Memory used for background moves |<p>"Total amount of memory (bytes) allocated in background processing pool (that is dedicated for background moves). Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa.</p><p> This happens naturally due to caches for tables indexes and doesn't indicate memory leaks."</p> |DEPENDENT |clickhouse.memory.tracking.background.moves<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTrackingInBackgroundMoveProcessingPool")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p> |
|ClickHouse |ClickHouse: Memory used for background schedule pool |<p>"Total amount of memory (bytes) allocated in background schedule pool (that is dedicated for bookkeeping tasks of Replicated tables)."</p> |DEPENDENT |clickhouse.memory.tracking.schedule.pool<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTrackingInBackgroundSchedulePool")].value.first()`</p> |
|ClickHouse |ClickHouse: Memory used for merges |<p>"Total amount of memory (bytes) allocated for background merges. Included in MemoryTrackingInBackgroundProcessingPool. Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. </p><p>This happens naturally due to caches for tables indexes and doesn't indicate memory leaks."</p> |DEPENDENT |clickhouse.memory.tracking.merges<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "MemoryTrackingForMerges")].value.first()`</p> |
|ClickHouse |ClickHouse: Current distributed files to insert |<p>Number of pending files to process for asynchronous insertion into Distributed tables. Number of files for every shard is summed.</p> |DEPENDENT |clickhouse.distributed.files<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "DistributedFilesToInsert")].value.first()`</p> |
@@ -148,9 +148,9 @@ There are no template links in this template.
|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper watches |<p>Number of watches (e.g., event subscriptions) in ZooKeeperr.</p> |DEPENDENT |clickhouse.zookeper.watch<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "ZooKeeperWatch")].value.first()`</p> |
|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper requests |<p>Number of requests to ZooKeeper in progress.</p> |DEPENDENT |clickhouse.zookeper.request<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.metric == "ZooKeeperRequest")].value.first()`</p> |
|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper wait time |<p>Time spent in waiting for ZooKeeper operations.</p> |DEPENDENT |clickhouse.zookeper.wait.time<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ZooKeeperWaitMicroseconds")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- MULTIPLIER: `0.000001`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper exeptions per second |<p>Count of ZooKeeper exceptions that does not belong to user/hardware exceptions.</p> |DEPENDENT |clickhouse.zookeper.exeptions.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ZooKeeperOtherExceptions")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper hardware exeptions per second |<p>Count of ZooKeeper exceptions caused by session moved/expired, connection loss, marshalling error, operation timed out and invalid zhandle state.</p> |DEPENDENT |clickhouse.zookeper.hw_exeptions.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ZooKeeperHardwareExceptions")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
-|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper user exeptions per second |<p>Count of ZooKeeper exceptions caused by no znodes, bad version, node exists, node empty and no children for ephemeral.</p> |DEPENDENT |clickhouse.zookeper.user_exeptions.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ZooKeeperUserExceptions")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper exceptions per second |<p>Count of ZooKeeper exceptions that does not belong to user/hardware exceptions.</p> |DEPENDENT |clickhouse.zookeper.exceptions.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ZooKeeperOtherExceptions")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper hardware exceptions per second |<p>Count of ZooKeeper exceptions caused by session moved/expired, connection loss, marshalling error, operation timed out and invalid zhandle state.</p> |DEPENDENT |clickhouse.zookeper.hw_exeptions.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ZooKeeperHardwareExceptions")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
+|ClickHouse_ZooKeeper |ClickHouse: ZooKeeper user exceptions per second |<p>Count of ZooKeeper exceptions caused by no znodes, bad version, node exists, node empty and no children for ephemeral.</p> |DEPENDENT |clickhouse.zookeper.user_exeptions.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$[?(@.event == "ZooKeeperUserExceptions")].value.first()`</p><p>⛔️ON_FAIL: `CUSTOM_VALUE -> 0`</p><p>- CHANGE_PER_SECOND |
|Zabbix_raw_items |ClickHouse: Get system.events |<p>Get information about the number of events that have occurred in the system.</p> |HTTP_AGENT |clickhouse.system.events<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p> |
|Zabbix_raw_items |ClickHouse: Get system.metrics |<p>Get metrics which can be calculated instantly, or have a current value format JSONEachRow</p> |HTTP_AGENT |clickhouse.system.metrics<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p> |
|Zabbix_raw_items |ClickHouse: Get system.asynchronous_metrics |<p>Get metrics that are calculated periodically in the background</p> |HTTP_AGENT |clickhouse.system.asynchronous_metrics<p>**Preprocessing**:</p><p>- JSONPATH: `$.data`</p> |
@@ -164,8 +164,8 @@ There are no template links in this template.
|Name|Description|Expression|Severity|Dependencies and additional info|
|----|-----------|----|----|----|
|ClickHouse: There are queries running more than {$CLICKHOUSE.QUERY_TIME.MAX.WARN} seconds |<p>-</p> |`{TEMPLATE_NAME:clickhouse.process.elapsed.last()}>{$CLICKHOUSE.QUERY_TIME.MAX.WARN}` |AVERAGE |<p>Manual close: YES</p> |
-|ClickHouse: Port {$CLICKHOUSE.PORT} is unavaliable |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service[{$CLICKHOUSE.SCHEME},"{HOST.CONN}","{$CLICKHOUSE.PORT}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
-|ClickHouse: Service is down |<p>-</p> |`{TEMPLATE_NAME:clickhouse.ping.last()}=0 or {ClickHouse by HTTP:net.tcp.service[{$CLICKHOUSE.SCHEME},"{HOST.CONN}","{$CLICKHOUSE.PORT}"].last()} = 0` |AVERAGE |<p>Manual close: YES</p><p>**Depends on**:</p><p>- ClickHouse: Port {$CLICKHOUSE.PORT} is unavaliable</p> |
+|ClickHouse: Port {$CLICKHOUSE.PORT} is unavailable |<p>-</p> |`{TEMPLATE_NAME:net.tcp.service[{$CLICKHOUSE.SCHEME},"{HOST.CONN}","{$CLICKHOUSE.PORT}"].last()}=0` |AVERAGE |<p>Manual close: YES</p> |
+|ClickHouse: Service is down |<p>-</p> |`{TEMPLATE_NAME:clickhouse.ping.last()}=0 or {ClickHouse by HTTP:net.tcp.service[{$CLICKHOUSE.SCHEME},"{HOST.CONN}","{$CLICKHOUSE.PORT}"].last()} = 0` |AVERAGE |<p>Manual close: YES</p><p>**Depends on**:</p><p>- ClickHouse: Port {$CLICKHOUSE.PORT} is unavailable</p> |
|ClickHouse: Version has changed (new version: {ITEM.VALUE}) |<p>ClickHouse version has changed. Ack to close.</p> |`{TEMPLATE_NAME:clickhouse.version.diff()}=1 and {TEMPLATE_NAME:clickhouse.version.strlen()}>0` |INFO |<p>Manual close: YES</p> |
|ClickHouse: has been restarted (uptime < 10m) |<p>Uptime is less than 10 minutes</p> |`{TEMPLATE_NAME:clickhouse.uptime.last()}<10m` |INFO |<p>Manual close: YES</p> |
|ClickHouse: Failed to fetch info data (or no data for 30m) |<p>Zabbix has not received data for items for the last 30 minutes</p> |`{TEMPLATE_NAME:clickhouse.uptime.nodata(30m)}=1` |WARNING |<p>Manual close: YES</p><p>**Depends on**:</p><p>- ClickHouse: Service is down</p> |
diff --git a/templates/db/clickhouse_http/template_db_clickhouse_http.yaml b/templates/db/clickhouse_http/template_db_clickhouse_http.yaml
index 634286ede08..f784227ba48 100644
--- a/templates/db/clickhouse_http/template_db_clickhouse_http.yaml
+++ b/templates/db/clickhouse_http/template_db_clickhouse_http.yaml
@@ -425,7 +425,7 @@ zabbix_export:
value_type: FLOAT
units: B
description: |
- "Total amount of memory (bytes) allocated in background processing pool (that is dedicated for backround merges, mutations and fetches).
+ "Total amount of memory (bytes) allocated in background processing pool (that is dedicated for background merges, mutations and fetches).
Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa. This happens naturally due to caches for tables indexes and doesn't indicate memory leaks."
applications:
-
@@ -438,7 +438,7 @@ zabbix_export:
master_item:
key: clickhouse.system.metrics
-
- name: 'ClickHouse: Memory used for backround moves'
+ name: 'ClickHouse: Memory used for background moves'
type: DEPENDENT
key: clickhouse.memory.tracking.background.moves
delay: '0'
@@ -446,7 +446,7 @@ zabbix_export:
value_type: FLOAT
units: B
description: |
- "Total amount of memory (bytes) allocated in background processing pool (that is dedicated for backround moves). Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa.
+ "Total amount of memory (bytes) allocated in background processing pool (that is dedicated for background moves). Note that this value may include a drift when the memory was allocated in a context of background processing pool and freed in other context or vice-versa.
This happens naturally due to caches for tables indexes and doesn't indicate memory leaks."
applications:
-
@@ -1101,9 +1101,9 @@ zabbix_export:
master_item:
key: clickhouse.system.metrics
-
- name: 'ClickHouse: ZooKeeper exeptions per second'
+ name: 'ClickHouse: ZooKeeper exceptions per second'
type: DEPENDENT
- key: clickhouse.zookeper.exeptions.rate
+ key: clickhouse.zookeper.exceptions.rate
delay: '0'
history: 7d
value_type: FLOAT
@@ -1125,7 +1125,7 @@ zabbix_export:
master_item:
key: clickhouse.system.events
-
- name: 'ClickHouse: ZooKeeper hardware exeptions per second'
+ name: 'ClickHouse: ZooKeeper hardware exceptions per second'
type: DEPENDENT
key: clickhouse.zookeper.hw_exeptions.rate
delay: '0'
@@ -1191,7 +1191,7 @@ zabbix_export:
"Number of sessions (connections) to ZooKeeper.
Should be no more than one, because using more than one connection to ZooKeeper may lead to bugs due to lack of linearizability (stale reads) that ZooKeeper consistency model allows."
-
- name: 'ClickHouse: ZooKeeper user exeptions per second'
+ name: 'ClickHouse: ZooKeeper user exceptions per second'
type: DEPENDENT
key: clickhouse.zookeper.user_exeptions.rate
delay: '0'
@@ -2078,13 +2078,13 @@ zabbix_export:
host: 'ClickHouse by HTTP'
key: clickhouse.uptime
-
- name: 'ClickHouse: Zookeeper exeptions rate'
+ name: 'ClickHouse: Zookeeper exceptions rate'
graph_items:
-
color: 1A7C11
item:
host: 'ClickHouse by HTTP'
- key: clickhouse.zookeper.exeptions.rate
+ key: clickhouse.zookeper.exceptions.rate
-
sortorder: '1'
color: 2774A4
diff --git a/templates/db/ignite_jmx/README.md b/templates/db/ignite_jmx/README.md
index 902e6b86598..dc1814d4a82 100644
--- a/templates/db/ignite_jmx/README.md
+++ b/templates/db/ignite_jmx/README.md
@@ -161,7 +161,7 @@ There are no template links in this template.
|Data region {#JMXNAME}: Node started to evict pages |<p>You store more data then region can accommodate. Data started to move to disk it can make requests work slower. Ack to close.</p> |`{TEMPLATE_NAME:jmx["{#JMXOBJ}",EvictionRate].min(5m)}>0` |INFO |<p>Manual close: YES</p> |
|Data region {#JMXNAME}: Data region utilisation is too high (over {$IGNITE.DATA.REGION.PUSED.MAX.WARN} in 5m) |<p>Data region utilization is high. Increase data region size or delete any data.</p> |`{TEMPLATE_NAME:jmx["{#JMXOBJ}",OffheapUsedSize].min(5m)}/{Ignite by JMX:jmx["{#JMXOBJ}",OffHeapSize].last()}*100>{$IGNITE.DATA.REGION.PUSED.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Data region {#JMXNAME}: Data region utilisation is too high (over {$IGNITE.DATA.REGION.PUSED.MAX.HIGH} in 5m)</p> |
|Data region {#JMXNAME}: Data region utilisation is too high (over {$IGNITE.DATA.REGION.PUSED.MAX.HIGH} in 5m) |<p>Data region utilization is high. Increase data region size or delete any data.</p> |`{TEMPLATE_NAME:jmx["{#JMXOBJ}",OffheapUsedSize].min(5m)}/{Ignite by JMX:jmx["{#JMXOBJ}",OffHeapSize].last()}*100>{$IGNITE.DATA.REGION.PUSED.MAX.HIGH}` |HIGH | |
-|Data region {#JMXNAME}: Pages replace rate more than 0 |<p>There is more data than DataRegionMaxSize. Сluster started to replace pages in memory. Page replacement can slow down operations.</p> |`{TEMPLATE_NAME:jmx["{#JMXOBJ}",PagesReplaceRate].min(5m)}>0` |WARNING | |
+|Data region {#JMXNAME}: Pages replace rate more than 0 |<p>There is more data than DataRegionMaxSize. Cluster started to replace pages in memory. Page replacement can slow down operations.</p> |`{TEMPLATE_NAME:jmx["{#JMXOBJ}",PagesReplaceRate].min(5m)}>0` |WARNING | |
|Data region {#JMXNAME}: Checkpoint buffer utilization is too high (over {$IGNITE.CHECKPOINT.PUSED.MAX.WARN} in 5m) |<p>Checkpoint buffer utilization is high. Threads will be throttled to avoid buffer overflow. It can be caused by high disk utilization.</p> |`{TEMPLATE_NAME:jmx["{#JMXOBJ}",UsedCheckpointBufferSize].min(5m)}/{Ignite by JMX:jmx["{#JMXOBJ}",CheckpointBufferSize].last()}*100>{$IGNITE.CHECKPOINT.PUSED.MAX.WARN}` |WARNING |<p>**Depends on**:</p><p>- Data region {#JMXNAME}: Checkpoint buffer utilization is too high (over {$IGNITE.CHECKPOINT.PUSED.MAX.HIGH} in 5m)</p> |
|Data region {#JMXNAME}: Checkpoint buffer utilization is too high (over {$IGNITE.CHECKPOINT.PUSED.MAX.HIGH} in 5m) |<p>Checkpoint buffer utilization is high. Threads will be throttled to avoid buffer overflow. It can be caused by high disk utilization.</p> |`{TEMPLATE_NAME:jmx["{#JMXOBJ}",UsedCheckpointBufferSize].min(5m)}/{Ignite by JMX:jmx["{#JMXOBJ}",CheckpointBufferSize].last()}*100>{$IGNITE.CHECKPOINT.PUSED.MAX.HIGH}` |HIGH | |
|Cache group [{#JMXNAME}]: One or more backups are unavailable |<p>-</p> |`{TEMPLATE_NAME:jmx["{#JMXOBJ}",Backups].min(5m)}>={Ignite by JMX:jmx["{#JMXOBJ}",MinimumNumberOfPartitionCopies].max(5m)}` |WARNING | |
diff --git a/templates/db/ignite_jmx/template_db_ignite_jmx.yaml b/templates/db/ignite_jmx/template_db_ignite_jmx.yaml
index 5e9602ae5d8..c908fb93dba 100644
--- a/templates/db/ignite_jmx/template_db_ignite_jmx.yaml
+++ b/templates/db/ignite_jmx/template_db_ignite_jmx.yaml
@@ -182,7 +182,7 @@ zabbix_export:
expression: '{min(5m)}>0'
name: 'Data region {#JMXNAME}: Pages replace rate more than 0'
priority: WARNING
- description: 'There is more data than DataRegionMaxSize. Сluster started to replace pages in memory. Page replacement can slow down operations.'
+ description: 'There is more data than DataRegionMaxSize. Cluster started to replace pages in memory. Page replacement can slow down operations.'
-
name: 'Data region {#JMXNAME}: Allocated, bytes'
type: JMX
diff --git a/templates/db/oracle_agent2/README.md b/templates/db/oracle_agent2/README.md
index d1aafafde04..f04da422241 100644
--- a/templates/db/oracle_agent2/README.md
+++ b/templates/db/oracle_agent2/README.md
@@ -185,7 +185,7 @@ There are no template links in this template.
|Oracle: Too many active sessions (over {$ORACLE.SESSIONS.MAX.WARN}% for 5 min) |<p>Active sessions are using more than {$ORACLE.SESSIONS.MAX.WARN}% of the available sessions.</p> |`{TEMPLATE_NAME:oracle.session_count.min(5m)} * 100 / {Oracle by Zabbix Agent 2:oracle.session_limit.last()} > {$ORACLE.SESSIONS.MAX.WARN}` |WARNING | |
|Oracle: Too many locked sessions (over {$ORACLE.SESSIONS.LOCK.MAX.WARN}% for 5 min) |<p>Number of locked sessions is over {$ORACLE.SESSIONS.LOCK.MAX.WARN}% of the running sessions.</p> |`{TEMPLATE_NAME:oracle.session_lock_rate.min(5m)} > {$ORACLE.SESSIONS.LOCK.MAX.WARN}` |WARNING | |
|Oracle: Too many sessions locked over {$ORACLE.SESSION.LOCK.MAX.TIME}s (over {$ORACLE.SESSION.LONG.LOCK.MAX.WARN} for 5 min) |<p>Number of sessions locked over {$ORACLE.SESSION.LOCK.MAX.TIME} seconds is too high. Long-term locks can negatively affect database performance, therefore, if they are detected, you should first find the most difficult queries from the database point of view and analyze possible resource leaks.</p> |`{TEMPLATE_NAME:oracle.session_long_time_locked.min(5m)} > {$ORACLE.SESSION.LONG.LOCK.MAX.WARN}` |WARNING | |
-|Oracle: Too hight database concurrency (over {$ORACLE.CONCURRENCY.MAX.WARN}% for 5 min) |<p>Concurrency rate is over {$ORACLE.CONCURRENCY.MAX.WARN}%. A high contention value does not indicate the root cause of the problem, but is a signal to search for it. In the case of high competition, an analysis of resource consumption should be carried out, the most "heavy" queries made in the database, possibly - session tracing. All this will help determine the root cause and possible optimization points both in the database configuration and in the logic of building queries of the application itself.</p> |`{TEMPLATE_NAME:oracle.session_concurrency_rate.min(5m)} > {$ORACLE.CONCURRENCY.MAX.WARN}` |WARNING | |
+|Oracle: Too high database concurrency (over {$ORACLE.CONCURRENCY.MAX.WARN}% for 5 min) |<p>Concurrency rate is over {$ORACLE.CONCURRENCY.MAX.WARN}%. A high contention value does not indicate the root cause of the problem, but is a signal to search for it. In the case of high competition, an analysis of resource consumption should be carried out, the most "heavy" queries made in the database, possibly - session tracing. All this will help determine the root cause and possible optimization points both in the database configuration and in the logic of building queries of the application itself.</p> |`{TEMPLATE_NAME:oracle.session_concurrency_rate.min(5m)} > {$ORACLE.CONCURRENCY.MAX.WARN}` |WARNING | |
|Oracle: Total PGA inuse is too high (over {$ORACLE.PGA.USE.MAX.WARN}% for 5 min) |<p>Total PGA in use is more than {$ORACLE.PGA.USE.MAX.WARN}% of PGA_AGGREGATE_TARGET.</p> |`{TEMPLATE_NAME:oracle.total_pga_used.min(5m)} * 100 / {Oracle by Zabbix Agent 2:oracle.pga_target.last()} > {$ORACLE.PGA.USE.MAX.WARN}` |WARNING | |
|Oracle: Zabbix account will expire soon (under {$ORACLE.EXPIRE.PASSWORD.MIN.WARN} days) |<p>Password for zabbix user in the database will expire soon.</p> |`{TEMPLATE_NAME:oracle.user.info["{$ORACLE.CONNSTRING}","{$ORACLE.USER}","{$ORACLE.PASSWORD}","{$ORACLE.SERVICE}"].last()} < {$ORACLE.EXPIRE.PASSWORD.MIN.WARN}` |WARNING | |
|Oracle: Number of REDO logs available for switching is too low (less {$ORACLE.REDO.MIN.WARN} for 5 min) |<p>Number of available for log switching inactive/unused REDOs is low (Database down risk)</p> |`{TEMPLATE_NAME:oracle.redolog.info["{$ORACLE.CONNSTRING}","{$ORACLE.USER}","{$ORACLE.PASSWORD}","{$ORACLE.SERVICE}"].max(5m)} < {$ORACLE.REDO.MIN.WARN}` |WARNING | |
diff --git a/templates/db/oracle_agent2/template_db_oracle_agent2.yaml b/templates/db/oracle_agent2/template_db_oracle_agent2.yaml
index 3dc2d1006ef..5d5fd2440c5 100644
--- a/templates/db/oracle_agent2/template_db_oracle_agent2.yaml
+++ b/templates/db/oracle_agent2/template_db_oracle_agent2.yaml
@@ -925,7 +925,7 @@ zabbix_export:
triggers:
-
expression: '{min(5m)} > {$ORACLE.CONCURRENCY.MAX.WARN}'
- name: 'Oracle: Too hight database concurrency (over {$ORACLE.CONCURRENCY.MAX.WARN}% for 5 min)'
+ name: 'Oracle: Too high database concurrency (over {$ORACLE.CONCURRENCY.MAX.WARN}% for 5 min)'
priority: WARNING
description: 'Concurrency rate is over {$ORACLE.CONCURRENCY.MAX.WARN}%. A high contention value does not indicate the root cause of the problem, but is a signal to search for it. In the case of high competition, an analysis of resource consumption should be carried out, the most "heavy" queries made in the database, possibly - session tracing. All this will help determine the root cause and possible optimization points both in the database configuration and in the logic of building queries of the application itself.'
-
diff --git a/templates/db/oracle_odbc/README.md b/templates/db/oracle_odbc/README.md
index 18b976c25d2..b716037e47b 100644
--- a/templates/db/oracle_odbc/README.md
+++ b/templates/db/oracle_odbc/README.md
@@ -246,7 +246,7 @@ There are no template links in this template.
|Oracle: Too many active sessions (over {$ORACLE.SESSIONS.MAX.WARN}% for 5 min) |<p>Active sessions are using more than {$ORACLE.SESSIONS.MAX.WARN}% of the available sessions.</p> |`{TEMPLATE_NAME:oracle.session_count.min(5m)} * 100 / {Oracle by ODBC:oracle.session_limit.last()} > {$ORACLE.SESSIONS.MAX.WARN}` |WARNING | |
|Oracle: Too many locked sessions (over {$ORACLE.SESSIONS.LOCK.MAX.WARN}% for 5 min) |<p>Number of locked sessions is over {$ORACLE.SESSIONS.LOCK.MAX.WARN}% of the running sessions.</p> |`{TEMPLATE_NAME:oracle.session_lock_rate.min(5m)} > {$ORACLE.SESSIONS.LOCK.MAX.WARN}` |WARNING | |
|Oracle: Too many sessions locked over {$ORACLE.SESSION.LOCK.MAX.TIME}s (over {$ORACLE.SESSION.LONG.LOCK.MAX.WARN} for 5 min) |<p>Number of sessions locked over {$ORACLE.SESSION.LOCK.MAX.TIME} seconds is too high. Long-term locks can negatively affect database performance, therefore, if they are detected, you should first find the most difficult queries from the database point of view and analyze possible resource leaks.</p> |`{TEMPLATE_NAME:oracle.session_long_time_locked.min(5m)} > {$ORACLE.SESSION.LONG.LOCK.MAX.WARN}` |WARNING | |
-|Oracle: Too hight database concurrency (over {$ORACLE.CONCURRENCY.MAX.WARN}% for 5 min) |<p>Concurrency rate is over {$ORACLE.CONCURRENCY.MAX.WARN}%. A high contention value does not indicate the root cause of the problem, but is a signal to search for it. In the case of high competition, an analysis of resource consumption should be carried out, the most "heavy" queries made in the database, possibly - session tracing. All this will help determine the root cause and possible optimization points both in the database configuration and in the logic of building queries of the application itself.</p> |`{TEMPLATE_NAME:oracle.session_concurrency_rate.min(5m)} > {$ORACLE.CONCURRENCY.MAX.WARN}` |WARNING | |
+|Oracle: Too high database concurrency (over {$ORACLE.CONCURRENCY.MAX.WARN}% for 5 min) |<p>Concurrency rate is over {$ORACLE.CONCURRENCY.MAX.WARN}%. A high contention value does not indicate the root cause of the problem, but is a signal to search for it. In the case of high competition, an analysis of resource consumption should be carried out, the most "heavy" queries made in the database, possibly - session tracing. All this will help determine the root cause and possible optimization points both in the database configuration and in the logic of building queries of the application itself.</p> |`{TEMPLATE_NAME:oracle.session_concurrency_rate.min(5m)} > {$ORACLE.CONCURRENCY.MAX.WARN}` |WARNING | |
|Oracle: Zabbix account will expire soon (under {$ORACLE.EXPIRE.PASSWORD.MIN.WARN} days) |<p>Password for zabbix user in the database will expire soon.</p> |`{TEMPLATE_NAME:oracle.user_expire_password.last()} < {$ORACLE.EXPIRE.PASSWORD.MIN.WARN}` |WARNING | |
|Oracle: Total PGA inuse is too high (over {$ORACLE.PGA.USE.MAX.WARN}% for 5 min) |<p>Total PGA in use is more than {$ORACLE.PGA.USE.MAX.WARN}% of PGA_AGGREGATE_TARGET.</p> |`{TEMPLATE_NAME:oracle.total_pga_used.min(5m)} * 100 / {Oracle by ODBC:oracle.pga_target.last()} > {$ORACLE.PGA.USE.MAX.WARN}` |WARNING | |
|Oracle: Number of REDO logs available for switching is too low (less {$ORACLE.REDO.MIN.WARN} for 5 min) |<p>Number of available for log switching inactive/unused REDOs is low (Database down risk)</p> |`{TEMPLATE_NAME:oracle.redo_logs_available.max(5m)} < {$ORACLE.REDO.MIN.WARN}` |WARNING | |
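Editor's note: the trigger expressions in the Oracle README rows above are plain threshold comparisons built from Zabbix macros. A minimal sketch of how two of them evaluate, with the macro values and sample readings invented purely for illustration (only the formulas come from the table):

```python
# Hypothetical illustration of the trigger expressions shown in the table above.
# Thresholds and sample readings are invented; the arithmetic mirrors the expressions.

def too_many_active_sessions(session_count_min_5m, session_limit_last, sessions_max_warn_pct):
    # {TEMPLATE_NAME:oracle.session_count.min(5m)} * 100
    #   / {...:oracle.session_limit.last()} > {$ORACLE.SESSIONS.MAX.WARN}
    return session_count_min_5m * 100 / session_limit_last > sessions_max_warn_pct

def too_high_concurrency(concurrency_rate_min_5m, concurrency_max_warn_pct):
    # {TEMPLATE_NAME:oracle.session_concurrency_rate.min(5m)} > {$ORACLE.CONCURRENCY.MAX.WARN}
    return concurrency_rate_min_5m > concurrency_max_warn_pct

# Example: 460 of 500 sessions in use for the whole 5-minute window -> 92% > 80% -> WARNING
print(too_many_active_sessions(460, 500, 80))   # True
# Example: concurrency rate stayed above 25% while the warning macro is 20% -> WARNING
print(too_high_concurrency(25, 20))             # True
```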
diff --git a/templates/db/oracle_odbc/template_db_oracle_odbc.yaml b/templates/db/oracle_odbc/template_db_oracle_odbc.yaml
index 9b9c25c3a36..b2ca93f7871 100644
--- a/templates/db/oracle_odbc/template_db_oracle_odbc.yaml
+++ b/templates/db/oracle_odbc/template_db_oracle_odbc.yaml
@@ -1123,7 +1123,7 @@ zabbix_export:
triggers:
-
expression: '{min(5m)} > {$ORACLE.CONCURRENCY.MAX.WARN}'
- name: 'Oracle: Too hight database concurrency (over {$ORACLE.CONCURRENCY.MAX.WARN}% for 5 min)'
+ name: 'Oracle: Too high database concurrency (over {$ORACLE.CONCURRENCY.MAX.WARN}% for 5 min)'
priority: WARNING
description: 'Concurrency rate is over {$ORACLE.CONCURRENCY.MAX.WARN}%. A high contention value does not indicate the root cause of the problem, but is a signal to search for it. In the case of high competition, an analysis of resource consumption should be carried out, the most "heavy" queries made in the database, possibly - session tracing. All this will help determine the root cause and possible optimization points both in the database configuration and in the logic of building queries of the application itself.'
-
diff --git a/templates/db/postgresql/README.md b/templates/db/postgresql/README.md
index 5ef0345987a..79227290383 100644
--- a/templates/db/postgresql/README.md
+++ b/templates/db/postgresql/README.md
@@ -4,7 +4,7 @@
## Overview
Templates to monitor PostgreSQL by Zabbix.\
-This template was tested on Zabbix 4.2.1 and PostgreSQL vesions 9.6, 10 and 11 on Linux and Windows.
+This template was tested on Zabbix 4.2.1 and PostgreSQL versions 9.6, 10 and 11 on Linux and Windows.
## Setup
diff --git a/templates/db/postgresql_agent2/README.md b/templates/db/postgresql_agent2/README.md
index ce92dfc502b..1ca3010d22b 100644
--- a/templates/db/postgresql_agent2/README.md
+++ b/templates/db/postgresql_agent2/README.md
@@ -132,7 +132,7 @@ There are no template links in this template.
| PostgreSQL | Application {#APPLICATION}: Replication replay lag | | DEPENDENT | pgsql.replication.process.replay_lag["{#APPLICATION}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$['{#APPLICATION}'].replay_lag`</p> |
| PostgreSQL | Application {#APPLICATION}: Replication write lag | | DEPENDENT | pgsql.replication.process.write_lag["{#APPLICATION}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$['{#APPLICATION}'].write_lag`</p> |
| PostgreSQL | DB {#DBNAME}: Database age | <p>Database age</p> | ZABBIX_PASSIVE | pgsql.db.age["{$PG.URI}","{$PG.USER}","{$PG.PASSWORD}","{#DBNAME}"] |
-| PostgreSQL | DB {#DBNAME}: Get bloating tables | <p>Number оf bloating tables</p> | ZABBIX_PASSIVE | pgsql.db.bloating_tables["{$PG.URI}","{$PG.USER}","{$PG.PASSWORD}","{#DBNAME}"] |
+| PostgreSQL | DB {#DBNAME}: Get bloating tables | <p>Number of bloating tables</p> | ZABBIX_PASSIVE | pgsql.db.bloating_tables["{$PG.URI}","{$PG.USER}","{$PG.PASSWORD}","{#DBNAME}"] |
| PostgreSQL | DB {#DBNAME}: Database size | <p>Database size</p> | ZABBIX_PASSIVE | pgsql.db.size["{$PG.URI}","{$PG.USER}","{$PG.PASSWORD}","{#DBNAME}"] |
| PostgreSQL | DB {#DBNAME}: Blocks hit per second | <p>Total number of times disk blocks were found already in the buffer cache, so that a read was not necessary</p> | DEPENDENT | pgsql.dbstat.blks_hit.rate["{#DBNAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$['{#DBNAME}'].blks_hit`</p><p>- CHANGE_PER_SECOND |
| PostgreSQL | DB {#DBNAME}: Disk blocks read per second | <p>Total number of disk blocks read in this database</p> | DEPENDENT | pgsql.dbstat.blks_read.rate["{#DBNAME}"]<p>**Preprocessing**:</p><p>- JSONPATH: `$['{#DBNAME}'].blks_read`</p><p>- CHANGE_PER_SECOND |
diff --git a/templates/db/postgresql_agent2/template_db_postgresql_agent2.yaml b/templates/db/postgresql_agent2/template_db_postgresql_agent2.yaml
index 301fec12cf8..a8589ee1213 100644
--- a/templates/db/postgresql_agent2/template_db_postgresql_agent2.yaml
+++ b/templates/db/postgresql_agent2/template_db_postgresql_agent2.yaml
@@ -1152,7 +1152,7 @@ zabbix_export:
name: 'DB {#DBNAME}: Get bloating tables'
key: 'pgsql.db.bloating_tables["{$PG.URI}","{$PG.USER}","{$PG.PASSWORD}","{#DBNAME}"]'
history: 7d
- description: 'Number оf bloating tables'
+ description: 'Number of bloating tables'
application_prototypes:
-
name: 'PostgreSQL: DB {#DBNAME}'
diff --git a/templates/net/arista_snmp/README.md b/templates/net/arista_snmp/README.md
index 75437ecf2ce..19e79bafdec 100644
--- a/templates/net/arista_snmp/README.md
+++ b/templates/net/arista_snmp/README.md
@@ -22,7 +22,7 @@ No specific Zabbix configuration is required.
|Name|Description|Default|
|----|-----------|-------|
|{$FAN_CRIT_STATUS} |<p>-</p> |`3` |
-|{$MEMORY.NAME.NOT_MATCHES} |<p>Filter is overriden to ignore RAM(Cache) and RAM(Buffers) memory objects.</p> |`(Buffer|Cache)` |
+|{$MEMORY.NAME.NOT_MATCHES} |<p>Filter is overridden to ignore RAM(Cache) and RAM(Buffers) memory objects.</p> |`(Buffer|Cache)` |
|{$PSU_CRIT_STATUS} |<p>-</p> |`2` |
|{$VFS.FS.PUSED.MAX.CRIT} |<p>-</p> |`95` |
|{$VFS.FS.PUSED.MAX.WARN} |<p>-</p> |`90` |
diff --git a/templates/net/arista_snmp/template_net_arista_snmp.yaml b/templates/net/arista_snmp/template_net_arista_snmp.yaml
index f429deb2577..b9cbb11d0c5 100644
--- a/templates/net/arista_snmp/template_net_arista_snmp.yaml
+++ b/templates/net/arista_snmp/template_net_arista_snmp.yaml
@@ -621,7 +621,7 @@ zabbix_export:
-
macro: '{$MEMORY.NAME.NOT_MATCHES}'
value: (Buffer|Cache)
- description: 'Filter is overriden to ignore RAM(Cache) and RAM(Buffers) memory objects.'
+ description: 'Filter is overridden to ignore RAM(Cache) and RAM(Buffers) memory objects.'
-
macro: '{$PSU_CRIT_STATUS}'
value: '2'
diff --git a/templates/net/brocade_foundry_sw_snmp/README.md b/templates/net/brocade_foundry_sw_snmp/README.md
index f80ae50f6d5..dfbbf9c017d 100644
--- a/templates/net/brocade_foundry_sw_snmp/README.md
+++ b/templates/net/brocade_foundry_sw_snmp/README.md
@@ -109,7 +109,7 @@ No specific Zabbix configuration is required.
|Inventory |Firmware version |<p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>The version of the running software in the form'major.minor.maintenance[letters]'</p> |SNMP |system.hw.firmware<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1d`</p> |
|Power_supply |PSU {#PSU_INDEX}: Power supply status |<p>MIB: FOUNDRY-SN-AGENT-MIB</p> |SNMP |sensor.psu.status[snChasPwrSupplyOperStatus.{#SNMPINDEX}] |
|Temperature |{#SENSOR_DESCR}: Temperature |<p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>Temperature of the sensor represented by this row. Each unit is 0.5 degrees Celsius.</p> |SNMP |sensor.temp.value[snAgentTempValue.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.5`</p> |
-|Temperature |Chassis #{#SNMPINDEX}: Temperature |<p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>Temperature of the chassis. Each unit is 0.5 degrees Celcius.</p><p>Only management module built with temperature sensor hardware is applicable.</p><p>For those non-applicable management module, it returns no-such-name.</p> |SNMP |sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.5`</p> |
+|Temperature |Chassis #{#SNMPINDEX}: Temperature |<p>MIB: FOUNDRY-SN-AGENT-MIB</p><p>Temperature of the chassis. Each unit is 0.5 degrees Celsius.</p><p>Only management module built with temperature sensor hardware is applicable.</p><p>For those non-applicable management module, it returns no-such-name.</p> |SNMP |sensor.temp.value[snChasActualTemperature.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.5`</p> |
## Triggers
diff --git a/templates/net/brocade_foundry_sw_snmp/template_net_brocade_foundry_sw_snmp.yaml b/templates/net/brocade_foundry_sw_snmp/template_net_brocade_foundry_sw_snmp.yaml
index e47806c8807..321652c9b9f 100644
--- a/templates/net/brocade_foundry_sw_snmp/template_net_brocade_foundry_sw_snmp.yaml
+++ b/templates/net/brocade_foundry_sw_snmp/template_net_brocade_foundry_sw_snmp.yaml
@@ -188,7 +188,7 @@ zabbix_export:
units: °C
description: |
MIB: FOUNDRY-SN-AGENT-MIB
- Temperature of the chassis. Each unit is 0.5 degrees Celcius.
+ Temperature of the chassis. Each unit is 0.5 degrees Celsius.
Only management module built with temperature sensor hardware is applicable.
For those non-applicable management module, it returns no-such-name.
applications:
diff --git a/templates/net/morningstar_snmp/tristar_mppt_600V_snmp/README.md b/templates/net/morningstar_snmp/tristar_mppt_600V_snmp/README.md
index f111dd172d1..09f04abdc5e 100644
--- a/templates/net/morningstar_snmp/tristar_mppt_600V_snmp/README.md
+++ b/templates/net/morningstar_snmp/tristar_mppt_600V_snmp/README.md
@@ -59,8 +59,8 @@ There are no template links in this template.
|Battery |Battery: Charge Current |<p>MIB: TRISTAR-MPPT</p><p>Description:Battery Current</p><p>Scaling Factor:1.0</p><p>Units:A</p><p>Range:[-10, 80]</p><p>Modbus address:0x001c</p> |SNMP |charge.current[batteryCurrent.0] |
|Battery |Battery: Output Power |<p>MIB: TRISTAR-MPPT</p><p>Description:Output Power</p><p>Scaling Factor:1.0</p><p>Units:W</p><p>Range:[-10, 4000]</p><p>Modbus address:0x003a</p> |SNMP |charge.output_power[ outputPower.0] |
|Battery |Battery: Voltage{#SINGLETON} |<p>MIB: TRISTAR-MPPT</p><p>Description:Battery voltage</p><p>Scaling Factor:1.0</p><p>Units:V</p><p>Range:[-10, 80]</p><p>Modbus address:0x0018</p> |SNMP |battery.voltage[batteryVoltage.0{#SINGLETON}] |
-|Counter |Counter: Charge Amp-hours |<p>MIB: TRISTAR-MPPT</p><p>Description:Ah Charge Resetable</p><p>Scaling Factor:1.0</p><p>Units:Ah</p><p>Range:[0.0, 5000]</p><p>Modbus addresses:H=0x0034 L=0x0035</p> |SNMP |counter.charge_amp_hours[ahChargeResetable.0] |
-|Counter |Counter: Charge KW-hours |<p>MIB: TRISTAR-MPPT</p><p>Description:kWh Charge Resetable</p><p>Scaling Factor:1.0</p><p>Units:kWh</p><p>Range:[0.0, 65535.0]</p><p>Modbus address:0x0038</p> |SNMP |counter.charge_kw_hours[kwhChargeResetable.0] |
+|Counter |Counter: Charge Amp-hours |<p>MIB: TRISTAR-MPPT</p><p>Description:Ah Charge Resettable</p><p>Scaling Factor:1.0</p><p>Units:Ah</p><p>Range:[0.0, 5000]</p><p>Modbus addresses:H=0x0034 L=0x0035</p> |SNMP |counter.charge_amp_hours[ahChargeResetable.0] |
+|Counter |Counter: Charge KW-hours |<p>MIB: TRISTAR-MPPT</p><p>Description:kWh Charge Resettable</p><p>Scaling Factor:1.0</p><p>Units:kWh</p><p>Range:[0.0, 65535.0]</p><p>Modbus address:0x0038</p> |SNMP |counter.charge_kw_hours[kwhChargeResetable.0] |
|Status |Status: Uptime |<p>Device uptime in seconds</p> |SNMP |status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
|Status |Status: Faults |<p>MIB: TRISTAR-MPPT</p><p>Description:Faults</p><p>Modbus addresses:H=0x002c L=0x002d</p> |SNMP |status.faults[faults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
|Status |Status: Alarms |<p>MIB: TRISTAR-MPPT</p><p>Description:Alarms</p><p>Modbus addresses:H=0x002e L=0x002f</p> |SNMP |status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
diff --git a/templates/net/morningstar_snmp/tristar_mppt_600V_snmp/tristar_mppt_600V_snmp.yaml b/templates/net/morningstar_snmp/tristar_mppt_600V_snmp/tristar_mppt_600V_snmp.yaml
index 9604dfa72b9..da77a95d835 100644
--- a/templates/net/morningstar_snmp/tristar_mppt_600V_snmp/tristar_mppt_600V_snmp.yaml
+++ b/templates/net/morningstar_snmp/tristar_mppt_600V_snmp/tristar_mppt_600V_snmp.yaml
@@ -233,7 +233,7 @@ zabbix_export:
units: Ah
description: |
MIB: TRISTAR-MPPT
- Description:Ah Charge Resetable
+ Description:Ah Charge Resettable
Scaling Factor:1.0
Units:Ah
Range:[0.0, 5000]
@@ -251,7 +251,7 @@ zabbix_export:
units: '!kWh'
description: |
MIB: TRISTAR-MPPT
- Description:kWh Charge Resetable
+ Description:kWh Charge Resettable
Scaling Factor:1.0
Units:kWh
Range:[0.0, 65535.0]
diff --git a/templates/net/morningstar_snmp/tristar_mppt_snmp/README.md b/templates/net/morningstar_snmp/tristar_mppt_snmp/README.md
index b718f8ff16b..976d44df503 100644
--- a/templates/net/morningstar_snmp/tristar_mppt_snmp/README.md
+++ b/templates/net/morningstar_snmp/tristar_mppt_snmp/README.md
@@ -59,8 +59,8 @@ There are no template links in this template.
|Battery |Battery: Charge Current |<p>MIB: TRISTAR-MPPT</p><p>Description:Battery Current</p><p>Scaling Factor:0.00244140625</p><p>Units:A</p><p>Range:[-10, 80]</p><p>Modbus address:0x001c</p> |SNMP |charge.current[batteryCurrent.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.00244140625`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
|Battery |Battery: Output Power |<p>MIB: TRISTAR-MPPT</p><p>Description:Output Power</p><p>Scaling Factor:0.10986328125</p><p>Units:W</p><p>Range:[-10, 5000]</p><p>Modbus address:0x003a</p> |SNMP |charge.output_power[ outputPower.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1098632813`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
|Battery |Battery: Voltage{#SINGLETON} |<p>MIB: TRISTAR-MPPT</p><p>Description:Battery voltage</p><p>Scaling Factor:0.0054931640625</p><p>Units:V</p><p>Range:[-10, 180.0]</p><p>Modbus address:0x0018</p> |SNMP |battery.voltage[batteryVoltage.0{#SINGLETON}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.005493164063`</p><p>- REGEX: `^(\d+)(\.\d{1,2})? \1\2`</p> |
-|Counter |Counter: Charge Amp-hours |<p>MIB: TRISTAR-MPPT</p><p>Description:Ah Charge Resetable</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 5000]</p><p>Modbus addresses:H=0x0034 L=0x0035</p> |SNMP |counter.charge_amp_hours[ahChargeResetable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
-|Counter |Counter: Charge KW-hours |<p>MIB: TRISTAR-MPPT</p><p>Description:kWh Charge Resetable</p><p>Scaling Factor:0.1</p><p>Units:kWh</p><p>Range:[0.0, 65535.0]</p><p>Modbus address:0x0038</p> |SNMP |counter.charge_kw_hours[kwhChargeResetable.0] |
+|Counter |Counter: Charge Amp-hours |<p>MIB: TRISTAR-MPPT</p><p>Description:Ah Charge Resettable</p><p>Scaling Factor:0.1</p><p>Units:Ah</p><p>Range:[0.0, 5000]</p><p>Modbus addresses:H=0x0034 L=0x0035</p> |SNMP |counter.charge_amp_hours[ahChargeResetable.0]<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.1`</p> |
+|Counter |Counter: Charge KW-hours |<p>MIB: TRISTAR-MPPT</p><p>Description:kWh Charge Resettable</p><p>Scaling Factor:0.1</p><p>Units:kWh</p><p>Range:[0.0, 65535.0]</p><p>Modbus address:0x0038</p> |SNMP |counter.charge_kw_hours[kwhChargeResetable.0] |
|Status |Status: Uptime |<p>Device uptime in seconds</p> |SNMP |status.uptime<p>**Preprocessing**:</p><p>- MULTIPLIER: `0.01`</p> |
|Status |Status: Faults |<p>MIB: TRISTAR-MPPT</p><p>Description:Faults</p><p>Modbus address:0x002c</p> |SNMP |status.faults[faults.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
|Status |Status: Alarms |<p>MIB: TRISTAR-MPPT</p><p>Description:Faults</p><p>Modbus address:0x002c</p> |SNMP |status.alarms[alarms.0]<p>**Preprocessing**:</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `1h`</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
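Editor's note: the Battery rows in the table above chain a MULTIPLIER step with a REGEX step whose pattern is `^(\d+)(\.\d{1,2})?` and whose output is `\1\2`, which effectively truncates the scaled register reading to two decimal places. A minimal sketch of that preprocessing chain, assuming a positive reading; the raw register value is invented:

```python
import re

def preprocess(raw_register_value, multiplier):
    """Mimic the MULTIPLIER + REGEX preprocessing shown in the table above."""
    scaled = str(raw_register_value * multiplier)
    # Pattern ^(\d+)(\.\d{1,2})? with output \1\2 keeps the integer part and at
    # most two decimal digits, discarding the rest of the fraction.
    m = re.match(r'^(\d+)(\.\d{1,2})?', scaled)
    return m.group(1) + (m.group(2) or '')

# Example: raw battery-voltage register 20000 * 0.0054931640625 = 109.86328125 -> "109.86"
print(preprocess(20000, 0.0054931640625))
```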
diff --git a/templates/net/morningstar_snmp/tristar_mppt_snmp/tristar_mppt_snmp.yaml b/templates/net/morningstar_snmp/tristar_mppt_snmp/tristar_mppt_snmp.yaml
index 228adc9810e..1a3b71ced75 100644
--- a/templates/net/morningstar_snmp/tristar_mppt_snmp/tristar_mppt_snmp.yaml
+++ b/templates/net/morningstar_snmp/tristar_mppt_snmp/tristar_mppt_snmp.yaml
@@ -301,7 +301,7 @@ zabbix_export:
units: Ah
description: |
MIB: TRISTAR-MPPT
- Description:Ah Charge Resetable
+ Description:Ah Charge Resettable
Scaling Factor:0.1
Units:Ah
Range:[0.0, 5000]
@@ -323,7 +323,7 @@ zabbix_export:
units: '!kWh'
description: |
MIB: TRISTAR-MPPT
- Description:kWh Charge Resetable
+ Description:kWh Charge Resettable
Scaling Factor:0.1
Units:kWh
Range:[0.0, 65535.0]
diff --git a/templates/san/huawei_5300v5_snmp/README.md b/templates/san/huawei_5300v5_snmp/README.md
index 200e71e9c14..0a38de6cee9 100644
--- a/templates/san/huawei_5300v5_snmp/README.md
+++ b/templates/san/huawei_5300v5_snmp/README.md
@@ -61,7 +61,7 @@ No specific Zabbix configuration is required.
|FANs discovery |<p>Discovery of FANs</p> |SNMP |huawei.5300.fan.discovery |
|BBU discovery |<p>Discovery of BBU</p> |SNMP |huawei.5300.bbu.discovery |
|Disks discovery |<p>Discovery of disks</p> |SNMP |huawei.5300.disks.discovery |
-|Nodes performance discovery |<p>Discovery of nodes perfomance counters</p> |SNMP |huawei.5300.nodes.discovery |
+|Nodes performance discovery |<p>Discovery of nodes performance counters</p> |SNMP |huawei.5300.nodes.discovery |
|LUNs discovery |<p>Discovery of LUNs</p> |SNMP |huawei.5300.lun.discovery |
|Storage pools discovery |<p>Discovery of storage pools</p> |SNMP |huawei.5300.pool.discovery |
diff --git a/templates/san/huawei_5300v5_snmp/template_san_huawei_5300v5_snmp.yaml b/templates/san/huawei_5300v5_snmp/template_san_huawei_5300v5_snmp.yaml
index 37dc530d8ff..e34600edff1 100644
--- a/templates/san/huawei_5300v5_snmp/template_san_huawei_5300v5_snmp.yaml
+++ b/templates/san/huawei_5300v5_snmp/template_san_huawei_5300v5_snmp.yaml
@@ -752,7 +752,7 @@ zabbix_export:
snmp_oid: 'discovery[{#NODE},1.3.6.1.4.1.34774.4.1.21.3.1.1]'
key: huawei.5300.nodes.discovery
delay: 1h
- description: 'Discovery of nodes perfomance counters'
+ description: 'Discovery of nodes performance counters'
item_prototypes:
-
name: 'Node {#NODE}: CPU utilization'
diff --git a/templates/san/netapp_aff_a700_http/README.md b/templates/san/netapp_aff_a700_http/README.md
index a1e8c030dd0..c02d8d5b108 100644
--- a/templates/san/netapp_aff_a700_http/README.md
+++ b/templates/san/netapp_aff_a700_http/README.md
@@ -65,15 +65,15 @@ There are no template links in this template.
|General |Cluster status |<p>The status of the cluster: ok, error, partial_no_data, partial_no_response, partial_other_error, negative_delta, backfilled_data, inconsistent_delta_time, inconsistent_old_data.</p> |DEPENDENT |netapp.cluster.status<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.status`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|General |Cluster throughput, other rate |<p>Throughput bytes observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.</p> |DEPENDENT |netapp.cluster.statistics.throughput.other.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.throughput_raw.other`</p><p>- CHANGE_PER_SECOND |
|General |Cluster throughput, read rate |<p>Throughput bytes observed at the storage object. Performance metric for read I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.throughput.read.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.throughput_raw.read`</p><p>- CHANGE_PER_SECOND |
-|General |Cluster throughput, write rate |<p>Throughput bytes observed at the storage object. Peformance metric for write I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.throughput.write.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.throughput_raw.write`</p><p>- CHANGE_PER_SECOND |
+|General |Cluster throughput, write rate |<p>Throughput bytes observed at the storage object. Performance metric for write I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.throughput.write.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.throughput_raw.write`</p><p>- CHANGE_PER_SECOND |
|General |Cluster throughput, total rate |<p>Throughput bytes observed at the storage object. Performance metric aggregated over all types of I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.throughput.total.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.throughput_raw.total`</p><p>- CHANGE_PER_SECOND |
|General |Cluster IOPS, other rate |<p>The number of I/O operations observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.</p> |DEPENDENT |netapp.cluster.statistics.iops.other.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.iops_raw.other`</p><p>- CHANGE_PER_SECOND |
|General |Cluster IOPS, read rate |<p>The number of I/O operations observed at the storage object. Performance metric for read I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.iops.read.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.iops_raw.read`</p><p>- CHANGE_PER_SECOND |
-|General |Cluster IOPS, write rate |<p>The number of I/O operations observed at the storage object. Peformance metric for write I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.iops.write.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.iops_raw.write`</p><p>- CHANGE_PER_SECOND |
+|General |Cluster IOPS, write rate |<p>The number of I/O operations observed at the storage object. Performance metric for write I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.iops.write.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.iops_raw.write`</p><p>- CHANGE_PER_SECOND |
|General |Cluster IOPS, total rate |<p>The number of I/O operations observed at the storage object. Performance metric aggregated over all types of I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.iops.total.rate<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.iops_raw.total`</p><p>- CHANGE_PER_SECOND |
|General |Cluster latency, other |<p>The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.</p> |CALCULATED |netapp.cluster.statistics.latency.other<p>**Expression**:</p>`(last(netapp.cluster.statistics.latency_raw.other) - prev(netapp.cluster.statistics.latency_raw.other)) / (last(netapp.cluster.statistics.iops_raw.other) - prev(netapp.cluster.statistics.iops_raw.other) + (last(netapp.cluster.statistics.iops_raw.other) - prev(netapp.cluster.statistics.iops_raw.other) = 0) ) * 0.001 ` |
|General |Cluster latency, read |<p>The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for read I/O operations.</p> |CALCULATED |netapp.cluster.statistics.latency.read<p>**Expression**:</p>`(last(netapp.cluster.statistics.latency_raw.read) - prev(netapp.cluster.statistics.latency_raw.read)) / ( last(netapp.cluster.statistics.iops_raw.read) - prev(netapp.cluster.statistics.iops_raw.read) + (last(netapp.cluster.statistics.iops_raw.read) - prev(netapp.cluster.statistics.iops_raw.read) = 0) ) * 0.001 ` |
-|General |Cluster latency, write |<p>The average latency per I/O operation in milliseconds observed at the storage object. Peformance metric for write I/O operations.</p> |CALCULATED |netapp.cluster.statistics.latency.write<p>**Expression**:</p>`(last(netapp.cluster.statistics.latency_raw.write) - prev(netapp.cluster.statistics.latency_raw.write)) / ( last(netapp.cluster.statistics.iops_raw.write) - prev(netapp.cluster.statistics.iops_raw.write) + (last(netapp.cluster.statistics.iops_raw.write) - prev(netapp.cluster.statistics.iops_raw.write) = 0) ) * 0.001 ` |
+|General |Cluster latency, write |<p>The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for write I/O operations.</p> |CALCULATED |netapp.cluster.statistics.latency.write<p>**Expression**:</p>`(last(netapp.cluster.statistics.latency_raw.write) - prev(netapp.cluster.statistics.latency_raw.write)) / ( last(netapp.cluster.statistics.iops_raw.write) - prev(netapp.cluster.statistics.iops_raw.write) + (last(netapp.cluster.statistics.iops_raw.write) - prev(netapp.cluster.statistics.iops_raw.write) = 0) ) * 0.001 ` |
|General |Cluster latency, total |<p>The average latency per I/O operation in milliseconds observed at the storage object. Performance metric aggregated over all types of I/O operations.</p> |CALCULATED |netapp.cluster.statistics.latency.total<p>**Expression**:</p>`(last(netapp.cluster.statistics.latency_raw.total) - prev(netapp.cluster.statistics.latency_raw.total)) / ( last(netapp.cluster.statistics.iops_raw.total) - prev(netapp.cluster.statistics.iops_raw.total) + (last(netapp.cluster.statistics.iops_raw.total) - prev(netapp.cluster.statistics.iops_raw.total) = 0) ) * 0.001 ` |
|General |{#NODENAME}: Software version |<p>This returns the cluster version information. When the cluster has more than one node, the cluster version is equivalent to the lowest of generation, major, and minor versions on all nodes.</p> |DEPENDENT |netapp.node.version[{#NODENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#NODENAME}')].version.full.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|General |{#NODENAME}: Location |<p>The location of the node.</p> |DEPENDENT |netapp.nodes.location[{#NODENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#NODENAME}')].location.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
@@ -102,11 +102,11 @@ There are no template links in this template.
|General |{#VOLUMENAME}: Used size |<p>The virtual space used (includes volume reserves) before storage efficiency, in bytes.</p> |DEPENDENT |netapp.volume.space_used[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].space.used.first()`</p><p>- DISCARD_UNCHANGED_HEARTBEAT: `6h`</p> |
|General |{#VOLUMENAME}: Volume throughput, other rate |<p>Throughput bytes observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.</p> |DEPENDENT |netapp.volume.statistics.throughput.other.rate[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.throughput_raw.other.first()`</p><p>- CHANGE_PER_SECOND |
|General |{#VOLUMENAME}: Volume throughput, read rate |<p>Throughput bytes observed at the storage object. Performance metric for read I/O operations.</p> |DEPENDENT |netapp.volume.statistics.throughput.read.rate[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.throughput_raw.read.first()`</p><p>- CHANGE_PER_SECOND |
-|General |{#VOLUMENAME}: Volume throughput, write rate |<p>Throughput bytes observed at the storage object. Peformance metric for write I/O operations.</p> |DEPENDENT |netapp.volume.statistics.throughput.write.rate[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.throughput_raw.write.first()`</p><p>- CHANGE_PER_SECOND |
+|General |{#VOLUMENAME}: Volume throughput, write rate |<p>Throughput bytes observed at the storage object. Performance metric for write I/O operations.</p> |DEPENDENT |netapp.volume.statistics.throughput.write.rate[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.throughput_raw.write.first()`</p><p>- CHANGE_PER_SECOND |
|General |{#VOLUMENAME}: Volume throughput, total rate |<p>Throughput bytes observed at the storage object. Performance metric aggregated over all types of I/O operations.</p> |DEPENDENT |netapp.volume.statistics.throughput.total.rate[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.throughput_raw.total.first()`</p><p>- CHANGE_PER_SECOND |
|General |{#VOLUMENAME}: Volume IOPS, other rate |<p>The number of I/O operations observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.</p> |DEPENDENT |netapp.volume.statistics.iops.other.rate[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.iops_raw.other.first()`</p><p>- CHANGE_PER_SECOND |
|General |{#VOLUMENAME}: Volume IOPS, read rate |<p>The number of I/O operations observed at the storage object. Performance metric for read I/O operations.</p> |DEPENDENT |netapp.volume.statistics.iops.read.rate[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.iops_raw.read.first()`</p><p>- CHANGE_PER_SECOND |
-|General |{#VOLUMENAME}: Volume IOPS, write rate |<p>The number of I/O operations observed at the storage object. Peformance metric for write I/O operations.</p> |DEPENDENT |netapp.volume.statistics.iops.write.rate[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.iops_raw.write.first()`</p><p>- CHANGE_PER_SECOND |
+|General |{#VOLUMENAME}: Volume IOPS, write rate |<p>The number of I/O operations observed at the storage object. Performance metric for write I/O operations.</p> |DEPENDENT |netapp.volume.statistics.iops.write.rate[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.iops_raw.write.first()`</p><p>- CHANGE_PER_SECOND |
|General |{#VOLUMENAME}: Volume IOPS, total rate |<p>The number of I/O operations observed at the storage object. Performance metric aggregated over all types of I/O operations.</p> |DEPENDENT |netapp.volume.statistics.iops.total.rate[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.iops_raw.total.first()`</p><p>- CHANGE_PER_SECOND |
|General |{#VOLUMENAME}: Volume latency, other |<p>The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.</p> |CALCULATED |netapp.volume.statistics.latency.other[{#VOLUMENAME}]<p>**Expression**:</p>`(last(netapp.volume.statistics.latency_raw.other[{#VOLUMENAME}]) - prev(netapp.volume.statistics.latency_raw.other[{#VOLUMENAME}])) / ( last(netapp.volume.statistics.iops_raw.other[{#VOLUMENAME}]) - prev(netapp.volume.statistics.iops_raw.other[{#VOLUMENAME}]) + (last(netapp.volume.statistics.iops_raw.other[{#VOLUMENAME}]) - prev(netapp.volume.statistics.iops_raw.other[{#VOLUMENAME}]) = 0) ) * 0.001 ` |
|General |{#VOLUMENAME}: Volume latency, read |<p>The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for read I/O operations.</p> |CALCULATED |netapp.volume.statistics.latency.read[{#VOLUMENAME}]<p>**Expression**:</p>`(last(netapp.volume.statistics.latency_raw.read[{#VOLUMENAME}]) - prev(netapp.volume.statistics.latency_raw.read[{#VOLUMENAME}])) / ( last(netapp.volume.statistics.iops_raw.read[{#VOLUMENAME}]) - prev(netapp.volume.statistics.iops_raw.read[{#VOLUMENAME}]) + (last(netapp.volume.statistics.iops_raw.read[{#VOLUMENAME}]) - prev(netapp.volume.statistics.iops_raw.read[{#VOLUMENAME}]) = 0)) * 0.001 ` |
@@ -116,7 +116,7 @@ There are no template links in this template.
|Zabbix_raw_items |Get nodes |<p>-</p> |HTTP_AGENT |netapp.nodes.get |
|Zabbix_raw_items |Get disks |<p>-</p> |HTTP_AGENT |netapp.disks.get |
|Zabbix_raw_items |Get volumes |<p>-</p> |HTTP_AGENT |netapp.volumes.get |
-|Zabbix_raw_items |Get ehternet ports |<p>-</p> |HTTP_AGENT |netapp.ports.eth.get |
+|Zabbix_raw_items |Get ethernet ports |<p>-</p> |HTTP_AGENT |netapp.ports.eth.get |
|Zabbix_raw_items |Get FC ports |<p>-</p> |HTTP_AGENT |netapp.ports.fc.get |
|Zabbix_raw_items |Get SVMs |<p>-</p> |HTTP_AGENT |netapp.svms.get |
|Zabbix_raw_items |Get LUNs |<p>-</p> |HTTP_AGENT |netapp.luns.get |
@@ -124,19 +124,19 @@ There are no template links in this template.
|Zabbix_raw_items |Get FRUs |<p>-</p> |HTTP_AGENT |netapp.frus.get<p>**Preprocessing**:</p><p>- JAVASCRIPT: `The text is too long. Please see the template.`</p> |
|Zabbix_raw_items |Cluster latency raw, other |<p>The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.</p> |DEPENDENT |netapp.cluster.statistics.latency_raw.other<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.latency_raw.other`</p> |
|Zabbix_raw_items |Cluster latency raw, read |<p>The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Performance metric for read I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.latency_raw.read<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.latency_raw.read`</p> |
-|Zabbix_raw_items |Cluster latency raw, write |<p>The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Peformance metric for write I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.latency_raw.write<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.latency_raw.write`</p> |
+|Zabbix_raw_items |Cluster latency raw, write |<p>The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Performance metric for write I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.latency_raw.write<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.latency_raw.write`</p> |
|Zabbix_raw_items |Cluster latency raw, total |<p>The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Performance metric aggregated over all types of I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.latency_raw.total<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.latency_raw.total`</p> |
|Zabbix_raw_items |Cluster IOPS raw, other |<p>The number of I/O operations observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.</p> |DEPENDENT |netapp.cluster.statistics.iops_raw.other<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.iops_raw.other`</p> |
|Zabbix_raw_items |Cluster IOPS raw, read |<p>The number of I/O operations observed at the storage object. Performance metric for read I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.iops_raw.read<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.iops_raw.read`</p> |
-|Zabbix_raw_items |Cluster IOPS raw, write |<p>The number of I/O operations observed at the storage object. Peformance metric for write I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.iops_raw.write<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.iops_raw.write`</p> |
+|Zabbix_raw_items |Cluster IOPS raw, write |<p>The number of I/O operations observed at the storage object. Performance metric for write I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.iops_raw.write<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.iops_raw.write`</p> |
|Zabbix_raw_items |Cluster IOPS raw, total |<p>The number of I/O operations observed at the storage object. Performance metric aggregated over all types of I/O operations.</p> |DEPENDENT |netapp.cluster.statistics.iops_raw.total<p>**Preprocessing**:</p><p>- JSONPATH: `$.statistics.iops_raw.total`</p> |
|Zabbix_raw_items |{#VOLUMENAME}: Volume latency raw, other |<p>The raw latency in microseconds observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.</p> |DEPENDENT |netapp.volume.statistics.latency_raw.other[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.latency_raw.other.first()`</p> |
|Zabbix_raw_items |{#VOLUMENAME}: Volume latency raw, read |<p>The raw latency in microseconds observed at the storage object. Performance metric for read I/O operations.</p> |DEPENDENT |netapp.volume.statistics.latency_raw.read[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.latency_raw.read.first()`</p> |
-|Zabbix_raw_items |{#VOLUMENAME}: Volume latency raw, write |<p>The raw latency in microseconds observed at the storage object. Peformance metric for write I/O operations.</p> |DEPENDENT |netapp.volume.statistics.latency_raw.write[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.latency_raw.write.first()`</p> |
+|Zabbix_raw_items |{#VOLUMENAME}: Volume latency raw, write |<p>The raw latency in microseconds observed at the storage object. Performance metric for write I/O operations.</p> |DEPENDENT |netapp.volume.statistics.latency_raw.write[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.latency_raw.write.first()`</p> |
|Zabbix_raw_items |{#VOLUMENAME}: Volume latency raw, total |<p>The raw latency in microseconds observed at the storage object. Performance metric aggregated over all types of I/O operations.</p> |DEPENDENT |netapp.volume.statistics.latency_raw.total[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.latency_raw.total.first()`</p> |
|Zabbix_raw_items |{#VOLUMENAME}: Volume IOPS raw, other |<p>The number of I/O operations observed at the storage object. Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.</p> |DEPENDENT |netapp.volume.statistics.iops_raw.other[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.iops_raw.other.first()`</p> |
|Zabbix_raw_items |{#VOLUMENAME}: Volume IOPS raw, read |<p>The number of I/O operations observed at the storage object. Performance metric for read I/O operations.</p> |DEPENDENT |netapp.volume.statistics.iops_raw.read[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.iops_raw.read.first()`</p> |
-|Zabbix_raw_items |{#VOLUMENAME}: Volume IOPS raw, write |<p>The number of I/O operations observed at the storage object. Peformance metric for write I/O operations.</p> |DEPENDENT |netapp.volume.statistics.iops_raw.write[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.iops_raw.write.first()`</p> |
+|Zabbix_raw_items |{#VOLUMENAME}: Volume IOPS raw, write |<p>The number of I/O operations observed at the storage object. Performance metric for write I/O operations.</p> |DEPENDENT |netapp.volume.statistics.iops_raw.write[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.iops_raw.write.first()`</p> |
|Zabbix_raw_items |{#VOLUMENAME}: Volume IOPS raw, total |<p>The number of I/O operations observed at the storage object. Performance metric aggregated over all types of I/O operations.</p> |DEPENDENT |netapp.volume.statistics.iops_raw.total[{#VOLUMENAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.records[?(@.name=='{#VOLUMENAME}')].statistics.iops_raw.total.first()`</p> |
## Triggers
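Editor's note: the "latency, write" rows above are CALCULATED items: the delta of the raw latency counter (microseconds) is divided by the delta of the raw IOPS counter, the `+ (delta = 0)` term acts as a guard that turns a zero denominator into 1, and `* 0.001` converts microseconds to milliseconds. A minimal sketch of the same arithmetic; the counter values are invented:

```python
def avg_latency_ms(latency_raw_last, latency_raw_prev, iops_raw_last, iops_raw_prev):
    """Replicates the calculated-item expression shown in the table above:
    (last(latency_raw) - prev(latency_raw))
      / (last(iops_raw) - prev(iops_raw) + (last(iops_raw) - prev(iops_raw) = 0)) * 0.001
    The comparison term adds 1 only when no I/O happened, preventing division by zero.
    """
    latency_delta = latency_raw_last - latency_raw_prev   # microseconds accumulated
    iops_delta = iops_raw_last - iops_raw_prev             # operations completed
    return latency_delta / (iops_delta + (iops_delta == 0)) * 0.001

# Example: 1,500,000 us of extra write latency over 3,000 new write ops -> 0.5 ms per op
print(avg_latency_ms(11_500_000, 10_000_000, 53_000, 50_000))
# Example: no new operations between polls -> 0.0 instead of a division-by-zero error
print(avg_latency_ms(10_000_000, 10_000_000, 50_000, 50_000))
```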
diff --git a/templates/san/netapp_aff_a700_http/template_san_netapp_aff_a700_http.yaml b/templates/san/netapp_aff_a700_http/template_san_netapp_aff_a700_http.yaml
index e89ebda63e4..4deab99300e 100644
--- a/templates/san/netapp_aff_a700_http/template_san_netapp_aff_a700_http.yaml
+++ b/templates/san/netapp_aff_a700_http/template_san_netapp_aff_a700_http.yaml
@@ -176,7 +176,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '!iops'
- description: 'The number of I/O operations observed at the storage object. Peformance metric for write I/O operations.'
+ description: 'The number of I/O operations observed at the storage object. Performance metric for write I/O operations.'
applications:
-
name: General
@@ -252,7 +252,7 @@ zabbix_export:
delay: '0'
history: 7d
units: '!iops'
- description: 'The number of I/O operations observed at the storage object. Peformance metric for write I/O operations.'
+ description: 'The number of I/O operations observed at the storage object. Performance metric for write I/O operations.'
applications:
-
name: 'Zabbix raw items'
@@ -319,7 +319,7 @@ zabbix_export:
(last(netapp.cluster.statistics.latency_raw.write) - prev(netapp.cluster.statistics.latency_raw.write)) /
( last(netapp.cluster.statistics.iops_raw.write) - prev(netapp.cluster.statistics.iops_raw.write) +
(last(netapp.cluster.statistics.iops_raw.write) - prev(netapp.cluster.statistics.iops_raw.write) = 0) ) * 0.001
- description: 'The average latency per I/O operation in milliseconds observed at the storage object. Peformance metric for write I/O operations.'
+ description: 'The average latency per I/O operation in milliseconds observed at the storage object. Performance metric for write I/O operations.'
applications:
-
name: General
@@ -384,7 +384,7 @@ zabbix_export:
delay: '0'
history: 7d
units: '!mcs'
- description: 'The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Peformance metric for write I/O operations.'
+ description: 'The raw latency in microseconds observed at the storage object. This can be divided by the raw IOPS value to calculate the average latency per I/O operation. Performance metric for write I/O operations.'
applications:
-
name: 'Zabbix raw items'
@@ -472,7 +472,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: Bps
- description: 'Throughput bytes observed at the storage object. Peformance metric for write I/O operations.'
+ description: 'Throughput bytes observed at the storage object. Performance metric for write I/O operations.'
applications:
-
name: General
@@ -622,7 +622,7 @@ zabbix_export:
timeout: '{$HTTP.AGENT.TIMEOUT}'
url: '{$URL}/api/cluster/nodes?fields=*'
-
- name: 'Get ehternet ports'
+ name: 'Get ethernet ports'
type: HTTP_AGENT
key: netapp.ports.eth.get
history: '0'
@@ -1606,7 +1606,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: '!iops'
- description: 'The number of I/O operations observed at the storage object. Peformance metric for write I/O operations.'
+ description: 'The number of I/O operations observed at the storage object. Performance metric for write I/O operations.'
application_prototypes:
-
name: 'Volume "{#VOLUMENAME}"'
@@ -1682,7 +1682,7 @@ zabbix_export:
delay: '0'
history: 7d
units: '!iops'
- description: 'The number of I/O operations observed at the storage object. Peformance metric for write I/O operations.'
+ description: 'The number of I/O operations observed at the storage object. Performance metric for write I/O operations.'
applications:
-
name: 'Zabbix raw items'
@@ -1814,7 +1814,7 @@ zabbix_export:
delay: '0'
history: 7d
units: '!mcs'
- description: 'The raw latency in microseconds observed at the storage object. Peformance metric for write I/O operations.'
+ description: 'The raw latency in microseconds observed at the storage object. Performance metric for write I/O operations.'
applications:
-
name: 'Zabbix raw items'
@@ -1902,7 +1902,7 @@ zabbix_export:
history: 7d
value_type: FLOAT
units: Bps
- description: 'Throughput bytes observed at the storage object. Peformance metric for write I/O operations.'
+ description: 'Throughput bytes observed at the storage object. Performance metric for write I/O operations.'
application_prototypes:
-
name: 'Volume "{#VOLUMENAME}"'
diff --git a/templates/server/dell_idrac_snmp/README.md b/templates/server/dell_idrac_snmp/README.md
index d76207bfd3b..a2f1348b9a2 100644
--- a/templates/server/dell_idrac_snmp/README.md
+++ b/templates/server/dell_idrac_snmp/README.md
@@ -112,7 +112,7 @@ No specific Zabbix configuration is required.
|Virtual_disks |Disk {#SNMPVALUE}({#DISK_NAME}): Read policy |<p>MIB: IDRAC-MIB-SMIv2</p><p>The read policy used by the controller for read operations on this virtual disk.</p><p>Possible values:</p><p>1: No Read Ahead.</p><p>2: Read Ahead.</p><p>3: Adaptive Read Ahead.</p> |SNMP |system.hw.virtualdisk.readpolicy[virtualDiskReadPolicy.{#SNMPINDEX}] |
|Virtual_disks |Disk {#SNMPVALUE}({#DISK_NAME}): Write policy |<p>MIB: IDRAC-MIB-SMIv2</p><p>The write policy used by the controller for write operations on this virtual disk.</p><p>Possible values:</p><p>1: Write Through.</p><p>2: Write Back.</p><p>3: Force Write Back.</p> |SNMP |system.hw.virtualdisk.writepolicy[virtualDiskWritePolicy.{#SNMPINDEX}] |
|Virtual_disks |Disk {#SNMPVALUE}({#DISK_NAME}): Disk size |<p>MIB: IDRAC-MIB-SMIv2</p><p>The size of the virtual disk in megabytes.</p> |SNMP |system.hw.virtualdisk.size[virtualDiskSizeInMB.{#SNMPINDEX}]<p>**Preprocessing**:</p><p>- MULTIPLIER: `1048576`</p> |
-|Virtual_disks |Disk {#SNMPVALUE}({#DISK_NAME}): Status |<p>MIB: IDRAC-MIB-SMIv2</p><p>The current state of this virtual disk (which includes any member physical disks.)</p><p>Possible states:</p><p>1: The current state could not be determined.</p><p>2: The virtual disk is operating normally or optimally.</p><p>3: The virtual disk has encountered a failure. The data on disk is lost or is about to be lost.</p><p>4: The virtual disk encounterd a failure with one or all of the constituent redundant physical disks.</p><p>The data on the virtual disk might no longer be fault tolerant.</p> |SNMP |system.hw.virtualdisk.status[virtualDiskState.{#SNMPINDEX}] |
+|Virtual_disks |Disk {#SNMPVALUE}({#DISK_NAME}): Status |<p>MIB: IDRAC-MIB-SMIv2</p><p>The current state of this virtual disk (which includes any member physical disks.)</p><p>Possible states:</p><p>1: The current state could not be determined.</p><p>2: The virtual disk is operating normally or optimally.</p><p>3: The virtual disk has encountered a failure. The data on disk is lost or is about to be lost.</p><p>4: The virtual disk encountered a failure with one or all of the constituent redundant physical disks.</p><p>The data on the virtual disk might no longer be fault tolerant.</p> |SNMP |system.hw.virtualdisk.status[virtualDiskState.{#SNMPINDEX}] |
## Triggers
diff --git a/templates/server/dell_idrac_snmp/template_server_dell_idrac_snmp.yaml b/templates/server/dell_idrac_snmp/template_server_dell_idrac_snmp.yaml
index 6ed5d77e7f8..97e7541649b 100644
--- a/templates/server/dell_idrac_snmp/template_server_dell_idrac_snmp.yaml
+++ b/templates/server/dell_idrac_snmp/template_server_dell_idrac_snmp.yaml
@@ -886,7 +886,7 @@ zabbix_export:
1: The current state could not be determined.
2: The virtual disk is operating normally or optimally.
3: The virtual disk has encountered a failure. The data on disk is lost or is about to be lost.
- 4: The virtual disk encounterd a failure with one or all of the constituent redundant physical disks.
+ 4: The virtual disk encountered a failure with one or all of the constituent redundant physical disks.
The data on the virtual disk might no longer be fault tolerant.
applications:
-
diff --git a/templates/tel/asterisk_http/README.md b/templates/tel/asterisk_http/README.md
index 30093d6e6af..d450cb297c7 100644
--- a/templates/tel/asterisk_http/README.md
+++ b/templates/tel/asterisk_http/README.md
@@ -89,7 +89,7 @@ There are no template links in this template.
|Asterisk |PJSIP trunk "{#OBJECTNAME}": Active channels |<p>The total number of active PJSIP trunk channels.</p> |DEPENDENT |asterisk.pjsip.trunk.active_channels[{#OBJECTNAME}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.pjsip.trunks[?(@.ObjectName=='{#OBJECTNAME}')].active_channels.first()`</p> |
|Asterisk |"{#QUEUE}": Logged in |<p>The number of queue members.</p> |DEPENDENT |asterisk.queue.loggedin[{#QUEUE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue.queues[?(@.Queue=='{#QUEUE}')].LoggedIn.first()`</p> |
|Asterisk |"{#QUEUE}": Available |<p>The number of available queue members.</p> |DEPENDENT |asterisk.queue.available[{#QUEUE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue.queues[?(@.Queue=='{#QUEUE}')].Available.first()`</p> |
-|Asterisk |"{#QUEUE}": Callers |<p>The number incomming calls in queue.</p> |DEPENDENT |asterisk.queue.callers[{#QUEUE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue.queues[?(@.Queue=='{#QUEUE}')].Callers.first()`</p> |
+|Asterisk |"{#QUEUE}": Callers |<p>The number incoming calls in queue.</p> |DEPENDENT |asterisk.queue.callers[{#QUEUE}]<p>**Preprocessing**:</p><p>- JSONPATH: `$.queue.queues[?(@.Queue=='{#QUEUE}')].Callers.first()`</p> |
|Zabbix_raw_items |Asterisk: Get stats |<p>Asterisk system information in JSON format.</p> |HTTP_AGENT |asterisk.get_stats<p>**Preprocessing**:</p><p>- JAVASCRIPT: `Text is too long. Please see the template.`</p> |
## Triggers
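Editor's note: the queue rows above are DEPENDENT items that pull single fields out of the "Get stats" master item with JSONPath filters such as `$.queue.queues[?(@.Queue=='{#QUEUE}')].Callers.first()`. A minimal plain-Python equivalent of that lookup; the payload below is a hypothetical sample shaped like the master item, with invented values:

```python
# Hypothetical payload shaped like the "Get stats" master item described above;
# field names follow the JSONPath expressions in the table, values are invented.
stats = {
    "queue": {
        "queues": [
            {"Queue": "support", "LoggedIn": 5, "Available": 3, "Callers": 2},
            {"Queue": "sales",   "LoggedIn": 2, "Available": 1, "Callers": 0},
        ]
    }
}

def queue_field(payload, queue_name, field):
    """Plain-Python equivalent of $.queue.queues[?(@.Queue=='{#QUEUE}')].<field>.first()"""
    matches = [q[field] for q in payload["queue"]["queues"] if q["Queue"] == queue_name]
    return matches[0] if matches else None

print(queue_field(stats, "support", "Callers"))   # 2
```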
diff --git a/templates/tel/asterisk_http/template_tel_asterisk_http.yaml b/templates/tel/asterisk_http/template_tel_asterisk_http.yaml
index 717aa61c49b..89ce74ff2ce 100644
--- a/templates/tel/asterisk_http/template_tel_asterisk_http.yaml
+++ b/templates/tel/asterisk_http/template_tel_asterisk_http.yaml
@@ -921,7 +921,7 @@ zabbix_export:
key: 'asterisk.queue.callers[{#QUEUE}]'
delay: '0'
history: 7d
- description: 'The number incomming calls in queue.'
+ description: 'The number of incoming calls in the queue.'
application_prototypes:
-
name: 'Asterisk queue "{#QUEUE}"'