
Winnie-the-Pooh Prometheus and all, all, all (well, almost), or You can never have too much statistics.

prometheus

https://prometheus.io/

https://prometheus.io/docs/introduction/overview/

Install the package: `opkg install prometheus`

and edit the config "/opt/etc/prometheus/prometheus.yml" (replace "localhost" with the device address):

Spoiler: prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["192.168.1.1:9090"]

 

Start the service: `/opt/etc/init.d/S70prometheus start`
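Before heading to the browser, the config and the running service can be checked from the command line; a minimal sketch, assuming the 192.168.1.1 address from the config above (promtool is only usable if the package actually ships that binary):

# Validate the config syntax (only if the promtool binary is present)
promtool check config /opt/etc/prometheus/prometheus.yml

# Built-in liveness/readiness endpoints of the Prometheus server
curl -s http://192.168.1.1:9090/-/healthy
curl -s http://192.168.1.1:9090/-/ready

# Ask the HTTP API whether the scrape targets are up (1 = up)
curl -sG http://192.168.1.1:9090/api/v1/query --data-urlencode 'query=up'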

In your favorite browser, go to the device address on port 9090:

Spoiler: screenshots of the Prometheus status, targets, and metrics pages

Prometheus can do graphs out of the box:

Spoiler: screenshots of the built-in Graph view

or (IP:9090/consoles/prometheus.html):

Spoiler: screenshot of the built-in Prometheus console page

And that's it?


snmp_exporter

https://github.com/prometheus/snmp_exporter

Keenetic supports SNMP out of the box (if the "SNMP Server" component is installed):

! If the component is not installed and you decide to add it, the firmware will be updated to the latest version (depending on your update channel).

If the service is not active, enable it via the CLI (telnet|SSH) or the web interface: `service snmp`

and save the settings: `system configuration save`

Install the package: `opkg install prometheus-snmp-exporter`

and edit the config "/opt/etc/prometheus/prometheus.yml":

(add to the Prometheus config)

  # snmp
  - job_name: "snmp"
    static_configs:
    - targets: ["192.168.1.1"]
    metrics_path: /snmp
    params:
      module: [if_mib]
    relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: 192.168.1.1:9116
Spoiler: full prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["192.168.1.1:9090"]

  # snmp
  - job_name: "snmp"
    static_configs:
    - targets: ["192.168.1.1"]
    metrics_path: /snmp
    params:
      module: [if_mib]
    relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: 192.168.1.1:9116

 

!!! After editing the Prometheus config, the service must be restarted !!!

`/opt/etc/init.d/S70prometheus restart`

Start the service: `/opt/etc/init.d/S99snmp_exporter start`
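Before looking at the Prometheus targets page, the exporter can be polled directly; a sketch assuming the default port 9116 and the if_mib module from the snmp.yml shipped with the package (newer snmp_exporter releases may additionally expect an auth parameter):

# The exporter's own process metrics
curl -s http://192.168.1.1:9116/metrics | head

# Ask the exporter to walk the router over SNMP with the if_mib module;
# this is exactly the request the relabel_configs above make Prometheus send
curl -s 'http://192.168.1.1:9116/snmp?module=if_mib&target=192.168.1.1' | head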

In your favorite browser, go to the device address on port 9090:

Spoiler: screenshots of the SNMP target and graph pages

 

"Not enough! Not enough!" (from the animated film "Падал прошлогодний снег" ("Last Year's Snow Was Falling"), USSR, 1983)

node_exporter

https://github.com/prometheus/node_exporter

Install the package: `opkg install prometheus-node-exporter`

and edit the config "/opt/etc/prometheus/prometheus.yml":

(add to the Prometheus config)

  # node
  - job_name: "node"
    static_configs:
    - targets: ["192.168.1.1:9100"]
Spoiler: full prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["192.168.1.1:9090"]

  # node
  - job_name: "node"
    static_configs:
    - targets: ["192.168.1.1:9100"]

 

!!! After editing the Prometheus config, the service must be restarted !!!

`/opt/etc/init.d/S70prometheus restart`

Start the service: `/opt/etc/init.d/S99node_exporter start`
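To make sure the node metrics actually reach Prometheus, a couple of quick checks (a sketch; node_load1 and node_memory_MemAvailable_bytes are standard node_exporter metric names, the addresses are the ones used above):

# Raw exporter output
curl -s http://192.168.1.1:9100/metrics | grep '^node_load'

# The same data through the Prometheus HTTP API
curl -sG http://192.168.1.1:9090/api/v1/query --data-urlencode 'query=node_load1'
curl -sG http://192.168.1.1:9090/api/v1/query --data-urlencode 'query=node_memory_MemAvailable_bytes'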

In your favorite browser, go to the device address on port 9090:

Spoiler: screenshots of the node target and graph pages

or (IP:9090/consoles/node.html)

Spoiler: screenshot of the node console page

 


I demand the continuation of the banquet HAProxy statistics!

haproxy_exporter

https://github.com/prometheus/haproxy_exporter

Install the package: `opkg install prometheus-haproxy-exporter`

and edit the configs "/opt/etc/prometheus/prometheus.yml" and "/opt/etc/haproxy.cfg":

(add to the Prometheus config)

  # haproxy
  - job_name: "haproxy"
    static_configs:
    - targets: ["192.168.1.1:8404"]
Spoiler: full prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["192.168.1.1:9090"]

  # haproxy
  - job_name: "haproxy"
    static_configs:
    - targets: ["192.168.1.1:8404"]

 

!!! After editing the Prometheus config, the service must be restarted !!!

`/opt/etc/init.d/S70prometheus restart`

(add to the HAProxy config and comment out or remove the "mode health" line)

# Prometheus
frontend stats
  mode http
  bind *:8404
  http-request use-service prometheus-exporter if { path /metrics }
  stats enable
  stats uri /stats
  stats refresh 10s
Spoiler: full haproxy.cfg
# Example configuration file for HAProxy 2.0, refer to the url below for
# a full documentation and examples for configuration:
# https://cbonte.github.io/haproxy-dconv/2.0/configuration.html


# Global parameters
global

	# Log events to a remote syslog server at given address using the
	# specified facility and verbosity level. Multiple log options 
	# are allowed.
	#log 10.0.0.1 daemon info

	# Specify the maximum number of allowed connections.
	maxconn 32000

	# Raise the ulimit for the maximum allowed number of open socket
	# descriptors per process. This is usually at least twice the
	# number of allowed connections (maxconn * 2 + nb_servers + 1) .
	ulimit-n 65535

	# Drop privileges (setuid, setgid), default is "root" on OpenWrt.
	uid 0
	gid 0

	# Perform chroot into the specified directory.
	#chroot /var/run/haproxy/

	# Daemonize on startup
	daemon

	nosplice
	# Enable debugging
	#debug

	# Spawn given number of processes and distribute load among them,
	# used for multi-core environments or to circumvent per-process
	# limits like number of open file descriptors. Default is 1.
	#nbproc 2

# Default parameters
defaults
	# Default timeouts
	timeout connect 5000ms
	timeout client 50000ms
	timeout server 50000ms


# Example HTTP proxy listener
listen my_http_proxy

	# Bind to port 81 and 444 on all interfaces (0.0.0.0)
	bind :81,:444

	# We're proxying HTTP here...
	mode http

	# Simple HTTP round robin over two servers using the specified
	# source ip 192.168.1.1 .
	balance roundrobin
	server server01 192.168.1.10:80 source 192.168.1.1
	server server02 192.168.1.20:80 source 192.168.1.1

	# Serve an internal statistics page on /stats:
	stats enable
	stats uri /stats

	# Enable HTTP basic auth for the statistics:
	stats realm HA_Stats
	stats auth username:password


# Example SMTP proxy listener
listen my_smtp_proxy

	# Disable this instance without commenting out the section.
	disabled

	# Bind to port 26 and 588 on localhost
	bind 127.0.0.1:26,127.0.0.1:588

	# This is a TCP proxy
	mode tcp

	# Round robin load balancing over two servers on port 123 forcing
	# the address 192.168.1.1 and port 25 as source.
	balance roundrobin
	#use next line for transparent proxy, so the servers can see the 
	#original ip-address and remove source keyword in server definition
	#source 0.0.0.0 usesrc clientip
	server server01 192.168.1.10:123 source 192.168.1.1:25
	server server02 192.168.1.20:123 source 192.168.1.1:25
	

# Special health check listener for integration with external load
# balancers.
listen local_health_check

	# Listen on port 60000
	bind :60000

	# This is a health check
	#mode health <= or remove this line

	# Enable HTTP-style responses: "HTTP/1.0 200 OK"
	# else just print "OK".
	#option httpchk

# Prometheus
frontend stats
  mode http
  bind *:8404
  http-request use-service prometheus-exporter if { path /metrics }
  stats enable
  stats uri /stats
  stats refresh 10s

 

Start the services: `/opt/etc/init.d/S99haproxy start && /opt/etc/init.d/S99haproxy_exporter start`
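With the frontend from the snippet above in place, both the metrics endpoint and the stats page can be checked directly (a sketch, same 192.168.1.1:8404 as in the scrape config):

# Prometheus metrics exposed by the "frontend stats" section
curl -s http://192.168.1.1:8404/metrics | grep '^haproxy_' | head

# Human-readable stats page from the same frontend
curl -s http://192.168.1.1:8404/stats | head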

In your favorite browser, go to the device address on port 9090:

Spoiler: screenshots of the HAProxy target and graph pages

 


And do you have the same thing, but with mother-of-pearl buttons for collectd?

collectd_exporter

https://github.com/prometheus/collectd_exporter

Install the package: `opkg install prometheus-collectd-exporter`

and edit the configs "/opt/etc/prometheus/prometheus.yml" and "/opt/etc/collectd.conf":

(add to the Prometheus config)

  # collectd
  - job_name: "collectd"
    static_configs:
    - targets: ["192.168.1.1:9103"]
Spoiler: full prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["192.168.1.1:9090"]

  # collectd
  - job_name: "collectd"
    static_configs:
    - targets: ["192.168.1.1:9103"]

 

!!! After editing the Prometheus config, the service must be restarted !!!

`/opt/etc/init.d/S70prometheus restart`

(add to the collectd config)

LoadPlugin network
<Plugin network>
  Server "127.0.0.1" "25826"
</Plugin>

Start the services: `/opt/etc/init.d/S70collectd start && /opt/etc/init.d/S99collectd_exporter start`
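A quick check that the collectd -> network plugin -> collectd_exporter chain works (a sketch; 9103 is the exporter port used in the scrape config above, and the exported metric names normally start with collectd_):

curl -s http://192.168.1.1:9103/metrics | grep '^collectd_' | head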

In your favorite browser, go to the device address on port 9090:

Spoiler: screenshots of the collectd target and graph pages

 


Or: Beauty will save the world!

grafana

https://grafana.com

Install the package: `opkg install grafana`

Start the service: `/opt/etc/init.d/S80grafana-server start`

In your favorite browser, go to the device address on port 3000:

Connect Prometheus: "Configuration" => "Data sources" => "Add data source" => "Prometheus" => "URL" <= device address and port => "Save & test"
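The same data source can also be added without the UI through the Grafana HTTP API; a sketch assuming the default admin:admin credentials and the addresses used above:

curl -s -X POST http://admin:admin@192.168.1.1:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://192.168.1.1:9090","access":"proxy","isDefault":true}'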

Spoiler: screenshots of adding and testing the Prometheus data source

Out of the box (very basic)

Spoiler: screenshot of the default dashboard

Out of the box (basic)

Spoiler: screenshot of the default dashboard

Then either build dashboards yourself or import ready-made ones from https://grafana.com/grafana/dashboards/

SNMP - ID: 11169

Spoiler: screenshots of the SNMP dashboard (ID 11169)

SNMP - ID: 10523

Spoiler: screenshot of the SNMP dashboard (ID 10523)

node - ID: 1860

Spoiler: screenshots of the node dashboard (ID 1860)

 


And what's this flow thing about?

goflow

https://github.com/cloudflare/goflow

netflow-exporter

https://github.com/AlfredArouna/netflow_exporter

Keenetic has NetFlow out of the box (if the "NetFlow Sensor" component is installed):

! If the component is not installed and you decide to add it, the firmware will be updated to the latest version (depending on your update channel).

If the service is not active, enable it via the CLI (telnet|SSH) or the web interface:

1. select the interface (e.g., "Bridge0") and the capture direction (ingress|egress|both): `interface Bridge0 ip flow both`

2. set the address (e.g., 127.0.0.1) and port (2055) of the receiving server: `ip flow-export destination 127.0.0.1 2055`

3. set the NetFlow protocol version (5 for cnflegacy|csflow, 9 for the rest): `ip flow-export version 9`

4. save the settings: `system configuration save`

Install one of the packages:

`opkg install cnetflow` or

`opkg install cnflegacy` or

`opkg install csflow` or

`opkg install goflow` or

`opkg install netflow-exporter`

and edit the config "/opt/etc/prometheus/prometheus.yml":

(add to the Prometheus config)

  # goflow
  - job_name: "goflow"
    static_configs:
    - targets: ["192.168.1.1:8080"]
Spoiler: full prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["192.168.1.1:9090"]

  # goflow
  - job_name: "goflow"
    static_configs:
    - targets: ["192.168.1.1:8080"]

 

or

  # netflow
  - job_name: "netflow"
    static_configs:
    - targets: ["192.168.1.1:9191"]
Spoiler: full prometheus.yml
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["192.168.1.1:9090"]

  # netflow
  - job_name: "netflow"
    static_configs:
    - targets: ["192.168.1.1:9191"]

 

!!! After editing the Prometheus config, the service must be restarted !!!

`/opt/etc/init.d/S70prometheus restart`

Start the service (depending on the installed package); a quick endpoint check follows the list below:

`/opt/etc/init.d/S99cnetflow start` or

`/opt/etc/init.d/S99cnflegacy start` or

`/opt/etc/init.d/S99csflow start` or

`/opt/etc/init.d/S99goflow start` or

`/opt/etc/init.d/S99netflow_exporter start`
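Whichever collector is installed, it should answer on its metrics port before Prometheus scrapes it; a sketch for the two ports used in the scrape configs above (8080 for goflow, 9191 for netflow-exporter):

# goflow publishes its counters on :8080/metrics
curl -s http://192.168.1.1:8080/metrics | grep '^flow_' | head

# netflow-exporter answers on :9191/metrics
curl -s http://192.168.1.1:9191/metrics | head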

In your favorite browser, go to the device address on port 9090:

(goflow)

Spoiler: screenshots of the goflow target and graph pages

(netflow-exporter)

Spoiler: screenshots of the netflow-exporter target and graph pages

 


  • 4 weeks later...

The topic is missing a "grafana" tag. Or add it to the topic title: "prometheus + graphana".

Because when I read the topic title, I thought, "nice, but I haven't found Grafana anywhere on the forum, it would be great to have it"... And until I got to the Grafana post I kept thinking there was little hope. And now it's simply a fairy tale.


34 minutes ago, hedin163 said:

The topic is missing a "grafana" tag. Or add it to the topic title: "prometheus + graphana".

Thanks, the "grafana" tag has been added (though not with a "ph" after all).


1 hour ago, admin said:

Thanks, the "grafana" tag has been added (though not with a "ph" after all).

ah, right, grafana of course))) no idea why I kept writing "ph" - probably going by spelling rules?)


I set everything up on a separate Ubuntu server, since the Keenetic is already running flat out. With SNMP I managed to collect statistics --> prometheus --> grafana.
But I still haven't figured out how to collect NetFlow. On the Keenetic: export to the server --> prometheus on such-and-such port. But how to feed NetFlow into Prometheus is anyone's guess.
There are plenty of *flow projects on GitHub, but they are all poorly documented. The most popular one is goflow, but it wants Kafka, not Prometheus. Others target InfluxDB. I just can't get it all wired up and configured for Prometheus. I've been at it for a week now; could anyone give a hint or share a link? I'm about ready to give up..(


13 hours ago, Totoro said:

but it wants Kafka,

https://github.com/cloudflare/goflow#run

Quote

Disable Kafka sending -kafka=false.

No?

Spoiler: sample goflow /metrics output
# HELP flow_decoder_count Decoder processed count.
# TYPE flow_decoder_count counter
flow_decoder_count{name="NetFlow",worker="0"} 3347
# HELP flow_process_nf_count NetFlows processed.
# TYPE flow_process_nf_count counter
flow_process_nf_count{router="127.0.0.1",version="9"} 3347
# HELP flow_process_nf_delay_summary_seconds NetFlows time difference between time of flow and processing.
# TYPE flow_process_nf_delay_summary_seconds summary
flow_process_nf_delay_summary_seconds{router="127.0.0.1",version="9",quantile="0.5"} 31
flow_process_nf_delay_summary_seconds{router="127.0.0.1",version="9",quantile="0.9"} 41
flow_process_nf_delay_summary_seconds{router="127.0.0.1",version="9",quantile="0.99"} 52
flow_process_nf_delay_summary_seconds_sum{router="127.0.0.1",version="9"} 1.7794e+06
flow_process_nf_delay_summary_seconds_count{router="127.0.0.1",version="9"} 58287
# HELP flow_process_nf_errors_count NetFlows processed errors.
# TYPE flow_process_nf_errors_count counter
flow_process_nf_errors_count{error="template_not_found",router="127.0.0.1"} 98
# HELP flow_process_nf_flowset_records_sum NetFlows FlowSets sum of records.
# TYPE flow_process_nf_flowset_records_sum counter
flow_process_nf_flowset_records_sum{router="127.0.0.1",type="DataFlowSet",version="9"} 58287
flow_process_nf_flowset_records_sum{router="127.0.0.1",type="OptionsDataFlowSet",version="9"} 8013
flow_process_nf_flowset_records_sum{router="127.0.0.1",type="OptionsTemplateFlowSet",version="9"} 1367
# HELP flow_process_nf_flowset_sum NetFlows FlowSets sum.
# TYPE flow_process_nf_flowset_sum counter
flow_process_nf_flowset_sum{router="127.0.0.1",type="DataFlowSet",version="9"} 47295
flow_process_nf_flowset_sum{router="127.0.0.1",type="OptionsDataFlowSet",version="9"} 1616
flow_process_nf_flowset_sum{router="127.0.0.1",type="OptionsTemplateFlowSet",version="9"} 348
flow_process_nf_flowset_sum{router="127.0.0.1",type="TemplateFlowSet",version="9"} 1019
# HELP flow_process_nf_templates_count NetFlows Template count.
# TYPE flow_process_nf_templates_count counter
flow_process_nf_templates_count{obs_domain_id="0",router="127.0.0.1",template_id="285",type="options_template",version="9"} 99
flow_process_nf_templates_count{obs_domain_id="0",router="127.0.0.1",template_id="286",type="template",version="9"} 160
flow_process_nf_templates_count{obs_domain_id="0",router="127.0.0.1",template_id="288",type="template",version="9"} 159
flow_process_nf_templates_count{obs_domain_id="0",router="127.0.0.1",template_id="289",type="template",version="9"} 161
flow_process_nf_templates_count{obs_domain_id="0",router="127.0.0.1",template_id="290",type="template",version="9"} 160
flow_process_nf_templates_count{obs_domain_id="0",router="127.0.0.1",template_id="291",type="options_template",version="9"} 151
flow_process_nf_templates_count{obs_domain_id="0",router="127.0.0.1",template_id="292",type="template",version="9"} 150
flow_process_nf_templates_count{obs_domain_id="0",router="127.0.0.1",template_id="293",type="template",version="9"} 150
flow_process_nf_templates_count{obs_domain_id="0",router="127.0.0.1",template_id="294",type="options_template",version="9"} 98
flow_process_nf_templates_count{obs_domain_id="0",router="127.0.0.1",template_id="295",type="template",version="9"} 88
# HELP flow_summary_decoding_time_us Decoding time summary.
...

e.g.,

(screenshot: Prometheus graph of a goflow metric)

It's already there in the startup scripts...

~ # grep ^ARG /opt/etc/init.d/S99goflow 
ARGS="-kafka=false -nfl=false -sflow=false"
~ #
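Once the counters above show up in Prometheus, a rate over the processed-flows counter is a quick sanity check (a sketch via the HTTP API; the address and the 5m window are assumptions):

curl -sG http://192.168.1.1:9090/api/v1/query \
  --data-urlencode 'query=rate(flow_process_nf_count[5m])'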

 


  • 2 months later...

Good afternoon!

When I try to install the prometheus package, the system reports that no such package exists.
Can you tell me how to install it now?

Router: Viva

opkg install prometheus

Unknown package 'prometheus'.

Collected errors:

opkg_install_cmd: Cannot install package prometheus.


Spoiler: refresh the package lists, then install

 

~ # rm -rf /opt/var/opkg-lists/*
~ # 
~ # opkg info prometheus
~ # 
~ # opkg update
Downloading http://bin.entware.net/mipselsf-k3.4/Packages.gz
Updated list of available packages in /opt/var/opkg-lists/entware
Downloading http://bin.entware.net/mipselsf-k3.4/keenetic/Packages.gz
Updated list of available packages in /opt/var/opkg-lists/keendev
~ # 
~ # opkg info prometheus
Package: prometheus
Version: 2.42.0-1
Depends: libc, libssp, librt, libpthread
Status: unknown ok not-installed
Section: utils
Architecture: mipsel-3.4
Size: 37751366
Filename: prometheus_2.42.0-1_mipsel-3.4.ipk
Description: Prometheus, a Cloud Native Computing Foundation project,
 is a systems and service monitoring system. It collects
 metrics from configured targets at given intervals, evaluates
 rule expressions, displays the results, and can trigger alerts
 when specified conditions are observed.

~ # 
~ # opkg install prometheus
Installing prometheus (2.42.0-1) to root...
Downloading http://bin.entware.net/mipselsf-k3.4/keenetic/prometheus_2.42.0-1_mipsel-3.4.ipk
Configuring prometheus.
~ # 
~ # opkg info prometheus
Package: prometheus
Version: 2.42.0-1
Depends: libc, libssp, librt, libpthread
Status: install user installed
Section: utils
Architecture: mipsel-3.4
Size: 37751366
Filename: prometheus_2.42.0-1_mipsel-3.4.ipk
Conffiles:
 /opt/etc/prometheus/prometheus.yml 6c568c1bdc95b97c1c35e565f4cd337d328eed000551e49b741d94e639e0a78f
Description: Prometheus, a Cloud Native Computing Foundation project,
 is a systems and service monitoring system. It collects
 metrics from configured targets at given intervals, evaluates
 rule expressions, displays the results, and can trigger alerts
 when specified conditions are observed.
Installed-Time: 1680769740

~ #

 


  • 5 months later...
On 12/28/2022 at 5:58 PM, TheBB said:

What bad thing could I do What should I install?

snmp_exporter

https://github.com/prometheus/snmp_exporter

[...]

  # snmp
  - job_name: "snmp"
    static_configs:
    - targets: ["192.168.1.1"]
    metrics_path: /snmp
    params:
      module: [if_mib]
    relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: 192.168.1.1:9116

 

Good afternoon.
Slightly off topic: I'm trying to run snmp_exporter not on a Keenetic, but to monitor a Keenetic. The config is exactly the same, yet in response I get
(screenshot)

and an HTTP request to the endpoint returns

Unknown auth 'public_v2'


Other exporters work fine... Prometheus and all the exporters run in containers, recent versions.
What could the problem be? Thanks.


 
