A collection of simple exporters that produce `.prom` files to be exported with the Prometheus node_exporter's textfile collector.

# Developed for ungleich's Data Center Light

This is part of what monitors Data Center Light, home of your Zerocarbon Swiss VPS and much more.


Unless stated otherwise, python3 and the prometheus-python-client are the only runtime dependencies.

# disk_performance.py -- "dd exporter"

```
usage: disk_performance.py [-h] [--input INPUT] [--output OUTPUT]
                           [--pool [POOL [POOL ...]]]

Use dd to write either random data or zeros to specified files. This can be
used to export pool performance data for network-based file systems like CEPH,
gluster, ... Run with a cronjob and write a `.prom` file that can be exported
with the prometheus node_exporter. Notice that by default, if a process does
not finish writing within 30 seconds, it will be killed and the probe will be
reported as failed. To be on the safe side, you shouldn't run this more often
than once every (N+1)/2 minutes, where N is the number of pools you monitor.

optional arguments:
  -h, --help            show this help message and exit
  --input INPUT         This will read a POOL_DESCRIPTION from each line not
                        starting with "#" in the given file. Use '-' for
                        stdin. If not passed, no input file will be processed.
                        See --pool for POOL_DESCRIPTION.
  --output OUTPUT       The file where the generated data will be written.
                        Must end with .prom to be processed by the
                        node_exporter. Defaults to stdout.
  --pool [POOL [POOL ...]]
                        POOL_DESCRIPTION given in POOL=OUTPUT_FILE[,Z][,BYTES]
                        format. Notice that OUTPUT_FILE will be overwritten
                        without asking any questions. May be passed multiple
                        times. BYTES determines how many bytes will be
                        written to OUTPUT_FILE. If Z is present with value
                        "Z", zeros will be written to OUTPUT_FILE instead of
                        random data (the default). These pools will be
                        processed in addition to --input, if passed. Examples:
```
```
$ python disk_performance.py --pool HDD=/hdd_pool/exporter SDD=/sdd_pool/exporter

# HELP dd_written_bytes Bytes written in last run
# TYPE dd_written_bytes gauge
dd_written_bytes{pool="HDD"} 5.24288e+07
dd_written_bytes{pool="SDD"} 5.24288e+07
# HELP dd_duration_seconds Time required for last run
# TYPE dd_duration_seconds gauge
dd_duration_seconds{pool="HDD"} 0.81591884
dd_duration_seconds{pool="SDD"} 0.4233204
# HELP dd_success Whether or not last run succeeded
# TYPE dd_success gauge
dd_success{pool="HDD"} 1.0
dd_success{pool="SDD"} 1.0
# HELP dd_last_run Timestamp of the last time a run finished
# TYPE dd_last_run gauge
dd_last_run{pool="HDD"} 1.5935844544610152e+09
dd_last_run{pool="SDD"} 1.5935844548849928e+09
# HELP dd_speed_bytes_per_second Pool write speed in bytes per second
# TYPE dd_speed_bytes_per_second gauge
dd_speed_bytes_per_second{pool="HDD"} 6.425737148072227e+07
dd_speed_bytes_per_second{pool="SDD"} 1.238513428599236e+08
```
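The core of the probe above can be sketched in a few lines of Python: run `dd` with a timeout, time it, and emit metrics in the node_exporter textfile format. This is a hypothetical sketch, not the actual implementation; `probe_pool` and its parameters are illustrative names, and for brevity it formats the exposition text by hand instead of using the prometheus-python-client.

```python
import subprocess
import time


def probe_pool(pool, path, n_bytes=50 * 1024 * 1024, timeout=30, zeros=True):
    """Hypothetical sketch: write n_bytes to path with dd and time it.

    The probe is reported as failed if dd exits non-zero or does not
    finish within `timeout` seconds (mirroring the 30-second default
    described in the usage text above).
    """
    src = "/dev/zero" if zeros else "/dev/urandom"
    start = time.time()
    try:
        subprocess.run(
            ["dd", "if=" + src, "of=" + path, "bs=1M",
             "count=%d" % (n_bytes // (1024 * 1024))],
            check=True, timeout=timeout, capture_output=True)
        success = 1.0
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        success = 0.0
    duration = time.time() - start
    lines = [
        "# HELP dd_duration_seconds Time required for last run",
        "# TYPE dd_duration_seconds gauge",
        'dd_duration_seconds{pool="%s"} %s' % (pool, duration),
        "# HELP dd_success Whether or not last run succeeded",
        "# TYPE dd_success gauge",
        'dd_success{pool="%s"} %s' % (pool, success),
    ]
    return "\n".join(lines) + "\n"


# Write 1 MiB to a scratch file and print the resulting metrics.
print(probe_pool("HDD", "/tmp/hdd_probe", n_bytes=1024 * 1024))
```

A real deployment would write this output atomically to the `--output` `.prom` file so the node_exporter never reads a half-written probe.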

# log_parse.sh -- "Export a log metric"

This is a simple, low-overhead way to export a particular metric from logfiles.


This will echo Prometheus metrics to stdout, counting lines that match MATCH_REGEX and grouping the counts by the given label:
```
$ sh log_parse.sh example.log zed 'zed\[.*\]:' pool_guid
# HELP zed_count The total number of entries in the file.
# TYPE zed_count counter
zed_count{pool_guid="0x69E5B29BD15B9EA3"}       12
zed_count{pool_guid="0x69E5B29BD15B9EA4"}        1
# HELP zed_last_run Timestamp of the last time a run finished
# TYPE zed_last_run gauge
zed_last_run{pool_guid="0x69E5B29BD15B9EA3"} 1594840473
```
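The counting logic behind such a metric can be sketched in Python. This is an illustration of the idea, not the shell script's actual implementation: `count_matches` is a hypothetical name, and the assumption that the grouping value appears as `label=value` on each matching line (as `pool_guid=0x...` does in zed logs) may not hold for every logfile.

```python
import re


def count_matches(lines, metric, match_regex, label):
    """Hypothetical sketch: count lines matching match_regex, grouped
    by the value following `label=` on each matching line."""
    counts = {}
    value_re = re.compile(re.escape(label) + r"=(\S+)")
    for line in lines:
        if not re.search(match_regex, line):
            continue
        m = value_re.search(line)
        if m:
            counts[m.group(1)] = counts.get(m.group(1), 0) + 1
    out = [
        "# HELP %s_count The total number of entries in the file." % metric,
        "# TYPE %s_count counter" % metric,
    ]
    for value, n in sorted(counts.items()):
        out.append('%s_count{%s="%s"} %d' % (metric, label, value, n))
    return "\n".join(out)


log = [
    "zed[1234]: eid=1 class=history_event pool_guid=0x69E5B29BD15B9EA3",
    "zed[1234]: eid=2 class=history_event pool_guid=0x69E5B29BD15B9EA3",
    "kernel: unrelated line",
]
print(count_matches(log, "zed", r"zed\[.*\]:", "pool_guid"))
```

Doing the same with `grep` and `awk` keeps the overhead low enough to run from cron on every scrape interval, which is presumably why the original is a shell script.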