Merge branch 'develop'

nicolargo 2024-06-29 09:54:13 +02:00
commit afa1da59d1
37 changed files with 1018 additions and 831 deletions

View File

@ -6,6 +6,9 @@ labels: ''
assignees: ''
---

+**Check the bug**
+Before filling this bug report, please search if a similar issue already exists.
+In this case, just add a comment on this existing issue.

**Describe the bug**
A clear and concise description of what the bug is.
@ -26,11 +29,7 @@ If applicable, add screenshots to help explain your problem.
- Operating System (lsb_release -a or OS name/version): `To be completed with result of: lsb_release -a`
- Glances & psutil versions: `To be completed with result of: glances -V`
- How do you install Glances (Pypi package, script, package manager, source): `To be completed`
-- Glances test (only available with Glances 3.1.7 or higher):
-```
-To be completed with result of: glances --issue
-```
+- Glances test: ` To be completed with result of: glances --issue`

**Additional context**
Add any other context about the problem here.

.github/workflows/inactive_issues.yml vendored Normal file
View File

@ -0,0 +1,22 @@
name: Label inactive issues
on:
schedule:
- cron: "30 1 * * *"
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v5
with:
days-before-issue-stale: 90
days-before-issue-close: -1
stale-issue-label: "inactive"
stale-issue-message: "This issue is stale because it has been open for 3 months with no activity."
close-issue-message: "This issue was closed because it has been inactive for 30 days since being marked as stale."
days-before-pr-stale: -1
days-before-pr-close: -1
repo-token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/needs_contributor.yml vendored Normal file
View File

@ -0,0 +1,22 @@
name: Add a message when needs contributor tag is used
on:
issues:
types:
- labeled
jobs:
add-comment:
if: github.event.label.name == 'needs contributor'
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- name: Add comment
run: gh issue comment "$NUMBER" --body "$BODY"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GH_REPO: ${{ github.repository }}
NUMBER: ${{ github.event.issue.number }}
BODY: >
This issue is available for anyone to work on.
**Make sure to reference this issue in your pull request.**
:sparkles: Thank you for your contribution ! :sparkles:

.readthedocs.yaml Normal file
View File

@ -0,0 +1,34 @@
# Read the Docs configuration file for Glances projects
# Required
version: 2
# Set the OS, Python version and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.12"
# You can also specify other tool versions:
# nodejs: "20"
# rust: "1.70"
# golang: "1.20"
# Build documentation in the "docs/" directory with Sphinx
sphinx:
configuration: docs/conf.py
# You can configure Sphinx to use a different builder, for instance use the dirhtml builder for simpler URLs
# builder: "dirhtml"
# Fail on all warnings to avoid broken references
# fail_on_warning: true
# Optionally build your docs in additional formats such as PDF and ePub
# formats:
# - pdf
# - epub
# Optional but recommended, declare the Python requirements required
# to build your documentation
# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
python:
install:
- requirements: doc-requirements.txt

View File

@ -3,62 +3,39 @@
==============================================================================

===============
-Version 4.0.8
+Version 4.1.0
===============

-* Make CORS option configurable security webui #2812
-* When Glances is installed via venv, default configuration file is not used documentation packaging #2803
-* GET /1272f6e9e8f9d6bfd6de.png results in 404 bug webui #2781 by Emporea was closed May 25, 2024
-* Screen frequently flickers when outputting to local display bug needs test #2490
-* Retire ujson for being in maintenance mode dependencies enhancement #2791
-
-Minor breaking change in AMP: please use && instead of ; as command line separator.
-
-===============
-Version 4.0.7
-===============
-
-* cpu_hz_current not available on NetBSD #2792
-* SensorType change in REST API breaks compatibility in 4.0.4 #2788
-
-===============
-Version 4.0.6
-===============
-
-* No GPU info on Web View #2796
-
-===============
-Version 4.0.5
-===============
-
-* SensorType change in REST API breaks compatibility in 4.0.4 #2788
-* Please make pydantic optional dependency, not required one #2777
-* Update the Grafana dashboard #2780
-* 4.0.4 - On Glances startup "ERROR -- Can not init battery class #2776
-* In codeSpace (with Python 3.8), an error occurs in ./unittest-restful.py #2773
-
-Use Ruff as default Linter.
-
-===============
-Version 4.0.4
-===============
-
-Hostfix release for support sensors plugin on python 3.8
-
-===============
-Version 4.0.3
-===============
-
-Additional fixes for Sensor plugin
-
-===============
-Version 4.0.2
-===============
-
-* hotfix: plugin(sensors) - race conditions btw fan_speed & temperature… #2766
-* fix: include requirements.txt and SECURITY.md for pypi dist #2761
-
-Thanks to RazCrimson for the sensors patch !
+Enhancements:
+
+* Call process_iter.clear_cache() (PsUtil 6+) when Glances user force a refresh (F5 or CTRL-R) #2753
+* PsUtil 6+ no longer check PID reused #2755
+* Add support for automatically hiding network interfaces that are down or that don't have any IP addresses #2799
+
+Bug corrected:
+
+* API: Network module is disabled but appears in endpoint "all" #2815
+* API is not compatible with requests containing special/encoding char #2820
+* 'j' hot key crashes Glances #2831
+* Raspberry PI - CPU info is not correct #2616
+* Graph export is broken if there is no graph section in Glances configuration file #2839
+* Glances API status check returns Error 405 - Method Not Allowed #2841
+* Rootless podman containers cause glances to fail with KeyError #2827
+* --export-process-filter Filter using complete command #2824
+* Exception when Glances is ran with limited plugin list #2822
+* Disable separator option do not work #2823
+
+Continuous integration and documentation:
+
+* test test_107_fs_plugin_method fails on aarch64-linux #2819
+
+Thanks to all contributors and bug reporters !
+
+Special thanks to:
+
+* Bharath Vignesh J K
+* RazCrimson
+* Vadim Smal

===============
Version 4.0.8

View File

@ -221,8 +221,6 @@ Run last version of Glances container in *console mode*:
By default, the /etc/glances/glances.conf file is used (based on docker-compose/glances.conf).
-By default, the /etc/glances/glances.conf file is used (based on docker-compose/glances.conf).
Additionally, if you want to use your own glances.conf file, you can
create your own Dockerfile:

View File

@ -2,13 +2,10 @@
## Supported Versions

-Use this section to tell people about which versions of your project are
-currently being supported with security updates.

| Version | Support security updates |
| ------- | ------------------------ |
-| 3.x     | :white_check_mark:       |
-| < 3.0   | :x:                      |
+| 4.x     | :white_check_mark:       |
+| < 4.0   | :x:                      |

## Reporting a Vulnerability
@ -31,4 +28,3 @@ If there are any vulnerabilities in {{cookiecutter.project_name}}, don't hesitat
4. Please do not disclose the vulnerability publicly until a fix is released!
Once we have either a) published a fix, or b) declined to address the vulnerability for whatever reason, you are free to publicly disclose it.

View File

@ -26,7 +26,7 @@ history_size=1200
# Options for all UIs
#--------------------
# Separator in the Curses and WebUI interface (between top and others plugins)
-separator=True
+#separator=True
# Set the Curses and WebUI interface left menu plugin list (comma-separated)
#left_menu=network,wifi,connections,ports,diskio,fs,irq,folders,raid,smart,sensors,now
# Limit the number of processes to display (in the WebUI)
@ -217,9 +217,9 @@ hide=docker.*,lo
# Define the list of wireless network interfaces to be shown (comma-separated)
#show=docker.*
# Automatically hide interface not up (default is False)
-#hide_no_up=True
+hide_no_up=True
# Automatically hide interface with no IP address (default is False)
-#hide_no_ip=True
+hide_no_ip=True
# It is possible to overwrite the bitrate thresholds per interface
# WLAN 0 Default limits (in bits per second aka bps) for interface bitrate
#wlan0_rx_careful=4000000

View File

@ -1,3 +1,5 @@
+psutil
+defusedxml
orjson
reuse
setuptools>=65.5.1 # not directly required, pinned by Snyk to avoid a vulnerability

View File

@ -26,7 +26,7 @@ history_size=1200
# Options for all UIs
#--------------------
# Separator in the Curses and WebUI interface (between top and others plugins)
-separator=True
+#separator=True
# Set the Curses and WebUI interface left menu plugin list (comma-separated)
#left_menu=network,wifi,connections,ports,diskio,fs,irq,folders,raid,smart,sensors,now
# Limit the number of processes to display (in the WebUI)
@ -60,7 +60,7 @@ max_processes_display=25
#cors_headers=*

##############################################################################
-# plugins
+# Plugins
##############################################################################

[quicklook]

View File

@ -48,7 +48,7 @@ virtual docker interface (docker0, docker1, ...):
# Automatically hide interface with no IP address (default is False)
hide_no_ip=True
# WLAN 0 alias
-wlan0_alias=Wireless IF
+alias=wlan0:Wireless IF
# It is possible to overwrite the bitrate thresholds per interface
# WLAN 0 Default limits (in bits per second aka bps) for interface bitrate
wlan0_rx_careful=4000000
@ -64,4 +64,4 @@ Filtering is based on regular expression. Please be sure that your regular
expression works as expected. You can use an online tool like `regex101`_ in
order to test your regular expression.

.. _regex101: https://regex101.com/
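
For example, a quick local check that a ``hide`` pattern matches the interfaces you
expect (a minimal sketch in Python; the interface list and the matching call are
illustrative, not the exact logic used by Glances)::

    import re

    hide_patterns = ['docker.*', 'lo']                   # as in hide=docker.*,lo
    interfaces = ['eth0', 'lo', 'docker0', 'wlp0s20f3']

    hidden = [itf for itf in interfaces
              if any(re.fullmatch(pattern, itf) for pattern in hide_patterns)]
    print(hidden)  # ['lo', 'docker0']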

View File

@ -27,8 +27,7 @@ There is no alert on this information.
.. note 3::
    If a sensor has temperature and fan speed with the same name unit,
    it is possible to alias it using:
-   unitname_temperature_core_alias=Alias for temp
-   unitname_fan_speed_alias=Alias for fan speed
+   alias=unitname_temperature_core_alias:Alias for temp,unitname_fan_speed_alias:Alias for fan speed

.. note 4::
    If a sensor has multiple identical feature names (see #2280), then
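
As an illustration, the new comma-separated ``alias`` value shown in note 3 can be
turned into a mapping with a few lines of Python (a rough sketch, not the parser
actually used by Glances)::

    def parse_alias(value):
        """Parse 'key1:Alias 1,key2:Alias 2' into a dict."""
        aliases = {}
        for item in value.split(','):
            key, sep, alias = item.partition(':')
            if sep:
                aliases[key.strip()] = alias.strip()
        return aliases

    parse_alias('unitname_temperature_core_alias:Alias for temp,'
                'unitname_fan_speed_alias:Alias for fan speed')
    # -> {'unitname_temperature_core_alias': 'Alias for temp',
    #     'unitname_fan_speed_alias': 'Alias for fan speed'}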

View File

@ -141,7 +141,7 @@ Get plugin stats::
"refresh": 3.0, "refresh": 3.0,
"regex": True, "regex": True,
"result": None, "result": None,
"timer": 0.24439311027526855}, "timer": 0.48795127868652344},
{"count": 0, {"count": 0,
"countmax": 20.0, "countmax": 20.0,
"countmin": None, "countmin": None,
@ -150,7 +150,7 @@ Get plugin stats::
"refresh": 3.0, "refresh": 3.0,
"regex": True, "regex": True,
"result": None, "result": None,
"timer": 0.2443389892578125}] "timer": 0.48785948753356934}]
Fields descriptions: Fields descriptions:
@ -178,7 +178,7 @@ Get a specific item when field matches the given value::
"refresh": 3.0, "refresh": 3.0,
"regex": True, "regex": True,
"result": None, "result": None,
"timer": 0.24439311027526855}]} "timer": 0.48795127868652344}]}
GET cloud GET cloud
--------- ---------
@ -219,21 +219,7 @@ GET containers
Get plugin stats:: Get plugin stats::
# curl http://localhost:61208/api/4/containers # curl http://localhost:61208/api/4/containers
[{"command": "/bin/sh -c /venv/bin/python3 -m glances $GLANCES_OPT", []
"cpu": {"total": 0.0},
"cpu_percent": 0.0,
"created": "2024-05-25T13:52:22.535373707Z",
"engine": "docker",
"id": "bb99d31288db8904ed4cd43db8255a926830936189bc180d77c3459cbaa7f490",
"image": ["nicolargo/glances:latest"],
"io": {},
"key": "name",
"memory": {},
"memory_usage": None,
"name": "wizardly_nightingale",
"network": {},
"status": "running",
"uptime": "14 mins"}]
Fields descriptions: Fields descriptions:
@ -254,31 +240,6 @@ Fields descriptions:
* **pod_name**: Pod name (only with Podman) (unit is *None*) * **pod_name**: Pod name (only with Podman) (unit is *None*)
* **pod_id**: Pod ID (only with Podman) (unit is *None*) * **pod_id**: Pod ID (only with Podman) (unit is *None*)
Get a specific field::
# curl http://localhost:61208/api/4/containers/name
{"name": ["wizardly_nightingale"]}
Get a specific item when field matches the given value::
# curl http://localhost:61208/api/4/containers/name/wizardly_nightingale
{"wizardly_nightingale": [{"command": "/bin/sh -c /venv/bin/python3 -m glances "
"$GLANCES_OPT",
"cpu": {"total": 0.0},
"cpu_percent": 0.0,
"created": "2024-05-25T13:52:22.535373707Z",
"engine": "docker",
"id": "bb99d31288db8904ed4cd43db8255a926830936189bc180d77c3459cbaa7f490",
"image": ["nicolargo/glances:latest"],
"io": {},
"key": "name",
"memory": {},
"memory_usage": None,
"name": "wizardly_nightingale",
"network": {},
"status": "running",
"uptime": "14 mins"}]}
GET core GET core
-------- --------
@ -304,19 +265,19 @@ Get plugin stats::
# curl http://localhost:61208/api/4/cpu # curl http://localhost:61208/api/4/cpu
{"cpucore": 16, {"cpucore": 16,
"ctx_switches": 7772781, "ctx_switches": 426798284,
"guest": 0.0, "guest": 0.0,
"idle": 3.0, "idle": 85.4,
"interrupts": 6643340, "interrupts": 358987449,
"iowait": 0.0, "iowait": 0.1,
"irq": 0.0, "irq": 0.0,
"nice": 0.0, "nice": 0.0,
"soft_interrupts": 1761276, "soft_interrupts": 133317922,
"steal": 0.0, "steal": 0.0,
"syscalls": 0, "syscalls": 0,
"system": 0.0, "system": 3.2,
"total": 16.7, "total": 6.7,
"user": 1.0} "user": 11.4}
Fields descriptions: Fields descriptions:
@ -349,7 +310,7 @@ Fields descriptions:
Get a specific field:: Get a specific field::
# curl http://localhost:61208/api/4/cpu/total # curl http://localhost:61208/api/4/cpu/total
{"total": 16.7} {"total": 6.7}
GET diskio GET diskio
---------- ----------
@ -359,14 +320,14 @@ Get plugin stats::
# curl http://localhost:61208/api/4/diskio # curl http://localhost:61208/api/4/diskio
[{"disk_name": "nvme0n1", [{"disk_name": "nvme0n1",
"key": "disk_name", "key": "disk_name",
"read_bytes": 3983976960, "read_bytes": 7464889856,
"read_count": 115648, "read_count": 275684,
"write_bytes": 3073111040, "write_bytes": 24858043392,
"write_count": 91409}, "write_count": 1204326},
{"disk_name": "nvme0n1p1", {"disk_name": "nvme0n1p1",
"key": "disk_name", "key": "disk_name",
"read_bytes": 7476224, "read_bytes": 7558144,
"read_count": 576, "read_count": 605,
"write_bytes": 1024, "write_bytes": 1024,
"write_count": 2}] "write_count": 2}]
@ -402,10 +363,10 @@ Get a specific item when field matches the given value::
# curl http://localhost:61208/api/4/diskio/disk_name/nvme0n1 # curl http://localhost:61208/api/4/diskio/disk_name/nvme0n1
{"nvme0n1": [{"disk_name": "nvme0n1", {"nvme0n1": [{"disk_name": "nvme0n1",
"key": "disk_name", "key": "disk_name",
"read_bytes": 3983976960, "read_bytes": 7464889856,
"read_count": 115648, "read_count": 275684,
"write_bytes": 3073111040, "write_bytes": 24858043392,
"write_count": 91409}]} "write_count": 1204326}]}
GET folders GET folders
----------- -----------
@ -432,13 +393,13 @@ Get plugin stats::
# curl http://localhost:61208/api/4/fs # curl http://localhost:61208/api/4/fs
[{"device_name": "/dev/mapper/ubuntu--vg-ubuntu--lv", [{"device_name": "/dev/mapper/ubuntu--vg-ubuntu--lv",
"free": 902284546048, "free": 896615567360,
"fs_type": "ext4", "fs_type": "ext4",
"key": "mnt_point", "key": "mnt_point",
"mnt_point": "/", "mnt_point": "/",
"percent": 5.3, "percent": 5.9,
"size": 1003736440832, "size": 1003736440832,
"used": 50389389312}] "used": 56058368000}]
Fields descriptions: Fields descriptions:
@ -459,13 +420,13 @@ Get a specific item when field matches the given value::
# curl http://localhost:61208/api/4/fs/mnt_point// # curl http://localhost:61208/api/4/fs/mnt_point//
{"/": [{"device_name": "/dev/mapper/ubuntu--vg-ubuntu--lv", {"/": [{"device_name": "/dev/mapper/ubuntu--vg-ubuntu--lv",
"free": 902284546048, "free": 896615567360,
"fs_type": "ext4", "fs_type": "ext4",
"key": "mnt_point", "key": "mnt_point",
"mnt_point": "/", "mnt_point": "/",
"percent": 5.3, "percent": 5.9,
"size": 1003736440832, "size": 1003736440832,
"used": 50389389312}]} "used": 56058368000}]}
GET gpu GET gpu
------- -------
@ -539,9 +500,9 @@ Get plugin stats::
# curl http://localhost:61208/api/4/load # curl http://localhost:61208/api/4/load
{"cpucore": 16, {"cpucore": 16,
"min1": 0.560546875, "min1": 0.69091796875,
"min15": 0.54833984375, "min15": 0.93115234375,
"min5": 0.70166015625} "min5": 0.9248046875}
Fields descriptions: Fields descriptions:
@ -553,7 +514,7 @@ Fields descriptions:
Get a specific field:: Get a specific field::
# curl http://localhost:61208/api/4/load/min1 # curl http://localhost:61208/api/4/load/min1
{"min1": 0.560546875} {"min1": 0.69091796875}
GET mem GET mem
------- -------
@ -561,16 +522,16 @@ GET mem
Get plugin stats:: Get plugin stats::
# curl http://localhost:61208/api/4/mem # curl http://localhost:61208/api/4/mem
{"active": 5540470784, {"active": 8055664640,
"available": 11137699840, "available": 4539228160,
"buffers": 226918400, "buffers": 112582656,
"cached": 6333566976, "cached": 4634251264,
"free": 11137699840, "free": 4539228160,
"inactive": 3872489472, "inactive": 5664567296,
"percent": 32.2, "percent": 72.4,
"shared": 656637952, "shared": 794791936,
"total": 16422486016, "total": 16422486016,
"used": 5284786176} "used": 11883257856}
Fields descriptions: Fields descriptions:
@ -597,13 +558,13 @@ GET memswap
Get plugin stats:: Get plugin stats::
# curl http://localhost:61208/api/4/memswap # curl http://localhost:61208/api/4/memswap
{"free": 4294963200, {"free": 3367235584,
"percent": 0.0, "percent": 21.6,
"sin": 0, "sin": 12046336,
"sout": 0, "sout": 929779712,
"time_since_update": 1, "time_since_update": 1,
"total": 4294963200, "total": 4294963200,
"used": 0} "used": 927727616}
Fields descriptions: Fields descriptions:
@ -628,26 +589,15 @@ Get plugin stats::
# curl http://localhost:61208/api/4/network # curl http://localhost:61208/api/4/network
[{"alias": None, [{"alias": None,
"bytes_all": 0, "bytes_all": 0,
"bytes_all_gauge": 422462144, "bytes_all_gauge": 5602118637,
"bytes_recv": 0, "bytes_recv": 0,
"bytes_recv_gauge": 413272561, "bytes_recv_gauge": 5324018799,
"bytes_sent": 0, "bytes_sent": 0,
"bytes_sent_gauge": 9189583, "bytes_sent_gauge": 278099838,
"interface_name": "wlp0s20f3", "interface_name": "wlp0s20f3",
"key": "interface_name", "key": "interface_name",
"speed": 0, "speed": 0,
"time_since_update": 0.24814295768737793}, "time_since_update": 0.4937009811401367}]
{"alias": None,
"bytes_all": 0,
"bytes_all_gauge": 18987,
"bytes_recv": 0,
"bytes_recv_gauge": 528,
"bytes_sent": 0,
"bytes_sent_gauge": 18459,
"interface_name": "vethfc47299",
"key": "interface_name",
"speed": 10485760000,
"time_since_update": 0.24814295768737793}]
Fields descriptions: Fields descriptions:
@ -669,22 +619,22 @@ Fields descriptions:
Get a specific field:: Get a specific field::
# curl http://localhost:61208/api/4/network/interface_name # curl http://localhost:61208/api/4/network/interface_name
{"interface_name": ["wlp0s20f3", "vethfc47299"]} {"interface_name": ["wlp0s20f3"]}
Get a specific item when field matches the given value:: Get a specific item when field matches the given value::
# curl http://localhost:61208/api/4/network/interface_name/wlp0s20f3 # curl http://localhost:61208/api/4/network/interface_name/wlp0s20f3
{"wlp0s20f3": [{"alias": None, {"wlp0s20f3": [{"alias": None,
"bytes_all": 0, "bytes_all": 0,
"bytes_all_gauge": 422462144, "bytes_all_gauge": 5602118637,
"bytes_recv": 0, "bytes_recv": 0,
"bytes_recv_gauge": 413272561, "bytes_recv_gauge": 5324018799,
"bytes_sent": 0, "bytes_sent": 0,
"bytes_sent_gauge": 9189583, "bytes_sent_gauge": 278099838,
"interface_name": "wlp0s20f3", "interface_name": "wlp0s20f3",
"key": "interface_name", "key": "interface_name",
"speed": 0, "speed": 0,
"time_since_update": 0.24814295768737793}]} "time_since_update": 0.4937009811401367}]}
GET now GET now
------- -------
@ -692,7 +642,7 @@ GET now
Get plugin stats:: Get plugin stats::
# curl http://localhost:61208/api/4/now # curl http://localhost:61208/api/4/now
{"custom": "2024-06-08 10:22:29 CEST", "iso": "2024-06-08T10:22:29+02:00"} {"custom": "2024-06-29 09:51:57 CEST", "iso": "2024-06-29T09:51:57+02:00"}
Fields descriptions: Fields descriptions:
@ -702,7 +652,7 @@ Fields descriptions:
Get a specific field:: Get a specific field::
# curl http://localhost:61208/api/4/now/iso # curl http://localhost:61208/api/4/now/iso
{"iso": "2024-06-08T10:22:29+02:00"} {"iso": "2024-06-29T09:51:57+02:00"}
GET percpu GET percpu
---------- ----------
@ -713,7 +663,7 @@ Get plugin stats::
[{"cpu_number": 0, [{"cpu_number": 0,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 39.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
@ -721,12 +671,12 @@ Get plugin stats::
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 0.0,
"total": 100.0, "total": 61.0,
"user": 0.0}, "user": 0.0},
{"cpu_number": 1, {"cpu_number": 1,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 40.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
@ -734,7 +684,7 @@ Get plugin stats::
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 0.0,
"total": 100.0, "total": 60.0,
"user": 0.0}] "user": 0.0}]
Fields descriptions: Fields descriptions:
@ -769,7 +719,7 @@ Get plugin stats::
"port": 0, "port": 0,
"refresh": 30, "refresh": 30,
"rtt_warning": None, "rtt_warning": None,
"status": 0.005593, "status": 0.00589,
"timeout": 3}] "timeout": 3}]
Fields descriptions: Fields descriptions:
@ -797,7 +747,7 @@ Get a specific item when field matches the given value::
"port": 0, "port": 0,
"refresh": 30, "refresh": 30,
"rtt_warning": None, "rtt_warning": None,
"status": 0.005593, "status": 0.00589,
"timeout": 3}]} "timeout": 3}]}
GET processcount GET processcount
@ -806,7 +756,7 @@ GET processcount
Get plugin stats:: Get plugin stats::
# curl http://localhost:61208/api/4/processcount # curl http://localhost:61208/api/4/processcount
{"pid_max": 0, "running": 0, "sleeping": 279, "thread": 1568, "total": 410} {"pid_max": 0, "running": 1, "sleeping": 280, "thread": 1620, "total": 419}
Fields descriptions: Fields descriptions:
@ -819,7 +769,7 @@ Fields descriptions:
Get a specific field:: Get a specific field::
# curl http://localhost:61208/api/4/processcount/total # curl http://localhost:61208/api/4/processcount/total
{"total": 410} {"total": 419}
GET processlist GET processlist
--------------- ---------------
@ -827,7 +777,100 @@ GET processlist
Get plugin stats:: Get plugin stats::
# curl http://localhost:61208/api/4/processlist # curl http://localhost:61208/api/4/processlist
[] [{"cmdline": ["/snap/firefox/4336/usr/lib/firefox/firefox",
"-contentproc",
"-childID",
"2",
"-isForBrowser",
"-prefsLen",
"28296",
"-prefMapSize",
"244444",
"-jsInitLen",
"231800",
"-parentBuildID",
"20240527194810",
"-greomni",
"/snap/firefox/4336/usr/lib/firefox/omni.ja",
"-appomni",
"/snap/firefox/4336/usr/lib/firefox/browser/omni.ja",
"-appDir",
"/snap/firefox/4336/usr/lib/firefox/browser",
"{01aadc3d-85fc-4851-9ca1-a43a1e81c3fa}",
"4591",
"true",
"tab"],
"cpu_percent": 0.0,
"cpu_times": {"children_system": 0.0,
"children_user": 0.0,
"iowait": 0.0,
"system": 123.52,
"user": 2733.04},
"gids": {"effective": 1000, "real": 1000, "saved": 1000},
"io_counters": [8411136, 0, 0, 0, 0],
"key": "pid",
"memory_info": {"data": 3639250944,
"dirty": 0,
"lib": 0,
"rss": 3594973184,
"shared": 128897024,
"text": 987136,
"vms": 6192345088},
"memory_percent": 21.89055408844624,
"name": "Isolated Web Co",
"nice": 0,
"num_threads": 28,
"pid": 4848,
"status": "S",
"time_since_update": 1,
"username": "nicolargo"},
{"cmdline": ["/snap/firefox/4336/usr/lib/firefox/firefox",
"-contentproc",
"-childID",
"3",
"-isForBrowser",
"-prefsLen",
"28296",
"-prefMapSize",
"244444",
"-jsInitLen",
"231800",
"-parentBuildID",
"20240527194810",
"-greomni",
"/snap/firefox/4336/usr/lib/firefox/omni.ja",
"-appomni",
"/snap/firefox/4336/usr/lib/firefox/browser/omni.ja",
"-appDir",
"/snap/firefox/4336/usr/lib/firefox/browser",
"{0ae685c6-7105-4724-886c-98d4a4a9a4f8}",
"4591",
"true",
"tab"],
"cpu_percent": 0.0,
"cpu_times": {"children_system": 0.0,
"children_user": 0.0,
"iowait": 0.0,
"system": 52.27,
"user": 492.22},
"gids": {"effective": 1000, "real": 1000, "saved": 1000},
"io_counters": [2974720, 0, 0, 0, 0],
"key": "pid",
"memory_info": {"data": 1949392896,
"dirty": 0,
"lib": 0,
"rss": 1926062080,
"shared": 121397248,
"text": 987136,
"vms": 4444565504},
"memory_percent": 11.72820045712621,
"name": "Isolated Web Co",
"nice": 0,
"num_threads": 28,
"pid": 4852,
"status": "S",
"time_since_update": 1,
"username": "nicolargo"}]
Fields descriptions: Fields descriptions:
@ -851,7 +894,7 @@ GET psutilversion
Get plugin stats:: Get plugin stats::
# curl http://localhost:61208/api/4/psutilversion # curl http://localhost:61208/api/4/psutilversion
"5.9.8" "6.0.0"
GET quicklook GET quicklook
------------- -------------
@ -859,18 +902,18 @@ GET quicklook
Get plugin stats:: Get plugin stats::
# curl http://localhost:61208/api/4/quicklook # curl http://localhost:61208/api/4/quicklook
{"cpu": 16.7, {"cpu": 6.7,
"cpu_hz": 4475000000.0, "cpu_hz": 4475000000.0,
"cpu_hz_current": 1230624312.5, "cpu_hz_current": 676840312.5,
"cpu_log_core": 16, "cpu_log_core": 16,
"cpu_name": "13th Gen Intel(R) Core(TM) i7-13620H", "cpu_name": "13th Gen Intel(R) Core(TM) i7-13620H",
"cpu_phys_core": 10, "cpu_phys_core": 10,
"load": 3.4, "load": 5.8,
"mem": 32.1, "mem": 72.4,
"percpu": [{"cpu_number": 0, "percpu": [{"cpu_number": 0,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 39.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
@ -878,12 +921,12 @@ Get plugin stats::
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 0.0,
"total": 100.0, "total": 61.0,
"user": 0.0}, "user": 0.0},
{"cpu_number": 1, {"cpu_number": 1,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 40.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
@ -891,12 +934,12 @@ Get plugin stats::
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 0.0,
"total": 100.0, "total": 60.0,
"user": 0.0}, "user": 0.0},
{"cpu_number": 2, {"cpu_number": 2,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 39.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
@ -904,12 +947,12 @@ Get plugin stats::
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 0.0,
"total": 100.0, "total": 61.0,
"user": 0.0}, "user": 0.0},
{"cpu_number": 3, {"cpu_number": 3,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 40.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
@ -917,25 +960,25 @@ Get plugin stats::
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 0.0,
"total": 100.0, "total": 60.0,
"user": 0.0}, "user": 0.0},
{"cpu_number": 4, {"cpu_number": 4,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 0.0,
"iowait": 0.0, "iowait": 1.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
"nice": 0.0, "nice": 0.0,
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 4.0,
"total": 100.0, "total": 100.0,
"user": 0.0}, "user": 35.0},
{"cpu_number": 5, {"cpu_number": 5,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 40.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
@ -943,12 +986,12 @@ Get plugin stats::
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 0.0,
"total": 100.0, "total": 60.0,
"user": 0.0}, "user": 1.0},
{"cpu_number": 6, {"cpu_number": 6,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 40.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
@ -956,12 +999,12 @@ Get plugin stats::
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 0.0,
"total": 100.0, "total": 60.0,
"user": 0.0}, "user": 0.0},
{"cpu_number": 7, {"cpu_number": 7,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 40.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
@ -969,12 +1012,12 @@ Get plugin stats::
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 0.0,
"total": 100.0, "total": 60.0,
"user": 0.0}, "user": 0.0},
{"cpu_number": 8, {"cpu_number": 8,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 1.0, "idle": 40.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
@ -982,25 +1025,25 @@ Get plugin stats::
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 0.0,
"total": 99.0, "total": 60.0,
"user": 0.0}, "user": 1.0},
{"cpu_number": 9, {"cpu_number": 9,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 39.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
"nice": 0.0, "nice": 0.0,
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 1.0,
"total": 100.0, "total": 61.0,
"user": 0.0}, "user": 0.0},
{"cpu_number": 10, {"cpu_number": 10,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 40.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
@ -1008,12 +1051,12 @@ Get plugin stats::
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 0.0,
"total": 100.0, "total": 60.0,
"user": 0.0}, "user": 0.0},
{"cpu_number": 11, {"cpu_number": 11,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 40.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
@ -1021,12 +1064,12 @@ Get plugin stats::
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 0.0,
"total": 100.0, "total": 60.0,
"user": 0.0}, "user": 0.0},
{"cpu_number": 12, {"cpu_number": 12,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 40.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
@ -1034,25 +1077,25 @@ Get plugin stats::
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 0.0,
"total": 100.0, "total": 60.0,
"user": 0.0}, "user": 0.0},
{"cpu_number": 13, {"cpu_number": 13,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 40.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
"nice": 0.0, "nice": 0.0,
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 1.0,
"total": 100.0, "total": 60.0,
"user": 0.0}, "user": 0.0},
{"cpu_number": 14, {"cpu_number": 14,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 40.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
@ -1060,12 +1103,12 @@ Get plugin stats::
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 0.0,
"total": 100.0, "total": 60.0,
"user": 0.0}, "user": 0.0},
{"cpu_number": 15, {"cpu_number": 15,
"guest": 0.0, "guest": 0.0,
"guest_nice": 0.0, "guest_nice": 0.0,
"idle": 0.0, "idle": 40.0,
"iowait": 0.0, "iowait": 0.0,
"irq": 0.0, "irq": 0.0,
"key": "cpu_number", "key": "cpu_number",
@ -1073,9 +1116,9 @@ Get plugin stats::
"softirq": 0.0, "softirq": 0.0,
"steal": 0.0, "steal": 0.0,
"system": 0.0, "system": 0.0,
"total": 100.0, "total": 60.0,
"user": 0.0}], "user": 0.0}],
"swap": 0.0} "swap": 21.6}
Fields descriptions: Fields descriptions:
@ -1113,14 +1156,14 @@ Get plugin stats::
"label": "Ambient", "label": "Ambient",
"type": "temperature_core", "type": "temperature_core",
"unit": "C", "unit": "C",
"value": 36, "value": 35,
"warning": 0}, "warning": 0},
{"critical": None, {"critical": None,
"key": "label", "key": "label",
"label": "Ambient 3", "label": "Ambient 3",
"type": "temperature_core", "type": "temperature_core",
"unit": "C", "unit": "C",
"value": 29, "value": 31,
"warning": 0}] "warning": 0}]
Fields descriptions: Fields descriptions:
@ -1181,7 +1224,7 @@ Get a specific item when field matches the given value::
"label": "Ambient", "label": "Ambient",
"type": "temperature_core", "type": "temperature_core",
"unit": "C", "unit": "C",
"value": 36, "value": 35,
"warning": 0}]} "warning": 0}]}
GET smart GET smart
@ -1225,7 +1268,7 @@ GET uptime
Get plugin stats:: Get plugin stats::
# curl http://localhost:61208/api/4/uptime # curl http://localhost:61208/api/4/uptime
"0:14:44" "19 days, 16:54:43"
GET version GET version
----------- -----------
@ -1233,7 +1276,7 @@ GET version
Get plugin stats:: Get plugin stats::
# curl http://localhost:61208/api/4/version # curl http://localhost:61208/api/4/version
"4.0.8" "4.1.0"
GET wifi GET wifi
-------- --------
@ -1242,8 +1285,8 @@ Get plugin stats::
# curl http://localhost:61208/api/4/wifi # curl http://localhost:61208/api/4/wifi
[{"key": "ssid", [{"key": "ssid",
"quality_level": -66.0, "quality_level": -61.0,
"quality_link": 44.0, "quality_link": 49.0,
"ssid": "wlp0s20f3"}] "ssid": "wlp0s20f3"}]
Get a specific field:: Get a specific field::
@ -1255,8 +1298,8 @@ Get a specific item when field matches the given value::
# curl http://localhost:61208/api/4/wifi/ssid/wlp0s20f3 # curl http://localhost:61208/api/4/wifi/ssid/wlp0s20f3
{"wlp0s20f3": [{"key": "ssid", {"wlp0s20f3": [{"key": "ssid",
"quality_level": -66.0, "quality_level": -61.0,
"quality_link": 44.0, "quality_link": 49.0,
"ssid": "wlp0s20f3"}]} "ssid": "wlp0s20f3"}]}
GET all stats GET all stats
@ -1301,34 +1344,34 @@ GET stats history
History of a plugin:: History of a plugin::
# curl http://localhost:61208/api/4/cpu/history # curl http://localhost:61208/api/4/cpu/history
{"system": [["2024-06-08T10:22:30.631056", 0.0], {"system": [["2024-06-29T09:51:58.684649", 3.2],
["2024-06-08T10:22:31.667772", 1.0], ["2024-06-29T09:51:59.762140", 0.6],
["2024-06-08T10:22:32.723737", 1.0]], ["2024-06-29T09:52:00.774689", 0.6]],
"user": [["2024-06-08T10:22:30.631046", 1.0], "user": [["2024-06-29T09:51:58.684643", 11.4],
["2024-06-08T10:22:31.667765", 1.0], ["2024-06-29T09:51:59.762137", 0.7],
["2024-06-08T10:22:32.723728", 1.0]]} ["2024-06-29T09:52:00.774685", 0.7]]}
Limit history to last 2 values:: Limit history to last 2 values::
# curl http://localhost:61208/api/4/cpu/history/2 # curl http://localhost:61208/api/4/cpu/history/2
{"system": [["2024-06-08T10:22:31.667772", 1.0], {"system": [["2024-06-29T09:51:59.762140", 0.6],
["2024-06-08T10:22:32.723737", 1.0]], ["2024-06-29T09:52:00.774689", 0.6]],
"user": [["2024-06-08T10:22:31.667765", 1.0], "user": [["2024-06-29T09:51:59.762137", 0.7],
["2024-06-08T10:22:32.723728", 1.0]]} ["2024-06-29T09:52:00.774685", 0.7]]}
History for a specific field:: History for a specific field::
# curl http://localhost:61208/api/4/cpu/system/history # curl http://localhost:61208/api/4/cpu/system/history
{"system": [["2024-06-08T10:22:29.515717", 0.0], {"system": [["2024-06-29T09:51:57.513763", 3.2],
["2024-06-08T10:22:30.631056", 0.0], ["2024-06-29T09:51:58.684649", 3.2],
["2024-06-08T10:22:31.667772", 1.0], ["2024-06-29T09:51:59.762140", 0.6],
["2024-06-08T10:22:32.723737", 1.0]]} ["2024-06-29T09:52:00.774689", 0.6]]}
Limit history for a specific field to last 2 values:: Limit history for a specific field to last 2 values::
# curl http://localhost:61208/api/4/cpu/system/history # curl http://localhost:61208/api/4/cpu/system/history
{"system": [["2024-06-08T10:22:31.667772", 1.0], {"system": [["2024-06-29T09:51:59.762140", 0.6],
["2024-06-08T10:22:32.723737", 1.0]]} ["2024-06-29T09:52:00.774689", 0.6]]}
GET limits (used for thresholds) GET limits (used for thresholds)
-------------------------------- --------------------------------
@ -1413,6 +1456,8 @@ All limits/thresholds::
"network": {"history_size": 1200.0, "network": {"history_size": 1200.0,
"network_disable": ["False"], "network_disable": ["False"],
"network_hide": ["docker.*", "lo"], "network_hide": ["docker.*", "lo"],
"network_hide_no_ip": ["True"],
"network_hide_no_up": ["True"],
"network_rx_careful": 70.0, "network_rx_careful": 70.0,
"network_rx_critical": 90.0, "network_rx_critical": 90.0,
"network_rx_warning": 80.0, "network_rx_warning": 80.0,

View File

@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
.. ..
.TH "GLANCES" "1" "Jun 08, 2024" "4.0.8" "Glances" .TH "GLANCES" "1" "Jun 29, 2024" "4.1.0" "Glances"
.SH NAME .SH NAME
glances \- An eye on your system glances \- An eye on your system
.SH SYNOPSIS .SH SYNOPSIS

View File

@ -19,7 +19,7 @@ import tracemalloc
# Global name # Global name
# Version should start and end with a numerical char # Version should start and end with a numerical char
# See https://packaging.python.org/specifications/core-metadata/#version # See https://packaging.python.org/specifications/core-metadata/#version
__version__ = '4.0.8' __version__ = '4.1.0'
__apiversion__ = '4' __apiversion__ = '4'
__author__ = 'Nicolas Hennion <nicolas@nicolargo.com>' __author__ = 'Nicolas Hennion <nicolas@nicolargo.com>'
__license__ = 'LGPLv3' __license__ = 'LGPLv3'

View File

@ -95,12 +95,16 @@ class GlancesClientBrowser:
        # Mandatory stats
        try:
            # CPU%
-           cpu_percent = 100 - orjson.loads(s.getPlugin('cpu'))['idle']
-           server['cpu_percent'] = f'{cpu_percent:.1f}'
+           # logger.info(f"CPU stats {s.getPlugin('cpu')}")
+           # logger.info(f"CPU views {s.getPluginView('cpu')}")
+           server['cpu_percent'] = orjson.loads(s.getPlugin('cpu'))['total']
+           server['cpu_percent_decoration'] = orjson.loads(s.getPluginView('cpu'))['total']['decoration']
            # MEM%
            server['mem_percent'] = orjson.loads(s.getPlugin('mem'))['percent']
+           server['mem_percent_decoration'] = orjson.loads(s.getPluginView('mem'))['percent']['decoration']
            # OS (Human Readable name)
            server['hr_name'] = orjson.loads(s.getPlugin('system'))['hr_name']
+           server['hr_name_decoration'] = 'DEFAULT'
        except (OSError, Fault, KeyError) as e:
            logger.debug(f"Error while grabbing stats from server ({e})")
            server['status'] = 'OFFLINE'
@ -120,8 +124,8 @@ class GlancesClientBrowser:
        # Optional stats (load is not available on Windows OS)
        try:
            # LOAD
-           load_min5 = orjson.loads(s.getPlugin('load'))['min5']
-           server['load_min5'] = f'{load_min5:.2f}'
+           server['load_min5'] = round(orjson.loads(s.getPlugin('load'))['min5'], 1)
+           server['load_min5_decoration'] = orjson.loads(s.getPluginView('load'))['min5']['decoration']
        except Exception as e:
            logger.warning(f"Error while grabbing stats from server ({e})")
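
In short, the browser now keeps the raw value and its server-side threshold
"decoration" side by side. A self-contained sketch of the idea, with illustrative
payloads standing in for what ``s.getPlugin('cpu')`` and ``s.getPluginView('cpu')``
return::

    import orjson

    # Illustrative JSON payloads (the real ones come from the remote server proxy)
    raw_stats = b'{"total": 6.7, "idle": 85.4}'
    raw_views = b'{"total": {"decoration": "DEFAULT"}}'

    server = {}
    server['cpu_percent'] = orjson.loads(raw_stats)['total']
    server['cpu_percent_decoration'] = orjson.loads(raw_views)['total']['decoration']
    print(server)  # {'cpu_percent': 6.7, 'cpu_percent_decoration': 'DEFAULT'}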

View File

@ -17,36 +17,30 @@ from glances.timer import Timer
class CpuPercent: class CpuPercent:
"""Get and store the CPU percent.""" """Get and store the CPU percent."""
def __init__(self, cached_timer_cpu=3): def __init__(self, cached_timer_cpu=2):
self.cpu_info = {'cpu_name': None, 'cpu_hz_current': None, 'cpu_hz': None}
self.cpu_percent = 0
self.percpu_percent = []
# Get CPU name
self.cpu_info['cpu_name'] = self.__get_cpu_name()
# cached_timer_cpu is the minimum time interval between stats updates # cached_timer_cpu is the minimum time interval between stats updates
# since last update is passed (will retrieve old cached info instead) # since last update is passed (will retrieve old cached info instead)
self.cached_timer_cpu = cached_timer_cpu self.cached_timer_cpu = cached_timer_cpu
self.timer_cpu = Timer(0)
self.timer_percpu = Timer(0)
# psutil.cpu_freq() consumes lots of CPU # psutil.cpu_freq() consumes lots of CPU
# So refresh the stats every refresh*2 (6 seconds) # So refresh CPU frequency stats every refresh * 2
self.cached_timer_cpu_info = cached_timer_cpu * 2 self.cached_timer_cpu_info = cached_timer_cpu * 2
# Get CPU name
self.timer_cpu_info = Timer(0) self.timer_cpu_info = Timer(0)
self.cpu_info = {'cpu_name': self.__get_cpu_name(), 'cpu_hz_current': None, 'cpu_hz': None}
# Warning from PsUtil documentation
# The first time this function is called with interval = 0.0 or None
# it will return a meaningless 0.0 value which you are supposed to ignore.
self.timer_cpu = Timer(0)
self.cpu_percent = self.get_cpu()
self.timer_percpu = Timer(0)
self.percpu_percent = self.get_percpu()
def get_key(self): def get_key(self):
"""Return the key of the per CPU list.""" """Return the key of the per CPU list."""
return 'cpu_number' return 'cpu_number'
def get(self, percpu=False):
"""Update and/or return the CPU using the psutil library.
If percpu, return the percpu stats"""
if percpu:
return self.__get_percpu()
return self.__get_cpu()
def get_info(self): def get_info(self):
"""Get additional information about the CPU""" """Get additional information about the CPU"""
# Never update more than 1 time per cached_timer_cpu_info # Never update more than 1 time per cached_timer_cpu_info
@ -71,7 +65,7 @@ class CpuPercent:
def __get_cpu_name(self): def __get_cpu_name(self):
# Get the CPU name once from the /proc/cpuinfo file # Get the CPU name once from the /proc/cpuinfo file
# Read the first line with the "model name" # Read the first line with the "model name" ("Model" for Raspberry Pi)
ret = None ret = None
try: try:
cpuinfo_file = open('/proc/cpuinfo').readlines() cpuinfo_file = open('/proc/cpuinfo').readlines()
@ -79,26 +73,31 @@ class CpuPercent:
pass pass
else: else:
for line in cpuinfo_file: for line in cpuinfo_file:
if line.startswith('model name'): if line.startswith('model name') or line.startswith('Model') or line.startswith('cpu model'):
ret = line.split(':')[1].strip() ret = line.split(':')[1].strip()
break break
return ret if ret else 'CPU' return ret if ret else 'CPU'
def __get_cpu(self): def get_cpu(self):
"""Update and/or return the CPU using the psutil library.""" """Update and/or return the CPU using the psutil library."""
# Never update more than 1 time per cached_timer_cpu # Never update more than 1 time per cached_timer_cpu
if self.timer_cpu.finished(): if self.timer_cpu.finished():
self.cpu_percent = psutil.cpu_percent(interval=0.0)
# Reset timer for cache # Reset timer for cache
self.timer_cpu.reset(duration=self.cached_timer_cpu) self.timer_cpu.reset(duration=self.cached_timer_cpu)
# Update the stats
self.cpu_percent = psutil.cpu_percent(interval=0.0)
return self.cpu_percent return self.cpu_percent
def __get_percpu(self): def get_percpu(self):
"""Update and/or return the per CPU list using the psutil library.""" """Update and/or return the per CPU list using the psutil library."""
# Never update more than 1 time per cached_timer_cpu # Never update more than 1 time per cached_timer_cpu
if self.timer_percpu.finished(): if self.timer_percpu.finished():
self.percpu_percent = [] # Reset timer for cache
for cpu_number, cputimes in enumerate(psutil.cpu_times_percent(interval=0.0, percpu=True)): self.timer_percpu.reset(duration=self.cached_timer_cpu)
# Get stats
percpu_percent = []
psutil_percpu = enumerate(psutil.cpu_times_percent(interval=0.0, percpu=True))
for cpu_number, cputimes in psutil_percpu:
cpu = { cpu = {
'key': self.get_key(), 'key': self.get_key(),
'cpu_number': cpu_number, 'cpu_number': cpu_number,
@ -123,9 +122,9 @@ class CpuPercent:
if hasattr(cputimes, 'guest_nice'): if hasattr(cputimes, 'guest_nice'):
cpu['guest_nice'] = cputimes.guest_nice cpu['guest_nice'] = cputimes.guest_nice
# Append new CPU to the list # Append new CPU to the list
self.percpu_percent.append(cpu) percpu_percent.append(cpu)
# Reset timer for cache # Update stats
self.timer_percpu.reset(duration=self.cached_timer_cpu) self.percpu_percent = percpu_percent
return self.percpu_percent return self.percpu_percent
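
The caching logic used by get_cpu() and get_percpu() boils down to the following
pattern (a simplified sketch reusing only names visible above; it is not the full
CpuPercent implementation)::

    import psutil

    from glances.timer import Timer

    class CachedCpu:
        """Minimal sketch of the Timer-based cache."""

        def __init__(self, cached_timer_cpu=2):
            self.cached_timer_cpu = cached_timer_cpu
            self.timer_cpu = Timer(0)  # already expired, so the first get refreshes
            # psutil.cpu_percent(interval=0.0) returns a meaningless 0.0 on the
            # very first call, so the value is primed once at init time.
            self.cpu_percent = psutil.cpu_percent(interval=0.0)

        def get_cpu(self):
            # Never update more than once per cached_timer_cpu seconds
            if self.timer_cpu.finished():
                self.timer_cpu.reset(duration=self.cached_timer_cpu)
                self.cpu_percent = psutil.cpu_percent(interval=0.0)
            return self.cpu_percent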

View File

@ -196,11 +196,10 @@ class GlancesExport:
        for key, value in sorted(iteritems(stats)):
            if isinstance(value, bool):
                value = json_dumps(value)

            if isinstance(value, list):
-               try:
-                   value = value[0]
-               except IndexError:
-                   value = ''
+               value = ' '.join([str(v) for v in value])

            if isinstance(value, dict):
                item_names, item_values = self.build_export(value)
                item_names = [pre_key + key.lower() + str(i) for i in item_names]
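
The effect of this change on exported list values, with made-up stats::

    stats = {'cpu_percent': 1.5,
             'image': ['nicolargo/glances:latest'],
             'io_counters': [8411136, 0, 0, 0, 0]}

    # Before, only the first element of a list was kept (or '' for an empty list).
    # Now every element is exported, joined into one space-separated string:
    flat = {k: ' '.join(str(i) for i in v) if isinstance(v, list) else v
            for k, v in stats.items()}
    # flat == {'cpu_percent': 1.5,
    #          'image': 'nicolargo/glances:latest',
    #          'io_counters': '8411136 0 0 0 0'}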

View File

@ -34,10 +34,12 @@ class Export(GlancesExport):
        # Manage options (command line arguments overwrite configuration file)
        self.path = args.export_graph_path or self.path
-       self.generate_every = int(getattr(self, 'generate_every', 0))
-       self.width = int(getattr(self, 'width', 800))
-       self.height = int(getattr(self, 'height', 600))
-       self.style = getattr(pygal.style, getattr(self, 'style', 'DarkStyle'), pygal.style.DarkStyle)
+       self.generate_every = int(getattr(self, 'generate_every', 0) or 0)
+       self.width = int(getattr(self, 'width', 800) or 800)
+       self.height = int(getattr(self, 'height', 600) or 600)
+       self.style = (
+           getattr(pygal.style, getattr(self, 'style', 'DarkStyle'), pygal.style.DarkStyle) or pygal.style.DarkStyle
+       )

        # Create export folder
        try:
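
The added ``or`` fallbacks guard against options that exist in the ``[graph]``
section of glances.conf but are left empty, in which case the attribute can be
``None`` (a small illustration with a hypothetical object, not Glances code)::

    class FakeGraphExport:
        width = None  # "width=" present in the configuration file but left blank

    e = FakeGraphExport()
    # int(getattr(e, 'width', 800))       would raise TypeError: the attribute exists but is None
    print(int(getattr(e, 'width', 800) or 800))  # 800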

View File

@ -162,7 +162,7 @@ class _GlancesCurses:
self._init_cursor() self._init_cursor()
# Init the colors # Init the colors
self._init_colors() self.colors_list = build_colors_list(args)
# Init main window # Init main window
self.term_window = self.screen.subwin(0, 0) self.term_window = self.screen.subwin(0, 0)
@ -195,8 +195,10 @@ class _GlancesCurses:
"""Load the outputs section of the configuration file.""" """Load the outputs section of the configuration file."""
if config is not None and config.has_section('outputs'): if config is not None and config.has_section('outputs'):
logger.debug('Read the outputs section in the configuration file') logger.debug('Read the outputs section in the configuration file')
# Separator ? # Separator
self.args.enable_separator = config.get_bool_value('outputs', 'separator', default=True) self.args.enable_separator = config.get_bool_value(
'outputs', 'separator', default=self.args.enable_separator
)
# Set the left sidebar list # Set the left sidebar list
self._left_sidebar = config.get_list_value('outputs', 'left_menu', default=self._left_sidebar) self._left_sidebar = config.get_list_value('outputs', 'left_menu', default=self._left_sidebar)
@ -214,133 +216,6 @@ class _GlancesCurses:
curses.cbreak() curses.cbreak()
self.set_cursor(0) self.set_cursor(0)
def _init_colors(self):
"""Init the Curses color layout."""
# Set curses options
try:
if hasattr(curses, 'start_color'):
curses.start_color()
logger.debug(f'Curses interface compatible with {curses.COLORS} colors')
if hasattr(curses, 'use_default_colors'):
curses.use_default_colors()
except Exception as e:
logger.warning(f'Error initializing terminal color ({e})')
# Init colors
if self.args.disable_bold:
A_BOLD = 0
self.args.disable_bg = True
else:
A_BOLD = curses.A_BOLD
self.title_color = A_BOLD
self.title_underline_color = A_BOLD | curses.A_UNDERLINE
self.help_color = A_BOLD
if curses.has_colors():
# The screen is compatible with a colored design
# ex: export TERM=xterm-256color
# export TERM=xterm-color
curses.init_pair(1, -1, -1)
if self.args.disable_bg:
curses.init_pair(2, curses.COLOR_RED, -1)
curses.init_pair(3, curses.COLOR_GREEN, -1)
curses.init_pair(5, curses.COLOR_MAGENTA, -1)
else:
curses.init_pair(2, -1, curses.COLOR_RED)
curses.init_pair(3, -1, curses.COLOR_GREEN)
curses.init_pair(5, -1, curses.COLOR_MAGENTA)
curses.init_pair(4, curses.COLOR_BLUE, -1)
curses.init_pair(6, curses.COLOR_RED, -1)
curses.init_pair(7, curses.COLOR_GREEN, -1)
curses.init_pair(8, curses.COLOR_MAGENTA, -1)
# Colors text styles
self.no_color = curses.color_pair(1)
self.default_color = curses.color_pair(3) | A_BOLD
self.nice_color = curses.color_pair(8)
self.cpu_time_color = curses.color_pair(8)
self.ifCAREFUL_color = curses.color_pair(4) | A_BOLD
self.ifWARNING_color = curses.color_pair(5) | A_BOLD
self.ifCRITICAL_color = curses.color_pair(2) | A_BOLD
self.default_color2 = curses.color_pair(7)
self.ifCAREFUL_color2 = curses.color_pair(4)
self.ifWARNING_color2 = curses.color_pair(8) | A_BOLD
self.ifCRITICAL_color2 = curses.color_pair(6) | A_BOLD
self.ifINFO_color = curses.color_pair(4)
self.filter_color = A_BOLD
self.selected_color = A_BOLD
self.separator = curses.color_pair(1)
if curses.COLORS > 8:
# ex: export TERM=xterm-256color
colors_list = [curses.COLOR_CYAN, curses.COLOR_YELLOW]
for i in range(0, 3):
try:
curses.init_pair(i + 9, colors_list[i], -1)
except Exception:
curses.init_pair(i + 9, -1, -1)
self.filter_color = curses.color_pair(9) | A_BOLD
self.selected_color = curses.color_pair(10) | A_BOLD
# Define separator line style
try:
curses.init_color(11, 500, 500, 500)
curses.init_pair(11, curses.COLOR_BLACK, -1)
self.separator = curses.color_pair(11)
except Exception:
# Catch exception in TMUX
pass
else:
# The screen is NOT compatible with a colored design
# switch to B&W text styles
# ex: export TERM=xterm-mono
self.no_color = -1
self.default_color = -1
self.nice_color = A_BOLD
self.cpu_time_color = A_BOLD
self.ifCAREFUL_color = A_BOLD
self.ifWARNING_color = curses.A_UNDERLINE
self.ifCRITICAL_color = curses.A_REVERSE
self.default_color2 = -1
self.ifCAREFUL_color2 = A_BOLD
self.ifWARNING_color2 = curses.A_UNDERLINE
self.ifCRITICAL_color2 = curses.A_REVERSE
self.ifINFO_color = A_BOLD
self.filter_color = A_BOLD
self.selected_color = A_BOLD
self.separator = -1
# Define the colors list (hash table) for stats
self.colors_list = {
'DEFAULT': self.no_color,
'UNDERLINE': curses.A_UNDERLINE,
'BOLD': A_BOLD,
'SORT': curses.A_UNDERLINE | A_BOLD,
'OK': self.default_color2,
'MAX': self.default_color2 | A_BOLD,
'FILTER': self.filter_color,
'TITLE': self.title_color,
'PROCESS': self.default_color2,
'PROCESS_SELECTED': self.default_color2 | curses.A_UNDERLINE,
'STATUS': self.default_color2,
'NICE': self.nice_color,
'CPU_TIME': self.cpu_time_color,
'CAREFUL': self.ifCAREFUL_color2,
'WARNING': self.ifWARNING_color2,
'CRITICAL': self.ifCRITICAL_color2,
'OK_LOG': self.default_color,
'CAREFUL_LOG': self.ifCAREFUL_color,
'WARNING_LOG': self.ifWARNING_color,
'CRITICAL_LOG': self.ifCRITICAL_color,
'PASSWORD': curses.A_PROTECT,
'SELECTED': self.selected_color,
'INFO': self.ifINFO_color,
'ERROR': self.selected_color,
'SEPARATOR': self.separator,
}
def set_cursor(self, value): def set_cursor(self, value):
"""Configure the curse cursor appearance. """Configure the curse cursor appearance.
@ -495,7 +370,7 @@ class _GlancesCurses:
logger.info(f"Stop Glances (keypressed: {self.pressedkey})") logger.info(f"Stop Glances (keypressed: {self.pressedkey})")
def _handle_refresh(self): def _handle_refresh(self):
pass glances_processes.reset_internal_cache()
def loop_position(self): def loop_position(self):
"""Return the current sort in the loop""" """Return the current sort in the loop"""
@ -570,13 +445,14 @@ class _GlancesCurses:
self.new_line() self.new_line()
self.line -= 1 self.line -= 1
line_width = self.term_window.getmaxyx()[1] - self.column line_width = self.term_window.getmaxyx()[1] - self.column
self.term_window.addnstr( if self.line >= 0:
self.line, self.term_window.addnstr(
self.column, self.line,
unicode_message('MEDIUM_LINE', self.args) * line_width, self.column,
line_width, unicode_message('MEDIUM_LINE', self.args) * line_width,
self.colors_list[color], line_width,
) self.colors_list[color],
)
def __get_stat_display(self, stats, layer): def __get_stat_display(self, stats, layer):
"""Return a dict of dict with all the stats display. """Return a dict of dict with all the stats display.
@ -1300,3 +1176,128 @@ class GlancesTextboxYesNo(Textbox):
def do_command(self, ch): def do_command(self, ch):
return super().do_command(ch) return super().do_command(ch)
def build_colors_list(args):
"""Init the Curses color layout."""
# Set curses options
try:
if hasattr(curses, 'start_color'):
curses.start_color()
logger.debug(f'Curses interface compatible with {curses.COLORS} colors')
if hasattr(curses, 'use_default_colors'):
curses.use_default_colors()
except Exception as e:
logger.warning(f'Error initializing terminal color ({e})')
# Init colors
if args.disable_bold:
A_BOLD = 0
args.disable_bg = True
else:
A_BOLD = curses.A_BOLD
title_color = A_BOLD
if curses.has_colors():
# The screen is compatible with a colored design
# ex: export TERM=xterm-256color
# export TERM=xterm-color
curses.init_pair(1, -1, -1)
if args.disable_bg:
curses.init_pair(2, curses.COLOR_RED, -1)
curses.init_pair(3, curses.COLOR_GREEN, -1)
curses.init_pair(5, curses.COLOR_MAGENTA, -1)
else:
curses.init_pair(2, -1, curses.COLOR_RED)
curses.init_pair(3, -1, curses.COLOR_GREEN)
curses.init_pair(5, -1, curses.COLOR_MAGENTA)
curses.init_pair(4, curses.COLOR_BLUE, -1)
curses.init_pair(6, curses.COLOR_RED, -1)
curses.init_pair(7, curses.COLOR_GREEN, -1)
curses.init_pair(8, curses.COLOR_MAGENTA, -1)
# Colors text styles
no_color = curses.color_pair(1)
default_color = curses.color_pair(3) | A_BOLD
nice_color = curses.color_pair(8)
cpu_time_color = curses.color_pair(8)
ifCAREFUL_color = curses.color_pair(4) | A_BOLD
ifWARNING_color = curses.color_pair(5) | A_BOLD
ifCRITICAL_color = curses.color_pair(2) | A_BOLD
default_color2 = curses.color_pair(7)
ifCAREFUL_color2 = curses.color_pair(4)
ifWARNING_color2 = curses.color_pair(8) | A_BOLD
ifCRITICAL_color2 = curses.color_pair(6) | A_BOLD
ifINFO_color = curses.color_pair(4)
filter_color = A_BOLD
selected_color = A_BOLD
separator = curses.color_pair(1)
if curses.COLORS > 8:
# ex: export TERM=xterm-256color
colors_list = [curses.COLOR_CYAN, curses.COLOR_YELLOW]
for i in range(0, 3):
try:
curses.init_pair(i + 9, colors_list[i], -1)
except Exception:
curses.init_pair(i + 9, -1, -1)
filter_color = curses.color_pair(9) | A_BOLD
selected_color = curses.color_pair(10) | A_BOLD
# Define separator line style
try:
curses.init_color(11, 500, 500, 500)
curses.init_pair(11, curses.COLOR_BLACK, -1)
separator = curses.color_pair(11)
except Exception:
# Catch exception in TMUX
pass
else:
# The screen is NOT compatible with a colored design
# switch to B&W text styles
# ex: export TERM=xterm-mono
no_color = -1
default_color = -1
nice_color = A_BOLD
cpu_time_color = A_BOLD
ifCAREFUL_color = A_BOLD
ifWARNING_color = curses.A_UNDERLINE
ifCRITICAL_color = curses.A_REVERSE
default_color2 = -1
ifCAREFUL_color2 = A_BOLD
ifWARNING_color2 = curses.A_UNDERLINE
ifCRITICAL_color2 = curses.A_REVERSE
ifINFO_color = A_BOLD
filter_color = A_BOLD
selected_color = A_BOLD
separator = -1
# Define the colors list (hash table) for stats
return {
'DEFAULT': no_color,
'UNDERLINE': curses.A_UNDERLINE,
'BOLD': A_BOLD,
'SORT': curses.A_UNDERLINE | A_BOLD,
'OK': default_color2,
'MAX': default_color2 | A_BOLD,
'FILTER': filter_color,
'TITLE': title_color,
'PROCESS': default_color2,
'PROCESS_SELECTED': default_color2 | curses.A_UNDERLINE,
'STATUS': default_color2,
'NICE': nice_color,
'CPU_TIME': cpu_time_color,
'CAREFUL': ifCAREFUL_color2,
'WARNING': ifWARNING_color2,
'CRITICAL': ifCRITICAL_color2,
'OK_LOG': default_color,
'CAREFUL_LOG': ifCAREFUL_color,
'WARNING_LOG': ifWARNING_color,
'CRITICAL_LOG': ifCRITICAL_color,
'PASSWORD': curses.A_PROTECT,
'SELECTED': selected_color,
'INFO': ifINFO_color,
'ERROR': selected_color,
'SEPARATOR': separator,
}
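For context, a minimal usage sketch of the new module-level helper. The args object here is a hypothetical stand-in (only the disable_bold and disable_bg attributes read above are set), and the import path is an assumption:

# Sketch only, not part of the patch: exercise build_colors_list() inside curses.wrapper,
# which calls initscr() before the color setup performed by the helper.
import curses
from argparse import Namespace
from glances.outputs.glances_curses import build_colors_list  # assumed import path

def _demo(stdscr):
    args = Namespace(disable_bold=False, disable_bg=False)  # hypothetical args stand-in
    colors = build_colors_list(args)
    stdscr.addnstr(0, 0, "WARNING sample", 14, colors['WARNING'])
    stdscr.addnstr(1, 0, "OK sample", 9, colors['OK'])
    stdscr.getch()

curses.wrapper(_demo)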


@ -24,11 +24,11 @@ class GlancesCursesBrowser(_GlancesCurses):
super().__init__(args=args) super().__init__(args=args)
_colors_list = { _colors_list = {
'UNKNOWN': self.no_color, 'UNKNOWN': self.colors_list['DEFAULT'],
'SNMP': self.default_color2, 'SNMP': self.colors_list['OK'],
'ONLINE': self.default_color2, 'ONLINE': self.colors_list['OK'],
'OFFLINE': self.ifCRITICAL_color2, 'OFFLINE': self.colors_list['CRITICAL'],
'PROTECTED': self.ifWARNING_color2, 'PROTECTED': self.colors_list['WARNING'],
} }
self.colors_list.update(_colors_list) self.colors_list.update(_colors_list)
@ -299,13 +299,11 @@ class GlancesCursesBrowser(_GlancesCurses):
# Item description: [stats_id, column name, column size] # Item description: [stats_id, column name, column size]
column_def = [ column_def = [
['name', 'Name', 16], ['name', 'Name', 16],
['alias', None, None],
['load_min5', 'LOAD', 6], ['load_min5', 'LOAD', 6],
['cpu_percent', 'CPU%', 5], ['cpu_percent', 'CPU%', 5],
['mem_percent', 'MEM%', 5], ['mem_percent', 'MEM%', 5],
['status', 'STATUS', 9], ['status', 'STATUS', 9],
['ip', 'IP', 15], ['ip', 'IP', 15],
# ['port', 'PORT', 5],
['hr_name', 'OS', 16], ['hr_name', 'OS', 16],
] ]
y = 2 y = 2
@ -331,24 +329,10 @@ class GlancesCursesBrowser(_GlancesCurses):
# Display table # Display table
line = 0 line = 0
for v in current_page: for server_stat in current_page:
# Limit the number of displayed server (see issue #1256) # Limit the number of displayed server (see issue #1256)
if line >= stats_max: if line >= stats_max:
continue continue
# Get server stats
server_stat = {}
for c in column_def:
try:
server_stat[c[0]] = v[c[0]]
except KeyError as e:
logger.debug(f"Cannot grab stats {c[0]} from server (KeyError: {e})")
server_stat[c[0]] = '?'
# Display alias instead of name
try:
if c[0] == 'alias' and v[c[0]] is not None:
server_stat['name'] = v[c[0]]
except KeyError:
pass
# Display line for server stats # Display line for server stats
cpt = 0 cpt = 0
@ -362,9 +346,20 @@ class GlancesCursesBrowser(_GlancesCurses):
# Display the line # Display the line
xc += 2 xc += 2
for c in column_def: for c in column_def:
if xc < screen_x and y < screen_y and c[1] is not None: if xc < screen_x and y < screen_y:
# Display server stats # Display server stats
self.term_window.addnstr(y, xc, format(server_stat[c[0]]), c[2], self.colors_list[v['status']]) value = format(server_stat.get(c[0], '?'))
if c[0] == 'name' and 'alias' in server_stat:
value = server_stat['alias']
decoration = self.colors_list.get(
server_stat[c[0] + '_decoration'].replace('_LOG', '')
if c[0] + '_decoration' in server_stat
else self.colors_list[server_stat['status']],
self.colors_list['DEFAULT'],
)
if c[0] == 'status':
decoration = self.colors_list[server_stat['status']]
self.term_window.addnstr(y, xc, value, c[2], decoration)
xc += c[2] + self.space_between_column xc += c[2] + self.space_between_column
cpt += 1 cpt += 1
# Next line, next server... # Next line, next server...


@ -185,6 +185,7 @@ class GlancesRestfulApi:
router.add_api_route( router.add_api_route(
f'/api/{self.API_VERSION}/status', f'/api/{self.API_VERSION}/status',
status_code=status.HTTP_200_OK, status_code=status.HTTP_200_OK,
methods=['HEAD', 'GET'],
response_class=ORJSONResponse, response_class=ORJSONResponse,
endpoint=self._api_status, endpoint=self._api_status,
) )
@ -262,7 +263,7 @@ class GlancesRestfulApi:
endpoint=self._api_item_unit, endpoint=self._api_item_unit,
) )
router.add_api_route( router.add_api_route(
f'/api/{self.API_VERSION}/{{plugin}}/{{item}}/{{value}}', f'/api/{self.API_VERSION}/{{plugin}}/{{item}}/{{value:path}}',
response_class=ORJSONResponse, response_class=ORJSONResponse,
endpoint=self._api_value, endpoint=self._api_value,
) )
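The :path converter lets the {value} segment contain slashes, which matters for items such as mount points. A hedged client-side sketch (host, port 61208 and API version 4 are assumptions based on Glances defaults, and requests must be installed):

# Sketch only: query a value that itself contains slashes, e.g. the fs plugin keyed by mnt_point.
import requests

url = "http://localhost:61208/api/4/fs/mnt_point//home"  # '/home' is an illustrative mount point
response = requests.get(url, timeout=5)
print(response.status_code, response.json())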


@ -1056,12 +1056,12 @@
} }
}, },
"node_modules/braces": { "node_modules/braces": {
"version": "3.0.2", "version": "3.0.3",
"resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz", "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.3.tgz",
"integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==", "integrity": "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==",
"dev": true, "dev": true,
"dependencies": { "dependencies": {
"fill-range": "^7.0.1" "fill-range": "^7.1.1"
}, },
"engines": { "engines": {
"node": ">=8" "node": ">=8"
@ -2300,9 +2300,9 @@
} }
}, },
"node_modules/fill-range": { "node_modules/fill-range": {
"version": "7.0.1", "version": "7.1.1",
"resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz", "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.1.1.tgz",
"integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==", "integrity": "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg==",
"dev": true, "dev": true,
"dependencies": { "dependencies": {
"to-regex-range": "^5.0.1" "to-regex-range": "^5.0.1"
@ -5800,15 +5800,6 @@
"integrity": "sha512-CC1bOL87PIWSBhDcTrdeLo6eGT7mCFtrg0uIJtqJUFyK+eJnzl8A1niH56uu7KMa5XFrtiV+AQuHO3n7DsHnLQ==", "integrity": "sha512-CC1bOL87PIWSBhDcTrdeLo6eGT7mCFtrg0uIJtqJUFyK+eJnzl8A1niH56uu7KMa5XFrtiV+AQuHO3n7DsHnLQ==",
"dev": true "dev": true
}, },
"node_modules/word-wrap": {
"version": "1.2.5",
"resolved": "https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.5.tgz",
"integrity": "sha512-BN22B5eaMMI9UMtjrGd5g5eCYPpCPDUy0FJXbYsaT5zYxjFOckS53SQDE3pWkVoWpHXVb3BrYcEN4Twa55B5cA==",
"dev": true,
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/wrappy": { "node_modules/wrappy": {
"version": "1.0.2", "version": "1.0.2",
"resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", "resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz",
@ -5816,9 +5807,9 @@
"dev": true "dev": true
}, },
"node_modules/ws": { "node_modules/ws": {
"version": "8.13.0", "version": "8.17.1",
"resolved": "https://registry.npmjs.org/ws/-/ws-8.13.0.tgz", "resolved": "https://registry.npmjs.org/ws/-/ws-8.17.1.tgz",
"integrity": "sha512-x9vcZYTrFPC7aSIbj7sRCYo7L/Xb8Iy+pW0ng0wt2vCJv7M9HOMy0UoN3rr+IFC7hb7vXoqS+P9ktyLLLhO+LA==", "integrity": "sha512-6XQFvXTkbfUOZOKKILFG1PDK2NDQs4azKQl26T0YS5CxqWLgXajbPZ+h4gZekJyRqFU8pvnbAbbs/3TgRPy+GQ==",
"dev": true, "dev": true,
"engines": { "engines": {
"node": ">=10.0.0" "node": ">=10.0.0"
@ -6736,12 +6727,12 @@
} }
}, },
"braces": { "braces": {
"version": "3.0.2", "version": "3.0.3",
"resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz", "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.3.tgz",
"integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==", "integrity": "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==",
"dev": true, "dev": true,
"requires": { "requires": {
"fill-range": "^7.0.1" "fill-range": "^7.1.1"
} }
}, },
"browserslist": { "browserslist": {
@ -7663,9 +7654,9 @@
} }
}, },
"fill-range": { "fill-range": {
"version": "7.0.1", "version": "7.1.1",
"resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz", "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.1.1.tgz",
"integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==", "integrity": "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg==",
"dev": true, "dev": true,
"requires": { "requires": {
"to-regex-range": "^5.0.1" "to-regex-range": "^5.0.1"
@ -10151,11 +10142,6 @@
"integrity": "sha512-CC1bOL87PIWSBhDcTrdeLo6eGT7mCFtrg0uIJtqJUFyK+eJnzl8A1niH56uu7KMa5XFrtiV+AQuHO3n7DsHnLQ==", "integrity": "sha512-CC1bOL87PIWSBhDcTrdeLo6eGT7mCFtrg0uIJtqJUFyK+eJnzl8A1niH56uu7KMa5XFrtiV+AQuHO3n7DsHnLQ==",
"dev": true "dev": true
}, },
"word-wrap": {
"version": "https://registry.npmjs.org/word-wrap/-/word-wrap-1.2.5.tgz",
"integrity": "sha512-BN22B5eaMMI9UMtjrGd5g5eCYPpCPDUy0FJXbYsaT5zYxjFOckS53SQDE3pWkVoWpHXVb3BrYcEN4Twa55B5cA==",
"dev": true
},
"wrappy": { "wrappy": {
"version": "1.0.2", "version": "1.0.2",
"resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", "resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz",
@ -10163,9 +10149,9 @@
"dev": true "dev": true
}, },
"ws": { "ws": {
"version": "8.13.0", "version": "8.17.1",
"resolved": "https://registry.npmjs.org/ws/-/ws-8.13.0.tgz", "resolved": "https://registry.npmjs.org/ws/-/ws-8.17.1.tgz",
"integrity": "sha512-x9vcZYTrFPC7aSIbj7sRCYo7L/Xb8Iy+pW0ng0wt2vCJv7M9HOMy0UoN3rr+IFC7hb7vXoqS+P9ktyLLLhO+LA==", "integrity": "sha512-6XQFvXTkbfUOZOKKILFG1PDK2NDQs4azKQl26T0YS5CxqWLgXajbPZ+h4gZekJyRqFU8pvnbAbbs/3TgRPy+GQ==",
"dev": true, "dev": true,
"requires": {} "requires": {}
}, },


@ -9,10 +9,11 @@
"""Docker Extension unit for Glances' Containers plugin.""" """Docker Extension unit for Glances' Containers plugin."""
import time import time
from typing import Any, Dict, List, Optional, Tuple
from glances.globals import iterkeys, itervalues, nativestr, pretty_date, replace_special_chars from glances.globals import iterkeys, itervalues, nativestr, pretty_date, replace_special_chars
from glances.logger import logger from glances.logger import logger
from glances.plugins.containers.stats_streamer import StatsStreamer from glances.plugins.containers.stats_streamer import ThreadedIterableStreamer
# Docker-py library (optional and Linux-only) # Docker-py library (optional and Linux-only)
# https://github.com/docker/docker-py # https://github.com/docker/docker-py
@ -43,7 +44,7 @@ class DockerStatsFetcher:
# Threaded Streamer # Threaded Streamer
stats_iterable = container.stats(decode=True) stats_iterable = container.stats(decode=True)
self._streamer = StatsStreamer(stats_iterable, initial_stream_value={}) self._streamer = ThreadedIterableStreamer(stats_iterable, initial_stream_value={})
def _log_debug(self, msg, exception=None): def _log_debug(self, msg, exception=None):
logger.debug(f"containers (Docker) ID: {self._container.id} - {msg} ({exception}) ") logger.debug(f"containers (Docker) ID: {self._container.id} - {msg} ({exception}) ")
@ -53,7 +54,7 @@ class DockerStatsFetcher:
self._streamer.stop() self._streamer.stop()
@property @property
def activity_stats(self): def activity_stats(self) -> Dict[str, Dict[str, Any]]:
"""Activity Stats """Activity Stats
Each successive access of activity_stats will cause computation of activity_stats Each successive access of activity_stats will cause computation of activity_stats
@ -63,7 +64,7 @@ class DockerStatsFetcher:
self._last_stats_computed_time = time.time() self._last_stats_computed_time = time.time()
return computed_activity_stats return computed_activity_stats
def _compute_activity_stats(self): def _compute_activity_stats(self) -> Dict[str, Dict[str, Any]]:
with self._streamer.result_lock: with self._streamer.result_lock:
io_stats = self._get_io_stats() io_stats = self._get_io_stats()
cpu_stats = self._get_cpu_stats() cpu_stats = self._get_cpu_stats()
@ -78,11 +79,11 @@ class DockerStatsFetcher:
} }
@property @property
def time_since_update(self): def time_since_update(self) -> float:
# In case no update, default to 1 # In case no update, default to 1
return max(1, self._streamer.last_update_time - self._last_stats_computed_time) return max(1, self._streamer.last_update_time - self._last_stats_computed_time)
def _get_cpu_stats(self): def _get_cpu_stats(self) -> Optional[Dict[str, float]]:
"""Return the container CPU usage. """Return the container CPU usage.
Output: a dict {'total': 1.49} Output: a dict {'total': 1.49}
@ -116,7 +117,7 @@ class DockerStatsFetcher:
# Return the stats # Return the stats
return stats return stats
def _get_memory_stats(self): def _get_memory_stats(self) -> Optional[Dict[str, float]]:
"""Return the container MEMORY. """Return the container MEMORY.
Output: a dict {'usage': ..., 'limit': ..., 'inactive_file': ...} Output: a dict {'usage': ..., 'limit': ..., 'inactive_file': ...}
@ -139,7 +140,7 @@ class DockerStatsFetcher:
# Return the stats # Return the stats
return stats return stats
def _get_network_stats(self): def _get_network_stats(self) -> Optional[Dict[str, float]]:
"""Return the container network usage using the Docker API (v1.0 or higher). """Return the container network usage using the Docker API (v1.0 or higher).
Output: a dict {'time_since_update': 3000, 'rx': 10, 'tx': 65}. Output: a dict {'time_since_update': 3000, 'rx': 10, 'tx': 65}.
@ -168,7 +169,7 @@ class DockerStatsFetcher:
# Return the stats # Return the stats
return stats return stats
def _get_io_stats(self): def _get_io_stats(self) -> Optional[Dict[str, float]]:
"""Return the container IO usage using the Docker API (v1.0 or higher). """Return the container IO usage using the Docker API (v1.0 or higher).
Output: a dict {'time_since_update': 3000, 'ior': 10, 'iow': 65}. Output: a dict {'time_since_update': 3000, 'ior': 10, 'iow': 65}.
@ -221,7 +222,7 @@ class DockerContainersExtension:
self.connect() self.connect()
def connect(self): def connect(self) -> None:
"""Connect to the Docker server.""" """Connect to the Docker server."""
# Init the Docker API Client # Init the Docker API Client
try: try:
@ -236,12 +237,12 @@ class DockerContainersExtension:
# return self.client.version() # return self.client.version()
return {} return {}
def stop(self): def stop(self) -> None:
# Stop all streaming threads # Stop all streaming threads
for t in itervalues(self.stats_fetchers): for t in itervalues(self.stats_fetchers):
t.stop() t.stop()
def update(self, all_tag): def update(self, all_tag) -> Tuple[Dict, List[Dict]]:
"""Update Docker stats using the input method.""" """Update Docker stats using the input method."""
if not self.client: if not self.client:
@ -280,22 +281,30 @@ class DockerContainersExtension:
return version_stats, container_stats return version_stats, container_stats
@property @property
def key(self): def key(self) -> str:
"""Return the key of the list.""" """Return the key of the list."""
return 'name' return 'name'
def generate_stats(self, container): def generate_stats(self, container) -> Dict[str, Any]:
# Init the stats for the current container # Init the stats for the current container
stats = { stats = {
'key': self.key, 'key': self.key,
# Export name
'name': nativestr(container.name), 'name': nativestr(container.name),
# Container Id
'id': container.id, 'id': container.id,
# Container Status (from attrs)
'status': container.attrs['State']['Status'], 'status': container.attrs['State']['Status'],
'created': container.attrs['Created'], 'created': container.attrs['Created'],
'command': [], 'command': [],
'io': {},
'cpu': {},
'memory': {},
'network': {},
'io_rx': None,
'io_wx': None,
'cpu_percent': None,
'memory_percent': None,
'network_rx': None,
'network_tx': None,
'uptime': None,
} }
# Container Image # Container Image
@ -312,37 +321,31 @@ class DockerContainersExtension:
if not stats['command']: if not stats['command']:
stats['command'] = None stats['command'] = None
if stats['status'] in self.CONTAINER_ACTIVE_STATUS: if stats['status'] not in self.CONTAINER_ACTIVE_STATUS:
started_at = container.attrs['State']['StartedAt'] return stats
stats_fetcher = self.stats_fetchers[container.id]
activity_stats = stats_fetcher.activity_stats
stats.update(activity_stats)
# Additional fields stats_fetcher = self.stats_fetchers[container.id]
stats['cpu_percent'] = stats["cpu"]['total'] activity_stats = stats_fetcher.activity_stats
stats['memory_usage'] = stats["memory"].get('usage') stats.update(activity_stats)
if stats['memory'].get('cache') is not None:
stats['memory_usage'] -= stats['memory']['cache'] # Additional fields
if 'time_since_update' in stats['io']: stats['cpu_percent'] = stats['cpu']['total']
stats['io_rx'] = stats['io'].get('ior') // stats['io'].get('time_since_update') stats['memory_usage'] = stats['memory'].get('usage')
stats['io_wx'] = stats['io'].get('iow') // stats['io'].get('time_since_update') if stats['memory'].get('cache') is not None:
if 'time_since_update' in stats['network']: stats['memory_usage'] -= stats['memory']['cache']
stats['network_rx'] = stats['network'].get('rx') // stats['network'].get('time_since_update')
stats['network_tx'] = stats['network'].get('tx') // stats['network'].get('time_since_update') if all(k in stats['io'] for k in ('ior', 'iow', 'time_since_update')):
stats['uptime'] = pretty_date(parser.parse(started_at).astimezone(tz.tzlocal()).replace(tzinfo=None)) stats['io_rx'] = stats['io']['ior'] // stats['io']['time_since_update']
# Manage special chars in command (see isse#2733) stats['io_wx'] = stats['io']['iow'] // stats['io']['time_since_update']
stats['command'] = replace_special_chars(' '.join(stats['command']))
else: if all(k in stats['network'] for k in ('rx', 'tx', 'time_since_update')):
stats['io'] = {} stats['network_rx'] = stats['network']['rx'] // stats['network']['time_since_update']
stats['cpu'] = {} stats['network_tx'] = stats['network']['tx'] // stats['network']['time_since_update']
stats['memory'] = {}
stats['network'] = {} started_at = container.attrs['State']['StartedAt']
stats['io_rx'] = None stats['uptime'] = pretty_date(parser.parse(started_at).astimezone(tz.tzlocal()).replace(tzinfo=None))
stats['io_wx'] = None
stats['cpu_percent'] = None # Manage special chars in command (see issue#2733)
stats['memory_percent'] = None stats['command'] = replace_special_chars(' '.join(stats['command']))
stats['network_rx'] = None
stats['network_tx'] = None
stats['uptime'] = None
return stats return stats
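For reference, a standalone sketch of the docker-py calls that connect(), update() and generate_stats() build on (assumes docker-py is installed and a local Docker daemon is reachable):

import docker  # docker-py, optional dependency of the containers plugin

client = docker.from_env()
for container in client.containers.list():
    # Same attributes generate_stats() reads above
    print(container.name, container.id[:12], container.attrs['State']['Status'])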


@ -7,11 +7,13 @@
"""Podman Extension unit for Glances' Containers plugin.""" """Podman Extension unit for Glances' Containers plugin."""
import time
from datetime import datetime from datetime import datetime
from typing import Any, Dict, Optional, Tuple
from glances.globals import iterkeys, itervalues, nativestr, pretty_date, replace_special_chars, string_value_to_float from glances.globals import iterkeys, itervalues, nativestr, pretty_date, replace_special_chars, string_value_to_float
from glances.logger import logger from glances.logger import logger
from glances.plugins.containers.stats_streamer import StatsStreamer from glances.plugins.containers.stats_streamer import ThreadedIterableStreamer
# Podman library (optional and Linux-only) # Podman library (optional and Linux-only)
# https://pypi.org/project/podman/ # https://pypi.org/project/podman/
@ -26,63 +28,94 @@ else:
class PodmanContainerStatsFetcher: class PodmanContainerStatsFetcher:
MANDATORY_FIELDS = ["CPU", "MemUsage", "MemLimit", "NetInput", "NetOutput", "BlockInput", "BlockOutput"] MANDATORY_FIELDS = ["CPU", "MemUsage", "MemLimit", "BlockInput", "BlockOutput"]
def __init__(self, container): def __init__(self, container):
self._container = container self._container = container
# Previous stats are stored in the self._old_computed_stats variable
# We store time data to enable rate calculations and avoid complexity for consumers of the exposed APIs.
self._old_computed_stats = {}
# Last time when output stats (results) were computed
self._last_stats_computed_time = 0
# Threaded Streamer # Threaded Streamer
stats_iterable = container.stats(decode=True) stats_iterable = container.stats(decode=True)
self._streamer = StatsStreamer(stats_iterable, initial_stream_value={}) self._streamer = ThreadedIterableStreamer(stats_iterable, initial_stream_value={})
def _log_debug(self, msg, exception=None):
logger.debug(f"containers (Podman) ID: {self._container.id} - {msg} ({exception})")
logger.debug(self._streamer.stats)
def stop(self): def stop(self):
self._streamer.stop() self._streamer.stop()
@property def get_streamed_stats(self) -> Dict[str, Any]:
def stats(self):
stats = self._streamer.stats stats = self._streamer.stats
if stats["Error"]: if stats["Error"]:
self._log_debug("Stats fetching failed", stats["Error"]) logger.error(f"containers (Podman) Container({self._container.id}): Stats fetching failed")
logger.debug(f"containers (Podman) Container({self._container.id}): ", stats)
return stats["Stats"][0] return stats["Stats"][0]
@property @property
def activity_stats(self): def activity_stats(self) -> Dict[str, Any]:
result_stats = {"cpu": {}, "memory": {}, "io": {}, "network": {}} """Activity Stats
api_stats = self.stats
if any(field not in api_stats for field in self.MANDATORY_FIELDS): Each successive access of activity_stats will cause computation of activity_stats
self._log_debug("Missing mandatory fields") """
return result_stats computed_activity_stats = self._compute_activity_stats()
self._old_computed_stats = computed_activity_stats
self._last_stats_computed_time = time.time()
return computed_activity_stats
def _compute_activity_stats(self) -> Dict[str, Dict[str, Any]]:
stats = {"cpu": {}, "memory": {}, "io": {}, "network": {}}
api_stats = self.get_streamed_stats()
if any(field not in api_stats for field in self.MANDATORY_FIELDS) or (
"Network" not in api_stats and any(k not in api_stats for k in ['NetInput', 'NetOutput'])
):
logger.error(f"containers (Podman) Container({self._container.id}): Missing mandatory fields")
return stats
try: try:
cpu_usage = float(api_stats.get("CPU", 0)) stats["cpu"]["total"] = api_stats['CPU']
mem_usage = float(api_stats["MemUsage"]) stats["memory"]["usage"] = api_stats["MemUsage"]
mem_limit = float(api_stats["MemLimit"]) stats["memory"]["limit"] = api_stats["MemLimit"]
rx = float(api_stats["NetInput"]) stats["io"]["ior"] = api_stats["BlockInput"]
tx = float(api_stats["NetOutput"]) stats["io"]["iow"] = api_stats["BlockOutput"]
stats["io"]["time_since_update"] = 1
# Hardcode `time_since_update` to 1 as podman already sends at the same fixed rate per second
ior = float(api_stats["BlockInput"]) if "Network" not in api_stats:
iow = float(api_stats["BlockOutput"]) # For podman rooted mode
stats["network"]['rx'] = api_stats["NetInput"]
stats["network"]['tx'] = api_stats["NetOutput"]
stats["network"]['time_since_update'] = 1
# Hardcode to 1 as podman already sends at the same fixed rate per second
elif api_stats["Network"] is not None:
# api_stats["Network"] can be None if the infra container of the pod is killed
# For podman in rootless mode
stats['network'] = {
"cumulative_rx": sum(interface["RxBytes"] for interface in api_stats["Network"].values()),
"cumulative_tx": sum(interface["TxBytes"] for interface in api_stats["Network"].values()),
}
# Using previous stats to calculate rates
old_network_stats = self._old_computed_stats.get("network")
if old_network_stats:
stats['network']['time_since_update'] = round(self.time_since_update)
stats['network']['rx'] = stats['network']['cumulative_rx'] - old_network_stats["cumulative_rx"]
stats['network']['tx'] = stats['network']['cumulative_tx'] - old_network_stats['cumulative_tx']
# Hardcode `time_since_update` to 1 as podman
# already sends the calculated rate per second
result_stats = {
"cpu": {"total": cpu_usage},
"memory": {"usage": mem_usage, "limit": mem_limit},
"io": {"ior": ior, "iow": iow, "time_since_update": 1},
"network": {"rx": rx, "tx": tx, "time_since_update": 1},
}
except ValueError as e: except ValueError as e:
self._log_debug("Non float stats values found", e) logger.error(f"containers (Podman) Container({self._container.id}): Non float stats values found", e)
return result_stats return stats
@property
def time_since_update(self) -> float:
# In case no update (at startup), default to 1
return max(1, self._streamer.last_update_time - self._last_stats_computed_time)
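The rootless branch above keeps cumulative byte counters and derives per-interval deltas from the previous computation. A small sketch with made-up numbers to show the arithmetic consumers are expected to perform:

# Hypothetical successive samples, ~2 seconds apart (values are illustrative).
old = {'cumulative_rx': 1_000_000, 'cumulative_tx': 400_000}
new = {'cumulative_rx': 1_250_000, 'cumulative_tx': 500_000}
time_since_update = 2  # seconds, stored next to the deltas

rx = new['cumulative_rx'] - old['cumulative_rx']  # bytes received during the interval
tx = new['cumulative_tx'] - old['cumulative_tx']  # bytes sent during the interval
print(rx / time_since_update, tx / time_since_update)  # -> 125000.0 50000.0 bytes/s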
class PodmanPodStatsFetcher: class PodmanPodStatsFetcher:
@ -92,7 +125,7 @@ class PodmanPodStatsFetcher:
# Threaded Streamer # Threaded Streamer
# Temporary patch to get podman extension working # Temporary patch to get podman extension working
stats_iterable = (pod_manager.stats(decode=True) for _ in iter(int, 1)) stats_iterable = (pod_manager.stats(decode=True) for _ in iter(int, 1))
self._streamer = StatsStreamer(stats_iterable, initial_stream_value={}, sleep_duration=2) self._streamer = ThreadedIterableStreamer(stats_iterable, initial_stream_value={}, sleep_duration=2)
def _log_debug(self, msg, exception=None): def _log_debug(self, msg, exception=None):
logger.debug(f"containers (Podman): Pod Manager - {msg} ({exception})") logger.debug(f"containers (Podman): Pod Manager - {msg} ({exception})")
@ -118,13 +151,13 @@ class PodmanPodStatsFetcher:
"io": io_stats or {}, "io": io_stats or {},
"memory": memory_stats or {}, "memory": memory_stats or {},
"network": network_stats or {}, "network": network_stats or {},
"cpu": cpu_stats or {"total": 0.0}, "cpu": cpu_stats or {},
} }
result_stats[stat["CID"]] = computed_stats result_stats[stat["CID"]] = computed_stats
return result_stats return result_stats
def _get_cpu_stats(self, stats): def _get_cpu_stats(self, stats: Dict) -> Optional[Dict]:
"""Return the container CPU usage. """Return the container CPU usage.
Output: a dict {'total': 1.49} Output: a dict {'total': 1.49}
@ -136,7 +169,7 @@ class PodmanPodStatsFetcher:
cpu_usage = string_value_to_float(stats["CPU"].rstrip("%")) cpu_usage = string_value_to_float(stats["CPU"].rstrip("%"))
return {"total": cpu_usage} return {"total": cpu_usage}
def _get_memory_stats(self, stats): def _get_memory_stats(self, stats) -> Optional[Dict]:
"""Return the container MEMORY. """Return the container MEMORY.
Output: a dict {'usage': ..., 'limit': ...} Output: a dict {'usage': ..., 'limit': ...}
@ -157,7 +190,7 @@ class PodmanPodStatsFetcher:
return {'usage': usage, 'limit': limit, 'inactive_file': 0} return {'usage': usage, 'limit': limit, 'inactive_file': 0}
def _get_network_stats(self, stats): def _get_network_stats(self, stats) -> Optional[Dict]:
"""Return the container network usage using the Docker API (v1.0 or higher). """Return the container network usage using the Docker API (v1.0 or higher).
Output: a dict {'time_since_update': 3000, 'rx': 10, 'tx': 65}. Output: a dict {'time_since_update': 3000, 'rx': 10, 'tx': 65}.
@ -180,10 +213,10 @@ class PodmanPodStatsFetcher:
self._log_debug("Compute MEM usage failed", e) self._log_debug("Compute MEM usage failed", e)
return None return None
# Hardcode `time_since_update` to 1 as podman docs don't specify the rate calculated procedure # Hardcode `time_since_update` to 1 as podman docs don't specify the rate calculation procedure
return {"rx": rx, "tx": tx, "time_since_update": 1} return {"rx": rx, "tx": tx, "time_since_update": 1}
def _get_io_stats(self, stats): def _get_io_stats(self, stats) -> Optional[Dict]:
"""Return the container IO usage using the Docker API (v1.0 or higher). """Return the container IO usage using the Docker API (v1.0 or higher).
Output: a dict {'time_since_update': 3000, 'ior': 10, 'iow': 65}. Output: a dict {'time_since_update': 3000, 'ior': 10, 'iow': 65}.
@ -206,7 +239,7 @@ class PodmanPodStatsFetcher:
self._log_debug("Compute BlockIO usage failed", e) self._log_debug("Compute BlockIO usage failed", e)
return None return None
# Hardcode `time_since_update` to 1 as podman docs don't specify the rate calculated procedure # Hardcode `time_since_update` to 1 as podman docs don't specify the rate calculation procedure
return {"ior": ior, "iow": iow, "time_since_update": 1} return {"ior": ior, "iow": iow, "time_since_update": 1}
@ -242,7 +275,7 @@ class PodmanContainersExtension:
# return self.client.version() # return self.client.version()
return {} return {}
def stop(self): def stop(self) -> None:
# Stop all streaming threads # Stop all streaming threads
for t in itervalues(self.container_stats_fetchers): for t in itervalues(self.container_stats_fetchers):
t.stop() t.stop()
@ -250,7 +283,7 @@ class PodmanContainersExtension:
if self.pods_stats_fetcher: if self.pods_stats_fetcher:
self.pods_stats_fetcher.stop() self.pods_stats_fetcher.stop()
def update(self, all_tag): def update(self, all_tag) -> Tuple[Dict, list[Dict[str, Any]]]:
"""Update Podman stats using the input method.""" """Update Podman stats using the input method."""
if not self.client: if not self.client:
@ -298,55 +331,58 @@ class PodmanContainersExtension:
return version_stats, container_stats return version_stats, container_stats
@property @property
def key(self): def key(self) -> str:
"""Return the key of the list.""" """Return the key of the list."""
return 'name' return 'name'
def generate_stats(self, container): def generate_stats(self, container) -> Dict[str, Any]:
# Init the stats for the current container # Init the stats for the current container
stats = { stats = {
'key': self.key, 'key': self.key,
# Export name
'name': nativestr(container.name), 'name': nativestr(container.name),
# Container Id
'id': container.id, 'id': container.id,
# Container Image
'image': ','.join(container.image.tags if container.image.tags else []), 'image': ','.join(container.image.tags if container.image.tags else []),
# Container Status (from attrs)
'status': container.attrs['State'], 'status': container.attrs['State'],
'created': container.attrs['Created'], 'created': container.attrs['Created'],
'command': container.attrs.get('Command') or [], 'command': container.attrs.get('Command') or [],
'io': {},
'cpu': {},
'memory': {},
'network': {},
'io_rx': None,
'io_wx': None,
'cpu_percent': None,
'memory_percent': None,
'network_rx': None,
'network_tx': None,
'uptime': None,
} }
if stats['status'] in self.CONTAINER_ACTIVE_STATUS: if stats['status'] not in self.CONTAINER_ACTIVE_STATUS:
started_at = datetime.fromtimestamp(container.attrs['StartedAt']) return stats
stats_fetcher = self.container_stats_fetchers[container.id]
activity_stats = stats_fetcher.activity_stats
stats.update(activity_stats)
# Additional fields stats_fetcher = self.container_stats_fetchers[container.id]
stats['cpu_percent'] = stats["cpu"]['total'] activity_stats = stats_fetcher.activity_stats
stats['memory_usage'] = stats["memory"].get('usage') stats.update(activity_stats)
if stats['memory'].get('cache') is not None:
stats['memory_usage'] -= stats['memory']['cache'] # Additional fields
stats['io_rx'] = stats['io'].get('ior') // stats['io'].get('time_since_update') stats['cpu_percent'] = stats['cpu'].get('total')
stats['io_wx'] = stats['io'].get('iow') // stats['io'].get('time_since_update') stats['memory_usage'] = stats['memory'].get('usage')
stats['network_rx'] = stats['network'].get('rx') // stats['network'].get('time_since_update') if stats['memory'].get('cache') is not None:
stats['network_tx'] = stats['network'].get('tx') // stats['network'].get('time_since_update') stats['memory_usage'] -= stats['memory']['cache']
stats['uptime'] = pretty_date(started_at)
# Manage special chars in command (see isse#2733) if all(k in stats['io'] for k in ('ior', 'iow', 'time_since_update')):
stats['command'] = replace_special_chars(' '.join(stats['command'])) stats['io_rx'] = stats['io']['ior'] // stats['io']['time_since_update']
else: stats['io_wx'] = stats['io']['iow'] // stats['io']['time_since_update']
stats['io'] = {}
stats['cpu'] = {} if all(k in stats['network'] for k in ('rx', 'tx', 'time_since_update')):
stats['memory'] = {} stats['network_rx'] = stats['network']['rx'] // stats['network']['time_since_update']
stats['network'] = {} stats['network_tx'] = stats['network']['tx'] // stats['network']['time_since_update']
stats['io_rx'] = None
stats['io_wx'] = None started_at = datetime.fromtimestamp(container.attrs['StartedAt'])
stats['cpu_percent'] = None stats['uptime'] = pretty_date(started_at)
stats['memory_percent'] = None
stats['network_rx'] = None # Manage special chars in command (see issue#2733)
stats['network_tx'] = None stats['command'] = replace_special_chars(' '.join(stats['command']))
stats['uptime'] = None
return stats return stats


@ -11,11 +11,11 @@ import time
from glances.logger import logger from glances.logger import logger
class StatsStreamer: class ThreadedIterableStreamer:
""" """
Utility class to stream an iterable using a background / daemon Thread Utility class to stream an iterable using a background / daemon Thread
Use `StatsStreamer.stats` to access the latest streamed results Use `ThreadedIterableStreamer.stats` to access the latest streamed results
""" """
def __init__(self, iterable, initial_stream_value=None, sleep_duration=0.1): def __init__(self, iterable, initial_stream_value=None, sleep_duration=0.1):


@ -165,8 +165,6 @@ class PluginModel(GlancesPluginModel):
stats = self.update_local() stats = self.update_local()
elif self.input_method == 'snmp': elif self.input_method == 'snmp':
stats = self.update_snmp() stats = self.update_snmp()
else:
stats = self.get_init_value()
# Update the stats # Update the stats
self.stats = stats self.stats = stats
@ -185,7 +183,7 @@ class PluginModel(GlancesPluginModel):
# Init new stats # Init new stats
stats = self.get_init_value() stats = self.get_init_value()
stats['total'] = cpu_percent.get() stats['total'] = cpu_percent.get_cpu()
# Standards stats # Standards stats
# - user: time spent by normal processes executing in user mode; on Linux this also includes guest time # - user: time spent by normal processes executing in user mode; on Linux this also includes guest time


@ -114,127 +114,143 @@ class PluginModel(GlancesPluginModel):
@GlancesPluginModel._log_result_decorator @GlancesPluginModel._log_result_decorator
def update(self): def update(self):
"""Update the FS stats using the input method.""" """Update the FS stats using the input method."""
# Init new stats # Update the stats
stats = self.get_init_value()
if self.input_method == 'local': if self.input_method == 'local':
# Update stats using the standard system lib stats = self.update_local()
else:
# Grab the stats using the psutil disk_partitions stats = self.get_init_value()
# If 'all'=False return physical devices only (e.g. hard disks, cd-rom drives, USB keys)
# and ignore all others (e.g. memory partitions such as /dev/shm)
try:
fs_stat = psutil.disk_partitions(all=False)
except (UnicodeDecodeError, PermissionError):
logger.debug("Plugin - fs: PsUtil fetch failed")
return self.stats
# Optional hack to allow logical mounts points (issue #448)
allowed_fs_types = self.get_conf_value('allow')
if allowed_fs_types:
# Avoid Psutil call unless mounts need to be allowed
try:
all_mounted_fs = psutil.disk_partitions(all=True)
except (UnicodeDecodeError, PermissionError):
logger.debug("Plugin - fs: PsUtil extended fetch failed")
else:
# Discard duplicates (#2299) and add entries matching allowed fs types
tracked_mnt_points = {f.mountpoint for f in fs_stat}
for f in all_mounted_fs:
if (
any(f.fstype.find(fs_type) >= 0 for fs_type in allowed_fs_types)
and f.mountpoint not in tracked_mnt_points
):
fs_stat.append(f)
# Loop over fs
for fs in fs_stat:
# Hide the stats if the mount point is in the exclude list
if not self.is_display(fs.mountpoint):
continue
# Grab the disk usage
try:
fs_usage = psutil.disk_usage(fs.mountpoint)
except OSError:
# Correct issue #346
# Disk is ejected during the command
continue
fs_current = {
'device_name': fs.device,
'fs_type': fs.fstype,
# Manage non breaking space (see issue #1065)
'mnt_point': u(fs.mountpoint).replace('\u00a0', ' '),
'size': fs_usage.total,
'used': fs_usage.used,
'free': fs_usage.free,
'percent': fs_usage.percent,
'key': self.get_key(),
}
# Hide the stats if the device name is in the exclude list
# Correct issue: glances.conf FS hide not applying #1666
if not self.is_display(fs_current['device_name']):
continue
# Add alias if exist (define in the configuration file)
if self.has_alias(fs_current['mnt_point']) is not None:
fs_current['alias'] = self.has_alias(fs_current['mnt_point'])
stats.append(fs_current)
elif self.input_method == 'snmp':
# Update stats using SNMP
# SNMP bulk command to get all file system in one shot
try:
fs_stat = self.get_stats_snmp(snmp_oid=snmp_oid[self.short_system_name], bulk=True)
except KeyError:
fs_stat = self.get_stats_snmp(snmp_oid=snmp_oid['default'], bulk=True)
# Loop over fs
if self.short_system_name in ('windows', 'esxi'):
# Windows or ESXi tips
for fs in fs_stat:
# Memory stats are grabbed in the same OID table (ignore it)
if fs == 'Virtual Memory' or fs == 'Physical Memory' or fs == 'Real Memory':
continue
size = int(fs_stat[fs]['size']) * int(fs_stat[fs]['alloc_unit'])
used = int(fs_stat[fs]['used']) * int(fs_stat[fs]['alloc_unit'])
percent = float(used * 100 / size)
fs_current = {
'device_name': '',
'mnt_point': fs.partition(' ')[0],
'size': size,
'used': used,
'percent': percent,
'key': self.get_key(),
}
# Do not take hidden file system into account
if self.is_hide(fs_current['mnt_point']):
continue
stats.append(fs_current)
else:
# Default behavior
for fs in fs_stat:
fs_current = {
'device_name': fs_stat[fs]['device_name'],
'mnt_point': fs,
'size': int(fs_stat[fs]['size']) * 1024,
'used': int(fs_stat[fs]['used']) * 1024,
'percent': float(fs_stat[fs]['percent']),
'key': self.get_key(),
}
# Do not take hidden file system into account
if self.is_hide(fs_current['mnt_point']) or self.is_hide(fs_current['device_name']):
continue
stats.append(fs_current)
# Update the stats # Update the stats
self.stats = stats self.stats = stats
return self.stats return self.stats
def update_local(self):
"""Update the FS stats using the input method."""
# Init new stats
stats = self.get_init_value()
# Update stats using the standard system lib
# Grab the stats using the psutil disk_partitions
# If 'all'=False return physical devices only (e.g. hard disks, cd-rom drives, USB keys)
# and ignore all others (e.g. memory partitions such as /dev/shm)
try:
fs_stat = psutil.disk_partitions(all=False)
except (UnicodeDecodeError, PermissionError):
logger.debug("Plugin - fs: PsUtil fetch failed")
return stats
# Optional hack to allow logical mounts points (issue #448)
allowed_fs_types = self.get_conf_value('allow')
if allowed_fs_types:
# Avoid Psutil call unless mounts need to be allowed
try:
all_mounted_fs = psutil.disk_partitions(all=True)
except (UnicodeDecodeError, PermissionError):
logger.debug("Plugin - fs: PsUtil extended fetch failed")
else:
# Discard duplicates (#2299) and add entries matching allowed fs types
tracked_mnt_points = {f.mountpoint for f in fs_stat}
for f in all_mounted_fs:
if (
any(f.fstype.find(fs_type) >= 0 for fs_type in allowed_fs_types)
and f.mountpoint not in tracked_mnt_points
):
fs_stat.append(f)
# Loop over fs
for fs in fs_stat:
# Hide the stats if the mount point is in the exclude list
# It avoids an unnecessary call to PsUtil disk_usage
if not self.is_display(fs.mountpoint):
continue
# Grab the disk usage
try:
fs_usage = psutil.disk_usage(fs.mountpoint)
except OSError:
# Correct issue #346
# Disk is ejected during the command
continue
fs_current = {
'device_name': fs.device,
'fs_type': fs.fstype,
# Manage non breaking space (see issue #1065)
'mnt_point': u(fs.mountpoint).replace('\u00a0', ' '),
'size': fs_usage.total,
'used': fs_usage.used,
'free': fs_usage.free,
'percent': fs_usage.percent,
'key': self.get_key(),
}
# Hide the stats if the device name is in the exclude list
# Correct issue: glances.conf FS hide not applying #1666
if not self.is_display(fs_current['device_name']):
continue
# Add alias if exist (define in the configuration file)
if self.has_alias(fs_current['mnt_point']) is not None:
fs_current['alias'] = self.has_alias(fs_current['mnt_point'])
stats.append(fs_current)
return stats
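As a standalone reference for the psutil calls update_local() chains together (both functions exist with these signatures in current psutil releases):

import psutil

# Physical devices only, mirroring disk_partitions(all=False) above.
for part in psutil.disk_partitions(all=False):
    try:
        usage = psutil.disk_usage(part.mountpoint)
    except OSError:
        # Same guard as above: the disk may disappear between the two calls.
        continue
    print(f"{part.device:<20} {part.mountpoint:<20} {usage.percent:>5}%")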
def update_snmp(self):
"""Update the FS stats using the input method."""
# Init new stats
stats = self.get_init_value()
# Update stats using SNMP
# SNMP bulk command to get all file system in one shot
try:
fs_stat = self.get_stats_snmp(snmp_oid=snmp_oid[self.short_system_name], bulk=True)
except KeyError:
fs_stat = self.get_stats_snmp(snmp_oid=snmp_oid['default'], bulk=True)
# Loop over fs
if self.short_system_name in ('windows', 'esxi'):
# Windows or ESXi tips
for fs in fs_stat:
# Memory stats are grabbed in the same OID table (ignore it)
if fs == 'Virtual Memory' or fs == 'Physical Memory' or fs == 'Real Memory':
continue
size = int(fs_stat[fs]['size']) * int(fs_stat[fs]['alloc_unit'])
used = int(fs_stat[fs]['used']) * int(fs_stat[fs]['alloc_unit'])
percent = float(used * 100 / size)
fs_current = {
'device_name': '',
'mnt_point': fs.partition(' ')[0],
'size': size,
'used': used,
'percent': percent,
'key': self.get_key(),
}
# Do not take hidden file system into account
if self.is_hide(fs_current['mnt_point']):
continue
stats.append(fs_current)
else:
# Default behavior
for fs in fs_stat:
fs_current = {
'device_name': fs_stat[fs]['device_name'],
'mnt_point': fs,
'size': int(fs_stat[fs]['size']) * 1024,
'used': int(fs_stat[fs]['used']) * 1024,
'percent': float(fs_stat[fs]['percent']),
'key': self.get_key(),
}
# Do not take hidden file system into account
if self.is_hide(fs_current['mnt_point']) or self.is_hide(fs_current['device_name']):
continue
stats.append(fs_current)
return stats
def update_views(self): def update_views(self):
"""Update stats views.""" """Update stats views."""
# Call the father's method # Call the father's method


@ -120,16 +120,12 @@ class PluginModel(GlancesPluginModel):
@GlancesPluginModel._log_result_decorator @GlancesPluginModel._log_result_decorator
def update(self): def update(self):
"""Update per-CPU stats using the input method.""" """Update per-CPU stats using the input method."""
# Init new stats # Grab per-CPU stats via the cpu_percent helper
stats = self.get_init_value()
# Grab per-CPU stats using psutil's cpu_percent(percpu=True) and
# cpu_times_percent(percpu=True) methods
if self.input_method == 'local': if self.input_method == 'local':
stats = cpu_percent.get(percpu=True) stats = cpu_percent.get_percpu()
else: else:
# Update stats using SNMP # Update stats using SNMP
pass stats = self.get_init_value()
# Update the stats # Update the stats
self.stats = stats self.stats = stats


@ -324,31 +324,34 @@ class PluginModel(GlancesPluginModel):
def _get_process_curses_time(self, p, selected, args): def _get_process_curses_time(self, p, selected, args):
"""Return process time curses""" """Return process time curses"""
cpu_times = p['cpu_times']
try: try:
# Sum user and system time # Sum user and system time
user_system_time = p['cpu_times']['user'] + p['cpu_times']['system'] user_system_time = cpu_times['user'] + cpu_times['system']
except (OverflowError, TypeError): except (OverflowError, TypeError, KeyError):
# Catch OverflowError on some Amazon EC2 server # Catch OverflowError on some Amazon EC2 server
# See https://github.com/nicolargo/glances/issues/87 # See https://github.com/nicolargo/glances/issues/87
# Also catch TypeError on macOS # Also catch TypeError on macOS
# See: https://github.com/nicolargo/glances/issues/622 # See: https://github.com/nicolargo/glances/issues/622
# Also catch KeyError (as no stats may be present for processes of other users)
# See: https://github.com/nicolargo/glances/issues/2831
# logger.debug("Cannot get TIME+ ({})".format(e)) # logger.debug("Cannot get TIME+ ({})".format(e))
msg = self.layout_header['time'].format('?') msg = self.layout_header['time'].format('?')
ret = self.curse_add_line(msg, optional=True) return self.curse_add_line(msg, optional=True)
hours, minutes, seconds = seconds_to_hms(user_system_time)
if hours > 99:
msg = f'{hours:<7}h'
elif 0 < hours < 100:
msg = f'{hours}h{minutes}:{seconds}'
else: else:
hours, minutes, seconds = seconds_to_hms(user_system_time) msg = f'{minutes}:{seconds}'
if hours > 99:
msg = f'{hours:<7}h' msg = self.layout_stat['time'].format(msg)
elif 0 < hours < 100: if hours > 0:
msg = f'{hours}h{minutes}:{seconds}' return self.curse_add_line(msg, decoration='CPU_TIME', optional=True)
else:
msg = f'{minutes}:{seconds}' return self.curse_add_line(msg, optional=True)
msg = self.layout_stat['time'].format(msg)
if hours > 0:
ret = self.curse_add_line(msg, decoration='CPU_TIME', optional=True)
else:
ret = self.curse_add_line(msg, optional=True)
return ret
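A small sketch of the three formatting branches above, using a local stand-in for the seconds_to_hms helper (the zero-padding of minutes and seconds is an assumption, not taken from the glances helper):

def seconds_to_hms_sketch(secs):
    # Local stand-in only; glances ships its own seconds_to_hms helper.
    hours, rem = divmod(int(secs), 3600)
    minutes, seconds = divmod(rem, 60)
    return hours, f"{minutes:02d}", f"{seconds:02d}"

for total in (42, 3_723, 400_000):
    hours, minutes, seconds = seconds_to_hms_sketch(total)
    if hours > 99:
        msg = f"{hours:<7}h"
    elif 0 < hours < 100:
        msg = f"{hours}h{minutes}:{seconds}"
    else:
        msg = f"{minutes}:{seconds}"
    print(msg)  # -> "00:42", "1h02:03", "111    h"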
def _get_process_curses_thread(self, p, selected, args): def _get_process_curses_thread(self, p, selected, args):
"""Return process thread curses""" """Return process thread curses"""


@ -118,8 +118,8 @@ class PluginModel(GlancesPluginModel):
# Get the CPU percent value (global and per core) # Get the CPU percent value (global and per core)
# Stats is shared across all plugins # Stats is shared across all plugins
stats['cpu'] = cpu_percent.get() stats['cpu'] = cpu_percent.get_cpu()
stats['percpu'] = cpu_percent.get(percpu=True) stats['percpu'] = cpu_percent.get_percpu()
# Get the virtual and swap memory # Get the virtual and swap memory
stats['mem'] = psutil.virtual_memory().percent stats['mem'] = psutil.virtual_memory().percent


@ -22,13 +22,13 @@ from glances.globals import file_exists, nativestr
from glances.logger import logger from glances.logger import logger
from glances.plugins.plugin.model import GlancesPluginModel from glances.plugins.plugin.model import GlancesPluginModel
# Backup solution is to use the /proc/net/wireless file # Use stats available in the /proc/net/wireless file
# but it only give signal information about the current hotspot # Note: it only gives signal information about the current hotspot
WIRELESS_FILE = '/proc/net/wireless' WIRELESS_FILE = '/proc/net/wireless'
wireless_file_exists = file_exists(WIRELESS_FILE) wireless_file_exists = file_exists(WIRELESS_FILE)
if not wireless_file_exists: if not wireless_file_exists:
logger.debug(f"Wifi plugin is disabled (no {WIRELESS_FILE} file found)") logger.debug(f"Wifi plugin is disabled (can not read {WIRELESS_FILE} file)")
# Fields description # Fields description
# description: human readable description # description: human readable description
@ -96,31 +96,12 @@ class PluginModel(GlancesPluginModel):
return stats return stats
if self.input_method == 'local' and wireless_file_exists: if self.input_method == 'local' and wireless_file_exists:
# As a backup solution, use the /proc/net/wireless file try:
with open(WIRELESS_FILE) as f: stats = self._get_wireless_stats()
# The first two lines are header except (PermissionError, FileNotFoundError) as e:
f.readline() logger.debug(f"Wifi plugin error: can not read {WIRELESS_FILE} file ({e})")
f.readline()
# Others lines are Wifi stats
wifi_stats = f.readline()
while wifi_stats != '':
# Extract the stats
wifi_stats = wifi_stats.split()
# Add the Wifi link to the list
stats.append(
{
'key': self.get_key(),
'ssid': wifi_stats[0][:-1],
'quality_link': float(wifi_stats[2]),
'quality_level': float(wifi_stats[3]),
}
)
# Next line
wifi_stats = f.readline()
elif self.input_method == 'snmp': elif self.input_method == 'snmp':
# Update stats using SNMP # Update stats using SNMP
# Not implemented yet # Not implemented yet
pass pass
@ -129,6 +110,31 @@ class PluginModel(GlancesPluginModel):
return self.stats return self.stats
def _get_wireless_stats(self):
ret = self.get_init_value()
# As a backup solution, use the /proc/net/wireless file
with open(WIRELESS_FILE) as f:
# The first two lines are header
f.readline()
f.readline()
# Others lines are Wifi stats
wifi_stats = f.readline()
while wifi_stats != '':
# Extract the stats
wifi_stats = wifi_stats.split()
# Add the Wifi link to the list
ret.append(
{
'key': self.get_key(),
'ssid': wifi_stats[0][:-1],
'quality_link': float(wifi_stats[2]),
'quality_level': float(wifi_stats[3]),
}
)
# Next line
wifi_stats = f.readline()
return ret
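For illustration, the per-line parsing the loop above performs, applied to a made-up /proc/net/wireless data line (real files start with two header lines):

# Illustrative only: the interface line is invented, but the field positions match the loop above.
line = "wlan0: 0000   60.  -50.  -256        0      0      0      0      0        0"
fields = line.split()
entry = {
    'ssid': fields[0][:-1],             # strip the trailing ':' -> 'wlan0'
    'quality_link': float(fields[2]),   # 60.0
    'quality_level': float(fields[3]),  # -50.0
}
print(entry)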
def get_alert(self, value): def get_alert(self, value):
"""Overwrite the default get_alert method. """Overwrite the default get_alert method.


@ -119,6 +119,14 @@ class GlancesProcesses:
"""Set args.""" """Set args."""
self.args = args self.args = args
def reset_internal_cache(self):
"""Reset the internal cache."""
self.cache_timer = Timer(0)
self.processlist_cache = {}
if hasattr(psutil.process_iter, 'cache_clear'):
# Cache clear is only available in PsUtil 6 or higher
psutil.process_iter.cache_clear()
def reset_processcount(self): def reset_processcount(self):
"""Reset the global process count""" """Reset the global process count"""
self.processcount = {'total': 0, 'running': 0, 'sleeping': 0, 'thread': 0, 'pid_max': None} self.processcount = {'total': 0, 'running': 0, 'sleeping': 0, 'thread': 0, 'pid_max': None}
@ -445,7 +453,9 @@ class GlancesProcesses:
) )
) )
# Only get the info key # Only get the info key
processlist = [p.info for p in processlist] # PsUtil 6+ no longer checks for PID reuse (#2755), so use is_running in the loop
# Note: not sure it is really needed, but CPU consumption looks the same with or without it
processlist = [p.info for p in processlist if p.is_running()]
# Sort the processes list by the current sort_key # Sort the processes list by the current sort_key
processlist = sort_stats(processlist, sorted_by=self.sort_key, reverse=True) processlist = sort_stats(processlist, sorted_by=self.sort_key, reverse=True)

View File

@ -249,8 +249,8 @@ class GlancesStats:
def load_limits(self, config=None): def load_limits(self, config=None):
"""Load the stats limits (except the one in the exclude list).""" """Load the stats limits (except the one in the exclude list)."""
# For each plugins, call the load_limits method # For each plugin (enabled or not), call the load_limits method
for p in self._plugins: for p in self.getPluginsList(enable=False):
self._plugins[p].load_limits(config) self._plugins[p].load_limits(config)
def __update_plugin(self, p): def __update_plugin(self, p):
@ -260,19 +260,13 @@ class GlancesStats:
self._plugins[p].update_views() self._plugins[p].update_views()
def update(self): def update(self):
"""Wrapper method to update the stats. """Wrapper method to update all stats.
Only called by standalone and server modes Only called by standalone and server modes
""" """
threads = []
# Start update of all enable plugins # Start update of all enable plugins
for p in self.getPluginsList(enable=True): for p in self.getPluginsList(enable=True):
thread = threading.Thread(target=self.__update_plugin, args=(p,)) self.__update_plugin(p)
thread.start()
threads.append(thread)
# Wait the end of the update
for t in threads:
t.join()
def export(self, input_stats=None): def export(self, input_stats=None):
"""Export all the stats. """Export all the stats.
@ -286,7 +280,7 @@ class GlancesStats:
input_stats = input_stats or {} input_stats = input_stats or {}
for e in self.getExportsList(enable=True): for e in self.getExportsList():
logger.debug(f"Export stats using the {e} module") logger.debug(f"Export stats using the {e} module")
thread = threading.Thread(target=self._exports[e].update, args=(input_stats,)) thread = threading.Thread(target=self._exports[e].update, args=(input_stats,))
thread.start() thread.start()
@ -294,12 +288,20 @@ class GlancesStats:
return True return True
def getAll(self): def getAll(self):
"""Return all the stats (list).""" """Return all the stats (list).
return [self._plugins[p].get_raw() for p in self._plugins] This method is called by the XML/RPC API.
It should return all the plugins (enabled or not) because filtering can be done by the client.
"""
return [self._plugins[p].get_raw() for p in self.getPluginsList(enable=False)]
def getAllAsDict(self): def getAllAsDict(self, plugin_list=None):
"""Return all the stats (dict).""" """Return all the stats (as dict).
return {p: self._plugins[p].get_raw() for p in self._plugins} This method is called by the RESTful API.
"""
if plugin_list is None:
# All enabled plugins should be exported
plugin_list = self.getPluginsList()
return {p: self._plugins[p].get_raw() for p in plugin_list}
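A hedged usage sketch of the new plugin_list parameter; the bootstrap mirrors the development script added further below, and the 'cpu'/'mem' plugin names are just illustrative:

# Sketch only: restrict the REST payload helper to a subset of plugins.
from glances.main import GlancesMain
from glances.stats import GlancesStats

core = GlancesMain()
stats = GlancesStats(config=core.get_config(), args=core.get_args())
stats.update()
print(stats.getAllAsDict(plugin_list=['cpu', 'mem']).keys())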
def getAllExports(self, plugin_list=None): def getAllExports(self, plugin_list=None):
"""Return all the stats to be exported as a list. """Return all the stats to be exported as a list.
@ -310,7 +312,7 @@ class GlancesStats:
if plugin_list is None: if plugin_list is None:
# All enabled plugins should be exported # All enabled plugins should be exported
plugin_list = self.getPluginsList() plugin_list = self.getPluginsList()
return [self._plugins[p].get_export() for p in self._plugins] return [self._plugins[p].get_export() for p in plugin_list]
def getAllExportsAsDict(self, plugin_list=None): def getAllExportsAsDict(self, plugin_list=None):
"""Return all the stats to be exported as a dict. """Return all the stats to be exported as a dict.
@ -345,17 +347,23 @@ class GlancesStats:
plugin_list = self.getPluginsList() plugin_list = self.getPluginsList()
return {p: self._plugins[p].limits for p in plugin_list} return {p: self._plugins[p].limits for p in plugin_list}
def getAllViews(self): def getAllViews(self, plugin_list=None):
"""Return the plugins views.""" """Return the plugins views.
return [self._plugins[p].get_views() for p in self._plugins] This method is called by the XML/RPC API.
It should return all the plugin views (enabled or not) because filtering can be done by the client.
"""
if plugin_list is None:
plugin_list = self.getPluginsList(enable=False)
return [self._plugins[p].get_views() for p in plugin_list]
def getAllViewsAsDict(self): def getAllViewsAsDict(self, plugin_list=None):
"""Return all the stats views (dict).""" """Return all the stats views (dict).
return {p: self._plugins[p].get_views() for p in self._plugins} This method is called by the RESTful API.
"""
def get_plugin_list(self): if plugin_list is None:
"""Return the plugin list.""" # All enabled plugins should be exported
return self._plugins plugin_list = self.getPluginsList()
return {p: self._plugins[p].get_views() for p in plugin_list}
def get_plugin(self, plugin_name): def get_plugin(self, plugin_name):
"""Return the plugin stats.""" """Return the plugin stats."""


@ -1,5 +1,5 @@
name: glances name: glances
version: '4.0.8' version: '4.1.0+build01'
summary: Glances an Eye on your system. A top/htop alternative. summary: Glances an Eye on your system. A top/htop alternative.
description: | description: |
@ -31,6 +31,7 @@ apps:
- removable-media - removable-media
- power-control - power-control
- process-control - process-control
- network-setup-observe
environment: environment:
LANG: C.UTF-8 LANG: C.UTF-8
LC_ALL: C.UTF-8 LC_ALL: C.UTF-8


@@ -0,0 +1,25 @@
+import sys
+import time
+
+sys.path.insert(0, '../glances')
+
+###########
+# from glances.cpu_percent import cpu_percent
+# for _ in range(0, 5):
+# print([i['total'] for i in cpu_percent.get_percpu()])
+# time.sleep(2)
+###########
+
+from glances.main import GlancesMain
+from glances.stats import GlancesStats
+
+core = GlancesMain()
+stats = GlancesStats(config=core.get_config(), args=core.get_args())
+
+for _ in range(0, 5):
+    stats.update()
+    print([i['total'] for i in stats.get_plugin('percpu').get_raw()])
+    time.sleep(2)
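The new helper script above drives the whole stats pipeline outside the UI: it builds `GlancesMain` and `GlancesStats`, calls `update()`, and prints the per-CPU totals. The same pattern works for any plugin reachable through `get_plugin()`; a variant that lists every enabled plugin and whether its raw stats are a dict or a list, the distinction the unit-test changes below revolve around, might look like this (a sketch under the same assumptions as the script above):

```
from glances.main import GlancesMain
from glances.stats import GlancesStats

core = GlancesMain()
stats = GlancesStats(config=core.get_config(), args=core.get_args())
stats.update()

# Dict-based plugins return one mapping of fields; list-based plugins return one entry per item.
for name in stats.getPluginsList():
    print(name, type(stats.get_plugin(name).get_raw()).__name__)
```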

View File

@@ -88,7 +88,7 @@ class TestGlances(unittest.TestCase):
         # Check stats
         self.assertIsInstance(plugin_instance.get_raw(), (dict, list))
-        if isinstance(plugin_instance.get_raw(), dict):
+        if isinstance(plugin_instance.get_raw(), dict) and plugin_instance.get_raw() != {}:
             res = False
             for f in plugin_instance.fields_description:
                 if f not in plugin_instance.get_raw():
@@ -96,7 +96,7 @@ class TestGlances(unittest.TestCase):
                 else:
                     res = True
             self.assertTrue(res)
-        elif isinstance(plugin_instance.get_raw(), list):
+        elif isinstance(plugin_instance.get_raw(), list) and len(plugin_instance.get_raw()) > 0:
             res = False
             for i in plugin_instance.get_raw():
                 for f in i:
@@ -119,7 +119,7 @@ class TestGlances(unittest.TestCase):
             current_stats['foo'] = 'bar'
             current_stats = plugin_instance.filter_stats(current_stats)
             self.assertTrue('foo' not in current_stats)
-        elif isinstance(plugin_instance.get_raw(), list):
+        elif isinstance(plugin_instance.get_raw(), list) and len(plugin_instance.get_raw()) > 0:
             current_stats[0]['foo'] = 'bar'
             current_stats = plugin_instance.filter_stats(current_stats)
             self.assertTrue('foo' not in current_stats[0])
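All three hunks above add the same emptiness guard so that a plugin reporting nothing (an empty dict or list, as can happen on CI runners) no longer breaks the generic checks. A minimal illustration of the failure mode the guard avoids, not taken from the suite:

```
raw = []  # a list-based plugin with nothing to report

# Old condition: an empty list still enters the branch, so indexing it raises IndexError.
if isinstance(raw, list):
    try:
        raw[0]['foo'] = 'bar'
    except IndexError:
        print('empty stats would have crashed the test')

# New condition: empty stats are simply skipped.
if isinstance(raw, list) and len(raw) > 0:
    raw[0]['foo'] = 'bar'
```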
@@ -133,34 +133,36 @@ class TestGlances(unittest.TestCase):
         if plugin_instance.history_enable():
             if isinstance(plugin_instance.get_raw(), dict):
                 first_history_field = plugin_instance.get_items_history_list()[0]['name']
-            elif isinstance(plugin_instance.get_raw(), list):
+            elif isinstance(plugin_instance.get_raw(), list) and len(plugin_instance.get_raw()) > 0:
                 first_history_field = '_'.join(
                     [
                         plugin_instance.get_raw()[0][plugin_instance.get_key()],
                         plugin_instance.get_items_history_list()[0]['name'],
                     ]
                 )
-            self.assertEqual(len(plugin_instance.get_raw_history(first_history_field)), 2)
-            self.assertGreater(
-                plugin_instance.get_raw_history(first_history_field)[1][0],
-                plugin_instance.get_raw_history(first_history_field)[0][0],
-            )
+            if len(plugin_instance.get_raw()) > 0:
+                self.assertEqual(len(plugin_instance.get_raw_history(first_history_field)), 2)
+                self.assertGreater(
+                    plugin_instance.get_raw_history(first_history_field)[1][0],
+                    plugin_instance.get_raw_history(first_history_field)[0][0],
+                )
 
             # Update stats (add third element)
             plugin_instance.update()
             plugin_instance.update_stats_history()
             plugin_instance.update_views()
-            self.assertEqual(len(plugin_instance.get_raw_history(first_history_field)), 3)
-            self.assertEqual(len(plugin_instance.get_raw_history(first_history_field, 2)), 2)
-            self.assertIsInstance(json.loads(plugin_instance.get_stats_history()), dict)
+            if len(plugin_instance.get_raw()) > 0:
+                self.assertEqual(len(plugin_instance.get_raw_history(first_history_field)), 3)
+                self.assertEqual(len(plugin_instance.get_raw_history(first_history_field, 2)), 2)
+                self.assertIsInstance(json.loads(plugin_instance.get_stats_history()), dict)
 
         # Check views
         self.assertIsInstance(plugin_instance.get_views(), dict)
         if isinstance(plugin_instance.get_raw(), dict):
             self.assertIsInstance(plugin_instance.get_views(first_history_field), dict)
             self.assertTrue('decoration' in plugin_instance.get_views(first_history_field))
-        elif isinstance(plugin_instance.get_raw(), list):
+        elif isinstance(plugin_instance.get_raw(), list) and len(plugin_instance.get_raw()) > 0:
             first_history_field = plugin_instance.get_items_history_list()[0]['name']
             first_item = plugin_instance.get_raw()[0][plugin_instance.get_key()]
             self.assertIsInstance(plugin_instance.get_views(item=first_item, key=first_history_field), dict)
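The history assertions above index each sample as `[i][0]` and require the values to increase, which implies each entry is a (timestamp, value) pair ordered oldest-first; that shape is inferred from the indexing, not stated in the commit. A small sketch of the assumed structure:

```
# Assumed shape of get_raw_history(<field>): a list of (timestamp, value) samples,
# oldest first, inferred from the [i][0] indexing in the assertions above.
history = [
    (1719648000.0, 12.5),  # first sample
    (1719648002.0, 13.1),  # second sample, taken about 2 s later
]
assert len(history) == 2
assert history[1][0] > history[0][0]  # timestamps must be strictly increasing
```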
@@ -592,6 +594,7 @@ class TestGlances(unittest.TestCase):
         bar.percent = 110
         self.assertEqual(bar.get(), '|||||||||||||||||||||||||||||||||||||||||||| >100%')
 
+    # Error in Github Action. Do not remove the comment.
     # def test_100_system_plugin_method(self):
     #     """Test system plugin methods"""
     #     print('INFO: [TEST_100] Test system plugin methods')
@@ -623,6 +626,7 @@ class TestGlances(unittest.TestCase):
         print('INFO: [TEST_105] Test network plugin methods')
         self._common_plugin_tests('network')
 
+    # Error in Github Action. Do not remove the comment.
     # def test_106_diskio_plugin_method(self):
     #     """Test diskio plugin methods"""
     #     print('INFO: [TEST_106] Test diskio plugin methods')
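These hunks only add a reminder comment above the tests that stay commented out because they fail on GitHub Actions runners. An alternative, sketched below under the assumption that the failures are CI-specific, would be to skip such tests conditionally using the `CI` environment variable that GitHub Actions sets to `true`, instead of commenting them out entirely:

```
import os
import unittest


class TestDiskIOOnCI(unittest.TestCase):
    # GitHub Actions (like most CI systems) exports CI=true in the job environment.
    @unittest.skipIf(os.environ.get('CI') == 'true', 'diskio stats are unreliable on CI runners')
    def test_diskio_plugin(self):
        # Stand-in for the real check (self._common_plugin_tests('diskio') in the suite).
        self.assertTrue(True)


if __name__ == '__main__':
    unittest.main()
```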