feature: Rework network y-axis, linear interpolation for off-screen data (#437)

Rewrite of the y-axis labeling and scaling for the network widget, along with more customization options. One optimization step still remains (caching results so the legend doesn't have to be recalculated on every draw), but that will be done in a separate PR, since this one is already too large.

Furthermore, this change adds linear interpolation at the leftmost limit of a graph when a data point falls outside the visible time range - previously, this led to ugly gaps on the left side of graphs in some cases, because the left-hand limit was not far back enough to reach the data point. We address this by grabbing the value just outside the time range and linearly interpolating at the leftmost limit. This affects all graph widgets (CPU, memory, network).

This can be optimized further, hopefully prior to release, in a separate change.
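For reference, the fix boils down to plain linear interpolation between the last point outside the visible window and the first point inside it. A standalone sketch (the function mirrors the `interpolate_points` helper added in this change; the sample values are made up):

```rust
/// Linearly interpolates the y-value at `time` between two (time, value) points.
/// `point_one` is expected to be "further left" than `point_two`.
fn interpolate_points(point_one: &(f64, f64), point_two: &(f64, f64), time: f64) -> f64 {
    let slope = (point_two.1 - point_one.1) / (point_two.0 - point_one.0);
    (point_one.1 + (time - point_one.0) * slope).max(0.0)
}

fn main() {
    // Display window starts at -10s; the previous data point landed at -12s,
    // which used to leave a gap between -10s and -8s on the chart.
    let outside = (-12.0, 20.0);
    let inside = (-8.0, 40.0);
    let at_edge = interpolate_points(&outside, &inside, -10.0);
    assert_eq!(at_edge, 30.0); // halfway between the two surrounding values
}
```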
Clement Tsang 2021-04-04 05:38:57 -04:00 committed by GitHub
parent 40f4c796f8
commit eb6a737d34
GPG Key ID: 4AEE18F83AFDEB23
31 changed files with 1158 additions and 384 deletions


@ -52,6 +52,8 @@ jobs:
override: true
components: clippy
# TODO: Can probably put a cache here in the future; I'm worried this will cause issues with clippy though, since cargo check breaks it; maybe wait until 1.52, when the fix lands.
- run: cargo clippy --all-targets --workspace -- -D warnings
# Compile/check test.


@ -26,7 +26,10 @@
"Mahmoud",
"Marcin",
"Mousebindings",
"NAS's",
"Nonexhaustive",
"PEBI",
"PETA",
"PKGBUILD",
"PKGBUILDs",
"Polishchuk",
@ -90,6 +93,7 @@
"libc",
"markdownlint",
"memb",
"minmax",
"minwindef",
"musl",
"musleabihf",
@ -102,6 +106,7 @@
"nvme",
"paren",
"pcpu",
"piasecki",
"pids",
"pmem",
"powerpc",
@ -109,7 +114,9 @@
"ppid",
"prepush",
"processthreadsapi",
"pvanheus",
"regexes",
"rposition",
"rsplitn",
"runlevel",
"rustc",


@ -17,6 +17,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- [#381](https://github.com/ClementTsang/bottom/pull/381): Added a filter in the config file for network interfaces.
- [#392](https://github.com/ClementTsang/bottom/pull/392): Added CPU load averages (1, 5, 15) for Unix-based systems.
- [#406](https://github.com/ClementTsang/bottom/pull/406): Added the Nord colour scheme, as well as a light variant.
- [#409](https://github.com/ClementTsang/bottom/pull/409): Added `Ctrl-w` and `Ctrl-h` shortcuts in search, to delete a word and delete a character respectively.
@ -25,6 +27,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- [#425](https://github.com/ClementTsang/bottom/pull/425): Added user into the process widget for Unix-based systems.
- [#437](https://github.com/ClementTsang/bottom/pull/437): Redo dynamic network y-axis, add linear scaling, unit type, and prefix options.
## Changes
- [#372](https://github.com/ClementTsang/bottom/pull/372): Hides the SWAP graph and legend in normal mode if SWAP is 0.
@ -37,6 +41,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- [#420](https://github.com/ClementTsang/bottom/pull/420): Updated tui-rs, allowing for prettier looking tables!
- [#437](https://github.com/ClementTsang/bottom/pull/437): Add a linear interpolation step to the drawing process to prevent missing entries on the left side of charts.
## Bug Fixes
- [#416](https://github.com/ClementTsang/bottom/pull/416): Fixes grouped vs ungrouped modes in the processes widget having inconsistent spacing.


@ -48,8 +48,10 @@ futures = "0.3.12"
indexmap = "~1.6"
itertools = "0.10.0"
once_cell = "1.5.2"
# ordered-float = "2.1.1"
regex = "1.4.3"
serde = { version = "1.0.123", features = ["derive"] }
# Sysinfo is still used in Linux for the ProcessStatus
sysinfo = "0.16.4"
thiserror = "1.0.24"
toml = "0.5.8"


@ -128,7 +128,7 @@ Or, you can just download the binary from the [latest release](https://github.co
### Nightly
You can install pre-release nightly versions [here](https://github.com/ClementTsang/bottom/releases/tag/nightly). Builds are generated every day at 00:00 UTC, based on the most recent commit on the master branch.
### Cargo
@ -252,38 +252,41 @@ Run using `btm`.
Use `btm --help` for more information.
```
--advanced_kill Shows more options when killing a process on Unix-like systems.
--autohide_time Temporarily shows the time scale in graphs.
-b, --basic Hides graphs and uses a more basic look.
--battery Shows the battery widget.
-S, --case_sensitive Enables case sensitivity by default.
-c, --celsius Sets the temperature type to Celsius.
--color <COLOR SCHEME> Use a color scheme, use --help for supported values.
-C, --config <CONFIG PATH> Sets the location of the config file.
-u, --current_usage Sets process CPU% to be based on current CPU%.
-t, --default_time_value <MS> Default time value for graphs in ms.
--default_widget_count <INT> Sets the n'th selected widget type as the default.
--default_widget_type <WIDGET TYPE> Sets the default widget type, use --help for more info.
--disable_click Disables mouse clicks.
-m, --dot_marker Uses a dot marker for graphs.
-f, --fahrenheit Sets the temperature type to Fahrenheit.
-g, --group Groups processes with the same name by default.
-h, --help Prints help information. Use --help for more info.
-a, --hide_avg_cpu Hides the average CPU usage.
--hide_table_gap Hides the spacing between table headers and entries.
--hide_time Completely hides the time scaling.
-k, --kelvin Sets the temperature type to Kelvin.
-l, --left_legend Puts the CPU chart legend to the left side.
--mem_as_value Defaults to showing process memory usage by value.
--process_command Show processes as their commands by default.
-r, --rate <MS> Sets a refresh rate in ms.
-R, --regex Enables regex by default.
--show_table_scroll_position Shows the scroll position tracker in table widgets.
-d, --time_delta <MS> The amount in ms changed upon zooming.
-T, --tree Defaults to showing the process widget in tree mode.
--use_old_network_legend DEPRECATED - uses the older network legend.
-V, --version Prints version information.
-W, --whole_word Enables whole-word matching by default.
--advanced_kill Shows more options when killing a process on Unix-like systems.
--autohide_time Temporarily shows the time scale in graphs.
-b, --basic Hides graphs and uses a more basic look.
--battery Shows the battery widget.
-S, --case_sensitive Enables case sensitivity by default.
-c, --celsius Sets the temperature type to Celsius.
--color <COLOR SCHEME> Use a color scheme, use --help for supported values.
-C, --config <CONFIG PATH> Sets the location of the config file.
-u, --current_usage Sets process CPU% to be based on current CPU%.
-t, --default_time_value <MS> Default time value for graphs in ms.
--default_widget_count <INT> Sets the n'th selected widget type as the default.
--default_widget_type <WIDGET TYPE> Sets the default widget type, use --help for more info.
--disable_click Disables mouse clicks.
-m, --dot_marker Uses a dot marker for graphs.
-f, --fahrenheit Sets the temperature type to Fahrenheit.
-g, --group Groups processes with the same name by default.
-h, --help Prints help information. Use --help for more info.
-a, --hide_avg_cpu Hides the average CPU usage.
--hide_table_gap Hides the spacing between table headers and entries.
--hide_time Completely hides the time scaling.
-k, --kelvin Sets the temperature type to Kelvin.
-l, --left_legend Puts the CPU chart legend to the left side.
--mem_as_value Defaults to showing process memory usage by value.
--network_use_binary_prefix Displays the network widget with binary prefixes.
--network_use_bytes Displays the network widget using bytes.
--network_use_log Displays the network widget with a log scale.
--process_command Show processes as their commands by default.
-r, --rate <MS> Sets a refresh rate in ms.
-R, --regex Enables regex by default.
--show_table_scroll_position Shows the scroll position tracker in table widgets.
-d, --time_delta <MS> The amount in ms changed upon zooming.
-T, --tree Defaults to showing the process widget in tree mode.
--use_old_network_legend DEPRECATED - uses the older network legend.
-V, --version Prints version information.
-W, --whole_word Enables whole-word matching by default.
```
### Keybindings
@ -464,7 +467,7 @@ As yet _another_ process/system visualization and management application, bottom
- RAM and swap usage visualization
- Network visualization for receiving and transmitting, on a log-graph scale
- Network visualization for receiving and transmitting
- Display information about disk capacity and I/O per second
@ -599,6 +602,9 @@ These are the following supported flag config values, which correspond to the fl
| `show_table_scroll_position` | Boolean | Shows the scroll position tracker in table widgets. |
| `process_command` | Boolean | Show processes as their commands by default. |
| `advanced_kill` | Boolean | Shows more options when killing a process on Unix-like systems. |
| `network_use_binary_prefix` | Boolean | Displays the network widget with binary prefixes. |
| `network_use_bytes` | Boolean | Displays the network widget using bytes. |
| `network_use_log` | Boolean | Displays the network widget with a log scale. |
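To illustrate what the prefix option controls (a hypothetical sketch, not bottom's actual conversion code): decimal prefixes step by powers of 1000 (Kbit, Mbit, ...), while binary prefixes step by powers of 1024 (Kibit, Mibit, ...).

```rust
/// Formats a bit count with either decimal (Kbit, Mbit, ...) or binary
/// (Kibit, Mibit, ...) prefixes. Hypothetical helper for illustration only.
fn format_bits(bits: f64, binary_prefix: bool) -> String {
    let (base, units) = if binary_prefix {
        (1024.0, ["bit", "Kibit", "Mibit", "Gibit", "Tibit"])
    } else {
        (1000.0, ["bit", "Kbit", "Mbit", "Gbit", "Tbit"])
    };
    let mut value = bits;
    let mut unit_idx = 0;
    // Divide down until the value fits under one step of the chosen base.
    while value >= base && unit_idx < units.len() - 1 {
        value /= base;
        unit_idx += 1;
    }
    format!("{:.1}{}", value, units[unit_idx])
}

fn main() {
    assert_eq!(format_bits(2_500_000.0, false), "2.5Mbit");
    assert_eq!(format_bits(2_097_152.0, true), "2.0Mibit");
}
```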
#### Theming


@ -21,6 +21,7 @@ use crate::{
options::Config,
options::ConfigFlags,
options::WidgetIdEnabled,
units::data_units::DataUnit,
utils::error::{BottomError, Result},
Pid,
};
@ -34,6 +35,12 @@ pub mod states;
const MAX_SEARCH_LENGTH: usize = 200;
#[derive(Debug, Clone)]
pub enum AxisScaling {
Log,
Linear,
}
/// AppConfigFields is meant to cover basic fields that would normally be set
/// by config files or launch options.
#[derive(Debug)]
@ -55,6 +62,10 @@ pub struct AppConfigFields {
pub no_write: bool,
pub show_table_scroll_position: bool,
pub is_advanced_kill: bool,
// TODO: Remove these, move network details state-side.
pub network_unit_type: DataUnit,
pub network_scale_type: AxisScaling,
pub network_use_binary_prefix: bool,
}
/// For filtering out information
@ -708,7 +719,7 @@ impl App {
if self.delete_dialog_state.is_showing_dd {
if self.dd_err.is_some() {
self.close_dd();
} else if self.delete_dialog_state.selected_signal != KillSignal::CANCEL {
} else if self.delete_dialog_state.selected_signal != KillSignal::Cancel {
// If within dd...
if self.dd_err.is_none() {
// Also ensure that we didn't just fail a dd...
@ -886,7 +897,7 @@ impl App {
if kbd_signal > 31 {
kbd_signal %= 10;
}
self.delete_dialog_state.selected_signal = KillSignal::KILL(kbd_signal);
self.delete_dialog_state.selected_signal = KillSignal::Kill(kbd_signal);
if kbd_signal < 10 {
self.delete_dialog_state.keyboard_signal_select = kbd_signal;
} else {
@ -991,15 +1002,15 @@ impl App {
{
if self.app_config_fields.is_advanced_kill {
match self.delete_dialog_state.selected_signal {
KillSignal::KILL(prev_signal) => {
KillSignal::Kill(prev_signal) => {
self.delete_dialog_state.selected_signal = match prev_signal - 1 {
0 => KillSignal::CANCEL,
0 => KillSignal::Cancel,
// 32+33 are skipped
33 => KillSignal::KILL(31),
signal => KillSignal::KILL(signal),
33 => KillSignal::Kill(31),
signal => KillSignal::Kill(signal),
};
}
KillSignal::CANCEL => {}
KillSignal::Cancel => {}
};
} else {
self.delete_dialog_state.selected_signal = KillSignal::default();
@ -1007,7 +1018,7 @@ impl App {
}
#[cfg(target_os = "windows")]
{
self.delete_dialog_state.selected_signal = KillSignal::KILL(1);
self.delete_dialog_state.selected_signal = KillSignal::Kill(1);
}
}
}
@ -1067,23 +1078,23 @@ impl App {
{
if self.app_config_fields.is_advanced_kill {
let new_signal = match self.delete_dialog_state.selected_signal {
KillSignal::CANCEL => 1,
KillSignal::Cancel => 1,
// 32+33 are skipped
#[cfg(target_os = "linux")]
KillSignal::KILL(31) => 34,
KillSignal::Kill(31) => 34,
#[cfg(target_os = "macos")]
KillSignal::KILL(31) => 31,
KillSignal::KILL(64) => 64,
KillSignal::KILL(signal) => signal + 1,
KillSignal::Kill(31) => 31,
KillSignal::Kill(64) => 64,
KillSignal::Kill(signal) => signal + 1,
};
self.delete_dialog_state.selected_signal = KillSignal::KILL(new_signal);
self.delete_dialog_state.selected_signal = KillSignal::Kill(new_signal);
} else {
self.delete_dialog_state.selected_signal = KillSignal::CANCEL;
self.delete_dialog_state.selected_signal = KillSignal::Cancel;
}
}
#[cfg(target_os = "windows")]
{
self.delete_dialog_state.selected_signal = KillSignal::CANCEL;
self.delete_dialog_state.selected_signal = KillSignal::Cancel;
}
}
}
@ -1091,15 +1102,15 @@ impl App {
pub fn on_page_up(&mut self) {
if self.delete_dialog_state.is_showing_dd {
let mut new_signal = match self.delete_dialog_state.selected_signal {
KillSignal::CANCEL => 0,
KillSignal::KILL(signal) => max(signal, 8) - 8,
KillSignal::Cancel => 0,
KillSignal::Kill(signal) => max(signal, 8) - 8,
};
if new_signal > 23 && new_signal < 33 {
new_signal -= 2;
}
self.delete_dialog_state.selected_signal = match new_signal {
0 => KillSignal::CANCEL,
sig => KillSignal::KILL(sig),
0 => KillSignal::Cancel,
sig => KillSignal::Kill(sig),
};
}
}
@ -1107,13 +1118,13 @@ impl App {
pub fn on_page_down(&mut self) {
if self.delete_dialog_state.is_showing_dd {
let mut new_signal = match self.delete_dialog_state.selected_signal {
KillSignal::CANCEL => 8,
KillSignal::KILL(signal) => min(signal + 8, MAX_SIGNAL),
KillSignal::Cancel => 8,
KillSignal::Kill(signal) => min(signal + 8, MAX_SIGNAL),
};
if new_signal > 31 && new_signal < 42 {
new_signal += 2;
}
self.delete_dialog_state.selected_signal = KillSignal::KILL(new_signal);
self.delete_dialog_state.selected_signal = KillSignal::Kill(new_signal);
}
}
@ -1672,8 +1683,8 @@ impl App {
if let Some(current_selected_processes) = &self.to_delete_process_list {
#[cfg(target_family = "unix")]
let signal = match self.delete_dialog_state.selected_signal {
KillSignal::KILL(sig) => sig,
KillSignal::CANCEL => 15, // should never happen, so just TERM
KillSignal::Kill(sig) => sig,
KillSignal::Cancel => 15, // should never happen, so just TERM
};
for pid in &current_selected_processes.1 {
#[cfg(target_family = "unix")]
@ -2229,7 +2240,7 @@ impl App {
} else if self.help_dialog_state.is_showing_help {
self.help_dialog_state.scroll_state.current_scroll_index = 0;
} else if self.delete_dialog_state.is_showing_dd {
self.delete_dialog_state.selected_signal = KillSignal::CANCEL;
self.delete_dialog_state.selected_signal = KillSignal::Cancel;
}
}
@ -2312,7 +2323,7 @@ impl App {
.max_scroll_index
.saturating_sub(1);
} else if self.delete_dialog_state.is_showing_dd {
self.delete_dialog_state.selected_signal = KillSignal::KILL(MAX_SIGNAL);
self.delete_dialog_state.selected_signal = KillSignal::Kill(MAX_SIGNAL);
}
}
@ -2871,13 +2882,13 @@ impl App {
},
) {
Some((_, _, _, _, 0)) => {
self.delete_dialog_state.selected_signal = KillSignal::CANCEL
self.delete_dialog_state.selected_signal = KillSignal::Cancel
}
Some((_, _, _, _, idx)) => {
if *idx > 31 {
self.delete_dialog_state.selected_signal = KillSignal::KILL(*idx + 2)
self.delete_dialog_state.selected_signal = KillSignal::Kill(*idx + 2)
} else {
self.delete_dialog_state.selected_signal = KillSignal::KILL(*idx)
self.delete_dialog_state.selected_signal = KillSignal::Kill(*idx)
}
}
_ => {}


@ -19,7 +19,7 @@ use std::{time::Instant, vec::Vec};
use crate::app::data_harvester::load_avg::LoadAvgHarvest;
use crate::{
data_harvester::{batteries, cpu, disks, load_avg, mem, network, processes, temperature, Data},
utils::gen_util::get_simple_byte_values,
utils::gen_util::get_decimal_bytes,
};
use regex::Regex;
@ -57,7 +57,7 @@ pub struct DataCollection {
pub load_avg_harvest: load_avg::LoadAvgHarvest,
pub process_harvest: Vec<processes::ProcessHarvest>,
pub disk_harvest: Vec<disks::DiskHarvest>,
pub io_harvest: disks::IOHarvest,
pub io_harvest: disks::IoHarvest,
pub io_labels_and_prev: Vec<((u64, u64), (u64, u64))>,
pub io_labels: Vec<(String, String)>,
pub temp_harvest: Vec<temperature::TempHarvest>,
@ -77,7 +77,7 @@ impl Default for DataCollection {
load_avg_harvest: load_avg::LoadAvgHarvest::default(),
process_harvest: Vec::default(),
disk_harvest: Vec::default(),
io_harvest: disks::IOHarvest::default(),
io_harvest: disks::IoHarvest::default(),
io_labels_and_prev: Vec::default(),
io_labels: Vec::default(),
temp_harvest: Vec::default(),
@ -95,7 +95,7 @@ impl DataCollection {
self.cpu_harvest = cpu::CpuHarvest::default();
self.process_harvest = Vec::default();
self.disk_harvest = Vec::default();
self.io_harvest = disks::IOHarvest::default();
self.io_harvest = disks::IoHarvest::default();
self.io_labels_and_prev = Vec::default();
self.temp_harvest = Vec::default();
self.battery_harvest = Vec::default();
@ -205,22 +205,15 @@ impl DataCollection {
}
fn eat_network(&mut self, network: network::NetworkHarvest, new_entry: &mut TimedData) {
// trace!("Eating network.");
// FIXME [NETWORKING; CONFIG]: The ability to config this?
// FIXME [NETWORKING]: Support bits, support switching between decimal and binary units (move the log part to conversion and switch on the fly)
// RX
new_entry.rx_data = if network.rx > 0 {
(network.rx as f64).log2()
} else {
0.0
};
if network.rx > 0 {
new_entry.rx_data = network.rx as f64;
}
// TX
new_entry.tx_data = if network.tx > 0 {
(network.tx as f64).log2()
} else {
0.0
};
if network.tx > 0 {
new_entry.tx_data = network.tx as f64;
}
// In addition copy over latest data for easy reference
self.network_harvest = network;
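Note the change in `eat_network` above: `log2` is no longer applied at collection time. Raw values are stored, and any log scaling is deferred to the conversion/drawing step, so the axis scale can be switched on the fly. A trivial standalone illustration:

```rust
fn main() {
    // Raw value stored by the collector (previously, log2 was baked in here):
    let rx_bits: u64 = 4096;
    let stored = rx_bits as f64;

    // Log scaling now happens only at draw time, and only when the
    // log-scale axis option is enabled:
    let use_log_scale = true;
    let displayed = if use_log_scale { stored.log2() } else { stored };
    assert_eq!(displayed, 12.0);
}
```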
@ -250,7 +243,7 @@ impl DataCollection {
}
fn eat_disks(
&mut self, disks: Vec<disks::DiskHarvest>, io: disks::IOHarvest, harvested_time: Instant,
&mut self, disks: Vec<disks::DiskHarvest>, io: disks::IoHarvest, harvested_time: Instant,
) {
// trace!("Eating disks.");
// TODO: [PO] To implement
@ -300,8 +293,8 @@ impl DataCollection {
*io_prev = (io_r_pt, io_w_pt);
if let Some(io_labels) = self.io_labels.get_mut(itx) {
let converted_read = get_simple_byte_values(r_rate, false);
let converted_write = get_simple_byte_values(w_rate, false);
let converted_read = get_decimal_bytes(r_rate);
let converted_write = get_decimal_bytes(w_rate);
*io_labels = (
format!("{:.*}{}/s", 0, converted_read.0, converted_read.1),
format!("{:.*}{}/s", 0, converted_write.0, converted_write.1),


@ -36,7 +36,7 @@ pub struct Data {
pub network: Option<network::NetworkHarvest>,
pub list_of_processes: Option<Vec<processes::ProcessHarvest>>,
pub disks: Option<Vec<disks::DiskHarvest>>,
pub io: Option<disks::IOHarvest>,
pub io: Option<disks::IoHarvest>,
pub list_of_batteries: Option<Vec<batteries::BatteryHarvest>>,
}


@ -20,19 +20,11 @@ pub fn refresh_batteries(manager: &Manager, batteries: &mut [Battery]) -> Vec<Ba
Some(BatteryHarvest {
secs_until_full: {
let optional_time = battery.time_to_full();
if let Some(time) = optional_time {
Some(f64::from(time.get::<second>()) as i64)
} else {
None
}
optional_time.map(|time| f64::from(time.get::<second>()) as i64)
},
secs_until_empty: {
let optional_time = battery.time_to_empty();
if let Some(time) = optional_time {
Some(f64::from(time.get::<second>()) as i64)
} else {
None
}
optional_time.map(|time| f64::from(time.get::<second>()) as i64)
},
charge_percent: f64::from(battery.state_of_charge().get::<percent>()),
power_consumption_rate_watts: f64::from(battery.energy_rate().get::<watt>()),
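The battery change above is a pure refactor: the `if let Some(...) { Some(...) } else { None }` pattern collapses into `Option::map` with identical behavior. A standalone sketch with a made-up value:

```rust
fn main() {
    let optional_time: Option<f32> = Some(90.5);

    // The verbose form the diff removes:
    let verbose = if let Some(time) = optional_time {
        Some(f64::from(time) as i64)
    } else {
        None
    };

    // The equivalent `Option::map` form the diff adds:
    let concise = optional_time.map(|time| f64::from(time) as i64);

    assert_eq!(verbose, concise); // both are Some(90)
}
```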


@ -10,21 +10,21 @@ pub struct DiskHarvest {
}
#[derive(Clone, Debug)]
pub struct IOData {
pub struct IoData {
pub read_bytes: u64,
pub write_bytes: u64,
}
pub type IOHarvest = std::collections::HashMap<String, Option<IOData>>;
pub type IoHarvest = std::collections::HashMap<String, Option<IoData>>;
pub async fn get_io_usage(actually_get: bool) -> crate::utils::error::Result<Option<IOHarvest>> {
pub async fn get_io_usage(actually_get: bool) -> crate::utils::error::Result<Option<IoHarvest>> {
if !actually_get {
return Ok(None);
}
use futures::StreamExt;
let mut io_hash: std::collections::HashMap<String, Option<IOData>> =
let mut io_hash: std::collections::HashMap<String, Option<IoData>> =
std::collections::HashMap::new();
let counter_stream = heim::disk::io_counters().await?;
@ -37,7 +37,7 @@ pub async fn get_io_usage(actually_get: bool) -> crate::utils::error::Result<Opt
// FIXME: [MOUNT POINT] Add the filter here I guess?
io_hash.insert(
mount_point.to_string(),
Some(IOData {
Some(IoData {
read_bytes: io.read_bytes().get::<heim::units::information::byte>(),
write_bytes: io.write_bytes().get::<heim::units::information::byte>(),
}),


@ -1,6 +1,7 @@
use std::time::Instant;
#[derive(Default, Clone, Debug)]
/// All units in bits.
pub struct NetworkHarvest {
pub rx: u64,
pub tx: u64,
@ -47,8 +48,8 @@ pub async fn get_network_data(
};
if to_keep {
total_rx += network.get_total_received();
total_tx += network.get_total_transmitted();
total_rx += network.get_total_received() * 8;
total_tx += network.get_total_transmitted() * 8;
}
}
@ -106,8 +107,12 @@ pub async fn get_network_data(
};
if to_keep {
total_rx += io.bytes_recv().get::<heim::units::information::byte>();
total_tx += io.bytes_sent().get::<heim::units::information::byte>();
// TODO: Use bytes as the default instead, perhaps?
// Since you might have to do a double conversion (bytes -> bits -> bytes) in some cases;
// but if you stick to bytes, then in the bytes case you do no conversion, and in the bits case
// you only do one conversion...
total_rx += io.bytes_recv().get::<heim::units::information::bit>();
total_tx += io.bytes_sent().get::<heim::units::information::bit>();
}
}
}
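Since `NetworkHarvest` totals are now documented as being in bits, byte counters from the OS are multiplied by 8 on platforms that only expose bytes (the heim path above reads the counters directly in bits instead). A trivial standalone sketch:

```rust
/// Network totals are now tracked in bits, so byte counters
/// from the OS are converted by multiplying by 8.
fn bytes_to_bits(bytes: u64) -> u64 {
    bytes * 8
}

fn main() {
    // 1 KiB received corresponds to 8192 bits.
    assert_eq!(bytes_to_bits(1024), 8192);
}
```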


@ -2,6 +2,9 @@ use crate::Pid;
use std::path::PathBuf;
use sysinfo::ProcessStatus;
#[cfg(target_os = "linux")]
use std::path::Path;
#[cfg(target_family = "unix")]
use crate::utils::error;
@ -168,7 +171,7 @@ fn cpu_usage_calculation(
// SC in case that the parsing will fail due to length:
if val.len() <= 10 {
return Err(error::BottomError::InvalidIO(format!(
return Err(error::BottomError::InvalidIo(format!(
"CPU parsing will fail due to too short of a return value; saw {} values, expected 10 values.",
val.len()
)));
@ -222,8 +225,8 @@ fn get_linux_process_vsize_rss(stat: &[&str]) -> (u64, u64) {
#[cfg(target_os = "linux")]
/// Preferably use this only on small files.
fn read_path_contents(path: &PathBuf) -> std::io::Result<String> {
Ok(std::fs::read_to_string(path)?)
fn read_path_contents(path: &Path) -> std::io::Result<String> {
std::fs::read_to_string(path)
}
#[cfg(target_os = "linux")]
@ -272,9 +275,8 @@ fn get_macos_cpu_usage(pids: &[i32]) -> std::io::Result<std::collections::HashMa
let output = std::process::Command::new("ps")
.args(&["-o", "pid=,pcpu=", "-p"])
.arg(
pids.iter()
.map(i32::to_string)
.intersperse(",".to_string())
// Has to look like this since otherwise, you hit an `unstable_name_collisions` warning.
Itertools::intersperse(pids.iter().map(i32::to_string), ",".to_string())
.collect::<String>(),
)
.output()?;
@ -298,7 +300,7 @@ fn get_macos_cpu_usage(pids: &[i32]) -> std::io::Result<std::collections::HashMa
}
#[cfg(target_os = "linux")]
fn get_uid_and_gid(path: &PathBuf) -> (Option<u32>, Option<u32>) {
fn get_uid_and_gid(path: &Path) -> (Option<u32>, Option<u32>) {
// FIXME: [OPT] - can we merge our /stat and /status calls?
use std::io::prelude::*;
use std::io::BufReader;
@ -470,15 +472,15 @@ fn read_proc(
Ok(ProcessHarvest {
pid,
parent_pid,
name,
command,
cpu_usage_percent,
mem_usage_percent,
mem_usage_bytes,
cpu_usage_percent,
total_read_bytes,
total_write_bytes,
name,
command,
read_bytes_per_sec,
write_bytes_per_sec,
total_read_bytes,
total_write_bytes,
process_state,
process_state_char,
uid,


@ -95,11 +95,7 @@ pub async fn get_temperature_data(
while let Some(sensor) = sensor_data.next().await {
if let Ok(sensor) = sensor {
let component_name = Some(sensor.unit().to_string());
let component_label = if let Some(label) = sensor.label() {
Some(label.to_string())
} else {
None
};
let component_label = sensor.label().map(|label| label.to_string());
let name = match (component_name, component_label) {
(Some(name), Some(label)) => format!("{}: {}", name, label),


@ -188,11 +188,7 @@ impl ProcessQuery for ProcWidgetState {
let initial_or = Or {
lhs: And {
lhs: Prefix {
or: if let Some(or) = list_of_ors.pop_front() {
Some(Box::new(or))
} else {
None
},
or: list_of_ors.pop_front().map(Box::new),
compare_prefix: None,
regex_prefix: None,
},


@ -42,18 +42,18 @@ pub struct AppScrollWidgetState {
#[derive(PartialEq)]
pub enum KillSignal {
CANCEL,
KILL(usize),
Cancel,
Kill(usize),
}
impl Default for KillSignal {
#[cfg(target_family = "unix")]
fn default() -> Self {
KillSignal::KILL(15)
KillSignal::Kill(15)
}
#[cfg(target_os = "windows")]
fn default() -> Self {
KillSignal::KILL(1)
KillSignal::Kill(1)
}
}
@ -690,13 +690,29 @@ impl ProcState {
pub struct NetWidgetState {
pub current_display_time: u64,
pub autohide_timer: Option<Instant>,
// pub draw_max_range_cache: f64,
// pub draw_labels_cache: Vec<String>,
// pub draw_time_start_cache: f64,
// TODO: Re-enable these when we move net details state-side!
// pub unit_type: DataUnitTypes,
// pub scale_type: AxisScaling,
}
impl NetWidgetState {
pub fn init(current_display_time: u64, autohide_timer: Option<Instant>) -> Self {
pub fn init(
current_display_time: u64,
autohide_timer: Option<Instant>,
// unit_type: DataUnitTypes,
// scale_type: AxisScaling,
) -> Self {
NetWidgetState {
current_display_time,
autohide_timer,
// draw_max_range_cache: 0.0,
// draw_labels_cache: vec![],
// draw_time_start_cache: 0.0,
// unit_type,
// scale_type,
}
}
}


@ -26,6 +26,10 @@ use crossterm::{
};
use tui::{backend::CrosstermBackend, Terminal};
// TODO: Add a debugger tool:
// Debugger binary. This isn't implemented yet; the idea for this is to make it easier to troubleshoot bug reports
// by providing a built-in debugger to help gather relevant information to narrow down the problem.
fn main() -> Result<()> {
let matches = clap::get_matches();
// let is_debug = matches.is_present("debug");
@ -178,6 +182,9 @@ fn main() -> Result<()> {
false,
app.app_config_fields.use_basic_mode
|| app.app_config_fields.use_old_network_legend,
&app.app_config_fields.network_scale_type,
&app.app_config_fields.network_unit_type,
app.app_config_fields.network_use_binary_prefix,
);
app.canvas_data.network_data_rx = network_data.rx;
app.canvas_data.network_data_tx = network_data.tx;


@ -9,6 +9,8 @@ use tui::{
Frame, Terminal,
};
// use ordered_float::OrderedFloat;
use canvas_colours::*;
use dialogs::*;
use screens::*;
@ -54,7 +56,7 @@ pub struct DisplayableData {
pub mem_labels: Option<(String, String)>,
pub swap_labels: Option<(String, String)>,
pub mem_data: Vec<Point>,
pub mem_data: Vec<Point>, // TODO: Switch this and all data points over to a better data structure...
pub swap_data: Vec<Point>,
pub load_avg_data: [f32; 3],
pub cpu_data: Vec<ConvertedCpuData>,


@ -72,11 +72,11 @@ impl KillDialog for Painter {
) {
if cfg!(target_os = "windows") || !app_state.app_config_fields.is_advanced_kill {
let (yes_button, no_button) = match app_state.delete_dialog_state.selected_signal {
KillSignal::KILL(_) => (
KillSignal::Kill(_) => (
Span::styled("Yes", self.colours.currently_selected_text_style),
Span::raw("No"),
),
KillSignal::CANCEL => (
KillSignal::Cancel => (
Span::raw("Yes"),
Span::styled("No", self.colours.currently_selected_text_style),
),
@ -249,8 +249,8 @@ impl KillDialog for Painter {
.split(*button_draw_loc)[1];
let mut selected = match app_state.delete_dialog_state.selected_signal {
KillSignal::CANCEL => 0,
KillSignal::KILL(signal) => signal,
KillSignal::Cancel => 0,
KillSignal::Kill(signal) => signal,
};
// 32+33 are skipped
if selected > 31 {


@ -117,6 +117,12 @@ pub fn get_column_widths(
filtered_column_widths
}
/// FIXME: [command move] This is a greedy method of determining column widths. This is reserved for columns where we are okay with
/// shoving information as far right as required.
// pub fn greedy_get_column_widths() -> Vec<u16> {
// vec![]
// }
pub fn get_search_start_position(
num_columns: usize, cursor_direction: &app::CursorDirection, cursor_bar: &mut usize,
current_cursor_position: usize, is_force_redraw: bool,
@ -205,3 +211,14 @@ pub fn calculate_basic_use_bars(use_percentage: f64, num_bars_available: usize)
num_bars_available,
)
}
/// Interpolates between two points. Mainly used to help fill in tui-rs blanks in certain situations.
/// It is expected that `point_one` is "further left" than `point_two`.
/// A point is a pair of floats in (x, y) form, where x is time and y is value.
pub fn interpolate_points(point_one: &(f64, f64), point_two: &(f64, f64), time: f64) -> f64 {
let delta_x = point_two.0 - point_one.0;
let delta_y = point_two.1 - point_one.1;
let slope = delta_y / delta_x;
(point_one.1 + (time - point_one.0) * slope).max(0.0)
}


@ -4,7 +4,7 @@ use unicode_segmentation::UnicodeSegmentation;
use crate::{
app::{layout_manager::WidgetDirection, App},
canvas::{
drawing_utils::{get_column_widths, get_start_position},
drawing_utils::{get_column_widths, get_start_position, interpolate_points},
Painter,
},
constants::*,
@ -146,32 +146,34 @@ impl CpuGraphWidget for Painter {
];
let y_axis_labels = vec![
Span::styled("0%", self.colours.graph_style),
Span::styled(" 0%", self.colours.graph_style),
Span::styled("100%", self.colours.graph_style),
];
let time_start = -(cpu_widget_state.current_display_time as f64);
let x_axis = if app_state.app_config_fields.hide_time
|| (app_state.app_config_fields.autohide_time
&& cpu_widget_state.autohide_timer.is_none())
{
Axis::default().bounds([-(cpu_widget_state.current_display_time as f64), 0.0])
Axis::default().bounds([time_start, 0.0])
} else if let Some(time) = cpu_widget_state.autohide_timer {
if std::time::Instant::now().duration_since(time).as_millis()
< AUTOHIDE_TIMEOUT_MILLISECONDS as u128
{
Axis::default()
.bounds([-(cpu_widget_state.current_display_time as f64), 0.0])
.bounds([time_start, 0.0])
.style(self.colours.graph_style)
.labels(display_time_labels)
} else {
cpu_widget_state.autohide_timer = None;
Axis::default().bounds([-(cpu_widget_state.current_display_time as f64), 0.0])
Axis::default().bounds([time_start, 0.0])
}
} else if draw_loc.height < TIME_LABEL_HEIGHT_LIMIT {
Axis::default().bounds([-(cpu_widget_state.current_display_time as f64), 0.0])
Axis::default().bounds([time_start, 0.0])
} else {
Axis::default()
.bounds([-(cpu_widget_state.current_display_time as f64), 0.0])
.bounds([time_start, 0.0])
.style(self.colours.graph_style)
.labels(display_time_labels)
};
@ -184,6 +186,59 @@ impl CpuGraphWidget for Painter {
let use_dot = app_state.app_config_fields.use_dot;
let show_avg_cpu = app_state.app_config_fields.show_average_cpu;
let current_scroll_position = cpu_widget_state.scroll_state.current_scroll_position;
let interpolated_cpu_points = cpu_data
.iter_mut()
.enumerate()
.map(|(itx, cpu)| {
let to_show = if current_scroll_position == ALL_POSITION {
true
} else {
itx == current_scroll_position
};
if to_show {
if let Some(end_pos) = cpu
.cpu_data
.iter()
.position(|(time, _data)| *time >= time_start)
{
if end_pos > 1 {
let start_pos = end_pos - 1;
let outside_point = cpu.cpu_data.get(start_pos);
let inside_point = cpu.cpu_data.get(end_pos);
if let (Some(outside_point), Some(inside_point)) =
(outside_point, inside_point)
{
let old = *outside_point;
let new_point = (
time_start,
interpolate_points(outside_point, inside_point, time_start),
);
if let Some(to_replace) = cpu.cpu_data.get_mut(start_pos) {
*to_replace = new_point;
Some((start_pos, old))
} else {
None // Failed to get mutable reference.
}
} else {
None // Point somehow doesn't exist in our data
}
} else {
None // Point is already "leftmost", no need to interpolate.
}
} else {
None // There is no point.
}
} else {
None
}
})
.collect::<Vec<_>>();
let dataset_vector: Vec<Dataset<'_>> = if current_scroll_position == ALL_POSITION {
cpu_data
.iter()
@ -311,6 +366,18 @@ impl CpuGraphWidget for Painter {
.y_axis(y_axis),
draw_loc,
);
// Reset interpolated points
cpu_data
.iter_mut()
.zip(interpolated_cpu_points)
.for_each(|(cpu, interpolation)| {
if let Some((index, old_value)) = interpolation {
if let Some(to_replace) = cpu.cpu_data.get_mut(index) {
*to_replace = old_value;
}
}
});
}
}

View File

@ -1,4 +1,8 @@
use crate::{app::App, canvas::Painter, constants::*};
use crate::{
app::App,
canvas::{drawing_utils::interpolate_points, Painter},
constants::*,
};
use tui::{
backend::Backend,
@ -22,8 +26,10 @@ impl MemGraphWidget for Painter {
&self, f: &mut Frame<'_, B>, app_state: &mut App, draw_loc: Rect, widget_id: u64,
) {
if let Some(mem_widget_state) = app_state.mem_state.widget_states.get_mut(&widget_id) {
let mem_data: &[(f64, f64)] = &app_state.canvas_data.mem_data;
let swap_data: &[(f64, f64)] = &app_state.canvas_data.swap_data;
let mem_data: &mut [(f64, f64)] = &mut app_state.canvas_data.mem_data;
let swap_data: &mut [(f64, f64)] = &mut app_state.canvas_data.swap_data;
let time_start = -(mem_widget_state.current_display_time as f64);
let display_time_labels = vec![
Span::styled(
@ -33,7 +39,7 @@ impl MemGraphWidget for Painter {
Span::styled("0s".to_string(), self.colours.graph_style),
];
let y_axis_label = vec![
Span::styled("0%", self.colours.graph_style),
Span::styled(" 0%", self.colours.graph_style),
Span::styled("100%", self.colours.graph_style),
];
@ -41,24 +47,24 @@ impl MemGraphWidget for Painter {
|| (app_state.app_config_fields.autohide_time
&& mem_widget_state.autohide_timer.is_none())
{
Axis::default().bounds([-(mem_widget_state.current_display_time as f64), 0.0])
Axis::default().bounds([time_start, 0.0])
} else if let Some(time) = mem_widget_state.autohide_timer {
if std::time::Instant::now().duration_since(time).as_millis()
< AUTOHIDE_TIMEOUT_MILLISECONDS as u128
{
Axis::default()
.bounds([-(mem_widget_state.current_display_time as f64), 0.0])
.bounds([time_start, 0.0])
.style(self.colours.graph_style)
.labels(display_time_labels)
} else {
mem_widget_state.autohide_timer = None;
Axis::default().bounds([-(mem_widget_state.current_display_time as f64), 0.0])
Axis::default().bounds([time_start, 0.0])
}
} else if draw_loc.height < TIME_LABEL_HEIGHT_LIMIT {
Axis::default().bounds([-(mem_widget_state.current_display_time as f64), 0.0])
Axis::default().bounds([time_start, 0.0])
} else {
Axis::default()
.bounds([-(mem_widget_state.current_display_time as f64), 0.0])
.bounds([time_start, 0.0])
.style(self.colours.graph_style)
.labels(display_time_labels)
};
@ -68,6 +74,75 @@ impl MemGraphWidget for Painter {
.bounds([0.0, 100.5])
.labels(y_axis_label);
// Interpolate values to avoid ugly gaps
let interpolated_mem_point = if let Some(end_pos) = mem_data
.iter()
.position(|(time, _data)| *time >= time_start)
{
if end_pos > 1 {
let start_pos = end_pos - 1;
let outside_point = mem_data.get(start_pos);
let inside_point = mem_data.get(end_pos);
if let (Some(outside_point), Some(inside_point)) = (outside_point, inside_point)
{
let old = *outside_point;
let new_point = (
time_start,
interpolate_points(outside_point, inside_point, time_start),
);
if let Some(to_replace) = mem_data.get_mut(start_pos) {
*to_replace = new_point;
Some((start_pos, old))
} else {
None // Failed to get mutable reference.
}
} else {
None // Point somehow doesn't exist in our data
}
} else {
None // Point is already "leftmost", no need to interpolate.
}
} else {
None // There is no point.
};
let interpolated_swap_point = if let Some(end_pos) = swap_data
.iter()
.position(|(time, _data)| *time >= time_start)
{
if end_pos > 1 {
let start_pos = end_pos - 1;
let outside_point = swap_data.get(start_pos);
let inside_point = swap_data.get(end_pos);
if let (Some(outside_point), Some(inside_point)) = (outside_point, inside_point)
{
let old = *outside_point;
let new_point = (
time_start,
interpolate_points(outside_point, inside_point, time_start),
);
if let Some(to_replace) = swap_data.get_mut(start_pos) {
*to_replace = new_point;
Some((start_pos, old))
} else {
None // Failed to get mutable reference.
}
} else {
None // Point somehow doesn't exist in our data
}
} else {
None // Point is already "leftmost", no need to interpolate.
}
} else {
None // There is no point.
};
let mut mem_canvas_vec: Vec<Dataset<'_>> = vec![];
if let Some((label_percent, label_frac)) = &app_state.canvas_data.mem_labels {
@ -147,6 +222,19 @@ impl MemGraphWidget for Painter {
.hidden_legend_constraints((Constraint::Ratio(3, 4), Constraint::Ratio(3, 4))),
draw_loc,
);
// Now if you're done, reset any interpolated points!
if let Some((index, old_value)) = interpolated_mem_point {
if let Some(to_replace) = mem_data.get_mut(index) {
*to_replace = old_value;
}
}
if let Some((index, old_value)) = interpolated_swap_point {
if let Some(to_replace) = swap_data.get_mut(index) {
*to_replace = old_value;
}
}
}
if app_state.should_get_widget_bounds() {

View File

@ -3,9 +3,13 @@ use std::cmp::max;
use unicode_segmentation::UnicodeSegmentation;
use crate::{
app::App,
canvas::{drawing_utils::get_column_widths, Painter},
app::{App, AxisScaling},
canvas::{
drawing_utils::{get_column_widths, interpolate_points},
Painter,
},
constants::*,
units::data_units::DataUnit,
utils::gen_util::*,
};
@ -82,103 +86,344 @@ impl NetworkGraphWidget for Painter {
/// A point is of the form (time, data).
type Point = (f64, f64);
/// Returns the max data point and its time within the given time range.
fn get_max_entry(
rx: &[Point], tx: &[Point], time_start: f64, network_scale_type: &AxisScaling,
network_use_binary_prefix: bool,
) -> (f64, f64) {
/// Determines a "fake" max value in circumstances where we couldn't find one from the data.
fn calculate_missing_max(
network_scale_type: &AxisScaling, network_use_binary_prefix: bool,
) -> f64 {
match network_scale_type {
AxisScaling::Log => {
if network_use_binary_prefix {
LOG_KIBI_LIMIT
} else {
LOG_KILO_LIMIT
}
}
AxisScaling::Linear => {
if network_use_binary_prefix {
KIBI_LIMIT_F64
} else {
KILO_LIMIT_F64
}
}
}
}
// First, narrow our ranges to what we actually need to look at. We can exploit the fact that our rx and tx slices
// are sorted by time, so we can short-circuit our search to filter out only the relevant data points...
let filtered_rx = if let (Some(rx_start), Some(rx_end)) = (
rx.iter().position(|(time, _data)| *time >= time_start),
rx.iter().rposition(|(time, _data)| *time <= 0.0),
) {
Some(&rx[rx_start..=rx_end])
} else {
None
};
let filtered_tx = if let (Some(tx_start), Some(tx_end)) = (
tx.iter().position(|(time, _data)| *time >= time_start),
tx.iter().rposition(|(time, _data)| *time <= 0.0),
) {
Some(&tx[tx_start..=tx_end])
} else {
None
};
// Then, find the maximal rx/tx so we know how to scale, and return it.
match (filtered_rx, filtered_tx) {
(None, None) => (
time_start,
calculate_missing_max(network_scale_type, network_use_binary_prefix),
),
(None, Some(filtered_tx)) => {
match filtered_tx
.iter()
.max_by(|(_, data_a), (_, data_b)| get_ordering(data_a, data_b, false))
{
Some((best_time, max_val)) => {
if *max_val == 0.0 {
(
time_start,
calculate_missing_max(
network_scale_type,
network_use_binary_prefix,
),
)
} else {
(*best_time, *max_val)
}
}
None => (
time_start,
calculate_missing_max(network_scale_type, network_use_binary_prefix),
),
}
}
(Some(filtered_rx), None) => {
match filtered_rx
.iter()
.max_by(|(_, data_a), (_, data_b)| get_ordering(data_a, data_b, false))
{
Some((best_time, max_val)) => {
if *max_val == 0.0 {
(
time_start,
calculate_missing_max(
network_scale_type,
network_use_binary_prefix,
),
)
} else {
(*best_time, *max_val)
}
}
None => (
time_start,
calculate_missing_max(network_scale_type, network_use_binary_prefix),
),
}
}
(Some(filtered_rx), Some(filtered_tx)) => {
match filtered_rx
.iter()
.chain(filtered_tx)
.max_by(|(_, data_a), (_, data_b)| get_ordering(data_a, data_b, false))
{
Some((best_time, max_val)) => {
if *max_val == 0.0 {
(
*best_time,
calculate_missing_max(
network_scale_type,
network_use_binary_prefix,
),
)
} else {
(*best_time, *max_val)
}
}
None => (
time_start,
calculate_missing_max(network_scale_type, network_use_binary_prefix),
),
}
}
}
}
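The short-circuiting that `get_max_entry` relies on can be seen in isolation. Below is a minimal sketch (with a hypothetical `visible_window` helper, not bottom's actual code) of how `position`/`rposition` bound the on-screen slice of a time-sorted series:

```rust
/// Returns the sub-slice of `points` whose times fall in [time_start, 0.0],
/// relying on `points` being sorted by time (oldest first, "now" at 0.0).
fn visible_window(points: &[(f64, f64)], time_start: f64) -> Option<&[(f64, f64)]> {
    let start = points.iter().position(|(time, _)| *time >= time_start)?;
    let end = points.iter().rposition(|(time, _)| *time <= 0.0)?;
    if start <= end {
        Some(&points[start..=end])
    } else {
        None // Nothing falls inside [time_start, 0.0].
    }
}

fn main() {
    // Times are negative milliseconds relative to "now" (0.0), oldest first.
    let rx = [(-90_000.0, 1.0), (-60_000.0, 2.0), (-30_000.0, 8.0), (0.0, 4.0)];
    // A 60s display window keeps only the last three points.
    let window = visible_window(&rx, -60_000.0).unwrap();
    assert_eq!(window.len(), 3);
    // The max within the window is what drives the y-axis scale.
    let max = window.iter().map(|(_, d)| *d).fold(f64::MIN, f64::max);
    assert_eq!(max, 8.0);
}
```

Because the slices are sorted, both scans stop at their first hit from each end rather than filtering the whole series.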
/// Returns the required max range and y-axis labels.
fn adjust_network_data_point(
rx: &[Point], tx: &[Point], time_start: f64, time_end: f64,
max_entry: f64, network_scale_type: &AxisScaling, network_unit_type: &DataUnit,
network_use_binary_prefix: bool,
) -> (f64, Vec<String>) {
// First, filter and find the maximal rx or tx so we know how to scale
let mut max_val_bytes = 0.0;
let filtered_rx = rx
.iter()
.cloned()
.filter(|(time, _data)| *time >= time_start && *time <= time_end);
// So, we're going with an approach like this for linear data:
// - Main goal is to maximize the amount of information displayed given a specific height.
// We don't want to drown out some data if the ranges are too far apart, though! Nor do we want to filter
// out too much data...
// - Change the y-axis unit (kilo/kibi, mega/mebi...) dynamically based on max load.
//
// The idea is we take the top value, build our scale such that each "point" is a scaled version of that.
// So for example, let's say I use 390 Mb/s. If I drew 4 segments, it would be 97.5, 195, 292.5, 390, and
// probably something like 438.75?
//
// So, how do we do this in tui-rs? Well, if we are using intervals that tie in perfectly to the max
// value we want... then it's actually not that hard. Since tui-rs accepts a vector as labels and will
// properly space them all out... we just work with that and space it out properly.
//
// Dynamic chart idea based off of FreeNAS's chart design.
//
// ===
//
// For log data, we just use the old method of log intervals (kilo/mega/giga/etc.). Keep it nice and simple.
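The linear scheme described above can be sketched as follows (a hypothetical standalone helper for illustration; the real code also derives the unit prefix from the bumped max):

```rust
/// Builds a linear y-axis: labels at 0, 0.5x, 1x, and 1.5x the max observed
/// value, with the axis capped at 1.5x so the peak isn't flush with the top.
fn linear_labels(max_entry: f64, unit: &str) -> (f64, Vec<String>) {
    let bumped_max = max_entry * 1.5; // Axis cap.
    let labels = vec![
        format!("0{}", unit),
        format!("{:.1}", max_entry * 0.5),
        format!("{:.1}", max_entry),
        format!("{:.1}", max_entry * 1.5),
    ];
    (bumped_max, labels)
}

fn main() {
    // E.g. a 390 Mb/s peak yields labels at 0, 195, 390, and 585.
    let (cap, labels) = linear_labels(390.0, "Mb");
    assert_eq!(cap, 585.0);
    assert_eq!(labels, vec!["0Mb", "195.0", "390.0", "585.0"]);
}
```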
let filtered_tx = tx
.iter()
.cloned()
.filter(|(time, _data)| *time >= time_start && *time <= time_end);
// Now just check the largest unit we correspond to... then proceed to build some entries from there!
for (_time, data) in filtered_rx.clone().chain(filtered_tx.clone()) {
if data > max_val_bytes {
max_val_bytes = data;
let unit_char = match network_unit_type {
DataUnit::Byte => "B",
DataUnit::Bit => "b",
};
match network_scale_type {
AxisScaling::Linear => {
let (k_limit, m_limit, g_limit, t_limit) = if network_use_binary_prefix {
(
KIBI_LIMIT_F64,
MEBI_LIMIT_F64,
GIBI_LIMIT_F64,
TEBI_LIMIT_F64,
)
} else {
(
KILO_LIMIT_F64,
MEGA_LIMIT_F64,
GIGA_LIMIT_F64,
TERA_LIMIT_F64,
)
};
let bumped_max_entry = max_entry * 1.5; // We use the bumped up version to calculate our unit type.
let (max_value_scaled, unit_prefix, unit_type): (f64, &str, &str) =
if bumped_max_entry < k_limit {
(max_entry, "", unit_char)
} else if bumped_max_entry < m_limit {
(
max_entry / k_limit,
if network_use_binary_prefix { "Ki" } else { "K" },
unit_char,
)
} else if bumped_max_entry < g_limit {
(
max_entry / m_limit,
if network_use_binary_prefix { "Mi" } else { "M" },
unit_char,
)
} else if bumped_max_entry < t_limit {
(
max_entry / g_limit,
if network_use_binary_prefix { "Gi" } else { "G" },
unit_char,
)
} else {
(
max_entry / t_limit,
if network_use_binary_prefix { "Ti" } else { "T" },
unit_char,
)
};
// Finally, build an acceptable range starting from there, using the given height!
// Note we try to put more weight on the bottom section vs. the top, since the top has less data.
let base_unit = max_value_scaled;
let labels: Vec<String> = vec![
format!("0{}{}", unit_prefix, unit_type),
format!("{:.1}", base_unit * 0.5),
format!("{:.1}", base_unit),
format!("{:.1}", base_unit * 1.5),
]
.into_iter()
.map(|s| format!("{:>5}", s)) // Right-align to width 5, as the longest legend value is generally 5 characters (if they somehow hit over 5 terabits per second)
.collect();
(bumped_max_entry, labels)
}
}
AxisScaling::Log => {
let (m_limit, g_limit, t_limit) = if network_use_binary_prefix {
(LOG_MEBI_LIMIT, LOG_GIBI_LIMIT, LOG_TEBI_LIMIT)
} else {
(LOG_MEGA_LIMIT, LOG_GIGA_LIMIT, LOG_TERA_LIMIT)
};
// FIXME [NETWORKING]: Granularity. Just scale up the values.
// FIXME [NETWORKING]: Ability to set fixed scale in config.
// Currently we do 32 -> 33... which skips some gigabit values
let true_max_val: f64;
let mut labels = vec![];
if max_val_bytes < LOG_KIBI_LIMIT {
true_max_val = LOG_KIBI_LIMIT;
labels = vec!["0B".to_string(), "1KiB".to_string()];
} else if max_val_bytes < LOG_MEBI_LIMIT {
true_max_val = LOG_MEBI_LIMIT;
labels = vec!["0B".to_string(), "1KiB".to_string(), "1MiB".to_string()];
} else if max_val_bytes < LOG_GIBI_LIMIT {
true_max_val = LOG_GIBI_LIMIT;
labels = vec![
"0B".to_string(),
"1KiB".to_string(),
"1MiB".to_string(),
"1GiB".to_string(),
];
} else if max_val_bytes < LOG_TEBI_LIMIT {
true_max_val = max_val_bytes.ceil() + 1.0;
let cap_u32 = true_max_val as u32;
for i in 0..=cap_u32 {
match i {
0 => labels.push("0B".to_string()),
LOG_KIBI_LIMIT_U32 => labels.push("1KiB".to_string()),
LOG_MEBI_LIMIT_U32 => labels.push("1MiB".to_string()),
LOG_GIBI_LIMIT_U32 => labels.push("1GiB".to_string()),
_ if i == cap_u32 => {
labels.push(format!("{}GiB", 2_u64.pow(cap_u32 - LOG_GIBI_LIMIT_U32)))
}
_ if i == (LOG_GIBI_LIMIT_U32 + cap_u32) / 2 => labels.push(format!(
"{}GiB",
2_u64.pow(cap_u32 - ((LOG_GIBI_LIMIT_U32 + cap_u32) / 2))
)), // ~Halfway point
_ => labels.push(String::default()),
fn get_zero(network_use_binary_prefix: bool, unit_char: &str) -> String {
format!(
"{}0{}",
if network_use_binary_prefix { "  " } else { " " },
unit_char
)
}
}
} else {
true_max_val = max_val_bytes.ceil() + 1.0;
let cap_u32 = true_max_val as u32;
for i in 0..=cap_u32 {
match i {
0 => labels.push("0B".to_string()),
LOG_KIBI_LIMIT_U32 => labels.push("1KiB".to_string()),
LOG_MEBI_LIMIT_U32 => labels.push("1MiB".to_string()),
LOG_GIBI_LIMIT_U32 => labels.push("1GiB".to_string()),
LOG_TEBI_LIMIT_U32 => labels.push("1TiB".to_string()),
_ if i == cap_u32 => {
labels.push(format!("{}GiB", 2_u64.pow(cap_u32 - LOG_TEBI_LIMIT_U32)))
}
_ if i == (LOG_TEBI_LIMIT_U32 + cap_u32) / 2 => labels.push(format!(
"{}TiB",
2_u64.pow(cap_u32 - ((LOG_TEBI_LIMIT_U32 + cap_u32) / 2))
)), // ~Halfway point
_ => labels.push(String::default()),
fn get_k(network_use_binary_prefix: bool, unit_char: &str) -> String {
format!(
"1{}{}",
if network_use_binary_prefix { "Ki" } else { "K" },
unit_char
)
}
fn get_m(network_use_binary_prefix: bool, unit_char: &str) -> String {
format!(
"1{}{}",
if network_use_binary_prefix { "Mi" } else { "M" },
unit_char
)
}
fn get_g(network_use_binary_prefix: bool, unit_char: &str) -> String {
format!(
"1{}{}",
if network_use_binary_prefix { "Gi" } else { "G" },
unit_char
)
}
fn get_t(network_use_binary_prefix: bool, unit_char: &str) -> String {
format!(
"1{}{}",
if network_use_binary_prefix { "Ti" } else { "T" },
unit_char
)
}
fn get_p(network_use_binary_prefix: bool, unit_char: &str) -> String {
format!(
"1{}{}",
if network_use_binary_prefix { "Pi" } else { "P" },
unit_char
)
}
if max_entry < m_limit {
(
m_limit,
vec![
get_zero(network_use_binary_prefix, unit_char),
get_k(network_use_binary_prefix, unit_char),
get_m(network_use_binary_prefix, unit_char),
],
)
} else if max_entry < g_limit {
(
g_limit,
vec![
get_zero(network_use_binary_prefix, unit_char),
get_k(network_use_binary_prefix, unit_char),
get_m(network_use_binary_prefix, unit_char),
get_g(network_use_binary_prefix, unit_char),
],
)
} else if max_entry < t_limit {
(
t_limit,
vec![
get_zero(network_use_binary_prefix, unit_char),
get_k(network_use_binary_prefix, unit_char),
get_m(network_use_binary_prefix, unit_char),
get_g(network_use_binary_prefix, unit_char),
get_t(network_use_binary_prefix, unit_char),
],
)
} else {
// I really doubt anyone's transferring beyond petabyte speeds...
(
if network_use_binary_prefix {
LOG_PEBI_LIMIT
} else {
LOG_PETA_LIMIT
},
vec![
get_zero(network_use_binary_prefix, unit_char),
get_k(network_use_binary_prefix, unit_char),
get_m(network_use_binary_prefix, unit_char),
get_g(network_use_binary_prefix, unit_char),
get_t(network_use_binary_prefix, unit_char),
get_p(network_use_binary_prefix, unit_char),
],
)
}
}
}
(true_max_val, labels)
}
if let Some(network_widget_state) = app_state.net_state.widget_states.get_mut(&widget_id) {
let network_data_rx: &[(f64, f64)] = &app_state.canvas_data.network_data_rx;
let network_data_tx: &[(f64, f64)] = &app_state.canvas_data.network_data_tx;
let network_data_rx: &mut [(f64, f64)] = &mut app_state.canvas_data.network_data_rx;
let network_data_tx: &mut [(f64, f64)] = &mut app_state.canvas_data.network_data_tx;
let time_start = -(network_widget_state.current_display_time as f64);
let (max_range, labels) = adjust_network_data_point(
network_data_rx,
network_data_tx,
-(network_widget_state.current_display_time as f64),
0.0,
);
let display_time_labels = vec![
Span::styled(
format!("{}s", network_widget_state.current_display_time / 1000),
@ -190,29 +435,138 @@ impl NetworkGraphWidget for Painter {
|| (app_state.app_config_fields.autohide_time
&& network_widget_state.autohide_timer.is_none())
{
Axis::default().bounds([-(network_widget_state.current_display_time as f64), 0.0])
Axis::default().bounds([time_start, 0.0])
} else if let Some(time) = network_widget_state.autohide_timer {
if std::time::Instant::now().duration_since(time).as_millis()
< AUTOHIDE_TIMEOUT_MILLISECONDS as u128
{
Axis::default()
.bounds([-(network_widget_state.current_display_time as f64), 0.0])
.bounds([time_start, 0.0])
.style(self.colours.graph_style)
.labels(display_time_labels)
} else {
network_widget_state.autohide_timer = None;
Axis::default()
.bounds([-(network_widget_state.current_display_time as f64), 0.0])
Axis::default().bounds([time_start, 0.0])
}
} else if draw_loc.height < TIME_LABEL_HEIGHT_LIMIT {
Axis::default().bounds([-(network_widget_state.current_display_time as f64), 0.0])
Axis::default().bounds([time_start, 0.0])
} else {
Axis::default()
.bounds([-(network_widget_state.current_display_time as f64), 0.0])
.bounds([time_start, 0.0])
.style(self.colours.graph_style)
.labels(display_time_labels)
};
// Interpolate a point for rx and tx between the last value outside of the left bounds and the first value
// inside it.
// Because basically all our code assumes the data is sorted by time, we can't just append it,
// and insertion in the middle seems costly. So instead, we swap *out* the value that is outside with our
// interpolated point, draw and do whatever calculations, then swap back in the old value!
//
// Note there is some repeated work here! As a potential optimization, we could re-use some of this work
// in/from get_max_entry...
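For reference, `interpolate_points` (imported from `drawing_utils`) presumably performs standard two-point linear interpolation evaluated at the left bound; a self-contained sketch:

```rust
/// Linearly interpolates between two (time, data) samples at the given `time`,
/// using point-slope form: y = y1 + m * (x - x1).
fn interpolate_points(older: &(f64, f64), newer: &(f64, f64), time: f64) -> f64 {
    let slope = (newer.1 - older.1) / (newer.0 - older.0);
    older.1 + (time - older.0) * slope
}

fn main() {
    let outside = (-70_000.0, 2.0); // Last point left of the display window.
    let inside = (-50_000.0, 6.0); // First point inside it.
    // Halfway in time between the two samples gives the halfway value.
    let value = interpolate_points(&outside, &inside, -60_000.0);
    assert!((value - 4.0).abs() < 1e-9);
}
```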
let interpolated_rx_point = if let Some(rx_end_pos) = network_data_rx
.iter()
.position(|(time, _data)| *time >= time_start)
{
if rx_end_pos > 1 {
let rx_start_pos = rx_end_pos - 1;
let outside_rx_point = network_data_rx.get(rx_start_pos);
let inside_rx_point = network_data_rx.get(rx_end_pos);
if let (Some(outside_rx_point), Some(inside_rx_point)) =
(outside_rx_point, inside_rx_point)
{
let old = *outside_rx_point;
let new_point = (
time_start,
interpolate_points(outside_rx_point, inside_rx_point, time_start),
);
// debug!(
// "Interpolated between {:?} and {:?}, got rx for time {:?}: {:?}",
// outside_rx_point, inside_rx_point, time_start, new_point
// );
if let Some(to_replace) = network_data_rx.get_mut(rx_start_pos) {
*to_replace = new_point;
Some((rx_start_pos, old))
} else {
None // Failed to get mutable reference.
}
} else {
None // Point somehow doesn't exist in our network_data_rx
}
} else {
None // Point is already "leftmost", no need to interpolate.
}
} else {
None // There is no point.
};
let interpolated_tx_point = if let Some(tx_end_pos) = network_data_tx
.iter()
.position(|(time, _data)| *time >= time_start)
{
if tx_end_pos > 1 {
let tx_start_pos = tx_end_pos - 1;
let outside_tx_point = network_data_tx.get(tx_start_pos);
let inside_tx_point = network_data_tx.get(tx_end_pos);
if let (Some(outside_tx_point), Some(inside_tx_point)) =
(outside_tx_point, inside_tx_point)
{
let old = *outside_tx_point;
let new_point = (
time_start,
interpolate_points(outside_tx_point, inside_tx_point, time_start),
);
if let Some(to_replace) = network_data_tx.get_mut(tx_start_pos) {
*to_replace = new_point;
Some((tx_start_pos, old))
} else {
None // Failed to get mutable reference.
}
} else {
None // Point somehow doesn't exist in our network_data_tx
}
} else {
None // Point is already "leftmost", no need to interpolate.
}
} else {
None // There is no point.
};
// TODO: Cache network results: Only update if:
// - Force update (includes time interval change)
// - Old max time is off screen
// - A new time interval is better and does not fit (check from end of vector to last checked; we only want to update if it is TOO big!)
// Find the maximal rx/tx so we know how to scale, and return it.
let (_best_time, max_entry) = get_max_entry(
network_data_rx,
network_data_tx,
time_start,
&app_state.app_config_fields.network_scale_type,
app_state.app_config_fields.network_use_binary_prefix,
);
let (max_range, labels) = adjust_network_data_point(
max_entry,
&app_state.app_config_fields.network_scale_type,
&app_state.app_config_fields.network_unit_type,
app_state.app_config_fields.network_use_binary_prefix,
);
// Cache results.
// network_widget_state.draw_max_range_cache = max_range;
// network_widget_state.draw_time_start_cache = best_time;
// network_widget_state.draw_labels_cache = labels;
let y_axis_labels = labels
.iter()
.map(|label| Span::styled(label, self.colours.graph_style))
@ -250,12 +604,12 @@ impl NetworkGraphWidget for Painter {
let legend_constraints = if hide_legend {
(Constraint::Ratio(0, 1), Constraint::Ratio(0, 1))
} else {
(Constraint::Ratio(3, 4), Constraint::Ratio(3, 4))
(Constraint::Ratio(1, 1), Constraint::Ratio(3, 4))
};
// TODO: Add support for clicking on legend to only show that value on chart.
let dataset = if app_state.app_config_fields.use_old_network_legend && !hide_legend {
let mut ret_val = vec![];
ret_val.push(
vec![
Dataset::default()
.name(format!("RX: {:7}", app_state.canvas_data.rx_display))
.marker(if app_state.app_config_fields.use_dot {
@ -266,9 +620,6 @@ impl NetworkGraphWidget for Painter {
.style(self.colours.rx_style)
.data(&network_data_rx)
.graph_type(tui::widgets::GraphType::Line),
);
ret_val.push(
Dataset::default()
.name(format!("TX: {:7}", app_state.canvas_data.tx_display))
.marker(if app_state.app_config_fields.use_dot {
@ -279,30 +630,21 @@ impl NetworkGraphWidget for Painter {
.style(self.colours.tx_style)
.data(&network_data_tx)
.graph_type(tui::widgets::GraphType::Line),
);
ret_val.push(
Dataset::default()
.name(format!(
"Total RX: {:7}",
app_state.canvas_data.total_rx_display
))
.style(self.colours.total_rx_style),
);
ret_val.push(
Dataset::default()
.name(format!(
"Total TX: {:7}",
app_state.canvas_data.total_tx_display
))
.style(self.colours.total_tx_style),
);
ret_val
]
} else {
let mut ret_val = vec![];
ret_val.push(
vec![
Dataset::default()
.name(&app_state.canvas_data.rx_display)
.marker(if app_state.app_config_fields.use_dot {
@ -313,9 +655,6 @@ impl NetworkGraphWidget for Painter {
.style(self.colours.rx_style)
.data(&network_data_rx)
.graph_type(tui::widgets::GraphType::Line),
);
ret_val.push(
Dataset::default()
.name(&app_state.canvas_data.tx_display)
.marker(if app_state.app_config_fields.use_dot {
@ -326,9 +665,7 @@ impl NetworkGraphWidget for Painter {
.style(self.colours.tx_style)
.data(&network_data_tx)
.graph_type(tui::widgets::GraphType::Line),
);
ret_val
]
};
f.render_widget(
@ -348,10 +685,22 @@ impl NetworkGraphWidget for Painter {
.hidden_legend_constraints(legend_constraints),
draw_loc,
);
// Now if you're done, reset any interpolated points!
if let Some((index, old_value)) = interpolated_rx_point {
if let Some(to_replace) = network_data_rx.get_mut(index) {
*to_replace = old_value;
}
}
if let Some((index, old_value)) = interpolated_tx_point {
if let Some(to_replace) = network_data_tx.get_mut(index) {
*to_replace = old_value;
}
}
}
}
// TODO: [DEPRECATED] Get rid of this in, like, 0.6...?
fn draw_network_labels<B: Backend>(
&self, f: &mut Frame<'_, B>, app_state: &mut App, draw_loc: Rect, widget_id: u64,
) {

View File

@ -18,6 +18,7 @@ pub fn get_matches() -> clap::ArgMatches<'static> {
build_app().get_matches()
}
// TODO: Refactor this a bit, it's quite messy atm
pub fn build_app() -> App<'static, 'static> {
// Temps
let kelvin = Arg::with_name("kelvin")
@ -383,6 +384,30 @@ The minimum is 1s (1000), and defaults to 15s (15000).\n\n\n",
Defaults to showing the process widget in tree mode.\n\n",
);
let network_use_bytes = Arg::with_name("network_use_bytes")
.long("network_use_bytes")
.help("Displays the network widget using bytes.")
.long_help(
"\
Displays the network widget using bytes. Defaults to bits.\n\n",
);
let network_use_log = Arg::with_name("network_use_log")
.long("network_use_log")
.help("Displays the network widget with a log scale.")
.long_help(
"\
Displays the network widget with a log scale. Defaults to a non-log scale.\n\n",
);
let network_use_binary_prefix = Arg::with_name("network_use_binary_prefix")
.long("network_use_binary_prefix")
.help("Displays the network widget with binary prefixes.")
.long_help(
"\
Displays the network widget with binary prefixes (e.g. kibibits, mebibits) rather than decimal prefixes (e.g. kilobits, megabits). Defaults to decimal prefixes.\n\n\n",
);
App::new(crate_name!())
.setting(AppSettings::UnifiedHelpMessage)
.version(crate_version!())
@ -422,6 +447,9 @@ Defaults to showing the process widget in tree mode.\n\n",
.arg(regex)
.arg(time_delta)
.arg(tree)
.arg(network_use_bytes)
.arg(network_use_log)
.arg(network_use_binary_prefix)
.arg(current_usage)
.arg(use_old_network_legend)
.arg(whole_word)

View File

@ -474,6 +474,12 @@ pub const OLD_CONFIG_TEXT: &str = r##"# This is a default config file for bottom
#show_table_scroll_position = false
# Show processes as their commands by default in the process widget.
#process_command = false
# Displays the network widget with binary prefixes.
#network_use_binary_prefix = false
# Displays the network widget using bytes.
#network_use_bytes = false
# Displays the network widget with a log scale.
#network_use_log = false
# These are all the components that support custom theming. Note that colour support
# will depend on terminal support.

View File

@ -1,6 +1,6 @@
//! This mainly concerns converting collected data into things that the canvas
//! can actually handle.
use crate::Pid;
use crate::{app::AxisScaling, units::data_units::DataUnit, Pid};
use crate::{
app::{data_farmer, data_harvester, App, ProcWidgetState},
utils::{self, gen_util::*},
@ -118,13 +118,13 @@ pub fn convert_disk_row(current_data: &data_farmer::DataCollection) -> Vec<Vec<S
.zip(&current_data.io_labels)
.for_each(|(disk, (io_read, io_write))| {
let free_space_fmt = if let Some(free_space) = disk.free_space {
let converted_free_space = get_simple_byte_values(free_space, false);
let converted_free_space = get_decimal_bytes(free_space);
format!("{:.*}{}", 0, converted_free_space.0, converted_free_space.1)
} else {
"N/A".to_string()
};
let total_space_fmt = if let Some(total_space) = disk.total_space {
let converted_total_space = get_simple_byte_values(total_space, false);
let converted_total_space = get_decimal_bytes(total_space);
format!(
"{:.*}{}",
0, converted_total_space.0, converted_total_space.1
@ -298,18 +298,22 @@ pub fn convert_swap_data_points(
pub fn convert_mem_labels(
current_data: &data_farmer::DataCollection,
) -> (Option<(String, String)>, Option<(String, String)>) {
fn return_unit_and_numerator_for_kb(mem_total_kb: u64) -> (&'static str, f64) {
if mem_total_kb < 1024 {
// Stay with KB
/// Returns the unit type and denominator for a given total amount of memory in kibibytes.
///
/// Yes, this function is a bit of a lie. But people seem to generally expect, say, GB when what they
/// actually want calculated is GiB.
fn return_unit_and_denominator_for_mem_kib(mem_total_kib: u64) -> (&'static str, f64) {
if mem_total_kib < 1024 {
// Stay with KiB
("KB", 1.0)
} else if mem_total_kb < 1_048_576 {
// Use MB
} else if mem_total_kib < 1_048_576 {
// Use MiB
("MB", 1024.0)
} else if mem_total_kb < 1_073_741_824 {
// Use GB
} else if mem_total_kib < 1_073_741_824 {
// Use GiB
("GB", 1_048_576.0)
} else {
// Use TB
// Use TiB
("TB", 1_073_741_824.0)
}
}
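As a worked example of the cutoffs above (the function body restated so it runs standalone):

```rust
/// Picks a display unit and the denominator to divide KiB values by.
fn return_unit_and_denominator_for_mem_kib(mem_total_kib: u64) -> (&'static str, f64) {
    if mem_total_kib < 1024 {
        ("KB", 1.0)
    } else if mem_total_kib < 1_048_576 {
        ("MB", 1024.0)
    } else if mem_total_kib < 1_073_741_824 {
        ("GB", 1_048_576.0)
    } else {
        ("TB", 1_073_741_824.0)
    }
}

fn main() {
    // 16 GiB of RAM reported in KiB falls in the GiB bucket...
    let (unit, denom) = return_unit_and_denominator_for_mem_kib(16 * 1_048_576);
    assert_eq!(unit, "GB");
    // ...and dividing by the denominator formats it as "16.0GB".
    assert_eq!(format!("{:.1}{}", (16 * 1_048_576) as f64 / denom, unit), "16.0GB");
}
```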
@ -328,15 +332,15 @@ pub fn convert_mem_labels(
}
),
{
let (unit, numerator) = return_unit_and_numerator_for_kb(
let (unit, denominator) = return_unit_and_denominator_for_mem_kib(
current_data.memory_harvest.mem_total_in_kib,
);
format!(
" {:.1}{}/{:.1}{}",
current_data.memory_harvest.mem_used_in_kib as f64 / numerator,
current_data.memory_harvest.mem_used_in_kib as f64 / denominator,
unit,
(current_data.memory_harvest.mem_total_in_kib as f64 / numerator),
(current_data.memory_harvest.mem_total_in_kib as f64 / denominator),
unit
)
},
@ -357,7 +361,7 @@ pub fn convert_mem_labels(
}
),
{
let (unit, numerator) = return_unit_and_numerator_for_kb(
let (unit, numerator) = return_unit_and_denominator_for_mem_kib(
current_data.swap_harvest.mem_total_in_kib,
);
@ -377,7 +381,8 @@ pub fn convert_mem_labels(
}
pub fn get_rx_tx_data_points(
current_data: &data_farmer::DataCollection, is_frozen: bool,
current_data: &data_farmer::DataCollection, is_frozen: bool, network_scale_type: &AxisScaling,
network_unit_type: &DataUnit, network_use_binary_prefix: bool,
) -> (Vec<Point>, Vec<Point>) {
let mut rx: Vec<Point> = Vec::new();
let mut tx: Vec<Point> = Vec::new();
@ -394,8 +399,34 @@ pub fn get_rx_tx_data_points(
for (time, data) in &current_data.timed_data_vec {
let time_from_start: f64 = (current_time.duration_since(*time).as_millis() as f64).floor();
rx.push((-time_from_start, data.rx_data));
tx.push((-time_from_start, data.tx_data));
let (rx_data, tx_data) = match network_scale_type {
AxisScaling::Log => {
if network_use_binary_prefix {
match network_unit_type {
DataUnit::Byte => {
// Dividing by 8 is equal to subtracting log2(8) = 3 in base 2!
((data.rx_data).log2() - 3.0, (data.tx_data).log2() - 3.0)
}
DataUnit::Bit => ((data.rx_data).log2(), (data.tx_data).log2()),
}
} else {
match network_unit_type {
DataUnit::Byte => {
((data.rx_data / 8.0).log10(), (data.tx_data / 8.0).log10())
}
DataUnit::Bit => ((data.rx_data).log10(), (data.tx_data).log10()),
}
}
}
AxisScaling::Linear => match network_unit_type {
DataUnit::Byte => (data.rx_data / 8.0, data.tx_data / 8.0),
DataUnit::Bit => (data.rx_data, data.tx_data),
},
};
rx.push((-time_from_start, rx_data));
tx.push((-time_from_start, tx_data));
if *time == current_time {
break;
}
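The base-2 shortcut in the conversion above can be checked directly: on a log2 axis, dividing by 8 is a constant shift of log2(8) = 3, while on a log10 axis there is no integer shortcut, so the value is divided before taking the log:

```rust
fn main() {
    let bits: f64 = 1_000_000.0; // 1 Mb of traffic in one tick.
    // log2(x / 8) == log2(x) - 3, since 8 = 2^3.
    assert!(((bits / 8.0).log2() - (bits.log2() - 3.0)).abs() < 1e-9);
    // In base 10, log10(8) is not an integer, so no such shift exists.
    assert!(((bits / 8.0).log10() - (bits.log10() - 8f64.log10())).abs() < 1e-9);
}
```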
@ -406,19 +437,62 @@ pub fn get_rx_tx_data_points(
pub fn convert_network_data_points(
current_data: &data_farmer::DataCollection, is_frozen: bool, need_four_points: bool,
network_scale_type: &AxisScaling, network_unit_type: &DataUnit,
network_use_binary_prefix: bool,
) -> ConvertedNetworkData {
let (rx, tx) = get_rx_tx_data_points(current_data, is_frozen);
let (rx, tx) = get_rx_tx_data_points(
current_data,
is_frozen,
network_scale_type,
network_unit_type,
network_use_binary_prefix,
);
let total_rx_converted_result: (f64, String);
let rx_converted_result: (f64, String);
let total_tx_converted_result: (f64, String);
let tx_converted_result: (f64, String);
let unit = match network_unit_type {
DataUnit::Byte => "B",
DataUnit::Bit => "b",
};
rx_converted_result = get_exact_byte_values(current_data.network_harvest.rx, false);
total_rx_converted_result = get_exact_byte_values(current_data.network_harvest.total_rx, false);
let (rx_data, tx_data, total_rx_data, total_tx_data) = match network_unit_type {
DataUnit::Byte => (
current_data.network_harvest.rx / 8,
current_data.network_harvest.tx / 8,
current_data.network_harvest.total_rx / 8,
current_data.network_harvest.total_tx / 8,
),
DataUnit::Bit => (
current_data.network_harvest.rx,
current_data.network_harvest.tx,
current_data.network_harvest.total_rx / 8, // We always make this bytes...
current_data.network_harvest.total_tx / 8,
),
};
tx_converted_result = get_exact_byte_values(current_data.network_harvest.tx, false);
total_tx_converted_result = get_exact_byte_values(current_data.network_harvest.total_tx, false);
let (rx_converted_result, total_rx_converted_result): ((f64, String), (f64, String)) =
if network_use_binary_prefix {
(
get_binary_prefix(rx_data, unit), // The prefix function takes a configurable unit; the totals are always in bytes.
get_binary_bytes(total_rx_data),
)
} else {
(
get_decimal_prefix(rx_data, unit),
get_decimal_bytes(total_rx_data),
)
};
let (tx_converted_result, total_tx_converted_result): ((f64, String), (f64, String)) =
if network_use_binary_prefix {
(
get_binary_prefix(tx_data, unit),
get_binary_bytes(total_tx_data),
)
} else {
(
get_decimal_prefix(tx_data, unit),
get_decimal_bytes(total_tx_data),
)
};
if need_four_points {
let rx_display = format!("{:.*}{}", 1, rx_converted_result.0, rx_converted_result.1);
@ -441,20 +515,42 @@ pub fn convert_network_data_points(
}
} else {
let rx_display = format!(
"RX: {:<9} All: {:<9}",
format!("{:.1}{:3}", rx_converted_result.0, rx_converted_result.1),
format!(
"{:.1}{:3}",
total_rx_converted_result.0, total_rx_converted_result.1
)
"RX: {:<8} All: {}",
if network_use_binary_prefix {
format!("{:.1}{:3}", rx_converted_result.0, rx_converted_result.1)
} else {
format!("{:.1}{:2}", rx_converted_result.0, rx_converted_result.1)
},
if network_use_binary_prefix {
format!(
"{:.1}{:3}",
total_rx_converted_result.0, total_rx_converted_result.1
)
} else {
format!(
"{:.1}{:2}",
total_rx_converted_result.0, total_rx_converted_result.1
)
}
);
let tx_display = format!(
"TX: {:<9} All: {:<9}",
format!("{:.1}{:3}", tx_converted_result.0, tx_converted_result.1),
format!(
"{:.1}{:3}",
total_tx_converted_result.0, total_tx_converted_result.1
)
"TX: {:<8} All: {}",
if network_use_binary_prefix {
format!("{:.1}{:3}", tx_converted_result.0, tx_converted_result.1)
} else {
format!("{:.1}{:2}", tx_converted_result.0, tx_converted_result.1)
},
if network_use_binary_prefix {
format!(
"{:.1}{:3}",
total_tx_converted_result.0, total_tx_converted_result.1
)
} else {
format!(
"{:.1}{:2}",
total_tx_converted_result.0, total_tx_converted_result.1
)
}
);
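The differing pad widths (`{:3}` for binary, `{:2}` for decimal) exist because binary prefixes are one character longer ("KiB" vs "KB"). A minimal stand-alone check of the formatting, outside the diff:

```rust
fn main() {
    // Binary prefixes ("KiB") are three characters wide, decimal ("KB") two,
    // hence the different pad widths used in the two branches above.
    assert_eq!(format!("{:.1}{:3}", 1.5, "KiB"), "1.5KiB");
    assert_eq!(format!("{:.1}{:2}", 1.5, "KB"), "1.5KB");
    // A bare "B" is padded out (strings left-align by default) so columns line up.
    assert_eq!(format!("{:.1}{:3}", 1.5, "B"), "1.5B  ");
    println!("ok");
}
```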
ConvertedNetworkData {
@ -492,10 +588,10 @@ pub fn convert_process_data(
existing_converted_process_data.keys().copied().collect();
for process in &current_data.process_harvest {
let converted_rps = get_exact_byte_values(process.read_bytes_per_sec, false);
let converted_wps = get_exact_byte_values(process.write_bytes_per_sec, false);
let converted_total_read = get_exact_byte_values(process.total_read_bytes, false);
let converted_total_write = get_exact_byte_values(process.total_write_bytes, false);
let converted_rps = get_binary_bytes(process.read_bytes_per_sec);
let converted_wps = get_binary_bytes(process.write_bytes_per_sec);
let converted_total_read = get_binary_bytes(process.total_read_bytes);
let converted_total_write = get_binary_bytes(process.total_write_bytes);
let read_per_sec = format!("{:.*}{}/s", 0, converted_rps.0, converted_rps.1);
let write_per_sec = format!("{:.*}{}/s", 0, converted_wps.0, converted_wps.1);
@ -530,7 +626,7 @@ pub fn convert_process_data(
process_entry.cpu_percent_usage = process.cpu_usage_percent;
process_entry.mem_percent_usage = process.mem_usage_percent;
process_entry.mem_usage_bytes = process.mem_usage_bytes;
process_entry.mem_usage_str = get_exact_byte_values(process.mem_usage_bytes, false);
process_entry.mem_usage_str = get_binary_bytes(process.mem_usage_bytes);
process_entry.group_pids = vec![process.pid];
process_entry.read_per_sec = read_per_sec;
process_entry.write_per_sec = write_per_sec;
@ -556,7 +652,7 @@ pub fn convert_process_data(
cpu_percent_usage: process.cpu_usage_percent,
mem_percent_usage: process.mem_usage_percent,
mem_usage_bytes: process.mem_usage_bytes,
mem_usage_str: get_exact_byte_values(process.mem_usage_bytes, false),
mem_usage_str: get_binary_bytes(process.mem_usage_bytes),
group_pids: vec![process.pid],
read_per_sec,
write_per_sec,
@ -586,7 +682,7 @@ pub fn convert_process_data(
cpu_percent_usage: process.cpu_usage_percent,
mem_percent_usage: process.mem_usage_percent,
mem_usage_bytes: process.mem_usage_bytes,
mem_usage_str: get_exact_byte_values(process.mem_usage_bytes, false),
mem_usage_str: get_binary_bytes(process.mem_usage_bytes),
group_pids: vec![process.pid],
read_per_sec,
write_per_sec,
@ -1085,10 +1181,10 @@ pub fn group_process_data(
.iter()
.map(|(identifier, process_details)| {
let p = process_details.clone();
let converted_rps = get_exact_byte_values(p.read_per_sec as u64, false);
let converted_wps = get_exact_byte_values(p.write_per_sec as u64, false);
let converted_total_read = get_exact_byte_values(p.total_read as u64, false);
let converted_total_write = get_exact_byte_values(p.total_write as u64, false);
let converted_rps = get_binary_bytes(p.read_per_sec as u64);
let converted_wps = get_binary_bytes(p.write_per_sec as u64);
let converted_total_read = get_binary_bytes(p.total_read as u64);
let converted_total_write = get_binary_bytes(p.total_write as u64);
let read_per_sec = format!("{:.*}{}/s", 0, converted_rps.0, converted_rps.1);
let write_per_sec = format!("{:.*}{}/s", 0, converted_wps.0, converted_wps.1);
@ -1107,7 +1203,7 @@ pub fn group_process_data(
cpu_percent_usage: p.cpu_percent_usage,
mem_percent_usage: p.mem_percent_usage,
mem_usage_bytes: p.mem_usage_bytes,
mem_usage_str: get_exact_byte_values(p.mem_usage_bytes, false),
mem_usage_str: get_binary_bytes(p.mem_usage_bytes),
group_pids: p.group_pids,
read_per_sec,
write_per_sec,

View File

@ -47,6 +47,7 @@ pub mod clap;
pub mod constants;
pub mod data_conversion;
pub mod options;
pub mod units;
#[cfg(target_family = "windows")]
pub type Pid = usize;
@ -326,7 +327,13 @@ pub fn handle_force_redraws(app: &mut App) {
}
if app.net_state.force_update.is_some() {
let (rx, tx) = get_rx_tx_data_points(&app.data_collection, app.is_frozen);
let (rx, tx) = get_rx_tx_data_points(
&app.data_collection,
app.is_frozen,
&app.app_config_fields.network_scale_type,
&app.app_config_fields.network_unit_type,
app.app_config_fields.network_use_binary_prefix,
);
app.canvas_data.network_data_rx = rx;
app.canvas_data.network_data_tx = tx;
app.net_state.force_update = None;
@ -352,18 +359,21 @@ pub fn update_all_process_lists(app: &mut App) {
}
fn update_final_process_list(app: &mut App, widget_id: u64) {
let process_states = match app.proc_state.widget_states.get(&widget_id) {
Some(process_state) => Some((
process_state
.process_search_state
.search_state
.is_invalid_or_blank_search(),
process_state.is_using_command,
process_state.is_grouped,
process_state.is_tree_mode,
)),
None => None,
};
let process_states = app
.proc_state
.widget_states
.get(&widget_id)
.map(|process_state| {
(
process_state
.process_search_state
.search_state
.is_invalid_or_blank_search(),
process_state.is_using_command,
process_state.is_grouped,
process_state.is_tree_mode,
)
});
if let Some((is_invalid_or_blank, is_using_command, is_grouped, is_tree)) = process_states {
if !app.is_frozen {

View File

@ -12,6 +12,7 @@ use crate::{
app::{layout_manager::*, *},
canvas::ColourScheme,
constants::*,
units::data_units::DataUnit,
utils::error::{self, BottomError},
};
@ -157,6 +158,15 @@ pub struct ConfigFlags {
#[builder(default, setter(strip_option))]
pub advanced_kill: Option<bool>,
#[builder(default, setter(strip_option))]
pub network_use_bytes: Option<bool>,
#[builder(default, setter(strip_option))]
pub network_use_log: Option<bool>,
#[builder(default, setter(strip_option))]
pub network_use_binary_prefix: Option<bool>,
}
#[derive(Clone, Default, Debug, Deserialize, Serialize)]
@ -265,6 +275,10 @@ pub fn build_app(
let is_default_command = get_is_default_process_command(matches, config);
let is_advanced_kill = get_is_using_advanced_kill(matches, config);
let network_unit_type = get_network_unit_type(matches, config);
let network_scale_type = get_network_scale_type(matches, config);
let network_use_binary_prefix = get_network_use_binary_prefix(matches, config);
for row in &widget_layout.rows {
for col in &row.children {
for col_row in &col.children {
@ -319,7 +333,12 @@ pub fn build_app(
Net => {
net_state_map.insert(
widget.widget_id,
NetWidgetState::init(default_time_value, autohide_timer),
NetWidgetState::init(
default_time_value,
autohide_timer,
),
);
}
Proc => {
@ -404,6 +423,9 @@ pub fn build_app(
no_write: false,
show_table_scroll_position: get_show_table_scroll_position(matches, config),
is_advanced_kill,
network_scale_type,
network_unit_type,
network_use_binary_prefix,
};
let used_widgets = UsedWidgets {
@ -818,11 +840,9 @@ fn get_default_widget_and_count(
let widget_count = if let Some(widget_count) = matches.value_of("default_widget_count") {
Some(widget_count.parse::<u128>()?)
} else if let Some(flags) = &config.flags {
if let Some(widget_count) = flags.default_widget_count {
Some(widget_count as u128)
} else {
None
}
flags
.default_widget_count
.map(|widget_count| widget_count as u128)
} else {
None
};
@ -1031,3 +1051,42 @@ fn get_is_using_advanced_kill(matches: &clap::ArgMatches<'static>, config: &Conf
}
false
}
fn get_network_unit_type(matches: &clap::ArgMatches<'static>, config: &Config) -> DataUnit {
if matches.is_present("network_use_bytes") {
return DataUnit::Byte;
} else if let Some(flags) = &config.flags {
if let Some(network_use_bytes) = flags.network_use_bytes {
if network_use_bytes {
return DataUnit::Byte;
}
}
}
DataUnit::Bit
}
fn get_network_scale_type(matches: &clap::ArgMatches<'static>, config: &Config) -> AxisScaling {
if matches.is_present("network_use_log") {
return AxisScaling::Log;
} else if let Some(flags) = &config.flags {
if let Some(network_use_log) = flags.network_use_log {
if network_use_log {
return AxisScaling::Log;
}
}
}
AxisScaling::Linear
}
fn get_network_use_binary_prefix(matches: &clap::ArgMatches<'static>, config: &Config) -> bool {
if matches.is_present("network_use_binary_prefix") {
return true;
} else if let Some(flags) = &config.flags {
if let Some(network_use_binary_prefix) = flags.network_use_binary_prefix {
return network_use_binary_prefix;
}
}
false
}
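All three getters above share one precedence rule: a command-line flag wins, then the config file's flag, then a built-in default. A condensed sketch of that pattern (the `Flags`/`Config` types here are minimal stand-ins, not the real clap/serde ones):

```rust
// Minimal stand-ins for the real clap/serde types, to show the precedence.
struct Flags {
    network_use_log: Option<bool>,
}

struct Config {
    flags: Option<Flags>,
}

fn network_scale_is_log(cli_flag_present: bool, config: &Config) -> bool {
    if cli_flag_present {
        return true; // 1. The CLI flag always wins.
    }
    if let Some(flags) = &config.flags {
        if let Some(use_log) = flags.network_use_log {
            return use_log; // 2. Then the config file's flag.
        }
    }
    false // 3. Default: linear scaling.
}

fn main() {
    let cfg = Config { flags: Some(Flags { network_use_log: Some(true) }) };
    assert!(network_scale_is_log(false, &cfg));
    assert!(network_scale_is_log(true, &Config { flags: None }));
    assert!(!network_scale_is_log(false, &Config { flags: None }));
    println!("ok");
}
```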

src/units.rs Normal file
View File

@ -0,0 +1 @@
pub mod data_units;

src/units/data_units.rs Normal file
View File

@ -0,0 +1,5 @@
#[derive(Debug, Clone)]
pub enum DataUnit {
Byte,
Bit,
}

View File

@ -10,7 +10,7 @@ pub type Result<T> = result::Result<T, BottomError>;
pub enum BottomError {
/// An error when there is an IO exception.
#[error("IO exception, {0}")]
InvalidIO(String),
InvalidIo(String),
/// An error when the heim library encounters a problem.
#[error("Error caused by Heim, {0}")]
InvalidHeim(String),
@ -39,7 +39,7 @@ pub enum BottomError {
impl From<std::io::Error> for BottomError {
fn from(err: std::io::Error) -> Self {
BottomError::InvalidIO(err.to_string())
BottomError::InvalidIo(err.to_string())
}
}

View File

@ -9,15 +9,26 @@ pub const MEBI_LIMIT: u64 = 1_048_576;
pub const GIBI_LIMIT: u64 = 1_073_741_824;
pub const TEBI_LIMIT: u64 = 1_099_511_627_776;
pub const KILO_LIMIT_F64: f64 = 1000.0;
pub const MEGA_LIMIT_F64: f64 = 1_000_000.0;
pub const GIGA_LIMIT_F64: f64 = 1_000_000_000.0;
pub const TERA_LIMIT_F64: f64 = 1_000_000_000_000.0;
pub const KIBI_LIMIT_F64: f64 = 1024.0;
pub const MEBI_LIMIT_F64: f64 = 1_048_576.0;
pub const GIBI_LIMIT_F64: f64 = 1_073_741_824.0;
pub const TEBI_LIMIT_F64: f64 = 1_099_511_627_776.0;
pub const LOG_KILO_LIMIT: f64 = 3.0;
pub const LOG_MEGA_LIMIT: f64 = 6.0;
pub const LOG_GIGA_LIMIT: f64 = 9.0;
pub const LOG_TERA_LIMIT: f64 = 12.0;
pub const LOG_PETA_LIMIT: f64 = 15.0;
pub const LOG_KIBI_LIMIT: f64 = 10.0;
pub const LOG_MEBI_LIMIT: f64 = 20.0;
pub const LOG_GIBI_LIMIT: f64 = 30.0;
pub const LOG_TEBI_LIMIT: f64 = 40.0;
pub const LOG_PEBI_LIMIT: f64 = 50.0;
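The `LOG_*` constants are the exponents of the corresponding thresholds (base 10 for decimal, base 2 for binary), so log-scaled axis code can compare against them directly. A quick numeric check:

```rust
fn main() {
    // Decimal thresholds are powers of ten, binary thresholds powers of two,
    // so the LOG_* constants are just the exponents of the limits above.
    assert!(((1_000_f64).log10() - 3.0).abs() < 1e-12); // LOG_KILO_LIMIT
    assert!(((1_048_576_f64).log2() - 20.0).abs() < 1e-12); // LOG_MEBI_LIMIT
    assert!(((1_099_511_627_776_f64).log2() - 40.0).abs() < 1e-12); // LOG_TEBI_LIMIT
    println!("ok");
}
```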
pub const LOG_KILO_LIMIT_U32: u32 = 3;
pub const LOG_MEGA_LIMIT_U32: u32 = 6;
@ -29,40 +40,12 @@ pub const LOG_MEBI_LIMIT_U32: u32 = 20;
pub const LOG_GIBI_LIMIT_U32: u32 = 30;
pub const LOG_TEBI_LIMIT_U32: u32 = 40;
pub fn float_min(a: f32, b: f32) -> f32 {
match a.partial_cmp(&b) {
Some(x) => match x {
Ordering::Greater => b,
Ordering::Less => a,
Ordering::Equal => a,
},
None => a,
}
}
pub fn float_max(a: f32, b: f32) -> f32 {
match a.partial_cmp(&b) {
Some(x) => match x {
Ordering::Greater => a,
Ordering::Less => b,
Ordering::Equal => a,
},
None => a,
}
}
/// Returns a tuple containing the value and the unit. In units of 1024.
/// This only supports up to a tebibyte.
pub fn get_exact_byte_values(bytes: u64, spacing: bool) -> (f64, String) {
Returns a tuple containing the value and the unit in bytes. In units of 1024.
This only supports up to a tebibyte.
pub fn get_binary_bytes(bytes: u64) -> (f64, String) {
match bytes {
b if b < KIBI_LIMIT => (
bytes as f64,
if spacing {
" B".to_string()
} else {
"B".to_string()
},
),
b if b < KIBI_LIMIT => (bytes as f64, "B".to_string()),
b if b < MEBI_LIMIT => (bytes as f64 / 1024.0, "KiB".to_string()),
b if b < GIBI_LIMIT => (bytes as f64 / 1_048_576.0, "MiB".to_string()),
b if b < TEBI_LIMIT => (bytes as f64 / 1_073_741_824.0, "GiB".to_string()),
@ -70,18 +53,12 @@ pub fn get_exact_byte_values(bytes: u64, spacing: bool) -> (f64, String) {
}
}
/// Returns a tuple containing the value and the unit. In units of 1000.
/// This only supports up to a terabyte. Note the "byte" unit will have a space appended to match the others.
pub fn get_simple_byte_values(bytes: u64, spacing: bool) -> (f64, String) {
Returns a tuple containing the value and the unit in bytes. In units of 1000.
This only supports up to a terabyte.
pub fn get_decimal_bytes(bytes: u64) -> (f64, String) {
match bytes {
b if b < KILO_LIMIT => (
bytes as f64,
if spacing {
" B".to_string()
} else {
"B".to_string()
},
),
b if b < KILO_LIMIT => (bytes as f64, "B".to_string()),
b if b < MEGA_LIMIT => (bytes as f64 / 1000.0, "KB".to_string()),
b if b < GIGA_LIMIT => (bytes as f64 / 1_000_000.0, "MB".to_string()),
b if b < TERA_LIMIT => (bytes as f64 / 1_000_000_000.0, "GB".to_string()),
@ -89,21 +66,49 @@ pub fn get_simple_byte_values(bytes: u64, spacing: bool) -> (f64, String) {
}
}
Returns a tuple containing the value and the unit. In units of 1024.
This only supports up to the tebi prefix.
pub fn get_binary_prefix(quantity: u64, unit: &str) -> (f64, String) {
match quantity {
b if b < KIBI_LIMIT => (quantity as f64, unit.to_string()),
b if b < MEBI_LIMIT => (quantity as f64 / 1024.0, format!("Ki{}", unit)),
b if b < GIBI_LIMIT => (quantity as f64 / 1_048_576.0, format!("Mi{}", unit)),
b if b < TEBI_LIMIT => (quantity as f64 / 1_073_741_824.0, format!("Gi{}", unit)),
_ => (quantity as f64 / 1_099_511_627_776.0, format!("Ti{}", unit)),
}
}
Returns a tuple containing the value and the unit. In units of 1000.
This only supports up to the tera prefix.
pub fn get_decimal_prefix(quantity: u64, unit: &str) -> (f64, String) {
match quantity {
b if b < KILO_LIMIT => (quantity as f64, unit.to_string()),
b if b < MEGA_LIMIT => (quantity as f64 / 1000.0, format!("K{}", unit)),
b if b < GIGA_LIMIT => (quantity as f64 / 1_000_000.0, format!("M{}", unit)),
b if b < TERA_LIMIT => (quantity as f64 / 1_000_000_000.0, format!("G{}", unit)),
_ => (quantity as f64 / 1_000_000_000_000.0, format!("T{}", unit)),
}
}
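Taken on its own, the decimal variant behaves like this (same thresholds and divisors as above; the spot checks are illustrative):

```rust
// Self-contained copy of the decimal-prefix selection above, with spot checks.
const KILO_LIMIT: u64 = 1_000;
const MEGA_LIMIT: u64 = 1_000_000;
const GIGA_LIMIT: u64 = 1_000_000_000;
const TERA_LIMIT: u64 = 1_000_000_000_000;

fn get_decimal_prefix(quantity: u64, unit: &str) -> (f64, String) {
    match quantity {
        b if b < KILO_LIMIT => (quantity as f64, unit.to_string()),
        b if b < MEGA_LIMIT => (quantity as f64 / 1_000.0, format!("K{}", unit)),
        b if b < GIGA_LIMIT => (quantity as f64 / 1_000_000.0, format!("M{}", unit)),
        b if b < TERA_LIMIT => (quantity as f64 / 1_000_000_000.0, format!("G{}", unit)),
        _ => (quantity as f64 / 1_000_000_000_000.0, format!("T{}", unit)),
    }
}

fn main() {
    assert_eq!(get_decimal_prefix(500, "b"), (500.0, "b".to_string()));
    let (value, unit) = get_decimal_prefix(1_500_000, "b");
    assert!((value - 1.5).abs() < 1e-9);
    assert_eq!(unit, "Mb");
    println!("ok");
}
```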
Returns an `Ordering` between two partially-ordered values, optionally reversed.

Note that https://github.com/reem/rust-ordered-float exists; it may be worth moving to it one day.
pub fn get_ordering<T: std::cmp::PartialOrd>(
a_val: T, b_val: T, descending_order: bool,
a_val: T, b_val: T, reverse_order: bool,
) -> std::cmp::Ordering {
match a_val.partial_cmp(&b_val) {
Some(x) => match x {
Ordering::Greater => {
if descending_order {
if reverse_order {
std::cmp::Ordering::Less
} else {
std::cmp::Ordering::Greater
}
}
Ordering::Less => {
if descending_order {
if reverse_order {
std::cmp::Ordering::Greater
} else {
std::cmp::Ordering::Less