feat: v0.2 Docker container integration #1
Code Review
This pull request updates devtop to version 0.2.0, introducing a new Docker container monitoring panel. The implementation includes a Docker collector using the bollard library to gather real-time CPU and memory statistics, which are then displayed in a dynamic TUI panel that auto-hides when Docker is unavailable. My feedback focuses on improving the robustness of the Docker integration, specifically by making the collector resilient to daemon connection failures and ensuring the UI correctly reflects Docker's availability state. Additionally, there are opportunities to optimize UI performance by reducing string allocations and improving code maintainability by replacing magic numbers with constants.
```rust
CollectorMessage::Docker(containers) => {
    self.docker_available = true;
    self.containers = containers;
}
```
The docker_available flag is set to true upon receiving the first Docker message and is never reset. If the Docker daemon stops, the collector will likely start sending empty lists of containers, but docker_available will remain true. This results in an empty Docker panel being displayed instead of being hidden.
To correctly reflect Docker's availability, the CollectorMessage::Docker could be changed to carry a Result, allowing you to set docker_available to false upon receiving an error.
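A minimal sketch of that shape, with `ContainerInfo` and a plain `String` error standing in for the real collector types (both are assumptions here, not the crate's actual definitions):

```rust
// Sketch: let the Docker message carry a Result so the UI can
// distinguish "no containers running" from "Docker unavailable".
#[derive(Debug, Clone)]
struct ContainerInfo {
    name: String,
}

enum CollectorMessage {
    Docker(Result<Vec<ContainerInfo>, String>),
}

struct App {
    docker_available: bool,
    containers: Vec<ContainerInfo>,
}

impl App {
    fn handle(&mut self, msg: CollectorMessage) {
        match msg {
            CollectorMessage::Docker(Ok(containers)) => {
                self.docker_available = true;
                self.containers = containers;
            }
            CollectorMessage::Docker(Err(_)) => {
                // Daemon unreachable: hide the panel and drop stale rows.
                self.docker_available = false;
                self.containers.clear();
            }
        }
    }
}
```

With this, an `Err` from the collector actively resets `docker_available`, so the panel hides again instead of lingering with an empty table.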
```rust
pub async fn start(tx: mpsc::Sender<CollectorMessage>) {
    let docker = match Docker::connect_with_local_defaults() {
        Ok(d) => d,
        Err(_) => return,
    };
    if docker.ping().await.is_err() {
        return;
    }

    let mut prev_stats: HashMap<String, (u64, u64)> = HashMap::new();

    loop {
        let containers = collect_containers(&docker, &mut prev_stats).await;
        if tx.send(CollectorMessage::Docker(containers)).await.is_err() {
            break;
        }
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
}
```
The Docker collector's start function is not resilient to connection failures.
- Startup failure: if the Docker daemon is not running when devtop starts, the collector exits permanently and will not detect if Docker becomes available later.
- Runtime failure: if the connection to the Docker daemon is lost while devtop is running, collect_containers suppresses the error and returns an empty list. The start function continues to loop, sending empty lists to the UI, but it never attempts to re-establish the connection.
To align with the 'dynamic' behavior described in the PR, the collector should be refactored to handle these failures gracefully. It should periodically attempt to connect/reconnect to the Docker daemon and propagate connection errors so the UI can be updated accordingly (i.e., hide the Docker panel).
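One way to structure that retry behavior is as a small connect/poll state machine. The sketch below is deliberately dependency-free: `try_connect` stands in for `Docker::connect_with_local_defaults` plus `ping`, and `poll` stands in for `collect_containers` (both closures are hypothetical stand-ins, not real APIs):

```rust
// States of the Docker collector loop.
enum DockerState {
    Disconnected,
    Connected,
}

// One tick of the collector: reconnect when disconnected, poll when
// connected, and fall back to Disconnected on a poll error so the
// error can be propagated to the UI (which then hides the panel).
fn step(
    state: DockerState,
    try_connect: impl Fn() -> bool,
    poll: impl Fn() -> Result<usize, ()>,
) -> (DockerState, Option<Result<usize, ()>>) {
    match state {
        // Not connected: retry the daemon; nothing to send this tick.
        DockerState::Disconnected => {
            if try_connect() {
                (DockerState::Connected, None)
            } else {
                (DockerState::Disconnected, None)
            }
        }
        // Connected: poll stats; on error, surface it and go back to
        // Disconnected so the next tick attempts a reconnect.
        DockerState::Connected => match poll() {
            Ok(n) => (DockerState::Connected, Some(Ok(n))),
            Err(e) => (DockerState::Disconnected, Some(Err(e))),
        },
    }
}
```

In the real async loop this `step` would run once per sleep interval, with the `Err` arm sending `CollectorMessage::Docker(Err(..))` to the UI.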
src/collector/docker.rs
```rust
let list = match docker.list_containers(Some(options)).await {
    Ok(v) => v,
    Err(_) => return Vec::new(),
};
```
Errors from docker.list_containers are currently being suppressed, and an empty vector is returned instead. This makes it impossible for the caller to distinguish between a scenario with no running containers and an error scenario (e.g., Docker daemon stopped). This can lead to the UI showing an empty but visible Docker panel when it should be hidden because Docker is unavailable.
Consider propagating the error from this function, for example by changing the return type to Result<Vec<ContainerInfo>, bollard::errors::Error>. The caller can then handle the error appropriately, for instance by attempting to reconnect to the Docker daemon.
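A hedged sketch of that propagation, using a hypothetical `DockerApi` trait in place of the bollard client so the `?` flow is visible without the real crate (the trait, `DockerError`, and the simplified `list_containers` signature are all assumptions for illustration):

```rust
// Stand-in error type for bollard::errors::Error.
#[derive(Debug, PartialEq)]
struct DockerError(String);

struct ContainerInfo {
    name: String,
}

// Hypothetical abstraction over the Docker client.
trait DockerApi {
    fn list_containers(&self) -> Result<Vec<String>, DockerError>;
}

fn collect_containers(docker: &impl DockerApi) -> Result<Vec<ContainerInfo>, DockerError> {
    // `?` forwards the error to the caller instead of returning an
    // empty Vec, so the caller can mark Docker as unavailable and
    // schedule a reconnect attempt.
    let list = docker.list_containers()?;
    Ok(list.into_iter().map(|name| ContainerInfo { name }).collect())
}
```

The key point is that an empty `Ok(vec![])` and an `Err(..)` now reach the caller as distinct cases.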
src/ui/widgets/docker.rs
```rust
Cell::from(c.name.clone()),
Cell::from(c.status.clone()).style(status_style),
```
The name and status strings are cloned on every render cycle, which is wasteful since rendering happens frequently. You can avoid these clones by passing string slices instead: Cell::from accepts anything that converts into Text, including &str, so pass c.name.as_str() and c.status.as_str() (note that &c.name is a &String and will not satisfy the generic bound directly).

```diff
- Cell::from(c.name.clone()),
- Cell::from(c.status.clone()).style(status_style),
+ Cell::from(c.name.as_str()),
+ Cell::from(c.status.as_str()).style(status_style),
```
```rust
fn format_bytes(bytes: u64) -> String {
    if bytes >= 1024 * 1024 * 1024 {
        format!("{:.1}GB", bytes as f64 / (1024.0 * 1024.0 * 1024.0))
    } else if bytes >= 1024 * 1024 {
        format!("{:.0}MB", bytes as f64 / (1024.0 * 1024.0))
    } else if bytes >= 1024 {
        format!("{:.0}KB", bytes as f64 / 1024.0)
    } else {
        format!("{}B", bytes)
    }
}
```
The format_bytes function uses magic numbers for byte conversions. To improve readability and maintainability, it's better to define constants for kilobyte, megabyte, and gigabyte thresholds.
This also makes the logic for unit conversion clearer.
```rust
const KB: u64 = 1024;
const MB: u64 = 1024 * 1024;
const GB: u64 = 1024 * 1024 * 1024;

fn format_bytes(bytes: u64) -> String {
    if bytes >= GB {
        format!("{:.1}GB", bytes as f64 / GB as f64)
    } else if bytes >= MB {
        format!("{:.0}MB", bytes as f64 / MB as f64)
    } else if bytes >= KB {
        format!("{:.0}KB", bytes as f64 / KB as f64)
    } else {
        format!("{}B", bytes)
    }
}
```
Summary
Test Plan
- `cargo test` passes (13 tests)

🤖 Generated with Claude Code