// Overview

NGINX Log Monitor is a real-time terminal dashboard built with Python's Textual TUI framework, providing instant visibility into your web server's traffic patterns.

Think of it as htop for NGINX - a lightweight, zero-configuration tool that parses your access and error logs and presents them in a beautiful, interactive dashboard.

// Architecture

NGINX Logs --> File Watcher --> Log Parser --> Data Aggregator
                                                     |
                                               Stats Engine
                                                     |
                                               Textual TUI
                                                     |
                                    Panel Layout + Rich Renderer

The tool uses a reactive pipeline - a file watcher detects new log lines, the parser extracts structured data, and the aggregator updates rolling statistics that the TUI panels consume via Textual's message system.

// Dashboard Panels

  • Overview with total requests, unique visitors, and bandwidth
  • Top IPs hitting your server
  • Most requested pages and endpoints
  • Status code distribution with color coding
  • Hourly traffic patterns visualization
  • HTTP methods breakdown (GET, POST, etc.)
  • Error log monitoring with severity levels
  • User agent analysis
  • Bandwidth usage statistics

// Key Features

  • Real-time updates with configurable refresh intervals
  • Zero configuration - works with standard NGINX log format
  • Self-contained virtual environment management
  • Color-coded output for status codes and error levels
  • Full keyboard navigation and controls
  • Low resource usage - perfect for production servers
  • Docker deployment support
  • Systemd service integration
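For the systemd integration, a minimal unit file sketch looks like the following; the binary path and service name are assumptions, not the project's shipped unit:

```ini
[Unit]
Description=NGINX Log Monitor
After=network.target

[Service]
ExecStart=/usr/local/bin/nginx-log-monitor
Restart=on-failure

[Install]
WantedBy=multi-user.target
```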

// Challenges & Solutions

Challenge: Parsing high-volume log files in real-time without falling behind or consuming excessive memory.

Solution: Implemented a streaming parser that reads logs line-by-line using file seeking, only processing new entries. Statistics are maintained as rolling aggregates rather than storing raw data, keeping memory usage constant regardless of log file size.

Challenge: Keeping the terminal UI responsive while processing thousands of log entries per second.

Solution: Decoupled parsing from rendering using Textual's worker threads. The parser runs in a background worker and posts batched updates to the UI at configurable intervals, preventing the display from becoming a bottleneck.
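The decoupling pattern can be sketched with plain threads and a queue; Textual's worker API plays the same role in the real tool, but this sketch avoids Textual-specific calls. Batch size, sentinel, and function names are assumptions for illustration:

```python
import queue
import threading
import time

def parser_worker(lines, updates: queue.Queue, batch_size: int = 100):
    """Background worker: parse lines and post *batched* updates, not one per line."""
    batch = []
    for line in lines:
        batch.append(line.strip())  # stand-in for real parsing
        if len(batch) >= batch_size:
            updates.put(batch)
            batch = []
    if batch:
        updates.put(batch)
    updates.put(None)  # sentinel: no more data

def render_loop(updates: queue.Queue, interval: float = 0.05):
    """UI side: drain everything that arrived, then redraw once per interval."""
    done = False
    while not done:
        drained = []
        while True:
            try:
                item = updates.get_nowait()
            except queue.Empty:
                break
            if item is None:
                done = True
                break
            drained.extend(item)
        if drained:
            print(f"redraw with {len(drained)} new entries")
        time.sleep(interval)

q = queue.Queue()
t = threading.Thread(target=parser_worker,
                     args=(iter(f"line {i}" for i in range(250)), q))
t.start()
render_loop(q)
t.join()
```

However fast entries arrive, the UI redraws at most once per interval, so rendering cost is bounded by the refresh rate rather than by log volume.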

// What I'd Improve

  • Add support for custom log format parsing via configuration
  • Implement alerting rules for anomalous traffic patterns
  • Add historical data export to CSV/JSON for external analysis
  • Build a web-based dashboard alternative for remote monitoring

// Installation

Quick Start: Clone the repo, run the script, and you're monitoring. The tool automatically creates its own virtual environment and installs dependencies.

System-wide: Install to /usr/local/bin for easy access from anywhere on your server.

Docker: Run in a container with volume mounts to your log files for isolated, portable monitoring.
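A typical invocation might look like the following; the image name is hypothetical, and the log directory is mounted read-only since the monitor never writes to it:

```shell
docker run --rm -it \
  -v /var/log/nginx:/var/log/nginx:ro \
  nginx-log-monitor
```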