Add telemetry data service and dashboard revamp

Introduce a telemetry data microservice under misc/data: add a Dockerfile, entrypoint, migration tools, README, LICENSE, and .gitignore. Raise the default Docker CACHE_TTL_SECONDS from 60 to 300 seconds. Implement extensive dashboard and analytics updates in dashboard.go: add total_all_time and sample_size fields, return total item counts from fetchRecords (with page/limit handling and a maxRecords guard), raise top-N limits, add a minimum-installs threshold for the failed-apps view, and make numerous UI/style/layout improvements to the embedded DashboardHTML. Apply a minor formatting tweak to misc/api.func.
CanbiZ (MickLesk) 2026-02-12 13:10:06 +01:00
parent e4a8ee845a
commit 0231b72d78
16 changed files with 3198 additions and 830 deletions
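The page/limit handling and maxRecords guard mentioned in the commit message could look roughly like the following sketch (illustrative only; the constants and the function name are assumptions, not the actual code from dashboard.go):

```go
// clampPagination normalizes page/limit and enforces a hard cap on how deep
// pagination may reach. All constant values are illustrative assumptions.
func clampPagination(page, limit int) (int, int, bool) {
	const (
		defaultLimit = 50
		maxLimit     = 500
		maxRecords   = 100000 // guard against unbounded scans
	)
	if page < 1 {
		page = 1
	}
	if limit < 1 {
		limit = defaultLimit
	} else if limit > maxLimit {
		limit = maxLimit
	}
	// ok reports whether the requested window stays within the guard.
	ok := page*limit <= maxRecords
	return page, limit, ok
}
```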


@@ -87,17 +87,17 @@ detect_repo_source() {
# Map detected owner/repo to canonical repo_source value
case "$owner_repo" in
community-scripts/ProxmoxVE) REPO_SOURCE="ProxmoxVE" ;;
community-scripts/ProxmoxVED) REPO_SOURCE="ProxmoxVED" ;;
"")
# No URL detected — use hardcoded fallback
# CI sed transforms this on promotion: ProxmoxVED → ProxmoxVE
REPO_SOURCE="ProxmoxVED"
;;
*)
# Fork or unknown repo
REPO_SOURCE="external"
;;
esac
export REPO_SOURCE

misc/data/.gitignore vendored Normal file

@@ -0,0 +1,34 @@
# If you prefer the allow list template instead of the deny list, see community template:
# https://github.com/github/gitignore/blob/main/community/Golang/Go.AllowList.gitignore
#
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
telemetry-service
migration/migrate
# Test binary, built with `go test -c`
*.test
# Code coverage profiles and other test artifacts
*.out
coverage.*
*.coverprofile
profile.cov
# Dependency directories (remove the comment below to include it)
# vendor/
# Go workspace file
go.work
go.work.sum
# env file
.env
# Editor/IDE
# .idea/
# .vscode/


@@ -24,7 +24,7 @@ ENV ENABLE_REQUEST_LOGGING="false"
# Cache config (optional)
ENV ENABLE_CACHE="true"
ENV CACHE_TTL_SECONDS="60"
ENV CACHE_TTL_SECONDS="300"
ENV ENABLE_REDIS="false"
# ENV REDIS_URL="redis://localhost:6379"

misc/data/LICENSE Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2026 Community Scripts
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

misc/data/README.md Normal file

@@ -0,0 +1,81 @@
# Telemetry Service
A standalone Go microservice that collects anonymous telemetry data from [ProxmoxVE](https://github.com/community-scripts/ProxmoxVE) and [ProxmoxVED](https://github.com/community-scripts/ProxmoxVED) script installations.
## Overview
This service acts as a telemetry ingestion layer between the bash installation scripts and a PocketBase backend. When users run scripts from the ProxmoxVE/ProxmoxVED repositories, optional anonymous usage data is sent here for aggregation and analysis.
**What gets collected:**
- Script name and installation status (success/failed)
- Container/VM type and resource allocation (CPU, RAM, disk)
- OS type and version
- Proxmox VE version
- Anonymous session ID (randomly generated UUID)
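For illustration, a single record might map to a Go struct like this (a sketch based on the telemetry columns visible in this commit's import script; the service's actual `TelemetryIn`/`TelemetryOut` definitions may differ):

```go
// Illustrative shape of one telemetry record. Field names follow the columns
// used by the import script in this commit; `id`, `created` and `updated`
// are added by PocketBase itself.
type TelemetryRecord struct {
	RandomID   string `json:"random_id"`  // anonymous, randomly generated UUID
	Type       string `json:"type"`       // e.g. "lxc"
	NSApp      string `json:"nsapp"`      // script name
	Status     string `json:"status"`     // "success" or "failed"
	CTType     int    `json:"ct_type"`
	CoreCount  int    `json:"core_count"`
	RAMSize    int    `json:"ram_size"`
	DiskSize   int    `json:"disk_size"`
	OSType     string `json:"os_type"`
	OSVersion  string `json:"os_version"`
	DisableIP6 string `json:"disableip6"`
	Method     string `json:"method"`
	PVEVersion string `json:"pve_version"`
	Error      string `json:"error,omitempty"`
	RepoSource string `json:"repo_source"` // "ProxmoxVE", "ProxmoxVED", or "external"
}
```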
**What is NOT collected:**
- IP addresses (not logged, not stored)
- Hostnames or domain names
- User credentials or personal information
- Hardware identifiers (MAC addresses, serial numbers)
- Network configuration or internal IPs
- Any data that could identify a person or system
**What this enables:**
- Understanding which scripts are most popular
- Identifying scripts with high failure rates
- Tracking resource allocation trends
- Improving script quality based on real-world data
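As a sketch of the failure-rate analysis this enables (assuming records shaped like the struct above; the minimum-installs cutoff mirrors the threshold this commit adds to the dashboard, though the exact value is an assumption):

```go
// failureRates returns the failure ratio per script, skipping scripts with
// fewer than minInstalls records so tiny samples don't dominate the view.
func failureRates(records []TelemetryRecord, minInstalls int) map[string]float64 {
	total := make(map[string]int)
	failed := make(map[string]int)
	for _, r := range records {
		total[r.NSApp]++
		if r.Status == "failed" {
			failed[r.NSApp]++
		}
	}
	rates := make(map[string]float64)
	for app, n := range total {
		if n >= minInstalls {
			rates[app] = float64(failed[app]) / float64(n)
		}
	}
	return rates
}
```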
## Features
- **Telemetry Ingestion** - Receives and validates telemetry data from bash scripts
- **PocketBase Integration** - Stores data in PocketBase collections
- **Rate Limiting** - Configurable per-IP rate limiting to prevent abuse
- **Caching** - In-memory or Redis-backed caching support
- **Email Alerts** - SMTP-based alerts when failure rates exceed thresholds
- **Dashboard** - Built-in HTML dashboard for telemetry visualization
- **Migration Tool** - Migrate data from external sources to PocketBase
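As one example of how the per-IP rate limiting could work (a self-contained sketch, not the service's actual limiter; the package name, window, and request cap are assumptions):

```go
package telemetry

import (
	"net/http"
	"sync"
	"time"
)

// ipLimiter is a minimal fixed-window per-IP limiter sketch.
type ipLimiter struct {
	mu   sync.Mutex
	hits map[string]int
	max  int
}

func newIPLimiter(max int, window time.Duration) *ipLimiter {
	l := &ipLimiter{hits: make(map[string]int), max: max}
	go func() {
		for range time.Tick(window) { // reset all counters each window
			l.mu.Lock()
			l.hits = make(map[string]int)
			l.mu.Unlock()
		}
	}()
	return l
}

func (l *ipLimiter) allow(ip string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.hits[ip]++
	return l.hits[ip] <= l.max
}

// middleware rejects requests over the per-IP budget with HTTP 429.
func (l *ipLimiter) middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !l.allow(r.RemoteAddr) {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```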
## Architecture
```
┌─────────────────┐ ┌───────────────────┐ ┌────────────┐
│ Bash Scripts │────▶│ Telemetry Service │────▶│ PocketBase │
│ (ProxmoxVE/VED) │ │ (this repo) │ │ Database │
└─────────────────┘ └───────────────────┘ └────────────┘
```
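A stripped-down version of the middle hop might look like this (a sketch reusing the `TelemetryRecord` shape above and assuming imports of `bytes`, `encoding/json`, `fmt`, and `net/http`; `POST /api/collections/{collection}/records` is PocketBase's standard create endpoint, but auth, rate limiting, and caching are omitted here):

```go
// ingestHandler decodes a telemetry POST, does minimal validation, and
// forwards the record to PocketBase. Error handling is deliberately thin.
func ingestHandler(pbURL, collection string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var rec TelemetryRecord
		if err := json.NewDecoder(r.Body).Decode(&rec); err != nil {
			http.Error(w, "invalid JSON", http.StatusBadRequest)
			return
		}
		if rec.NSApp == "" || rec.RandomID == "" {
			http.Error(w, "missing required fields", http.StatusBadRequest)
			return
		}
		body, _ := json.Marshal(rec)
		url := fmt.Sprintf("%s/api/collections/%s/records", pbURL, collection)
		resp, err := http.Post(url, "application/json", bytes.NewReader(body))
		if err != nil {
			http.Error(w, "upstream unavailable", http.StatusBadGateway)
			return
		}
		resp.Body.Close()
		w.WriteHeader(http.StatusNoContent)
	}
}
```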
## Project Structure
```
├── service.go # Main service, HTTP handlers, rate limiting
├── cache.go # In-memory and Redis caching
├── alerts.go # SMTP alert system
├── dashboard.go # Dashboard HTML generation
├── migration/
│ ├── migrate.go # Data migration tool
│ └── migrate.sh # Migration shell script
├── Dockerfile # Container build
├── entrypoint.sh # Container entrypoint with migration support
└── go.mod # Go module definition
```
## Related Projects
- [ProxmoxVE](https://github.com/community-scripts/ProxmoxVE) - Proxmox VE Helper Scripts
- [ProxmoxVED](https://github.com/community-scripts/ProxmoxVED) - Proxmox VE Helper Scripts (Dev)
## Privacy & Compliance
This service is designed with privacy in mind and is **GDPR/DSGVO compliant**:
- ✅ **No personal data** - Only anonymous technical metrics are collected
- ✅ **No IP logging** - Request logging is disabled by default, IPs are never stored
- ✅ **Transparent** - All collected fields are documented and the code is open source
- ✅ **No tracking** - Session IDs are randomly generated and cannot be linked to users
- ✅ **No third parties** - Data is only stored in our self-hosted PocketBase instance
## License
MIT License - see [LICENSE](LICENSE) file.

File diff suppressed because it is too large


@@ -12,43 +12,43 @@ export POCKETBASE_COLLECTION="${POCKETBASE_COLLECTION:-$PB_TARGET_COLLECTION}"
# Run migration if enabled
if [ "$RUN_MIGRATION" = "true" ]; then
echo ""
echo "🔄 Migration mode enabled"
echo " Source: $MIGRATION_SOURCE_URL"
echo " Target: $POCKETBASE_URL"
echo " Collection: $POCKETBASE_COLLECTION"
echo ""
# Wait for PocketBase to be ready
echo "⏳ Waiting for PocketBase to be ready..."
RETRIES=30
until wget -q --spider "$POCKETBASE_URL/api/health" 2>/dev/null; do
RETRIES=$((RETRIES - 1))
if [ $RETRIES -le 0 ]; then
echo "❌ PocketBase not reachable after 30 attempts"
if [ "$MIGRATION_REQUIRED" = "true" ]; then
exit 1
fi
echo "⚠️ Continuing without migration..."
break
echo ""
echo "🔄 Migration mode enabled"
echo " Source: $MIGRATION_SOURCE_URL"
echo " Target: $POCKETBASE_URL"
echo " Collection: $POCKETBASE_COLLECTION"
echo ""
# Wait for PocketBase to be ready
echo "⏳ Waiting for PocketBase to be ready..."
RETRIES=30
until wget -q --spider "$POCKETBASE_URL/api/health" 2>/dev/null; do
RETRIES=$((RETRIES - 1))
if [ $RETRIES -le 0 ]; then
echo "❌ PocketBase not reachable after 30 attempts"
if [ "$MIGRATION_REQUIRED" = "true" ]; then
exit 1
fi
echo "⚠️ Continuing without migration..."
break
fi
echo " Waiting... ($RETRIES attempts left)"
sleep 2
done
if wget -q --spider "$POCKETBASE_URL/api/health" 2>/dev/null; then
echo "✅ PocketBase is ready"
echo ""
echo "🚀 Starting migration..."
/app/migrate || {
if [ "$MIGRATION_REQUIRED" = "true" ]; then
echo "❌ Migration failed!"
exit 1
fi
echo "⚠️ Migration failed, but continuing..."
}
echo ""
fi
echo " Waiting... ($RETRIES attempts left)"
sleep 2
done
if wget -q --spider "$POCKETBASE_URL/api/health" 2>/dev/null; then
echo "✅ PocketBase is ready"
echo ""
echo "🚀 Starting migration..."
/app/migrate || {
if [ "$MIGRATION_REQUIRED" = "true" ]; then
echo "❌ Migration failed!"
exit 1
fi
echo "⚠️ Migration failed, but continuing..."
}
echo ""
fi
fi
echo "🚀 Starting telemetry service..."


@@ -93,13 +93,13 @@ func main() {
pbCollection = os.Getenv("PB_TARGET_COLLECTION")
}
if pbCollection == "" {
pbCollection = "_telemetry_data"
pbCollection = "telemetry"
}
// Auth collection
authCollection := os.Getenv("PB_AUTH_COLLECTION")
if authCollection == "" {
authCollection = "_telemetry_service"
authCollection = "telemetry_service_user"
}
// Credentials


@@ -13,7 +13,7 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Default values
POCKETBASE_URL="${1:-http://localhost:8090}"
POCKETBASE_COLLECTION="${2:-_telemetry_data}"
POCKETBASE_COLLECTION="${2:-telemetry}"
echo "============================================="
echo " ProxmoxVED Data Migration Tool"
@@ -27,10 +27,10 @@ echo ""
# Check if PocketBase is reachable
echo "🔍 Checking PocketBase connection..."
-if ! curl -sf "$POCKETBASE_URL/api/health" > /dev/null 2>&1; then
-echo "❌ Cannot reach PocketBase at $POCKETBASE_URL"
-echo " Make sure PocketBase is running and the URL is correct."
-exit 1
+if ! curl -sf "$POCKETBASE_URL/api/health" >/dev/null 2>&1; then
+echo "❌ Cannot reach PocketBase at $POCKETBASE_URL"
+echo " Make sure PocketBase is running and the URL is correct."
+exit 1
fi
echo "✅ PocketBase is reachable"
echo ""
@@ -39,8 +39,8 @@ echo ""
echo "🔍 Checking source API..."
SUMMARY=$(curl -sf "https://api.htl-braunau.at/dev/data/summary" 2>/dev/null || echo "")
if [ -z "$SUMMARY" ]; then
echo "❌ Cannot reach source API"
exit 1
echo "❌ Cannot reach source API"
exit 1
fi
TOTAL=$(echo "$SUMMARY" | grep -o '"total_entries":[0-9]*' | cut -d: -f2)
@@ -51,8 +51,8 @@ echo ""
read -p "⚠️ Do you want to start the migration? [y/N] " -n 1 -r
echo ""
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Migration cancelled."
exit 0
echo "Migration cancelled."
exit 0
fi
echo ""


@@ -0,0 +1,106 @@
#!/bin/bash
# Post-migration script to fix timestamps in PocketBase
# Run this INSIDE the PocketBase container after migration completes
#
# Usage: ./fix-timestamps.sh
set -e
DB_PATH="/app/pb_data/data.db"
echo "==========================================================="
echo " Fix Timestamps in PocketBase"
echo "==========================================================="
echo ""
# Check if sqlite3 is available
if ! command -v sqlite3 &> /dev/null; then
echo "sqlite3 not found. Installing..."
apk add sqlite 2>/dev/null || { apt-get update && apt-get install -y sqlite3; }
fi
# Check if database exists
if [ ! -f "$DB_PATH" ]; then
echo "Database not found at $DB_PATH"
echo "Trying alternative paths..."
if [ -f "/pb_data/data.db" ]; then
DB_PATH="/pb_data/data.db"
elif [ -f "/pb/pb_data/data.db" ]; then
DB_PATH="/pb/pb_data/data.db"
else
DB_PATH=$(find / -name "data.db" 2>/dev/null | head -1)
fi
if [ -z "$DB_PATH" ] || [ ! -f "$DB_PATH" ]; then
echo "Could not find PocketBase database!"
exit 1
fi
fi
echo "Database: $DB_PATH"
echo ""
# List tables
echo "Tables in database:"
sqlite3 "$DB_PATH" ".tables"
echo ""
# Find the telemetry table (usually matches collection name)
echo "Looking for telemetry/installations table..."
TABLE_NAME=$(sqlite3 "$DB_PATH" ".tables" | tr ' ' '\n' | grep -E "telemetry|installations" | head -1)
if [ -z "$TABLE_NAME" ]; then
echo "Could not auto-detect table. Available tables:"
sqlite3 "$DB_PATH" ".tables"
echo ""
read -p "Enter table name: " TABLE_NAME
fi
echo "Using table: $TABLE_NAME"
echo ""
# Check if old_created column exists
HAS_OLD_CREATED=$(sqlite3 "$DB_PATH" "PRAGMA table_info($TABLE_NAME);" | grep -c "old_created" || true)
if [ "$HAS_OLD_CREATED" -eq "0" ]; then
echo "Column 'old_created' not found in table $TABLE_NAME"
echo "Migration may not have been run with timestamp preservation."
exit 1
fi
# Show sample data before update
echo "Sample data BEFORE update:"
sqlite3 "$DB_PATH" "SELECT id, created, old_created FROM $TABLE_NAME WHERE old_created IS NOT NULL AND old_created != '' LIMIT 3;"
echo ""
# Count records to update
COUNT=$(sqlite3 "$DB_PATH" "SELECT COUNT(*) FROM $TABLE_NAME WHERE old_created IS NOT NULL AND old_created != '';")
echo "Records to update: $COUNT"
echo ""
read -p "Proceed with timestamp update? [y/N] " -n 1 -r
echo ""
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Aborted."
exit 0
fi
# Perform the update
echo "Updating timestamps..."
sqlite3 "$DB_PATH" "UPDATE $TABLE_NAME SET created = old_created, updated = old_created WHERE old_created IS NOT NULL AND old_created != '';"
# Show sample data after update
echo ""
echo "Sample data AFTER update:"
sqlite3 "$DB_PATH" "SELECT id, created, old_created FROM $TABLE_NAME LIMIT 3;"
echo ""
echo "==========================================================="
echo " Timestamp Update Complete!"
echo "==========================================================="
echo ""
echo "Next steps:"
echo "1. Verify data in PocketBase Admin UI"
echo "2. Remove the 'old_created' field from the collection schema"
echo ""


@@ -0,0 +1,77 @@
#!/bin/sh
# Direct SQLite Import - Pure Shell, FAST batch mode!
# Imports MongoDB Extended JSON directly into PocketBase SQLite
#
# Usage:
# docker cp import-direct.sh pocketbase:/tmp/
# docker cp data.json pocketbase:/tmp/
# docker exec -it pocketbase sh -c "cd /tmp && chmod +x import-direct.sh && ./import-direct.sh"
set -e
JSON_FILE="${1:-/tmp/data.json}"
TABLE="${2:-telemetry}"
REPO="${3:-Proxmox VE}"
DB="${4:-/app/pb_data/data.db}"
BATCH=5000
echo "========================================================="
echo " Direct SQLite Import (Batch Mode)"
echo "========================================================="
echo "JSON: $JSON_FILE"
echo "Table: $TABLE"
echo "Repo: $REPO"
echo "Batch: $BATCH"
echo "---------------------------------------------------------"
# Install jq if missing
command -v jq >/dev/null || apk add --no-cache jq
# Optimize SQLite for bulk
sqlite3 "$DB" "PRAGMA journal_mode=WAL; PRAGMA synchronous=OFF; PRAGMA cache_size=100000;"
SQL_FILE="/tmp/batch.sql"
echo "[INFO] Converting JSON to SQL..."
START=$(date +%s)
# Convert entire JSON to SQL file (much faster than line-by-line sqlite3 calls)
{
echo "BEGIN TRANSACTION;"
jq -r '.[] | @json' "$JSON_FILE" | while read -r r; do
CT=$(echo "$r" | jq -r 'if .ct_type|type=="object" then .ct_type["$numberLong"] else .ct_type end // 0')
DISK=$(echo "$r" | jq -r 'if .disk_size|type=="object" then .disk_size["$numberLong"] else .disk_size end // 0')
CORE=$(echo "$r" | jq -r 'if .core_count|type=="object" then .core_count["$numberLong"] else .core_count end // 0')
RAM=$(echo "$r" | jq -r 'if .ram_size|type=="object" then .ram_size["$numberLong"] else .ram_size end // 0')
OS=$(echo "$r" | jq -r '.os_type // ""' | sed "s/'/''/g")
OSVER=$(echo "$r" | jq -r '.os_version // ""' | sed "s/'/''/g")
DIS6=$(echo "$r" | jq -r '.disable_ip6 // "no"' | sed "s/'/''/g")
APP=$(echo "$r" | jq -r '.nsapp // "unknown"' | sed "s/'/''/g")
METH=$(echo "$r" | jq -r '.method // ""' | sed "s/'/''/g")
PVE=$(echo "$r" | jq -r '.pveversion // ""' | sed "s/'/''/g")
STAT=$(echo "$r" | jq -r '.status // "unknown"')
[ "$STAT" = "done" ] && STAT="success"
RID=$(echo "$r" | jq -r '.random_id // ""' | sed "s/'/''/g")
TYPE=$(echo "$r" | jq -r '.type // "lxc"' | sed "s/'/''/g")
ERR=$(echo "$r" | jq -r '.error // ""' | sed "s/'/''/g")
DATE=$(echo "$r" | jq -r 'if .created_at|type=="object" then .created_at["$date"] else .created_at end // ""')
ID=$(tr -dc 'a-z0-9' </dev/urandom | head -c 15) # read until a full 15-char id is collected
REPO_ESC=$(echo "$REPO" | sed "s/'/''/g")
echo "INSERT OR IGNORE INTO $TABLE (id,created,updated,ct_type,disk_size,core_count,ram_size,os_type,os_version,disableip6,nsapp,method,pve_version,status,random_id,type,error,repo_source) VALUES ('$ID','$DATE','$DATE',$CT,$DISK,$CORE,$RAM,'$OS','$OSVER','$DIS6','$APP','$METH','$PVE','$STAT','$RID','$TYPE','$ERR','$REPO_ESC');"
done
echo "COMMIT;"
} > "$SQL_FILE"
MID=$(date +%s)
echo "[INFO] SQL generated in $((MID - START))s"
echo "[INFO] Importing into SQLite..."
sqlite3 "$DB" < "$SQL_FILE"
END=$(date +%s)
COUNT=$(wc -l < "$SQL_FILE")
rm -f "$SQL_FILE"
echo "========================================================="
echo "Done! ~$((COUNT - 2)) records in $((END - START)) seconds"
echo "========================================================="


@@ -0,0 +1,89 @@
#!/bin/bash
# Migration script for Proxmox VE data
# Run directly on the server machine
#
# Usage: ./migrate-linux.sh
#
# Prerequisites:
# - Go installed (apt install golang-go)
# - Network access to source API and PocketBase
set -e
echo "==========================================================="
echo " Proxmox VE Data Migration to PocketBase"
echo "==========================================================="
# Configuration - EDIT THESE VALUES
export MIGRATION_SOURCE_URL="https://api.htl-braunau.at/data"
export POCKETBASE_URL="http://db.community-scripts.org"
export POCKETBASE_COLLECTION="telemetry"
export PB_AUTH_COLLECTION="_superusers"
export PB_IDENTITY="db_admin@community-scripts.org"
export PB_PASSWORD="YOUR_PASSWORD_HERE" # <-- CHANGE THIS!
export REPO_SOURCE="Proxmox VE"
export DATE_UNTIL="2026-02-10"
export BATCH_SIZE="500"
# Optional: Resume from specific page
# export START_PAGE="100"
# Optional: Only import records after this date
# export DATE_FROM="2020-01-01"
echo ""
echo "Configuration:"
echo " Source: $MIGRATION_SOURCE_URL"
echo " Target: $POCKETBASE_URL"
echo " Collection: $POCKETBASE_COLLECTION"
echo " Repo: $REPO_SOURCE"
echo " Until: $DATE_UNTIL"
echo " Batch: $BATCH_SIZE"
echo ""
# Check if Go is installed
if ! command -v go &> /dev/null; then
echo "Go is not installed. Installing..."
apt-get update && apt-get install -y golang-go
fi
# Download migrate.go if not present
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
MIGRATE_GO="$SCRIPT_DIR/migrate.go"
if [ ! -f "$MIGRATE_GO" ]; then
echo "migrate.go not found in $SCRIPT_DIR"
echo "Please copy migrate.go to this directory first."
exit 1
fi
echo "Building migration tool..."
cd "$SCRIPT_DIR"
go build -o migrate migrate.go
echo ""
echo "Starting migration..."
echo "Press Ctrl+C to stop (you can resume later with START_PAGE)"
echo ""
./migrate
echo ""
echo "==========================================================="
echo " Post-Migration Steps"
echo "==========================================================="
echo ""
echo "1. Connect to PocketBase container:"
echo " docker exec -it <pocketbase-container> sh"
echo ""
echo "2. Find the table name:"
echo " sqlite3 /app/pb_data/data.db '.tables'"
echo ""
echo "3. Update timestamps (replace <table> with actual name):"
echo " sqlite3 /app/pb_data/data.db \"UPDATE <table> SET created = old_created, updated = old_created WHERE old_created IS NOT NULL AND old_created != ''\""
echo ""
echo "4. Verify timestamps:"
echo " sqlite3 /app/pb_data/data.db \"SELECT created, old_created FROM <table> LIMIT 5\""
echo ""
echo "5. Remove old_created field in PocketBase Admin UI"
echo ""

File diff suppressed because it is too large


@@ -13,7 +13,7 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Default values
POCKETBASE_URL="${1:-http://localhost:8090}"
POCKETBASE_COLLECTION="${2:-_telemetry_data}"
POCKETBASE_COLLECTION="${2:-telemetry}"
echo "============================================="
echo " ProxmoxVED Data Migration Tool"


@@ -95,13 +95,13 @@ func main() {
pbCollection = os.Getenv("PB_TARGET_COLLECTION")
}
if pbCollection == "" {
pbCollection = "_telemetry_data"
pbCollection = "telemetry"
}
// Auth collection
authCollection := os.Getenv("PB_AUTH_COLLECTION")
if authCollection == "" {
authCollection = "_telemetry_service"
authCollection = "telemetry_service_user"
}
// Credentials - prefer admin auth for timestamp preservation


@@ -106,7 +106,7 @@ type TelemetryIn struct {
RepoSource string `json:"repo_source,omitempty"` // "ProxmoxVE", "ProxmoxVED", or "external"
}
-// TelemetryOut is sent to PocketBase (matches _telemetry_data collection)
+// TelemetryOut is sent to PocketBase (matches telemetry collection)
type TelemetryOut struct {
RandomID string `json:"random_id"`
Type string `json:"type"`
@@ -309,7 +309,7 @@ func (p *PBClient) UpdateTelemetryStatus(ctx context.Context, recordID string, u
}
// FetchRecordsPaginated retrieves records with pagination and optional filters.
-func (p *PBClient) FetchRecordsPaginated(ctx context.Context, page, limit int, status, app, osType, sortField, repoSource string) ([]TelemetryRecord, int, error) {
+func (p *PBClient) FetchRecordsPaginated(ctx context.Context, page, limit int, status, app, osType, typeFilter, sortField, repoSource string) ([]TelemetryRecord, int, error) {
if err := p.ensureAuth(ctx); err != nil {
return nil, 0, err
}
@@ -325,6 +325,9 @@ func (p *PBClient) FetchRecordsPaginated(ctx context.Context, page, limit int, s
if osType != "" {
filters = append(filters, fmt.Sprintf("os_type='%s'", osType))
}
if typeFilter != "" {
filters = append(filters, fmt.Sprintf("type='%s'", typeFilter))
}
if repoSource != "" {
filters = append(filters, fmt.Sprintf("repo_source='%s'", repoSource))
}
@@ -759,7 +762,7 @@ func main() {
// Cache config
RedisURL: env("REDIS_URL", ""),
EnableRedis: envBool("ENABLE_REDIS", false),
-CacheTTL: time.Duration(envInt("CACHE_TTL_SECONDS", 60)) * time.Second,
+CacheTTL: time.Duration(envInt("CACHE_TTL_SECONDS", 300)) * time.Second,
CacheEnabled: envBool("ENABLE_CACHE", true),
// Alert config
@@ -883,14 +886,12 @@ func main() {
// Dashboard API endpoint (with caching)
mux.HandleFunc("/api/dashboard", func(w http.ResponseWriter, r *http.Request) {
-days := 30
+days := 7 // Default: 7 days
if d := r.URL.Query().Get("days"); d != "" {
fmt.Sscanf(d, "%d", &days)
-if days < 1 {
-days = 1
-}
-if days > 365 {
-days = 365
+// days=0 means "all entries", negative values are invalid
+if days < 0 {
+days = 7
}
}
@@ -904,7 +905,8 @@ func main() {
repoSource = ""
}
-ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
+// Increase timeout for large datasets (dashboard aggregation takes time)
+ctx, cancel := context.WithTimeout(r.Context(), 120*time.Second)
defer cancel()
// Try cache first
@@ -941,6 +943,7 @@ func main() {
status := r.URL.Query().Get("status")
app := r.URL.Query().Get("app")
osType := r.URL.Query().Get("os")
typeFilter := r.URL.Query().Get("type")
sort := r.URL.Query().Get("sort")
repoSource := r.URL.Query().Get("repo")
if repoSource == "" {
@@ -966,7 +969,7 @@ func main() {
ctx, cancel := context.WithTimeout(r.Context(), 10*time.Second)
defer cancel()
-records, total, err := pb.FetchRecordsPaginated(ctx, page, limit, status, app, osType, sort, repoSource)
+records, total, err := pb.FetchRecordsPaginated(ctx, page, limit, status, app, osType, typeFilter, sort, repoSource)
if err != nil {
log.Printf("records fetch failed: %v", err)
http.Error(w, "failed to fetch records", http.StatusInternalServerError)
@@ -1114,6 +1117,22 @@ func main() {
ReadHeaderTimeout: 3 * time.Second,
}
// Background cache warmup job - pre-populates cache for common dashboard queries
if cfg.CacheEnabled {
go func() {
// Initial warmup after startup
time.Sleep(10 * time.Second)
warmupDashboardCache(pb, cache, cfg)
// Periodic refresh (every 4 minutes, before 5-minute TTL expires)
ticker := time.NewTicker(4 * time.Minute)
for range ticker.C {
warmupDashboardCache(pb, cache, cfg)
}
}()
log.Println("background cache warmup enabled")
}
log.Printf("telemetry-ingest listening on %s", cfg.ListenAddr)
log.Fatal(srv.ListenAndServe())
}
@@ -1199,4 +1218,44 @@ func splitCSV(s string) []string {
}
}
return out
}
// warmupDashboardCache pre-populates the cache with common dashboard queries
func warmupDashboardCache(pb *PBClient, cache *Cache, cfg Config) {
log.Println("[CACHE] Starting dashboard cache warmup...")
// Common day ranges and repos to pre-cache
dayRanges := []int{7, 30, 90}
repos := []string{"ProxmoxVE", ""} // ProxmoxVE and "all"
warmed := 0
for _, days := range dayRanges {
for _, repo := range repos {
ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
cacheKey := fmt.Sprintf("dashboard:%d:%s", days, repo)
// Check if already cached
var existing *DashboardData
if cache.Get(ctx, cacheKey, &existing) {
cancel()
continue // Already cached, skip
}
// Fetch and cache
data, err := pb.FetchDashboardData(ctx, days, repo)
cancel()
if err != nil {
log.Printf("[CACHE] Warmup failed for days=%d repo=%s: %v", days, repo, err)
continue
}
_ = cache.Set(context.Background(), cacheKey, data, cfg.CacheTTL)
warmed++
log.Printf("[CACHE] Warmed cache for days=%d repo=%s (%d installs)", days, repo, data.TotalAllTime)
}
}
log.Printf("[CACHE] Dashboard cache warmup complete (%d entries)", warmed)
}