Performance Tuning
Optimize TopGun for your specific workload: high throughput, low latency, or balanced production use.
Performance Benchmarks
Measured Performance (single node)
Benchmarks were run with a native MessagePack harness (msgpackr) on a single server node. Results were measured under sustained load with 100 concurrent connections. Optimizations include msgpackr for 2x faster serialization, native xxHash64 for Merkle tree hashing, and subscription-based routing.
Understanding the Optimizations
TopGun includes several performance optimizations inspired by proven patterns from high-performance distributed systems:
- Subscription-based routing: events are only sent to clients with active subscriptions, eliminating unnecessary broadcasts and reducing network traffic.
- Bounded event queue (eventQueueCapacity): a capacity-limited queue protects against OOM under load spikes; events are rejected rather than accumulating unbounded.
- Backpressure sync (backpressureSyncFrequency): periodic synchronous processing prevents async operation buildup and ensures consistent latency under load.
- Write coalescing (writeCoalescingMaxDelayMs): multiple small messages are batched into single syscalls, dramatically reducing kernel overhead at high throughput.
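The bounded-queue behavior described above can be sketched as follows. This is an illustration of the pattern, not the actual TopGun implementation; the class and method names are ours:

```typescript
// Illustrative sketch of a capacity-limited event queue (not the real TopGun API).
// When the queue is full, new events are rejected instead of growing memory unbounded.
class BoundedEventQueue<T> {
  private items: T[] = [];
  public rejectedCount = 0;

  constructor(private capacity: number) {}

  // Returns false (and counts a rejection) when the queue is at capacity.
  offer(event: T): boolean {
    if (this.items.length >= this.capacity) {
      this.rejectedCount++; // would surface as a queue_rejected-style metric
      return false;
    }
    this.items.push(event);
    return true;
  }

  // Removes and returns the oldest queued event, if any.
  poll(): T | undefined {
    return this.items.shift();
  }

  get size(): number {
    return this.items.length;
  }
}
```

Under a load spike the producer sees the rejection immediately and can drop or retry, rather than the process accumulating events until it runs out of memory.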
Tuning for High Throughput
Use these settings when your priority is maximizing events per second:
# High throughput configuration
# The Rust server is optimized for high throughput out of the box
# using tokio async runtime, zero-copy MsgPack, and Tower middleware
PORT=8080 \
DATABASE_URL=postgres://user:pass@localhost/topgun \
topgun-server
# The Rust server automatically handles:
# - Async I/O via tokio (no event queue tuning needed)
# - Efficient MsgPack serialization
# - Connection pooling via sqlx
# For advanced tuning, see: /docs/reference/server
When to Adjust
| Setting | Increase When | Decrease When |
|---|---|---|
| eventQueueCapacity | Seeing queue_rejected metrics | Memory constrained |
| eventStripeCount | CPU has many cores, queue contention | Few cores, diminishing returns |
| backpressureSyncFrequency | Throughput is priority over latency | Latency spikes are unacceptable |
| writeCoalescingMaxDelayMs | Network is bottleneck | Real-time delivery needed |
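The eventStripeCount setting refers to splitting the event queue into several independent stripes selected by a key hash, so producers working on different keys don't contend on one queue. A minimal sketch of the selection step, using an illustrative FNV-1a hash (TopGun's actual internals may differ):

```typescript
// Illustrative stripe selection (hypothetical, not the real TopGun internals):
// events for the same key always land on the same stripe, while distinct keys
// spread across stripes to reduce queue contention on multi-core machines.
function stripeIndex(key: string, stripeCount: number): number {
  // Simple 32-bit FNV-1a string hash, chosen here only for illustration.
  let hash = 0x811c9dc5;
  for (let i = 0; i < key.length; i++) {
    hash ^= key.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return (hash >>> 0) % stripeCount;
}
```

This is why the table suggests raising stripe count only when there are cores to match: each stripe is an independent consumer, and beyond the core count extra stripes give diminishing returns.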
Tuning for Low Latency
Use these settings for gaming, live collaboration, or trading applications:
# Low latency configuration
# The Rust server delivers sub-millisecond in-memory operations
# with immediate WebSocket push to subscribers
PORT=8080 \
DATABASE_URL=postgres://user:pass@localhost/topgun \
topgun-server
# In-memory CRDT operations are synchronous (0ms)
# WebSocket broadcast is immediate after CRDT merge
# For latency tuning details, see: /docs/reference/server
Trade-off Warning
Latency vs Throughput Trade-offs
| writeCoalescingMaxDelayMs | Latency | Throughput | Use Case |
|---|---|---|---|
| Disabled (false) | ~0ms added | Lower | Live gaming, trading |
| 1ms | ~1ms added | Medium | Live collaboration |
| 5ms (default) | ~5ms added | High | General purpose |
| 10-20ms | ~10-20ms added | Maximum | Batch processing |
Balanced Production Settings
# Balanced production configuration
PORT=8080 \
DATABASE_URL=postgres://user:pass@localhost/topgun \
JWT_SECRET=your-production-secret \
RUST_LOG=topgun_server=info \
topgun-server
# The Rust server includes built-in production defaults:
# - Tower LoadShed middleware for overload protection
# - Request timeout enforcement
# - Prometheus metrics at /metrics
# - Structured logging via tracing
# For full configuration, see: /docs/reference/server
Monitoring Performance
Critical Metrics
| Metric | Healthy Range | Action if Exceeded |
|---|---|---|
| topgun_event_queue_size | <80% of capacity | Increase eventQueueCapacity or add nodes |
| topgun_event_queue_rejected_total | 0 | Urgent: queue is full, events being dropped |
| topgun_backpressure_timeouts_total | 0 | Increase backpressureBackoffMs or reduce load |
| topgun_backpressure_pending_ops | <80% of maxPending | Increase backpressureMaxPending |
| topgun_connections_rejected_total | Near 0 | Increase rate limits or investigate DDoS |
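These metrics are exposed in Prometheus text format at /metrics. For ad-hoc checks outside Grafana, a small script can read a gauge straight from that output; the parsing helper below is our own sketch, not part of TopGun:

```typescript
// Illustrative helper that pulls one gauge out of Prometheus text exposition.
// The metric names match the table above; the parsing approach is illustrative.
function readMetric(exposition: string, name: string): number | null {
  for (const line of exposition.split("\n")) {
    if (line.startsWith("#")) continue; // skip HELP/TYPE comment lines
    // Matches both `name value` and `name{labels} value`.
    const match = line.match(/^(\w+)(?:\{[^}]*\})?\s+([0-9.eE+-]+)$/);
    if (match && match[1] === name) return Number(match[2]);
  }
  return null;
}

// Example: compute queue utilization locally, mirroring the Grafana query.
function queueUtilizationPct(exposition: string): number | null {
  const size = readMetric(exposition, "topgun_event_queue_size");
  const capacity = readMetric(exposition, "topgun_event_queue_capacity");
  if (size === null || capacity === null || capacity === 0) return null;
  return (size / capacity) * 100;
}
```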
Grafana Dashboard Queries
# Queue utilization (should stay below 80%)
topgun_event_queue_size / topgun_event_queue_capacity * 100
# Connection rejection rate (should be near 0)
rate(topgun_connections_rejected_total[5m])
# Backpressure timeout rate (alert if > 0)
rate(topgun_backpressure_timeouts_total[5m])
# Event throughput
rate(topgun_events_routed_total[5m])
# Average subscribers per event
topgun_subscribers_per_event{quantile="0.5"}
Recommended Alerts
| Alert Name | Condition | Severity |
|---|---|---|
| EventQueueFull | rate(topgun_event_queue_rejected_total[5m]) > 0 | Critical |
| BackpressureTimeout | rate(topgun_backpressure_timeouts_total[5m]) > 0 | Critical |
| HighQueueUtilization | topgun_event_queue_size / capacity > 0.8 | Warning |
| ConnectionRateLimitHit | rate(topgun_connections_rejected_total[5m]) > 10 | Warning |
OS-Level Tuning
For production servers handling thousands of connections, tune the operating system:
Linux Sysctl Settings
# Increase socket backlog
sudo sysctl -w net.core.somaxconn=65535
sudo sysctl -w net.ipv4.tcp_max_syn_backlog=65535
# Increase file descriptor limits
ulimit -n 65535
# Enable TCP keepalive tuning
sudo sysctl -w net.ipv4.tcp_keepalive_time=60
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=10
sudo sysctl -w net.ipv4.tcp_keepalive_probes=6
# Persist settings in /etc/sysctl.conf
cat >> /etc/sysctl.conf << EOF
net.core.somaxconn=65535
net.ipv4.tcp_max_syn_backlog=65535
net.ipv4.tcp_keepalive_time=60
net.ipv4.tcp_keepalive_intvl=10
net.ipv4.tcp_keepalive_probes=6
EOF
File Descriptor Limits
# /etc/security/limits.conf
topgun soft nofile 65535
topgun hard nofile 65535
topgun soft nproc 65535
topgun hard nproc 65535
Docker/Kubernetes Note
When running in containers, ensure the host has these settings applied. You may also need to set ulimits in your Docker Compose file or Kubernetes pod spec (for example via Docker's --ulimit flag).
Quick Reference
High Throughput
- eventQueueCapacity: 50000+
- eventStripeCount: 8
- writeCoalescingMaxDelayMs: 10-20
- backpressureSyncFrequency: 200
Low Latency
- writeCoalescingEnabled: false
- OR writeCoalescingMaxDelayMs: 1
- backpressureSyncFrequency: 50
- eventStripeCount: 4
Balanced
- Use defaults
- eventQueueCapacity: 10000-50000
- writeCoalescingMaxDelayMs: 5
- backpressureSyncFrequency: 100