Shared LRU TTL Cache for NetBox Custom Scripts #11933

Open
opened 2025-12-29 21:51:40 +01:00 by adam · 0 comments
Owner

Originally created by @SaschaSchwarzK on GitHub (Dec 18, 2025).

NetBox version

v4.4.8

Feature type

New functionality

Proposed functionality

Introduce a built‑in LRU‑style cache with TTL support that is accessible from custom scripts, event rules, and background worker processes in NetBox.
The cache should:

  • Use NetBox’s configured Django cache backend (django.core.cache).
  • Be shared across all worker processes and app instances.
  • Support:
    • LRU eviction policy
    • TTL (time‑to‑live) per entry
    • Namespacing
    • Stampede protection via Redis locking
  • Provide a Python API/decorator, e.g.:
```python
@cache_function(namespace="vault", ttl=300)
def get_secret(device_id): ...
```
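A minimal sketch of what such a decorator could look like, assuming only the standard `django.core.cache` API. The name `cache_function`, the key scheme, and the lock handling are illustrative assumptions, not an existing NetBox API; stampede protection here uses `cache.add()` as a best-effort lock rather than a native Redis lock, and eviction is left to the backend (e.g. Redis `maxmemory-policy`).

```python
import functools
import hashlib
import time

from django.core.cache import cache


def cache_function(namespace, ttl=300, lock_timeout=30):
    """Cache the decorated function's return value in the shared Django cache."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Build a deterministic key from namespace, function name, and arguments.
            raw = repr((args, sorted(kwargs.items())))
            digest = hashlib.sha256(raw.encode()).hexdigest()
            key = f"{namespace}:{func.__module__}.{func.__qualname__}:{digest}"

            hit = cache.get(key)
            if hit is not None:
                return hit

            # Best-effort stampede protection: only the worker that wins
            # cache.add() recomputes; the others poll briefly for the result.
            lock_key = f"{key}:lock"
            if cache.add(lock_key, 1, timeout=lock_timeout):
                try:
                    value = func(*args, **kwargs)
                    cache.set(key, value, timeout=ttl)
                    return value
                finally:
                    cache.delete(lock_key)

            # Another worker is computing; wait for it to publish the value.
            deadline = time.monotonic() + lock_timeout
            while time.monotonic() < deadline:
                hit = cache.get(key)
                if hit is not None:
                    return hit
                time.sleep(0.1)
            # Fall back to computing locally if the other worker never finished.
            return func(*args, **kwargs)

        return wrapper
    return decorator
```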

Use case

Custom scripts triggered via event rules often make external API calls—such as retrieving secrets from Vault or querying external inventory/monitoring systems.

In high‑volume environments with multiple workers, the same script may run simultaneously for many devices, causing:

  • Duplicate external API calls
  • Rate‑limit violations
  • Higher latency and unnecessary load on workers

A shared cross‑worker TTL cache prevents these issues by letting the first worker compute and cache a value, while all others reuse the cached data.
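A hypothetical usage sketch inside a NetBox custom script illustrates the intended effect: when many devices are processed in parallel, only the first call per key reaches the external system. The Vault URL and the `SyncDeviceCredentials` script are made-up examples; `cache_function` is the proposed decorator sketched above.

```python
import requests

from dcim.models import Device
from extras.scripts import Script


@cache_function(namespace="vault", ttl=300)
def get_secret(device_id):
    # Placeholder external call; a real script would authenticate against
    # Vault (or a similar secrets backend) instead of a plain GET.
    response = requests.get(f"https://vault.example.com/v1/secret/devices/{device_id}")
    response.raise_for_status()
    return response.json()


class SyncDeviceCredentials(Script):
    class Meta:
        name = "Sync device credentials"
        description = "Fetch per-device secrets, reusing the shared cache across workers"

    def run(self, data, commit):
        for device in Device.objects.filter(status="active"):
            get_secret(device.pk)
            self.log_info(f"Fetched secret for {device.name} (cached across workers)")
```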

Database changes

No changes to PostgreSQL are required.
All cached data remains ephemeral and stored only in the configured cache backend.

External dependencies

  • A Django‑compatible cache backend (Redis/Valkey recommended)
  • Shared access for all app and worker instances
  • Redis locking support for stampede protection
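For reference, NetBox's existing `configuration.py` already points both the web application and the RQ workers at a shared Redis/Valkey instance via the `REDIS` setting, so a cache built on the `caching` database would automatically be visible to every process. Host names and database numbers below are illustrative only.

```python
# Excerpt from configuration.py (illustrative values)
REDIS = {
    'tasks': {
        'HOST': 'redis.example.com',
        'PORT': 6379,
        'PASSWORD': '',
        'DATABASE': 0,
        'SSL': False,
    },
    'caching': {
        'HOST': 'redis.example.com',
        'PORT': 6379,
        'PASSWORD': '',
        'DATABASE': 1,
        'SSL': False,
    },
}
```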
adam added the type: feature, netbox, status: needs triage labels 2025-12-29 21:51:40 +01:00

Reference: starred/netbox#11933