yusing 57a2ca26db feat(proxmox): support node-level routes and journalctl access
This change enables Proxmox node-level operations without requiring a specific
LXC container VMID.

**Features added:**
- New `/proxmox/journalctl/{node}` API endpoint for streaming node journalctl
- Route configuration support for Proxmox nodes (VMID = 0)
- `ReverseLookupNode` function for node discovery by hostname/IP/alias
- `NodeJournalctl` method for executing journalctl on nodes

**Behavior changes:**
- VMID parameter in journalctl endpoints is now optional
- Routes targeting nodes (without specific containers) are now valid

**Bug fixes:**
- Fixed error message variable reference in route validation
2026-01-25 13:04:09 +08:00

Proxmox

The proxmox package provides Proxmox VE integration for GoDoxy, enabling management of Proxmox LXC containers and node-level operations.

Overview

The proxmox package implements Proxmox API client management, node discovery, and LXC container operations including power management and IP address retrieval.

Key Features

  • Proxmox API client management
  • Node discovery and pool management
  • LXC container operations (start, stop, status, stats, journalctl)
  • IP address retrieval for containers (online and offline)
  • Container stats streaming (like docker stats)
  • Journalctl streaming for LXC containers
  • Reverse resource lookup by IP, hostname, or alias
  • TLS configuration options
  • Token and username/password authentication

Architecture

graph TD
    A[Proxmox Config] --> B[Create Client]
    B --> C[Connect to API]
    C --> D[Fetch Cluster Info]
    D --> E[Discover Nodes]
    E --> F[Add to Node Pool]

    G[LXC Operations] --> H[Get IPs]
    G --> I[Start Container]
    G --> J[Stop Container]
    G --> K[Check Status]

    subgraph Node Pool
        F --> L[Nodes Map]
        L --> M[Node 1]
        L --> N[Node 2]
        L --> O[Node 3]
    end

Core Components

Config

type Config struct {
    URL         string            `json:"url" validate:"required,url"`
    Username    string            `json:"username" validate:"required_without=TokenID Secret"`
    Password    strutils.Redacted `json:"password" validate:"required_without=TokenID Secret"`
    Realm       string            `json:"realm" validate:"required_without=TokenID Secret"`
    TokenID     string            `json:"token_id" validate:"required_without=Username Password"`
    Secret      strutils.Redacted `json:"secret" validate:"required_without=Username Password"`
    NoTLSVerify bool              `json:"no_tls_verify"`

    client *Client
}

Client

type Client struct {
    *proxmox.Client
    *proxmox.Cluster
    Version *proxmox.Version
    // id -> resource; id: lxc/<vmid> or qemu/<vmid>
    resources   map[string]*VMResource
    resourcesMu sync.RWMutex
}

type VMResource struct {
    *proxmox.ClusterResource
    IPs []net.IP
}
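
The resource map is keyed by `kind/vmid` as the comment above notes. A minimal, self-contained sketch of a lock-guarded lookup (the names and cut-down types here are stand-ins for illustration, not the package's actual internals):

```go
package main

import (
	"fmt"
	"sync"
)

// VMResource stand-in with just enough fields for the sketch.
type VMResource struct{ Name string }

type client struct {
	resources   map[string]*VMResource
	resourcesMu sync.RWMutex
}

// getResource looks up a resource under the documented "kind/vmid" key
// (e.g. "lxc/100") while holding the read lock.
func (c *client) getResource(kind string, vmid int) (*VMResource, bool) {
	c.resourcesMu.RLock()
	defer c.resourcesMu.RUnlock()
	r, ok := c.resources[fmt.Sprintf("%s/%d", kind, vmid)]
	return r, ok
}

func main() {
	c := &client{resources: map[string]*VMResource{"lxc/100": {Name: "web"}}}
	if r, ok := c.getResource("lxc", 100); ok {
		fmt.Println("found:", r.Name)
	}
}
```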

Node

type Node struct {
    name   string
    id     string
    client *Client
}

var Nodes = pool.New[*Node]("proxmox_nodes")

NodeConfig

type NodeConfig struct {
    Node    string `json:"node" validate:"required"`
    VMID    int    `json:"vmid" validate:"required"`
    VMName  string `json:"vmname,omitempty"`
    Service string `json:"service,omitempty"`
}

Public API

Configuration

// Init initializes the Proxmox client.
func (c *Config) Init(ctx context.Context) gperr.Error

// Client returns the Proxmox client.
func (c *Config) Client() *Client

Client Operations

// UpdateClusterInfo fetches cluster info and discovers nodes.
func (c *Client) UpdateClusterInfo(ctx context.Context) error

// UpdateResources fetches VM resources and their IP addresses.
func (c *Client) UpdateResources(ctx context.Context) error

// GetResource gets a resource by kind and id.
func (c *Client) GetResource(kind string, id int) (*VMResource, error)

// ReverseLookupResource looks up a resource by IP, hostname, or alias.
func (c *Client) ReverseLookupResource(ip net.IP, hostname string, alias string) (*VMResource, error)

Node Operations

// AvailableNodeNames returns all available node names.
func AvailableNodeNames() string

// Node.Client returns the Proxmox client.
func (n *Node) Client() *Client

// Node.Get performs a GET request on the node.
func (n *Node) Get(ctx context.Context, path string, v any) error

Usage

Basic Setup

proxmoxCfg := &proxmox.Config{
    URL:         "https://proxmox.example.com:8006",
    TokenID:     "user@pam!token-name",
    Secret:      "your-api-token-secret",
    NoTLSVerify: false,
}

ctx := context.Background()
err := proxmoxCfg.Init(ctx)
if err != nil {
    log.Fatal(err)
}

client := proxmoxCfg.Client()

Node Access

// Get a specific node
node, ok := proxmox.Nodes.Get("pve")
if !ok {
    log.Fatal("Node not found")
}

fmt.Printf("Node: %s (%s)\n", node.Name(), node.Key())

Available Nodes

names := proxmox.AvailableNodeNames()
fmt.Printf("Available nodes: %s\n", names)

LXC Operations

Container Status

type LXCStatus string

const (
    LXCStatusRunning   LXCStatus = "running"
    LXCStatusStopped   LXCStatus = "stopped"
    LXCStatusSuspended LXCStatus = "suspended"
)

// LXCStatus returns the current status of a container.
func (node *Node) LXCStatus(ctx context.Context, vmid int) (LXCStatus, error)

// LXCIsRunning checks if a container is running.
func (node *Node) LXCIsRunning(ctx context.Context, vmid int) (bool, error)

// LXCIsStopped checks if a container is stopped.
func (node *Node) LXCIsStopped(ctx context.Context, vmid int) (bool, error)

// LXCName returns the name of a container.
func (node *Node) LXCName(ctx context.Context, vmid int) (string, error)

Container Actions

type LXCAction string

const (
    LXCStart    LXCAction = "start"
    LXCShutdown LXCAction = "shutdown"
    LXCSuspend  LXCAction = "suspend"
    LXCResume   LXCAction = "resume"
    LXCReboot   LXCAction = "reboot"
)

// LXCAction performs an action on a container with task tracking.
func (node *Node) LXCAction(ctx context.Context, vmid int, action LXCAction) error

// LXCSetShutdownTimeout sets the shutdown timeout for a container.
func (node *Node) LXCSetShutdownTimeout(ctx context.Context, vmid int, timeout time.Duration) error

Get Container IPs

// LXCGetIPs returns IP addresses of a container.
// First tries interfaces (online), then falls back to config (offline).
func (node *Node) LXCGetIPs(ctx context.Context, vmid int) ([]net.IP, error)

// LXCGetIPsFromInterfaces returns IP addresses from network interfaces.
// Returns empty if container is stopped.
func (node *Node) LXCGetIPsFromInterfaces(ctx context.Context, vmid int) ([]net.IP, error)

// LXCGetIPsFromConfig returns IP addresses from container config.
// Works for stopped/offline containers.
func (node *Node) LXCGetIPsFromConfig(ctx context.Context, vmid int) ([]net.IP, error)
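
Offline IP retrieval boils down to parsing the `netN` entries of the container config. A self-contained sketch of that parsing (the package's actual implementation may differ):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseNetIPs extracts static IP addresses from a Proxmox LXC netN
// config value, e.g. "name=eth0,bridge=vmbr0,ip=192.168.1.10/24,ip6=fd00::10/64".
func parseNetIPs(netCfg string) []net.IP {
	var ips []net.IP
	for _, kv := range strings.Split(netCfg, ",") {
		k, v, ok := strings.Cut(kv, "=")
		if !ok || (k != "ip" && k != "ip6") {
			continue
		}
		// Dynamic addressing carries no static address to report.
		if v == "dhcp" || v == "auto" || v == "manual" {
			continue
		}
		ip, _, err := net.ParseCIDR(v)
		if err != nil {
			continue
		}
		ips = append(ips, ip)
	}
	return ips
}

func main() {
	for _, ip := range parseNetIPs("name=eth0,bridge=vmbr0,ip=192.168.1.10/24,ip6=fd00::10/64") {
		fmt.Println(ip)
	}
}
```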

Container Stats (like docker stats)

// LXCStats streams container statistics.
// Format: "STATUS|CPU%%|MEM USAGE/LIMIT|MEM%%|NET I/O|BLOCK I/O"
// Example: "running|31.1%|9.6GiB/20GiB|48.87%|4.7GiB/3.3GiB|25GiB/36GiB"
func (node *Node) LXCStats(ctx context.Context, vmid int, stream bool) (io.ReadCloser, error)
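
The documented record layout can be reproduced with plain `fmt`. The sketch below is illustrative only; `fmtBytes` and `statsLine` are made-up names, not the package's actual formatter:

```go
package main

import "fmt"

// fmtBytes renders a byte count in binary units, docker-stats style.
func fmtBytes(b uint64) string {
	const unit = 1024
	if b < unit {
		return fmt.Sprintf("%dB", b)
	}
	div, exp := uint64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f%ciB", float64(b)/float64(div), "KMGTPE"[exp])
}

// statsLine assembles one record in the documented
// "STATUS|CPU%|MEM USAGE/LIMIT|MEM%|NET I/O|BLOCK I/O" layout.
func statsLine(status string, cpu float64, mem, maxMem, netIn, netOut, diskRead, diskWrite uint64) string {
	return fmt.Sprintf("%s|%.1f%%|%s/%s|%.2f%%|%s/%s|%s/%s",
		status, cpu*100,
		fmtBytes(mem), fmtBytes(maxMem),
		float64(mem)/float64(maxMem)*100,
		fmtBytes(netIn), fmtBytes(netOut),
		fmtBytes(diskRead), fmtBytes(diskWrite))
}

func main() {
	fmt.Println(statsLine("running", 0.311, 10<<30, 20<<30, 5<<30, 3<<30, 25<<30, 36<<30))
}
```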

Data Flow

sequenceDiagram
    participant Config
    participant Client
    participant NodePool
    participant Node
    participant ProxmoxAPI
    participant User

    Config->>Client: NewClient(url, options)
    Client->>ProxmoxAPI: GET /cluster/status
    ProxmoxAPI-->>Client: Cluster Info

    Client->>NodePool: Add Nodes
    NodePool->>NodePool: Store in Pool

    User->>NodePool: Get Node
    NodePool-->>User: Node

    User->>Node: LXCGetIPs(vmid)
    Node->>ProxmoxAPI: GET /lxc/{vmid}/config
    ProxmoxAPI-->>Node: Config with IPs
    Node-->>User: IP addresses

    User->>Node: LXCAction(vmid, "start")
    Node->>ProxmoxAPI: POST /lxc/{vmid}/status/start
    ProxmoxAPI-->>Node: Success
    Node-->>User: Done

Configuration

YAML Configuration

providers:
  proxmox:
    - url: https://proxmox.example.com:8006
      # Token-based authentication (optional)
      token_id: user@pam!token-name
      secret: your-api-token-secret

      # Username/password authentication (required for journalctl / service log streaming)
      # username: root
      # password: your-password
      # realm: pam

      no_tls_verify: false

Authentication Options

// Token-based authentication (recommended)
opts := []proxmox.Option{
    proxmox.WithAPIToken(c.TokenID, c.Secret.String()),
    proxmox.WithHTTPClient(&http.Client{Transport: tr}),
}

// Username/Password authentication
opts := []proxmox.Option{
    proxmox.WithCredentials(&proxmox.Credentials{
        Username: c.Username,
        Password: c.Password.String(),
        Realm:    c.Realm,
    }),
    proxmox.WithHTTPClient(&http.Client{Transport: tr}),
}

TLS Configuration

// With TLS verification (default)
tr := gphttp.NewTransport()

// Without TLS verification (insecure)
tr := gphttp.NewTransportWithTLSConfig(&tls.Config{
    InsecureSkipVerify: true,
})
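
Using only the standard library, an equivalent transport setup might look like the following (assuming the gphttp helpers wrap `http.Transport` in roughly this way):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// newTransport builds an HTTP transport whose TLS verification matches
// the no_tls_verify config option.
func newTransport(noTLSVerify bool) *http.Transport {
	return &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: noTLSVerify},
	}
}

func main() {
	tr := newTransport(true) // equivalent to no_tls_verify: true
	client := &http.Client{Transport: tr, Timeout: 5 * time.Second}
	fmt.Println("skip verify:", tr.TLSClientConfig.InsecureSkipVerify, "timeout:", client.Timeout)
}
```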

Node Pool

The package maintains a global node pool:

var Nodes = pool.New[*Node]("proxmox_nodes")

Pool Operations

// Add a node
Nodes.Add(&Node{name: "pve1", id: "node/pve1", client: client})

// Get a node
node, ok := Nodes.Get("pve1")

// Iterate nodes
for _, node := range Nodes.Iter {
    fmt.Printf("Node: %s\n", node.Name())
}

Integration with Route

The proxmox package integrates with the route package for idlewatcher:

// In route validation
if r.Idlewatcher != nil && r.Idlewatcher.Proxmox != nil {
    nodeName := r.Idlewatcher.Proxmox.Node
    vmid := r.Idlewatcher.Proxmox.VMID

    node, ok := proxmox.Nodes.Get(nodeName)
    if !ok {
        return gperr.Errorf("proxmox node %s not found", nodeName)
    }

    // Get container IPs
    ips, err := node.LXCGetIPs(ctx, vmid)
    // ... check reachability
}

Authentication

The package supports two authentication methods:

  1. API Token (recommended): Uses token_id and secret
  2. Username/Password: Uses username, password, and realm

Both methods support TLS verification options.

Error Handling

// Timeout handling
if errors.Is(err, context.DeadlineExceeded) {
    return gperr.New("timeout fetching proxmox cluster info")
}

// Connection errors
return gperr.New("failed to fetch proxmox cluster info").With(err)

// Resource not found
return gperr.New("resource not found").With(ErrResourceNotFound)

Performance Considerations

  • Cluster info fetched once on init
  • Nodes cached in pool
  • Resources updated in background loop (every 3 seconds by default)
  • Concurrent IP resolution for all containers (limited to GOMAXPROCS * 2)
  • 5-second timeout for initial connection
  • Per-operation API calls with 3-second timeout

Constants

const ResourcePollInterval = 3 * time.Second

The ResourcePollInterval constant controls how often resources are updated in the background loop.