mirror of https://github.com/beshu-tech/deltaglider.git
synced 2026-05-01 20:54:48 +02:00

Compare commits (5 commits):
- 85dd315424
- dbd2632cae
- 3d04a407c0
- 47f022fffe
- 7a2ed16ee7

CHANGELOG.md (27 changed lines)
@@ -7,6 +7,31 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

## [5.1.0] - 2025-10-10

### Added

- **New CLI Command**: `deltaglider stats <bucket>` for bucket statistics and compression metrics
  - Supports `--detailed` flag for comprehensive analysis
  - Supports `--json` flag for machine-readable output
  - Accepts multiple formats: `s3://bucket/`, `s3://bucket`, `bucket`
- **Session-Level Statistics Caching**: Bucket stats now cached per client instance
  - Automatic cache invalidation on mutations (put, delete, bucket operations)
  - Intelligent cache reuse (detailed stats serve quick stat requests)
  - Enhanced `list_buckets()` includes cached stats when available
- **Programmatic Cache Management**: Added cache management APIs for long-running applications
  - `clear_cache()`: Clear all cached references
  - `evict_cache()`: Remove specific cached reference
  - Session-scoped cache lifecycle management

### Changed

- Bucket statistics are now cached within client session for performance
- `list_buckets()` response includes `DeltaGliderStats` metadata when cached

### Documentation

- Added comprehensive DG_MAX_RATIO tuning guide in docs/
- Updated CLI command reference in CLAUDE.md and README.md
- Added detailed cache management documentation

## [5.0.3] - 2025-10-10

### Security

@@ -152,6 +177,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

- Delta compression for versioned artifacts
- 99%+ compression for similar files

[5.1.0]: https://github.com/beshu-tech/deltaglider/compare/v5.0.3...v5.1.0
[5.0.3]: https://github.com/beshu-tech/deltaglider/compare/v5.0.1...v5.0.3
[5.0.1]: https://github.com/beshu-tech/deltaglider/compare/v5.0.0...v5.0.1
[5.0.0]: https://github.com/beshu-tech/deltaglider/compare/v4.2.4...v5.0.0
[4.2.4]: https://github.com/beshu-tech/deltaglider/compare/v4.2.3...v4.2.4

CLAUDE.md (17 changed lines)
@@ -74,6 +74,17 @@ export AWS_SECRET_ACCESS_KEY=minioadmin

# Now you can use deltaglider commands
deltaglider cp test.zip s3://test-bucket/
deltaglider stats test-bucket  # Get bucket statistics
```

### Available CLI Commands
```bash
cp      # Copy files to/from S3 (AWS S3 compatible)
ls      # List S3 buckets or objects (AWS S3 compatible)
rm      # Remove S3 objects (AWS S3 compatible)
sync    # Synchronize directories with S3 (AWS S3 compatible)
stats   # Get bucket statistics and compression metrics
verify  # Verify integrity of delta file
```

## Architecture

@@ -189,7 +200,11 @@ Core delta logic is in `src/deltaglider/core/service.py`:

## Environment Variables

- `DG_LOG_LEVEL`: Logging level (default: "INFO")
- `DG_MAX_RATIO`: Maximum acceptable delta/file ratio (default: "0.5")
- `DG_MAX_RATIO`: Maximum acceptable delta/file ratio (default: "0.5", range: "0.0-1.0")
  - **See [docs/DG_MAX_RATIO.md](docs/DG_MAX_RATIO.md) for complete tuning guide**
  - Controls when to use delta vs. direct storage
  - Lower (0.2-0.3) = conservative, only high-quality compression
  - Higher (0.6-0.7) = permissive, accept modest savings
- `DG_CACHE_BACKEND`: Cache backend type - "filesystem" (default) or "memory"
- `DG_CACHE_MEMORY_SIZE_MB`: Memory cache size limit in MB (default: "100")
- `DG_CACHE_ENCRYPTION_KEY`: Optional base64-encoded Fernet key for persistent encryption (ephemeral by default)
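
These variables can be combined in a single shell session for local development. A minimal sketch (the specific values are illustrative, not recommendations):

```bash
export DG_LOG_LEVEL=DEBUG
export DG_MAX_RATIO=0.3            # conservative: only keep high-quality deltas
export DG_CACHE_BACKEND=memory
export DG_CACHE_MEMORY_SIZE_MB=200
deltaglider cp test.zip s3://test-bucket/
deltaglider stats test-bucket
```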

@@ -71,6 +71,11 @@ HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \

ENV DG_LOG_LEVEL=INFO

# Performance & Compression
# DG_MAX_RATIO: Maximum delta/file ratio (0.0-1.0)
# Default 0.5 means: only use delta if delta_size ≤ 50% of original_size
# Lower (0.2-0.3) = more conservative, only high-quality compression
# Higher (0.6-0.7) = more permissive, accept modest savings
# See docs/DG_MAX_RATIO.md for complete tuning guide
ENV DG_MAX_RATIO=0.5

# Cache Configuration

README.md (11 changed lines)
@@ -85,7 +85,7 @@ docker run -v /shared-cache:/tmp/.deltaglider \

**Environment Variables**:
- `DG_LOG_LEVEL`: Logging level (default: `INFO`, options: `DEBUG`, `INFO`, `WARNING`, `ERROR`)
- `DG_MAX_RATIO`: Maximum delta/file ratio (default: `0.5`, range: `0.0-1.0`)
- `DG_MAX_RATIO`: Maximum delta/file ratio (default: `0.5`, range: `0.0-1.0`) - [📖 Complete Guide](docs/DG_MAX_RATIO.md)
- `DG_CACHE_BACKEND`: Cache backend (default: `filesystem`, options: `filesystem`, `memory`)
- `DG_CACHE_MEMORY_SIZE_MB`: Memory cache size in MB (default: `100`)
- `DG_CACHE_ENCRYPTION_KEY`: Optional base64-encoded encryption key for cross-process cache sharing

@@ -94,6 +94,8 @@ docker run -v /shared-cache:/tmp/.deltaglider \
- `AWS_SECRET_ACCESS_KEY`: AWS secret key
- `AWS_DEFAULT_REGION`: AWS region (default: `us-east-1`)

> **💡 Tip**: `DG_MAX_RATIO` is a powerful tuning parameter. See the [DG_MAX_RATIO guide](docs/DG_MAX_RATIO.md) to learn how to optimize compression for your use case.

**Security Notes**:
- Encryption is **always enabled** (cannot be disabled)
- Each container gets ephemeral encryption keys for maximum security

@@ -187,6 +189,13 @@ deltaglider sync s3://releases/ ./local-backup/ # Sync from S3
deltaglider sync --delete ./src/ s3://backup/       # Mirror exactly
deltaglider sync --exclude "*.log" ./src/ s3://backup/  # Exclude patterns

# Get bucket statistics (compression metrics)
deltaglider stats my-bucket              # Quick stats overview
deltaglider stats s3://my-bucket         # Also accepts s3:// format
deltaglider stats s3://my-bucket/        # With or without trailing slash
deltaglider stats my-bucket --detailed   # Detailed compression metrics (slower)
deltaglider stats my-bucket --json       # JSON output for automation

# Works with MinIO, R2, and S3-compatible storage
deltaglider cp file.zip s3://bucket/ --endpoint-url http://localhost:9000
```

docs/CACHE_MANAGEMENT.md (new file, 684 lines)
@@ -0,0 +1,684 @@
|
||||
# Cache Management for Long-Running Applications
|
||||
|
||||
## Overview
|
||||
|
||||
DeltaGlider uses caching to store reference files locally for efficient delta compression. However, unlike the CLI which automatically cleans up cache on exit, **programmatic SDK usage requires manual cache management** for long-running applications.
|
||||
|
||||
This guide explains how to manage cache in production applications, including:
|
||||
- When and how to clear cache
|
||||
- Encryption key management strategies
|
||||
- Memory vs. filesystem cache trade-offs
|
||||
- Best practices for different application types
|
||||
|
||||
## The Problem
|
||||
|
||||
**CLI (Automatic Cleanup)**:
|
||||
```bash
|
||||
# Cache created in /tmp/deltaglider-xyz123/
|
||||
deltaglider cp file.zip s3://bucket/
|
||||
|
||||
# Process exits → Cache automatically deleted via atexit handler
|
||||
# ✅ No manual cleanup needed
|
||||
```
|
||||
|
||||
**SDK (Manual Cleanup Required)**:
|
||||
```python
|
||||
from deltaglider import create_client
|
||||
|
||||
client = create_client()
|
||||
|
||||
# Long-running application (runs for hours/days)
|
||||
while True:
|
||||
client.put_object(Bucket='releases', Key='file.zip', Body=data)
|
||||
time.sleep(600) # Upload every 10 minutes
|
||||
|
||||
# ❌ Process never exits → Cache never cleaned
|
||||
# ❌ Cache grows indefinitely
|
||||
# ❌ Memory/disk exhaustion after days/weeks
|
||||
```
|
||||
|
||||
## Solution: Manual Cache Management
|
||||
|
||||
### Basic Cache Clearing
|
||||
|
||||
```python
|
||||
from deltaglider import create_client
|
||||
|
||||
client = create_client()
|
||||
|
||||
# Do some uploads
|
||||
client.put_object(Bucket='releases', Key='file1.zip', Body=data1)
|
||||
client.put_object(Bucket='releases', Key='file2.zip', Body=data2)
|
||||
|
||||
# Clear cache to free resources
|
||||
client.clear_cache()
|
||||
|
||||
# ✅ All cached references removed
|
||||
# ✅ Memory/disk freed
|
||||
# ✅ Next upload will fetch fresh reference from S3
|
||||
```
|
||||
|
||||
### When to Clear Cache
|
||||
|
||||
| Scenario | Frequency | Reason |
|
||||
|----------|-----------|--------|
|
||||
| **Long-running services** | Every 1-4 hours | Prevent memory/disk growth |
|
||||
| **After config changes** | Immediately | Old cache is invalid |
|
||||
| **High memory pressure** | As needed | Free resources |
|
||||
| **Test cleanup** | After each test | Ensure clean state |
|
||||
| **Scheduled jobs** | After job completes | Clean up before next run |
|
||||
| **Key rotation** | After rotation | Old encrypted cache unusable |
|
||||
|
||||
## Cache Strategies by Application Type
|
||||
|
||||
### 1. Long-Running Background Service
|
||||
|
||||
**Scenario**: Continuous upload service running 24/7
|
||||
|
||||
```python
|
||||
from deltaglider import create_client
import schedule
import time
import logging
from datetime import datetime  # used for timestamped keys below
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
client = create_client()
|
||||
|
||||
def upload_task():
|
||||
"""Upload latest build."""
|
||||
try:
|
||||
with open('latest-build.zip', 'rb') as f:
|
||||
response = client.put_object(
|
||||
Bucket='releases',
|
||||
Key=f'builds/{datetime.now().isoformat()}.zip',
|
||||
Body=f
|
||||
)
|
||||
logger.info(f"Uploaded: {response['ETag']}")
|
||||
except Exception as e:
|
||||
logger.error(f"Upload failed: {e}")
|
||||
|
||||
def cleanup_task():
|
||||
"""Clear cache to prevent growth."""
|
||||
client.clear_cache()
|
||||
logger.info("Cache cleared - freed resources")
|
||||
|
||||
# Upload every 10 minutes
|
||||
schedule.every(10).minutes.do(upload_task)
|
||||
|
||||
# Clear cache every 2 hours (balance performance vs. memory)
|
||||
schedule.every(2).hours.do(cleanup_task)
|
||||
|
||||
# Run indefinitely
|
||||
while True:
|
||||
schedule.run_pending()
|
||||
time.sleep(60)
|
||||
```
|
||||
|
||||
**Cache Clearing Frequency Guidelines**:
|
||||
- Every 1 hour: High upload frequency (>100/day), memory-constrained
|
||||
- Every 2-4 hours: Moderate upload frequency (10-100/day), normal memory
|
||||
- Every 6-12 hours: Low upload frequency (<10/day), abundant memory
|
||||
|
||||
### 2. Periodic Batch Job
|
||||
|
||||
**Scenario**: Daily backup script
|
||||
|
||||
```python
|
||||
from deltaglider import create_client
|
||||
import glob
|
||||
from pathlib import Path
|
||||
|
||||
def daily_backup():
|
||||
"""Upload daily backups and clean up."""
|
||||
client = create_client()
|
||||
|
||||
try:
|
||||
# Upload all backup files
|
||||
for backup in glob.glob('/backups/*.zip'):
|
||||
with open(backup, 'rb') as f:
|
||||
client.put_object(
|
||||
Bucket='backups',
|
||||
Key=f'daily/{Path(backup).name}',
|
||||
Body=f
|
||||
)
|
||||
print(f"Backed up: {backup}")
|
||||
|
||||
finally:
|
||||
# ALWAYS clear cache at end of job
|
||||
client.clear_cache()
|
||||
print("Cache cleared - job complete")
|
||||
|
||||
# Run daily via cron/systemd timer
|
||||
if __name__ == '__main__':
|
||||
daily_backup()
|
||||
```
|
||||
|
||||
**Best Practice**: Always clear cache in `finally` block to ensure cleanup even if job fails.
|
||||
|
||||
### 3. Web Application / API Server
|
||||
|
||||
**Scenario**: Flask/FastAPI app with upload endpoints
|
||||
|
||||
```python
|
||||
from fastapi import FastAPI, UploadFile, BackgroundTasks
|
||||
from deltaglider import create_client
|
||||
import asyncio
|
||||
|
||||
app = FastAPI()
|
||||
|
||||
# Create client once at startup
|
||||
client = create_client()
|
||||
|
||||
async def periodic_cache_cleanup():
|
||||
"""Background task to clear cache periodically."""
|
||||
while True:
|
||||
await asyncio.sleep(3600) # Every hour
|
||||
client.clear_cache()
|
||||
print("Cache cleared in background")
|
||||
|
||||
@app.on_event("startup")
|
||||
async def startup_event():
|
||||
"""Start background cache cleanup task."""
|
||||
asyncio.create_task(periodic_cache_cleanup())
|
||||
|
||||
@app.post("/upload")
|
||||
async def upload_file(file: UploadFile):
|
||||
"""Upload endpoint with automatic cache management."""
|
||||
content = await file.read()
|
||||
|
||||
response = client.put_object(
|
||||
Bucket='uploads',
|
||||
Key=f'files/{file.filename}',
|
||||
Body=content
|
||||
)
|
||||
|
||||
return {"message": "Uploaded", "etag": response['ETag']}
|
||||
|
||||
@app.post("/admin/clear-cache")
|
||||
async def admin_clear_cache():
|
||||
"""Manual cache clear endpoint for admin."""
|
||||
client.clear_cache()
|
||||
return {"message": "Cache cleared"}
|
||||
```
|
||||
|
||||
**Best Practice**: Run periodic cache cleanup in background task, provide manual clear endpoint for emergencies.
|
||||
|
||||
### 4. Testing / CI/CD
|
||||
|
||||
**Scenario**: Test suite using DeltaGlider
|
||||
|
||||
```python
|
||||
import pytest
|
||||
from deltaglider import create_client
|
||||
|
||||
@pytest.fixture
|
||||
def deltaglider_client():
|
||||
"""Provide clean client for each test."""
|
||||
client = create_client()
|
||||
yield client
|
||||
# ALWAYS clear cache after test
|
||||
client.clear_cache()
|
||||
|
||||
def test_upload(deltaglider_client):
|
||||
"""Test upload with automatic cleanup."""
|
||||
response = deltaglider_client.put_object(
|
||||
Bucket='test-bucket',
|
||||
Key='test-file.zip',
|
||||
Body=b'test data'
|
||||
)
|
||||
assert response['ETag'] is not None
|
||||
# Cache automatically cleared by fixture
|
||||
|
||||
def test_download(deltaglider_client):
|
||||
"""Test download with clean cache."""
|
||||
# Cache is clean from previous test
|
||||
deltaglider_client.put_object(Bucket='test', Key='file.zip', Body=b'data')
|
||||
response = deltaglider_client.get_object(Bucket='test', Key='file.zip')
|
||||
assert response['Body'].read() == b'data'
|
||||
# Cache automatically cleared by fixture
|
||||
```
|
||||
|
||||
**Best Practice**: Use pytest fixtures to ensure cache is cleared after each test.
|
||||
|
||||
### 5. AWS Lambda / Serverless
|
||||
|
||||
**Scenario**: Lambda function with warm container reuse
|
||||
|
||||
```python
|
||||
import os
|
||||
from deltaglider import create_client
|
||||
|
||||
# Initialize client outside handler (reused across invocations)
|
||||
client = create_client()
|
||||
|
||||
# Track invocation count for cache clearing
|
||||
invocation_count = 0
|
||||
|
||||
def lambda_handler(event, context):
|
||||
"""Lambda handler with periodic cache clearing."""
|
||||
global invocation_count
|
||||
invocation_count += 1
|
||||
|
||||
try:
|
||||
# Upload file
|
||||
response = client.put_object(
|
||||
Bucket='lambda-uploads',
|
||||
Key=event['filename'],
|
||||
Body=event['data']
|
||||
)
|
||||
|
||||
# Clear cache every 50 invocations (warm container optimization)
|
||||
if invocation_count % 50 == 0:
|
||||
client.clear_cache()
|
||||
print(f"Cache cleared after {invocation_count} invocations")
|
||||
|
||||
return {'statusCode': 200, 'etag': response['ETag']}
|
||||
|
||||
except Exception as e:
|
||||
# Clear cache on error to prevent poisoned state
|
||||
client.clear_cache()
|
||||
return {'statusCode': 500, 'error': str(e)}
|
||||
```
|
||||
|
||||
**Best Practice**: Clear cache periodically (every N invocations) and on errors. Lambda warm containers can reuse cache across invocations for performance.
|
||||
|
||||
## Encryption Key Management
|
||||
|
||||
DeltaGlider always encrypts cache data. Understanding key management is critical for programmatic usage.
|
||||
|
||||
### Ephemeral Keys (Default - Recommended)
|
||||
|
||||
**How It Works**:
|
||||
- New encryption key generated per client instance
|
||||
- Cache encrypted with instance-specific key
|
||||
- Key lost when client is garbage collected
|
||||
- **Maximum security** - keys never persist
|
||||
|
||||
**When to Use**:
|
||||
- Single-process applications
|
||||
- Short-lived scripts
|
||||
- CI/CD pipelines
|
||||
- Testing
|
||||
- Maximum security requirements
|
||||
|
||||
**Example**:
|
||||
```python
|
||||
from deltaglider import create_client
|
||||
|
||||
# Create client (generates ephemeral key automatically)
|
||||
client = create_client()
|
||||
|
||||
# Upload file (encrypted with ephemeral key)
|
||||
client.put_object(Bucket='bucket', Key='file.zip', Body=data)
|
||||
|
||||
# Clear cache
|
||||
client.clear_cache()
|
||||
|
||||
# ✅ Encrypted cache cleared
|
||||
# ✅ Key was never persisted
|
||||
# ✅ Perfect forward secrecy
|
||||
```
|
||||
|
||||
**Characteristics**:
|
||||
- ✅ Maximum security (keys never leave process)
|
||||
- ✅ Perfect forward secrecy
|
||||
- ✅ No key management overhead
|
||||
- ❌ Cache not shareable between processes
|
||||
- ❌ Cache not reusable after client recreation
|
||||
|
||||
### Persistent Keys (Advanced - Shared Cache)
|
||||
|
||||
**How It Works**:
|
||||
- Use same encryption key across multiple processes/clients
|
||||
- Key stored in environment variable or secrets manager
|
||||
- All processes can read each other's encrypted cache
|
||||
- **Trade-off**: Convenience vs. security
|
||||
|
||||
**When to Use**:
|
||||
- Multi-process applications (workers, replicas)
|
||||
- Shared cache across containers
|
||||
- Cache persistence across application restarts
|
||||
- Horizontal scaling scenarios
|
||||
|
||||
**Example - Environment Variable**:
|
||||
```python
|
||||
import os
|
||||
from cryptography.fernet import Fernet
|
||||
import base64
|
||||
|
||||
# Generate persistent key (do this ONCE, securely)
|
||||
key = Fernet.generate_key()
|
||||
key_b64 = base64.b64encode(key).decode('utf-8')
|
||||
print(f"DG_CACHE_ENCRYPTION_KEY={key_b64}") # Store in secrets manager!
|
||||
|
||||
# Set in environment (or use secrets manager)
|
||||
os.environ['DG_CACHE_ENCRYPTION_KEY'] = key_b64
|
||||
|
||||
# All client instances use same key
|
||||
client1 = create_client()
|
||||
client2 = create_client()
|
||||
|
||||
# Client1 writes to cache
|
||||
client1.put_object(Bucket='bucket', Key='file.zip', Body=data)
|
||||
|
||||
# Client2 can read same cached data (same key!)
|
||||
client2.get_object(Bucket='bucket', Key='file.zip')
|
||||
|
||||
# ✅ Cache shared between processes
|
||||
# ⚠️ Key must be securely managed
|
||||
```
|
||||
|
||||
**Example - AWS Secrets Manager**:
|
||||
```python
|
||||
import boto3
import json
import os

from deltaglider import create_client
|
||||
|
||||
def get_encryption_key_from_secrets_manager():
|
||||
"""Retrieve encryption key from AWS Secrets Manager."""
|
||||
secrets = boto3.client('secretsmanager', region_name='us-west-2')
|
||||
response = secrets.get_secret_value(SecretId='deltaglider/cache-encryption-key')
|
||||
secret = json.loads(response['SecretString'])
|
||||
return secret['encryption_key']
|
||||
|
||||
# Retrieve key securely
|
||||
os.environ['DG_CACHE_ENCRYPTION_KEY'] = get_encryption_key_from_secrets_manager()
|
||||
|
||||
# Create client with persistent key
|
||||
client = create_client()
|
||||
```
|
||||
|
||||
**Security Best Practices for Persistent Keys**:
|
||||
1. **Generate once**: Never regenerate unless rotating
|
||||
2. **Store securely**: AWS Secrets Manager, HashiCorp Vault, etc.
|
||||
3. **Rotate regularly**: Follow your key rotation policy
|
||||
4. **Audit access**: Log who accesses encryption keys
|
||||
5. **Principle of least privilege**: Only processes that need shared cache get the key
|
||||
|
||||
### Key Rotation
|
||||
|
||||
**When to Rotate**:
|
||||
- Regular schedule (every 90 days recommended)
|
||||
- After security incident
|
||||
- When team members leave
|
||||
- After key exposure
|
||||
|
||||
**How to Rotate**:
|
||||
```python
|
||||
import base64
import os

from deltaglider import create_client
|
||||
|
||||
# OLD KEY (existing cache encrypted with this)
|
||||
old_key = os.environ.get('DG_CACHE_ENCRYPTION_KEY')
|
||||
|
||||
# Generate NEW KEY
|
||||
from cryptography.fernet import Fernet
|
||||
new_key = Fernet.generate_key()
|
||||
new_key_b64 = base64.b64encode(new_key).decode('utf-8')
|
||||
|
||||
# Steps for rotation:
|
||||
# 1. Clear old cache (encrypted with old key)
|
||||
client_old = create_client() # Uses old key from env
|
||||
client_old.clear_cache()
|
||||
|
||||
# 2. Update environment with new key
|
||||
os.environ['DG_CACHE_ENCRYPTION_KEY'] = new_key_b64
|
||||
|
||||
# 3. Create new client with new key
|
||||
client_new = create_client() # Uses new key
|
||||
|
||||
# 4. Continue operations
|
||||
client_new.put_object(Bucket='bucket', Key='file.zip', Body=data)
|
||||
|
||||
# ✅ Cache now encrypted with new key
|
||||
# ✅ Old encrypted cache cleared
|
||||
```
|
||||
|
||||
## Memory vs. Filesystem Cache
|
||||
|
||||
### Filesystem Cache (Default)
|
||||
|
||||
**Characteristics**:
|
||||
- Stored in `/tmp/deltaglider-*/`
|
||||
- Persistent across client recreations (within same process)
|
||||
- Can be shared between processes (with persistent encryption key)
|
||||
- Slower than memory cache (disk I/O)
|
||||
|
||||
**Configuration**:
|
||||
```python
|
||||
import os
|
||||
|
||||
# Explicitly set filesystem cache (this is the default)
|
||||
os.environ['DG_CACHE_BACKEND'] = 'filesystem'
|
||||
|
||||
from deltaglider import create_client
|
||||
client = create_client()
|
||||
```
|
||||
|
||||
**When to Use**:
|
||||
- Default choice for most applications
|
||||
- When cache should persist across client recreations
|
||||
- Multi-process applications (with persistent key)
|
||||
- Memory-constrained environments
|
||||
|
||||
**Cache Clearing**:
|
||||
```python
|
||||
client.clear_cache()
|
||||
# Removes all files from /tmp/deltaglider-*/
|
||||
# Frees disk space
|
||||
```
|
||||
|
||||
### Memory Cache
|
||||
|
||||
**Characteristics**:
|
||||
- Stored in process memory (RAM)
|
||||
- Fast access (no disk I/O)
|
||||
- Automatically freed when process exits
|
||||
- LRU eviction prevents unlimited growth
|
||||
- Not shared between processes
|
||||
|
||||
**Configuration**:
|
||||
```python
|
||||
import os
|
||||
|
||||
# Enable memory cache
|
||||
os.environ['DG_CACHE_BACKEND'] = 'memory'
|
||||
os.environ['DG_CACHE_MEMORY_SIZE_MB'] = '200' # Default: 100MB
|
||||
|
||||
from deltaglider import create_client
|
||||
client = create_client()
|
||||
```
|
||||
|
||||
**When to Use**:
|
||||
- High-performance requirements
|
||||
- Ephemeral environments (containers, Lambda)
|
||||
- Short-lived applications
|
||||
- CI/CD pipelines
|
||||
- When disk I/O is bottleneck
|
||||
|
||||
**Cache Clearing**:
|
||||
```python
|
||||
client.clear_cache()
|
||||
# Frees memory immediately
|
||||
# No disk I/O
|
||||
```
|
||||
|
||||
**LRU Eviction**:
|
||||
Memory cache automatically evicts least recently used entries when size limit is reached. No manual intervention needed.
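
To make the eviction behavior concrete, here is a minimal, hypothetical sketch of an LRU cache with a byte budget. It is not DeltaGlider's internal implementation; it only illustrates why the memory backend never grows past `DG_CACHE_MEMORY_SIZE_MB`:

```python
from collections import OrderedDict
from typing import Optional


class LruByteCache:
    """Toy LRU cache with a byte budget (illustration only, not DeltaGlider internals)."""

    def __init__(self, max_bytes: int) -> None:
        self.max_bytes = max_bytes
        self.current_bytes = 0
        self._entries = OrderedDict()  # key -> bytes, ordered from oldest to newest use

    def get(self, key: str) -> Optional[bytes]:
        if key not in self._entries:
            return None
        self._entries.move_to_end(key)  # mark as most recently used
        return self._entries[key]

    def put(self, key: str, value: bytes) -> None:
        if key in self._entries:
            self.current_bytes -= len(self._entries.pop(key))
        self._entries[key] = value
        self.current_bytes += len(value)
        # Evict least recently used entries until we are back under the budget
        while self.current_bytes > self.max_bytes and self._entries:
            _, evicted = self._entries.popitem(last=False)
            self.current_bytes -= len(evicted)


# Roughly mirrors DG_CACHE_MEMORY_SIZE_MB=100
cache = LruByteCache(max_bytes=100 * 1024 * 1024)
cache.put("s3://releases/v1.0.0/reference.bin", b"...reference bytes...")
```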
|
||||
|
||||
## Best Practices Summary
|
||||
|
||||
### ✅ DO
|
||||
|
||||
1. **Clear cache periodically** in long-running applications
|
||||
2. **Clear cache in `finally` blocks** for batch jobs
|
||||
3. **Use fixtures for tests** to ensure clean state
|
||||
4. **Monitor cache size** in production
|
||||
5. **Use ephemeral keys** when possible (maximum security)
|
||||
6. **Store persistent keys securely** (Secrets Manager, Vault)
|
||||
7. **Rotate encryption keys** regularly
|
||||
8. **Use memory cache** for ephemeral environments
|
||||
9. **Clear cache after config changes**
|
||||
10. **Document cache strategy** for your application
|
||||
|
||||
### ❌ DON'T
|
||||
|
||||
1. **Never let cache grow unbounded** in long-running apps
|
||||
2. **Don't share ephemeral encrypted cache** between processes
|
||||
3. **Don't store persistent keys in code** or version control
|
||||
4. **Don't forget to clear cache in tests**
|
||||
5. **Don't assume cache is automatically cleaned** in SDK usage
|
||||
6. **Don't use persistent keys** unless you need cross-process sharing
|
||||
7. **Don't skip key rotation**
|
||||
8. **Don't ignore memory limits** for memory cache
|
||||
|
||||
## Monitoring Cache Health
|
||||
|
||||
### Cache Size Tracking
|
||||
|
||||
```python
|
||||
import tempfile
from pathlib import Path

from deltaglider import create_client
|
||||
|
||||
def get_cache_size_mb(cache_dir: Path) -> float:
|
||||
"""Calculate total cache size in MB."""
|
||||
total_bytes = sum(f.stat().st_size for f in cache_dir.rglob('*') if f.is_file())
|
||||
return total_bytes / (1024 * 1024)
|
||||
|
||||
# Get cache directory (ephemeral, changes per process)
|
||||
cache_dir = Path(tempfile.gettempdir())
|
||||
deltaglider_caches = list(cache_dir.glob('deltaglider-*'))
|
||||
|
||||
if deltaglider_caches:
|
||||
cache_path = deltaglider_caches[0]
|
||||
cache_size_mb = get_cache_size_mb(cache_path)
|
||||
print(f"Cache size: {cache_size_mb:.2f} MB")
|
||||
|
||||
# Clear if over threshold
|
||||
if cache_size_mb > 500:
|
||||
client = create_client()
|
||||
client.clear_cache()
|
||||
print("Cache cleared - exceeded 500MB")
|
||||
```
|
||||
|
||||
### Memory Cache Monitoring
|
||||
|
||||
```python
|
||||
import os
|
||||
os.environ['DG_CACHE_BACKEND'] = 'memory'
|
||||
|
||||
from deltaglider import create_client
|
||||
|
||||
client = create_client()
|
||||
|
||||
# Memory cache auto-evicts at configured limit
|
||||
# Monitor via application memory usage tools (not direct API)
|
||||
|
||||
# Example with memory_profiler
|
||||
from memory_profiler import profile
|
||||
|
||||
@profile
|
||||
def upload_many_files():
|
||||
for i in range(1000):
|
||||
client.put_object(Bucket='test', Key=f'file{i}.zip', Body=b'data' * 1000)
|
||||
# Check memory profile to see cache memory usage
|
||||
|
||||
upload_many_files()
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Problem: Cache Growing Too Large
|
||||
|
||||
**Symptoms**:
|
||||
- Disk space running out (`/tmp` filling up)
|
||||
- High memory usage
|
||||
|
||||
**Solution**:
|
||||
```python
|
||||
# Implement automatic cache clearing
|
||||
import psutil
import schedule

from deltaglider import create_client

client = create_client()
|
||||
|
||||
def smart_cache_clear():
|
||||
"""Clear cache if memory/disk pressure detected."""
|
||||
# Check disk space
|
||||
disk = psutil.disk_usage('/tmp')
|
||||
if disk.percent > 80:
|
||||
client.clear_cache()
|
||||
print("Cache cleared - disk pressure")
|
||||
return
|
||||
|
||||
# Check memory usage
|
||||
memory = psutil.virtual_memory()
|
||||
if memory.percent > 80:
|
||||
client.clear_cache()
|
||||
print("Cache cleared - memory pressure")
|
||||
|
||||
# Call periodically
|
||||
schedule.every(15).minutes.do(smart_cache_clear)
|
||||
```
|
||||
|
||||
### Problem: Decryption Failures After Key Rotation
|
||||
|
||||
**Symptoms**:
|
||||
- `CacheCorruptionError: Decryption failed`
|
||||
- After rotating encryption keys
|
||||
|
||||
**Solution**:
|
||||
```python
|
||||
# Clear cache before using new key
|
||||
old_client = create_client() # Old key
|
||||
old_client.clear_cache()
|
||||
|
||||
# Update encryption key
|
||||
os.environ['DG_CACHE_ENCRYPTION_KEY'] = new_key
|
||||
|
||||
# Create new client
|
||||
new_client = create_client() # New key
|
||||
```
|
||||
|
||||
### Problem: Tests Failing Due to Cached Data
|
||||
|
||||
**Symptoms**:
|
||||
- Tests pass in isolation, fail when run together
|
||||
- Unexpected data in downloads
|
||||
|
||||
**Solution**:
|
||||
```python
|
||||
# Always clear cache in test teardown
|
||||
@pytest.fixture(autouse=True)
|
||||
def clear_cache_after_test():
|
||||
"""Automatically clear cache after every test."""
|
||||
yield
|
||||
# Teardown
|
||||
client = create_client()
|
||||
client.clear_cache()
|
||||
```
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [docs/sdk/getting-started.md](sdk/getting-started.md) - SDK configuration
|
||||
- [README.md](../README.md) - Docker and environment variables
|
||||
- [CLAUDE.md](../CLAUDE.md) - Development guide
|
||||
|
||||
## Quick Reference
|
||||
|
||||
```python
|
||||
import os

from deltaglider import create_client
|
||||
|
||||
# Create client
|
||||
client = create_client()
|
||||
|
||||
# Clear all cache
|
||||
client.clear_cache()
|
||||
|
||||
# Use memory cache
|
||||
os.environ['DG_CACHE_BACKEND'] = 'memory'
|
||||
|
||||
# Use persistent encryption key
|
||||
os.environ['DG_CACHE_ENCRYPTION_KEY'] = 'base64-encoded-key'
|
||||
```

docs/DG_MAX_RATIO.md (new file, 526 lines)
@@ -0,0 +1,526 @@
|
||||
# DG_MAX_RATIO - Delta Compression Efficiency Guard
|
||||
|
||||
## Overview
|
||||
|
||||
`DG_MAX_RATIO` is a **safety threshold** that prevents inefficient delta compression. It controls the maximum acceptable ratio of `delta_size / original_file_size`.
|
||||
|
||||
**Default**: `0.5` (50%)
|
||||
**Range**: `0.0` to `1.0`
|
||||
**Environment Variable**: `DG_MAX_RATIO`
|
||||
|
||||
## The Problem It Solves
|
||||
|
||||
When files are **too different**, xdelta3 can create a delta file that's **almost as large as the original file** (or even larger!). This defeats the purpose of compression and wastes:
|
||||
- ❌ Storage space (storing a large delta instead of the original)
|
||||
- ❌ CPU time (creating and applying the delta)
|
||||
- ❌ Network bandwidth (downloading delta + reference instead of just the file)
|
||||
|
||||
## How It Works
|
||||
|
||||
```
|
||||
1. Upload file → Create delta using xdelta3
|
||||
2. Calculate ratio = delta_size / original_file_size
|
||||
3. If ratio > DG_MAX_RATIO:
|
||||
❌ Discard delta, store original file directly
|
||||
Else:
|
||||
✅ Keep delta, save storage space
|
||||
```
|
||||
|
||||
### Example Flow
|
||||
|
||||
```
|
||||
Original file: 100MB
|
||||
Reference file: 100MB (uploaded previously)
|
||||
|
||||
xdelta3 creates delta: 60MB
|
||||
Ratio = 60MB / 100MB = 0.6 (60%)
|
||||
|
||||
With DG_MAX_RATIO=0.5:
|
||||
❌ Delta rejected (60% > 50%)
|
||||
Action: Store 100MB original file
|
||||
Reason: Delta not efficient enough
|
||||
|
||||
With DG_MAX_RATIO=0.7:
|
||||
✅ Delta accepted (60% ≤ 70%)
|
||||
Action: Store 60MB delta
|
||||
Savings: 40MB (40%)
|
||||
```
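
The same accept/reject decision can be sketched in a few lines of Python. This is only a model of the rule described above (sizes in bytes), not the library's internal code:

```python
def should_store_as_delta(delta_size: int, original_size: int, max_ratio: float = 0.5) -> bool:
    """Model of the DG_MAX_RATIO check: keep the delta only if it is small enough."""
    if original_size == 0:
        return False
    ratio = delta_size / original_size
    return ratio <= max_ratio


# 60 MB delta for a 100 MB file, as in the example flow above
print(should_store_as_delta(60_000_000, 100_000_000, max_ratio=0.5))  # False: 0.6 > 0.5
print(should_store_as_delta(60_000_000, 100_000_000, max_ratio=0.7))  # True: 0.6 <= 0.7
```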
|
||||
|
||||
## Default Value: 0.5 (50%)
|
||||
|
||||
**Meaning**: Only use delta compression if the delta is **≤50% of the original file size**
|
||||
|
||||
This default provides:
|
||||
- ✅ Minimum 50% storage savings when delta compression is used
|
||||
- ✅ Prevents wasting CPU on inefficient compression
|
||||
- ✅ Works well for typical versioned releases (minor updates between versions)
|
||||
- ✅ Balanced approach without manual tuning
|
||||
|
||||
## When to Adjust
|
||||
|
||||
### 🔽 Lower Value (More Conservative)
|
||||
|
||||
**Set `DG_MAX_RATIO=0.2-0.3` when:**
|
||||
- Files change significantly between versions (major updates, refactors)
|
||||
- Storage cost is **very high** (premium S3 tiers, small quotas)
|
||||
- You want to avoid **any** inefficient compression
|
||||
- You need guaranteed high-quality compression (≥70% savings)
|
||||
|
||||
**Example**:
|
||||
```bash
|
||||
export DG_MAX_RATIO=0.3
|
||||
|
||||
# Only accept deltas that are ≤30% of original size
|
||||
# More files stored directly, but guaranteed ≥70% savings when using deltas
|
||||
```
|
||||
|
||||
**Trade-off**:
|
||||
- ✅ Higher quality compression when deltas are used
|
||||
- ❌ Fewer files will use delta compression
|
||||
- ❌ More direct uploads (higher storage for dissimilar files)
|
||||
|
||||
### 🔼 Higher Value (More Permissive)
|
||||
|
||||
**Set `DG_MAX_RATIO=0.6-0.8` when:**
|
||||
- Files are very similar (minor patches, nightly builds, incremental changes)
|
||||
- Storage cost is **cheap** (large S3 buckets, unlimited quotas)
|
||||
- CPU time is **expensive** (you want to save re-uploading even with modest compression)
|
||||
- You want to maximize delta compression usage
|
||||
|
||||
**Example**:
|
||||
```bash
|
||||
export DG_MAX_RATIO=0.7
|
||||
|
||||
# Accept deltas up to 70% of original size
|
||||
# More files use delta compression, even with modest 30% savings
|
||||
```
|
||||
|
||||
**Trade-off**:
|
||||
- ✅ More files use delta compression
|
||||
- ✅ Save bandwidth even with modest compression
|
||||
- ❌ Some deltas may only save 20-30% space
|
||||
- ❌ More CPU time spent on marginal compressions
|
||||
|
||||
## Real-World Scenarios
|
||||
|
||||
### Scenario 1: Nightly Builds (Minimal Changes) ⭐ IDEAL
|
||||
|
||||
```
|
||||
my-app-v1.0.0.zip → 100MB (reference)
|
||||
my-app-v1.0.1.zip → 100MB (0.1% code change)
|
||||
|
||||
Delta: 200KB (0.2% of original)
|
||||
Ratio: 0.002
|
||||
|
||||
With ANY DG_MAX_RATIO: ✅ Use delta (99.8% savings!)
|
||||
Result: Store 200KB instead of 100MB
|
||||
```
|
||||
|
||||
**This is what DeltaGlider excels at!**
|
||||
|
||||
### Scenario 2: Major Version (Significant Changes)
|
||||
|
||||
```
|
||||
my-app-v1.0.0.zip → 100MB (reference)
|
||||
my-app-v2.0.0.zip → 100MB (complete rewrite, 85% different)
|
||||
|
||||
Delta: 85MB (85% of original)
|
||||
Ratio: 0.85
|
||||
|
||||
With DG_MAX_RATIO=0.5: ❌ Store original (85% > 50%)
|
||||
→ Stores 100MB directly
|
||||
→ No compression benefit, but no CPU waste
|
||||
|
||||
With DG_MAX_RATIO=0.9: ✅ Use delta (85% ≤ 90%)
|
||||
→ Stores 85MB delta
|
||||
→ Only 15% savings, questionable benefit
|
||||
```
|
||||
|
||||
**Recommendation**: For major versions, default `0.5` correctly rejects inefficient compression.
|
||||
|
||||
### Scenario 3: Different File Format (Same Content)
|
||||
|
||||
```
|
||||
my-app-v1.0.0.zip → 100MB (ZIP archive)
|
||||
my-app-v1.0.0.tar → 100MB (TAR archive, same content)
|
||||
|
||||
Delta: 70MB (completely different format structure)
|
||||
Ratio: 0.70
|
||||
|
||||
With DG_MAX_RATIO=0.5: ❌ Store original (70% > 50%)
|
||||
→ Stores 100MB directly
|
||||
→ Correct decision: formats too different
|
||||
|
||||
With DG_MAX_RATIO=0.8: ✅ Use delta (70% ≤ 80%)
|
||||
→ Stores 70MB delta
|
||||
→ 30% savings, but CPU-intensive
|
||||
```
|
||||
|
||||
**Recommendation**: Use consistent file formats for better compression. Default `0.5` correctly rejects cross-format compression.
|
||||
|
||||
### Scenario 4: Incremental Updates (Sweet Spot) ⭐
|
||||
|
||||
```
|
||||
my-app-v1.0.0.zip → 100MB (reference)
|
||||
my-app-v1.0.1.zip → 100MB (minor bugfix, 5% code change)
|
||||
|
||||
Delta: 5MB (5% of original)
|
||||
Ratio: 0.05
|
||||
|
||||
With ANY DG_MAX_RATIO: ✅ Use delta (95% savings!)
|
||||
Result: Store 5MB instead of 100MB
|
||||
```
|
||||
|
||||
**This is the target use case for delta compression!**
|
||||
|
||||
## How to Choose the Right Value
|
||||
|
||||
### Decision Tree
|
||||
|
||||
```
|
||||
Do your files have minimal changes between versions? (< 5% different)
|
||||
├─ YES → Use default 0.5 ✅
|
||||
│ Delta compression will work perfectly
|
||||
│
|
||||
└─ NO → Are your files significantly different? (> 50% different)
|
||||
├─ YES → Lower to 0.2-0.3 🔽
|
||||
│ Avoid wasting time on inefficient compression
|
||||
│
|
||||
└─ NO → Are they moderately different? (20-50% different)
|
||||
├─ Storage is expensive → Lower to 0.3 🔽
|
||||
│ Only high-quality compression
|
||||
│
|
||||
└─ Storage is cheap → Raise to 0.6-0.7 🔼
|
||||
Accept modest savings
|
||||
```
|
||||
|
||||
### Quick Reference Table
|
||||
|
||||
| File Similarity | Recommended DG_MAX_RATIO | Expected Behavior |
|
||||
|----------------|--------------------------|-------------------|
|
||||
| Nearly identical (< 5% change) | **0.5 (default)** | 🟢 95%+ savings |
|
||||
| Minor updates (5-20% change) | **0.5 (default)** | 🟢 80-95% savings |
|
||||
| Moderate changes (20-50% change) | **0.4-0.6** | 🟡 50-80% savings |
|
||||
| Major changes (50-80% change) | **0.3 or lower** | 🔴 Store directly |
|
||||
| Complete rewrites (> 80% change) | **0.3 or lower** | 🔴 Store directly |
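
If you want to encode the table above in code, a small helper like the following works. This is a sketch: the similarity estimate and thresholds are assumptions you would tune for your own artifacts, not a DeltaGlider API.

```python
def recommend_max_ratio(change_fraction: float) -> float:
    """Suggest a starting DG_MAX_RATIO from the quick-reference table above.

    change_fraction is a rough estimate of how different consecutive versions are
    (0.05 means about 5% of the content changed).
    """
    if change_fraction <= 0.20:
        return 0.5   # nearly identical or minor updates: keep the default
    if change_fraction <= 0.50:
        return 0.5   # moderate changes: stay inside the 0.4-0.6 band
    return 0.3       # major changes or rewrites: prefer direct storage


print(recommend_max_ratio(0.05))  # 0.5
print(recommend_max_ratio(0.65))  # 0.3
```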
|
||||
|
||||
### Use Cases by Industry
|
||||
|
||||
**Software Releases (SaaS)**:
|
||||
```bash
|
||||
export DG_MAX_RATIO=0.5 # Default
|
||||
# Nightly builds with minor changes compress perfectly
|
||||
```
|
||||
|
||||
**Mobile App Builds**:
|
||||
```bash
|
||||
export DG_MAX_RATIO=0.4 # Slightly conservative
|
||||
# iOS/Android builds can vary, want quality compression only
|
||||
```
|
||||
|
||||
**Database Backups**:
|
||||
```bash
|
||||
export DG_MAX_RATIO=0.7 # Permissive
|
||||
# Daily backups are very similar, accept modest savings
|
||||
```
|
||||
|
||||
**Document Archives**:
|
||||
```bash
|
||||
export DG_MAX_RATIO=0.6 # Moderate
|
||||
# Documents change incrementally, accept good savings
|
||||
```
|
||||
|
||||
**Video/Media Archives**:
|
||||
```bash
|
||||
export DG_MAX_RATIO=0.2 # Very conservative
|
||||
# Media files are unique, only compress if very similar
|
||||
```
|
||||
|
||||
## Configuration Examples
|
||||
|
||||
### Docker
|
||||
|
||||
**Conservative (Premium Storage)**:
|
||||
```bash
|
||||
docker run -e DG_MAX_RATIO=0.3 \
|
||||
-e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
|
||||
deltaglider/deltaglider cp file.zip s3://releases/
|
||||
```
|
||||
|
||||
**Default (Balanced)**:
|
||||
```bash
|
||||
docker run -e DG_MAX_RATIO=0.5 \
|
||||
-e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
|
||||
deltaglider/deltaglider cp file.zip s3://releases/
|
||||
```
|
||||
|
||||
**Permissive (Cheap Storage)**:
|
||||
```bash
|
||||
docker run -e DG_MAX_RATIO=0.7 \
|
||||
-e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
|
||||
deltaglider/deltaglider cp file.zip s3://releases/
|
||||
```
|
||||
|
||||
### Python SDK
|
||||
|
||||
```python
|
||||
import os
|
||||
|
||||
# Conservative (only high-quality compression)
|
||||
os.environ['DG_MAX_RATIO'] = '0.3'
|
||||
|
||||
# Default (balanced)
|
||||
os.environ['DG_MAX_RATIO'] = '0.5'
|
||||
|
||||
# Permissive (accept modest savings)
|
||||
os.environ['DG_MAX_RATIO'] = '0.7'
|
||||
|
||||
from deltaglider import create_client
|
||||
client = create_client()
|
||||
|
||||
summary = client.upload("file.zip", "s3://bucket/")
|
||||
print(f"Delta ratio: {summary.delta_ratio:.2f}")
|
||||
print(f"Used delta: {summary.is_delta}")
|
||||
```
|
||||
|
||||
### CLI
|
||||
|
||||
```bash
|
||||
# Conservative
|
||||
export DG_MAX_RATIO=0.3
|
||||
deltaglider cp my-app-v2.0.0.zip s3://releases/
|
||||
|
||||
# Default
|
||||
export DG_MAX_RATIO=0.5
|
||||
deltaglider cp my-app-v2.0.0.zip s3://releases/
|
||||
|
||||
# Permissive
|
||||
export DG_MAX_RATIO=0.7
|
||||
deltaglider cp my-app-v2.0.0.zip s3://releases/
|
||||
```
|
||||
|
||||
### Override Per-Upload (CLI)
|
||||
|
||||
```bash
|
||||
# Use custom ratio for specific file
|
||||
deltaglider cp large-file.zip s3://releases/ --max-ratio 0.3
|
||||
```
|
||||
|
||||
## Monitoring and Tuning
|
||||
|
||||
### Check Delta Ratios After Upload
|
||||
|
||||
```python
|
||||
from deltaglider import create_client
|
||||
|
||||
client = create_client()
|
||||
summary = client.upload("file.zip", "s3://bucket/")
|
||||
|
||||
print(f"Delta ratio: {summary.delta_ratio:.2f}")
|
||||
print(f"Savings: {summary.savings_percent:.0f}%")
|
||||
print(f"Used delta: {summary.is_delta}")
|
||||
|
||||
if summary.is_delta:
|
||||
print(f"✅ Used delta compression (ratio {summary.delta_ratio:.2f})")
|
||||
if summary.delta_ratio > 0.4:
|
||||
print(f"⚠️ Consider lowering DG_MAX_RATIO for better quality")
|
||||
else:
|
||||
print(f"❌ Stored directly (delta would have been ratio ~{summary.delta_ratio:.2f})")
|
||||
if summary.delta_ratio < 0.6:
|
||||
print(f"💡 Consider raising DG_MAX_RATIO to enable compression")
|
||||
```
|
||||
|
||||
### Batch Analysis
|
||||
|
||||
```python
|
||||
from deltaglider import create_client
|
||||
from pathlib import Path
|
||||
|
||||
client = create_client()
|
||||
ratios = []
|
||||
|
||||
for file in Path("releases").glob("*.zip"):
|
||||
summary = client.upload(str(file), f"s3://bucket/{file.name}")
|
||||
if summary.is_delta:
|
||||
ratios.append(summary.delta_ratio)
|
||||
|
||||
if ratios:
|
||||
avg_ratio = sum(ratios) / len(ratios)
|
||||
max_ratio = max(ratios)
|
||||
|
||||
print(f"Average delta ratio: {avg_ratio:.2f}")
|
||||
print(f"Maximum delta ratio: {max_ratio:.2f}")
|
||||
print(f"Files compressed: {len(ratios)}")
|
||||
|
||||
if avg_ratio < 0.2:
|
||||
print("💡 Consider raising DG_MAX_RATIO - you're getting excellent compression")
|
||||
elif max_ratio > 0.6:
|
||||
print("⚠️ Consider lowering DG_MAX_RATIO - some deltas are inefficient")
|
||||
```
|
||||
|
||||
### Optimization Strategy
|
||||
|
||||
1. **Start with default (0.5)**
|
||||
2. **Monitor delta ratios** for 1 week of uploads
|
||||
3. **Analyze results**:
|
||||
- If most ratios < 0.2: Consider raising to 0.6-0.7
|
||||
- If many ratios > 0.4: Consider lowering to 0.3-0.4
|
||||
- If ratios vary widely: Keep default 0.5
|
||||
4. **Adjust and re-test** for 1 week
|
||||
5. **Repeat until optimal** for your use case
|
||||
|
||||
## Advanced Usage
|
||||
|
||||
### Dynamic Ratio Based on File Type
|
||||
|
||||
```python
|
||||
import os
|
||||
from pathlib import Path
|
||||
from deltaglider import create_client
|
||||
|
||||
def get_optimal_ratio(file_path: str) -> float:
|
||||
"""Determine optimal ratio based on file type."""
|
||||
suffix = Path(file_path).suffix.lower()
|
||||
|
||||
# Very compressible (source code archives)
|
||||
if suffix in ['.zip', '.tar', '.gz']:
|
||||
return 0.6
|
||||
|
||||
# Moderately compressible (compiled binaries)
|
||||
elif suffix in ['.jar', '.war', '.deb', '.rpm']:
|
||||
return 0.5
|
||||
|
||||
# Rarely compressible (media, already compressed)
|
||||
elif suffix in ['.mp4', '.jpg', '.png']:
|
||||
return 0.2
|
||||
|
||||
# Default
|
||||
return 0.5
|
||||
|
||||
file = "my-app.zip"
|
||||
os.environ['DG_MAX_RATIO'] = str(get_optimal_ratio(file))
|
||||
|
||||
client = create_client()
|
||||
summary = client.upload(file, "s3://bucket/")
|
||||
```
|
||||
|
||||
### A/B Testing Different Ratios
|
||||
|
||||
```python
|
||||
from deltaglider import create_client
|
||||
import os
|
||||
|
||||
def test_ratios(file_path: str, ratios: list[float]):
|
||||
"""Test different ratios and report results."""
|
||||
results = []
|
||||
|
||||
for ratio in ratios:
|
||||
os.environ['DG_MAX_RATIO'] = str(ratio)
|
||||
client = create_client()
|
||||
|
||||
# Simulate upload (don't actually upload)
|
||||
summary = client.estimate_compression(file_path, "s3://bucket/test/")
|
||||
|
||||
results.append({
|
||||
'ratio_threshold': ratio,
|
||||
'would_use_delta': summary.delta_ratio <= ratio,
|
||||
'delta_ratio': summary.delta_ratio,
|
||||
'savings': summary.savings_percent if summary.delta_ratio <= ratio else 0
|
||||
})
|
||||
|
||||
return results
|
||||
|
||||
# Test different ratios
|
||||
file = "my-app-v2.0.0.zip"
|
||||
test_results = test_ratios(file, [0.3, 0.5, 0.7])
|
||||
|
||||
for result in test_results:
|
||||
print(f"Ratio {result['ratio_threshold']}: "
|
||||
f"Delta={result['would_use_delta']}, "
|
||||
f"Savings={result['savings']:.0f}%")
|
||||
```
|
||||
|
||||
## FAQ
|
||||
|
||||
### Q: What happens if I set DG_MAX_RATIO=1.0?
|
||||
|
||||
**A**: Delta compression will be accepted even when the delta is nearly as large as the original file. The safety check becomes meaningless: you pay the CPU cost of creating and applying deltas while saving essentially no storage.

**Example**:
```bash
export DG_MAX_RATIO=1.0

# File: 100MB, Delta: 99MB
# Ratio: 0.99
# With DG_MAX_RATIO=1.0: ✅ Delta accepted (0.99 ≤ 1.0)
# Savings: 1MB (1%), yet every download still needs the reference plus delta reconstruction

# Don't set DG_MAX_RATIO to 1.0 - it defeats the purpose of the threshold
```
|
||||
|
||||
### Q: What happens if I set DG_MAX_RATIO=0.0?
|
||||
|
||||
**A**: Delta compression will **never** be used. All files will be stored directly. This is equivalent to disabling DeltaGlider's compression entirely.
|
||||
|
||||
### Q: Can I disable the ratio check?
|
||||
|
||||
**A**: No, and you shouldn't want to. The ratio check is a critical safety feature that prevents wasting storage and CPU on inefficient compression.
|
||||
|
||||
### Q: Does DG_MAX_RATIO affect downloading?
|
||||
|
||||
**A**: No, `DG_MAX_RATIO` only affects **uploads**. During download, DeltaGlider automatically detects whether a file is stored as a delta or directly and handles reconstruction accordingly.
|
||||
|
||||
### Q: Can I set different ratios for different buckets?
|
||||
|
||||
**A**: Not directly via environment variables, but you can change `DG_MAX_RATIO` before each upload in your code:
|
||||
|
||||
```python
|
||||
import os
|
||||
from deltaglider import create_client
|
||||
|
||||
# High-quality compression for production releases
|
||||
os.environ['DG_MAX_RATIO'] = '0.3'
|
||||
client = create_client()
|
||||
client.upload("prod-release.zip", "s3://production/")
|
||||
|
||||
# Permissive compression for dev builds
|
||||
os.environ['DG_MAX_RATIO'] = '0.7'
|
||||
client = create_client()
|
||||
client.upload("dev-build.zip", "s3://development/")
|
||||
```
|
||||
|
||||
### Q: How do I know if my DG_MAX_RATIO is set correctly?
|
||||
|
||||
**A**: Monitor your upload summaries. If most deltas have ratios close to your threshold (e.g., 0.45-0.50 with default 0.5), you might want to lower it. If most deltas have very low ratios (e.g., < 0.2), you could raise it.
|
||||
|
||||
**Ideal scenario**: Most successful delta compressions have ratios < 0.3, and inefficient deltas (> 0.5) are correctly rejected.
|
||||
|
||||
## Summary
|
||||
|
||||
**`DG_MAX_RATIO` prevents wasting time and storage on inefficient delta compression.**
|
||||
|
||||
### Quick Takeaways
|
||||
|
||||
✅ **Default 0.5 works for 90% of use cases**
|
||||
✅ **Lower values (0.2-0.3) for dissimilar files or expensive storage**
|
||||
✅ **Higher values (0.6-0.7) for very similar files or cheap storage**
|
||||
✅ **Monitor delta ratios to tune for your use case**
|
||||
✅ **Never set to 1.0 or higher (defeats the purpose)**
|
||||
✅ **Never set to 0.0 (disables delta compression entirely)**
|
||||
|
||||
### Golden Rule
|
||||
|
||||
**If you're not sure, keep the default `0.5`.**
|
||||
|
||||
It's a sensible balance that:
|
||||
- Prevents inefficient compression (no deltas > 50% of original size)
|
||||
- Allows excellent savings on similar files (most deltas are < 20%)
|
||||
- Works well for typical versioned releases
|
||||
- Requires no manual tuning for most use cases
|
||||
|
||||
---
|
||||
|
||||
**Related Documentation**:
|
||||
- [CLAUDE.md](../CLAUDE.md) - Environment variables reference
|
||||
- [README.md](../README.md) - Docker usage and configuration
|
||||
- [docs/sdk/getting-started.md](sdk/getting-started.md) - SDK configuration guide

docs/sdk/api.md (174 changed lines)
@@ -156,7 +156,7 @@ for obj in response['Contents']:
|
||||
|
||||
#### `get_bucket_stats`
|
||||
|
||||
Get statistics for a bucket with optional detailed compression metrics.
|
||||
Get statistics for a bucket with optional detailed compression metrics. Results are cached per client session for performance.
|
||||
|
||||
```python
|
||||
def get_bucket_stats(
|
||||
@@ -173,16 +173,46 @@ def get_bucket_stats(
|
||||
- With `detailed_stats=False`: ~50ms for any bucket size (LIST calls only)
|
||||
- With `detailed_stats=True`: ~2-3s per 1000 objects (adds HEAD calls for delta files)
|
||||
|
||||
##### Caching Behavior
|
||||
|
||||
- **Session-scoped cache**: Results cached within client instance lifetime
|
||||
- **Automatic invalidation**: Cache cleared on bucket mutations (put, delete, bucket operations)
|
||||
- **Intelligent reuse**: Detailed stats can serve quick stat requests
|
||||
- **Manual cache control**: Use `clear_cache()` to invalidate all cached stats
|
||||
|
||||
##### Returns
|
||||
|
||||
`BucketStats`: Dataclass containing:
|
||||
- **bucket** (`str`): Bucket name
|
||||
- **object_count** (`int`): Total number of objects
|
||||
- **total_size** (`int`): Original size in bytes (before compression)
|
||||
- **compressed_size** (`int`): Actual stored size in bytes
|
||||
- **space_saved** (`int`): Bytes saved through compression
|
||||
- **average_compression_ratio** (`float`): Average compression ratio (0.0-1.0)
|
||||
- **delta_objects** (`int`): Number of delta-compressed objects
|
||||
- **direct_objects** (`int`): Number of directly stored objects
|
||||
|
||||
##### Examples
|
||||
|
||||
```python
|
||||
# Quick stats for dashboard display
|
||||
# Quick stats for dashboard display (cached after first call)
|
||||
stats = client.get_bucket_stats('releases')
|
||||
print(f"Objects: {stats.object_count}, Size: {stats.total_size}")
|
||||
|
||||
# Detailed stats for analytics (slower but accurate)
|
||||
# Second call hits cache (instant response)
|
||||
stats = client.get_bucket_stats('releases')
|
||||
print(f"Space saved: {stats.space_saved} bytes")
|
||||
|
||||
# Detailed stats for analytics (slower but accurate, also cached)
|
||||
stats = client.get_bucket_stats('releases', detailed_stats=True)
|
||||
print(f"Compression ratio: {stats.average_compression_ratio:.1%}")
|
||||
|
||||
# Quick call after detailed call reuses detailed cache (more accurate)
|
||||
quick_stats = client.get_bucket_stats('releases') # Uses detailed cache
|
||||
|
||||
# Clear cache to force refresh
|
||||
client.clear_cache()
|
||||
stats = client.get_bucket_stats('releases') # Fresh computation
|
||||
```
|
||||
|
||||
#### `put_object`
|
||||
@@ -304,7 +334,7 @@ client.delete_bucket(Bucket='old-releases')
|
||||
|
||||
#### `list_buckets`
|
||||
|
||||
List all S3 buckets (boto3-compatible).
|
||||
List all S3 buckets (boto3-compatible). Includes cached statistics when available.
|
||||
|
||||
```python
|
||||
def list_buckets(
|
||||
@@ -315,7 +345,32 @@ def list_buckets(
|
||||
|
||||
##### Returns
|
||||
|
||||
Dict with list of buckets and owner information (identical to boto3).
|
||||
Dict with list of buckets and owner information (identical to boto3). Each bucket may include optional `DeltaGliderStats` metadata if statistics have been previously cached.
|
||||
|
||||
##### Response Structure
|
||||
|
||||
```python
|
||||
{
|
||||
'Buckets': [
|
||||
{
|
||||
'Name': 'bucket-name',
|
||||
'CreationDate': datetime(2025, 1, 1),
|
||||
'DeltaGliderStats': { # Optional, only if cached
|
||||
'Cached': True,
|
||||
'Detailed': bool, # Whether detailed stats were fetched
|
||||
'ObjectCount': int,
|
||||
'TotalSize': int,
|
||||
'CompressedSize': int,
|
||||
'SpaceSaved': int,
|
||||
'AverageCompressionRatio': float,
|
||||
'DeltaObjects': int,
|
||||
'DirectObjects': int
|
||||
}
|
||||
}
|
||||
],
|
||||
'Owner': {...}
|
||||
}
|
||||
```
|
||||
|
||||
##### Examples
|
||||
|
||||
@@ -324,6 +379,17 @@ Dict with list of buckets and owner information (identical to boto3).
|
||||
response = client.list_buckets()
|
||||
for bucket in response['Buckets']:
|
||||
print(f"{bucket['Name']} - Created: {bucket['CreationDate']}")
|
||||
|
||||
# Check if stats are cached
|
||||
if 'DeltaGliderStats' in bucket:
|
||||
stats = bucket['DeltaGliderStats']
|
||||
print(f" Cached stats: {stats['ObjectCount']} objects, "
|
||||
f"{stats['AverageCompressionRatio']:.1%} compression")
|
||||
|
||||
# Fetch stats first, then list buckets to see cached data
|
||||
client.get_bucket_stats('my-bucket', detailed_stats=True)
|
||||
response = client.list_buckets()
|
||||
# Now 'my-bucket' will include DeltaGliderStats in response
|
||||
```
|
||||
|
||||
### Simple API Methods
|
||||
@@ -460,6 +526,104 @@ else:
|
||||
# Re-upload or investigate
|
||||
```
|
||||
|
||||
### Cache Management Methods
|
||||
|
||||
DeltaGlider maintains two types of caches for performance optimization:
|
||||
1. **Reference cache**: Binary reference files used for delta reconstruction
|
||||
2. **Statistics cache**: Bucket statistics (session-scoped)
|
||||
|
||||
#### `clear_cache`
|
||||
|
||||
Clear all cached data including reference files and bucket statistics.
|
||||
|
||||
```python
|
||||
def clear_cache(self) -> None
|
||||
```
|
||||
|
||||
##### Description
|
||||
|
||||
Removes all cached reference files from the local filesystem and invalidates all bucket statistics. Useful for:
|
||||
- Forcing fresh statistics computation
|
||||
- Freeing disk space in long-running applications
|
||||
- Ensuring latest data after external bucket modifications
|
||||
- Testing and development workflows
|
||||
|
||||
##### Cache Types Cleared
|
||||
|
||||
1. **Reference Cache**: Binary reference files stored in `/tmp/deltaglider-*/`
|
||||
- Encrypted at rest with ephemeral keys
|
||||
- Content-addressed storage (SHA256-based filenames)
|
||||
- Automatically cleaned up on process exit
|
||||
|
||||
2. **Statistics Cache**: Bucket statistics cached per client session
|
||||
- Metadata about compression ratios and object counts
|
||||
- Session-scoped (not persisted to disk)
|
||||
- Automatically invalidated on bucket mutations
|
||||
|
||||
##### Examples
|
||||
|
||||
```python
|
||||
# Long-running application
|
||||
client = create_client()
|
||||
|
||||
# Work with files
|
||||
for i in range(1000):
|
||||
client.upload(f"file_{i}.zip", "s3://bucket/")
|
||||
|
||||
# Periodic cache cleanup to prevent disk buildup
|
||||
if i % 100 == 0:
|
||||
client.clear_cache()
|
||||
|
||||
# Force fresh statistics after external changes
|
||||
stats_before = client.get_bucket_stats('releases') # Cached
|
||||
# ... external tool modifies bucket ...
|
||||
client.clear_cache()
|
||||
stats_after = client.get_bucket_stats('releases') # Fresh data
|
||||
|
||||
# Development workflow
|
||||
client.clear_cache() # Start with clean state
|
||||
```
|
||||
|
||||
#### `evict_cache`
|
||||
|
||||
Remove a specific cached reference file from the local cache.
|
||||
|
||||
```python
|
||||
def evict_cache(self, s3_url: str) -> None
|
||||
```
|
||||
|
||||
##### Parameters
|
||||
|
||||
- **s3_url** (`str`): S3 URL of the reference file to evict (e.g., `s3://bucket/prefix/reference.bin`)
|
||||
|
||||
##### Description
|
||||
|
||||
Removes a specific reference file from the cache without affecting other cached files or statistics. Useful for:
|
||||
- Selective cache invalidation when specific references are updated
|
||||
- Memory management in applications with many delta spaces
|
||||
- Testing specific delta compression scenarios
|
||||
|
||||
##### Examples
|
||||
|
||||
```python
|
||||
# Evict specific reference after update
|
||||
client.upload("new-reference.zip", "s3://releases/v2.0.0/")
|
||||
client.evict_cache("s3://releases/v2.0.0/reference.bin")
|
||||
|
||||
# Next upload will fetch fresh reference
|
||||
client.upload("similar-file.zip", "s3://releases/v2.0.0/")
|
||||
|
||||
# Selective eviction for specific delta spaces
|
||||
delta_spaces = ["v1.0.0", "v1.1.0", "v1.2.0"]
|
||||
for space in delta_spaces:
|
||||
client.evict_cache(f"s3://releases/{space}/reference.bin")
|
||||
```
|
||||
|
||||
##### See Also
|
||||
|
||||
- [docs/CACHE_MANAGEMENT.md](../CACHE_MANAGEMENT.md): Complete cache management guide
|
||||
- `clear_cache()`: Clear all caches
|
||||
|
||||
#### `lifecycle_policy`
|
||||
|
||||
Set lifecycle policy for S3 prefix (placeholder for future implementation).
|
||||
|
||||
@@ -5,15 +5,17 @@ Real-world examples and patterns for using DeltaGlider in production application

## Table of Contents

1. [Performance-Optimized Bucket Listing](#performance-optimized-bucket-listing)
2. [Bucket Management](#bucket-management)
3. [Software Release Management](#software-release-management)
4. [Database Backup System](#database-backup-system)
5. [CI/CD Pipeline Integration](#cicd-pipeline-integration)
6. [Container Registry Storage](#container-registry-storage)
7. [Machine Learning Model Versioning](#machine-learning-model-versioning)
8. [Game Asset Distribution](#game-asset-distribution)
9. [Log Archive Management](#log-archive-management)
10. [Multi-Region Replication](#multi-region-replication)
2. [Bucket Statistics and Monitoring](#bucket-statistics-and-monitoring)
3. [Session-Level Cache Management](#session-level-cache-management)
4. [Bucket Management](#bucket-management)
5. [Software Release Management](#software-release-management)
6. [Database Backup System](#database-backup-system)
7. [CI/CD Pipeline Integration](#cicd-pipeline-integration)
8. [Container Registry Storage](#container-registry-storage)
9. [Machine Learning Model Versioning](#machine-learning-model-versioning)
10. [Game Asset Distribution](#game-asset-distribution)
11. [Log Archive Management](#log-archive-management)
12. [Multi-Region Replication](#multi-region-replication)

## Performance-Optimized Bucket Listing
@@ -199,6 +201,322 @@ performance_comparison('releases')

2. **Never Fetch for Non-Deltas**: The SDK automatically skips metadata fetching for non-delta files even when `FetchMetadata=True`.

## Bucket Statistics and Monitoring

DeltaGlider provides bucket statistics with session-level caching, so repeated queries within a client session are served from memory instead of re-scanning the bucket.

### Quick Dashboard Stats (Cached)

```python
from deltaglider import create_client

client = create_client()

def show_bucket_dashboard(bucket: str):
    """Display real-time bucket statistics with caching."""

    # First call: computes stats (~50ms)
    stats = client.get_bucket_stats(bucket)

    # Second call: instant (cached)
    stats = client.get_bucket_stats(bucket)

    print(f"Dashboard for {stats.bucket}")
    print("=" * 60)
    print(f"Total Objects: {stats.object_count:,}")
    print(f"  Delta Objects: {stats.delta_objects:,}")
    print(f"  Direct Objects: {stats.direct_objects:,}")
    print()
    print(f"Original Size: {stats.total_size / (1024**3):.2f} GB")
    print(f"Stored Size: {stats.compressed_size / (1024**3):.2f} GB")
    print(f"Space Saved: {stats.space_saved / (1024**3):.2f} GB")
    print(f"Compression Ratio: {stats.average_compression_ratio:.1%}")

# Example: Show stats for multiple buckets (each cached separately)
for bucket_name in ['releases', 'backups', 'archives']:
    show_bucket_dashboard(bucket_name)
```
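The session cache also invalidates itself when a bucket is mutated through the client, so dashboards pick up changes without manual cache management. A minimal sketch of that behavior (bucket and object names are illustrative; the invalidation-on-mutation behavior is the one described in the changelog and wired into `put_object`):

```python
from deltaglider import create_client

client = create_client()

stats = client.get_bucket_stats("releases")   # computed and cached for this session
stats = client.get_bucket_stats("releases")   # served from the session cache

# Any mutation through this client (put/delete/bucket operations) drops the
# cached entry for the affected bucket...
client.put_object(Bucket="releases", Key="v1.2.3/app.zip", Body=b"...")

# ...so the next call recomputes statistics that include the new object.
stats = client.get_bucket_stats("releases")
```

If an external tool writes to the bucket outside this client, the session cache cannot see it; use `clear_cache()` as shown in the Session-Level Cache Management section below.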
### Detailed Compression Analysis

```python
def detailed_compression_report(bucket: str):
    """Generate detailed compression report with accurate ratios."""

    # Detailed stats fetch metadata for delta files (slower, accurate)
    stats = client.get_bucket_stats(bucket, detailed_stats=True)

    efficiency = (stats.space_saved / stats.total_size * 100) if stats.total_size > 0 else 0

    print(f"Detailed Compression Report: {stats.bucket}")
    print("=" * 60)
    print("Object Distribution:")
    print(f"  Total: {stats.object_count:,}")
    print(f"  Delta-Compressed: {stats.delta_objects:,} ({stats.delta_objects/stats.object_count*100:.1f}%)")
    print(f"  Direct Storage: {stats.direct_objects:,} ({stats.direct_objects/stats.object_count*100:.1f}%)")
    print()
    print("Storage Efficiency:")
    print(f"  Original Data: {stats.total_size / (1024**3):.2f} GB")
    print(f"  Actual Storage: {stats.compressed_size / (1024**3):.2f} GB")
    print(f"  Space Saved: {stats.space_saved / (1024**3):.2f} GB")
    print(f"  Efficiency: {efficiency:.1f}%")
    print(f"  Avg Compression: {stats.average_compression_ratio:.2%}")

    # Calculate estimated monthly costs (example: $0.023/GB S3 Standard)
    cost_without = stats.total_size / (1024**3) * 0.023
    cost_with = stats.compressed_size / (1024**3) * 0.023
    monthly_savings = cost_without - cost_with

    print()
    print("Estimated Monthly S3 Costs ($0.023/GB):")
    print(f"  Without DeltaGlider: ${cost_without:.2f}")
    print(f"  With DeltaGlider: ${cost_with:.2f}")
    print(f"  Monthly Savings: ${monthly_savings:.2f}")

# Example: Detailed report
detailed_compression_report('releases')
```
### List Buckets with Cached Stats

```python
def list_buckets_with_stats():
    """List all buckets and show cached statistics if available."""

    # Pre-fetch stats for important buckets
    important_buckets = ['releases', 'backups']
    for bucket_name in important_buckets:
        client.get_bucket_stats(bucket_name, detailed_stats=True)

    # List all buckets (includes cached stats automatically)
    response = client.list_buckets()

    print("All Buckets:")
    print(f"{'Name':<30} {'Objects':<10} {'Compression':<15} {'Cached'}")
    print("=" * 70)

    for bucket in response['Buckets']:
        name = bucket['Name']

        # Check if stats are cached
        if 'DeltaGliderStats' in bucket:
            stats = bucket['DeltaGliderStats']
            obj_count = f"{stats['ObjectCount']:,}"
            compression = f"{stats['AverageCompressionRatio']:.1%}"
            cached = "✓ (detailed)" if stats['Detailed'] else "✓ (quick)"
        else:
            obj_count = "N/A"
            compression = "N/A"
            cached = "✗"

        print(f"{name:<30} {obj_count:<10} {compression:<15} {cached}")

# Example: List with stats
list_buckets_with_stats()
```
### Monitoring Dashboard (Real-Time)

```python
import time

def monitoring_dashboard(buckets: list[str], refresh_seconds: int = 60):
    """Real-time monitoring dashboard with periodic refresh."""

    while True:
        print("\033[2J\033[H")  # Clear screen
        print(f"DeltaGlider Monitoring Dashboard - {time.strftime('%Y-%m-%d %H:%M:%S')}")
        print("=" * 80)

        for bucket_name in buckets:
            # Get cached stats (instant) or compute fresh
            stats = client.get_bucket_stats(bucket_name)

            print(f"\n{bucket_name}:")
            print(f"  Objects: {stats.object_count:,} | "
                  f"Delta: {stats.delta_objects:,} | "
                  f"Direct: {stats.direct_objects:,}")
            print(f"  Size: {stats.compressed_size/(1024**3):.2f} GB | "
                  f"Saved: {stats.space_saved/(1024**3):.2f} GB | "
                  f"Compression: {stats.average_compression_ratio:.1%}")

        print(f"\n{'=' * 80}")
        print(f"Refreshing in {refresh_seconds} seconds... (Ctrl+C to exit)")

        time.sleep(refresh_seconds)

        # Clear cache for fresh data on next iteration
        client.clear_cache()

# Example: Monitor key buckets
try:
    monitoring_dashboard(['releases', 'backups', 'archives'], refresh_seconds=30)
except KeyboardInterrupt:
    print("\nMonitoring stopped.")
```
## Session-Level Cache Management

DeltaGlider maintains session-level caches (reference files and bucket statistics) so that long-running applications avoid repeatedly re-downloading references and recomputing statistics.

### Long-Running Application Pattern

```python
from deltaglider import create_client
import time

def long_running_upload_service():
    """Upload service with periodic cache cleanup."""

    client = create_client()
    processed_count = 0

    while True:
        # Simulate file processing
        files_to_upload = get_pending_files()  # Your file queue

        for file_path in files_to_upload:
            try:
                summary = client.upload(file_path, "s3://releases/")
                processed_count += 1

                print(f"Uploaded {file_path}: {summary.savings_percent:.0f}% saved")

                # Periodic cache cleanup (every 100 files)
                if processed_count % 100 == 0:
                    client.clear_cache()
                    print(f"Cache cleared after {processed_count} files")

            except Exception as e:
                print(f"Error uploading {file_path}: {e}")

        time.sleep(60)  # Check for new files every minute

# Example: Run upload service
# long_running_upload_service()
```
### Cache Invalidation After External Changes

```python
def handle_external_bucket_changes(bucket: str):
    """Refresh statistics after external tools modify bucket."""

    # Get initial stats (cached)
    stats_before = client.get_bucket_stats(bucket)
    print(f"Before: {stats_before.object_count} objects")

    # External process modifies bucket
    print("External backup tool running...")
    run_external_backup_tool(bucket)  # Your external tool

    # Clear cache to get fresh data
    client.clear_cache()

    # Get updated stats
    stats_after = client.get_bucket_stats(bucket)
    print(f"After: {stats_after.object_count} objects")
    print(f"Added: {stats_after.object_count - stats_before.object_count} objects")

# Example usage
handle_external_bucket_changes('backups')
```
### Selective Cache Eviction

```python
def selective_cache_management():
    """Manage cache for specific delta spaces."""

    client = create_client()

    # Upload to multiple delta spaces
    versions = ['v1.0.0', 'v1.1.0', 'v1.2.0']

    for version in versions:
        client.upload(f"app-{version}.zip", f"s3://releases/{version}/")

    # Update reference for specific version
    print("Updating v1.1.0 reference...")
    client.upload("new-reference.zip", "s3://releases/v1.1.0/")

    # Evict only v1.1.0 cache (others remain cached)
    client.evict_cache("s3://releases/v1.1.0/reference.bin")

    # Next upload to v1.1.0 fetches fresh reference
    # v1.0.0 and v1.2.0 still use cached references
    client.upload("similar-file.zip", "s3://releases/v1.1.0/")

# Example: Selective eviction
selective_cache_management()
```
### Testing with Clean Cache

```python
import pytest
from deltaglider import create_client

def test_upload_workflow():
    """Test with clean cache state."""

    client = create_client()
    client.clear_cache()  # Start with clean state

    # Test first upload (no reference exists)
    summary1 = client.upload("file1.zip", "s3://test-bucket/prefix/")
    assert not summary1.is_delta  # First file is reference

    # Test subsequent upload (uses cached reference)
    summary2 = client.upload("file2.zip", "s3://test-bucket/prefix/")
    assert summary2.is_delta  # Should use delta

    # Clear and test again
    client.clear_cache()
    summary3 = client.upload("file3.zip", "s3://test-bucket/prefix/")
    assert summary3.is_delta  # Still delta (reference in S3)

# Run test
# test_upload_workflow()
```
### Cache Performance Monitoring

```python
import time

def measure_cache_performance(bucket: str):
    """Measure performance impact of caching."""

    client = create_client()

    # Test 1: Cold cache
    client.clear_cache()
    start = time.time()
    stats1 = client.get_bucket_stats(bucket, detailed_stats=True)
    cold_time = (time.time() - start) * 1000

    # Test 2: Warm cache
    start = time.time()
    stats2 = client.get_bucket_stats(bucket, detailed_stats=True)
    warm_time = (time.time() - start) * 1000

    # Test 3: Quick stats from detailed cache
    start = time.time()
    stats3 = client.get_bucket_stats(bucket, detailed_stats=False)
    reuse_time = (time.time() - start) * 1000

    print(f"Cache Performance for {bucket}:")
    print(f"  Cold Cache (detailed): {cold_time:.0f}ms")
    print(f"  Warm Cache (detailed): {warm_time:.0f}ms")
    print(f"  Cache Reuse (quick): {reuse_time:.0f}ms")
    print(f"  Speedup (detailed): {cold_time/warm_time:.1f}x")
    print(f"  Speedup (reuse): {cold_time/reuse_time:.1f}x")

# Example: Measure cache performance
measure_cache_performance('releases')
```

3. **Use Pagination**: For large buckets, use `MaxKeys` and `ContinuationToken` to paginate results (see the sketch below).

4. **Cache Results**: If you need metadata frequently, consider caching the results to avoid repeated HEAD requests.
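Tip 3 in practice: a minimal pagination sketch. The method and field names below (`list_objects_v2`, `MaxKeys`, `ContinuationToken`, `IsTruncated`, `NextContinuationToken`) are assumed to mirror boto3's S3 client; check the actual DeltaGlider listing API before relying on them.

```python
def iter_objects(bucket: str, page_size: int = 1000):
    """Yield all objects in a bucket one page at a time (assumes a boto3-style listing API)."""
    token = None
    while True:
        kwargs = {"Bucket": bucket, "MaxKeys": page_size}
        if token:
            kwargs["ContinuationToken"] = token
        page = client.list_objects_v2(**kwargs)  # assumed boto3-compatible call
        yield from page.get("Contents", [])
        if not page.get("IsTruncated"):
            break
        token = page.get("NextContinuationToken")

# Example: count objects without holding the full listing in memory
total = sum(1 for _ in iter_objects("releases"))
```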
@@ -76,6 +76,10 @@ DeltaGlider supports the following environment variables:

**Logging & Performance**:
- `DG_LOG_LEVEL`: Logging level (default: `INFO`, options: `DEBUG`, `INFO`, `WARNING`, `ERROR`)
- `DG_MAX_RATIO`: Maximum delta/file ratio (default: `0.5`, range: `0.0-1.0`)
  - **See [DG_MAX_RATIO.md](../DG_MAX_RATIO.md) for complete tuning guide**
  - Controls when to use delta compression vs. direct storage
  - Lower (0.2-0.3) = conservative, only high-quality compression
  - Higher (0.6-0.7) = permissive, accept modest savings

**Cache Configuration**:
- `DG_CACHE_BACKEND`: Cache backend type (default: `filesystem`, options: `filesystem`, `memory`)
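For quick experimentation, these are ordinary environment variables; the values below are illustrative, not recommendations:

```bash
# Verbose logging plus a conservative delta threshold (illustrative values)
export DG_LOG_LEVEL=DEBUG
export DG_MAX_RATIO=0.3        # only keep deltas smaller than 30% of the original file
export DG_CACHE_BACKEND=memory # keep cached references in RAM instead of on disk

deltaglider cp release-v2.zip s3://releases/v2.0.0/
```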
@@ -242,5 +242,29 @@ class ContentAddressedCache(CachePort):

        # NOTE: We don't delete the actual CAS file because:
        # - Other deltaspaces may reference the same SHA

    def clear(self) -> None:
        """Clear all cached references.

        Removes all cached files and mappings. This is a destructive operation
        that forcibly removes the entire cache directory.

        Use cases:
        - Long-running applications that need to free disk space
        - Manual cache invalidation
        - Test cleanup
        - Ensuring fresh data fetch after configuration changes
        """
        import shutil

        # Clear in-memory mapping
        self._deltaspace_to_sha.clear()

        # Remove all cache files (destructive!)
        if self.base_dir.exists():
            shutil.rmtree(self.base_dir, ignore_errors=True)

        # Recreate base directory with secure permissions
        self.base_dir.mkdir(parents=True, mode=0o700, exist_ok=True)
        # - The ephemeral cache will be cleaned on process exit anyway
        # - For persistent cache (future), we'd need reference counting
@@ -281,3 +281,25 @@ class EncryptedCache(CachePort):

        # Evict from backend
        self.backend.evict(bucket, prefix)

    def clear(self) -> None:
        """Clear all cached references and encryption mappings.

        Removes all cached data and clears encryption key mappings.
        This is the proper way to forcibly clean up cache in long-running
        applications.

        Use cases:
        - Long-running applications needing to free resources
        - Manual cache invalidation after key rotation
        - Test cleanup
        - Memory pressure situations

        Note: After clearing, the cache will use a fresh encryption key
        (ephemeral mode) or the same persistent key (if DG_CACHE_ENCRYPTION_KEY set).
        """
        # Clear encryption mapping
        self._plaintext_sha_map.clear()

        # Delegate to backend to clear actual files/memory
        self.backend.clear()
@@ -640,6 +640,84 @@ def verify(service: DeltaService, s3_url: str) -> None:
        sys.exit(1)


@cli.command()
@click.argument("bucket")
@click.option("--detailed", is_flag=True, help="Fetch detailed compression metrics (slower)")
@click.option("--json", "output_json", is_flag=True, help="Output in JSON format")
@click.pass_obj
def stats(service: DeltaService, bucket: str, detailed: bool, output_json: bool) -> None:
    """Get bucket statistics and compression metrics.

    BUCKET can be specified as:
    - s3://bucket-name/
    - s3://bucket-name
    - bucket-name
    """
    from ...client import DeltaGliderClient

    try:
        # Parse bucket from S3 URL if needed
        if bucket.startswith("s3://"):
            # Remove s3:// prefix and any trailing slashes
            bucket = bucket[5:].rstrip("/")
            # Extract just the bucket name (first path component)
            bucket = bucket.split("/")[0] if "/" in bucket else bucket

        if not bucket:
            click.echo("Error: Invalid bucket name", err=True)
            sys.exit(1)

        # Create client from service
        client = DeltaGliderClient(service=service)

        # Get bucket stats
        bucket_stats = client.get_bucket_stats(bucket, detailed_stats=detailed)

        if output_json:
            # JSON output
            output = {
                "bucket": bucket_stats.bucket,
                "object_count": bucket_stats.object_count,
                "total_size": bucket_stats.total_size,
                "compressed_size": bucket_stats.compressed_size,
                "space_saved": bucket_stats.space_saved,
                "average_compression_ratio": bucket_stats.average_compression_ratio,
                "delta_objects": bucket_stats.delta_objects,
                "direct_objects": bucket_stats.direct_objects,
            }
            click.echo(json.dumps(output, indent=2))
        else:
            # Human-readable output
            def format_bytes(size: float) -> str:
                """Format bytes to human-readable size."""
                for unit in ["B", "KB", "MB", "GB", "TB"]:
                    if size < 1024.0:
                        return f"{size:.2f} {unit}"
                    size /= 1024.0
                return f"{size:.2f} PB"

            click.echo(f"Bucket Statistics: {bucket_stats.bucket}")
            click.echo(f"{'=' * 60}")
            click.echo(f"Total Objects: {bucket_stats.object_count:,}")
            click.echo(f"  Delta Objects: {bucket_stats.delta_objects:,}")
            click.echo(f"  Direct Objects: {bucket_stats.direct_objects:,}")
            click.echo("")
            click.echo(
                f"Original Size: {format_bytes(bucket_stats.total_size)} ({bucket_stats.total_size:,} bytes)"
            )
            click.echo(
                f"Compressed Size: {format_bytes(bucket_stats.compressed_size)} ({bucket_stats.compressed_size:,} bytes)"
            )
            click.echo(
                f"Space Saved: {format_bytes(bucket_stats.space_saved)} ({bucket_stats.space_saved:,} bytes)"
            )
            click.echo(f"Compression Ratio: {bucket_stats.average_compression_ratio:.1%}")

    except Exception as e:
        click.echo(f"Error: {e}", err=True)
        sys.exit(1)


def main() -> None:
    """Main entry point."""
    cli()
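Typical invocations of the command defined above, for reference (bucket names are illustrative; `jq` is only used to pick a field out of the JSON output):

```bash
deltaglider stats releases                    # quick stats, cached per session
deltaglider stats s3://releases/ --detailed   # fetch delta metadata for accurate ratios
deltaglider stats releases --json | jq '.average_compression_ratio'
```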
@@ -63,6 +63,52 @@ class DeltaGliderClient:
        self.service = service
        self.endpoint_url = endpoint_url
        self._multipart_uploads: dict[str, Any] = {}  # Track multipart uploads
        # Session-scoped bucket statistics cache (cleared with the client lifecycle)
        self._bucket_stats_cache: dict[str, dict[bool, BucketStats]] = {}

    # -------------------------------------------------------------------------
    # Internal helpers
    # -------------------------------------------------------------------------

    def _invalidate_bucket_stats_cache(self, bucket: str | None = None) -> None:
        """Invalidate cached bucket statistics."""
        if bucket is None:
            self._bucket_stats_cache.clear()
        else:
            self._bucket_stats_cache.pop(bucket, None)

    def _store_bucket_stats_cache(
        self,
        bucket: str,
        detailed_stats: bool,
        stats: BucketStats,
    ) -> None:
        """Store bucket statistics in the session cache."""
        bucket_cache = self._bucket_stats_cache.setdefault(bucket, {})
        bucket_cache[detailed_stats] = stats
        # Detailed stats are a superset of quick stats; reuse them for quick calls.
        if detailed_stats:
            bucket_cache[False] = stats

    def _get_cached_bucket_stats(self, bucket: str, detailed_stats: bool) -> BucketStats | None:
        """Retrieve cached stats for a bucket, preferring detailed metrics when available."""
        bucket_cache = self._bucket_stats_cache.get(bucket)
        if not bucket_cache:
            return None
        if detailed_stats:
            return bucket_cache.get(True)
        return bucket_cache.get(False) or bucket_cache.get(True)

    def _get_cached_bucket_stats_for_listing(self, bucket: str) -> tuple[BucketStats | None, bool]:
        """Return best cached stats for bucket listings."""
        bucket_cache = self._bucket_stats_cache.get(bucket)
        if not bucket_cache:
            return (None, False)
        if True in bucket_cache:
            return (bucket_cache[True], True)
        if False in bucket_cache:
            return (bucket_cache[False], False)
        return (None, False)

    # ============================================================================
    # Boto3-compatible APIs (matches S3 client interface)
@@ -171,13 +217,15 @@ class DeltaGliderClient:
            }

            # Return as dict[str, Any] for public API (TypedDict is a dict at runtime!)
            return cast(
            response = cast(
                dict[str, Any],
                build_put_response(
                    etag=f'"{sha256_hash}"',
                    deltaglider_info=deltaglider_info,
                ),
            )
            self._invalidate_bucket_stats_cache(Bucket)
            return response
        finally:
            # Clean up temp file
            if tmp_path.exists():
@@ -418,7 +466,7 @@ class DeltaGliderClient:
            deltaglider_info["DependentDeltas"] = dependent_deltas

        # Return as dict[str, Any] for public API (TypedDict is a dict at runtime!)
        return cast(
        response = cast(
            dict[str, Any],
            build_delete_response(
                delete_marker=False,
@@ -426,6 +474,8 @@ class DeltaGliderClient:
                deltaglider_info=deltaglider_info,
            ),
        )
        self._invalidate_bucket_stats_cache(Bucket)
        return response

    def delete_objects(
        self,
@@ -502,6 +552,7 @@ class DeltaGliderClient:
        }

        response["ResponseMetadata"] = {"HTTPStatusCode": 200}
        self._invalidate_bucket_stats_cache(Bucket)
        return response

    def delete_objects_recursive(
@@ -627,6 +678,7 @@ class DeltaGliderClient:
        if single_details:
            response["DeltaGliderInfo"]["SingleDeletes"] = single_details  # type: ignore[index]

        self._invalidate_bucket_stats_cache(Bucket)
        return response

    def head_object(
@@ -703,7 +755,7 @@ class DeltaGliderClient:
        is_delta = summary.delta_size is not None
        stored_size = summary.delta_size if is_delta else summary.file_size

        return UploadSummary(
        upload_summary = UploadSummary(
            operation=summary.operation,
            bucket=summary.bucket,
            key=summary.key,
@@ -712,6 +764,8 @@ class DeltaGliderClient:
            is_delta=is_delta,
            delta_ratio=summary.delta_ratio or 0.0,
        )
        self._invalidate_bucket_stats_cache(bucket)
        return upload_summary

    def download(self, s3_url: str, output_path: str | Path) -> None:
        """Download and reconstruct a file from S3.
@@ -938,7 +992,12 @@ class DeltaGliderClient:
            stats = client.get_bucket_stats('releases', detailed_stats=True)
            print(f"Compression ratio: {stats.average_compression_ratio:.1%}")
        """
        cached = self._get_cached_bucket_stats(bucket, detailed_stats)
        if cached:
            return cached

        result: BucketStats = _get_bucket_stats(self, bucket, detailed_stats)
        self._store_bucket_stats_cache(bucket, detailed_stats, result)
        return result

    def generate_presigned_url(
@@ -1010,7 +1069,9 @@ class DeltaGliderClient:
        ...     CreateBucketConfiguration={'LocationConstraint': 'us-west-2'}
        ... )
        """
        return _create_bucket(self, Bucket, CreateBucketConfiguration, **kwargs)
        response = _create_bucket(self, Bucket, CreateBucketConfiguration, **kwargs)
        self._invalidate_bucket_stats_cache(Bucket)
        return response

    def delete_bucket(
        self,
@@ -1032,7 +1093,9 @@ class DeltaGliderClient:
        >>> client = create_client()
        >>> client.delete_bucket(Bucket='my-bucket')
        """
        return _delete_bucket(self, Bucket, **kwargs)
        response = _delete_bucket(self, Bucket, **kwargs)
        self._invalidate_bucket_stats_cache(Bucket)
        return response

    def list_buckets(self, **kwargs: Any) -> dict[str, Any]:
        """List all S3 buckets (boto3-compatible).
@@ -1061,6 +1124,87 @@ class DeltaGliderClient:
                tags[key] = value
        return tags

    # ============================================================================
    # Cache Management APIs (DeltaGlider Extensions)
    # ============================================================================

    def clear_cache(self) -> None:
        """Clear all cached reference files.

        Forcibly removes all cached data from memory or disk. This is essential for
        long-running applications that need to:
        - Free memory/disk space
        - Invalidate cache after configuration changes
        - Ensure fresh data fetch from S3
        - Clean up after tests

        **Important for Long-Running Applications**:
        Unlike the CLI which cleans up cache on exit, programmatic SDK usage
        requires manual cache management. Call this method periodically or when:
        - Application runs for extended periods (hours/days)
        - Memory usage is high
        - Configuration changes (endpoint, credentials, encryption key)
        - Testing scenarios requiring clean state

        **Effects**:
        - Filesystem cache: Removes all files from cache directory
        - Memory cache: Clears all in-memory data
        - Encrypted cache: Clears encryption key mappings
        - Next upload will re-fetch reference from S3

        **Example - Long-Running Service**:
        ```python
        from deltaglider import create_client
        import schedule
        import time

        client = create_client()

        def upload_task():
            client.put_object(Bucket='releases', Key='app.zip', Body=open('app.zip', 'rb'))

        def cleanup_task():
            client.clear_cache()  # Free resources every hour
            print("Cache cleared")

        # Upload every 10 minutes
        schedule.every(10).minutes.do(upload_task)

        # Clear cache every hour
        schedule.every().hour.do(cleanup_task)

        while True:
            schedule.run_pending()
            time.sleep(1)
        ```
        **Example - Test Cleanup**:
        ```python
        def test_upload():
            client = create_client()
            try:
                client.put_object(Bucket='test', Key='file.zip', Body=b'data')
            finally:
                client.clear_cache()  # Ensure clean state for next test
        ```

        **Example - After Configuration Change**:
        ```python
        client = create_client(endpoint_url='http://minio1:9000')
        client.put_object(Bucket='bucket', Key='file.zip', Body=b'data')

        # Switch to different endpoint
        client.clear_cache()  # Clear cache from old endpoint
        client = create_client(endpoint_url='http://minio2:9000')
        ```

        See Also:
        - `evict_cache()`: Remove specific cached reference
        - docs/CACHE_MANAGEMENT.md: Complete cache management guide
        """
        self._invalidate_bucket_stats_cache()
        self.service.cache.clear()


def create_client(
    endpoint_url: str | None = None,
@@ -138,10 +138,32 @@ def list_buckets(
    # Check if storage adapter has boto3 client
    if hasattr(storage_adapter, "client"):
        try:
            response = storage_adapter.client.list_buckets()
            raw_response = storage_adapter.client.list_buckets()

            buckets: list[dict[str, Any]] = []
            for bucket_entry in raw_response.get("Buckets", []):
                bucket_data = dict(bucket_entry)
                name = bucket_data.get("Name")
                if isinstance(name, str) and name:
                    cached_stats, detailed = client._get_cached_bucket_stats_for_listing(name)
                    if cached_stats is not None:
                        bucket_data["DeltaGliderStats"] = {
                            "Cached": True,
                            "Detailed": detailed,
                            "ObjectCount": cached_stats.object_count,
                            "TotalSize": cached_stats.total_size,
                            "CompressedSize": cached_stats.compressed_size,
                            "SpaceSaved": cached_stats.space_saved,
                            "AverageCompressionRatio": cached_stats.average_compression_ratio,
                            "DeltaObjects": cached_stats.delta_objects,
                            "DirectObjects": cached_stats.direct_objects,
                        }

                buckets.append(bucket_data)

            return {
                "Buckets": response.get("Buckets", []),
                "Owner": response.get("Owner", {}),
                "Buckets": buckets,
                "Owner": raw_response.get("Owner", {}),
                "ResponseMetadata": {
                    "HTTPStatusCode": 200,
                },
@@ -42,3 +42,18 @@ class CachePort(Protocol):
    def evict(self, bucket: str, prefix: str) -> None:
        """Remove cached reference."""
        ...

    def clear(self) -> None:
        """Clear all cached references.

        This method forcibly removes all cached data, useful for:
        - Long-running applications that need to free memory
        - Test cleanup
        - Manual cache invalidation
        - Ensuring fresh data fetch

        Note: For filesystem caches, this removes all files in the cache directory.
        For memory caches, this clears all in-memory data.
        For encrypted caches, this also clears encryption key mappings.
        """
        ...
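For illustration only, a toy in-memory object that satisfies the two methods visible in this hunk; the real `CachePort` protocol includes further methods that are outside this diff, so this is a sketch rather than the project's adapter:

```python
from pathlib import Path


class ToyReferenceCache:
    """Minimal in-memory cache keyed by (bucket, prefix) -> local reference path."""

    def __init__(self) -> None:
        self._refs: dict[tuple[str, str], Path] = {}

    def evict(self, bucket: str, prefix: str) -> None:
        """Remove cached reference."""
        self._refs.pop((bucket, prefix), None)

    def clear(self) -> None:
        """Clear all cached references."""
        self._refs.clear()
```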
@@ -1,11 +1,13 @@
"""Tests for bucket management APIs."""

from typing import Any
from unittest.mock import Mock

import pytest

from deltaglider.app.cli.main import create_service
from deltaglider.client import DeltaGliderClient
from deltaglider.client_models import BucketStats


class TestBucketManagement:
@@ -123,6 +125,47 @@ class TestBucketManagement:
        assert response["Buckets"] == []
        assert response["ResponseMetadata"]["HTTPStatusCode"] == 200

    def test_list_buckets_includes_cached_stats(self):
        """Bucket list should merge cached stats when available."""
        service = create_service()
        mock_storage = Mock()
        service.storage = mock_storage

        mock_boto3_client = Mock()
        mock_boto3_client.list_buckets.return_value = {
            "Buckets": [
                {"Name": "bucket1", "CreationDate": "2025-01-01T00:00:00Z"},
                {"Name": "bucket2", "CreationDate": "2025-01-02T00:00:00Z"},
            ],
            "Owner": {"DisplayName": "test-user", "ID": "12345"},
        }
        mock_storage.client = mock_boto3_client

        client = DeltaGliderClient(service)

        cached_stats = BucketStats(
            bucket="bucket1",
            object_count=10,
            total_size=1000,
            compressed_size=600,
            space_saved=400,
            average_compression_ratio=0.4,
            delta_objects=6,
            direct_objects=4,
        )
        client._store_bucket_stats_cache("bucket1", detailed_stats=True, stats=cached_stats)

        response = client.list_buckets()

        bucket1 = next(bucket for bucket in response["Buckets"] if bucket["Name"] == "bucket1")
        assert bucket1["DeltaGliderStats"]["Cached"] is True
        assert bucket1["DeltaGliderStats"]["Detailed"] is True
        assert bucket1["DeltaGliderStats"]["ObjectCount"] == cached_stats.object_count
        assert bucket1["DeltaGliderStats"]["TotalSize"] == cached_stats.total_size

        bucket2 = next(bucket for bucket in response["Buckets"] if bucket["Name"] == "bucket2")
        assert "DeltaGliderStats" not in bucket2

    def test_delete_bucket_success(self):
        """Test deleting a bucket successfully."""
        service = create_service()
@@ -178,6 +221,69 @@ class TestBucketManagement:
        with pytest.raises(RuntimeError, match="Failed to delete bucket"):
            client.delete_bucket(Bucket="full-bucket")

    def test_get_bucket_stats_caches_per_session(self, monkeypatch):
        """Verify bucket stats are cached within the client session."""
        service = create_service()
        mock_storage = Mock()
        service.storage = mock_storage

        mock_storage.client = Mock()

        client = DeltaGliderClient(service)

        quick_stats = BucketStats(
            bucket="bucket1",
            object_count=5,
            total_size=500,
            compressed_size=300,
            space_saved=200,
            average_compression_ratio=0.4,
            delta_objects=3,
            direct_objects=2,
        )
        detailed_stats = BucketStats(
            bucket="bucket1",
            object_count=5,
            total_size=520,
            compressed_size=300,
            space_saved=220,
            average_compression_ratio=0.423,
            delta_objects=3,
            direct_objects=2,
        )

        call_count = {"value": 0}

        def fake_get_bucket_stats(_: Any, bucket: str, detailed_stats_flag: bool) -> BucketStats:
            call_count["value"] += 1
            assert bucket == "bucket1"
            return detailed_stats if detailed_stats_flag else quick_stats

        monkeypatch.setattr("deltaglider.client._get_bucket_stats", fake_get_bucket_stats)

        # First call should invoke underlying function
        result_quick = client.get_bucket_stats("bucket1")
        assert result_quick is quick_stats
        assert call_count["value"] == 1

        # Second quick call should hit cache
        assert client.get_bucket_stats("bucket1") is quick_stats
        assert call_count["value"] == 1

        # Detailed call triggers new computation
        result_detailed = client.get_bucket_stats("bucket1", detailed_stats=True)
        assert result_detailed is detailed_stats
        assert call_count["value"] == 2

        # Quick call after detailed uses detailed cached value (more accurate)
        assert client.get_bucket_stats("bucket1") is detailed_stats
        assert call_count["value"] == 2

        # Clearing the cache should force recomputation
        client.clear_cache()
        assert client.get_bucket_stats("bucket1") is quick_stats
        assert call_count["value"] == 3

    def test_bucket_methods_without_boto3_client(self):
        """Test that bucket methods raise NotImplementedError when storage doesn't support it."""
        service = create_service()
214
tests/integration/test_stats_command.py
Normal file
@@ -0,0 +1,214 @@
"""Integration tests for stats CLI command."""
|
||||
|
||||
import json
|
||||
from unittest.mock import Mock, patch
|
||||
|
||||
from click.testing import CliRunner
|
||||
|
||||
from deltaglider.app.cli.main import cli
|
||||
from deltaglider.client_models import BucketStats
|
||||
|
||||
|
||||
class TestStatsCommand:
|
||||
"""Test stats CLI command."""
|
||||
|
||||
def test_stats_json_output(self):
|
||||
"""Test stats command with JSON output."""
|
||||
# Create mock bucket stats
|
||||
mock_stats = BucketStats(
|
||||
bucket="test-bucket",
|
||||
object_count=10,
|
||||
total_size=1000000,
|
||||
compressed_size=500000,
|
||||
space_saved=500000,
|
||||
average_compression_ratio=0.5,
|
||||
delta_objects=7,
|
||||
direct_objects=3,
|
||||
)
|
||||
|
||||
with patch("deltaglider.client.DeltaGliderClient") as mock_client_class:
|
||||
# Setup mock client
|
||||
mock_client = Mock()
|
||||
mock_client.get_bucket_stats.return_value = mock_stats
|
||||
mock_client_class.return_value = mock_client
|
||||
|
||||
# Run command
|
||||
runner = CliRunner()
|
||||
result = runner.invoke(cli, ["stats", "test-bucket", "--json"])
|
||||
|
||||
# Verify
|
||||
assert result.exit_code == 0
|
||||
output = json.loads(result.output)
|
||||
assert output["bucket"] == "test-bucket"
|
||||
assert output["object_count"] == 10
|
||||
assert output["total_size"] == 1000000
|
||||
assert output["compressed_size"] == 500000
|
||||
assert output["space_saved"] == 500000
|
||||
assert output["average_compression_ratio"] == 0.5
|
||||
assert output["delta_objects"] == 7
|
||||
assert output["direct_objects"] == 3
|
||||
|
||||
# Verify client was called correctly
|
||||
mock_client.get_bucket_stats.assert_called_once_with(
|
||||
"test-bucket", detailed_stats=False
|
||||
)
|
||||
|
||||
    def test_stats_json_output_detailed(self):
        """Test stats command with detailed JSON output."""
        mock_stats = BucketStats(
            bucket="test-bucket",
            object_count=5,
            total_size=2000000,
            compressed_size=100000,
            space_saved=1900000,
            average_compression_ratio=0.95,
            delta_objects=5,
            direct_objects=0,
        )

        with patch("deltaglider.client.DeltaGliderClient") as mock_client_class:
            mock_client = Mock()
            mock_client.get_bucket_stats.return_value = mock_stats
            mock_client_class.return_value = mock_client

            runner = CliRunner()
            result = runner.invoke(cli, ["stats", "test-bucket", "--detailed", "--json"])

            assert result.exit_code == 0
            output = json.loads(result.output)
            assert output["average_compression_ratio"] == 0.95

            # Verify detailed flag was passed
            mock_client.get_bucket_stats.assert_called_once_with("test-bucket", detailed_stats=True)
    def test_stats_human_readable_output(self):
        """Test stats command with human-readable output."""
        mock_stats = BucketStats(
            bucket="test-bucket",
            object_count=10,
            total_size=1500000,  # ~1.43 MB
            compressed_size=300000,  # ~293 KB
            space_saved=1200000,  # ~1.14 MB
            average_compression_ratio=0.8,
            delta_objects=7,
            direct_objects=3,
        )

        with patch("deltaglider.client.DeltaGliderClient") as mock_client_class:
            mock_client = Mock()
            mock_client.get_bucket_stats.return_value = mock_stats
            mock_client_class.return_value = mock_client

            runner = CliRunner()
            result = runner.invoke(cli, ["stats", "test-bucket"])

            assert result.exit_code == 0
            output = result.output

            # Verify human-readable format
            assert "Bucket Statistics: test-bucket" in output
            assert "Total Objects:" in output
            assert "10" in output
            assert "Delta Objects:" in output
            assert "7" in output
            assert "Direct Objects:" in output
            assert "3" in output
            assert "Original Size:" in output
            assert "Compressed Size:" in output
            assert "Space Saved:" in output
            assert "Compression Ratio:" in output
            assert "80.0%" in output  # 0.8 = 80%
    def test_stats_error_handling(self):
        """Test stats command error handling."""
        with patch("deltaglider.client.DeltaGliderClient") as mock_client_class:
            mock_client = Mock()
            mock_client.get_bucket_stats.side_effect = Exception("Bucket not found")
            mock_client_class.return_value = mock_client

            runner = CliRunner()
            result = runner.invoke(cli, ["stats", "nonexistent-bucket"])

            assert result.exit_code == 1
            assert "Error: Bucket not found" in result.output
    def test_stats_with_s3_url(self):
        """Test stats command with s3:// URL format."""
        mock_stats = BucketStats(
            bucket="test-bucket",
            object_count=5,
            total_size=1000000,
            compressed_size=500000,
            space_saved=500000,
            average_compression_ratio=0.5,
            delta_objects=3,
            direct_objects=2,
        )

        with patch("deltaglider.client.DeltaGliderClient") as mock_client_class:
            mock_client = Mock()
            mock_client.get_bucket_stats.return_value = mock_stats
            mock_client_class.return_value = mock_client

            runner = CliRunner()
            result = runner.invoke(cli, ["stats", "s3://test-bucket", "--json"])

            assert result.exit_code == 0
            # Verify bucket name was parsed correctly from S3 URL
            mock_client.get_bucket_stats.assert_called_once_with(
                "test-bucket", detailed_stats=False
            )

    def test_stats_with_s3_url_trailing_slash(self):
        """Test stats command with s3:// URL format with trailing slash."""
        mock_stats = BucketStats(
            bucket="test-bucket",
            object_count=5,
            total_size=1000000,
            compressed_size=500000,
            space_saved=500000,
            average_compression_ratio=0.5,
            delta_objects=3,
            direct_objects=2,
        )

        with patch("deltaglider.client.DeltaGliderClient") as mock_client_class:
            mock_client = Mock()
            mock_client.get_bucket_stats.return_value = mock_stats
            mock_client_class.return_value = mock_client

            runner = CliRunner()
            result = runner.invoke(cli, ["stats", "s3://test-bucket/", "--json"])

            assert result.exit_code == 0
            # Verify bucket name was parsed correctly from S3 URL with trailing slash
            mock_client.get_bucket_stats.assert_called_once_with(
                "test-bucket", detailed_stats=False
            )

    def test_stats_with_s3_url_with_prefix(self):
        """Test stats command with s3:// URL format with prefix (should ignore prefix)."""
        mock_stats = BucketStats(
            bucket="test-bucket",
            object_count=5,
            total_size=1000000,
            compressed_size=500000,
            space_saved=500000,
            average_compression_ratio=0.5,
            delta_objects=3,
            direct_objects=2,
        )

        with patch("deltaglider.client.DeltaGliderClient") as mock_client_class:
            mock_client = Mock()
            mock_client.get_bucket_stats.return_value = mock_stats
            mock_client_class.return_value = mock_client

            runner = CliRunner()
            result = runner.invoke(cli, ["stats", "s3://test-bucket/some/prefix/", "--json"])

            assert result.exit_code == 0
            # Verify only bucket name was extracted, prefix ignored
            mock_client.get_bucket_stats.assert_called_once_with(
                "test-bucket", detailed_stats=False
            )