Zero Memory Optimization
Athena.Cache implements advanced memory optimization techniques that can reduce memory allocations by up to 98% in high-traffic scenarios. This guide explains these optimizations and how to leverage them effectively.
Why Zero Memory Matters
In high-performance applications, excessive memory allocations can cause:
- Frequent GC collections slowing down your application
- Memory pressure leading to performance degradation
- Higher hosting costs due to increased memory requirements
- Reduced throughput under heavy load
Athena.Cache addresses these issues through multiple optimization layers.
Optimization Phases
Phase 1: String Pooling
Cache keys and frequently used strings are pooled to avoid repeated allocations.
Automatic String Pooling
// Traditional approach (creates new strings)
var key1 = $"user_{userId}_{role}"; // New allocation
var key2 = $"user_{userId}_{role}"; // Another allocation (same content!)
// Athena.Cache approach (pooled strings)
[AthenaCache(KeyPattern = "user_{userId}_{role}")]
public async Task<UserDto> GetUser(int userId, string role)
{
// Key is retrieved from pool, no allocation
return await _userService.GetUserAsync(userId, role);
}
Custom String Pool Usage
// Access the string pool directly for custom scenarios
public class CustomCacheKeyService
{
public string GenerateKey(string template, params object[] parameters)
{
// Use Athena's high-performance string pool
return HighPerformanceStringPool.Get(template, parameters);
}
}
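HighPerformanceStringPool's internals are not shown in this guide. As a rough mental model only (a sketch, not Athena's actual implementation, which also avoids allocating during the lookup itself and bounds the pool size), a pool can be as simple as a ConcurrentDictionary that hands back the same string instance for a repeated template/argument pair:

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical sketch of a string pool (NOT Athena's actual implementation):
// repeated requests for the same template + arguments return the same cached
// string instance instead of formatting a new one each time.
public static class SketchStringPool
{
    private static readonly ConcurrentDictionary<(string Template, string Args), string> Pool = new();

    public static string Get(string template, params object[] parameters)
    {
        // Unit separator joins the args into a collision-safe lookup key
        var lookup = (template, string.Join('\u001f', parameters));
        return Pool.GetOrAdd(lookup, key => string.Format(key.Template, parameters));
    }
}
```

Two calls with the same template and arguments return the identical string instance, which is the property the cache-key fast path relies on.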
Phase 2: Value Type Optimizations
Athena.Cache uses value types and stack allocation where possible.
Stack-allocated Key Building
// Traditional heap allocation
public string BuildKey(string controller, string action, Dictionary<string, object> parameters)
{
var sb = new StringBuilder(); // Heap allocation
sb.Append(controller);
sb.Append(':');
sb.Append(action);
// ... more heap allocations
return sb.ToString();
}
// Athena.Cache optimized approach
public ref struct CacheKeyBuilder // Stack allocated!
{
private readonly Span<char> _buffer;
private int _length;
public CacheKeyBuilder(Span<char> buffer)
{
_buffer = buffer; // caller provides the buffer, e.g. stackalloc char[256]
_length = 0;
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public void Append(ReadOnlySpan<char> value)
{
value.CopyTo(_buffer.Slice(_length));
_length += value.Length;
}
public string ToStringAndClear()
{
var result = new string(_buffer.Slice(0, _length));
_length = 0; // reset so the builder can be reused
return result;
}
}
Efficient Parameter Serialization
// Athena.Cache automatically optimizes parameter serialization
[AthenaCache]
public async Task<ProductDto[]> GetProducts(
int categoryId, // Efficient int serialization
decimal minPrice, // Efficient decimal serialization
bool inStock, // Single byte
ProductFilter filter) // Optimized object serialization
{
// Parameters are serialized without intermediate allocations
return await _productService.GetProductsAsync(categoryId, minPrice, inStock, filter);
}
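Athena's serializer is internal, but the underlying technique for value types is standard .NET: primitives implement ISpanFormattable, so each one can write itself into a caller-owned stack buffer via TryFormat without boxing or intermediate strings. A sketch (the ValueTypeKeyWriter type and the ':' separator are illustrative, not Athena API):

```csharp
using System;
using System.Globalization;

public static class ValueTypeKeyWriter
{
    public static string WriteKey(int categoryId, decimal minPrice, bool inStock)
    {
        Span<char> buffer = stackalloc char[128]; // stack, not heap
        var pos = 0;

        // int writes itself into the stack buffer; no string allocated
        categoryId.TryFormat(buffer.Slice(pos), out var written);
        pos += written;
        buffer[pos++] = ':';

        // invariant culture keeps the decimal separator stable across locales
        minPrice.TryFormat(buffer.Slice(pos), out written, provider: CultureInfo.InvariantCulture);
        pos += written;
        buffer[pos++] = ':';

        buffer[pos++] = inStock ? '1' : '0';

        // single final allocation for the finished key
        return new string(buffer.Slice(0, pos));
    }
}
```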
Phase 3: Collection Pooling
Reusable collections minimize allocation overhead.
Pooled Collections
// Traditional approach (multiple allocations)
public Dictionary<string, object> ProcessData(IEnumerable<DataItem> items) // nothing awaited, so no async state machine
{
var results = new Dictionary<string, object>(); // Heap allocation
var tempList = new List<string>(); // Another allocation
foreach (var item in items)
{
var processedItems = new List<ProcessedItem>(); // More allocations!
// ... processing
}
return results;
}
// Athena.Cache optimized approach
public Dictionary<string, object> ProcessDataOptimized(IEnumerable<DataItem> items)
{
// Rent collections from pool
using var results = CollectionPools.RentDictionary<string, object>();
using var tempList = CollectionPools.RentList<string>();
foreach (var item in items)
{
using var processedItems = CollectionPools.RentList<ProcessedItem>();
// ... processing (no allocations!)
}
// Collections are automatically returned to pool when disposed
return results.ToResultDictionary();
}
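The rent/dispose pattern above can be sketched as follows (a hypothetical PooledList<T>, not Athena's CollectionPools implementation): Dispose clears the list and returns it to a shared pool, so the next Rent reuses the same instance instead of allocating a new one.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Hypothetical sketch of the rent/dispose pattern behind CollectionPools
// (not Athena's actual implementation).
public sealed class PooledList<T> : IDisposable
{
    private static readonly ConcurrentBag<List<T>> Pool = new();

    public List<T> Items { get; }

    private PooledList(List<T> items) => Items = items;

    // Reuse a pooled list if one is available, otherwise allocate once
    public static PooledList<T> Rent()
        => new(Pool.TryTake(out var list) ? list : new List<T>());

    // Clear and return to the pool so the next renter starts empty
    public void Dispose()
    {
        Items.Clear();
        Pool.Add(Items);
    }
}
```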
Automatic Collection Pooling
// Athena.Cache automatically pools collections internally
[AthenaCache]
public async Task<UserDto[]> SearchUsers([FromQuery] UserSearchRequest request)
{
// Athena.Cache uses pooled collections for:
// - Parameter serialization
// - Cache key building
// - Response caching
// - Invalidation tracking
return await _userService.SearchUsersAsync(request);
}
Phase 4: Lazy Initialization
Expensive objects are initialized only when needed.
Lazy Cache Configuration
// Athena.Cache uses lazy initialization for expensive resources
public class LazyCache<T> where T : class
{
private readonly Lazy<T> _lazyValue;
public LazyCache(Func<T> factory)
{
_lazyValue = new Lazy<T>(factory, LazyThreadSafetyMode.ExecutionAndPublication);
}
public T Value => _lazyValue.Value;
public bool IsValueCreated => _lazyValue.IsValueCreated;
}
// Usage in your code
public class ExpensiveResourceManager
{
private readonly LazyCache<ExpensiveResource> _resource = new(() => CreateExpensiveResource());
public async Task<DataDto> GetData()
{
// Resource is created only when first accessed
var resource = _resource.Value;
return await resource.GetDataAsync();
}
}
Phase 5: Memory Pressure Management
Automatic cleanup based on memory pressure prevents out-of-memory situations.
Automatic Memory Monitoring
// Configure memory pressure management
builder.Services.AddAthenaCacheComplete(options =>
{
options.MemoryPressure.EnableAutomaticCleanup = true;
options.MemoryPressure.CleanupThresholdMB = 100; // Start cleanup at 100MB
options.MemoryPressure.AggressiveCleanupThresholdMB = 500; // Aggressive cleanup at 500MB
options.MemoryPressure.MonitoringIntervalSeconds = 30; // Check every 30 seconds
});
Custom Memory Pressure Handling
public class CustomMemoryPressureManager
{
private readonly MemoryCache _cache;
private readonly ILogger<CustomMemoryPressureManager> _logger;
private readonly long _thresholdBytes;
private readonly long _criticalThresholdBytes;
public CustomMemoryPressureManager(MemoryCache cache, ILogger<CustomMemoryPressureManager> logger, long thresholdBytes, long criticalThresholdBytes)
{
_cache = cache;
_logger = logger;
_thresholdBytes = thresholdBytes;
_criticalThresholdBytes = criticalThresholdBytes;
}
public void MonitorAndCleanup()
{
var totalMemory = GC.GetTotalMemory(false);
if (totalMemory > _thresholdBytes)
{
_logger.LogWarning("High memory pressure detected: {TotalMemory} bytes", totalMemory);
// Trigger cache cleanup
TriggerCacheCleanup();
// Force a full, compacting collection if necessary (GCCollectionMode.Aggressive requires .NET 7+)
if (totalMemory > _criticalThresholdBytes)
{
GC.Collect(2, GCCollectionMode.Aggressive);
GC.WaitForPendingFinalizers();
}
}
}
private void TriggerCacheCleanup()
{
// Compact away the least recently used ~30% of entries (keep about 70%)
_cache.Compact(0.3);
}
}
Measurement and Monitoring
Memory Allocation Tracking
[ApiController]
public class MemoryDiagnosticsController : ControllerBase
{
[HttpGet("memory/stats")]
public IActionResult GetMemoryStats()
{
return Ok(new
{
TotalMemory = GC.GetTotalMemory(false),
WorkingSet = Environment.WorkingSet,
Gen0Collections = GC.CollectionCount(0),
Gen1Collections = GC.CollectionCount(1),
Gen2Collections = GC.CollectionCount(2),
StringPoolStats = HighPerformanceStringPool.GetStatistics(),
CollectionPoolStats = CollectionPools.GetStatistics()
});
}
[HttpPost("memory/force-gc")]
public IActionResult ForceGarbageCollection()
{
var beforeMemory = GC.GetTotalMemory(false);
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
var afterMemory = GC.GetTotalMemory(true);
return Ok(new
{
BeforeGC = beforeMemory,
AfterGC = afterMemory,
Freed = beforeMemory - afterMemory
});
}
}
Performance Benchmarking
// Example benchmark comparing traditional vs optimized approaches
[MemoryDiagnoser]
[SimpleJob(RuntimeMoniker.Net80)]
public class CachingBenchmarks
{
private const int OperationCount = 10000;
[Benchmark]
public void TraditionalCaching()
{
for (int i = 0; i < OperationCount; i++)
{
var key = $"user_{i}_profile_{i % 10}"; // String allocation
var data = new UserProfile { Id = i }; // Object allocation
// Traditional caching operations
}
}
[Benchmark]
public void OptimizedCaching()
{
for (int i = 0; i < OperationCount; i++)
{
var key = HighPerformanceStringPool.Get("user_{0}_profile_{1}", i, i % 10);
using var dataPool = ObjectPools.RentUserProfile();
var data = dataPool.Value;
data.Reset(i);
// Optimized caching operations
}
}
}
Configuration for Maximum Performance
Production Settings
// Optimal settings for high-performance production environments
builder.Services.AddAthenaCacheComplete(options =>
{
// Memory optimization settings
options.MemoryOptimization.EnableStringPooling = true;
options.MemoryOptimization.EnableCollectionPooling = true;
options.MemoryOptimization.EnableValueTypeOptimizations = true;
options.MemoryOptimization.EnableLazyInitialization = true;
// String pool settings
options.StringPool.MaxPoolSize = 10000;
options.StringPool.MaxStringLength = 1000;
options.StringPool.CleanupIntervalMinutes = 10;
// Collection pool settings
options.CollectionPools.MaxDictionaryPoolSize = 100;
options.CollectionPools.MaxListPoolSize = 200;
options.CollectionPools.MaxArrayPoolSize = 50;
// Memory pressure settings
options.MemoryPressure.EnableAutomaticCleanup = true;
options.MemoryPressure.CleanupThresholdMB = 200;
options.MemoryPressure.MonitoringIntervalSeconds = 15;
});
Development Settings
// Settings for development with more debugging info
builder.Services.AddAthenaCacheComplete(options =>
{
// Enable memory tracking in development
options.MemoryOptimization.EnableAllocationTracking = true;
options.MemoryOptimization.LogAllocationWarnings = true;
// More frequent cleanup for testing
options.MemoryPressure.CleanupThresholdMB = 50;
options.MemoryPressure.MonitoringIntervalSeconds = 10;
// Pool statistics logging
options.Logging.LogPoolStatistics = true;
options.Logging.LogMemoryPressure = true;
});
Custom Optimization Strategies
Object Pooling
// Implement object pooling for your domain objects
// Minimal pool abstraction used below (define it yourself; it is not a BCL interface)
public interface IObjectPool<T>
{
T Get();
void Return(T obj);
}
public class UserDtoPool : IObjectPool<UserDto>
{
private readonly ConcurrentBag<UserDto> _objects = new();
public UserDto Get()
{
if (_objects.TryTake(out var item))
{
return item;
}
return new UserDto();
}
public void Return(UserDto obj)
{
// Reset object state
obj.Reset();
_objects.Add(obj);
}
}
// Register and use object pool
builder.Services.AddSingleton<IObjectPool<UserDto>, UserDtoPool>();
public class OptimizedUserService
{
private readonly IObjectPool<UserDto> _userPool;
public async Task<UserDto> GetUserOptimizedAsync(int id)
{
var user = _userPool.Get();
try
{
// Populate user data without allocating new object
await PopulateUserDataAsync(user, id);
return user; // the caller is responsible for returning the DTO to the pool when finished
}
catch
{
_userPool.Return(user);
throw;
}
}
}
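Instead of hand-rolling the pool, the Microsoft.Extensions.ObjectPool NuGet package provides a bounded, thread-safe implementation with a policy hook that resets objects on return. A sketch (the UserDto definition here is a stand-in for your own DTO):

```csharp
using Microsoft.Extensions.ObjectPool;

// Stand-in DTO with a Reset() like the example above
public class UserDto
{
    public int Id { get; set; }
    public void Reset() => Id = 0;
}

// Policy: how to create fresh objects and what to do when one is returned
public sealed class UserDtoPoolPolicy : PooledObjectPolicy<UserDto>
{
    public override UserDto Create() => new UserDto();

    public override bool Return(UserDto obj)
    {
        obj.Reset();  // wipe state before the next renter sees it
        return true;  // true = keep the object in the pool
    }
}
```

Construct the pool once (for example as a singleton) with `new DefaultObjectPool<UserDto>(new UserDtoPoolPolicy(), maximumRetained: 64)`; objects beyond the retained maximum are simply dropped for the GC.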
Memory-efficient Serialization
// Custom serialization that minimizes allocations
public class OptimizedSerializer
{
private readonly ArrayPool<byte> _bytePool = ArrayPool<byte>.Shared;
public byte[] Serialize<T>(T obj)
{
var buffer = _bytePool.Rent(1024);
try
{
// NOTE: a MemoryStream over a rented array cannot grow, so this assumes
// the serialized payload fits in the rented size (1 KB here)
using var stream = new MemoryStream(buffer);
// Serialize directly to the rented buffer
JsonSerializer.Serialize(stream, obj);
// Return only the used portion
var result = new byte[stream.Position];
Array.Copy(buffer, result, stream.Position);
return result;
}
finally
{
_bytePool.Return(buffer);
}
}
}
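The fixed-size rented buffer above throws if a payload outgrows it. A growable variant (a sketch using the in-box System.Text.Json and System.Buffers APIs, not Athena's serializer) writes UTF-8 directly through Utf8JsonWriter into an ArrayBufferWriter<byte>:

```csharp
using System.Buffers;
using System.Text.Json;

public static class BufferWriterSerializer
{
    public static byte[] Serialize<T>(T obj)
    {
        // ArrayBufferWriter grows on demand, unlike a fixed rented array
        var buffer = new ArrayBufferWriter<byte>(initialCapacity: 1024);
        using var writer = new Utf8JsonWriter(buffer);
        // Writes UTF-8 directly; no intermediate string is materialized
        JsonSerializer.Serialize(writer, obj);
        writer.Flush();
        return buffer.WrittenSpan.ToArray();
    }
}
```

For a fully pooled variant you could keep one ArrayBufferWriter per thread and Clear() it between uses, avoiding even the writer's backing-array allocation on the hot path.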
Monitoring Zero Memory Performance
Real-time Memory Metrics
[ApiController]
public class ZeroMemoryMetricsController : ControllerBase
{
[HttpGet("zero-memory/metrics")]
public IActionResult GetZeroMemoryMetrics()
{
return Ok(new
{
StringPool = new
{
TotalStrings = HighPerformanceStringPool.TotalStrings,
PoolHitRate = HighPerformanceStringPool.HitRate,
MemorySaved = HighPerformanceStringPool.EstimatedMemorySaved
},
CollectionPools = new
{
ListPoolHitRate = CollectionPools.ListPool.HitRate,
DictionaryPoolHitRate = CollectionPools.DictionaryPool.HitRate,
ArrayPoolHitRate = CollectionPools.ArrayPool.HitRate
},
GCStats = new
{
Gen0Collections = GC.CollectionCount(0),
Gen1Collections = GC.CollectionCount(1),
Gen2Collections = GC.CollectionCount(2),
TotalMemory = GC.GetTotalMemory(false),
TotalPauseDuration = GC.GetTotalPauseDuration() // .NET 7+
}
});
}
}
Best Practices
1. Enable All Optimizations in Production
// Always enable all optimizations for production
options.MemoryOptimization.EnableStringPooling = true;
options.MemoryOptimization.EnableCollectionPooling = true;
options.MemoryOptimization.EnableValueTypeOptimizations = true;
2. Monitor Memory Pressure
// Set appropriate thresholds based on your hosting environment
options.MemoryPressure.CleanupThresholdMB = (int)(availableMemoryMB * 0.8); // e.g. 80% of available memory
3. Use Source Generator
# The Source Generator provides additional compile-time optimizations
dotnet add package Athena.Cache.SourceGenerator
4. Profile Your Application
Use tools like dotMemory, PerfView, or Application Insights to measure the impact:
// Add memory profiling in development
#if DEBUG
options.MemoryOptimization.EnableAllocationTracking = true;
#endif
Troubleshooting
High Memory Usage Despite Optimizations
- Check pool sizes: Pools that are too large can consume memory
- Monitor pool hit rates: Low hit rates indicate pools aren’t being used effectively
- Review custom code: Ensure your business logic isn’t creating unnecessary allocations
Pool Contention Issues
// Increase pool sizes if you see contention
options.StringPool.MaxPoolSize = 20000; // Increase from default
options.CollectionPools.MaxListPoolSize = 500; // Increase from default
GC Pressure Still High
// More aggressive memory management
options.MemoryPressure.CleanupThresholdMB = 100; // Lower threshold
options.MemoryPressure.MonitoringIntervalSeconds = 10; // More frequent monitoring
For more performance topics, see:
- Source Generator - Compile-time optimizations
- Production Tuning - Overall performance tuning
- Redis Setup - Distributed caching performance