gofnext is a Go library (requires Go >= 1.21) that provides function extension capabilities, primarily cache decorators similar to Python's functools.cache and functools.lru_cache (readme.md:1-18). The library lets developers wrap functions with goroutine-safe caching layers backed by any of several storage options: in-memory maps, LRU eviction, Redis, and PostgreSQL. The core value proposition is the ability to transparently add caching to any function without modifying the original function logic, significantly improving performance for repeated calls with identical arguments (readme.md:11-16).
| Technology | Version/Requirement | Purpose |
|---|---|---|
| Go | >= 1.21 | Primary language (required for generics support) |
| Redis | Optional | Distributed cache backend |
| PostgreSQL | Optional (via extension) | Persistent cache backend |
| Memory Map | Built-in | Default in-memory cache storage |
| LRU Cache | Built-in | Size-limited memory cache with eviction |
The library's features center on function caching and extension. It provides a systematic mapping between function signatures and their corresponding cache decorators:
| Function Signature | Decorator |
|---|---|
| func f() R | gofnext.CacheFn0(f) |
| func f(K1) R | gofnext.CacheFn1(f) |
| func f(K1, K2) R | gofnext.CacheFn2(f) |
| func f(K1, K2, K3) R | gofnext.CacheFn3(f) |
| func f() (R, error) | gofnext.CacheFn0Err(f) |
| func f(K1) (R, error) | gofnext.CacheFn1Err(f) |
| func f(K1, K2) (R, error) | gofnext.CacheFn2Err(f) |
For functions returning errors, the library provides specialized decorators (CacheFn0Err, CacheFn1Err, CacheFn2Err) that cache errors separately with their own TTL. Configuration options include a TTL for successful results (TTL: time.Hour) and a separate TTL for error results (ErrTTL), allowing fine-grained control over cache invalidation behavior (readme.md:20-32).
The architecture of gofnext follows a layered decorator pattern where cache functionality is abstracted behind a CacheMap interface, allowing different storage backends to be plugged in transparently.
[Architecture diagram]
Architecture Explanation:
User Functions Layer: The top layer represents user-defined functions with varying signatures (0-3 parameters, with or without error returns). These functions remain unchanged and are wrapped by decorators.
Decorator Layer: Each function signature has a corresponding decorator (CacheFn0 through CacheFn3, plus error variants). Decorators intercept function calls, check cache, and only invoke the original function on cache misses.
Configuration Layer: The gofnext.Config struct centralizes all cache behavior settings including TTL values, custom hash functions, and cache backend selection (readme.md:33-39).
Backend Layer: Multiple storage implementations conform to the CacheMap interface. Memory is default; LRU provides size-limited eviction; Redis enables distributed caching; PostgreSQL offers persistence.
The following sequence diagram illustrates the critical data flow when a cached function is invoked, showing the decision points for cache hit/miss scenarios and error handling:
[Sequence diagram: cached function invocation]
Data Flow Explanation:
Invocation: When a caller invokes the cached function, the decorator intercepts the call before any computation occurs.
Key Generation: Arguments are hashed using either the default hash function or a custom implementation configured via gofnext.Config. This also determines whether pointer arguments are keyed by address or by the pointed-to value (readme.md:37-38).
Cache Lookup: The decorator queries the configured CacheMap backend (memory, LRU, Redis, etc.) using the generated key.
Cache Hit Path: If a value exists and hasn't expired (based on TTL), the cached value is returned immediately without invoking the original function.
Cache Miss Path: On cache miss, the original function executes with the provided arguments. The result is then stored in cache with appropriate TTL (regular TTL for success, ErrTTL for errors) before being returned.
Error Handling: Error-returning functions use specialized decorators that cache errors separately, preventing repeated failed calls from hitting the underlying function.
Performance testing reveals significant improvements when using cache decorators, with memory cache providing approximately 117,000x faster execution compared to uncached calls:
| Cache Type | Iterations | Latency (ns/op) | Memory (B/op) | Allocations |
|---|---|---|---|---|
| No Cache | 100 | 11,179,015 | 281,220 | 99 |
| Memory Cache | 11,036,955 | 95.49 | 72 | 2 |
| LRU Cache | 11,362,039 | 104.8 | 72 | 2 |
| Redis Cache | 15,850 | 74,653 | 28,072 | 29 |
Benchmark Analysis:
Memory vs LRU Cache: Performance is nearly identical (~100ns/op), with LRU adding minimal overhead for eviction tracking. Both reduce allocations from 99 to 2 per operation.
Redis Cache Trade-off: Redis caching is ~780x slower than memory cache due to network round-trips and serialization overhead. However, it still provides ~150x improvement over no cache and enables distributed caching across multiple application instances.
Memory Efficiency: Memory and LRU caches reduce memory allocation by ~97% (281KB to 72B per operation), significantly reducing GC pressure in high-throughput scenarios.
Use Case Guidance: Memory/LRU caches are ideal for single-instance applications with high call frequency. Redis cache suits distributed systems requiring cache sharing across instances, accepting the latency trade-off.
The gofnext.Config struct provides comprehensive control over cache behavior:
| Config Item | Type | Description |
|---|---|---|
| TTL | time.Duration | Time-to-live for successful cache entries |
| ErrTTL | time.Duration | Time-to-live for error cache entries |
| CacheMap | CacheMap interface | Cache backend implementation |
| HashKeyFunc | func(...any) string | Custom key generation function |
Configuration Examples:
```go
// Memory cache with 1-hour TTL
gofnext.CacheFn0Err(f, &gofnext.Config{TTL: time.Hour})

// LRU cache with max 9999 entries
gofnext.CacheFn0(f, &gofnext.Config{CacheMap: gofnext.NewCacheLru(9999)})

// Redis cache backend
gofnext.CacheFn0(f, &gofnext.Config{CacheMap: gofnext.NewCacheRedis("cacheKey")})
```
The library is particularly well-suited for:
Expensive Computations: Functions performing complex calculations, database queries, or API calls that are called repeatedly with the same arguments.
Recursive Algorithms: Fibonacci, factorial, and dynamic programming problems where overlapping subproblems exist.
Database Query Caching: Reducing database load by caching query results with configurable TTL.
Distributed Systems: Using Redis backend to share cache across multiple application instances.
Rate-Limited APIs: Caching external API responses to stay within rate limits while serving repeated requests.
The following diagram illustrates the recommended reading order for understanding the gofnext library comprehensively:
[Diagram: recommended reading order]
Reading Path Explanation:
Start Here: Project Overview (current page) provides the foundation understanding of library purpose and architecture.
Core Concepts: Features and Decorator Usage sections explain the API surface and common patterns.
Deep Dive: Configuration and Performance sections cover optimization and tuning.
Implementation: For contributors or advanced users needing to understand internal mechanics or implement custom backends.
| Metric | Value |
|---|---|
| Supported Parameter Counts | 0-3 (CacheFn0 through CacheFn3) |
| Error-Variant Decorators | 3 (CacheFn0Err, CacheFn1Err, CacheFn2Err) |
| Built-in Cache Backends | 3 (Memory, LRU, Redis) |
| Extension Backends | 1+ (PostgreSQL via gofnext_pg) |
| Minimum Go Version | 1.21 |
| Performance Improvement (Memory Cache) | ~117,000x vs no cache |
| Memory Reduction | ~97% fewer allocations |