The gofnext library provides a function decorator pattern for caching Go function results with support for multiple cache backends, TTL management, and error handling. The architecture centers around a generic decorator that wraps user functions and intercepts calls to serve cached results when available.
The system is organized into four primary layers: the Decorator Layer (user-facing API), the Cache Abstraction Layer (interface-based storage contracts), the Storage Implementation Layer (concrete cache backends), and the Serialization Layer (key hashing and value marshaling).
Key Architectural Points:
Generic Decorator Core: The decorator.go:43-248 defines the cachedFn[K1, K2, K3, V] struct that wraps user functions with caching logic, supporting up to 3 parameters through type parameters.
Interface-Based Storage: The cache-types.go:4-18 defines the CacheMap interface with Store, Load, and TTL configuration methods, enabling pluggable cache backends.
Multiple Backend Support: Three implementations exist—memory-based (cache-map-mem.go:21-76), LRU with eviction (cache-map-lru.go:25-93), and Redis for distributed caching (cache-map-redis.go:27-66).
Serialization for Unhashable Keys: The serial/dump.go:12-146 provides custom serialization to handle Go's unhashable types (maps, slices) as cache keys.
Single-Flight Pattern: The Load method returns hasCache and alive booleans to coordinate concurrent requests for the same key, preventing cache stampede.
The cachedFn[K1, K2, K3, V] struct is the heart of the decorator system, using Go 1.18+ generics to support functions with 0-3 parameters. The type parameters K1, K2, K3 represent parameter types, and V is the return value type.
```go
// Simplified structure from decorator.go
type cachedFn[K1, K2, K3, V any] struct {
	fn                 func(K1, K2, K3) (V, error)
	cacheMap           CacheMap
	hashKeyFunc        func(...any) []byte
	hashKeyPointerAddr bool
	needDumpKey        bool
}
```
Configuration Management (decorator.go:43-248):
The setConfig method initializes the decorator with a Config struct. If no cache is specified, it defaults to in-memory cache:
```go
if config.CacheMap == nil {
	config.CacheMap = newCacheMapMem(config.TTL)
}
```
The configuration validates TTL values and panics on invalid inputs:
- `ErrTTL < -1` triggers panic (line 66)
- `TTL < 0` triggers panic (line 69)

Invocation Methods:
The decorator provides separate invocation methods for different parameter arities:
| Method | Parameters | Returns | Source |
|---|---|---|---|
| `invoke0()` | 0 | `V` | decorator.go:209-215 |
| `invoke0err()` | 0 | `(V, error)` | decorator.go:217-224 |
| `invoke1(k1)` | 1 | `V` | decorator.go:226-232 |
| `invoke1err(k1)` | 1 | `(V, error)` | decorator.go:233-237 |
| `invoke2(k1, k2)` | 2 | `V` | decorator.go:239-244 |
| `invoke2err(k1, k2)` | 2 | `(V, error)` | decorator.go:245-248 |
All invocation methods delegate to invoke3err, which implements the core caching logic.
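The overall shape of such a decorator can be illustrated with a stdlib-only sketch of a one-parameter cached function. The names here (`cached1`, the closure variables) are illustrative, not the library's actual internals, and this sketch omits TTL and error caching:

```go
package main

import (
	"fmt"
	"sync"
)

// cached1 is a hypothetical sketch of a one-parameter decorator:
// it wraps fn and serves repeated calls for the same key from a sync.Map.
func cached1[K comparable, V any](fn func(K) (V, error)) func(K) (V, error) {
	var cache sync.Map // K -> V (errors are not cached in this sketch)
	return func(k K) (V, error) {
		if v, ok := cache.Load(k); ok {
			return v.(V), nil // cache hit: skip the wrapped function
		}
		v, err := fn(k)
		if err == nil {
			cache.Store(k, v)
		}
		return v, err
	}
}

func main() {
	calls := 0
	double := cached1(func(n int) (int, error) {
		calls++
		return n * 2, nil
	})
	a, _ := double(21)
	b, _ := double(21) // served from cache; the underlying fn is not re-run
	fmt.Println(a, b, calls) // 42 42 1
}
```

The real decorator adds TTL checks, error storage, and key serialization on top of this basic memoization shape.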
The cache-types.go:4-18 defines the storage abstraction:
```go
type CacheMap interface {
	Store(key, value any, err error)
	Load(key any) (value any, hasCache, alive bool, err error)
	SetTTL(ttl time.Duration) CacheMap
	SetErrTTL(ttl time.Duration) CacheMap
	SetReuseTTL(ttl time.Duration) CacheMap
	NeedMarshal() bool
}
```
Key Design Decisions:
Error Storage: The Store method accepts an error parameter, allowing the cache to persist error states alongside values.
Three-State Load: The Load method returns two booleans (hasCache, alive) to distinguish between:
- Cache miss (`hasCache=false`)
- Expired entry (`hasCache=true, alive=false`)
- Live entry (`hasCache=true, alive=true`)

Fluent API: TTL setters return `CacheMap` for method chaining.
The cache-map-mem.go:21-76 implements a simple in-memory cache using sync.Map for concurrent safety without explicit locking.
```go
type memCacheMap struct {
	Map      *sync.Map
	ttl      time.Duration
	errTtl   time.Duration
	reuseTtl time.Duration
}
```
Store Operation (cache-map-mem.go:29-36):
```go
func (m *memCacheMap) Store(key, value any, err error) {
	el := cachedValue{
		val:       value,
		createdAt: time.Now(),
		err:       err,
	}
	m.Map.Store(key, &el)
}
```
Load Operation with TTL Logic (cache-map-mem.go:38-58):
The Load method implements sophisticated TTL handling:
- Alive: `time.Since(createdAt) <= ttl` and no error (or error within ErrTTL)
- Stale but reusable: expired yet still within `reuseTtl + ttl`; returns stale cache with `alive=false`
- Expired: the entry is deleted and `hasCache=false` is returned

```go
if (m.ttl > 0 && time.Since(el.createdAt) > m.ttl) ||
	(el.err != nil && m.errTtl >= 0 && time.Since(el.createdAt) > m.errTtl) {
	if m.reuseTtl > 0 && time.Since(el.createdAt) < m.reuseTtl+m.ttl {
		return el.val, true, false, el.err // Reuse stale cache
	} else {
		m.Map.Delete(key)
		return el.val, false, false, el.err
	}
}
```
The cache-map-lru.go:25-93 implements size-bounded caching with Least Recently Used eviction.
```go
type cacheLru struct {
	maxSize  int
	list     *list.List
	listMap  *sync.Map
	mu       sync.Mutex
	ttl      time.Duration
	errTtl   time.Duration
	resueTtl time.Duration
}
```
Eviction Policy (cache-map-lru.go:46-52):
When the cache reaches maxSize, the oldest entry is evicted:
```go
if m.maxSize > 0 && m.list.Len() >= m.maxSize {
	elInter := m.list.Back()
	m.list.Remove(elInter)
	m.listMap.Delete(elInter.Value)
}
```
Access Order Maintenance (cache-map-lru.go:74-76):
On cache hit, entries are moved to the front of the list:
```go
// 3. cache is valid: move to front
m.list.MoveToFront(el.element)
return el.val, true, true, el.err
```
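The two mechanisms above — back-of-list eviction and move-to-front on hit — can be combined into a compact stdlib-only LRU sketch. The names (`lru`, `newLRU`) are illustrative and this omits the library's locking and TTL handling:

```go
package main

import (
	"container/list"
	"fmt"
)

// lru is an illustrative fixed-capacity LRU: front = most recent, back = oldest.
type lru struct {
	maxSize int
	order   *list.List               // list of keys in recency order
	items   map[string]*list.Element // key -> list position
	values  map[string]int           // key -> cached value
}

func newLRU(n int) *lru {
	return &lru{maxSize: n, order: list.New(),
		items: map[string]*list.Element{}, values: map[string]int{}}
}

func (c *lru) Store(key string, val int) {
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		c.values[key] = val
		return
	}
	if c.order.Len() >= c.maxSize { // evict the least recently used entry
		back := c.order.Back()
		c.order.Remove(back)
		k := back.Value.(string)
		delete(c.items, k)
		delete(c.values, k)
	}
	c.items[key] = c.order.PushFront(key)
	c.values[key] = val
}

func (c *lru) Load(key string) (int, bool) {
	el, ok := c.items[key]
	if !ok {
		return 0, false
	}
	c.order.MoveToFront(el) // mark as most recently used
	return c.values[key], true
}

func main() {
	c := newLRU(2)
	c.Store("a", 1)
	c.Store("b", 2)
	c.Load("a")     // "a" is now most recent
	c.Store("c", 3) // evicts "b", the least recently used
	_, okB := c.Load("b")
	va, okA := c.Load("a")
	fmt.Println(okB, okA, va) // false true 1
}
```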
The cache-map-redis.go:27-66 provides distributed caching using Redis, suitable for multi-instance deployments.
```go
type redisMap struct {
	redisClient  redis.UniversalClient
	redisFuncKey string
	ttl          time.Duration
	errTtl       time.Duration
}
```
Key Namespacing (cache-map-redis.go:47):
Redis keys are prefixed with _gofnext: followed by the user-provided function key:
```go
redisFuncKey: "_gofnext:" + funcKey,
```
Configuration Options:
The Redis implementation supports multiple configuration methods:
- `SetRedisAddr(addr string)` - Simple address configuration
- `SetRedisOpts(opts *redis.Options)` - Full Redis options
- `SetRedisUniversalOpts(opts *redis.UniversalOptions)` - Cluster/sentinel support
Flow Explanation:
Key Serialization: The decorator converts function arguments into a cache key using hashKeyFunc. For unhashable types (maps, slices), the serial/dump.go:12-146 serializes them to bytes.
Cache Lookup: The Load method checks the cache backend for an existing entry, returning validity flags.
Three-Way Branch:

- Live cache (`alive=true`): return the cached value immediately
- Stale cache (`hasCache=true, alive=false`, with ReuseTTL configured): return the stale value while a refresh occurs
- Cache miss (`hasCache=false`): execute the wrapped function
Result Storage: The Store method saves both value and error, with timestamp for TTL calculation.
The architecture implements a single-flight pattern through the hasCache and alive flags. When multiple goroutines request the same key simultaneously:
- The first goroutine sees `hasCache=false` and acquires the execution right
- Subsequent goroutines see `hasCache=true, alive=false` and (if ReuseTTL is configured) return stale data

Evidence from cache-map-mem_test.go:28-32 shows parallel call testing:
```go
parallelCall(func() {
	userinfo, err := getNumWithCache()
	fmt.Println(userinfo, err)
}, 10)
```
The library implements three distinct TTL configurations:
| TTL Type | Purpose | Default | Source |
|---|---|---|---|
| `TTL` | Normal cache expiration | 0 (never) | cache-map-mem.go:60-63 |
| `ErrTTL` | Error result expiration | -1 (never cache) | cache-map-mem.go:65-68 |
| `ReuseTTL` | Stale-while-revalidate window | 0 (disabled) | cache-map-mem.go:69-72 |
TTL Validation (decorator.go:65-70):
```go
if config.ErrTTL < -1 {
	panic("ErrTTL should not be less than -1")
}
if config.TTL < 0 {
	panic("TTL should not be less than 0")
}
```
By default, errors are NOT cached (ErrTTL = -1). This ensures transient failures don't persist. The examples/decorator-err_test.go:29-47 demonstrates this:
```go
func TestNoCacheErr(t *testing.T) {
	// Without ErrTTL config, errors are not cached
	getUserAndErrCached := gofnext.CacheFn0Err(getUserWithErr, nil)

	parallelCall(func() {
		userinfo, err := getUserAndErrCached()
		// err is always "db error" - function executes each time
	}, 10)

	// count = 10, not 1
	if count.Load() != uint32(times) {
		t.Fatalf("Execute count should be %d", times)
	}
}
```
Enabling Error Caching (examples/decorator-err_test.go:51-77):
When ErrTTL is set to a positive duration, errors are cached:
```go
getUserAndErrCached := gofnext.CacheFn1Err(getUserAndErr, &gofnext.Config{
	ErrTTL: time.Hour,
})

// Now errors are cached for 1 hour
// count = 1 after 5 parallel calls
```
The ReuseTTL config implements the stale-while-revalidate pattern. When the cache is expired but still within the ReuseTTL window, the stale value is returned with `alive=false` while a fresh value is computed, so callers are not blocked on recomputation.
Evidence from cache-map-mem_test.go:48-82:
```go
getNumWithCache := CacheFn0Err(getNum, &Config{
	TTL:      200 * time.Millisecond,
	ReuseTTL: 200 * time.Millisecond,
})

num, _ := getNumWithCache() // count = 1
time.Sleep(200 * time.Millisecond)
num, _ = getNumWithCache() // Returns stale, count still 1
time.Sleep(100 * time.Millisecond)
num, _ = getNumWithCache() // count = 2 (refreshed)
```
Go's map keys must be comparable, but many types (maps, slices, functions) are not hashable. The serial/dump.go:12-146 solves this by serializing any value to a byte slice.
Supported Types (serial/dump.go:41-87):
| Kind | Representation | Example Output |
|---|---|---|
| Basic types | Direct value | 42, "hello", 3.14 |
| Pointer | &value or *0x... | &42, *0xc0000b2008 |
| Slice/Array | [elem1,elem2,...] | [1,2,3] |
| Map | {key1:val1,key2:val2} | {"a":1,"b":2} |
| Struct | Name{field:val} | Person{Name:"Alice"} |
| Func/Chan | Type name | <func>, <chan> |
Cycle Detection (serial/dump.go:13-22):
The PtrSeen type tracks visited pointers to prevent infinite recursion:
```go
func (ps PtrSeen) Add(rv reflect.Value) bool {
	ptr := rv.Pointer()
	if _, ok := ps[ptr]; ok {
		return false // Cycle detected
	}
	ps[ptr] = struct{}{}
	return true
}
```
When cycles are detected, the serializer outputs placeholders:
- `<cycle pointer>` for pointer cycles
- `<cycle slice>` for slice cycles
- `<cycle map>` for map cycles

Evidence from examples/serial_test.go:17-24:
```go
func TestDumpCycleSlice(t *testing.T) {
	person := Person{children: []Person{{}}}
	person.children[0].children = []Person{person}
	result := serial.String(&person, false)
	assertContains(t, result, `<cycle slice>`)
}
```
The HashKeyPointerAddr config option controls whether pointer addresses are included in cache keys:
- `false` (default): Dereference pointers, hash the pointed-to value
- `true`: Hash the pointer address itself (`*0x...`)

This is useful when different pointer instances pointing to identical values should be treated as separate cache keys.
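The difference between the two modes can be demonstrated with a small stdlib sketch; `keyFor` here is hypothetical, not the library's hasher:

```go
package main

import "fmt"

// keyFor illustrates the two keying modes: by pointed-to value (the default)
// or by pointer address (analogous to HashKeyPointerAddr = true).
func keyFor(p *int, byAddr bool) string {
	if byAddr {
		return fmt.Sprintf("*%p", p) // distinct pointers -> distinct keys
	}
	return fmt.Sprintf("&%d", *p) // equal values -> same key
}

func main() {
	a, b := 42, 42 // two variables, same value, different addresses
	fmt.Println(keyFor(&a, false) == keyFor(&b, false)) // true: same value
	fmt.Println(keyFor(&a, true) == keyFor(&b, true))   // false: different addresses
}
```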
Evidence from decorator-hash-ptr_test.go:7-41:
```go
func TestIsHashableKeyNotHashPtr(t *testing.T) {
	// Hashable types
	key1 := 10
	canHash1 := isHashableKey(key1, false) // true

	// Unhashable types
	key2 := map[string]int{"a": 1}
	canHash2 := isHashableKey(key2, false) // false

	// Pointers are unhashable by default
	key5 := &key1
	canHash5 := isHashableKey(key5, false) // false
}
```
Dependency Analysis:
Core Independence: The cachedFn and Config structs have no external dependencies beyond the standard library.
Interface Segregation: Cache implementations depend only on the CacheMap interface, not on each other.
Optional Redis: The Redis dependency (go.mod:6) is only imported when NewCacheRedis is used.
Serialization Isolation: The serial package is internal and only used by the decorator for key hashing.
Decision: Use Go 1.18+ generics instead of code generation.
Rationale: Generics provide compile-time type safety without maintenance burden of generated code. The limitation to 3 parameters is a practical trade-off for generic complexity.
Trade-off: Cannot support arbitrary parameter counts (e.g., 4+ parameters) without additional type parameters.
Decision: Define CacheMap interface with Store/Load methods.
Rationale: Enables pluggable backends (memory, LRU, Redis) without modifying decorator logic. Users can implement custom storage (e.g., Memcached).
Trade-off: Interface methods must accept any types, requiring type assertions in implementations.
Decision: Default ErrTTL = -1 means errors are not cached.
Rationale: Transient errors (network timeouts, temporary failures) should not persist. Explicit ErrTTL configuration is required to cache errors.
Trade-off: High-error-rate scenarios may cause excessive function calls unless ErrTTL is configured.
Decision: Implement ReuseTTL for returning stale data during refresh.
Rationale: Improves perceived latency during cache refresh. Critical for high-traffic scenarios where fresh computation is expensive.
Trade-off: Clients may receive slightly stale data within the ReuseTTL window.
Decision: Implement custom serialization in serial/dump.go instead of using encoding/json.
Rationale: The custom serializer handles:

- Unhashable types (maps, slices, functions) as cache keys
- Cycle detection for self-referential structures
- Optional pointer-address keying (`HashKeyPointerAddr`)
Trade-off: Maintenance burden for edge cases; less battle-tested than standard library.
Decision: Use sync.Map instead of map with sync.RWMutex.
Rationale: sync.Map is optimized for read-heavy workloads with disjoint key sets, typical in caching scenarios.
Trade-off: Write performance degrades with concurrent writes to the same key.
Decision: Prefix all Redis keys with _gofnext:.
Rationale: Prevents key collisions when sharing Redis instance with other applications.
Trade-off: Users cannot customize prefix; may conflict with existing key naming conventions.
Decision: Panic on invalid TTL values (TTL < 0, ErrTTL < -1).
Rationale: Configuration errors are programmer mistakes, not runtime conditions. Fail-fast prevents subtle bugs.
Trade-off: No graceful degradation; application crashes on misconfiguration.
| Technology | Purpose | Selection Rationale | Alternative Considered |
|---|---|---|---|
| Go 1.21+ | Runtime | Generic support, mature concurrency | Go 1.17 with codegen |
| `sync.Map` | Memory cache | Optimized for read-heavy, concurrent access | map + `sync.RWMutex` |
| `container/list` | LRU eviction | Standard library, no dependencies | Third-party LRU libraries |
| `go-redis` | Redis client | Universal client supports standalone/cluster | redigo, go-redis/v9 |
| `msgpack/v5` | Binary serialization | Compact, fast for Redis values | encoding/json, gob |
| `reflect` | Type introspection | Required for generic key hashing | Code generation |
| `slog` | Structured logging | Standard library (Go 1.21+) | logrus, zap |
Dependency Footprint (go.mod:1-20):
The library maintains minimal dependencies:
- Runtime: `go-redis`, `msgpack/v5`
- Testing only: `ginkgo`, `gomega`

```go
type Config struct {
	TTL                time.Duration       // Cache expiration time
	ErrTTL             time.Duration       // Error cache expiration (-1 = never)
	ReuseTTL           time.Duration       // Stale-while-revalidate window
	CacheMap           CacheMap            // Storage backend (nil = memory)
	HashKeyFunc        func(...any) []byte // Custom key hasher
	HashKeyPointerAddr bool                // Include pointer address in key
	NeedDumpKey        bool                // Force serialization
}
```
- `CacheFnXErr(fn, config)` where X is the parameter count
- If `config == nil`, defaults are applied (decorator.go:54-59)
- If `CacheMap == nil`, `newCacheMapMem(TTL)` is created
- If `HashKeyFunc == nil`, the reflection-based hasher is used (decorator.go:77-84)

Basic In-Memory Cache:
```go
cachedFn := gofnext.CacheFn1Err(expensiveFunc, &gofnext.Config{
	TTL: time.Hour,
})
```
LRU Cache with Error Caching:
```go
cachedFn := gofnext.CacheFn2Err(dbQuery, &gofnext.Config{
	TTL:      5 * time.Minute,
	ErrTTL:   30 * time.Second,
	CacheMap: gofnext.NewCacheLru(1000),
})
```
Redis Cache with Stale-While-Revalidate:
```go
cachedFn := gofnext.CacheFn1Err(apiCall, &gofnext.Config{
	TTL:      time.Minute,
	ReuseTTL: 30 * time.Second,
	CacheMap: gofnext.NewCacheRedis("my-api-cache").
		SetRedisAddr("redis:6379"),
})
```