bug: L1-only mode serializes data unnecessarily (tuples→lists) #73
Description
Competitive testing reveals that @cache(backend=None) (L1-only mode) serializes data through MessagePack even though there's no L2 backend to send bytes to. This causes type degradation that pure in-memory caches (lru_cache, cachetools) don't have.
Evidence
From tests/competitive/test_head_to_head.py::TestCollectionTypes::test_tuple_preservation:
```python
@cache(backend=None, ttl=300)
def fn(): return (1, 2, 3)

result = fn()
# Expected: (1, 2, 3) — tuple preserved (in-memory, no serialization needed)
# Actual:   [1, 2, 3] — list (MessagePack converted tuple→list)
```

All three competitors preserve tuples in their in-memory mode:
- `functools.lru_cache` — stores raw Python objects
- `cachetools.TTLCache` — stores raw Python objects
- `aiocache.SimpleMemoryCache` — stores raw Python objects
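For reference, the stdlib behavior the competitors share can be checked directly: `lru_cache` stores the raw return value, so a cache hit returns the very same tuple object with no type change. A minimal demonstration:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fn():
    # Returns a tuple; lru_cache stores this object as-is (no serialization)
    return (1, 2, 3)

result = fn()
assert isinstance(result, tuple)  # type preserved
assert result is fn()             # cache hit returns the identical object
```

This identity preservation (`is`, not just `==`) is exactly what serializing through MessagePack breaks.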
Impact
- Sets, frozensets, tuples, and nested structures containing them lose type identity
- Users migrating from `lru_cache` get different behavior
- The comparison doc claims "Same ~50ns performance" vs `lru_cache` but doesn't disclose this type degradation
Suggested Fix
When backend=None, skip serialization entirely. Store the raw Python object in L1. Serialization is only needed when data crosses process boundaries (L2 backends).
This would make @cache(backend=None) a true drop-in replacement for lru_cache with added TTL, metrics, and unhashable arg support.
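A minimal sketch of the proposed fix, assuming a simple decorator with a dict-backed L1 (the internal names here are hypothetical, not the library's actual implementation): when `backend is None`, the wrapper stores the raw return value; serialization would only occur on the L2 path.

```python
import time
from functools import wraps

def cache(backend=None, ttl=300):
    """Hypothetical sketch: L1-only mode stores raw Python objects."""
    def decorator(fn):
        store = {}  # L1: key -> (raw value, expiry timestamp)

        @wraps(fn)
        def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            hit = store.get(key)
            if hit is not None and hit[1] > time.monotonic():
                return hit[0]  # raw object back, types intact
            value = fn(*args, **kwargs)
            # backend=None: no process boundary to cross, so skip
            # serialization entirely and keep the raw object in L1.
            # (With an L2 backend, bytes would be produced here.)
            store[key] = (value, time.monotonic() + ttl)
            return value
        return wrapper
    return decorator

@cache(backend=None, ttl=300)
def fn():
    return (1, 2, 3)

assert isinstance(fn(), tuple)  # tuple preserved, not degraded to list
assert fn() is fn()             # cache hit returns the identical object
```

This matches the behavior users migrating from `lru_cache` already expect, while still layering TTL on top.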
Test Evidence
pytest tests/competitive/test_head_to_head.py -k tuple -v — 50/50 passing, documenting current (broken) behavior.