fix(core): reorder LruCache entries on get() for falsy values #2968

Open · chinesepowered wants to merge 1 commit into QwenLM:main from chinesepowered:fix/lru-cache-falsy-reorder

Conversation

@chinesepowered (Contributor)

Fix LruCache.get() skipping the LRU reorder for falsy values.

TLDR

LruCache.get() guarded the LRU reorder with a truthy check (if (value)), so cached values that were 0, '', false, or null were never promoted to most-recently-used on access. The returned value was still correct, but the internal Map ordering was wrong — a legitimate falsy value stayed in its original insertion slot and would be evicted earlier than its true access pattern warranted. Switch to Map.has() for existence so every cached entry gets the reorder regardless of its value.

Screenshots / Video Demo

N/A — internal cache semantics fix, no user-visible UI.

Dive Deeper

Before:

```ts
get(key: K): V | undefined {
  const value = this.cache.get(key);
  if (value) {                    // truthy check — not existence
    // Move to end to mark as recently used
    this.cache.delete(key);
    this.cache.set(key, value);
  }
  return value;
}
```

JavaScript falsy values (0, '', false, null, NaN) fail the if (value) guard, so the delete-and-reinsert never runs. Map iteration order is insertion order, and the eviction logic in set() (lines 29-33) uses this.cache.keys().next().value to pick the victim — always the oldest insertion. A frequently-accessed entry whose value happens to be 0 would therefore be evicted before a rarely-accessed entry whose value is a non-empty object, silently violating the LRU invariant.
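The bug can be reproduced with a minimal sketch. Note that `LruCacheSketch`, `maxSize`, and the internal field names below are illustrative stand-ins, not the actual `packages/core` source:

```typescript
// Minimal sketch of the pre-fix behavior (names here are assumptions,
// not the real packages/core/src/utils/LruCache.ts implementation).
class LruCacheSketch<K, V> {
  private cache = new Map<K, V>();
  constructor(private maxSize: number) {}

  set(key: K, value: V): void {
    if (this.cache.has(key)) {
      this.cache.delete(key);
    } else if (this.cache.size >= this.maxSize) {
      // Map iterates in insertion order: the first key is the victim.
      const oldest = this.cache.keys().next().value as K;
      this.cache.delete(oldest);
    }
    this.cache.set(key, value);
  }

  // Pre-fix get(): the truthy guard skips the reorder for falsy values.
  get(key: K): V | undefined {
    const value = this.cache.get(key);
    if (value) {
      this.cache.delete(key);
      this.cache.set(key, value);
    }
    return value;
  }
}

const buggy = new LruCacheSketch<string, number>(3);
buggy.set('a', 0);
buggy.set('b', 1);
buggy.set('c', 2);
buggy.get('a');    // returns 0, but 0 is falsy: no promotion happens
buggy.set('d', 3); // evicts 'a' even though it was just accessed
```

After the final `set('d', 3)`, the recently accessed `'a'` is gone while the never-accessed `'b'` survives, which is exactly the inverted eviction order described above.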

LruCache is generic over V, so any consumer that stores falsy values is affected:

  • Numeric caches that legitimately cache 0 (token counts, file sizes)
  • Boolean caches (feature-flag resolution, permission lookups)
  • Caches keyed on IDs where ''/null might be a legitimate sentinel

After:

```ts
get(key: K): V | undefined {
  if (!this.cache.has(key)) {
    return undefined;
  }
  const value = this.cache.get(key) as V;
  // Move to end to mark as recently used
  this.cache.delete(key);
  this.cache.set(key, value);
  return value;
}
```

Map.has() is the existence-correct check. Using it avoids ambiguity with undefined (which cache.get() also returns on miss) and ensures every cached entry — regardless of value falsiness — gets the reorder on access.
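The `undefined` ambiguity can be seen directly on a plain `Map` (a small illustrative snippet, not project code):

```typescript
// get() alone cannot tell a stored `undefined` apart from a miss;
// has() checks key existence and so distinguishes the two cases.
const m = new Map<string, number | undefined>();
m.set('hit', undefined);

m.get('hit');  // undefined
m.get('miss'); // undefined — indistinguishable from the line above
m.has('hit');  // true
m.has('miss'); // false
```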

Modified file:

  • packages/core/src/utils/LruCache.ts — use has() for existence, always reorder on hit

Reviewer Test Plan

  1. Create new LruCache<string, number>(3) and .set('a', 0), .set('b', 1), .set('c', 2)
  2. Call .get('a') — before the fix, 'a' stayed at the front of the eviction queue because 0 is falsy; after the fix, 'a' is promoted to most-recent
  3. Call .set('d', 3) to trigger eviction — before: 'a' was evicted (wrong); after: 'b' is evicted (correct, it's now the oldest non-accessed entry)
  4. Repeat with LruCache<string, boolean> storing false values, and LruCache<string, string> storing '' values — all should now reorder correctly on get
  5. Regression: truthy values still work identically
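Steps 1–3 of the plan can be sketched as a runnable check against a minimal stand-in for the fixed cache (the constructor signature and internal names are assumptions about the real LruCache):

```typescript
// Minimal post-fix sketch: get() uses has() for existence and always
// reorders on hit, so falsy values are promoted like any other.
class LruCache<K, V> {
  private cache = new Map<K, V>();
  constructor(private maxSize: number) {}

  get(key: K): V | undefined {
    if (!this.cache.has(key)) {
      return undefined;
    }
    const value = this.cache.get(key) as V;
    this.cache.delete(key);
    this.cache.set(key, value); // always move to most-recently-used
    return value;
  }

  set(key: K, value: V): void {
    if (this.cache.has(key)) {
      this.cache.delete(key);
    } else if (this.cache.size >= this.maxSize) {
      const oldest = this.cache.keys().next().value as K;
      this.cache.delete(oldest);
    }
    this.cache.set(key, value);
  }

  has(key: K): boolean {
    return this.cache.has(key);
  }
}

const cache = new LruCache<string, number>(3);
cache.set('a', 0);
cache.set('b', 1);
cache.set('c', 2);
cache.get('a');    // step 2: 0 is falsy, but 'a' is still promoted
cache.set('d', 3); // step 3: evicts 'b', the oldest non-accessed entry
```

After the fix, `'a'` survives the eviction and `'b'` is the victim, matching the expected LRU behavior in step 3.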

Testing Matrix

|          | macOS | Windows | Linux |
|----------|-------|---------|-------|
| npm run  | ?     | pass    | ?     |
| npx      | ?     | ?       | ?     |
| Docker   | ?     | ?       | ?     |
| Podman   | ?     | -       | -     |
| Seatbelt | ?     | -       | -     |
