⬆️ Bump @babel/register from 7.10.4 to 7.10.5 (#1)
Merged
Conversation
Bumps [@babel/register](https://github.com/babel/babel/tree/HEAD/packages/babel-register) from 7.10.4 to 7.10.5.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.10.5/packages/babel-register)

Signed-off-by: dependabot[bot] <support@github.com>
wwilsman pushed a commit that referenced this pull request on Jul 30, 2021:
…ndabot (#457)

* Update dependabot.yml

* ⬆️ Bump ws from 7.5.3 to 8.0.0 (#2)

  Bumps [ws](https://github.com/websockets/ws) from 7.5.3 to 8.0.0.
  - [Release notes](https://github.com/websockets/ws/releases)
  - [Commits](websockets/ws@7.5.3...8.0.0)

  ---
  updated-dependencies:
  - dependency-name: ws
    dependency-type: direct:production
    update-type: version-update:semver-major
  ...

  Signed-off-by: dependabot[bot] <support@github.com>
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* ⬆️ Bump actions/setup-node from 1 to 2 (#1)

  * ⬆️ Bump actions/setup-node from 1 to 2.3.0

    Bumps [actions/setup-node](https://github.com/actions/setup-node) from 1 to 2.3.0.
    - [Release notes](https://github.com/actions/setup-node/releases)
    - [Commits](actions/setup-node@v1...v2.3.0)

    ---
    updated-dependencies:
    - dependency-name: actions/setup-node
      dependency-type: direct:production
      update-type: version-update:semver-major
    ...

    Signed-off-by: dependabot[bot] <support@github.com>

  * Update .github/workflows/lint.yml
  * Update .github/workflows/release.yml
  * Update .github/workflows/windows.yml
  * Update .github/workflows/windows.yml
  * Update .github/workflows/typecheck.yml
  * Update .github/workflows/test.yml

  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
  Co-authored-by: Matthieu MOREL <mmorel-35@users.noreply.github.com>

* ⬆️ Bump rollup from 2.54.0 to 2.55.1 (#4)

  Bumps [rollup](https://github.com/rollup/rollup) from 2.54.0 to 2.55.1.
  - [Release notes](https://github.com/rollup/rollup/releases)
  - [Changelog](https://github.com/rollup/rollup/blob/master/CHANGELOG.md)
  - [Commits](rollup/rollup@v2.54.0...v2.55.1)

  ---
  updated-dependencies:
  - dependency-name: rollup
    dependency-type: direct:development
    update-type: version-update:semver-minor
  ...

  Signed-off-by: dependabot[bot] <support@github.com>
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* ⬆️ Bump @rollup/plugin-alias from 3.1.4 to 3.1.5 (#5)

  Bumps [@rollup/plugin-alias](https://github.com/rollup/plugins/tree/HEAD/packages/alias) from 3.1.4 to 3.1.5.
  - [Release notes](https://github.com/rollup/plugins/releases)
  - [Changelog](https://github.com/rollup/plugins/blob/master/packages/alias/CHANGELOG.md)
  - [Commits](https://github.com/rollup/plugins/commits/alias-v3.1.5/packages/alias)

  ---
  updated-dependencies:
  - dependency-name: "@rollup/plugin-alias"
    dependency-type: direct:development
    update-type: version-update:semver-patch
  ...

  Signed-off-by: dependabot[bot] <support@github.com>
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
pranavz28 added a commit that referenced this pull request on Apr 22, 2026:
… egress, effective cap)

Addresses items 1-8 from PR #2192 review.

Must-fix:
- #1 discovery.js: always increment unsetModeBytes; only gate the warn-log emission on warningFired. Previously the byte counter froze at the threshold crossing, so cache_summary.peak_bytes misreported every Map-mode run that hit the threshold.
- #2 byte-lru.js: reorder ByteLRU.set — reject oversize BEFORE touching any existing entry. Fixes a failed oversize re-set silently evicting the prior (valid) value for the same key.

Should-fix:
- #3 percy.js: move sendCacheSummary AFTER sendBuildLogs in stop()'s finally. A slow/stalled pager hop on the analytics event can no longer delay the primary log egress.
- #4 discovery.js: do not mutate percy.config.discovery.maxCacheRam on clamp. Store effectiveMaxCacheRamMB on CACHE_STATS_KEY; percy.js sendCacheSummary reads it from there. User config stays read-only.

Nits:
- #5 discovery.js: read PERCY_CACHE_WARN_THRESHOLD_BYTES once at queue construction instead of on every saveResource.
- #6 discovery.test.js: use `instanceof ByteLRU` (imported) instead of a string match on constructor.name.
- #7 discovery.js: emit a debug log when the PERCY_CACHE_WARN_THRESHOLD_BYTES override is active, so support can spot misconfigured overrides.
- #8 README: note decimal MB (1,000,000 bytes) vs binary MiB.

Coverage fill-in (closes gaps visible in an earlier nyc run):
- byte-lru.test.js: .clear(), oversize re-set regression, onEvict reason='too-big' path.
- discovery.test.js:
  - --max-cache-ram between 25 and 50 MB warns without clamping.
  - PERCY_CACHE_WARN_THRESHOLD_BYTES override emits a debug log.
  - cache_eviction_started info log fires when the LRU evicts.
  - unsetModeBytes keeps growing post-warningFired (regression for #1).
  - sendCacheSummary swallows client rejections without throwing.
  - sendCacheSummary short-circuits when build / cache / stats are missing.

Tests: 19 unit specs (byte-lru) + 13 integration specs (discovery). Lint clean. Built dist/ regenerated.
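The must-fix #1 pattern — always increment the counter, and one-shot-gate only the log emission — can be sketched as follows. The names (`stats`, `WARN_THRESHOLD`, `log`) follow the commit message's vocabulary, but the body is an illustrative guess, not Percy's actual discovery.js code:

```javascript
// Sketch of the fix-#1 pattern: the byte counter must keep growing after
// the warning fires; only the warn log is one-shot.
const WARN_THRESHOLD = 500 * 1000 * 1000; // 500MB, decimal (hypothetical constant)

function trackCacheBytes(stats, entryBytes, log) {
  // BUG (before the fix): everything below sat inside `if (!stats.warningFired)`,
  // freezing unsetModeBytes at the threshold crossing.
  stats.unsetModeBytes += entryBytes;                       // always increment
  stats.peakBytes = Math.max(stats.peakBytes, stats.unsetModeBytes);

  if (!stats.warningFired && stats.unsetModeBytes >= WARN_THRESHOLD) {
    stats.warningFired = true;                              // gate only the log
    log.warn('Resource cache exceeded 500MB; consider --max-cache-ram');
  }
}
```

With this shape, a run that crosses the threshold still reports accurate peak bytes in later telemetry, while the warning fires exactly once.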
pranavz28 added a commit that referenced this pull request on May 6, 2026:
…95] (#2192)

* feat(core): add ByteLRU + entrySize helpers

  Hand-rolled byte-budget LRU cache backing the forthcoming --max-cache-ram flag. Pure/sync (no logger or external calls) so callers can log after .set() returns without risking an event-loop yield mid-mutation. Exposes a Map-compatible surface (.get/.set/.has/.delete/.values/.size) plus .calculatedSize and .stats for telemetry. entrySize() computes body-bytes + a fixed per-entry overhead, handling both single-resource entries and array-of-resources (root-resource with multiple widths from discovery.js:465). 16 unit specs covering LRU semantics, recency bump, multi-evict, oversized-entry skip, peak-bytes transient high-water, and array-entry sizing. Zero new dependencies.

* feat(cli-command): add --max-cache-ram flag (MB units)

  New flag/env/percyrc surface for the forthcoming bounded asset-discovery cache. The value is an integer MB (e.g. --max-cache-ram 300 means 300MB). Flows through env PERCY_MAX_CACHE_RAM or percyrc discovery.maxCacheRam. Precedence follows the existing Percy convention (flag > env > percyrc). Raw parse is Number; full validation happens at Percy startup once the flag is consumed (subsequent commit).

* feat(core): extend discovery config schema with maxCacheRam

  Add a maxCacheRam integer-or-null property under discovery. The value is the cap in MB (so percyrc users write 'maxCacheRam: 300' for 300MB, matching the flag). Null/unset preserves today's unbounded behavior. Schema validation catches non-integer and negative inputs at config load time; additional startup validation (e.g. a minimum floor based on MAX_RESOURCE_SIZE) happens in the discovery integration commit.

* feat(core): swap Map for ByteLRU when --max-cache-ram is set

  Replaces the unbounded resource cache Map at discovery.js:408 with a byte-budget-aware ByteLRU when percy.config.discovery.maxCacheRam is configured. Without the flag, behaviour is byte-for-byte identical to today (new Map(), no eviction).
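The byte-budget, Map-compatible LRU described in the first commit above can be sketched roughly like this. The names (`ByteLRU`, `.calculatedSize`, `onEvict`) follow the commit message; the internals — including using a Map's insertion order as the recency order — are an illustrative guess, not @percy/core's actual implementation, and the real class also tracks hit/miss/eviction stats:

```javascript
// Illustrative byte-budget LRU with a Map-compatible surface.
class ByteLRU {
  constructor({ maxBytes, sizeOf, onEvict }) {
    this.maxBytes = maxBytes;
    this.sizeOf = sizeOf;       // entry -> byte cost (entrySize() in the PR)
    this.onEvict = onEvict;     // (key, reason, value) callback
    this.map = new Map();       // insertion order doubles as recency order
    this.calculatedSize = 0;    // current byte total
  }

  get size() { return this.map.size; }
  has(key) { return this.map.has(key); }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);       // recency bump: re-insert as most-recently-used
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    const bytes = this.sizeOf(value);
    // reject oversize BEFORE touching any existing entry, so a failed
    // re-set cannot evict a prior valid value for the same key
    if (bytes > this.maxBytes) {
      this.onEvict?.(key, 'too-big', value);
      return false;
    }
    this.delete(key);           // reclaim the prior entry's footprint
    this.map.set(key, value);
    this.calculatedSize += bytes;
    // evict least-recently-used entries until we fit the budget
    while (this.calculatedSize > this.maxBytes) {
      const [oldKey, oldValue] = this.map.entries().next().value;
      this.delete(oldKey);
      this.onEvict?.(oldKey, 'evicted', oldValue);
    }
    return true;
  }

  delete(key) {
    if (!this.map.has(key)) return false;
    this.calculatedSize -= this.sizeOf(this.map.get(key));
    this.map.delete(key);
    return true;
  }

  clear() { this.map.clear(); this.calculatedSize = 0; }
}
```

Because `set()` and `get()` never yield to the event loop, callers can safely log after they return — the pure/sync design choice the commit message calls out.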
  In saveResource, oversized entries (size > cap) are skipped from the global cache with a debug log but still land in snapshot.resources so the current snapshot renders correctly. ByteLRU's onEvict callback emits a debug log for each LRU eviction.

  Startup validation (runs once in the discovery queue's start handler):
  - Rejects caps below 25MB (MAX_RESOURCE_SIZE floor) with a clear error
  - Warns on caps below 50MB (silently-useless territory)
  - Info log when --max-cache-ram and --disable-cache are both set (max-cache-ram becomes a no-op)

  A CACHE_STATS_KEY Symbol is exported alongside RESOURCE_CACHE_KEY to hold per-run stats the telemetry layer will read in a later commit. Existing discovery tests remain green.

* feat(core): add warning-at-threshold when --max-cache-ram is unset

  When the flag is unset (Map mode), track cumulative bytes written to the global resource cache in a side-channel counter. On the first crossing of 500MB, emit a one-shot warn-level log pointing the user at the --max-cache-ram flag before the CI runner OOMs.
  - warn level (not info) so --quiet users still see it
  - one-shot, guarded by a per-run stats flag — does not re-fire on shrink/regrow cycles, and is bypassed entirely when the flag is set (opt-in users do not need the discovery hint)
  - threshold override via PERCY_CACHE_WARN_THRESHOLD_BYTES for post-ship tuning (undocumented)

  This is the primary discovery mechanism for the flag — users find it through normal CI output before they need support.

* feat(core): emit cache_eviction_started + cache_summary telemetry

  Two CLI-side events travel through the existing sendBuildEvents pipeline (UDP pager → Amplitude). No Percy API changes.
  - cache_eviction_started — fires exactly once per run, from ByteLRU's onEvict callback on the first LRU eviction. Payload includes the configured budget, peak bytes at eviction time, and the eviction count. Emits an info log alongside, telling the user eviction is active.
  - cache_summary — fires once per run from Percy.stop()'s finally block. Payload includes budget + hits/misses/evictions/peak_bytes/final_bytes/entry_count/oversize_skipped. Feeds Amplitude for adoption, hit-rate, and sizing-calibration metrics post-GA.

  Both are fire-and-forget; exceptions are logged at debug and swallowed so telemetry loss cannot fail a Percy run (same pattern as sendBuildLogs at percy.js:707). Both gate on percy.build?.id being set so they cannot emit before the build exists.

* test(core): integration coverage for --max-cache-ram

  Two new describe blocks in discovery.test.js:

  'with --max-cache-ram' (5 specs):
  - installs a ByteLRU when the flag is set (type + initial stats shape)
  - falls back to a plain Map when the flag is unset (backward compat)
  - rejects a cap below the 25MB MAX_RESOURCE_SIZE floor with a clear error
  - emits an info log when the flag and --disable-cache are both set
  - records oversize_skipped and leaves calculatedSize at 0 when a single entry exceeds the cap

  'warning-at-threshold (unset cap)' (1 spec):
  - the PERCY_CACHE_WARN_THRESHOLD_BYTES override triggers the warn log once; it does not re-fire on a subsequent snapshot run (the one-shot gate holds).

  Filter: --filter='max-cache-ram|warning-at-threshold' runs 6 of 149 specs in ~16s for focused iteration. Full suite stays green.

* docs(core): document --max-cache-ram flag

  Adds maxCacheRam to the discovery options list in the @percy/core README. Covers: value semantics (MB), default (unset/unbounded), eviction behavior, the cap-body-bytes vs RSS relationship users need to know for sizing, the 25 MB floor, and the three equivalent surfaces (flag / env / percyrc).

* fix(core): clamp --max-cache-ram below 25MB instead of failing

  Follow-up to the initial --max-cache-ram implementation. Previously a value below the 25 MB MAX_RESOURCE_SIZE floor threw an error at Percy startup, killing otherwise-healthy builds for a misconfigured cap.
  Switch to a warn-level log and continue with the 25 MB minimum:

    --max-cache-ram=10MB is below the 25MB minimum (individual resources up to 25MB would otherwise be dropped). Continuing with the minimum: 25MB.

  Also mutates percy.config.discovery.maxCacheRam to the clamped value so the cache_summary telemetry event reports the effective cap.

  Updates:
  - discovery.js — throw → warn + clamp
  - discovery.test.js — integration test asserts the warn log + clamped cap
  - README.md — docs reflect the clamp behaviour
  - cli-command/test/flags.test.js — stale help-text fixture: inserts the --max-cache-ram line between --disable-cache and --debug (broken since the flag was added; unrelated to the clamp, bundled here since both are help-surface / startup-UX fixes)

  Verified in anger with percy snapshot --max-cache-ram 10 against https://example.com (build #356):

    [percy:core] --max-cache-ram=10MB is below the 25MB minimum (individual resources up to 25MB would otherwise be dropped). Continuing with the minimum: 25MB.

* fix(core): address PR review (frozen counter, reorder checks, reorder egress, effective cap)

  Addresses items 1-8 from PR #2192 review.

  Must-fix:
  - #1 discovery.js: always increment unsetModeBytes; only gate the warn-log emission on warningFired. Previously the byte counter froze at the threshold crossing, so cache_summary.peak_bytes misreported every Map-mode run that hit the threshold.
  - #2 byte-lru.js: reorder ByteLRU.set — reject oversize BEFORE touching any existing entry. Fixes a failed oversize re-set silently evicting the prior (valid) value for the same key.

  Should-fix:
  - #3 percy.js: move sendCacheSummary AFTER sendBuildLogs in stop()'s finally. A slow/stalled pager hop on the analytics event can no longer delay the primary log egress.
  - #4 discovery.js: do not mutate percy.config.discovery.maxCacheRam on clamp. Store effectiveMaxCacheRamMB on CACHE_STATS_KEY; percy.js sendCacheSummary reads it from there. User config stays read-only.

  Nits:
  - #5 discovery.js: read PERCY_CACHE_WARN_THRESHOLD_BYTES once at queue construction instead of on every saveResource.
  - #6 discovery.test.js: use `instanceof ByteLRU` (imported) instead of a string match on constructor.name.
  - #7 discovery.js: emit a debug log when the PERCY_CACHE_WARN_THRESHOLD_BYTES override is active, so support can spot misconfigured overrides.
  - #8 README: note decimal MB (1,000,000 bytes) vs binary MiB.

  Coverage fill-in (closes gaps visible in an earlier nyc run):
  - byte-lru.test.js: .clear(), oversize re-set regression, onEvict reason='too-big' path.
  - discovery.test.js:
    - --max-cache-ram between 25 and 50 MB warns without clamping.
    - PERCY_CACHE_WARN_THRESHOLD_BYTES override emits a debug log.
    - cache_eviction_started info log fires when the LRU evicts.
    - unsetModeBytes keeps growing post-warningFired (regression for #1).
    - sendCacheSummary swallows client rejections without throwing.
    - sendCacheSummary short-circuits when build / cache / stats are missing.

  Tests: 19 unit specs (byte-lru) + 13 integration specs (discovery). Lint clean. Built dist/ regenerated.

* test(core): close remaining coverage gaps

  Targets the branches nyc still flagged after the review-fix commit:

  byte-lru.js:
  - .delete() on a non-existent key returns false (line 66)
  - entrySize() handles null entries + entries without content inside an array (line 97 optional-chain branches)

  discovery.js:
  - fireCacheEventSafe's .catch debug-log path (line 440) — spy sendBuildEvents to reject, force an eviction, microtask-wait
  - saveResource oversize-skip branch (lines 598-607) — serve a 25MB resource from the test server so the real intercept flow triggers the oversize path, not just the direct ByteLRU.set test

  percy.js:
  - sendCacheSummary entry_count '?? 0' fallback (line 409) — call directly with a defensive-shape cache lacking .size

  21 unit specs + additional integration specs; lint clean.
* feat(core): spill evicted resources to disk, restore on lookup

  Extends --max-cache-ram with a disk-backed overflow tier so evictions no longer drop resources. When the in-memory ByteLRU evicts, the full resource is written to a per-run temp directory under os.tmpdir() and a slim metadata reference stays in memory. getResource falls through RAM miss → disk tier before the browser ever refetches from origin. On any disk I/O failure we return false/undefined and fall back to the old drop behaviour — the browser refetches exactly as it would without spill, so disk-tier failure is strictly additive.

  What lives in byte-lru.js:
  - ByteLRU.onEvict(key, reason, value) — adds the evicted value so the discovery wiring can capture it before it is GC'd.
  - DiskSpillStore — sync mkdirSync/writeFileSync/readFileSync/rmSync. Counter-based filenames (no URL-derived data flows to path.join). Self-healing index: a read failure purges the stale entry so the next lookup cleanly misses. Best-effort destroy swallows errors.
  - createSpillDir — os.tmpdir()/percy-cache-<pid>-<random-hex>.
  - lookupCacheResource — pulled out of the inline getResource closure so the RAM-miss-to-disk-hit path is directly unit-testable.

  Discovery wiring (discovery.js):
  - The start handler constructs the DiskSpillStore alongside the ByteLRU.
  - onEvict calls diskStore.set(key, value); the debug log differentiates `cache spill:` (success) from `cache evict:` (disk failed or disk not ready).
  - saveResource clears any prior disk entry up front so a fresh discovery write wins over a spilled copy — prevents a race where getResource would serve stale content.
  - The end handler calls diskStore.destroy(); cleanup errors are swallowed by DiskSpillStore so they cannot fail percy.stop().

  Telemetry (percy.js sendCacheSummary):
  Eight new disk-tier fields on cache_summary: disk_spill_enabled, disk_spilled_count, disk_restored_count, disk_spill_failures, disk_read_failures, disk_peak_bytes, disk_final_bytes, disk_final_entries.
  cache_eviction_started also carries disk_spill_enabled so the dashboard can distinguish "disabled" from "enabled with zero activity" from "enabled but failing."

  Tests (all pass locally):
  - 44 unit specs in byte-lru.test.js exercise ByteLRU, entrySize, DiskSpillStore, createSpillDir, and lookupCacheResource end-to-end — including init failure via /dev/null, write failure via a mocked EACCES, read self-heal via a mocked ENOENT, unlinkSync error tolerance on delete + overwrite, destroy error swallowing, the not-ready short-circuit branches, and array-root width matching.
  - 8 integration specs in discovery.test.js cover DiskSpillStore presence, spill-on-eviction, byte-for-byte rehydration, ENOSPC fallback, saveResource clearing stale entries, queue-end teardown (asserts both the disk.ready flag and fs.existsSync for race-safety), and graceful handling when the store fails to init.

  Zero new npm dependencies (fs/os/path/crypto are built-ins). Node engine unchanged. Clean semgrep run on all changed files.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(core): preserve disk-tier stats for cache_summary telemetry

  Real-build verification surfaced an ordering bug: percy.stop() calls discovery.end() before sendCacheSummary, and the 'end' handler destroys the DiskSpillStore and deletes percy[DISK_SPILL_KEY]. sendCacheSummary then read a null diskStore and emitted disk_spill_enabled=false with all eight disk_* fields zeroed, regardless of actual activity. A run that spilled 97 resources and restored 96 from disk shipped zeros to Percy.

  Snapshot the disk stats onto stats.finalDiskStats in the 'end' handler before destroy() runs. sendCacheSummary prefers the live store and falls back to the snapshot, so both the in-discovery path (existing tests) and the post-discovery path (real builds) populate the telemetry correctly.

  Two new specs cover (a) sendCacheSummary using the finalDiskStats fallback when DISK_SPILL_KEY is unset, and (b) the discovery 'end' handler copying diskStore.stats onto the snapshot before destroy.

* fix(core): drop dead stats guard in disk-spill end handler

  CACHE_STATS_KEY is always set alongside DISK_SPILL_KEY in the 'start' handler, so the false branch of `if (stats)` was unreachable and showed up as a 99.46% branch-coverage gap on discovery.js line 553. Drop the guard; write directly through the stats reference. Behavior is unchanged — coverage lands back at 100/100/100/100.

* test(core): make DiskSpillStore mkdir-failure tests Windows-portable

  Four specs forced mkdirSync to fail by passing '/dev/null/cannot-mkdir-here'. On POSIX, /dev/null is a character device so the nested path errors with ENOTDIR. On Windows, /dev/null does not exist and mkdirSync(..., recursive) just creates the directory tree, leaving the store in the ready state and flipping every assertion. Spy on fs.mkdirSync and throw directly — same behavior on every platform, no path tricks.

* test(core): drop redundant disk-stats integration test

  The discovery 'end' handler is already covered by the 'calls diskStore.destroy ... in the queue end handler' test. Extend that existing test to also assert that stats.finalDiskStats is snapshotted before destroy clears the live store, instead of duplicating the percy.start/stop dance in a second spec. Removes a percy.stop() graceful path that doubled the file's end-to-end cost on slower runners (Windows core was hitting 6h job timeouts) without adding meaningful coverage.

* refactor(core): address review nits — leak fix, dedupe, perms, drop dead code

  Five small follow-ups from PR review:
  1. The discovery 'end' handler now wraps browser.close() in try/finally so the per-run spill dir is destroyed even when the browser teardown throws, instead of leaking under os.tmpdir().
  2. fireCacheEventSafe (discovery) and sendCacheSummary (percy) shared the exact { message, cliVersion, clientInfo, extra } shape. Extracted a single sendCacheTelemetry(message, extra) instance method on Percy and routed both call sites through it, so the pager schema only lives in one place.
  3. DiskSpillStore mkdirSync now uses mode 0o700 — spilled bytes are origin-refetchable so the threat model is small, but on shared-tenant CI hosts other users on the same box should not be able to read them.
  4. ByteLRU.values() was unreferenced anywhere in src/ and only had a half-implemented iterator (no .return/.throw). Dropped it (and its test) per YAGNI; if a future caller needs Map-style iteration they can use the standard protocol on a real Map.
  5. Two new unit tests:
     - lookupCacheResource on a disk index hit whose readFileSync throws — covers the combined self-heal + caller-refetch path that the existing isolated tests miss.
     - DiskSpillStore handles back-to-back saves of the same URL without doubling the index, the byte total, or leaving prior spill files on disk. A regression guard for any future parallel-discovery work.

* fix(core): restore strict pre-refactor parity in cache telemetry path

  Two parity restores from the second-pass review on aeae5da:
  1. sendCacheSummary's outer try/catch was lost when the egress moved into sendCacheTelemetry. Nothing in the payload realistically throws today, but the original contract was "nothing in sendCacheSummary can fail percy.stop()", not "nothing in the egress". Re-wrap the whole method.
  2. fireCacheEventSafe lost its Promise.resolve().then(...) microtask hop; without it, sendCacheTelemetry's synchronous prelude (header build, payload serialization inside the client) ran inside the onEvict callback. The eviction event fires once per run so this is noise in practice, but the hop keeps eviction-loop timing bit-for-bit identical to the pre-refactor version.

* test(core): cover sendCacheSummary payload-construction catch

  The outer try/catch restored in f92aee6 was uncovered (percy.js:442 showed 99.74% line coverage, blocking the 100% gate). Adds a spec that makes the cache.stats getter throw, exercising the catch and asserting that sendCacheSummary still resolves and emits a 'cache_summary build failed' debug log.

* test(core): assert cache_summary build catch via spy, not logger mock

  The previous spec relied on logger.mock({ level: 'debug' }) to capture the catch-branch debug log, but logger.mock has to run before percy.start to stub the right writer — by then percy was already started in the outer beforeEach, so logger.stderr stayed empty and the assertion failed on Linux CI (it only looked fine locally because of preceding crashed-Chromium output bleed). Spy on percy.log.debug directly to assert the call without depending on stream-capture order.

* test(core): force-stop instead of graceful in disk-spill destroy spec

  The 'end' handler runs the same way under both stop modes, so we still exercise the destroy path + finalDiskStats snapshot. Force-stop skips the flush + saveHostnamesToAutoConfigure pipeline, which is the slow path on Windows runners and the most likely contributor to the core test job blowing past master's 40-minute baseline.

* fix(core): guard fireCacheEventSafe microtask chain against unhandled rejection

  Node 14 (still the engine on the Windows CI runner) treats unhandled promise rejections as fatal in some configurations. sendCacheTelemetry already catches pager errors, but if its catch arm itself ever throws — e.g. a stubbed log.debug in a test — the rejection escapes the chain and can take down the runner. A trailing .catch(() => {}) closes that window without changing observable behavior.

* test(core): cover fireCacheEventSafe trailing .catch

  The new defensive .catch on the discovery.js microtask chain was the last 0.04% gap on the function-coverage line.
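The fire-and-forget chain described in the last two commits — microtask hop, debug-and-swallow, trailing `.catch` guard — can be sketched like this. `fireCacheEventSafe` matches the name in the commit messages; `sendCacheTelemetry` and `log` are passed in here purely for illustration, whereas the real function reads them from its enclosing scope:

```javascript
// Illustrative fire-and-forget telemetry dispatch:
// 1. Promise.resolve().then(...) hops to a microtask so sendCacheTelemetry's
//    synchronous prelude never runs inside the onEvict callback.
// 2. The first .catch logs at debug and swallows — telemetry loss cannot
//    fail a Percy run.
// 3. The trailing .catch closes the window where a throwing catch arm
//    (e.g. a stubbed log.debug in a test) would escape as an unhandled
//    rejection, which some Node configurations treat as fatal.
function fireCacheEventSafe(sendCacheTelemetry, log, message, extra) {
  Promise.resolve()
    .then(() => sendCacheTelemetry(message, extra))
    .catch(err => log.debug(`${message} failed: ${err.message}`))
    .catch(() => {}); // last-resort guard; intentionally silent
}
```

Surviving a rejecting telemetry call plus a throwing `log.debug` without crashing the process is exactly the property the trailing `.catch` buys.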
  Drives sendCacheTelemetry to a rejection (spy + rejectWith) and lets two setImmediate ticks settle the chain — verifies the path is reachable without depending on the inner catch arm actually throwing in production. Locally green; the coverage gate should clear on the next CI run.

* fix(core): address PR review fixes — RAM promotion, array spill, byte-len, overcounting, schema floor

  Five fixes from PR #2192 review by amandeepsingh333:
  1. HIGH: Promote disk-tier hits back to RAM in lookupCacheResource. A hot URL evicted once should not pay a readFileSync round-trip on every subsequent access. After disk.get returns, re-insert into the ByteLRU and delete the disk entry — the LRU naturally re-evicts the actual coldest resource if room is needed.
  2. HIGH: Spill multi-width root arrays via JSON+base64. Arrays of root resources (multi-width DOM snapshots) used to silently fail to spill because resource.content is undefined on arrays — they fell through the `if (content == null) return false` and the whole array got dropped on eviction. Now serialize the array elementwise with binary content base64-encoded so the entire shape roundtrips through disk.
  3. MEDIUM: entrySize uses Buffer.byteLength for string content. content.length on a string returns JS string units (UTF-16 code units), not bytes, so multi-byte UTF-8 (CJK, emoji) was undercounted. The cache budget could drift past its cap.
  4. MEDIUM: Subtract the prior entry's size before overwriting in the unset-cap saveResource path. Shared CSS/JS reused across many snapshots used to inflate unsetModeBytes by N× because every saveResource added entrySize without reclaiming the prior entry's footprint, triggering the 500MB warning prematurely.
  5. LOW: Bump the config schema maxCacheRam minimum from 0 to 1. Zero has no meaningful semantics — null means unbounded, --disable-cache means off — so reject it at schema time rather than silently bumping to 25MB via the discovery clamp.

  Tests:
  - byte-lru.test.js: UTF-8 byte-length asserts, multi-width array spill roundtrip, array-decode self-heal, JSON.stringify-throws refusal, RAM-promotion + disk-cleanup verified via two consecutive lookups with a readFileSync spy on the second.
  - discovery.test.js: a new "does not over-count unsetModeBytes when the same URL is saved twice" regression guard for fix #4. The existing "keeps incrementing" test is rewired to use a direct cache.set instead of relying on browser discovery to add new URLs.

* test(core): cover byte-lru contentBytes/encode/decode fallthrough branches

  Adds three small tests to close the coverage gap that was failing the test:coverage gate (lines 96, 116, 125 in byte-lru.js):
  - contentBytes returns .length ?? 0 for non-Buffer non-string content
  - encodeArrayElement coerces non-Buffer non-null array content to string
  - decodeArrayElement passes null/string content through without decoding

* test(core): cover encode/decodeArrayElement falsy-entry early-return

  Adds a null entry to the multi-width round-trip array so both encode/decodeArrayElement's `if (!r) return r;` short-circuits are exercised, closing the remaining branch-coverage gap on byte-lru.js (lines 112, 120).

* test(core): cover saveResource race-window guard for stale disk entries

  The existing 'stale disk entry' test goes through lookupCacheResource's promotion path (which also disk.deletes), so saveResource's own guard at discovery.js:636 was uncovered. Force a lookup miss while the disk entry remains, so the saveResource branch fires — which is exactly the race the comment above the guard describes.

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
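Fix #3 above is worth a concrete illustration: `String.prototype.length` counts UTF-16 code units, while the cache budget needs bytes. A sketch of a byte-accurate sizing helper — `contentBytes` matches the name mentioned in the coverage commit, but the body here is an assumption, and the real `entrySize()` also adds a fixed per-entry overhead not shown:

```javascript
// Byte-accurate content sizing: .length on a string counts UTF-16 code
// units, so multi-byte UTF-8 (CJK, emoji) would be undercounted and the
// cache budget could drift past its cap.
function contentBytes(content) {
  if (Buffer.isBuffer(content)) return content.length;          // already bytes
  if (typeof content === 'string') return Buffer.byteLength(content, 'utf8');
  return content?.length ?? 0;  // defensive fallthrough for odd shapes
}
```

For example, `'日本語'` has `.length === 3` but occupies 9 bytes in UTF-8 — exactly the 3× undercount the review flagged.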
Manoj-Katta added a commit that referenced this pull request on May 13, 2026:
…rectly (PPLT-4214)

Bundle of ce:review + PR review fixes for the v143 PlzDedicatedWorker direct-fetch fallback path in network.js. Closes the L755 coverage gap that's been blocking CI since the unstable worker-based Test D was reverted in 2d48f20.

Renames:
- captureScriptDirectly → captureResourceDirectly. The function captures any allowlisted resource that fell through to direct fetch, not just worker scripts; the old name was misleading. Module-private; no external callers affected.

Must-fix (P1/P2 production-risk items):
- Cookies: read from network.page.session (full Network domain) instead of the triggering session, since worker sessions throw "Internal error" on Network.getCookies. Defensive try/catch retained.
- DIRECT_FETCH_TIMEOUT (5s) caps captureResourceDirectly via Promise.race; prevents idle() from hanging for the full networkIdleWaitTimeout (~30s) when a worker host accepts TCP and stalls. (P1 #1)
- makeDirectRequest now returns { body, status }; captureResourceDirectly enforces the 25MB MAX_RESOURCE_SIZE guard before saveResource and records the real HTTP status instead of a hardcoded 200. The font-path call site is updated to destructure. (P1 #2)
- The direct-fetch gate at _handleLoadingFinished mirrors saveResponseResource's disallowedHostnames-then-allowedHostnames precedence; emits a debug log when the fallback is skipped due to hostname gating. (P2 #10 + C12)
- The Authorization header in makeDirectRequest now requires the target origin to match the page's snapshot origin; prevents a Basic-auth credential leak to redirected third-party origins. (P2 #11)

Reviewer polish:
- _handleResponsePaused malformed/oversized branch: Fetch.failRequest runs BEFORE _forgetRequest so Chrome's Fetch state can't leak paused if failRequest throws. Unknown errors trigger a last-resort Fetch.continueResponse to un-pause. Known races (ABORTED_MESSAGE / Invalid InterceptionId) remain silent. (P2 #4)
- Unknown errors in _handleResponsePaused's failRequest catch and _continueResponse's catch now log at warn (was debug) for production observability. (P2 #5 + C8)
- _handleResponsePaused: an inline comment explains when the untracked-request branch fires (service-worker-fulfilled responses or cleanup races). (C2)
- _handleResponsePaused: url normalization is now consistent between tracked and untracked paths via normalizeURL on the untracked fallback. (C3)
- parseInt now uses an explicit radix of 10 at both call sites. (C7)
- The RESPONSE_RECEIVED_TIMEOUT comment notes the cumulative N*2s worst case. (C9)

Test:
- New deterministic spec "logs gracefully when the direct-fetch fallback fails" exercises captureResourceDirectly's catch path by dropping Network.responseReceived for a CSS asset (no JS execution, no real worker). Closes the L755 coverage gap. (C1 / P1 #3)

Intentionally deferred (reviewer comments replied to separately):
- P2 #15 — the no-response branch flip-flop predates this PR (commit ae1d388, 2022-09-21); out of scope.
- P2 #6 — the sec-ch-ua hardcoded version is one of several stale literals in the makeDirectRequest header block; deferring a full audit.
- C4 + S4 — DISABLED_FEATURES extraction; the existing block-level comment is adequate.
- C5 — percy.test.js timing fix; reviewer pre-approved deferring.
- S2 / S3 — example / reference already present in existing comments.
- Favicon Task A — pending separate investigation of snapshot.test.js timing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
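The P1 #1 fix — capping the direct fetch with Promise.race so a stalled worker host cannot hold idle() for the full network-idle timeout — can be sketched as follows. `DIRECT_FETCH_TIMEOUT` and `captureResourceDirectly` follow the commit message; `withTimeout` and the `{ ok, ... }` result shape are illustrative stand-ins, not network.js's actual code:

```javascript
// Illustrative direct-fetch timeout guard.
const DIRECT_FETCH_TIMEOUT = 5000; // 5s cap from the commit message

function withTimeout(promise, ms, message) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(message)), ms);
  });
  // clear the timer whichever side settles, so the event loop can drain
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

async function captureResourceDirectly(makeDirectRequest, url) {
  try {
    // makeDirectRequest resolves { body, status } per the P1 #2 fix
    const { body, status } = await withTimeout(
      makeDirectRequest(url), DIRECT_FETCH_TIMEOUT,
      `direct fetch timed out: ${url}`);
    return { ok: true, body, status };
  } catch (err) {
    // failure is logged, never thrown to idle() — a stalled host costs at
    // most 5s instead of the ~30s networkIdleWaitTimeout
    return { ok: false, error: err.message };
  }
}
```

The key property is that a host which accepts the TCP connection and then stalls now resolves as a capture failure after 5s, rather than pinning the network-idle wait.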
Bumps @babel/register from 7.10.4 to 7.10.5.
Release notes
Sourced from @babel/register's releases.
Changelog
Sourced from @babel/register's changelog.
Commits
- `f7964a9` v7.10.5
- `ffb4248` Bump some deps for audit (#11818)
- `1a65ba7` test: add console warn spy on babel-register tests (#11780)
- `ae1e40a` chore: remove babel-register generated test artifacts (#11776)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)