Improve interactive visualization server performance and UI #1293
sbryngelson merged 5 commits into MFlowCode:master from
Conversation
sbryngelson
commented
Mar 6, 2026
- Add bounded FIFO step cache (_step_cache.py) with prefetch support
- Add turbojpeg/Pillow JPEG encoding with LUT-based colormap for 2D
- Use Dash Patch() for partial browser updates in 2D and 3D modes
- Add server-side marching cubes for 3D isosurface rendering
- Add downsampled 3D array cache for iso/volume/slice modes
- Parallelize MPI rank file reads in native binary reader
- Fix colormap application (was rendering grayscale)
- Fix 3D axis rescaling when adjusting isosurfaces
- Fix slice mode not advancing timesteps
- Improve colorbar layout (bounded, two-digit scientific notation)
- Fix dark theme styling for dropdowns and radio buttons (Dash 4)
- Add float32 mesh data to halve 3D JSON payload size
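As a rough illustration of the cache described in the first bullet, here is a minimal, self-contained sketch. Names such as `StepCache`, the single-worker prefetch pool, and the eviction bound are assumptions for illustration, not the actual `_step_cache.py` implementation:

```python
import threading
from collections import OrderedDict
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Hashable


class StepCache:
    """Bounded FIFO cache keyed by timestep, with background prefetch."""

    def __init__(self, max_entries: int = 8):
        self._cache: "OrderedDict[Hashable, object]" = OrderedDict()
        self._max = max_entries
        self._lock = threading.Lock()
        self._in_flight: set = set()
        self._pool = ThreadPoolExecutor(max_workers=1, thread_name_prefix="prefetch")

    def load(self, step: Hashable, read_func: Callable) -> object:
        """Return cached data for *step*, calling read_func on a miss."""
        with self._lock:
            if step in self._cache:
                return self._cache[step]
        data = read_func(step)          # slow disk read outside the lock
        with self._lock:
            self._insert(step, data)
        return data

    def prefetch(self, step: Hashable, read_func: Callable) -> None:
        """Warm the cache for an adjacent step in the background."""
        with self._lock:
            if step in self._cache or step in self._in_flight:
                return
            self._in_flight.add(step)

        def _bg():
            try:
                data = read_func(step)
                with self._lock:
                    self._insert(step, data)
            finally:
                with self._lock:
                    self._in_flight.discard(step)

        self._pool.submit(_bg)

    def _insert(self, step, data):
        # FIFO eviction: OrderedDict preserves insertion order.
        if step not in self._cache and len(self._cache) >= self._max:
            self._cache.popitem(last=False)
        self._cache[step] = data
```

The `_in_flight` set prevents the prefetch worker from re-reading a step that is already being loaded, which is the behavior the later reviews discuss.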
Claude Code Review — Head SHA: 4ce5fd8, Files changed: 5

Summary

Findings
1. Dead code in 3D Patch path — slice mode branch is unreachable:

       _do_patch_3d = (
           _trig3
           and '.' not in _trig3
           and (
               (mode == 'isosurface' and _trig3.issubset(_PT_ISO)) or
               (mode == 'volume' and _trig3.issubset(_PT_VOL))
           )
       )

   Only the isosurface and volume modes can enter this block, yet a slice branch exists inside it that can never execute.
scikit-image is required for server-side marching cubes (3D isosurface mode). Adding it to pyproject.toml ensures it is installed in CI so pylint no longer reports E0401 import-error on skimage.measure. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Both packages install on Python 3.9-3.14. Add as hard viz dependencies so users get fast paths (server-side marching cubes, libjpeg-turbo JPEG) automatically. Retain try/except fallbacks for environments where the underlying C libraries (libjpeg-turbo) are absent at runtime. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Force-pushed 4ce5fd8 to c614263 (Compare)
Claude Code Review — Head SHA: c614263, Files changed: 6

Summary

Findings
1. Strict upper-bound pin on a hard dependency — "PyTurboJPEG<2.0".
2. Ephemeral thread pool per call:

       n_workers = min(len(rank_paths), 32)
       with ThreadPoolExecutor(max_workers=n_workers) as pool:
           results = list(pool.map(_read_one, rank_paths))

   A new pool with up to 32 threads is created and joined on every assemble() call.
3. Silent exception in the JPEG prefetch path:

       except Exception:  # pylint: disable=broad-except
           pass

   The failure is dropped without any logging.
4. Structure cache never invalidated on case-directory switch:

       _struct_cache: Dict[str, _SiloStructure] = {}  # key = rank directory path

5. Module-level thread pool instantiation at import time:

       _prefetch_pool = ThreadPoolExecutor(max_workers=1, thread_name_prefix='mfc_prefetch')
       _jpeg_pool = concurrent.futures.ThreadPoolExecutor(max_workers=1, thread_name_prefix='mfc_jpeg')

   Both pools spawn threads the moment the module is imported, even in test contexts or toolchain commands that import these modules but never start the viz server. Lazily initialising them would avoid this.

Minor / nit
- Lazy-init thread pools in _step_cache.py and interactive.py to avoid spawning threads at import time in non-viz toolchain contexts
- Use persistent module-level pool in reader.py (mirrors silo_reader.py) instead of creating/tearing down a new pool per assemble() call
- Log JPEG prefetch exceptions at DEBUG level instead of silently dropping
- Move turbojpeg and skimage imports to module top-level (remove fallback code now that both are hard dependencies)
- Add PyTurboJPEG<2.0 pin comment explaining libjpeg-turbo version constraint
- Add comments on benign _lut_cache/_cscale_cache write races and FIFO dict eviction relying on CPython 3.7+ insertion-order guarantee

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Claude Code Review — Files changed: 6

Summary

Findings
1. Unconditional top-level import crashes if native libjpeg-turbo is absent:

       from turbojpeg import TurboJPEG as _TurboJPEG
       ...
       _tj = _TurboJPEG()  # raises OSError if libjpeg-turbo .so is not found

   A fallback such as

       try:
           from turbojpeg import TurboJPEG as _TurboJPEG
           _tj = _TurboJPEG()
       except (ImportError, OSError):
           _tj = None

   would avoid crashing at import time.
2. Unconditional top-level import of skimage:

       from skimage.measure import marching_cubes as _marching_cubes

   Same problem: importing it has no fallback.
3. Thread pool sizing:

       _READ_POOL = ThreadPoolExecutor(max_workers=32, ...)

   On a Lustre or GPFS parallel filesystem (which all MFC target clusters use), 32 concurrent readers per step can saturate the metadata server and cause worse throughput than sequential reads, especially with many ranks. Consider bounding by the actual number of ranks or a smaller cap.
4. Stale inline comment:

       _get_prefetch_pool().submit(_bg_load, s, read_func)  # s is a key here

5.
6. PyTurboJPEG version pin may be too restrictive:

       "PyTurboJPEG<2.0",  # 2.x requires libjpeg-turbo>=3; most systems ship 2.x

Minor
…mment
- Wrap TurboJPEG instantiation in try/except (ImportError, OSError) so that HPC clusters without libjpeg-turbo in LD_LIBRARY_PATH fall back to Pillow instead of crashing all viz modes at import time
- Remove unreachable slice branch inside _do_patch_3d block (slice always forces a full render; the branch could never execute)
- Remove stale '# s is a key here' comment in _step_cache.py

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Claude Code Review — Files changed: 6

Summary

Findings
1. Hard import — no fallback for 2D-only users (interactive.py, diff line 21):

       from skimage.measure import marching_cubes as _marching_cubes  # type: ignore[import]

   This is a bare module-level import with no fallback. Compare:

       try:
           from turbojpeg import TurboJPEG as _TurboJPEG
           _tj = _TurboJPEG()
       except (ImportError, OSError):
           _tj = None

   Suggestion: apply the same pattern:

       try:
           from skimage.measure import marching_cubes as _marching_cubes
       except ImportError:
           _marching_cubes = None

   Then guard the call site.
2. Thread pools could overwhelm disk I/O on large MPI runs (reader.py line ~37, silo_reader.py line ~200). Both the binary and Silo thread pools are created with max_workers=32. Suggestion: cap at the number of ranks actually present, or use a reasonable I/O concurrency limit.
3. Log-scale inconsistency between two code paths in interactive.py. The main callback applies:

       display = np.where(raw > 0, np.log10(np.maximum(raw, 1e-300)), np.nan)

   These differ at zero/negative grid values: the main path fills them with NaN. Suggestion: extract a shared helper.
4. seed() does not cancel in-flight prefetch futures (_step_cache.py):

       def seed(step: int, data: object) -> None:
           with _lock:
               _cache.clear()
               _cache_order.clear()
               _in_flight.clear()  # futures still running — can re-insert after this
               _cache[step] = data
               _cache_order.append(step)

   Low priority.
5. Duplicated range/log computation in interactive.py. The ~40-line range estimation + log transform + subsampling block appears in more than one place. Suggestion (improvement opportunity): extract a helper.

Minor notes
Review Summary by Qodo

Improve interactive visualization server performance and UI

Walkthrough

Description
• Add bounded FIFO step cache with prefetch support for faster timestep navigation
• Implement server-side marching cubes for 3D isosurface rendering (10-100× faster)
• Add LUT-based JPEG encoding with libjpeg-turbo for 2D visualization
• Use Dash Patch() for partial browser updates in 2D and 3D modes
• Add downsampled 3D array cache and parallel MPI rank file reads
• Fix colormap application, 3D axis rescaling, slice mode timesteps, and dark theme styling
• Improve colorbar layout with bounded size and two-digit scientific notation
• Add float32 mesh data to reduce 3D JSON payload by ~50%

Diagram

    flowchart LR
        A["Step Cache<br/>with Prefetch"] --> B["Parallel<br/>File Reads"]
        B --> C["Downsampled<br/>3D Arrays"]
        C --> D["Server-side<br/>Marching Cubes"]
        D --> E["JPEG Encoding<br/>with LUT"]
        E --> F["Dash Patch<br/>Updates"]
        F --> G["Fast Browser<br/>Rendering"]

File Changes
1. toolchain/mfc/viz/_step_cache.py
Code Review by Qodo

1. JPEG fallback lacks Pillow
CodeRabbit Walkthrough

This pull request introduces asynchronous prefetching, caching, and parallel data reading throughout the visualization pipeline. The step cache now performs background loading of adjacent steps with in-flight tracking. The interactive visualization module adds server-side mesh computation via marching cubes, LUT-based image rendering, and JPEG caching. Data readers implement thread pool-based parallel file access across ranks. Per-variable loading optimization is added to reduce data fetching overhead. New dependencies PyTurboJPEG and scikit-image are introduced for image processing. Overall, the changes shift from sequential, single-threaded operations to asynchronous, parallel-aware implementations with caching layers.
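The LUT-based colormap idea mentioned in the walkthrough reduces per-pixel colormap work to a single table lookup. A dependency-free sketch of the idea follows; the real code vectorises this with NumPy and matplotlib colormaps, and `build_lut` / `colorize` are illustrative names:

```python
def build_lut(stops, n=256):
    """Linearly interpolate a list of RGB stops into an n-entry lookup table."""
    lut = []
    for i in range(n):
        t = i / (n - 1) * (len(stops) - 1)
        lo = min(int(t), len(stops) - 2)
        f = t - lo
        lut.append(tuple(
            round(stops[lo][c] * (1 - f) + stops[lo + 1][c] * f) for c in range(3)
        ))
    return lut


def colorize(values, vmin, vmax, lut):
    """Map scalar values to RGB: one clamp and one table index per pixel."""
    scale = max(vmax - vmin, 1e-30)
    out = []
    for v in values:
        idx = int(min(max((v - vmin) / scale * 255.0 + 0.5, 0), 255))
        out.append(lut[idx])
    return out
```

Building the 256-entry table once per colormap amortises the interpolation cost, so per-frame work is dominated by the cheap index lookups.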
Claude Code Review — Head SHA: 1c0f4f8, Files changed: 6

Summary

Findings
1. Misleading function name — _make_png_source returns a JPEG data URI.
2. Docstring/return-value mismatch — the docstring promises five return values, but the function returns only four:

       return sliced[::s1, ::s2], c1[::s1], c2[::s2], actual

   (no const_axis_value)
3. Unconditional import:

       from skimage.measure import marching_cubes as _marching_cubes

   If scikit-image is missing, the module fails to import.
4. ThreadPoolExecutor with max_workers=32 in reader.py
5. Unlocked cache writes
6. JPEG quality hardcoded in two places

Minor / Improvement Opportunities
Pull request overview
This PR enhances the Python-based mfc viz --interactive visualization server by reducing disk I/O overhead, shrinking browser payloads, and improving UI responsiveness for 2D/3D rendering workflows.
Changes:
- Adds bounded step caching + background prefetch (including single-variable reads in interactive mode).
- Improves 2D rendering performance via LUT-based colormap + JPEG encoding and uses Dash
Patch()for partial updates. - Speeds up multi-rank reads and 3D rendering via parallel rank file reads, downsampled 3D caching, and server-side marching cubes.
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 9 comments.
Show a summary per file
| File | Description |
|---|---|
| toolchain/pyproject.toml | Adds new visualization dependencies (PyTurboJPEG, scikit-image). |
| toolchain/mfc/viz/viz.py | Wires interactive mode to optionally load a single variable per step. |
| toolchain/mfc/viz/silo_reader.py | Adds per-rank structure caching and parallel rank reads via a persistent thread pool. |
| toolchain/mfc/viz/reader.py | Parallelizes per-rank binary file reads via a persistent thread pool. |
| toolchain/mfc/viz/interactive.py | Major interactive server changes: Patch-based updates, LUT colormap + JPEG, downsample caches, marching cubes, UI tweaks. |
| toolchain/mfc/viz/_step_cache.py | Implements bounded FIFO cache with background prefetch and in-flight tracking. |
    ilo = cmin + rng3 * float(iso_min_frac or 0.2)
    ihi = cmin + rng3 * max(float(iso_max_frac or 0.8), ilo + 0.01)

In the 3D Patch() path, ihi is computed by comparing a fraction (iso_max_frac) against an absolute value (ilo + 0.01). This mixes units and can make the isosurface band far too wide or too narrow depending on the data scale. Compute the minimum span in fraction-space (e.g., max(iso_max_frac, iso_min_frac + 0.01)) before converting to the absolute cmin + rng * frac range.

Suggested change:

    iso_min_f = float(iso_min_frac or 0.2)
    iso_max_f = float(iso_max_frac or 0.8)
    # Enforce a minimum span in fraction space before mapping to data values
    iso_max_f = max(iso_max_f, iso_min_f + 0.01)
    ilo = cmin + rng3 * iso_min_f
    ihi = cmin + rng3 * iso_max_f
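A quick numeric check with hypothetical values shows why mixing fraction and data units in this expression matters:

```python
# Hypothetical data range: cmin = 1000.0, span rng3 = 500.0
cmin, rng3 = 1000.0, 500.0
iso_min_frac, iso_max_frac = 0.2, 0.8

ilo = cmin + rng3 * iso_min_frac                      # 1100.0, in data units
# Buggy: compares the fraction 0.8 with the absolute value 1100.01
ihi_buggy = cmin + rng3 * max(iso_max_frac, ilo + 0.01)
# Fixed: enforce the minimum span in fraction space first
ihi_fixed = cmin + rng3 * max(iso_max_frac, iso_min_frac + 0.01)

print(ihi_buggy)   # ≈ 551005 — an absurdly wide isosurface band
print(ihi_fixed)   # 1400.0
```

Because ilo is already scaled into data units, the max() in the buggy form is always won by the absolute value once the data range exceeds order one, blowing up the upper threshold.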
    Returns (sliced_ds, coord1_ds, coord2_ds, actual, const_axis_value)
    where sliced_ds is the downsampled surfacecolor array and coord*_ds are
    the downsampled coordinate vectors for the two varying axes.

_slice_3d's docstring says it returns 5 values including const_axis_value, but the function currently returns 4 values. Please update the docstring to match the actual return signature (or return the additional value if it's needed by callers).

Suggested change:

    Returns (sliced_ds, coord1_ds, coord2_ds, actual) where ``sliced_ds`` is
    the downsampled surfacecolor array, ``coord*_ds`` are the downsampled
    coordinate vectors for the two varying axes, and ``actual`` is the
    coordinate value along the constant slice axis.
    def load(step: int, read_func: Callable) -> object:
        """Return cached data for *step*, calling *read_func* on a miss.

        read_func is called *before* eviction so that a failed read (e.g. a
        missing or corrupt file) does not discard a valid cache entry.
        The disk read is performed outside the lock so concurrent callbacks
        for different steps do not serialize behind a single I/O operation.
        """

load is now used with non-int keys (e.g. (step, var) tuples in interactive mode), but its signature/types and docstring still imply step: int. Consider updating the parameter name/type (e.g. key: Hashable) so callers and type checkers don't get misleading guidance.
    _READ_POOL = ThreadPoolExecutor(
        max_workers=32, thread_name_prefix='mfc_binary'
    )

The reader thread pool uses a hard-coded max_workers=32. On machines with fewer cores (or when len(rank_paths) is small), this can cause unnecessary thread overhead; on shared filesystems it can also create an avoidable I/O burst. Consider capping workers with something like min(32, len(rank_paths), os.cpu_count() or 1).

Suggested change:

    max_workers = min(32, os.cpu_count() or 1)
    _READ_POOL = ThreadPoolExecutor(
        max_workers=max_workers, thread_name_prefix='mfc_binary'
    )
    def _get_pool() -> ThreadPoolExecutor:
        """Return a module-level thread pool, creating it on first use."""
        global _READ_POOL  # pylint: disable=global-statement
        with _POOL_LOCK:
            if _READ_POOL is None:
                _READ_POOL = ThreadPoolExecutor(
                    max_workers=32, thread_name_prefix="mfc_silo"
                )
                atexit.register(_READ_POOL.shutdown, wait=False)
            return _READ_POOL

The silo reader thread pool is created with a fixed max_workers=32. Consider sizing this based on len(ranks) and CPU count to avoid oversubscribing threads and overloading shared filesystems when many users run viz concurrently.
    vlo = cmin + rng3 * float(vol_min_frac or 0.0)
    vhi = cmin + rng3 * max(float(vol_max_frac or 1.0), vlo + 0.01)

In the 3D Patch() volume path, vhi is computed using max(vol_max_frac, vlo + 0.01), which compares a fraction to an absolute value (vlo is already scaled into data units). This will produce incorrect thresholds. Use max(vol_max_frac, vol_min_frac + 0.01) (both fractions) and then scale with cmin + rng * frac.

Suggested change:

    vmin_f = float(vol_min_frac or 0.0)
    vmax_f = float(vol_max_frac or 1.0)
    hi_frac = max(vmax_f, vmin_f + 0.01)
    vlo = cmin + rng3 * vmin_f
    vhi = cmin + rng3 * hi_frac
    def _make_png_source(arr_yx: np.ndarray, cmap_name: str,
                         vmin: float, vmax: float) -> str:
        """Encode a (ny, nx) float array as a colorized base64 JPEG data URI.

        Uses a 256-entry LUT for fast colormap application and libjpeg-turbo
        (when available) for JPEG encoding.

        arr_yx: shape (ny, nx), row 0 = smallest y (physical bottom).
        No vertical flip is applied — go.Image with y0=y_min, dy>0 renders
        row 0 at the bottom of the y-axis, matching physics convention.
        """
        lut = _get_lut(cmap_name)
        scale = max(float(vmax - vmin), 1e-30)
        normed = np.clip(
            (arr_yx - vmin) / scale * 255.0 + 0.5, 0, 255
        ).astype(np.uint8)
        rgb = lut[normed]  # (ny, nx, 3) uint8
        b64 = base64.b64encode(_encode_jpeg(rgb)).decode()
        return f"data:image/jpeg;base64,{b64}"

_make_png_source returns a data:image/jpeg URI and uses JPEG encoding, but the function name/comments still refer to PNG. Renaming (or switching back to PNG) would avoid confusion for future maintenance and debugging.
    "textual-plotext>=0.2.0",
    "dash>=2.0",
    "plotly",
    "PyTurboJPEG<2.0",  # 2.x requires libjpeg-turbo>=3; most systems ship 2.x

interactive.py falls back to importing PIL when libjpeg-turbo isn't available, but Pillow isn't declared as a direct dependency here. That can make mfc viz --interactive fail at runtime on systems where PyTurboJPEG can't load the shared library. Either add an explicit pillow dependency or handle the missing Pillow import with a clearer error / alternative path.

Suggested change:

    "PyTurboJPEG<2.0",  # 2.x requires libjpeg-turbo>=3; most systems ship 2.x
    "pillow",
    @app.callback(
        Output('vmin-inp', 'value'),
        Output('vmax-inp', 'value'),
        Input('reset-btn', 'n_clicks'),
        prevent_initial_call=True,
    )
    def _reset_range(_reset):
        return None, None

vmin-inp / vmax-inp are no longer cleared when the variable changes (the var-sel input was removed from _reset_range, along with its _var parameter). The _update callback temporarily ignores stale values when var-sel is the trigger, but the stale numbers remain in the inputs and will be applied again on the next step-sel change. Consider restoring the var-change reset (or otherwise clearing these inputs) so manual ranges don't "stick" across variables unintentionally.
    def _encode_jpeg(rgb: np.ndarray) -> bytes:
        """Encode (h, w, 3) uint8 RGB → JPEG bytes.

        Uses libjpeg-turbo when available; falls back to Pillow otherwise.
        """
        if _tj is not None:
            return _tj.encode(rgb, quality=90)
        from PIL import Image as _PIL  # pylint: disable=import-outside-toplevel
        import io as _io  # pylint: disable=import-outside-toplevel
        buf = _io.BytesIO()
        _PIL.fromarray(rgb, 'RGB').save(buf, format='jpeg', quality=90, optimize=False)
        return buf.getvalue()

1. JPEG fallback lacks Pillow 🐞 Bug ⛯ Reliability

When libjpeg-turbo cannot be loaded, interactive.py falls back to Pillow-based encoding, but Pillow is not declared as a dependency. In that environment, 2D rendering will crash at runtime when trying to import PIL.

Agent Prompt

    ### Issue description
    `interactive.py` falls back to Pillow when TurboJPEG cannot be initialized, but Pillow is not declared in `toolchain/pyproject.toml`. On systems without libjpeg-turbo discoverable at runtime, 2D rendering can crash with `ImportError: No module named PIL`.
    ### Issue Context
    The code explicitly expects `TurboJPEG()` to raise `OSError` in some environments and sets `_tj = None`, making the PIL import path a real runtime path.
    ### Fix Focus Areas
    - toolchain/mfc/viz/interactive.py[31-40]
    - toolchain/mfc/viz/interactive.py[108-119]
    - toolchain/pyproject.toml[45-58]
    ### Suggested fix
    - Add `Pillow` to `dependencies`.
    - (Optional) If you want Pillow to remain optional, catch `ImportError` around the PIL import and raise a clear exception instructing the user to install Pillow or configure libjpeg-turbo.
    def _downsample_3d(raw: np.ndarray, x_cc: np.ndarray, y_cc: np.ndarray,
                       z_cc: np.ndarray, max_total: int = 150_000):
        """Stride a (nx, ny, nz) array to stay within a total cell budget.

        Uses a **uniform** stride s = ceil((nx*ny*nz / max_total)^(1/3)) so that
        all axes are sampled equally. For an anisotropic grid like 901×201×201,
        this gives stride=7 → 129×29×29 = 108K cells instead of the per-axis
        strategy which would give stride=18 in x (only 50 pts), causing jagged
        isosurfaces along the long axis.
        """
        nx, ny, nz = raw.shape
        total = nx * ny * nz
        if total <= max_total:
            return raw, x_cc, y_cc, z_cc
        s = max(1, math.ceil((total / max_total) ** (1.0 / 3.0)))
        return raw[::s, ::s, ::s], x_cc[::s], y_cc[::s], z_cc[::s]


    def _get_ds3(step, var, raw, x_cc, y_cc, z_cc, max_total):  # pylint: disable=too-many-arguments,too-many-positional-arguments
        """Downsampled 3D array with bounded LRU caching.

        Avoids re-striding the same large array on every iso threshold / volume
        opacity slider move. Key is (step, var, max_total); value is the tuple
        (raw_ds, x_ds, y_ds, z_ds) returned by _downsample_3d.
        """
        key = (step, var, max_total)
        with _ds3_lock:
            if key in _ds3_cache:
                return _ds3_cache[key]
        result = _downsample_3d(raw, x_cc, y_cc, z_cc, max_total)
        with _ds3_lock:
            if key not in _ds3_cache:
                if len(_ds3_cache) >= _DS3_CACHE_MAX:
                    # FIFO eviction: next(iter(dict)) yields the oldest entry by
                    # insertion order, guaranteed in CPython 3.7+ and the language
                    # spec from Python 3.7.
                    _ds3_cache.pop(next(iter(_ds3_cache)))
                _ds3_cache[key] = result
        return result

2. 3D downsample cache pins memory 🐞 Bug ⛯ Reliability

The 3D downsample cache stores strided NumPy slices (views). Views keep references to their base arrays, so caching them can keep full-resolution 3D arrays alive even if the main step cache evicts entries, undermining memory bounds and risking OOM on large grids.

Agent Prompt

    ### Issue description
    `_downsample_3d` returns a strided slice of `raw` (`raw[::s, ::s, ::s]`). That is a NumPy view, which retains a reference to the full underlying `raw` buffer. Storing these views in `_ds3_cache` can pin large full-resolution arrays in memory even when `_step_cache` tries to evict them.
    ### Issue Context
    This is especially problematic for large 3D fields where a single `raw` can be hundreds of MB; `_DS3_CACHE_MAX = 10` can amplify the worst-case memory footprint.
    ### Fix Focus Areas
    - toolchain/mfc/viz/interactive.py[207-223]
    - toolchain/mfc/viz/interactive.py[225-245]
    ### Suggested fix
    - Change `_downsample_3d` to return copies for the downsampled volume (at least `raw_ds`):
      - `raw_ds = raw[::s, ::s, ::s].copy()` (optionally `np.ascontiguousarray`)
      - keep coord vectors as slices or copy them too (they're small).
    - Consider documenting the memory/perf trade-off (copy cost vs bounded memory).
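The view-pinning behaviour described in this finding can be verified directly with NumPy's `.base` attribute. The array shape and the 150 000-cell budget below match the 901×201×201 example from the docstring; variable names are illustrative:

```python
import math

import numpy as np

# ~139 MB full-resolution field (float32), matching the docstring example
raw = np.zeros((901, 201, 201), dtype=np.float32)
s = max(1, math.ceil((raw.size / 150_000) ** (1.0 / 3.0)))   # uniform stride

ds_view = raw[::s, ::s, ::s]
# The strided slice is a view: it keeps the entire ~139 MB buffer alive
# for as long as the view itself is referenced (e.g. from a cache).
assert ds_view.base is raw

ds_copy = ds_view.copy()
# The copy owns its own, much smaller buffer, so `raw` can be freed
# once the step cache evicts it.
assert ds_copy.base is None
```

The trade-off is one extra memcpy per cache fill versus a hard bound on resident memory, which is usually the right choice for hundreds-of-MB fields.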
    else:  # volume
        raw_ds, _, _, _ = _get_ds3(
            step, selected_var, raw, ad.x_cc, ad.y_cc, ad.z_cc, 150_000,
        )
        vf = _tf(raw_ds).ravel()
        vlo = cmin + rng3 * float(vol_min_frac or 0.0)
        vhi = cmin + rng3 * max(float(vol_max_frac or 1.0), vlo + 0.01)
        patch['data'][0]['value'] = vf.tolist()
        patch['data'][0]['isomin'] = vlo
        patch['data'][0]['isomax'] = vhi
        patch['data'][0]['opacity'] = float(vol_opacity or 0.1)
        patch['data'][0]['surface_count'] = int(vol_nsurf or 15)
        patch['data'][0]['colorscale'] = _cscale3

3. Volume patch misses cmin/cmax 🐞 Bug ✓ Correctness

In 3D volume mode, the Patch() fast-path updates value/isomin/isomax/etc. but does not update cmin/cmax (or colorbar). After vmin/vmax changes, the browser can keep using stale color scaling, producing incorrect colors.

Agent Prompt

    ### Issue description
    The 3D Volume Patch() path does not update `cmin`/`cmax` (and does not refresh the `colorbar`). When vmin/vmax changes in volume mode, the client may keep the previous scaling, showing incorrect colors.
    ### Issue Context
    Full 3D volume rendering sets `cmin`, `cmax`, and `colorbar` on the trace; the patch path should keep those consistent when it is used.
    ### Fix Focus Areas
    - toolchain/mfc/viz/interactive.py[1109-1121]
    - toolchain/mfc/viz/interactive.py[542-550]
    ### Suggested fix
    In the volume branch of the `_do_patch_3d` path:
    - Add:
      - `patch['data'][0]['cmin'] = cmin`
      - `patch['data'][0]['cmax'] = cmax`
      - `patch['data'][0]['colorbar'] = _make_cbar(cbar_title, cmin, cmax)` (or ensure the existing colorbar updates as desired)
    - Consider also updating the trace title/legend fields if they depend on range/log settings.
    dx = float(x_ds[-1] - x_ds[0]) / max(len(x_ds) - 1, 1) if len(x_ds) > 1 else 1.0
    dy = float(y_ds[-1] - y_ds[0]) / max(len(y_ds) - 1, 1) if len(y_ds) > 1 else 1.0
    dz = float(z_ds[-1] - z_ds[0]) / max(len(z_ds) - 1, 1) if len(z_ds) > 1 else 1.0
    spacing = (dx, dy, dz)

    levels = np.linspace(ilo, ihi, max(int(iso_n), 1))
    xs, ys, zs, ii, jj, kk, intens = [], [], [], [], [], [], []
    offset = 0

    for level in levels:
        try:
            verts, faces, _, _ = _marching_cubes(
                vol, level=float(level), spacing=spacing,
                allow_degenerate=False,
            )

4. Marching cubes uses uniform spacing 🐞 Bug ✓ Correctness

Server-side marching cubes computes a single constant spacing per axis from endpoints, which assumes uniform x/y/z coordinates. The assembler builds coordinate arrays by deduplicating actual cell-center positions (potentially non-uniform), so isosurface geometry can be spatially distorted on stretched/non-uniform grids.

Agent Prompt

    ### Issue description
    `_compute_isomesh` assumes uniform spacing by using a single `dx/dy/dz` per axis. For non-uniform coordinate vectors, this produces distorted isosurface geometry.
    ### Issue Context
    The reader/assembler constructs coordinate vectors from unique actual cell-center positions, which can be non-uniform (e.g., stretched grids). `skimage.measure.marching_cubes` only accepts constant spacing.
    ### Fix Focus Areas
    - toolchain/mfc/viz/interactive.py[162-185]
    - toolchain/mfc/viz/reader.py[387-419]
    ### Suggested fix
    - Call `marching_cubes` with `spacing=(1.0, 1.0, 1.0)` (index space).
    - Convert `verts[:,0]` (index-coordinate) to physical x via interpolation over `x_ds`:
      - `x = np.interp(verts[:,0], np.arange(len(x_ds)), x_ds)`
      - same for y/z.
    - Remove the current origin-shift logic (or keep only if still needed after mapping).
    - Add a small unit/integration test (or a debug assertion) to detect strongly non-uniform axes and validate geometry mapping.
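The suggested index-space mapping can be sketched as follows; `x_ds` here is a hypothetical non-uniform (stretched) coordinate vector, and `verts_i` stands in for one column of the marching-cubes vertex output:

```python
import numpy as np

# Hypothetical stretched axis: cell centers cluster near the origin
x_ds = np.array([0.0, 0.1, 0.3, 0.7, 1.5])

# marching_cubes(vol, spacing=(1.0, 1.0, 1.0)) returns vertices in index
# space, so fractional indices land between grid planes.
verts_i = np.array([0.0, 1.5, 4.0])

# Map index coordinates to physical coordinates by interpolating over x_ds
x_phys = np.interp(verts_i, np.arange(len(x_ds)), x_ds)
```

With a constant endpoint-derived spacing instead, index 1.5 would be placed at 1.5 · (1.5 − 0.0)/4 = 0.5625 rather than the correct 0.2, which is exactly the distortion the finding describes.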
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

    @@           Coverage Diff           @@
    ##           master    #1293   +/-   ##
    =======================================
      Coverage   44.95%   44.95%
    =======================================
      Files          70       70
      Lines       20503    20503
      Branches     1946     1946
    =======================================
      Hits         9217     9217
      Misses      10164    10164
      Partials     1122     1122

☔ View full report in Codecov by Sentry.