Commit 7997663

sbryngelson and claude authored

Add CLAUDE.md and .claude/rules/ for Claude Code guidance (#1255)
* Add CLAUDE.md and .claude/rules/ for Claude Code guidance

  Adds project-level instructions for Claude Code interactive sessions and
  automated PR reviews. Core CLAUDE.md covers commands, development workflow
  contract, architecture, and critical rules. Domain-specific knowledge
  (Fortran conventions, GPU/MPI patterns, parameter system, common pitfalls)
  lives in modular .claude/rules/ files.

* Update copilot-instructions.md to mention OpenMP target offload

  The GPU section previously only mentioned OpenACC. Now reflects that MFC
  supports both OpenACC and OpenMP target offload via backend-agnostic GPU_*
  Fypp macros.

* Fix parameter checklist and validation attribution

  - Expand 3-location to 4-location checklist: add m_global_parameters.fpp as
    a required step for declaring Fortran variables (without this, adding a
    parameter to the namelist in m_start_up.fpp causes a compile error)
  - Fix fastjsonschema attribution: JSON schema validation is in case.py and
    params/registry.py, not in case_validator.py, which does physics
    constraint checking

* Fix mixed-precision stp description: half, not single

  In mixed-precision mode (--mixed), stp is set to half_precision (kind=2),
  not single_precision, as confirmed in src/common/m_precision_select.f90.

* Fix Fypp macro syntax and precheck coverage claims

  - GPU macros use $: prefix (inline), not @: — matches all 400+ usages in src/
  - GPU_PARALLEL, GPU_DATA, GPU_HOST_DATA are block macros using #:call/#:endcall
  - Add block macro usage example
  - Fix GPU_ROUTINE args: parallelism='[seq]', not function_name=
  - Split forbidden patterns into precheck-enforced vs convention-enforced
    (goto, COMMON, save, stop are not checked by precheck.sh)

* Fix documentation accuracy: checklist counts, phantom macro, missing patterns

  - Fix "3-location checklist" heading to "4-location" (parameter-system.md)
  - Fix "all 3 locations" to "all 4 locations" in PR checklist (common-pitfalls.md)
  - Remove non-existent $:END_GPU_LOOP() from block macro example (gpu-and-mpi.md)
  - Add d0 literals, double precision, dcos/dsin/dtan to forbidden patterns list
  - Reference toolchain/bootstrap/precheck.sh for full forbidden pattern list
  - Fix Frontier system slug: OpenACC/OpenMP, not just OpenACC
  - Clarify stp as "field data arrays and I/O", not just "I/O"
  - Document @:PROHIBIT, @:ASSERT, @:LOG error-checking macros
  - Document @:PREFER_GPU unified memory macro
  - Document m_checker*.fpp Fortran-side runtime validation

* Add field indexing, ghost cells, test system, analytical IC docs

  - Fix parameter checklist: first 3 mandatory, #4 only if constraints exist
  - Fix namelist format: &user_inputs ... &end/ (not &user_inputs ... / &end)
  - Update counts: ~3,400 params (was ~3,300), 560+ tests (was 500+)
  - Add ghost cell allocation pattern (-buff_size:m+buff_size, idwint/idwbuff)
  - Add field variable indexing system (cont_idx, mom_idx, E_idx, adv_idx, sys_size)
  - Add test system details: programmatic generation, CaseGeneratorStack, UUID hashing
  - Add scalar_field uses stp (not wp) to precision types
  - Add analytical IC available variables (x, y, z, xc, yc, zc, lx, ly, lz, r, e, t)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
1 parent 38f0457 commit 7997663

File tree

6 files changed: +487 −1 lines changed

.claude/rules/common-pitfalls.md

Lines changed: 63 additions & 0 deletions
# Common Pitfalls

## Array Bounds & Ghost Cells
- Grid dimensions: `m`, `n`, `p` (cells in x, y, z). 1D: n=p=0. 2D: p=0.
- Interior domain: `0:m`, `0:n`, `0:p`
- Buffer/ghost region: `-buff_size:m+buff_size` (similar for n, p in multi-D)
- `buff_size` depends on WENO order and features (typically `2*weno_polyn + 2`)
- Domain bounds: `idwint(1:3)` (interior `0:m`), `idwbuff(1:3)` (with ghost cells)
- Cell-center coords: `x_cc(-buff_size:m+buff_size)`, `y_cc(...)`, `z_cc(...)`
- Cell-boundary coords: `x_cb(-1-buff_size:m+buff_size)`
- Riemann solver indexing: left state at `j`, right state at `j+1`
- Off-by-one errors in ghost cell regions are a common source of bugs
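The bounds above can be mirrored in a few lines of Python as a sanity check (a sketch with illustrative values; `m` and `weno_polyn` here are made up, and in Fortran these ranges are array declarations, not loops):

```python
# Python mirror of MFC's 1D index conventions (illustrative values).
m = 99                            # highest interior cell index in x (cells 0..m)
weno_polyn = 2                    # example WENO polynomial order
buff_size = 2 * weno_polyn + 2    # typical sizing per the rule above

interior = range(0, m + 1)                                   # 0:m
with_ghosts = range(-buff_size, m + buff_size + 1)           # -buff_size:m+buff_size
cell_boundaries = range(-1 - buff_size, m + buff_size + 1)   # x_cb bounds

assert len(interior) == m + 1
assert len(with_ghosts) == (m + 1) + 2 * buff_size
# One more cell boundary than cell centers — a classic off-by-one trap:
assert len(cell_boundaries) == len(with_ghosts) + 1
```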

## Field Variable Indexing
- Conserved variables: `q_cons_vf(1:sys_size)`. Primitive: `q_prim_vf(1:sys_size)`.
- Index ranges depend on `model_eqns` and enabled features (set in `m_global_parameters.fpp`):
  - `cont_idx` — continuity (partial densities, one per fluid)
  - `mom_idx` — momentum components
  - `E_idx` — total energy (scalar)
  - `adv_idx` — volume fractions (advection equations)
  - `bub_idx`, `stress_idx`, `xi_idx`, `species_idx`, `B_idx`, `c_idx` — optional
- Shorthand scalars: `momxb`/`momxe`, `contxb`/`contxe`, `advxb`/`advxe`, etc.
- `sys_size` = total number of conserved variables (computed at startup)
- Changing `model_eqns` or enabling features changes ALL index positions
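The sequential-block idea behind these index ranges can be sketched as follows (a hypothetical 2-fluid, 2D layout; the real computation lives in `m_global_parameters.fpp` and depends on `model_eqns` and enabled features):

```python
# Illustrative index bookkeeping (hypothetical 2-fluid, 2D case).
num_fluids, num_dims = 2, 2

cont_beg = 1
cont_end = cont_beg + num_fluids - 1   # partial densities
mom_beg = cont_end + 1
mom_end = mom_beg + num_dims - 1       # momentum components
E_idx = mom_end + 1                    # total energy (scalar)
adv_beg = E_idx + 1
adv_end = adv_beg + num_fluids - 1     # volume fractions
sys_size = adv_end                     # total conserved variables

assert (cont_beg, cont_end) == (1, 2)
assert (mom_beg, mom_end) == (3, 4)
assert E_idx == 5
assert (adv_beg, adv_end) == (6, 7)
assert sys_size == 7
# Changing num_fluids, num_dims, model_eqns, or enabling optional blocks
# (bubbles, stress, ...) shifts every index after the change — hence the
# warning above that ALL index positions move.
```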

## Blast Radius
- `src/common/` is shared by ALL three executables (pre_process, simulation, post_process)
- Any change to common/ requires testing all three targets
- Public subroutine signature changes affect all callers across all targets
- Parameter default changes affect all existing case files

## Physics Consistency
- Pressure formula MUST match `model_eqns` setting
- Model-specific conservative ↔ primitive conversion paths exist
- Volume fractions must sum to 1.0
- Boundary condition symmetry requirements must be maintained

## Compiler-Specific Issues
- Code must compile on gfortran, nvfortran, Cray ftn, and Intel ifx
- Each compiler has different strictness levels and warning behavior
- Fypp macros must expand correctly for both GPU and CPU builds
- GPU builds only work with nvfortran, Cray ftn, and AMD flang

## Test System
- Tests are generated **programmatically** in `toolchain/mfc/test/cases.py`, not standalone files
- Each test is a parameter modification on top of `BASE_CFG` defaults
- Test UUID = CRC32 hash of the test's trace string; `./mfc.sh test -l` lists all
- To add a test: modify `cases.py` using `CaseGeneratorStack` push/pop pattern
- Golden files: `tests/<UUID>/golden.txt` — tolerance-based comparison, not exact match
- If your change intentionally modifies output, regenerate golden files:
  `./mfc.sh test --generate --only <affected_tests> -j 8`
- Do not regenerate ALL golden files unless you understand every output change

## PR Checklist
Before submitting a PR:
- [ ] `./mfc.sh format -j 8` (auto-format)
- [ ] `./mfc.sh precheck -j 8` (5 CI lint checks)
- [ ] `./mfc.sh build -j 8` (compiles)
- [ ] `./mfc.sh test --only <relevant> -j 8` (tests pass)
- [ ] If adding parameters: all 4 locations updated
- [ ] If modifying `src/common/`: all three targets tested
- [ ] If changing output: golden files regenerated for affected tests
- [ ] One logical change per commit
Lines changed: 63 additions & 0 deletions
# Fortran Conventions

## File Format
- Source files use `.fpp` extension (Fortran + Fypp preprocessor macros)
- Fypp preprocesses `.fpp` → `.f90` at build time via CMake
- Fypp supports conditional compilation, code generation, and regex macros

## Module Structure
Every Fortran module follows this pattern:
- File: `m_<feature>.fpp`
- Module: `module m_<feature>`
- `implicit none` required
- Explicit `intent(in)`, `intent(out)`, or `intent(inout)` on ALL subroutine/function arguments
- Initialization subroutine: `s_initialize_<feature>_module`
- Finalization subroutine: `s_finalize_<feature>_module`

## Naming
- Modules: `m_<feature>`
- Public subroutines: `s_<verb>_<noun>`
- Public functions: `f_<verb>_<noun>`
- Private/local variables: no prefix required
- Constants: descriptive names, not ALL_CAPS

## Forbidden Patterns

Caught by `./mfc.sh precheck` (source lint step 4/5):
- `dsqrt`, `dexp`, `dlog`, `dble`, `dabs`, `dcos`, `dsin`, `dtan`, etc. → use generic intrinsics
- `1.0d0`, `2.5d-3` (Fortran `d` exponent literals) → use `1.0_wp`, `2.5e-3_wp`
- `double precision` → use `real(wp)` or `real(stp)`
- `real(8)`, `real(4)` → use `wp` or `stp` kind parameters
- Raw `!$acc` or `!$omp` directives → use Fypp GPU_* macros from `parallel_macros.fpp`
- Full list of forbidden patterns: `toolchain/bootstrap/precheck.sh`

Enforced by convention/code review (not automated):
- `goto`, `COMMON` blocks, global `save` variables
- `stop`, `error stop` → use `call s_mpi_abort()` or `@:PROHIBIT()`/`@:ASSERT()`

## Error Checking Macros (from macros.fpp)
- `@:PROHIBIT(condition, message)` — Runtime constraint check; aborts with file/line info
- `@:ASSERT(predicate, message)` — Invariant assertion; aborts if predicate is false
- `@:LOG(expr)` — Debug logging, active only in `MFC_DEBUG` builds
- Fortran-side runtime validation also exists in `m_checker*.fpp` files using `@:PROHIBIT`

## Precision Types
- `wp` (working precision): used for computation. Double by default.
- `stp` (storage precision): used for field data arrays and I/O. Double by default.
- In single-precision mode (`--single`): both become single.
- In mixed-precision mode (`--mixed`): wp=double, stp=half.
- MPI type matching: `mpi_p` must match `wp`, `mpi_io_p` must match `stp`.
- Always use generic intrinsics: `sqrt` not `dsqrt`, `abs` not `dabs`.
- Cast with `real(..., wp)` or `real(..., stp)`, never `dble(...)`.

Key derived types (`m_derived_types.fpp`):
- `scalar_field` — `real(stp), pointer :: sf(:,:,:)`. Uses `stp`, NOT `wp`.
- `vector_field` — allocatable array of `scalar_field` components.
- New field arrays MUST use `stp` for storage precision consistency.
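The three precision modes above reduce to a small flag-to-kinds table, sketched here (a summary of the prose, not MFC toolchain code):

```python
# Build flag -> (wp, stp) kinds, per the mode descriptions above.
PRECISION_MODES = {
    "default":  ("double", "double"),
    "--single": ("single", "single"),
    "--mixed":  ("double", "half"),
}

# MPI types must track the kinds: mpi_p follows wp, mpi_io_p follows stp.
wp, stp = PRECISION_MODES["--mixed"]
assert wp == "double" and stp == "half"   # computation stays double; storage shrinks
```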

## Size Guidelines (soft)
- Subroutine: ≤500 lines
- Helper routine: ≤150 lines
- Function: ≤100 lines
- File: ≤1000 lines
- Arguments: ≤6 preferred

.claude/rules/gpu-and-mpi.md

Lines changed: 124 additions & 0 deletions
# GPU and MPI Patterns

## GPU Offloading Architecture

Only `src/simulation/` is GPU-accelerated. Pre/post_process run on CPU only.

MFC uses a **backend-agnostic GPU abstraction** via Fypp macros. The same source code
compiles to either OpenACC or OpenMP target offload depending on the build flag:

- `./mfc.sh build --gpu acc` → OpenACC backend (NVIDIA nvfortran, Cray ftn)
- `./mfc.sh build --gpu mp` → OpenMP target offload backend (Cray ftn, AMD flang)
- `./mfc.sh build` (no --gpu) → CPU-only, GPU macros expand to plain Fortran

### Macro Layers (in src/common/include/)
- `parallel_macros.fpp` — **Use these.** Generic `GPU_*` macros that dispatch to the
  correct backend based on `MFC_OpenACC` / `MFC_OpenMP` compile definitions.
- `acc_macros.fpp` — OpenACC-specific `ACC_*` implementations (do not call directly)
- `omp_macros.fpp` — OpenMP target offload `OMP_*` implementations (do not call directly)
  - OMP macros generate **compiler-specific** directives: NVIDIA uses `target teams loop`,
    Cray uses `target teams distribute parallel do simd`, AMD uses
    `target teams distribute parallel do`
- `shared_parallel_macros.fpp` — Shared helpers (collapse, private, reduction generators)

### Key GPU Macros (always use the `GPU_*` prefix)

Inline macros (use `$:` prefix):
- `$:GPU_PARALLEL_LOOP(collapse=N, private=[...], reduction=[...], reductionOp='+')` —
  Parallel loop over GPU threads. Most common GPU macro.
- `$:END_GPU_PARALLEL_LOOP()` — Required closing for GPU_PARALLEL_LOOP.
- `$:GPU_LOOP(collapse=N, ...)` — Inner loop within a GPU parallel region.
- `$:GPU_ENTER_DATA(create=[...])` — Allocate device memory (unscoped).
- `$:GPU_EXIT_DATA(delete=[...])` — Free device memory.
- `$:GPU_UPDATE(host=[...])` — Copy device → host (before MPI send).
- `$:GPU_UPDATE(device=[...])` — Copy host → device (after MPI receive).
- `$:GPU_ROUTINE(parallelism='[seq]')` — Mark routine for device compilation.
- `$:GPU_DECLARE(create=[...])` — Declare device-resident data.
- `$:GPU_ATOMIC(atomic='update')` — Atomic operation on device.
- `$:GPU_WAIT()` — Synchronization barrier.

Block macros (use `#:call`/`#:endcall`):
- `GPU_PARALLEL(...)` — GPU parallel region wrapping a code block.
- `GPU_DATA(copy=..., create=..., ...)` — Scoped data region.
- `GPU_HOST_DATA(use_device_addr=[...])` — Host code with device pointers.

Block macro usage:
```
#:call GPU_PARALLEL(copyin='[var1]', copyout='[var2]')
    $:GPU_LOOP(collapse=N)
    do k = 0, n; do j = 0, m
        ! loop body
    end do; end do
#:endcall GPU_PARALLEL
```

NEVER write raw `!$acc` or `!$omp` directives. Always use `GPU_*` Fypp macros.
The precheck source lint will catch raw directives and fail.

### Memory Management Macros (from macros.fpp)
- `@:ALLOCATE(var1, var2, ...)` — Fortran allocate + `GPU_ENTER_DATA(create=...)`
- `@:DEALLOCATE(var1, var2, ...)` — `GPU_EXIT_DATA(delete=...)` + Fortran deallocate
- `@:PREFER_GPU(var1, var2, ...)` — NVIDIA unified memory page placement hint
- Every `@:ALLOCATE` MUST have a matching `@:DEALLOCATE` in finalization
- Conditional allocation MUST have conditional deallocation

### GPU Field Setup (Cray-specific, from macros.fpp)
- `@:ACC_SETUP_VFs(...)` / `@:ACC_SETUP_SFs(...)` — GPU pointer setup for vector/scalar fields
- These compile only for Cray (`_CRAYFTN`); other compilers skip them

### Compiler-Backend Matrix

| Compiler         | `--gpu acc` (OpenACC) | `--gpu mp` (OpenMP) | CPU-only |
|------------------|-----------------------|---------------------|----------|
| GNU gfortran     | No                    | No                  | Yes      |
| NVIDIA nvfortran | Yes (primary)         | Yes                 | Yes      |
| Cray ftn (CCE)   | Yes                   | Yes (primary)       | Yes      |
| Intel ifx        | No                    | No                  | Yes      |
| AMD flang        | No                    | Yes                 | Yes      |

## Preprocessor Defines (`#ifdef` / `#ifndef`)

Raw `#ifdef` / `#ifndef` preprocessor guards are **normal and expected** in MFC.
They are NOT the same as raw `!$acc`/`!$omp` pragmas (which are forbidden).

Use `#ifdef` for feature, target, compiler, and library gating:

### Feature gating
- `MFC_MPI` — MPI-enabled build (`--mpi` flag, default ON)
- `MFC_OpenACC` — OpenACC GPU backend (`--gpu acc`)
- `MFC_OpenMP` — OpenMP target offload backend (`--gpu mp`)
- `MFC_GPU` — Any GPU build (either OpenACC or OpenMP)
- `MFC_DEBUG` — Debug build (`--debug`)
- `MFC_SINGLE_PRECISION` — Single-precision mode (`--single`)
- `MFC_MIXED_PRECISION` — Mixed-precision mode (`--mixed`)

### Target gating (for code in `src/common/` shared across executables)
- `MFC_PRE_PROCESS` — Only in pre_process builds
- `MFC_SIMULATION` — Only in simulation builds
- `MFC_POST_PROCESS` — Only in post_process builds

### Compiler gating (for compiler-specific workarounds)
- `_CRAYFTN` — Cray Fortran compiler
- `__NVCOMPILER_GPU_UNIFIED_MEM` — NVIDIA unified memory (GH-200 / `--unified`)
- `__PGI` — Legacy PGI/NVIDIA compiler
- `__INTEL_COMPILER` — Intel compiler
- `FRONTIER_UNIFIED` — Frontier HPC unified memory

### Library-specific code
- FFTW (`m_fftw.fpp`) uses heavy `#ifdef` gating for `MFC_GPU` and `__PGI`
- CUDA Fortran (`cudafor` module) is gated behind `__NVCOMPILER_GPU_UNIFIED_MEM`
- SILO/HDF5 interfaces may have conditional paths

When adding new `#ifdef` blocks, provide an `#else` fallback (or make the guarded
code genuinely optional) so the source compiles in all configurations
(CPU-only, GPU-ACC, GPU-OMP, with/without MPI).

## MPI

### Halo Exchange
- Pack/unpack offset calculations are error-prone — verify carefully
- Buffer sizing depends on dimensionality and QBMM state
- GPU coherence: always `GPU_UPDATE(host=...)` before MPI send,
  `GPU_UPDATE(device=...)` after MPI receive

### Error Handling
- Use `call s_mpi_abort()` for fatal errors, never `stop` or `error stop`
- MPI must be finalized before program exit

.claude/rules/parameter-system.md

Lines changed: 66 additions & 0 deletions
# Parameter System

## Overview
MFC has ~3,400 simulation parameters defined in Python and read by Fortran via namelist files.

## Parameter Flow: Python → Fortran
1. **Definition**: `toolchain/mfc/params/definitions.py` — source of truth
   - Parameters are indexed families: `patch_icpp(i)%attr`, `fluid_pp(i)%attr`, etc.
   - Each has type, default, constraints, and tags

2. **Validation** (two layers):
   - `toolchain/mfc/case.py` / `toolchain/mfc/params/registry.py` — JSON schema validation
     via fastjsonschema (type checking, defaults)
   - `toolchain/mfc/case_validator.py` — Physics constraint checking
     (e.g., volume fractions sum to 1, dependency validation)

3. **Input Generation**: `toolchain/mfc/run/input.py`
   - Python case dict → Fortran namelist `.inp` file
   - Format: `&user_inputs` ... `&end/`

4. **Fortran Reading**: `src/*/m_start_up.fpp`
   - Reads `&user_inputs` namelist
   - Each parameter must be declared in the namelist statement
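Step 3 (dict → namelist) can be sketched in a few lines. This is an illustrative stand-in for `toolchain/mfc/run/input.py` — the value rendering below (`T`/`F` for logicals, single-quoted strings) and the `to_namelist` helper name are assumptions, not the real generator:

```python
# Sketch of case-dict -> Fortran namelist rendering (hypothetical helper).
def to_namelist(params: dict) -> str:
    def render(v):
        if isinstance(v, bool):
            return "T" if v else "F"       # Fortran logical literals
        if isinstance(v, str):
            return f"'{v}'"                # quoted character value
        return repr(v)                     # ints/floats as-is
    body = "\n".join(f"    {k} = {render(v)}" for k, v in params.items())
    return f"&user_inputs\n{body}\n&end/"

inp = to_namelist({"m": 99, "weno_order": 5, "mapped_weno": True})
```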

## Adding a New Parameter (4-location checklist)

YOU MUST update the first 3 locations. Missing any causes silent failures or compile errors.
Location 4 is required only if the parameter has physics constraints.

1. **`toolchain/mfc/params/definitions.py`**: Add parameter with type, default, constraints
2. **`src/*/m_global_parameters.fpp`**: Declare the Fortran variable in the relevant
   target(s). If the param is used by simulation only, add it there. If shared, add to
   all three targets' m_global_parameters.fpp.
3. **`src/*/m_start_up.fpp`**: Add to the Fortran `namelist` declaration in the relevant
   target(s).
4. **`toolchain/mfc/case_validator.py`**: Add validation rules if the parameter has
   physics constraints. Include `PHYSICS_DOCS` entry with title, category, explanation.

## Case Files
- Case files are Python scripts (`.py`) that define a dict of parameters
- Validated with `./mfc.sh validate case.py`
- Examples in `examples/` directory
- Create new cases with `./mfc.sh new <name>`
- Search parameters with `./mfc.sh params <query>`
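A minimal sketch of the case-file shape follows. The parameter names are illustrative (taken from the conventions described elsewhere in these rules), and the `print(json.dumps(...))` tail is how such a script would hand its dict to the toolchain — treat the exact protocol as an assumption, not a runnable case:

```python
# Sketch of a case file: a Python script defining a dict of parameters.
import json

case = {
    "run_time_info": True,
    "m": 199, "n": 0, "p": 0,              # 1D grid: n = p = 0
    "x_domain%beg": 0.0, "x_domain%end": 1.0,
    "patch_icpp(1)%geometry": 1,           # illustrative patch parameter
}

print(json.dumps(case))                    # hand the dict to the toolchain
```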

## Fortran-Side Runtime Validation
Each target has `m_checker*.fpp` files (e.g., `src/simulation/m_checker.fpp`,
`src/common/m_checker_common.fpp`) containing runtime parameter validation using
`@:PROHIBIT(condition, message)`. When adding parameters with physics constraints,
add Fortran-side checks here in addition to `case_validator.py`.

## Analytical Initial Conditions
String expressions in parameters become Fortran code via `__get_analytic_ic_fpp()` in `case.py`.
These are compiled into the binary, so syntax errors cause build failures, not runtime errors.

Available variables in analytical IC expressions:
- `x`, `y`, `z` — cell-center coordinates (mapped to `x_cc(i)`, `y_cc(j)`, `z_cc(k)`)
- `xc`, `yc`, `zc` — patch centroid coordinates
- `lx`, `ly`, `lz` — patch lengths
- `r` — patch radius; `eps`, `beta` — vortex parameters
- `e` — Euler's number (2.71828...)
- Standard Fortran math intrinsics available: `sin`, `cos`, `exp`, `sqrt`, `abs`, etc.
- For moving immersed boundaries: `t` (simulation time) is also available

Example: `'patch_icpp(1)%vel(2)': '(x - xc) * exp(-((x-xc)**2 + (y-yc)**2))'`
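The example expression uses Fortran syntax, but its semantics can be checked in Python by binding the variables listed above (sample point values below are made up; in the generated Fortran they come from `x_cc`, `y_cc`, and the patch definition):

```python
from math import exp

# Evaluate the example IC expression at a sample point.
x, y = 0.3, 0.5      # cell-center coordinates
xc, yc = 0.5, 0.5    # patch centroid

vel2 = (x - xc) * exp(-((x - xc)**2 + (y - yc)**2))
# A Gaussian bump scaled by (x - xc): antisymmetric about the centroid,
# so points left of xc get negative velocity, points right get positive.
assert vel2 < 0.0
```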

.github/copilot-instructions.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -16,7 +16,7 @@ Formatting and linting are enforced by pre-commit hooks. Focus review effort on
 * Sources in `src/`, tests in `tests/`, examples in `examples/`, Python toolchain in `toolchain/`.
 * Most source files are `.fpp` (Fypp templates); CMake transpiles them to `.f90`.
 * **Fypp macros** are in `src/<subprogram>/include/`, where `<subprogram>` is `simulation`, `common`, `pre_process`, or `post_process`.
-* Only `simulation` (plus its `common` dependencies) is GPU-accelerated via **OpenACC**.
+* Only `simulation` (plus its `common` dependencies) is GPU-accelerated via **OpenACC** or **OpenMP target offload** (`--gpu acc` or `--gpu mp`). GPU code uses backend-agnostic `GPU_*` Fypp macros (in `src/common/include/parallel_macros.fpp`) that dispatch to the correct backend at compile time.
 * Code must compile with **GNU gfortran**, **NVIDIA nvfortran**, **Cray ftn**, and **Intel ifx**.
 * Precision modes: double (default), single, and mixed (`wp` = working precision, `stp` = storage precision).
 * **Python toolchain** requires **Python 3.10+** — do not suggest `from __future__` imports or other backwards-compatibility shims.
```
