
Commit 0238823

Axect and claude committed

Replace personal project examples with generic SolarFlux/FluxNet

OSPREY/DeepONet/NeuralHamilton → SolarFlux/FluxNet/WavePredict across README, docs, and skill files.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

1 parent e79df6e · commit 0238823

6 files changed: 49 additions, 49 deletions

.claude/skills/pytorch-train/SKILL.md (7 additions, 7 deletions)

````diff
@@ -36,9 +36,9 @@ Before creating any files, confirm these with the user:
 
 | Item | Example | Notes |
 |------|---------|-------|
-| Project name | `OSPREY`, `NeuralHamilton` | Used in `project:` field and directory names |
-| Version | `v0.10`, `v1.32` | Determines config subdirectory |
-| Model name | `deeponet`, `mambonet`, `mlp` | File prefix and `net:` path |
+| Project name | `SolarFlux`, `WavePredict` | Used in `project:` field and directory names |
+| Version | `v0.3`, `v1.32` | Determines config subdirectory |
+| Model name | `fluxnet`, `wavenet`, `mlp` | File prefix and `net:` path |
 | Task type | regression / classification | Determines criterion, metric direction |
 | Model module path | `model.MLP`, `recipes.regression.model.MLP` | Importlib path for `net:` field |
 | net_config | `{nodes: 64, layers: 4}` | Architecture hyperparameters |
@@ -86,11 +86,11 @@ configs/<MAIN_CONTRIBUTION>_v<VERSION>/<MODEL_NAME>_{run,opt,best}.yaml
 ```
 
 Examples:
-- `configs/OSPREY_v0.10/deeponet_run.yaml`
-- `configs/OSPREY_v0.10/deeponet_opt.yaml`
-- `configs/OSPREY_v0.10/deeponet_best.yaml`
+- `configs/SolarFlux_v0.3/fluxnet_run.yaml`
+- `configs/SolarFlux_v0.3/fluxnet_opt.yaml`
+- `configs/SolarFlux_v0.3/fluxnet_best.yaml`
 
-The `project:` field follows: `<PROJECT>_v<VERSION>_<MODEL>` (e.g., `OSPREY_v0.10_DeepONet`).
+The `project:` field follows: `<PROJECT>_v<VERSION>_<MODEL>` (e.g., `SolarFlux_v0.3_FluxNet`).
 
 ### 2a: run.yaml (HPO Base Config)
 
````
.claude/skills/pytorch-train/references/config_templates.md (3 additions, 3 deletions)

````diff
@@ -8,7 +8,7 @@ Full annotated YAML templates for each config type. Copy and modify as needed.
 
 ```yaml
 # ── Project Identification ──
-project: <PROJECT>_v<VERSION>_<MODEL>   # e.g., OSPREY_v0.10_DeepONet
+project: <PROJECT>_v<VERSION>_<MODEL>   # e.g., SolarFlux_v0.3_FluxNet
 device: cuda:0   # cuda:N or cpu
 
 # ── Model ──
@@ -62,7 +62,7 @@ checkpoint_config:
 
 ```yaml
 # ── Study ──
-study_name: <MODEL>_TPE   # e.g., DeepONet_TPE
+study_name: <MODEL>_TPE   # e.g., FluxNet_TPE
 trials: 50   # 30-100 depending on search space
 seed: 42
 metric: val_loss
@@ -112,7 +112,7 @@ search_space:
 
 ```yaml
 # ── Project Identification ──
-project: <PROJECT>_v<VERSION>_<MODEL>   # Remove _Opt suffix
+project: <PROJECT>_v<VERSION>_<MODEL>   # Remove _Opt suffix (e.g., SolarFlux_v0.3_FluxNet)
 device: cuda:0
 
 # ── Model ──
````
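For reference, the naming rule these templates share can be sketched in a few lines. This is a hypothetical helper for illustration only (it is not part of the template's code); the `_Opt` suffix behavior is inferred from the template comments above.

```python
def project_field(project: str, version: str, model: str, hpo: bool = False) -> str:
    """Compose the `project:` field as <PROJECT>_v<VERSION>_<MODEL>.

    Per the template comment above, the HPO-phase project carries an
    `_Opt` suffix that is removed again in the final best.yaml.
    """
    name = f"{project}_v{version}_{model}"
    return f"{name}_Opt" if hpo else name

print(project_field("SolarFlux", "0.3", "FluxNet"))            # SolarFlux_v0.3_FluxNet
print(project_field("SolarFlux", "0.3", "FluxNet", hpo=True))  # SolarFlux_v0.3_FluxNet_Opt
```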

README.md (4 additions, 4 deletions)

````diff
@@ -88,14 +88,14 @@ python -m cli analyze   ← validate results, generate plots
 This template ships with a built-in [Claude Code](https://claude.ai/claude-code) skill that guides you through the entire experiment lifecycle:
 
 ```
-You: "Set up HPO for my DeepONet model, version 0.10"
+You: "Set up HPO for my FluxNet model, version 0.3"
 
-Agent: Creates configs/OSPREY_v0.10/deeponet_run.yaml
-       Creates configs/OSPREY_v0.10/deeponet_opt.yaml
+Agent: Creates configs/SolarFlux_v0.3/fluxnet_run.yaml
+       Creates configs/SolarFlux_v0.3/fluxnet_opt.yaml
        Runs preflight to catch any config issues
        Launches HPO with SPlus + ExpHyperbolicLR defaults
        Runs hpo-report to analyze results
-       Extracts best params → deeponet_best.yaml
+       Extracts best params → fluxnet_best.yaml
        Launches final multi-seed training
 ```
 
````

README_KR.md (4 additions, 4 deletions)

````diff
@@ -88,14 +88,14 @@ python -m cli analyze   ← validate results and generate plots
 This template includes a built-in [Claude Code](https://claude.ai/claude-code) skill that guides you through the entire experiment lifecycle:
 
 ```
-User: "Set up HPO for the DeepONet model, version 0.10"
+User: "Set up HPO for the FluxNet model, version 0.3"
 
-Agent: Creates configs/OSPREY_v0.10/deeponet_run.yaml
-       Creates configs/OSPREY_v0.10/deeponet_opt.yaml
+Agent: Creates configs/SolarFlux_v0.3/fluxnet_run.yaml
+       Creates configs/SolarFlux_v0.3/fluxnet_opt.yaml
        Runs preflight to check for config errors
        Runs HPO with the optimal SPlus + ExpHyperbolicLR defaults
        Runs hpo-report to analyze the results
-       Extracts the best parameters → deeponet_best.yaml
+       Extracts the best parameters → fluxnet_best.yaml
        Launches the final multi-seed training
 ```
 
````

docs/01_pipeline.md (24 additions, 24 deletions)

````diff
@@ -17,7 +17,7 @@ Here is a complete annotated `run.yaml` for a regression task with SPlus + ExpHy
 
 ```yaml
 # ── Project Identification ──
-project: OSPREY_v0.10_DeepONet   # Convention: <NAME>_v<VERSION>_<MODEL>
+project: SolarFlux_v0.3_FluxNet   # Convention: <NAME>_v<VERSION>_<MODEL>
 device: cuda:0   # cuda:N or cpu
 
 # ── Model ──
@@ -73,10 +73,10 @@ Change datasets by changing the `data:` field — one line. The function must re
 Config files live under `configs/<MAIN_CONTRIBUTION>_v<VERSION>/`:
 
 ```
-configs/OSPREY_v0.10/
-├── deeponet_run.yaml    # HPO base config
-├── deeponet_opt.yaml    # HPO search config
-└── deeponet_best.yaml   # Final training config (created after HPO)
+configs/SolarFlux_v0.3/
+├── fluxnet_run.yaml    # HPO base config
+├── fluxnet_opt.yaml    # HPO search config
+└── fluxnet_best.yaml   # Final training config (created after HPO)
 ```
 
 See **[Chapter 2: Configuration Deep Dive](02_config.html)** for every field explained.
@@ -88,7 +88,7 @@ See **[Chapter 2: Configuration Deep Dive](02_config.html)** for every field exp
 A single shape mismatch can waste hours of GPU time. Preflight catches it in seconds by running one batch forward and backward through the full stack — data loading, model instantiation, optimizer step, gradient check.
 
 ```bash
-python -m cli preflight configs/OSPREY_v0.10/deeponet_run.yaml --device cuda:0
+python -m cli preflight configs/SolarFlux_v0.3/fluxnet_run.yaml --device cuda:0
 ```
 
 The output is a table of checks:
@@ -114,16 +114,16 @@ Fix every FAIL and investigate every WARN before proceeding. Use `--json` for ma
 Also available before preflight:
 
 ```bash
-python -m cli validate configs/OSPREY_v0.10/deeponet_run.yaml   # Schema + semantic checks only
-python -m cli preview configs/OSPREY_v0.10/deeponet_run.yaml    # Print model architecture
+python -m cli validate configs/SolarFlux_v0.3/fluxnet_run.yaml   # Schema + semantic checks only
+python -m cli preview configs/SolarFlux_v0.3/fluxnet_run.yaml    # Print model architecture
 ```
 
 ---
 
 ## Phase 3: Training
 
 ```bash
-python -m cli train configs/OSPREY_v0.10/deeponet_run.yaml --device cuda:0
+python -m cli train configs/SolarFlux_v0.3/fluxnet_run.yaml --device cuda:0
 ```
 
 What happens when you run this:
@@ -145,11 +145,11 @@ Two diagnostic callbacks run automatically during every training session:
 For long runs, queue with pueue so they survive session termination:
 
 ```bash
-pueue group add OSPREY
-pueue add -g OSPREY -- bash -c \
+pueue group add SolarFlux
+pueue add -g SolarFlux -- bash -c \
   "cd /path/to/project && .venv/bin/python -m cli train \
-  configs/OSPREY_v0.10/deeponet_run.yaml --device cuda:0"
-pueue status -g OSPREY
+  configs/SolarFlux_v0.3/fluxnet_run.yaml --device cuda:0"
+pueue status -g SolarFlux
 ```
 
 ---
@@ -159,7 +159,7 @@ pueue status -g OSPREY
 HPO finds the best hyperparameters by running many short training trials and using the results to guide the search. The optimizer config (`opt.yaml`) defines the search space:
 
 ```yaml
-study_name: DeepONet_TPE
+study_name: FluxNet_TPE
 trials: 50
 seed: 42
 metric: val_loss
@@ -199,8 +199,8 @@ search_space:
 Run HPO:
 
 ```bash
-python -m cli train configs/OSPREY_v0.10/deeponet_run.yaml \
-  --optimize-config configs/OSPREY_v0.10/deeponet_opt.yaml \
+python -m cli train configs/SolarFlux_v0.3/fluxnet_run.yaml \
+  --optimize-config configs/SolarFlux_v0.3/fluxnet_opt.yaml \
   --device cuda:0
 ```
 
@@ -221,12 +221,12 @@ After HPO completes, analyze the results before creating `best.yaml`:
 python -m cli hpo-report
 
 # Explicit — use when multiple studies exist
-python -m cli hpo-report --db OSPREY_v0.10_DeepONet_Opt.db --study-name DeepONet_TPE
+python -m cli hpo-report --db SolarFlux_v0.3_FluxNet_Opt.db --study-name FluxNet_TPE
 
 # With boundary warnings — recommended
 python -m cli hpo-report \
-  --db OSPREY_v0.10_DeepONet_Opt.db \
-  --opt-config configs/OSPREY_v0.10/deeponet_opt.yaml
+  --db SolarFlux_v0.3_FluxNet_Opt.db \
+  --opt-config configs/SolarFlux_v0.3/fluxnet_opt.yaml
 ```
 
 The report shows:
@@ -279,9 +279,9 @@ checkpoint_config:
 Then validate and run:
 
 ```bash
-python -m cli validate configs/OSPREY_v0.10/deeponet_best.yaml
-python -m cli preflight configs/OSPREY_v0.10/deeponet_best.yaml
-python -m cli train configs/OSPREY_v0.10/deeponet_best.yaml --device cuda:0
+python -m cli validate configs/SolarFlux_v0.3/fluxnet_best.yaml
+python -m cli preflight configs/SolarFlux_v0.3/fluxnet_best.yaml
+python -m cli train configs/SolarFlux_v0.3/fluxnet_best.yaml --device cuda:0
 ```
 
 With 5 seeds and 150 epochs, this is a long run. Use pueue.
@@ -293,7 +293,7 @@ With 5 seeds and 150 epochs, this is a long run. Use pueue.
 After training completes, check the run directories and analyze results:
 
 ```bash
-ls runs/OSPREY_v0.10_DeepONet/   # One subdirectory per group name
+ls runs/SolarFlux_v0.3_FluxNet/   # One subdirectory per group name
 python -m cli analyze   # Interactive model loading and evaluation
 ```
 
@@ -303,7 +303,7 @@ The `runs/` directory structure after training:
 
 ```
 runs/
-└── OSPREY_v0.10_DeepONet/
+└── SolarFlux_v0.3_FluxNet/
     └── MLP_n_64_l_5_SPlus_lr_3.42e-01.../
         ├── config.yaml   # Exact config used for this group
         ├── 58/
````
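The renamed `hpo-report` examples in this file suggest that the study database name is derived from the HPO-phase project field. A minimal sketch of that relationship, assuming (as the examples imply, not confirmed elsewhere) that the database is simply `<project>_Opt.db`:

```python
def hpo_db_name(project: str, version: str, model: str) -> str:
    # Assumption drawn from the hpo-report examples: the Optuna study
    # database is named after the HPO project field plus "_Opt.db".
    return f"{project}_v{version}_{model}_Opt.db"

print(hpo_db_name("SolarFlux", "0.3", "FluxNet"))  # SolarFlux_v0.3_FluxNet_Opt.db
```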

docs/02_config.md (7 additions, 7 deletions)

````diff
@@ -189,20 +189,20 @@ Configs live under `configs/` and follow this convention:
 configs/<CONTRIBUTION>_v<VERSION>/<MODEL>_{run,opt,best}.yaml
 ```
 
-- `<CONTRIBUTION>` — the experiment name or project shorthand (e.g., `OSPREY`, `Neural_Hamilton`)
+- `<CONTRIBUTION>` — the experiment name or project shorthand (e.g., `SolarFlux`, `WavePredict`)
 - `<VERSION>` — integer version, incremented when the search space or architecture changes significantly
 - `<MODEL>` — the model or configuration variant being tested
 
 Examples:
 
 ```
-configs/OSPREY_v1/MLP_run.yaml    # base run config for HPO
-configs/OSPREY_v1/MLP_opt.yaml    # HPO search space
-configs/OSPREY_v1/MLP_best.yaml   # best config after HPO
+configs/SolarFlux_v1/MLP_run.yaml    # base run config for HPO
+configs/SolarFlux_v1/MLP_opt.yaml    # HPO search space
+configs/SolarFlux_v1/MLP_best.yaml   # best config after HPO
 
-configs/Neural_Hamilton_v2/HNN_run.yaml
-configs/Neural_Hamilton_v2/HNN_opt.yaml
-configs/Neural_Hamilton_v2/HNN_best.yaml
+configs/WavePredict_v2/HNN_run.yaml
+configs/WavePredict_v2/HNN_opt.yaml
+configs/WavePredict_v2/HNN_best.yaml
 ```
 
 This convention makes it immediately clear which phase of the workflow each file belongs to, and version numbers let you track search space evolution without losing old configs.
````
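The directory convention documented in this file can be captured in a small sketch. This is a hypothetical helper for illustration (not part of the repository's code), mirroring `configs/<CONTRIBUTION>_v<VERSION>/<MODEL>_{run,opt,best}.yaml`:

```python
from pathlib import Path


def config_paths(contribution: str, version: str, model: str) -> dict[str, Path]:
    """Return the run/opt/best config paths for one model variant,
    following configs/<CONTRIBUTION>_v<VERSION>/<MODEL>_{run,opt,best}.yaml."""
    base = Path("configs") / f"{contribution}_v{version}"
    return {phase: base / f"{model}_{phase}.yaml" for phase in ("run", "opt", "best")}


paths = config_paths("SolarFlux", "1", "MLP")
print(paths["run"])   # configs/SolarFlux_v1/MLP_run.yaml
print(paths["best"])  # configs/SolarFlux_v1/MLP_best.yaml
```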
