fix: mute unsupported field attribute warning on startup#1773

Merged
akoumpa merged 1 commit into main from
akoumparouli/fix_mute_UnsupportedFieldAttributeWarning_warning
Apr 14, 2026

Conversation

@akoumpa
Contributor

@akoumpa akoumpa commented Apr 10, 2026

What does this PR do ?

While confirming #1766 (comment), I saw there were still more warnings left even with this filtering:

root@c16f65276617:/mnt/4tb/auto/26_04/Automodel# python3 app.py examples/llm_finetune/llama3_2/llama3_2_1b_squad_peft.yaml --nproc-per-node=2
INFO:__main__:Running from source checkout (app.py). For production use, install the package and run `automodel` or `am` instead.
INFO:nemo_automodel.cli.app:Config: /mnt/4tb/auto/26_04/Automodel/examples/llm_finetune/llama3_2/llama3_2_1b_squad_peft.yaml
INFO:nemo_automodel.cli.app:Recipe: nemo_automodel.recipes.llm.train_ft
INFO:nemo_automodel.cli.app:Launching job interactively (local)
cfg-path: /mnt/4tb/auto/26_04/Automodel/examples/llm_finetune/llama3_2/llama3_2_1b_squad_peft.yaml
/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:61: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
  import pynvml  # type: ignore[import]
INFO:nemo_automodel.components.launcher.interactive:Running job using source from: /mnt/4tb/auto/26_04/Automodel
INFO:nemo_automodel.components.launcher.interactive:Launching job locally on 2 devices
W0410 19:01:12.990000 450 torch/distributed/run.py:851]
W0410 19:01:12.990000 450 torch/distributed/run.py:851] *****************************************
W0410 19:01:12.990000 450 torch/distributed/run.py:851] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0410 19:01:12.990000 450 torch/distributed/run.py:851] *****************************************
/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:61: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
  import pynvml  # type: ignore[import]
/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:61: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
  import pynvml  # type: ignore[import]
cfg-path: /mnt/4tb/auto/26_04/Automodel/examples/llm_finetune/llama3_2/llama3_2_1b_squad_peft.yaml
cfg-path: /mnt/4tb/auto/26_04/Automodel/examples/llm_finetune/llama3_2/llama3_2_1b_squad_peft.yaml
> initializing torch distributed with 2 workers.
2026-04-10 19:01:21 | INFO | nemo_automodel.components.loggers.log_utils | Setting logging level to 20
2026-04-10 19:01:21 | INFO | nemo_automodel.shared.te_patches | Applied FusedAdam QuantizedTensor monkey-patch.
2026-04-10 19:01:21 | INFO | root | Experiment_details:
2026-04-10 19:01:21 | INFO | root | Timestamp: '2026-04-10T19:01:21'
2026-04-10 19:01:21 | INFO | root | User: root
2026-04-10 19:01:21 | INFO | root | Host: c16f65276617
2026-04-10 19:01:21 | INFO | root | World size: 2
2026-04-10 19:01:21 | INFO | root | Backend: nccl
2026-04-10 19:01:21 | INFO | root | Recipe: TrainFinetuneRecipeForNextTokenPrediction
2026-04-10 19:01:21 | INFO | root | Model name: meta-llama/Llama-3.2-1B
recipe: TrainFinetuneRecipeForNextTokenPrediction
step_scheduler:
  global_batch_size: 64
  local_batch_size: 8
  ckpt_every_steps: 1000
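
The remaining noise in the log above is the pynvml deprecation FutureWarning raised inside torch's CUDA init, once per process. The PR's actual filter may match a different message or category; the sketch below shows the general `warnings.filterwarnings` technique for muting a specific warning at startup without hiding unrelated ones. The message pattern is taken from the log; the function name is illustrative, not from the PR.

```python
import warnings


def mute_pynvml_deprecation_warning() -> None:
    """Install a filter that drops the pynvml deprecation FutureWarning.

    Sketch of the general technique; the filter actually added by this PR
    may match different messages or categories.
    """
    warnings.filterwarnings(
        "ignore",
        message=r"The pynvml package is deprecated.*",
        category=FutureWarning,
    )


if __name__ == "__main__":
    # Demonstrate that only the targeted warning is suppressed.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        mute_pynvml_deprecation_warning()
        # Matches the filter -> silently dropped.
        warnings.warn(
            "The pynvml package is deprecated. Please install nvidia-ml-py instead.",
            FutureWarning,
        )
        # Does not match -> still recorded.
        warnings.warn("some other warning", UserWarning)
    print(len(caught))  # prints 1: only the unmatched UserWarning survives
```

Matching on both `message` and `category` keeps the filter narrow, so genuinely new FutureWarnings from other packages still surface.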

Changelog

  • Add specific line by line info of high level changes in this PR.

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?

If you haven't finished some of the above items, you can still open a "Draft" PR.

Additional Information

  • Related to # (issue)

@copy-pr-bot

copy-pr-bot bot commented Apr 10, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@akoumpa akoumpa changed the title from "fix: mute unsupported field attribute warning warning" to "fix: mute unsupported field attribute warning on startup" Apr 10, 2026
@akoumpa akoumpa force-pushed the akoumparouli/fix_mute_UnsupportedFieldAttributeWarning_warning branch from 8cd8942 to 0560c2c on April 10, 2026 21:11
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
@akoumpa akoumpa force-pushed the akoumparouli/fix_mute_UnsupportedFieldAttributeWarning_warning branch from 0560c2c to 9673b6c on April 10, 2026 21:12
@akoumpa
Contributor Author

akoumpa commented Apr 10, 2026

/ok to test 9673b6c

@akoumpa akoumpa marked this pull request as ready for review April 10, 2026 23:39
@akoumpa akoumpa requested a review from HuiyingLi as a code owner April 10, 2026 23:39
@akoumpa akoumpa merged commit e095001 into main Apr 14, 2026
55 of 56 checks passed
@akoumpa akoumpa deleted the akoumparouli/fix_mute_UnsupportedFieldAttributeWarning_warning branch April 14, 2026 22:37
edjson pushed a commit to edjson/Automodel that referenced this pull request Apr 17, 2026
…#1773)

mute warnings v2

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
edjson pushed a commit to edjson/Automodel that referenced this pull request Apr 18, 2026
…#1773)

mute warnings v2

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: Edison <edisonggacc@gmail.com>