
feat: add MiniMax as LLM evaluation provider #983

Open

octo-patch wants to merge 1 commit into h2oai:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as an alternative LLM provider for GPT-based model evaluation, alongside OpenAI and Azure OpenAI.

MiniMax provides OpenAI-compatible chat completion models (M2.7, M2.5, M2.5-highspeed) that can serve as AI judges for evaluating fine-tuned model outputs.

Changes

  • Metrics (text_causal_language_modeling_metrics.py): Extend the get_openai_client() factory to support MiniMax via OPENAI_API_TYPE=minimax, or via auto-detection when MINIMAX_API_KEY is set and OPENAI_API_KEY is not (see the sketch after this list)
  • Config (text_causal_language_modeling_config.py): Add MiniMax-M2.7, MiniMax-M2.5, MiniMax-M2.5-highspeed to the predefined metric model dropdown
  • Validation (utils.py): Update check_metric() to preserve GPT metric when MINIMAX_API_KEY is available
  • Settings UI (settings.py): Add MiniMax API Token field in the settings page
  • Environment (app_utils/utils.py, config.py): Wire MINIMAX_API_KEY through training environment variables and default settings
  • Documentation: Update evaluation guide with MiniMax setup instructions and tooltip descriptions
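
For orientation, here is a minimal sketch of what the extended factory could look like. The MiniMax base URL is a placeholder and the Azure branch is omitted for brevity; this is an assumption-laden sketch, not the exact code in this diff:

import os

from openai import OpenAI  # openai>=1.x client; the Azure branch is omitted here

# Placeholder endpoint; the PR may configure a different base URL.
MINIMAX_BASE_URL = "https://api.minimax.chat/v1"


def get_openai_client() -> OpenAI:
    """Return a chat-completions client for OpenAI or MiniMax."""
    api_type = os.getenv("OPENAI_API_TYPE", "").lower()
    use_minimax = api_type == "minimax" or (
        os.getenv("MINIMAX_API_KEY") and not os.getenv("OPENAI_API_KEY")
    )
    if use_minimax:
        # MiniMax exposes an OpenAI-compatible chat completions API, so the
        # standard client works once base_url points at MiniMax.
        return OpenAI(
            api_key=os.environ["MINIMAX_API_KEY"],
            base_url=MINIMAX_BASE_URL,
        )
    return OpenAI(api_key=os.environ["OPENAI_API_KEY"])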

Usage

Option 1 — Auto-detect (simplest):

export MINIMAX_API_KEY="your-key"
# No OPENAI_API_KEY needed — MiniMax is used automatically

Option 2 — Explicit selection (when both keys are set):

export OPENAI_API_TYPE=minimax
export MINIMAX_API_KEY="your-key"

Then select MiniMax-M2.7 (or M2.5 / M2.5-highspeed) as the Metric Gpt Model in the experiment configuration.
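
Once the client resolves to MiniMax, the evaluation call itself is the same OpenAI-style chat completion. The prompt and scoring format below are placeholders for illustration only, not the judge prompt used by the metric:

target = "Paris is the capital of France."
prediction = "The capital of France is Paris."

client = get_openai_client()
response = client.chat.completions.create(
    model="MiniMax-M2.7",  # or MiniMax-M2.5 / MiniMax-M2.5-highspeed
    messages=[
        {"role": "system", "content": "You are a strict grader. Reply with a single score from 0 to 10."},
        {"role": "user", "content": f"Reference:\n{target}\n\nModel answer:\n{prediction}"},
    ],
)
score = response.choices[0].message.content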

Test plan

  • 7 unit tests for get_openai_client() covering all provider paths (default OpenAI, MiniMax via API type, MiniMax auto-detect, preference when both keys are set, explicit override, custom base URL, Azure); one example is sketched after this list
  • 2 new unit tests for check_metric() with MiniMax key scenarios
  • 3 integration tests for MiniMax provider (auto-detect client, explicit client, check_metric preservation)
  • 1 live integration test calling MiniMax API (skipped when MINIMAX_API_KEY not set)
  • All 7 existing BLEU score tests continue to pass
  • Both existing check_metric() tests continue to pass
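
As an illustration of the auto-detect case, one of the unit tests could look roughly like this; the test name and import path are guesses, not the exact tests in this PR:

# Import path is illustrative; point it at wherever get_openai_client() lives.
from llm_studio.src.metrics.text_causal_language_modeling_metrics import (
    get_openai_client,
)


def test_minimax_auto_detected_when_only_minimax_key_is_set(monkeypatch):
    monkeypatch.delenv("OPENAI_API_KEY", raising=False)
    monkeypatch.delenv("OPENAI_API_TYPE", raising=False)
    monkeypatch.setenv("MINIMAX_API_KEY", "test-key")

    client = get_openai_client()

    # With only MINIMAX_API_KEY present, the client should target MiniMax.
    assert "minimax" in str(client.base_url)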

11 files changed, 259 additions(+), 14 deletions(-)

Add MiniMax (MiniMax-M2.7, M2.5, M2.5-highspeed) as an alternative
LLM provider for GPT-based model evaluation alongside OpenAI and Azure.

- Extend get_openai_client() with MiniMax provider support via
  OPENAI_API_TYPE=minimax or auto-detection when MINIMAX_API_KEY is set
- Add MiniMax models to the metric_gpt_model predefined dropdown
- Update check_metric() to recognize MINIMAX_API_KEY
- Add MiniMax API Token field in Settings UI
- Wire MINIMAX_API_KEY through the training environment
- Update evaluation documentation and tooltips
- Add 7 unit tests and 4 integration tests (including live API)