
feat: add MiniMax as first-class LLM provider#955

Open
octo-patch wants to merge 1 commit into crestalnetwork:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as a native LLM provider, enabling direct API access via MiniMax's OpenAI-compatible endpoint (api.minimax.io/v1) without requiring OpenRouter as an intermediary.

Changes

  • intentkit/models/llm.py: Add MINIMAX to LLMProvider enum with is_configured / display_name support; add MiniMaxLLM class (extends LLMModel, uses ChatOpenAI with MiniMax base URL); register in create_llm_model() factory
  • intentkit/models/llm.csv: Add MiniMax-M2.7 (intelligence=5, 204K context) and MiniMax-M2.7-highspeed (speed=5, cost-efficient) native model entries
  • intentkit/models/llm_picker.py: Update pick_default_model() and pick_summarize_model() to prefer native MiniMax when MINIMAX_API_KEY is configured, with graceful fallback to OpenRouter
  • intentkit/config/config.py: Add MINIMAX_API_KEY environment variable
  • .env.example: Add MINIMAX_API_KEY= entry

Models

| Model | Intelligence | Speed | Context | Input Price | Output Price |
|---|---|---|---|---|---|
| MiniMax-M2.7 | 5/5 | 3/5 | 204K | $0.30/M | $1.20/M |
| MiniMax-M2.7-highspeed | 4/5 | 5/5 | 204K | $0.07/M | $0.28/M |
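To make the per-million-token prices concrete, here is a small cost estimate using the table's numbers. The helper function is illustrative only, not part of the PR.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the USD cost of one request given per-million-token prices."""
    return (input_tokens / 1_000_000 * input_price_per_m
            + output_tokens / 1_000_000 * output_price_per_m)


# A 10K-in / 2K-out request on MiniMax-M2.7 ($0.30/M in, $1.20/M out)
# comes to roughly $0.0054; the same request on -highspeed is about 4x cheaper.
print(estimate_cost(10_000, 2_000, 0.30, 1.20))
```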

Backward Compatibility

  • Existing OpenRouter-proxied MiniMax models (minimax/minimax-m2.7, minimax/minimax-m2-her) remain unchanged
  • Native provider only activates when MINIMAX_API_KEY is set
  • No changes to existing providers or their behavior
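The key-gated fallback can be sketched as follows. The function name matches `pick_default_model` from the PR description, but the body and the fallback model id handling are assumptions about how the picker behaves, not the actual `llm_picker.py` code.

```python
import os


def pick_default_model() -> str:
    """Prefer the native MiniMax model only when its API key is configured;
    otherwise fall back to the existing OpenRouter-proxied model id,
    leaving current deployments unchanged (illustrative sketch)."""
    if os.getenv("MINIMAX_API_KEY"):
        return "MiniMax-M2.7"          # native entry from llm.csv
    return "minimax/minimax-m2.7"      # OpenRouter-proxied entry
```

Because the native branch is reachable only when `MINIMAX_API_KEY` is set, deployments without the key see identical behavior before and after this PR.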

Test Plan

  • 30 unit and integration tests added (all passing)
  • test_llm.py: Model loading/filtering with MiniMax key, coexistence with OpenRouter
  • test_llm_picker.py: Priority ordering with native MiniMax, fallback behavior
  • test_minimax_llm.py: Provider enum, LLM class (base URL, API key, temperature, model override, params), factory routing, model info
  • Ruff format and check pass
  • All existing tests continue to pass

Co-Authored-By: Octopus liyuan851277048@icloud.com

Add native MiniMax LLM provider support via OpenAI-compatible API,
enabling direct API access without OpenRouter proxy overhead.

- Add MINIMAX enum to LLMProvider with is_configured/display_name
- Add MiniMaxLLM class using ChatOpenAI with api.minimax.io/v1
- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to model CSV
- Register MiniMax in create_llm_model factory
- Update llm_picker to prefer native MiniMax when API key is set
- Add MINIMAX_API_KEY to config and .env.example
- Add 30 unit and integration tests (all passing)
