
feat(chat_v2): add MiniMax AI as LLM provider#101

Open
octo-patch wants to merge 1 commit into ruc-datalab:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch commented Mar 22, 2026

Summary

  • Add MiniMax AI as a third model provider in the chat demo alongside Local (vLLM) and HeyWhale API
  • Users can now select "MiniMax AI" from the Model Provider dropdown and use cloud-hosted MiniMax-M2.7 models without running a local model service
  • Includes MiniMax-specific temperature clamping (0.01-1.0) and API key management in both backend and frontend

Changes

Backend (demo/chat_v2/backend_app/services/chat.py):

  • _iter_minimax_stream(): new streaming function using the OpenAI-compatible SDK with the MiniMax base URL
  • _normalize_minimax_temperature(): clamps the temperature to the MiniMax-accepted range [0.01, 1.0]
  • Extended build_chat_runtime_config() with minimax provider routing and auto-defaults for the API base and model
  • Updated the bot_stream() dispatch to route the minimax provider to the new stream iterator (a rough sketch of the two new helpers follows this list)

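For orientation, here is a rough sketch of what the two new helpers could look like. The function names, model name, base URL, and temperature range are taken from this PR and the discussion below; the parameter names, defaults, and exact SDK calls are assumptions, not the actual implementation.

```python
# Rough sketch only: function names and the [0.01, 1.0] range come from this PR;
# parameter names, defaults, and error handling are assumptions.
from openai import OpenAI  # MiniMax exposes an OpenAI-compatible endpoint

MINIMAX_API_BASE = "https://api.minimax.io/v1"  # default base URL (see discussion below)
MINIMAX_MODEL = "MiniMax-M2.7"                  # default model name used by the frontend


def _normalize_minimax_temperature(temperature: float) -> float:
    """Clamp the temperature into the MiniMax-accepted range [0.01, 1.0]."""
    return min(max(temperature, 0.01), 1.0)


def _iter_minimax_stream(messages, api_key, temperature=0.7, model=MINIMAX_MODEL):
    """Yield text deltas from a streaming MiniMax chat completion."""
    client = OpenAI(api_key=api_key, base_url=MINIMAX_API_BASE)
    stream = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=_normalize_minimax_temperature(temperature),
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            yield delta
```
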
Frontend (demo/chat_v2/frontend/components/three-panel-interface.tsx):

  • Added a MiniMax AI option to the Model Provider Select dropdown
  • Added a MiniMax API Key input with sessionStorage persistence
  • Validation: blocks send if MiniMax is selected without an API key
  • Sends the correct model name (MiniMax-M2.7) when the MiniMax provider is active

Documentation (README.md, README_ZH.md):

  • Listed MiniMax AI as a supported provider in the features section
  • Added a note about using MiniMax without a local model service

Test plan

  • 27 unit tests: temperature clamping, config building, stream mocking, provider routing (all pass; one of the clamping tests is sketched below)
  • 3 integration tests: live MiniMax API calls for basic completion, streaming, and temperature (all pass)
  • Manual: select MiniMax AI in the UI, enter an API key, and send a data analysis prompt
  • Verify that the Local and HeyWhale providers still work unchanged

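For illustration, a minimal pytest sketch of the clamping check referenced above; the import path and the exact cases are assumptions, not the tests actually added in this PR.

```python
# Hypothetical sketch of the temperature-clamping unit tests; the module path is assumed.
import pytest
from demo.chat_v2.backend_app.services.chat import _normalize_minimax_temperature


@pytest.mark.parametrize(
    "raw, expected",
    [(0.0, 0.01), (0.5, 0.5), (1.5, 1.0)],  # below range, in range, above range
)
def test_minimax_temperature_is_clamped(raw, expected):
    assert _normalize_minimax_temperature(raw) == expected
```
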
Add MiniMax AI (https://www.minimaxi.com) as a third model provider
alongside Local (vLLM) and HeyWhale API. This allows users to leverage
cloud-hosted MiniMax models (M2.7) without running a local model service.

Backend changes:
- Add _iter_minimax_stream() using OpenAI-compatible SDK
- Extend build_chat_runtime_config() with minimax provider routing
- Add MiniMax-specific temperature clamping (0.01-1.0)

Frontend changes:
- Add MiniMax AI option in Model Provider dropdown
- Add MiniMax API Key input field with persistence
- Wire model name and API key into chat request

Tests:
- 27 unit tests covering temperature clamping, config building,
  stream mocking, and provider routing
- 3 integration tests hitting the live MiniMax API
@LIUyizheSDU (Collaborator) commented:

Is the API base https://api.minimaxi.com/v1? I’ve registered a MiniMax account, but my API key doesn’t work with https://api.minimaxi.io/v1.

@octo-patch (Author) commented:

Thanks for trying it out @LIUyizheSDU!

The correct API base URL for MiniMax is:

https://api.minimax.io/v1

Note: It's minimax.io, not minimaxi.com. The URL in the code should already be set correctly. If you're seeing issues, please double-check:

  1. Your API key is valid (you can get one from the MiniMax platform)
  2. The base URL in the configuration is https://api.minimax.io/v1
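
For reference, a quick way to sanity-check the key and base URL with the OpenAI-compatible SDK (this snippet is only a sketch and is not part of the change; the model name is taken from this PR):

```python
from openai import OpenAI

# Sanity check: the base URL is minimax.io, not minimaxi.com.
client = OpenAI(api_key="YOUR_MINIMAX_API_KEY", base_url="https://api.minimax.io/v1")
resp = client.chat.completions.create(
    model="MiniMax-M2.7",  # model name as used in this PR
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```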

Let me know if you run into any other issues!
