fix(litellm): Avoid double span exits when streaming #5933
1 issue
code-review: Found 1 issue (1 medium)
Medium
Test may fail if `openai` is not installed, since the `streaming_chat_completions_model_response` fixture uses `openai` types - `tests/integrations/litellm/test_litellm.py:238`
The `streaming_chat_completions_model_response` fixture in `tests/conftest.py` uses `openai.types.chat.ChatCompletionChunk` without guarding against `openai` being `None` (it is imported conditionally). The test relies on this fixture, but the test file has no `pytest.importorskip('openai')` or similar guard for this dependency. If `openai` is not installed when the litellm tests run, the fixture will raise a runtime error when it tries to access `openai.types`.
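A minimal sketch of the guard the review suggests (not the PR's actual fix): `pytest.importorskip` returns the module when it is importable and skips the calling test module otherwise, so placing it at the top of `tests/integrations/litellm/test_litellm.py` would keep the fixture from ever dereferencing a `None` `openai`. The snippet below demonstrates the call with a stdlib module so it runs anywhere:

```python
import pytest

# In test_litellm.py the guard would read:
#     openai = pytest.importorskip("openai")
# which skips every test in that module when openai is missing.
# Demonstrated here with a module that is always importable:
json_mod = pytest.importorskip("json")
print(json_mod.__name__)
```

The module-level call is the usual pattern when an entire test file depends on an optional package; a per-fixture `pytest.skip(...)` would also work if only some tests in the file need it.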
Duration: 2m 2s · Tokens: 653.5k in / 4.6k out · Cost: $1.04 (+extraction: $0.00)
Annotations
Check warning on line 238 in tests/integrations/litellm/test_litellm.py
sentry-warden / warden: code-review