feat(cerebras): integrate Cerebras cloud SDK for native completion support #5735
lorenzejay wants to merge 1 commit into main
- Added support for the Cerebras cloud SDK, enabling native chat completions.
- Introduced the `CerebrasCompletion` class for handling requests to the Cerebras API.
- Updated dependencies to include `cerebras-cloud-sdk`.
- Added tests for the new Cerebras provider, including unit tests and VCR tests for API interactions.
- Updated existing files to accommodate the new provider and its configurations.
```python
try:
    self._client = self._build_sync_client()
    self._async_client = self._build_async_client()
except ValueError:
```
Cursor Bugbot has reviewed your changes and found 2 potential issues.
Reviewed by Cursor Bugbot for commit b27ab32.
```python
    )
    raise e from e
# ...
return content
```
Async non-streaming path missing after_llm_call hooks invocation
Medium Severity
The `_ahandle_completion` (async non-streaming) method is missing the `_invoke_after_llm_call_hooks` call that exists in the equivalent sync path `_handle_completion` (lines 532-534), the sync streaming path (line 706), and the async streaming path (line 987). As a result, registered `after_llm_call` hooks silently never fire on the async non-streaming code path.
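A minimal, runnable sketch of the suggested fix. The class and hook plumbing below are simplified stand-ins (not the actual crewai implementation); only the method names `_handle_completion`, `_ahandle_completion`, and `_invoke_after_llm_call_hooks` are taken from the review, and the point is that the async non-streaming path invokes the same after-call hook as the sync path:

```python
import asyncio


class FakeCompletion:
    """Simplified stand-in for the provider class; only hook plumbing shown."""

    def __init__(self) -> None:
        self.hooks_fired: list[str] = []

    def _invoke_after_llm_call_hooks(self, response: str) -> None:
        self.hooks_fired.append(response)

    def _handle_completion(self, prompt: str) -> str:
        content = f"sync:{prompt}"
        self._invoke_after_llm_call_hooks(content)  # sync path already does this
        return content

    async def _ahandle_completion(self, prompt: str) -> str:
        content = f"async:{prompt}"
        self._invoke_after_llm_call_hooks(content)  # the missing call, mirrored from sync
        return content


llm = FakeCompletion()
llm._handle_completion("hi")
asyncio.run(llm._ahandle_completion("hi"))
print(llm.hooks_fired)  # ['sync:hi', 'async:hi']
```

With the mirrored call in place, both entry points record a hook invocation, which is what the referenced test paths expect of the real class.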
```python
    CerebrasCompletion,
)
# ...
return CerebrasCompletion
```
Cerebras import lacks fallback when SDK missing
High Severity
The `if provider == "cerebras"` block in `_get_native_provider` doesn't wrap the import in a try/except ImportError. When `cerebras-cloud-sdk` isn't installed, the ImportError propagates uncaught, preventing the intended fallback to `OpenAICompatibleCompletion` (which still lists "cerebras" at line 623). The corresponding test `test_falls_back_to_openai_compat_when_sdk_missing` expects graceful degradation that the code doesn't implement.


Note
Medium Risk

Introduces a new native LLM provider with its own request/streaming/tool-calling logic plus a new third-party SDK dependency, which could affect routing and runtime behavior for `cerebras/*` models. Risk is moderated by added unit + VCR coverage and by keeping the LiteLLM/OpenAI-compatible fallback behavior.

Overview
Adds a native `cerebras` provider backed by `cerebras-cloud-sdk`, enabling direct `chat.completions` calls (sync/async, streaming with usage, tool calling, and structured output validation) via the new `CerebrasCompletion` implementation.

Updates the LLM factory routing to return `CerebrasCompletion` for `provider="cerebras"` or `cerebras/<model>` prefixes (while still allowing an OpenAI-compatible fallback when the SDK extra isn't installed), and introduces `CEREBRAS_MODELS` constants for model validation.

Wires a new optional extra `crewai[cerebras]` (with lockfile updates) and adds unit + VCR replay tests/cassettes covering basic completion, streaming, and parameter passthrough (e.g., `temperature`, `seed`).