Bump dependencies and make evaluation not installed in testing #172
sonleoracle wants to merge 3 commits into
Conversation
Force-pushed from 115da65 to 5d00e91
Internal regression succeeded 🍏: Build ID #429
from litellm import acompletion
from litellm.types.utils import ModelResponse

acompletion: Any
It's because static type-checking under TYPE_CHECKING still requires [evaluation] to be installed. In this PR I am trying to remove that from requirements-dev.txt, so it is not installed in the CI, and the static checks would crash.
We were already doing the same (i.e., not installing it by default) with crewai, and the type checker was not complaining, afaik. Am I missing something?
The type checker did not complain about crewai because in GitHub Actions (.github/workflows/tests.yaml), mypy is run with --exclude pyagentspec/src/pyagentspec/adapters/crewai. If mypy were also run on the crewai adapter, it would complain about the uninstalled dependencies.
On the other hand, our regression CI also ignores missing crewai imports.
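For reference, the exclusion presumably amounts to an invocation along these lines (the checked path is an assumption; only the --exclude pattern is quoted from the workflow):

```
mypy pyagentspec/src --exclude pyagentspec/src/pyagentspec/adapters/crewai
```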
I just added back the static imports like before, but with mypy ignore flags.
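A minimal sketch of what such guarded imports could look like (the TYPE_CHECKING guard, the specific ignore code, and the else branch are assumptions based on the diff above, not necessarily the PR's exact change):

```python
from typing import TYPE_CHECKING, Any

if TYPE_CHECKING:
    # Only evaluated by static type checkers; litellm comes from the
    # optional [evaluation] extra and may be absent in CI.
    from litellm import acompletion  # type: ignore[import-not-found]
    from litellm.types.utils import ModelResponse  # type: ignore[import-not-found]
else:
    # Annotation-only declaration so the name is known for annotations
    # without importing litellm at runtime.
    acompletion: Any
```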
def _is_litellm_model_response(response: Any) -> bool:
    ModelResponse = getattr(importlib.import_module("litellm.types.utils"), "ModelResponse")
Let's avoid importlib; it's not needed.
Changed to a dynamic import of litellm.
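A sketch of what the importlib-free version might look like (the function name comes from the diff above; the body beyond the import is an assumption):

```python
from typing import Any


def _is_litellm_model_response(response: Any) -> bool:
    try:
        # Deferred plain import: only runs when the check is called, and
        # fails cleanly if the optional [evaluation] extra is absent.
        from litellm.types.utils import ModelResponse
    except ImportError:
        return False
    return isinstance(response, ModelResponse)
```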
# CrewAI is not installed by default due to dependency incompatibility with other frameworks
# -e .[crewai]
# Evaluation is not installed by default due to dependency incompatibility with langchain-openai
# -e .[evaluation]
This means that langgraph and evaluation cannot be used together. We must find a solution to this. What is causing the collision? Could you at least file an issue to keep track of this?
We need langchain-openai>=1.1.14, which requires openai>=2.26, but we also need litellm==1.83.14, which requires openai==2.24.
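Spelled out, the unsatisfiable constraints are roughly (versions taken from the comment above):

```
langchain-openai>=1.1.14   # requires openai>=2.26
litellm==1.83.14           # requires openai==2.24
# No openai version satisfies both >=2.26 and ==2.24, so resolution fails.
```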
Should we just wait for a litellm release that supports openai>=2.26?
Force-pushed from 5d00e91 to f36d737
Internal regression succeeded 🍏: Build ID #433
Closes #116, #167, #168, #169, #170
Updates dependency pins for the 26.2 branch and separates evaluation from the default dev install because of dependency conflicts with the LangGraph/OpenAI stack.
Key changes:
Note that this means the CI will stop running tests for the evaluation submodule.