feat(dotAI): Dot AI LangChain4J integration (First provider/OpenAI)#35150
Merged
ihoffmann-dot merged 77 commits into main on Apr 20, 2026
Conversation
Contributor
Claude finished @ihoffmann-dot's task in 3m 15s: Rollback Safety Analysis
Result: Unsafe to Rollback — two categories matched:
nollymar reviewed Mar 31, 2026
nollymar reviewed Mar 31, 2026
…lableModels from AIModels
nollymar approved these changes Apr 17, 2026
…ide, cache key sentinel
…ns param, replace sha256Hex with DigestUtils
fabrizzio-dotCMS approved these changes Apr 17, 2026
Contributor
Pull Request Unsafe to Rollback!!!
This was referenced Apr 22, 2026
riccardoruocco pushed a commit to riccardoruocco/core that referenced this pull request Apr 27, 2026
…MS#35445)

## Summary

Replaces the generic Apps UI textarea for `providerConfig` with a dedicated Angular screen that shows the current config and an example JSON side by side. Also ports backend fixes from dotCMS#35426 and adds a `PUT /api/v1/ai/completions/config` endpoint with a credential-preserving merge.

- New `DotAiConfigDetailComponent`: two-column layout, with an editable textarea on the left and the formatted example JSON on the right
- `DotAiConfigDetailResolver`: dedicated resolver that hardcodes `dotAI` as `appKey`, fixing the 404 caused by the generic resolver reading `null` from route params
- Route `dotAI/edit/:id` added before the generic `:appKey/edit/:id` so dotAI navigates to the custom screen
- `ChangeDetectorRef.detectChanges()` after async config load, fixing the textarea not rendering its value until user interaction
- `PUT /api/v1/ai/completions/config` (admin-only): saves the `providerConfig` JSON; `ProviderConfigMerger` preserves stored credentials when the payload contains `*****` sentinel values
- `dotAI.yml` description updated to reference OpenAI/ChatGPT directly
- Ported from dotCMS#35426: flush SSE chunks, `cancelled` flag on `IOException`, `maxRetries` warning for streaming, null check in `parseSection`, `deepCopy` in `injectApiKeyIntoSections`, `maxTokens` → `max_completion_tokens` routing for OpenAI o-series models, PR review refactors

## Configuration

```json
{
  "chat": {
    "provider": "openai",
    "apiKey": "sk-...",
    "model": "gpt-4o",
    "maxTokens": 16384,
    "temperature": 1.0,
    "maxRetries": 3,
    "rolePrompt": "You are dotCMSbot, an AI assistant to help content creators.",
    "textPrompt": "Use Descriptive writing style."
  },
  "embeddings": {
    "provider": "openai",
    "apiKey": "sk-...",
    "model": "text-embedding-ada-002",
    "listenerIndexer": {
      "default": "blog,news,webPageContent"
    }
  },
  "image": {
    "provider": "openai",
    "apiKey": "sk-...",
    "model": "dall-e-3",
    "size": "1792x1024",
    "imagePrompt": "Use 16:9 aspect ratio."
  }
}
```

## Notes

- The custom UI is accessed via `#/apps/dotAI/edit/:siteId`; the "App screen" link in `render.jsp` already points to this URL
- Credential masking: fields with `*****` in a PUT payload are replaced with the stored values before saving, so partial edits don't wipe secrets
- `providerConfig` is not required; omitting it disables the AI features gracefully

## Related Issues

- [feat(dotAI): Dot AI LangChain4J integration (First provider/OpenAI) dotCMS#35150](dotCMS#35150)
- [fix(dotAI): Dot AI LangChain4J - ProviderConfig fixes dotCMS#35426](dotCMS#35426)
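The credential-preserving merge can be sketched as follows. Only the `*****` sentinel behavior is described in the PR; the class name, method signature, and recursive-map approach below are assumptions for illustration, not the actual dotCMS implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of a credential-preserving merge: incoming values equal
// to the "*****" sentinel are replaced by the previously stored value, so
// partial edits through the UI never wipe secrets. Names are hypothetical.
public final class ProviderConfigMerger {

    private static final String SENTINEL = "*****";

    @SuppressWarnings("unchecked")
    public static Map<String, Object> merge(final Map<String, Object> stored,
                                            final Map<String, Object> incoming) {
        final Map<String, Object> merged = new LinkedHashMap<>();
        for (final Map.Entry<String, Object> entry : incoming.entrySet()) {
            final Object incomingValue = entry.getValue();
            final Object storedValue = stored == null ? null : stored.get(entry.getKey());
            if (SENTINEL.equals(incomingValue) && storedValue != null) {
                merged.put(entry.getKey(), storedValue);   // keep the stored credential
            } else if (incomingValue instanceof Map && storedValue instanceof Map) {
                merged.put(entry.getKey(),
                        merge((Map<String, Object>) storedValue,
                              (Map<String, Object>) incomingValue));  // recurse into sections
            } else {
                merged.put(entry.getKey(), incomingValue); // take the new value as-is
            }
        }
        return merged;
    }
}
```

A masked `apiKey` inside a section such as `chat` survives the PUT, while edited fields in the same payload are still updated.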
riccardoruocco pushed a commit to riccardoruocco/core that referenced this pull request Apr 27, 2026
…uery param (dotCMS#35456)

## Summary

Config endpoints were always resolving the target host from the HTTP `Host` header, making it impossible to read or save configuration for a specific site (including `SYSTEM_HOST`) through the API.

- `GET /api/v1/ai/completions/config` and `PUT /api/v1/ai/completions/config` now accept an optional `?siteId=<identifier>` query param to scope the operation to a specific site
- `SYSTEM_HOST` is supported as a special-case value
- Falls back to HTTP host resolution when `siteId` is not provided (backward compatible)
- Frontend passes the site identifier from the route param (`dotAI/edit/:id`) on both load and save

## Notes

- Without this fix, saving config through the custom UI always targeted the site resolved from the HTTP `Host` header (e.g. `demo.dotcms.com`), never `SYSTEM_HOST`. Background threads use `SYSTEM_HOST` config, so embeddings would silently pick up stale configuration.
- `SYSTEM_HOST` can now be explicitly targeted by passing `?siteId=SYSTEM_HOST`.

## Related Issue

This PR fixes dotCMS#35150
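The resolution order described in this commit (explicit `siteId` wins, with `SYSTEM_HOST` as a special case, otherwise fall back to the HTTP `Host` header) can be sketched as a small pure function. The class and method names are hypothetical; the real resource works with dotCMS `Host` objects rather than strings:

```java
// Illustrative sketch of the site-resolution order: an explicit ?siteId=
// query param takes precedence (SYSTEM_HOST is honored as a special case),
// otherwise the host from the HTTP Host header is used, preserving the
// previous, backward-compatible behavior. Names are hypothetical.
public final class SiteResolver {

    public static final String SYSTEM_HOST = "SYSTEM_HOST";

    public static String resolveTargetSite(final String siteIdParam, final String httpHost) {
        if (siteIdParam != null && !siteIdParam.isBlank()) {
            // Explicit scope requested via ?siteId=..., including SYSTEM_HOST
            return SYSTEM_HOST.equalsIgnoreCase(siteIdParam) ? SYSTEM_HOST : siteIdParam;
        }
        // Backward-compatible fallback: resolve from the HTTP Host header
        return httpHost;
    }
}
```

With this shape, a request without the param behaves exactly as before, while `?siteId=SYSTEM_HOST` lets the UI target the config that background threads actually read.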
ihoffmann-dot added a commit that referenced this pull request Apr 28, 2026
…35456) (#35494)

## Summary

Reverts the LangChain4J integration and related changes merged in:

- #35150: LangChain4J integration (Phase 1 / OpenAI)
- #35445: Custom UI for provider config
- #35456: Per-site config support via siteId

Restores dotAI to the state at tag `v26.04.28-01` prior to those merges.

## Test plan

- Verify the dotAI app configuration UI works as before
- Verify AI completions, embeddings, and image generation function with the original OpenAI client
Summary

Replaces the direct OpenAI HTTP client (`OpenAIClient`) with a LangChain4J abstraction layer, enabling multi-provider AI support in dotCMS. This PR covers Phase 1: OpenAI via LangChain4J.

Changes

- `LangChain4jAIClient.java`: New `AIProxiedClient` implementation that delegates chat, embeddings, and image requests to LangChain4J models.
- `LangChain4jModelFactory.java`: Factory that builds `ChatModel`, `EmbeddingModel`, and `ImageModel` instances from a `ProviderConfig`. Only place with provider-specific builder logic.
- `ProviderConfig.java`: Deserializable POJO for the `providerConfig` JSON secret (per provider section: model, apiKey, endpoint, maxTokens, maxCompletionTokens, etc.).
- `AppConfig.java`: Replaced legacy individual-field secrets (apiKey, model, etc.) with a single `providerConfig` JSON string. `isEnabled()` now only checks this field.
- `AIAppValidator.java`: Removed the OpenAI `/v1/models` validation call, which is incompatible with the multi-provider architecture.
- `CompletionsResource.java`: Updated `/api/v1/ai/completions/config` to derive model names and config values from `AppConfig` getters instead of iterating raw `AppKeys`.
- `dotAI.yml`: Removed legacy hidden fields; added `providerConfig` as the single configuration entry point.
- Tests: New unit tests for `ProviderConfig`, `LangChain4jModelFactory`, and `LangChain4jAIClient`; updated the `AIProxyClientTest` integration test to use `providerConfig`-based setup.

Motivation

The previous implementation was tightly coupled to OpenAI's API contract (hardcoded HTTP calls, OpenAI-specific parameters, model validation via `/v1/models`). LangChain4J provides a provider-agnostic model interface, allowing future phases to add Azure OpenAI, AWS Bedrock, and Vertex AI without touching the core request/response flow.

The `providerConfig` JSON secret replaces multiple individual secrets with a single structured configuration, supporting per-section (chat/embeddings/image) provider and model settings.

Related Issue

This PR fixes #35183
EPIC: dotAI Multi-Provider Support #33970
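The split between `maxTokens` and `maxCompletionTokens` in `ProviderConfig` mirrors OpenAI's requirement that newer reasoning ("o-series") models receive `max_completion_tokens` instead of the legacy `max_tokens` parameter. A minimal sketch of that routing decision follows; the helper name and the prefix check are assumptions for illustration, not dotCMS code:

```java
// Illustrative sketch: OpenAI o-series models (o1, o1-mini, o3, o4-mini, ...)
// reject the legacy max_tokens parameter and expect max_completion_tokens.
// The "o" + digit prefix check is an assumption for illustration; it
// deliberately does not match names like "gpt-4o".
public final class MaxTokensRouting {

    public static String maxTokensParamFor(final String modelName) {
        return isOSeries(modelName) ? "max_completion_tokens" : "max_tokens";
    }

    static boolean isOSeries(final String modelName) {
        return modelName != null && modelName.matches("o\\d.*");
    }
}
```

Keeping both fields in the config lets a single section describe either kind of model without the client guessing which limit applies.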
Note

High Risk

High risk because it replaces the core AI client/provider path (OpenAI HTTP + model fallback/validation) with a new LangChain4J-backed implementation and a new `providerConfig` secret format, impacting chat, embeddings, and image generation behavior and configuration compatibility.

Overview

dotAI now routes chat, embeddings, and image requests through a new LangChain4J-backed client (`LangChain4jAIClient`) and sets LangChain4J as the default provider, replacing the direct OpenAI HTTP client and removing the model-fallback strategy.

Configuration is migrated from many per-field secrets to a single `providerConfig` JSON (with hashing + per-host model caching), updating `AppConfig.isEnabled()`, `dotAI.yml`, and the `/v1/ai/completions/config` output (including credential redaction). Several OpenAI-specific model management/validation classes and tests are removed, and integration/unit tests are updated/added for the new providerConfig + LangChain4J flow.

Embeddings/image handling is adjusted: embeddings requests now send raw text (with a token-count fallback when encoding is unavailable), the async thread pool key is renamed to `AIThreadPool`, max-token resolution is made more resilient, and image temp-file creation now supports base64 (`b64_json`) responses.

Reviewed by Cursor Bugbot for commit 31cb86e.
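The base64 image path mentioned in the overview can be sketched like this: OpenAI image endpoints may return the image either as a URL or inline as a `b64_json` payload, and the inline case has to be decoded and written to a temp file. The helper name and the `Files.createTempFile` strategy are assumptions, not the actual dotCMS implementation:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

// Illustrative sketch: decode an inline b64_json image payload and write it
// to a temp file so downstream code can treat URL and base64 responses the
// same way. Names and temp-file strategy are hypothetical.
public final class Base64ImageWriter {

    public static Path writeB64Image(final String b64Json, final String suffix)
            throws IOException {
        final byte[] bytes = Base64.getDecoder().decode(b64Json);
        final Path tempFile = Files.createTempFile("dotai-image-", suffix);
        Files.write(tempFile, bytes);
        return tempFile;
    }
}
```

Normalizing both response shapes to a local file keeps the rest of the image pipeline unchanged regardless of how the provider returned the bytes.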