2 changes: 1 addition & 1 deletion .github/.release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "1.29.0"
".": "1.30.0"
}
2 changes: 1 addition & 1 deletion .github/release-please-config.json
@@ -1,6 +1,6 @@
{
"$schema": "https://raw.githubusercontent.com/googleapis/release-please/main/schemas/config.json",
"last-release-sha": "6b1600fbf53bcf634c5fe4793f02921bc0b75125",
"last-release-sha": "80a7ecf4b31e4c6de4a1425b03422f384c1a032d",
"packages": {
".": {
"release-type": "python",
27 changes: 27 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,32 @@
# Changelog

## [1.30.0](https://github.com/google/adk-python/compare/v1.29.0...v1.30.0) (2026-04-13)


### Features

* Add Auth Provider support to agent registry ([f2c68eb](https://github.com/google/adk-python/commit/f2c68eb1536f1c0018c2cf7ee3f4417ca442080c))
* Add Parameter Manager integration to ADK ([b0715d7](https://github.com/google/adk-python/commit/b0715d77a2a433bb2ed07a2475cc4d1f2d662b6c))
* Add support for Gemma 4 models in ADK ([9d4ecbe](https://github.com/google/adk-python/commit/9d4ecbe9fd1141693e4682cbfe4d542cc62b76ac)), closes [#5156](https://github.com/google/adk-python/issues/5156)
* allow users to include artifacts from artifact_service in A2A events using provided interceptor ([e63d991](https://github.com/google/adk-python/commit/e63d991be84e373fd31be29d4b6b0e32fdbde557))
* emit a `TaskStatusUpdateEvent` for ADK events with no output parts but with event.actions ([dcc485b](https://github.com/google/adk-python/commit/dcc485b23e3509e2e386636d841033b91c9a401c))
* Live avatar support in ADK ([a64a8e4](https://github.com/google/adk-python/commit/a64a8e46480753439b91b9cfd41fd190b4dad493))
* **live:** expose live_session_resumption_update as Event in BaseLlmFlow ([2626ad7](https://github.com/google/adk-python/commit/2626ad7c69fb64a88372225d5583085fc08b1fcd)), closes [#4357](https://github.com/google/adk-python/issues/4357)
* Promote BigQuery tools to Stable ([abcf14c](https://github.com/google/adk-python/commit/abcf14c166baf4f8cc6e919b1eb4c063bf3a92af))
* **samples:** add sample for skill activation via environment tools ([2cbb523](https://github.com/google/adk-python/commit/2cbb52306910fac994fe1d29bdfcfacb258703b4))


### Bug Fixes

* Add "gcloud config unset project" command to express mode flow ([e7d8160](https://github.com/google/adk-python/commit/e7d81604126cbdb4d9ee4624e1d1410b06585750))
* avoid loading all agents in adk web server ([cb4dd42](https://github.com/google/adk-python/commit/cb4dd42eff2df6d20c5e53211718ecb023f127fc))
* Change express mode user flow so it's more clear that an express mode project is being created ([0fedb3b](https://github.com/google/adk-python/commit/0fedb3b5eb2074999d8ccdb839e054ea80da486f))
* Custom pickling in McpToolset to exclude unpicklable objects like errlog ([d62558c](https://github.com/google/adk-python/commit/d62558cc2d7d6c0372e43c9f009c8c7a6863ff0a))
* Fix credential leakage vulnerability in Agent Registry ([e3567a6](https://github.com/google/adk-python/commit/e3567a65196bb453cdac4a5ae42f7f079476d748))
* Include a link to the deployed agent ([547766a](https://github.com/google/adk-python/commit/547766a47779915a8a47745237a46882a02dae9a))
* preserve interaction ids for interactions SSE tool calls ([9a19304](https://github.com/google/adk-python/commit/9a1930407a4eff67093ea9f14292f1931631a661)), closes [#5169](https://github.com/google/adk-python/issues/5169)
* validate user_id and session_id against path traversal ([cbcb5e6](https://github.com/google/adk-python/commit/cbcb5e6002b5bae89de5309caf7b9bb02d563cfc)), closes [#5110](https://github.com/google/adk-python/issues/5110)

## [1.29.0](https://github.com/google/adk-python/compare/v1.28.0...v1.29.0) (2026-04-09)


2 changes: 1 addition & 1 deletion pyproject.toml
@@ -44,7 +44,7 @@ dependencies = [
"google-cloud-spanner>=3.56.0, <4.0.0", # For Spanner database
"google-cloud-speech>=2.30.0, <3.0.0", # For Audio Transcription
"google-cloud-storage>=2.18.0, <4.0.0", # For GCS Artifact service
"google-genai>=1.64.0, <2.0.0", # Google GenAI SDK
"google-genai>=1.72.0, <2.0.0", # Google GenAI SDK
"graphviz>=0.20.2, <1.0.0", # Graphviz for graph rendering
"httpx>=0.27.0, <1.0.0", # HTTP client library
"jsonschema>=4.23.0, <5.0.0", # Agent Builder config validation
3 changes: 3 additions & 0 deletions src/google/adk/agents/run_config.py
@@ -198,6 +198,9 @@ class RunConfig(BaseModel):
response_modalities: Optional[list[str]] = None
"""The output modalities. If not set, it defaults to AUDIO."""

avatar_config: Optional[types.AvatarConfig] = None
"""Avatar configuration for the live agent."""

save_input_blobs_as_artifacts: bool = Field(
default=False,
deprecated=True,
13 changes: 13 additions & 0 deletions src/google/adk/evaluation/eval_metrics.py
@@ -144,6 +144,19 @@ class RubricsBasedCriterion(BaseCriterion):
),
)

evaluate_full_response: bool = Field(
default=False,
description=(
"Whether to evaluate the full agent response including intermediate"
" natural language text (e.g. text emitted before tool calls) in"
" addition to the final response. By default, only the final"
" response text is sent to the judge. When True, text from all"
" intermediate invocation events is concatenated with the final"
" response before evaluation. This is useful for agents that emit"
" text both before and after tool calls within a single invocation."
),
)


class HallucinationsCriterion(BaseCriterion):
"""Criterion to use when evaluating agents response for hallucinations."""
@@ -274,7 +274,18 @@ def format_auto_rater_prompt(
"""Returns the autorater prompt."""
self.create_effective_rubrics_list(actual_invocation.rubrics)
user_input = get_text_from_content(actual_invocation.user_content)
final_response = get_text_from_content(actual_invocation.final_response)

# When evaluate_full_response is enabled, include text from intermediate
# invocation events (e.g. text emitted before tool calls) in addition to
# the final response. This is useful for agents that stream text, call
# tools, then stream more text within a single invocation.
criterion = self._eval_metric.criterion
evaluate_full = getattr(criterion, "evaluate_full_response", False)

if evaluate_full:
final_response = self._get_full_response_text(actual_invocation)
else:
final_response = get_text_from_content(actual_invocation.final_response)

rubrics_text = "\n".join([
f"* {r.rubric_content.text_property}"
@@ -310,3 +321,25 @@ def format_auto_rater_prompt(
)

return auto_rater_prompt

@staticmethod
def _get_full_response_text(invocation: Invocation) -> str:
"""Concatenates all NL text from invocation events and the final response.

When an agent emits text before a tool call (e.g. presenting a plan),
that text is stored in intermediate_data.invocation_events but not in
final_response. This method collects text from both sources to give the
judge a complete picture of the agent's output.
"""
parts = []
if invocation.intermediate_data and isinstance(
invocation.intermediate_data, InvocationEvents
):
for evt in invocation.intermediate_data.invocation_events:
text = get_text_from_content(evt.content)
if text:
parts.append(text)
final_text = get_text_from_content(invocation.final_response)
if final_text:
parts.append(final_text)
return "\n\n".join(parts)
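The concatenation behavior of `_get_full_response_text` can be illustrated with a standalone sketch. `FakeEvent` and `FakeInvocation` below are simplified stand-ins for the real ADK invocation types, not the actual API:

```python
from dataclasses import dataclass, field

# Simplified stand-ins for the ADK Invocation/event types, for illustration only.
@dataclass
class FakeEvent:
    text: str

@dataclass
class FakeInvocation:
    events: list = field(default_factory=list)
    final_response: str = ""

def full_response_text(inv: FakeInvocation) -> str:
    """Mirrors _get_full_response_text: intermediate text, then final response."""
    parts = [e.text for e in inv.events if e.text]
    if inv.final_response:
        parts.append(inv.final_response)
    return "\n\n".join(parts)

inv = FakeInvocation(
    events=[FakeEvent("Here is my plan."), FakeEvent("")],  # empty text is skipped
    final_response="Done: 3 files updated.",
)
print(full_response_text(inv))
```

With `evaluate_full_response` left at its default of `False`, only the final response text would reach the judge; the sketch shows what the judge sees when the flag is enabled.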
3 changes: 3 additions & 0 deletions src/google/adk/flows/llm_flows/basic.py
@@ -90,6 +90,9 @@ def _build_basic_request(
llm_request.live_connect_config.context_window_compression = (
invocation_context.run_config.context_window_compression
)
llm_request.live_connect_config.avatar_config = (
invocation_context.run_config.avatar_config
)


class _BasicLlmRequestProcessor(BaseLlmRequestProcessor):
17 changes: 2 additions & 15 deletions src/google/adk/models/gemini_llm_connection.py
@@ -115,16 +115,7 @@ async def send_content(self, content: types.Content):
is_gemini_31 = model_name_utils.is_gemini_3_1_flash_live(
self._model_version
)
is_gemini_api = self._api_backend == GoogleLLMVariant.GEMINI_API

# As of now, Gemini 3.1 Flash Live is only available in Gemini API, not
# Vertex AI.
if (
is_gemini_31
and is_gemini_api
and len(content.parts) == 1
and content.parts[0].text
):
if is_gemini_31 and len(content.parts) == 1 and content.parts[0].text:
logger.debug('Using send_realtime_input for Gemini 3.1 text input')
await self._gemini_session.send_realtime_input(
text=content.parts[0].text
@@ -149,11 +140,7 @@ async def send_realtime(self, input: RealtimeInput):
is_gemini_31 = model_name_utils.is_gemini_3_1_flash_live(
self._model_version
)
is_gemini_api = self._api_backend == GoogleLLMVariant.GEMINI_API

# As of now, Gemini 3.1 Flash Live is only available in Gemini API, not
# Vertex AI.
if is_gemini_31 and is_gemini_api:
if is_gemini_31:
if input.mime_type and input.mime_type.startswith('audio/'):
await self._gemini_session.send_realtime_input(audio=input)
elif input.mime_type and input.mime_type.startswith('image/'):
54 changes: 28 additions & 26 deletions src/google/adk/sessions/vertex_ai_session_service.py
@@ -270,6 +270,7 @@ async def append_event(self, session: Session, event: Event) -> Event:

reasoning_engine_id = self._get_reasoning_engine_id(session.app_name)

# Build the event config from the individual event fields.
config = {}
if event.content:
config['content'] = event.content.model_dump(
@@ -286,9 +287,6 @@ async def append_event(self, session: Session, event: Event) -> Event:
k: json.loads(v.model_dump_json(exclude_none=True, by_alias=True))
for k, v in event.actions.requested_auth_configs.items()
},
# TODO: add requested_tool_confirmations, agent_state once
# they are available in the API.
# Note: compaction is stored via event_metadata.custom_metadata.
}
if event.error_code:
config['error_code'] = event.error_code
@@ -311,10 +309,8 @@ async def append_event(self, session: Session, event: Event) -> Event:
metadata_dict['grounding_metadata'] = event.grounding_metadata.model_dump(
exclude_none=True, mode='json'
)
# Store compaction data in custom_metadata since the Vertex AI service
# does not yet support the compaction field.
# TODO: Stop writing to custom_metadata once the Vertex AI service
# supports the compaction field natively in EventActions.

# Always write compaction data to custom_metadata, even when raw_event is set.
if event.actions and event.actions.compaction:
compaction_dict = event.actions.compaction.model_dump(
exclude_none=True, mode='json'
@@ -324,8 +320,6 @@ async def append_event(self, session: Session, event: Event) -> Event:
key=_COMPACTION_CUSTOM_METADATA_KEY,
value=compaction_dict,
)
# Store usage_metadata in custom_metadata since the Vertex AI service
# does not persist it in EventMetadata.
if event.usage_metadata:
usage_dict = event.usage_metadata.model_dump(
exclude_none=True, mode='json'
@@ -335,7 +329,12 @@ async def append_event(self, session: Session, event: Event) -> Event:
key=_USAGE_METADATA_CUSTOM_METADATA_KEY,
value=usage_dict,
)

config['event_metadata'] = metadata_dict

# Persist the full event state using raw_event. If the client-side SDK
# does not support this field, it will raise a ValidationError, and we
# will fall back to legacy field-based storage.
config['raw_event'] = event.model_dump(
exclude_none=True,
mode='json',
@@ -345,7 +344,8 @@ async def append_event(self, session: Session, event: Event) -> Event:
# Retry without raw_event if client side validation fails for older SDK
# versions.
async with self._get_api_client() as api_client:
try:

async def _do_append(cfg: dict[str, Any]):
await api_client.agent_engines.sessions.events.append(
name=(
f'reasoningEngines/{reasoning_engine_id}/sessions/{session.id}'
@@ -355,22 +355,16 @@ async def append_event(self, session: Session, event: Event) -> Event:
timestamp=datetime.datetime.fromtimestamp(
event.timestamp, tz=datetime.timezone.utc
),
config=config,
config=cfg,
)

try:
await _do_append(config)
except pydantic.ValidationError:
logger.warning('Vertex SDK does not support raw_event, falling back.')
if 'raw_event' in config:
del config['raw_event']
await api_client.agent_engines.sessions.events.append(
name=(
f'reasoningEngines/{reasoning_engine_id}/sessions/{session.id}'
),
author=event.author,
invocation_id=event.invocation_id,
timestamp=datetime.datetime.fromtimestamp(
event.timestamp, tz=datetime.timezone.utc
),
config=config,
)
await _do_append(config)
return event
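The append-with-fallback flow above follows a general pattern: attempt the call with the newer field, catch the client-side validation error, strip the field, and retry. A self-contained sketch, with `FakeValidationError` and `append` as stand-ins for `pydantic.ValidationError` and the real SDK call:

```python
# Stand-in for pydantic.ValidationError raised by an older client SDK.
class FakeValidationError(Exception):
    pass

# Pretend the old SDK only recognizes these config keys.
SUPPORTED_KEYS = {"content", "event_metadata"}

def append(config: dict) -> str:
    """Stand-in for api_client.agent_engines.sessions.events.append."""
    unknown = set(config) - SUPPORTED_KEYS
    if unknown:
        raise FakeValidationError(f"unknown fields: {unknown}")
    return "appended"

def append_with_fallback(config: dict) -> str:
    try:
        return append(config)
    except FakeValidationError:
        # Retry without raw_event for older SDK versions.
        config = {k: v for k, v in config.items() if k != "raw_event"}
        return append(config)

result = append_with_fallback({"content": {}, "raw_event": {"author": "agent"}})
```

Extracting `_do_append` in the real code avoids duplicating the long argument list across the initial attempt and the retry.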

def _get_reasoning_engine_id(self, app_name: str):
@@ -429,8 +423,8 @@ def _get_raw_event(api_event_obj: Any) -> dict[str, Any] | None:

def _from_api_event(api_event_obj: vertexai.types.SessionEvent) -> Event:
"""Converts an API event object to an Event object."""
# Read event data from raw_event first before falling back to top level
# fields.
# Prioritize reading from raw_event to restore full state. Fall back to
# top-level fields for older data that lacks raw_event.
raw_event_dict = _get_raw_event(api_event_obj)
if raw_event_dict:
event_dict = copy.deepcopy(raw_event_dict)
@@ -439,8 +433,9 @@ def _from_api_event(api_event_obj: vertexai.types.SessionEvent) -> Event:
'id': api_event_obj.name.split('/')[-1],
'invocation_id': getattr(api_event_obj, 'invocation_id', None),
'author': getattr(api_event_obj, 'author', None),
'timestamp': timestamp_obj.timestamp() if timestamp_obj else None,
})
if timestamp_obj:
event_dict['timestamp'] = timestamp_obj.timestamp()
return Event.model_validate(event_dict)

actions = getattr(api_event_obj, 'actions', None)
@@ -514,6 +509,13 @@ def _from_api_event(api_event_obj: vertexai.types.SessionEvent) -> Event:
usage_metadata_data
)

timestamp_obj = getattr(api_event_obj, 'timestamp', None)
timestamp = (
timestamp_obj.timestamp()
if timestamp_obj
else datetime.datetime.now(datetime.timezone.utc).timestamp()
)

return Event(
id=api_event_obj.name.split('/')[-1],
invocation_id=api_event_obj.invocation_id,
@@ -522,7 +524,7 @@ def _from_api_event(api_event_obj: vertexai.types.SessionEvent) -> Event:
content=_session_util.decode_model(
getattr(api_event_obj, 'content', None), types.Content
),
timestamp=api_event_obj.timestamp.timestamp(),
timestamp=timestamp,
error_code=getattr(api_event_obj, 'error_code', None),
error_message=getattr(api_event_obj, 'error_message', None),
partial=partial,
10 changes: 10 additions & 0 deletions src/google/adk/tools/_automatic_function_calling_util.py
@@ -368,6 +368,11 @@ def from_function_with_options(
parameters_json_schema[name] = types.Schema.model_validate(
json_schema_dict
)
if param.default is not inspect.Parameter.empty:
if param.default is not None:
parameters_json_schema[name].default = param.default
else:
parameters_json_schema[name].nullable = True
except Exception as e:
_function_parameter_parse_util._raise_for_unsupported_param(
param, func.__name__, e
@@ -392,6 +397,11 @@ def from_function_with_options(
type='OBJECT',
properties=parameters_json_schema,
)
declaration.parameters.required = (
_function_parameter_parse_util._get_required_fields(
declaration.parameters
)
)

if variant == GoogleLLMVariant.GEMINI_API:
return declaration
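The default-handling hunk above can be exercised in isolation. This sketch applies the same `inspect` logic, but uses a plain dict in place of `types.Schema` and elides type inference:

```python
import inspect
from typing import Optional

def build_param_schemas(func):
    """Mirrors the added logic: record a concrete default on the parameter
    schema, or mark the parameter nullable when its default is None."""
    schemas = {}
    for name, param in inspect.signature(func).parameters.items():
        schema = {"type": "STRING"}  # real type inference elided for brevity
        if param.default is not inspect.Parameter.empty:
            if param.default is not None:
                schema["default"] = param.default
            else:
                schema["nullable"] = True
        schemas[name] = schema
    return schemas

def greet(name: str, greeting: str = "hello", suffix: Optional[str] = None):
    return f"{greeting}, {name}{suffix or ''}"

schemas = build_param_schemas(greet)
```

A parameter with no default (`name`) gets neither key, which is why the added `_get_required_fields` call can then derive the `required` list from the schema alone.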
6 changes: 1 addition & 5 deletions src/google/adk/utils/model_name_utils.py
@@ -130,9 +130,6 @@ def is_gemini_2_or_above(model_string: Optional[str]) -> bool:
def is_gemini_3_1_flash_live(model_string: Optional[str]) -> bool:
"""Check if the model is a Gemini 3.1 Flash Live model.

Note: This is a very specific model name for live bidi streaming, so we check
for exact match.

Args:
model_string: The model name

@@ -141,5 +138,4 @@ def is_gemini_3_1_flash_live(model_string: Optional[str]) -> bool:
"""
if not model_string:
return False

return model_string == 'gemini-3.1-flash-live-preview'
return model_string.startswith('gemini-3.1-flash-live')
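Since the check is now a prefix match rather than an exact match, any model name beginning with `gemini-3.1-flash-live` is recognized, including the current `-preview` suffix. A standalone copy of the logic:

```python
from typing import Optional

def is_gemini_3_1_flash_live(model_string: Optional[str]) -> bool:
    """Prefix match so dated or suffixed 3.1 Flash Live variants also qualify."""
    if not model_string:
        return False
    return model_string.startswith("gemini-3.1-flash-live")

print(is_gemini_3_1_flash_live("gemini-3.1-flash-live-preview"))  # True
```

Note the exact-match docstring note was removed along with the equality check, keeping the docstring consistent with the new behavior.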
2 changes: 1 addition & 1 deletion src/google/adk/version.py
@@ -13,4 +13,4 @@
# limitations under the License.

# version: major.minor.patch
__version__ = "1.29.0"
__version__ = "1.30.0"
33 changes: 33 additions & 0 deletions tests/unittests/agents/test_run_config.py
@@ -17,6 +17,7 @@
from unittest.mock import patch

from google.adk.agents.run_config import RunConfig
from google.genai import types
import pytest


@@ -64,3 +65,35 @@ def test_audio_transcription_configs_are_not_shared_between_instances():
assert (
config1.input_audio_transcription is not config2.input_audio_transcription
)


def test_avatar_config_initialization():
custom_avatar = types.CustomizedAvatar(
image_mime_type="image/jpeg", image_data=b"image_bytes"
)
avatar_config = types.AvatarConfig(
audio_bitrate_bps=128000,
video_bitrate_bps=1000000,
customized_avatar=custom_avatar,
)
run_config = RunConfig(avatar_config=avatar_config)

assert run_config.avatar_config == avatar_config
assert run_config.avatar_config.customized_avatar == custom_avatar
assert (
run_config.avatar_config.customized_avatar.image_mime_type == "image/jpeg"
)
assert run_config.avatar_config.customized_avatar.image_data == b"image_bytes"


def test_avatar_config_with_name():
avatar_config = types.AvatarConfig(
audio_bitrate_bps=128000,
video_bitrate_bps=1000000,
avatar_name="test_avatar",
)
run_config = RunConfig(avatar_config=avatar_config)

assert run_config.avatar_config == avatar_config
assert run_config.avatar_config.avatar_name == "test_avatar"
assert run_config.avatar_config.customized_avatar is None