Conversation
* Improve security checks in service mode
* Update tree -s -v response with better structure
* Add share-report and ls command in slack and team setup command list
* Fix flag in convert and convert-all
* Add tree command in team and slack setup command list
* Added spinner and cr/lf normalization option
…delete (#1965)

* Added --purge flag to rm command to differentiate remove vs. delete

  The rm command now supports two distinct operations:
  - Default (no flag): removes the record from the current user's vault only, leaving it intact for other users (pre_delete/unlink flow).
  - --purge: permanently hard-deletes the record for all users via record_update/delete_records. Requires the caller to be the record owner; non-owned records are skipped with a warning.

  Also fixed rm failing to resolve records in shared folders when searched by title, by adding a global fallback search across all vault folders. Improved the not-found error message to be more actionable.

* Improved rm --purge reliability and user feedback
  - Added sync_down after a successful purge so the local cache reflects deletions immediately rather than waiting for a lazy sync
  - vault_changed and BreachWatch cleanup are now only triggered when at least one record was actually deleted successfully
  - Added a success log showing how many records were permanently deleted
  - Global title fallback now errors with a list of matching UIDs when more than one record shares the same title, preventing unintended bulk deletes on ambiguous names
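Below is a minimal, hypothetical sketch of the dispatch described above (remove vs. purge, ownership check, ambiguous-title guard). The helper and field names are illustrative stand-ins, not Commander's actual rm implementation.

```python
# Hypothetical sketch of the rm / rm --purge flow; names are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class Record:
    uid: str
    title: str
    owned: bool  # True when the current user owns the record

def find_records_by_title(vault: List[Record], title: str) -> List[Record]:
    """Global fallback search across all vault folders, by exact title."""
    return [r for r in vault if r.title == title]

def rm(vault: List[Record], name: str, purge: bool = False) -> None:
    matches = [r for r in vault if r.uid == name] or find_records_by_title(vault, name)
    if not matches:
        raise ValueError(f'Record "{name}" not found. Provide a record UID or exact title.')
    if len(matches) > 1:
        uids = ', '.join(r.uid for r in matches)
        raise ValueError(f'"{name}" matches multiple records: {uids}. Re-run with a UID.')
    record = matches[0]
    if not purge:
        # default: pre_delete / unlink from the current user's vault only
        print(f'Removing {record.uid} from the current user\'s vault')
        return
    if not record.owned:
        print(f'Warning: skipping {record.uid} (--purge requires record ownership)')
        return
    # --purge: record_update / delete_records, then a sync_down would follow
    print(f'Permanently deleting {record.uid} for all users')
```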
Remove pam_privileged_workflow.py and its workflow_pb2.py stub to avoid conflicts with the internal workflow implementation. Keep pam_privileged_access (IdP/cloud access) intact. Made-with: Cursor
* Implementation of all commands for PAM Workflow
* Update protobuf for approve and deny endpoints
* Add restriction for user to launch/tunnel PAM record
* Fix missing record_name in json format & team name validation
* Fix record name display in output, update start and end commands to use record title/uid and flow uid
* Separate pending and approved requests and update workflow with escalation delay
* Add table view for my-access command
* Add MFA based tunnel or launch restrictions if respective workflow has MFA flag
* Fix start workflow to use flow_uid
* Update workflow protobuf to fix state using -r
* Refactor workflow code into files and folders
* Add force checkin, escalate, enforcement policy based admin and user difference
* Disable workflow requests/mfa for admins
* Update protobuf imports
* Code review updates
* Add cancel flag in request command
* Refactored
* Updates for review comments
* Move imports to top

---------

Co-authored-by: Sajid Ali <sali@keepersecurity.com>
…#1940) (#1950)

* Made --policy-name a required argument in PedmPolicyAddCommand to prevent policies from being silently created with an empty name.
* Replicated admin console behavior: adding a policy of type elevation, file_access, or command now requires at least one user, machine, and application collection via --user-filter, --machine-filter, and --app-filter. LeastPrivilege policies remain unrestricted.
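As a rough illustration of the validation described above, here is a hedged argparse sketch. The flag names come from the PR text; the policy-type spellings and the command wiring are assumptions, not the actual PedmPolicyAddCommand code.

```python
# Illustrative argparse sketch; policy-type spellings are assumed, not verified.
import argparse

parser = argparse.ArgumentParser(prog='pedm policy add')
parser.add_argument('--policy-name', required=True, help='policy name (may not be empty)')
parser.add_argument('--policy-type', required=True,
                    choices=['elevation', 'file_access', 'command', 'least_privilege'])
parser.add_argument('--user-filter', action='append', default=[])
parser.add_argument('--machine-filter', action='append', default=[])
parser.add_argument('--app-filter', action='append', default=[])

def validate(args: argparse.Namespace) -> None:
    # Mirror the admin console: these policy types need at least one user,
    # machine, and application collection; LeastPrivilege does not.
    if args.policy_type in ('elevation', 'file_access', 'command'):
        for flag, values in (('--user-filter', args.user_filter),
                             ('--machine-filter', args.machine_filter),
                             ('--app-filter', args.app_filter)):
            if not values:
                parser.error(f'{flag} is required for policy type "{args.policy_type}"')

args = parser.parse_args(['--policy-name', 'Block installers', '--policy-type', 'elevation',
                          '--user-filter', 'U1', '--machine-filter', 'M1', '--app-filter', 'A1'])
validate(args)
```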
…on cache (#1971)

Pass unresolved collection UIDs through to the server instead of failing client-side. Also fixes a KeyError on missing keys and adds type 201 (CustomMachineCollection) to the machine filter type list.
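A small, hypothetical sketch of that lookup behavior; the cache shape and the 200 type code are assumptions for illustration (only 201 / CustomMachineCollection comes from the PR text).

```python
# Hypothetical sketch: forward unknown collection UIDs instead of failing client-side.
MACHINE_COLLECTION_TYPES = [200, 201]  # 201 = CustomMachineCollection; 200 is assumed

def resolve_collection_uid(uid, collection_cache):
    entry = collection_cache.get(uid)        # .get() instead of [] avoids the KeyError
    if entry is None:
        return uid                            # unresolved: pass through, let the server decide
    return entry.get('collection_uid', uid)   # resolved locally when the cache has it
```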
* Fixed Keeper server hostname parsing
* Fixed duplicate test module names
* Updated security audit tests to use typed records only (since legacy records were removed)
* Fixed all tests
Previously ``_get_launch_credential_uid`` was called three times per launch — once in ``launch.execute`` and twice inside ``extract_terminal_settings`` — and each call built a fresh ``TunnelDAG`` (2-3 HTTP round-trips each). ``find_gateway`` also re-resolved the config UID via ``get_config_uid_from_record``.

Build the TunnelDAG once in ``execute()`` and thread it through:
- ``_get_launch_credential_uid(params, record_uid, tdag=...)`` reuses the caller's DAG when provided.
- ``find_gateway(params, record_uid, tdag=...)`` reads ``config_uid`` from ``tdag.record.record_uid`` and uses the new ``_gateway_uid_from_config`` helper to skip the redundant ``get_leafs`` roundtrip.
- ``extract_terminal_settings(..., dag_linked_uid=...)`` takes a pre-resolved value via a ``_DAG_UID_UNSET`` sentinel (``None`` is a valid resolved result) and drops both inline DAG lookups (see the sketch after this description).

Add a ``PamConnectTiming`` framework (new ``connect_timing.py``) gated by ``PAM_CONNECT_TIMING=1`` and instrument the full launch:
- ``pam-launch:execute`` — pre-phase checkpoints through gateway resolution (previously invisible).
- ``pam-launch:terminal_connection`` / ``pam-launch:webrtc-tunnel`` — existing phase boundaries around tunnel open.
- ``pam-launch:cli_session`` — checkpoints through guac ready.
- ``pam-launch:total`` — grand-total wall clock from command entry to ``input_handler.start()``.

Verified against QA: grand total ``ready_for_prompt`` drops from ~17.0s to ~12.4s (~4.6s saved). A single ``Found launch credential via DAG`` log line per launch (was three).

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
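The DAG reuse and the sentinel can be sketched roughly as below. ``TunnelDAG`` here is a stub standing in for the real class; the function names mirror the description but the signatures are simplified and illustrative.

```python
# Simplified sketch of single-DAG threading plus the _DAG_UID_UNSET sentinel.
_DAG_UID_UNSET = object()  # distinct sentinel because None is a valid resolved result

class TunnelDAG:
    """Stand-in: the real constructor costs 2-3 HTTP round-trips."""
    def __init__(self, params, record_uid):
        self.record_uid = record_uid

    def resolve_linked_uid(self):
        return None  # placeholder for the actual DAG lookup

def get_launch_credential_uid(params, record_uid, tdag=None):
    if tdag is None:                        # reuse the caller's DAG when provided
        tdag = TunnelDAG(params, record_uid)
    return tdag.resolve_linked_uid()

def extract_terminal_settings(params, record_uid, tdag=None, dag_linked_uid=_DAG_UID_UNSET):
    if dag_linked_uid is _DAG_UID_UNSET:    # only hit the DAG when not pre-resolved
        dag_linked_uid = get_launch_credential_uid(params, record_uid, tdag=tdag)
    return {'credential_uid': dag_linked_uid}

def execute(params, record_uid):
    tdag = TunnelDAG(params, record_uid)    # built exactly once per launch
    cred_uid = get_launch_credential_uid(params, record_uid, tdag=tdag)
    return extract_terminal_settings(params, record_uid, tdag=tdag, dag_linked_uid=cred_uid)
```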
… listener (#1982)

Builds on the PR1 TunnelDAG caching. Measured against the post-PR1 baseline (~12.4s grand total), this brings ``pam launch`` down to ~10.3s (another ~2.1s saved; cumulative ~6.7s vs the pre-PR1 baseline).

Tune the hardcoded sleeps in the WebRTC tunnel open path, then use those savings in an adaptive-fallback retry so the fast path stays fast but the unlucky first-try-fail path still gets the legacy safety window on the retry.

Sleep / polling changes (all env-tunable via the new helpers in ``connect_timing.py``):
- ``WEBSOCKET_BACKEND_DELAY`` default 2.0s → 0.30s (router/gateway conversation-registration window). Saves ~1.7s on the happy path.
- The hardcoded ``time.sleep(1)`` before the offer POST is replaced with ``pre_offer_delay_sec()``, default 0.0s (the preceding backend delay already covers router registration). Saves ~1.0s. Set ``PAM_PRE_OFFER_LEGACY=1`` to restore the 1.0s wait.
- ``PAM_OPEN_CONNECTION_DELAY`` default 0.2s → 0.05s. The existing ``open_handler_connection`` retry loop (exponential backoff) already handles slow DataChannel readiness, so the fixed sleep was mostly redundant. Saves ~150ms.
- WebRTC connection-state poll tick 100ms → 25ms via the new ``PAM_WEBRTC_POLL_MS`` env var. Cheap FFI call; tightens P99 handoff latency.

Parallelize the WebSocket listener with tube creation:
- ``start_websocket_listener`` is now called *before* ``create_tube``, right after ``signal_handler`` is wired to ``tunnel_session``. The Rust tube creation (~500ms) runs in parallel with the WebSocket TLS handshake and router registration instead of serially after.
- The listener only reads ``conversation_id`` from ``tunnel_session`` for routing; ``tube_id`` is used only for the thread name and log context, so the temp-UUID-to-real-tube-id swap after create_tube is race-free (the gateway doesn't emit messages until it receives our offer, which happens after the swap).

Gateway-offer retry with adaptive backend-delay catch-up (a hedged sketch of this retry follows at the end of this description):
- A unified retry loop (``PAM_GATEWAY_OFFER_MAX_ATTEMPTS``, default 2) wraps ``router_send_action_to_gateway`` for both streaming and non-streaming paths. A local helper ``_send_gateway_offer_with_retry`` replaces two near-identical inline call sites.
- On a first-attempt failure that looks transient (``timeout``, ``rrc_timeout``, ``bad_state``, 502/503/504, ``controller_down``), before the retry we sleep ``offer_retry_extra_delay_sec() + (legacy_backend_delay - fast_backend_delay)`` so the cumulative wait matches the pre-change legacy 2.0s behavior for the cold-router case. The fast path stays fast; an unlucky launch still gets the full safety window on retry.
- New checkpoints ``gateway_offer_backend_catchup_delay_{start,done}`` and ``gateway_offer_http_attempt_{N}`` make the retry path visible in ``PAM_CONNECT_TIMING=1`` output.

The WebSocket listener checkpoint is renamed ``websocket_listener_started`` → ``websocket_listener_started_early`` to reflect its new position in the flow.

Verified in QA: happy path ~10.3s (was 12.4s); the gateway-offline retry case exercises the full adaptive 2.95s catch-up (1.25s retry + 1.7s backend-delay delta) exactly as designed before the re-attempt.

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
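The adaptive catch-up retry might look roughly like this sketch. It assumes transient failures are detected by substring matching on the error text (the real code may use typed router/gateway errors); apart from the env vars quoted above, helper names and the 1.25s constant placement are illustrative.

```python
# Hedged sketch of the gateway-offer retry with backend-delay catch-up.
import os
import time

LEGACY_BACKEND_DELAY = 2.0
FAST_BACKEND_DELAY = float(os.environ.get('WEBSOCKET_BACKEND_DELAY', '0.30'))
MAX_ATTEMPTS = int(os.environ.get('PAM_GATEWAY_OFFER_MAX_ATTEMPTS', '2'))
RETRY_EXTRA_DELAY = 1.25  # stand-in for offer_retry_extra_delay_sec()
TRANSIENT_MARKERS = ('timeout', 'rrc_timeout', 'bad_state', '502', '503', '504', 'controller_down')

def send_gateway_offer_with_retry(send_offer):
    """Call send_offer(); on a transient first-attempt failure, sleep long enough
    that the cumulative wait matches the legacy 2.0s window, then retry."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            return send_offer()
        except Exception as error:  # the real code matches specific error types
            transient = any(marker in str(error).lower() for marker in TRANSIENT_MARKERS)
            if attempt == MAX_ATTEMPTS or not transient:
                raise
            # catch up to the legacy safety window for the cold-router case:
            # 1.25s + (2.0s - 0.30s) = 2.95s before the re-attempt
            time.sleep(RETRY_EXTRA_DELAY + (LEGACY_BACKEND_DELAY - FAST_BACKEND_DELAY))
```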
…1984)

Three related tunnel-open improvements measured against the post-PR2 release (~10.3s grand total ``ready_for_prompt``): this brings ``pam launch`` down to ~9.3s in trickle mode (~1s saved end-to-end) and takes ~700ms off ``pam launch --no-trickle-ice``.

Gateway changes are not required — batch support has existed on the gateway's Python side since 1.7.0 (commit 60f594b3, released 2025-07-24), and trickle ICE itself requires gateway >= 1.7.0, so any client that uses the default path is guaranteed to be talking to a batch-capable gateway.

Also affects ``pam tunnel start`` (secondary beneficiary — same tunnel setup path). ``pam tunnel start --no-trickle-ice`` is on an already-optimized path in tunnel_helpers.py and is untouched.

1. Batch buffered ICE candidates into one HTTP POST
---------------------------------------------------

Every trickle-mode offer flushed the local candidate buffer by calling ``_send_ice_candidate_immediately`` in a loop — 7-8 candidates * ~500ms serial round-trip each = ~3.5s of HTTP time after the offer was acked.

Add ``TunnelSignalHandler._send_ice_candidates_batch(candidates, tube_id)`` that sends all candidates in a single ``icecandidate`` action with payload ``{"candidates": [c1, c2, ..., cN]}`` — the gateway already iterates ``for candidate in ice_candidates`` in ``WebRTCSessionAction.add_ice_candidates_to_conversation_tunnel`` and the per-candidate ``add_ice_candidate`` PyO3 binding is spawn-and-return, so one batch costs the gateway about the same as one candidate. A hedged sketch of the batched flush follows at the end of this description.

Converts the 5 client-side flush sites: 3 in ``tunnel_helpers.py`` (SDP answer in WS listener, state-change-to-connected, post-offer flush in ``start_rust_tunnel``) and 2 in ``pam_launch/terminal_connection.py`` (streaming offer branch, non-streaming SDP-answer handler). ``_send_ice_candidate_immediately`` is kept for the single-candidate live path (post-offer candidates that arrive one at a time) at ``tunnel_helpers.py:1727`` — that one is already one HTTP call per event.

Net webrtc-tunnel phase drop: 6189ms -> 2965ms (-3.2s). Most of that moves to ``cli_session.webrtc_data_plane_connected`` (974ms -> 3355ms) because the ICE pair selection / data-channel open was previously hidden behind the serial HTTP loop and is now the exposed critical path. End-to-end wall-clock saving: ~1s per launch.

2. Skip WebSocket-ready wait + backend_delay in --no-trickle-ice mode
---------------------------------------------------------------------

``_open_terminal_webrtc_tunnel`` was unconditionally blocking for ``tunnel_session.websocket_ready_event.wait()`` + ``WEBSOCKET_BACKEND_DELAY`` (~700ms total) before sending the offer. In non-trickle mode the SDP answer arrives in the HTTP response body of the offer POST itself and ICE candidates are carried inside the offer SDP — there is no streamed conversation on the WebSocket to wait on.

Wrap that block in ``if trickle_ice:`` and skip it entirely in the non-trickle branch. The listener keeps running in the background for async signaling (disconnect / state changes); the main thread just does not block on its readiness. Matches the pattern already used by ``tunnel_helpers.py::start_rust_tunnel`` for non-trickle mode. Saves ~700ms on every ``pam launch --no-trickle-ice``.

3. Always emit PamConnectTiming checkpoints at DEBUG level
----------------------------------------------------------

Commander's ``debug --file=<path>`` installs a file log handler with an explicit ``record.levelno != logging.INFO`` filter (see ``cli.py::setup_file_logging``) so user-facing ``logging.info(...)`` prints stay out of the debug log. PamConnectTiming previously bumped its checkpoint / summary records to INFO when ``PAM_CONNECT_TIMING=1`` was set, which meant those records were being silently dropped by the file-debug filter — timing lines never appeared in the captured log when ``debug --file`` + ``PAM_CONNECT_TIMING=1`` were used together.

Always emit at DEBUG regardless of the env var. ``connect_timing_log_enabled()`` still gates whether to emit at all; only the chosen level changes. DEBUG passes the file-debug filter cleanly.

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
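As referenced in section 1, here is a hedged sketch of the batched ICE flush. Only the ``icecandidate`` action name and the ``{"candidates": [...]}`` payload shape come from the description above; the class layout, the send_action callable, and the sample candidate strings are illustrative.

```python
# Illustrative sketch of batching buffered ICE candidates into one signaling action.
import json
from typing import List

class TunnelSignalHandler:
    def __init__(self, send_action):
        self._send_action = send_action      # posts one signaling action to the router
        self._buffered: List[str] = []

    def buffer_candidate(self, candidate: str) -> None:
        self._buffered.append(candidate)

    def send_ice_candidates_batch(self, tube_id: str) -> None:
        if not self._buffered:
            return
        payload = {'candidates': list(self._buffered)}   # [c1, c2, ..., cN] in one POST
        self._send_action('icecandidate', tube_id, json.dumps(payload))
        self._buffered.clear()

# Usage: flush sites call send_ice_candidates_batch() once instead of looping;
# the live post-offer path still sends single candidates as they arrive.
handler = TunnelSignalHandler(send_action=lambda action, tube, body: print(action, tube, body))
handler.buffer_candidate('candidate:1 1 udp 2122260223 10.0.0.5 54400 typ host')
handler.buffer_candidate('candidate:2 1 udp 1686052607 203.0.113.7 54400 typ srflx')
handler.send_ice_candidates_batch(tube_id='tube-123')
```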
…bRTC timeout) (#1985)

1. Session-scoped DAG + gateway cache (new launch_cache.py)
2. Rust/webrtc log filter grace period (rust_log_filter.py)
3. WebRTC connect timeout aligned with gateway 15s (connect_timing.py, launch.py)
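Item 1 might look roughly like the following per-session cache; the key scheme and stored objects are assumptions for illustration, not launch_cache.py's actual contents.

```python
# Hypothetical session-scoped cache for DAG / gateway lookups (item 1).
from typing import Any, Callable, Dict, Optional

_launch_cache: Dict[str, Dict[str, Any]] = {}

def cached(record_uid: str, key: str, build: Callable[[], Any]) -> Any:
    """Build the value once per session for (record_uid, key), then reuse it."""
    entry = _launch_cache.setdefault(record_uid, {})
    if key not in entry:
        entry[key] = build()
    return entry[key]

def clear(record_uid: Optional[str] = None) -> None:
    """Drop one record's cached lookups, or everything when no UID is given."""
    if record_uid is None:
        _launch_cache.clear()
    else:
        _launch_cache.pop(record_uid, None)
```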
You can create multiple automators in a single node, but the oldest enabled one will be the one running all tasks.

Added:
- A check for enabled automators in the same node
- A warning if any are found, with printed list
- Prompt to proceed anyway
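A minimal sketch of that guard, assuming a hypothetical helper and a plain list-of-dicts automator representation rather than the actual command code.

```python
# Hypothetical sketch: warn about enabled automators on the node, then prompt.
def confirm_create_automator(node_name, automators):
    enabled = [a for a in automators if a.get('enabled')]
    if not enabled:
        return True
    print(f'Warning: node "{node_name}" already has enabled automator(s); '
          'the oldest enabled one will run all tasks:')
    for automator in enabled:
        print(f'  - {automator["name"]}')
    return input('Proceed anyway? (y/N): ').strip().lower() == 'y'

# Example: warns about the existing automator, then prompts before creating another.
if confirm_create_automator('Root Node', [{'name': 'automator-1', 'enabled': True}]):
    print('creating automator...')
```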