refactor(platform-wallet): extract shared contact request logic and use tracing #3206

thepastaclaw wants to merge 3 commits into dashpay:v3.1-dev
Conversation
@coderabbitai review

✅ Actions performed: Review triggered.

Important: Review skipped — draft detected. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration. Configuration used: defaults. Review profile: CHILL. Plan: Pro.
📝 Walkthrough — 🚥 Pre-merge checks: ✅ 3 passed.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/rs-platform-wallet/src/platform_wallet_info/identity_discovery.rs`:
- Around line 82-89: The Err(e) arm currently increments consecutive_misses
(variable consecutive_misses) which treats query failures as empty slots;
instead do not count errors as misses—either propagate the error out from the
current function (replace the Err(e) branch with an early return/return Err(e)
or use the ? operator) or log and retry without modifying consecutive_misses;
keep the tracing::warn call but remove the consecutive_misses += 1 so query
failures don't stop the scan prematurely.
- Around line 68-76: The discovered list is being appended even when the
identity already exists; change the logic so you first check for existence using
identity_manager().identity(&identity_id) (to avoid cloning the entire
identities() map), and only call identity_manager_mut().add_identity(identity)?
and then discovered.push(identity_id) when the identity was actually added;
ensure discovered.push is moved inside that conditional that wraps
add_identity().
In
`@packages/rs-platform-wallet/src/platform_wallet_info/matured_transactions.rs`:
- Around line 82-83: The code in matured_transactions.rs currently hardcodes
identity_index = 0 when calling derive_identity_auth_key_hash(wallet,
self.network(), 0, 0), which prevents discovery of identities beyond the first
slot; change this to use the wallet's known identity candidate index or invoke
the gap-limit discovery flow instead of a fixed 0. Locate
derive_identity_auth_key_hash usage in matured_transactions.rs and replace the
literal 0 for identity_index with a value obtained from the wallet state (e.g.,
the candidate identity index accessor on the Wallet object) or call the new
discovery function (the gap-limit discovery entrypoint used elsewhere) to
resolve the correct identity_index before deriving the auth key hash. Ensure the
adjusted path handles errors the same way and preserves the existing network and
key_index (still 0) semantics.
In `@packages/rs-platform-wallet/src/platform_wallet_info/mod.rs`:
- Around line 202-204: Don't coerce missing timestamps to 0: when extracting
created_at_core_block_height and created_at from doc, treat missing values as an
error instead of using unwrap_or(0). Replace the unwrap_or(0) calls on
doc.created_at_core_block_height() and doc.created_at() with checks that
return/propagate an error (e.g., map to Err or use ok_or/ok_or_else) so the
caller can log and skip the malformed document; reference the symbols
created_at_core_block_height, created_at and doc in mod.rs to locate and update
the extraction logic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ef11c6c2-cdd5-4200-ab66-60d6f1d31c60
⛔ Files ignored due to path filters (1): `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (6)

- packages/rs-platform-wallet/Cargo.toml
- packages/rs-platform-wallet/src/platform_wallet_info/identity_discovery.rs
- packages/rs-platform-wallet/src/platform_wallet_info/key_derivation.rs
- packages/rs-platform-wallet/src/platform_wallet_info/matured_transactions.rs
- packages/rs-platform-wallet/src/platform_wallet_info/mod.rs
- packages/rs-platform-wallet/src/platform_wallet_info/wallet_transaction_checker.rs
```rust
Err(e) => {
    tracing::warn!(
        identity_index,
        "Failed to query identity by public key hash: {}",
        e
    );
    consecutive_misses += 1;
}
```
Don't spend the gap budget on query failures.
A Platform/SDK error is not a confirmed empty slot. Incrementing consecutive_misses here can stop the scan early and skip identities later in the range. Either bubble the error up, or log/retry without counting it as a miss.
Fixed in 33b3da2 — removed the consecutive_misses += 1 from the error branch. Query failures now just log a warning without consuming gap budget.
```rust
// Derive the first authentication key hash (identity_index 0, key_index 0)
let key_hash_array = derive_identity_auth_key_hash(wallet, self.network(), 0, 0)?;
```
This still only checks the first identity slot.
Every asset-lock follow-up derives identity_index = 0, so a wallet that registers a second identity will keep re-querying the first auth key and never discover the newer one. Since wallet_transaction_checker.rs triggers this path automatically, those identities stay invisible unless a separate discovery scan runs. Please hand this path off to the new gap-limit discovery flow, or derive the candidate index from wallet state instead of hardcoding slot 0.
This is intentional for now. The asset lock path (fetch_contact_requests_for_identities_after_asset_locks) is a quick check triggered when a new asset lock transaction is detected — it checks whether the primary identity (index 0) has been registered. The full gap-limit discovery scan in discover_identities() handles discovering additional identities across all indices.
Wiring the asset lock path into the gap-limit scan would add complexity that isn't needed yet (multi-identity wallets aren't common in practice). If/when that becomes a requirement, we can refactor this path to call discover_identities instead. For now, the separation keeps the two flows simple and distinct.
@thepastaclaw, that makes sense — keeping the asset lock path as a lightweight primary-identity check and leaving full gap-limit discovery to discover_identities() is a clean separation of concerns. I'll note this design intent for future reviews.
(ミ ᵕ ꒳ ᵕ ミ)✨
✏️ Learnings added
Learnt from: thepastaclaw
Repo: dashpay/platform PR: 3206
File: packages/rs-platform-wallet/src/platform_wallet_info/matured_transactions.rs:82-83
Timestamp: 2026-03-06T13:10:32.252Z
Learning: In `packages/rs-platform-wallet/src/platform_wallet_info/matured_transactions.rs`, the hardcoded `identity_index = 0` in `fetch_contact_requests_for_identities_after_asset_locks` (via `derive_identity_auth_key_hash(wallet, self.network(), 0, 0)`) is intentional. This path is a quick check triggered when a new asset lock transaction is detected, targeting only the primary identity (index 0). Full gap-limit discovery across all identity indices is handled separately by `discover_identities()`. The separation is deliberate to keep both flows simple; multi-identity wallet support on the asset lock path can be added later by calling `discover_identities` if needed.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
Force-pushed cdfb886 to 7e87f5b (Compare)
thepastaclaw left a comment

Test review body
```rust
while consecutive_misses < gap_limit {
    // Derive the authentication key hash for this identity index (key_index 0)
    let key_hash_array = derive_identity_auth_key_hash(wallet, network, identity_index, 0)?;

    // Query Platform for an identity registered with this key hash
    match dpp::identity::Identity::fetch(&sdk, PublicKeyHash(key_hash_array)).await {
        Ok(Some(identity)) => {
            let identity_id = identity.id();

            // Add to manager if not already present
            if self.identity_manager().identity(&identity_id).is_none() {
                self.identity_manager_mut().add_identity(identity)?;
                discovered.push(identity_id);
            }
            consecutive_misses = 0;
        }
        Ok(None) => {
            consecutive_misses += 1;
        }
        Err(e) => {
            tracing::warn!(
                identity_index,
                "Failed to query identity by public key hash: {}",
                e
            );
            // Don't increment consecutive_misses: a query failure is not
            // a confirmed empty slot and should not consume the gap budget.
        }
    }

    identity_index += 1;
}
```
🔴 Blocking: Unbounded loop when Platform queries consistently fail
The while consecutive_misses < gap_limit loop only terminates when gap_limit consecutive Ok(None) results are seen. In the Err(e) branch, the code intentionally does not increment consecutive_misses (the comment explains why), but identity_index advances unconditionally. If Platform is unreachable, every iteration hits Err, consecutive_misses stays at 0, and the loop runs until identity_index reaches 2^31 (~2 billion iterations) where ChildNumber::from_hardened_idx finally fails. In practice this is an effective livelock that will hang wallet sync for hours or days during a network outage. Add a cap on consecutive errors (e.g., bail after 3-5 consecutive failures) or a total iteration limit.
source: ['claude-general', 'codex-general', 'claude-rust-quality', 'codex-rust-quality']
```rust
let created_at_core_block_height = doc.created_at_core_block_height().ok_or_else(|| {
    PlatformWalletError::InvalidIdentityData(
        "Missing created_at_core_block_height in contact request".to_string(),
    )
})?;

let created_at = doc.created_at().ok_or_else(|| {
    PlatformWalletError::InvalidIdentityData(
        "Missing created_at in contact request".to_string(),
    )
})?;
```
🟡 Suggestion: Timestamp parsing is now strict — intentional but worth documenting
The old code used unwrap_or(0) for created_at_core_block_height and created_at; the new code returns an error via ok_or_else. Commit 7e87f5b explicitly describes this as intentional ("consistent with other required contact request fields"). The current DashPay contract schema (in rs-drive-abci/tests/.../dashpay-contract-contact-request-*.json) does require $createdAtCoreBlockHeight, so all documents created under the current schema will have it. However, the older contract version (rs-drive/tests/.../dashpay-contract.json) did not require it. If any historical documents from the pre-update schema still exist on Platform, they would now fail parsing and be silently skipped (logged as a warning). If backward compatibility with pre-schema-update documents is not a concern, this is fine as-is.
source: ['claude-general', 'codex-general', 'claude-rust-quality', 'codex-rust-quality']
```rust
    })?
    .clone();

// Derive the first authentication key (identity_index 0, key_index 0)
let identity_index = 0u32;
let key_index = 0u32;

// Build identity authentication derivation path
// Path format: m/9'/COIN_TYPE'/5'/0'/identity_index'/key_index'
use key_wallet::bip32::{ChildNumber, DerivationPath};
use key_wallet::dip9::{
    IDENTITY_AUTHENTICATION_PATH_MAINNET, IDENTITY_AUTHENTICATION_PATH_TESTNET,
};

let base_path = match self.network() {
    Network::Dash => IDENTITY_AUTHENTICATION_PATH_MAINNET,
    Network::Testnet => IDENTITY_AUTHENTICATION_PATH_TESTNET,
    _ => {
        return Err(PlatformWalletError::InvalidIdentityData(
            "Unsupported network for identity derivation".to_string(),
        ));
    }
};

// Create full derivation path: base path + identity_index' + key_index'
let mut full_path = DerivationPath::from(base_path);
full_path = full_path.extend([
    ChildNumber::from_hardened_idx(identity_index).map_err(|e| {
        PlatformWalletError::InvalidIdentityData(format!("Invalid identity index: {}", e))
    })?,
    ChildNumber::from_hardened_idx(key_index).map_err(|e| {
        PlatformWalletError::InvalidIdentityData(format!("Invalid key index: {}", e))
    })?,
]);

// Derive the extended private key at this path
let auth_key = wallet
    .derive_extended_private_key(&full_path)
    .map_err(|e| {
        PlatformWalletError::InvalidIdentityData(format!(
            "Failed to derive authentication key: {}",
            e
        ))
    })?;

// Get public key bytes and hash them
use dashcore::secp256k1::Secp256k1;
use key_wallet::bip32::ExtendedPubKey;
let secp = Secp256k1::new();
let public_key = ExtendedPubKey::from_priv(&secp, &auth_key);
let public_key_bytes = public_key.public_key.serialize();
let key_hash = ripemd160_sha256(&public_key_bytes);

// Create a fixed-size array from the hash
let mut key_hash_array = [0u8; 20];
key_hash_array.copy_from_slice(&key_hash);

// Derive the first authentication key hash (identity_index 0, key_index 0)
let key_hash_array = derive_identity_auth_key_hash(wallet, self.network(), 0, 0)?;
```
🟡 Suggestion: asset_lock_transactions parameter is accepted but never iterated (pre-existing)
The asset_lock_transactions: &[dashcore::Transaction] parameter is only used for an .is_empty() guard — the actual transaction data is never examined. The function always derives identity_index 0 regardless of how many transactions are passed. This is pre-existing behavior not introduced by this PR, but the refactor is a good opportunity to either iterate over the transactions or change the parameter to a simple has_asset_locks: bool.
source: ['claude-general', 'claude-rust-quality', 'codex-rust-quality']
```rust
    ))
})?;

let secp = Secp256k1::new();
```
💬 Nitpick: Secp256k1 context allocated per call in a loop
Secp256k1::new() allocates a ~750 KB signing+verification context. In discover_identities, this is called once per iteration via derive_identity_auth_key_hash. Since each iteration also does a network round-trip to Platform, the allocation overhead is negligible in practice. Still, accepting a &Secp256k1 parameter or using Secp256k1::verification_only() (~350 KB) would be cleaner if this function is ever used in tighter loops.
source: ['claude-rust-quality']
thepastaclaw left a comment

Code Review
Clean refactor that successfully deduplicates contact request logic and replaces eprintln! with tracing. One confirmed bug: the gap-limit discovery loop can run indefinitely on persistent network failures. The timestamp strictness change was intentional and aligns with the current contract schema, so it is not a blocker but worth documenting.
Reviewed commit: 7e87f5b
🔴 1 blocking | 🟡 2 suggestion(s) | 💬 1 nitpick(s)
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `packages/rs-platform-wallet/src/platform_wallet_info/identity_discovery.rs`:
- [BLOCKING] lines 58-89: Unbounded loop when Platform queries consistently fail
The `while consecutive_misses < gap_limit` loop only terminates when `gap_limit` consecutive `Ok(None)` results are seen. In the `Err(e)` branch, the code intentionally does not increment `consecutive_misses` (the comment explains why), but `identity_index` advances unconditionally. If Platform is unreachable, every iteration hits `Err`, `consecutive_misses` stays at 0, and the loop runs until `identity_index` reaches 2^31 (~2 billion iterations) where `ChildNumber::from_hardened_idx` finally fails. In practice this is an effective livelock that will hang wallet sync for hours or days during a network outage. Add a cap on consecutive errors (e.g., bail after 3-5 consecutive failures) or a total iteration limit.
In `packages/rs-platform-wallet/src/platform_wallet_info/mod.rs`:
- [SUGGESTION] lines 202-212: Timestamp parsing is now strict — intentional but worth documenting
The old code used `unwrap_or(0)` for `created_at_core_block_height` and `created_at`; the new code returns an error via `ok_or_else`. Commit 7e87f5b6e explicitly describes this as intentional ("consistent with other required contact request fields"). The current DashPay contract schema (in `rs-drive-abci/tests/.../dashpay-contract-contact-request-*.json`) does require `$createdAtCoreBlockHeight`, so all documents created under the current schema will have it. However, the older contract version (`rs-drive/tests/.../dashpay-contract.json`) did not require it. If any historical documents from the pre-update schema still exist on Platform, they would now fail parsing and be silently skipped (logged as a warning). If backward compatibility with pre-schema-update documents is not a concern, this is fine as-is.
In `packages/rs-platform-wallet/src/platform_wallet_info/matured_transactions.rs`:
- [SUGGESTION] lines 55-83: asset_lock_transactions parameter is accepted but never iterated (pre-existing)
The `asset_lock_transactions: &[dashcore::Transaction]` parameter is only used for an `.is_empty()` guard — the actual transaction data is never examined. The function always derives identity_index 0 regardless of how many transactions are passed. This is pre-existing behavior not introduced by this PR, but the refactor is a good opportunity to either iterate over the transactions or change the parameter to a simple `has_asset_locks: bool`.
| while consecutive_misses < gap_limit { | ||
| // Derive the authentication key hash for this identity index (key_index 0) | ||
| let key_hash_array = derive_identity_auth_key_hash(wallet, network, identity_index, 0)?; | ||
|
|
||
| // Query Platform for an identity registered with this key hash | ||
| match dpp::identity::Identity::fetch(&sdk, PublicKeyHash(key_hash_array)).await { | ||
| Ok(Some(identity)) => { | ||
| let identity_id = identity.id(); | ||
|
|
||
| // Add to manager if not already present | ||
| if self.identity_manager().identity(&identity_id).is_none() { | ||
| self.identity_manager_mut().add_identity(identity)?; | ||
| discovered.push(identity_id); | ||
| } | ||
| consecutive_misses = 0; | ||
| } | ||
| Ok(None) => { | ||
| consecutive_misses += 1; | ||
| } | ||
| Err(e) => { | ||
| tracing::warn!( | ||
| identity_index, | ||
| "Failed to query identity by public key hash: {}", | ||
| e | ||
| ); | ||
| // Don't increment consecutive_misses: a query failure is not | ||
| // a confirmed empty slot and should not consume the gap budget. | ||
| } | ||
| } | ||
|
|
||
| identity_index += 1; | ||
| } |
There was a problem hiding this comment.
🔴 Blocking: Unbounded loop when Platform queries consistently fail
The while consecutive_misses < gap_limit loop only terminates when gap_limit consecutive Ok(None) results are seen. In the Err(e) branch, the code intentionally does not increment consecutive_misses (the comment explains why), but identity_index advances unconditionally. If Platform is unreachable, every iteration hits Err, consecutive_misses stays at 0, and the loop runs until identity_index reaches 2^31 (~2 billion iterations) where ChildNumber::from_hardened_idx finally fails. In practice this is an effective livelock that will hang wallet sync for hours or days during a network outage. Add a cap on consecutive errors (e.g., bail after 3-5 consecutive failures) or a total iteration limit.
source: ['claude-general', 'codex-general', 'claude-rust-quality', 'codex-rust-quality']
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `packages/rs-platform-wallet/src/platform_wallet_info/identity_discovery.rs`:
- [BLOCKING] lines 58-89: Unbounded loop when Platform queries consistently fail
The `while consecutive_misses < gap_limit` loop only terminates when `gap_limit` consecutive `Ok(None)` results are seen. In the `Err(e)` branch, the code intentionally does not increment `consecutive_misses` (the comment explains why), but `identity_index` advances unconditionally. If Platform is unreachable, every iteration hits `Err`, `consecutive_misses` stays at 0, and the loop runs until `identity_index` reaches 2^31 (~2 billion iterations) where `ChildNumber::from_hardened_idx` finally fails. In practice this is an effective livelock that will hang wallet sync for hours or days during a network outage. Add a cap on consecutive errors (e.g., bail after 3-5 consecutive failures) or a total iteration limit.
| let created_at_core_block_height = doc.created_at_core_block_height().ok_or_else(|| { | ||
| PlatformWalletError::InvalidIdentityData( | ||
| "Missing created_at_core_block_height in contact request".to_string(), | ||
| ) | ||
| })?; | ||
|
|
||
| let created_at = doc.created_at().ok_or_else(|| { | ||
| PlatformWalletError::InvalidIdentityData( | ||
| "Missing created_at in contact request".to_string(), | ||
| ) | ||
| })?; |
There was a problem hiding this comment.
🟡 Suggestion: Timestamp parsing is now strict — intentional but worth documenting
The old code used unwrap_or(0) for created_at_core_block_height and created_at; the new code returns an error via ok_or_else. Commit 7e87f5b explicitly describes this as intentional ("consistent with other required contact request fields"). The current DashPay contract schema (in rs-drive-abci/tests/.../dashpay-contract-contact-request-*.json) does require $createdAtCoreBlockHeight, so all documents created under the current schema will have it. However, the older contract version (rs-drive/tests/.../dashpay-contract.json) did not require it. If any historical documents from the pre-update schema still exist on Platform, they would now fail parsing and be silently skipped (logged as a warning). If backward compatibility with pre-schema-update documents is not a concern, this is fine as-is.
source: ['claude-general', 'codex-general', 'claude-rust-quality', 'codex-rust-quality']
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `packages/rs-platform-wallet/src/platform_wallet_info/mod.rs`:
- [SUGGESTION] lines 202-212: Timestamp parsing is now strict — intentional but worth documenting
The old code used `unwrap_or(0)` for `created_at_core_block_height` and `created_at`; the new code returns an error via `ok_or_else`. Commit 7e87f5b6e explicitly describes this as intentional ("consistent with other required contact request fields"). The current DashPay contract schema (in `rs-drive-abci/tests/.../dashpay-contract-contact-request-*.json`) does require `$createdAtCoreBlockHeight`, so all documents created under the current schema will have it. However, the older contract version (`rs-drive/tests/.../dashpay-contract.json`) did not require it. If any historical documents from the pre-update schema still exist on Platform, they would now fail parsing and be silently skipped (logged as a warning). If backward compatibility with pre-schema-update documents is not a concern, this is fine as-is.
| })? | ||
| .clone(); | ||
|
|
||
| // Derive the first authentication key (identity_index 0, key_index 0) | ||
| let identity_index = 0u32; | ||
| let key_index = 0u32; | ||
|
|
||
| // Build identity authentication derivation path | ||
| // Path format: m/9'/COIN_TYPE'/5'/0'/identity_index'/key_index' | ||
| use key_wallet::bip32::{ChildNumber, DerivationPath}; | ||
| use key_wallet::dip9::{ | ||
| IDENTITY_AUTHENTICATION_PATH_MAINNET, IDENTITY_AUTHENTICATION_PATH_TESTNET, | ||
| }; | ||
|
|
||
| let base_path = match self.network() { | ||
| Network::Dash => IDENTITY_AUTHENTICATION_PATH_MAINNET, | ||
| Network::Testnet => IDENTITY_AUTHENTICATION_PATH_TESTNET, | ||
| _ => { | ||
| return Err(PlatformWalletError::InvalidIdentityData( | ||
| "Unsupported network for identity derivation".to_string(), | ||
| )); | ||
| } | ||
| }; | ||
|
|
||
| // Create full derivation path: base path + identity_index' + key_index' | ||
| let mut full_path = DerivationPath::from(base_path); | ||
| full_path = full_path.extend([ | ||
| ChildNumber::from_hardened_idx(identity_index).map_err(|e| { | ||
| PlatformWalletError::InvalidIdentityData(format!("Invalid identity index: {}", e)) | ||
| })?, | ||
| ChildNumber::from_hardened_idx(key_index).map_err(|e| { | ||
| PlatformWalletError::InvalidIdentityData(format!("Invalid key index: {}", e)) | ||
| })?, | ||
| ]); | ||
|
|
||
| // Derive the extended private key at this path | ||
| let auth_key = wallet | ||
| .derive_extended_private_key(&full_path) | ||
| .map_err(|e| { | ||
| PlatformWalletError::InvalidIdentityData(format!( | ||
| "Failed to derive authentication key: {}", | ||
| e | ||
| )) | ||
| })?; | ||
|
|
||
| // Get public key bytes and hash them | ||
| use dashcore::secp256k1::Secp256k1; | ||
| use key_wallet::bip32::ExtendedPubKey; | ||
| let secp = Secp256k1::new(); | ||
| let public_key = ExtendedPubKey::from_priv(&secp, &auth_key); | ||
| let public_key_bytes = public_key.public_key.serialize(); | ||
| let key_hash = ripemd160_sha256(&public_key_bytes); | ||
|
|
||
| // Create a fixed-size array from the hash | ||
| let mut key_hash_array = [0u8; 20]; | ||
| key_hash_array.copy_from_slice(&key_hash); | ||
| // Derive the first authentication key hash (identity_index 0, key_index 0) | ||
| let key_hash_array = derive_identity_auth_key_hash(wallet, self.network(), 0, 0)?; |
There was a problem hiding this comment.
🟡 Suggestion: asset_lock_transactions parameter is accepted but never iterated (pre-existing)
The asset_lock_transactions: &[dashcore::Transaction] parameter is only used for an .is_empty() guard — the actual transaction data is never examined. The function always derives identity_index 0 regardless of how many transactions are passed. This is pre-existing behavior not introduced by this PR, but the refactor is a good opportunity to either iterate over the transactions or change the parameter to a simple has_asset_locks: bool.
source: ['claude-general', 'claude-rust-quality', 'codex-rust-quality']
```rust
                    ))
                })?;

        let secp = Secp256k1::new();
```
💬 Nitpick: Secp256k1 context allocated per call in a loop
Secp256k1::new() allocates a ~750 KB signing+verification context. In discover_identities, this is called once per iteration via derive_identity_auth_key_hash. Since each iteration also does a network round-trip to Platform, the allocation overhead is negligible in practice. Still, accepting a &Secp256k1 parameter or using Secp256k1::verification_only() (~350 KB) would be cleaner if this function is ever used in tighter loops.
source: ['claude-rust-quality']
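The hoisting pattern the nitpick suggests can be sketched without the real secp256k1 crate. `ExpensiveContext`, its `derive` method, and the construction counter below are illustrative stand-ins, not the crate's API; the point is only that the caller builds the context once and passes a reference into the loop.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for an expensive-to-build context such as secp256k1::Secp256k1.
static CONTEXTS_BUILT: AtomicUsize = AtomicUsize::new(0);

struct ExpensiveContext;

impl ExpensiveContext {
    fn new() -> Self {
        // Count constructions so the difference between the two call
        // patterns is observable.
        CONTEXTS_BUILT.fetch_add(1, Ordering::SeqCst);
        ExpensiveContext
    }

    fn derive(&self, index: u32) -> u32 {
        // Placeholder for real key derivation.
        index.wrapping_mul(31)
    }
}

// Per-call allocation: one context is built for every derivation.
fn derive_per_call(indices: &[u32]) -> Vec<u32> {
    indices
        .iter()
        .map(|&i| ExpensiveContext::new().derive(i))
        .collect()
}

// Hoisted: the caller builds the context once and lends it out.
fn derive_with_context(ctx: &ExpensiveContext, indices: &[u32]) -> Vec<u32> {
    indices.iter().map(|&i| ctx.derive(i)).collect()
}
```

With the real type, the hoisted variant would accept `&Secp256k1<C>` (or a `verification_only()` context if signing is not needed).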
🕓 Ready for review — 12 ahead in queue (commit b7d140f)
Add DashSync-style identity discovery to PlatformWalletInfo that scans consecutive DIP-13 authentication key indices and queries Platform to find registered identities during wallet sync.

New methods:
- discover_identities: gap-limit scan using PublicKeyHash queries
- discover_identities_with_contacts: same + fetches DashPay contacts

Refactors shared key derivation and contact request parsing into reusable modules used by both discovery and asset lock processing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…se tracing

- Extract duplicated contact request fetch+store loops from identity_discovery.rs and matured_transactions.rs into a shared fetch_and_store_contact_requests() helper method on PlatformWalletInfo
- Replace all eprintln! calls with tracing::warn! for proper library logging (add tracing dependency)
- Extract magic number 100 into DEFAULT_CONTACT_REQUEST_LIMIT constant
- Log silently-swallowed contact request parse errors instead of ignoring them (aids debugging malformed documents)
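The shape of the extracted helper can be sketched with stand-in types. `store_parsed_requests`, the string documents, the parser closure, and the store closure below are hypothetical simplifications of the real `fetch_and_store_contact_requests()` flow; what carries over is one shared loop body for both directions and parse failures being counted and logged rather than silently dropped.

```rust
// Mirrors the extracted constant (the real value is 100 per the commit message).
const DEFAULT_CONTACT_REQUEST_LIMIT: u32 = 100;

// One loop body shared by the sent and received directions.
fn store_parsed_requests<P, S>(docs: Vec<Option<String>>, mut parse: P, mut store: S) -> usize
where
    P: FnMut(&str) -> Result<u32, String>,
    S: FnMut(u32),
{
    let mut parse_failures = 0;
    // `flatten` skips documents that came back as None.
    for doc in docs.into_iter().flatten() {
        match parse(&doc) {
            Ok(request) => store(request),
            Err(_e) => {
                // The real helper emits tracing::warn! here instead of
                // silently discarding the malformed document.
                parse_failures += 1;
            }
        }
    }
    parse_failures
}
```

Both call sites then reduce to a single helper invocation plus error handling, which is where the ~60 removed lines come from.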
7e87f5b to f4737a6 (Compare)
thepastaclaw left a comment

Code Review
One blocking regression: the gap-limit identity scan no longer increments consecutive_misses on Err, so if Platform lookups keep failing the loop never terminates and identity_index walks until u32 overflow. The mandatory-timestamp change is not a regression — the DashPay v1 schema lists both $createdAt and $createdAtCoreBlockHeight as required, so valid on-chain contact requests always carry them; the old unwrap_or(0) merely masked invalid data. Two real quality suggestions remain: a missed perf fix in the sibling asset-lock path and a long-standing dead parameter in fetch_contact_requests_for_identities_after_asset_locks.
Reviewed commit: f4737a6
🔴 1 blocking | 🟡 2 suggestion(s)
```rust
                Err(e) => {
                    tracing::warn!(
                        identity_index,
                        "Failed to query identity by public key hash: {}",
                        e
                    );
                    // Don't increment consecutive_misses: a query failure is not
                    // a confirmed empty slot and should not consume the gap budget.
                }
```
🔴 Blocking: discover_identities never terminates if every Identity::fetch returns Err
The loop exit condition is consecutive_misses < gap_limit, but the new Err arm deliberately does not bump consecutive_misses. Only Ok(None) advances the counter and Ok(Some(_)) resets it. If the SDK/endpoint is persistently failing (network outage, misconfigured endpoint, hostile/misbehaving masternode, rate-limit errors, proof-verification failure, etc.), every iteration takes the Err branch, consecutive_misses stays at 0, identity_index is incremented unconditionally each turn, and the scan runs forever — spinning CPU, issuing RPC/derivation work, and eventually overflowing u32 (panic in debug, wraparound in release which re-scans from 0). This is a client-side hang and a mild DoS surface for anything that can deny responses to this specific query; it is also a correctness regression versus the pre-PR behavior, which consumed the gap budget on errors and therefore always terminated.
Either restore error-driven termination, or add a bounded max_consecutive_errors budget (and ideally switch to identity_index.checked_add(1) so the index is provably finite). Surfacing the SDK error up to the caller is also acceptable — let the caller decide whether to retry or give up.
💡 Suggested change
```diff
                 Err(e) => {
                     tracing::warn!(
                         identity_index,
                         "Failed to query identity by public key hash: {}",
                         e
                     );
-                    // Don't increment consecutive_misses: a query failure is not
-                    // a confirmed empty slot and should not consume the gap budget.
+                    consecutive_misses += 1;
                 }
```
source: ['claude', 'codex']
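The suggested "bounded error budget plus checked_add" alternative can be sketched in isolation. `scan_identities` and the `fetch` closure below are illustrative stand-ins for the real scan loop and the SDK's identity lookup, not the crate's API; the sketch shows that every path either makes progress toward the gap limit, trips the error budget, or fails on index overflow, so the loop always terminates.

```rust
fn scan_identities<F>(
    gap_limit: u32,
    max_consecutive_errors: u32,
    mut fetch: F,
) -> Result<Vec<u32>, String>
where
    // Stand-in for Identity::fetch by public key hash: Ok(Some(id)) = hit,
    // Ok(None) = confirmed empty slot, Err = query failure.
    F: FnMut(u32) -> Result<Option<u32>, String>,
{
    let mut discovered = Vec::new();
    let mut identity_index: u32 = 0;
    let mut consecutive_misses: u32 = 0;
    let mut consecutive_errors: u32 = 0;

    while consecutive_misses < gap_limit {
        match fetch(identity_index) {
            Ok(Some(id)) => {
                discovered.push(id);
                consecutive_misses = 0;
                consecutive_errors = 0;
            }
            Ok(None) => {
                // Only a confirmed empty slot consumes the gap budget.
                consecutive_misses += 1;
                consecutive_errors = 0;
            }
            Err(e) => {
                // Query failures consume a separate, bounded budget, so a
                // persistently failing endpoint cannot spin the scan forever.
                consecutive_errors += 1;
                if consecutive_errors >= max_consecutive_errors {
                    return Err(format!("giving up after repeated fetch failures: {}", e));
                }
            }
        }
        // checked_add makes the index provably finite (no debug panic or
        // release wraparound back to index 0).
        identity_index = identity_index
            .checked_add(1)
            .ok_or_else(|| "identity index overflow".to_string())?;
    }
    Ok(discovered)
}
```

Surfacing the budget exhaustion as an `Err` lets the caller decide whether to retry or give up, which matches the review's third option.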
```diff
@@ -152,148 +97,27 @@ impl PlatformWalletInfo {
 }
```
🟡 Suggestion: identities().contains_key clones the entire identity map; mirror the identity_discovery.rs fix
IdentityManager::identities() returns an owned IndexMap<Identifier, Identity> built by cloning every managed identity. Calling .contains_key(&identity_id) on that return value clones every Identity (with its full public-key map and balance) just to test membership of a single key. The sibling call site in identity_discovery.rs was specifically changed in this PR to use self.identity_manager().identity(&identity_id).is_none() to avoid exactly this clone, but the identical pattern here in the asset-lock hot path was missed. Fix it for consistency and because this runs on every relevant core transaction via WalletTransactionChecker.
💡 Suggested change
```rust
            }
            // Add identity to manager if not already present
            if self.identity_manager().identity(&identity_id).is_none() {
                self.identity_manager_mut().add_identity(identity.clone())?;
            }
```
source: ['claude']
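The cost difference behind this suggestion can be illustrated with a simplified stand-in for `IdentityManager` (the real types and their full public-key maps are not reproduced here): `identities()` returns an owned map built by cloning every entry, while `identity(&id)` only borrows one.

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct Identity {
    balance: u64,
}

struct IdentityManager {
    map: HashMap<u32, Identity>,
}

impl IdentityManager {
    // Mirrors the accessor described in the review: an owned snapshot,
    // so every Identity is cloned even for a one-key membership test.
    fn identities(&self) -> HashMap<u32, Identity> {
        self.map.clone()
    }

    // Borrows a single entry: a membership test costs no clones.
    fn identity(&self, id: &u32) -> Option<&Identity> {
        self.map.get(id)
    }
}
```

So `manager.identity(&id).is_none()` answers the same question as `manager.identities().contains_key(&id)` without materializing the whole map.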
```diff
             }

             // Fetch DashPay contact requests for this identity
-            match sdk
-                .fetch_all_contact_requests_for_identity(&identity, Some(100))
+            if let Err(e) = self
+                .fetch_and_store_contact_requests(&sdk, &identity, &identity_id)
                 .await
             {
-                Ok((sent_docs, received_docs)) => {
-                    // Process sent contact requests
-                    for (_doc_id, maybe_doc) in sent_docs {
-                        if let Some(doc) = maybe_doc {
-                            if let Ok(contact_request) = parse_contact_request_document(&doc) {
-                                // Add to managed identity
-                                if let Some(managed_identity) = self
-                                    .identity_manager_mut()
-                                    .managed_identity_mut(&identity_id)
-                                {
-                                    managed_identity.add_sent_contact_request(contact_request);
-                                }
-                            }
-                        }
-                    }
-
-                    // Process received contact requests
-                    for (_doc_id, maybe_doc) in received_docs {
-                        if let Some(doc) = maybe_doc {
-                            if let Ok(contact_request) = parse_contact_request_document(&doc) {
-                                // Add to managed identity
-                                if let Some(managed_identity) = self
-                                    .identity_manager_mut()
-                                    .managed_identity_mut(&identity_id)
-                                {
-                                    managed_identity
-                                        .add_incoming_contact_request(contact_request);
-                                }
-                            }
-                        }
-                    }
-
-                    identities_processed.push(identity_id);
-                }
-                Err(e) => {
-                    eprintln!(
-                        "Failed to fetch contact requests for identity {}: {}",
-                        identity_id, e
-                    );
-                }
+                tracing::warn!(
+                    %identity_id,
+                    "Failed to fetch contact requests after asset lock: {}",
+                    e
+                );
+            } else {
+                identities_processed.push(identity_id);
             }
         }
         Ok(None) => {
             // No identity found for this key - that's ok, may not be registered yet
         }
         Err(e) => {
-            eprintln!("Failed to query identity by public key hash: {}", e);
+            tracing::warn!("Failed to query identity by public key hash: {}", e);
         }
     }

     Ok(identities_processed)
 }
```
🟡 Suggestion: fetch_contact_requests_for_identities_after_asset_locks ignores its input and always re-fetches identity index 0
The function takes asset_lock_transactions: &[dashcore::Transaction] and advertises that it processes identities for those asset locks, but after the is_empty() guard it never inspects the slice. Every non-empty call derives (identity_index=0, key_index=0) and fetches a single identity, so the result is identical regardless of which transactions were supplied. For wallets with more than one Platform identity this means later asset locks always re-fetch the first identity instead of discovering the identity created at the correct derivation index. This pattern pre-dates this PR, but since the function is being refactored here it is a good time to either iterate over the transactions (mapping each asset lock to its corresponding identity index) or rename the API to reflect that it only inspects the first index.
source: ['codex']
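One possible shape for actually iterating the transactions, with `AssetLock` and the two closures as hypothetical stand-ins for the real derivation and Platform fetch calls: each asset lock is mapped to its own identity index, its authentication key hash is derived, and the lookup runs per transaction instead of probing index 0 once.

```rust
// Hypothetical simplified transaction: in the real code the identity index
// would be recovered from the asset lock's derivation metadata.
struct AssetLock {
    registration_index: u32,
}

fn identities_for_asset_locks<D, F>(
    locks: &[AssetLock],
    mut derive_hash: D,
    mut fetch_identity: F,
) -> Vec<u32>
where
    // Stand-in for derive_identity_auth_key_hash(wallet, network, index, 0).
    D: FnMut(u32) -> [u8; 20],
    // Stand-in for Identity::fetch by public key hash.
    F: FnMut(&[u8; 20]) -> Option<u32>,
{
    let mut found = Vec::new();
    for lock in locks {
        let key_hash = derive_hash(lock.registration_index);
        if let Some(identity_id) = fetch_identity(&key_hash) {
            // Deduplicate: two asset locks can fund the same identity.
            if !found.contains(&identity_id) {
                found.push(identity_id);
            }
        }
    }
    found
}
```

The alternative the review offers, renaming the API to reflect that only index 0 is inspected, avoids the extra lookups but keeps the single-identity limitation.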
Refine the review-follow-up fixes on the extracted platform-wallet flows:

- only push newly added identities into the discovery result set
- keep timestamp parsing strict for required contact-request fields
- bound repeated identity fetch failures with a separate error budget and checked_add-based index advancement
- derive identity auth lookups from the asset-lock transactions that actually matched the identity-registration account instead of always rechecking index 0
- avoid cloning the full identity map when checking whether an identity is already managed
f4737a6 to b7d140f (Compare)
thepastaclaw left a comment

Code Review
The PR is described as a focused contact-request refactor plus tracing cleanup, but the actual merge diff against v3.1-dev is a massive feature branch: 1006 files changed with 182k+ insertions across workflows, docs, rs-dpp, rs-dapi, data contracts, SDK crates, and many unrelated packages. That mismatch is the real review problem here — this is not reviewable or merge-safe as a narrowly scoped refactor/logging PR.
Reviewed commit: b7d140f
🔴 1 blocking
1 additional finding
🔴 blocking: This contact-request refactor PR is not actually a focused change — it merges a huge product branch into `v3.1-dev`
packages/rs-sdk/src/platform_mobile/contacts.rs (lines 1-745)
The PR description says this change just extracts shared contact-request logic into helper functions and replaces ad-hoc logs with tracing in the mobile contacts SDK. But the real merge diff against v3.1-dev is 1006 files with 182k+ insertions touching workflows, docs/book content, rs-dpp core types and validation, rs-dapi, rs-dapi-client, data-contract packages, SDK/support crates, and many unrelated packages across the monorepo. That means approving/merging this PR would not just land the contact-request refactor — it would also merge a very large body of unrelated platform changes that are not disclosed by the PR summary and cannot be meaningfully audited as part of a narrow refactor/logging review. Split this into (a) a true contact-request/tracing refactor PR containing only the relevant SDK files, and (b) separate feature/integration PRs for the broader platform changes. As-is, the PR description materially understates what will actually merge.
Verified the blocking scope comment against the current PR head. The current diff vs v3.1-dev is a focused PR, not a hidden 1006-file product-branch merge. So there's no branch-splitting/code change to make for this review item; treating it as a stale/incorrect scope report.
Summary

Follow-up refactor for #3188 (identity discovery scan). Addresses self-review findings:

- fetch_and_store_contact_requests() helper eliminates ~60 lines of copy-paste between identity_discovery.rs and matured_transactions.rs
- Replaced eprintln! with tracing::warn! — proper structured logging for a library crate
- Extracted the magic number 100 into the DEFAULT_CONTACT_REQUEST_LIMIT constant

Validation

- cargo check ✅
- cargo clippy ✅
- cargo fmt --check ✅

Summary by CodeRabbit

New Features

Improvements