perf: use autoresearch to reduce allocations #3225

Merged
tac0turtle merged 18 commits into main from autoresearch/block-perf-2026-04-04
Apr 14, 2026

Conversation

@tac0turtle
Contributor

@tac0turtle tac0turtle commented Apr 4, 2026

Overview

This PR used autoresearch to run an iterative optimization loop aimed at reducing allocations per block. Each step below records the benchmark result and whether the change was kept or discarded:

| Step | Commit | allocs/op | bytes/op | ns/op | Verdict | Change |
| --- | --- | --- | --- | --- | --- | --- |
| 3 | a8b6352 | 78 | 22,996 | 33,091 | keep | Unsafe reinterpret cast of Txs to [][]byte in ApplyBlock — eliminates make([][]byte, n) allocation |
| 4 | 823aa62 | 77 | 20,276 | 32,480 | keep | Direct pb.Data serialization in DACommitment — avoids pruned Data wrapper and txsToByteSlices allocations |
| 5 | 0720b44 | 74 | 12,192 | 31,624 | keep | unsafe.Slice in Data.ToProto() — eliminates txsToByteSlices [][]byte allocation |
| 6 | — | 74 | 12,187 | 31,217 | discard | Reverted hand-written HashSlim wire encoder — produced different hashes than MarshalBinary |
| 7 | ccbc2e4 | 64 | 11,130 | 31,570 | keep | sync.Pool for protobuf message structs in MarshalBinary — eliminates 10 allocs per block |
| 8 | 805672e | 56 | 10,217 | 31,037 | keep | Pooled SignedHeader.MarshalBinary — reuse pb.SignedHeader/pb.Header/pb.Signer/pb.Version structs |

Summary by CodeRabbit

  • Tests

    • Added benchmark tests for header hashing functionality.
    • Added test coverage for data serialization copy semantics and state round-trip validation.
  • Chores

    • Refactored internal serialization logic for data structures.
    • Updated transaction data handling mechanisms.
    • Adjusted error messages in state update operations.

Result: {"status":"keep","allocs_per_op":81,"bytes_per_op":25934,"ns_per_op":34001}
…allocations per block

Result: {"status":"keep","allocs_per_op":79,"bytes_per_op":25697,"ns_per_op":34147}
… make([][]byte, n) allocation

Result: {"status":"keep","allocs_per_op":78,"bytes_per_op":22996,"ns_per_op":33091}
…pper and txsToByteSlices allocations

Result: {"status":"keep","allocs_per_op":77,"bytes_per_op":20276,"ns_per_op":32480}
…allocation

Result: {"status":"keep","allocs_per_op":74,"bytes_per_op":12192,"ns_per_op":31624}
…10 allocs per block

Replace per-call allocation of pb.Header/pb.Version/pb.Data/pb.Metadata with
sync.Pool reuse in the hot MarshalBinary path. ToProto() API is unchanged —
only MarshalBinary is affected since it consumes the result immediately.

Metrics (100_txs benchmark):
- 74 → 64 allocs/op (-13.5%)
- ~12.1 → ~11.1 KB (-8.3%)
- ns/op ~31.6µs, roughly flat
…Signer/pb.Version

Eliminates 4 struct allocations per signed header marshal: pb.SignedHeader,
pb.Header, pb.Version, pb.Signer. These are now borrowed from sync.Pool and
returned after proto.Marshal completes.

Metrics (100_txs benchmark):
- 64 → 56 allocs/op
- ~11KB → ~10.2KB
…er block

State.ToProto allocated pb.State + pb.Version + timestamppb.Timestamp per block.
MarshalBinary now pools those structs and returns the marshaled bytes directly.
pkg/store/batch.UpdateState switched from ToProto+proto.Marshal to MarshalBinary.
@coderabbitai
Contributor

coderabbitai bot commented Apr 4, 2026

📝 Walkthrough

This PR optimizes memory allocation and serialization performance by introducing object pooling for protobuf messages and SHA256 hashers, replacing protobuf-based state conversions with binary marshalling, and leveraging unsafe slice casting for efficient type reinterpretation across multiple components.

Changes

| Cohort | File(s) | Summary |
| --- | --- | --- |
| Transaction Processing Optimization | block/internal/executing/executor.go, types/tx.go | Refactored transaction slice handling to use unsafe.Slice for efficient type reinterpretation instead of allocating new slices; added compile-time size assertion guards in tx.go using unsafe.Sizeof. |
| State Serialization | pkg/store/batch.go, types/serialization.go | Removed protobuf conversion dependencies; updated UpdateState to use MarshalBinary() directly; added MarshalBinary() method to State type with binary marshalling logic. |
| Hashing Performance | types/hashing.go, types/hash_memo_bench_test.go | Introduced pooled SHA256 hasher via sync.Pool to reduce allocations; refactored DACommitment() to use a protobuf-based approach with unsafe casting; added three benchmark functions measuring header hashing under different cache states. |
| Serialization Infrastructure | types/serialization.go, types/serialization_test.go | Implemented sync.Pool-backed pooled protobuf message instances for Header, Version, Data, Metadata, Signer, SignedHeader, and State; reworked marshalling methods to acquire, populate, and return pooled messages; added tests validating copy semantics and binary round-trip behavior. |

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Suggested reviewers

  • tzdybal
  • Manav-Aggarwal
  • julienrbrt

Poem

🐰 Hops of joy through pooled hashers we go,
Unsafe slices dancing, so very low!
Allocations fade like morning dew,
Protobuf pools make serialization true.
Performance blooms where bytes align,
This optimization? Perfectly divine! 🚀

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Description check | ⚠️ Warning | The PR description lacks a structured Overview section as required by the template; it only lists technical details without explaining context, goals, or rationale. | Add an Overview section explaining the purpose, context, and goals of the performance optimization work, plus link to any related issue (e.g., 'Closes #'). |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 60.00%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (1 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title clearly summarizes the main objective of the PR: using autoresearch to reduce allocations in the codebase. |


@github-actions
Contributor

github-actions bot commented Apr 4, 2026

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

| Build | Format | Lint | Breaking | Updated (UTC) |
| --- | --- | --- | --- | --- |
| ✅ passed | ⏩ skipped | ✅ passed | ✅ passed | Apr 14, 2026, 1:17 PM |

@claude
Contributor

claude bot commented Apr 4, 2026

Claude finished @tac0turtle's task in 2m 44s


Code Review

  • Reading CLAUDE.md / AGENTS.md for repo guidelines
  • Reading changed files
  • Analyzing code quality, safety, and performance
  • Posting comprehensive review

Good overall work — the allocation reductions are meaningful and the approach is sound. There are a few correctness issues worth addressing before merge.


🔴 Bugs

1. require.NoError inside goroutine — test will silently pass on error

pkg/sync/syncer_status_test.go:36

for range 8 {
    wg.Go(func() {
        _, err := status.startOnce(...)
        require.NoError(t, err)  // ← unsafe: calls t.FailNow() from non-test goroutine
    })
}

require.NoError calls t.FailNow(), which calls runtime.Goexit(). When called from a goroutine that is not the test goroutine, it exits only that goroutine — the test continues and reports passed. Errors are silently swallowed. Fix: collect errors in a buffered channel and assert on the main goroutine after wg.Wait().

errs := make(chan error, 8)
for range 8 {
    wg.Go(func() {
        _, err := status.startOnce(...)
        errs <- err
    })
}
wg.Wait()
close(errs)
for err := range errs {
    require.NoError(t, err)
}



2. Silent proto.Marshal error in DACommitment — wrong hash silently returned

types/hashing.go:134

dBytes, _ := proto.Marshal(&pbData)  // error discarded

If marshaling fails (unlikely but possible under memory pressure), dBytes is nil and leafHashOpt produces a hash of [leafPrefix] rather than the actual data commitment. This would produce a wrong DA commitment, potentially causing consensus divergence. The Data.Hash() at line 122 has the same pattern and carries an existing comment acknowledging the limitation.

The cleanest fix without changing the public signature is a panic or a debug-level log:

dBytes, err := proto.Marshal(&pbData)
if err != nil {
    // proto.Marshal on a simple [][]byte field should never fail;
    // panic rather than silently return a wrong commitment.
    panic(fmt.Sprintf("DACommitment: proto.Marshal failed: %v", err))
}



🟡 Code Quality

3. Redundant psh.Signer = psi assignment

types/serialization.go:250 and 273

psh.Signer is already set to psi at line 250. The re-assignment at line 273 (psh.Signer = psi) is a no-op and adds confusion:

psh.Signer = psi          // line 250 — initial assignment

// ... (defer set up, nil pubkey fast path) ...

psi.Address = sh.Signer.Address
psi.PubKey = pubKey
psh.Signer = psi          // line 273 — redundant: same pointer already assigned

Since psi is a pointer and psh.Signer already points to it, updating psi.Address/psi.PubKey directly is sufficient. Line 273 can be removed.


4. Test TestDataToProtoCopiesOuterTxSlice only covers ToProto(), not MarshalBinary()

types/serialization_test.go:326

MarshalBinary() and ToProto() now have different mutation-safety semantics: ToProto() copies the outer slice; MarshalBinary() uses unsafe.Slice (no copy, shared backing memory). The test only covers ToProto(). The unsafe path is correct (proto.Marshal is synchronous), but there's no test guarding it. Consider adding a minimal test to document the MarshalBinary() contract explicitly:

func TestDataMarshalBinary_UnsafeTxsSlice(t *testing.T) {
    d := &Data{
        Metadata: &Metadata{ChainID: "c"},
        Txs:      Txs{[]byte("tx1"), []byte("tx2")},
    }
    bz, err := d.MarshalBinary()
    require.NoError(t, err)

    // Mutate after serialization to confirm output is not affected
    d.Txs[0] = []byte("mutated")
    var d2 Data
    require.NoError(t, d2.UnmarshalBinary(bz))
    assert.Equal(t, []byte("tx1"), []byte(d2.Txs[0]))
}

🟢 What's Good

types/tx.go compile-time size guards — the unsafe.Sizeof array-length guards are the right way to enforce layout compatibility at compile time. Clean and idiomatic.

sync.Pool for sha256 and proto messages — correct usage. Reset() is called before use in leafHashOpt and before returning objects to pools. No double-free or use-after-put risks.

Pool cleanup in SignedHeader.MarshalBinary — using defer for the four pooled proto objects is cleaner than the earlier explicit multi-path cleanup and correctly handles all return paths including the error branch.

pkg/store/batch.go UpdateState simplification — switching to state.MarshalBinary() removes the proto import and consolidates the pooling logic in one place. Correct.

Data.MarshalBinary nil guard on Txs — the if d.Txs != nil check before unsafe.Slice is consistent with DACommitment and correctly avoids calling unsafe.SliceData on a nil slice.


@github-actions
Contributor

github-actions bot commented Apr 4, 2026

PR Preview Action v1.8.1


🚀 View preview at
https://evstack.github.io/docs-preview/pr-3225/

Built to branch main at 2026-04-04 10:13 UTC.
Preview will be ready when the GitHub Pages deployment is complete.

@tac0turtle tac0turtle force-pushed the autoresearch/block-perf-2026-04-04 branch from 9148ccc to 2438dc0 Compare April 4, 2026 10:10
@codecov

codecov bot commented Apr 4, 2026

Codecov Report

❌ Patch coverage is 97.95918% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 62.20%. Comparing base (50a73fb) to head (9fe527f).
⚠️ Report is 1 commit behind head on main.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| types/serialization.go | 98.47% | 1 Missing and 1 partial ⚠️ |
| pkg/store/batch.go | 50.00% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3225      +/-   ##
==========================================
+ Coverage   61.79%   62.20%   +0.40%     
==========================================
  Files         120      120              
  Lines       12670    12799     +129     
==========================================
+ Hits         7830     7962     +132     
+ Misses       3962     3961       -1     
+ Partials      878      876       -2     
| Flag | Coverage Δ |
| --- | --- |
| combined | 62.20% <97.95%> (+0.40%) ⬆️ |


@tac0turtle tac0turtle marked this pull request as ready for review April 9, 2026 15:08
@tac0turtle tac0turtle requested a review from julienrbrt April 9, 2026 15:08
Contributor

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (2)
types/hash_memo_bench_test.go (1)

35-42: Consider invalidating cache between iterations for consistent baseline.

In BenchmarkHeaderHash_Single, the header is created once outside the loop. While Hash() doesn't set cachedHash, if future refactoring changes this behavior, the benchmark would measure cached hits after the first iteration. Consider adding h.InvalidateHash() inside the loop for robustness:

♻️ Suggested improvement for robustness
 func BenchmarkHeaderHash_Single(b *testing.B) {
 	h := GetRandomHeader("bench-chain", GetRandomBytes(32))
 	b.ReportAllocs()
 	b.ResetTimer()
 	for b.Loop() {
+		h.InvalidateHash()
 		_ = h.Hash()
 	}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@types/hash_memo_bench_test.go` around lines 35 - 42,
BenchmarkHeaderHash_Single currently constructs h once and calls h.Hash()
repeatedly; to ensure each iteration measures an uncached baseline, call
h.InvalidateHash() before each h.Hash() invocation (or recreate h inside the
loop using GetRandomHeader/GetRandomBytes) so any cachedHash is cleared; update
the loop in BenchmarkHeaderHash_Single to invoke h.InvalidateHash() (or
reinitialize h) on every iteration before calling h.Hash().
types/hashing.go (1)

128-135: Ignored marshal error could silently produce incorrect commitment.

Line 131 ignores the error from proto.Marshal. While this is consistent with the existing pattern in Hash() (line 120-121), a marshal failure would produce a hash of nil/empty bytes, potentially causing consensus issues. Consider at least logging failures in debug builds or returning an error.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@types/hashing.go` around lines 128 - 135, DACommitment currently ignores the
error from proto.Marshal which can yield an incorrect commitment; change
DACommitment to surface marshal failures instead of silently hashing nil by
returning (Hash, error) and propagate this change to callers (mirror the new
signature across any places invoking DACommitment), or if changing signature is
impractical, check the error and log it via the package logger in a debug build
and return a zero-value Hash; locate DACommitment and the related Hash() usage
to implement proper error handling and ensure any callers handle the returned
error or the logged failure.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 12528a06-2158-4aad-8475-8990e96b4bcb

📥 Commits

Reviewing files that changed from the base of the PR and between 3abebca and 45e1951.

📒 Files selected for processing (7)
  • block/internal/executing/executor.go
  • pkg/store/batch.go
  • types/hash_memo_bench_test.go
  • types/hashing.go
  • types/serialization.go
  • types/serialization_test.go
  • types/tx.go

Contributor

@coderabbitai coderabbitai bot left a comment

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
pkg/sync/syncer_status_test.go (1)

28-37: ⚠️ Potential issue | 🟠 Major

Move require.NoError out of worker goroutines.

require.NoError(t, err) inside wg.Go is unsafe because require calls t.FailNow(), which must only be called from the test goroutine—calling it from a spawned goroutine will only exit that goroutine, not the test. Collect errors in a channel and assert after wg.Wait() on the main test goroutine.

Suggested fix
 				started := make(chan struct{})
 				release := make(chan struct{})
 				var wg sync.WaitGroup
+				errs := make(chan error, 8)

 				for range 8 {
 					wg.Go(func() {
 						_, err := status.startOnce(func() error {
 							if calls.Add(1) == 1 {
 								close(started)
 							}
 							<-release
 							return nil
 						})
-						require.NoError(t, err)
+						errs <- err
 					})
 				}

 				<-started
 				close(release)
 				wg.Wait()
+				close(errs)
+
+				for err := range errs {
+					require.NoError(t, err)
+				}

 				require.Equal(t, int32(1), calls.Load())
 				require.True(t, status.isStarted())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/sync/syncer_status_test.go` around lines 28 - 37, Move the test assertion
out of the worker goroutine: replace the inline require.NoError(t, err) inside
the wg.Go anonymous function (the block that calls status.startOnce with the
calls/started/release coordination) by sending err to a buffered errors channel
(or storing it in a slice protected by a mutex), and after wg.Wait() (on the
main test goroutine) iterate the collected errors and call require.NoError for
each; keep the existing status.startOnce, calls, started, release and wg.Go
logic but ensure no t.FailNow()-style assertions run inside the spawned
goroutine.
🧹 Nitpick comments (2)
types/serialization.go (2)

252-291: Consider defer-based cleanup for maintainability.

The three exit paths each duplicate cleanup logic for four pooled objects. While the current implementation is correct and avoids defer overhead in this hot path, the verbosity increases maintenance risk if the function is modified later.

If future changes require additional exit paths, consider a helper pattern:

♻️ Optional refactor using defer
func (sh *SignedHeader) MarshalBinary() ([]byte, error) {
	psh := pbSignedHeaderPool.Get().(*pb.SignedHeader)
	ph := pbHeaderPool.Get().(*pb.Header)
	pv := pbVersionPool.Get().(*pb.Version)
	psi := pbSignerPool.Get().(*pb.Signer)
	
	defer func() {
		pv.Reset(); pbVersionPool.Put(pv)
		ph.Reset(); pbHeaderPool.Put(ph)
		psi.Reset(); pbSignerPool.Put(psi)
		psh.Reset(); pbSignedHeaderPool.Put(psh)
	}()
	
	// ... rest of logic with simple returns ...
}

Given this is a hot path where the defer overhead may matter, the current explicit approach is reasonable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@types/serialization.go` around lines 252 - 291, The cleanup for pooled
protobuf objects (ph, pv, psi, psh from pbHeaderPool, pbVersionPool,
pbSignerPool, pbSignedHeaderPool) is duplicated across multiple return paths in
SignedHeader.MarshalBinary; replace the repeated reset/put blocks with a single
deferred cleanup closure at the top of the function that calls ph.Reset();
pbHeaderPool.Put(ph), pv.Reset(); pbVersionPool.Put(pv), psi.Reset();
pbSignerPool.Put(psi), and psh.Reset(); pbSignedHeaderPool.Put(psh) so all
return paths can simply return bz, err without repeating cleanup logic.

142-144: Inconsistency with DACommitment nil handling.

This code defensively checks d.Txs != nil before the unsafe.Slice reinterpretation, which is good practice. However, DACommitment() in types/hashing.go:128-137 uses the same pattern without a nil check:

pbData := pb.Data{Txs: unsafe.Slice((*[]byte)(unsafe.SliceData(d.Txs)), len(d.Txs))}

While both are technically safe (when d.Txs is nil, unsafe.Slice(nil, 0) returns nil), the inconsistency could cause confusion during maintenance. Consider aligning the nil-check approach across both usages for clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@types/serialization.go` around lines 142 - 144, DACommitment in
types/hashing.go currently builds pb.Data using unsafe.Slice without a nil check
for d.Txs; make it consistent with the pattern in types/serialization.go by
checking if d.Txs != nil before calling unsafe.Slice, and only set pbData.Txs
when non-nil (use the same conditional assignment used around pd.Txs in the
serialization code) so both places handle nil slices the same way.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 31a32c62-ce85-43f4-9ada-8db1607e112f

📥 Commits

Reviewing files that changed from the base of the PR and between 45e1951 and f0bd024.

📒 Files selected for processing (3)
  • pkg/sync/syncer_status_test.go
  • types/hashing.go
  • types/serialization.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • types/hashing.go

Member

@julienrbrt julienrbrt left a comment

I think Claude's comments make sense.

@tac0turtle tac0turtle requested a review from julienrbrt April 14, 2026 13:25
Contributor

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (2)
types/serialization.go (1)

74-102: Add direct wire-compatibility tests for the pooled marshalers.

These paths now hand-fill pooled protobuf structs and use unsafe slice views instead of the older ToProto() helpers. The provided context only shows new coverage for State.MarshalBinary round-tripping and Data.ToProto() copy semantics, so I’d add byte-for-byte equivalence tests against proto.Marshal(x.ToProto()) for Header, Data, and SignedHeader, including nil vs empty Txs.

Also applies to: 124-149, 223-275, 513-544

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@types/serialization.go` around lines 74 - 102, Add byte-for-byte
wire-compatibility tests that compare the pooled marshalers against the original
ToProto+proto.Marshal path: for Header (Header.MarshalBinary), Data
(Data.ToProto), and SignedHeader ensure proto.Marshal(x.ToProto()) equals
x.MarshalBinary() output, and include cases for nil vs empty Txs and for legacy
unknown fields (encodeLegacyUnknownFields). Implement tests that construct
identical inputs, call both proto.Marshal(x.ToProto()) and the pooled marshal
(e.g. Header.MarshalBinary / SignedHeader.MarshalBinary / Data.MarshalBinary),
and assert equality of returned bytes and errors; reuse the existing test
helpers used for State.MarshalBinary round-tripping to minimize duplication.
types/hashing.go (1)

128-137: Lock down DACommitment() equivalence with edge-case tests.

This path now bypasses Data.MarshalBinary() and hashes a manually assembled pb.Data. Since the commitment is consensus-sensitive, I’d add regression cases that compare the new hash against the previous marshal-and-hash path for nil Txs, empty-but-non-nil Txs, and normal multi-tx batches.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@types/hashing.go` around lines 128 - 137, The DACommitment() change bypasses
Data.MarshalBinary() and must be regression-tested: add unit tests that compute
the commitment using Data.DACommitment() and also via the previous
marshal-and-hash path (call Data.MarshalBinary() then leafHashOpt with same
sha256Pool/hasher) for three cases—nil Txs, empty-but-non-nil Txs (make a
zero-length slice), and a normal multi-tx Data—asserting equality for each;
reference DACommitment(), Data.MarshalBinary(), pb.Data and leafHashOpt to
locate code paths and ensure the tests reproduce the exact hasher/sha256Pool
usage.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 852b7f26-794c-42d9-a7b8-3ffa9b36de8c

📥 Commits

Reviewing files that changed from the base of the PR and between f0bd024 and 9fe527f.

📒 Files selected for processing (3)
  • block/internal/executing/executor.go
  • types/hashing.go
  • types/serialization.go

@tac0turtle tac0turtle enabled auto-merge April 14, 2026 13:29
@tac0turtle tac0turtle added this pull request to the merge queue Apr 14, 2026
Merged via the queue into main with commit 25892d7 Apr 14, 2026
39 checks passed
@tac0turtle tac0turtle deleted the autoresearch/block-perf-2026-04-04 branch April 14, 2026 13:55