feature: block tag support(safe&finalize) #857
base: main
Conversation
📝 Walkthrough

A new BlockTag service monitors L1 batch commitments to compute and track safe and finalized L2 block tags. Configuration is provided through a dedicated module, the service integrates into the sequencer's lifecycle management, and RPC notifications update Geth of block tag changes via an enhanced RetryableClient.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Poller as BlockTagService
    participant L1 as L1 Node
    participant Rollup as Rollup Contract
    participant L2 as L2 Node
    participant Geth as Geth (RPC)
    loop Periodic Poll
        Poller->>L1: Get L1 head & finalized block
        L1-->>Poller: L1 block numbers
        Poller->>L2: Get L2 head
        L2-->>Poller: L2 block number
        Poller->>Rollup: Query last committed batch at L1 block
        Rollup-->>Poller: Batch index & state root
        Poller->>L2: Fetch L2 block by index/hash
        L2-->>Poller: L2 block data
        Note over Poller: Validate state roots match<br/>(L1 vs L2)
        alt Tags Changed
            Poller->>Geth: SetBlockTags(safeHash, finalizedHash)
            Geth-->>Poller: Success
        end
    end
```
Estimated Code Review Effort: 🎯 3 (Moderate) | ⏱️ ~40 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@node/blocktag/service.go`:
- Around line 63-101: In NewBlockTagService, if bindings.NewRollup fails the
previously created l1Client is leaked; ensure l1Client is closed before
returning the error (e.g., close l1Client when rollup creation returns an error
or use a deferred cleanup tied to the rollup creation step) so that l1Client is
not left open on failure.
- Around line 292-335: findCompletedBatchForL2Block can recurse indefinitely
when binarySearchBatch yields an index but forward scanning finds no completed
batch; replace the recursive reset pattern with a single retry loop or a boolean
"cacheReset" flag: when resultIdx == 0, if we haven't already reset the cache,
call setCachedBatchIndex(tagType, 0), set the flag, recompute startIdx via
binarySearchBatch and retry the forward scan once; if it still yields no result,
return an error instead of recursing. Update logic around getCachedBatchIndex,
binarySearchBatch, setCachedBatchIndex, and the forward scan to respect the
single-retry flag.
In `@node/cmd/node/main.go`:
- Around line 147-159: The BlockTagService startup assumes L1NodeAddr and
RollupContractAddress are present but the CLI flags L1NodeAddr and
RollupContractAddress are currently optional; either make the flags required or
skip initializing the service when they're absent. To fix, either (A) update the
flag definitions for L1NodeAddr and RollupContractAddress in node/flags/flags.go
to include Required: true so NewBlockTagService can always rely on them, or (B)
modify the BlockTagService initialization in main.go (the blockTagConfig /
blockTagSvc / NewBlockTagService code path) to check the parsed flag values and
if either L1NodeAddr or RollupContractAddress is empty, log a warning and do not
call blocktag.NewBlockTagService (skip Start()), making the service optional and
avoiding the runtime error.
🧹 Nitpick comments (2)
node/blocktag/config.go (1)
36-39: Consider exposing `SafeConfirmations` and `PollInterval` via CLI flags.

Currently, `SetCliContext` only reads `L1Addr` and `RollupAddress` from CLI flags. Users cannot override `SafeConfirmations` or `PollInterval` without code changes.

Also, `common.HexToAddress` silently returns a zero address for invalid input. While `NewBlockTagService` validates this, consider adding early validation here or logging a warning for better debuggability.

♻️ Suggested enhancement
```diff
+import (
+	"fmt"
+	...
+)
+
+// Add new flags in node/flags/flags.go:
+// BlockTagSafeConfirmations = cli.Uint64Flag{...}
+// BlockTagPollInterval = cli.DurationFlag{...}
 func (c *Config) SetCliContext(ctx *cli.Context) error {
 	c.L1Addr = ctx.GlobalString(flags.L1NodeAddr.Name)
-	c.RollupAddress = common.HexToAddress(ctx.GlobalString(flags.RollupContractAddress.Name))
+	rollupAddrStr := ctx.GlobalString(flags.RollupContractAddress.Name)
+	if rollupAddrStr != "" && !common.IsHexAddress(rollupAddrStr) {
+		return fmt.Errorf("invalid rollup address: %s", rollupAddrStr)
+	}
+	c.RollupAddress = common.HexToAddress(rollupAddrStr)
+	// Optionally read SafeConfirmations and PollInterval from flags
 	return nil
 }
```

node/blocktag/service.go (1)
154-197: `updateBlockTags` returns nil even when errors occur.

The function logs errors for `getL2BlockForTag` and `notifyGeth` but always returns `nil`. This makes it harder for callers to know if updates succeeded. Consider returning an error or using a multi-error pattern if partial success is acceptable.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- node/blocktag/config.go
- node/blocktag/service.go
- node/cmd/node/main.go
- node/types/retryable_client.go
🧰 Additional context used
🧬 Code graph analysis (3)
node/blocktag/config.go (2)
- oracle/flags/flags.go (1): RollupAddress (35-40)
- node/flags/flags.go (2): L1NodeAddr (63-67), RollupContractAddress (187-191)

node/blocktag/service.go (4)
- tx-submitter/iface/client.go (1): Client (14-23)
- node/types/retryable_client.go (1): RetryableClient (29-34)
- node/blocktag/config.go (1): Config (20-25)
- oracle/flags/flags.go (1): RollupAddress (35-40)

node/cmd/node/main.go (5)
- node/core/executor.go (1): Executor (31-61)
- node/sync/syncer.go (1): Syncer (15-29)
- node/derivation/derivation.go (1): Derivation (39-67)
- node/blocktag/service.go (2): BlockTagService (33-61), NewBlockTagService (64-101)
- node/blocktag/config.go (1): DefaultConfig (28-33)
🪛 GitHub Actions: Tx-submitter
node/types/retryable_client.go
[error] 227-227: rc.authClient.SetBlockTags undefined (type *authclient.Client has no field or method SetBlockTags)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: test
- GitHub Check: check
- GitHub Check: Analyze (go)
- GitHub Check: Analyze (rust)
🔇 Additional comments (7)
node/cmd/node/main.go (1)
186-188: LGTM! Shutdown handling correctly checks for nil before stopping the service.
node/blocktag/service.go (5)
103-118: LGTM! Initialization failure is logged but doesn't block startup, which is reasonable for a non-critical service. The polling loop will retry on subsequent ticks.
120-127: LGTM! Stop cleanly cancels the context, waits for the loop to exit via the `stop` channel, and closes the L1 client.
269-290: State root validation logic looks correct. The comparison between the L1 committed state root and the L2 block header root is a good safety check to detect mismatches.
411-435: LGTM! The notification logic correctly skips redundant RPC calls when tags haven't changed and updates the last-notified hashes only after successful notification.
375-393: No action needed — `rpc.FinalizedBlockNumber` conversion is compatible and follows an established pattern.

The same pattern of converting `rpc.FinalizedBlockNumber` (a negative constant) to `big.Int` is already successfully used elsewhere in the codebase (node/common/layer1.go:24), where it's passed to `HeaderByNumber()` without any compatibility issues. This confirms the conversion works correctly with go-ethereum's RPC layer and `CallOpts.BlockNumber`.
225-240: The `SetBlockTags` method exists on `authclient.Client` and the code compiles successfully. A recent commit (fd7ab3e feature: block tag support(safe&finalize)) explicitly added this functionality to the morph-l2/go-ethereum fork. All methods called on `rc.authClient` follow the identical retry pattern shown here, and the method is actively used in node/blocktag/service.go.

Likely an incorrect or invalid review comment.
```go
// NewBlockTagService creates a new BlockTagService
func NewBlockTagService(
	ctx context.Context,
	l2Client *types.RetryableClient,
	config *Config,
	logger tmlog.Logger,
) (*BlockTagService, error) {
	if config.L1Addr == "" {
		return nil, fmt.Errorf("L1 RPC address is required")
	}
	if config.RollupAddress == (common.Address{}) {
		return nil, fmt.Errorf("Rollup contract address is required")
	}

	l1Client, err := ethclient.Dial(config.L1Addr)
	if err != nil {
		return nil, fmt.Errorf("failed to connect to L1: %w", err)
	}

	rollup, err := bindings.NewRollup(config.RollupAddress, l1Client)
	if err != nil {
		return nil, fmt.Errorf("failed to create rollup binding: %w", err)
	}

	ctx, cancel := context.WithCancel(ctx)

	return &BlockTagService{
		ctx:               ctx,
		cancel:            cancel,
		l1Client:          l1Client,
		l2Client:          l2Client,
		rollup:            rollup,
		rollupAddress:     config.RollupAddress,
		safeConfirmations: config.SafeConfirmations,
		pollInterval:      config.PollInterval,
		logger:            logger.With("module", "blocktag"),
		stop:              make(chan struct{}),
	}, nil
}
```
Potential resource leak: L1 client not closed on subsequent error.
If bindings.NewRollup fails at line 82-85, the l1Client created at line 77 is not closed.
🔧 Suggested fix
```diff
 l1Client, err := ethclient.Dial(config.L1Addr)
 if err != nil {
 	return nil, fmt.Errorf("failed to connect to L1: %w", err)
 }
 rollup, err := bindings.NewRollup(config.RollupAddress, l1Client)
 if err != nil {
+	l1Client.Close()
 	return nil, fmt.Errorf("failed to create rollup binding: %w", err)
 }
```

📝 Committable suggestion
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// NewBlockTagService creates a new BlockTagService
func NewBlockTagService(
	ctx context.Context,
	l2Client *types.RetryableClient,
	config *Config,
	logger tmlog.Logger,
) (*BlockTagService, error) {
	if config.L1Addr == "" {
		return nil, fmt.Errorf("L1 RPC address is required")
	}
	if config.RollupAddress == (common.Address{}) {
		return nil, fmt.Errorf("Rollup contract address is required")
	}
	l1Client, err := ethclient.Dial(config.L1Addr)
	if err != nil {
		return nil, fmt.Errorf("failed to connect to L1: %w", err)
	}
	rollup, err := bindings.NewRollup(config.RollupAddress, l1Client)
	if err != nil {
		l1Client.Close()
		return nil, fmt.Errorf("failed to create rollup binding: %w", err)
	}
	ctx, cancel := context.WithCancel(ctx)
	return &BlockTagService{
		ctx:               ctx,
		cancel:            cancel,
		l1Client:          l1Client,
		l2Client:          l2Client,
		rollup:            rollup,
		rollupAddress:     config.RollupAddress,
		safeConfirmations: config.SafeConfirmations,
		pollInterval:      config.PollInterval,
		logger:            logger.With("module", "blocktag"),
		stop:              make(chan struct{}),
	}, nil
}
```
🤖 Prompt for AI Agents
In `@node/blocktag/service.go` around lines 63 - 101, In NewBlockTagService, if
bindings.NewRollup fails the previously created l1Client is leaked; ensure
l1Client is closed before returning the error (e.g., close l1Client when rollup
creation returns an error or use a deferred cleanup tied to the rollup creation
step) so that l1Client is not left open on failure.
| // findCompletedBatchForL2Block finds the largest batch where lastL2Block <= l2BlockNum. | ||
| // Uses cached index for optimization: first call binary search, subsequent calls search forward. | ||
| // Separate caches for safe and finalized to avoid conflicts. | ||
| func (s *BlockTagService) findCompletedBatchForL2Block(tagType BlockTagType, l2BlockNum uint64, maxBatchIndex uint64) (uint64, uint64, error) { | ||
| if maxBatchIndex == 0 { | ||
| return 0, 0, fmt.Errorf("no batches available") | ||
| } | ||
|
|
||
| // Get cached index based on tag type | ||
| startIdx := s.getCachedBatchIndex(tagType) | ||
| if startIdx == 0 || startIdx > maxBatchIndex { | ||
| // First time or cache invalid: use binary search to find starting point | ||
| startIdx = s.binarySearchBatch(l2BlockNum, maxBatchIndex) | ||
| if startIdx == 0 { | ||
| return 0, 0, fmt.Errorf("no completed batch found for L2 block %d", l2BlockNum) | ||
| } | ||
| } | ||
|
|
||
| // Search forward from startIdx | ||
| var resultIdx, resultLastL2Block uint64 | ||
| for idx := startIdx; idx <= maxBatchIndex; idx++ { | ||
| batchData, err := s.rollup.BatchDataStore(nil, big.NewInt(int64(idx))) | ||
| if err != nil { | ||
| return 0, 0, fmt.Errorf("failed to get batch data for index %d: %w", idx, err) | ||
| } | ||
|
|
||
| lastL2Block := batchData.BlockNumber.Uint64() | ||
| if lastL2Block <= l2BlockNum { | ||
| resultIdx = idx | ||
| resultLastL2Block = lastL2Block | ||
| s.setCachedBatchIndex(tagType, idx) | ||
| } else { | ||
| break | ||
| } | ||
| } | ||
|
|
||
| // Handle L2 reorg: if cache was too new, reset and use binary search | ||
| if resultIdx == 0 { | ||
| s.setCachedBatchIndex(tagType, 0) | ||
| return s.findCompletedBatchForL2Block(tagType, l2BlockNum, maxBatchIndex) | ||
| } | ||
|
|
||
| return resultIdx, resultLastL2Block, nil | ||
| } |
Potential infinite recursion in findCompletedBatchForL2Block.

If the binary search returns a valid batch index but the forward search still finds no completed batch (all batches have lastL2Block > l2BlockNum), the function will reset the cache and recurse infinitely:

- Binary search returns index N
- Forward search from N finds nothing (`resultIdx` stays 0)
- Cache reset to 0, function recurses
- Binary search returns N again
- Repeat forever
🐛 Suggested fix: Track recursion depth or use a flag
```diff
-func (s *BlockTagService) findCompletedBatchForL2Block(tagType BlockTagType, l2BlockNum uint64, maxBatchIndex uint64) (uint64, uint64, error) {
+func (s *BlockTagService) findCompletedBatchForL2Block(tagType BlockTagType, l2BlockNum uint64, maxBatchIndex uint64) (uint64, uint64, error) {
+	return s.findCompletedBatchForL2BlockInternal(tagType, l2BlockNum, maxBatchIndex, false)
+}
+
+func (s *BlockTagService) findCompletedBatchForL2BlockInternal(tagType BlockTagType, l2BlockNum uint64, maxBatchIndex uint64, usedBinarySearch bool) (uint64, uint64, error) {
 	if maxBatchIndex == 0 {
 		return 0, 0, fmt.Errorf("no batches available")
 	}
 	// Get cached index based on tag type
 	startIdx := s.getCachedBatchIndex(tagType)
 	if startIdx == 0 || startIdx > maxBatchIndex {
 		// First time or cache invalid: use binary search to find starting point
 		startIdx = s.binarySearchBatch(l2BlockNum, maxBatchIndex)
 		if startIdx == 0 {
 			return 0, 0, fmt.Errorf("no completed batch found for L2 block %d", l2BlockNum)
 		}
+		usedBinarySearch = true
 	}
 	// Search forward from startIdx
 	var resultIdx, resultLastL2Block uint64
 	for idx := startIdx; idx <= maxBatchIndex; idx++ {
 		// ... existing code ...
 	}
 	// Handle L2 reorg: if cache was too new, reset and use binary search
-	if resultIdx == 0 {
+	if resultIdx == 0 && !usedBinarySearch {
 		s.setCachedBatchIndex(tagType, 0)
-		return s.findCompletedBatchForL2Block(tagType, l2BlockNum, maxBatchIndex)
+		return s.findCompletedBatchForL2BlockInternal(tagType, l2BlockNum, maxBatchIndex, false)
+	} else if resultIdx == 0 {
+		return 0, 0, fmt.Errorf("no completed batch found for L2 block %d after binary search", l2BlockNum)
 	}
 	return resultIdx, resultLastL2Block, nil
 }
```

🤖 Prompt for AI Agents
In `@node/blocktag/service.go` around lines 292 - 335,
findCompletedBatchForL2Block can recurse indefinitely when binarySearchBatch
yields an index but forward scanning finds no completed batch; replace the
recursive reset pattern with a single retry loop or a boolean "cacheReset" flag:
when resultIdx == 0, if we haven't already reset the cache, call
setCachedBatchIndex(tagType, 0), set the flag, recompute startIdx via
binarySearchBatch and retry the forward scan once; if it still yields no result,
return an error instead of recursing. Update logic around getCachedBatchIndex,
binarySearchBatch, setCachedBatchIndex, and the forward scan to respect the
single-retry flag.
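The single-retry shape the prompt describes can be sketched dependency-free. In the sketch below, `findWithSingleRetry` and its function parameters are illustrative stand-ins for the cached index, `binarySearchBatch`, the forward scan, and `setCachedBatchIndex`; none of these names are the repository's.

```go
package main

import (
	"errors"
	"fmt"
)

// findWithSingleRetry resets the cache at most once, then returns an error
// instead of recursing, guaranteeing termination.
func findWithSingleRetry(cached uint64, search func() uint64, scan func(start uint64) uint64, reset func()) (uint64, error) {
	start := cached
	retried := false
	for {
		if start == 0 {
			if start = search(); start == 0 {
				return 0, errors.New("no completed batch found")
			}
		}
		if idx := scan(start); idx != 0 {
			return idx, nil
		}
		if retried {
			return 0, errors.New("no completed batch found after cache reset")
		}
		reset() // invalidate the cache once, e.g. after an L2 reorg
		start = 0
		retried = true // at most one extra pass
	}
}

func main() {
	// Cache points too far ahead, so the first scan finds nothing; after one
	// reset, the fresh binary-search start succeeds.
	idx, err := findWithSingleRetry(
		9,
		func() uint64 { return 4 },
		func(start uint64) uint64 {
			if start == 9 {
				return 0
			}
			return 5
		},
		func() {},
	)
	fmt.Println(idx, err == nil) // 5 true
}
```

The boolean flag replaces unbounded recursion with a bounded loop, which is exactly the fix the review asks for.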
```go
// Start BlockTagService for sequencer mode
blockTagConfig := blocktag.DefaultConfig()
if err := blockTagConfig.SetCliContext(ctx); err != nil {
	return fmt.Errorf("blocktag config set cli context error: %w", err)
}
blockTagSvc, err = blocktag.NewBlockTagService(context.Background(), executor.L2Client(), blockTagConfig, nodeConfig.Logger)
if err != nil {
	return fmt.Errorf("failed to create BlockTagService: %w", err)
}
if err := blockTagSvc.Start(); err != nil {
	return fmt.Errorf("failed to start BlockTagService: %w", err)
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check L1NodeAddr flag configuration
echo "=== Searching for L1NodeAddr configuration ==="
rg -n "L1NodeAddr" --type go -B2 -A5 | head -80
```

Repository: morph-l2/morph
Length of output: 3782

🏁 Script executed:

```bash
#!/bin/bash
# Check RollupContractAddress configuration
echo "=== Searching for RollupContractAddress configuration ==="
rg -n "RollupContractAddress|RollupAddress" --type go -B2 -A5 | head -80
```

Repository: morph-l2/morph
Length of output: 4588

🏁 Script executed:

```bash
#!/bin/bash
# Check blocktag package initialization to see what's required
echo "=== Checking blocktag package ==="
fd -e go -p "blocktag" | head -20
```

Repository: morph-l2/morph
Length of output: 140

🏁 Script executed:

```bash
#!/bin/bash
# Check NewBlockTagService signature and defaults
echo "=== Checking NewBlockTagService implementation ==="
rg -n "func NewBlockTagService|func.*NewBlockTagService" --type go -A15 | head -100
```

Repository: morph-l2/morph
Length of output: 1033

🏁 Script executed:

```bash
#!/bin/bash
# Check for RollupContractAddress in node flags
echo "=== Searching for RollupContractAddress in node flags ==="
rg -n "RollupContractAddress" node/flags --type go -B2 -A5
```

Repository: morph-l2/morph
Length of output: 855

🏁 Script executed:

```bash
#!/bin/bash
# Also check what flags are actually used in blocktag config
echo "=== Checking blocktag config for flag names ==="
cat node/blocktag/config.go
```

Repository: morph-l2/morph
Length of output: 1170

🏁 Script executed:

```bash
#!/bin/bash
# Check the full context around BlockTagService creation in main.go
echo "=== Checking BlockTagService creation context in main.go ==="
sed -n '140,195p' node/cmd/node/main.go
```

Repository: morph-l2/morph
Length of output: 1612

🏁 Script executed:

```bash
#!/bin/bash
# Check if there's any validator mode check that might skip BlockTagService
echo "=== Checking validator mode condition ==="
sed -n '120,160p' node/cmd/node/main.go
```

Repository: morph-l2/morph
Length of output: 1543
BlockTagService integration has a flag requirement inconsistency.
The service is properly initialized and stopped, but has a design issue: L1NodeAddr and RollupContractAddress flags are defined as optional at the CLI level (no Required: true marker), yet the NewBlockTagService unconditionally rejects sequencer mode startup if either flag is missing. This means sequencer mode will fail at runtime with a service initialization error if these flags aren't provided, despite being optional in the flag definition.
Consider either:

- Mark both `L1NodeAddr` and `RollupContractAddress` as `Required: true` in node/flags/flags.go to match the service's runtime expectations, or
- Make BlockTagService optional by gracefully skipping its initialization if the flags aren't provided (e.g., log a warning and continue without block tag tracking).
🤖 Prompt for AI Agents
In `@node/cmd/node/main.go` around lines 147 - 159, The BlockTagService startup
assumes L1NodeAddr and RollupContractAddress are present but the CLI flags
L1NodeAddr and RollupContractAddress are currently optional; either make the
flags required or skip initializing the service when they're absent. To fix,
either (A) update the flag definitions for L1NodeAddr and RollupContractAddress
in node/flags/flags.go to include Required: true so NewBlockTagService can
always rely on them, or (B) modify the BlockTagService initialization in main.go
(the blockTagConfig / blockTagSvc / NewBlockTagService code path) to check the
parsed flag values and if either L1NodeAddr or RollupContractAddress is empty,
log a warning and do not call blocktag.NewBlockTagService (skip Start()), making
the service optional and avoiding the runtime error.
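Option (B) can be sketched as follows. The config type, field names, and function here are illustrative, not the repository's actual code; in the happy path the real code would construct and start `blocktag.NewBlockTagService`.

```go
package main

import "fmt"

// blockTagConfig is a stand-in for the parsed flag values the real main.go
// would inspect before deciding whether to start the service.
type blockTagConfig struct {
	L1Addr        string
	RollupAddress string
}

// maybeStartBlockTagService returns false (and warns) when either required
// address is missing, so sequencer startup continues without block tags.
func maybeStartBlockTagService(cfg blockTagConfig, logWarn func(string)) bool {
	if cfg.L1Addr == "" || cfg.RollupAddress == "" {
		logWarn("block tag service disabled: L1 node address or rollup contract address not set")
		return false // skip NewBlockTagService / Start entirely
	}
	// ... here the real code would construct and start the service ...
	return true
}

func main() {
	started := maybeStartBlockTagService(blockTagConfig{}, func(msg string) { fmt.Println("WARN:", msg) })
	fmt.Println(started) // false: flags absent, service skipped
}
```

This keeps the flags optional at the CLI level while avoiding the runtime initialization error the review flags.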
Force-pushed fd7ab3e to 268bb59