
Conversation

@tomatoishealthy tomatoishealthy commented Jan 15, 2026

Summary by CodeRabbit

  • New Features
    • Added a block tag tracking service that monitors and updates safe and finalized L2 block tags periodically
    • Implemented automatic block tag synchronization with the node client


@tomatoishealthy tomatoishealthy requested a review from a team as a code owner January 15, 2026 06:42
@tomatoishealthy tomatoishealthy requested review from twcctop and removed request for a team January 15, 2026 06:42

coderabbitai bot commented Jan 15, 2026

📝 Walkthrough

A new BlockTag service monitors L1 batch commitments to compute and track safe and finalized L2 block tags. Configuration is provided through a dedicated module, the service is wired into the sequencer's lifecycle management, and an enhanced RetryableClient notifies Geth of block tag changes over RPC.

Changes

  • BlockTag Configuration & Service (node/blocktag/config.go, node/blocktag/service.go)
    New blocktag package introducing a Config struct with L1Addr, RollupAddress, SafeConfirmations, and PollInterval fields; a DefaultConfig factory; and a BlockTagService for tracking L2 safe/finalized block tags. The service polls periodically, queries the Rollup contract for batch commitments, validates state roots against L1 commitments, maintains per-tag batch index caches with a binary search optimization, and notifies Geth via RPC when tags update.
  • Node Main Integration (node/cmd/node/main.go)
    Initializes BlockTagService in sequencer mode, binds the CLI configuration context, and manages the service lifecycle (Start on initialization, Stop on shutdown).
  • RPC Client Enhancement (node/types/retryable_client.go)
    Adds a SetBlockTags method to RetryableClient, invoking authClient.SetBlockTags with retry logic and backoff handling consistent with the existing RPC methods.
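
For illustration, a minimal sketch of the retry-with-backoff shape described above. The real RetryableClient, its backoff policy, and the exact authclient.SetBlockTags signature are not shown on this page, so every name below beyond SetBlockTags itself is an assumption:

    // Hypothetical sketch only; the fork's actual client and backoff differ.
    package types

    import (
    	"context"
    	"time"

    	"github.com/ethereum/go-ethereum/common" // or the morph-l2 fork's equivalent
    )

    // setBlockTagsFn stands in for the authclient method (assumed signature).
    type setBlockTagsFn func(ctx context.Context, safe, finalized common.Hash) error

    // retrySetBlockTags keeps calling the RPC with a fixed backoff until it
    // succeeds or the context is cancelled.
    func retrySetBlockTags(ctx context.Context, call setBlockTagsFn, safe, finalized common.Hash) error {
    	const backoff = 2 * time.Second
    	for {
    		if err := call(ctx, safe, finalized); err == nil {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(backoff):
    			// fall through and retry
    		}
    	}
    }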

Sequence Diagram

sequenceDiagram
    participant Poller as BlockTagService
    participant L1 as L1 Node
    participant Rollup as Rollup Contract
    participant L2 as L2 Node
    participant Geth as Geth (RPC)

    loop Periodic Poll
        Poller->>L1: Get L1 head & finalized block
        L1-->>Poller: L1 block numbers
        Poller->>L2: Get L2 head
        L2-->>Poller: L2 block number
        Poller->>Rollup: Query last committed batch at L1 block
        Rollup-->>Poller: Batch index & state root
        Poller->>L2: Fetch L2 block by index/hash
        L2-->>Poller: L2 block data
        Note over Poller: Validate state roots match<br/>(L1 vs L2)
        alt Tags Changed
            Poller->>Geth: SetBlockTags(safeHash, finalizedHash)
            Geth-->>Poller: Success
        end
    end
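
The "binary search optimization" referenced in the changes summary reduces per-poll contract reads: batches are committed with monotonically increasing last-L2-block numbers, so the largest qualifying batch index can be found with a predicate binary search. A hedged sketch under that monotonicity assumption; the lookup callback stands in for rollup.BatchDataStore(...).BlockNumber from the code quoted later in this review:

    package blocktag

    import "fmt"

    // batchLastBlockFn returns the last L2 block number committed in batch idx
    // (in the PR this would wrap rollup.BatchDataStore).
    type batchLastBlockFn func(idx uint64) (uint64, error)

    // binarySearchLargestBatch finds the largest batch index in [1, maxBatchIndex]
    // whose last L2 block is <= l2BlockNum; it errors if no batch qualifies.
    func binarySearchLargestBatch(l2BlockNum, maxBatchIndex uint64, lastBlock batchLastBlockFn) (uint64, error) {
    	lo, hi := uint64(1), maxBatchIndex
    	var found uint64 // 0 means no batch qualifies yet
    	for lo <= hi {
    		mid := lo + (hi-lo)/2
    		last, err := lastBlock(mid)
    		if err != nil {
    			return 0, fmt.Errorf("batch %d: %w", mid, err)
    		}
    		if last <= l2BlockNum {
    			found = mid // candidate; look for a larger one
    			lo = mid + 1
    		} else {
    			hi = mid - 1
    		}
    	}
    	if found == 0 {
    		return 0, fmt.Errorf("no completed batch at or below L2 block %d", l2BlockNum)
    	}
    	return found, nil
    }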

Estimated Code Review Effort

🎯 3 (Moderate) | ⏱️ ~40 minutes

Poem

🐰 Safe blocks and finalized dreams,
L1 anchors flowing downstream,
State roots dancing, caches aligned,
Batch commitments perfectly signed,
Geth's notified—the chain is divine! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 75.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'feature: block tag support(safe&finalize)' directly matches the core functionality added: a new BlockTagService that tracks and updates safe and finalized L2 block tags, with supporting configuration and integration.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@node/blocktag/service.go`:
- Around line 63-101: In NewBlockTagService, if bindings.NewRollup fails the
previously created l1Client is leaked; ensure l1Client is closed before
returning the error (e.g., close l1Client when rollup creation returns an error
or use a deferred cleanup tied to the rollup creation step) so that l1Client is
not left open on failure.
- Around line 292-335: findCompletedBatchForL2Block can recurse indefinitely
when binarySearchBatch yields an index but forward scanning finds no completed
batch; replace the recursive reset pattern with a single retry loop or a boolean
"cacheReset" flag: when resultIdx == 0, if we haven't already reset the cache,
call setCachedBatchIndex(tagType, 0), set the flag, recompute startIdx via
binarySearchBatch and retry the forward scan once; if it still yields no result,
return an error instead of recursing. Update logic around getCachedBatchIndex,
binarySearchBatch, setCachedBatchIndex, and the forward scan to respect the
single-retry flag.

In `@node/cmd/node/main.go`:
- Around line 147-159: The BlockTagService startup assumes L1NodeAddr and
RollupContractAddress are present but the CLI flags L1NodeAddr and
RollupContractAddress are currently optional; either make the flags required or
skip initializing the service when they're absent. To fix, either (A) update the
flag definitions for L1NodeAddr and RollupContractAddress in node/flags/flags.go
to include Required: true so NewBlockTagService can always rely on them, or (B)
modify the BlockTagService initialization in main.go (the blockTagConfig /
blockTagSvc / NewBlockTagService code path) to check the parsed flag values and
if either L1NodeAddr or RollupContractAddress is empty, log a warning and do not
call blocktag.NewBlockTagService (skip Start()), making the service optional and
avoiding the runtime error.
🧹 Nitpick comments (2)
node/blocktag/config.go (1)

36-39: Consider exposing SafeConfirmations and PollInterval via CLI flags.

Currently, SetCliContext only reads L1Addr and RollupAddress from CLI flags. Users cannot override SafeConfirmations or PollInterval without code changes.

Also, common.HexToAddress silently returns a zero address for invalid input. While NewBlockTagService validates this, consider adding early validation here or logging a warning for better debuggability.

♻️ Suggested enhancement
+import (
+	"fmt"
+	...
+)
+
+// Add new flags in node/flags/flags.go:
+// BlockTagSafeConfirmations = cli.Uint64Flag{...}
+// BlockTagPollInterval = cli.DurationFlag{...}

 func (c *Config) SetCliContext(ctx *cli.Context) error {
 	c.L1Addr = ctx.GlobalString(flags.L1NodeAddr.Name)
-	c.RollupAddress = common.HexToAddress(ctx.GlobalString(flags.RollupContractAddress.Name))
+	rollupAddrStr := ctx.GlobalString(flags.RollupContractAddress.Name)
+	if rollupAddrStr != "" && !common.IsHexAddress(rollupAddrStr) {
+		return fmt.Errorf("invalid rollup address: %s", rollupAddrStr)
+	}
+	c.RollupAddress = common.HexToAddress(rollupAddrStr)
+	// Optionally read SafeConfirmations and PollInterval from flags
 	return nil
 }
node/blocktag/service.go (1)

154-197: updateBlockTags returns nil even when errors occur.

The function logs errors for getL2BlockForTag and notifyGeth but always returns nil. This makes it harder for callers to know if updates succeeded. Consider returning an error or using a multi-error pattern if partial success is acceptable.
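
A hedged sketch of the multi-error alternative using Go's errors.Join (Go 1.20+). getL2BlockForTag and notifyGeth are named in the comment above, but their signatures and the tag constants here are placeholders:

    package blocktag

    import (
    	"context"
    	"errors"
    	"fmt"
    )

    // Assumed tag type and constants, for illustration only.
    type BlockTagType string

    const (
    	TagSafe      BlockTagType = "safe"
    	TagFinalized BlockTagType = "finalized"
    )

    // tagUpdater abstracts the two steps the review comment mentions.
    type tagUpdater struct {
    	getL2BlockForTag func(ctx context.Context, tag BlockTagType) (uint64, error)
    	notifyGeth       func(ctx context.Context, tag BlockTagType, block uint64) error
    }

    // updateBlockTags attempts both tag updates and reports every failure;
    // errors.Join returns nil when all updates succeeded.
    func (u *tagUpdater) updateBlockTags(ctx context.Context) error {
    	var errs []error
    	for _, tag := range []BlockTagType{TagSafe, TagFinalized} {
    		block, err := u.getL2BlockForTag(ctx, tag)
    		if err != nil {
    			errs = append(errs, fmt.Errorf("get L2 block for %s: %w", tag, err))
    			continue
    		}
    		if err := u.notifyGeth(ctx, tag, block); err != nil {
    			errs = append(errs, fmt.Errorf("notify geth for %s: %w", tag, err))
    		}
    	}
    	return errors.Join(errs...)
    }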

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b2636b1 and fd7ab3e.

📒 Files selected for processing (4)
  • node/blocktag/config.go
  • node/blocktag/service.go
  • node/cmd/node/main.go
  • node/types/retryable_client.go
🧰 Additional context used
🧬 Code graph analysis (3)
node/blocktag/config.go (2)
oracle/flags/flags.go (1)
  • RollupAddress (35-40)
node/flags/flags.go (2)
  • L1NodeAddr (63-67)
  • RollupContractAddress (187-191)
node/blocktag/service.go (4)
tx-submitter/iface/client.go (1)
  • Client (14-23)
node/types/retryable_client.go (1)
  • RetryableClient (29-34)
node/blocktag/config.go (1)
  • Config (20-25)
oracle/flags/flags.go (1)
  • RollupAddress (35-40)
node/cmd/node/main.go (5)
node/core/executor.go (1)
  • Executor (31-61)
node/sync/syncer.go (1)
  • Syncer (15-29)
node/derivation/derivation.go (1)
  • Derivation (39-67)
node/blocktag/service.go (2)
  • BlockTagService (33-61)
  • NewBlockTagService (64-101)
node/blocktag/config.go (1)
  • DefaultConfig (28-33)
🪛 GitHub Actions: Tx-submitter
node/types/retryable_client.go

[error] 227-227: rc.authClient.SetBlockTags undefined (type *authclient.Client has no field or method SetBlockTags)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: test
  • GitHub Check: check
  • GitHub Check: Analyze (go)
  • GitHub Check: Analyze (rust)
🔇 Additional comments (7)
node/cmd/node/main.go (1)

186-188: LGTM!

Shutdown handling correctly checks for nil before stopping the service.

node/blocktag/service.go (5)

103-118: LGTM!

Initialization failure is logged but doesn't block startup, which is reasonable for a non-critical service. The polling loop will retry on subsequent ticks.


120-127: LGTM!

Stop cleanly cancels context, waits for the loop to exit via the stop channel, and closes the L1 client.


269-290: State root validation logic looks correct.

The comparison between L1 committed state root and L2 block header root is a good safety check to detect mismatches.


411-435: LGTM!

The notification logic correctly skips redundant RPC calls when tags haven't changed and updates the last-notified hashes only after successful notification.


375-393: No action needed — rpc.FinalizedBlockNumber conversion is compatible and follows an established pattern.

The same pattern of converting rpc.FinalizedBlockNumber (a negative constant) to big.Int is already successfully used elsewhere in the codebase (node/common/layer1.go:24), where it's passed to HeaderByNumber() without any compatibility issues. This confirms the conversion works correctly with go-ethereum's RPC layer and CallOpts.BlockNumber.
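
For reference, the negative-sentinel pattern described here, sketched against upstream go-ethereum (the morph-l2 fork is assumed to behave the same, as the comment states); the package and function names are hypothetical:

    package l1util

    import (
    	"context"
    	"math/big"

    	"github.com/ethereum/go-ethereum/accounts/abi/bind"
    	"github.com/ethereum/go-ethereum/ethclient"
    	"github.com/ethereum/go-ethereum/rpc"
    )

    // finalizedCallOpts pins contract reads to L1's finalized block; the RPC
    // layer recognizes the negative rpc.FinalizedBlockNumber sentinel.
    func finalizedCallOpts(ctx context.Context) *bind.CallOpts {
    	return &bind.CallOpts{
    		Context:     ctx,
    		BlockNumber: big.NewInt(int64(rpc.FinalizedBlockNumber)),
    	}
    }

    // finalizedL1Number fetches the finalized L1 header the same way and
    // returns its block number.
    func finalizedL1Number(ctx context.Context, c *ethclient.Client) (uint64, error) {
    	header, err := c.HeaderByNumber(ctx, big.NewInt(int64(rpc.FinalizedBlockNumber)))
    	if err != nil {
    		return 0, err
    	}
    	return header.Number.Uint64(), nil
    }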

node/types/retryable_client.go (1)

225-240: The SetBlockTags method exists on authclient.Client and the code compiles successfully. A recent commit (fd7ab3e feature: block tag support(safe&finalize)) explicitly added this functionality to the morph-l2/go-ethereum fork. All methods called on rc.authClient follow the identical retry pattern shown here, and the method is actively used in node/blocktag/service.go.

Likely an incorrect or invalid review comment.


Comment on lines +63 to +101
// NewBlockTagService creates a new BlockTagService
func NewBlockTagService(
	ctx context.Context,
	l2Client *types.RetryableClient,
	config *Config,
	logger tmlog.Logger,
) (*BlockTagService, error) {
	if config.L1Addr == "" {
		return nil, fmt.Errorf("L1 RPC address is required")
	}
	if config.RollupAddress == (common.Address{}) {
		return nil, fmt.Errorf("Rollup contract address is required")
	}

	l1Client, err := ethclient.Dial(config.L1Addr)
	if err != nil {
		return nil, fmt.Errorf("failed to connect to L1: %w", err)
	}

	rollup, err := bindings.NewRollup(config.RollupAddress, l1Client)
	if err != nil {
		return nil, fmt.Errorf("failed to create rollup binding: %w", err)
	}

	ctx, cancel := context.WithCancel(ctx)

	return &BlockTagService{
		ctx:               ctx,
		cancel:            cancel,
		l1Client:          l1Client,
		l2Client:          l2Client,
		rollup:            rollup,
		rollupAddress:     config.RollupAddress,
		safeConfirmations: config.SafeConfirmations,
		pollInterval:      config.PollInterval,
		logger:            logger.With("module", "blocktag"),
		stop:              make(chan struct{}),
	}, nil
}

⚠️ Potential issue | 🟡 Minor

Potential resource leak: L1 client not closed on subsequent error.

If bindings.NewRollup fails at line 82-85, the l1Client created at line 77 is not closed.

🔧 Suggested fix
 	l1Client, err := ethclient.Dial(config.L1Addr)
 	if err != nil {
 		return nil, fmt.Errorf("failed to connect to L1: %w", err)
 	}

 	rollup, err := bindings.NewRollup(config.RollupAddress, l1Client)
 	if err != nil {
+		l1Client.Close()
 		return nil, fmt.Errorf("failed to create rollup binding: %w", err)
 	}
🤖 Prompt for AI Agents
In `@node/blocktag/service.go` around lines 63 - 101, In NewBlockTagService, if
bindings.NewRollup fails the previously created l1Client is leaked; ensure
l1Client is closed before returning the error (e.g., close l1Client when rollup
creation returns an error or use a deferred cleanup tied to the rollup creation
step) so that l1Client is not left open on failure.

Comment on lines 292 to 346
// findCompletedBatchForL2Block finds the largest batch where lastL2Block <= l2BlockNum.
// Uses cached index for optimization: first call binary search, subsequent calls search forward.
// Separate caches for safe and finalized to avoid conflicts.
func (s *BlockTagService) findCompletedBatchForL2Block(tagType BlockTagType, l2BlockNum uint64, maxBatchIndex uint64) (uint64, uint64, error) {
	if maxBatchIndex == 0 {
		return 0, 0, fmt.Errorf("no batches available")
	}

	// Get cached index based on tag type
	startIdx := s.getCachedBatchIndex(tagType)
	if startIdx == 0 || startIdx > maxBatchIndex {
		// First time or cache invalid: use binary search to find starting point
		startIdx = s.binarySearchBatch(l2BlockNum, maxBatchIndex)
		if startIdx == 0 {
			return 0, 0, fmt.Errorf("no completed batch found for L2 block %d", l2BlockNum)
		}
	}

	// Search forward from startIdx
	var resultIdx, resultLastL2Block uint64
	for idx := startIdx; idx <= maxBatchIndex; idx++ {
		batchData, err := s.rollup.BatchDataStore(nil, big.NewInt(int64(idx)))
		if err != nil {
			return 0, 0, fmt.Errorf("failed to get batch data for index %d: %w", idx, err)
		}

		lastL2Block := batchData.BlockNumber.Uint64()
		if lastL2Block <= l2BlockNum {
			resultIdx = idx
			resultLastL2Block = lastL2Block
			s.setCachedBatchIndex(tagType, idx)
		} else {
			break
		}
	}

	// Handle L2 reorg: if cache was too new, reset and use binary search
	if resultIdx == 0 {
		s.setCachedBatchIndex(tagType, 0)
		return s.findCompletedBatchForL2Block(tagType, l2BlockNum, maxBatchIndex)
	}

	return resultIdx, resultLastL2Block, nil
}

⚠️ Potential issue | 🟠 Major

Potential infinite recursion in findCompletedBatchForL2Block.

If the binary search returns a valid batch index but the forward search still finds no completed batch (all batches have lastL2Block > l2BlockNum), the function will reset the cache and recurse infinitely:

  1. Binary search returns index N
  2. Forward search from N finds nothing (resultIdx stays 0)
  3. Cache reset to 0, function recurses
  4. Binary search returns N again
  5. Repeat forever
🐛 Suggested fix: Track recursion depth or use a flag
-func (s *BlockTagService) findCompletedBatchForL2Block(tagType BlockTagType, l2BlockNum uint64, maxBatchIndex uint64) (uint64, uint64, error) {
+func (s *BlockTagService) findCompletedBatchForL2Block(tagType BlockTagType, l2BlockNum uint64, maxBatchIndex uint64) (uint64, uint64, error) {
+	return s.findCompletedBatchForL2BlockInternal(tagType, l2BlockNum, maxBatchIndex, false)
+}
+
+func (s *BlockTagService) findCompletedBatchForL2BlockInternal(tagType BlockTagType, l2BlockNum uint64, maxBatchIndex uint64, usedBinarySearch bool) (uint64, uint64, error) {
 	if maxBatchIndex == 0 {
 		return 0, 0, fmt.Errorf("no batches available")
 	}

 	// Get cached index based on tag type
 	startIdx := s.getCachedBatchIndex(tagType)
 	if startIdx == 0 || startIdx > maxBatchIndex {
 		// First time or cache invalid: use binary search to find starting point
 		startIdx = s.binarySearchBatch(l2BlockNum, maxBatchIndex)
 		if startIdx == 0 {
 			return 0, 0, fmt.Errorf("no completed batch found for L2 block %d", l2BlockNum)
 		}
+		usedBinarySearch = true
 	}

 	// Search forward from startIdx
 	var resultIdx, resultLastL2Block uint64
 	for idx := startIdx; idx <= maxBatchIndex; idx++ {
 		// ... existing code ...
 	}

 	// Handle L2 reorg: if cache was too new, reset and use binary search
-	if resultIdx == 0 {
+	if resultIdx == 0 && !usedBinarySearch {
 		s.setCachedBatchIndex(tagType, 0)
-		return s.findCompletedBatchForL2Block(tagType, l2BlockNum, maxBatchIndex)
+		return s.findCompletedBatchForL2BlockInternal(tagType, l2BlockNum, maxBatchIndex, false)
+	} else if resultIdx == 0 {
+		return 0, 0, fmt.Errorf("no completed batch found for L2 block %d after binary search", l2BlockNum)
 	}

 	return resultIdx, resultLastL2Block, nil
 }
🤖 Prompt for AI Agents
In `@node/blocktag/service.go` around lines 292 - 335,
findCompletedBatchForL2Block can recurse indefinitely when binarySearchBatch
yields an index but forward scanning finds no completed batch; replace the
recursive reset pattern with a single retry loop or a boolean "cacheReset" flag:
when resultIdx == 0, if we haven't already reset the cache, call
setCachedBatchIndex(tagType, 0), set the flag, recompute startIdx via
binarySearchBatch and retry the forward scan once; if it still yields no result,
return an error instead of recursing. Update logic around getCachedBatchIndex,
binarySearchBatch, setCachedBatchIndex, and the forward scan to respect the
single-retry flag.

Comment on lines +147 to +159

// Start BlockTagService for sequencer mode
blockTagConfig := blocktag.DefaultConfig()
if err := blockTagConfig.SetCliContext(ctx); err != nil {
	return fmt.Errorf("blocktag config set cli context error: %w", err)
}
blockTagSvc, err = blocktag.NewBlockTagService(context.Background(), executor.L2Client(), blockTagConfig, nodeConfig.Logger)
if err != nil {
	return fmt.Errorf("failed to create BlockTagService: %w", err)
}
if err := blockTagSvc.Start(); err != nil {
	return fmt.Errorf("failed to start BlockTagService: %w", err)
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check L1NodeAddr flag configuration
echo "=== Searching for L1NodeAddr configuration ==="
rg -n "L1NodeAddr" --type go -B2 -A5 | head -80

Repository: morph-l2/morph

Length of output: 3782


🏁 Script executed:

#!/bin/bash
# Check RollupContractAddress configuration
echo "=== Searching for RollupContractAddress configuration ==="
rg -n "RollupContractAddress|RollupAddress" --type go -B2 -A5 | head -80

Repository: morph-l2/morph

Length of output: 4588


🏁 Script executed:

#!/bin/bash
# Check blocktag package initialization to see what's required
echo "=== Checking blocktag package ==="
fd -e go -p "blocktag" | head -20

Repository: morph-l2/morph

Length of output: 140


🏁 Script executed:

#!/bin/bash
# Check NewBlockTagService signature and defaults
echo "=== Checking NewBlockTagService implementation ==="
rg -n "func NewBlockTagService|func.*NewBlockTagService" --type go -A15 | head -100

Repository: morph-l2/morph

Length of output: 1033


🏁 Script executed:

#!/bin/bash
# Check for RollupContractAddress in node flags
echo "=== Searching for RollupContractAddress in node flags ==="
rg -n "RollupContractAddress" node/flags --type go -B2 -A5

Repository: morph-l2/morph

Length of output: 855


🏁 Script executed:

#!/bin/bash
# Also check what flags are actually used in blocktag config
echo "=== Checking blocktag config for flag names ==="
cat node/blocktag/config.go

Repository: morph-l2/morph

Length of output: 1170


🏁 Script executed:

#!/bin/bash
# Check the full context around BlockTagService creation in main.go
echo "=== Checking BlockTagService creation context in main.go ==="
sed -n '140,195p' node/cmd/node/main.go

Repository: morph-l2/morph

Length of output: 1612


🏁 Script executed:

#!/bin/bash
# Check if there's any validator mode check that might skip BlockTagService
echo "=== Checking validator mode condition ==="
sed -n '120,160p' node/cmd/node/main.go

Repository: morph-l2/morph

Length of output: 1543


BlockTagService integration has a flag requirement inconsistency.

The service is properly initialized and stopped, but has a design issue: L1NodeAddr and RollupContractAddress flags are defined as optional at the CLI level (no Required: true marker), yet the NewBlockTagService unconditionally rejects sequencer mode startup if either flag is missing. This means sequencer mode will fail at runtime with a service initialization error if these flags aren't provided, despite being optional in the flag definition.

Consider either:

  1. Mark both L1NodeAddr and RollupContractAddress as Required: true in node/flags/flags.go to match the service's runtime expectations, or
  2. Make BlockTagService optional by gracefully skipping its initialization if the flags aren't provided (e.g., log a warning and continue without block tag tracking); a sketch follows below.
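
A minimal sketch of option 2, rewriting the initialization block quoted above so the service is skipped when either flag is empty. Flag names are the ones discussed in this review; the surrounding main.go structure is assumed:

    // Start BlockTagService for sequencer mode only when its required flags are set.
    l1Addr := ctx.GlobalString(flags.L1NodeAddr.Name)
    rollupAddr := ctx.GlobalString(flags.RollupContractAddress.Name)
    if l1Addr == "" || rollupAddr == "" {
    	nodeConfig.Logger.Info("BlockTagService disabled: L1NodeAddr or RollupContractAddress not set")
    } else {
    	blockTagConfig := blocktag.DefaultConfig()
    	if err := blockTagConfig.SetCliContext(ctx); err != nil {
    		return fmt.Errorf("blocktag config set cli context error: %w", err)
    	}
    	blockTagSvc, err = blocktag.NewBlockTagService(context.Background(), executor.L2Client(), blockTagConfig, nodeConfig.Logger)
    	if err != nil {
    		return fmt.Errorf("failed to create BlockTagService: %w", err)
    	}
    	if err := blockTagSvc.Start(); err != nil {
    		return fmt.Errorf("failed to start BlockTagService: %w", err)
    	}
    }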
🤖 Prompt for AI Agents
In `@node/cmd/node/main.go` around lines 147 - 159, The BlockTagService startup
assumes L1NodeAddr and RollupContractAddress are present but the CLI flags
L1NodeAddr and RollupContractAddress are currently optional; either make the
flags required or skip initializing the service when they're absent. To fix,
either (A) update the flag definitions for L1NodeAddr and RollupContractAddress
in node/flags/flags.go to include Required: true so NewBlockTagService can
always rely on them, or (B) modify the BlockTagService initialization in main.go
(the blockTagConfig / blockTagSvc / NewBlockTagService code path) to check the
parsed flag values and if either L1NodeAddr or RollupContractAddress is empty,
log a warning and do not call blocktag.NewBlockTagService (skip Start()), making
the service optional and avoiding the runtime error.

