
Conversation

@MasterPtato
Contributor

No description provided.

@vercel

vercel bot commented Nov 21, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project              Deployment  Preview  Comments  Updated (UTC)
rivetkit-serverless  Ready       Preview  Comment   Dec 2, 2025 7:11pm

3 Skipped Deployments

Project          Deployment  Preview  Comments  Updated (UTC)
rivet-cloud      Ignored     -        -         Dec 2, 2025 7:11pm
rivet-inspector  Ignored     Preview  -         Dec 2, 2025 7:11pm
rivet-site       Ignored     Preview  -         Dec 2, 2025 7:11pm

Contributor Author

MasterPtato commented Nov 21, 2025

Warning

This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Learn more


How to use the Graphite Merge Queue

Add the label merge-queue to this PR to add it to the merge queue.

You must have a Graphite account in order to use the merge queue. Sign up using this link.

An organization admin has enabled the Graphite Merge Queue in this repository.

Please do not merge from GitHub as this will restart CI on PRs being processed by the merge queue.

This stack of pull requests is managed by Graphite. Learn more about stacking.

@claude

claude bot commented Nov 21, 2025

PR Review: fix: parallelize publish signal awaits

Summary

This PR optimizes the publish_signal_inner function by parallelizing two independent database lookups using tokio::try_join!. The change fetches workflow_name_key and wake_signal_key concurrently instead of sequentially.

Code Quality ✅

Good changes:

  • Using tokio::try_join! is the correct approach for parallelizing independent async operations
  • The code is cleaner with the key definitions moved closer together at the top of the function
  • Both database lookups are independent and can safely run in parallel

Correctness ✅

The change is semantically equivalent to the original code:

  1. Both keys are created from inputs available at function start (workflow_id, signal_name)
  2. Neither lookup depends on the result of the other
  3. The wake_signal_entry is still checked in the same conditional logic, just fetched earlier

Performance ✅

This is a valid performance optimization:

  • Before: 2 sequential database round-trips
  • After: 1 parallel database round-trip (both queries execute concurrently)
  • This reduces latency from roughly the sum of the two query latencies to roughly the larger of the two (see the sketch below)
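
A minimal, self-contained sketch of that latency effect, using tokio::time::sleep as a stand-in for the two database reads. The function names and timings are hypothetical, not taken from the PR; it assumes the tokio crate with its macros, runtime, and time features enabled.

use std::time::{Duration, Instant};
use tokio::time::sleep;

// Hypothetical stand-ins for the two independent reads; each just sleeps
// to simulate a database round-trip.
async fn fetch_workflow_name() -> Result<&'static str, &'static str> {
    sleep(Duration::from_millis(10)).await;
    Ok("my-workflow")
}

async fn fetch_wake_signal() -> Result<Option<u64>, &'static str> {
    sleep(Duration::from_millis(10)).await;
    Ok(Some(42))
}

#[tokio::main]
async fn main() -> Result<(), &'static str> {
    // Sequential: roughly 20ms, the sum of both simulated latencies.
    let start = Instant::now();
    let _name = fetch_workflow_name().await?;
    let _wake = fetch_wake_signal().await?;
    println!("sequential: {:?}", start.elapsed());

    // Parallel: roughly 10ms, the larger of the two latencies. This is the
    // effect the PR's tokio::try_join! has on the two tx.get calls.
    let start = Instant::now();
    let (_name, _wake) = tokio::try_join!(fetch_workflow_name(), fetch_wake_signal())?;
    println!("parallel: {:?}", start.elapsed());

    Ok(())
}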

Security ✅

No security concerns - the change only affects the ordering/parallelization of existing read operations within the same transaction.

Potential Consideration ⚠️

One minor note: if the workflow doesn't exist (checked at line 155-157), we now also fetch the wake_signal_entry unnecessarily. However, this is a negligible cost since:

  1. Both queries run in parallel anyway
  2. The "workflow not found" case is likely an error/edge case rather than the happy path
  3. The slight overhead is worth the latency improvement on the common path

Test Coverage

Since this is a performance optimization that doesn't change behavior, existing tests should provide adequate coverage. The function's contract remains unchanged.

Verdict: LGTM

Clean, correct performance optimization. Approve.

@pkg-pr-new

pkg-pr-new bot commented Nov 21, 2025

More templates

@rivetkit/cloudflare-workers

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/cloudflare-workers@3523

@rivetkit/db

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/db@3523

@rivetkit/framework-base

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/framework-base@3523

@rivetkit/next-js

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/next-js@3523

@rivetkit/react

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/react@3523

rivetkit

pnpm add https://pkg.pr.new/rivet-dev/rivet/rivetkit@3523

@rivetkit/sql-loader

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/sql-loader@3523

@rivetkit/engine-runner

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/engine-runner@3523

@rivetkit/engine-runner-protocol

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/engine-runner-protocol@3523

commit: d451f0a

@MasterPtato MasterPtato changed the base branch from main to graphite-base/3523 November 22, 2025 02:45
@MasterPtato MasterPtato force-pushed the 11-18-fix_parallelize_publish_signal_awaits branch from cda5885 to d6cccca November 22, 2025 02:45
@MasterPtato MasterPtato changed the base branch from graphite-base/3523 to 11-21-chore_fix_diff_between_ee_fix_plural_api_query_params November 22, 2025 02:45
@claude

claude bot commented Nov 22, 2025

Code Review: fix: parallelize publish signal awaits

Summary

This PR optimizes publish_signal_inner by parallelizing two independent database reads using tokio::try_join!. Previously, these reads were sequential.

✅ What looks good

  1. Correct parallelization pattern: Using tokio::try_join! is the right approach for running independent async operations concurrently. Both tx.get() calls are independent reads that can safely execute in parallel.

  2. No semantic changes: The logic remains identical - the workflow existence check and wake signal check are performed with the same data, just fetched more efficiently.

  3. Clean code organization: Moving the wake_signal_key declaration up to co-locate it with its fetch is a sensible refactor that improves readability.

  4. Net reduction in code: The change reduces lines (-12, +9) while improving performance, which is ideal.

🔍 Review notes

  1. Transaction isolation level: Both fetches use Serializable isolation, which is consistent with the original code. This ensures the parallel reads maintain the same consistency guarantees.

  2. Error handling: try_join! will short-circuit and return early if either operation fails, which matches the original sequential behavior (see the sketch after this list).

  3. Variable naming: The shadowing of workflow_name_entry (from the tuple result to the Some binding) is fine and follows existing patterns in the codebase.
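
A tiny runnable illustration of that short-circuit behavior, with hypothetical futures standing in for the two tx.get calls (assumes only the tokio crate; nothing here is from the PR itself):

// Hypothetical futures standing in for the two independent reads.
async fn fetch_ok() -> Result<u32, String> {
    Ok(42)
}

async fn fetch_err() -> Result<u32, String> {
    Err("lookup failed".to_string())
}

#[tokio::main]
async fn main() {
    // try_join! drives both futures concurrently and returns the first error
    // it observes, dropping the remaining future. This matches the
    // early-return behavior of the original sequential `.await?` calls.
    match tokio::try_join!(fetch_ok(), fetch_err()) {
        Ok((a, b)) => println!("both succeeded: {a} / {b}"),
        Err(e) => println!("short-circuited: {e}"),
    }
}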

💭 Minor suggestion

Consider adding a brief comment above the try_join! explaining why these reads are parallelized:

// Fetch workflow name and wake signal status in parallel since they're independent reads
let (workflow_name_entry, wake_signal_entry) = tokio::try_join!(
    tx.get(&self.subspace.pack(&workflow_name_key), Serializable),
    tx.get(&self.subspace.pack(&wake_signal_key), Serializable),
)?;

This is optional since the intent is fairly clear from context.

✅ Verdict

LGTM - This is a clean optimization that improves performance without changing behavior. The parallel reads should reduce latency for the publish_signal_inner operation.

@claude

claude bot commented Nov 22, 2025

PR Review: fix: parallelize publish signal awaits

Summary

This PR parallelizes two sequential database reads in publish_signal_inner using tokio::try_join!, reducing latency by executing both fetches concurrently.

Code Quality ✅

  • Good use of tokio::try_join!: The change correctly uses try_join! to parallelize two independent database fetches that were previously sequential
  • Clean refactoring: The wake_signal_key declaration is appropriately moved earlier to enable the parallel fetch
  • No behavioral change: The logic remains identical - both values are still checked at the same points, just fetched earlier

Correctness ✅

The change is correct:

  1. Both workflow_name_key and wake_signal_key fetches are independent operations with no data dependencies between them
  2. The wake_signal_entry is only used later in the function (line 208) to conditionally wake the workflow
  3. Error handling via ? on try_join! properly propagates any failures from either operation

Performance ✅

This is a net positive performance improvement:

  • Before: 2 sequential awaits = sum of both latencies
  • After: 1 parallel await = max of both latencies

For database operations, this can provide meaningful latency reduction, especially under load.

Potential Considerations

  1. Transaction isolation: Both reads use Serializable isolation level within the same transaction (tx), which should maintain consistency. This looks correct as written.
  2. Early fetch of wake_signal_entry: The signal wake entry is now fetched even when the workflow doesn't exist (early return at line 156). This is a minor inefficiency in the error case, but the happy path improvement outweighs this edge case cost.

Test Coverage

No new tests added, but this is a pure refactoring that doesn't change behavior. Existing tests should cover this code path.

Verdict: Approve

This is a clean, focused optimization that correctly parallelizes independent async operations. The code is readable and maintains the same behavior while reducing latency.

@claude

claude bot commented Nov 25, 2025

PR Review: Parallelize publish signal awaits

Summary

This PR optimizes the publish_signal function in the gasoline workflow engine by parallelizing two database lookups using tokio::try_join!. The change reduces sequential database calls from 2 to 1 by fetching both workflow_name_entry and wake_signal_entry concurrently.

✅ Positives

  1. Performance Improvement: Using tokio::try_join! to parallelize independent database queries is an excellent optimization that reduces latency.
  2. Clean Refactoring: The code reorganization is logical - the wake_signal_key is now defined earlier alongside workflow_name_key, improving code readability.
  3. No Logic Changes: The change maintains identical behavior while improving performance - a low-risk optimization.
  4. Proper Error Handling: The tokio::try_join! macro correctly propagates errors from either future.

📝 Code Quality

The changes follow Rivet's conventions well:

  • Consistent with the codebase's async/await patterns
  • Maintains existing error handling semantics
  • Clean separation of concerns

🔍 Detailed Analysis

Before: Two sequential database calls

// First await
let workflow_name_entry = tx.get(&self.subspace.pack(&workflow_name_key), Serializable).await?;
// ... error handling ...
// Second await (much later in the function)
let wake_signal_entry = tx.get(&self.subspace.pack(&wake_signal_key), Serializable).await?;

After: Parallel database calls

let (workflow_name_entry, wake_signal_entry) = tokio::try_join!(
    tx.get(&self.subspace.pack(&workflow_name_key), Serializable),
    tx.get(&self.subspace.pack(&wake_signal_key), Serializable),
)?;

💭 Minor Considerations

  1. Transaction Semantics: Since both queries are within the same transaction (tx), parallelizing them should be safe. The transaction isolation level will ensure consistency. ✅

  2. No Side Effects: Both tx.get() calls are read-only operations with no side effects, making them safe to parallelize. ✅

  3. Early Exit Pattern: The early return for WorkflowNotFound happens before any database writes, which is correct - we check existence before publishing the signal. ✅

🎯 Verdict

LGTM 🚀

This is a clean, focused optimization that improves performance without introducing risk. The change is well-scoped and follows Rust best practices for concurrent async operations.

📊 Impact

  • Performance: Reduces latency by eliminating sequential wait time between two independent database queries
  • Risk: Low - maintains identical behavior and error handling
  • Test Coverage: Existing tests should cover this code path adequately since behavior is unchanged

Review completed by Claude Code

@claude

claude bot commented Nov 25, 2025

PR Review: Parallelize publish signal awaits

Summary

This PR optimizes the publish_signal_inner function by parallelizing two independent database reads using tokio::try_join!. The change is correct and beneficial.

What Changed

Before: Two sequential database reads

let workflow_name_entry = tx.get(&self.subspace.pack(&workflow_name_key), Serializable).await?;
// ... later ...
let wake_signal_entry = tx.get(&self.subspace.pack(&wake_signal_key), Serializable).await?;

After: Parallel database reads with tokio::try_join!

let (workflow_name_entry, wake_signal_entry) = tokio::try_join!(
    tx.get(&self.subspace.pack(&workflow_name_key), Serializable),
    tx.get(&self.subspace.pack(&wake_signal_key), Serializable),
)?;

Code Quality: ✅ Excellent

Strengths:

  • Correct optimization: Both reads are independent and can safely execute in parallel
  • Proper error handling: tokio::try_join! short-circuits on first error, preserving original error semantics
  • Cleaner code: Reduces code by 3 lines while improving performance
  • Better resource utilization: Reduces database round-trip latency

Potential Issues: ✅ None Found

Correctness:

  • Both database reads are independent (no data dependencies)
  • Error handling behavior is preserved
  • The workflow exists check still happens before using the data
  • wake_signal_entry usage at line 208 remains correct

Performance:

  • Improvement: Reduces total latency by executing reads concurrently
  • No downsides: No additional overhead from parallelization
  • In high-latency database scenarios, this could cut ~50% of read time

Security: ✅ No Concerns

  • No changes to authorization/validation logic
  • Transaction isolation preserved
  • No new attack surfaces introduced

Test Coverage

No new tests needed - this is a pure optimization that:

  • Doesn't change functional behavior
  • Preserves error handling semantics
  • Maintains the same logical flow

Existing tests for publish_signal_inner should continue to pass.

Recommendations

None - This is a well-executed optimization that follows Rust best practices. The change is:

  • Safe ✅
  • Correct ✅
  • Performance-improving ✅
  • Code-simplifying ✅

Approved - Ready to merge! 🚀

@MasterPtato MasterPtato force-pushed the 11-21-chore_fix_diff_between_ee_fix_plural_api_query_params branch from a931dae to 2e19120 November 26, 2025 02:16
@MasterPtato MasterPtato force-pushed the 11-18-fix_parallelize_publish_signal_awaits branch from 480189f to 279e66f November 26, 2025 02:16
@claude

claude bot commented Nov 26, 2025

Code Review

Summary

This PR optimizes the publish_signal_inner function by parallelizing two independent database reads using tokio::try_join!. This is a good performance improvement that reduces latency by executing concurrent I/O operations.

Positive Points ✅

  1. Performance Improvement: Parallelizing the two tx.get() calls is a smart optimization. Since both database reads are independent, there's no reason to execute them sequentially.

  2. Consistent Pattern: The use of tokio::try_join! matches existing patterns in the codebase (I found 11+ other usages in gasoline, including similar database read patterns in debug.rs and elsewhere in mod.rs).

  3. Error Handling: The try_join! properly propagates errors from either operation via the ? operator, maintaining correct error semantics.

  4. Code Cleanliness: The refactoring actually improves readability by moving the wake_signal_key definition earlier, grouping related key definitions together (lines 144-146).

Potential Considerations 🤔

  1. Transaction Isolation: Both reads use Serializable isolation level, which is appropriate for consistency. The parallelization doesn't change the semantics since both reads occur within the same transaction context.

  2. Error Granularity: With sequential awaits, you could theoretically know which specific get operation failed. With try_join!, the error comes from whichever future fails first. However, this is unlikely to be an issue in practice since:

    • Database errors typically include enough context
    • The logging at the function start (lines 136-142) provides workflow/signal context
    • Both operations fail for similar reasons (connection issues, serialization errors)
  3. Testing: I didn't find specific unit tests for publish_signal_inner. While this optimization is low-risk, it would be ideal to have integration tests covering the signal publishing flow, especially edge cases like:

    • Publishing to non-existent workflows (line 155-157)
    • Publishing signals that wake waiting workflows (line 208-221)
    • Error scenarios during signal body writes

Performance Impact 📊

Best-case scenario: If both database reads take ~10ms each, this change could reduce function latency by ~10ms (from 20ms sequential to 10ms parallel). Actual impact depends on database round-trip time and load.

Recommendation

Approve

This is a solid optimization with no functional changes. The code is cleaner, follows existing patterns, and should provide measurable performance improvements in the signal publishing path.


Minor suggestion (optional, non-blocking): Consider adding a comment explaining why these two reads can be safely parallelized, for future maintainers:

// Fetch workflow name and wake signal status in parallel since they're independent reads
let (workflow_name_entry, wake_signal_entry) = tokio::try_join!(
    tx.get(&self.subspace.pack(&workflow_name_key), Serializable),
    tx.get(&self.subspace.pack(&wake_signal_key), Serializable),
)?;

However, the code is clear enough without it.

@claude

claude bot commented Nov 26, 2025

Code Review

Summary

This PR parallelizes two database lookups in the publish_signal method by using tokio::try_join! instead of sequential .await calls. The change is a performance optimization that eliminates unnecessary sequential blocking.

Positive Aspects ✅

  1. Performance Improvement: The change converts two independent database reads from sequential to parallel execution, which should reduce latency when publishing signals.

  2. Clean Implementation: The refactor is well-executed:

    • Uses tokio::try_join! appropriately for parallel async operations
    • Properly handles errors with the ? operator
    • Maintains identical error handling behavior (early return on WorkflowNotFound)
  3. Code Simplification: The diff shows a net reduction of 3 lines while improving performance - this is excellent.

  4. Correctness: The two database reads are truly independent (no data dependencies), making this a safe parallelization:

    • workflow_name_key lookup validates workflow existence
    • wake_signal_key lookup checks if the workflow is waiting for this signal
    • Neither depends on the other's result until both complete

Considerations 🤔

  1. Transaction Context: Both reads happen within the same transaction (tx). This is correct and maintains consistency, but it's worth noting that the performance gain depends on how the underlying universaldb implementation handles concurrent reads within a transaction.

  2. Error Ordering: With sequential awaits, you'd always get the "workflow not found" error before checking the wake signal. With try_join!, if both fail, the error returned is non-deterministic (whichever completes first). However, this is not a concern here since:

    • Only the workflow_name_entry result is checked for errors
    • The wake_signal_entry is only checked for presence (.is_some())
  3. Testing: Consider adding a test case that verifies signal publishing performance or correctness when both database reads are stressed, though the existing test suite likely covers this.

Security & Safety ✅

  • No security concerns
  • No unsafe code introduced
  • Maintains the same validation logic (workflow existence check)

Performance Analysis 📊

Before: Sequential execution time ≈ T(workflow_name_lookup) + T(wake_signal_lookup)
After: Parallel execution time ≈ max(T(workflow_name_lookup), T(wake_signal_lookup))

This is a clear win, especially in high-latency database scenarios.

Verdict: LGTM

This is a solid optimization with no downsides. The code is cleaner, faster, and maintains all existing correctness guarantees. Nice work!


Minor suggestion: If you have benchmarks or performance tests for the workflow engine, it would be valuable to measure the impact of this change in signal publishing latency.

@claude

claude bot commented Nov 27, 2025

Code Review: Fix Parallelize Publish Signal Awaits

Summary

This PR optimizes the publish_signal_inner function by parallelizing two independent database lookups using tokio::try_join!. This is a solid performance improvement that reduces sequential waiting time.


✅ Positive Changes

Performance Optimization

  • Good: Using tokio::try_join! to run both get operations concurrently is the right approach for independent database queries
  • Good: The refactoring maintains the same error handling behavior with the `?` operator
  • Good: Code is more concise (net -3 lines) while being more performant

Code Quality

  • Good: The logic remains clear and readable
  • Good: No behavioral changes—purely a performance optimization
  • Good: Proper destructuring of the tuple result maintains type safety

📝 Observations

1. Wake Signal Key Moved Earlier
The wake_signal_key construction is now performed before the database lookups rather than later in the function. This is necessary for the parallel execution but does mean we're constructing this key even when the workflow doesn't exist (early return at line 156).

Impact: Negligible—key construction is cheap compared to database I/O.

2. Early Return Behavior
The early return when workflow_name_entry is None (line 155-156) still occurs before we use wake_signal_entry, which is correct. The unused wake_signal_entry in the error case is acceptable given the performance benefits in the success case.


✅ Code Quality Checklist

  • Error Handling: Properly maintained with tokio::try_join! and the `?` operator
  • Style: Follows Rust conventions and existing codebase patterns
  • Imports: No new imports needed—uses existing tokio::try_join!
  • Security: No security concerns introduced
  • Performance: Clear improvement by parallelizing independent I/O operations

🎯 Suggestions

Optional: Consider Adding a Comment
Since this is a performance-focused change, consider adding a brief comment explaining why both keys are fetched in parallel:

// Fetch both entries in parallel since they're independent lookups
let (workflow_name_entry, wake_signal_entry) = tokio::try_join!(
    tx.get(&self.subspace.pack(&workflow_name_key), Serializable),
    tx.get(&self.subspace.pack(&wake_signal_key), Serializable),
)?;

However, per CLAUDE.md guidelines ("Only add comments where the logic isn't self-evident"), this might be unnecessary since the pattern is relatively clear.


🧪 Testing Considerations

Existing Test Coverage

  • The change preserves all existing behavior, so current tests should pass without modification
  • No new edge cases are introduced

Potential Test Areas (if not already covered):

  • Workflow exists with wake signal present
  • Workflow exists without wake signal
  • Workflow doesn't exist (early return path)

✅ Final Verdict

LGTM - This is a clean, focused performance optimization with no downsides. The change:

  • Reduces latency by parallelizing independent database operations
  • Maintains correctness and error handling
  • Follows Rust and codebase conventions
  • Introduces no security or correctness issues

Approval recommended


Automated review by Claude Code

@claude

claude bot commented Nov 27, 2025

Code Review - PR #3523: Parallelize publish signal awaits

Summary

This PR optimizes the publish_signal_inner function by parallelizing two independent database reads using tokio::try_join!. This is a good performance improvement that reduces sequential awaits.

✅ Positive Aspects

  1. Performance Improvement: The change correctly identifies that workflow_name_entry and wake_signal_entry are independent reads that can be parallelized. This reduces latency by executing both database operations concurrently.

  2. Code Quality: The refactoring is clean and maintains the same logic flow while improving efficiency.

  3. Error Handling: Using tokio::try_join! properly propagates errors from either operation, maintaining the same error handling behavior.

  4. Line Reduction: The change reduces code from 12 lines to 9 lines while improving performance - a win-win.

🔍 Observations

  1. Functional Equivalence: The logic is functionally equivalent to the original:

    • Both workflow_name_entry and wake_signal_entry are read before being used
    • The workflow_name_entry check for None still happens in the same location
    • The wake_signal_entry is still used in the same conditional check at line 208
  2. No Side Effects: Both tx.get() calls are pure reads with no side effects, making them safe to parallelize.

📋 Recommendations

None - This is a solid optimization with no issues identified.

Conclusion

LGTM

This is a straightforward and correct performance optimization. The parallelization of independent database reads is a best practice that reduces latency without introducing any behavioral changes or risks.


Review generated by Claude Code

@graphite-app
Contributor

graphite-app bot commented Dec 3, 2025

Merge activity

  • Dec 3, 12:33 AM UTC: NathanFlurry added this pull request to the Graphite merge queue.
  • Dec 3, 12:33 AM UTC: CI is running for this pull request on a draft pull request (#3577) due to your merge queue CI optimization settings.
  • Dec 3, 12:34 AM UTC: Merged by the Graphite merge queue via draft PR: #3577.

graphite-app bot pushed a commit that referenced this pull request Dec 3, 2025
@graphite-app graphite-app bot closed this Dec 3, 2025
@graphite-app graphite-app bot deleted the 11-18-fix_parallelize_publish_signal_awaits branch December 3, 2025 00:34