
Conversation


@spuckhafte spuckhafte commented Dec 23, 2025

Fixes #XXXX.

Description


This PR has:

  • been tested to ensure log ingestion and log query work.
  • added comments explaining the "why" and the intent of the code wherever it would not be obvious to an unfamiliar reader.
  • added documentation for new or modified features or behaviors.

Summary by CodeRabbit

  • Bug Fixes

    • Stream deletion now also removes orphaned filters and surfaces cleanup failures instead of hiding them.
  • New Features

    • Deletion now performs an automatic zombie-filter cleanup and records and reports the counts of successful and failed removals to aid troubleshooting.



coderabbitai bot commented Dec 23, 2025

Walkthrough

Calls a new async delete_zombie_filters(stream_name) during stream deletion to remove orphaned filters from the metastore and the in-memory FILTERS cache; the helper returns counts of successful and failed deletions, or a MetastoreError::ZombieResourceStreamStillExists if the stream still exists.

Changes

Cohort / File(s): Summary

  • Stream deletion update (src/handlers/http/logstream.rs): After stats deletion, invokes delete_zombie_filters(&stream_name).await? and logs the returned ZombieResourceCleanupOk; propagates errors instead of swallowing them.
  • Zombie filters deletion utility (src/handlers/http/modal/utils/logstream_utils.rs): Added pub async fn delete_zombie_filters(stream_name: &str) -> Result<ZombieResourceCleanupOk, MetastoreError> and pub struct ZombieResourceCleanupOk { ok_deletions: i32, failed_deletions: i32 }. The function checks stream non-existence via PARSEABLE.streams, lists filters from PARSEABLE.metastore.get_filters(), deletes matching filters with PARSEABLE.metastore.delete_filter, logs per-filter failures, removes in-memory entries via FILTERS.delete_filter when present, and returns deletion counts.
  • Metastore error enum (src/metastore/mod.rs): Added public enum variant ZombieResourceStreamStillExists { stream_name: String, resource_type: String }, mapped to a detailed MetastoreErrorDetail and HTTP status 500 in to_detail/status_code.
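The error-enum change can be sketched as follows. This is a hedged, simplified model based only on the summary above: the variant and method names mirror the summary, but the real to_detail() returns a structured MetastoreErrorDetail, for which a plain String stands in here.

```rust
// Simplified sketch of the new error variant; not the actual Parseable code.
#[derive(Debug)]
pub enum MetastoreError {
    ZombieResourceStreamStillExists {
        stream_name: String,
        resource_type: String,
    },
}

impl MetastoreError {
    // Per the summary, this variant maps to HTTP 500.
    pub fn status_code(&self) -> u16 {
        match self {
            MetastoreError::ZombieResourceStreamStillExists { .. } => 500,
        }
    }

    // Stand-in for to_detail(); the real method returns MetastoreErrorDetail.
    pub fn to_detail(&self) -> String {
        match self {
            MetastoreError::ZombieResourceStreamStillExists {
                stream_name,
                resource_type,
            } => format!(
                "cannot clean up zombie {resource_type}s: stream {stream_name} still exists"
            ),
        }
    }
}

fn main() {
    let err = MetastoreError::ZombieResourceStreamStillExists {
        stream_name: "demo".to_string(),
        resource_type: "filter".to_string(),
    };
    println!("{} -> {}", err.status_code(), err.to_detail());
}
```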

Sequence Diagram(s)

sequenceDiagram
    %% Styling note: green = success paths, red = error; color used sparingly
    participant Handler as HTTP Handler
    participant Streams as PARSEABLE.streams
    participant Metastore as PARSEABLE.metastore
    participant Cache as FILTERS

    Handler->>Handler: delete stream stats
    Handler->>Streams: check stream exists?(stream_name)
    alt stream exists
        Streams-->>Handler: exists
        Handler-->>Handler: return MetastoreError::ZombieResourceStreamStillExists
    else stream missing
        Handler->>Metastore: get_filters()
        Metastore-->>Handler: filters[]
        Handler->>Handler: filter by stream_name
        loop for each matching filter
            Handler->>Metastore: delete_filter(filter_id)
            alt delete success
                Metastore-->>Handler: OK
                Handler->>Cache: FILTERS.delete_filter(filter_id) (if present)
                Cache-->>Handler: OK
            else delete failure
                Metastore-->>Handler: Error (logged)
            end
        end
        Handler-->>Handler: return ZombieResourceCleanupOk { ok_deletions, failed_deletions }
    end
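The flow in the diagram can be modeled with a simplified, synchronous sketch. This is not the actual implementation: PARSEABLE.streams, the metastore, and the FILTERS cache are replaced by plain in-memory collections, and the hypothetical `failing` set stands in for per-filter I/O errors.

```rust
use std::collections::{HashMap, HashSet};

// Counts returned by the cleanup, mirroring the summary above.
#[derive(Debug, PartialEq)]
struct ZombieResourceCleanupOk {
    ok_deletions: i32,
    failed_deletions: i32,
}

// Simplified model: `filters` maps filter_id -> owning stream; `failing`
// simulates metastore delete errors for specific filter ids.
fn delete_zombie_filters(
    stream_name: &str,
    live_streams: &HashSet<String>,
    filters: &mut HashMap<String, String>,
    failing: &HashSet<String>,
) -> Result<ZombieResourceCleanupOk, String> {
    // The stream must already be gone; otherwise its filters are not zombies.
    if live_streams.contains(stream_name) {
        return Err(format!("stream {stream_name} still exists"));
    }
    // Collect filters associated with the deleted stream.
    let matching: Vec<String> = filters
        .iter()
        .filter(|(_, s)| s.as_str() == stream_name)
        .map(|(id, _)| id.clone())
        .collect();
    let (mut ok, mut failed) = (0, 0);
    for id in matching {
        if failing.contains(&id) {
            failed += 1; // log-and-continue in the real code
        } else {
            filters.remove(&id); // storage + in-memory cache removal
            ok += 1;
        }
    }
    Ok(ZombieResourceCleanupOk { ok_deletions: ok, failed_deletions: failed })
}

fn main() {
    let live: HashSet<String> = HashSet::new();
    let mut filters = HashMap::from([
        ("f1".to_string(), "gone".to_string()),
        ("f2".to_string(), "gone".to_string()),
        ("f3".to_string(), "other".to_string()),
    ]);
    let failing = HashSet::from(["f2".to_string()]);
    let res = delete_zombie_filters("gone", &live, &mut filters, &failing);
    println!("{res:?}");
}
```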

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐇 I nibble at ghosts in the filterbed,

I hop through stores where old rules tread,
A tidy count of cleared and failed,
I hum, I clean, no quest derailed,
Streams rest easy — no phantoms fled.

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Description check: ⚠️ Warning. The PR description is empty except for the template structure; it lacks implementation details, rationale, an issue reference, and all checklist items remain unchecked. Resolution: fill in the Description section with the goal, the chosen solution and its rationale, and the key changes; update the issue reference, add testing confirmation, and check off completed checklist items.
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 54.55%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (1 passed)

  • Title check: ✅ Passed. The title accurately summarizes the main change: removing associated filters during stream deletion, which directly matches the core functionality added in the changeset.
✨ Finishing touches
  • 📝 Generate docstrings


@github-actions

CLA Assistant Lite bot:
Thank you for your submission; we really appreciate it. Like many open-source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You can sign the CLA by posting a Pull Request comment in the format below.


I have read the CLA Document and I hereby sign the CLA


You can retrigger this bot by commenting recheck in this Pull Request.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4e67fc6 and aca2671.

📒 Files selected for processing (1)
  • src/handlers/http/logstream.rs
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-02-14T09:49:25.818Z
Learnt from: de-sh
Repo: parseablehq/parseable PR: 1185
File: src/handlers/http/logstream.rs:255-261
Timestamp: 2025-02-14T09:49:25.818Z
Learning: In Parseable's logstream handlers, stream existence checks must be performed for both query and standalone modes. The pattern `!PARSEABLE.streams.contains(&stream_name) && (PARSEABLE.options.mode != Mode::Query || !PARSEABLE.create_stream_and_schema_from_storage(&stream_name).await?)` ensures proper error handling in both modes.

Applied to files:

  • src/handlers/http/logstream.rs
📚 Learning: 2025-08-25T01:31:41.786Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1415
File: src/metadata.rs:63-68
Timestamp: 2025-08-25T01:31:41.786Z
Learning: The TOTAL_EVENTS_INGESTED_DATE, TOTAL_EVENTS_INGESTED_SIZE_DATE, and TOTAL_EVENTS_STORAGE_SIZE_DATE metrics in src/metadata.rs and src/storage/object_storage.rs are designed to track total events across all streams, not per-stream. They use labels [origin, parsed_date] to aggregate by format and date, while per-stream metrics use [stream_name, origin, parsed_date] labels.

Applied to files:

  • src/handlers/http/logstream.rs
📚 Learning: 2025-08-25T01:32:25.980Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1415
File: src/metrics/mod.rs:163-173
Timestamp: 2025-08-25T01:32:25.980Z
Learning: The TOTAL_EVENTS_INGESTED_DATE, TOTAL_EVENTS_INGESTED_SIZE_DATE, and TOTAL_EVENTS_STORAGE_SIZE_DATE metrics in src/metrics/mod.rs are intentionally designed to track global totals across all streams for a given date, using labels ["format", "date"] rather than per-stream labels. This is the correct design for global aggregation purposes.

Applied to files:

  • src/handlers/http/logstream.rs
📚 Learning: 2025-10-28T02:10:41.140Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1453
File: src/parseable/mod.rs:397-400
Timestamp: 2025-10-28T02:10:41.140Z
Learning: In Parseable enterprise deployments with multiple query nodes, hot tier configuration must be persisted in object storage so that newly started query nodes can fetch and synchronize the hot tier settings at startup (file: src/parseable/mod.rs, function: create_stream_and_schema_from_storage).

Applied to files:

  • src/handlers/http/logstream.rs
🔇 Additional comments (1)
src/handlers/http/logstream.rs (1)

32-32: LGTM!

The import of Filter is necessary for the new filter deletion logic and is correctly placed.

coderabbitai bot previously approved these changes Dec 23, 2025

@spuckhafte spuckhafte left a comment


Let PARSEABLE.metastore.get_filters be queryable by logstream name or id.

The current implementation fetches all filters across all existing logstreams at once.


@parmesant parmesant left a comment


Let's move filter deletion so that it happens after stream deletion. Also, moving the filter deletion logic into a utility function would be better.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
src/parseable/mod.rs (1)

238-245: Simplify the boolean return expression.

The if-else construct is redundant when returning a boolean derived from a condition. This can be simplified to a single expression.

🔎 Proposed simplification
-    // check if a stream exists
-    pub fn check_stream_exists(&self, stream_name: &str) -> bool {
-        if self.streams.contains(stream_name) {
-            return true;
-        } else {
-            return false;
-        }
-    }
+    /// Checks if a stream exists in the in-memory streams map.
+    pub fn check_stream_exists(&self, stream_name: &str) -> bool {
+        self.streams.contains(stream_name)
+    }
src/metastore/metastores/object_store_metastore.rs (1)

549-579: Use self methods instead of global PARSEABLE.metastore calls.

The implementation calls PARSEABLE.metastore.get_filters() and PARSEABLE.metastore.delete_filter() instead of using self.get_filters() and self.delete_filter(). This is inconsistent with other methods in this trait implementation and creates unnecessary indirection through the global static. Using self is more idiomatic and avoids the implicit assumption that PARSEABLE.metastore is the same instance.

🔎 Proposed fix
     // clear filters associated to a deleted stream
     async fn delete_zombie_filters(&self, stream_name: &str) -> Result<bool, MetastoreError> {
         // stream should not exist in order to have zombie filters
         if PARSEABLE.check_stream_exists(stream_name) {
             warn!("no zombie filters cleared for [undeleted] stream {}", stream_name);
             return Ok(false);
         }

-        let all_filters = match PARSEABLE.metastore.get_filters().await {
-            Ok(all_f) => all_f,
-            Err(e) => {
-                return Err(e);
-            }
-        };
+        let all_filters = self.get_filters().await?;

         // collect filters associated with the logstream being deleted
         let filters_for_stream: Vec<Filter> = all_filters
             .into_iter()
             .filter(|filter| filter.stream_name == stream_name)
             .collect();

         for filter in filters_for_stream.iter() {
-            PARSEABLE.metastore.delete_filter(filter).await?;
-            
+            self.delete_filter(filter).await?;
+
             if let Some(filter_id) = filter.filter_id.as_ref() {
                 FILTERS.delete_filter(filter_id).await;
             }
         }

-        return Ok(true);
+        Ok(true)
     }
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 19704d9 and d812994.

📒 Files selected for processing (4)
  • src/handlers/http/logstream.rs
  • src/metastore/metastore_traits.rs
  • src/metastore/metastores/object_store_metastore.rs
  • src/parseable/mod.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/handlers/http/logstream.rs
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: de-sh
Repo: parseablehq/parseable PR: 1185
File: src/handlers/http/logstream.rs:255-261
Timestamp: 2025-02-14T09:49:25.818Z
Learning: In Parseable's logstream handlers, stream existence checks must be performed for both query and standalone modes. The pattern `!PARSEABLE.streams.contains(&stream_name) && (PARSEABLE.options.mode != Mode::Query || !PARSEABLE.create_stream_and_schema_from_storage(&stream_name).await?)` ensures proper error handling in both modes.
📚 Learning: 2025-02-14T09:49:25.818Z
Learnt from: de-sh
Repo: parseablehq/parseable PR: 1185
File: src/handlers/http/logstream.rs:255-261
Timestamp: 2025-02-14T09:49:25.818Z
Learning: In Parseable's logstream handlers, stream existence checks must be performed for both query and standalone modes. The pattern `!PARSEABLE.streams.contains(&stream_name) && (PARSEABLE.options.mode != Mode::Query || !PARSEABLE.create_stream_and_schema_from_storage(&stream_name).await?)` ensures proper error handling in both modes.

Applied to files:

  • src/parseable/mod.rs
  • src/metastore/metastores/object_store_metastore.rs
📚 Learning: 2025-10-21T02:22:24.403Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1448
File: src/parseable/mod.rs:419-432
Timestamp: 2025-10-21T02:22:24.403Z
Learning: In Parseable's internal stream creation (`create_internal_stream_if_not_exists` in `src/parseable/mod.rs`), errors should not propagate to fail server initialization. The function creates both pmeta and pbilling internal streams, and failures are logged but the function always returns `Ok(())` to ensure server startup resilience. Individual stream creation failures should not prevent syncing of successfully created streams.

Applied to files:

  • src/parseable/mod.rs
📚 Learning: 2025-07-28T17:10:39.448Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1392
File: src/migration/stream_metadata_migration.rs:303-322
Timestamp: 2025-07-28T17:10:39.448Z
Learning: In Parseable's migration system (src/migration/stream_metadata_migration.rs), each migration function updates the metadata to the current latest format using CURRENT_OBJECT_STORE_VERSION and CURRENT_SCHEMA_VERSION constants, rather than producing incremental versions. For example, v5_v6 function produces v7 format output when these constants are set to "v7", not v6 format.

Applied to files:

  • src/metastore/metastores/object_store_metastore.rs
📚 Learning: 2025-10-28T02:10:41.140Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1453
File: src/parseable/mod.rs:397-400
Timestamp: 2025-10-28T02:10:41.140Z
Learning: In Parseable enterprise deployments with multiple query nodes, hot tier configuration must be persisted in object storage so that newly started query nodes can fetch and synchronize the hot tier settings at startup (file: src/parseable/mod.rs, function: create_stream_and_schema_from_storage).

Applied to files:

  • src/metastore/metastores/object_store_metastore.rs
📚 Learning: 2025-09-18T09:52:07.554Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1415
File: src/storage/object_storage.rs:173-177
Timestamp: 2025-09-18T09:52:07.554Z
Learning: In Parseable's upload system (src/storage/object_storage.rs), the update_storage_metrics function can safely use path.metadata().map_err() to fail on local file metadata read failures because parquet validation (validate_uploaded_parquet_file) ensures file integrity before this step, and the system guarantees local staging files remain accessible throughout the upload flow.

Applied to files:

  • src/metastore/metastores/object_store_metastore.rs
📚 Learning: 2025-09-06T04:26:17.191Z
Learnt from: parmesant
Repo: parseablehq/parseable PR: 1424
File: src/enterprise/utils.rs:65-72
Timestamp: 2025-09-06T04:26:17.191Z
Learning: In Parseable's metastore implementation, MetastoreError::to_detail() returns a MetastoreErrorDetail struct (not a string), which contains structured error information including operation, message, stream_name, and other contextual fields. This struct is designed to be boxed in ObjectStorageError::MetastoreError(Box<MetastoreErrorDetail>).

Applied to files:

  • src/metastore/metastores/object_store_metastore.rs
🧬 Code graph analysis (2)
src/metastore/metastore_traits.rs (2)
src/metastore/metastores/object_store_metastore.rs (1)
  • delete_zombie_filters (550-579)
src/validator.rs (1)
  • stream_name (36-71)
src/metastore/metastores/object_store_metastore.rs (2)
src/users/filters.rs (1)
  • migrate_v1_v2 (181-193)
src/metastore/metastore_traits.rs (1)
  • delete_zombie_filters (111-111)
🔇 Additional comments (3)
src/metastore/metastore_traits.rs (1)

111-111: LGTM - trait method signature is appropriate.

The new delete_zombie_filters method follows the existing trait patterns and is logically grouped with other filter methods. Consider adding a doc comment to clarify the return semantics (true = filters deleted, false = stream still exists).

src/metastore/metastores/object_store_metastore.rs (2)

59-59: LGTM - import updated correctly.

The FILTERS import is necessary for in-memory cache cleanup during zombie filter deletion.


570-576: Consider partial failure handling.

If deletion fails mid-loop (e.g., network error on the 3rd of 5 filters), the method returns an error but some filters have already been deleted from storage. The in-memory FILTERS cache may become inconsistent with storage state. Consider whether to:

  1. Continue deleting remaining filters and collect errors
  2. Log which filters were successfully deleted before the error

This is a minor robustness concern since the operation can be retried.
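Option 1 above (continue deleting and collect errors) can be sketched like this. `delete_all` and `delete_one` are hypothetical names, with the closure standing in for the fallible metastore call; the real code would log each failure as it happens.

```rust
// Continue past per-filter failures and report which ids failed,
// instead of aborting mid-loop and leaving the caller unsure of state.
fn delete_all(
    ids: &[&str],
    delete_one: impl Fn(&str) -> Result<(), String>,
) -> (Vec<String>, Vec<String>) {
    let mut deleted = Vec::new();
    let mut failed = Vec::new();
    for id in ids {
        match delete_one(id) {
            Ok(()) => deleted.push(id.to_string()),
            Err(_e) => failed.push(id.to_string()), // would be logged in real code
        }
    }
    (deleted, failed)
}

fn main() {
    // Simulate a network error on the second of three filters.
    let (deleted, failed) = delete_all(&["a", "b", "c"], |id| {
        if id == "b" {
            Err("network error".to_string())
        } else {
            Ok(())
        }
    });
    println!("deleted {deleted:?}, failed {failed:?}");
}
```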

coderabbitai bot previously approved these changes Dec 24, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
src/metastore/metastores/object_store_metastore.rs (1)

549-579: LGTM! Zombie filter cleanup logic is sound.

The implementation correctly:

  • Guards against cleaning filters for active streams (line 552)
  • Fetches and filters to stream-specific filters
  • Removes from both storage and in-memory cache for consistency

The dual deletion (storage + in-memory) at lines 571-574 is essential to prevent stale filter references.

Optional: Simplify code style
-        let all_filters = match self.get_filters().await {
-            Ok(all_f) => all_f,
-            Err(e) => {
-                return Err(e);
-            }
-        };
+        let all_filters = self.get_filters().await?;
 
         // collect filters associated with the logstream being deleted
         let filters_for_stream: Vec<Filter> = all_filters
@@ -569,13 +564,11 @@
 
         for filter in filters_for_stream.iter() {
             self.delete_filter(filter).await?;
-            
             if let Some(filter_id) = filter.filter_id.as_ref() {
                 FILTERS.delete_filter(filter_id).await;
             }
         }
 
-        return Ok(true);
+        Ok(true)
     }
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d812994 and 6e96d09.

📒 Files selected for processing (2)
  • src/metastore/metastores/object_store_metastore.rs
  • src/parseable/mod.rs
🧰 Additional context used
🧠 Learnings (8)
📓 Common learnings
Learnt from: de-sh
Repo: parseablehq/parseable PR: 1185
File: src/handlers/http/logstream.rs:255-261
Timestamp: 2025-02-14T09:49:25.818Z
Learning: In Parseable's logstream handlers, stream existence checks must be performed for both query and standalone modes. The pattern `!PARSEABLE.streams.contains(&stream_name) && (PARSEABLE.options.mode != Mode::Query || !PARSEABLE.create_stream_and_schema_from_storage(&stream_name).await?)` ensures proper error handling in both modes.
📚 Learning: 2025-10-28T02:10:41.140Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1453
File: src/parseable/mod.rs:397-400
Timestamp: 2025-10-28T02:10:41.140Z
Learning: In Parseable enterprise deployments with multiple query nodes, hot tier configuration must be persisted in object storage so that newly started query nodes can fetch and synchronize the hot tier settings at startup (file: src/parseable/mod.rs, function: create_stream_and_schema_from_storage).

Applied to files:

  • src/metastore/metastores/object_store_metastore.rs
📚 Learning: 2025-07-28T17:10:39.448Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1392
File: src/migration/stream_metadata_migration.rs:303-322
Timestamp: 2025-07-28T17:10:39.448Z
Learning: In Parseable's migration system (src/migration/stream_metadata_migration.rs), each migration function updates the metadata to the current latest format using CURRENT_OBJECT_STORE_VERSION and CURRENT_SCHEMA_VERSION constants, rather than producing incremental versions. For example, v5_v6 function produces v7 format output when these constants are set to "v7", not v6 format.

Applied to files:

  • src/metastore/metastores/object_store_metastore.rs
📚 Learning: 2025-09-18T09:52:07.554Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1415
File: src/storage/object_storage.rs:173-177
Timestamp: 2025-09-18T09:52:07.554Z
Learning: In Parseable's upload system (src/storage/object_storage.rs), the update_storage_metrics function can safely use path.metadata().map_err() to fail on local file metadata read failures because parquet validation (validate_uploaded_parquet_file) ensures file integrity before this step, and the system guarantees local staging files remain accessible throughout the upload flow.

Applied to files:

  • src/metastore/metastores/object_store_metastore.rs
📚 Learning: 2025-09-06T04:26:17.191Z
Learnt from: parmesant
Repo: parseablehq/parseable PR: 1424
File: src/enterprise/utils.rs:65-72
Timestamp: 2025-09-06T04:26:17.191Z
Learning: In Parseable's metastore implementation, MetastoreError::to_detail() returns a MetastoreErrorDetail struct (not a string), which contains structured error information including operation, message, stream_name, and other contextual fields. This struct is designed to be boxed in ObjectStorageError::MetastoreError(Box<MetastoreErrorDetail>).

Applied to files:

  • src/metastore/metastores/object_store_metastore.rs
📚 Learning: 2025-02-14T09:49:25.818Z
Learnt from: de-sh
Repo: parseablehq/parseable PR: 1185
File: src/handlers/http/logstream.rs:255-261
Timestamp: 2025-02-14T09:49:25.818Z
Learning: In Parseable's logstream handlers, stream existence checks must be performed for both query and standalone modes. The pattern `!PARSEABLE.streams.contains(&stream_name) && (PARSEABLE.options.mode != Mode::Query || !PARSEABLE.create_stream_and_schema_from_storage(&stream_name).await?)` ensures proper error handling in both modes.

Applied to files:

  • src/metastore/metastores/object_store_metastore.rs
  • src/parseable/mod.rs
📚 Learning: 2025-10-21T02:22:24.403Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1448
File: src/parseable/mod.rs:419-432
Timestamp: 2025-10-21T02:22:24.403Z
Learning: In Parseable's internal stream creation (`create_internal_stream_if_not_exists` in `src/parseable/mod.rs`), errors should not propagate to fail server initialization. The function creates both pmeta and pbilling internal streams, and failures are logged but the function always returns `Ok(())` to ensure server startup resilience. Individual stream creation failures should not prevent syncing of successfully created streams.

Applied to files:

  • src/parseable/mod.rs
📚 Learning: 2025-09-05T09:18:44.813Z
Learnt from: parmesant
Repo: parseablehq/parseable PR: 1425
File: src/query/mod.rs:484-495
Timestamp: 2025-09-05T09:18:44.813Z
Learning: In the Parseable system, stream names and column names cannot contain quotes, which eliminates SQL injection concerns when interpolating these identifiers directly into SQL queries in src/query/mod.rs.

Applied to files:

  • src/parseable/mod.rs
🧬 Code graph analysis (2)
src/metastore/metastores/object_store_metastore.rs (2)
src/users/filters.rs (1)
  • migrate_v1_v2 (181-193)
src/metastore/metastore_traits.rs (1)
  • delete_zombie_filters (111-111)
src/parseable/mod.rs (1)
src/validator.rs (1)
  • stream_name (36-71)
🔇 Additional comments (2)
src/parseable/mod.rs (1)

238-241: LGTM! Clean helper method.

The synchronous in-memory stream existence check is straightforward and fits the need for the metastore filter cleanup operation. The naming clearly conveys its purpose.

src/metastore/metastores/object_store_metastore.rs (1)

59-59: LGTM! Import supports in-memory cache cleanup.

The addition of FILTERS to imports is necessary for the in-memory filter deletion at line 574.

coderabbitai bot previously approved these changes Dec 24, 2025
@nitisht nitisht requested a review from parmesant December 27, 2025 09:39

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
src/handlers/http/modal/utils/logstream_utils.rs (2)

112-125: Consider moving in-memory cache deletion inside the success branch.

Currently, if delete_filter fails on the metastore, the warning is logged but the in-memory FILTERS cache is still cleaned up (lines 122-124). This creates an inconsistency where the filter exists in persistent storage but not in memory—on restart, the filter would reappear.

Consider deleting from the cache only when metastore deletion succeeds:

🔎 Proposed fix
     for filter in filters_for_stream.iter() {
-        if let Err(err) = PARSEABLE.metastore.delete_filter(filter).await {
+        match PARSEABLE.metastore.delete_filter(filter).await {
+            Ok(_) => {
+                if let Some(filter_id) = filter.filter_id.as_ref() {
+                    FILTERS.delete_filter(filter_id).await;
+                }
+            }
+            Err(err) => {
             tracing::warn!(
                 "failed to delete the zombie filter: {} \nfrom storage. For logstream: {}\nError: {:#?}", 
                 filter.filter_name,
                 stream_name,
                 err 
             );
-        }
-
-        if let Some(filter_id) = filter.filter_id.as_ref() {
-            FILTERS.delete_filter(filter_id).await;
+            }
         }
     }

127-127: Remove explicit return for idiomatic Rust.

The trailing return keyword is unnecessary when the expression is the last statement in the function.

🔎 Proposed fix
-    return Ok(());
+    Ok(())
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6e96d09 and ffd81aa.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
  • src/handlers/http/logstream.rs
  • src/handlers/http/modal/utils/logstream_utils.rs
🧰 Additional context used
🧠 Learnings (7)
📚 Learning: 2025-06-16T02:04:58.990Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1346
File: src/handlers/http/health_check.rs:81-90
Timestamp: 2025-06-16T02:04:58.990Z
Learning: In the shutdown function in src/handlers/http/health_check.rs, the design approach is to log errors from sync operations rather than propagate them. This is intentional because the shutdown function is called on SIGTERM/SIGINT signals, and the goal is to perform best-effort cleanup (syncing pending files to object storage) while allowing the shutdown to proceed regardless of sync failures. Logging provides debugging information without blocking the shutdown process.

Applied to files:

  • src/handlers/http/logstream.rs
📚 Learning: 2025-02-14T09:49:25.818Z
Learnt from: de-sh
Repo: parseablehq/parseable PR: 1185
File: src/handlers/http/logstream.rs:255-261
Timestamp: 2025-02-14T09:49:25.818Z
Learning: In Parseable's logstream handlers, stream existence checks must be performed for both query and standalone modes. The pattern `!PARSEABLE.streams.contains(&stream_name) && (PARSEABLE.options.mode != Mode::Query || !PARSEABLE.create_stream_and_schema_from_storage(&stream_name).await?)` ensures proper error handling in both modes.

Applied to files:

  • src/handlers/http/logstream.rs
📚 Learning: 2025-10-21T02:22:24.403Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1448
File: src/parseable/mod.rs:419-432
Timestamp: 2025-10-21T02:22:24.403Z
Learning: In Parseable's internal stream creation (`create_internal_stream_if_not_exists` in `src/parseable/mod.rs`), errors should not propagate to fail server initialization. The function creates both pmeta and pbilling internal streams, and failures are logged but the function always returns `Ok(())` to ensure server startup resilience. Individual stream creation failures should not prevent syncing of successfully created streams.

Applied to files:

  • src/handlers/http/logstream.rs
📚 Learning: 2025-10-28T02:10:41.140Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1453
File: src/parseable/mod.rs:397-400
Timestamp: 2025-10-28T02:10:41.140Z
Learning: In Parseable enterprise deployments with multiple query nodes, hot tier configuration must be persisted in object storage so that newly started query nodes can fetch and synchronize the hot tier settings at startup (file: src/parseable/mod.rs, function: create_stream_and_schema_from_storage).

Applied to files:

  • src/handlers/http/logstream.rs
📚 Learning: 2025-08-25T01:31:41.786Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1415
File: src/metadata.rs:63-68
Timestamp: 2025-08-25T01:31:41.786Z
Learning: The TOTAL_EVENTS_INGESTED_DATE, TOTAL_EVENTS_INGESTED_SIZE_DATE, and TOTAL_EVENTS_STORAGE_SIZE_DATE metrics in src/metadata.rs and src/storage/object_storage.rs are designed to track total events across all streams, not per-stream. They use labels [origin, parsed_date] to aggregate by format and date, while per-stream metrics use [stream_name, origin, parsed_date] labels.

Applied to files:

  • src/handlers/http/logstream.rs
  • src/handlers/http/modal/utils/logstream_utils.rs
📚 Learning: 2025-10-20T17:48:53.444Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1448
File: src/handlers/http/cluster/mod.rs:1370-1400
Timestamp: 2025-10-20T17:48:53.444Z
Learning: In src/handlers/http/cluster/mod.rs, the billing metrics processing logic should NOT accumulate counter values from multiple Prometheus samples with the same labels. The intended behavior is to convert each received counter from nodes into individual events for ingestion, using `.insert()` to store the counter value directly.

Applied to files:

  • src/handlers/http/logstream.rs
📚 Learning: 2025-08-25T01:32:25.980Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1415
File: src/metrics/mod.rs:163-173
Timestamp: 2025-08-25T01:32:25.980Z
Learning: The TOTAL_EVENTS_INGESTED_DATE, TOTAL_EVENTS_INGESTED_SIZE_DATE, and TOTAL_EVENTS_STORAGE_SIZE_DATE metrics in src/metrics/mod.rs are intentionally designed to track global totals across all streams for a given date, using labels ["format", "date"] rather than per-stream labels. This is the correct design for global aggregation purposes.

Applied to files:

  • src/handlers/http/logstream.rs
🧬 Code graph analysis (2)
src/handlers/http/logstream.rs (1)
src/handlers/http/modal/utils/logstream_utils.rs (1)
  • delete_zombie_filters (96-128)
src/handlers/http/modal/utils/logstream_utils.rs (2)
src/parseable/mod.rs (1)
  • storage (284-286)
src/storage/mod.rs (1)
  • from (182-188)
🔇 Additional comments (4)
src/handlers/http/logstream.rs (2)

23-23: LGTM!

Import correctly added for the new zombie filters cleanup utility.


83-90: LGTM with a note on failure semantics.

The zombie filter cleanup is correctly integrated after the stream is removed from memory. The error conversion to Anyhow loses type information but is acceptable for this use case.

Note: If this call fails after the stream has already been deleted from storage/memory (lines 61-79), the user receives an error response even though the stream deletion effectively succeeded. This is a reasonable trade-off since filter cleanup errors are logged with warnings and the function continues processing remaining filters.
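
The trade-off above can be illustrated with a simplified sketch (all names here are stand-ins, not the PR's actual handler): the stream removal commits first, so a later cleanup error surfaces as a failed request even though the stream is already gone.

```rust
// Illustrative only: a stand-in for the delete-stream handler. The stream is
// removed before cleanup runs, so a cleanup failure produces an error
// response for a deletion that has, in effect, already succeeded.
pub fn delete_stream(
    streams: &mut Vec<String>,
    name: &str,
    cleanup_fails: bool,
) -> Result<(), String> {
    // Stream removal happens first and is not rolled back.
    streams.retain(|s| s.as_str() != name);
    if cleanup_fails {
        // Mirrors the `?` in the real handler: the error propagates to the caller.
        return Err(format!("zombie filter cleanup failed for {name}"));
    }
    Ok(())
}
```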

src/handlers/http/modal/utils/logstream_utils.rs (2)

21-32: LGTM!

Imports are correctly updated to include the necessary types for the zombie filters cleanup feature.


82-93: LGTM!

Error types are well-defined using thiserror. The StreamFoundForZombieFilters wrapper provides clear error messaging, and the enum properly distinguishes between the stream-still-exists case and metastore errors.

coderabbitai[bot]
coderabbitai bot previously approved these changes Dec 28, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
src/handlers/http/modal/utils/logstream_utils.rs (1)

114-119: Prefer structured logging over embedded newlines.

The log message uses \n characters which may not render well in structured logging systems (JSON logs, etc.). Using tracing's field syntax is more portable.

🔎 Suggested structured logging format
```diff
-            tracing::warn!(
-                "failed to delete the zombie filter: {} \nfrom storage. For logstream: {}\nError: {:#?}",
-                filter.filter_name,
-                stream_name,
-                err
-            );
+            tracing::warn!(
+                filter_name = %filter.filter_name,
+                stream_name = %stream_name,
+                error = ?err,
+                "failed to delete zombie filter from storage"
+            );
```
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ffd81aa and fbc3b52.

📒 Files selected for processing (2)
  • src/handlers/http/logstream.rs
  • src/handlers/http/modal/utils/logstream_utils.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/handlers/http/logstream.rs
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-08-25T01:31:41.786Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1415
File: src/metadata.rs:63-68
Timestamp: 2025-08-25T01:31:41.786Z
Learning: The TOTAL_EVENTS_INGESTED_DATE, TOTAL_EVENTS_INGESTED_SIZE_DATE, and TOTAL_EVENTS_STORAGE_SIZE_DATE metrics in src/metadata.rs and src/storage/object_storage.rs are designed to track total events across all streams, not per-stream. They use labels [origin, parsed_date] to aggregate by format and date, while per-stream metrics use [stream_name, origin, parsed_date] labels.

Applied to files:

  • src/handlers/http/modal/utils/logstream_utils.rs
📚 Learning: 2025-02-14T09:49:25.818Z
Learnt from: de-sh
Repo: parseablehq/parseable PR: 1185
File: src/handlers/http/logstream.rs:255-261
Timestamp: 2025-02-14T09:49:25.818Z
Learning: In Parseable's logstream handlers, stream existence checks must be performed for both query and standalone modes. The pattern `!PARSEABLE.streams.contains(&stream_name) && (PARSEABLE.options.mode != Mode::Query || !PARSEABLE.create_stream_and_schema_from_storage(&stream_name).await?)` ensures proper error handling in both modes.

Applied to files:

  • src/handlers/http/modal/utils/logstream_utils.rs
🧬 Code graph analysis (1)
src/handlers/http/modal/utils/logstream_utils.rs (2)
src/parseable/mod.rs (1)
  • storage (284-286)
src/storage/mod.rs (1)
  • from (182-188)
🔇 Additional comments (3)
src/handlers/http/modal/utils/logstream_utils.rs (3)

21-32: Imports look correct for the new functionality.

The added imports support the new delete_zombie_filters function and error types appropriately.


82-93: Error types are well-defined.

The error enum and wrapper struct are appropriately structured. Using #[from] for automatic conversions is idiomatic.


104-110: The metastore trait does not support filtering by stream_name. The only available method is get_filters() which returns all filters regardless of stream. The current client-side filtering approach in lines 107-110 is the only viable implementation given the metastore API design.

Likely an incorrect or invalid review comment.

Comment on lines 112 to 125
```rust
for filter in filters_for_stream.iter() {
    if let Err(err) = PARSEABLE.metastore.delete_filter(filter).await {
        tracing::warn!(
            "failed to delete the zombie filter: {} \nfrom storage. For logstream: {}\nError: {:#?}",
            filter.filter_name,
            stream_name,
            err
        );
    } else { // ok: have the filter removed from memory only when the storage deletion succeeds
        if let Some(filter_id) = filter.filter_id.as_ref() {
            FILTERS.delete_filter(filter_id).await;
        }
    }
}
```

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```shell
# Find Filter struct definition
rg -n "struct Filter" -A 10 --type rust
```

Repository: parseablehq/parseable

Length of output: 4126


🏁 Script executed:

```shell
# Check how delete_zombie_filters is called
rg -n "delete_zombie_filters" --type rust -B 2 -A 2
```


Length of output: 1351


🏁 Script executed:

```shell
# Find where filter_id is used and if it's ever None
rg -n "filter_id" --type rust -C 3 | head -150
```


Length of output: 10295


🏁 Script executed:

```shell
# Find complete delete_zombie_filters implementation
rg -n "pub async fn delete_zombie_filters" -A 30 --type rust src/handlers/http/modal/utils/logstream_utils.rs
```


Length of output: 1442


🏁 Script executed:

```shell
# Find get_filters_for_stream implementation
rg -n "get_filters_for_stream" --type rust -A 10
```


Length of output: 47


🏁 Script executed:

```shell
# Check ZombieFiltersDeletionError definition
rg -n "enum ZombieFiltersDeletionError\|struct ZombieFiltersDeletionError" --type rust -A 5
```


Length of output: 47


🏁 Script executed:

```shell
# Search for ZombieFiltersDeletionError
rg -n "ZombieFiltersDeletionError" --type rust
```


Length of output: 422


🏁 Script executed:

```shell
# Check the file containing delete_zombie_filters for error definitions
head -100 src/handlers/http/modal/utils/logstream_utils.rs
```


Length of output: 3705


🏁 Script executed:

```shell
# Check how PARSEABLE.metastore.delete_filter is defined and what it returns
rg -n "fn delete_filter" --type rust -A 5
```


Length of output: 1590


🏁 Script executed:

```shell
# Check how filters are loaded from metastore and if filter_id is always set
rg -n "get_filters\(" --type rust -A 10 | head -80
```


Length of output: 3834


🏁 Script executed:

```shell
# Check ObjectStoreMetastore implementation of get_filters
rg -n "async fn get_filters" --type rust -A 20
```


Length of output: 3827


🏁 Script executed:

```shell
# Get the complete get_filters implementation
sed -n '457,530p' src/metastore/metastores/object_store_metastore.rs
```


Length of output: 3410


Partial filter deletion failures are silently swallowed, creating potential cleanup inconsistencies during stream deletion.

The function always returns Ok(()) even when individual filter deletions fail (line 113-119). While failures are logged, the caller (logstream.rs:85-87) uses the ? operator expecting error propagation on critical cleanup failures. This mismatch means incomplete filter cleanup during stream deletion will silently succeed.

Additionally, if a filter has filter_id = None, the metastore deletion succeeds (line 113) but the in-memory cache is never updated (line 121-123 only executes for Some), creating a storage/memory inconsistency.

Consider:

  1. Track and propagate deletion failures so the caller can handle incomplete cleanup appropriately
  2. Add a warning when filter_id is None after successful storage deletion
  3. Use structured logging fields instead of \n in the warning message
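
A minimal sketch of suggestion 1 (illustrative only; the PR's real code goes through `PARSEABLE.metastore` and `tracing`, and the helper name here is a stand-in): attempt every deletion, count both outcomes, and hand the counts back so the caller can see a partial cleanup.

```rust
// Hypothetical report type and cleanup loop; counts are unsigned because they
// can never be negative.
#[derive(Debug)]
pub struct CleanupReport {
    pub ok_deletions: u32,
    pub failed_deletions: u32,
}

// `delete` stands in for the async metastore call; failures are logged and
// counted instead of being silently dropped.
pub fn cleanup<F>(filters: &[&str], mut delete: F) -> CleanupReport
where
    F: FnMut(&str) -> Result<(), String>,
{
    let mut report = CleanupReport { ok_deletions: 0, failed_deletions: 0 };
    for &filter in filters {
        match delete(filter) {
            Ok(()) => report.ok_deletions += 1,
            Err(err) => {
                // In the real code this would be tracing::warn! with fields.
                eprintln!("failed to delete zombie filter {filter}: {err}");
                report.failed_deletions += 1;
            }
        }
    }
    report
}
```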


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
src/handlers/http/modal/utils/logstream_utils.rs (1)

102-119: Critical: Partial deletion failures still treated as success.

This is the same core issue flagged in the previous review. The function returns Ok(ZombieResourceCleanupOk { failed_deletions: ... }) even when some deletions fail. Since the caller uses .await?, any result with failed_deletions > 0 is treated as a complete success, leaving orphaned filters in storage without signaling the failure.

Consider propagating an error when failed_deletions > 0, or use a custom error type that forces the caller to handle partial failures explicitly.

Additionally, when filter_id is None after successful storage deletion (line 113), consider logging a warning to flag potential data inconsistencies.
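
One way to act on this, sketched under the assumption that the counts use the suggested unsigned type (this is not the PR's code; the real implementation would wrap this in its own MetastoreError variant): convert a non-zero failure count into an Err so the caller's `?` actually fires on incomplete cleanup.

```rust
// Struct name mirrors the PR; the conversion function is hypothetical.
#[derive(Debug)]
pub struct ZombieResourceCleanupOk {
    pub ok_deletions: u32,
    pub failed_deletions: u32,
}

// Partial cleanup becomes an error the caller must handle explicitly.
pub fn into_result(r: ZombieResourceCleanupOk) -> Result<ZombieResourceCleanupOk, String> {
    if r.failed_deletions > 0 {
        Err(format!(
            "zombie filter cleanup incomplete: {} deleted, {} failed",
            r.ok_deletions, r.failed_deletions
        ))
    } else {
        Ok(r)
    }
}
```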

📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fbc3b52 and 332485d.

📒 Files selected for processing (3)
  • src/handlers/http/logstream.rs
  • src/handlers/http/modal/utils/logstream_utils.rs
  • src/metastore/mod.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/handlers/http/logstream.rs
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-09-06T04:26:17.191Z
Learnt from: parmesant
Repo: parseablehq/parseable PR: 1424
File: src/enterprise/utils.rs:65-72
Timestamp: 2025-09-06T04:26:17.191Z
Learning: In Parseable's metastore implementation, MetastoreError::to_detail() returns a MetastoreErrorDetail struct (not a string), which contains structured error information including operation, message, stream_name, and other contextual fields. This struct is designed to be boxed in ObjectStorageError::MetastoreError(Box<MetastoreErrorDetail>).

Applied to files:

  • src/metastore/mod.rs
📚 Learning: 2025-10-21T02:22:24.403Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1448
File: src/parseable/mod.rs:419-432
Timestamp: 2025-10-21T02:22:24.403Z
Learning: In Parseable's internal stream creation (`create_internal_stream_if_not_exists` in `src/parseable/mod.rs`), errors should not propagate to fail server initialization. The function creates both pmeta and pbilling internal streams, and failures are logged but the function always returns `Ok(())` to ensure server startup resilience. Individual stream creation failures should not prevent syncing of successfully created streams.

Applied to files:

  • src/metastore/mod.rs
  • src/handlers/http/modal/utils/logstream_utils.rs
📚 Learning: 2025-02-14T09:49:25.818Z
Learnt from: de-sh
Repo: parseablehq/parseable PR: 1185
File: src/handlers/http/logstream.rs:255-261
Timestamp: 2025-02-14T09:49:25.818Z
Learning: In Parseable's logstream handlers, stream existence checks must be performed for both query and standalone modes. The pattern `!PARSEABLE.streams.contains(&stream_name) && (PARSEABLE.options.mode != Mode::Query || !PARSEABLE.create_stream_and_schema_from_storage(&stream_name).await?)` ensures proper error handling in both modes.

Applied to files:

  • src/handlers/http/modal/utils/logstream_utils.rs
📚 Learning: 2025-06-16T02:04:58.990Z
Learnt from: nikhilsinhaparseable
Repo: parseablehq/parseable PR: 1346
File: src/handlers/http/health_check.rs:81-90
Timestamp: 2025-06-16T02:04:58.990Z
Learning: In the shutdown function in src/handlers/http/health_check.rs, the design approach is to log errors from sync operations rather than propagate them. This is intentional because the shutdown function is called on SIGTERM/SIGINT signals, and the goal is to perform best-effort cleanup (syncing pending files to object storage) while allowing the shutdown to proceed regardless of sync failures. Logging provides debugging information without blocking the shutdown process.

Applied to files:

  • src/handlers/http/modal/utils/logstream_utils.rs
🧬 Code graph analysis (1)
src/handlers/http/modal/utils/logstream_utils.rs (1)
src/parseable/mod.rs (1)
  • storage (284-286)
🔇 Additional comments (5)
src/metastore/mod.rs (3)

58-59: LGTM! Error variant is well-structured.

The new error variant appropriately captures the context needed for zombie resource cleanup scenarios, with clear naming and relevant fields.


149-162: LGTM! Error detail mapping is consistent.

The implementation correctly follows the established pattern for error detail construction, properly populating all fields including structured metadata.


174-174: LGTM! Status code is appropriate.

Returning 500 INTERNAL_SERVER_ERROR is correct for this scenario, as the presence of a stream during zombie resource cleanup indicates an internal logic error in the deletion sequence.

src/handlers/http/modal/utils/logstream_utils.rs (2)

22-31: LGTM! Imports are appropriate.

All newly added imports are necessary for the delete_zombie_filters implementation.


82-89: LGTM! Guard clause is appropriate.

The stream existence check correctly prevents zombie filter cleanup when the stream is still present, with proper error propagation.
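
The guard can be pictured with a simplified stand-in (the real check consults `PARSEABLE.streams` and returns a MetastoreError, not a slice and a String): zombie cleanup is only attempted once the stream itself is gone.

```rust
// Hypothetical guard: refuse cleanup while the stream still exists, since
// its filters are not "zombies" yet.
pub fn guard_zombie_cleanup(existing_streams: &[&str], stream_name: &str) -> Result<(), String> {
    if existing_streams.contains(&stream_name) {
        return Err(format!("stream {stream_name} still exists; refusing zombie cleanup"));
    }
    Ok(())
}
```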

Comment on lines +127 to +131
```rust
#[derive(Debug)]
pub struct ZombieResourceCleanupOk {
    pub ok_deletions: i32,
    pub failed_deletions: i32,
}
```

🛠️ Refactor suggestion | 🟠 Major

Use unsigned integer types for deletion counts.

The ok_deletions and failed_deletions fields represent counts and cannot be negative. Use u32 or usize instead of i32.

🔎 Proposed fix
```diff
 #[derive(Debug)]
 pub struct ZombieResourceCleanupOk {
-    pub ok_deletions: i32,
-    pub failed_deletions: i32,
+    pub ok_deletions: u32,
+    pub failed_deletions: u32,
 }
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
```rust
#[derive(Debug)]
pub struct ZombieResourceCleanupOk {
    pub ok_deletions: i32,
    pub failed_deletions: i32,
}
```
```rust
#[derive(Debug)]
pub struct ZombieResourceCleanupOk {
    pub ok_deletions: u32,
    pub failed_deletions: u32,
}
```
🤖 Prompt for AI Agents
In src/handlers/http/modal/utils/logstream_utils.rs around lines 127 to 131, the
struct ZombieResourceCleanupOk uses signed i32 for ok_deletions and
failed_deletions even though these are non-negative counts; change their types
to an unsigned integer (prefer u32 for stable cross-platform size or usize if
you need pointer-sized indexing) and update any code that constructs or reads
these fields to use the new type (adjust literal suffixes, casts, or function
signatures that accept/return these fields accordingly).
