
Conversation


@sawka sawka commented Dec 16, 2025

No description provided.


coderabbitai bot commented Dec 16, 2025

Walkthrough

This pull request introduces diagnostic ping functionality, refactors AI mode handling with telemetry gating, and updates related documentation. Changes include:

  • Adding a diagnostic ping endpoint and sending mechanism in the backend, with environment variable support
  • Implementing network status checks via RPC: a new NetworkOnlineCommand wired through the RPC interface, server, and client components
  • Refactoring AI mode configuration and validation logic with telemetry prerequisites
  • Updating the AI panel UI to gate Wave AI cloud features behind telemetry requirements
  • Documenting the new diagnostic ping system and the BYOK/custom-model telemetry bypass
  • Extending telemetry event properties

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

  • Duplicate code detected: NetworkOnlineCommand method appears twice in frontend/app/store/wshclientapi.ts — verify whether the duplication is intentional
  • Telemetry gating logic: Multiple files implement telemetry prerequisite checks for Wave AI cloud modes; consistency across frontend/app/aipanel/aipanel.tsx, pkg/aiusechat/usechat.go, and frontend/app/aipanel/aimode.tsx needs validation
  • AI mode validation refactoring: Mode fallback logic removed in pkg/aiusechat/usechat-mode.go; verify error handling when empty/invalid modes provided
  • Component refactor with state centralization: AIErrorMessage in frontend/app/aipanel/aipanel.tsx changed from prop-based to singleton-model-based; validate error state flow and clearing behavior
  • New async pattern: ConfigChangeModeFixer in frontend/app/aipanel/aipanel.tsx subscribes to atoms and triggers fixes; verify atom subscription lifecycle and side-effects
  • Backend diagnostic loop wiring: Integration of diagnosticLoop goroutine in cmd/server/main-server.go startup sequence; check for race conditions and WAVETERM_NOPING guard behavior
  • Cross-layer RPC integration: New NetworkOnlineCommand wired through multiple layers (RPC interface, server handler, client wrapper) — trace complete call path

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Description check - ⚠️ Warning: No pull request description was provided, making it impossible to assess whether the description relates to the changeset. Resolution: add a description that explains the changes, their purpose, and any relevant context for reviewers.
  • Docstring Coverage - ⚠️ Warning: Docstring coverage is 13.33%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (1 passed)
  • Title check - ✅ Passed: The title accurately describes the main purpose of the changeset: enabling the Wave AI panel without requiring telemetry when using BYOK (Bring Your Own Key) or local models.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch sawka/diag

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
pkg/aiusechat/openai/openai-backend.go (1)

27-43: Custom endpoint hostnames leak in error messages—restore endpoint-agnostic sanitization.

The function now hardcodes uctypes.DefaultAIEndpoint for sanitization, losing the ability to sanitize errors from custom endpoints. Since the codebase supports custom endpoints (via AiBaseURL/BaseURL fields) and distinguishes them in error handling (see openai-convertmessage.go:159), custom endpoint hostnames will appear unredacted in error messages—a regression for BYOK/local model support.

Restore the original behavior: pass the actual endpoint being used to this function so sanitization applies uniformly to all endpoints, not just the default Wave cloud endpoint.
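The endpoint-agnostic sanitization the comment asks to restore can be sketched as a helper that redacts whichever endpoint was actually used, rather than a hardcoded default. The function name and signature here are illustrative, not Wave's actual API:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeEndpointError redacts the endpoint that was actually used for the
// request from an error string, so custom/BYOK hostnames never leak into
// user-visible errors. Illustrative sketch only.
func sanitizeEndpointError(errMsg, endpoint string) string {
	if endpoint == "" {
		return errMsg
	}
	return strings.ReplaceAll(errMsg, endpoint, "[redacted-endpoint]")
}

func main() {
	msg := "POST https://my-local-llm.internal:8080/v1/chat failed: timeout"
	fmt.Println(sanitizeEndpointError(msg, "https://my-local-llm.internal:8080"))
}
```

Passing the in-use endpoint through to this helper keeps sanitization uniform whether the request went to the default Wave cloud endpoint or a custom one.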

🧹 Nitpick comments (6)
cmd/server/main-server.go (1)

155-174: Check the return value of SendDiagnosticPing.

The function returns true regardless of whether wcloud.SendDiagnosticPing succeeds or fails. Since SendDiagnosticPing returns an error, this should be checked. If the ping request fails (network issues after the online check, server error), lastSentDate will still be updated in diagnosticLoop, preventing retry for the entire day.

 func sendDiagnosticPing() bool {
 	ctx, cancelFn := context.WithTimeout(context.Background(), 5*time.Second)
 	defer cancelFn()
 
 	rpcClient := wshclient.GetBareRpcClient()
 	isOnline, err := wshclient.NetworkOnlineCommand(rpcClient, &wshrpc.RpcOpts{Route: "electron", Timeout: 2000})
 	if err != nil || !isOnline {
 		return false
 	}
 	clientData, err := wstore.DBGetSingleton[*waveobj.Client](ctx)
 	if err != nil {
 		return false
 	}
 	if clientData == nil {
 		return false
 	}
 	usageTelemetry := telemetry.IsTelemetryEnabled()
-	wcloud.SendDiagnosticPing(ctx, clientData.OID, usageTelemetry)
-	return true
+	err = wcloud.SendDiagnosticPing(ctx, clientData.OID, usageTelemetry)
+	if err != nil {
+		log.Printf("[error] sending diagnostic ping: %v\n", err)
+		return false
+	}
+	return true
 }
pkg/wcloud/wcloud.go (1)

295-316: Missing X-PromptAPIUrl header affects error message clarity.

The doRequest function uses req.Header.Get("X-PromptAPIUrl") for logging and error messages. Without this header, error messages will display an empty string (e.g., error contacting wcloud "" service: ...).

Consider adding the header for consistent logging:

 	req.Header.Set("Content-Type", "application/json")
+	req.Header.Set("X-PromptAPIUrl", apiUrl)
 	req.Close = true
frontend/app/aipanel/aimode.tsx (1)

203-210: Consider cleanup for the setTimeout callback.

The setTimeout in handleEnableTelemetry could fire after the component unmounts, potentially causing a warning or no-op call to model.focusInput(). While not critical (model is a singleton), consider using a ref or cleanup pattern for robustness.

+    const isMountedRef = useRef(true);
+
+    useEffect(() => {
+        return () => {
+            isMountedRef.current = false;
+        };
+    }, []);
+
     const handleEnableTelemetry = () => {
         fireAndForget(async () => {
             await RpcApi.WaveAIEnableTelemetryCommand(TabRpcClient);
             setTimeout(() => {
-                model.focusInput();
+                if (isMountedRef.current) {
+                    model.focusInput();
+                }
             }, 100);
         });
     };
frontend/app/aipanel/aipanel.tsx (1)

232-244: ConfigChangeModeFixer handles mode validation on config changes.

This component ensures that when telemetry or AI mode configs change, the current mode is validated and potentially corrected. The implementation is clean, though including model in the dependency array is unnecessary since it's a singleton that never changes.

     useEffect(() => {
         model.fixModeAfterConfigChange();
-    }, [telemetryEnabled, aiModeConfigs, model]);
+    }, [telemetryEnabled, aiModeConfigs]);
frontend/app/aipanel/waveai-model.tsx (2)

164-164: Consider gating window attachment behind a dev flag.

Exposing WaveAIModel on window is useful for debugging but adds to the global namespace in production. Consider wrapping this in a development check if not already handled elsewhere.


409-417: Consider reusing getRTInfo() internally.

fixModeAfterConfigChange() duplicates the RPC call that getRTInfo() already encapsulates. Consider using getRTInfo() for consistency and reduced duplication.

 async fixModeAfterConfigChange(): Promise<void> {
-    const rtInfo = await RpcApi.GetRTInfoCommand(TabRpcClient, {
-        oref: this.orefContext,
-    });
+    const rtInfo = await this.getRTInfo();
     const mode = rtInfo?.["waveai:mode"];
     if (mode == null || !this.isValidMode(mode)) {
         this.setAIModeToDefault();
     }
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8b37bc6 and 71afc42.

📒 Files selected for processing (28)
  • Taskfile.yml (4 hunks)
  • cmd/server/main-server.go (6 hunks)
  • docs/docs/faq.mdx (2 hunks)
  • docs/docs/telemetry.mdx (1 hunks)
  • docs/docs/waveai-modes.mdx (1 hunks)
  • emain/emain-wsh.ts (2 hunks)
  • frontend/app/aipanel/ai-utils.ts (2 hunks)
  • frontend/app/aipanel/aimode.tsx (7 hunks)
  • frontend/app/aipanel/aipanel-contextmenu.ts (1 hunks)
  • frontend/app/aipanel/aipanel.tsx (10 hunks)
  • frontend/app/aipanel/telemetryrequired.tsx (1 hunks)
  • frontend/app/aipanel/waveai-model.tsx (7 hunks)
  • frontend/app/store/wshclientapi.ts (1 hunks)
  • frontend/app/tab/tab.tsx (2 hunks)
  • frontend/types/gotypes.d.ts (2 hunks)
  • pkg/aiusechat/chatstore/chatstore.go (1 hunks)
  • pkg/aiusechat/openai/openai-backend.go (2 hunks)
  • pkg/aiusechat/uctypes/uctypes.go (1 hunks)
  • pkg/aiusechat/usechat-mode.go (0 hunks)
  • pkg/aiusechat/usechat.go (3 hunks)
  • pkg/service/clientservice/clientservice.go (0 hunks)
  • pkg/telemetry/telemetrydata/telemetrydata.go (1 hunks)
  • pkg/wcloud/wcloud.go (5 hunks)
  • pkg/wcore/block.go (2 hunks)
  • pkg/wcore/wcore.go (2 hunks)
  • pkg/wshrpc/wshclient/wshclient.go (1 hunks)
  • pkg/wshrpc/wshrpctypes.go (2 hunks)
  • pkg/wshrpc/wshserver/wshserver.go (0 hunks)
💤 Files with no reviewable changes (3)
  • pkg/aiusechat/usechat-mode.go
  • pkg/service/clientservice/clientservice.go
  • pkg/wshrpc/wshserver/wshserver.go
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-01-22T01:28:41.417Z
Learnt from: esimkowitz
Repo: wavetermdev/waveterm PR: 1790
File: pkg/remote/fileshare/wshfs/wshfs.go:122-122
Timestamp: 2025-01-22T01:28:41.417Z
Learning: The RpcClient in pkg/remote/fileshare/wshfs/wshfs.go is initialized and handled downstream by either main-server or wshcmd-connserver, as documented in the package comment.

Applied to files:

  • pkg/wshrpc/wshclient/wshclient.go
  • cmd/server/main-server.go
📚 Learning: 2025-10-15T03:21:02.229Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.

Applied to files:

  • pkg/aiusechat/usechat.go
📚 Learning: 2025-10-14T06:30:54.763Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2430
File: frontend/app/aipanel/aimessage.tsx:137-144
Timestamp: 2025-10-14T06:30:54.763Z
Learning: In `frontend/app/aipanel/aimessage.tsx`, the `AIToolUseGroup` component splits file operation tool calls into separate batches (`fileOpsNeedApproval` and `fileOpsNoApproval`) based on their approval state before passing them to `AIToolUseBatch`. This ensures each batch has homogeneous approval states, making group-level approval handling valid.

Applied to files:

  • frontend/app/aipanel/aipanel.tsx
📚 Learning: 2025-10-21T05:09:26.916Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2465
File: frontend/app/onboarding/onboarding-upgrade.tsx:13-21
Timestamp: 2025-10-21T05:09:26.916Z
Learning: In the waveterm codebase, clientData is loaded and awaited in wave.ts before React runs, ensuring it is always available when components mount. This means atoms.client will have data on first render.

Applied to files:

  • frontend/app/aipanel/waveai-model.tsx
🧬 Code graph analysis (8)
pkg/wcore/wcore.go (3)
pkg/panichandler/panichandler.go (1)
  • PanicHandler (25-43)
pkg/wstore/wstore_dbops.go (1)
  • DBGetSingleton (102-105)
pkg/wcloud/wcloud.go (1)
  • SendNoTelemetryUpdate (283-293)
pkg/wshrpc/wshclient/wshclient.go (2)
frontend/app/store/wshclientapi.ts (1)
  • NetworkOnlineCommand (396-398)
pkg/wshutil/wshrpc.go (1)
  • WshRpc (47-61)
pkg/wcore/block.go (1)
pkg/waveobj/metaconsts.go (1)
  • MetaKey_Controller (11-11)
emain/emain-wsh.ts (1)
frontend/app/store/wshclient.ts (1)
  • RpcResponseHelper (159-159)
pkg/aiusechat/openai/openai-backend.go (1)
pkg/aiusechat/uctypes/uctypes.go (1)
  • DefaultAIEndpoint (13-13)
frontend/app/store/wshclientapi.ts (3)
frontend/app/store/wshclient.ts (1)
  • WshClient (159-159)
pkg/remote/fileshare/wshfs/wshfs.go (1)
  • WshClient (22-22)
pkg/wshrpc/wshrpctypes.go (1)
  • RpcOpts (370-376)
frontend/app/aipanel/waveai-model.tsx (4)
pkg/wconfig/settingsconfig.go (1)
  • AIModeConfigType (267-286)
frontend/app/store/global.ts (3)
  • atoms (842-842)
  • getSettingsKeyAtom (861-861)
  • globalStore (865-865)
frontend/app/store/wshclientapi.ts (1)
  • RpcApi (697-697)
frontend/app/store/wshrpcutil.ts (1)
  • TabRpcClient (37-37)
pkg/wcloud/wcloud.go (3)
pkg/wavebase/wavebase.go (3)
  • IsDevMode (113-115)
  • ClientArch (338-340)
  • WaveVersion (25-25)
cmd/server/main-server.go (1)
  • WaveVersion (53-53)
cmd/wsh/main-wsh.go (1)
  • WaveVersion (12-12)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build for TestDriver.ai
  • GitHub Check: Analyze (go)
  • GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (47)
pkg/aiusechat/chatstore/chatstore.go (1)

87-94: LGTM! Clear, actionable error messages.

The updated error messages provide helpful guidance to users when AI configuration mismatches occur. This is especially valuable when users switch between Wave AI cloud and BYOK/local models, as they'll now clearly understand they need to start a new chat rather than continue with the existing one.

frontend/app/tab/tab.tsx (1)

82-91: Good defensive improvements to text selection.

The early return guard and explicit focus call before selection are solid improvements that will prevent edge cases where the editable element might not be ready or focused during text selection.

frontend/types/gotypes.d.ts (2)

1225-1225: LGTM: Telemetry type extension.

The addition of "settings:transparent" to TEventProps follows the existing naming convention and is appropriately marked as optional.


1310-1310: LGTM: Consistent user property addition.

The addition mirrors the change to TEventProps at line 1225. Since this file is auto-generated (line 4), ensure the corresponding Go types in telemetrydata.go are correct.

pkg/telemetry/telemetrydata/telemetrydata.go (1)

109-109: LGTM: BlockController telemetry field added.

The new BlockController field follows the existing naming convention with the block: prefix and is properly tagged with omitempty for optional serialization.

pkg/wcore/block.go (2)

103-104: LGTM: Controller metadata retrieval and telemetry call.

The blockController retrieval follows the same pattern as blockView at line 102, using the appropriate metadata constant and providing an empty string default. The telemetry function call is correctly updated with the new parameter.


109-109: LGTM: Telemetry function signature and props updated.

The function signature now accepts blockController, and the telemetry event props correctly include both BlockView and BlockController. The existing guard at line 113 appropriately prevents telemetry when blockView is empty.

Also applies to: 124-125

pkg/aiusechat/usechat.go (4)

87-89: LGTM! Telemetry gating for Wave AI cloud modes.

The telemetry prerequisite check correctly enforces that Wave AI cloud modes require telemetry to be enabled, while allowing BYOK/local models to function without it. The error message is clear and actionable.
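The gating described above can be sketched as a small predicate; the "waveai@" mode-name prefix convention is taken from this review, while the helper names are illustrative, not Wave's code:

```go
package main

import (
	"fmt"
	"strings"
)

// requiresTelemetry reports whether an AI mode is a Wave AI cloud mode
// (conventionally prefixed "waveai@"); cloud modes are gated behind
// telemetry, while BYOK/local modes are not.
func requiresTelemetry(mode string) bool {
	return strings.HasPrefix(mode, "waveai@")
}

// checkModeAccess returns an error only for cloud modes used without
// telemetry enabled. Illustrative sketch of the gating rule.
func checkModeAccess(mode string, telemetryEnabled bool) error {
	if requiresTelemetry(mode) && !telemetryEnabled {
		return fmt.Errorf("Wave AI cloud mode %q requires telemetry to be enabled", mode)
	}
	return nil
}

func main() {
	fmt.Println(checkModeAccess("waveai@balanced", false)) // gated
	fmt.Println(checkModeAccess("ollama-local", false))    // allowed
}
```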


646-649: LGTM! Clear validation for required AIMode parameter.

The validation correctly enforces that AIMode must be provided in the request body, with a clear error message. This is consistent with the chatid validation pattern above.


650-650: LGTM! Correct usage of validated AIMode parameter.

The call to getWaveAISettings correctly passes the validated req.AIMode parameter, ensuring the AI mode flows from the client request through to the settings configuration.


75-75: No action required. The function signature change to include aiModeName parameter has been properly implemented—there is only one call site (line 650) and it correctly passes req.AIMode as the parameter.

pkg/aiusechat/uctypes/uctypes.go (1)

633-633: LGTM!

The addition of "gpt-5.2" to the compatibility map is consistent with the existing pattern for GPT-5 model variants.

pkg/aiusechat/openai/openai-backend.go (1)

525-525: LGTM!

The call site correctly updated to match the new function signature.

pkg/wshrpc/wshclient/wshclient.go (1)

479-483: LGTM!

The new NetworkOnlineCommand follows the established pattern for RPC command wrappers in this generated file. It correctly uses sendRpcRequestCallHelper[bool] with nil data and returns the boolean response, consistent with the frontend counterpart in wshclientapi.ts.

Taskfile.yml (1)

30-30: LGTM!

The new WCLOUD_PING_ENDPOINT environment variable is consistently added across all four Electron development tasks, pointing to the dev ping endpoint. This aligns with the existing WCLOUD_ENDPOINT and WCLOUD_WS_ENDPOINT patterns for development environments.

pkg/wcore/wcore.go (1)

134-156: LGTM!

The GoSendNoTelemetryUpdate function correctly implements a fire-and-forget pattern with:

  • Proper panic recovery using panichandler.PanicHandler
  • Reasonable 5-second context timeout for the network call
  • Defensive nil check on clientData
  • Correct inversion of telemetryEnabled to noTelemetryVal for the SendNoTelemetryUpdate API

The error logging is appropriate for a background operation where failures shouldn't block the caller.

cmd/server/main-server.go (4)

134-153: LGTM!

The diagnosticLoop implementation is well-structured:

  • Proper panic recovery
  • Environment variable (WAVETERM_NOPING) to disable the feature
  • Once-per-day ping logic using date string comparison
  • Reasonable timing: 5-minute initial wait allows the app to stabilize before first ping

188-188: LGTM!

Clean refactor to use the centralized wcore.GoSendNoTelemetryUpdate wrapper instead of spawning a goroutine inline.


347-348: LGTM!

Using wavebase.WaveVersion and wavebase.BuildTime is correct since these are set from the local WaveVersion and BuildTime variables at startup (lines 462-463).


562-562: LGTM!

The diagnosticLoop goroutine is appropriately placed alongside other background loops (telemetry, backup cleanup).

frontend/app/aipanel/telemetryrequired.tsx (1)

63-74: Update documentation URL — the /waveai-modes page may not exist.

The BYOK and local model information is well-placed with proper security attributes (target="_blank" with rel="noopener noreferrer"), and styling is consistent with other links. However, verify that https://docs.waveterm.dev/waveai-modes exists—consider linking to https://docs.waveterm.dev/waveai or confirming the correct URL path for AI modes documentation.

pkg/wcloud/wcloud.go (2)

68-69: Missing validation for ping endpoint in dev mode.

Unlike WCloudEndpoint_VarCache and WCloudWSEndpoint_VarCache, the ping endpoint is not validated via checkEndpointVar or a similar function. In dev mode, if WCLOUD_PING_ENDPOINT is set to an invalid value (e.g., missing https:// prefix), it will be used without validation.

If this is intentional (e.g., ping is optional in dev), consider adding a comment to clarify. Otherwise, consider adding validation:

 	WCloudPingEndpoint_VarCache = os.Getenv(WCloudPingEndpointVarName)
+	err = checkEndpointVar(WCloudPingEndpoint_VarCache, "wcloud ping endpoint", WCloudPingEndpointVarName)
+	if err != nil {
+		return err
+	}
 	os.Unsetenv(WCloudPingEndpointVarName)

318-348: LGTM!

The PingInputType struct and SendDiagnosticPing function are well-implemented. The graceful degradation (returning nil when endpoint is empty) is appropriate for optional diagnostic pings, and the payload includes useful metadata (client architecture, version, date, telemetry status).

docs/docs/waveai-modes.mdx (1)

77-79: LGTM!

The documentation clearly explains the telemetry bypass behavior for custom/BYOK models. The placement after the default mode configuration section is logical, and the VersionBadge appropriately indicates the feature availability.

emain/emain-wsh.ts (1)

105-107: LGTM!

The handle_networkonline method correctly uses Electron's net.isOnline() API to check network connectivity. The implementation follows the existing handler pattern in the class.

docs/docs/faq.mdx (1)

57-68: LGTM!

The new FAQ entry is well-written, providing clear steps for users who want to use Wave AI without telemetry. The cross-reference to the Wave AI modes documentation is helpful, and the VersionBadge appropriately indicates feature availability.

pkg/wshrpc/wshrpctypes.go (2)

183-183: LGTM!

The new Command_NetworkOnline constant is appropriately placed in the electron commands section.


306-306: LGTM!

The NetworkOnlineCommand interface method follows the established pattern and is correctly placed in the emain section of the interface.

frontend/app/aipanel/aipanel-contextmenu.ts (1)

1-159: LGTM!

The context menu simplification is well-executed. Removing the AI Modes submenu in favor of the dedicated AIModeDropdown component (seen in aimode.tsx) centralizes mode selection and aligns with the telemetry-gating approach introduced in this PR. The retained functionality (New Chat, Max Output Tokens, Configure Modes, Hide) is appropriate.

frontend/app/store/wshclientapi.ts (1)

395-398: LGTM!

The NetworkOnlineCommand follows the established pattern for parameterless RPC calls (passing null as data). The implementation aligns with the Go-side interface and other similar commands like ConnListCommand.

frontend/app/aipanel/ai-utils.ts (1)

550-565: LGTM!

The addition of currentMode parameter and isCurrentModeCloud logic is well-reasoned. This ensures that when a user's active mode is a Wave AI cloud mode (e.g., waveai@balanced), the cloud modes section remains visible even if they have custom models configured and showCloudModes is false. This prevents users from being locked out of their current mode's UI.

docs/docs/telemetry.mdx (3)

33-41: Well-documented diagnostics ping feature.

The new Diagnostics Ping section clearly explains what data is sent (version, OS/arch, date, client ID, telemetry status), explicitly states what's NOT included, and provides a clear opt-out mechanism via WAVETERM_NOPING. This transparency aligns with good privacy practices.


27-31: Fix broken documentation link.

The reference to ./telemetry-old.mdx appears to be invalid. Legacy documentation is available at legacydocs.waveterm.dev rather than as a local file. Consider updating the link to point directly to the legacy documentation URL or confirm the correct file path if an internal reference is intended.


59-59: GitHub link to telemetrydata.go is correct.

The telemetrydata package exists at the path referenced in the documentation, and the link structure https://github.com/wavetermdev/waveterm/blob/main/pkg/telemetry/telemetrydata/telemetrydata.go is valid.

frontend/app/aipanel/aimode.tsx (3)

113-132: LGTM!

The computeWaveCloudSections function correctly gates Wave AI Cloud options behind telemetry status. Setting noTelemetry: !telemetryEnabled provides a clear flag for the UI to render the appropriate disabled state and enable-telemetry prompt.


271-278: Good UX for telemetry gating.

The clickable prompt "(enable telemetry to unlock Wave AI Cloud)" provides a clear, actionable path for users who want to use cloud features. The green color and hover state make it visually distinct from the disabled section header.


306-312: New Chat action well-placed in the dropdown.

Adding the New Chat button to the mode dropdown provides convenient access alongside mode selection and configuration. The implementation is clean and consistent with the Configure Modes button styling.

frontend/app/aipanel/aipanel.tsx (4)

257-262: Well-designed access control logic.

The allowAccess logic correctly implements the PR objective: users can access Wave AI either with telemetry enabled OR when using their own custom (BYOK/local) models. The condition hasCustomModes && isUsingCustomMode ensures both that custom modes exist AND the default mode is set to one.


200-228: LGTM!

The refactored AIErrorMessage component is cleaner - deriving state from the model atom instead of receiving props. The addition of the "New Chat" link within the error message provides a helpful recovery action. The early return for null error state is efficient.


362-430: Drag handlers correctly check allowAccess.

All drag event handlers (dragOver, dragEnter, dragLeave, drop) properly check allowAccess and return early when access is denied. The drop handler additionally prevents the default action and clears drag state to ensure clean behavior.


464-472: Update dependency array for handleFileItemDrop.

The allowAccess dependency is correctly added to the callback's dependency array, ensuring the callback is re-created when access changes.

frontend/app/aipanel/waveai-model.tsx (6)

60-63: LGTM!

The atom declarations follow the existing pattern in the class, using definite assignment assertions (!) for atoms that are initialized in the constructor.


84-87: LGTM!

The premium status logic is defensively designed—defaulting to premium when rate limit info is unavailable or unknown ensures users aren't incorrectly restricted during transient states.


270-284: LGTM!

Adding clearError() to clearChat() ensures error messages from the previous chat session don't persist into a new chat, providing a cleaner user experience.


374-386: LGTM!

The validation logic correctly enforces that Wave AI cloud modes (waveai@*) require telemetry to be enabled, while also verifying mode existence in the configuration.


440-448: LGTM!

The mode initialization logic in loadInitialChat() properly handles all scenarios: missing mode (use default), valid stored mode (restore it), and invalid stored mode (reset to default). This ensures the AI panel always starts with a valid mode configuration.


117-141: Sentinel value handling is properly implemented in the UI.

The defaultModeAtom can return sentinel values "invalid" and "unknown", and these are correctly handled in the primary UI consumer (AIModeDropdown in aimode.tsx). At line 175, the code checks if displayConfig exists before using it: `const displayName = displayConfig ? getModeDisplayName(displayConfig) : `Invalid (${currentMode})`;`. When a sentinel value is encountered, the UI displays "Invalid (mode_value)" with a question mark icon, preventing any confusing mode names from appearing to users. No changes needed.

	ChatID       string            `json:"chatid"`
	Msg          uctypes.AIMessage `json:"msg"`
	WidgetAccess bool              `json:"widgetaccess,omitempty"`
	AIMode       string            `json:"aimode,omitempty"`

⚠️ Potential issue | 🟡 Minor

Remove omitempty tag to match required validation.

The AIMode field has the omitempty JSON tag, which suggests it's optional. However, the handler validation at lines 646-649 treats it as required. This creates an inconsistency in the API contract.

Apply this diff to fix the inconsistency:

-	AIMode       string            `json:"aimode,omitempty"`
+	AIMode       string            `json:"aimode"`
🤖 Prompt for AI Agents
In pkg/aiusechat/usechat.go around line 603, the AIMode struct field currently
has the `json:"aimode,omitempty"` tag, which contradicts the handler validation
that requires AIMode; remove the `omitempty` option so the tag reads
`json:"aimode"`. Note that `omitempty` only affects marshaling, not
unmarshalling; removing it signals in the API contract that the field is
always present, aligning the type with the validation logic.

@sawka sawka merged commit 667de56 into main Dec 16, 2025
9 checks passed
@sawka sawka deleted the sawka/diag branch December 16, 2025 17:06
