updates to allow wave ai panel to function without telemetry with BYOK/local models #2685
Conversation
…elemetry. update telemetry page as well
…le from mode switcher
Walkthrough

This pull request introduces diagnostic ping functionality, refactors AI mode handling with telemetry-gating support, and updates related documentation. Changes include: adding a diagnostic ping endpoint and sending mechanism in the backend (with environment variable support), implementing network status checks via RPC, refactoring AI mode configuration and validation logic with telemetry prerequisites, updating the AI panel UI to gate Wave AI cloud features behind telemetry requirements, documenting the new diagnostic ping system and BYOK/custom model telemetry bypass, and extending telemetry event properties. The RPC layer gains a new NetworkOnlineCommand for checking online status across server and client components.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pkg/aiusechat/openai/openai-backend.go (1)
27-43: Custom endpoint hostnames leak in error messages: restore endpoint-agnostic sanitization.

The function now hardcodes `uctypes.DefaultAIEndpoint` for sanitization, losing the ability to sanitize errors from custom endpoints. Since the codebase supports custom endpoints (via `AiBaseURL`/`BaseURL` fields) and distinguishes them in error handling (see `openai-convertmessage.go:159`), custom endpoint hostnames will appear unredacted in error messages, a regression for BYOK/local models support.

Restore the original behavior: pass the actual endpoint being used to this function so sanitization applies uniformly to all endpoints, not just the default Wave cloud endpoint.
🧹 Nitpick comments (6)
cmd/server/main-server.go (1)
155-174: Check the return value of `SendDiagnosticPing`.

The function returns `true` regardless of whether `wcloud.SendDiagnosticPing` succeeds or fails. Since `SendDiagnosticPing` returns an `error`, this should be checked. If the ping request fails (network issues after the online check, server error), `lastSentDate` will still be updated in `diagnosticLoop`, preventing retry for the entire day.

```diff
 func sendDiagnosticPing() bool {
 	ctx, cancelFn := context.WithTimeout(context.Background(), 5*time.Second)
 	defer cancelFn()
 	rpcClient := wshclient.GetBareRpcClient()
 	isOnline, err := wshclient.NetworkOnlineCommand(rpcClient, &wshrpc.RpcOpts{Route: "electron", Timeout: 2000})
 	if err != nil || !isOnline {
 		return false
 	}
 	clientData, err := wstore.DBGetSingleton[*waveobj.Client](ctx)
 	if err != nil {
 		return false
 	}
 	if clientData == nil {
 		return false
 	}
 	usageTelemetry := telemetry.IsTelemetryEnabled()
-	wcloud.SendDiagnosticPing(ctx, clientData.OID, usageTelemetry)
-	return true
+	err = wcloud.SendDiagnosticPing(ctx, clientData.OID, usageTelemetry)
+	if err != nil {
+		log.Printf("[error] sending diagnostic ping: %v\n", err)
+		return false
+	}
+	return true
 }
```

pkg/wcloud/wcloud.go (1)
295-316: Missing `X-PromptAPIUrl` header affects error message clarity.

The `doRequest` function uses `req.Header.Get("X-PromptAPIUrl")` for logging and error messages. Without this header, error messages will display an empty string (e.g., `error contacting wcloud "" service: ...`).

Consider adding the header for consistent logging:

```diff
 req.Header.Set("Content-Type", "application/json")
+req.Header.Set("X-PromptAPIUrl", apiUrl)
 req.Close = true
```

frontend/app/aipanel/aimode.tsx (1)
203-210: Consider cleanup for the setTimeout callback.

The `setTimeout` in `handleEnableTelemetry` could fire after the component unmounts, potentially causing a warning or no-op call to `model.focusInput()`. While not critical (model is a singleton), consider using a ref or cleanup pattern for robustness.

```diff
+const isMountedRef = useRef(true);
+
+useEffect(() => {
+    return () => {
+        isMountedRef.current = false;
+    };
+}, []);
+
 const handleEnableTelemetry = () => {
     fireAndForget(async () => {
         await RpcApi.WaveAIEnableTelemetryCommand(TabRpcClient);
         setTimeout(() => {
-            model.focusInput();
+            if (isMountedRef.current) {
+                model.focusInput();
+            }
         }, 100);
     });
 };
```

frontend/app/aipanel/aipanel.tsx (1)
232-244: ConfigChangeModeFixer handles mode validation on config changes.

This component ensures that when telemetry or AI mode configs change, the current mode is validated and potentially corrected. The implementation is clean, though including `model` in the dependency array is unnecessary since it's a singleton that never changes.

```diff
 useEffect(() => {
     model.fixModeAfterConfigChange();
-}, [telemetryEnabled, aiModeConfigs, model]);
+}, [telemetryEnabled, aiModeConfigs]);
```

frontend/app/aipanel/waveai-model.tsx (2)
164-164: Consider gating window attachment behind a dev flag.

Exposing `WaveAIModel` on `window` is useful for debugging but adds to the global namespace in production. Consider wrapping this in a development check if not already handled elsewhere.
409-417: Consider reusing `getRTInfo()` internally.

`fixModeAfterConfigChange()` duplicates the RPC call that `getRTInfo()` already encapsulates. Consider using `getRTInfo()` for consistency and reduced duplication.

```diff
 async fixModeAfterConfigChange(): Promise<void> {
-    const rtInfo = await RpcApi.GetRTInfoCommand(TabRpcClient, {
-        oref: this.orefContext,
-    });
+    const rtInfo = await this.getRTInfo();
     const mode = rtInfo?.["waveai:mode"];
     if (mode == null || !this.isValidMode(mode)) {
         this.setAIModeToDefault();
     }
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (28)

- Taskfile.yml (4 hunks)
- cmd/server/main-server.go (6 hunks)
- docs/docs/faq.mdx (2 hunks)
- docs/docs/telemetry.mdx (1 hunks)
- docs/docs/waveai-modes.mdx (1 hunks)
- emain/emain-wsh.ts (2 hunks)
- frontend/app/aipanel/ai-utils.ts (2 hunks)
- frontend/app/aipanel/aimode.tsx (7 hunks)
- frontend/app/aipanel/aipanel-contextmenu.ts (1 hunks)
- frontend/app/aipanel/aipanel.tsx (10 hunks)
- frontend/app/aipanel/telemetryrequired.tsx (1 hunks)
- frontend/app/aipanel/waveai-model.tsx (7 hunks)
- frontend/app/store/wshclientapi.ts (1 hunks)
- frontend/app/tab/tab.tsx (2 hunks)
- frontend/types/gotypes.d.ts (2 hunks)
- pkg/aiusechat/chatstore/chatstore.go (1 hunks)
- pkg/aiusechat/openai/openai-backend.go (2 hunks)
- pkg/aiusechat/uctypes/uctypes.go (1 hunks)
- pkg/aiusechat/usechat-mode.go (0 hunks)
- pkg/aiusechat/usechat.go (3 hunks)
- pkg/service/clientservice/clientservice.go (0 hunks)
- pkg/telemetry/telemetrydata/telemetrydata.go (1 hunks)
- pkg/wcloud/wcloud.go (5 hunks)
- pkg/wcore/block.go (2 hunks)
- pkg/wcore/wcore.go (2 hunks)
- pkg/wshrpc/wshclient/wshclient.go (1 hunks)
- pkg/wshrpc/wshrpctypes.go (2 hunks)
- pkg/wshrpc/wshserver/wshserver.go (0 hunks)
💤 Files with no reviewable changes (3)
- pkg/aiusechat/usechat-mode.go
- pkg/service/clientservice/clientservice.go
- pkg/wshrpc/wshserver/wshserver.go
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-01-22T01:28:41.417Z
Learnt from: esimkowitz
Repo: wavetermdev/waveterm PR: 1790
File: pkg/remote/fileshare/wshfs/wshfs.go:122-122
Timestamp: 2025-01-22T01:28:41.417Z
Learning: The RpcClient in pkg/remote/fileshare/wshfs/wshfs.go is initialized and handled downstream by either main-server or wshcmd-connserver, as documented in the package comment.
Applied to files:
- pkg/wshrpc/wshclient/wshclient.go
- cmd/server/main-server.go
📚 Learning: 2025-10-15T03:21:02.229Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.
Applied to files:
pkg/aiusechat/usechat.go
📚 Learning: 2025-10-14T06:30:54.763Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2430
File: frontend/app/aipanel/aimessage.tsx:137-144
Timestamp: 2025-10-14T06:30:54.763Z
Learning: In `frontend/app/aipanel/aimessage.tsx`, the `AIToolUseGroup` component splits file operation tool calls into separate batches (`fileOpsNeedApproval` and `fileOpsNoApproval`) based on their approval state before passing them to `AIToolUseBatch`. This ensures each batch has homogeneous approval states, making group-level approval handling valid.
Applied to files:
frontend/app/aipanel/aipanel.tsx
📚 Learning: 2025-10-21T05:09:26.916Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2465
File: frontend/app/onboarding/onboarding-upgrade.tsx:13-21
Timestamp: 2025-10-21T05:09:26.916Z
Learning: In the waveterm codebase, clientData is loaded and awaited in wave.ts before React runs, ensuring it is always available when components mount. This means atoms.client will have data on first render.
Applied to files:
frontend/app/aipanel/waveai-model.tsx
🧬 Code graph analysis (8)
pkg/wcore/wcore.go (3)
- pkg/panichandler/panichandler.go (1): PanicHandler (25-43)
- pkg/wstore/wstore_dbops.go (1): DBGetSingleton (102-105)
- pkg/wcloud/wcloud.go (1): SendNoTelemetryUpdate (283-293)

pkg/wshrpc/wshclient/wshclient.go (2)
- frontend/app/store/wshclientapi.ts (1): NetworkOnlineCommand (396-398)
- pkg/wshutil/wshrpc.go (1): WshRpc (47-61)

pkg/wcore/block.go (1)
- pkg/waveobj/metaconsts.go (1): MetaKey_Controller (11-11)

emain/emain-wsh.ts (1)
- frontend/app/store/wshclient.ts (1): RpcResponseHelper (159-159)

pkg/aiusechat/openai/openai-backend.go (1)
- pkg/aiusechat/uctypes/uctypes.go (1): DefaultAIEndpoint (13-13)

frontend/app/store/wshclientapi.ts (3)
- frontend/app/store/wshclient.ts (1): WshClient (159-159)
- pkg/remote/fileshare/wshfs/wshfs.go (1): WshClient (22-22)
- pkg/wshrpc/wshrpctypes.go (1): RpcOpts (370-376)

frontend/app/aipanel/waveai-model.tsx (4)
- pkg/wconfig/settingsconfig.go (1): AIModeConfigType (267-286)
- frontend/app/store/global.ts (3): atoms (842-842), getSettingsKeyAtom (861-861), globalStore (865-865)
- frontend/app/store/wshclientapi.ts (1): RpcApi (697-697)
- frontend/app/store/wshrpcutil.ts (1): TabRpcClient (37-37)

pkg/wcloud/wcloud.go (3)
- pkg/wavebase/wavebase.go (3): IsDevMode (113-115), ClientArch (338-340), WaveVersion (25-25)
- cmd/server/main-server.go (1): WaveVersion (53-53)
- cmd/wsh/main-wsh.go (1): WaveVersion (12-12)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build for TestDriver.ai
- GitHub Check: Analyze (go)
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (47)
pkg/aiusechat/chatstore/chatstore.go (1)
87-94: LGTM! Clear, actionable error messages.

The updated error messages provide helpful guidance to users when AI configuration mismatches occur. This is especially valuable when users switch between Wave AI cloud and BYOK/local models, as they'll now clearly understand they need to start a new chat rather than continue with the existing one.
frontend/app/tab/tab.tsx (1)
82-91: Good defensive improvements to text selection.

The early return guard and explicit focus call before selection are solid improvements that will prevent edge cases where the editable element might not be ready or focused during text selection.
frontend/types/gotypes.d.ts (2)
1225-1225: LGTM: Telemetry type extension.

The addition of `"settings:transparent"` to `TEventProps` follows the existing naming convention and is appropriately marked as optional.

1310-1310: LGTM: Consistent user property addition.

The addition mirrors the change to `TEventProps` at line 1225. Since this file is auto-generated (line 4), ensure the corresponding Go types in `telemetrydata.go` are correct.

pkg/telemetry/telemetrydata/telemetrydata.go (1)
109-109: LGTM: BlockController telemetry field added.

The new `BlockController` field follows the existing naming convention with the `block:` prefix and is properly tagged with `omitempty` for optional serialization.
103-104: LGTM: Controller metadata retrieval and telemetry call.

The `blockController` retrieval follows the same pattern as `blockView` at line 102, using the appropriate metadata constant and providing an empty string default. The telemetry function call is correctly updated with the new parameter.

109-109: LGTM: Telemetry function signature and props updated.

The function signature now accepts `blockController`, and the telemetry event props correctly include both `BlockView` and `BlockController`. The existing guard at line 113 appropriately prevents telemetry when `blockView` is empty.

Also applies to: 124-125
pkg/aiusechat/usechat.go (4)
87-89: LGTM! Telemetry gating for Wave AI cloud modes.

The telemetry prerequisite check correctly enforces that Wave AI cloud modes require telemetry to be enabled, while allowing BYOK/local models to function without it. The error message is clear and actionable.

646-649: LGTM! Clear validation for required AIMode parameter.

The validation correctly enforces that `AIMode` must be provided in the request body, with a clear error message. This is consistent with the chatid validation pattern above.

650-650: LGTM! Correct usage of validated AIMode parameter.

The call to `getWaveAISettings` correctly passes the validated `req.AIMode` parameter, ensuring the AI mode flows from the client request through to the settings configuration.
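Taken together, the two checks above amount to a small gate: reject a missing mode, and reject cloud modes when telemetry is off. A minimal sketch in Go, assuming the `waveai@` prefix convention for cloud mode names seen elsewhere in this review (e.g. `waveai@balanced`); the function name and signature are illustrative, not Wave's actual API:

```go
package main

import (
	"fmt"
	"strings"
)

// checkAIModeAllowed sketches the gating described above: an empty mode
// is rejected outright, cloud modes (the "waveai@" prefix is an assumption
// based on mode names in this review) require telemetry, and BYOK/local
// modes pass through regardless of telemetry status.
func checkAIModeAllowed(aiMode string, telemetryEnabled bool) error {
	if aiMode == "" {
		return fmt.Errorf("aimode is required")
	}
	if strings.HasPrefix(aiMode, "waveai@") && !telemetryEnabled {
		return fmt.Errorf("wave ai cloud modes require telemetry to be enabled")
	}
	return nil // BYOK/local modes are allowed without telemetry
}

func main() {
	fmt.Println(checkAIModeAllowed("waveai@balanced", false) != nil) // true: cloud mode gated
	fmt.Println(checkAIModeAllowed("mylocal@ollama", false) == nil)  // true: hypothetical local mode allowed
}
```

The `mylocal@ollama` mode name is hypothetical, used only to stand in for a user-configured BYOK/local mode.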
75-75: No action required. The function signature change to include the `aiModeName` parameter has been properly implemented: there is only one call site (line 650) and it correctly passes `req.AIMode` as the parameter.

pkg/aiusechat/uctypes/uctypes.go (1)
633-633: LGTM!

The addition of "gpt-5.2" to the compatibility map is consistent with the existing pattern for GPT-5 model variants.
pkg/aiusechat/openai/openai-backend.go (1)
525-525: LGTM!

The call site is correctly updated to match the new function signature.
pkg/wshrpc/wshclient/wshclient.go (1)
479-483: LGTM!

The new `NetworkOnlineCommand` follows the established pattern for RPC command wrappers in this generated file. It correctly uses `sendRpcRequestCallHelper[bool]` with nil data and returns the boolean response, consistent with the frontend counterpart in `wshclientapi.ts`.

Taskfile.yml (1)
30-30: LGTM!

The new `WCLOUD_PING_ENDPOINT` environment variable is consistently added across all four Electron development tasks, pointing to the dev ping endpoint. This aligns with the existing `WCLOUD_ENDPOINT` and `WCLOUD_WS_ENDPOINT` patterns for development environments.

pkg/wcore/wcore.go (1)
134-156: LGTM!

The `GoSendNoTelemetryUpdate` function correctly implements a fire-and-forget pattern with:

- Proper panic recovery using `panichandler.PanicHandler`
- Reasonable 5-second context timeout for the network call
- Defensive nil check on `clientData`
- Correct inversion of `telemetryEnabled` to `noTelemetryVal` for the `SendNoTelemetryUpdate` API

The error logging is appropriate for a background operation where failures shouldn't block the caller.
cmd/server/main-server.go (4)
134-153: LGTM!

The `diagnosticLoop` implementation is well-structured:

- Proper panic recovery
- Environment variable (`WAVETERM_NOPING`) to disable the feature
- Once-per-day ping logic using date string comparison
- Reasonable timing: a 5-minute initial wait allows the app to stabilize before the first ping
188-188: LGTM!

Clean refactor to use the centralized `wcore.GoSendNoTelemetryUpdate` wrapper instead of spawning a goroutine inline.

347-348: LGTM!

Using `wavebase.WaveVersion` and `wavebase.BuildTime` is correct since these are set from the local `WaveVersion` and `BuildTime` variables at startup (lines 462-463).

562-562: LGTM!

The `diagnosticLoop` goroutine is appropriately placed alongside other background loops (telemetry, backup cleanup).

frontend/app/aipanel/telemetryrequired.tsx (1)
63-74: Update documentation URL: the /waveai-modes page may not exist.

The BYOK and local model information is well-placed with proper security attributes (`target="_blank"` with `rel="noopener noreferrer"`), and styling is consistent with other links. However, verify that https://docs.waveterm.dev/waveai-modes exists; consider linking to https://docs.waveterm.dev/waveai or confirming the correct URL path for AI modes documentation.

pkg/wcloud/wcloud.go (2)
68-69: Missing validation for ping endpoint in dev mode.

Unlike `WCloudEndpoint_VarCache` and `WCloudWSEndpoint_VarCache`, the ping endpoint is not validated via `checkEndpointVar` or a similar function. In dev mode, if `WCLOUD_PING_ENDPOINT` is set to an invalid value (e.g., missing the `https://` prefix), it will be used without validation.

If this is intentional (e.g., ping is optional in dev), consider adding a comment to clarify. Otherwise, consider adding validation:

```diff
 WCloudPingEndpoint_VarCache = os.Getenv(WCloudPingEndpointVarName)
+err = checkEndpointVar(WCloudPingEndpoint_VarCache, "wcloud ping endpoint", WCloudPingEndpointVarName)
+if err != nil {
+    return err
+}
 os.Unsetenv(WCloudPingEndpointVarName)
```
318-348: LGTM!

The `PingInputType` struct and `SendDiagnosticPing` function are well-implemented. The graceful degradation (returning `nil` when the endpoint is empty) is appropriate for optional diagnostic pings, and the payload includes useful metadata (client architecture, version, date, telemetry status).

docs/docs/waveai-modes.mdx (1)
77-79: LGTM!

The documentation clearly explains the telemetry bypass behavior for custom/BYOK models. The placement after the default mode configuration section is logical, and the VersionBadge appropriately indicates the feature availability.
emain/emain-wsh.ts (1)
105-107: LGTM!

The `handle_networkonline` method correctly uses Electron's `net.isOnline()` API to check network connectivity. The implementation follows the existing handler pattern in the class.
57-68: LGTM!

The new FAQ entry is well-written, providing clear steps for users who want to use Wave AI without telemetry. The cross-reference to the Wave AI modes documentation is helpful, and the VersionBadge appropriately indicates feature availability.
pkg/wshrpc/wshrpctypes.go (2)
183-183: LGTM!

The new `Command_NetworkOnline` constant is appropriately placed in the electron commands section.

306-306: LGTM!

The `NetworkOnlineCommand` interface method follows the established pattern and is correctly placed in the emain section of the interface.
1-159: LGTM!

The context menu simplification is well-executed. Removing the AI Modes submenu in favor of the dedicated `AIModeDropdown` component (seen in `aimode.tsx`) centralizes mode selection and aligns with the telemetry-gating approach introduced in this PR. The retained functionality (New Chat, Max Output Tokens, Configure Modes, Hide) is appropriate.
395-398: LGTM!

The `NetworkOnlineCommand` follows the established pattern for parameterless RPC calls (passing `null` as data). The implementation aligns with the Go-side interface and other similar commands like `ConnListCommand`.
550-565: LGTM!

The addition of the `currentMode` parameter and `isCurrentModeCloud` logic is well-reasoned. This ensures that when a user's active mode is a Wave AI cloud mode (e.g., `waveai@balanced`), the cloud modes section remains visible even if they have custom models configured and `showCloudModes` is false. This prevents users from being locked out of their current mode's UI.
33-41: Well-documented diagnostics ping feature.

The new Diagnostics Ping section clearly explains what data is sent (version, OS/arch, date, client ID, telemetry status), explicitly states what's NOT included, and provides a clear opt-out mechanism via `WAVETERM_NOPING`. This transparency aligns with good privacy practices.
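The documented payload and the optional nature of the ping can be illustrated together. A minimal sketch, assuming JSON field names and a request-building helper that are invented for illustration (the real `PingInputType` and `SendDiagnosticPing` live in `pkg/wcloud/wcloud.go` and may differ):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// pingInput mirrors the fields the docs list for the diagnostics ping:
// version, OS/arch, date, client ID, and telemetry status. JSON names
// here are assumptions for illustration.
type pingInput struct {
	ClientID         string `json:"clientid"`
	ClientArch       string `json:"clientarch"`
	WaveVersion      string `json:"waveversion"`
	Date             string `json:"date"`
	TelemetryEnabled bool   `json:"telemetryenabled"`
}

// buildPingRequest sketches the graceful degradation the review notes:
// with no configured endpoint it returns (nil, nil), a silent no-op,
// since the ping is optional (e.g., disabled via WAVETERM_NOPING).
func buildPingRequest(endpoint string, input pingInput) (*http.Request, error) {
	if endpoint == "" {
		return nil, nil // ping endpoint not configured: skip quietly
	}
	body, err := json.Marshal(input)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", endpoint, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := buildPingRequest("", pingInput{})
	fmt.Println(req == nil, err == nil) // true true: no-op without an endpoint
}
```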
27-31: Fix broken documentation link.

The reference to `./telemetry-old.mdx` appears to be invalid. Legacy documentation is available at legacydocs.waveterm.dev rather than as a local file. Consider updating the link to point directly to the legacy documentation URL or confirm the correct file path if an internal reference is intended.
59-59: GitHub link to telemetrydata.go is correct.

The telemetrydata package exists at the path referenced in the documentation, and the link structure https://github.com/wavetermdev/waveterm/blob/main/pkg/telemetry/telemetrydata/telemetrydata.go is valid.

frontend/app/aipanel/aimode.tsx (3)
113-132: LGTM!

The `computeWaveCloudSections` function correctly gates Wave AI Cloud options behind telemetry status. Setting `noTelemetry: !telemetryEnabled` provides a clear flag for the UI to render the appropriate disabled state and enable-telemetry prompt.
271-278: Good UX for telemetry gating.

The clickable prompt "(enable telemetry to unlock Wave AI Cloud)" provides a clear, actionable path for users who want to use cloud features. The green color and hover state make it visually distinct from the disabled section header.
306-312: New Chat action well-placed in the dropdown.

Adding the New Chat button to the mode dropdown provides convenient access alongside mode selection and configuration. The implementation is clean and consistent with the Configure Modes button styling.
frontend/app/aipanel/aipanel.tsx (4)
257-262: Well-designed access control logic.

The `allowAccess` logic correctly implements the PR objective: users can access Wave AI either with telemetry enabled OR when using their own custom (BYOK/local) models. The condition `hasCustomModes && isUsingCustomMode` ensures both that custom modes exist AND that the default mode is set to one.
200-228: LGTM!

The refactored `AIErrorMessage` component is cleaner, deriving state from the model atom instead of receiving props. The addition of the "New Chat" link within the error message provides a helpful recovery action. The early return for the null error state is efficient.
362-430: Drag handlers correctly check allowAccess.

All drag event handlers (dragOver, dragEnter, dragLeave, drop) properly check `allowAccess` and return early when access is denied. The drop handler additionally prevents the default action and clears drag state to ensure clean behavior.
464-472: Dependency array for handleFileItemDrop updated correctly.

The `allowAccess` dependency is correctly added to the callback's dependency array, ensuring the callback is re-created when access changes.

frontend/app/aipanel/waveai-model.tsx (6)
60-63: LGTM!

The atom declarations follow the existing pattern in the class, using definite assignment assertions (`!`) for atoms that are initialized in the constructor.
84-87: LGTM!

The premium status logic is defensively designed: defaulting to premium when rate limit info is unavailable or unknown ensures users aren't incorrectly restricted during transient states.

270-284: LGTM!

Adding `clearError()` to `clearChat()` ensures error messages from the previous chat session don't persist into a new chat, providing a cleaner user experience.
374-386: LGTM!

The validation logic correctly enforces that Wave AI cloud modes (`waveai@*`) require telemetry to be enabled, while also verifying mode existence in the configuration.
440-448: LGTM!

The mode initialization logic in `loadInitialChat()` properly handles all scenarios: missing mode (use default), valid stored mode (restore it), and invalid stored mode (reset to default). This ensures the AI panel always starts with a valid mode configuration.
117-141: Sentinel value handling is properly implemented in the UI.

The `defaultModeAtom` can return the sentinel values `"invalid"` and `"unknown"`, and these are correctly handled in the primary UI consumer (`AIModeDropdown` in aimode.tsx). At line 175, the code checks if `displayConfig` exists before using it: ``const displayName = displayConfig ? getModeDisplayName(displayConfig) : `Invalid (${currentMode})`;``. When a sentinel value is encountered, the UI displays "Invalid (mode_value)" with a question mark icon, preventing any confusing mode names from appearing to users. No changes needed.
```go
ChatID       string            `json:"chatid"`
Msg          uctypes.AIMessage `json:"msg"`
WidgetAccess bool              `json:"widgetaccess,omitempty"`
AIMode       string            `json:"aimode,omitempty"`
```
Remove the `omitempty` tag to match the required validation.

The `AIMode` field has the `omitempty` JSON tag, which suggests it's optional. However, the handler validation at lines 646-649 treats it as required. This creates an inconsistency in the API contract.
Apply this diff to fix the inconsistency:
```diff
-	AIMode string `json:"aimode,omitempty"`
+	AIMode string `json:"aimode"`
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
	AIMode string `json:"aimode"`
```
🤖 Prompt for AI Agents
In pkg/aiusechat/usechat.go around line 603, the AIMode struct field currently
has the `json:"aimode,omitempty"` tag which contradicts the handler validation
that requires AIMode; remove the `omitempty` option so the tag reads
`json:"aimode"` to make the field required in JSON unmarshalling and align with
the validation logic.
No description provided.