2025/03/27 generate kotlin then modify #3
Conversation
Pull Request Overview
This PR updates the Kotlin client documentation: it adds new specification files, properties, and endpoints, and updates model names and request/response types. Key changes include:
- Addition of new documentation files for annotations, file attachments, and updated enums.
- Updates to the ChatApi and AudioApi documentation to reflect new endpoints (e.g., delete, update, get messages) and revised request/response models.
- Updates to the README and build configuration files (e.g., version bumps and additions such as the Spotless plugin).
Reviewed Changes
Copilot reviewed 739 out of 741 changed files in this pull request and generated 1 comment.
Show a summary per file
| File | Description |
|---|---|
| lib/docs/ChatCompletionResponseMessageAnnotationsInner.md | New markdown docs describing annotation properties. |
| lib/docs/ChatCompletionRequestUserMessageContentPart.md | Addition of a new "file" property and corresponding enum update. |
| lib/docs/ChatApi.md | Updates to include new endpoints and revised documentation for chat completions. |
| lib/docs/AudioApi.md | Updates to transcription/translation endpoints and inclusion of new request parameters. |
| lib/docs/AssistantsApiResponseFormatOption.md | Renaming schema reference for response format documentation. |
| gradle/libs.versions.toml | Addition of the Spotless plugin version entry. |
| README.md | Documentation updates and clarification notes on current library support. |
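
For context on the `gradle/libs.versions.toml` change, a Spotless plugin entry in a Gradle version catalog typically looks like the following. This is an illustrative sketch (the version number is a placeholder), not necessarily the exact entry added in this PR:

```toml
[versions]
# Placeholder version; the PR's actual pinned version may differ.
spotless = "6.25.0"

[plugins]
# "com.diffplug.spotless" is the plugin's published id.
spotless = { id = "com.diffplug.spotless", version.ref = "spotless" }
```

With such an entry, `lib/build.gradle.kts` can apply the plugin via `alias(libs.plugins.spotless)`.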
Files not reviewed (2)
- gradle/wrapper/gradle-wrapper.properties: Language not supported
- lib/build.gradle.kts: Language not supported
```kotlin
val temperature : java.math.BigDecimal = 8.14 // java.math.BigDecimal | The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
val include : kotlin.collections.List<TranscriptionInclude> = // kotlin.collections.List<TranscriptionInclude> | Additional information to include in the transcription response. `logprobs` will return the log probabilities of the tokens in the response to understand the model's confidence in the transcription. `logprobs` only works with response_format set to `json` and only with the models `gpt-4o-transcribe` and `gpt-4o-mini-transcribe`.
val timestampGranularities : kotlin.collections.List<kotlin.String> = // kotlin.collections.List<kotlin.String> | The timestamp granularities to populate for this transcription. `response_format` must be set `verbose_json` to use timestamp granularities. Either or both of these options are supported: `word`, or `segment`. Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency.
val stream : kotlin.Boolean = true // kotlin.Boolean | If set to true, the model response data will be streamed to the client as it is generated using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format). See the [Streaming section of the Speech-to-Text guide](/docs/guides/speech-to-text?lang=curl#streaming-transcriptions) for more information. Note: Streaming is not supported for the `whisper-1` model and will be ignored.
```
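
To make the relationship between these parameters and their documented defaults concrete, here is a minimal, self-contained Kotlin sketch. The class name, parameter types, and defaults are hypothetical (the generated client's actual model may differ); `stream` defaults to `false` here, matching the documented default rather than the sample above:

```kotlin
import java.math.BigDecimal

// Hypothetical sketch of the transcription request parameters described above.
// Names and defaults are illustrative, not the generated client's actual model.
data class CreateTranscriptionRequest(
    // Sampling temperature between 0 and 1; 0 lets the model auto-tune it.
    val temperature: BigDecimal = BigDecimal.ZERO,
    // Extra fields to include, e.g. "logprobs" (json response_format only).
    val include: List<String> = emptyList(),
    // "word" and/or "segment"; requires response_format = verbose_json.
    val timestampGranularities: List<String> = listOf("segment"),
    // Server-sent-event streaming; documented default is false,
    // and streaming is ignored for the whisper-1 model.
    val stream: Boolean = false,
)

fun main() {
    // Kotlin default parameter values encode the documented defaults directly,
    // so callers only name the parameters they want to override.
    val req = CreateTranscriptionRequest(include = listOf("logprobs"))
    println(req.stream)                  // default value, not overridden
    println(req.timestampGranularities)
}
```

Encoding defaults this way keeps the generated docs and the code in lockstep, which is exactly the mismatch the review comment below the sample flags.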
Copilot (AI) commented on Mar 29, 2025:
The default value for `stream` is set to `true` in the code sample, yet the accompanying documentation states a default of `false`. Consider aligning the default value to avoid potential confusion.
Suggested change:

```diff
-val stream : kotlin.Boolean = true // kotlin.Boolean | If set to true, the model response data will be streamed to the client as it is generated using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format). See the [Streaming section of the Speech-to-Text guide](/docs/guides/speech-to-text?lang=curl#streaming-transcriptions) for more information. Note: Streaming is not supported for the `whisper-1` model and will be ignored.
+val stream : kotlin.Boolean = false // kotlin.Boolean | If set to true, the model response data will be streamed to the client as it is generated using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format). See the [Streaming section of the Speech-to-Text guide](/docs/guides/speech-to-text?lang=curl#streaming-transcriptions) for more information. Note: Streaming is not supported for the `whisper-1` model and will be ignored.
```
Force-pushed: 5706636 → 72434a6
Force-pushed: 72434a6 → 6a8655c