
Conversation

@paulpv (Contributor) commented Mar 29, 2025

No description provided.

@paulpv paulpv requested a review from Copilot March 29, 2025 10:32

Copilot AI left a comment


Pull Request Overview

This PR updates the Kotlin client documentation by adding new specification files, properties, and endpoints along with updating model names and request/response types. Key changes include:

  • Addition of new documentation files for annotations, file attachments, and updated enums.
  • Updates to the ChatApi and AudioApi documents to reflect new endpoints (e.g., delete, update, get messages) and revised request/response models.
  • Updates to the README and build configuration files (e.g., version bumps and additions such as the Spotless plugin).

Reviewed Changes

Copilot reviewed 739 out of 741 changed files in this pull request and generated 1 comment.

| File | Description |
| --- | --- |
| lib/docs/ChatCompletionResponseMessageAnnotationsInner.md | New markdown docs describing annotation properties. |
| lib/docs/ChatCompletionRequestUserMessageContentPart.md | Addition of a new "file" property and corresponding enum update. |
| lib/docs/ChatApi.md | Updates to include new endpoints and revised documentation for chat completions. |
| lib/docs/AudioApi.md | Updates to transcription/translation endpoints and inclusion of new request parameters. |
| lib/docs/AssistantsApiResponseFormatOption.md | Renaming of the schema reference for response-format documentation. |
| gradle/libs.versions.toml | Addition of the Spotless plugin version entry. |
| README.md | Documentation updates and clarification notes on current library support. |

Files not reviewed (2):
  • gradle/wrapper/gradle-wrapper.properties: Language not supported
  • lib/build.gradle.kts: Language not supported

```kotlin
val temperature : java.math.BigDecimal = java.math.BigDecimal("0.8") // java.math.BigDecimal | The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
val include : kotlin.collections.List<TranscriptionInclude> = listOf() // kotlin.collections.List<TranscriptionInclude> | Additional information to include in the transcription response. `logprobs` will return the log probabilities of the tokens in the response to understand the model's confidence in the transcription. `logprobs` only works with response_format set to `json` and only with the models `gpt-4o-transcribe` and `gpt-4o-mini-transcribe`.
val timestampGranularities : kotlin.collections.List<kotlin.String> = listOf("word", "segment") // kotlin.collections.List<kotlin.String> | The timestamp granularities to populate for this transcription. `response_format` must be set to `verbose_json` to use timestamp granularities. Either or both of these options are supported: `word` or `segment`. Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency.
val stream : kotlin.Boolean = true // kotlin.Boolean | If set to true, the model response data will be streamed to the client as it is generated using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format). See the [Streaming section of the Speech-to-Text guide](/docs/guides/speech-to-text?lang=curl#streaming-transcriptions) for more information. Note: Streaming is not supported for the `whisper-1` model and will be ignored.
```
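The documented constraints on these parameters (temperature between 0 and 1; granularities limited to `word`/`segment`) can be sketched as a small validating holder. `TranscriptionParams` is hypothetical and not part of the generated client; it only illustrates the rules stated in the comments above:

```kotlin
import java.math.BigDecimal

// Hypothetical parameter holder enforcing the documented constraints.
// Defaults mirror the documentation (stream defaults to false).
data class TranscriptionParams(
    val temperature: BigDecimal = BigDecimal.ZERO,
    val timestampGranularities: List<String> = listOf("segment"),
    val stream: Boolean = false,
) {
    init {
        // The docs state the sampling temperature must be between 0 and 1.
        require(temperature in BigDecimal.ZERO..BigDecimal.ONE) {
            "temperature must be between 0 and 1, got $temperature"
        }
        // Only `word` and `segment` granularities are documented.
        require(timestampGranularities.all { it in setOf("word", "segment") }) {
            "timestamp granularities must be `word` or `segment`"
        }
    }
}
```

Validating up front like this surfaces values such as the sample's original `8.14` temperature, which falls outside the documented range.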

Copilot AI Mar 29, 2025


The default value for 'stream' is set to true in the code sample, yet the accompanying documentation notes a default of false. Consider aligning the default value to avoid potential confusion.

Suggested change

```diff
- val stream : kotlin.Boolean = true // kotlin.Boolean | If set to true, the model response data will be streamed to the client as it is generated using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format). See the [Streaming section of the Speech-to-Text guide](/docs/guides/speech-to-text?lang=curl#streaming-transcriptions) for more information. Note: Streaming is not supported for the `whisper-1` model and will be ignored.
+ val stream : kotlin.Boolean = false // kotlin.Boolean | If set to true, the model response data will be streamed to the client as it is generated using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format). See the [Streaming section of the Speech-to-Text guide](/docs/guides/speech-to-text?lang=curl#streaming-transcriptions) for more information. Note: Streaming is not supported for the `whisper-1` model and will be ignored.
```
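When `stream` is enabled, responses arrive in the server-sent-event wire format linked above, where each event's payload sits on a `data:` line. A minimal, illustrative parser for such lines (not the generated client's actual implementation; the `[DONE]` sentinel is an assumption borrowed from common streaming APIs, not stated in these docs):

```kotlin
// Extract event payloads from raw server-sent-event lines.
// Per the SSE event-stream format, payloads are carried on "data:" lines;
// blank lines delimit events and ":"-prefixed lines are comments.
fun parseSseData(lines: List<String>): List<String> =
    lines.filter { it.startsWith("data:") }
        .map { it.removePrefix("data:").trim() }
        .filter { it != "[DONE]" } // assumed terminal sentinel
```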

@paulpv paulpv force-pushed the generated_20250327 branch 3 times, most recently from 5706636 to 72434a6 Compare March 30, 2025 04:52
@paulpv paulpv force-pushed the generated_20250327 branch from 72434a6 to 6a8655c Compare March 30, 2025 05:17
