Chat completion chunk object
Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
The index of the choice in the list of generated choices.
Role of the generated message author; in this case, assistant.
The contents of the assistant message.
The index of the tool call being generated.
The ID of the tool call.
The type of the tool.
Available options: function
The name of the function to call.
The arguments for calling the function, generated by the model in JSON format. Be sure to validate these arguments in your code before invoking the function, since the model does not always produce valid JSON.
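A minimal sketch of that validation step, assuming the arguments arrive as a raw string (the helper name `parse_tool_arguments` is hypothetical, not part of any API):

```python
import json

def parse_tool_arguments(raw_args):
    """Validate model-generated function arguments before invoking the function.

    `raw_args` is the arguments string produced by the model; it may not be
    valid JSON, so guard the parse rather than trusting it blindly.
    """
    try:
        args = json.loads(raw_args)
    except json.JSONDecodeError:
        return None
    # A JSON scalar or array is not a usable keyword-argument mapping.
    return args if isinstance(args, dict) else None

# Valid JSON parses into a dict of arguments:
print(parse_tool_arguments('{"location": "Paris", "unit": "celsius"}'))
# Truncated or invalid JSON is rejected instead of raising:
print(parse_tool_arguments('{"location": "Par'))  # → None
```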
Termination condition of the generation.
stop means the API returned the full chat completion generated by the model without running into any limits.
length means the generation exceeded max_tokens or the conversation exceeded the maximum context length.
tool_calls means the API has generated tool calls.
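A consumer typically concatenates the streamed content deltas until a termination condition arrives. A minimal sketch, using plain dicts that mirror the chunk fields described above (a hypothetical shape, not tied to any particular client library):

```python
def consume_stream(chunks):
    """Accumulate streamed assistant content and capture the finish reason."""
    text_parts = []
    finish_reason = None
    for chunk in chunks:
        choice = chunk["choices"][0]
        delta = choice.get("delta", {})
        # Content arrives incrementally; append each non-empty fragment.
        if delta.get("content"):
            text_parts.append(delta["content"])
        # The final chunk for a choice carries the termination condition.
        if choice.get("finish_reason"):
            finish_reason = choice["finish_reason"]
    return "".join(text_parts), finish_reason

stream = [
    {"choices": [{"index": 0, "delta": {"role": "assistant", "content": "Hel"}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {"content": "lo!"}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]},
]
print(consume_stream(stream))  # → ('Hello!', 'stop')
```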
Log probability information for the choice.
A list of message content tokens with log probability information.
The token.
The log probability of this token.
List of the most likely tokens and their log probabilities at this token position.
The token.
The log probability of this token.
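Since each token carries a log probability, exponentiating it recovers the model's probability for that token at its position. A short sketch with illustrative values (the entries below are made up, not real model output):

```python
import math

# Hypothetical per-token logprob entries, shaped like the fields above.
token_logprobs = [
    {"token": "Hello", "logprob": -0.02},
    {"token": "!", "logprob": -1.6},
]
for entry in token_logprobs:
    # exp(logprob) converts a log probability back to a probability in (0, 1].
    prob = math.exp(entry["logprob"])
    print(f"{entry['token']!r}: p = {prob:.3f}")
```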
Number of tokens in the prompt.
Number of tokens in the generated chat completion.
Total number of tokens used in the request (prompt_tokens + completion_tokens).
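That sum can be checked directly against the usage fields; a tiny sketch with illustrative counts (the numbers are made up):

```python
# Hypothetical usage counts mirroring the fields described above.
usage = {"prompt_tokens": 12, "completion_tokens": 30}
# total_tokens is simply the sum of prompt and completion tokens.
usage["total_tokens"] = usage["prompt_tokens"] + usage["completion_tokens"]
print(usage["total_tokens"])  # → 42
```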
The Unix timestamp (in seconds) for when the token was sampled.