# Chat

<mark style="color:blue;">**`ENDPOINT : https://app.oxyapi.uk/v1/chat/completions`**</mark>

`messages [array]` <mark style="color:red;">**`REQUIRED`**</mark>

* A list of messages comprising the conversation so far. [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).
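
For example, a minimal request might look like the sketch below (the `requests` library and the `OXY_API_KEY` environment variable are assumptions; substitute however you actually store your key). Later snippets on this page reuse the `chat` helper defined here.

```python
import os
import requests

API_URL = "https://app.oxyapi.uk/v1/chat/completions"
# Hypothetical env var -- use whatever secret storage you prefer.
HEADERS = {"Authorization": f"Bearer {os.environ['OXY_API_KEY']}"}

def chat(payload: dict) -> dict:
    """POST a chat completion request and return the parsed JSON body."""
    resp = requests.post(API_URL, headers=HEADERS, json=payload)
    resp.raise_for_status()
    return resp.json()

out = chat({
    "model": "gpt-3.5-turbo-1106",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
})
print(out["choices"][0]["message"]["content"])
```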

`model [string]` <mark style="color:red;">**`REQUIRED`**</mark>

* ID of the model to use. See the [model endpoint compatibility](https://app.oxyapi.uk/v1/models) table for details on which models work with the Chat API.

`frequency_penalty [number or null] Optional Defaults to 0`

* Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

`logit_bias [map] Optional Defaults to null`

* Modify the likelihood of specified tokens appearing in the completion.
* Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
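
A sketch of banning a single token, reusing the `chat` helper above. The token ID here is an illustrative placeholder, not a real lookup; find actual IDs with your model's tokenizer.

```python
out = chat({
    "model": "gpt-3.5-turbo-1106",
    "messages": [{"role": "user", "content": "Name a primary color."}],
    # "2746" is a placeholder token ID; a bias of -100 effectively bans it.
    "logit_bias": {"2746": -100},
})
```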

`logprobs [boolean or null] Optional Defaults to false`

* Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. This option is currently not available on the `gpt-4-vision-preview` model.

`top_logprobs [integer or null] Optional`

* An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to `true` if this parameter is used.
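
For instance, reusing the `chat` helper above (the loop assumes the standard OpenAI-style `logprobs` response structure):

```python
out = chat({
    "model": "gpt-3.5-turbo-1106",
    "messages": [{"role": "user", "content": "Say hi."}],
    "logprobs": True,
    "top_logprobs": 3,  # only valid when logprobs is true
})
# Each output token carries its own logprob plus the 3 most likely alternatives.
for tok in out["choices"][0]["logprobs"]["content"]:
    print(tok["token"], tok["logprob"], [alt["token"] for alt in tok["top_logprobs"]])
```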

`max_tokens [integer or null] Optional`

* The maximum number of [tokens](https://platform.openai.com/tokenizer) that can be generated in the chat completion.
* The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
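
A quick local estimate with `tiktoken`, assuming the upstream model uses OpenAI's public encodings:

```python
import tiktoken

# Count tokens locally before sending, so the prompt plus max_tokens
# stays within the model's context length.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
prompt = "You are a helpful assistant. Hello!"
print(len(enc.encode(prompt)), "tokens")
```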

`n [integer or null] Optional Defaults to 1`

* How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs. The maximum supported value of `n` is `2`.

`presence_penalty [number or null] Optional Defaults to 0`

* Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
* [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/text-generation/parameter-details)

`response_format [object] Optional`

* An object specifying the format that the model must output. Compatible with `gpt-4-1106-preview` and `gpt-3.5-turbo-1106`.
* Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
* **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
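
A JSON-mode sketch, reusing the `chat` helper above. Note the system message explicitly asks for JSON, and the `finish_reason` check guards against parsing truncated output:

```python
import json

out = chat({
    "model": "gpt-3.5-turbo-1106",
    "messages": [
        # JSON mode requires instructing the model to emit JSON in a message.
        {"role": "system", "content": "Reply in JSON with keys 'city' and 'country'."},
        {"role": "user", "content": "Where is the Eiffel Tower?"},
    ],
    "response_format": {"type": "json_object"},
})
choice = out["choices"][0]
if choice["finish_reason"] != "length":  # truncated JSON would fail to parse
    print(json.loads(choice["message"]["content"]))
```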

`seed [integer or null] Optional`

* This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend.
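
A reproducibility check, reusing the `chat` helper above:

```python
payload = {
    "model": "gpt-3.5-turbo-1106",
    "messages": [{"role": "user", "content": "Pick a random animal."}],
    "seed": 42,
    "temperature": 0,
}
first, second = chat(payload), chat(payload)
# Identical seed and parameters *should* match; if they diverge, compare
# system_fingerprint values to see whether the backend changed between calls.
print(first["system_fingerprint"], second["system_fingerprint"])
print(first["choices"][0]["message"]["content"] ==
      second["choices"][0]["message"]["content"])
```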

`stop [string or array or null] Optional Defaults to null`

* Up to 4 sequences where the API will stop generating further tokens.

`stream [boolean or null] Optional Defaults to false`

* If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
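
A self-contained streaming sketch using `requests` (the `OXY_API_KEY` environment variable is again a placeholder):

```python
import json
import os
import requests

resp = requests.post(
    "https://app.oxyapi.uk/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OXY_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo-1106",
        "messages": [{"role": "user", "content": "Tell me a short joke."}],
        "stream": True,
    },
    stream=True,  # keep the connection open and read server-sent events
)
for line in resp.iter_lines():
    if not line.startswith(b"data: "):
        continue  # skip blank keep-alive lines
    data = line[len(b"data: "):]
    if data == b"[DONE]":
        break
    delta = json.loads(data)["choices"][0]["delta"]
    print(delta.get("content", ""), end="", flush=True)
```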

`temperature [number or null] Optional Defaults to 1`

* What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
* We generally recommend altering this or `top_p` but not both.

`top_p [number or null] Optional Defaults to 1`

* An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
* We generally recommend altering this or `temperature` but not both.
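
For example, to bias toward focused output, adjust one knob and leave the other at its default (reusing the `chat` helper above):

```python
out = chat({
    "model": "gpt-3.5-turbo-1106",
    "messages": [{"role": "user", "content": "Summarize photosynthesis in one sentence."}],
    "temperature": 0.2,  # tuned down for focus; top_p left at its default of 1
})
```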

`tools [array] Optional`

* A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.

`tool_choice [string or object] Optional`

* Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.
* `none` is the default when no functions are present. `auto` is the default if functions are present.
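
A sketch of forcing a single tool call, reusing the `chat` helper above. `get_weather` is a hypothetical function defined here only for illustration:

```python
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical -- your application implements it
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

out = chat({
    "model": "gpt-3.5-turbo-1106",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    # Force a call to get_weather; use "auto" to let the model decide.
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
})
print(out["choices"][0]["message"]["tool_calls"])
```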

`user [string] Optional`

* A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids).

`function_call [string or object] Optional Deprecated`

* Deprecated in favor of `tool_choice`.
* Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"name": "my_function"}` forces the model to call that function.
* `none` is the default when no functions are present. `auto` is the default if functions are present.

`functions [array] Optional Deprecated`

* Deprecated in favor of `tools`.
* A list of functions the model may generate JSON inputs for.

## Returns

Returns a chat completion object, or a streamed sequence of chat completion chunk objects if the request is streamed.

## The chat completion object

Represents a chat completion response returned by the model, based on the provided input.

`id [string]`

* A unique identifier for the chat completion.

`choices [array]`

* A list of chat completion choices. Can be more than one if `n` is greater than 1.

`created [integer]`

* The Unix timestamp (in seconds) of when the chat completion was created.

`model [string]`

* The model used for the chat completion.

`system_fingerprint [string]`

* This fingerprint represents the backend configuration that the model runs with.
* Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.

`object [string]`

* The object type, which is always `chat.completion`.
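
Put together, a parsed (non-streamed) response has roughly the shape below; the values are illustrative, not verbatim output:

```python
# Representative shape of a chat completion object (values are illustrative).
completion = {
    "id": "chatcmpl-abc123",                # unique identifier
    "object": "chat.completion",
    "created": 1700000000,                  # Unix timestamp in seconds
    "model": "gpt-3.5-turbo-1106",
    "system_fingerprint": "fp_0123456789",
    "choices": [{
        "index": 0,
        "message": {"role": "assistant", "content": "Hello! How can I help?"},
        "finish_reason": "stop",
    }],
}
```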

## The chat completion chunk object

Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.

`id [string]`

* A unique identifier for the chat completion.

`choices [array]`

* A list of chat completion choices. Can be more than one if `n` is greater than 1.

`created [integer]`

* The Unix timestamp (in seconds) of when the chat completion was created.

`model [string]`

* The model used for the chat completion.

`system_fingerprint [string]`

* This fingerprint represents the backend configuration that the model runs with.
* Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.

`object [string]`

* The object type, which is always `chat.completion.chunk`.
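
For comparison with the non-streamed object above, one chunk has roughly the shape below (values illustrative); note the partial `delta` in place of a full `message`:

```python
# Representative shape of a chat completion chunk object (values are illustrative).
chunk = {
    "id": "chatcmpl-abc123",                # same id across all chunks of one response
    "object": "chat.completion.chunk",
    "created": 1700000000,
    "model": "gpt-3.5-turbo-1106",
    "system_fingerprint": "fp_0123456789",
    "choices": [{
        "index": 0,
        "delta": {"content": "Hello"},      # partial message content
        "finish_reason": None,              # set only on the final chunk
    }],
}
```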
