Salesforce LLM Open Connector API (v1)

The LLM Open Connector API allows Salesforce customers and partners to provide access to LLMs in a standard way so that they can be consumed by the Einstein 1 platform. This API is based on OpenAI's API with significant modifications to accommodate Salesforce use cases.

Chat

Given a list of messages comprising a conversation, the model will return a response.

Creates a model response for the given chat conversation.

Authorizations: ApiKeyAuth

Request Body schema: application/json
required

messages
required
Array of objects (Chat Completion Message), non-empty

A list of messages comprising the conversation so far.

model
required
string

ID of the model to use.

max_tokens
integer or null

The maximum number of tokens that can be generated in the chat completion.

The total length of input tokens and generated tokens is limited by the model's context length.

n
integer or null [ 1 .. 128 ]
Default: 1

How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.

temperature
number or null [ 0 .. 2 ]
Default: 1

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

parameters
object

Dictionary of any other parameters that are required by the provider. Values are passed as-is to the provider so that the request can include parameters that are unique to a provider.
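
The request body fields above map directly onto a JSON payload. The sketch below shows one way to exercise the endpoint from Python with the requests library; the base URL, the /chat/completions path, the api-key header used for ApiKeyAuth, and the role/content fields inside each Chat Completion Message are assumptions based on the OpenAI-style API this spec derives from, not details confirmed by this reference, so adjust them to match your provider's deployment.

# Minimal sketch of a chat completion request, under the assumptions noted above.
import requests

BASE_URL = "https://example-llm-gateway.com/v1"  # hypothetical base URL
API_KEY = "YOUR_API_KEY"                         # hypothetical credential

payload = {
    "messages": [
        # role/content shape assumed from the OpenAI-style Chat Completion Message.
        {"role": "user", "content": "Summarize the account history in one sentence."}
    ],
    "model": "gpt-4-turbo",
    "max_tokens": 256,
    "n": 1,               # one choice keeps generated-token charges down
    "temperature": 0.2,   # lower values give more focused, deterministic output
    # Anything the provider needs beyond the standard fields is passed through as-is.
    "parameters": {"top_p": 0.5},
}

response = requests.post(
    f"{BASE_URL}/chat/completions",  # path assumed from the OpenAI-style design
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
completion = response.json()
print(completion["choices"][0])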

Responses

Request samples

Content type: application/json
{
  "messages": [
    { }
  ],
  "model": "gpt-4-turbo",
  "max_tokens": 0,
  "n": 1,
  "temperature": 1,
  "parameters": {
    "top_p": 0.5
  }
}

Response samples

Content type: application/json
{
  "id": "string",
  "choices": [
    { }
  ],
  "created": 0,
  "model": "string",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 0,
    "prompt_tokens": 0,
    "total_tokens": 0
  }
}
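
For handling this response shape, the short Python sketch below reads the fields shown in the sample: id, model, object, the choices array, and the usage token counts. The message/content path inside each collapsed choice object is an assumption carried over from the OpenAI-style schema this connector is based on, so the code falls back to printing the raw choice when that shape is absent.

# Minimal sketch of reading a chat completion response, assuming it has already
# been parsed from JSON into a dict (e.g. completion = response.json()).
def summarize_completion(completion: dict) -> None:
    print(f"id: {completion['id']}  model: {completion['model']}  object: {completion['object']}")

    for i, choice in enumerate(completion["choices"]):
        # The ["message"]["content"] path is an assumed, OpenAI-style shape;
        # fall back to the raw choice object if it is not present.
        text = choice.get("message", {}).get("content", choice)
        print(f"choice {i}: {text}")

    usage = completion.get("usage", {})
    print(
        "tokens - prompt: {}, completion: {}, total: {}".format(
            usage.get("prompt_tokens"),
            usage.get("completion_tokens"),
            usage.get("total_tokens"),
        )
    )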