Remote Server Integration
This guide shows you how to configure Jan as a client and point it at any remote or local (self-hosted) API server.
OpenAI Platform Configuration
1. Create a Model JSON
- In `~/jan/models`, create a folder named `gpt-3.5-turbo-16k`.
- In this folder, add a `model.json` file with the filename `model.json`, the `id` matching the folder name, the `format` set to `api`, the `engine` set to `openai`, and the `state` set to `ready`.
{
"sources": [
{
"filename": "openai",
"url": "https://openai.com"
}
],
"id": "gpt-3.5-turbo-16k",
"object": "model",
"name": "OpenAI GPT 3.5 Turbo 16k",
"version": "1.0",
"description": "OpenAI GPT 3.5 Turbo 16k model is extremely good",
"format": "api",
"settings": {},
"parameters": {},
"metadata": {
"author": "OpenAI",
"tags": ["General", "Big Context Length"]
},
"engine": "openai"
}
model.json
The model.json file is used to set up your local models.
- If you've set up your model's configuration in `nitro.json`, note that `model.json` can overwrite those settings (see the sketch below).
- When using OpenAI models like GPT-3.5 and GPT-4, you can use the default settings in the `model.json` file.
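To illustrate the precedence, suppose both files declare a context length; the value in `model.json` wins. This is a hypothetical sketch, not a schema reference: the file locations and the `ctx_len` field (described in the settings table below) are assumptions for illustration.

// ~/jan/engines/nitro.json (engine-level value, hypothetical)
{
  "ctx_len": 2048
}

// ~/jan/models/<your-model>/model.json (this value takes precedence)
"settings": {
  "ctx_len": 4096
}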
There are two important fields in `model.json` that you need to set up:
Settings
This is where you set your engine configurations. There are two important fields that you need to define for your local models:
| Term | Description |
|---|---|
| `ctx_len` | Defined based on the model's context size. |
| `prompt_template` | Defined based on the model's trained template (e.g., ChatML, Alpaca). |
To set up the `prompt_template` based on your model, follow the steps below:
- Visit Hugging Face, an open-source machine learning platform.
- Find the model that you're using (e.g., Gemma 7b it).
- Review the model card and identify its prompt template; an example is sketched below.
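For instance, a model trained on the ChatML format would take a settings block like the one below. This is a sketch, not a definitive reference: the exact template string must come from your model's documentation, and the `{system_message}` and `{prompt}` placeholders (the values substituted at runtime) are assumed from common Jan model configurations.

"settings": {
  "ctx_len": 4096,
  "prompt_template": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
}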
Parameters
`parameters` holds the adjustable settings that affect how your model operates or processes the data.
The fields in `parameters` are typically general and can be the same across models. An example is provided below:
"parameters":{
"temperature": 0.7,
"top_p": 0.95,
"stream": true,
"max_tokens": 4096,
"frequency_penalty": 0,
"presence_penalty": 0
}
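These values are passed through to the chat-completions request sent to the endpoint. As an illustration (an assumed request shape following the standard OpenAI chat-completions format, not a capture of Jan's actual traffic), the parameters above would produce a request body like:

{
  "model": "gpt-3.5-turbo-16k",
  "messages": [
    { "role": "user", "content": "Hello!" }
  ],
  "temperature": 0.7,
  "top_p": 0.95,
  "stream": true,
  "max_tokens": 4096,
  "frequency_penalty": 0,
  "presence_penalty": 0
}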
- You can find the list of available models in the OpenAI Platform.
- The `id` property needs to match the model name in the list. For example, if you want to use GPT-4 Turbo, you must set the `id` property to `gpt-4-1106-preview`, as shown below.
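Concretely, you would create a folder named `gpt-4-1106-preview` in `~/jan/models` and change only the identifying fields of the `model.json` shown earlier (the display name here is illustrative):

"id": "gpt-4-1106-preview",
"name": "OpenAI GPT-4 Turbo",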
2. Configure OpenAI API Keys
- Find your API keys in the OpenAI Platform.
- Set the OpenAI API key in the `~/jan/engines/openai.json` file.
{
"full_url": "https://api.openai.com/v1/chat/completions",
"api_key": "sk-<your key here>"
}
3. Start the Model
Restart Jan and navigate to the Hub. Then select your configured model and start it.
Engines with OAI Compatible Configuration
This section shows you how to configure a client connection to a remote or local server, using Jan's API server running the model mistral-ins-7b-q4 as an example.
Currently, you can only connect to one OpenAI-compatible endpoint at a time.
1. Configure a Client Connection
- Navigate to the `~/jan/engines` folder.
- Modify the `openai.json` file.
Please note that, currently, the code that supports any OpenAI-compatible endpoint only reads the `engines/openai.json` file; it will not search any other files in this directory.
- Configure the `full_url` property with the endpoint of the server you want to connect to. For example, if you're going to communicate with Jan's API server, you can configure it as follows:
{
// "full_url": "https://<server-ip-address>:<port>/v1/chat/completions"
"full_url": "https://<server-ip-address>:1337/v1/chat/completions"
// Skip api_key if your local server does not require authentication
// "api_key": "sk-<your key here>"
}
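The same file can point at any other OpenAI-compatible server; only the URL (and the key, if the server requires one) changes. A hypothetical example for a server listening locally on port 8000:

{
  "full_url": "http://localhost:8000/v1/chat/completions",
  "api_key": "sk-<your key here>"
}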
2. Create a Model JSON
- In `~/jan/models`, create a folder named `mistral-ins-7b-q4`.
- In this folder, add a `model.json` file with the filename `model.json`, and ensure the following configurations:
  - `id` matching the folder name.
  - `format` set to `api`.
  - `engine` set to `openai`.
  - `state` set to `ready`.
{
"sources": [
{
"filename": "janai",
"url": "https://jan.ai"
}
],
"id": "mistral-ins-7b-q4",
"object": "model",
"name": "Mistral Instruct 7B Q4 on Jan API Server",
"version": "1.0",
"description": "Jan integration with remote Jan API server",
"format": "api",
"settings": {},
"parameters": {},
"metadata": {
"author": "MistralAI, The Bloke",
"tags": ["remote", "awesome"]
},
"engine": "openai"
}
3. Start the Model
- Restart Jan and navigate to the Hub.
- Locate your model and click the Use button.
If you have questions or want more preconfigured GGUF models, please join our Discord community for support, updates, and discussions.