To use the DCLM-7B model on Jan.ai, follow these steps:
1. **Install the Required Tools**: Ensure you have the necessary packages installed. You will need `transformers` and `open_lm`, which you can install with:

   ```shell
   pip install git+https://github.com/mlfoundations/open_lm.git transformers
   ```
2. **Download the Model**: Load the DCLM-7B model from Hugging Face:

   ```python
   from open_lm.hf import *
   from transformers import AutoTokenizer, AutoModelForCausalLM

   tokenizer = AutoTokenizer.from_pretrained("apple/DCLM-Baseline-7B")
   model = AutoModelForCausalLM.from_pretrained("apple/DCLM-Baseline-7B")
   ```
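Once the weights are downloaded, you can run a quick generation as a sanity check before wiring the model into Jan. This is a minimal sketch (the prompt and generation settings are illustrative, and the heavy loading is deferred into a function so nothing downloads at import time):

```python
def generate(prompt: str, max_new_tokens: int = 32) -> str:
    """Load DCLM-7B and complete the given prompt (illustrative sketch)."""
    # Deferred imports: the ~7B-parameter download only happens when called.
    import open_lm.hf  # noqa: F401 -- importing registers the open_lm architecture with transformers
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("apple/DCLM-Baseline-7B")
    model = AutoModelForCausalLM.from_pretrained("apple/DCLM-Baseline-7B")

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Machine learning is"))
```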
3. **Set Up the Jan API Server**:
   - Enable the Jan API Server: go to the Local API Server section in Jan.ai and configure the server, including the IP port, CORS, and verbose server logs.
   - Start the server by pressing the **Start Server** button.
4. **Configure the Local Server**:
   - Navigate to the `~/.continue` directory.
   - Ensure the configuration file (`~/.continue/config.json`) includes:

     ```json
     {
       "models": [
         {
           "title": "Jan",
           "provider": "openai",
           "model": "dclm-7b",
           "apiKey": "EMPTY",
           "apiBase": "http://localhost:1337/v1"
         }
       ]
     }
     ```
   - Ensure the model name and API base match the ones set up in the Jan API Server.
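If you prefer, the config entry above can be generated and sanity-checked with a short script. This sketch prints the JSON rather than writing to `~/.continue/config.json`, so it has no side effects:

```python
import json

# The exact entry from the Continue configuration step above.
config = {
    "models": [
        {
            "title": "Jan",
            "provider": "openai",
            "model": "dclm-7b",
            "apiKey": "EMPTY",
            "apiBase": "http://localhost:1337/v1",
        }
    ]
}

# Serialize, then round-trip to confirm the file contents would parse cleanly.
serialized = json.dumps(config, indent=2)
parsed = json.loads(serialized)
print(serialized)
```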
5. **Activate the Model in Jan**:
   - Go to **Settings** > **My Models** in Jan.ai.
   - Click the three dots (⋮) next to the model and select **Start Model** to activate it.
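With the model started, the Jan Local API Server serves requests at the `apiBase` configured above through an OpenAI-compatible interface. The sketch below builds such a request using only the standard library; the endpoint path and payload shape assume the standard OpenAI chat-completions format, and the request is constructed but not sent, since sending it requires the server to be running:

```python
import json
from urllib import request

# apiBase from the Continue configuration step.
JAN_BASE = "http://localhost:1337/v1"


def build_completion_request(prompt: str) -> request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": "dclm-7b",
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        f"{JAN_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer EMPTY",  # apiKey "EMPTY" from the config
        },
        method="POST",
    )


req = build_completion_request("Hello from Jan")
# To actually send it once the server is up:
#   with request.urlopen(req) as resp:
#       print(json.load(resp))
```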
Following these steps will allow you to integrate and use the DCLM-7B model with Jan.ai, leveraging its capabilities within your local environment. For more detailed instructions, refer to the Jan.ai documentation and its integration guide for local AI models.