@incubated-geek-cc
Created March 16, 2024 10:24
Source code originally taken from https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/java. The configuration assumes the model file is located in the same folder as the project folder.
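To compile this class you need the `com.hexadevlabs.gpt4all` Java bindings on the classpath. A Maven dependency along these lines should work (the version number here is an assumption — check the bindings repository for the current release):

```xml
<!-- Java bindings for gpt4all; version is an assumption, verify upstream -->
<dependency>
    <groupId>com.hexadevlabs</groupId>
    <artifactId>gpt4all-java-binding</artifactId>
    <version>1.1.5</version>
</dependency>
```

The native shared libraries the bindings need are bundled with the artifact and loaded at runtime, which is why load failures are handled in the catch block below rather than at compile time.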
package gpt4all;

import com.hexadevlabs.gpt4all.LLModel;

import java.io.File;
import java.nio.file.Path;

public class Application {
    public static void main(String[] args) {
        String prompt = "### Human:\nWhat is the meaning of life\n### Assistant:";

        // Replace the hardcoded file name with the actual path where your model file resides.
        String modelFilePath = System.getProperty("user.dir") + File.separator + "ggml-gpt4all-j-v1.3-groovy.bin";

        try (LLModel model = new LLModel(Path.of(modelFilePath))) {
            // May generate up to 4096 tokens but generally stops early.
            LLModel.GenerationConfig config = LLModel.config()
                    .withNPredict(4096)
                    .build();

            // The third argument also streams the output to standard output as it is generated.
            String fullGeneration = model.generate(prompt, config, true);
            System.out.println("###");
            System.out.println("[Output] " + fullGeneration);
        } catch (Exception e) {
            // Exceptions typically occur when the model file fails to load,
            // for example because the file is not found, Java cannot
            // dynamically load the native shared library, or the llmodel
            // shared library cannot dynamically load the backend
            // implementation for the model file you provided.
            //
            // Once the LLModel instance is successfully loaded into memory,
            // the text generation calls generally should not throw exceptions.
            e.printStackTrace(); // Printed here; in a production system you may want to take some action instead.
        }
    }
}
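As the catch block notes, the most common failure is the model file simply not being where the code expects it. A minimal pre-flight check (a hypothetical helper, not part of the gist or the bindings API) can surface that problem with a clearer message before `LLModel` attempts to load:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class ModelCheck {
    // Return a human-readable status for a candidate model path,
    // distinguishing a missing file from an unreadable one.
    static String checkModelPath(Path modelPath) {
        if (!Files.exists(modelPath)) {
            return "missing: " + modelPath.getFileName();
        }
        if (!Files.isReadable(modelPath)) {
            return "unreadable: " + modelPath.getFileName();
        }
        return "ok: " + modelPath.getFileName();
    }

    public static void main(String[] args) {
        // Demonstrate the failure branch with a path that does not exist.
        System.out.println(checkModelPath(Path.of("no-such-model.bin")));
    }
}
```

Calling `checkModelPath` before constructing `LLModel` lets you fail fast with a specific message, rather than relying on the stack trace of a load exception.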