You can also use locally running Ollama models. Installation instructions for Ollama can be found here. Once Ollama is installed, you can start a local LLM by executing:
```bash
ollama run <modelname>
```
Next, install the Ollama addon for embedjs:

```bash
npm install @llm-tools/embedjs-ollama
```
You can then point embedjs at the local Ollama server when building your application:

```ts
import { RAGApplicationBuilder } from '@llm-tools/embedjs';
import { Ollama } from '@llm-tools/embedjs-ollama';

const app = await new RAGApplicationBuilder()
    .setModel(new Ollama({ modelName: "llama3", baseUrl: 'http://localhost:11434' }))
    .build();
```
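Once built, the application can ingest sources and answer queries entirely through the local Ollama server. The sketch below is an illustrative, assumption-laden example rather than a verbatim recipe: it presumes the `OllamaEmbeddings` export of this addon, the web loader and HNSW vector-database addons (`@llm-tools/embedjs-loader-web`, `@llm-tools/embedjs-hnswlib`), the `setEmbeddingModel`, `setVectorDatabase`, `addLoader`, and `query` methods, and a `nomic-embed-text` embedding model already pulled into Ollama. Exact option and method names can vary between embedjs versions, so check the exports of your release.

```ts
import { RAGApplicationBuilder } from '@llm-tools/embedjs';
import { Ollama, OllamaEmbeddings } from '@llm-tools/embedjs-ollama';
import { WebLoader } from '@llm-tools/embedjs-loader-web';
import { HNSWDb } from '@llm-tools/embedjs-hnswlib';

// Build a RAG application that uses the local Ollama server for both
// generation (llama3) and embeddings (nomic-embed-text).
const app = await new RAGApplicationBuilder()
    .setModel(new Ollama({ modelName: 'llama3', baseUrl: 'http://localhost:11434' }))
    // Option names for the embedding model are assumed; verify against the addon.
    .setEmbeddingModel(new OllamaEmbeddings({ model: 'nomic-embed-text', baseUrl: 'http://localhost:11434' }))
    // Local HNSW-based vector store addon, assumed here for a fully local setup.
    .setVectorDatabase(new HNSWDb())
    .build();

// Ingest a source, then ask a question over it.
await app.addLoader(new WebLoader({ urlOrContent: 'https://en.wikipedia.org/wiki/Formula_One' }));
const answer = await app.query('Who founded Formula One?');
console.log(answer);
```

Because both the LLM and the embedding model run behind `http://localhost:11434`, no external API keys are needed for this setup.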