RAGApplicationBuilder
Create an EmbedJs RAGApplication using RAGApplicationBuilder. RAGApplication is the main entrypoint for a developer to interact with EmbedJs APIs. RAGApplicationBuilder configures the LLM, vector database and embedding model of your choice and returns a RAGApplication at the end.
Attributes
This configures the LLM for the RAG application. Setting NO_MODEL
will not load any LLM - in this case, you can only use semantic search and there will be no LLM powered Q&A.
SIMPLE_MODELS
are predefined models with sane defaults available in EmbedJs.
All predefined models inherit from BaseModel. You can therefore pass a custom model that extends BaseModel, or provide a custom set of parameters for a predefined model.
For a list of predefined LLMs, refer to the section on LLMs.
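As a sketch of the three options above - a simple predefined model, a predefined model with custom parameters, and no model at all. The SIMPLE_MODELS member, the OpenAi class and the @llm-tools/embedjs-openai package shown here are assumptions and may differ in your installed EmbedJs version:

```typescript
// Sketch only: package, class and enum member names are assumed and may
// differ across EmbedJs versions.
import { RAGApplicationBuilder, SIMPLE_MODELS } from '@llm-tools/embedjs';
import { OpenAi } from '@llm-tools/embedjs-openai';

// Option 1: a predefined model with sane defaults
const simple = new RAGApplicationBuilder().setModel(SIMPLE_MODELS.OPENAI_GPT4_O);

// Option 2: a predefined model with a custom set of parameters
const custom = new RAGApplicationBuilder().setModel(new OpenAi({ modelName: 'gpt-4o-mini' }));

// Option 3: no LLM - semantic search only, no LLM-powered Q&A
const searchOnly = new RAGApplicationBuilder().setModel('NO_MODEL');
```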
This configures the embedding model for use with the RAG application. Embedding models are used to convert text into vectors. For a list of predefined embedding models, refer to the section on embedding models.
This configures the vector database to be used with the RAG application. For a list of available vector databases, refer to the section on vector databases.
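For illustration, the embedding model and vector database are supplied to the builder in the same way as the LLM. The OpenAiEmbeddings and HNSWDb classes and their packages here are assumptions and may differ in your EmbedJs version:

```typescript
import { RAGApplicationBuilder } from '@llm-tools/embedjs';
import { OpenAiEmbeddings } from '@llm-tools/embedjs-openai'; // assumed package/class
import { HNSWDb } from '@llm-tools/embedjs-hnswlib';          // assumed package/class

const builder = new RAGApplicationBuilder()
    .setEmbeddingModel(new OpenAiEmbeddings()) // converts text into vectors
    .setVectorDatabase(new HNSWDb());          // stores and searches those vectors
```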
This configures a store that is used internally by the application to keep track of which sources and data have been previously processed. Previously processed data is not reprocessed - thus removing the need for this logic to be implemented at your end. If this is not provided, the application will maintain this data in memory, which will be lost on app restart. For a list of built-in stores, refer to the section on stores.
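A hedged sketch of attaching a persistent store so that this bookkeeping survives app restarts. The MongoStore class, its package and its constructor options are assumptions for illustration:

```typescript
import { RAGApplicationBuilder } from '@llm-tools/embedjs';
import { MongoStore } from '@llm-tools/embedjs-mongodb'; // assumed package/class

// Without setStore(...), processed-source tracking lives in memory
// and is lost when the app restarts.
const builder = new RAGApplicationBuilder()
    .setStore(new MongoStore({
        uri: 'mongodb://localhost:27017', // illustrative options
        dbName: 'embedjs',
    }));
```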
This configures the temperature used with the LLM. Temperature controls the randomness of the LLM output. By default, the application sets the temperature to 0.1.
This parameter is used to control what counts as a relevant / contextual document when retrieving documents from the vector database. Documents scoring below this cut-off are discarded. EmbedJs uses sane defaults for this parameter, but you can customize it.
This allows you to customize the system message used when querying the LLM. The system message is included once with every call to the LLM, alongside the user query and chat history.
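The three query-time settings above can be chained on the same builder. The setter names below follow the attribute descriptions but are assumptions that may differ in your EmbedJs version:

```typescript
import { RAGApplicationBuilder } from '@llm-tools/embedjs';

// Setter names are assumed to match the attributes described above.
const builder = new RAGApplicationBuilder()
    .setTemperature(0.1)              // randomness of LLM output (default 0.1)
    .setEmbeddingRelevanceCutOff(0.5) // documents scoring below this are discarded
    .setSystemMessage('Answer strictly from the provided context.');
```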
Usage
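A minimal end-to-end sketch, assuming an OpenAI LLM and embedder from @llm-tools/embedjs-openai, the HNSWDb vector database from @llm-tools/embedjs-hnswlib, and a WebLoader source loader - all of these package and class names are assumptions that may differ in your installed version:

```typescript
import { RAGApplicationBuilder } from '@llm-tools/embedjs';
import { OpenAi, OpenAiEmbeddings } from '@llm-tools/embedjs-openai'; // assumed
import { HNSWDb } from '@llm-tools/embedjs-hnswlib';                  // assumed
import { WebLoader } from '@llm-tools/embedjs-loader-web';            // assumed

// build() returns the configured RAGApplication
const app = await new RAGApplicationBuilder()
    .setModel(new OpenAi({ modelName: 'gpt-4o' }))
    .setEmbeddingModel(new OpenAiEmbeddings())
    .setVectorDatabase(new HNSWDb())
    .build();

// Add a source, then run an LLM-powered query over it
await app.addLoader(new WebLoader({ urlOrContent: 'https://example.com' }));
const answer = await app.query('What does the page say?');
console.log(answer);
```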