What is EmbedJs?

EmbedJs is an Open Source Framework that makes it easy to create and deploy personalized AI apps. At its core, EmbedJs follows the design principle of being "Conventional but Configurable" to serve both software engineers and machine learning engineers.

EmbedJs streamlines the creation of personalized LLM applications, offering a seamless process for managing various types of unstructured data. It efficiently segments data into manageable chunks, generates relevant embeddings, and stores them in a vector database for optimized retrieval. With a suite of diverse APIs, it enables users to extract contextual information, find precise answers, or engage in interactive chat conversations, all tailored to their own data.

Who is EmbedJs for?

EmbedJs is designed for a diverse range of users, from AI professionals like Data Scientists and Machine Learning Engineers to those just starting their AI journey, including college students, independent developers, and hobbyists. Essentially, it's for anyone with an interest in AI, regardless of their expertise level.

Our APIs are user-friendly yet adaptable, enabling beginners to effortlessly create LLM-powered applications with as few as 7 lines of code. At the same time, we offer extensive customization options for every aspect of building a personalized AI application. This includes the choice of LLMs, vector databases, loaders and chunkers, and more.
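
For instance, a minimal RAG app can be wired up with the builder API in a handful of lines. The sketch below is illustrative only: it assumes the `@llm-tools/embedjs` package with `RAGApplicationBuilder`, `WebLoader`, and the HNSW vector store, plus an OpenAI API key for the default LLM and embeddings; exact import paths and option names differ between EmbedJs versions.

```ts
// Minimal sketch: load a web page, embed it, store it, and ask a question.
// Assumes OPENAI_API_KEY is set for the default LLM and embedding model.
import { RAGApplicationBuilder, WebLoader } from '@llm-tools/embedjs';
import { HNSWDb } from '@llm-tools/embedjs/vectorDb/hnswlib';

const ragApplication = await new RAGApplicationBuilder()
    .addLoader(new WebLoader({ urlOrContent: 'https://en.wikipedia.org/wiki/Large_language_model' })) // option name assumed
    .setVectorDb(new HNSWDb()) // in-memory vector store, good for experiments
    .build();

console.log(await ragApplication.query('What is a large language model?'));
```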

Our platform's clear and well-structured abstraction layers ensure that users can tailor the system to meet their specific needs, whether they're crafting a simple project or a complex, nuanced AI application.

Why Use EmbedJs?

Developing a personalized AI application for production use presents numerous complexities, such as:

  • Integrating and indexing data from diverse sources.
  • Determining optimal data chunking methods for each source.
  • Synchronizing the RAG pipeline with regularly updated data sources.
  • Implementing efficient data storage in a vector store.
  • Deciding whether to include metadata with document chunks.
  • Configuring Large Language Models (LLMs).
  • Selecting effective prompts.

EmbedJs is designed to simplify these tasks, offering conventional yet customizable APIs. Our solution handles the intricate processes of loading, chunking, indexing, and retrieving data. This enables you to concentrate on aspects that are crucial for your specific use case or business objectives, ensuring a smoother and more focused development process.
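
Because every stage is pluggable, the same sketch can be reconfigured piece by piece. The example below swaps in a PDF loader, a persistent LanceDB store, and a lower temperature; the loader and builder method names (`PdfLoader`, `setVectorDb`, `setTemperature`) and their option names are version-dependent assumptions, so check them against your installed release.

```ts
// Same pipeline, different building blocks: a PDF source, a persistent vector
// store on disk, and more deterministic generation.
import path from 'node:path';
import { RAGApplicationBuilder, PdfLoader } from '@llm-tools/embedjs';
import { LanceDb } from '@llm-tools/embedjs/vectorDb/lance';

const ragApplication = await new RAGApplicationBuilder()
    .addLoader(new PdfLoader({ filePathOrUrl: path.resolve('./docs/handbook.pdf') })) // hypothetical local PDF, option name assumed
    .setVectorDb(new LanceDb({ path: path.resolve('./db') })) // embeddings persist across runs
    .setTemperature(0.1) // keep answers close to the retrieved context
    .build();
```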

How it works

EmbedJs makes it easy to add data to your RAG pipeline with these straightforward steps (sketched in code below the list):

  1. Automatic Data Handling: It automatically recognizes the data type and loads it.
  2. Efficient Data Processing: The system creates embeddings for key parts of your data.
  3. Flexible Data Storage: You get to choose where to store this processed data in a vector database.
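
These three steps are sketched below, continuing from the `ragApplication` built in the earlier examples. The loader classes and their option names (`YoutubeLoader`, `SitemapLoader`, `videoIdOrUrl`, `url`) are assumptions that depend on which loaders your EmbedJs version ships.

```ts
// Ingestion flow: each loader extracts and chunks its own data type, embeddings
// are generated for the chunks, and the configured vector database stores them.
import { YoutubeLoader, SitemapLoader } from '@llm-tools/embedjs';

// `ragApplication` is the app built in the earlier sketches.
await ragApplication.addLoader(new YoutubeLoader({ videoIdOrUrl: 'dQw4w9WgXcQ' })); // video transcript
await ragApplication.addLoader(new SitemapLoader({ url: 'https://example.com/sitemap.xml' })); // every page in a sitemap
```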

When a user asks a question, whether for chatting, searching, or querying, EmbedJs simplifies the response process (also sketched below the list):

  1. Query Processing: It turns the user's question into embeddings.
  2. Document Retrieval: These embeddings are then used to find related documents in the database.
  3. Answer Generation: The related documents are used by the LLM to craft a precise answer.
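
A corresponding sketch of the retrieval side, again continuing from the earlier `ragApplication`. Note that the shape of the returned result (a plain string versus an object carrying the answer and its source chunks) differs between EmbedJs versions, so inspect what your release returns.

```ts
// Retrieval flow: the question is embedded, similar chunks are fetched from the
// vector store, and the LLM answers using those chunks as context.
const result = await ragApplication.query('Summarize the handbook in two sentences.');
console.log(result); // answer text; newer versions may also include the source chunks
```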

With EmbedJs, you don't have to worry about the complexities of building a personalized AI application. It offers an easy-to-use interface for developing applications with any kind of data.

Getting started

Check out our quickstart guide to build your first AI application.

Support

Feel free to reach out to us if you have ideas, feedback, or questions that we can help with.

Contribute
