# LangChain
This documentation describes the integration of MindsDB with LangChain, a framework for developing applications powered by language models. The integration lets you deploy LangChain models within MindsDB, giving those models access to data from various data sources.
## Prerequisites
Before proceeding, ensure the following prerequisites are met:
- Install MindsDB locally via Docker or use MindsDB Cloud.
- To use LangChain within MindsDB, install the required dependencies following these instructions.
- Obtain the API key for a selected model (provider) that you want to use through LangChain.
Available models include the following:
- Anthropic (how to get the API key)
- OpenAI (how to get the API key)
- Anyscale (how to get the API key)
The LiteLLM model provider is available in MindsDB Cloud only. Use the MindsDB API key, which can be generated in the MindsDB Cloud editor at cloud.mindsdb.com/account.
## Setup
Create an AI engine from the LangChain handler.
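The statement below is a minimal sketch of this step, assuming the OpenAI provider; the key value is a placeholder for your own credentials:

```sql
CREATE ML_ENGINE langchain_engine
FROM langchain
USING
    openai_api_key = 'your-openai-api-key';
```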
Create a model using `langchain_engine` as an engine and one of OpenAI, Anthropic, Anyscale, or LiteLLM as a model provider.
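A sketch of such a statement, assuming the OpenAI provider; the model name, output column, and prompt wording are illustrative:

```sql
CREATE MODEL langchain_model
PREDICT completion                    -- illustrative output column name
USING
    engine = 'langchain_engine',      -- engine created in the previous step
    provider = 'openai',              -- or anthropic / anyscale / litellm
    openai_api_key = 'your-openai-api-key',
    prompt_template = 'Answer the question: {{question}}';
```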
Agents and Tools are some of the main abstractions that LangChain offers. You can read more about them in the LangChain documentation.
The agent utilizes three different tools:
- MindsDB is the internal MindsDB executor.
- Metadata fetches the metadata information for the available tables.
- Write writes agent responses into a MindsDB data source.
Each tool exposes the internal MindsDB executor in a different way to perform its tasks, effectively enabling the agent model to read from (and potentially write to) data sources or models available in the active MindsDB project.
Create a conversational model using `langchain_engine` as an engine and one of OpenAI, Anthropic, Anyscale, or LiteLLM as a model provider.
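A sketch of a conversational model, assuming MindsDB's conversational mode parameters (`mode`, `user_column`, `assistant_column`); the names here are illustrative:

```sql
CREATE MODEL conversational_langchain_model
PREDICT answer
USING
    engine = 'langchain_engine',
    provider = 'openai',
    mode = 'conversational',      -- assumed parameter for conversational models
    user_column = 'question',     -- column holding the user message
    assistant_column = 'answer',  -- column holding the model reply
    prompt_template = 'Answer the user in a helpful way: {{question}}';
```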
## Usage
The following usage examples utilize `langchain_engine` to create a model with the `CREATE MODEL` statement.
Create a model that will be used to describe, analyze, and retrieve data.
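A sketch of this step; the `tool_based_agent` name, the `engine` and `prompt_template` parameters, and the `input` column come from the surrounding text, while the provider choice and exact prompt wording are illustrative:

```sql
CREATE MODEL tool_based_agent
PREDICT completion
USING
    engine = 'langchain_engine',
    provider = 'openai',   -- illustrative; any supported provider works
    prompt_template = 'Answer the user''s question in a helpful way: {{input}}';
```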
Here, we create the `tool_based_agent` model using the LangChain engine, as defined in the `engine` parameter. This model answers users' questions in a helpful way, as defined in the `prompt_template` parameter, which specifies `input` as the input column when calling the model.
### Describe data
Query the model to describe data.
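A sketch of such a query, with an illustrative question passed via the `input` column:

```sql
SELECT input, completion
FROM tool_based_agent
WHERE input = 'Describe the mysql_demo_db.house_sales table';
```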
Here is the output:
To get information about the `mysql_demo_db.house_sales` table, the agent uses the Metadata tool. Then the agent prepares the response.
### Analyze data
Query the model to analyze data.
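A sketch of such a query, assuming the user asked about beds (which is what motivates the column substitution described in this section):

```sql
SELECT input, completion
FROM tool_based_agent
WHERE input = 'What is the average number of beds in the mysql_demo_db.home_rentals table?';
```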
Here is the output:
Here, the model uses the Metadata tool again to fetch the column information. As there is no `beds` column in the `mysql_demo_db.home_rentals` table, it uses the `number_of_rooms` column and writes the following query:
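The agent's query is not reproduced here; given the stated result of 1.6, it was presumably along these lines:

```sql
SELECT AVG(number_of_rooms)
FROM mysql_demo_db.home_rentals;
```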
This query returns the value of 1.6, which is then used to write an answer.
### Retrieve data
Query the model to retrieve data.
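A sketch of such a query, with an illustrative retrieval request against one of the demo tables:

```sql
SELECT input, completion
FROM tool_based_agent
WHERE input = 'Retrieve the properties from the mysql_demo_db.home_rentals table with a rental price below 1000';
```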
Here is the output:
Here, the model uses the Metadata tool again to fetch information about the table. Then, it creates and executes the following query:
On execution, the model gets this output:
Consequently, it takes the query output and writes an answer.
## Next Steps
Go to the Use Cases section to see more examples.