Push/pull a prompt
Install packages
- pip
pip install -U langchain langchainhub langchain-openai
- yarn
yarn add langchain
- npm
npm install -S langchain
Configure environment variables
If you already have LANGCHAIN_API_KEY set to your current workspace's API key from LangSmith, you can skip this step.
Otherwise, get an API key for your workspace by navigating to Settings > API Keys > Create API Key in LangSmith.
Set your environment variable:
export LANGCHAIN_HUB_API_KEY="lsv2_..."
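You can also set the key from within your Python process before pulling or pushing, for example in a notebook. A minimal sketch (the key value is a placeholder):
import os

# Make the Hub API key available to this process only; replace the
# placeholder with your LangSmith API key.
os.environ["LANGCHAIN_HUB_API_KEY"] = "lsv2_..."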
Pull a prompt and use it
You can pull your own private prompts and all of the public prompts in the LangChain Hub.
If you're using Python and encounter the error:
Identifier must be in the format of owner/repo:commit or owner/repo
you need to update your package version by running:
pip install -U langchainhub
To pull a private prompt or your own public prompt, you do not need to specify the LangChain Hub handle (though you can, if you have one set).
To pull another user's public prompt from the LangChain Hub, you need to specify the handle of the prompt's author.
- Python
from langchain import hub
# pull a private prompt
prompt = hub.pull("my-private-prompt")
print(prompt)
# pull a public prompt
public_prompt = hub.pull("efriis/my-first-prompt")
print(public_prompt)
- TypeScript
import * as hub from "langchain/hub";
import { ChatPromptTemplate } from "@langchain/core/prompts";
// pull a private prompt
const prompt = await hub.pull<ChatPromptTemplate>("my-private-prompt");
console.log(prompt);
// pull a public prompt
const publicPrompt = await hub.pull<ChatPromptTemplate>("efriis/my-first-prompt");
console.log(publicPrompt);
You can also pull a specific commit of a prompt by specifying the commit hash.
- Python
prompt = hub.pull("my-private-prompt:12344e88")
- TypeScript
const publicPrompt = await hub.pull<ChatPromptTemplate>("efriis/my-first-prompt:56489e79");
Here's an example of pulling a prompt from the LangChain Hub and using it in a runnable.
- Python
from langchain import hub
# pull a chat prompt
prompt = hub.pull("efriis/my-first-prompt")
# create a model to use it with
from langchain_openai import ChatOpenAI
model = ChatOpenAI()
# use it in a runnable
runnable = prompt | model
response = runnable.invoke({
    "profession": "biologist",
    "question": "What is special about parrots?",
})
print(response)
- TypeScript
// imports
import * as hub from "langchain/hub";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";
// pull a chat prompt
const prompt = await hub.pull<ChatPromptTemplate>("efriis/my-first-prompt");
// create a model to use it with
const model = new ChatOpenAI();
// use it in a runnable
const runnable = prompt.pipe(model);
const result = await runnable.invoke({
  profession: "biologist",
  question: "What is special about parrots?",
});
console.log(result);
Push a prompt
You can push an update to an already-existing prompt or create a new one. If you specify a prompt name that exists in your workspace, a new version of your prompt will be committed. If you specify a new prompt name, a new prompt will be created.
- Python
from langchain import hub
from langchain.prompts.chat import ChatPromptTemplate
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
hub.push("topic-joke-generator", prompt, new_repo_is_public=False)
- TypeScript
import * as hub from "langchain/hub";
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
} from "@langchain/core/prompts";
const message = HumanMessagePromptTemplate.fromTemplate(
  "tell me a joke about {topic}"
);
const prompt = ChatPromptTemplate.fromMessages([message]);
await hub.push("topic-joke-generator", prompt, { newRepoIsPublic: false });
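Pushing again under an existing prompt name commits a new version of that prompt rather than creating a new one. A minimal Python sketch of this (the revised template text is only an illustration):
from langchain import hub
from langchain.prompts.chat import ChatPromptTemplate

# Revising the template and pushing under the same name commits a new
# version of "topic-joke-generator" instead of creating a new prompt.
updated_prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
hub.push("topic-joke-generator", updated_prompt)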
"Prompt" used to be called "repo", so any references to "repo" in the code are referring to a prompt.