Common Questions About Chatbots#

Chatbots use an LLM to hold conversations. The sections below walk through some common questions that come up when building one.

How to Manage Conversation Memory#

What sets a chatbot apart is that it can carry the content of earlier turns into the user's current context. We covered several ways to do this before, but none of them used LCEL expressions. Below is an LCEL-based implementation.

Setup#

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

chat = ChatOpenAI(model="gpt-3.5-turbo-0125")
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "你是一个AI小助手,你要尽可能的回答用户的问题",
        ),
        ("placeholder", "{messages}"),
    ]
)

chain = prompt | chat

ai_msg = chain.invoke(
    {
        "messages": [
            (
                "human",
                "你是什么",
            ),
            ("ai", "我是AI小助手"),
            ("human", "刚才问你什么了?"),
        ],
    }
)
print(ai_msg.content)
你问我"你是什么"

ChatHistory#

LangChain provides ChatMessageHistory for recording conversation history. Basic usage looks like this:

from langchain_community.chat_message_histories import ChatMessageHistory

demo_ephemeral_chat_history = ChatMessageHistory()
demo_ephemeral_chat_history.add_user_message(
    "你是一个AI小助手,你要尽可能的回答用户的问题"
)
demo_ephemeral_chat_history.add_user_message(
    "你是什么?"
)
demo_ephemeral_chat_history.add_ai_message("我是AI小助手")
demo_ephemeral_chat_history.messages
[HumanMessage(content='你是一个AI小助手,你要尽可能的回答用户的问题'),
 HumanMessage(content='你是什么?'),
 AIMessage(content='我是AI小助手')]

Also, when saving the AI response you don't need to extract its content manually: add_ai_message wraps that step, so the response object can be saved directly.

demo_ephemeral_chat_history = ChatMessageHistory()
input1 = "你是一个AI小助手,你要尽可能的回答用户的问题"
demo_ephemeral_chat_history.add_user_message(input1)

response = chain.invoke(
    {
        "messages": demo_ephemeral_chat_history.messages,
    }
)

demo_ephemeral_chat_history.add_ai_message(response)

input2 = "刚才问你什么了?"

demo_ephemeral_chat_history.add_user_message(input2)

chain.invoke(
    {
        "messages": demo_ephemeral_chat_history.messages,
    }
)
AIMessage(content='你问我:“你是一个AI小助手,你要尽可能的回答用户的问题”,我回答说:“没问题!请问你有什么问题需要帮助的吗?”', response_metadata={'token_usage': {'completion_tokens': 52, 'prompt_tokens': 92, 'total_tokens': 144}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-b8985b10-2518-413c-b605-1f005c522983-0', usage_metadata={'input_tokens': 92, 'output_tokens': 52, 'total_tokens': 144})

Automatically Managing Conversation History#

LangChain provides RunnableWithMessageHistory, a class that manages conversation history automatically, so you no longer have to do it by hand as above.

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "你是一个AI小助手,你要尽可能的回答用户的问题",
        ),
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
    ]
)

chain = prompt | chat
from langchain_core.runnables.history import RunnableWithMessageHistory

demo_ephemeral_chat_history_for_chain = ChatMessageHistory()

chain_with_message_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: demo_ephemeral_chat_history_for_chain,
    input_messages_key="input",
    history_messages_key="chat_history",
)
chain_with_message_history.invoke(
    {"input": "我饿了"},
    {"configurable": {"session_id": "unused"}},
)
AIMessage(content='那你可以考虑吃点东西来填饱肚子,可以选择健康的食物,比如水果、蔬菜、坚果或者一些轻食。如果需要我帮你找一些简单的食谱或者外卖平台的推荐,也可以告诉我你的口味偏好。希望你能找到满足的食物!', response_metadata={'token_usage': {'completion_tokens': 115, 'prompt_tokens': 36, 'total_tokens': 151}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-d5c40320-035f-4a0d-97ae-e8cf3354b571-0', usage_metadata={'input_tokens': 36, 'output_tokens': 115, 'total_tokens': 151})
chain_with_message_history.invoke(
    {"input": "刚才问你什么了?"}, {"configurable": {"session_id": "unused"}}
)
AIMessage(content='你说你饿了,我建议你吃点东西来填饱肚子。如果需要我再帮你查找一些食谱或者外卖推荐,随时告诉我哦!', response_metadata={'token_usage': {'completion_tokens': 59, 'prompt_tokens': 170, 'total_tokens': 229}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-c70a506e-9db4-4281-8bc9-31b8db55fb18-0', usage_metadata={'input_tokens': 170, 'output_tokens': 59, 'total_tokens': 229})
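In the example above, the `lambda session_id: ...` always returns the same history object, so every `session_id` shares one conversation. To isolate sessions, the factory can key histories by `session_id`. A minimal pure-Python sketch, independent of LangChain and using a hypothetical `InMemoryHistory` stand-in for `ChatMessageHistory`:

```python
# Hypothetical in-memory store keyed by session_id. In real code the factory
# passed to RunnableWithMessageHistory would return a ChatMessageHistory here.
class InMemoryHistory:
    def __init__(self):
        self.messages = []

    def add_message(self, role, content):
        self.messages.append((role, content))

store = {}

def get_session_history(session_id):
    # Each session_id gets its own isolated history object.
    if session_id not in store:
        store[session_id] = InMemoryHistory()
    return store[session_id]
```

With this factory, two users invoking the chain with different `session_id` values no longer see each other's messages.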

Managing Conversation History Size#

As the conversation goes on, the history keeps growing. Left unchecked, it will eventually exceed the model's context window, and an overly long context can also distract the model.
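To get a feel for the problem, a rough back-of-the-envelope estimate shows how the prompt grows with the history. The ~4-characters-per-token ratio below is a loose heuristic for English text, not a real tokenizer:

```python
# Loose heuristic: ~4 characters per token for English text.
# Use the model's actual tokenizer (e.g. tiktoken) when accuracy matters.
def rough_token_estimate(messages):
    """Estimate total tokens across a list of message strings."""
    return sum(len(m) for m in messages) // 4

history = ["hello, I'm Ethan", "Hello!", "how is it going?", "great"]
estimate = rough_token_estimate(history)
```

Every turn appends two messages (human and AI), so the estimate grows linearly until trimming or summarization steps in.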

This topic is covered in detail at https://python.langchain.com/v0.2/docs/how_to/trim_messages/; here I highlight two approaches.

Trimming messages#

This approach preprocesses the message history right before it is passed to the model, using LCEL syntax:

demo_ephemeral_chat_history = ChatMessageHistory()
demo_ephemeral_chat_history.add_user_message("hello,我是Ethan")
demo_ephemeral_chat_history.add_ai_message("Hello!")
demo_ephemeral_chat_history.add_user_message("今儿怎么样?")
demo_ephemeral_chat_history.add_ai_message("好着呢?")

demo_ephemeral_chat_history.messages
[HumanMessage(content='hello,我是Ethan'),
 AIMessage(content='Hello!'),
 HumanMessage(content='今儿怎么样?'),
 AIMessage(content='好着呢?')]
from operator import itemgetter

from langchain_core.messages import trim_messages
from langchain_core.runnables import RunnablePassthrough
from langchain.globals import set_debug
set_debug(True)

# strategy="last" with token_counter=len keeps only the last 2 messages
trimmer = trim_messages(strategy="last", max_tokens=2, token_counter=len)


chain_with_trimming = (
    RunnablePassthrough.assign(chat_history=itemgetter("chat_history") | trimmer)
    | prompt
    | chat
)

chain_with_trimmed_history = RunnableWithMessageHistory(
    chain_with_trimming,
    lambda session_id: demo_ephemeral_chat_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)
chain_with_trimmed_history.invoke(
    {"input": "俺叫什么"},
    {"configurable": {"session_id": "unused"}},
)
[chain/start] [chain:RunnableWithMessageHistory] Entering Chain run with input:
{
  "input": "俺叫什么"
}
[chain/start] [chain:RunnableWithMessageHistory > chain:insert_history] Entering Chain run with input:
{
  "input": "俺叫什么"
}
[chain/start] [chain:RunnableWithMessageHistory > chain:insert_history > chain:RunnableParallel<chat_history>] Entering Chain run with input:
{
  "input": "俺叫什么"
}
[chain/start] [chain:RunnableWithMessageHistory > chain:insert_history > chain:RunnableParallel<chat_history> > chain:load_history] Entering Chain run with input:
{
  "input": "俺叫什么"
}
[chain/end] [chain:RunnableWithMessageHistory > chain:insert_history > chain:RunnableParallel<chat_history> > chain:load_history] [0ms] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableWithMessageHistory > chain:insert_history > chain:RunnableParallel<chat_history>] [1ms] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableWithMessageHistory > chain:insert_history] [4ms] Exiting Chain run with output:
[outputs]
[chain/start] [chain:RunnableWithMessageHistory > chain:RunnableBranch] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableWithMessageHistoryInAsyncMode] Entering Chain run with input:
[inputs]
[chain/end] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableWithMessageHistoryInAsyncMode] [0ms] Exiting Chain run with output:
{
  "output": false
}
[chain/start] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > chain:RunnableAssign<chat_history>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > chain:RunnableAssign<chat_history> > chain:RunnableParallel<chat_history>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > chain:RunnableAssign<chat_history> > chain:RunnableParallel<chat_history> > chain:RunnableSequence] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > chain:RunnableAssign<chat_history> > chain:RunnableParallel<chat_history> > chain:RunnableSequence > chain:RunnableLambda] Entering Chain run with input:
[inputs]
[chain/end] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > chain:RunnableAssign<chat_history> > chain:RunnableParallel<chat_history> > chain:RunnableSequence > chain:RunnableLambda] [0ms] Exiting Chain run with output:
[outputs]
[chain/start] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > chain:RunnableAssign<chat_history> > chain:RunnableParallel<chat_history> > chain:RunnableSequence > chain:trim_messages] Entering Chain run with input:
[inputs]
[chain/end] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > chain:RunnableAssign<chat_history> > chain:RunnableParallel<chat_history> > chain:RunnableSequence > chain:trim_messages] [1ms] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > chain:RunnableAssign<chat_history> > chain:RunnableParallel<chat_history> > chain:RunnableSequence] [2ms] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > chain:RunnableAssign<chat_history> > chain:RunnableParallel<chat_history>] [3ms] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > chain:RunnableAssign<chat_history>] [4ms] Exiting Chain run with output:
[outputs]
[chain/start] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > prompt:ChatPromptTemplate] Entering Prompt run with input:
[inputs]
[chain/end] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
[outputs]
[llm/start] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: 你是一个AI小助手,你要尽可能的回答用户的问题\nHuman: 今儿怎么样?\nAI: 好着呢?\nHuman: 俺叫什么"
  ]
}
[llm/end] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > llm:ChatOpenAI] [2.22s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "您可以告诉我您的名字,那我就知道您叫什么啦。",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "您可以告诉我您的名字,那我就知道您叫什么啦。",
            "response_metadata": {
              "token_usage": {
                "completion_tokens": 26,
                "prompt_tokens": 62,
                "total_tokens": 88
              },
              "model_name": "gpt-3.5-turbo-0125",
              "system_fingerprint": null,
              "finish_reason": "stop",
              "logprobs": null
            },
            "type": "ai",
            "id": "run-7e386bf0-ba26-486e-92b1-622d8abfe329-0",
            "usage_metadata": {
              "input_tokens": 62,
              "output_tokens": 26,
              "total_tokens": 88
            },
            "tool_calls": [],
            "invalid_tool_calls": []
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 26,
      "prompt_tokens": 62,
      "total_tokens": 88
    },
    "model_name": "gpt-3.5-turbo-0125",
    "system_fingerprint": null
  },
  "run": null
}
[chain/end] [chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence] [2.22s] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableWithMessageHistory > chain:RunnableBranch] [2.23s] Exiting Chain run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "schema",
    "messages",
    "AIMessage"
  ],
  "kwargs": {
    "content": "您可以告诉我您的名字,那我就知道您叫什么啦。",
    "response_metadata": {
      "token_usage": {
        "completion_tokens": 26,
        "prompt_tokens": 62,
        "total_tokens": 88
      },
      "model_name": "gpt-3.5-turbo-0125",
      "system_fingerprint": null,
      "finish_reason": "stop",
      "logprobs": null
    },
    "type": "ai",
    "id": "run-7e386bf0-ba26-486e-92b1-622d8abfe329-0",
    "usage_metadata": {
      "input_tokens": 62,
      "output_tokens": 26,
      "total_tokens": 88
    },
    "tool_calls": [],
    "invalid_tool_calls": []
  }
}
[chain/end] [chain:RunnableWithMessageHistory] [2.25s] Exiting Chain run with output:
[outputs]
AIMessage(content='您可以告诉我您的名字,那我就知道您叫什么啦。', response_metadata={'token_usage': {'completion_tokens': 26, 'prompt_tokens': 62, 'total_tokens': 88}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-7e386bf0-ba26-486e-92b1-622d8abfe329-0', usage_metadata={'input_tokens': 62, 'output_tokens': 26, 'total_tokens': 88})
# The stored messages themselves are not modified; they are only preprocessed before being passed to the model.
demo_ephemeral_chat_history.messages
[HumanMessage(content='hello,我是Ethan'),
 AIMessage(content='Hello!'),
 HumanMessage(content='今儿怎么样?'),
 AIMessage(content='好着呢?'),
 HumanMessage(content='俺叫什么'),
 AIMessage(content='您可以告诉我您的名字,那我就知道您叫什么啦。', response_metadata={'token_usage': {'completion_tokens': 26, 'prompt_tokens': 62, 'total_tokens': 88}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-7e386bf0-ba26-486e-92b1-622d8abfe329-0', usage_metadata={'input_tokens': 62, 'output_tokens': 26, 'total_tokens': 88})]

As the debug log above shows, trim_messages handled the message trimming for us; its docstring documents the options in detail.
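Conceptually, the `strategy="last"` trimming used above boils down to keeping the most recent entries. A minimal pure-Python sketch, independent of LangChain (the real trim_messages also supports token-based budgets and preserving the system message):

```python
# Pure-Python sketch of the "last" strategy: keep only the most recent entries.
def trim_last(messages, max_messages):
    """Return the trailing max_messages entries of the history."""
    return messages[-max_messages:] if max_messages > 0 else []

history = ["human: hello, I'm Ethan", "ai: Hello!", "human: how's today?", "ai: great"]
trimmed = trim_last(history, 2)
```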

Summary memory#

Use the LLM to summarize the conversation history, then send that summary to the model as the history.

demo_ephemeral_chat_history = ChatMessageHistory()
demo_ephemeral_chat_history.add_user_message("hello,我是Ethan")
demo_ephemeral_chat_history.add_ai_message("Hello!")
demo_ephemeral_chat_history.add_user_message("今儿怎么样?")
demo_ephemeral_chat_history.add_ai_message("好着呢?")

demo_ephemeral_chat_history.messages
[HumanMessage(content='hello,我是Ethan'),
 AIMessage(content='Hello!'),
 HumanMessage(content='今儿怎么样?'),
 AIMessage(content='好着呢?')]
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "你是一个乐于助人的助手。尽力回答所有问题。提供的聊天记录包含与您交谈的用户相关的事实。",
        ),
        ("placeholder", "{chat_history}"),
        ("user", "{input}"),
    ]
)

chain = prompt | chat

chain_with_message_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: demo_ephemeral_chat_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)
def summarize_messages(chain_input):
    stored_messages = demo_ephemeral_chat_history.messages
    if len(stored_messages) == 0:
        return False
    summarization_prompt = ChatPromptTemplate.from_messages(
        [
            ("placeholder", "{chat_history}"),
            (
                "user",
                "将上述聊天消息提炼成一条总结信息。尽可能包含详细的具体内容。",
            ),
        ]
    )
    summarization_chain = summarization_prompt | chat

    summary_message = summarization_chain.invoke({"chat_history": stored_messages})

    demo_ephemeral_chat_history.clear()

    demo_ephemeral_chat_history.add_message(summary_message)

    return True


chain_with_summarization = (
    RunnablePassthrough.assign(messages_summarized=summarize_messages)
    | chain_with_message_history
)
chain_with_summarization.invoke(
    {"input": "刚才我说我叫什么呀?"},
    {"configurable": {"session_id": "unused"}},
)
[chain/start] [chain:RunnableSequence] Entering Chain run with input:
{
  "input": "刚才我说我叫什么呀?"
}
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<messages_summarized>] Entering Chain run with input:
{
  "input": "刚才我说我叫什么呀?"
}
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<messages_summarized> > chain:RunnableParallel<messages_summarized>] Entering Chain run with input:
{
  "input": "刚才我说我叫什么呀?"
}
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<messages_summarized> > chain:RunnableParallel<messages_summarized> > chain:summarize_messages] Entering Chain run with input:
{
  "input": "刚才我说我叫什么呀?"
}
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<messages_summarized> > chain:RunnableParallel<messages_summarized> > chain:summarize_messages > chain:RunnableSequence] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<messages_summarized> > chain:RunnableParallel<messages_summarized> > chain:summarize_messages > chain:RunnableSequence > prompt:ChatPromptTemplate] Entering Prompt run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<messages_summarized> > chain:RunnableParallel<messages_summarized> > chain:summarize_messages > chain:RunnableSequence > prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
[outputs]
[llm/start] [chain:RunnableSequence > chain:RunnableAssign<messages_summarized> > chain:RunnableParallel<messages_summarized> > chain:summarize_messages > chain:RunnableSequence > llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "AI: Ethan与对方进行简短的问候交流,询问对方的情况,表明自己状态良好。\nHuman: 俺叫什么名字\nAI: 抱歉,我无法知道您的真实姓名。您可以告诉我您想让我称呼您什么名字吗?\nHuman: 将上述聊天消息提炼成一条总结信息。尽可能包含详细的具体内容。"
  ]
}
[llm/end] [chain:RunnableSequence > chain:RunnableAssign<messages_summarized> > chain:RunnableParallel<messages_summarized> > chain:summarize_messages > chain:RunnableSequence > llm:ChatOpenAI] [2.54s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "Ethan与对方进行简短的问候交流,询问对方的情况,表明自己状态良好。对方问\"俺叫什么名字\",Ethan回复说无法知道对方的真实姓名,询问对方是否告诉他想要被称呼的名字。",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "Ethan与对方进行简短的问候交流,询问对方的情况,表明自己状态良好。对方问\"俺叫什么名字\",Ethan回复说无法知道对方的真实姓名,询问对方是否告诉他想要被称呼的名字。",
            "response_metadata": {
              "token_usage": {
                "completion_tokens": 88,
                "prompt_tokens": 135,
                "total_tokens": 223
              },
              "model_name": "gpt-3.5-turbo-0125",
              "system_fingerprint": null,
              "finish_reason": "stop",
              "logprobs": null
            },
            "type": "ai",
            "id": "run-5e21357c-e67b-4e1b-920b-f132b6fae6fd-0",
            "usage_metadata": {
              "input_tokens": 135,
              "output_tokens": 88,
              "total_tokens": 223
            },
            "tool_calls": [],
            "invalid_tool_calls": []
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 88,
      "prompt_tokens": 135,
      "total_tokens": 223
    },
    "model_name": "gpt-3.5-turbo-0125",
    "system_fingerprint": null
  },
  "run": null
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<messages_summarized> > chain:RunnableParallel<messages_summarized> > chain:summarize_messages > chain:RunnableSequence] [2.54s] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<messages_summarized> > chain:RunnableParallel<messages_summarized> > chain:summarize_messages] [2.54s] Exiting Chain run with output:
{
  "output": true
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<messages_summarized> > chain:RunnableParallel<messages_summarized>] [2.54s] Exiting Chain run with output:
{
  "messages_summarized": true
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<messages_summarized>] [2.55s] Exiting Chain run with output:
{
  "input": "刚才我说我叫什么呀?",
  "messages_summarized": true
}
[chain/start] [chain:RunnableSequence > chain:RunnableWithMessageHistory] Entering Chain run with input:
{
  "input": "刚才我说我叫什么呀?",
  "messages_summarized": true
}
[chain/start] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:insert_history] Entering Chain run with input:
{
  "input": "刚才我说我叫什么呀?",
  "messages_summarized": true
}
[chain/start] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:insert_history > chain:RunnableParallel<chat_history>] Entering Chain run with input:
{
  "input": "刚才我说我叫什么呀?",
  "messages_summarized": true
}
[chain/start] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:insert_history > chain:RunnableParallel<chat_history> > chain:load_history] Entering Chain run with input:
{
  "input": "刚才我说我叫什么呀?",
  "messages_summarized": true
}
[chain/end] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:insert_history > chain:RunnableParallel<chat_history> > chain:load_history] [1ms] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:insert_history > chain:RunnableParallel<chat_history>] [3ms] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:insert_history] [9ms] Exiting Chain run with output:
[outputs]
[chain/start] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:RunnableBranch] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableWithMessageHistoryInAsyncMode] Entering Chain run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableWithMessageHistoryInAsyncMode] [0ms] Exiting Chain run with output:
{
  "output": false
}
[chain/start] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > prompt:ChatPromptTemplate] Entering Prompt run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
[outputs]
[llm/start] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: 你是一个AI小助手,你要尽可能的回答用户的问题\nAI: Ethan与对方进行简短的问候交流,询问对方的情况,表明自己状态良好。对方问\"俺叫什么名字\",Ethan回复说无法知道对方的真实姓名,询问对方是否告诉他想要被称呼的名字。\nHuman: 刚才我说我叫什么呀?"
  ]
}
[llm/end] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence > llm:ChatOpenAI] [3.21s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "对不起,我无法保存用户的个人信息或对话历史。请问您可以告诉我您想被称呼的名字吗?我会尽量帮助您的。",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "对不起,我无法保存用户的个人信息或对话历史。请问您可以告诉我您想被称呼的名字吗?我会尽量帮助您的。",
            "response_metadata": {
              "token_usage": {
                "completion_tokens": 54,
                "prompt_tokens": 139,
                "total_tokens": 193
              },
              "model_name": "gpt-3.5-turbo-0125",
              "system_fingerprint": "fp_811936bd4f",
              "finish_reason": "stop",
              "logprobs": null
            },
            "type": "ai",
            "id": "run-4eb2840d-9ad8-46ff-b1d2-05ef59bdea1b-0",
            "usage_metadata": {
              "input_tokens": 139,
              "output_tokens": 54,
              "total_tokens": 193
            },
            "tool_calls": [],
            "invalid_tool_calls": []
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 54,
      "prompt_tokens": 139,
      "total_tokens": 193
    },
    "model_name": "gpt-3.5-turbo-0125",
    "system_fingerprint": "fp_811936bd4f"
  },
  "run": null
}
[chain/end] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:RunnableBranch > chain:RunnableSequence] [3.21s] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableSequence > chain:RunnableWithMessageHistory > chain:RunnableBranch] [3.37s] Exiting Chain run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "schema",
    "messages",
    "AIMessage"
  ],
  "kwargs": {
    "content": "对不起,我无法保存用户的个人信息或对话历史。请问您可以告诉我您想被称呼的名字吗?我会尽量帮助您的。",
    "response_metadata": {
      "token_usage": {
        "completion_tokens": 54,
        "prompt_tokens": 139,
        "total_tokens": 193
      },
      "model_name": "gpt-3.5-turbo-0125",
      "system_fingerprint": "fp_811936bd4f",
      "finish_reason": "stop",
      "logprobs": null
    },
    "type": "ai",
    "id": "run-4eb2840d-9ad8-46ff-b1d2-05ef59bdea1b-0",
    "usage_metadata": {
      "input_tokens": 139,
      "output_tokens": 54,
      "total_tokens": 193
    },
    "tool_calls": [],
    "invalid_tool_calls": []
  }
}
[chain/end] [chain:RunnableSequence > chain:RunnableWithMessageHistory] [3.39s] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableSequence] [5.97s] Exiting Chain run with output:
[outputs]
AIMessage(content='对不起,我无法保存用户的个人信息或对话历史。请问您可以告诉我您想被称呼的名字吗?我会尽量帮助您的。', response_metadata={'token_usage': {'completion_tokens': 54, 'prompt_tokens': 139, 'total_tokens': 193}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_811936bd4f', 'finish_reason': 'stop', 'logprobs': None}, id='run-4eb2840d-9ad8-46ff-b1d2-05ef59bdea1b-0', usage_metadata={'input_tokens': 139, 'output_tokens': 54, 'total_tokens': 193})

Although the model failed to answer the question, the debug log shows that the history was indeed summarized.
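The summarize-and-replace pattern above can be sketched in plain Python. The threshold parameter is an illustrative assumption; the LangChain version above summarizes on every turn:

```python
# Sketch of summarize-and-replace: once the history passes a threshold,
# collapse everything into a single summary entry.
def compact_history(messages, summarize, max_messages=4):
    """messages: list of strings; summarize: callable that condenses them."""
    if len(messages) <= max_messages:
        return list(messages)
    return [summarize(messages)]

fake_summarize = lambda msgs: f"summary of {len(msgs)} messages"
compacted = compact_history(["a", "b", "c", "d", "e"], fake_summarize)
```

In production, `summarize` would be the summarization chain shown above, and repeated compaction means older details degrade gradually rather than being cut off abruptly.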

How to Do Retrieval?#

Retrieval is a very useful capability for chatbots. It lets the model pull in up-to-date or domain-specific knowledge, and then use its reasoning ability to answer questions grounded in that knowledge.

Setup#

from langchain_openai import ChatOpenAI

chat = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0.2)

Creating a Retriever#

from langchain_community.document_loaders import WebBaseLoader

# Load the web page
loader = WebBaseLoader("https://docs.smith.langchain.com/overview")
data = loader.load()
from langchain_text_splitters import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)

# Vectorize the chunks
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())
# retrieve the 4 most relevant chunks
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

docs = retriever.invoke("Can LangSmith help test my LLM applications?")

docs
[Document(page_content='Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith\u200bPythonTypeScriptpip install -U', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'}),
 Document(page_content='Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith\u200bPythonTypeScriptpip install -U', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'}),
 Document(page_content='Get started with LangSmith | 🦜️🛠️ LangSmith', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'}),
 Document(page_content='Get started with LangSmith | 🦜️🛠️ LangSmith', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'})]
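Note that the retrieved list above contains duplicate chunks. If that wastes your prompt budget, one simple option is to deduplicate by page content before stuffing the documents into the prompt. A sketch with a stand-in `Doc` class (real LangChain `Document` objects expose `page_content` the same way):

```python
from dataclasses import dataclass

# Stand-in for langchain_core.documents.Document.
@dataclass
class Doc:
    page_content: str

def dedupe_by_content(docs):
    """Keep the first occurrence of each page_content, preserving order."""
    seen = set()
    unique = []
    for d in docs:
        if d.page_content not in seen:
            seen.add(d.page_content)
            unique.append(d)
    return unique

docs = [Doc("chunk A"), Doc("chunk A"), Doc("chunk B")]
unique_docs = dedupe_by_content(docs)
```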

Document chains#

The steps here are similar to what we did before: build a document chain. The create_stuff_documents_chain helper stuffs all the retrieved documents into the prompt.

from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

SYSTEM_TEMPLATE = """
Answer the user's questions based on the below context. 
If the context doesn't contain any relevant information to the question, don't make something up and just say "I don't know":

<context>
{context}
</context>
"""

question_answering_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            SYSTEM_TEMPLATE,
        ),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

document_chain = create_stuff_documents_chain(chat, question_answering_prompt)
from langchain_core.messages import HumanMessage

document_chain.invoke(
    {
        "context": docs,
        "messages": [
            HumanMessage(content="Can LangSmith help test my LLM applications?")
        ],
    }
)
[chain/start] [chain:stuff_documents_chain] Entering Chain run with input:
[inputs]
[chain/start] [chain:stuff_documents_chain > chain:format_inputs] Entering Chain run with input:
[inputs]
[chain/start] [chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context>] Entering Chain run with input:
[inputs]
[chain/start] [chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context> > chain:format_docs] Entering Chain run with input:
[inputs]
[chain/end] [chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context> > chain:format_docs] [1ms] Exiting Chain run with output:
{
  "output": "Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nGet started with LangSmith | 🦜️🛠️ LangSmith\n\nGet started with LangSmith | 🦜️🛠️ LangSmith"
}
[chain/end] [chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context>] [3ms] Exiting Chain run with output:
{
  "context": "Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nGet started with LangSmith | 🦜️🛠️ LangSmith\n\nGet started with LangSmith | 🦜️🛠️ LangSmith"
}
[chain/end] [chain:stuff_documents_chain > chain:format_inputs] [5ms] Exiting Chain run with output:
[outputs]
[chain/start] [chain:stuff_documents_chain > prompt:ChatPromptTemplate] Entering Prompt run with input:
[inputs]
[chain/end] [chain:stuff_documents_chain > prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
[outputs]
[llm/start] [chain:stuff_documents_chain > llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: \nAnswer the user's questions based on the below context. \nIf the context doesn't contain any relevant information to the question, don't make something up and just say \"I don't know\":\n\n<context>\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nGet started with LangSmith | 🦜️🛠️ LangSmith\n\nGet started with LangSmith | 🦜️🛠️ LangSmith\n</context>\n\nHuman: Can LangSmith help test my LLM applications?"
  ]
}
[llm/end] [chain:stuff_documents_chain > llm:ChatOpenAI] [2.08s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "Yes, LangSmith allows you to closely monitor and evaluate your LLM applications, which can help in testing them effectively.",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "Yes, LangSmith allows you to closely monitor and evaluate your LLM applications, which can help in testing them effectively.",
            "response_metadata": {
              "token_usage": {
                "completion_tokens": 24,
                "prompt_tokens": 304,
                "total_tokens": 328
              },
              "model_name": "gpt-3.5-turbo-1106",
              "system_fingerprint": "fp_276aa25277",
              "finish_reason": "stop",
              "logprobs": null
            },
            "type": "ai",
            "id": "run-bc3449f0-d349-4850-900d-83c4b06e964a-0",
            "usage_metadata": {
              "input_tokens": 304,
              "output_tokens": 24,
              "total_tokens": 328
            },
            "tool_calls": [],
            "invalid_tool_calls": []
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 24,
      "prompt_tokens": 304,
      "total_tokens": 328
    },
    "model_name": "gpt-3.5-turbo-1106",
    "system_fingerprint": "fp_276aa25277"
  },
  "run": null
}
[chain/start] [chain:stuff_documents_chain > parser:StrOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [chain:stuff_documents_chain > parser:StrOutputParser] [1ms] Exiting Parser run with output:
{
  "output": "Yes, LangSmith allows you to closely monitor and evaluate your LLM applications, which can help in testing them effectively."
}
[chain/end] [chain:stuff_documents_chain] [2.10s] Exiting Chain run with output:
{
  "output": "Yes, LangSmith allows you to closely monitor and evaluate your LLM applications, which can help in testing them effectively."
}
'Yes, LangSmith allows you to closely monitor and evaluate your LLM applications, which can help in testing them effectively.'

Retrieval chains#

Now combine the document chain with the retriever to form a retrieval chain.

from typing import Dict

from langchain_core.runnables import RunnablePassthrough


def parse_retriever_input(params: Dict):
    return params["messages"][-1].content



# Take the last message from the input "messages" list and pass its content
# to the retriever; then feed the retrieved documents into the document chain.
retrieval_chain = RunnablePassthrough.assign(
    context=parse_retriever_input | retriever,  # retrieval step: fills "context"
).assign(
    answer=document_chain,  # answering step: fills "answer"
)
retrieval_chain.invoke(
    {
        "messages": [
            HumanMessage(content="Can LangSmith help test my LLM applications?")
        ],
    }
)
[chain/start] [chain:RunnableSequence] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:RunnableSequence] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:RunnableSequence > chain:parse_retriever_input] Entering Chain run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:RunnableSequence > chain:parse_retriever_input] [1ms] Exiting Chain run with output:
{
  "output": "Can LangSmith help test my LLM applications?"
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:RunnableSequence] [1.36s] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context>] [1.36s] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context>] [1.36s] Exiting Chain run with output:
[outputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context> > chain:format_docs] Entering Chain run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context> > chain:format_docs] [1ms] Exiting Chain run with output:
{
  "output": "Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nGet started with LangSmith | 🦜️🛠️ LangSmith\n\nGet started with LangSmith | 🦜️🛠️ LangSmith"
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context>] [2ms] Exiting Chain run with output:
{
  "context": "Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nGet started with LangSmith | 🦜️🛠️ LangSmith\n\nGet started with LangSmith | 🦜️🛠️ LangSmith"
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs] [4ms] Exiting Chain run with output:
[outputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > prompt:ChatPromptTemplate] Entering Prompt run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
[outputs]
[llm/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: \nAnswer the user's questions based on the below context. \nIf the context doesn't contain any relevant information to the question, don't make something up and just say \"I don't know\":\n\n<context>\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nGet started with LangSmith | 🦜️🛠️ LangSmith\n\nGet started with LangSmith | 🦜️🛠️ LangSmith\n</context>\n\nHuman: Can LangSmith help test my LLM applications?"
  ]
}
[llm/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > llm:ChatOpenAI] [2.76s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "LangSmith is a platform for building production-grade LLM applications and allows you to closely monitor and evaluate your application. While it focuses on monitoring and evaluation, it does not specifically mention testing capabilities.",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "LangSmith is a platform for building production-grade LLM applications and allows you to closely monitor and evaluate your application. While it focuses on monitoring and evaluation, it does not specifically mention testing capabilities.",
            "response_metadata": {
              "token_usage": {
                "completion_tokens": 39,
                "prompt_tokens": 310,
                "total_tokens": 349
              },
              "model_name": "gpt-3.5-turbo-1106",
              "system_fingerprint": "fp_811936bd4f",
              "finish_reason": "stop",
              "logprobs": null
            },
            "type": "ai",
            "id": "run-71cc28a0-5c22-4d1e-8a4d-bd82908df76b-0",
            "usage_metadata": {
              "input_tokens": 310,
              "output_tokens": 39,
              "total_tokens": 349
            },
            "tool_calls": [],
            "invalid_tool_calls": []
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 39,
      "prompt_tokens": 310,
      "total_tokens": 349
    },
    "model_name": "gpt-3.5-turbo-1106",
    "system_fingerprint": "fp_811936bd4f"
  },
  "run": null
}
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > parser:StrOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > parser:StrOutputParser] [2ms] Exiting Parser run with output:
{
  "output": "LangSmith is a platform for building production-grade LLM applications and allows you to closely monitor and evaluate your application. While it focuses on monitoring and evaluation, it does not specifically mention testing capabilities."
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain] [2.77s] Exiting Chain run with output:
{
  "output": "LangSmith is a platform for building production-grade LLM applications and allows you to closely monitor and evaluate your application. While it focuses on monitoring and evaluation, it does not specifically mention testing capabilities."
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer>] [2.78s] Exiting Chain run with output:
{
  "answer": "LangSmith is a platform for building production-grade LLM applications and allows you to closely monitor and evaluate your application. While it focuses on monitoring and evaluation, it does not specifically mention testing capabilities."
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer>] [2.79s] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableSequence] [4.17s] Exiting Chain run with output:
[outputs]
{'messages': [HumanMessage(content='Can LangSmith help test my LLM applications?')],
 'context': [Document(page_content='Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith\u200bPythonTypeScriptpip install -U', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'}),
  Document(page_content='Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith\u200bPythonTypeScriptpip install -U', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'}),
  Document(page_content='Get started with LangSmith | 🦜️🛠️ LangSmith', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'}),
  Document(page_content='Get started with LangSmith | 🦜️🛠️ LangSmith', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'})],
 'answer': 'LangSmith is a platform for building production-grade LLM applications and allows you to closely monitor and evaluate your application. While it focuses on monitoring and evaluation, it does not specifically mention testing capabilities.'}
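The data flow of the two `.assign` calls above can be sketched in plain Python (no LangChain required): each step computes a new key from the whole input dict and merges it back in, so the final state carries `messages`, `context`, and `answer` together. The `fake_retriever` and `fake_answerer` below are hypothetical stand-ins for the real retriever and document chain.

```python
def assign(inputs, **fns):
    """Mimic RunnablePassthrough.assign: compute new keys from the whole
    input dict and merge them in, keeping the original keys."""
    return {**inputs, **{key: fn(inputs) for key, fn in fns.items()}}

# Hypothetical stand-ins for the retriever and the document chain:
fake_retriever = lambda d: ["doc about: " + d["messages"][-1]]
fake_answerer = lambda d: f"answer based on {len(d['context'])} document(s)"

state = {"messages": ["Can LangSmith help test my LLM applications?"]}
state = assign(state, context=fake_retriever)  # adds "context"
state = assign(state, answer=fake_answerer)    # adds "answer"
# state now holds all three keys: messages, context, answer
```

This is why the final output above contains the original `messages` alongside the retrieved `context` and the generated `answer`.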

Query transformation#

When you query a retriever directly, the retrieval itself has no conversational context. Asking "What does LangSmith do?" returns relevant documents, but a follow-up like "Tell me more!" only makes sense given the conversation history; sent to the retriever on its own, it matches nothing about LangSmith and returns irrelevant documents. This is where query transformation comes in.

retriever.invoke("Tell me more!")  # returns documents unrelated to the follow-up
[Document(page_content='Get started with LangSmith | 🦜️🛠️ LangSmith', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'}),
 Document(page_content='Get started with LangSmith | 🦜️🛠️ LangSmith', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'}),
 Document(page_content='result.choices[0].message.contentpipeline("Hello, world!")# Out:  Hello there! How can I assist you today?import { OpenAI } from "openai";import { traceable } from "langsmith/traceable";import { wrapOpenAI } from "langsmith/wrappers";// Auto-trace LLM calls in-contextconst client = wrapOpenAI(new OpenAI());// Auto-trace this functionconst pipeline = traceable(async (user_input) => {    const result = await client.chat.completions.create({        messages: [{ role: "user", content: user_input', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'}),
 Document(page_content='result.choices[0].message.contentpipeline("Hello, world!")# Out:  Hello there! How can I assist you today?import { OpenAI } from "openai";import { traceable } from "langsmith/traceable";import { wrapOpenAI } from "langsmith/wrappers";// Auto-trace LLM calls in-contextconst client = wrapOpenAI(new OpenAI());// Auto-trace this functionconst pipeline = traceable(async (user_input) => {    const result = await client.chat.completions.create({        messages: [{ role: "user", content: user_input', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'})]

To solve this, pass the whole conversation to the LLM and have it generate a standalone search query.

from langchain_core.messages import AIMessage, HumanMessage

query_transform_prompt = ChatPromptTemplate.from_messages(
    [
        MessagesPlaceholder(variable_name="messages"),
        (
            "user",
            "Given the above conversation, generate a search query to look up in order to get information relevant to the conversation. Only respond with the query, nothing else.",
        ),
    ]
)

query_transformation_chain = query_transform_prompt | chat

query_transformation_chain.invoke(
    {
        "messages": [
            HumanMessage(content="Can LangSmith help test my LLM applications?"),
            AIMessage(
                content="Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
            ),
            HumanMessage(content="Tell me more!"),
        ],
    }
)
[chain/start] [chain:RunnableSequence] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > prompt:ChatPromptTemplate] Entering Prompt run with input:
[inputs]
[chain/end] [chain:RunnableSequence > prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
[outputs]
[llm/start] [chain:RunnableSequence > llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: Can LangSmith help test my LLM applications?\nAI: Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.\nHuman: Tell me more!\nHuman: Given the above conversation, generate a search query to look up in order to get information relevant to the conversation. Only respond with the query, nothing else."
  ]
}
[llm/end] [chain:RunnableSequence > llm:ChatOpenAI] [2.09s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "\"LangSmith LLM application testing and evaluation\"",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "\"LangSmith LLM application testing and evaluation\"",
            "response_metadata": {
              "token_usage": {
                "completion_tokens": 10,
                "prompt_tokens": 145,
                "total_tokens": 155
              },
              "model_name": "gpt-3.5-turbo-1106",
              "system_fingerprint": "fp_5aa43294a1",
              "finish_reason": "stop",
              "logprobs": null
            },
            "type": "ai",
            "id": "run-0c88e751-ac9d-4a49-984d-fa0061bde854-0",
            "usage_metadata": {
              "input_tokens": 145,
              "output_tokens": 10,
              "total_tokens": 155
            },
            "tool_calls": [],
            "invalid_tool_calls": []
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 10,
      "prompt_tokens": 145,
      "total_tokens": 155
    },
    "model_name": "gpt-3.5-turbo-1106",
    "system_fingerprint": "fp_5aa43294a1"
  },
  "run": null
}
[chain/end] [chain:RunnableSequence] [2.10s] Exiting Chain run with output:
[outputs]
AIMessage(content='"LangSmith LLM application testing and evaluation"', response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 145, 'total_tokens': 155}, 'model_name': 'gpt-3.5-turbo-1106', 'system_fingerprint': 'fp_5aa43294a1', 'finish_reason': 'stop', 'logprobs': None}, id='run-0c88e751-ac9d-4a49-984d-fa0061bde854-0', usage_metadata={'input_tokens': 145, 'output_tokens': 10, 'total_tokens': 155})

As you can see, the model produced exactly the standalone search query we wanted. Next, combine this with the earlier chain.

In the example below, `RunnableBranch` expresses a conditional: it takes one or more (condition, runnable) branches plus a default runnable. Here, if there is only one message, it goes straight to the retriever; otherwise the conversation first passes through the query-transformation step above, and the transformed query is then sent to the retriever.
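The branching semantics can be sketched in plain Python (a hypothetical stand-in, no LangChain needed): conditions are tried in order, the first match wins, and otherwise the default runs.

```python
def runnable_branch(*branches, default):
    """Mimic RunnableBranch: try each (condition, runnable) pair in order;
    run the first whose condition is true, otherwise run the default."""
    def run(x):
        for condition, runnable in branches:
            if condition(x):
                return runnable(x)
        return default(x)
    return run

route = runnable_branch(
    (lambda x: len(x.get("messages", [])) == 1, lambda x: "retrieve directly"),
    default=lambda x: "transform query, then retrieve",
)
route({"messages": ["hi"]})           # -> "retrieve directly"
route({"messages": ["a", "b", "c"]})  # -> "transform query, then retrieve"
```

Note that the real `RunnableBranch` takes the default as its last positional argument rather than a keyword; the keyword here is just for readability.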

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableBranch

query_transforming_retriever_chain = RunnableBranch(
    (
        lambda x: len(x.get("messages", [])) == 1,
        # If only one message, then we just pass that message's content to retriever
        (lambda x: x["messages"][-1].content) | retriever,
    ),
    # If multiple messages, pass the inputs to the LLM chain to transform the query, then retrieve
    query_transform_prompt | chat | StrOutputParser() | retriever,
).with_config(run_name="chat_retriever_chain")
SYSTEM_TEMPLATE = """
Answer the user's questions based on the below context. 
If the context doesn't contain any relevant information to the question, don't make something up and just say "I don't know":

<context>
{context}
</context>
"""

question_answering_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            SYSTEM_TEMPLATE,
        ),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

document_chain = create_stuff_documents_chain(chat, question_answering_prompt)

conversational_retrieval_chain = RunnablePassthrough.assign(
    context=query_transforming_retriever_chain,
).assign(
    answer=document_chain,
)
conversational_retrieval_chain.invoke(
    {
        "messages": [
            HumanMessage(content="Can LangSmith help test my LLM applications?"),
        ]
    }
)
[chain/start] [chain:RunnableSequence] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableLambda] Entering Chain run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableLambda] [0ms] Exiting Chain run with output:
{
  "output": true
}
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableSequence] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableSequence > chain:RunnableLambda] Entering Chain run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableSequence > chain:RunnableLambda] [0ms] Exiting Chain run with output:
{
  "output": "Can LangSmith help test my LLM applications?"
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableSequence] [1.34s] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain] [1.34s] Exiting Chain run with output:
{
  "output": [
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain",
        "schema",
        "document",
        "Document"
      ],
      "kwargs": {
        "page_content": "Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U",
        "metadata": {
          "description": "LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!",
          "language": "en",
          "source": "https://docs.smith.langchain.com/overview",
          "title": "Get started with LangSmith | 🦜️🛠️ LangSmith"
        },
        "type": "Document"
      }
    },
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain",
        "schema",
        "document",
        "Document"
      ],
      "kwargs": {
        "page_content": "Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U",
        "metadata": {
          "description": "LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!",
          "language": "en",
          "source": "https://docs.smith.langchain.com/overview",
          "title": "Get started with LangSmith | 🦜️🛠️ LangSmith"
        },
        "type": "Document"
      }
    },
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain",
        "schema",
        "document",
        "Document"
      ],
      "kwargs": {
        "page_content": "Get started with LangSmith | 🦜️🛠️ LangSmith",
        "metadata": {
          "description": "LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!",
          "language": "en",
          "source": "https://docs.smith.langchain.com/overview",
          "title": "Get started with LangSmith | 🦜️🛠️ LangSmith"
        },
        "type": "Document"
      }
    },
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain",
        "schema",
        "document",
        "Document"
      ],
      "kwargs": {
        "page_content": "Get started with LangSmith | 🦜️🛠️ LangSmith",
        "metadata": {
          "description": "LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!",
          "language": "en",
          "source": "https://docs.smith.langchain.com/overview",
          "title": "Get started with LangSmith | 🦜️🛠️ LangSmith"
        },
        "type": "Document"
      }
    }
  ]
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context>] [1.34s] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context>] [1.35s] Exiting Chain run with output:
[outputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context> > chain:format_docs] Entering Chain run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context> > chain:format_docs] [1ms] Exiting Chain run with output:
{
  "output": "Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nGet started with LangSmith | 🦜️🛠️ LangSmith\n\nGet started with LangSmith | 🦜️🛠️ LangSmith"
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context>] [2ms] Exiting Chain run with output:
{
  "context": "Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nGet started with LangSmith | 🦜️🛠️ LangSmith\n\nGet started with LangSmith | 🦜️🛠️ LangSmith"
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs] [3ms] Exiting Chain run with output:
[outputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > prompt:ChatPromptTemplate] Entering Prompt run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
[outputs]
[llm/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: \nAnswer the user's questions based on the below context. \nIf the context doesn't contain any relevant information to the question, don't make something up and just say \"I don't know\":\n\n<context>\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nGet started with LangSmith | 🦜️🛠️ LangSmith\n\nGet started with LangSmith | 🦜️🛠️ LangSmith\n</context>\n\nHuman: Can LangSmith help test my LLM applications?"
  ]
}
[llm/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > llm:ChatOpenAI] [2.27s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "Yes, LangSmith is designed to help you closely monitor and evaluate your LLM applications, so you can ship quickly and with confidence. It is a platform for building production-grade LLM applications, and it allows you to test and monitor your applications effectively.",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "Yes, LangSmith is designed to help you closely monitor and evaluate your LLM applications, so you can ship quickly and with confidence. It is a platform for building production-grade LLM applications, and it allows you to test and monitor your applications effectively.",
            "response_metadata": {
              "token_usage": {
                "completion_tokens": 51,
                "prompt_tokens": 310,
                "total_tokens": 361
              },
              "model_name": "gpt-3.5-turbo-1106",
              "system_fingerprint": "fp_5aa43294a1",
              "finish_reason": "stop",
              "logprobs": null
            },
            "type": "ai",
            "id": "run-e840aa19-a654-4b0b-a929-fac8dd0054ef-0",
            "usage_metadata": {
              "input_tokens": 310,
              "output_tokens": 51,
              "total_tokens": 361
            },
            "tool_calls": [],
            "invalid_tool_calls": []
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 51,
      "prompt_tokens": 310,
      "total_tokens": 361
    },
    "model_name": "gpt-3.5-turbo-1106",
    "system_fingerprint": "fp_5aa43294a1"
  },
  "run": null
}
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > parser:StrOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > parser:StrOutputParser] [1ms] Exiting Parser run with output:
{
  "output": "Yes, LangSmith is designed to help you closely monitor and evaluate your LLM applications, so you can ship quickly and with confidence. It is a platform for building production-grade LLM applications, and it allows you to test and monitor your applications effectively."
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain] [2.28s] Exiting Chain run with output:
{
  "output": "Yes, LangSmith is designed to help you closely monitor and evaluate your LLM applications, so you can ship quickly and with confidence. It is a platform for building production-grade LLM applications, and it allows you to test and monitor your applications effectively."
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer>] [2.28s] Exiting Chain run with output:
{
  "answer": "Yes, LangSmith is designed to help you closely monitor and evaluate your LLM applications, so you can ship quickly and with confidence. It is a platform for building production-grade LLM applications, and it allows you to test and monitor your applications effectively."
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer>] [2.29s] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableSequence] [3.65s] Exiting Chain run with output:
[outputs]
{'messages': [HumanMessage(content='Can LangSmith help test my LLM applications?')],
 'context': [Document(page_content='Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith\u200bPythonTypeScriptpip install -U', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'}),
  Document(page_content='Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith\u200bPythonTypeScriptpip install -U', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'}),
  Document(page_content='Get started with LangSmith | 🦜️🛠️ LangSmith', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'}),
  Document(page_content='Get started with LangSmith | 🦜️🛠️ LangSmith', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'})],
 'answer': 'Yes, LangSmith is designed to help you closely monitor and evaluate your LLM applications, so you can ship quickly and with confidence. It is a platform for building production-grade LLM applications, and it allows you to test and monitor your applications effectively.'}

Above, the history contains only a single message, so the chain skips the query-transformation step.
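
The branch decision visible in the trace (the `RunnableLambda` emitting `"output": true` for a single-message history and `false` for a longer one) boils down to a length check on the message list. Here is a minimal, self-contained sketch of that decision; the function name and sample inputs are hypothetical, for illustration only:

```python
def is_single_turn(inputs: dict) -> bool:
    """Mirror of the RunnableLambda decision in the trace above:
    True  -> only one message, use it directly as the retrieval query;
    False -> rewrite the conversation into a standalone search query first."""
    return len(inputs.get("messages", [])) == 1

single_turn = {"messages": ["Can LangSmith help test my LLM applications?"]}
multi_turn = {
    "messages": [
        "Can LangSmith help test my LLM applications?",
        "Yes, LangSmith can help test and evaluate your LLM applications...",
        "Tell me more!",
    ]
}

print(is_single_turn(single_turn))  # True  -> trace shows "output": true
print(is_single_turn(multi_turn))   # False -> trace shows "output": false
```

In the real chain this predicate is the condition of a `RunnableBranch`: when it returns `True`, retrieval runs on the raw message content; otherwise an LLM call first condenses the conversation into a standalone query, which is what produces the extra `ChatOpenAI` run in the trace below.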

Next, let's invoke the chain with multiple messages.

conversational_retrieval_chain.invoke(
    {
        "messages": [
            HumanMessage(content="Can LangSmith help test my LLM applications?"),
            AIMessage(
                content="Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
            ),
            HumanMessage(content="Tell me more!"),
        ],
    }
)
[chain/start] [chain:RunnableSequence] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableLambda] Entering Chain run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableLambda] [0ms] Exiting Chain run with output:
{
  "output": false
}
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableSequence] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableSequence > prompt:ChatPromptTemplate] Entering Prompt run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableSequence > prompt:ChatPromptTemplate] [0ms] Exiting Prompt run with output:
[outputs]
[llm/start] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableSequence > llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: Can LangSmith help test my LLM applications?\nAI: Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.\nHuman: Tell me more!\nHuman: Given the above conversation, generate a search query to look up in order to get information relevant to the conversation. Only respond with the query, nothing else."
  ]
}
[llm/end] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableSequence > llm:ChatOpenAI] [1.97s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "\"LangSmith LLM application testing and evaluation\"",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "\"LangSmith LLM application testing and evaluation\"",
            "response_metadata": {
              "token_usage": {
                "completion_tokens": 10,
                "prompt_tokens": 145,
                "total_tokens": 155
              },
              "model_name": "gpt-3.5-turbo-1106",
              "system_fingerprint": "fp_5aa43294a1",
              "finish_reason": "stop",
              "logprobs": null
            },
            "type": "ai",
            "id": "run-49011f25-4d82-4961-b266-0718cadcd8a3-0",
            "usage_metadata": {
              "input_tokens": 145,
              "output_tokens": 10,
              "total_tokens": 155
            },
            "tool_calls": [],
            "invalid_tool_calls": []
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 10,
      "prompt_tokens": 145,
      "total_tokens": 155
    },
    "model_name": "gpt-3.5-turbo-1106",
    "system_fingerprint": "fp_5aa43294a1"
  },
  "run": null
}
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableSequence > parser:StrOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableSequence > parser:StrOutputParser] [2ms] Exiting Parser run with output:
{
  "output": "\"LangSmith LLM application testing and evaluation\""
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain > chain:RunnableSequence] [3.92s] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context> > chain:chat_retriever_chain] [3.92s] Exiting Chain run with output:
{
  "output": [
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain",
        "schema",
        "document",
        "Document"
      ],
      "kwargs": {
        "page_content": "Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U",
        "metadata": {
          "description": "LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!",
          "language": "en",
          "source": "https://docs.smith.langchain.com/overview",
          "title": "Get started with LangSmith | 🦜️🛠️ LangSmith"
        },
        "type": "Document"
      }
    },
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain",
        "schema",
        "document",
        "Document"
      ],
      "kwargs": {
        "page_content": "Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U",
        "metadata": {
          "description": "LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!",
          "language": "en",
          "source": "https://docs.smith.langchain.com/overview",
          "title": "Get started with LangSmith | 🦜️🛠️ LangSmith"
        },
        "type": "Document"
      }
    },
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain",
        "schema",
        "document",
        "Document"
      ],
      "kwargs": {
        "page_content": "\"1.0.0\",      revision_id: \"beta\",    },  });Learn more about evaluation in the how-to guides.Was this page helpful?You can leave detailed feedback on GitHub.NextTutorials1. Install LangSmith2. Create an API key3. Set up your environment4. Log your first trace5. Run your first evaluationCommunityDiscordTwitterGitHubDocs CodeLangSmith SDKPythonJS/TSMoreHomepageBlogLangChain Python DocsLangChain JS/TS DocsCopyright © 2024 LangChain, Inc.",
        "metadata": {
          "description": "LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!",
          "language": "en",
          "source": "https://docs.smith.langchain.com/overview",
          "title": "Get started with LangSmith | 🦜️🛠️ LangSmith"
        },
        "type": "Document"
      }
    },
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain",
        "schema",
        "document",
        "Document"
      ],
      "kwargs": {
        "page_content": "\"1.0.0\",      revision_id: \"beta\",    },  });Learn more about evaluation in the how-to guides.Was this page helpful?You can leave detailed feedback on GitHub.NextTutorials1. Install LangSmith2. Create an API key3. Set up your environment4. Log your first trace5. Run your first evaluationCommunityDiscordTwitterGitHubDocs CodeLangSmith SDKPythonJS/TSMoreHomepageBlogLangChain Python DocsLangChain JS/TS DocsCopyright © 2024 LangChain, Inc.",
        "metadata": {
          "description": "LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!",
          "language": "en",
          "source": "https://docs.smith.langchain.com/overview",
          "title": "Get started with LangSmith | 🦜️🛠️ LangSmith"
        },
        "type": "Document"
      }
    }
  ]
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context> > chain:RunnableParallel<context>] [3.93s] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<context>] [3.93s] Exiting Chain run with output:
[outputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context>] Entering Chain run with input:
[inputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context> > chain:format_docs] Entering Chain run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context> > chain:format_docs] [1ms] Exiting Chain run with output:
{
  "output": "Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\n\"1.0.0\",      revision_id: \"beta\",    },  });Learn more about evaluation in the how-to guides.Was this page helpful?You can leave detailed feedback on GitHub.NextTutorials1. Install LangSmith2. Create an API key3. Set up your environment4. Log your first trace5. Run your first evaluationCommunityDiscordTwitterGitHubDocs CodeLangSmith SDKPythonJS/TSMoreHomepageBlogLangChain Python DocsLangChain JS/TS DocsCopyright © 2024 LangChain, Inc.\n\n\"1.0.0\",      revision_id: \"beta\",    },  });Learn more about evaluation in the how-to guides.Was this page helpful?You can leave detailed feedback on GitHub.NextTutorials1. Install LangSmith2. Create an API key3. Set up your environment4. Log your first trace5. Run your first evaluationCommunityDiscordTwitterGitHubDocs CodeLangSmith SDKPythonJS/TSMoreHomepageBlogLangChain Python DocsLangChain JS/TS DocsCopyright © 2024 LangChain, Inc."
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs > chain:RunnableParallel<context>] [2ms] Exiting Chain run with output:
{
  "context": "Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\n\"1.0.0\",      revision_id: \"beta\",    },  });Learn more about evaluation in the how-to guides.Was this page helpful?You can leave detailed feedback on GitHub.NextTutorials1. Install LangSmith2. Create an API key3. Set up your environment4. Log your first trace5. Run your first evaluationCommunityDiscordTwitterGitHubDocs CodeLangSmith SDKPythonJS/TSMoreHomepageBlogLangChain Python DocsLangChain JS/TS DocsCopyright © 2024 LangChain, Inc.\n\n\"1.0.0\",      revision_id: \"beta\",    },  });Learn more about evaluation in the how-to guides.Was this page helpful?You can leave detailed feedback on GitHub.NextTutorials1. Install LangSmith2. Create an API key3. Set up your environment4. Log your first trace5. Run your first evaluationCommunityDiscordTwitterGitHubDocs CodeLangSmith SDKPythonJS/TSMoreHomepageBlogLangChain Python DocsLangChain JS/TS DocsCopyright © 2024 LangChain, Inc."
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > chain:format_inputs] [3ms] Exiting Chain run with output:
[outputs]
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > prompt:ChatPromptTemplate] Entering Prompt run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
[outputs]
[llm/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: \nAnswer the user's questions based on the below context. \nIf the context doesn't contain any relevant information to the question, don't make something up and just say \"I don't know\":\n\n<context>\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\nSkip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith​PythonTypeScriptpip install -U\n\n\"1.0.0\",      revision_id: \"beta\",    },  });Learn more about evaluation in the how-to guides.Was this page helpful?You can leave detailed feedback on GitHub.NextTutorials1. Install LangSmith2. Create an API key3. Set up your environment4. Log your first trace5. Run your first evaluationCommunityDiscordTwitterGitHubDocs CodeLangSmith SDKPythonJS/TSMoreHomepageBlogLangChain Python DocsLangChain JS/TS DocsCopyright © 2024 LangChain, Inc.\n\n\"1.0.0\",      revision_id: \"beta\",    },  });Learn more about evaluation in the how-to guides.Was this page helpful?You can leave detailed feedback on GitHub.NextTutorials1. Install LangSmith2. Create an API key3. Set up your environment4. Log your first trace5. 
Run your first evaluationCommunityDiscordTwitterGitHubDocs CodeLangSmith SDKPythonJS/TSMoreHomepageBlogLangChain Python DocsLangChain JS/TS DocsCopyright © 2024 LangChain, Inc.\n</context>\n\nHuman: Can LangSmith help test my LLM applications?\nAI: Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.\nHuman: Tell me more!"
  ]
}
[llm/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > llm:ChatOpenAI] [4.20s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "LangSmith is designed for building production-grade LLM applications, providing tools to closely monitor and evaluate your application. Here are some key features:\n\n1. **Monitoring and Evaluation**: You can track the performance of your LLM applications, ensuring they meet your quality standards.\n\n2. **Logging Traces**: LangSmith allows you to log traces of your application, which can help in debugging and understanding how your model is performing in real-time.\n\n3. **Visualizing Data**: It provides visualizations for latency and token usage statistics, making it easier to identify bottlenecks or inefficiencies.\n\n4. **Fine-tuning**: You can expand your evaluation sets by editing examples and adding them to datasets, which can help in fine-tuning your model for better performance.\n\n5. **Ease of Use**: LangSmith works independently and does not require the use of LangChain, making it straightforward to integrate into your existing workflows.\n\n6. **Community Support**: There are resources available such as tutorials, how-to guides, and community support through platforms like Discord and GitHub.\n\nOverall, LangSmith aims to help developers ship their LLM applications quickly and with confidence by providing robust tools for testing and evaluation.",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "LangSmith is designed for building production-grade LLM applications, providing tools to closely monitor and evaluate your application. Here are some key features:\n\n1. **Monitoring and Evaluation**: You can track the performance of your LLM applications, ensuring they meet your quality standards.\n\n2. **Logging Traces**: LangSmith allows you to log traces of your application, which can help in debugging and understanding how your model is performing in real-time.\n\n3. **Visualizing Data**: It provides visualizations for latency and token usage statistics, making it easier to identify bottlenecks or inefficiencies.\n\n4. **Fine-tuning**: You can expand your evaluation sets by editing examples and adding them to datasets, which can help in fine-tuning your model for better performance.\n\n5. **Ease of Use**: LangSmith works independently and does not require the use of LangChain, making it straightforward to integrate into your existing workflows.\n\n6. **Community Support**: There are resources available such as tutorials, how-to guides, and community support through platforms like Discord and GitHub.\n\nOverall, LangSmith aims to help developers ship their LLM applications quickly and with confidence by providing robust tools for testing and evaluation.",
            "response_metadata": {
              "token_usage": {
                "completion_tokens": 244,
                "prompt_tokens": 582,
                "total_tokens": 826
              },
              "model_name": "gpt-3.5-turbo-1106",
              "system_fingerprint": "fp_276aa25277",
              "finish_reason": "stop",
              "logprobs": null
            },
            "type": "ai",
            "id": "run-e3d13f45-9d2d-49c1-9b3f-8703cef97609-0",
            "usage_metadata": {
              "input_tokens": 582,
              "output_tokens": 244,
              "total_tokens": 826
            },
            "tool_calls": [],
            "invalid_tool_calls": []
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 244,
      "prompt_tokens": 582,
      "total_tokens": 826
    },
    "model_name": "gpt-3.5-turbo-1106",
    "system_fingerprint": "fp_276aa25277"
  },
  "run": null
}
[chain/start] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > parser:StrOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain > parser:StrOutputParser] [1ms] Exiting Parser run with output:
{
  "output": "LangSmith is designed for building production-grade LLM applications, providing tools to closely monitor and evaluate your application. Here are some key features:\n\n1. **Monitoring and Evaluation**: You can track the performance of your LLM applications, ensuring they meet your quality standards.\n\n2. **Logging Traces**: LangSmith allows you to log traces of your application, which can help in debugging and understanding how your model is performing in real-time.\n\n3. **Visualizing Data**: It provides visualizations for latency and token usage statistics, making it easier to identify bottlenecks or inefficiencies.\n\n4. **Fine-tuning**: You can expand your evaluation sets by editing examples and adding them to datasets, which can help in fine-tuning your model for better performance.\n\n5. **Ease of Use**: LangSmith works independently and does not require the use of LangChain, making it straightforward to integrate into your existing workflows.\n\n6. **Community Support**: There are resources available such as tutorials, how-to guides, and community support through platforms like Discord and GitHub.\n\nOverall, LangSmith aims to help developers ship their LLM applications quickly and with confidence by providing robust tools for testing and evaluation."
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer> > chain:stuff_documents_chain] [4.21s] Exiting Chain run with output:
{
  "output": "LangSmith is designed for building production-grade LLM applications, providing tools to closely monitor and evaluate your application. Here are some key features:\n\n1. **Monitoring and Evaluation**: You can track the performance of your LLM applications, ensuring they meet your quality standards.\n\n2. **Logging Traces**: LangSmith allows you to log traces of your application, which can help in debugging and understanding how your model is performing in real-time.\n\n3. **Visualizing Data**: It provides visualizations for latency and token usage statistics, making it easier to identify bottlenecks or inefficiencies.\n\n4. **Fine-tuning**: You can expand your evaluation sets by editing examples and adding them to datasets, which can help in fine-tuning your model for better performance.\n\n5. **Ease of Use**: LangSmith works independently and does not require the use of LangChain, making it straightforward to integrate into your existing workflows.\n\n6. **Community Support**: There are resources available such as tutorials, how-to guides, and community support through platforms like Discord and GitHub.\n\nOverall, LangSmith aims to help developers ship their LLM applications quickly and with confidence by providing robust tools for testing and evaluation."
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer> > chain:RunnableParallel<answer>] [4.22s] Exiting Chain run with output:
{
  "answer": "LangSmith is designed for building production-grade LLM applications, providing tools to closely monitor and evaluate your application. Here are some key features:\n\n1. **Monitoring and Evaluation**: You can track the performance of your LLM applications, ensuring they meet your quality standards.\n\n2. **Logging Traces**: LangSmith allows you to log traces of your application, which can help in debugging and understanding how your model is performing in real-time.\n\n3. **Visualizing Data**: It provides visualizations for latency and token usage statistics, making it easier to identify bottlenecks or inefficiencies.\n\n4. **Fine-tuning**: You can expand your evaluation sets by editing examples and adding them to datasets, which can help in fine-tuning your model for better performance.\n\n5. **Ease of Use**: LangSmith works independently and does not require the use of LangChain, making it straightforward to integrate into your existing workflows.\n\n6. **Community Support**: There are resources available such as tutorials, how-to guides, and community support through platforms like Discord and GitHub.\n\nOverall, LangSmith aims to help developers ship their LLM applications quickly and with confidence by providing robust tools for testing and evaluation."
}
[chain/end] [chain:RunnableSequence > chain:RunnableAssign<answer>] [4.22s] Exiting Chain run with output:
[outputs]
[chain/end] [chain:RunnableSequence] [8.17s] Exiting Chain run with output:
[outputs]
{'messages': [HumanMessage(content='Can LangSmith help test my LLM applications?'),
  AIMessage(content='Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'),
  HumanMessage(content='Tell me more!')],
 'context': [Document(page_content='Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith\u200bPythonTypeScriptpip install -U', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'}),
  Document(page_content='Skip to main contentGo to API DocsSearchRegionUSEUGo to AppQuick startTutorialsHow-to guidesConceptsReferencePricingSelf-hostingLangGraph CloudQuick startOn this pageGet started with LangSmithLangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!1. Install LangSmith\u200bPythonTypeScriptpip install -U', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'}),
  Document(page_content='"1.0.0",      revision_id: "beta",    },  });Learn more about evaluation in the how-to guides.Was this page helpful?You can leave detailed feedback on GitHub.NextTutorials1. Install LangSmith2. Create an API key3. Set up your environment4. Log your first trace5. Run your first evaluationCommunityDiscordTwitterGitHubDocs CodeLangSmith SDKPythonJS/TSMoreHomepageBlogLangChain Python DocsLangChain JS/TS DocsCopyright © 2024 LangChain, Inc.', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'}),
  Document(page_content='"1.0.0",      revision_id: "beta",    },  });Learn more about evaluation in the how-to guides.Was this page helpful?You can leave detailed feedback on GitHub.NextTutorials1. Install LangSmith2. Create an API key3. Set up your environment4. Log your first trace5. Run your first evaluationCommunityDiscordTwitterGitHubDocs CodeLangSmith SDKPythonJS/TSMoreHomepageBlogLangChain Python DocsLangChain JS/TS DocsCopyright © 2024 LangChain, Inc.', metadata={'description': 'LangSmith is a platform for building production-grade LLM applications. It allows you to closely monitor and evaluate your application, so you can ship quickly and with confidence. Use of LangChain is not necessary - LangSmith works on its own!', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'Get started with LangSmith | 🦜️🛠️ LangSmith'})],
 'answer': 'LangSmith is designed for building production-grade LLM applications, providing tools to closely monitor and evaluate your application. Here are some key features:\n\n1. **Monitoring and Evaluation**: You can track the performance of your LLM applications, ensuring they meet your quality standards.\n\n2. **Logging Traces**: LangSmith allows you to log traces of your application, which can help in debugging and understanding how your model is performing in real-time.\n\n3. **Visualizing Data**: It provides visualizations for latency and token usage statistics, making it easier to identify bottlenecks or inefficiencies.\n\n4. **Fine-tuning**: You can expand your evaluation sets by editing examples and adding them to datasets, which can help in fine-tuning your model for better performance.\n\n5. **Ease of Use**: LangSmith works independently and does not require the use of LangChain, making it straightforward to integrate into your existing workflows.\n\n6. **Community Support**: There are resources available such as tutorials, how-to guides, and community support through platforms like Discord and GitHub.\n\nOverall, LangSmith aims to help developers ship their LLM applications quickly and with confidence by providing robust tools for testing and evaluation.'}

OK, we got the answer we wanted.

Streaming#

Chains built with LCEL expose a stream method, which yields the response incrementally instead of waiting for the full answer:

from langchain.globals import set_debug

set_debug(False)
stream = conversational_retrieval_chain.stream(
    {
        "messages": [
            HumanMessage(content="Can LangSmith help test my LLM applications?"),
            AIMessage(
                content="Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
            ),
            HumanMessage(content="Tell me more!"),
        ],
    }
)

for chunk in stream:
    if 'answer' in chunk:
        print(chunk)
{'answer': ''}
{'answer': 'Lang'}
{'answer': 'Smith'}
{'answer': ' is'}
{'answer': ' designed'}
{'answer': ' for'}
{'answer': ' building'}
{'answer': ' production'}
{'answer': '-grade'}
{'answer': ' L'}
{'answer': 'LM'}
{'answer': ' applications'}
{'answer': ','}
{'answer': ' providing'}
{'answer': ' tools'}
{'answer': ' for'}
{'answer': ' monitoring'}
{'answer': ' and'}
{'answer': ' evaluating'}
{'answer': ' your'}
{'answer': ' applications'}
{'answer': ' effectively'}
{'answer': '.'}
{'answer': ' Here'}
{'answer': ' are'}
{'answer': ' some'}
{'answer': ' key'}
{'answer': ' features'}
{'answer': ':\n\n'}
{'answer': '1'}
{'answer': '.'}
{'answer': ' **'}
{'answer': 'Testing'}
{'answer': ' and'}
{'answer': ' Evaluation'}
{'answer': '**'}
{'answer': ':'}
{'answer': ' You'}
{'answer': ' can'}
{'answer': ' closely'}
{'answer': ' monitor'}
{'answer': ' your'}
{'answer': ' application'}
{'answer': ','}
{'answer': ' allowing'}
{'answer': ' for'}
{'answer': ' thorough'}
{'answer': ' testing'}
{'answer': ' and'}
{'answer': ' evaluation'}
{'answer': ' of'}
{'answer': ' its'}
{'answer': ' performance'}
{'answer': '.\n\n'}
{'answer': '2'}
{'answer': '.'}
{'answer': ' **'}
{'answer': 'Editing'}
{'answer': ' and'}
{'answer': ' Exp'}
{'answer': 'anding'}
{'answer': ' D'}
{'answer': 'atasets'}
{'answer': '**'}
{'answer': ':'}
{'answer': ' Lang'}
{'answer': 'Smith'}
{'answer': ' enables'}
{'answer': ' you'}
{'answer': ' to'}
{'answer': ' quickly'}
{'answer': ' edit'}
{'answer': ' examples'}
{'answer': ' and'}
{'answer': ' add'}
{'answer': ' them'}
{'answer': ' to'}
{'answer': ' datasets'}
{'answer': ','}
{'answer': ' which'}
{'answer': ' helps'}
{'answer': ' in'}
{'answer': ' expanding'}
{'answer': ' the'}
{'answer': ' evaluation'}
{'answer': ' sets'}
{'answer': ' or'}
{'answer': ' fine'}
{'answer': '-t'}
{'answer': 'uning'}
{'answer': ' models'}
{'answer': ' for'}
{'answer': ' better'}
{'answer': ' performance'}
{'answer': '.\n\n'}
{'answer': '3'}
{'answer': '.'}
{'answer': ' **'}
{'answer': 'Monitoring'}
{'answer': '**'}
{'answer': ':'}
{'answer': ' It'}
{'answer': ' offers'}
{'answer': ' capabilities'}
{'answer': ' to'}
{'answer': ' log'}
{'answer': ' all'}
{'answer': ' traces'}
{'answer': ','}
{'answer': ' visualize'}
{'answer': ' latency'}
{'answer': ','}
{'answer': ' and'}
{'answer': ' track'}
{'answer': ' token'}
{'answer': ' usage'}
{'answer': ' statistics'}
{'answer': ','}
{'answer': ' which'}
{'answer': ' aids'}
{'answer': ' in'}
{'answer': ' understanding'}
{'answer': ' the'}
{'answer': " application's"}
{'answer': ' behavior'}
{'answer': '.\n\n'}
{'answer': '4'}
{'answer': '.'}
{'answer': ' **'}
{'answer': 'Trou'}
{'answer': 'bles'}
{'answer': 'hooting'}
{'answer': '**'}
{'answer': ':'}
{'answer': ' You'}
{'answer': ' can'}
{'answer': ' troubleshoot'}
{'answer': ' specific'}
{'answer': ' issues'}
{'answer': ' as'}
{'answer': ' they'}
{'answer': ' arise'}
{'answer': ','}
{'answer': ' making'}
{'answer': ' it'}
{'answer': ' easier'}
{'answer': ' to'}
{'answer': ' maintain'}
{'answer': ' and'}
{'answer': ' improve'}
{'answer': ' your'}
{'answer': ' application'}
{'answer': '.\n\n'}
{'answer': '5'}
{'answer': '.'}
{'answer': ' **'}
{'answer': 'No'}
{'answer': ' Need'}
{'answer': ' for'}
{'answer': ' Lang'}
{'answer': 'Chain'}
{'answer': '**'}
{'answer': ':'}
{'answer': ' Lang'}
{'answer': 'Smith'}
{'answer': ' operates'}
{'answer': ' independently'}
{'answer': ','}
{'answer': ' so'}
{'answer': ' you'}
{'answer': " don't"}
{'answer': ' need'}
{'answer': ' to'}
{'answer': ' use'}
{'answer': ' Lang'}
{'answer': 'Chain'}
{'answer': ' to'}
{'answer': ' benefit'}
{'answer': ' from'}
{'answer': ' its'}
{'answer': ' features'}
{'answer': '.\n\n'}
{'answer': 'These'}
{'answer': ' functionalities'}
{'answer': ' make'}
{'answer': ' Lang'}
{'answer': 'Smith'}
{'answer': ' a'}
{'answer': ' valuable'}
{'answer': ' tool'}
{'answer': ' for'}
{'answer': ' developers'}
{'answer': ' looking'}
{'answer': ' to'}
{'answer': ' enhance'}
{'answer': ' the'}
{'answer': ' quality'}
{'answer': ' and'}
{'answer': ' reliability'}
{'answer': ' of'}
{'answer': ' their'}
{'answer': ' L'}
{'answer': 'LM'}
{'answer': ' applications'}
{'answer': '.'}
{'answer': ''}
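Each streamed chunk is a small dict, and only some of them carry an `answer` fragment (others carry keys like `context`). A common pattern is to accumulate the fragments into the final answer string. A minimal sketch with hypothetical chunks, no LangChain required:

```python
# Each streamed chunk is a dict; only some carry an "answer" fragment.
# Accumulate the fragments to rebuild the full response.
chunks = [
    {"context": []},        # retrieval-step chunks carry other keys
    {"answer": "Lang"},
    {"answer": "Smith"},
    {"answer": " is"},
    {"answer": " great."},
]

answer = "".join(c["answer"] for c in chunks if "answer" in c)
print(answer)  # LangSmith is great.
```

The same `"answer" in chunk` guard used in the loop above keeps the non-answer chunks (such as the retrieved context) from raising a `KeyError`.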

How to use tools#

An earlier section covered how a model calls tools. Here we demonstrate invoking tools through an agent.

Creating an agent#

We will use AgentExecutor and create_tool_calling_agent.

from langchain_core.prompts import ChatPromptTemplate
from langchain.tools import tool

@tool
def current_date():
    """Return the current time. Use this tool only when the user asks for the current time."""
    return "2024-08-04 12:12:12"

tools = [current_date]
# Adapted from https://smith.langchain.com/hub/jacob/tool-calling-agent
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!",
        ),
        ("placeholder", "{messages}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)
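The tool above returns a fixed timestamp so the transcript stays reproducible. A real implementation would read the system clock; a minimal sketch using the standard library, keeping the same string format (the name `current_date_impl` is hypothetical, chosen to avoid shadowing the tool):

```python
from datetime import datetime

def current_date_impl() -> str:
    """Return the current local time formatted as 'YYYY-MM-DD HH:MM:SS'."""
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")
```

Swapping this body into the `@tool` function changes nothing else in the agent: the docstring is still what the model sees when deciding whether to call the tool.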
from langchain.agents import AgentExecutor, create_tool_calling_agent

agent = create_tool_calling_agent(chat, tools, prompt)

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
from langchain_core.messages import HumanMessage

agent_executor.invoke({"messages": [HumanMessage(content="I'm Nemo!")]})
> Entering new AgentExecutor chain...
Hi Nemo! How can I assist you today?

> Finished chain.
{'messages': [HumanMessage(content="I'm Nemo!")],
 'output': 'Hi Nemo! How can I assist you today?'}
set_debug(True)
agent_executor.invoke({"messages": [HumanMessage(content="现在几点了?")]})
[chain/start] [chain:AgentExecutor] Entering Chain run with input:
[inputs]
[chain/start] [chain:AgentExecutor > chain:RunnableSequence] Entering Chain run with input:
{
  "input": ""
}
[chain/start] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad>] Entering Chain run with input:
{
  "input": ""
}
[chain/start] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad> > chain:RunnableParallel<agent_scratchpad>] Entering Chain run with input:
{
  "input": ""
}
[chain/start] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad> > chain:RunnableParallel<agent_scratchpad> > chain:RunnableLambda] Entering Chain run with input:
{
  "input": ""
}
[chain/end] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad> > chain:RunnableParallel<agent_scratchpad> > chain:RunnableLambda] [1ms] Exiting Chain run with output:
{
  "output": []
}
[chain/end] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad> > chain:RunnableParallel<agent_scratchpad>] [3ms] Exiting Chain run with output:
{
  "agent_scratchpad": []
}
[chain/end] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad>] [6ms] Exiting Chain run with output:
[outputs]
[chain/start] [chain:AgentExecutor > chain:RunnableSequence > prompt:ChatPromptTemplate] Entering Prompt run with input:
[inputs]
[chain/end] [chain:AgentExecutor > chain:RunnableSequence > prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
[outputs]
[llm/start] [chain:AgentExecutor > chain:RunnableSequence > llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!\nHuman: 现在几点了?"
  ]
}
[llm/end] [chain:AgentExecutor > chain:RunnableSequence > llm:ChatOpenAI] [2.53s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "",
        "generation_info": {
          "finish_reason": "tool_calls",
          "model_name": "gpt-3.5-turbo-1106",
          "system_fingerprint": "fp_811936bd4f"
        },
        "type": "ChatGenerationChunk",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessageChunk"
          ],
          "kwargs": {
            "content": "",
            "additional_kwargs": {
              "tool_calls": [
                {
                  "index": 0,
                  "id": "call_gtgakhiDWQkzRz9mnxaUIGXI",
                  "function": {
                    "arguments": "{}",
                    "name": "current_date"
                  },
                  "type": "function"
                }
              ]
            },
            "response_metadata": {
              "finish_reason": "tool_calls",
              "model_name": "gpt-3.5-turbo-1106",
              "system_fingerprint": "fp_811936bd4f"
            },
            "type": "AIMessageChunk",
            "id": "run-7404c2e3-35a1-416f-949a-403076d38ae9",
            "tool_calls": [
              {
                "name": "current_date",
                "args": {},
                "id": "call_gtgakhiDWQkzRz9mnxaUIGXI"
              }
            ],
            "tool_call_chunks": [
              {
                "name": "current_date",
                "args": "{}",
                "id": "call_gtgakhiDWQkzRz9mnxaUIGXI",
                "index": 0
              }
            ],
            "invalid_tool_calls": []
          }
        }
      }
    ]
  ],
  "llm_output": null,
  "run": null
}
[chain/start] [chain:AgentExecutor > chain:RunnableSequence > parser:ToolsAgentOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [chain:AgentExecutor > chain:RunnableSequence > parser:ToolsAgentOutputParser] [1ms] Exiting Parser run with output:
[outputs]
[chain/end] [chain:AgentExecutor > chain:RunnableSequence] [2.55s] Exiting Chain run with output:
[outputs]
[tool/start] [chain:AgentExecutor > tool:current_date] Entering Tool run with input:
"{}"
[tool/end] [chain:AgentExecutor > tool:current_date] [0ms] Exiting Tool run with output:
"2024-08-04 12:12:12"
[chain/start] [chain:AgentExecutor > chain:RunnableSequence] Entering Chain run with input:
{
  "input": ""
}
[chain/start] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad>] Entering Chain run with input:
{
  "input": ""
}
[chain/start] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad> > chain:RunnableParallel<agent_scratchpad>] Entering Chain run with input:
{
  "input": ""
}
[chain/start] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad> > chain:RunnableParallel<agent_scratchpad> > chain:RunnableLambda] Entering Chain run with input:
{
  "input": ""
}
[chain/end] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad> > chain:RunnableParallel<agent_scratchpad> > chain:RunnableLambda] [1ms] Exiting Chain run with output:
[outputs]
[chain/end] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad> > chain:RunnableParallel<agent_scratchpad>] [4ms] Exiting Chain run with output:
[outputs]
[chain/end] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad>] [6ms] Exiting Chain run with output:
[outputs]
[chain/start] [chain:AgentExecutor > chain:RunnableSequence > prompt:ChatPromptTemplate] Entering Prompt run with input:
[inputs]
[chain/end] [chain:AgentExecutor > chain:RunnableSequence > prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
[outputs]
[llm/start] [chain:AgentExecutor > chain:RunnableSequence > llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!\nHuman: 现在几点了?\nAI: \nTool: 2024-08-04 12:12:12"
  ]
}
[llm/end] [chain:AgentExecutor > chain:RunnableSequence > llm:ChatOpenAI] [1.26s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "现在是2024年8月4日,12点12分12秒。有什么我可以帮到您的吗?",
        "generation_info": {
          "finish_reason": "stop",
          "model_name": "gpt-3.5-turbo-1106",
          "system_fingerprint": "fp_5aa43294a1"
        },
        "type": "ChatGenerationChunk",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessageChunk"
          ],
          "kwargs": {
            "content": "现在是2024年8月4日,12点12分12秒。有什么我可以帮到您的吗?",
            "response_metadata": {
              "finish_reason": "stop",
              "model_name": "gpt-3.5-turbo-1106",
              "system_fingerprint": "fp_5aa43294a1"
            },
            "type": "AIMessageChunk",
            "id": "run-f29072f1-b54a-4195-8764-2772d45848e7",
            "tool_calls": [],
            "invalid_tool_calls": []
          }
        }
      }
    ]
  ],
  "llm_output": null,
  "run": null
}
[chain/start] [chain:AgentExecutor > chain:RunnableSequence > parser:ToolsAgentOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [chain:AgentExecutor > chain:RunnableSequence > parser:ToolsAgentOutputParser] [1ms] Exiting Parser run with output:
[outputs]
[chain/end] [chain:AgentExecutor > chain:RunnableSequence] [1.27s] Exiting Chain run with output:
[outputs]
[chain/end] [chain:AgentExecutor] [3.83s] Exiting Chain run with output:
{
  "output": "现在是2024年8月4日,12点12分12秒。有什么我可以帮到您的吗?"
}
{'messages': [HumanMessage(content='现在几点了?')],
 'output': '现在是2024年8月4日,12点12分12秒。有什么我可以帮到您的吗?'}

As the trace shows, the agent called the tool and incorporated its result into the final answer, so everything is working as expected.

Conversational responses#

Because our prompt contains a placeholder for chat history messages, the agent can take previous interactions into account and respond conversationally, just like a standard chatbot:

from langchain_core.messages import AIMessage, HumanMessage

agent_executor.invoke(
    {
        "messages": [
            HumanMessage(content="I'm Nemo!"),
            AIMessage(content="Hello Nemo! How can I assist you today?"),
            HumanMessage(content="What is my name?"),
        ],
    }
)
[chain/start] [chain:AgentExecutor] Entering Chain run with input:
[inputs]
[chain/start] [chain:AgentExecutor > chain:RunnableSequence] Entering Chain run with input:
{
  "input": ""
}
[chain/start] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad>] Entering Chain run with input:
{
  "input": ""
}
[chain/start] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad> > chain:RunnableParallel<agent_scratchpad>] Entering Chain run with input:
{
  "input": ""
}
[chain/start] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad> > chain:RunnableParallel<agent_scratchpad> > chain:RunnableLambda] Entering Chain run with input:
{
  "input": ""
}
[chain/end] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad> > chain:RunnableParallel<agent_scratchpad> > chain:RunnableLambda] [1ms] Exiting Chain run with output:
{
  "output": []
}
[chain/end] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad> > chain:RunnableParallel<agent_scratchpad>] [3ms] Exiting Chain run with output:
{
  "agent_scratchpad": []
}
[chain/end] [chain:AgentExecutor > chain:RunnableSequence > chain:RunnableAssign<agent_scratchpad>] [6ms] Exiting Chain run with output:
[outputs]
[chain/start] [chain:AgentExecutor > chain:RunnableSequence > prompt:ChatPromptTemplate] Entering Prompt run with input:
[inputs]
[chain/end] [chain:AgentExecutor > chain:RunnableSequence > prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
[outputs]
[llm/start] [chain:AgentExecutor > chain:RunnableSequence > llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "System: You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!\nHuman: I'm Nemo!\nAI: Hello Nemo! How can I assist you today?\nHuman: What is my name?"
  ]
}
[llm/end] [chain:AgentExecutor > chain:RunnableSequence > llm:ChatOpenAI] [1.74s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "Your name is Nemo!",
        "generation_info": {
          "finish_reason": "stop",
          "model_name": "gpt-3.5-turbo-1106",
          "system_fingerprint": "fp_5aa43294a1"
        },
        "type": "ChatGenerationChunk",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessageChunk"
          ],
          "kwargs": {
            "content": "Your name is Nemo!",
            "response_metadata": {
              "finish_reason": "stop",
              "model_name": "gpt-3.5-turbo-1106",
              "system_fingerprint": "fp_5aa43294a1"
            },
            "type": "AIMessageChunk",
            "id": "run-c76c1165-73e8-4148-b80c-0e545cf59b4e",
            "tool_calls": [],
            "invalid_tool_calls": []
          }
        }
      }
    ]
  ],
  "llm_output": null,
  "run": null
}
[chain/start] [chain:AgentExecutor > chain:RunnableSequence > parser:ToolsAgentOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [chain:AgentExecutor > chain:RunnableSequence > parser:ToolsAgentOutputParser] [2ms] Exiting Parser run with output:
[outputs]
[chain/end] [chain:AgentExecutor > chain:RunnableSequence] [1.76s] Exiting Chain run with output:
[outputs]
[chain/end] [chain:AgentExecutor] [1.76s] Exiting Chain run with output:
{
  "output": "Your name is Nemo!"
}
{'messages': [HumanMessage(content="I'm Nemo!"),
  AIMessage(content='Hello Nemo! How can I assist you today?'),
  HumanMessage(content='What is my name?')],
 'output': 'Your name is Nemo!'}
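The debug trace above shows how the `{messages}` placeholder flattens the message list into one prompt string (`System: ...\nHuman: ...\nAI: ...`). As a rough illustration only, here is a plain-Python sketch of that expansion, using a hypothetical `render_prompt` helper rather than LangChain's actual internals:

```python
# Rough sketch (hypothetical helper, NOT LangChain internals): how a
# "{messages}" placeholder expands a list of (role, content) pairs into
# the flat prompt string seen in the debug trace above.
def render_prompt(system: str, messages: list) -> str:
    role_names = {"human": "Human", "ai": "AI", "tool": "Tool"}
    lines = ["System: " + system]
    for role, content in messages:
        lines.append(role_names[role] + ": " + content)
    return "\n".join(lines)

prompt_text = render_prompt(
    "You are a helpful assistant. You may not need to use tools for every "
    "query - the user may just want to chat!",
    [
        ("human", "I'm Nemo!"),
        ("ai", "Hello Nemo! How can I assist you today?"),
        ("human", "What is my name?"),
    ],
)
print(prompt_text)
```

The resulting string matches the `"prompts"` entry logged by the LLM run above, which is why the model can "see" the whole conversation in a single completion call.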

Next, we can combine the agent with `ChatMessageHistory` by wrapping it in `RunnableWithMessageHistory`, so that it remembers the conversation history automatically:

set_debug(False)  # turn off the verbose debug tracing used above
agent = create_tool_calling_agent(chat, tools, prompt)

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory


demo_ephemeral_chat_history_for_chain = ChatMessageHistory()

conversational_agent_executor = RunnableWithMessageHistory(
    agent_executor,
    # For this demo, every session_id maps to the same in-memory history
    lambda session_id: demo_ephemeral_chat_history_for_chain,
    input_messages_key="messages",
    output_messages_key="output",
)

conversational_agent_executor.invoke(
    {"messages": [HumanMessage("I'm Nemo!")]},
    {"configurable": {"session_id": "unused"}},
)
> Entering new AgentExecutor chain...
Hi Nemo! How can I assist you today?

> Finished chain.
{'messages': [HumanMessage(content="I'm Nemo!")],
 'output': 'Hi Nemo! How can I assist you today?'}
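Note that the lambda above ignores `session_id` and always returns the same `ChatMessageHistory`, so all callers share one conversation. In a real application you would typically keep one history per session, for example in a dict keyed by `session_id`. A minimal plain-Python sketch of that pattern (hypothetical names, no LangChain dependency; a production app might back this with Redis or a database):

```python
# Minimal sketch of a per-session history factory - the pattern you would
# pass to RunnableWithMessageHistory instead of one shared history object.
session_store = {}

def get_session_history(session_id: str) -> list:
    # The first call for a session_id creates an empty history; later calls
    # return the very same object, so appended messages accumulate per session.
    if session_id not in session_store:
        session_store[session_id] = []
    return session_store[session_id]

get_session_history("nemo").append(("human", "I'm Nemo!"))
get_session_history("nemo").append(("ai", "Hi Nemo! How can I assist you today?"))

print(len(get_session_history("nemo")))  # 2 - the same history object is reused
print(len(get_session_history("dory")))  # 0 - a different session starts empty
```

With a factory like this, `RunnableWithMessageHistory(agent_executor, get_session_history, ...)` gives each `session_id` its own isolated conversation instead of the single shared demo history used above.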