LCEL Sequence chain preserving initial query and returning dict

LangChain Sequential Chain

Requirements

Construct a runnable which returns a dictionary with the following structure:

{
  "question": "Placeholder",
  "rude_response": "Output from chain one",
  "analysis": "Output from chain two"
}

Solution

chain = RunnableParallel(
    {
        "question": lambda x: x["question"],
        "rude_response": (prompt_one | ChatOpenAI() | StrOutputParser()),
    }
).assign(
    analysis=itemgetter("rude_response") | prompt_two | ChatOpenAI() | StrOutputParser()
)
  • Chain One takes in the question; its response is assigned to the key rude_response.
  • Chain Two takes in the rude_response; its response is assigned to the key analysis.
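
An equivalent formulation (a minimal sketch, assuming the same prompt_one and prompt_two objects from the script below) uses RunnablePassthrough.assign to carry the initial question through instead of the explicit lambda:

from langchain_core.runnables import RunnablePassthrough

# Passes the input dict ({"question": ...}) through untouched and merges in the new keys.
chain_alt = RunnablePassthrough.assign(
    rude_response=prompt_one | ChatOpenAI() | StrOutputParser()
).assign(
    analysis=itemgetter("rude_response") | prompt_two | ChatOpenAI() | StrOutputParser()
)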

Output

{
  "analysis": "Sentiments: Ugh, seriously\nTones: annoyed, frustrated",
  "question": "hello world",
  "rude_response": "Ugh, seriously? Couldn't come up with a more original greeting?"
}

Diagram

[chain diagram]
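
If the rendered image is unavailable, the same structure can be printed as ASCII directly from the runnable (a minimal sketch; print_ascii relies on the optional grandalf package being installed):

# Prints an ASCII rendering of the composed graph: the RunnableParallel step
# feeding into the .assign step, as in the diagram above.
chain.get_graph().print_ascii()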

from operator import itemgetter
from pprint import pprint

from langchain_core.messages import SystemMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel
from langchain_openai import ChatOpenAI

# Chain one: answer the user's question in a deliberately rude tone.
prompt_one = ChatPromptTemplate.from_messages(
    [
        SystemMessage(
            content="You are a helpful assistant. Please respond to the user queries in a rude way"
        ),
        ("human", "Question:{question}"),
    ],
)

# Chain two: list the sentiments and tones found in chain one's response.
prompt_two = ChatPromptTemplate.from_messages(
    [
        SystemMessage(
            content="You are a text tone evaluator bot. Your job is to go through the user input and list out all the sentiments and tones of the input in 2 separate comma separated lists"
        ),
        ("human", "{rude_response}"),
    ],
)

# RunnableParallel preserves the original question and runs chain one;
# .assign then feeds rude_response into chain two and merges the result.
chain = RunnableParallel(
    {
        "question": lambda x: x["question"],
        "rude_response": (prompt_one | ChatOpenAI() | StrOutputParser()),
    }
).assign(
    analysis=itemgetter("rude_response") | prompt_two | ChatOpenAI() | StrOutputParser()
)

if __name__ == "__main__":
    pprint(chain.invoke({"question": "hello world"}))

"""Output
{'analysis': 'Sentiments: Ugh, seriously\nTones: annoyed, frustrated',
 'question': 'hello world',
 'rude_response': "Ugh, seriously? Couldn't come up with a more original greeting?"}
"""