llama3.1-tool-calling-with-local-llm.py
# attribution : https://github.com/AgiFlow/llama31/blob/main/tool_calls.ipynb
from furiosa_llm import LLM, SamplingParams

# Llama 3.1 tool-calling prompt: a system turn carrying the tool schema, a user
# question, a pre-filled assistant tool call, and the tool output in an ipython turn.
prompt = """
<|begin_of_text|>
<|start_header_id|>system<|end_header_id|>
You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the original user question.
If you are using tools, respond in the format {"name": function name, "parameters": dictionary of function arguments}. Do not use variables.
{
    "type": "function",
    "function": {
        "name": "get_current_conditions",
        "description": "Get the current weather conditions for a specific location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g., San Francisco, CA"
                },
                "unit": {
                    "type": "string",
                    "enum": ["Celsius", "Fahrenheit"],
                    "description": "The temperature unit to use. Infer this from the user's location."
                }
            },
            "required": ["location", "unit"]
        }
    }
}
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
Question: what is the weather like in Sydney?
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
<|python_tag|>{"name": "get_current_conditions", "parameters": {"location": "Sydney", "unit": "Celsius"}}
<|eot_id|>
<|start_header_id|>ipython<|end_header_id|>
Clouds giving way to sun Hi: 26° Tonight: Mainly clear early, then areas of low clouds forming Lo: 13°
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
"""
path = "./Llama-3.1-8B-Instruct"
llm = LLM.from_artifacts(path)  # load the pre-compiled model artifacts from the local directory

# Near-greedy decoding: a small nucleus (top_p) and capped candidate pool (top_k),
# with at least 10 tokens generated so the answer is not cut off immediately.
sampling_params = SamplingParams(min_tokens=10, top_p=0.3, top_k=100)

responses = llm.generate([prompt], sampling_params)
for response in responses:
    print(response.outputs[0].text)
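
The prompt above hard-codes both the assistant's tool call and the tool's output, so the run above only exercises the final answer turn. Below is a minimal sketch of the dispatch step that would sit between those two turns: it parses the <|python_tag|>{...} JSON the model emits and routes it to a local function. The TOOLS registry and run_tool_call helper are illustrative assumptions (reusing the get_current_conditions stub defined above), not furiosa_llm APIs.

import json

# Hypothetical registry mapping tool names from the schema to local callables.
TOOLS = {"get_current_conditions": get_current_conditions}

def run_tool_call(raw: str) -> str:
    """Parse a '<|python_tag|>{...}' tool call emitted by the model and
    dispatch it to the matching local function (assumes one call per turn)."""
    payload = raw.split("<|python_tag|>", 1)[-1].split("<|eot_id|>", 1)[0].strip()
    call = json.loads(payload)  # e.g. {"name": ..., "parameters": {...}}
    return TOOLS[call["name"]](**call["parameters"])

# Example, using the tool call that is hard-coded in the prompt above:
print(run_tool_call('<|python_tag|>{"name": "get_current_conditions", '
                    '"parameters": {"location": "Sydney", "unit": "Celsius"}}'))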