@lmolkova
Last active June 7, 2025 03:57
gen_ai_models.py
from enum import Enum
import json
from typing import Annotated, Any, List, Literal, Optional, Union
from pydantic import BaseModel, Field, RootModel
class TextPart(BaseModel):
    """
    Describes text content sent to or received from the model.
    """
    type: Literal["text"]
    content: str = Field(description="Text content provided to or received from the model.")

    class Config:
        extra = "allow"


class ToolCallPart(BaseModel):
    """
    Describes a tool call requested by the model.
    """
    type: Literal["tool_call"] = Field(description="Content type.")
    id: str = Field(description="Unique identifier for the tool call.")
    name: str = Field(description="Name of the tool.")
    arguments: Optional[Any] = Field(default=None, description="Arguments for the tool call.")

    class Config:
        extra = "allow"


class ToolCallResponsePart(BaseModel):
    """
    Describes the result of a tool call sent back to the model.
    """
    type: Literal["tool_call_response"] = Field(description="Message type.")
    id: str = Field(description="Unique tool call identifier.")
    result: Optional[Any] = Field(default=None, description="Result of the tool call.")

    class Config:
        extra = "allow"


MessagePart = Annotated[
    Union[
        TextPart,
        ToolCallPart,
        ToolCallResponsePart,
        # Add other message part types here as needed,
        # e.g. image URL, image blob, audio URL, structured output, hosted tool call, etc.
    ],
    Field(discriminator="type"),
]


class ChatMessage(BaseModel):
    role: str
    parts: List[MessagePart]

    class Config:
        extra = "allow"
class InputMessages(RootModel[List[ChatMessage]]):
    """
    Describes input messages sent to the model.
    """


class FinishReason(str, Enum):
    """
    Describes the reason for finishing the generation.
    """
    STOP = "stop"
    LENGTH = "length"
    CONTENT_FILTER = "content_filter"
    TOOL_CALL = "tool_call"
    ERROR = "error"


class OutputMessage(ChatMessage):
    """
    Describes a generated output message.
    """
    finish_reason: Union[FinishReason, str] = Field(description="Reason for finishing the generation.")


class OutputMessages(RootModel[List[OutputMessage]]):
    """
    Describes the output messages generated by the model.
    """
class SystemInstructionMessage(BaseModel):
    """
    Describes system instructions.
    """
    role: str
    message: TextPart = Field(description="System instruction message.")

    class Config:
        extra = "allow"
def strip_titles(obj):
    """Recursively drop auto-generated "title" keys from a JSON schema."""
    if isinstance(obj, dict):
        # Build a new dict, skipping keys named "title" or "Title"
        return {
            k: strip_titles(v)
            for k, v in obj.items()
            if k.lower() != "title"
        }
    if isinstance(obj, list):
        return [strip_titles(item) for item in obj]
    # Scalars (str, int, None, …) are returned unchanged.
    return obj
print(json.dumps(strip_titles(OutputMessages.model_json_schema()), indent=4))
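As a quick sanity check, the discriminated union and the list-shaped root model above can be exercised like this. This is a condensed, self-contained sketch (assuming pydantic v2; class bodies are trimmed to the fields used here, and the payload values are illustrative):

```python
from typing import Annotated, Any, List, Literal, Optional, Union
from pydantic import BaseModel, Field, RootModel

class TextPart(BaseModel):
    type: Literal["text"]
    content: str

class ToolCallPart(BaseModel):
    type: Literal["tool_call"]
    id: str
    name: str
    arguments: Optional[Any] = None

MessagePart = Annotated[Union[TextPart, ToolCallPart], Field(discriminator="type")]

class ChatMessage(BaseModel):
    role: str
    parts: List[MessagePart]

class OutputMessage(ChatMessage):
    finish_reason: str

class OutputMessages(RootModel[List[OutputMessage]]):
    pass

# The "type" tag routes each part to the matching model during validation.
msgs = OutputMessages.model_validate([
    {
        "role": "assistant",
        "finish_reason": "tool_call",
        "parts": [
            {"type": "text", "content": "Checking the weather."},
            {"type": "tool_call", "id": "call_1", "name": "get_weather",
             "arguments": {"city": "Paris"}},
        ],
    },
])
assert isinstance(msgs.root[0].parts[0], TextPart)
assert isinstance(msgs.root[0].parts[1], ToolCallPart)

# RootModel serializes back to a bare JSON array, not a wrapper object.
assert msgs.model_dump_json().startswith("[")
```

Note that parsing a part with an unknown `type` tag raises a ValidationError instead of silently matching the wrong model, which is the point of the discriminator.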
aabmass commented Jun 3, 2025

From SIG

  • we could remove the wrapping messages like InputMessages and just do a list

  • we don't need an index field if Choice is an array, which the Vertex AI API uses; it would look like:

    class Choice(BaseModel):
      finish_reason: str
      message: ChatMessage | None
    
    OutputMessages = list[Choice]
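The flattened shape suggested above could be sketched as follows. This is only an illustration of the suggestion (assuming pydantic v2 and a minimal ChatMessage); a plain `list[Choice]` needs no wrapper model, and a TypeAdapter handles validation of the bare list:

```python
from typing import List, Optional
from pydantic import BaseModel, TypeAdapter

class ChatMessage(BaseModel):
    role: str

class Choice(BaseModel):
    finish_reason: str
    # message may be absent, e.g. when generation failed before producing one
    message: Optional[ChatMessage] = None

# Validate a bare list without defining a wrapper model.
choices = TypeAdapter(List[Choice]).validate_python([
    {"finish_reason": "stop", "message": {"role": "assistant"}},
    {"finish_reason": "error"},
])
assert choices[1].message is None
```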


lmolkova commented Jun 7, 2025

@aabmass @alexmojaki I pushed this to PR open-telemetry/semantic-conventions#2179:

https://github.com/open-telemetry/semantic-conventions/blob/e81cda3676bb509aa95799e54c0e5ed4f5a27817/docs/gen-ai/non-normative/models.ipynb

Output messages are defined there as class OutputMessages(RootModel[List[OutputMessage]]), and RootModel effectively means it is serialized as a plain list.

Python is not my first or even second language, so I'm pretty sure there are some problems in that code, which I hope you can point out :)
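The "effectively just a list" behavior of RootModel can be verified with a minimal standalone snippet (assuming pydantic v2; Item/Items are hypothetical names for illustration):

```python
from typing import List
from pydantic import BaseModel, RootModel

class Item(BaseModel):
    name: str

class Items(RootModel[List[Item]]):
    pass

# Validation accepts a bare list, and dumping produces a bare list back —
# there is no wrapping object key anywhere in the JSON shape.
items = Items.model_validate([{"name": "a"}, {"name": "b"}])
assert items.model_dump() == [{"name": "a"}, {"name": "b"}]
```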
