@caoer
Created December 31, 2024 15:02

Directory structure:

└── microsoft-TypeChat/
    ├── SUPPORT.md
    ├── typescript/
    │   ├── examples/
    │   │   ├── crossword/
    │   │   │   └── README.md
    │   │   ├── coffeeShop-zod/
    │   │   │   └── README.md
    │   │   ├── sentiment-zod/
    │   │   │   └── README.md
    │   │   ├── calendar/
    │   │   │   └── README.md
    │   │   ├── healthData/
    │   │   │   └── README.md
    │   │   ├── restaurant/
    │   │   │   └── README.md
    │   │   ├── math/
    │   │   │   └── README.md
    │   │   ├── music/
    │   │   │   ├── migrations.md
    │   │   │   └── README.md
    │   │   ├── coffeeShop/
    │   │   │   └── README.md
    │   │   ├── sentiment/
    │   │   │   └── README.md
    │   │   ├── README.md
    │   │   └── multiSchema/
    │   │       └── README.md
    │   └── README.md
    ├── python/
    │   ├── examples/
    │   │   ├── calendar/
    │   │   │   └── README.md
    │   │   ├── healthData/
    │   │   │   └── README.md
    │   │   ├── restaurant/
    │   │   │   └── README.md
    │   │   ├── math/
    │   │   │   └── README.md
    │   │   ├── music/
    │   │   │   └── README.md
    │   │   ├── coffeeShop/
    │   │   │   └── README.md
    │   │   ├── sentiment/
    │   │   │   └── README.md
    │   │   ├── README.md
    │   │   └── multiSchema/
    │   │       └── README.md
    │   └── README.md
    ├── dotnet/
    │   └── README.md
    ├── site/
    │   └── src/
    │       ├── docs/
    │       │   ├── introduction.md
    │       │   ├── typescript/
    │       │   │   └── basic-usage.md
    │       │   ├── examples.md
    │       │   ├── faq.md
    │       │   └── techniques.md
    │       └── blog/
    │           ├── introducing-typechat.md
    │           └── announcing-typechat-0-1-0.md
    ├── CODE_OF_CONDUCT.md
    ├── SECURITY.md
    └── README.md

================================================ File: /README.md

TypeChat

TypeChat is a library that makes it easy to build natural language interfaces using types.

Building natural language interfaces has traditionally been difficult. These apps often relied on complex decision trees to determine intent and collect the required inputs to take action. Large language models (LLMs) have made this easier by enabling us to take natural language input from a user and match to intent. This has introduced its own challenges including the need to constrain the model's reply for safety, structure responses from the model for further processing, and ensuring that the reply from the model is valid. Prompt engineering aims to solve these problems, but comes with a steep learning curve and increased fragility as the prompt increases in size.

TypeChat replaces prompt engineering with schema engineering.

Simply define types that represent the intents supported in your natural language application. That could be as simple as an interface for categorizing sentiment or more complex examples like types for a shopping cart or music application. For example, to add additional intents to a schema, a developer can add additional types into a discriminated union. To make schemas hierarchical, a developer can use a "meta-schema" to choose one or more sub-schemas based on user input.
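As a sketch of the discriminated-union idea, a cart-style schema might tag each intent with a discriminant property and combine them into one union (all names here are illustrative, not taken from any TypeChat example):

```typescript
// Hypothetical schema sketch: each intent is tagged by a discriminant
// property ("actionType"), and the union is what the model must match.
interface AddItemAction {
    actionType: "add item";
    itemName: string;
    quantity: number;
}

interface RemoveItemAction {
    actionType: "remove item";
    itemName: string;
}

interface UnknownAction {
    actionType: "unknown";
    text: string; // verbatim user input that matched no other intent
}

type CartAction = AddItemAction | RemoveItemAction | UnknownAction;

// Narrowing on the discriminant gives type-safe handling of each intent.
function describe(action: CartAction): string {
    switch (action.actionType) {
        case "add item":
            return `add ${action.quantity} x ${action.itemName}`;
        case "remove item":
            return `remove ${action.itemName}`;
        case "unknown":
            return `unrecognized: ${action.text}`;
    }
}
```

Adding a new intent is then just adding another interface to the union; the compiler flags every `switch` that does not yet handle it.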

After defining your types, TypeChat takes care of the rest by:

  1. Constructing a prompt to the LLM using types.
  2. Validating that the LLM response conforms to the schema. If validation fails, repairing the non-conforming output through further language model interaction.
  3. Succinctly summarizing (without using an LLM) the instance and confirming that it aligns with user intent.

Types are all you need!

Getting Started

Install TypeChat for TypeScript/JavaScript:

npm install typechat

You can also work with TypeChat from source for TypeScript, Python, and .NET.

To see TypeChat in action, we recommend exploring the TypeChat example projects. You can try them on your local machine or in a GitHub Codespace.

To learn more about TypeChat, visit the documentation which includes more information on TypeChat and how to get started.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.

================================================ File: /SUPPORT.md

Support

How to file issues and get help

This project uses GitHub Issues to track bugs and feature requests. Please search the existing issues before filing new issues to avoid duplicates. For new issues, file your bug or feature request as a new issue.

For help and questions about using this project, please either use the project's GitHub Discussions area or Stack Overflow.

Microsoft Support Policy

Support for this project is limited to the resources listed above.

================================================ File: /typescript/examples/crossword/README.md

Crossword

The Crossword example shows how to include an image in a multimodal prompt and use the image to answer a user's question. The responses follow the CrosswordActions type.

Target models

This example explores multi-modal input. To run this, you will need a model that accepts images as input. The example has been tested with gpt-4-vision and gpt-4-omni models.

Try Crossword

To run the Crossword example, follow the instructions in the examples README.

Usage

Example prompts can be found in src/input.txt.

For example, given the following input statement:

Input:

🏁> What is the clue for 61 across

Output:

"Monogram in French fashion"

================================================ File: /typescript/examples/coffeeShop-zod/README.md

Coffee Shop

The Coffee Shop example shows how to capture user intent as a set of "nouns". In this case, the nouns are items in a coffee order, where valid items are defined starting from the Cart type. This example also uses the UnknownText type as a way to capture user input that doesn't match to an existing type in Cart.

Try Coffee Shop

To run the Coffee Shop example, follow the instructions in the examples README.

Usage

Example prompts can be found in src/input.txt and src/input2.txt.

For example, we could use natural language to describe our coffee shop order:

Input:

β˜•> we'd like a cappuccino with a pack of sugar

Output:

{
  "items": [
    {
      "type": "lineitem",
      "product": {
        "type": "LatteDrinks",
        "name": "cappuccino",
        "options": [
          {
            "type": "Sweeteners",
            "name": "sugar",
            "optionQuantity": "regular"
          }
        ]
      },
      "quantity": 1
    }
  ]
}
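The JSON above implies a shape like the following. This is a plain-TypeScript sketch inferred from the output, not the example's real schema (which is defined with Zod); the field names follow the JSON, and everything else is an assumption:

```typescript
// Shapes inferred from the example output above (illustrative only).
interface DrinkOption {
    type: string;            // e.g. "Sweeteners"
    name: string;            // e.g. "sugar"
    optionQuantity?: string; // e.g. "regular"
}

interface LatteDrink {
    type: "LatteDrinks";
    name: string;            // e.g. "cappuccino"
    options?: DrinkOption[];
}

interface LineItem {
    type: "lineitem";
    product: LatteDrink;     // the real schema unions several product types
    quantity: number;
}

interface Cart {
    items: LineItem[];
}

// The validated JSON from the output above satisfies this shape:
const cart: Cart = {
    items: [{
        type: "lineitem",
        product: {
            type: "LatteDrinks",
            name: "cappuccino",
            options: [{ type: "Sweeteners", name: "sugar", optionQuantity: "regular" }],
        },
        quantity: 1,
    }],
};
```

Because the translator guarantees the JSON matches this shape, downstream code can consume `cart` without defensive checks.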

================================================ File: /typescript/examples/sentiment-zod/README.md

Sentiment

The Sentiment example shows how to match user intent to a set of nouns, in this case categorizing user sentiment of the input as negative, neutral, or positive with the SentimentResponse type.

Try Sentiment

To run the Sentiment example, follow the instructions in the examples README.

Usage

Example prompts can be found in src/input.txt.

For example, given the following input statement:

Input:

πŸ˜€> TypeChat is awesome!

Output:

The sentiment is positive

================================================ File: /typescript/examples/calendar/README.md

Calendar

The Calendar example shows how you can capture user intent as a sequence of actions, such as adding event to a calendar or searching for an event as defined by the CalendarActions type.

Try Calendar

To run the Calendar example, follow the instructions in the examples README.

Usage

Example prompts can be found in src/input.txt.

For example, we could use natural language to describe an event coming up soon:

Input:

πŸ“…> I need to get my tires changed from 12:00 to 2:00 pm on Friday March 15, 2024

Output:

{
  "actions": [
    {
      "actionType": "add event",
      "event": {
        "day": "Friday March 15, 2024",
        "timeRange": {
          "startTime": "12:00 pm",
          "endTime": "2:00 pm"
        },
        "description": "get my tires changed"
      }
    }
  ]
}

================================================ File: /typescript/examples/healthData/README.md

Health Data Agent

This example requires GPT-4.

Demonstrates a strongly typed chat: a natural language interface for entering health information. You work with a health data agent to interactively enter your medications or conditions.

The Health Data Agent shows how strongly typed agents with history could interact with a user to collect information needed for one or more data types ("form filling").

Target models

For best and consistent results, use gpt-4.

Try the Health Data Agent

To run the Health Data Agent example, follow the instructions in the examples README.

Usage

Example prompts can be found in src/input.txt.

For example, given the following input statement:

Input:

🀧> I am taking klaritin for my allergies

Output:

================================================ File: /typescript/examples/restaurant/README.md

Restaurant

The Restaurant example shows how to capture user intent as a set of "nouns", but with more complex linguistic input. This example can act as a "stress test" for language models, illustrating the line between simpler and more advanced language models in handling compound sentences, distractions, and corrections. This example also shows how we can create a "user intent summary" to display to a user. It uses a natural language experience for placing an order with the Order type.

Try Restaurant

To run the Restaurant example, follow the instructions in the examples README.

Usage

Example prompts can be found in src/input.txt.

For example, given the following order:

Input:

πŸ•> I want three pizzas, one with mushrooms and the other two with sausage. Make one sausage a small. And give me a whole Greek and a Pale Ale. And give me a Mack and Jacks.

Output:

This is GPT-4-0613 output; GPT-3.5-turbo and most other models miss this one.

1 large pizza with mushrooms
1 large pizza with sausage
1 small pizza with sausage
1 whole Greek salad
1 Pale Ale
1 Mack and Jacks

Note

Across different models, you may see responses that do not correspond to the user intent. In the above example, some models may not capture the fact that the order is still only for 3 pizzas, and that "make one sausage a small" is not a request for a new pizza.

  1 large pizza with mushrooms
- 1 large pizza with sausage
+ 2 large pizza with sausage
  1 small pizza with sausage
  1 whole Greek salad
  1 Pale Ale
  1 Mack and Jacks

The output here from GPT 3.5-turbo incorrectly shows 1 mushroom pizza and 3 sausage pizzas.

Because all language models are probabilistic and therefore will sometimes output incorrect inferences, the TypeChat pattern includes asking the user for confirmation (or giving the user an easy way to undo actions). It is important to ask for confirmation without use of the language model so that incorrect inference is guaranteed not to be part of the intent summary generated.

In this example, the function printOrder in the file main.ts summarizes the food order (as seen in the above output) without use of a language model. The printOrder function can work with a strongly typed Order object because the TypeChat validation process has checked that the emitted JSON corresponds to the Order type:

function printOrder(order: Order) {

Having a validated, typed data structure simplifies the task of generating a succinct summary suitable for user confirmation.
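A minimal sketch of what such a summary function might look like follows. The `Order` shape here is assumed from the output above, not taken from the real schema, and the formatting is illustrative:

```typescript
// Illustrative sketch: summarize a validated order without an LLM.
// The Order shape is inferred from the example output, not the real schema.
interface OrderItem {
    quantity: number;
    size?: string; // e.g. "large", "small"; absent for drinks
    name: string;  // e.g. "pizza with sausage", "Pale Ale"
}

interface Order {
    items: OrderItem[];
}

// Plain string formatting over a typed object: deterministic, so an
// incorrect model inference can never leak into the confirmation text.
function printOrder(order: Order): string {
    return order.items
        .map(item => [item.quantity, item.size, item.name].filter(Boolean).join(" "))
        .join("\n");
}
```

Since the summary is computed, not generated, the user always confirms exactly what the validated data says.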

================================================ File: /typescript/examples/math/README.md

Math

The Math example shows how to use TypeChat for program generation based on an API schema with the evaluateJsonProgram function. This example translates calculations into simple programs given an API type that can perform the four basic mathematical operations.

Try Math

To run the Math example, follow the instructions in the examples README.

Usage

Example prompts can be found in src/input.txt.

For example, we could use natural language to describe mathematical operations, and TypeChat will generate a program that can execute the math API defined in the schema.

Input:

🟰> multiply two by three, then multiply four by five, then sum the results

Output:

import { API } from "./schema";
function program(api: API) {
  const step1 = api.mul(2, 3);
  const step2 = api.mul(4, 5);
  return api.add(step1, step2);
}
Running program:
mul(2, 3)
mul(4, 5)
add(6, 20)
Result: 26
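Concretely, executing the generated program amounts to running it against an implementation of the API. The sketch below assumes the four operations are named `add`, `sub`, `mul`, and `div` (`mul` and `add` appear in the output above; the other two names are assumptions), with TypeChat's `evaluateJsonProgram` doing the real dispatch:

```typescript
// A concrete implementation of a four-operation math API, plus the
// generated program from the output above written as a plain function.
interface API {
    add(x: number, y: number): number;
    sub(x: number, y: number): number;
    mul(x: number, y: number): number;
    div(x: number, y: number): number;
}

const api: API = {
    add: (x, y) => x + y,
    sub: (x, y) => x - y,
    mul: (x, y) => x * y,
    div: (x, y) => x / y,
};

function program(api: API): number {
    const step1 = api.mul(2, 3); // 6
    const step2 = api.mul(4, 5); // 20
    return api.add(step1, step2); // 26
}
```

Running `program(api)` reproduces the `Result: 26` shown above.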

================================================ File: /typescript/examples/music/migrations.md

Local Music DB Migrations

Tracks table

CREATE TABLE tracks (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    artist_id INTEGER NOT NULL,
    album_id INTEGER,
    duration INTEGER,
    release_date TEXT,
    genre TEXT
);

Albums table

CREATE TABLE albums (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    artist_id INTEGER NOT NULL,
    release_date TEXT,
    genre TEXT
);

Playlists table

CREATE TABLE playlists (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    user_id INTEGER NOT NULL,
    creation_date TEXT,
    description TEXT
);

Artists table

CREATE TABLE artists (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    country TEXT,
    genre TEXT
);

================================================ File: /typescript/examples/music/README.md

Music

The Music example shows how to capture user intent as actions in JSON that correspond to a simple dataflow program over the API provided in the intent schema. It applies this pattern by using natural language to control the Spotify API to play music, create playlists, and perform other actions from the API.

Try Music

A Spotify Premium account is required to run this example.

To run the Music example, follow the instructions in the examples README.

This example also requires additional setup to use the Spotify API:

  1. Go to https://developer.spotify.com/dashboard.
  2. Log into Spotify with your user account if you are not already logged in.
  3. Click the button in the upper right labeled "Create App".
  4. Fill in the form, making sure the Redirect URI is http://localhost:PORT/callback, where PORT is a four-digit port number you choose for the authorization redirect.
  5. Click the settings button and copy down the Client ID and Client Secret (the client secret requires you to click 'View client secret').
  6. In your .env file, set SPOTIFY_APP_CLI to your Client ID and SPOTIFY_APP_CLISEC to your Client Secret. Also set SPOTIFY_APP_PORT to the PORT on your local machine that you chose in step 4.

Usage

Example prompts can be found in src/input.txt.

For example, use natural language to start playing a song with the Spotify player:

Input:

🎡> play shake it off by taylor swift

Output:

Plan Validated:
{
    "@steps": [
        {
            "@func": "searchTracks",
            "@args": [
                "shake it off taylor swift"
            ]
        },
        {
            "@func": "play",
            "@args": [
                {
                    "@ref": 0
                }
            ]
        }
    ]
}
import { API } from "./schema";
function program(api: API) {
  const step1 = api.searchTracks("shake it off taylor swift");
  return api.play(step1);
}
Playing...
Shake It Off
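The validated plan is a small dataflow program: each step calls a named function, and `"@ref"` substitutes the result of an earlier step. A minimal interpreter for this shape might look like the following sketch (the Spotify API is stubbed; TypeChat's own evaluator is more involved):

```typescript
// Minimal sketch of evaluating a "@steps" plan like the one above.
type Ref = { "@ref": number };
type Step = { "@func": string; "@args": unknown[] };
type Plan = { "@steps": Step[] };

function evaluatePlan(
    plan: Plan,
    api: Record<string, (...args: any[]) => unknown>
): unknown {
    const results: unknown[] = [];
    for (const step of plan["@steps"]) {
        const args = step["@args"].map(arg =>
            typeof arg === "object" && arg !== null && "@ref" in arg
                ? results[(arg as Ref)["@ref"]] // result of an earlier step
                : arg
        );
        results.push(api[step["@func"]](...args));
    }
    return results[results.length - 1]; // last step's result
}
```

Feeding the plan shown above through such an interpreter would call `searchTracks` first and pass its result to `play`.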

================================================ File: /typescript/examples/coffeeShop/README.md

Coffee Shop

The Coffee Shop example shows how to capture user intent as a set of "nouns". In this case, the nouns are items in a coffee order, where valid items are defined starting from the Cart type. This example also uses the UnknownText type as a way to capture user input that doesn't match to an existing type in Cart.

Try Coffee Shop

To run the Coffee Shop example, follow the instructions in the examples README.

Usage

Example prompts can be found in src/input.txt and src/input2.txt.

For example, we could use natural language to describe our coffee shop order:

Input:

β˜•> we'd like a cappuccino with a pack of sugar

Output:

{
  "items": [
    {
      "type": "lineitem",
      "product": {
        "type": "LatteDrinks",
        "name": "cappuccino",
        "options": [
          {
            "type": "Sweeteners",
            "name": "sugar",
            "optionQuantity": "regular"
          }
        ]
      },
      "quantity": 1
    }
  ]
}

================================================ File: /typescript/examples/sentiment/README.md

Sentiment

The Sentiment example shows how to match user intent to a set of nouns, in this case categorizing user sentiment of the input as negative, neutral, or positive with the SentimentResponse type.

Try Sentiment

To run the Sentiment example, follow the instructions in the examples README.

Usage

Example prompts can be found in src/input.txt.

For example, given the following input statement:

Input:

πŸ˜€> TypeChat is awesome!

Output:

The sentiment is positive

================================================ File: /typescript/examples/README.md

To see TypeChat in action, check out the examples found in this directory.

Each example shows how TypeChat handles natural language input, and maps to validated JSON as output. Most example inputs run on both GPT 3.5 and GPT 4. We are working to reproduce outputs with other models. Generally, models trained on both code and natural language text have high accuracy.

We recommend reading each example in the following order.

Sentiment: A sentiment classifier which categorizes user input as negative, neutral, or positive. This is TypeChat's "hello world!"
Coffee Shop: An intelligent agent for a coffee shop. This sample translates user intent into a list of coffee order items.
Calendar: An intelligent scheduler. This sample translates user intent into a sequence of actions to modify a calendar.
Restaurant: An intelligent agent for taking orders at a restaurant. Similar to the coffee shop example, but uses a more complex schema to model more complex linguistic input. The prose files illustrate the line between simpler and more advanced language models in handling compound sentences, distractions, and corrections. This example also shows how we can use TypeScript to provide a user intent summary.
Math: Translates calculations into simple programs given an API that can perform the four basic mathematical operations. This example highlights TypeChat's program generation capabilities.
Music: An app for playing music, creating playlists, etc. on Spotify through natural language. Each user intent is translated into a series of actions in JSON which correspond to a simple dataflow program, where each step can consume data produced by a previous step.

Step 1: Configure your development environment

Option 1: Local Machine

You can experiment with these TypeChat examples on your local machine with just Node.js.

Ensure Node.js (18.16.0 LTS or newer) is installed.

git clone https://github.com/microsoft/TypeChat
cd TypeChat/typescript
npm install

Option 2: GitHub Codespaces

GitHub Codespaces enables you to try TypeChat quickly in a development environment hosted in the cloud.

On the TypeChat repository page:

  1. Click the green button labeled <> Code
  2. Select the Codespaces tab.
  3. Click the green Create codespace button.

If this is your first time creating a codespace on this repository, GitHub will take a moment to create a dev container image for your session. Once the image has been created, the browser will load Visual Studio Code in a developer environment automatically configured with the necessary prerequisites, TypeChat cloned, and packages installed.

Remember that you are running in the cloud, so all changes you make to the source tree must be committed and pushed before destroying the codespace. GitHub accounts are usually configured to automatically delete codespaces that have been inactive for 30 days.

For more information, see the GitHub Codespaces Overview

Step 2: Build TypeChat Samples

Build TypeChat and the examples by running the following command in the repository root:

npm run build-all

Step 3: Configure environment variables

Currently, the examples are running on OpenAI or Azure OpenAI endpoints. To use an OpenAI endpoint, include the following environment variables:

Variable Value
OPENAI_MODEL The OpenAI model name (e.g. gpt-3.5-turbo or gpt-4)
OPENAI_API_KEY Your OpenAI API key
OPENAI_ENDPOINT OpenAI API Endpoint - optional, defaults to "https://api.openai.com/v1/chat/completions"
OPENAI_ORGANIZATION OpenAI Organization - optional, defaults to ""

To use an Azure OpenAI endpoint, include the following environment variables:

Variable Value
AZURE_OPENAI_ENDPOINT The full URL of the Azure OpenAI REST API (e.g. https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15)
AZURE_OPENAI_API_KEY Your Azure OpenAI API key

We recommend setting environment variables by creating a .env file in the root directory of the project that looks like the following:

# For OpenAI
OPENAI_MODEL=...
OPENAI_API_KEY=...

# For Azure OpenAI
AZURE_OPENAI_ENDPOINT=...
AZURE_OPENAI_API_KEY=...

Step 4: Run the examples

Examples can be found in the examples directory.

To run an example interactively, type node ./dist/main.js from the example's directory and enter requests when prompted. Type quit or exit to end the session. You can also open the selected example's directory in VS Code and press F5 to launch the example in debug mode.

Note that there are various sample "prose" files (e.g. input.txt) provided in each src directory that can give a sense of what you can run.

To run an example with one of these input files, run node ./dist/main.js <input-file-path>. For example, in the coffeeShop directory, you can run:

node ./dist/main.js ./dist/input.txt

================================================ File: /typescript/examples/multiSchema/README.md

MultiSchema

This application demonstrates a simple way to write a super-app that automatically routes user requests to child apps.

In this example, the child apps are existing TypeChat chat examples:

  • CoffeeShop
  • Restaurant
  • Calendar
  • Sentiment
  • Math
  • Plugins
  • HealthData

Target Models

Works with GPT-3.5 Turbo and GPT-4.

Sub-apps like HealthData and Plugins work best with GPT-4.

Usage

Example prompts can be found in src/input.txt.
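The routing idea can be sketched as a small "meta-schema" that names each child app, with a classifier picking one before the request is translated against that app's own schema. In the real example the model does the classification; in the hedged sketch below a trivial keyword lookup stands in for it, and all names are illustrative:

```typescript
// Illustrative sketch of super-app routing: classify the request to a
// child app, then hand it off to that app's own schema/translator.
// A keyword table stands in for the model-based classifier here.
interface TaskClassification {
    app: "CoffeeShop" | "Restaurant" | "Calendar" | "Sentiment" | "Math" | "Unknown";
}

const keywords: Record<string, TaskClassification["app"]> = {
    latte: "CoffeeShop",
    pizza: "Restaurant",
    meeting: "Calendar",
    multiply: "Math",
};

function route(request: string): TaskClassification {
    const lower = request.toLowerCase();
    for (const [word, app] of Object.entries(keywords)) {
        if (lower.includes(word)) return { app };
    }
    return { app: "Unknown" }; // fall through when no child app matches
}
```

Once a request is routed, the rest of the flow is identical to running the child example on its own.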

================================================ File: /typescript/README.md

TypeChat

TypeChat is a library that makes it easy to build natural language interfaces using types.

Building natural language interfaces has traditionally been difficult. These apps often relied on complex decision trees to determine intent and collect the required inputs to take action. Large language models (LLMs) have made this easier by enabling us to take natural language input from a user and match to intent. This has introduced its own challenges including the need to constrain the model's reply for safety, structure responses from the model for further processing, and ensuring that the reply from the model is valid. Prompt engineering aims to solve these problems, but comes with a steep learning curve and increased fragility as the prompt increases in size.

TypeChat replaces prompt engineering with schema engineering.

Simply define types that represent the intents supported in your natural language application. That could be as simple as an interface for categorizing sentiment or more complex examples like types for a shopping cart or music application. For example, to add additional intents to a schema, a developer can add additional types into a discriminated union. To make schemas hierarchical, a developer can use a "meta-schema" to choose one or more sub-schemas based on user input.

After defining your types, TypeChat takes care of the rest by:

  1. Constructing a prompt to the LLM using types.
  2. Validating that the LLM response conforms to the schema. If validation fails, repairing the non-conforming output through further language model interaction.
  3. Succinctly summarizing (without using an LLM) the instance and confirming that it aligns with user intent.

Types are all you need!

Getting Started

Install TypeChat:

npm install typechat

You can also build TypeChat from source:

git clone https://github.com/microsoft/TypeChat
cd TypeChat/typescript
npm install
npm run build

To see TypeChat in action, we recommend exploring the TypeChat example projects. You can try them on your local machine or in a GitHub Codespace.

To learn more about TypeChat, visit the documentation which includes more information on TypeChat and how to get started.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.

================================================ File: /python/examples/calendar/README.md

Calendar

The Calendar example shows how you can capture user intent as a sequence of actions, such as adding event to a calendar or searching for an event as defined by the CalendarActions type.

Try Calendar

To run the Calendar example, follow the instructions in the examples README.

Usage

Example prompts can be found in input.txt.

For example, we could use natural language to describe an event coming up soon:

Input:

πŸ“…> I need to get my tires changed from 12:00 to 2:00 pm on Friday March 15, 2024

Output:

{
  "actions": [
    {
      "actionType": "add event",
      "event": {
        "day": "Friday March 15, 2024",
        "timeRange": {
          "startTime": "12:00 pm",
          "endTime": "2:00 pm"
        },
        "description": "get my tires changed"
      }
    }
  ]
}

================================================ File: /python/examples/healthData/README.md

Health Data Agent

This example requires GPT-4.

Demonstrates a strongly typed chat: a natural language interface for entering health information. You work with a health data agent to interactively enter your medications or conditions.

The Health Data Agent shows how strongly typed agents with history could interact with a user to collect information needed for one or more data types ("form filling").

Target models

For best and consistent results, use gpt-4.

Try the Health Data Agent

To run the Health Data Agent example, follow the instructions in the examples README.

Usage

Example prompts can be found in input.txt.

For example, given the following input statement:

Input:

🀧> I am taking klaritin for my allergies

Output:

================================================ File: /python/examples/restaurant/README.md

Restaurant

The Restaurant example shows how to capture user intent as a set of "nouns", but with more complex linguistic input. This example can act as a "stress test" for language models, illustrating the line between simpler and more advanced language models in handling compound sentences, distractions, and corrections. This example also shows how we can create a "user intent summary" to display to a user. It uses a natural language experience for placing an order with the Order type.

Try Restaurant

To run the Restaurant example, follow the instructions in the examples README.

Usage

Example prompts can be found in input.txt.

For example, given the following order:

Input:

πŸ•> I want three pizzas, one with mushrooms and the other two with sausage. Make one sausage a small. And give me a whole Greek and a Pale Ale. And give me a Mack and Jacks.

Output:

This is GPT-4-0613 output; GPT-3.5-turbo and most other models miss this one.

1 large pizza with mushrooms
1 large pizza with sausage
1 small pizza with sausage
1 whole Greek salad
1 Pale Ale
1 Mack and Jacks

Note

Across different models, you may see responses that do not correspond to the user intent. In the above example, some models may not capture the fact that the order is still only for 3 pizzas, and that "make one sausage a small" is not a request for a new pizza.

  1 large pizza with mushrooms
- 1 large pizza with sausage
+ 2 large pizza with sausage
  1 small pizza with sausage
  1 whole Greek salad
  1 Pale Ale
  1 Mack and Jacks

The output here from GPT 3.5-turbo incorrectly shows 1 mushroom pizza and 3 sausage pizzas.

Because all language models are probabilistic and therefore will sometimes output incorrect inferences, the TypeChat pattern includes asking the user for confirmation (or giving the user an easy way to undo actions). It is important to ask for confirmation without use of the language model so that incorrect inference is guaranteed not to be part of the intent summary generated.

In this example, the function printOrder in the file main.ts summarizes the food order (as seen in the above output) without use of a language model. The printOrder function can work with a strongly typed Order object because the TypeChat validation process has checked that the emitted JSON corresponds to the Order type:

function printOrder(order: Order) {

Having a validated, typed data structure simplifies the task of generating a succinct summary suitable for user confirmation.

================================================ File: /python/examples/math/README.md

Math

The Math example shows how to use TypeChat for program generation based on an API schema with the evaluateJsonProgram function. This example translates calculations into simple programs given an API type that can perform the four basic mathematical operations.

Try Math

To run the Math example, follow the instructions in the examples README.

Usage

Example prompts can be found in input.txt.

For example, we could use natural language to describe mathematical operations, and TypeChat will generate a program that can execute the math API defined in the schema.

Input:

🟰> multiply two by three, then multiply four by five, then sum the results

Output:

import { API } from "./schema";
function program(api: API) {
  const step1 = api.mul(2, 3);
  const step2 = api.mul(4, 5);
  return api.add(step1, step2);
}
Running program:
mul(2, 3)
mul(4, 5)
add(6, 20)
Result: 26
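The "Running program" trace above can be mimicked with a tiny interpreter. The following Python sketch evaluates a JSON-encoded dataflow program over a four-operation math API, in the spirit of evaluateJsonProgram; the JSON encoding (an `op`/`args` list with `{"ref": i}` back-references) is an assumption for illustration, not TypeChat's actual program format.

```python
# A minimal math API: the four basic operations.
api = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "div": lambda a, b: a / b,
}

def evaluate(program: list[dict]) -> float:
    results = []  # results[i] holds the value produced by step i
    for step in program:
        # An argument is either a literal number or {"ref": i},
        # a reference to the result of an earlier step.
        args = [results[a["ref"]] if isinstance(a, dict) else a
                for a in step["args"]]
        results.append(api[step["op"]](*args))
    return results[-1]

# "multiply two by three, then multiply four by five, then sum the results"
program = [
    {"op": "mul", "args": [2, 3]},
    {"op": "mul", "args": [4, 5]},
    {"op": "add", "args": [{"ref": 0}, {"ref": 1}]},
]
print(evaluate(program))  # 26
```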

================================================ File: /python/examples/music/README.md

Music

The Music example shows how to capture user intent as actions in JSON which corresponds to a simple dataflow program over the API provided in the intent schema. This example shows this pattern using natural language to control the Spotify API to play music, create playlists, and perform other actions from the API.

Try Music

A Spotify Premium account is required to run this example.

To run the Music example, follow the instructions in the examples README.

This example also requires additional setup to use the Spotify API:

  1. Go to https://developer.spotify.com/dashboard.
  2. Log into Spotify with your user account if you are not already logged in.
  3. Click the button in the upper right labeled "Create App".
  4. Fill in the form, making sure the Redirect URI is http://localhost:PORT/callback, where PORT is a four-digit port number you choose for the authorization redirect.
  5. Click the settings button and copy down the Client ID and Client Secret (the client secret requires you to click 'View client secret').
  6. In your .env file, set SPOTIFY_APP_CLI to your Client ID and SPOTIFY_APP_CLISEC to your Client Secret. Also set SPOTIFY_APP_PORT to the PORT on your local machine that you chose in step 4.

Usage

Example prompts can be found in input.txt.

For example, use natural language to start playing a song with the Spotify player:

Input:

🎡> play shake it off by taylor swift

Output:

JSON View
{
  "actions": [
    {
      "actionName": "play",
      "parameters": {
        "artist": "taylor swift",
        "trackName": "shake it off",
        "quantity": 0
      }
    }
  ]
}
Playing...
Shake It Off
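Once the actions JSON has been validated, executing it is a dispatch problem. Here is a hedged Python sketch of routing validated actions to handlers; the `play` handler below is a stand-in that records what would be played, not the example's real Spotify call.

```python
# Sketch: dispatch validated JSON actions to handler functions.
# The handler is an illustrative stand-in, not a real Spotify API call.
played = []

def play(artist: str, trackName: str, quantity: int = 0) -> None:
    played.append(f"{trackName} by {artist}")

handlers = {"play": play}

def run_actions(response: dict) -> None:
    for action in response["actions"]:
        handlers[action["actionName"]](**action["parameters"])

run_actions({"actions": [{"actionName": "play", "parameters": {
    "artist": "taylor swift", "trackName": "shake it off", "quantity": 0}}]})
print(played)
```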

================================================ File: /python/examples/coffeeShop/README.md

Coffee Shop

The Coffee Shop example shows how to capture user intent as a set of "nouns". In this case, the nouns are items in a coffee order, where valid items are defined starting from the Cart type. This example also uses the UnknownText type as a way to capture user input that doesn't match to an existing type in Cart.

Try Coffee Shop

To run the Coffee Shop example, follow the instructions in the examples README.

Usage

Example prompts can be found in src/input.txt and src/input2.txt.

For example, we could use natural language to describe our coffee shop order:

Input:

β˜•> we'd like a cappuccino with a pack of sugar

Output:

{
  "items": [
    {
      "type": "lineitem",
      "product": {
        "type": "LatteDrinks",
        "name": "cappuccino",
        "options": [
          {
            "type": "Sweeteners",
            "name": "sugar",
            "optionQuantity": "regular"
          }
        ]
      },
      "quantity": 1
    }
  ]
}
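The UnknownText escape hatch means application code should separate recognized line items from unmatched user text before acting on a cart. A minimal Python sketch of that split (the cart shape mirrors the JSON above in simplified form; the `"unknown"` tag is an assumption for illustration, not TypeChat's validation logic):

```python
# Sketch: separate recognized line items from "unknown text" entries.
def split_cart(cart: dict) -> tuple[list[dict], list[str]]:
    known, unknown = [], []
    for item in cart["items"]:
        if item["type"] == "unknown":
            unknown.append(item["text"])
        else:
            known.append(item)
    return known, unknown

cart = {"items": [
    {"type": "lineitem", "product": {"name": "cappuccino"}, "quantity": 1},
    {"type": "unknown", "text": "a slice of moon cheese"},
]}
known, unknown = split_cart(cart)
print(len(known), unknown)
```

An app can then process the known items and echo the unknown text back to the user for clarification.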

================================================ File: /python/examples/sentiment/README.md

Sentiment

The Sentiment example shows how to match user intent to a set of nouns, in this case categorizing user sentiment of the input as negative, neutral, or positive with the SentimentResponse type.

Try Sentiment

To run the Sentiment example, follow the instructions in the examples README.

Usage

Example prompts can be found in input.txt.

For example, given the following input statement:

Input:

πŸ˜€> TypeChat is awesome!

Output:

The sentiment is positive

================================================ File: /python/examples/README.md

To see TypeChat in action, check out the examples found in this directory.

Each example shows how TypeChat handles natural language input and maps it to validated JSON output. Most example inputs run on both GPT 3.5 and GPT 4. We are working to reproduce outputs with other models. Generally, models trained on both code and natural language text have high accuracy.

We recommend reading each example in the following order.

Name Description
Sentiment A sentiment classifier which categorizes user input as negative, neutral, or positive. This is TypeChat's "hello world!"
Coffee Shop An intelligent agent for a coffee shop. This sample translates user intent into a list of coffee order items.
Calendar An intelligent scheduler. This sample translates user intent into a sequence of actions to modify a calendar.
HealthData The Health Data Agent shows how strongly typed agents with history could interact with a user to collect information needed for one or more data types ("form filling").
Restaurant An intelligent agent for taking orders at a restaurant. Similar to the coffee shop example, but uses a more complex schema to model more complex linguistic input. The prose files illustrate the line between simpler and more advanced language models in handling compound sentences, distractions, and corrections. This example also shows how we can use TypeScript to provide a user intent summary.
Math Translate calculations into simple programs given an API that can perform the four basic mathematical operations. This example highlights TypeChat's program generation capabilities.
MultiSchema This application demonstrates a simple way to write a super-app that automatically routes user requests to child apps.
Music An app for playing music, creating playlists, etc. on Spotify through natural language. Each user intent is translated into a series of actions in JSON which correspond to a simple dataflow program, where each step can consume data produced by a previous step.

Step 1: Configure your development environment

Option 1: Local Machine

You can experiment with these TypeChat examples on your local machine.

You will need Python >=3.11 and hatch.

git clone https://github.com/microsoft/TypeChat
cd TypeChat/python
hatch shell
python examples/sentiment/demo.py

Alternatively, you can just use venv and pip:

git clone https://github.com/microsoft/TypeChat
cd TypeChat/python
python -m venv ../.venv

# Activate the virtual environment
# Windows
../.venv/Scripts/Activate.ps1
# Unix/POSIX
source ../.venv/bin/activate

pip install .[examples]

python examples/sentiment/demo.py

Option 2: GitHub Codespaces

GitHub Codespaces enables you to try TypeChat quickly in a development environment hosted in the cloud.

On the TypeChat repository page:

  1. Click the green button labeled <> Code
  2. Select the Codespaces tab.
  3. Click the green Create codespace button.
If this is your first time creating a codespace, read this.

If this is your first time creating a codespace on this repository, GitHub will take a moment to create a dev container image for your session. Once the image has been created, the browser will load Visual Studio Code in a developer environment automatically configured with the necessary prerequisites, TypeChat cloned, and packages installed.

Remember that you are running in the cloud, so all changes you make to the source tree must be committed and pushed before destroying the codespace. GitHub accounts are usually configured to automatically delete codespaces that have been inactive for 30 days.

For more information, see the GitHub Codespaces Overview.

Step 2: Configure environment variables

Currently, the examples are running on OpenAI or Azure OpenAI endpoints. To use an OpenAI endpoint, include the following environment variables:

Variable Value
OPENAI_MODEL The OpenAI model name (e.g. gpt-3.5-turbo or gpt-4)
OPENAI_API_KEY Your OpenAI API key
OPENAI_ENDPOINT OpenAI API Endpoint - optional, defaults to "https://api.openai.com/v1/chat/completions"
OPENAI_ORGANIZATION OpenAI Organization - optional, defaults to ""

To use an Azure OpenAI endpoint, include the following environment variables:

Variable Value
AZURE_OPENAI_ENDPOINT The full URL of the Azure OpenAI REST API (e.g. https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15)
AZURE_OPENAI_API_KEY Your Azure OpenAI API key

We recommend setting environment variables by creating a .env file in the root directory of the project that looks like the following:

# For OpenAI
OPENAI_MODEL=...
OPENAI_API_KEY=...

# For Azure OpenAI
AZURE_OPENAI_ENDPOINT=...
AZURE_OPENAI_API_KEY=...

Step 3: Run the examples

Examples can be found in the examples directory.

To run an example interactively, type python examples/<example_name>/demo.py and enter requests when prompted. Type quit or exit to end the session. You can also open the selected example's directory in VS Code and press F5 to launch it in debug mode.

Note that there are various sample "prose" files (e.g. input.txt) provided in each src directory that can give a sense of what you can run.

To run an example with one of these input files, run python demo.py <input-file-path>. For example, in the coffeeShop directory, you can run:

python demo.py input.txt

================================================ File: /python/examples/multiSchema/README.md

MultiSchema

This application demonstrates a simple way to write a super-app that automatically routes user requests to child apps.

In this example, the child apps are existing TypeChat chat examples:

  • CoffeeShop
  • Restaurant
  • Calendar
  • Sentiment
  • Math
  • Plugins
  • HealthData

Target Models

Works with GPT-3.5 Turbo and GPT-4.

Sub-apps like HealthData and Plugins work best with GPT-4.

Usage

Example prompts can be found in input.txt.

================================================ File: /python/README.md

TypeChat

TypeChat is a library that makes it easy to build natural language interfaces using types.

Building natural language interfaces has traditionally been difficult. These apps often relied on complex decision trees to determine intent and collect the required inputs to take action. Large language models (LLMs) have made this easier by enabling us to take natural language input from a user and match it to intent. This has introduced its own challenges, including the need to constrain the model's reply for safety, to structure responses from the model for further processing, and to ensure that the reply from the model is valid. Prompt engineering aims to solve these problems, but it comes with a steep learning curve and increased fragility as the prompt grows in size.

TypeChat replaces prompt engineering with schema engineering.

Simply define types that represent the intents supported in your natural language application. That could be as simple as an interface for categorizing sentiment or more complex examples like types for a shopping cart or music application. For example, to add additional intents to a schema, a developer can add additional types into a discriminated union. To make schemas hierarchical, a developer can use a "meta-schema" to choose one or more sub-schemas based on user input.

After defining your types, TypeChat takes care of the rest by:

  1. Constructing a prompt to the LLM using types.
  2. Validating that the LLM response conforms to the schema. If validation fails, repairing the non-conforming output through further language model interaction.
  3. Succinctly summarizing (without use of an LLM) the instance and confirming that it aligns with user intent.
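The validate-and-repair cycle in step 2 can be sketched in miniature. In this hedged Python sketch, the canned model, the hand-written validator, and the repair prompt are all illustrative stand-ins, not TypeChat's actual implementation:

```python
import json

ALLOWED = {"negative", "neutral", "positive"}

def validate(text: str) -> dict:
    # A hand-rolled stand-in for schema validation.
    data = json.loads(text)
    if data.get("sentiment") in ALLOWED:
        return {"success": True, "data": data}
    return {"success": False, "message": "sentiment must be negative|neutral|positive"}

def fake_model(prompt: str) -> str:
    # Canned model: the first reply is non-conforming; the repair
    # prompt elicits a conforming one.
    return '{"sentiment": "positive"}' if "repair" in prompt else '{"sentiment": "great"}'

def translate(request: str, max_repairs: int = 1) -> dict:
    reply = fake_model(request)
    for _ in range(max_repairs + 1):
        result = validate(reply)
        if result["success"]:
            return result
        # Validation failed: ask the model to repair its own output.
        reply = fake_model(f"repair: {result['message']}")
    return result

print(translate("TypeChat is awesome!"))
```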

Types are all you need!

Getting Started

Note

TypeChat is not currently published. For now, install from our GitHub repository.

Install TypeChat:

pip install "typechat @ git+https://github.com/microsoft/TypeChat#subdirectory=python"

You can also develop TypeChat from source, which needs Python >=3.11, hatch, and Node.js >=20:

git clone https://github.com/microsoft/TypeChat
cd TypeChat/python
hatch shell
npm ci

To see TypeChat in action, we recommend exploring the TypeChat example projects. You can try them on your local machine or in a GitHub Codespace.

To learn more about TypeChat, visit the documentation which includes more information on TypeChat and how to get started.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.

================================================ File: /dotnet/README.md

TypeChat for .NET

TypeChat in .NET and C# is currently available on a separate TypeChat.NET repository.

================================================ File: /site/src/docs/introduction.md


layout: doc-page
title: Introduction

Large language models (or LLMs) can give us surprisingly rich answers to requests. Maybe you've tried chatting with a language model and been delighted at how effective it is to brainstorm with it.

User:

It's rainy in Seattle (surprise!). Got 3 quick suggestions for what I should do this weekend? Keep it short and direct.

ChatGPT:

  1. Visit the Seattle Art Museum (SAM) for a day of artistic inspiration and exploration.
  2. Enjoy a cozy afternoon at a local coffee shop, sipping your favorite brew and diving into a good book.
  3. Embrace the rainy weather by taking a scenic drive to Snoqualmie Falls and witness the majestic waterfall in all its splendor.

These models are great chat assistants on their own, but can they be used to add natural language interfaces to an application? The answer, as many of us have seen, is "yes, sort of", and it really depends on how we ask the model.

Adding Structure

Remember, by default these models respond in natural language which is how we read and write today; but traditional programs are kind of bad at working with these answers. Most software expects to work off of a level of structure over bytes, characters, JSON blobs, etc. β€” not natural languages like English. When we don't have structure, we have to parse out what we're interested in, and it might not always be practical.

One way to add some structure and make parsing easier might be to ask for a list of answers:

User:

Provide 3 suggestions for specific places to go to in Seattle on a rainy day. Respond in the form of a bulleted list. Be succinct.

ChatGPT:

  • Seattle Art Museum: Explore the extensive art collection and enjoy the indoor exhibits.
  • Chihuly Garden and Glass: Admire the stunning glass artwork of Dale Chihuly in a covered garden and gallery.
  • Pacific Science Center: Engage in interactive exhibits, planetarium shows, and IMAX films for a day of educational fun.

Maybe a bulleted list is enough to go off of, but how would we break this data up? Strip the leading bullet points and split on ": " to separate the venue from its description? What if a venue has a ": " in its name? It turns out that we're still left with problems parsing out natural language.
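The fragility is easy to demonstrate. This Python sketch parses bullets by splitting on ": " (the sample strings are made up for illustration), and shows how a colon inside a name silently breaks the parse:

```python
# Naive parsing of a bulleted "venue: description" list.
def parse_bullet(line: str) -> tuple[str, str]:
    venue, description = line.lstrip("• ").split(": ", 1)
    return venue, description

ok = parse_bullet("• Seattle Art Museum: Explore the extensive art collection.")
bad = parse_bullet("• Exhibit: The Rain Years: A retrospective on Seattle weather.")
print(ok[0])   # the venue parses correctly
print(bad[0])  # the venue name was split in the wrong place
```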

Now what if we asked the model for a much more structured format like JSON? As a matter of fact, many language models are great at generating JSON!

User: Provide 3 suggestions for specific places to go to in Seattle on a rainy day. Respond in the form of JSON. The JSON should have the following format

[
    { "venue": "...", "description": "..." },
    { "venue": "...", "description": "..." }
]

ChatGPT:

[
    {
        "venue": "Seattle Art Museum",
        "description": "The Seattle Art Museum (SAM) is a world-class museum with a diverse collection of art from around the globe. It offers a great indoor activity for a rainy day, allowing you to explore various art exhibitions and displays, including contemporary art, African and Oceanic art, and European painting and sculpture."
    },
    {
        "venue": "Chihuly Garden and Glass",
        "description": "Chihuly Garden and Glass is an extraordinary exhibition showcasing the breathtaking glass artwork of Dale Chihuly. Located at the Seattle Center, this indoor attraction offers a mesmerizing experience as you wander through vibrant glass sculptures and installations. The beautiful interplay of light and color will captivate your senses, making it a perfect rainy day destination."
    },
    {
        "venue": "Museum of Pop Culture",
        "description": "The Museum of Pop Culture (MoPOP) is a vibrant museum dedicated to contemporary pop culture, music, science fiction, and more. It features engaging exhibits that celebrate music icons, explore science fiction themes, and delve into the world of video games. Whether you're a fan of music, movies, or pop culture in general, MoPOP offers an immersive and entertaining experience, making it an ideal choice on a rainy day."
    }
]

Here we asked a model to respond directly with JSON code and provided a sort of "template" or example response to guide its answer. As you can see, it worked pretty well!

Example responses are great, but we wondered if there was another approach. In our case, we asked whether we could simply guide a model with TypeScript types.

User: Provide 3 suggestions for specific places to go to in Seattle on a rainy day. Respond strictly with JSON. The JSON should be compatible with the TypeScript type Response from the following:

interface Response {
     data: Array<{ venue: string, description: string }>;
}

ChatGPT:

{
  "data": [
    {
      "venue": "Museum of Pop Culture",
      "description": "Explore the fascinating world of popular culture through interactive exhibits, music memorabilia, and multimedia displays at the Museum of Pop Culture."
    },
    {
      "venue": "Seattle Art Museum",
      "description": "Indulge in the diverse collection of artworks from around the world at the Seattle Art Museum, featuring both contemporary and classical pieces."
    },
    {
      "venue": "Pike Place Market",
      "description": "Enjoy the vibrant atmosphere of Pike Place Market, where you can wander through covered stalls offering fresh produce, crafts, and delicious food from local vendors."
    }
  ]
}

This works fairly well! And despite being conceptually simple, it's a very powerful way to bridge the gap between natural language and something our applications can work with. But we wanted to make it as easy as possible to use this pattern, and brush away some of the sharp edges of language model responses.

This is the key idea behind TypeChat. By just giving a model a schema (some types) and a request, we can integrate natural language into an application and work through well-typed structured data.

Introducing TypeChat

TypeChat makes it easy to build natural language interfaces using types.

Simply define types that represent the intents supported in your NL application. That could be as simple as an interface for categorizing sentiment or more complex examples like types for a shopping cart or music application. For example, to add additional intents to a schema, a developer can add the intents using type composition, such as adding additional types into a discriminated union. To make schemas hierarchical, a developer can use a "meta-schema" to choose one or more sub-schemas based on user input.

After defining your types, TypeChat takes care of the rest by:

  1. Constructing a prompt to the LLM using types.
  2. Validating that the LLM response conforms to the schema. If validation fails, repairing the non-conforming output through further language model interaction.
  3. Succinctly summarizing (without use of an LLM) the instance and confirming that it aligns with user intent.

Types are all you need!

================================================ File: /site/src/docs/typescript/basic-usage.md


layout: doc-page
title: Basic TypeScript Usage

TypeChat is currently a small library, so let's take a look at some basic usage to understand it.

import fs from "fs";
import path from "path";
import { createJsonTranslator, createLanguageModel } from "typechat";
import { processRequests } from "typechat/interactive";
import { createTypeScriptJsonValidator } from "typechat/ts";
import { SentimentResponse } from "./sentimentSchema";

// Create a model.
const model = createLanguageModel(process.env);

// Create a validator.
const schema = fs.readFileSync(path.join(__dirname, "sentimentSchema.ts"), "utf8");
const validator = createTypeScriptJsonValidator<SentimentResponse>(schema, "SentimentResponse");

// Create a translator.
const translator = createJsonTranslator(model, validator);

// Process requests interactively or from the input file specified on the command line
processRequests("πŸ˜€> ", process.argv[2], async (request) => {
    const response = await translator.translate(request);
    if (!response.success) {
        console.log(response.message);
        return;
    }
    console.log(`The sentiment is ${response.data.sentiment}`);
});

Providing a Model

TypeChat can be used with any language model, as long as you can construct an object with the following properties:

export interface TypeChatLanguageModel {
    /**
     * Optional property that specifies the maximum number of retry attempts (the default is 3).
     */
    retryMaxAttempts?: number;
    /**
     * Optional property that specifies the delay before retrying in milliseconds (the default is 1000ms).
     */
    retryPauseMs?: number;
    /**
     * Obtains a completion from the language model for the given prompt.
     * @param prompt The prompt string.
     */
    complete(prompt: string): Promise<Result<string>>;
}

then you should be able to try TypeChat out with such a model.

The key thing here is that only complete is required. complete is just a function that takes a string and eventually returns a string if all goes well.

For convenience, TypeChat provides two functions out of the box to connect to the OpenAI API and Azure's OpenAI Services. You can call these directly.

export function createOpenAILanguageModel(apiKey: string, model: string, endPoint?: string): TypeChatLanguageModel;

export function createAzureOpenAILanguageModel(apiKey: string, endPoint: string): TypeChatLanguageModel;

For even more convenience, TypeChat also provides a function to infer whether you're using OpenAI or Azure OpenAI.

export function createLanguageModel(env: Record<string, string | undefined>): TypeChatLanguageModel

You can populate your environment variables, and based on whether OPENAI_API_KEY or AZURE_OPENAI_API_KEY is set, you'll get a model of the appropriate type.
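The selection logic amounts to checking which key is present. Here is a hedged Python sketch of that idea (the returned labels are stand-ins for the real client objects, and the exact precedence is an assumption for illustration):

```python
# Sketch: infer which service to use from environment variables,
# mirroring the behavior described for createLanguageModel.
def create_language_model(env: dict) -> str:
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("AZURE_OPENAI_API_KEY"):
        return "azure"
    raise ValueError("Missing OPENAI_API_KEY or AZURE_OPENAI_API_KEY")

print(create_language_model({"OPENAI_API_KEY": "sk-..."}))
print(create_language_model({"AZURE_OPENAI_API_KEY": "..."}))
```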

import dotenv from "dotenv";
dotenv.config(/*...*/);
import * as typechat from "typechat";
const model = typechat.createLanguageModel(process.env);

Regardless of how you decide to construct your model, we recommend keeping your secret tokens/API keys in a .env file, and specifying .env in a .gitignore. You can use a library like dotenv to help load these up.

Loading the Schema

TypeChat describes types to language models to help guide their responses. In this case, we are using a TypeScriptJsonValidator which uses the TypeScript compiler to validate data against a set of types. That means that we'll be writing out the types of the data we expect to get back in a .ts file. Here's what our schema file sentimentSchema.ts looks like:

// The following is a schema definition for determining the sentiment of some user input.

export interface SentimentResponse {
    sentiment: "negative" | "neutral" | "positive";  // The sentiment of the text
}

It also means we will need to manually load up an input .ts file verbatim.

// Load up the type from our schema.
import type { SentimentResponse } from "./sentimentSchema";

// Load up the schema file contents.
const schema = fs.readFileSync(path.join(__dirname, "sentimentSchema.ts"), "utf8");

Note: this code assumes a CommonJS module. If you're using ECMAScript modules, you can use import.meta.url or import.meta.dirname, depending on the version of your runtime.

This introduces some complications to certain kinds of builds, since our input files need to be treated as local assets. One way to achieve this is to use a runtime or tool like ts-node to both import the file for its types and read the file contents. Another is to use a utility like copyfiles to move specific schema files to the output directory. If you're using a bundler, there might be a custom way to import a file as a raw string as well. Regardless, our examples should work with either of the first two options.

Alternatively, if we want, we can build our schema with objects entirely in memory using Zod and a ZodValidator which we'll touch on in a moment. Here's what our schema would look like if we went down that path.

import { z } from "zod";

export const SentimentResponse = z.object({
    sentiment: z.enum(["negative", "neutral", "positive"]).describe("The sentiment of the text")
});

export const SentimentSchema = {
    SentimentResponse
};

Creating a Validator

A validator really has two jobs: generating a textual schema for language models, and making sure any data fits a given shape. The interface looks roughly like this:

/**
 * An object that represents a TypeScript schema for JSON objects.
 */
export interface TypeChatJsonValidator<T extends object> {
    /**
     * Return a string containing TypeScript source code for the validation schema.
     */
    getSchemaText(): string;
    /**
     * Return the name of the JSON object target type in the schema.
     */
    getTypeName(): string;
    /**
     * Validates the given JSON object according to the associated TypeScript schema. Returns a
     * `Success<T>` object containing the JSON object if validation was successful. Otherwise, returns
     * an `Error` object with a `message` property describing the error.
     * @param jsonObject The JSON object to validate.
     * @returns The JSON object or an error message.
     */
    validate(jsonObject: object): Result<T>;
}

In other words, this is just the text of all types, the name of the top-level type to respond with, and a validation function that returns a strongly-typed view of the input if it succeeds.
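To make that contract concrete, here is a hedged Python sketch of a validator object with the same three members; the hand-written check below mirrors the interface's shape and is not TypeChat's implementation.

```python
# Sketch: a validator = schema text + target type name + a validate function
# that returns either success-with-data or an error message.
SCHEMA_TEXT = '''
export interface SentimentResponse {
    sentiment: "negative" | "neutral" | "positive";
}
'''

def make_validator() -> dict:
    def validate(obj: object) -> dict:
        if isinstance(obj, dict) and obj.get("sentiment") in ("negative", "neutral", "positive"):
            return {"success": True, "data": obj}
        return {"success": False, "message": "expected SentimentResponse"}
    return {
        "get_schema_text": lambda: SCHEMA_TEXT,
        "get_type_name": lambda: "SentimentResponse",
        "validate": validate,
    }

v = make_validator()
print(v["validate"]({"sentiment": "positive"})["success"])  # True
print(v["validate"]({"sentiment": "great"})["success"])     # False
```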

TypeChat ships with two validators.

TypeScriptJsonValidator

A TypeScriptJsonValidator operates off of TypeScript text files. To create one, we have to import createTypeScriptJsonValidator out of typechat/ts:

import { createTypeScriptJsonValidator } from "typechat/ts";

We'll also need to actually import the type from our schema.

import { SentimentResponse } from "./sentimentSchema";

With our schema text and this type, we have enough to create a validator:

const validator = createTypeScriptJsonValidator<SentimentResponse>(schema, "SentimentResponse");

We provided the text of the schema and the name of the type we want returned data to satisfy. We also have to provide the type argument SentimentResponse to explain what data shape we expect (though note that this is a bit like a type cast and isn't guaranteed).

Zod Validators

If you chose to define your schema with Zod, you can use the createZodJsonValidator function:

import { createZodJsonValidator } from "typechat/zod";

Instead of a source file, a Zod validator needs a JavaScript object mapping from type names to Zod type objects like myObj in the following example:

export const MyType = z.object(/*...*/);

export const MyOtherType = z.object(/*...*/);

export let myObj = {
    MyType,
    MyOtherType,
}

From above, that was just SentimentSchema:

export const SentimentSchema = {
    SentimentResponse
};

So we'll need to import that object...

import { SentimentSchema } from "./sentimentSchema";

and provide it, along with our expected type name, to createZodJsonValidator:

const validator = createZodJsonValidator(SentimentSchema, "SentimentResponse");

Creating a JSON Translator

A TypeChatJsonTranslator brings these together.

import { createJsonTranslator } from "typechat";

A translator takes both a model and a validator, and provides a way to translate some user input into objects within our schema. To do so, it crafts a prompt based on the schema, reaches out to the model, parses out JSON data, and attempts validation. Optionally, it will craft repair prompts and retry if validation fails.

const translator = createJsonTranslator(model, validator);

When we are ready to translate a user request, we can call the translate method.

translator.translate("Hello world! πŸ™‚");

We'll come back to this.

Creating the Prompt

TypeChat exports a processRequests function that makes it easy to experiment with TypeChat. We need to import it from typechat/interactive.

import { processRequests } from "typechat/interactive";

It either creates an interactive command line prompt, or reads lines in from a file.

typechat.processRequests("πŸ˜€> ", process.argv[2], async (request) => {
    // ...
});

processRequests takes 3 things. First, there's the prompt prefix - this is what a user will see before their own text in interactive scenarios. You can make this playful. We like to use emoji here. πŸ˜„

Next, we take a text file name. Input strings will be read from this file on a per-line basis. If the file name was undefined, processRequests will work on standard input and provide an interactive prompt. Using process.argv[2] makes our program interactive by default unless the person running the program provided an input file as a command line argument (e.g. node ./dist/main.js inputFile.txt).
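The two modes boil down to iterating over a line source. A hedged Python sketch of the same shape (the handler and input lines are illustrative stand-ins, and this is not TypeChat's actual processRequests code):

```python
import io

# Sketch: drive a handler from an iterable of lines (a file object,
# stdin, etc.), stopping on "quit" or "exit".
def process_requests(prompt: str, source, handler) -> None:
    for line in source:
        request = line.strip()
        if request in ("quit", "exit"):
            return
        handler(request)

seen = []
process_requests("😀> ", io.StringIO("hello\nworld\nquit\nignored\n"), seen.append)
print(seen)  # lines after "quit" are never handled
```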

Finally, there's the request handler. We'll fill that in next.

Translating Requests

Our handler receives some user input (the request string) each time it's called. It's time to pass that string over to our translator object.

typechat.processRequests("πŸ˜€> ", process.argv[2], async (request) => {
    const response = await translator.translate(request);
    if (!response.success) {
        console.log(response.message);
        return;
    }
    console.log(`The sentiment is ${response.data.sentiment}`);
});

We're calling the translate method on each string and getting a response. If something goes wrong, TypeChat will retry requests up to a maximum specified by retryMaxAttempts on our model. However, if the initial request as well as all retries fail, response.success will be false and we'll be able to grab a message explaining what went wrong.

In the ideal case, response.success will be true and we'll be able to access our well-typed data property! This will correspond to the type that we passed in when we created our translator object (i.e. SentimentResponse).

That's it! You should now have a basic idea of TypeChat's APIs and how to get started with a new project. πŸŽ‰

================================================ File: /site/src/docs/examples.md


layout: doc-page
title: Examples

To see TypeChat in action, check out the examples found in /typescript/examples.

Each example shows how TypeChat handles natural language input and maps it to validated JSON output. Most example inputs run on both GPT 3.5 and GPT 4. We are working to reproduce outputs with other models. Generally, models trained on both code and natural language text have high accuracy.

We recommend reading each example in the following order.

| Name | Description |
| --- | --- |
| Sentiment | A sentiment classifier which categorizes user input as negative, neutral, or positive. This is TypeChat's "hello world!" |
| Coffee Shop | An intelligent agent for a coffee shop. This sample translates user intent into a list of coffee order items. |
| Calendar | An intelligent scheduler. This sample translates user intent into a sequence of actions to modify a calendar. |
| Restaurant | An intelligent agent for taking orders at a restaurant. Similar to the coffee shop example, but uses a more complex schema to model more complex linguistic input. The prose files illustrate the line between simpler and more advanced language models in handling compound sentences, distractions, and corrections. This example also shows how we can use TypeScript to provide a user intent summary. |
| Math | Translates calculations into simple programs given an API that can perform the 4 basic mathematical operators. This example highlights TypeChat's program generation capabilities. |
| Music | An app for playing music, creating playlists, etc. on Spotify through natural language. Each user intent is translated into a series of actions in JSON which correspond to a simple dataflow program, where each step can consume data produced by a previous step. |

Step 1: Configure development environment

Option 1: Local Machine

You can experiment with these TypeChat examples on your local machine with just Node.js.

Ensure Node.js (18.16.0 LTS) or newer is installed.

git clone https://github.com/microsoft/TypeChat
cd TypeChat
npm install

Option 2: GitHub Codespaces

GitHub Codespaces enables you to try TypeChat quickly in a development environment hosted in the cloud.

On the TypeChat repository page:

  1. Click the green button labeled <> Code
  2. Select the Codespaces tab.
  3. Click the green Create codespace button.

If this is your first time creating a codespace on this repository, GitHub will take a moment to create a dev container image for your session. Once the image has been created, the browser will load Visual Studio Code in a developer environment automatically configured with the necessary prerequisites, TypeChat cloned, and packages installed.

Remember that you are running in the cloud, so all changes you make to the source tree must be committed and pushed before destroying the codespace. GitHub accounts are usually configured to automatically delete codespaces that have been inactive for 30 days.

For more information, see the GitHub Codespaces Overview

Step 2: Build TypeChat Examples

Build TypeChat and the examples by running the following command in the repository root:

npm run build-all

Step 3: Configure environment variables

Currently, the examples run against OpenAI or Azure OpenAI endpoints. To use an OpenAI endpoint, include the following environment variables:

| Variable | Value |
| --- | --- |
| OPENAI_MODEL | The OpenAI model name (e.g. gpt-3.5-turbo or gpt-4) |
| OPENAI_API_KEY | Your OpenAI API key |

To use an Azure OpenAI endpoint, include the following environment variables:

| Variable | Value |
| --- | --- |
| AZURE_OPENAI_ENDPOINT | The full URL of the Azure OpenAI REST API (e.g. https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15) |
| AZURE_OPENAI_API_KEY | Your Azure OpenAI API key |

We recommend setting environment variables by creating a .env file in the root directory of the project that looks like the following:

# For OpenAI
OPENAI_MODEL=...
OPENAI_API_KEY=...

# For Azure OpenAI
AZURE_OPENAI_ENDPOINT=...
AZURE_OPENAI_API_KEY=...

Step 4: Run the examples

Examples can be found in the typescript/examples directory.

To run an example interactively, run node ./dist/main.js from the example's directory and enter requests when prompted. Type quit or exit to end the session. You can also open the selected example's directory in VS Code and press F5 to launch it in debug mode.

Note that there are various sample "prose" files (e.g. input.txt) provided in each src directory that can give a sense of what you can run.

To run an example with one of these input files, run node ./dist/main.js <input-file-path>. For example, in the coffeeShop directory, you can run:

node ./dist/main.js ./dist/input.txt

================================================ File: /site/src/docs/faq.md


layout: doc-page
title: Frequently Asked Questions (FAQ)

What is TypeChat?

TypeChat makes it easy to build natural language interfaces using types. These types represent your application's domain, such as an interface for representing user sentiment or types for actions a user could take in a music app.

After defining your types, TypeChat takes care of the rest by:

  1. Constructing a prompt to the LLM using types.
  2. Validating that the LLM response conforms to the schema. If validation fails, repairing the non-conforming output through further language model interaction.
  3. Succinctly summarizing (without use of an LLM) the instance and confirming that it aligns with user intent.

Types are all you need!

Why is TypeChat useful?

If you want to add a natural language interface to an app (for example, imagine a coffee-ordering app that lets you speak your order), then you eventually need to translate a request into something precise and concrete that your app can process for tasks like billing, ordering, etc.

TypeChat lets you lean on large language models to do this work without having to worry about parsing their responses or dealing with "imaginary" items and tasks. This is because everything must be structured JSON that is validated against your types.

What are the benefits of using TypeChat?

TypeChat was created with the purpose of increasing safety in natural language interfaces.

We believe TypeChat has three key primary benefits when working with large language models:

  1. Accurate: Large language models do a great job matching user intent to scoped types. TypeChat's validation and repair cleans up the rest!
  2. Approachable: No more prompt engineering! Types are all you need. You probably have them already lying around.
  3. Safe: Types constrain domain and model uncertainty. Repeating back the instance confirms that it aligns with user intent before taking action.

How does TypeChat work? How does TypeChat relate to TypeScript?

TypeChat uses TypeScript types as the "specification language" for responses from language models. The request sent to the model is minimal: it includes the user's input, your types, and text asking the model to translate the user input into a JSON object that aligns with the TypeScript types.

Once it receives an AI response, TypeChat uses the TypeScript compiler API under the hood to validate the data against the types you provided. If validation fails, TypeChat sends a repair prompt back to the model that includes diagnostics from the TypeScript compiler. That's how TypeChat can guarantee that your response is correctly typed.

How reliable is TypeChat?

TypeChat is very reliable. Large language models have proven they do well when constrained with unambiguous, formal descriptions of possible outputs, and they perform better the more training they have received. TypeScript is the type system for JavaScript, the world's most popular programming language, and JSON is its native interchange format. As a result, models have extreme familiarity with both, increasing accuracy. TypeChat purposely keeps prompts compact; TypeScript can be as much as 5x more concise than an equivalent JSON Schema. Most of the time, the model responds well to the prompt from TypeChat and sends back a valid instance. TypeChat adds validation and, if that fails, self-repairing logic to obtain a valid response from the model using diagnostics from the TypeScript compiler. Finally, TypeChat keeps the user in the loop for final confirmation of intent, serving as a final safety mechanism.
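As a rough, hand-written illustration of that conciseness gap (not an excerpt from the docs):

```typescript
// The TypeScript form: one short type with a doc comment.
type SentimentResponse = {
    /** The sentiment of the text. */
    sentiment: "negative" | "neutral" | "positive";
};

const example: SentimentResponse = { sentiment: "positive" };

// A JSON Schema description of the same shape is considerably more verbose.
const sentimentJsonSchema = {
    type: "object",
    properties: {
        sentiment: {
            description: "The sentiment of the text.",
            enum: ["negative", "neutral", "positive"],
        },
    },
    required: ["sentiment"],
    additionalProperties: false,
};
```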

What languages does TypeChat support?

Currently TypeChat is being developed just for TypeScript and JavaScript. Developers interested in support for additional languages can engage in discussion on TypeChat's repo in GitHub Discussions.

================================================ File: /site/src/docs/techniques.md


layout: doc-page
title: Techniques

This document describes techniques for working with TypeChat.

Schema Engineering

TypeChat replaces prompt engineering with schema engineering: Instead of writing unstructured natural language prompts to describe the format of your desired output, you write TypeScript type definitions. These TypeScript schema aren't necessarily the exact types your application uses to process and store your data. Rather, they're types that bridge between natural language and your application logic by controlling and constraining LLM responses in ways that are meaningful to your application.

To use an analogy, in the Model-View-ViewModel (MVVM) user interface design pattern, the ViewModel bridges between the user interface and the application logic, but it isn't the model the application uses to process and store information. The schemas you design for TypeChat are like the ViewModel, but are perhaps more meaningfully called Response Models.

To maximize success with TypeChat, we recommend the following best practices when defining Response Model types:

  • Keep it simple (primitives, arrays, and objects).
  • Only use types that are representable as JSON (i.e. no classes).
  • Make data structures as flat and regular as possible.
  • Include comments on types and properties that describe intent in natural language.
  • Restrict use of generics.
  • Avoid deep inheritance hierarchies.
  • Don't use conditional, mapped, and indexed access types.
  • Allow room for LLMs to color slightly outside the lines (e.g. use string instead of literal types).
  • Include an escape hatch to suppress hallucinations.

The last point merits further elaboration. We've found that when Response Models attempt to fit user requests into narrow schema with no wiggle room, the LLMs are likely to hallucinate answers for user requests that are outside the domain. For example, if you ask your coffee shop bot for "two tall trees", given no other option it may well turn that into two tall lattes (without letting you know it did so).

However, when you include an escape hatch in the form of an "unknown" category in your schema, the LLMs happily route non-domain requests into that bucket. Not only does this greatly suppress hallucinations, it also gives you a convenient way of letting the user know which parts of a request weren't understood. The examples in the TypeChat repo all use this technique.
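A minimal sketch of the escape-hatch pattern, using a hypothetical coffee-order schema (the repo's actual example schemas differ in their details):

```typescript
// Domain items the bot understands.
interface LineItem {
    type: "lineitem";
    name: string;
    quantity: number;
}

// The escape hatch: anything the model couldn't map to the domain lands here verbatim.
interface UnknownText {
    type: "unknown";
    text: string;
}

type OrderItem = LineItem | UnknownText;

interface Order {
    items: OrderItem[];
}

// After validation, the app can report exactly which parts weren't understood.
function unknownParts(order: Order): string[] {
    return order.items
        .filter((item): item is UnknownText => item.type === "unknown")
        .map(item => item.text);
}
```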

================================================ File: /site/src/blog/introducing-typechat.md


title: Introducing TypeChat
layout: blog
tags: post
date: 2023-07-20
authors: ["Anders Hejlsberg", "Steve Lucco", "Daniel Rosenwasser", "Pierce Boggan", "Umesh Madan", "Mike Hopcroft", "Gayathri Chandrasekaran"]

{{title}}

{{date | formatDate}}{% if authors %} by {{authors | formatList}}{% endif %}

In the last few months, we've seen a rush of excitement around the newest wave of large language models. While chat assistants have been the most direct application, there's a big question around how to best integrate these models into existing app interfaces.

In other words, how do we augment traditional UI with natural language interfaces? How do we use AI to take a user request and turn it into something our apps can operate on? And how do we make sure our apps are safe, and doing work that developers and users alike can trust?

Today we're releasing TypeChat, an experimental library that aims to answer these questions. It uses the type definitions in your codebase to retrieve structured AI responses that are type-safe.

You can get up and running with TypeChat today by running

npm install typechat

and hooking it up with any language model to work with your app.

But let's first quickly explore why TypeChat exists.

Pampering and Parsing

The current wave of LLMs defaults to conversational natural language, the languages humans communicate in, like English. Parsing natural language is an extremely difficult task, no matter how much you pamper a prompt with rules like "respond in the form of a bulleted list". Natural language might have structure, but it's hard for typical software to reconstruct it from raw text.

Surprisingly, we can ask LLMs to respond in the form of JSON, and they generally respond with something sensible!

User:

Translate the following request into JSON.

Could I get a blueberry muffin and a grande latte?

Respond only in JSON like the following:

{
    "items": [
        { "name": "croissant", "quantity": 2  },
        { "name": "latte", "quantity": 1, "size": "tall" }
    ]
}

ChatBot:

{
    "items": [
        {
            "name": "blueberry muffin",
            "quantity": 1
        },
        {
            "name": "latte",
            "quantity": 1,
            "size": "grande"
        }
    ]
}

This is good, though this example shows the best-case response. While examples can help guide structure, they don't exhaustively define what an AI should return, and they don't provide anything we can validate against.

Just Add Types!

Luckily, types do precisely that. What we've found is that because LLMs have seen so many type definitions in the wild, types also act as a great guide for how an AI should respond. Because we're typically working with JSON (JavaScript Object Notation), and because it is very near and dear to our hearts, we've been using TypeScript types in our prompts.

User:

Translate the following request into JSON.

Could I get a blueberry muffin and a grande latte?

Respond only in JSON that satisfies the Response type:

type Response = {
    items: Item[];
};

type Item = {
    name: string;
    quantity: number;
    size?: string;
    notes?: string;
}

ChatBot:

{
  "items": [
    {
      "name": "blueberry muffin",
      "quantity": 1
    },
    {
      "name": "latte",
      "quantity": 1,
      "size": "grande"
    }
  ]
}

This is pretty great! TypeScript has shown that it's well-suited to precisely describe JSON. But what happens when a language model stumbles and makes up a response that doesn't conform to our types?

Well, because these types are valid TypeScript code, we can validate the response against them using the TypeScript compiler itself! In fact, the error feedback from the compiler can even be used to guide repairs. Put together, we get a robust process for obtaining well-typed responses that our apps can further massage, validate with a user, etc.

In other words, types are all you need.

Enter TypeChat

The technique of combining a human prompt and a "response schema" is not necessarily unique, but it is promising. And as we've focused on translating user intent to structured data, we've found that TypeScript is very well-suited for the task. We've grown more confident with this approach, and in order to prove it out, we're releasing a library called TypeChat to help make it easier to use in your apps. TypeChat is already on npm if you want to try it now, and provides tools for prompt prototyping, schema validation, repair, and more.

Here's the basic code to hook TypeChat up to an LLM and decide if a sentence is negative, neutral, or positive.

// ./src/sentimentSchema.ts

// The following is a schema definition for determining the sentiment of some user input.

export interface SentimentResponse {
    /** The sentiment of the text. */
    sentiment: "negative" | "neutral" | "positive";
}
// ./src/main.ts

import * as fs from "fs";
import * as path from "path";
import dotenv from "dotenv";
import * as typechat from "typechat";
import { SentimentResponse } from "./sentimentSchema";

// Load environment variables.
dotenv.config({ path: path.join(__dirname, "../.env") });

// Create a language model based on the environment variables.
const model = typechat.createLanguageModel(process.env);

// Load up the contents of our "Response" schema.
const schema = fs.readFileSync(path.join(__dirname, "sentimentSchema.ts"), "utf8");
const translator = typechat.createJsonTranslator<SentimentResponse>(model, schema, "SentimentResponse");

// Process requests interactively.
typechat.processRequests("πŸ˜€> ", /*inputFile*/ undefined, async (request) => {
    const response = await translator.translate(request);
    if (!response.success) {
        console.log(response.message);
        return;
    }
    console.log(`The sentiment is ${response.data.sentiment}`);
});

TypeChat can be used in a number of different ways. The way we've discussed here so far is all about using a "data schema" to turn some user intent into a structured response; however, TypeChat also makes it possible to use an "API schema" to construct basic programs. We have some docs and examples to get a sense of the different ways you can use TypeChat.

Open and Pluggable

First of all, TypeChat is open-source. We're MIT-licensed and you can find us on GitHub where we're eager to hear your thoughts, share our ideas, and build with you.

Second, TypeChat is built in a way that is meant to be model-neutral. While we have some very basic integration with the OpenAI API and the Azure OpenAI service for convenience, this approach should work for any chat completion-style API that you want to use. Note, though, that at the moment TypeChat works best with models that have been trained on both prose and code.

Try It Today!

We'd love to know if TypeChat is something that's useful and interests you! As we mentioned, we'll be welcoming you on GitHub if you have any questions, suggestions, and more.

Happy Hacking!

================================================ File: /site/src/blog/announcing-typechat-0-1-0.md


title: Announcing TypeChat 0.1.0
layout: blog
tags: post
date: 2024-03-25
authors: ["Daniel Rosenwasser"]

{{title}}

{{date | formatDate}}{% if authors %} by {{authors | formatList}}{% endif %}

Today we've released a new version of TypeChat for TypeScript and JavaScript. To get it, you can run

npm install typechat

As a refresher, TypeChat is an experimental library for getting structured output (like JSON) from AI language models. The way it works is by using types in your programs to guide language models, and then using those same types to ensure that the responses match up with your types. When they don't, TypeChat can use validation errors to guide language models to repair their responses. You can read our original announcement blog post for more details, but we should be able to catch you up to speed here too.

Here's a few things that are new to TypeChat for TypeScript.

Pluggable Validators

The original version of TypeChat actually leveraged the raw contents of a TypeScript schema file. It looked something like this:

// Load up the contents of our "Response" schema.
const schema = fs.readFileSync(path.join(__dirname, "sentimentSchema.ts"), "utf8");
const translator = typechat.createJsonTranslator<SomeType>(model, schema, "SomeType");

// Process requests interactively.
typechat.processRequests("> ", /*inputFile*/ undefined, async (request) => {
    const response = await translator.translate(request);

    if (!response.success) {
        console.log(`❌ ${response.message}`);
        return;
    }

    console.log("The request was translated into the following value:")
    console.log(response.data);
});

This worked, but had a few issues:

  1. The schema file had to be self-contained. Everything had to be in the same file for TypeChat.

  2. The schema file also had to be present if you weren't running in-place.

    This often meant copying the schema file along to the output directory if you weren't using something like ts-node, tsx, or tsimp.

  3. The schema was fixed. While possible to generate a text schema on the fly, it's an error-prone task.

While there are a lot of ergonomic benefits to using a textual TypeScript schema, we explored whether we could add a bit more flexibility, and made a few changes to TypeChat.

The first is that we've broken out a piece of TypeChatJsonTranslator into a more granular concept: a TypeChatJsonValidator. A TypeChatJsonValidator is responsible for generating a string schema representation to guide language models, and for making sure the data that comes back matches some type. This means that to construct a TypeChatJsonTranslator, you need to make a TypeChatJsonValidator first; but it also means that validators are swappable. Here's what using that looks like now:

import fs from "fs";
import path from "path";

import { createLanguageModel, createJsonTranslator } from "typechat";
import { createTypeScriptJsonValidator } from "typechat/ts";

import { SentimentResponse } from "./sentimentSchema";

const model = createLanguageModel(process.env);
const schema = fs.readFileSync(path.join(__dirname, "sentimentSchema.ts"), "utf8");
const validator = createTypeScriptJsonValidator<SentimentResponse>(schema, "SentimentResponse");
const translator = createJsonTranslator(model, validator);

translator.translate("hello world!").then(response => {
    if (!response.success) {
        console.log(response.message);
        return;
    }
    console.log(`The sentiment is ${response.data.sentiment}`);
});

Notice that instead of passing the schema into createJsonTranslator, we're passing it into createTypeScriptJsonValidator, which we need to import from typechat/ts. The created validator then needs to be passed into createJsonTranslator.
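The contract a validator fulfills can be pictured as follows. The interface and names here are illustrative approximations, not the library's actual typings:

```typescript
type Result<T> = { success: true; data: T } | { success: false; message: string };

// Sketch of the validator contract: produce schema text for the prompt,
// and check that a JSON response matches the expected type.
interface JsonValidatorSketch<T> {
    getSchemaText(): string;
    validate(jsonText: string): Result<T>;
}

// A toy validator for the sentiment shape, standing in for what
// createTypeScriptJsonValidator (or createZodJsonValidator) produces.
const sentimentValidator: JsonValidatorSketch<{ sentiment: string }> = {
    getSchemaText: () =>
        `interface SentimentResponse { sentiment: "negative" | "neutral" | "positive"; }`,
    validate: jsonText => {
        try {
            const data = JSON.parse(jsonText);
            if (["negative", "neutral", "positive"].indexOf(data.sentiment) !== -1) {
                return { success: true, data };
            }
            return { success: false, message: "sentiment is not one of the allowed values" };
        } catch {
            return { success: false, message: "response was not valid JSON" };
        }
    },
};
```

Swapping validators means the translator only needs this contract, not any particular schema technology.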

For existing calls to createJsonTranslator, you'll probably see a message like:

TS2554: Expected 2 arguments, but got 3.

You'll need to drop the name of the type and substitute the schema argument with a validator. Here's the effective diff:

  import { createJsonTranslator, createLanguageModel, processRequests } from "typechat";
+ import { createTypeScriptJsonValidator } from "typechat/ts";
  import { SentimentResponse } from "./sentimentSchema";

  // ...

- const translator = createJsonTranslator<SentimentResponse>(model, schema, "Sentiment")
+ const validator = createTypeScriptJsonValidator<SentimentResponse>(schema, "SentimentResponse");
+ const translator = createJsonTranslator(model, validator);

  // ...

Zod Validators

The second change builds on pluggable validators: TypeChat makes it possible to create validators from Zod schemas. If you're not familiar with Zod, it's a popular library in the TypeScript/JavaScript ecosystem for validating data. One strength of this library is that as Zod type validator objects are constructed, static types can be derived from them. But for TypeChat, its more notable strength is the ability to construct schemas dynamically.

To use a Zod-based schema, we first need to create a few Zod type validator objects and create an object defining all the ones we intend to use.

// sentimentSchema.ts

import { z } from "zod";

export const SentimentResponse = z.object({
    sentiment: z.enum(["negative", "neutral", "positive"])
        .describe("The sentiment of the text")
});

// Maps the property "SentimentResponse" to the above Zod validator.
export const SentimentSchema = {
    SentimentResponse
};

Note that while TypeScript schema files can use raw JavaScript/TypeScript // comment syntax, for Zod schemas TypeChat generates comments from whatever we pass to the .describe() calls.

Next, we have to construct a TypeChat Zod validator. We pass in the object map of types, and specify which type we want the model to conform to:

// main.ts

import { createJsonTranslator, createLanguageModel } from "typechat";
import { createZodJsonValidator } from "typechat/zod";

import { SentimentSchema } from "./sentimentSchema";

const model = createLanguageModel(process.env);
const validator = createZodJsonValidator(SentimentSchema, "SentimentResponse");
const translator = createJsonTranslator(model, validator);

translator.translate("hello world!").then(response => {
    if (!response.success) {
        console.log(response.message);
        return;
    }
    console.log(`The sentiment is ${response.data.sentiment}`);
});

That's it!

While using a Zod schema has lots of advantages, you may still prefer the ergonomics of writing a plain TypeScript schema. Either option works.

For more information, see the changes on GitHub.

A validateInstance Hook

Another new addition to TypeChat is the validateInstance hook on TypeChatJsonTranslators. It allows you to tack on an extra level of validation beyond what the internal validator will perform.

import { createJsonTranslator, error, success } from "typechat";

// ...

const translator = createJsonTranslator(model, validator);
translator.validateInstance = summary => {
    for (const person of summary.people) {
        if (person.age < 0) {
            return error(
                `'${person.name}' has a negative age, that doesn't make sense.`
            );
        }
    }
    return success(summary);
};

If validateInstance returns a TypeChat Error, then the translator will use the message to repair the AI response.

You can see specifics of this change on GitHub.

Other Changes

Other changes to be aware of are:

  • TypeChatJsonProgram and related functions, such as createModuleTextFromProgram, evaluateJsonProgram, and createProgramTranslator all live in typechat/ts (see PR).
  • The processRequests function for creating a REPL-like prompt now lives in typechat/interactive (see PR).

What's Next?

We'll be trying to improve TypeChat based on the feedback we receive. We're also working to bring TypeChat to other language ecosystems, like Python and .NET, so keep an eye out for that in the near future.

Give TypeChat a try and let us know what you think over on GitHub, where you can file an issue or post a topic in our discussion forum!

================================================ File: /CODE_OF_CONDUCT.md

Microsoft Open Source Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct.

Resources:

================================================ File: /SECURITY.md

Security

Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include Microsoft, Azure, DotNet, AspNet, Xamarin, and our GitHub organizations.

If you believe you have found a security vulnerability in any Microsoft-owned repository that meets Microsoft's definition of a security vulnerability, please report it to us as described below.

Reporting Security Issues

Please do not report security vulnerabilities through public GitHub issues.

Instead, please report them to the Microsoft Security Response Center (MSRC) at https://msrc.microsoft.com/create-report.

If you prefer to submit without logging in, send email to [email protected]. If possible, encrypt your message with our PGP key; please download it from the Microsoft Security Response Center PGP Key page.

You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at microsoft.com/msrc.

Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:

  • Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
  • Full paths of source file(s) related to the manifestation of the issue
  • The location of the affected source code (tag/branch/commit or direct URL)
  • Any special configuration required to reproduce the issue
  • Step-by-step instructions to reproduce the issue
  • Proof-of-concept or exploit code (if possible)
  • Impact of the issue, including how an attacker might exploit the issue

This information will help us triage your report more quickly.

If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our Microsoft Bug Bounty Program page for more details about our active programs.

Preferred Languages

We prefer all communications to be in English.

Policy

Microsoft follows the principle of Coordinated Vulnerability Disclosure.
