LLM Context

LLM Context is built on the concept of embeddings, a way of referencing your own custom data from a prompt.

The idea is to store your data in a Vector Database and then attach a search query to your LLM prompt so that it fetches only the data relevant to the request being processed. This comes in very handy when you have a large amount of text data that cannot fit inside a single prompt template, so a Vector Database search has to be performed for each prompt request.

Each TreeScale Application has its own Vector Database, which we host and manage. Depending on your use case, you can create as many Contexts within it as you want.

Creating a New Context

Create a Context whenever you have text data you want to vectorize. The ideal workflow is to upload the file first, then vectorize the file data under the context name you want to create.
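The file upload step itself is not covered on this page. As a rough sketch, assuming a hypothetical /api/files endpoint that accepts a multipart upload and responds with a file ID, it might look like the request below; the endpoint, form field, and file name are assumptions, not part of the documented API.

Upload a file for vectorization (hypothetical endpoint)

# Hypothetical endpoint and form field; check the Files API docs for the actual upload route.
curl -X POST https://tree.tsapp.dev/api/files \
  --header 'Authorization: Bearer pk_b75cb4fcf8a7258e45f35f5a639e869444c5a2b19b582e418f33a8c0ee1b372c' \
  --form 'file=@./my-data.txt'

The file ID returned by the upload is what you pass in the "files" array when creating the Context, as shown next.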

Create a Vectorized Context from a file

curl -X POST https://tree.tsapp.dev/api/context \
  --header 'Authorization: Bearer pk_b75cb4fcf8a7258e45f35f5a639e869444c5a2b19b582e418f33a8c0ee1b372c' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "test-context",
    "files": ["f4e2aee8-55e3-42ef-989e-a6486dbb8da5"]
}'

It is also possible to vectorize text directly if you don't want to upload and keep the file separately. However, depending on the text size, performance is usually much better when you pass a file ID rather than bulk text in the request payload.

Create a Vectorized Context from text

curl -X POST https://tree.tsapp.dev/api/context \
  --header 'Authorization: Bearer pk_b75cb4fcf8a7258e45f35f5a639e869444c5a2b19b582e418f33a8c0ee1b372c' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "test-context",
    "text": "some long text here that needs to be vectorized....."
}'

Using a Context as a prompt variable

Any prompt variable can be resolved from a Context by providing the search query whose results you want to retrieve from the Vector Database (text embeddings).

For example, if you are building a YouTube script generator TreeScale Application, you will probably want a Context containing previously written YouTube video scripts, so that whenever you ask an AI model to write a new script, it can run a search query on that knowledge base and pull in the information relevant to the requested topic.
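For illustration, such a Context can be created with the same /api/context endpoint shown above; the file ID below is a placeholder for your previously uploaded script files.

Create the Context of existing scripts (placeholder file ID)

curl -X POST https://tree.tsapp.dev/api/context \
  --header 'Authorization: Bearer pk_b75cb4fcf8a7258e45f35f5a639e869444c5a2b19b582e418f33a8c0ee1b372c' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "test-context",
    "files": ["<file-id-of-your-uploaded-scripts>"]
}'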

Prompt Template for the YouTube Script Generator App

Based on the given context below, write a YouTube script for the topic {topic}

Context:
{scripts_for_the_topic}

API request for the YouTube script generator prompt using context data

curl -X POST 'https://tree.tsapp.dev/youtube-script-generator' \
  --header 'Authorization: Bearer pk_b75cb4fcf8a7258e45f35f5a639e869444c5a2b19b582e418f33a8c0ee1b372c' \
  --header 'Content-Type: application/json' \
  --data '{
    "variables": {
        "scripts_for_the_topic": {
            "context": {
                "search": "how to write prompts",
                "name": "test-context"
            }
        },
        "topic": "Prompt Engineering"
    }
}'