TreeScale provides cutting-edge infrastructure to scale and deploy AI Models, alongside existing service providers such as OpenAI wrapped in a generic API. Our platform lets you deploy dedicated API Endpoints with specific parameters and prompt templates that scale as you grow.

In this quickstart tutorial, we will cover the basics of creating a TreeScale App from our user interface and learn how to call TreeScale App APIs from a basic Node.js application.


Creating a TreeScale App

Create a new TreeScale App from our application dashboard. The critical thing to remember is that we assign a unique subdomain to each application on our platform, which means the name of your application must be unique.

  1. Go to https://app.treescale.com/apps/create
  2. Pick a unique name that suits your application logic, for example quick-app.
  3. The TreeScale Platform will assign a unique domain name in the quick-app.tsapp.dev format. This is the root of your AI Model execution API Endpoints, which you can use as the request hostname in your Node.js, Python, or Golang products.

Choosing AI Service Provider and Model

Next, to configure your application, navigate to the Console tab of the App page. You can choose any supported AI Service provider and model, including the open-source models we self-host and maintain.

To start, it is recommended to pick something reliable, like OpenAI and gpt-3.5-turbo. You can change this at any time; changing it does not affect the Prompt template configuration described below.

Over time, we will add more models to our list of supported AI Service providers. The critical thing to remember is that each model is usually pre-trained to perform a specific task better than the others. It is also possible to bring your own data to train models or create AI Model Embeddings.

App Endpoints

Endpoints are specific API Routes that you can define to create a prompt template or chain multiple prompts together. We introduced the concept of “Endpoint → Prompt Template” to guide our customers toward an intuitive API design around AI Model execution and to scope out different use cases within the same application.

Let’s assume we want to create an Endpoint that receives a country’s name and returns the number of people living there. We will define our API Endpoint route as /country-population.


Prompt Templates

Each App Endpoint can have a single prompt template, or a chain of prompt templates, defined with their variables. We automatically extract variables from the given prompt template text using the {variable} string formatting convention.

Based on our Country population Endpoint’s logic, we can define the following prompt template to get the number of people living in a given country.

Respond with the number of people living in the given country: {country_name}

You will notice that we now have a country_name variable, defined as a request body parameter of type string.
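As an illustration of the {variable} convention, here is a minimal sketch of how such variables could be extracted from a template. TreeScale does this automatically on the platform side; this is not platform code, just an example of the pattern:

```javascript
// Extract {variable} placeholders from a prompt template string.
// Illustration only: TreeScale performs this extraction server-side.
function extractVariables(template) {
  const matches = template.match(/\{([A-Za-z_][A-Za-z0-9_]*)\}/g) || [];
  // Strip the surrounding braces from each match.
  return matches.map((m) => m.slice(1, -1));
}

const template =
  "Respond with the number of people living in the given country: {country_name}";
console.log(extractVariables(template)); // → [ 'country_name' ]
```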

An important part of defining a prompt template is also writing a correcting message, which forces the AI Model to respond with only the specific details we need, without extra information. In our case, the correcting message is going to be

Respond with the number of people living in the given country: {country_name}
Return only the number of people living in the given country above, without any other information, just a number.

This second sentence is the “correcting message”, which essentially ensures that we get back only a number as a result.
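To make the flow concrete, here is a sketch of how the template and its correcting message might be rendered with a variable value before being sent to a model. TreeScale does this server-side; the function name here is hypothetical:

```javascript
// Substitute {variable} placeholders with concrete values.
// Illustration only: TreeScale renders templates on the platform side.
function renderTemplate(template, params) {
  return template.replace(/\{([A-Za-z_][A-Za-z0-9_]*)\}/g, (_, name) => params[name]);
}

const prompt = [
  "Respond with the number of people living in the given country: {country_name}",
  "Return only the number of people living in the given country above, without any other information, just a number.",
].join("\n");

console.log(renderTemplate(prompt, { country_name: "USA" }));
```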

Publishing Release

Every time you publish an app, we make a new release of your application and keep the old configuration, similar to version control.

After publishing a new release, you will have a REST API with the following details:

POST https://quick-app.tsapp.dev/country-population
Content-Type: application/json
Authorization: Bearer <app-api-key>

{
  "params": { "country_name": "USA" }
}

The result of this REST API call would be something like:

{ "result": "331000000" }

Integrating with your product

The assumption is that you have at least some backend logic, because TreeScale Apps are currently executed with a Secret Key, which should not be exposed in the UI or shared publicly.

Integrating with Node.js, Python, Golang, Java, C#, or pretty much any other backend programming language is as easy as making simple HTTP requests. Our platform handles all the parameter validation and checks that the given variable types, values, and names are correct, so you can offload that backend logic to us.
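As a starting point, here is a minimal Node.js sketch of calling the Endpoint published above. It assumes Node 18+ (which ships a global fetch API), and the environment variable name TREESCALE_API_KEY is a hypothetical choice for storing your Secret Key:

```javascript
// Minimal sketch: call the /country-population Endpoint from Node.js.
// Assumes Node 18+ (global fetch). TREESCALE_API_KEY is a hypothetical
// environment variable holding your app's Secret Key.
async function getCountryPopulation(countryName) {
  const response = await fetch("https://quick-app.tsapp.dev/country-population", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.TREESCALE_API_KEY}`,
    },
    // The request body shape matches the REST example above.
    body: JSON.stringify({ params: { country_name: countryName } }),
  });
  if (!response.ok) {
    throw new Error(`TreeScale request failed with status ${response.status}`);
  }
  const { result } = await response.json();
  return result;
}
```

Usage would then be a single call, e.g. `await getCountryPopulation("USA")`, with the Secret Key kept in your backend environment rather than shipped to any client.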