Use a registered LLM in an Ontology TypeScript or Python function

I’ve registered an LLM using the function interface as described here:

How can I use it in my Ontology TypeScript or Python function, the same way I would use the Palantir-provided models?

Hey,

Below is an example of a working TypeScript function (on notional data) that uses one of the LMS models provided by Palantir (in this case GPT-4o from OpenAI). Most, if not all, of the LMS models expose a createChatCompletion() function that is called much like in the code example below, and I would expect an LLM you register to expose a similar function too.

To use a model in a TypeScript repository, add the desired model from the left-hand resource imports panel, then import it directly in your src/index.ts file like so:

import { GPT_4o } from "@foundry/models-api/language-models";
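For a model you’ve registered yourself, I’d expect the import to look analogous, with the model exposed under the API name it was given at registration. The name and path below are assumptions for illustration only; the imports panel in your repository shows the exact statement to copy for your model:

// Hypothetical import for a self-registered model -- substitute the
// API name shown in your repository's imports panel.
import { MyRegisteredLlm } from "@foundry/models-api/language-models";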

Then you can write vanilla TypeScript (or Python, in a Python repository) to interact with the model and return a response. It’s worth noting that createChatCompletion() returns a JSON-like object whose “choices” attribute is an array; the actual string response generated by the model is nested within the first element of that array.
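For reference, here is a minimal sketch of pulling the text out of that object, matching the response shape used in the example below:

// `choices` is an array; each element carries a `message` whose
// `content` holds the generated text (and may be undefined).
const reply = gptResponse.choices[0].message.content ?? "";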

import { Function } from "@foundry/functions-api";
import { GPT_4o } from "@foundry/models-api/language-models";

export class WatchOrderFunctions {
    @Function()
    public async generateWatchOrderSummaryAi(orderId: string): Promise<string> {
        // Placeholder values -- in practice these would be read off the
        // Ontology object looked up via orderId.
        const orderStatus = "IN_PROGRESS";
        const orderDescription = "Replace battery and polish case";
        const assignee: { email?: string; username?: string } = { email: "jane.doe@example.com" };
        const daysElapsed = 12;

        // Prepare argument values for `createChatCompletion()`
        const systemPrompt = "You are a helpful assistant that generates concise summaries. "
            + "Generate a professional summary based on the details of the provided "
            + "watch service order in no more than 3-4 sentences.";
        const systemMessage = { role: "SYSTEM", contents: [{ text: systemPrompt }] };

        const text = `Status: ${orderStatus}, Description: ${orderDescription}, `
            + `Order Assignee: ${assignee.email ?? assignee.username}, Days Elapsed: ${daysElapsed}`;
        const userMessage = { role: "USER", contents: [{ text }] };

        const gptResponse = await GPT_4o.createChatCompletion({
            messages: [systemMessage, userMessage],
            params: { temperature: 0.7 },
        });

        return gptResponse.choices[0].message.content ?? "Unable to generate summary.";
    }
}
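Assuming your registered model implements the same chat-completion function interface, the call site should only differ in the imported name, along these lines (MyRegisteredLlm is the hypothetical placeholder from above, and the exact request/response shape depends on the interface you registered it with):

const response = await MyRegisteredLlm.createChatCompletion({
    messages: [systemMessage, userMessage],
    params: { temperature: 0.7 },
});
return response.choices[0].message.content ?? "Unable to generate summary.";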

Hope this helps!