Using LLM Models in Functions

Hi All,

I was trying to do some summarisation in Functions, and I am wondering why I can bring in LLM models such as GPT-5 in TypeScript v1 (via importing models), but I can't find a way to do it in TypeScript v2 or Python.

Is it possible to do this?

Also, I understand there are workarounds, such as creating an AIP Logic function, which might make more sense. I am just curious why it isn't possible directly.

Kind Regards
Sam


For TSv2, import the @osdk/foundry.functions library from the library sidebar and pass the API name of the model asset's function to Queries.execute. Here is an example:

import { PlatformClient } from "@osdk/client";
import { Queries } from "@osdk/foundry.functions";

async function helloWorld(client: PlatformClient, prompt: string): Promise<string> {
  const response = await Queries.execute(
    client,
    "com.foundry.languagemodelservice.models.gpt5.CreateChatCompletion",
    {
      parameters: {
        messages: [
          {
            role: "USER",
            content: prompt,
          },
        ],
      },
    },
    {
      preview: true,
    },
  );

  return JSON.stringify(response);
}

// To test this function in Live Preview and later publish to the platform,
// make it the default export of the file.
export default helloWorld;

Thank you, this now works!


Hi @bbd5be9832271c105aa7, wanted to update that we're also releasing endpoints that allow using open-source SDKs against Foundry in TSv2 / Python Functions. Initial docs with examples [1] are out for OpenAI and Anthropic; in the future we're planning to support Gemini and xAI (as well as more Functions-specific docs). Feel free to reach out in this thread if you encounter any issues!

[1] https://www.palantir.com/docs/foundry/api/v2/llm-apis/models/openai-responses-proxy
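To make the open-source SDK route concrete, here is a minimal sketch using the OpenAI Node SDK pointed at a Foundry proxy endpoint. The base URL path, hostname placeholder, and model API name below are assumptions for illustration only — check the linked docs for the exact values on your enrollment; the authentication token is a Foundry token, not an OpenAI key.

```typescript
// Sketch: calling a Foundry-hosted model via the OpenAI-compatible
// Responses proxy, using the open-source "openai" Node package.
import OpenAI from "openai";

const client = new OpenAI({
  // Hypothetical endpoint path; substitute your stack's hostname and
  // the proxy URL from the Foundry docs.
  baseURL: "https://<your-stack>.palantirfoundry.com/api/v2/llm/openai/v1",
  apiKey: process.env.FOUNDRY_TOKEN, // Foundry token, not an OpenAI key
});

async function summarise(text: string): Promise<string> {
  const response = await client.responses.create({
    model: "gpt-5", // assumed model API name on the Foundry side
    input: `Summarise the following text:\n\n${text}`,
  });
  // output_text is the SDK's convenience accessor for the text output.
  return response.output_text;
}
```

Because the client is the standard OpenAI SDK, existing tooling built against it should work unchanged once the base URL and token are swapped in.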


Hi @bbd5be9832271c105aa7, models are now supported in TSv2 / Python Functions; see the docs: https://www.palantir.com/docs/foundry/functions/language-models-python-tsv2/
