LLM response undefined in TypeScript Code Repository

Hey,

I am using TypeScript Code Repositories and trying to access the LLM models, but none of them seem to work. Here is code I pulled from the docs that doesn’t work:

import { Function } from "@foundry/functions-api";
import { GPT_4o } from "@foundry/models-api/language-models";

export class MyFunctions {
    @Function()
    public async createChatCompletion(userInput: string): Promise<string | undefined> {
        const response = await GPT_4o.createChatCompletion({
            params: {
                temperature: 0,
                maxTokens: 1000,
                responseFormat: { type: "json_object" },
            },
            messages: [
                { role: "SYSTEM", contents: [{ text: "Provide response in json format specifying the answer and how certain you are of the response" }] },
                { role: "USER", contents: [{ text: userInput }] },
            ],
        });
        console.log("Response: ", response);
        return response.choices[0].message.content;
    }
}

When I run a test in the console, the response variable is undefined, so it can’t access choices or any other sub-attributes. I’ve imported GPT_4o in Resource Imports and there are no console or linter errors. I’ve also set "enableModelFunctions": true in functions.json and rebuilt, but still no luck. It’s not just GPT_4o; none of the models work.

Any ideas what I’m doing wrong? It’s got to be something stupid, but I’ve been sitting on this issue for 2 weeks now and can’t figure it out. Thanks in advance!

Hi @andr3wV.

I can think of a few scenarios:

  1. Empty Input Prompt: If the input prompt is an empty string, the function will return null.

  2. Prompt Exceeding Model Limits: When the input prompt surpasses the model’s context length limits, the function cannot process it and returns null.

The next two I’ve seen while using Pipeline Builder:

  3. Rate Limiting by the LLM Provider: If the LLM provider enforces rate limits, some rows may not be processed, leading to null outputs for those rows. This can occur even after multiple retries.

  4. Output Type Coercion Failures: When the LLM’s response cannot be coerced into the specified output type, the function may return null. This often happens if the response format doesn’t match the expected structure.

Not applicable here, but good to know:

  5. Missing Prompt with Only a Media Reference: Providing only a media reference without an accompanying prompt results in a null output, as the function requires a textual prompt to operate.

To diagnose and address these issues, start with a toy example for the user prompt and don’t try to coerce the response into a structured output type.
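Something like this, for instance (a minimal sketch using the same GPT_4o createChatCompletion API as your snippet; the class and function names are just placeholders):

import { Function } from "@foundry/functions-api";
import { GPT_4o } from "@foundry/models-api/language-models";

export class DebugFunctions {
    @Function()
    public async toyCompletion(): Promise<string | undefined> {
        // Hard-coded prompt, no params, no responseFormat: this isolates
        // whether the model call itself returns anything at all.
        const response = await GPT_4o.createChatCompletion({
            messages: [{ role: "USER", contents: [{ text: "Say hello." }] }],
        });
        // Log the raw value before touching any sub-attributes; optional
        // chaining avoids a TypeError if the response comes back undefined.
        console.log("Raw response:", JSON.stringify(response));
        return response?.choices?.[0]?.message?.content;
    }
}

If even this logs undefined, the problem is likely in how the function is being invoked rather than in the prompt or the output coercion.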

The attached code works fine for me. Make sure you have imported the model in the Imports section in the left pane.


They are added. It’s so weird, I feel like I’m going nuts :joy:

It has to be something dumb, but I’ve tried everything I can think of and still no luck. Maybe I should submit a support ticket?

Hey @arukavina,

Thanks for the reply!

  • The input prompt is not empty.
  • I took a 2-week break from this problem, so I’m pretty sure I’m not exceeding model limits. I’ve also tried other models, so rate limits aren’t the issue.
  • The function isn’t returning null; it’s returning undefined. No matter what response type I specify (or don’t specify), it’s the same thing every time, even for different prompts.
  • I’m not providing any media, just text.

I’ve tried fake prompts and still can’t get it to work: undefined every time. It’s driving me nuts :sob:

@andr3wV , I hear you.

Your code works as is:

I created a repo from scratch to test this.
Can you try bootstrapping a new repo, just in case? Then import only GPT_4o.

If not, as you said, I’d contact support.

Is there any chance that this model, or even AIP features entirely, are not enabled on your enrollment? Are you able to use these models in AIP Logic for instance?

You can’t make this stuff up.

We were testing by running unit tests, which apparently don’t return LLM responses. Using Live Preview in the web app works.
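For anyone who still wants unit-test coverage: one workaround might be to stub the model module in the test itself. A rough sketch, assuming a Jest-style test runner (the import path and the stubbed response shape are placeholder assumptions on my part):

import { MyFunctions } from "./MyFunctions"; // placeholder path to the class above

// Swap the real model module for a stub so the function can be unit-tested
// without a live LLM backend (which, per this thread, tests can't reach).
jest.mock("@foundry/models-api/language-models", () => ({
    GPT_4o: {
        createChatCompletion: jest.fn().mockResolvedValue({
            choices: [{ message: { content: '{"answer":"stub","certainty":1}' } }],
        }),
    },
}));

test("createChatCompletion returns the model's message content", async () => {
    const result = await new MyFunctions().createChatCompletion("hi");
    expect(result).toBe('{"answer":"stub","certainty":1}');
});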

Thanks to everyone in this thread for offering your help! Hopefully this helps anyone else who was trying to avoid the web app.
