I am using TypeScript Code Repositories and trying to access the LLM models, but none of them seem to work. Here is code that I pulled from the docs that doesn't work:
```typescript
import { Function } from "@foundry/functions-api";
import { GPT_4o } from "@foundry/models-api/language-models";

export class MyFunctions {
    @Function()
    public async createChatCompletion(userInput: string): Promise<string | undefined> {
        const response = await GPT_4o.createChatCompletion({
            params: {
                temperature: 0,
                maxTokens: 1000,
                responseFormat: { type: "json_object" },
            },
            messages: [
                {
                    role: "SYSTEM",
                    contents: [{ text: "Provide response in json format specifying the answer and how certain you are of the response" }],
                },
                { role: "USER", contents: [{ text: userInput }] },
            ],
        });
        console.log("Response: ", response);
        return response.choices[0].message.content;
    }
}
```
When I run a test in the console, the response variable is undefined, so I can't access choices or any other sub-attributes. I've imported GPT_4o in Resource Imports and there are no console or linter errors. I've also set "enableModelFunctions": true in functions.json and rebuilt, but still no luck. It isn't just GPT_4o; every single model behaves the same way.
Any ideas what I'm doing wrong? It's got to be something stupid, but I've been sitting on this issue for 2 weeks now and have no idea what's wrong. Thanks in advance.
A few cases where an LLM call can return null:

- Empty input prompt: if the input prompt is an empty string, the function returns null.
- Prompt exceeding model limits: when the input prompt surpasses the model's context-length limit, the function cannot process it and returns null. (Seen while using PB.)
- Rate limiting by the LLM provider: if the provider enforces rate limits, some rows may not be processed, leaving null outputs for those rows. This can occur even after multiple retries.
- Output type coercion failures: when the LLM's response cannot be coerced into the specified output type, the function may return null. This often happens if the response format doesn't match the expected structure.

Not applicable here, but good to know:

- Missing prompt with only a media reference: providing only a media reference without an accompanying prompt results in a null output, since the function requires a textual prompt to operate.
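Since several of these failure modes surface as a null or undefined response rather than an exception, it can help to guard the extraction step explicitly instead of dereferencing `response.choices[0]` directly. A minimal sketch in plain TypeScript (the response interface just mirrors the chat-completion shape from the question's code, and `extractContent` is an illustrative helper, not part of the Foundry API):

```typescript
// Illustrative shape mirroring the chat-completion response in the question.
interface ChatResponse {
    choices?: { message?: { content?: string } }[];
}

// Safely pull the first message's content; log a reason and return null
// instead of crashing on "cannot read properties of undefined".
function extractContent(response: ChatResponse | undefined): string | null {
    if (!response) {
        console.warn("Model returned no response (undefined)");
        return null;
    }
    const content = response.choices?.[0]?.message?.content;
    if (content === undefined) {
        console.warn("Response present but no message content:", JSON.stringify(response));
        return null;
    }
    return content;
}
```

With a guard like this, a null result at least tells you whether the whole response was missing or only the content field, which narrows down which of the cases above you are hitting.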
To diagnose and address these issues, try a toy example for the user prompt and start without trying to coerce the response.
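Concretely, that means stripping the call down to a fixed short prompt with no params and no responseFormat, and logging the raw response before touching any sub-attributes. The sketch below is self-contained so the shape of the debugging step is runnable outside a Foundry repository: `stubModel` stands in for the real GPT_4o import from @foundry/models-api/language-models, and `toyCompletion` is a hypothetical name.

```typescript
// Message and response shapes mirroring the structures used in the question.
interface ChatMessage { role: string; contents: { text: string }[]; }
interface ChatResponse { choices: { message: { content: string } }[]; }

// Stand-in for GPT_4o so this sketch runs anywhere; in a Foundry repo you
// would call the imported model object instead.
const stubModel = {
    async createChatCompletion(req: { messages: ChatMessage[] }): Promise<ChatResponse> {
        // A real model call would happen here; the stub returns a canned reply.
        return { choices: [{ message: { content: "pong" } }] };
    },
};

async function toyCompletion(): Promise<string | undefined> {
    // Keep the call minimal: no temperature, no maxTokens, no responseFormat,
    // so a failure can't be blamed on parameter handling or output coercion.
    const response = await stubModel.createChatCompletion({
        messages: [{ role: "USER", contents: [{ text: "ping" }] }],
    });
    // Log the raw response BEFORE dereferencing any sub-attributes.
    console.log("Raw response:", JSON.stringify(response));
    return response?.choices?.[0]?.message?.content;
}
```

If even this minimal form comes back undefined against the real model, the problem is upstream of your prompt and parameters (e.g. permissions or enablement), not the code.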
I spent 2 weeks trying to figure out the problem before taking a break, so I'm pretty sure I'm not exceeding model limits. I've also tried other models, so rate limits aren't an issue.
The function isn’t returning null, it’s returning undefined. No matter what response type I specify or don’t specify, it’s the same thing every time even for different prompts.
I’m not providing any media, just text.
I’ve tried fake prompts and still can’t get it to work. undefined every time. It’s driving me nuts
Is there any chance that this model, or even AIP features entirely, aren't enabled on your enrollment? Are you able to use these models in AIP Logic, for instance?