Registered model in Model Catalog that uses a compute function throws an error

I'm trying to register a model in the Model Catalog using TypeScript functions. I have a separate thread for the issues I'm having with the AWS JavaScript SDK, but here I tried a different approach:

  1. I created a compute function that uses the Boto3 SDK in Python to invoke Bedrock (a rough sketch follows this list).
  2. I used that function in a TypeScript function with the chat completion interface, and was able to publish and run it.
  3. The function now shows up under registered models in the Model Catalog.
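For context, the compute function in step 1 looks roughly like the following. This is a minimal sketch, assuming the compute-modules Python library's @function decorator and an Anthropic Claude model on Bedrock; the model ID, region, and payload shape are placeholders to adapt:

import json
import boto3
from compute_modules.annotations import function

# Bedrock runtime client; the region is a placeholder.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

@function
def invoke_model(context, event) -> dict:
    # Assumes the event carries the same messages/params shape that the
    # TypeScript function later in this thread sends over the webhook.
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": event["params"]["maxTokens"],
        "temperature": event["params"]["temperature"],
        "messages": event["messages"],
    })
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
        body=body,
    )
    result = json.loads(response["body"].read())
    # Map the Bedrock response onto the completion/tokenUsage shape that
    # the chat completion interface expects.
    return {
        "completion": result["content"][0]["text"],
        "tokenUsage": {
            "promptTokens": result["usage"]["input_tokens"],
            "completionTokens": result["usage"]["output_tokens"],
            "maxTokens": event["params"]["maxTokens"],
        },
    }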

But when I try to use it in AIP Logic, I get a "failed to import model" error:

{
  "loadedModels": [
    {
      "identifier": {
        "modelId": "ri.function-registry.main.function.8e8e5318-f2e8-4d65-baa5-2f3216f28d7e",
        "version": {
          "major": 0,
          "minor": 0,
          "patch": 6
        }
      },
      "model": {
        "type": "errorDetails",
        "errorDetails": {
          "errors": [
            {
              "type": "callingFunctionsNotAllowed",
              "callingFunctionsNotAllowed": {
                "functionRid": "ri.function-registry.main.function.7dbbd90a-7484-41e1-9cfd-f0cc07eb1372"
              }
            }
          ]
        }
      }
    }
  ]
}

Any suggestions on the right way to do this?

Hey @maddyAWS,

For now, we only support using models that call out to external models with webhooks. Your function calls out to another “compute function,” which isn’t supported yet.

As a workaround, you can write a webhook that makes an API call to your compute function, import that webhook, and call it from your TypeScript function to make it usable in Logic.

We’ll be removing this restriction soon, so you’ll be able to run any registered function in the model, including this one.

You mean a REST API webhook?
So I’d create a Datasource and a webhook to call the API that I published in the compute module, and use that within the TypeScript function?

Is there an example of how to do this? How would I call the function via the webhook?

You can create a REST Datasource that can egress to your stack hostname, and create an associated webhook to make a call to your function.

Here are the relevant docs:

  • Make your Python function accessible from the API gateway here
  • Webhook setup instructions are here
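To make the shape concrete, here is roughly the HTTP call such a webhook wraps. This is a hand-written sketch with a placeholder hostname, token, and endpoint path (the function apiName invokeModelFunc is hypothetical), so take the real path and payload format from the docs linked above:

import requests

STACK_HOSTNAME = "https://your-stack.palantirfoundry.com"  # placeholder
TOKEN = "<credential-from-the-REST-datasource>"  # placeholder

# The same "parameters" envelope that the TypeScript function below builds.
payload = {
    "parameters": {
        "messages": [{"role": "user", "content": "Hello"}],
        "params": {"temperature": 1.0, "maxTokens": 3000, "topK": 1, "topP": 1.0},
    }
}

# Hypothetical path; check the API gateway docs for the real endpoint.
resp = requests.post(
    f"{STACK_HOSTNAME}/api/v2/functions/queries/invokeModelFunc/execute",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json())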
// Standard imports
import { Query, Double, isOk } from "@foundry/functions-api";
import { ChatCompletion } from "@palantir/languagemodelservice/contracts";
import { 
    FunctionsGenericChatCompletionRequestMessages, 
    GenericCompletionParams, 
    FunctionsGenericChatCompletionResponse 
} from "@palantir/languagemodelservice/api";
import { Sources } from "@foundry/external-systems";

interface Message {
    role: string;
    content: string;
}

interface Parameters {
    temperature: Double;
    maxTokens: number;
    topK: number;
    topP: Double;
    stopSequences?: string[]; 
}


interface MessageAndParams {
    parameters: {
        messages: Message[];
        params: Parameters;
    };
}

interface TokenUsage {
    promptTokens: number;
    completionTokens: number;
    maxTokens: number;
}

interface CompletionResponse {
    completion: string;
    tokenUsage: TokenUsage;
}

interface CustomWebhookResponse {
    value: CompletionResponse;
}

interface ApiResponse {
    output: {
        invocationResponse: CustomWebhookResponse;
    };
}

export class LLMFunctions {
    @Query({ apiName: "custommodelCompletionInvokev2" })
    @ChatCompletion()
    public async custommodelCompletionv2(
        messages: FunctionsGenericChatCompletionRequestMessages, 
        params?: GenericCompletionParams
    ): Promise<FunctionsGenericChatCompletionResponse> {
        try {
            // Transform messages to required format
            const formattedMessages: Message[] = messages.map(msg => ({
                role: msg.role,
                content: msg.content || ''
            }));

            // Create request body with default params if none provided
            const requestBody: MessageAndParams = {
                parameters: {
                    messages: formattedMessages,
                    params: {
                        temperature: params?.temperature ?? 1.0,
                        maxTokens: params?.maxTokens ?? 3000,
                        topK: 1,
                        topP: 1.0,
                        stopSequences: params?.stopSequences ?? ["assistant"]
                    }
                }
            };

            console.log("Sending request:", JSON.stringify(requestBody, null, 2));
            
            // Call webhook instead of direct function
            const response = await Sources.foundryapiendpoint.webhooks.Executeinvokemodelfunc.call({
                MessageandParams: requestBody
            });

            if (!isOk(response)) {
                throw new Error(`Webhook call failed: ${response.error.message}`);
            }

            const apiResponse = response.value as ApiResponse;
            const completionResponse = apiResponse.output.invocationResponse.value;

            // Format response to match required interface
            return {
                completion: completionResponse.completion,
                tokenUsage: {
                    promptTokens: completionResponse.tokenUsage.promptTokens,
                    completionTokens: completionResponse.tokenUsage.completionTokens,
                    maxTokens: completionResponse.tokenUsage.maxTokens
                }
            } as FunctionsGenericChatCompletionResponse;

        } catch (error) {
            console.error("Error in webhook call:", error);
            throw error;
        }
    }
}

I was able to create a function (above) that uses the compute function via the webhook, and it works.