Use of Model Adapters in Pipeline Builder

Hello,

Following a question I previously raised on Stack Overflow:
palantir foundry - How to use a Model Adapter in a User Defined Functions (UDF) for Pipeline Builder - Stack Overflow.

With the new Python UDFs, we would like to know whether it is possible to use a Model Adapter API within a UDF in order to run inference on all rows of the input data?

Best regards,

Hi! Welcome to the Palantir Developer Community!

As of now, model assets are not supported in Pipeline Builder. We definitely want to build an integration, however we do not have any concrete timelines at this time.

As the Stack Overflow answer suggests, you can use a Python transform or a batch deployment to create a dataset of model predictions and then use that dataset in your pipeline.
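For readers unfamiliar with the pattern, the "dataset of predictions" approach can be sketched in plain Python. Note that `TitanicAdapter`, its `predict` method, and `build_predictions` below are all hypothetical stand-ins; in Foundry you would load the real model inside a Python transform and write its output to a dataset that Pipeline Builder then consumes.

```python
# Foundry-agnostic sketch of the "dataset of predictions" pattern.
# TitanicAdapter is a hypothetical stand-in for a real model adapter.

class TitanicAdapter:
    """Hypothetical adapter exposing a batch predict() method."""

    def predict(self, rows):
        # Toy rule standing in for a real model: children and women survive.
        return [row["age"] < 13 or row["gender"] == "female" for row in rows]


def build_predictions(rows, adapter):
    """Score every input row once and attach a prediction column."""
    predictions = adapter.predict(rows)
    return [{**row, "survived": pred} for row, pred in zip(rows, predictions)]


if __name__ == "__main__":
    data = [
        {"age": 8, "gender": "male"},
        {"age": 30, "gender": "female"},
        {"age": 45, "gender": "male"},
    ]
    for row in build_predictions(data, TitanicAdapter()):
        print(row)
```

The key point is that inference happens once, upstream, in a transform; the pipeline then only joins or filters against the precomputed prediction column.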

Hello Tucker,

Thank you for your reply; I will look forward to this integration.

Slightly related to this topic, but from a Workshop module: what would be the way to run inference against a live-deployed model from Workshop?
Would a TypeScript or Python function making an API call to the model endpoint work? (It does work in Slate.)

Best regards,

This should work out of the box:

https://palantir.com/docs/foundry/functions/functions-on-models/

Hello,

I would like to follow up on this feature. Is there any update on:

  • Using a Model Asset in Pipeline builder via a UDF?
  • Using a registered model (not Palantir provided - externally hosted) via a UDF?

Cheers,

It depends on what type of model you want to use.

If you want to use an LLM, you can register the LLM in Foundry, which makes it available in the same manner as Palantir-provided models (easy use in Pipeline Builder, AIP Logic, etc.). See the docs about that here: https://www.palantir.com/docs/foundry/administration/bring-your-own-model.

If your model is not an LLM and you want to call it from Pipeline Builder, you can use a Python function as a UDF, and inside that function you can call the model.

Calling the model from a Python function isn't the easiest thing: you need to register the model as a function, and then use the platform SDK to call it.

In the example below, I have a model registered with the API name com.foundryfrontend.models.TitanicModel2:

import foundry
from functions.api import function
from foundry.v2 import FoundryClient
from foundry_sdk_runtime import FOUNDRY_HOSTNAME, FOUNDRY_TOKEN

# API name the model was registered under as a function
TITANIC_MODEL = "com.foundryfrontend.models.TitanicModel2"

@function
def execute_titanic_model(age: int, gender: str) -> str:
    # Authenticate with the token of the user invoking the function
    client = FoundryClient(
        auth=foundry.UserTokenAuth(hostname=FOUNDRY_HOSTNAME.get(), token=FOUNDRY_TOKEN.get()),
        hostname=FOUNDRY_HOSTNAME.get(),
    )
    # Execute the registered model function through the platform SDK
    result = client.functions.Query.execute(
        TITANIC_MODEL,
        parameters={"age": age, "gender": gender},
        preview=True,
    )

    if result.value:
        return "The person is predicted to survive"
    else:
        return "The person is predicted to NOT survive"

You'll have to adapt the code (the model's API name, its parameters, and the return handling) to make it work for your use case.