With the new Python UDF functions, would it be possible to use a Model Adapter API within a UDF to run inference over all the rows of input data?
As of now, model assets are not supported in Pipeline Builder. We definitely want to build an integration, but we do not have any concrete timelines at this time.
As the Stack Overflow answer suggests, you can use a Python transform or batch deployment to create a dataset of model predictions and then use those in your pipeline.
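The batch-prediction pattern described above can be sketched in plain Python. This is purely illustrative: `DummyModel`, its `predict` method, and the row schema are stand-ins, not Foundry or model-adapter APIs.

```python
# Illustrative sketch of the batch-prediction pattern: score every input
# row once, write the results out as a "dataset" of predictions, and have
# the pipeline consume that dataset instead of calling the model live.

class DummyModel:
    """Stand-in for whatever model adapter you actually load."""

    def predict(self, row):
        # Toy rule standing in for a real model's inference.
        return row["age"] < 18


def score_rows(model, rows):
    """Return a new list of rows with a prediction column added."""
    return [{**row, "prediction": model.predict(row)} for row in rows]


rows = [{"id": 1, "age": 12}, {"id": 2, "age": 40}]
scored = score_rows(DummyModel(), rows)
# scored[0]["prediction"] is True, scored[1]["prediction"] is False
```

In a real transform, `rows` would be the input dataset and `scored` would be written to an output dataset that the pipeline reads.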
Thank you for your reply; I will look forward to this implementation.
Slightly related to this topic, but from a Workshop module: what would be the way to run inference against a live-deployed model from Workshop?
Would using a TypeScript or Python function with an API call to the model endpoint work? (It does work in Slate.)
If your model is not an LLM and you want to call it from Pipeline Builder, you can use a Python function as a UDF in Pipeline Builder and call the model inside the function.
Calling the model from a Python function isn't the easiest thing: you need to register the model as a function, and then use the platform SDK to call it.
In the example below, I have a model registered with the API name com.foundryfrontend.models.TitanicModel2:
import foundry
from functions.api import function
from foundry.v2 import FoundryClient
from foundry_sdk_runtime import FOUNDRY_HOSTNAME, FOUNDRY_TOKEN

# API name the model was registered under as a function.
TITANIC_MODEL = "com.foundryfrontend.models.TitanicModel2"

@function
def execute_titanic_model(age: int, gender: str) -> str:
    # Authenticate with the hostname and token provided by the runtime.
    client = FoundryClient(
        auth=foundry.UserTokenAuth(hostname=FOUNDRY_HOSTNAME.get(), token=FOUNDRY_TOKEN.get()),
        hostname=FOUNDRY_HOSTNAME.get(),
    )
    # Execute the registered model function with the query parameters.
    result = client.functions.Query.execute(
        TITANIC_MODEL,
        parameters={"age": age, "gender": gender},
        preview=True,
    )
    if result.value:
        return "The person is predicted to survive"
    else:
        return "The person is predicted to NOT survive"
You’ll have to adapt this code to your own model and use case.
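If you want to sanity-check the mapping from query result to message without a live Foundry call, you can stub the result object locally. `FakeResult` and `interpret` are purely illustrative stand-ins, not part of any SDK.

```python
# Local stand-in for the query result object, so the survive / not-survive
# branch can be exercised without hitting Foundry.

class FakeResult:
    def __init__(self, value):
        self.value = value


def interpret(result):
    # Same branch as in execute_titanic_model above.
    if result.value:
        return "The person is predicted to survive"
    return "The person is predicted to NOT survive"


print(interpret(FakeResult(True)))   # The person is predicted to survive
print(interpret(FakeResult(False)))  # The person is predicted to NOT survive
```

This kind of stub is also handy for unit-testing the function body before wiring it into Pipeline Builder.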