Best Practices for Deploying a Machine Learning Model in Workshop

Hi everyone,

I’m looking for guidance on how to deploy a neural-network model (binary prediction + probability) so that users can run live inference inside Workshop.

For reference, this is roughly what my adapter looks like, trimmed down to a couple of placeholder columns (the real api() lists all 60 object fields):
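
```python
import palantir_models as pm
from palantir_models_serializers import DillSerializer


class BinaryScoringAdapter(pm.ModelAdapter):

    @pm.auto_serialize(model=DillSerializer())
    def __init__(self, model):
        self.model = model

    @classmethod
    def api(cls):
        # Placeholder columns; the real list spells out all 60 object fields.
        feature_columns = [
            ("feature_a", float),
            ("feature_b", float),
        ]
        inputs = {"df_in": pm.Pandas(feature_columns)}
        outputs = {"df_out": pm.Pandas(
            feature_columns + [("prediction", int), ("probability", float)]
        )}
        return inputs, outputs

    def predict(self, df_in):
        # Assumes an sklearn-style predict_proba; the real network call differs.
        proba = self.model.predict_proba(df_in)[:, 1]
        df_in["probability"] = proba
        df_in["prediction"] = (proba >= 0.5).astype(int)
        return df_in
```
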
Here’s what I have:

- Model + adapter – The adapter expects a DataFrame with the same 60 fields as the Foundry object we want to score.
- Basic join – I can publish the inference output, join it back to the object, and display the results in widgets. That works, but it's static (rough sketch of that batch step below this list).
- Interactive attempt – I published the model as a deployment and wrapped it in a Workshop Function. After manually mapping all 60 variables, the function returns nothing in Workshop, even though the deployment itself tests fine. I followed the Ontologize tutorial (https://youtu.be/TRIOCHJ6wdw?t=180), but I'm clearly missing a step.
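
The batch step from the "Basic join" bullet looks roughly like this (placeholder paths; it assumes the standard ModelInput transforms pattern):

```python
from transforms.api import transform, Input, Output
from palantir_models.transforms import ModelInput


@transform(
    features=Input("/Project/path/to/feature_dataset"),         # placeholder path
    model=ModelInput("/Project/path/to/binary_scoring_model"),  # placeholder path
    scored=Output("/Project/path/to/scored_output"),            # placeholder path
)
def score(features, model, scored):
    # Runs the adapter's predict() over the feature dataset...
    results = model.transform(features)
    # ...and writes the df_out output; this is the dataset I join back to the object.
    scored.write_pandas(results.df_out)
```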

Ideas I’m weighing:

  1. Let the adapter read the Foundry object directly – use the Object SDK inside the adapter and run predictions there (rough sketch after this list).

  2. Add a TypeScript function – fetch the object in TS, call the Python inference endpoint, and return the enriched rows to Workshop.
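
For idea 1, I'm imagining something like the sketch below. Every name in it is hypothetical (there is no real client.my_object_type call); it just shows the shape:

```python
# Hypothetical sketch of idea 1: none of these SDK names are real.
import pandas as pd

FEATURE_NAMES = ["feature_a", "feature_b"]  # would be the 60 real field names


def score_live(client, adapter):
    """Fetch object instances via the (generated) Ontology SDK client,
    rebuild the 60-column frame, and reuse the adapter's predict()."""
    rows = [
        {name: getattr(obj, name) for name in FEATURE_NAMES}
        # client.my_object_type.all() stands in for whatever iteration the SDK exposes.
        for obj in client.my_object_type.all()
    ]
    df_in = pd.DataFrame(rows, columns=FEATURE_NAMES)
    return adapter.predict(df_in)  # same code path the batch transform uses
```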

Questions:

  1. What’s the simplest, recommended pattern for live inference in Workshop?
  2. Is there a faster way to bind ~60 inputs than adding them one by one?
  3. Between the two approaches above, which copes better if the object schema changes?
  4. If you’ve deployed something similar, could you share how you structured it?

Thanks a lot for any pointers!