Hi, I have a few questions about using Foundry for hosting (React) apps with the Dev Console, along with DevOps/Marketplace.
We must have our marketplace application unlocked to allow users to make ontology edits to the object types we’ve deployed. During upgrades of unlocked marketplace applications, a warning is shown about the potential to lose edits during the upgrade. Additionally, one cannot automatically upgrade unlocked marketplace applications. A few specific questions:
What assurances can we have that we will not in fact lose ontology edits while performing an upgrade?
Is there some way we could change how we package our DevOps product so that users can make ontology edits without needing to unlock the installation?
Is there any way to enable automatic upgrades for unlocked installations?
Document uploading. We want to be able to take docs from a user’s PC, upload them into Palantir, chunk, clean, and embed them. Right now the code for each of these steps is in Palantir, but the orchestration around it lives in the frontend, since it proved very difficult for us to pass full file objects into Code Repo functions when we built this feature.
Is there a better way for us to do this, keeping all the code on Foundry?
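For reference, the chunk-and-clean steps in our pipeline look roughly like the sketch below. The names (`cleanText`, `chunkText`) and parameters are illustrative, not our actual code or any Foundry API; the point is that this logic is plain text processing that could live in a Foundry Function if we could get the file contents to it.

```typescript
/** Normalize whitespace and strip non-printable control characters. */
export function cleanText(raw: string): string {
  return raw
    .replace(/[\u0000-\u0008\u000B-\u001F]/g, "")
    .replace(/\s+/g, " ")
    .trim();
}

/** Split text into overlapping chunks of roughly `size` characters. */
export function chunkText(text: string, size = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

Today the frontend calls steps like these one at a time; ideally the whole sequence would run server-side against the uploaded file.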
LLM orchestration. We want to make LLM calls, run our own code over the (streamed) outputs, call tools and recursively call more models, and stream the results out to the frontend. I do not believe the agent API is by itself sufficient to meet our needs, since we do a lot of custom processing.
How could we support this use case using code we deploy and run in Foundry, rather than within our frontend app?
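To make the “custom processing” concrete, here is a rough sketch of the kind of loop we want to run server-side: stream model output, intercept tool calls, recursively call the model on tool results, and re-emit processed tokens. `callModel` is a stub standing in for whatever LLM endpoint is available; none of the names here are a Foundry API.

```typescript
type ModelEvent =
  | { kind: "token"; text: string }
  | { kind: "tool"; name: string; arg: string };

// Stub: a real implementation would stream events from an LLM endpoint.
async function* callModel(prompt: string): AsyncGenerator<ModelEvent> {
  yield { kind: "token", text: `echo:${prompt}` };
}

type Tool = (arg: string) => Promise<string>;

/** Stream processed tokens to the caller, resolving tool calls recursively. */
export async function* orchestrate(
  prompt: string,
  tools: Record<string, Tool>,
  depth = 0,
): AsyncGenerator<string> {
  if (depth > 3) return; // guard against runaway recursion
  for await (const ev of callModel(prompt)) {
    if (ev.kind === "tool") {
      const tool = tools[ev.name];
      if (tool) {
        const result = await tool(ev.arg);
        yield* orchestrate(result, tools, depth + 1); // feed tool output back to the model
      }
    } else {
      yield ev.text.toUpperCase(); // placeholder for our custom per-token processing
    }
  }
}
```

The key requirement is that each yielded token reaches the frontend as it is produced, which is why a non-streaming Function response does not fit.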
I can respond to the DevOps/Marketplace part of the question. When the Marketplace product upgrades, all ontology entities included in the product will be overridden, so it’s generally a bad pattern to unlock an installation and upgrade it frequently: interactive changes will be lost.
Would there be a way to create separate objects to track the user edits, which would be inputs to the Marketplace product?
Let’s assume that you have an object called Issue where you want users to edit the summary field. Instead, you could remove the summary property from Issue and create a separate IssueSummary object (with an issueId and a summary property) which is linked to Issue. And then make IssueSummary an input to your product.
To install the product, you would first create this IssueSummary object in the target location, and feed it as an input to the Marketplace install. After that, you no longer need to unlock the product, and you can upgrade whenever you want:
when it upgrades, Marketplace will update the Issue object
when users want to update the issue summary, they’ll actually be creating/updating an IssueSummary object linked to the issue
With this setup, Marketplace doesn’t manage IssueSummary, which guarantees it will never delete edits made by users.
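The split can be illustrated with plain types (this is a conceptual sketch, not Foundry API code, and the property names are just the example’s): the product-managed Issue carries no editable summary, user edits live on a separate IssueSummary keyed by issueId, and the display value is resolved by joining the two.

```typescript
// Managed by the Marketplace product; rewritten on every upgrade.
interface Issue {
  id: string;
  title: string;
}

// User-editable side object, an *input* to the product; upgrades never touch it.
interface IssueSummary {
  issueId: string;
  summary: string;
}

/** Resolve the display summary for an issue from the side table of user edits. */
export function summaryFor(issue: Issue, edits: IssueSummary[]): string | undefined {
  return edits.find((e) => e.issueId === issue.id)?.summary;
}
```

Because upgrades only rewrite `Issue`, the `IssueSummary` rows survive regardless of what the new product version does.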
I’m surprised to hear that all entities will be overridden, we’ve managed to do several version upgrades without seeing any data loss or dropped edits. Are they only overridden if the object type is itself modified?
It would not be practical for us to hand-create new input objects for each object type whose properties we wish to make editable, in every install we make, or to require those installing the app to do the same. The reason we are using Marketplace/DevOps in the first place is to manage object types. Could you suggest any other, more scalable alternatives?
Hey - to reply on the LLM orchestration question here:
Currently, the way to run your own custom logic over an LLM response in Foundry is to write your own Function (e.g. in a Code Repository) that operates on the AIP Agent response or on a language model function response. You can then call that function from your FE app using the Ontology SDK (with the function included in your Dev Console app), or through the platform APIs.
A limitation of this is that functions do not yet support streamed responses over OSDK/platform APIs - I will feed this example back to our product team so the request is tracked.
If your use case requires streaming the response, and AIP Agents provides the tools you’re looking for here, would you be able to keep the stream-processing code in your FE app?
Hmmm, do you have any idea of the time horizon for getting streamed results into the OSDK? Or any suggestions for ways to emulate them, or do something similar? AIP Agents are not sufficient for our use case, since we need to define custom tools at run-time and run completions with structured outputs.
It’s indeed a bit more complicated than that: edits would only be dropped if there is a breaking change to the object, and if you granted Marketplace permission to drop edits in order to perform the upgrade. Since Object Storage V2, it is possible to manage schema migrations when there is a breaking change.
In Marketplace, when you install a product, you can select the set of migrations you allow Marketplace to make. If you checked “Allow drop migrations of Ontology property edits”, then Marketplace may drop edits when needed while upgrading the product.
An option here could be to create 2 Marketplace products: one to generate the user-editable objects (which you would just use to bootstrap objects for new installations, and unlock), and one to generate the marketplace-managed objects (which would remain locked and upgrade frequently). Because the first product would be outputting objects that are inputs to the second, we will automatically detect that they are Linked products. This means you’ll be able to install them both simultaneously the first time, and after this initial bootstrapping, only upgrade the “managed” product.
There’s no confirmed timeline for this as a feature request yet, but I’ve passed on the feedback here to our product teams for tracking.
For the model features you mentioned (custom tools defined at runtime, and structured outputs), how are you currently looking to use these in Foundry? Is this using custom models also?
One suggestion that would still leverage AIP Agents (and its streaming support) is to use application variables to pass runtime information into a custom Function tool, and use that to dynamically control your tool logic.
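As a hedged sketch of that pattern (the variable shape and tool signature below are illustrative, not the AIP Agents API): the application writes runtime configuration into a variable, and the custom Function tool reads it at call time to gate which tools run and how output is shaped.

```typescript
// Illustrative stand-in for an application variable the frontend sets at runtime.
type ToolConfig = {
  enabledTools: string[];
  outputSchema: "json" | "text";
};

/** Hypothetical Function tool whose behavior is driven by runtime config. */
export function runTool(name: string, input: string, config: ToolConfig): string {
  if (!config.enabledTools.includes(name)) {
    return `tool ${name} disabled`; // runtime gating driven by the variable
  }
  // Structured output emulated by wrapping the result per the configured schema.
  return config.outputSchema === "json"
    ? JSON.stringify({ tool: name, result: input })
    : `${name}: ${input}`;
}
```

The agent always calls the same registered tool, but the variable lets each session decide what that tool actually does.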
Thanks for the guidance on Marketplace, @aguinaudeau, I’ll take a look at Linked Products.
@ClaireR Thank you for passing on the feedback. As to custom tools, we want to be able to inject them into the LLM context based on user input. We may use custom models so as to have access to the standard OpenAI API.