I’m interested in best practices for handling configuration management in applications that use and combine many ontology-based Foundry tools.
For example, in a large, transformation-focused data pipeline we store important pipeline/application parameters in a central dataset that is then loaded by each transform that can be parameterized. Within the transform, the necessary parameters are extracted from the config dataset and used. This keeps all configuration parameters in one place, allows versioning of configs, and makes it easy to provision configs for different environments (DEV, TEST, PROD). Editing configs is also easy because everything lives in a single place.
What would be a good approach in an ontology-based application where I need to configure a variety of building blocks (functions, actions, monitors, UIs, etc.)?
Hi,
thanks for the response. Maybe I’m missing something, but I do not see how the documentation page you gave links to my question on configuration management. Could you please explain your thoughts?
Best
Hi @robroe-tsi
You don’t mention specifics about the building blocks or what exactly you want to configure, so please excuse the generalised approach:
I generally use Marketplace for application versioning. It allows you to split up application development without having to split into different namespaces, and it also provides a good way to deploy updates to production workflows.
You will likely need several config files, but it can be done this way:
- All of your TypeScript functions can share one configuration file that loads the needed params (e.g. which variables, models, etc. to use)
- If you are looking to deploy many similar applications with different input data, you can create a pipeline that creates objects from datasets and have your input config define what the input datasets should be, alongside any other variables. The only manual configuration here would be creating a prefix for the Ontology objects per installation.
- The above objects can be used in Workshop without any change (as long as the schema stays the same)
- This pipeline can also create a configuration object type in the ontology that can be loaded into Workshop and used to provide default values for e.g. active tabs, visibility of widgets, etc. (see the sketch at the end of this post)
While this is more complicated than the equivalent in your data transforms, remember that it is a more complicated problem to solve. And I promise that if you need to do this many times over, it can save you a lot of time.
Foundry branching might change this for some of my use cases, but not all, and perhaps not for yours either, so now you have a couple of different options.
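To make the configuration object type idea more concrete, here is a minimal sketch of a TypeScript function that reads a value from such an object. The object type ApplicationConfig and its properties configKey/configValue are hypothetical, and the exact generated API names depend on your ontology SDK, so treat this as a sketch rather than copy-paste code:

```typescript
import { Function } from "@foundry/functions-api";
import { Objects, ApplicationConfig } from "@foundry/ontology-api";

export class ConfigFunctions {
    /**
     * Looks up a single configuration value by key from the hypothetical
     * ApplicationConfig object type (properties: configKey, configValue).
     */
    @Function()
    public getConfigValue(configKey: string): string | undefined {
        const matches: ApplicationConfig[] = Objects.search()
            .applicationConfig()
            .filter(cfg => cfg.configKey.exactMatch(configKey))
            .all();

        // Return the stored value if a config object with this key exists.
        return matches.length > 0 ? matches[0].configValue : undefined;
    }
}
```

Workshop can load the same configuration objects directly through an object set variable, so your functions and UI elements read defaults from the same source.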
Sounds like some combination of “Release Management” and “Workflow Builder” may help?
https://www.palantir.com/docs/foundry/devops-release-management/overview/
https://www.palantir.com/docs/foundry/app-building/overview/
Hey Robroe,
Best practices start with a good plan. The Solutions Designer app can help get your (connectable, linkable, LIVE) pieces on the table. If you are working with a team, this can also give you a single pane of glass for collaboration.
As I cannot see the data, these are generalized solutions and ideas to try out.
Workshop: Publishing and versioning (Palantir docs)
https://www.palantir.com/docs/foundry/administration
Hi @jakehop
many thanks for the explanations.
I like the idea of a configuration object type that stores one or more configurations and can be loaded by functions or UI elements.
Would it be possible to explain the first bullet point about the “one configuration file” for TypeScript a little more? That might be new to me.
Also, I have some trouble understanding your second bullet point. My case meets your requirement, as we want to implement multiple similar (but not identical) workflows. Do you mean using DevOps/Marketplace?
Best
Hi,
thanks for the two links.
The description of Workflow Builder (and the name) sounds promising, but for now the functionality seems to be restricted to visualizing and describing existing applications.
DevOps/Marketplace: Yes, we partially use that for configuring resources (like schedules, inputs, etc.), which often cannot be done from within a pipeline. But in my experience it is difficult to configure the behavior/logic within building blocks (within a transform or within a function).
Thanks for the context. Here are my responses on how I’d proceed:
- Similarly to how you use config files in Python transform code, you can do the same in TypeScript. There are multiple ways of doing this, but a pretty simple and fool-proof implementation is this: have a config.ts file where you keep your configuration setup and export one function that returns the configuration variables. Use this function to import the configuration in other files in your repository (see the sketch after this list).
- Use Marketplace (if the deployment model fits your use case) and either use it to select which datasets serve as inputs for your object types, or build a pipeline around this that requires manual configuration of input datasets. Pipeline Builder can deploy objects, and the inputs to the pipelines deploying those objects can be configured on a per-use-case basis, giving you similar workflows with different data inputs.
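As a rough illustration of the config.ts pattern, here is a minimal sketch; the environment names, fields, and values are made up for the example:

```typescript
// config.ts -- single place for configuration used across the repository.
// All names and values here are illustrative.

export interface AppConfig {
    environment: "DEV" | "TEST" | "PROD";
    modelName: string;
    defaultPageSize: number;
}

// Per-environment values live in one place and can be edited and versioned together.
const CONFIGS: Record<AppConfig["environment"], AppConfig> = {
    DEV:  { environment: "DEV",  modelName: "forecast-model-dev",  defaultPageSize: 25 },
    TEST: { environment: "TEST", modelName: "forecast-model-test", defaultPageSize: 25 },
    PROD: { environment: "PROD", modelName: "forecast-model-prod", defaultPageSize: 100 },
};

// Switch the active environment here (or derive it however fits your setup).
const ACTIVE_ENVIRONMENT: AppConfig["environment"] = "DEV";

// The single exported accessor that other files import.
export function getConfig(): AppConfig {
    return CONFIGS[ACTIVE_ENVIRONMENT];
}
```

Other files in the repository then call `import { getConfig } from "./config";` instead of hard-coding values.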
Hope this explains my thinking better.