Can we supply inputs and outputs dynamically to the transform code repository?

We receive inputs in batches and want to run the same transform on each of them. How can we do that? Essentially, we create input media sets for the batches as they come in. The transformation logic is identical and the inputs all look the same, but we can't keep everything in one media set because the batches belong to different teams within the organization.

Editing the transform repository seems to be one option. A different team creates those batches, marks the ontology object with a media reference ID, and flags the object as ready for processing. The transform code repository should then pick up from there.

We’re looking for ideas on how to handle this flow and are flexible to change the design at this point.

Hi @RajKarri - If by “dynamically” you mean at runtime, then that isn’t possible. Inputs must be defined at check/compile time, and outputs must always point to the same project. Otherwise, permissions couldn’t be validated properly, and users might end up overwriting each other’s work.

The closest fit for what you're trying to achieve might be transform generation, described in the documentation here. The same logic can be reused for all inputs and outputs; however, the list of input datasets must be known beforehand, as shown in the example in the docs.
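The generation pattern boils down to a factory function that builds one transform per known input, then collecting the results in a module-level list. A minimal pure-Python sketch of the idea (the dataset paths are hypothetical placeholders; in an actual Foundry repository the factory would return a function decorated with `transform_df` and wired to `Input`/`Output` from `transforms.api`):

```python
# Sketch of the transform-generation pattern: one factory call per known
# input/output pair. All paths here are made-up examples.

def create_transform(source_path: str, output_path: str):
    # In Foundry this would return a @transform_df-decorated function
    # bound to Input(source_path) and Output(output_path).
    def compute(df):
        # The same logic runs for every batch.
        return df

    compute.source = source_path
    compute.output = output_path
    return compute


# The full list of inputs must be known at check/compile time --
# e.g. one pair per team, since each team's data lives separately.
TEAM_BATCHES = [
    ("/Team-A/raw/batch", "/Team-A/clean/batch"),
    ("/Team-B/raw/batch", "/Team-B/clean/batch"),
]

TRANSFORMS = [create_transform(src, out) for src, out in TEAM_BATCHES]
```

When a new team's batch location is added to the list, a new transform is generated on the next checks/build, so the shared logic never has to be copied by hand.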
