Periodically Updating a Dataset: Circular Dependency

Is it possible to update an existing dataset based on the contents of another dataset through Pipeline Builder? I have a dataset that backs an object, and I need to periodically update it whenever I get new data from a different dataset. Would I be able to leverage Pipeline Builder for that? I'm not sure, because I would need to use the output dataset as an input dataset to compare updates, and that would be a circular dependency that it won't let me build.

Basically, I am wondering if anyone has updated a dataset that backs an object based on the values in a separate dataset. I don't think I can use Pipeline Builder or data transforms, because the output dataset would have to be compared against the input dataset as well. I was thinking the best approach would be to create an object from the incoming dataset and then have an Automate run an action/function that creates new objects backed by the original dataset as needed, but that seems like a messy solution.
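For what it's worth, the comparison step itself is just an upsert: merge the incoming rows into the current snapshot, keyed on some identifier. Here's a minimal pandas sketch of that logic, assuming a shared `id` key (the column names and sample data are hypothetical, not from either dataset):

```python
import pandas as pd

# Hypothetical current snapshot (the dataset backing the object)
current = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})

# Hypothetical incoming dataset: one updated row (id 2) and one new row (id 3)
incoming = pd.DataFrame({"id": [2, 3], "value": ["b2", "c"]})

def upsert(current: pd.DataFrame, incoming: pd.DataFrame, key: str) -> pd.DataFrame:
    """Replace rows in `current` whose key appears in `incoming`, then append the rest."""
    kept = current[~current[key].isin(incoming[key])]
    return (
        pd.concat([kept, incoming], ignore_index=True)
        .sort_values(key)
        .reset_index(drop=True)
    )

updated = upsert(current, incoming, "id")
# updated now holds ids 1, 2, 3 with values "a", "b2", "c"
```

The platform constraint is only about wiring this up as a pipeline whose output is also its input; the merge logic itself is straightforward.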

If I'm understanding you correctly, schedules can handle this situation for you. I attached an example of a Fusion-backed dataset (raw_current…Table3) acting as the trigger for a plaid_transaction dataset that rebuilds every time the Table3 dataset updates. The transaction dataset also happens to have a separate schedule.

In this instance, I set up the schedule, but when it was supposed to trigger, it ignored the trigger because the actual inputs to the transaction dataset were all up-to-date. So it's intelligent about skipping unnecessary builds, if that's something you were asking about. There is also an option to force-build on this trigger if you would like.
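The skip behavior described above boils down to: when the trigger fires, only rebuild if some input has changed since the last build, unless force-build is set. A generic sketch of that check (this is not Foundry's actual implementation; the function and timestamps are made up for illustration):

```python
from datetime import datetime

def should_build(
    input_updated_at: list[datetime],
    output_built_at: datetime,
    force: bool = False,
) -> bool:
    """Rebuild only if any input changed after the last build, or if forced."""
    if force:
        return True
    return any(t > output_built_at for t in input_updated_at)

last_build = datetime(2024, 1, 2)
stale_inputs = [datetime(2024, 1, 1), datetime(2024, 1, 1)]  # nothing new

print(should_build(stale_inputs, last_build))        # trigger ignored: inputs up-to-date
print(should_build(stale_inputs, last_build, True))  # force-build overrides the check
```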

Separately, I have done this kind of process before with a loop that leverages schedules, materializations, actions, and Automate to update a trigger dataset after that trigger dataset starts the whole process, but it sounds like you may not need that flow here.