We have a large contour that we’re deploying via a template. Initially, this contour had an input dataset A, which we’ve since replaced with a dataset B.
But when we tried to deploy the contour, the template referenced both datasets among its artifacts, the old one (A) and the new one (B), and asked us to map both inputs, even though the old dataset is no longer one of the contour’s inputs.
We tried duplicating the contour and moving the old one to another project, but we always ran into the same issue when attempting a new deployment.
We come across this very often, and the only solution that works is to recreate the contour. In this case, though, the contour is really big and recreating it would be really tedious.
That’s why we’d like to know whether there’s a way to clear a contour’s cache so that we don’t have to recreate it every time this issue comes up.
I have seen the same issue in the past: some parameters and boards reference a dataset rid in their config (typically, a join board): a dataset column is stored in the config as a dataset rid plus the column name. If you switch the upstream dataset from A to B, and B still has the same column names, the board will keep working even though it still stores dataset A’s rid. This only ever becomes a problem with Templates, which need to compute the contour’s actual dependencies.
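To make the failure mode concrete, here is a minimal sketch. The dict below is a *hypothetical* shape for a join board’s config (the real schema is internal to Foundry and not shown here), and the rids and column names are made up; the point is just that a stale `datasetRid` can survive a dataset swap as long as the column names match:

```python
# Hypothetical join-board config; the real Foundry schema is internal.
OLD_RID = "ri.foundry.main.dataset.aaaa"  # dataset A (already replaced upstream)
NEW_RID = "ri.foundry.main.dataset.bbbb"  # dataset B (the current input)

join_board_config = {
    "type": "join",
    "leftColumn": {"datasetRid": OLD_RID, "column": "customer_id"},
    "rightColumn": {"datasetRid": NEW_RID, "column": "customer_id"},
}

def stale_rids(config, current_inputs):
    """Recursively collect dataset rids referenced in a config that are
    not among the contour's current inputs."""
    found = set()
    if isinstance(config, dict):
        for key, value in config.items():
            if key == "datasetRid" and value not in current_inputs:
                found.add(value)
            else:
                found |= stale_rids(value, current_inputs)
    elif isinstance(config, list):
        for item in config:
            found |= stale_rids(item, current_inputs)
    return found

# The board still "works" (column names match), but the template's
# dependency computation would surface OLD_RID as an extra input.
print(stale_rids(join_board_config, {NEW_RID}))
```

The board itself keeps functioning because column resolution succeeds by name, which is exactly why the problem only surfaces at template-deployment time.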
In our case, we ended up identifying (via the browser’s network tab) which boards referenced the old dataset, and we either recreated those boards or reselected the columns to refresh their config.
If you can update your pipelines so that B’s content lives under dataset A, and you switched to dataset B only recently, then only the few boards that already point to B would need to be recreated. Some scripting might also help.
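As one idea for what such scripting could look like: if you can capture the contour’s board configs as JSON (e.g. from the network-tab responses mentioned above; no official API is implied here), a few lines can list which boards mention a given rid. Everything below, including the board structure and rids, is a hypothetical sketch:

```python
import json

# Hypothetical rids; substitute the real ones from your environment.
OLD_RID = "ri.foundry.main.dataset.aaaa"  # dataset A
NEW_RID = "ri.foundry.main.dataset.bbbb"  # dataset B

def boards_referencing(boards, rid):
    """Return the ids of boards whose serialized config mentions the rid.
    Serializing to a string is a blunt but schema-agnostic way to search."""
    return [b["id"] for b in boards if rid in json.dumps(b.get("config", {}))]

# Assumed shape of a captured payload: a list of {"id", "config"} entries.
boards = [
    {"id": "board-1", "config": {"datasetRid": OLD_RID, "column": "x"}},
    {"id": "board-2", "config": {"datasetRid": NEW_RID, "column": "x"}},
]
print(boards_referencing(boards, OLD_RID))  # ['board-1']
```

This at least narrows the manual work down to the affected boards instead of inspecting every request by hand.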
In any case, these are far from ideal workarounds, particularly if you need to do this often or your contour analysis is large.