We are developing a rather complex application that involves a lot of active compute in TypeScript/Python functions without actually invoking a model.
In Workflow Builder, one can get the Model usage chart for a Workshop module.
Having another item there that shows the ontology compute required by that application would be really helpful to get an idea of how expensive an App actually is, given all the magic that one has programmed in the background.
Hi!
Two big thumbs up on this request! We also have sophisticated Workshop applications and struggle to understand what is driving their compute/costs.
A starting point for us is the newly introduced profiler in Workshop, which is good for understanding whether one is actually calling only what's necessary. However, this only tells us what's called and the duration. I would really love to see the associated compute. Currently my assumption is that variable queries which take longer are more expensive.
But at the end of the day, we actually have to do a lot of trial and error to identify how much certain functionalities really cost.
A lot of my daily work at the moment is creating copies of Workshop applications (as xyz app name [cost test feature XYZ]) in which I execute a certain defined scope of steps/interactions. I then wait a day for our resource app to pick up the previous day's data so I can see how much cost these few actions have incurred in aggregate. Then I repeat the same with another subset of steps and try to interpret how much each step cost.
So you see that this is a very inefficient trial-and-error way of doing it; however, I see no other choice at the moment.
Crossing all fingers that this gets on the roadmap soon!
Hey @Flackermann and @Phil-M, thanks for the feedback! We can definitely see the value of surfacing the ontology compute for your applications in Workflow Builder.
We’re still exploring the possibilities here, but we can update this thread with our progress. In the meantime, if you already have action logs turned on, you can see the usage of functions/actions across your stack. Even though this isn’t at the application level, it should help show how often certain actions are used and give rough estimates of the compute.
Hi @helenq! I appreciate you looking into this. Action logs work for getting an understanding of the usage of action types, which is a start.
However, where I feel our blind spot is, is really understanding the impact of a variable query in Workshop. Understanding how variable queries (e.g. search arounds, object set aggregations, etc.) translate into compute is, imo, crucial to excelling at cost-aware application building. The sad truth is that we had to deactivate awesome features in our application because they were too costly. Even after multiple iterations, we did not manage to reduce the compute to an acceptable level. So, yeah, keeping my fingers crossed that you find a good solution!