Questions related to transforms in a Spark pipeline, likely in a Code Repository:
Is it possible to have different Spark profiles for incremental builds vs snapshot builds?
As a general use case, a dynamic number of executors is possible via dynamic allocation profiles, but there is no equivalent flexibility for memory allocation, hence the question above.
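As a sketch of that asymmetry, here is a minimal spark-defaults.conf fragment (the property names are real Spark settings; the values are examples):

```
# Dynamic allocation: the executor COUNT can grow and shrink at runtime.
spark.dynamicAllocation.enabled        true
spark.dynamicAllocation.minExecutors   2
spark.dynamicAllocation.maxExecutors   20

# Memory, by contrast, is a static config: it is fixed when the
# application starts and cannot be changed mid-build.
spark.executor.memory                  8g
spark.executor.memoryOverhead          1g
```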
To my knowledge, Spark does not currently support adjusting executor memory after the application has started, so it should not be possible to tweak the Spark allocation once a build is running.
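If this is a Foundry Code Repository, the Spark profile is attached statically with the `@configure` decorator, so the same profile applies whether a given run of the transform is incremental or a snapshot (fallback). A non-runnable sketch, only valid inside a Code Repository; the profile name and dataset paths below are placeholders:

```python
# Sketch only: transforms.api is available inside a Foundry Code Repository.
from transforms.api import transform, configure, incremental, Input, Output

# The profile is fixed at definition time; there is no hook here to pick a
# different profile depending on whether this run is incremental or a snapshot.
@configure(profile=["EXECUTOR_MEMORY_LARGE"])  # placeholder profile name
@incremental()
@transform(
    out=Output("/examples/output"),   # placeholder paths
    source=Input("/examples/input"),
)
def compute(source, out):
    out.write_dataframe(source.dataframe())
```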