Dynamic allocation Spark profiles are the closest fit for this use case. You can’t modify executor memory or partition size dynamically in this way, but in my personal experience it’s usually possible to achieve good results even with those constraints.
It’s also worth noting that you can dynamically change the number of partitions you pass to Spark’s repartition function based on whether you are running incrementally or as a snapshot (or just based on the total amount of data to be processed, which you can compute cheaply at runtime by summing the sizes of the files in the input filesystem). This technique helps reduce the amount of data per task as needed to get your job running well with a static amount of per-executor memory; a sketch of this is shown below.
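If you’re using Foundry’s Python transforms, a minimal sketch might look like the following. The dataset paths and the 128 MiB target are assumptions, and the exact shape of the filesystem listing (`.ls()` yielding entries with a `size` attribute) may differ in your environment, so treat this as illustrative rather than definitive:

```python
from transforms.api import transform, Input, Output

# Hypothetical target: aim for roughly 128 MiB of input per task.
TARGET_BYTES_PER_PARTITION = 128 * 1024 * 1024


@transform(
    out=Output("/path/to/output"),   # illustrative paths
    source=Input("/path/to/input"),
)
def compute(source, out):
    # Sum the sizes of the files backing the input dataset at runtime.
    total_bytes = sum(f.size for f in source.filesystem().ls())

    # Derive a partition count so each task handles roughly the target size.
    num_partitions = max(1, total_bytes // TARGET_BYTES_PER_PARTITION)

    # Repartition with the computed count instead of a hard-coded constant.
    df = source.dataframe().repartition(int(num_partitions))
    out.write_dataframe(df)
```

The same idea works outside Foundry: however you enumerate the input files, sum their sizes, divide by a per-task budget, and feed the result to repartition so small incremental runs don’t get split into far more tasks than they need.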