Is the “Aggregate Runtime” - which is visible in the Spark Details of a job - the same as the compute-hours cost of a build in RMA?
Specifically, will the aggregate runtime take into account if I exceed the memory-to-core ratio?
No. The memory-to-core ratio is taken into account in Foundry’s compute-hours / compute-usage formulas, but not in the aggregate runtime, which is simply the sum of all task durations in a Spark job.
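For intuition, here is a minimal sketch of the difference between the two quantities. It assumes the max(vCPU, GiB / memory-to-core ratio) shape of the documented formula and a 7.5 GiB-per-core default ratio; the exact coefficients and rates should be verified against the compute-usage documentation for your enrollment.

```python
# Rough sketch (not Foundry's billing code) contrasting "aggregate runtime"
# with compute-seconds. The 7.5 GiB-per-core default ratio is an assumption
# taken from the compute-usage docs; check your enrollment's configuration.

DEFAULT_GIB_PER_CORE = 7.5  # assumed default memory-to-core ratio


def aggregate_runtime_seconds(task_durations_s):
    """Aggregate runtime: simply the sum of all Spark task durations."""
    return sum(task_durations_s)


def container_compute_seconds(vcpus, memory_gib, wall_clock_s,
                              gib_per_core=DEFAULT_GIB_PER_CORE):
    """Compute-seconds for one driver or executor container.

    If the container's memory exceeds the memory-to-core ratio, the cost is
    driven by the memory-equivalent cores rather than the vCPU count.
    """
    effective_cores = max(vcpus, memory_gib / gib_per_core)
    return effective_cores * wall_clock_s


# Hypothetical job: 2 executors (2 vCPU, 30 GiB each) allocated for 600 s,
# plus a driver (1 vCPU, 7.5 GiB) for the same 600 s.
executors = 2 * container_compute_seconds(vcpus=2, memory_gib=30, wall_clock_s=600)
driver = container_compute_seconds(vcpus=1, memory_gib=7.5, wall_clock_s=600)
compute_hours = (executors + driver) / 3600
print(f"compute-hours: {compute_hours:.2f}")  # 30/7.5 = 4 effective cores per executor
```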
So if I want to get the cost in compute-hours, can I just multiply the aggregate runtime by the memory coefficient?
(Trying to understand what a job costs without access to RMA.)
+1 to see the compute hour/RMA metric on an individual job
No, I think you would need to apply the formulas described in the compute-usage documentation; there are examples there, too.
Palantir documentation: Optimizing and debugging pipelines → Spark → Understand compute usage
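To see why the simple multiplication can diverge from the documented formulas, here is a rough, hypothetical comparison (same assumed formula shape and 7.5 GiB-per-core default as the sketch above, with made-up numbers): the naive estimate ignores driver time and any time executors sit idle between tasks.

```python
# Back-of-the-envelope comparison (hypothetical numbers, assumed formula shape):
# "aggregate runtime x memory coefficient" versus per-container compute-seconds.

GIB_PER_CORE = 7.5                            # assumed default ratio
vcpus, memory_gib = 2, 30                     # hypothetical executor profile
effective_cores = max(vcpus, memory_gib / GIB_PER_CORE)  # 4 effective cores

task_durations_s = [200, 250, 300, 150]       # from the job's Spark details
aggregate_runtime_s = sum(task_durations_s)   # 900 s

# Naive estimate: each task-second occupies one vCPU, scaled by the
# memory-driven multiplier per core.
naive_compute_s = aggregate_runtime_s * (effective_cores / vcpus)  # 1800 s

# Documented approach: bill each container for its whole wall-clock allocation.
wall_clock_s, num_executors, driver_cores = 600, 2, 1
actual_compute_s = (num_executors * effective_cores * wall_clock_s
                    + driver_cores * wall_clock_s)  # 5400 s

print(f"naive:  {naive_compute_s / 3600:.2f} compute-hours")
print(f"actual: {actual_compute_s / 3600:.2f} compute-hours")
```

In this made-up case the naive estimate undercounts by a factor of three, mostly because the executors were allocated for much longer than their tasks actually ran.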