How/When can I change the spark configuration of a build via conf.set?

I have a pipeline, defined in a code repository, and I need to apply a particular configuration to make the run successful (for example, changing pivotMaxValues, the skew-join configuration, etc.).

Given that I can access the context by adding ctx as a parameter to my transform, I can actually call conf.set on the Spark session, like:

def compute(ctx, ...):
    ctx.spark_session.conf.set("spark.sql.pivotMaxValues", 50000)

Is this configuration taken into account? How does it relate to transform profiles (see docs)? Which configs will work this way and which won't?