With palantir_models, I added a validation step to my model adapter so that it returns null whenever an input parameter falls outside the validity domain of the training data (to avoid model extrapolation). Unfortunately, this does not work with the DistributedInferenceWrapper: Spark refuses to write the resulting DataFrame because of the null values.
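For context, here is a minimal sketch of the validation pattern I mean. All names here (`predict_with_validation`, `DOMAIN_MIN`/`DOMAIN_MAX`, the toy model) are illustrative and not palantir_models API:

```python
import numpy as np
import pandas as pd

# Illustrative sketch only: a toy "model" plus a validity-domain check that
# nulls out predictions for out-of-domain inputs, as described above.

DOMAIN_MIN, DOMAIN_MAX = 0.0, 10.0  # validity domain observed in training


def predict_with_validation(df: pd.DataFrame) -> pd.DataFrame:
    preds = (2.0 * df["x"]).astype("float64")  # stand-in for the real model
    out_of_domain = (df["x"] < DOMAIN_MIN) | (df["x"] > DOMAIN_MAX)
    # Out-of-domain rows are nulled to avoid extrapolation. Note that np.nan
    # keeps the column float64, while a Python None makes it object-typed,
    # which is one common reason Spark then refuses to write the column.
    preds[out_of_domain] = np.nan
    return df.assign(prediction=preds)


result = predict_with_validation(pd.DataFrame({"x": [1.0, 5.0, 42.0]}))
```

In this local pandas version the nulling works; the failure only appears once the DistributedInferenceWrapper hands the output back to Spark.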
Any idea how to solve this problem?