Following up on this thread: since null outputs are not cached, we were hoping to rerun the pipeline until all rows are computed. However, we notice that rerunning does not actually change the number of null outputs from the LLM blocks.
We suspect this is because the input dataset did not change, so the pipeline is simply not re-running. We therefore tried Force Build, but that did not change anything either.
We have checked, and the input from those rows does not exceed the model's context limit.
What would be the recommended way to iteratively recompute the rows that failed?
Normally that should work. If you run one of the failed rows through the LLM separately (just as a test), does it succeed or still give you null? My first thought is that those rows are still getting rate-limited or erroring on their own. If the standalone test works, though, you could pass a subset of the null rows to the LLM to see if that succeeds; that way the results will get stored in the cache. A sketch of that pattern is below.
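If it helps, here's a minimal sketch of that retry pattern, assuming your outputs live in a pandas DataFrame and you have a hypothetical `run_llm_block` helper that invokes the same LLM block (and therefore the same cache) as the pipeline; adapt the names to your setup:

```python
import pandas as pd

# Hypothetical import: `run_llm_block` should call the same LLM block
# the pipeline uses, so successful responses land in the shared cache.
from my_pipeline import run_llm_block

def retry_null_rows(df: pd.DataFrame,
                    input_col: str = "input",
                    output_col: str = "output") -> pd.DataFrame:
    """Re-run only the rows whose LLM output is null and merge the
    results back in, so they get stored in the cache."""
    null_mask = df[output_col].isna()
    if not null_mask.any():
        return df

    # Send just the failed subset through the LLM block; anything that
    # succeeds this time is cached, so the next full run picks it up.
    retried = df.loc[null_mask, input_col].apply(run_llm_block)

    df = df.copy()
    df.loc[null_mask, output_col] = retried
    return df

# Usage: repeat until no nulls remain; a short pause between passes
# helps if the original failures were rate-limit related.
# while df["output"].isna().any():
#     df = retry_null_rows(df)
```

The key point is that only the null subset is re-sent, so rows that already succeeded keep their cached results untouched.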