LLM Node occasionally skipping lines during processing

Hi everyone,

I’ve been using the LLM Node in my pipeline and it’s been working well, but recently I’ve noticed that it occasionally skips or doesn’t properly process certain rows in my data. It happens randomly: most of the time everything works fine, but some rows just get missed or produce incomplete outputs.

Has anyone experienced this before? I’m wondering if it could be related to token limits, timeouts, or some configuration I might be missing. Any insights would be really helpful!

Hey @Jacob_SE, one thing you could do to test the values that are producing nulls or incomplete outputs is, on a separate branch, to turn on the “include errors” option on the output and then use a trial run to see what the output error actually is.
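
If it helps to quantify the problem as well, you could also isolate the rows that come back null, blank, or flagged as errors downstream of the node. Here’s a minimal PySpark sketch of that idea; the dataset paths and the `llm_output` / `llm_error` column names are placeholders for whatever your pipeline actually produces, so adjust them to your schema.

```python
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output

# Placeholder paths and column names -- swap in your pipeline's actual output schema.
@transform_df(
    Output("/Project/debug/llm_node_failures"),
    source=Input("/Project/datasets/llm_node_output"),
)
def compute(source):
    # Keep only the rows where the LLM output is missing, blank, or flagged with an
    # error, so you can inspect exactly which inputs are being "skipped".
    return source.filter(
        F.col("llm_output").isNull()
        | (F.trim(F.col("llm_output")) == "")
        | F.col("llm_error").isNotNull()
    )
```

Running that on the same branch as the trial run makes it easy to eyeball the failing inputs side by side with their errors.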

Hi @helenq

Thanks for the suggestion! I’ve already tried the “include errors” option, but unfortunately most errors just show up as “unknown error” with no actionable information. Even after resolving context limit issues, these random failures continue with no clear pattern.
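
For reference, a rough screen like the one below is one way to double-check that failing rows aren’t simply sitting near the context window. It’s only a sketch: `prompt_text`, the 128k window, and the ~4-characters-per-token ratio are placeholder assumptions, not actual LLM Node settings.

```python
from pyspark.sql import functions as F

# Rough screen for rows that might be near the model's context limit.
# `prompt_text`, the 128k window, and ~4 chars/token are placeholder assumptions --
# substitute your real input column, model window, and a proper tokenizer if available.
APPROX_CHARS_PER_TOKEN = 4
CONTEXT_WINDOW_TOKENS = 128_000

def flag_long_prompts(df):
    est_tokens = F.length(F.col("prompt_text")) / APPROX_CHARS_PER_TOKEN
    return (
        df.withColumn("est_tokens", est_tokens)
          .withColumn("near_context_limit", est_tokens > 0.9 * CONTEXT_WINDOW_TOKENS)
    )
```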

This issue is significantly affecting my operations:

  • Build times: 3-4 hours on average (I’ve had builds fail after running for 8 hours)

  • Substantial resource waste from failed runs

  • Extensive time spent on trial-and-error optimization with disappointing results

Interestingly, I rarely experienced these issues during the early LLM Node days with GPT-4o. These random failures seem to have emerged with newer model integrations, which makes me wonder if there’s a regression or compatibility issue.

Have you or the Palantir team identified any patterns for these “unknown errors”?