Does "Skip recomputing rows" cache the LLM results even if the build itself fails?

Hello!

My question is about the Skip recomputing rows feature in a Use LLM block. If the build itself fails, are the LLM results still cached, or not?

Our use case: we have 11M rows that we want to run entity extraction on. With our GPU this would take around 7 days to complete, so the build will time out. We're wondering whether we can simply keep retrying the build until all the LLM queries get cached, or whether we should find a smarter approach.

Hey @gustas, if the build fails, the Use LLM node won't cache anything. What has worked for people in the past is to feed in a subset of the rows, let that build succeed, then feed in the next subset (all with the cache turned on). Over time the cache fills up and you'll eventually be able to populate all 11M rows. Let me know if that makes sense!
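
If it helps to picture the batching step, here's a minimal Python sketch of the kind of filter you'd put upstream of the Use LLM node, so each build only sends one deterministic slice of the rows. The column name `doc_id`, the batch count, and the hashing approach are all assumptions for illustration, not part of the platform's API:

```python
import hashlib

NUM_BATCHES = 30      # assumption: ~11M rows / 30 ≈ 370k rows per build
CURRENT_BATCH = 0     # bump this (0, 1, 2, ...) after each successful build

def batch_of(row_id: str, num_batches: int = NUM_BATCHES) -> int:
    """Deterministically map a row ID to a batch index, so the same rows
    land in the same batch on every build and cached results are reused."""
    digest = hashlib.sha256(row_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_batches

# Stand-in data: in practice this would be your input dataset.
rows = [{"doc_id": f"doc-{i}", "text": "..."} for i in range(10)]

# Only the rows in the current batch get passed to the Use LLM step.
subset = [r for r in rows if batch_of(r["doc_id"]) == CURRENT_BATCH]
print(f"Processing {len(subset)} of {len(rows)} rows in batch {CURRENT_BATCH}")
```

The deterministic hash matters here: because a given row always falls into the same batch, re-running a batch after a partial failure only re-queries the rows that weren't cached yet.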

