Inconsistent Answers from LLM Blocks

hello! just had a couple of questions about the LLM Block's Entity Extraction functionality. I noticed that about 1/3 of the responses error out with "Cannot coerce to provided type". I was just wondering:

  1. Whether this is due to how the LLM's response is being parsed in the backend to produce the desired struct?
  2. Whether there's anything we can do on our end to mitigate this and get a higher percentage of valid extractions from the LLM?

Hi!

  1. Yep, this happens when the LLM's response doesn't conform to the provided output type, so the backend can't coerce it into the struct.
  2. We'd have to look at some sample input-output data to say more, but the general advice is to break the LLM node into multiple nodes if the output type is too wide or deep, so each node only has to produce a small, simple struct. Adding a few examples of correctly formatted output in the system prompt should help too.
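To make the failure mode concrete, here's a minimal sketch of what a coercion step like this typically looks like, assuming the backend parses the LLM's raw text as JSON and maps it onto the output type (the `Person` struct and field names here are hypothetical, just for illustration):

```python
import json
from dataclasses import dataclass
from typing import Optional

# Hypothetical narrow output type for a single extraction node.
@dataclass
class Person:
    name: str
    age: int

def coerce(raw: str) -> Optional[Person]:
    """Try to parse the LLM's raw output into the expected struct.

    Returns None when the output is non-conforming -- the same
    situation that surfaces as "Cannot coerce to provided type".
    """
    try:
        data = json.loads(raw)
        return Person(name=str(data["name"]), age=int(data["age"]))
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return None

# A conforming response parses cleanly...
print(coerce('{"name": "Ada", "age": 36}'))
# ...while extra prose wrapped around the JSON (a common LLM failure
# mode) breaks the parse, which is why few-shot examples of the bare,
# exact format in the system prompt tend to raise the success rate.
print(coerce('Sure! Here is the JSON: {"name": "Ada", "age": 36}'))
```

The narrower the struct, the less surface area there is for the model to drift from the format, which is the intuition behind splitting a wide extraction into several smaller nodes.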