"Use LLM" block returning null with reasoning models

I have a simple test prompt, as shown below. It works with most models, but mostly returns "null" with reasoning-based models: o3-mini and o1-mini always returned null, and Grok-3-Mini-Reasoning returned null for all but one test input.

Looking at the raw prompts, it doesn't seem like there's any output or attempt at all.

Am I missing any config inputs? How do I debug this? Thanks.
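One way to debug this (a sketch, not from the thread) is to reproduce the call outside the block with a bare chat-completions payload. A common failure mode with OpenAI's reasoning models is that a tool sends legacy sampling fields: o1-mini and o3-mini reject non-default `temperature` and use `max_completion_tokens` instead of `max_tokens`, which can surface as an opaque exception. The `build_request` helper below is mine, for illustration only:

```python
def build_request(model: str, prompt: str) -> dict:
    """Build a minimal chat-completions payload to test a model directly.

    Reasoning models (o1-mini, o3-mini) reject sampling parameters such
    as "temperature" and take "max_completion_tokens" rather than
    "max_tokens"; a tool that always sends the legacy fields may get an
    exception back instead of a completion.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if model.startswith(("o1", "o3")):
        # Reasoning tokens count against this limit too, so leave headroom.
        payload["max_completion_tokens"] = 2048
    else:
        payload["temperature"] = 0.0
        payload["max_tokens"] = 2048
    return payload

print(build_request("o3-mini", "ping"))
```

If the direct call succeeds with this shape but fails through the block, the block's request construction is the likely culprit.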

I turned on errors, and this is the error I'm getting:

```
{"ok":null,"error":"Encountered an unknown exception during completion"}
```

Update: Grok-3-Mini-Reasoning now returns non-null outputs for all test inputs.

Hello! I did a little digging, and this appears to be an error in our language model services. We're working with other internal dev teams to resolve it.

However, this is a good flag that we may need to show better error messages in this case.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.