Speeding up LLM processing in a Function or AIP Logic

Hi!

I am looking to speed up getting results back from a Large Language Model (LLM). I've tried both TypeScript Functions and AIP Logic: the TypeScript Functions fail with runtime errors, and AIP Logic is too slow.

The input is JSON, roughly 30k tokens.
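For context, one workaround I've been considering is splitting the large JSON payload into smaller chunks and issuing the per-chunk LLM calls concurrently, so latency is bounded by the slowest chunk rather than the sum of all calls. A minimal TypeScript sketch of the idea is below; `callLlm` is a hypothetical stand-in for the actual model invocation, not a real API:

```typescript
// Hypothetical stand-in for the real LLM call inside a Function;
// replace with your actual model invocation.
async function callLlm(chunk: unknown[]): Promise<string> {
  return `processed ${chunk.length} records`;
}

// Split a large array of records into fixed-size chunks so each
// request stays well under the model's context window.
function chunkRecords<T>(records: T[], chunkSize: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < records.length; i += chunkSize) {
    chunks.push(records.slice(i, i + chunkSize));
  }
  return chunks;
}

// Issue the per-chunk LLM calls concurrently instead of sequentially,
// so total latency is roughly that of the slowest single chunk.
async function processInParallel<T>(
  records: T[],
  chunkSize: number
): Promise<string[]> {
  const chunks = chunkRecords(records, chunkSize);
  return Promise.all(chunks.map((chunk) => callLlm(chunk)));
}
```

This is only a sketch under the assumption that the 30k-token JSON is a list of independent records that can be processed separately; if the records depend on each other, chunking changes the problem. Is there a better-supported pattern for this?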