I have a use case where I need to feed 500,000 objects into an AIP Logic function. The function will assess each object’s property values, calculate a score, and write the results back to a few columns.
I’m concerned about the computational feasibility and performance of this task. Is it realistic to process this many objects efficiently using AIP Logic functions in Foundry? If so, what are some best practices or optimizations to keep it from becoming too slow?
Any insights or recommendations would be greatly appreciated!
Can you give a little more detail on the kind of calculation you’re making to determine the “score”? In my experience, an LLM is most useful in this kind of flow for turning unstructured text into structured data, with the score then computed deterministically from those fields, rather than having the LLM attempt to produce a score directly.
Regardless of the prompt and scoring approach, however, I expect you might be better off using the Use LLM node in Pipeline Builder to handle this scale, rather than AIP Logic.
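To make the first suggestion concrete, here’s a rough sketch of what I mean by having the LLM extract structured fields and then scoring deterministically. This is plain Python rather than anything Foundry-specific, and the field names, weights, and thresholds are invented purely for illustration:

```python
from dataclasses import dataclass

# Hypothetical structured fields an LLM could extract from each object's
# free-text description; the names and weights below are made up for
# illustration only.
@dataclass
class ExtractedFeatures:
    severity: str          # e.g. "low" / "medium" / "high"
    mentions_outage: bool
    affected_users: int

def deterministic_score(features: ExtractedFeatures) -> float:
    """Combine the extracted fields into a score using fixed, auditable rules."""
    severity_points = {"low": 1, "medium": 3, "high": 5}.get(features.severity, 0)
    outage_points = 2 if features.mentions_outage else 0
    # Clamp the user-count contribution so a single field can't dominate.
    user_points = min(features.affected_users / 1000, 3)
    return severity_points + outage_points + user_points

# The LLM only produces the structured fields; the score itself is computed
# the same way every time.
print(deterministic_score(ExtractedFeatures("high", True, 2500)))  # 9.5
```

Keeping the LLM’s job limited to the extraction step also makes the scores reproducible and easy to audit, which matters at the scale you’re describing.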
Thanks for your feedback! We use a mix of strict calculations and subjective assessments to determine the scores. Some factors are based on clear numeric ranges, while others rely on keywords and context for a more nuanced judgment. For the keyword-based factors, we have the LLM read each object’s description and score it accordingly.
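To make the mix concrete, the strict factors are just fixed numeric-range lookups, roughly like the sketch below; the property name and thresholds here are placeholders rather than our real ones. It’s the keyword/context factors layered on top of this that we currently hand to the LLM.

```python
def strict_factor_score(response_time_ms: float) -> int:
    """Strict part of the score: a fixed numeric-range lookup.
    Property name and thresholds are placeholders, not our real ones."""
    if response_time_ms < 100:
        return 5
    if response_time_ms < 500:
        return 3
    if response_time_ms < 2000:
        return 1
    return 0

print(strict_factor_score(350))  # 3
```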
I’m not sure how to handle the subjective part (looking for keywords) without using an LLM. Do you have any suggestions for calculating those scores some other way?