Token Caching for AIP Logic Functions/Agents

Are there any plans to offer optional token caching for LLM calls?

Thanks

Hi Will :waving_hand:

Currently, we have implemented token caching for repeated identical prompts with temperature = 0. We’re actively discussing offering optional token caching more broadly, among other features designed to enhance the use of LLMs at Palantir. We can’t give any timelines, but stay tuned to Foundry Announcements for the latest offerings!
