Hello,
I’m working on optimizing performance in my Foundry Functions TypeScript V1 code by implementing caching to avoid redundant database calls within a single request. I have a question about the execution model and memory isolation between function invocations.
Context:
I have a service class that frequently calls `Objects.search().MyOntologyObject().allAsync()` and always looks at the first row to get some config data. Within a single function execution, this same data might be retrieved multiple times by different services, so I want to cache it.
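Here is a simplified sketch of the caching I have in mind (trimmed down from the real service class; the import paths follow the standard Functions V1 setup and `MyOntologyObject` is the generated type from my own ontology):

```typescript
// Simplified sketch of the static-property cache I'm considering.
import { Objects, MyOntologyObject } from "@foundry/ontology-api";

export class ConfigService {
    // Static property: this is the part I'm unsure about. Is it shared
    // between concurrent invocations of the same function version?
    private static cachedConfig: MyOntologyObject | undefined;

    public async getConfig(): Promise<MyOntologyObject | undefined> {
        if (ConfigService.cachedConfig === undefined) {
            const results = await Objects.search().MyOntologyObject().allAsync();
            // I only ever need the first row, which holds the config data.
            ConfigService.cachedConfig = results[0];
        }
        return ConfigService.cachedConfig;
    }
}
```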
My Concern:
I’m worried that static properties might be shared across different function invocations/requests, which could cause:
- Request A caches a value
- Request B gets the stale cached value from Request A instead of fresh data
- Race conditions if Request A modifies the cache while Request B is reading it
Questions:
- Are static class properties shared across concurrent invocations of the same function version in Foundry Functions?
- What is the isolation model between different executions of the same function version? Does it make a difference whether Request A and Request B invoke the same published function or two different ones?
- Does the behavior differ between Functions and OntologyEditActions?
- Is each function invocation guaranteed to run in its own isolated context, or might invocations share memory and static variables?
- What's the recommended pattern for request-scoped caching in Foundry Functions? (A sketch of what I'm currently considering follows below.)
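For context on that last question, this is the direction I'm leaning toward: dropping the static and keeping the cache on an instance that is constructed inside the Function, so it cannot outlive a single invocation. The `@Function()` decorator and imports are the standard V1 ones; `MyFunctions`/`doWork` are just placeholders for illustration:

```typescript
// Request-scoped alternative: the cache lives on an instance created per
// invocation, so nothing is shared across requests.
import { Function } from "@foundry/functions-api";
import { Objects, MyOntologyObject } from "@foundry/ontology-api";

class ConfigService {
    // Instance field instead of a static: scoped to this object's lifetime.
    private cachedConfig: MyOntologyObject | undefined;

    public async getConfig(): Promise<MyOntologyObject | undefined> {
        if (this.cachedConfig === undefined) {
            const results = await Objects.search().MyOntologyObject().allAsync();
            this.cachedConfig = results[0];
        }
        return this.cachedConfig;
    }
}

export class MyFunctions {
    @Function()
    public async doWork(): Promise<string> {
        // A fresh ConfigService per invocation means the cache cannot leak
        // between Request A and Request B, at the cost of wiring it through
        // every service that needs the config.
        const configService = new ConfigService();
        const config = await configService.getConfig();
        return config !== undefined ? "config found" : "no config";
    }
}
```

Is this the recommended approach, or is there a better-supported pattern for request-scoped caching?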