I have a function that calls an API route for an action using the OSDK to edit an ontology object. It then reads the edited value from the Ontology by ID, like so:
// this function calls fetch to edit the machine execution
await this.executeAction(body, 'upsert-machine');
// this function reads the same machine execution edited above then edits the same object also using fetch
await this.text2ActionInstance.upsertState(undefined, true, machineExecutionId, undefined, SupportedEngines.SALES);
It looks like the read inside this.text2ActionInstance.upsertState is getting a stale copy of the object. I'm going to continue troubleshooting to determine whether the error is in my code, but I would like to know what the eventual consistency guarantees are and, if this is an eventual consistency issue, how long it takes for changes to become visible.
With further testing this does appear to be an eventual consistency issue. Please advise whether I need to implement reads with a backoff policy as a workaround, whether there is another workaround, or whether this should not be an issue and there is possibly some other error in my code.
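In case a backoff policy is the answer, here is the rough shape of what I have in mind. This is only a sketch: `readWithBackoff` and the freshness check are my own placeholders, not OSDK APIs.

```typescript
// Generic retry helper: re-runs `read` until `isFresh` accepts the result
// or the attempt budget is exhausted. The delay doubles on each retry.
async function readWithBackoff<T>(
  read: () => Promise<T>,
  isFresh: (value: T) => boolean,
  maxAttempts = 5,
  initialDelayMs = 100,
): Promise<T> {
  let delay = initialDelayMs;
  let last: T = await read();
  for (let attempt = 1; attempt < maxAttempts && !isFresh(last); attempt++) {
    await new Promise((resolve) => setTimeout(resolve, delay));
    delay *= 2; // exponential backoff
    last = await read();
  }
  return last;
}
```

In practice `isFresh` would compare a field I just wrote (for example a version number or timestamp) against what the read returns, so the loop stops as soon as the edit is visible.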
Is there any kind of caching of object set queries that could be the root cause? Both functions execute this query:
// retrieve previous execution if there was one
const execution = Objects.search()
.machineExecutions()
.filter((execution) => execution.id.exactMatch(solution.id!))
.all()?.[0];
I was able to work around the issue by passing the retrieved ontology object to the second function, but this sucks. If someone can let me know what the expected behavior is, that would be great.
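For illustration, the shape of the workaround (all names and types here are placeholders, not the real OSDK API) was roughly this: read the object once, then thread that copy through instead of re-querying after the edit.

```typescript
// Placeholder type standing in for the real ontology object.
interface MachineExecution {
  id: string;
  state: string;
}

// Instead of having the second function re-query the object (and risk
// reading a stale copy), it accepts the already-loaded object directly.
function upsertState(execution: MachineExecution, newState: string): MachineExecution {
  return { ...execution, state: newState };
}

const loaded: MachineExecution = { id: "abc", state: "old" };
const updated = upsertState(loaded, "new");
```

It works, but it couples the two functions together, which is why I'd prefer to understand the expected read-after-write behavior instead.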
The Actions endpoint you are hitting should apply edits synchronously if you are on the latest object storage infrastructure. If not, there may be some delay before the edits become visible. You can tell whether you are on the latest infrastructure by checking the Datasources tab in the Ontology Manager app: if you can see a graph view of the flow of data, you are on the new infrastructure.
Even then, the legacy TypeScript Functions runtime has caching that will prevent you from seeing different versions of an object within a single execution. The ontology loading APIs and runtime were designed to be extremely friendly and performant out of the box for non-developers, and at the time, the ability to perform edits mid-execution did not exist. Unfortunately, what you are experiencing is a trade-off of that design.
Since the OSDK methods for loading objects do not perform any persistent caching, this should not be an issue with Python Functions or TypeScript V2 Functions. (The latter does not yet support external sources, but that is actively being worked on.)
Again, apologies for the frustration here. As a team, we are very much aligned on designing our new Function templates and experiences around providing greater flexibility, control, and power to our more technical users.
Thanks Marshall. I am on object storage v2. It sounds like what you are saying is: if I am using TypeScript V1 Functions that also perform a fetch request against an OSDK route, and both edit the same object, the caching layer in TS Functions V1 will prevent me from seeing the edits made through my OSDK requests, because that cache is never updated by out-of-band edits? Is that correct? I understand the history of Functions V1, thanks for clarifying.
TypeScript V1 does not use the OSDK or make requests through the API Gateway; it predates both. The legacy runtime communicates with the relevant Ontology services directly and then caches the results.
TypeScript V2 Functions use the OSDK APIs. You can fetch objects through an OSDK client, and the TypeScript V2 runtime will not perform any kind of caching behind your back. We leave that kind of performance improvement up to the developer.