I’m currently building a compute module with our ontology and am having trouble running it. The module boots up successfully, but the main function, which is an async call, fails at runtime. I need to make several async calls within my main function, as the code would run far too slowly if I didn’t. I constantly receive this error:
    raise RuntimeError('cannot schedule new futures after '
RuntimeError: cannot schedule new futures after interpreter shutdown
I believe this is because the event loop the container creates closes while my async function is still executing, causing a race condition. I tried defining the function itself as async, but that didn’t work. I also tried the following, to no avail:
import asyncio
import atexit
from concurrent.futures import ThreadPoolExecutor

from compute_modules.annotations import function  # assuming this is the right SDK import for @function
from .main_function import main_async_function

# Shared loop/executor created once at import so every invocation reuses them
GLOBAL_EXECUTOR = ThreadPoolExecutor()
GLOBAL_LOOP = asyncio.new_event_loop()
asyncio.set_event_loop(GLOBAL_LOOP)
atexit.register(GLOBAL_EXECUTOR.shutdown)

@function
def predict_resupply(context, event):
    # Drive the async entry point to completion on the shared loop
    coroutine = main_function_wrapper(context, event)
    return GLOBAL_LOOP.run_until_complete(coroutine)

async def main_function_wrapper(context, event):
    return await main_async_function(event["property"])
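
Would something like this be the right pattern instead: spinning up a fresh event loop per invocation with asyncio.run, so no shared loop can be torn down underneath a later call? (Untested sketch, with the same SDK import assumption as above:)

import asyncio

from compute_modules.annotations import function  # same SDK import assumption as above
from .main_function import main_async_function

@function
def predict_resupply(context, event):
    # asyncio.run creates a brand-new event loop for this call and
    # closes it when the coroutine finishes, so nothing outlives
    # the invocation.
    return asyncio.run(main_async_function(event["property"]))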
Is there a pattern people have used for handling async calls in compute modules, or is this simply not supported? My alternative idea is to put my function behind a FastAPI endpoint, but I have no idea how I would hook that into a compute module.
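
In case it clarifies the fallback, here’s roughly the shape I pictured (untested sketch; the route path and request model are placeholders I made up):

from fastapi import FastAPI
from pydantic import BaseModel

from .main_function import main_async_function

app = FastAPI()

class ResupplyRequest(BaseModel):
    # Placeholder payload mirroring the event["property"] field
    property: str

@app.post("/predict-resupply")  # placeholder route
async def predict_resupply(req: ResupplyRequest):
    # FastAPI owns the running event loop, so awaiting directly is safe
    return await main_async_function(req.property)

Thanks all!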