Python Function with OR-Tools CP-SAT works in live preview but consistently fails when called from a Workshop function-backed variable
I have a Python Function that uses Google's OR-Tools CP-SAT solver to optimize the assignment of personnel to requirements (a classic assignment problem). The function works in live preview in platform VSCode every time, but consistently fails when called from a Workshop function-backed string variable with this error:
```
RawClientError(hyper_util::client::legacy::Error(SendRequest, hyper::Error(Io,
Custom { kind: UnexpectedEof, error: "peer closed connection without sending TLS
close_notify" })))
```
It has succeeded from Workshop a handful of times, but the vast majority of calls fail.
What I’ve verified:
- Function works consistently in live preview in platform VSCode (tested dozens of times)
- Function fails almost every time it is called from a Workshop function-backed variable
- The error never reaches my try/except block, suggesting the process dies before Python can handle it
- Replacing the OR-Tools optimizer with a pure-Python greedy algorithm makes the function work reliably every time from Workshop
- The issue is specifically OR-Tools: all other code (ontology queries, scoring, data loading) works fine from Workshop
- OR-Tools is installed via PyPI in the Libraries panel
- Python 3.10 environment
- Problem size is small (18 requirements, ~60 eligible candidates after filtering)
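The greedy replacement mentioned above is pure Python along these lines (simplified sketch; a single descending-score pass, no native code involved):

```python
def greedy_assign(scores):
    """scores: {(req, cand): score} for eligible pairs only.
    Pure-Python fallback: walk pairs in descending score order,
    skipping any requirement or candidate already used."""
    taken_req, taken_cand, assignment = set(), set(), {}
    for (r, c), s in sorted(scores.items(), key=lambda kv: -kv[1]):
        if r not in taken_req and c not in taken_cand:
            assignment[r] = c
            taken_req.add(r)
            taken_cand.add(c)
    return assignment
```

It is not optimal (greedy matching can leave score on the table), but it never crashes the executor, which is what isolates the problem to OR-Tools.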
My theory is that OR-Tools' C++ native bindings are crashing the function executor process. The fact that the error bypasses try/except suggests a segfault or other process-level crash rather than a Python exception.
Questions:
- Has anyone successfully used OR-Tools (or other packages with C++ native bindings) in published Python Functions called from Workshop?
- Is there a known limitation with native extensions in the Python function executor when called from Workshop?
- Is there a way to configure the function executor to isolate or restart between calls?
- Any alternative solver recommendations that work reliably in published Python Functions called from Workshop? The solver needs to handle:
  - Classic assignment problem (maximize total score, one-to-one matching)
  - Hard constraints (qualification gates, capacity caps)
  - Multi-objective optimization (quality vs. fairness/distribution balance)
  - Scale to 500+ requirements x 1,000+ candidates
  - Extensible for additional constraint types in the future
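One alternative I'm considering for the last question, in case it helps others: the one-to-one core (without capacity caps above 1 or the fairness term) fits SciPy's `linear_sum_assignment`, with hard constraints masked by a large negative score. This is an untested sketch for my case, not a confirmed Workshop-safe recipe, since SciPy is also compiled code:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

INELIGIBLE = -10**9  # large negative score encodes hard qualification gates

def lsa_assign(score_matrix):
    """score_matrix: 2-D array with INELIGIBLE wherever a pair fails a hard constraint.
    Returns {requirement_row: candidate_col} for the max-total-score matching."""
    m = np.asarray(score_matrix, dtype=float)
    rows, cols = linear_sum_assignment(m, maximize=True)
    # Drop any assignment that fell on a masked (ineligible) cell.
    return {int(r): int(c) for r, c in zip(rows, cols) if m[r, c] > INELIGIBLE}
```

Capacity caps above 1 and the multi-objective fairness term would still need a proper LP/MIP formulation, so this only covers part of my requirements.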
Any guidance appreciated.