Investigating High Request Overhead in Developer Console Compute Module API Calls

I’m calling a backend service configured through the Developer Console API, which triggers a Compute Module function under the hood. Currently, the function returns a simple JSON response:

{"success": true}

Here’s what I observed:

  • 1st API call: total time 581 ms, compute function 38 ms → overhead ≈ 543 ms

  • 2nd API call: total time 1259 ms, compute function 40 ms → overhead ≈ 1219 ms

I’m trying to understand why there’s such a large gap between the total response time and the actual compute execution time.
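For reference, the numbers above were gathered roughly like this: time the full round trip on the client and subtract the compute time the service reports. A minimal sketch (the real `call` would be an HTTP request to the Developer Console endpoint; the `time.sleep` stand-in is only so the sketch runs without network access):

```python
import time

def measure_overhead(call, reported_compute_ms):
    """Time a blocking request callable and subtract the compute time
    the service reports, leaving queue/network/infrastructure overhead."""
    start = time.perf_counter()
    call()
    total_ms = (time.perf_counter() - start) * 1000
    return total_ms, total_ms - reported_compute_ms

# In practice `call` wraps the real request, e.g.:
#   call = lambda: requests.post(API_URL, json=payload, headers=auth_headers)
# where API_URL and auth_headers are placeholders for your endpoint.
# A sleep stands in here so the sketch is self-contained:
total, overhead = measure_overhead(lambda: time.sleep(0.05),
                                   reported_compute_ms=10)
```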

Is this overhead expected by design, or are there ways to optimize or reduce it?

Your Compute Module requests are submitted to an internal job queue (each Compute Module maintains its own queue), so part of the latency you observe comes from enqueue/dequeue operations and general infrastructure overhead on top of the function's execution time.
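As a toy model (not the actual internals) of why queueing inflates total latency even when compute time stays constant: with a single worker draining the queue, a request that arrives while an earlier job is still running waits its turn, so its total time grows while its compute time does not.

```python
import threading, queue, time

def worker(jobs):
    # Single worker dequeues jobs and runs a ~40 ms "compute function".
    while True:
        item = jobs.get()
        if item is None:
            break
        submitted, done = item
        time.sleep(0.04)  # stand-in for the ~40 ms compute step
        done["total_ms"] = (time.perf_counter() - submitted) * 1000
        done["event"].set()

jobs = queue.Queue()
threading.Thread(target=worker, args=(jobs,), daemon=True).start()

# Three requests arriving back to back.
results = []
for _ in range(3):
    done = {"event": threading.Event()}
    jobs.put((time.perf_counter(), done))
    results.append(done)

for done in results:
    done["event"].wait()
jobs.put(None)  # shut the worker down

# The first request sees ~40 ms total; later ones also pay queue wait
# time, so their totals grow even though compute time is constant.
totals = [r["total_ms"] for r in results]
```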

Thanks, @tpark - that makes sense.

Quick follow-up: we’re building a high-throughput API (~60 RPS) that calls a SOAP API under the hood. Right now it’s implemented as a Compute Module function triggered through the Developer Console API.

Given that setup:

  • Is a Compute Module the right approach for this kind of sustained traffic, or is there a better option?

  • We mainly used Compute Modules because they can make external (SOAP) calls — is there a better pattern for handling external API calls at this scale?

  • Is there any way to reduce the overhead for near real-time scenarios?
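One mitigation we're considering on our side, sketched below as a general pattern rather than anything Foundry-specific: since per-call overhead is a few hundred milliseconds, sustaining ~60 RPS means keeping many requests in flight concurrently (by Little's law, roughly rate × latency workers, e.g. 60 RPS × 0.6 s ≈ 36). The `call_backend` stub stands in for the real SOAP-backed endpoint so the sketch runs standalone:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_backend(payload):
    # Stand-in for the real call, e.g.
    #   requests.post(API_URL, json=payload, headers=auth_headers)
    # against the Developer Console endpoint (names are placeholders).
    time.sleep(0.05)  # shortened latency so the sketch runs quickly
    return {"success": True, "payload": payload}

start = time.perf_counter()
# 100 calls at 50 ms each would take ~5 s sequentially; with 20
# workers the wall-clock time is roughly (100 / 20) * 0.05 s.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(call_backend, range(100)))
elapsed = time.perf_counter() - start
```

This amortizes the fixed per-request overhead across concurrent calls rather than reducing it, so it helps throughput but not single-request latency.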

Appreciate any guidance or best practices you can share.