Using LLMs in a Streaming Pipeline

Hi all,

I am running into a blocker when calling an LLM from a streaming pipeline… This is how it is set up:

  • A TypeScript function is the only kind of function that can call Palantir LLMs (Python is not yet set up for this)
  • The TypeScript function gets wrapped in an SDK
  • A Python function calls the TypeScript function using the SDK (this is because only Python functions can be used in streaming pipelines / Pipeline Builder)
  • The issue lies with using the SDK within the Python function in the streaming pipeline…
  • I believe it could be that:
    • when the function is deployed to the pipeline, there is no connection from the pipeline’s container (I think it uses Apache Flink) to the LLM API service
    • or the environment within the Flink container does not have the correct credentials to connect to the Foundry client
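For concreteness, the call chain inside the streaming UDF looks roughly like the sketch below. Everything here is a hypothetical stand-in (the stub client, the `call_llm_function` name, and the exception types are invented for illustration; the real generated SDK client will differ) — the point is only that the two hypotheses above would surface differently: a network-level failure if the Flink container has no route to the function service, versus an auth failure if the container environment is missing Foundry credentials.

```python
# Hypothetical sketch only: stand-in names, not the real Foundry SDK.

class NetworkFailure(Exception):
    """Stand-in for a connection error: no route from the pipeline container."""

class AuthFailure(Exception):
    """Stand-in for a 401/403: missing or invalid credentials in the container."""

class StubSdkClient:
    """Stub for the generated SDK wrapping the TypeScript LLM function."""

    def __init__(self, token, reachable=True):
        self.token = token
        self.reachable = reachable

    def call_llm_function(self, prompt):
        if not self.reachable:
            raise NetworkFailure("no connection from the pipeline container to the LLM API service")
        if not self.token:
            raise AuthFailure("no credentials to connect to the Foundry client")
        return "echo: " + prompt  # a real call would return the LLM completion

def run_udf(client, prompt):
    """What the Python UDF in the streaming pipeline effectively does."""
    try:
        return client.call_llm_function(prompt)
    except NetworkFailure:
        return "network-failure"     # hypothesis 1: no egress from the Flink container
    except AuthFailure:
        return "credential-failure"  # hypothesis 2: missing Foundry credentials
```

Checking which of the two failure modes the pipeline logs show (connection refused/timeout vs. an authorization error) would narrow down which hypothesis applies.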

If anyone could offer guidance on this topic, that would be great!

Thank you!

Hi! Just to clarify, are you calling the Python function from your streaming pipeline as a user-defined function that you’ve imported to the pipeline?

Although we’re working towards broadening which features are supported for Python functions in Pipeline Builder, using SDKs in your function (e.g. to call a TypeScript function) is not yet supported.

As an alternative approach, have you considered using function-backed models? There are some instructions here for how to set one up in TypeScript and use it from the Use LLM board in Pipeline Builder. Hope this is useful!