Hosting an LLM on Foundry

I’m sure someone else has encountered this issue, but I’ve had some headaches trying to get an LLM hosted on Foundry. I’ve created a Docker container (just an instance of Llama 3.2 3B Instruct to test the functionality) and a model adapter to access the API within the container, but I keep getting stonewalled when trying to deploy it. The container has been successfully pushed to Foundry as well. Whenever I start the deployment process, here are the errors I get:

[Screenshot: deployment error messages]

If anyone else has worked around these issues I’d be extremely grateful for any advice. Thanks!

These two pages from the documentation may be helpful:

https://www.palantir.com/docs/foundry/transforms-python/container-overview#image-requirements

https://www.palantir.com/docs/foundry/integrate-models/container-models-faq

In particular, it seems like the error refers to the requirement that the user for custom Docker images must be non-root and numeric:

  1. The image has a numeric userID.
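For what it’s worth, here’s a minimal sketch of a Dockerfile that satisfies that requirement. Everything other than the `USER` directive is a placeholder (the base image, UID, port, and serving command are my assumptions, not Foundry specifics); the key point is that `USER` must be set to a numeric UID, not a username:

```dockerfile
# Hypothetical base image; use whatever your Llama-serving stack needs.
FROM python:3.11-slim

# ... install your model server and copy model weights/adapter here ...

# Create an unprivileged user with an explicit numeric UID (5000 is arbitrary).
RUN useradd --uid 5000 --create-home llama-svc

# Foundry requires a non-root, numeric user. `USER llama-svc` would resolve
# to the same account but would not pass the image check; `USER 5000` does.
USER 5000

EXPOSE 8000

# Placeholder entrypoint; replace with your actual serving command.
CMD ["python", "-m", "http.server", "8000"]
```

If I remember right, the numeric requirement comes from Kubernetes: `runAsNonRoot` can only be enforced when the image declares a numeric UID, since a username can’t be verified as non-root at admission time.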

One quick thought: Have you verified that you have the necessary permissions? In my experience, deploying Docker images for certain AI tools often requires specific access rights, so it might be worth double-checking if that applies in this situation.