I’m sure someone else has encountered this issue, but I’ve had some headaches trying to get an LLM hosted on Foundry. To test the functionality, I created a Docker container running Llama 3.2 3B Instruct, along with a model adapter to access the API inside the container, and the container has been pushed to Foundry successfully. But I keep getting stonewalled when I try to deploy it. Whenever I start the deployment process, here are the errors I get:
One quick thought: Have you verified that you have the necessary permissions? In my experience, deploying Docker images for certain AI tools often requires specific access rights, so it might be worth double-checking if that applies in this situation.