Is there any place to find the exact call made to LLM providers (especially when using tools)? For example, I want to know:
What the chain-of-thought prompt strategy actually changes in each case
What the exact descriptions of the tools are (and whether or not I can change them)
As noted in https://www.anthropic.com/research/building-effective-agents, it’s often quite important from the developer’s perspective to be able to tune the exact descriptions of things for a particular use case.
When you use the built-in LLMs in Pipeline Builder, the output is essentially the raw API output from OpenAI (or whichever model provider you're using). That is to say, Foundry will not give you any insight beyond what the provider's API spec exposes. However, you can import your own models into Foundry, and if those models include the ability to return chain-of-thought (specified in the API call), that may help you. One way of doing this is with a Compute Module, which was demonstrated at DevCon by connecting Foundry to Cohere's API: https://www.youtube.com/watch?v=no-dHZ3pRDo&ab_channel=PalantirDevelopers
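For context, here is a minimal sketch (in Python, using the OpenAI SDK) of what a raw chat-completions request with a tool definition looks like; this is the layer where tool names and descriptions live. To be clear, this is not the exact payload Foundry constructs. The model name, tool name, and description below are illustrative assumptions.

```python
# Minimal sketch of an OpenAI chat.completions call with a tool definition,
# to show where tool descriptions sit in the raw API payload.
# NOT the exact request Foundry builds; the model name, tool name, and
# description are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather in Denver?"},
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                # This description string is the text you would want to tune
                # for a particular use case.
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
)

# If the model decides to call the tool, the call shows up here:
print(response.choices[0].message.tool_calls)
```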
Another thing to consider: you could have the model output its chain-of-thought reasoning through prompt engineering. Essentially, ask the LLM how it went about solving a specific problem and view that output in AIP by storing the LLM's answer as a variable. I wonder if AIP Agent Studio will provide some of this functionality natively…
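If it helps, here is a rough sketch of that prompt-engineering approach, again in Python with the OpenAI SDK. In Pipeline Builder or Logic the prompt text would live in the block's configuration rather than in code; the model name, question, and wording are just illustrative.

```python
# Rough sketch: ask the model to spell out its reasoning before the final
# answer, then keep the whole response around so you can inspect it.
from openai import OpenAI

client = OpenAI()

question = "Which of these transactions look fraudulent, and why?"  # hypothetical task

prompt = (
    "Solve the following problem. First, under a heading 'Reasoning:', "
    "explain step by step how you arrived at your answer. "
    "Then, under a heading 'Answer:', give only the final answer.\n\n"
    f"{question}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# Store the full reply (reasoning + answer) as a variable you can inspect later.
llm_answer = response.choices[0].message.content
print(llm_answer)
```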
Sorry for the confusion; this is much more straightforward if we're just talking about AIP Logic. In the debugger, there should be a “Show Raw” button that you can use to see the System and User prompts that go straight to OpenAI. Let me know if that resolves the issue.