Tool calling with agent studio (streaming) that can be accessed by frontend

Hi,

I have a slightly complicated situation with Agent Studio. In particular, I'd like to use it to replace some streaming logic I already have working against OpenAI.

The logic I've got lets me make tool calls with GPT-4o while streaming, render them in my frontend chat as they arrive, and use them to make stateful changes on the frontend.

I'd love to know how to make this happen with Agent Studio as the backend rather than calling OpenAI directly. This would also be useful because I could then do RAG across my Ontology :slight_smile:

Here’s my code:

    const stream = await openai.chat.completions.create({
        model: "gpt-4o",
        messages: messages,
        tools: tools,
        tool_choice: "auto",
        stream: true,
    });

    let functionCall = null;

    for await (const chunk of stream) {
        const delta = chunk.choices[0]?.delta;
        if (!delta) continue;

        // Handle content (natural language): forward each delta as it arrives
        if (delta.content) {
            res.write(`data: ${JSON.stringify({ content: delta.content })}\n\n`);
        }

        // Handle tool calls: the first chunk carries the id and name,
        // later chunks stream the arguments in fragments
        if (delta.tool_calls) {
            const call = delta.tool_calls[0];
            if (!functionCall) {
                functionCall = {
                    id: call.id,
                    name: call.function?.name || '',
                    arguments: call.function?.arguments || ''
                };
            } else {
                functionCall.arguments += call.function?.arguments || '';
            }
            res.write(`data: ${JSON.stringify({ tool_calls: delta.tool_calls })}\n\n`);
        }

        // Handle completion of tool calls
        if (chunk.choices[0]?.finish_reason === 'tool_calls') {
            res.write(`data: ${JSON.stringify({ complete_tool_call: functionCall })}\n\n`);
        }
    }

    // Signal completion so the frontend's '[DONE]' check fires
    res.write('data: [DONE]\n\n');
    res.end();
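For reference, the SSE framing the handler relies on can be factored into a tiny helper. This is a minimal sketch (the function name and the Express header comments are mine, not from the original post):

```typescript
// Produce one Server-Sent Events frame: "data: <json>\n\n".
function sseEvent(payload: unknown): string {
    return `data: ${JSON.stringify(payload)}\n\n`;
}

// Typical SSE response headers the handler assumes (Express-style):
// res.setHeader("Content-Type", "text/event-stream");
// res.setHeader("Cache-Control", "no-cache");
// res.setHeader("Connection", "keep-alive");
```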

Where this corresponds on the frontend to rendering of tool calls as they arise which then changes some states on the frontend:

    const response = await fetch('endpoint', {
        method: 'POST',
        headers: {
            'Authorization': `Bearer ${token}`,
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({
            message: newMessage,
            currentStep
        })
    });

    if (!response.ok) {
        const errorData = await response.json();
        throw new Error(errorData.error || `HTTP error! status: ${response.status}`);
    }

    const reader = response.body?.getReader();
    if (!reader) throw new Error('No reader available');

    // Create the decoder once and pass { stream: true } so multi-byte
    // characters split across chunks decode correctly
    const decoder = new TextDecoder();
    let accumulatedContent = '';

    while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        const chunk = decoder.decode(value, { stream: true });
        const lines = chunk.split('\n');

        for (const line of lines) {
            if (!line.startsWith('data: ')) continue;

            const data = line.slice(6);
            if (data === '[DONE]') break;

            try {
                const parsed = JSON.parse(data);

                // Handle regular content
                if (parsed.content) {
                    accumulatedContent += parsed.content;
                    setMessages(prev => {
                        // Replace the last message with a copy instead of
                        // mutating React state in place
                        const newMessages = [...prev];
                        const last = newMessages[newMessages.length - 1];
                        newMessages[newMessages.length - 1] = { ...last, content: accumulatedContent };
                        return newMessages;
                    });
                }

                // Handle completion of a tool call
                if (parsed.complete_tool_call) {
                    try {
                        const completedCall = parsed.complete_tool_call;
                        if (completedCall?.name && completedCall?.arguments) {
                            const toolCall: ToolCallWithStatus = {
                                type: "function",
                                function: {
                                    name: completedCall.name,
                                    arguments: JSON.parse(completedCall.arguments)
                                },
                                status: 'pending'
                            };

                            setMessages(prev => {
                                const newMessages = [...prev];
                                const last = newMessages[newMessages.length - 1] as Message & { toolCalls?: ToolCallWithStatus[] };
                                const toolCalls = [...(last.toolCalls ?? [])];
                                const toolCallExists = toolCalls.some(
                                    existing =>
                                        existing.function.name === toolCall.function.name &&
                                        JSON.stringify(existing.function.arguments) === JSON.stringify(toolCall.function.arguments)
                                );
                                if (!toolCallExists) {
                                    toolCalls.push(toolCall);
                                }
                                newMessages[newMessages.length - 1] = { ...last, toolCalls };
                                return newMessages;
                            });
                        }
                    } catch (error) {
                        console.error('Error processing complete tool call:', error);
                    }
                }
            } catch (e) {
                console.error('Error parsing JSON:', e);
            }
        }
    }
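One caveat with the reader loop: reader.read() yields raw network chunks, not complete SSE lines, so a `data:` line can be split across two chunks and fail JSON.parse. A minimal sketch of a line buffer that handles this (the function name is mine, not from the original code):

```typescript
// Returns a feed function that accumulates raw chunks, splits on
// newlines, keeps any trailing partial line for the next call, and
// yields only completed "data: ..." payloads.
function createSseParser(): (chunk: string) => string[] {
    let pending = "";
    return (chunk: string): string[] => {
        pending += chunk;
        const lines = pending.split("\n");
        pending = lines.pop() ?? ""; // hold back the incomplete tail
        return lines
            .filter((line) => line.startsWith("data: "))
            .map((line) => line.slice(6));
    };
}
```

Each decoded chunk goes through the feed function, and only whole events come back out, even when the server's `data: ...\n\n` frame straddles a chunk boundary.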

Hey!

We'll get someone with more understanding to answer, but have you looked at the API docs here: https://www.palantir.com/docs/foundry/api/aip-agents-v2-resources/sessions/streaming-continue-session/? I think you can use that API to do this.

Hey!

We currently support streaming AIP Agent responses via the Foundry APIs in preview mode. Once you've created an Agent in Agent Studio, you can start a new conversation session using createSession and then get streamed responses in that session using streamingContinueSession.

Unfortunately these APIs do not support exposing the Agent's tool calls yet; they currently just stream back the final answer from the Agent.
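In the meantime, for the final-answer stream, a rough sketch of what the preview call could look like from Node or the browser. The URL path, the `preview=true` query parameter, and the request body shape here are assumptions based on the linked docs page, not a confirmed contract; check the streamingContinueSession docs for the exact request format:

```typescript
// Hypothetical URL builder for the AIP Agents preview streaming endpoint.
// The path segments and query parameter are assumptions -- verify against
// the API docs before use.
function streamingContinueSessionUrl(
    hostname: string,
    agentRid: string,
    sessionRid: string,
): string {
    return `https://${hostname}/api/v2/aipAgents/agents/${agentRid}` +
        `/sessions/${sessionRid}/streamingContinueSession?preview=true`;
}

// Sketch of reading the streamed answer; the `userInput` body field is an
// assumption. Decoded text is handed to the caller via onText.
async function streamAgentResponse(
    hostname: string,
    token: string,
    agentRid: string,
    sessionRid: string,
    userInput: string,
    onText: (text: string) => void,
): Promise<void> {
    const response = await fetch(
        streamingContinueSessionUrl(hostname, agentRid, sessionRid),
        {
            method: "POST",
            headers: {
                "Authorization": `Bearer ${token}`,
                "Content-Type": "application/json",
            },
            body: JSON.stringify({ userInput: { text: userInput } }),
        },
    );
    if (!response.ok || !response.body) {
        throw new Error(`HTTP error! status: ${response.status}`);
    }
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        // Preview API streams the final answer only -- no tool call events.
        onText(decoder.decode(value, { stream: true }));
    }
}
```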