OSDK security does not work with LLM proxies

Below is a client I created to work with LLM proxies in Foundry. I get a 403 Forbidden when calling the proxy with an access token created by my OSDK client. Is this a known issue? Are personal access tokens required for LLM proxies? If so, can you please fix this?

import {
  SupportedFoundryClients,
  type OpenAIService,
} from '@codestrap/developer-foundations-types';
import OpenAI from 'openai';
import { foundryClientFactory } from '../factory/foundryClientFactory';
import type { ChatCompletionCreateParamsStreaming } from 'openai/resources/chat';
import type { RequestOptions } from 'openai/core';
import type { ResponseCreateParamsStreaming } from 'openai/resources/responses/responses';

// Add type definitions for the OpenAI response here, or in a separate file and import them, to ensure type safety when working with the API response data.
export function makeOpenAIService(): OpenAIService {
  const { getToken, url, ontologyRid } = foundryClientFactory(
    process.env.FOUNDRY_CLIENT_TYPE || SupportedFoundryClients.PRIVATE,
    undefined,
  );

  return {
    // TODO code out all methods using OSDK API calls
    completions: async (
      body: ChatCompletionCreateParamsStreaming,
      options?: RequestOptions,
    ) => {
      const token = await getToken();
      const client = new OpenAI({
        baseURL: `${url}/api/v2/llm/proxy/openai/v1`,
        apiKey: token,
      });

      const stream = await client.chat.completions.create(body, options);

      let text = '';
      for await (const chatCompletionChunk of stream) {
        text += chatCompletionChunk.choices[0]?.delta?.content ?? '';
      }
      return text;
    },
    responses: async (
      body: ResponseCreateParamsStreaming,
      options?: RequestOptions,
    ) => {
      const token = await getToken();
      const client = new OpenAI({
        baseURL: `${url}/api/v2/llm/proxy/openai/v1`,
        apiKey: token,
      });

      // Responses API streaming emits semantic events (delta, completed, error, etc.)
      const stream = await client.responses.create(
        { ...body, stream: true },
        options,
      );

      let text = '';

      for await (const event of stream) {
        if (event.type === 'error') {
          throw new Error(`OpenAI API error: ${event.code} - ${event.message}`);
        }

        if (event.type === 'response.output_text.delta') {
          text += event.delta ?? '';
        }
      }

      return text;
    },
  };
}

Scopes I am passing:

  const scopes: string[] = [
    'api:use-ontologies-read',
    'api:use-ontologies-write',
    'api:use-admin-read',
    'api:use-connectivity-read',
    'api:use-connectivity-execute',
    'api:use-orchestration-read',
    'api:use-mediasets-read',
    'api:use-mediasets-write',
    'api:use-aip-agents-read',
    'api:use-streams-read',
    'api:use-sql-queries-read',
  ];

Hello,

Can you add the scope api:use-language-models-execute to your request when generating a token? You should not need to use a personal access token, no.

Let me know if this does not fix things for you.

I tried, but I still get a 403 Forbidden. My guess is the OSDK app is not set up to include this permission (there is no option to enable it under platform SDK), hence it is not honoring the scope assertion in my request.

Hey @CodeStrap - you’re right, this is a bug on our side in how we are computing the scope. We have identified the bug and are working on a fix now, thank you for flagging this.


Amazing, ty! LMK when a fix is deployed so I can mark this as solved.