Introducing AIP Accelerate [Beta], your notebook environment for LLM workflows.
AIP Accelerate is a notebook environment for building large language model (LLM) powered workflows. Available the week of June 3 across enrollments, AIP Accelerate enables you to rapidly iterate on LLM prompts that leverage your data in its various forms, whether a dataset imported from Foundry or a file on your computer. With AIP Accelerate’s provided library of LLM operations, you can build complex chains of thought, interpret uploaded images, and generate LLM-powered insights. With support for the latest generative AI models, AIP Accelerate is a sandbox for wielding the capabilities of LLMs in investigative business workflows.
AIP Accelerate landing page.
An LLM’s response quality depends heavily on the context it is given. AIP Accelerate enables you to carefully manage that context using context threads. Context threads are one of the primary components of Accelerate and denote a chain of information provided to a large language model. Cells pass information downward, via a context thread, to their sub-cells. Workflow builders can use context threads to break out of the linear-chat motif and engineer workflows that are not limited to a single context path. Furthermore, AIP Accelerate leverages primitives within the platform to enable operations on various forms of structured and unstructured data (such as tabular data, PDFs, or image files) for complex analysis.
AIP Accelerate context thread where the child LLM cell inherits the context of its parent, continuing the conversation.
In the image above, we have a two-cell context thread. We can continue nesting cells to drill into a conversation with an LLM, or branch off after any cell and go down a different path.
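The nesting and branching behavior described above can be sketched as a simple tree of cells, where each cell's context thread is the chain of prompts and responses from the root down to that cell. This is a conceptual illustration only; the `Cell`, `branch`, and `context_thread` names are hypothetical and not part of the AIP Accelerate API.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Cell:
    """A hypothetical cell: one prompt/response pair, with optional sub-cells."""
    prompt: str
    response: str = ""
    children: list[Cell] = field(default_factory=list)

    def branch(self, prompt: str) -> Cell:
        """Create a sub-cell that will inherit this cell's context."""
        child = Cell(prompt)
        self.children.append(child)
        return child

    def context_thread(self, target: Cell) -> list[str] | None:
        """Collect the chain of messages from this root down to `target`."""
        if self is target:
            return [self.prompt, self.response]
        for child in self.children:
            tail = child.context_thread(target)
            if tail is not None:
                return [self.prompt, self.response] + tail
        return None


# Two branches off the same parent: each inherits the root's context,
# but neither sees its sibling's conversation.
root = Cell("Summarize the quarterly sales dataset.")
root.response = "Sales rose 12% quarter over quarter."
branch_a = root.branch("Which region drove the increase?")
branch_b = root.branch("Draft an executive summary.")
```

Here `root.context_thread(branch_a)` yields only the root exchange plus branch A's prompt, illustrating how branching escapes a single linear chat history.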
Secure and customizable AI-powered analysis
The key features of AIP Accelerate include:
- User-friendly interface: AIP Accelerate has an intuitive interface that makes it easy for workflow builders to construct complex, multi-dimensional analysis using LLMs.
- Generative AI models: Customize which available LLMs to use when executing both text- and vision-based tasks.
- Operate on real data: Deploy LLMs over data within the platform, such as datasets or mediasets (images or PDF documents).
- Share results: Save analysis results as a new dataset for use in other AIP tools, or share a workflow with other users.
- Data security: AIP Accelerate is built on the same rigorous security model that governs the rest of the Palantir platform, including user-based data permissions. These platform security controls grant an LLM access only to what is necessary to complete a task.
AIP Accelerate is under active development, and support for more operations is coming soon.
Access AIP Accelerate
AIP Accelerate can be accessed from the platform’s workspace navigation bar or by using the quick search shortcut CMD + J (macOS) or CTRL + J (Windows). Alternatively, you can create a new workflow from your Files by selecting +New and then choosing Accelerate, as shown below.
The Foundry app navigation menu.