New NVIDIA & Palantir Collaboration

Announcing a new collaboration between NVIDIA and Palantir that enables the integration of NVIDIA accelerated computing, NVIDIA CUDA-X data science libraries, and open-source NVIDIA Nemotron models into the Ontology.

“Palantir and NVIDIA share a vision: to put AI into action, turning enterprise data into decision intelligence,” said Jensen Huang, founder and CEO of NVIDIA. “By combining Palantir’s powerful AI-driven platform with NVIDIA CUDA-X accelerated computing and Nemotron open AI models, we’re creating a next-generation engine to fuel AI-specialized applications and agents that run the world’s most complex industrial and operational pipelines.”

What does this mean in practice? Here is what you can try now, and what's coming soon:

Available from today on Enterprise & Developer Tier enrollments

  • NVIDIA Nemotron Super: NVIDIA's SOTA agentic 49B-parameter reasoning model, built on Llama and running on our secure cloud on NVIDIA hardware. Available through Model Catalog*, it can be used just like any other LLM.
  • NVIDIA NeMo Retriever: A SOTA embedding model for embedding content directly through first-class apps like Pipeline Builder, Code Workspaces, and the Ontology Toolchain. Combining the two models lets users create or improve their own RAG and Ontology-Augmented Generation (OAG) workflows. Available through Model Catalog*, it can be used just like any other embedding model.
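To make the RAG pattern concrete, here is a minimal sketch of the retrieval step in plain Python. The `embed` function below is a deterministic bag-of-characters stand-in, not the NeMo Retriever or Model Catalog API; in practice you would call the embedding model through the platform instead.

```python
import math

def embed(text: str) -> list[float]:
    # Hypothetical stand-in for an embedding model such as NeMo Retriever:
    # a normalized bag-of-characters vector, enough to illustrate the flow.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors from embed() are unit length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Core of RAG: embed the query, rank documents by similarity,
    # return the top-k to use as context in the LLM prompt.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

documents = [
    "Vehicle routing schedules for the northeast fleet.",
    "Quarterly inventory levels across all warehouses.",
    "Employee onboarding checklist.",
]
question = "Which warehouses are low on inventory?"
context = retrieve(question, documents, k=1)
prompt = f"Context:\n{context[0]}\n\nQuestion: {question}"
# `prompt` would then be sent to a reasoning model such as Nemotron Super.
```

An OAG workflow follows the same shape, with Ontology objects standing in for the raw document strings.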

Coming soon

  • NVIDIA cuOpt Optimization Engine: A GPU-accelerated optimization engine that supports Linear Programming (LP) and Mixed Integer Linear Programming (MILP) problems — for example, inventory rebalancing or vehicle routing. Available soon through Marketplace.
  • Other exciting NVIDIA Blueprints.
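For a sense of the problem class cuOpt targets, here is a toy inventory-rebalancing MILP solved by brute force in plain Python. The warehouses, costs, and quantities are made up for illustration; a solver like cuOpt handles this class of problem at real-world scale with techniques such as branch-and-bound rather than enumeration.

```python
from itertools import product

# Toy MILP: choose integer transfer quantities per shipping lane to minimize
# total shipping cost, subject to every warehouse ending at or above its
# minimum stock level. All data below is illustrative.
stock   = {"A": 12, "B": 2, "C": 4}     # current units per warehouse
minimum = {"A": 3,  "B": 5, "C": 5}     # required minimum after rebalancing
cost    = {("A", "B"): 2, ("A", "C"): 3, ("B", "C"): 1,
           ("B", "A"): 2, ("C", "A"): 3, ("C", "B"): 1}  # cost per unit moved

lanes = list(cost)
best = None  # (total_cost, transfers)

# Enumerate the integer decision variables (feasible only because the toy
# instance is tiny).
for qty in product(range(7), repeat=len(lanes)):
    transfers = dict(zip(lanes, qty))
    final = dict(stock)
    for (src, dst), q in transfers.items():
        final[src] -= q
        final[dst] += q
    # Constraints: no warehouse goes negative, and all minimums are met.
    if any(v < 0 for v in final.values()):
        continue
    if any(final[w] < minimum[w] for w in minimum):
        continue
    total = sum(cost[lane] * q for lane, q in transfers.items())
    if best is None or total < best[0]:
        best = (total, transfers)

# best now holds the cheapest feasible rebalancing plan.
```

Vehicle routing has the same flavor: integer decisions, linear costs, and feasibility constraints, just over routes instead of transfer lanes.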

* Please note: to view these models in Model Catalog, you or your Enrollment Administrator may need to enable the NVIDIA model family in AIP settings in Control Panel. Currently available only on enrollments hosted in the USA.
