Hi all, wanted to share our recent announcement about improvements to Databricks connectivity and call out a few features that might be of interest to the community:
- New unified Databricks connector. The Databricks connector now supports connecting to Databricks tables via both JDBC and Spark. In particular, you can now establish Spark-based connectivity to the underlying storage locations using vended credentials from Unity Catalog, which removes the previous need to set up separate connectors and credentials for Databricks and for the underlying storage locations (see the JDBC read sketch after this list).
- Compute pushdown for Python transforms. You can now push compute from Foundry Python transforms down to Databricks, allowing you to orchestrate Databricks compute from Foundry and include tables generated by Databricks compute in your scheduled Palantir pipelines and provenance graphs. The compute pushdown capability leverages Databricks Connect under the hood (see the sketch after this list).
- Bulk registration of virtual tables is now available in Beta, in addition to auto-registration. You can now browse Unity Catalog in the UI and select multiple tables to register at once, as an alternative to registering tables individually.
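
For anyone curious what the JDBC side of the connector boils down to, here is a minimal PySpark sketch of a JDBC read against a Databricks SQL warehouse. The host, HTTP path, token, and table name are placeholders, and the unified connector manages this configuration for you in Foundry, so treat it as an illustration of the underlying mechanism rather than required setup:

```python
# Minimal sketch of a JDBC read from Databricks with PySpark.
# Assumes the Databricks JDBC driver jar is on the Spark classpath;
# all <angle-bracket> values and the table name are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("databricks-jdbc-read").getOrCreate()

df = (
    spark.read.format("jdbc")
    .option(
        "url",
        "jdbc:databricks://<workspace-host>:443/default;"
        "transportMode=http;ssl=1;httpPath=<warehouse-http-path>;"
        "AuthMech=3;UID=token;PWD=<personal-access-token>",
    )
    .option("driver", "com.databricks.client.jdbc.Driver")
    .option("dbtable", "main.analytics.orders")  # hypothetical Unity Catalog table
    .load()
)

df.show()
```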
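
And since compute pushdown leverages Databricks Connect under the hood, here is a hedged sketch of what Databricks Connect itself does: the DataFrame operations below execute remotely on a Databricks cluster rather than locally. The host, token, cluster ID, and table names are placeholders, and the Foundry transform wiring that handles the session for you is omitted:

```python
# Minimal sketch of Databricks Connect, the library the compute pushdown
# capability uses under the hood. All <angle-bracket> values and table
# names are placeholders; in a Foundry pushdown transform the session
# setup is handled for you.
from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.remote(
    host="https://<workspace-host>",
    token="<personal-access-token>",
    cluster_id="<cluster-id>",
).getOrCreate()

# These DataFrame operations run on the remote Databricks cluster.
orders = spark.read.table("main.analytics.orders")  # hypothetical table
daily = orders.groupBy("order_date").count()
daily.write.mode("overwrite").saveAsTable("main.analytics.daily_order_counts")
```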