Adjusting to Foundry from SQL-First Stacks — Looking for a Better Way to Use SQL in Pipelines

Hey everyone, I’m a data engineer new to the Foundry platform. I’m coming from a more traditional data stack (Airflow, dbt, Snowflake), where SQL is central to everything you do. I’ve been finding it a bit challenging to adjust to Foundry’s low-code/no-code approach, especially when it comes to building pipelines.

I’m aware that Code Repositories exist, but I wanted to ask the community (or Palantir team) whether either of the following is currently possible — and if not, whether it’s something on the roadmap.

  1. Can I include custom SQL blocks within Pipeline Builder? It would be incredibly useful to mix custom SQL queries with low-code/no-code blocks when working with source data. I’m not talking about defining UDFs, but rather running full queries directly as part of the pipeline logic.

  2. Is there a SQL development environment for exploration? I know about the “Explore with SQL” feature, but it seems pretty hidden. I’d love to see a more accessible, centralized interface — similar to Snowflake’s web UI — where I can run ad hoc queries at scale. Code Repositories seem focused on structured ETL jobs, not interactive exploration. My usual workflow involves querying intermediate tables and steps as I build pipelines, and I’d really benefit from an integrated query tool to support that.

Curious how others are working around this, and if there are best practices or features I might have missed.

Thanks!


Hey, thanks for the questions.

  1. There isn’t currently a way to execute custom SQL in Pipeline Builder, in keeping with its low-code/no-code philosophy. Most SQL-style data transformations should be achievable with the built-in boards in Pipeline Builder; the functions documentation and the AIP Assist / Generate features can help you work out how to produce the desired output that way.

  2. I’m not sure if what you’re referring to is the “SQL Preview” feature in Dataset Preview, but the docs for that are here. Otherwise, the primary no/low-code data exploration tool in Foundry is Contour, which also has a custom expression language incorporating many functions from Spark SQL. Another option is Code Workspaces, which gives you a more pro-code JupyterLab environment for working with datasets.

I think this question is less about low-code/no-code workflows and more about SQL-centric pro-code workflows.

I do agree that Foundry is lacking here, and one nice/obvious solution would be to add SQL transform blocks to Pipeline Builder. The classical Spark-based SQL transforms in Code Repositories lack the iteration speed you get on platforms like Snowflake, where you’ll have an answer to your query in seconds, compared to the minutes you wait for checks and builds to complete.
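In the meantime, one common workaround for running full SQL queries in a pipeline is to wrap them in a Python transform in Code Repositories: register the input DataFrame as a temp view, then execute the query through the transform’s Spark session. A minimal sketch, assuming the standard `transforms.api` decorators — the dataset paths and column names below are hypothetical placeholders:

```python
# Sketch: running a full SQL query inside a Foundry Python transform.
# Dataset paths and column names are hypothetical placeholders.
from transforms.api import transform_df, Input, Output


@transform_df(
    Output("/Project/datasets/orders_summary"),  # hypothetical output path
    orders=Input("/Project/datasets/orders"),    # hypothetical input path
)
def orders_summary(ctx, orders):
    # Register the input DataFrame as a temp view so SQL can reference it.
    orders.createOrReplaceTempView("orders")
    # Run the query via the transform's Spark session and return the result.
    return ctx.spark_session.sql("""
        SELECT customer_id,
               COUNT(*)    AS order_count,
               SUM(amount) AS total_amount
        FROM orders
        GROUP BY customer_id
    """)
```

This doesn’t fix the iteration-speed problem (you still wait for checks and builds), but it does let you keep complex logic in plain SQL rather than translating it into board-by-board transforms.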
