What is the ideal sync size in Data Connection (in this case a Snowflake connector)

I have a pretty large ingest job from a Snowflake connector (hundreds of TBs). I'm assuming doing that all in one sync would be a bad idea given network timeouts and such. Do you have a suggestion on the ideal sync size?

I would not recommend using Syncs, as they tend to be slow.

Instead, use a Virtual Table backed by a lightweight transform and Snowpark.

In the transform logic, copy the query result into an internal stage, download the Parquet files to a temporary directory, and upload them to the S3 proxy from there.
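A minimal sketch of that stage-export pattern, assuming Snowpark Python and boto3 are available; the stage name, table, bucket, proxy endpoint, and credentials are placeholders, not the actual Foundry configuration:

```python
import os
import tempfile

import boto3
from snowflake.snowpark import Session

# Snowpark session against the source warehouse (credentials would normally
# come from your connection configuration; values here are placeholders).
session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}).create()

# 1. Unload the query result into an internal stage as Parquet files.
session.sql("""
    COPY INTO @MY_STAGE/export/
    FROM (SELECT * FROM MY_LARGE_TABLE)
    FILE_FORMAT = (TYPE = PARQUET)
    HEADER = TRUE
""").collect()

# 2. Download the staged Parquet files to a local temporary directory.
tmp_dir = tempfile.mkdtemp()
session.file.get("@MY_STAGE/export/", tmp_dir)

# 3. Upload the files to the S3-compatible proxy (endpoint, bucket, and key
#    prefix are hypothetical placeholders for your environment).
s3 = boto3.client("s3", endpoint_url="https://<s3-proxy-endpoint>")
for name in os.listdir(tmp_dir):
    s3.upload_file(os.path.join(tmp_dir, name), "<bucket>", f"export/{name}")
```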

If compute cost is no concern, use Virtual Tables with Spark, scale up the Snowflake warehouse, and add a good number of Spark executors (with dynamic allocation) on the Foundry side. Pulling in the Arrow chunks of your result set will be slow, but adding executors lets you bring the runtime down.
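For reference, a rough standalone sketch of the knobs involved; in Foundry you would normally set executor counts through a transforms profile rather than building the SparkSession yourself, and the Snowflake Spark connector options below are illustrative placeholders:

```python
from pyspark.sql import SparkSession

# Dynamic allocation lets Spark scale executors while the result set is being
# pulled in; the min/max bounds here are illustrative, not recommendations.
spark = (
    SparkSession.builder
    .appName("snowflake-virtual-table-read")
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.minExecutors", "4")
    .config("spark.dynamicAllocation.maxExecutors", "64")
    .getOrCreate()
)

# Read via the Snowflake Spark connector; all option values are placeholders.
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "<database>",
    "sfSchema": "<schema>",
    "sfWarehouse": "<large_warehouse>",
}

df = (
    spark.read.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "MY_LARGE_TABLE")
    .load()
)
```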