How should I configure a compute module that needs to read large files representing databases and output some of their tables as Foundry datasets?
I have a Docker container that works well locally, and I'm not sure of the best way to set it up with compute modules.
It takes large (20-50 GB) *.mdf files that represent SQL Server databases. The container attaches the database, reads it, and writes output files.
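For context, the attach step is standard T-SQL. A minimal sketch, assuming the synced files land under the image's default data directory (the database name and file paths here are placeholders):

```sql
-- Hypothetical names/paths; adjust to where the synced .mdf files actually land.
CREATE DATABASE MyDb
    ON (FILENAME = '/var/opt/mssql/data/MyDb.mdf'),
       (FILENAME = '/var/opt/mssql/data/MyDb_log.ldf')
    FOR ATTACH;
```

If only the .mdf is available (no .ldf), SQL Server can rebuild the log with `FOR ATTACH_REBUILD_LOG` instead.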
In Foundry, I'm not sure whether I should use the pipeline approach, mount a volume, etc.
A data connector will sync the .mdf files and write them to a dataset.
For reference, the container uses this image, which provides everything needed to read the raw files: mcr.microsoft.com/mssql/server:2017-latest
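Roughly what the local setup looks like, as a hedged sketch (container name, password, and host path are placeholders; the 2017 image uses `SA_PASSWORD` and ships `sqlcmd` under `/opt/mssql-tools`):

```shell
# Hypothetical local run; mount the directory holding the .mdf files
# into the image's default data directory.
docker run -d --name mssql \
  -e ACCEPT_EULA=Y \
  -e SA_PASSWORD='YourStrong!Passw0rd' \
  -v /path/to/mdf:/var/opt/mssql/data \
  mcr.microsoft.com/mssql/server:2017-latest

# Sanity check that the server is up and the database can be attached/queried:
docker exec -it mssql /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U SA -P 'YourStrong!Passw0rd' \
  -Q "SELECT name FROM sys.databases"
```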