What are the different ways to pull Foundry logs into QRadar? Splunk?

I want to forward the logs from Foundry to QRadar or Splunk for analysis.

What are the way(s) to expose the logs from Foundry to QRadar?
What are the way(s) to expose the logs from Foundry to Splunk?

From what I can see:

  • External Transforms, to push rows to the API exposed by those systems
  • The Foundry S3-compatible API, so those systems can pull the data themselves (see the sketch after this list)
  • The BI Connector, which gives those systems a JDBC connection to pull the data they need from Foundry
  • [and of course, analyse those logs directly in Foundry :wink: ]
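For the S3-compatible pull option, the consuming side could look roughly like the following. This is a minimal sketch, assuming boto3; the endpoint URL, bucket name, and credentials are placeholders, and the exact values depend on your enrollment's documentation.

```python
# Minimal sketch of pulling exported log files through Foundry's
# S3-compatible API with boto3. Endpoint, bucket, and credentials are
# placeholders -- check your enrollment's documentation for real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://<your-foundry-host>/io/s3",  # placeholder endpoint
    aws_access_key_id="<access-key-id>",
    aws_secret_access_key="<secret-access-key>",
)

# List the files in the exported dataset and download each one locally,
# where a QRadar/Splunk file-watcher can pick them up.
bucket = "<dataset-bucket>"  # placeholder bucket name
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    s3.download_file(bucket, obj["Key"], obj["Key"].split("/")[-1])
```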

Hi @VincentF

What are the way(s) to expose the logs from Foundry to QRadar?
I don’t have expertise on this.

What are the way(s) to expose the logs from Foundry to Splunk?
My past experience is mainly with exporting logs to Splunk. The file-logs are exported incrementally to an S3 bucket, which is watched by a Splunk agent. The agent acts as a file-watcher, ingesting the files into Splunk.
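To illustrate the export side of that setup, here is a minimal sketch, assuming boto3 and a local directory of exported log files; the bucket name and directory are placeholders. The Splunk agent then ingests whatever new files land in the bucket.

```python
# Minimal sketch of an incremental push of log files to S3, assuming
# boto3. Bucket name and local directory are placeholders.
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "<log-export-bucket>"  # placeholder

# Track what was already uploaded so each run only ships new files.
already_uploaded = {
    obj["Key"] for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", [])
}

for filename in os.listdir("exported-logs"):
    if filename not in already_uploaded:
        s3.upload_file(os.path.join("exported-logs", filename), BUCKET, filename)
```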

Previously, there was some official Palantir documentation on this, but I cannot find it anymore. Sander Tichelaar - a colleague of yours at Palantir - can provide you with more implementation details.

Hope this helps.

We are exporting our logs to Splunk in production using external transforms and their API.
It works very well and the script is very simple. It was a requirement for us to push the data, so it was definitely the best solution in our case.
I hope this is useful, and if you need help with the script, please reach out!

Could you share this script?

Sure!

```python
import json
from datetime import datetime

import polars as pl

# Placeholders -- set these to your Splunk HEC endpoint and source name.
SPLUNK_URL = "https://<your-splunk-host>:8088/services/collector/event"
SOURCE = "foundry-logs"

EVENTS_RESPONSE_SCHEMA = {
    "start": pl.Datetime,
    "end": pl.Datetime,
    "response": pl.Utf8,
    "data": pl.Utf8,
}


def send_events(splunk_source, events_df, output):
    # The HTTPS client and HEC token both come from the Foundry source.
    client = splunk_source.get_https_connection().get_client()
    token = splunk_source.get_secret("additionalSecretToken")

    events_df = events_df.polars()

    if events_df.is_empty():
        return

    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Splunk {token}",
    }

    rows = []
    for row in events_df.iter_rows():
        # The first column of the input dataset holds the event payload.
        event = row[0]
        current_time = datetime.now()

        json_data = {
            "event": event,
            "source": SOURCE,
        }

        # One POST to the HEC endpoint per event.
        response = client.post(SPLUNK_URL, headers=headers, json=json_data)

        # Record timings, the HEC response, and the payload for auditing.
        rows.append(
            (
                current_time,  # start
                datetime.now(),  # end
                json.dumps(response.json()),  # response
                json.dumps(json_data),  # data
            )
        )

    exports = pl.DataFrame(rows, schema=EVENTS_RESPONSE_SCHEMA, orient="row")
    output.write_table(exports)
```
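For context, this is roughly how the function could be wired into a transform. A hedged sketch only: it assumes the external transforms decorators compose with the lightweight API (which is what makes `.polars()` available on the input), and the source RID and dataset paths are placeholders.

```python
# Hypothetical wiring -- source RID and dataset paths are placeholders,
# and this assumes @lightweight composes with external systems on your
# transforms version.
from transforms.api import transform, Input, Output, lightweight
from transforms.external.systems import external_systems, Source


@lightweight
@external_systems(splunk_source=Source("ri.magritte..source.<...>"))
@transform(
    events_df=Input("/path/to/log_events"),
    output=Output("/path/to/splunk_export_audit"),
)
def export_to_splunk(splunk_source, events_df, output):
    send_events(splunk_source, events_df, output)
```

Note that the script sends one POST per event; Splunk's HTTP Event Collector also accepts multiple event objects in a single request body, so batching is a natural optimisation if log volume grows.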