
# Storing and querying data

Data warehouses and data lakes are the primary destinations for Snowplow data. For other options, see the destinations overview page.

## Data warehouse loaders

The relevant cloud is the one where your Snowplow pipeline runs; the warehouse itself can be deployed in any cloud.

| Destination | Type | Loader application | Status |
|---|---|---|---|
| Redshift (including Redshift serverless) | Batching (recommended) or micro-batching | RDB Loader | Production-ready |
| BigQuery | Streaming | BigQuery Loader | Production-ready |
| Snowflake | Streaming | Snowflake Streaming Loader | Production-ready |
| Databricks | Batching (recommended) or micro-batching | Snowplow RDB Loader | Production-ready |

## Data lake loaders

All lake loaders are micro-batching.

| Lake | Format | Compatibility | Loader application | Status |
|---|---|---|---|---|
| S3 | Delta | Athena, Databricks | Lake Loader | Production-ready |
| S3 | Iceberg | Athena, Redshift | Lake Loader | Production-ready |
| S3 | TSV/JSON | Athena | S3 Loader | Only recommended for use with the RDB Batch Transformer or for raw failed events |
> **tip**
>
> Currently, the S3 Delta loader is not compatible with Databricks. The loader uses DynamoDB tables for mutually exclusive writes to S3, a Delta feature that Databricks does not support (as of July 2025). As a result, it's not possible to alter the data via Databricks, e.g. to run OPTIMIZE or to delete PII.
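Once loaded, data in the Athena-compatible formats above can be queried with standard SQL. The sketch below builds such a query in Python; the database and table names (`"snowplow"."events"`) and the column names (`event_name`, `collector_tstamp`) are assumptions for illustration and will depend on your deployment.

```python
# A minimal sketch of building an Athena-style SQL query over a Snowplow
# events table. All identifiers below are hypothetical examples; substitute
# the names used in your own warehouse or lake.

def daily_event_counts_query(database: str, table: str, day: str) -> str:
    """Return a SQL string counting events per event type for one day."""
    return (
        f'SELECT event_name, COUNT(*) AS n '
        f'FROM "{database}"."{table}" '
        f"WHERE collector_tstamp >= TIMESTAMP '{day} 00:00:00' "
        f"AND collector_tstamp < TIMESTAMP '{day} 00:00:00' + INTERVAL '1' DAY "
        f'GROUP BY event_name ORDER BY n DESC'
    )

print(daily_event_counts_query("snowplow", "events", "2025-07-01"))
```

The same query can then be submitted through whichever SQL client or API your chosen engine provides.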
