
Lake Loader configuration reference

The configuration reference on this page is written for Lake Loader 0.5.0.

Table configuration

| Parameter | Description |
|-----------|-------------|
| `output.good.location` | Required, e.g. `gs://mybucket/events`. URI of the bucket location to which to write Snowplow enriched events in Delta format. The URI should start with `s3a://` on AWS, `gs://` on GCP, or `abfs://` on Azure. |
| `output.good.deltaTableProperties.*` | Optional. A map of key/value strings corresponding to Delta's table properties. These can be anything from the Delta table properties documentation. The default properties include configuring Delta's data skipping feature for the important Snowplow timestamp columns: `load_tstamp`, `collector_tstamp`, `derived_tstamp`, `dvce_created_tstamp`. |
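For illustration, a minimal sketch of these settings in HOCON form. The bucket name is hypothetical, and the Delta property shown is just one example of a valid key from the Delta table properties documentation; treat the exact nesting as an assumption based on the parameter paths above.

```hocon
{
  "output": {
    "good": {
      # Hypothetical bucket; use the prefix that matches your cloud:
      # s3a:// on AWS, gs:// on GCP, abfs:// on Azure
      "location": "gs://mybucket/events"

      # Illustrative Delta table property, merged with the loader's defaults
      "deltaTableProperties": {
        "delta.logRetentionDuration": "interval 30 days"
      }
    }
  }
}
```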

Streams configuration

| Parameter | Description |
|-----------|-------------|
| `input.streamName` | Required. Name of the Kinesis stream with the enriched events. |
| `input.appName` | Optional, default `snowplow-lake-loader`. Name to use for the DynamoDB table, used by the underlying Kinesis Client Library (KCL) for managing leases. |
| `input.initialPosition` | Optional, default `LATEST`. Allowed values are `LATEST`, `TRIM_HORIZON`, `AT_TIMESTAMP`. When the loader is deployed for the first time, this controls where in the Kinesis stream it starts consuming events. On all subsequent deployments, the loader resumes from the offsets stored in the DynamoDB table. |
| `input.initialPosition.timestamp` | Required if `input.initialPosition` is `AT_TIMESTAMP`. A timestamp in ISO 8601 format from which the loader should start consuming events. |
| `input.retrievalMode` | Optional, default `Polling`. Change to `FanOut` to enable the enhanced fan-out feature of Kinesis. |
| `input.retrievalMode.maxRecords` | Optional. Default value 1000. How many events the Kinesis client may fetch in a single poll. Only used when `input.retrievalMode` is `Polling`. |
| `input.workerIdentifier` | Optional. Defaults to the `HOSTNAME` environment variable. The name of this KCL worker, used in the DynamoDB lease table. |
| `input.leaseDuration` | Optional. Default value `10 seconds`. The duration of shard leases. KCL workers must periodically refresh leases in the DynamoDB table before this duration expires. |
| `output.bad.streamName` | Required. Name of the Kinesis stream that will receive failed events. |
| `output.bad.throttledBackoffPolicy.minBackoff` | Optional. Default value `100 milliseconds`. Initial backoff used to retry sending failed events if we exceed the Kinesis write throughput limits. |
| `output.bad.throttledBackoffPolicy.maxBackoff` | Optional. Default value `1 second`. Maximum backoff used to retry sending failed events if we exceed the Kinesis write throughput limits. |
| `output.bad.recordLimit` | Optional. Default value 500. The maximum number of records we are allowed to send to Kinesis in one PutRecords request. |
| `output.bad.byteLimit` | Optional. Default value 5242880. The maximum number of bytes we are allowed to send to Kinesis in one PutRecords request. |
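A sketch of how these stream settings might look together in HOCON. The stream names are hypothetical, and the nested shape shown for `initialPosition` and `retrievalMode` (an object with a `type` field) is an assumption based on the parameter paths above, not a definitive reference.

```hocon
{
  "input": {
    # Hypothetical stream name with enriched events
    "streamName": "enriched"
    "appName": "snowplow-lake-loader"

    # Assumed shape: a "type" field selecting the mode
    "initialPosition": {
      "type": "TRIM_HORIZON"
    }
    "retrievalMode": {
      "type": "Polling"
      "maxRecords": 1000
    }
  }

  "output": {
    "bad": {
      # Hypothetical stream name for failed events
      "streamName": "bad"
      "recordLimit": 500
    }
  }
}
```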

Other configuration options

| Parameter | Description |
|-----------|-------------|
| `windowing` | Optional. Default value `5 minutes`. Controls how often the loader writes/commits pending events to the lake. |
| `exitOnMissingIgluSchema` | Optional. Default value `true`. Whether the loader should crash and exit if it fails to resolve an Iglu schema. We recommend `true` because Snowplow enriched events have already passed validation, so a missing schema normally indicates an error that needs addressing. Change to `false` so that events go to the failed events stream instead of crashing the loader. |
| `respectIgluNullability` | Optional. Default value `true`. Whether the output Parquet files should declare nested fields as non-nullable according to the Iglu schema. When `true`, nested fields are nullable only if they are not required fields according to the Iglu schema. When `false`, all nested fields are defined as nullable in the output table's schemas. Set this to `false` if you use a query engine that dislikes non-nullable nested fields of a nullable struct. |
| `spark.conf.*` | Optional. A map of key/value strings which are passed to the internal Spark context. |
| `spark.taskRetries` | Optional. Default value 3. How many times the internal Spark context should retry a task in case of failure. |
| `retries.setupErrors.delay` | Optional. Default value `30 seconds`. Configures exponential backoff on errors related to how the lake is set up for this loader. Examples include authentication errors and permissions errors. This class of errors is reported periodically to the monitoring webhook. |
| `retries.transientErrors.delay` | Optional. Default value `1 second`. Configures exponential backoff on errors that are likely to be transient. Examples include server errors and network errors. |
| `retries.transientErrors.attempts` | Optional. Default value 5. Maximum number of attempts to make before giving up on a transient error. |
| `monitoring.metrics.statsd.hostname` | Optional. If set, the loader sends statsd metrics over UDP to a server on this host name. |
| `monitoring.metrics.statsd.port` | Optional. Default value 8125. If the statsd server is configured, this UDP port is used for sending metrics. |
| `monitoring.metrics.statsd.tags.*` | Optional. A map of key/value pairs to be sent along with the statsd metric. |
| `monitoring.metrics.statsd.period` | Optional. Default `1 minute`. How often to report metrics to statsd. |
| `monitoring.metrics.statsd.prefix` | Optional. Default `snowplow.lakeloader`. Prefix used for the metric name when sending to statsd. |
| `monitoring.webhook.endpoint` | Optional, e.g. `https://webhook.example.com`. The loader will send to the webhook a payload containing details of any error related to how the lake is set up for this loader. |
| `monitoring.webhook.tags.*` | Optional. A map of key/value strings to be included in the payload content sent to the webhook. |
| `monitoring.webhook.heartbeat.*` | Optional. Default value `5 minutes`. How often to send a heartbeat event to the webhook when healthy. |
| `monitoring.sentry.dsn` | Optional. Set to a Sentry URI to report unexpected runtime exceptions. |
| `monitoring.sentry.tags.*` | Optional. A map of key/value strings which are passed as tags when reporting exceptions to Sentry. |
| `telemetry.disable` | Optional. Set to `true` to disable telemetry. |
| `telemetry.userProvidedId` | Optional. See here for more information. |
| `inMemBatchBytes` | Optional. Default value 50000000. Controls how many events are buffered in memory before saving the batch to local disk. The default value works well for reasonably sized VMs. For smaller VMs (e.g. fewer than 2 CPU cores and 8 GB of memory), consider decreasing this value. |
| `cpuParallelismFactor` | Optional. Default value 0.75. Controls how the app splits the workload into concurrent batches which can be run in parallel. E.g. if there are 4 available processors and `cpuParallelismFactor` is 0.75, then we process 3 batches concurrently. The default value works well for most workloads. |
| `numEagerWindows` | Optional. Default value 1. Controls how eagerly the loader starts processing the next timed window while the previous timed window is still finalizing (committing into the lake). By default, we start processing a timed window if the previous 1 window is still finalizing, but we do not start processing a timed window if any older windows are still finalizing. The default value works well for most workloads. |
| `http.client.maxConnectionsPerServer` | Optional. Default value 4. Configures the internal HTTP client used for the Iglu resolver, alerts and telemetry. The maximum number of open HTTP requests to any single server at any one time. For Iglu Server in particular, this avoids overwhelming the server with multiple concurrent requests. |
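As a final illustration, a partial HOCON sketch of a few of these options. The statsd host, tag values, Sentry DSN, and the Spark key are hypothetical placeholders, and the nesting is inferred from the parameter paths above rather than taken from a reference configuration.

```hocon
{
  "windowing": "5 minutes"

  "spark": {
    "conf": {
      # Illustrative key only; any Spark setting can be passed through
      "spark.sql.shuffle.partitions": "10"
    }
  }

  "monitoring": {
    "metrics": {
      "statsd": {
        "hostname": "statsd.internal.example.com"   # hypothetical host
        "tags": {
          "pipeline": "prod"                        # hypothetical tag
        }
      }
    }
    "webhook": {
      "endpoint": "https://webhook.example.com"
    }
    "sentry": {
      "dsn": "https://public@sentry.example.com/1"  # hypothetical DSN
    }
  }
}
```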