Overview
We use rrweb to collect "snapshot data" from the browser. This data is gathered by the capture API and sent to ingestion.
Because we use Kafka for ingestion, there is a maximum allowed message size, and snapshot data is often larger than that. Replay therefore has its own ingestion infrastructure with larger limits.
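For illustration, a producer-side guard against that limit might look like the sketch below; the 1 MB figure is Kafka's common default for `message.max.bytes`, used here as an assumption rather than our actual broker configuration.

```typescript
// Illustrative check of a gzip-compressed snapshot payload against a Kafka message limit.
// The 1 MB limit is Kafka's common default (message.max.bytes), assumed here for the example.
import { gzipSync } from 'zlib'

const KAFKA_MAX_MESSAGE_BYTES = 1024 * 1024

function fitsInOneKafkaMessage(snapshotEvents: unknown[]): boolean {
    const compressed = gzipSync(JSON.stringify(snapshotEvents))
    return compressed.byteLength <= KAFKA_MAX_MESSAGE_BYTES
}

// Long or visually busy sessions routinely fail this check, which is why
// replay needs its own ingestion path with larger limits.
```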
We store recording snapshot data in blob storage and aggregated session information in the session_replay_events table. As a result, capture (the posthog-recordings deployment) no longer needs to chunk recordings.
The ingestion workload batches sessions to disk and periodically flushes them to blob storage. This lets us write fewer files to storage, reducing the operational cost of the system.
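A simplified sketch of that batch-and-flush pattern is below; the helper names, paths, and storage key layout are hypothetical, not the real consumer's code.

```typescript
// Simplified sketch of the batch-to-disk / periodic-flush pattern.
// Helper names, paths, and the storage key layout are hypothetical.
import { appendFileSync, mkdirSync, readFileSync, rmSync } from 'fs'

const BUFFER_DIR = '/tmp/replay-buffer'
const openSessions = new Map<string, string>() // session id -> path of its on-disk buffer

// 1. Statefully batch: append each Kafka message's snapshot data to a per-session file.
function handleKafkaMessage(sessionId: string, snapshotJson: string): void {
    mkdirSync(BUFFER_DIR, { recursive: true })
    const path = `${BUFFER_DIR}/${sessionId}.jsonl`
    appendFileSync(path, snapshotJson + '\n')
    openSessions.set(sessionId, path)
}

// 2. Periodic flush: upload each buffered session to blob storage, then
// 3. after the flush, emit a summary message so ClickHouse receives the session metadata.
async function flush(
    uploadToBlobStorage: (key: string, body: Buffer) => Promise<void>,
    emitSummaryToKafka: (sessionId: string) => Promise<void>
): Promise<void> {
    for (const [sessionId, path] of openSessions) {
        await uploadToBlobStorage(`session_recordings/${sessionId}.jsonl`, readFileSync(path))
        await emitSummaryToKafka(sessionId)
        rmSync(path)
        openSessions.delete(sessionId)
    }
}
// In the real workload this runs on a timer (and on consumer shutdown/rebalance),
// so many Kafka messages collapse into far fewer, larger objects in blob storage.
```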
Recording metadata
The ingestion workload generates metadata and stores it in the session_replay_events table in ClickHouse. This is used to power the session replay UI. The table uses the ClickHouse AggregatingMergeTree engine.
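To make "metadata" concrete, the consumer reduces each batch of snapshot events into a small per-session summary row, sketched below with illustrative field names rather than the exact session_replay_events schema; the aggregating engine then merges the partial rows for a session.

```typescript
// Illustrative reduction of snapshot events into a per-session summary row.
// Field names are examples, not the exact session_replay_events schema.
interface SnapshotEvent {
    type: number
    timestamp: number // ms since epoch
    data: unknown
}

interface SessionSummaryRow {
    session_id: string
    first_timestamp: number
    last_timestamp: number
    event_count: number
}

function summariseBatch(sessionId: string, events: SnapshotEvent[]): SessionSummaryRow {
    const timestamps = events.map((e) => e.timestamp)
    return {
        session_id: sessionId,
        first_timestamp: Math.min(...timestamps),
        last_timestamp: Math.max(...timestamps),
        event_count: events.length,
    }
}

// Because the table uses an aggregating engine, ClickHouse can merge many such
// partial rows for the same session into one aggregated row at merge/read time.
```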
In combination with the blob storage ingestion workload, this means we store at least a hundred times less data per row in ClickHouse than in the original infrastructure, and compress it about twice as well. 🔥
```mermaid
flowchart TB
    clickhouse_replay_summary_table[(Session replay summary table)]
    received_snapshot_event[clients send snapshot data]
    kafka_recordings_topic>Kafka recordings topic]
    kafka_clickhouse_replay_summary_topic>Kafka clickhouse replay summary topic]
    subgraph recordings blob ingestion workload
        recordings_blob_consumer -->|1. statefully batch| disk
        disk --> recordings_blob_consumer
        recordings_blob_consumer -->|2. periodic flush| blob_storage
        recordings_blob_consumer -->|3. after flush| kafka_clickhouse_replay_summary_topic
        kafka_clickhouse_replay_summary_topic --> clickhouse_replay_summary_table
    end
    received_snapshot_event --> capture
    capture -->|compress into kafka messages| kafka_recordings_topic
    kafka_recordings_topic --> recordings_blob_consumer
```