The presentation discusses the architecture and design of a change data capture tool for a key-value data store, with a focus on preserving transaction consistency and enabling real-time data capture and exchange.
- The architecture comprises multiple layers, including a multi-version key-value data store and a replication layer that uses Raft to maintain consistent replicas of regions.
- The goal is to create connectivity from the key-value data store to other parts of the data processing ecosystem, enabling real-time data capture and exchange.
- The algorithm for combining the data streams from all regions receives data from every region concurrently, produces a watermark once all regions have finished sending transactions up to a certain commit timestamp, and then sorts and emits the buffered transactions whose commit timestamps are earlier than that watermark (a minimal sketch follows this list).
- The tool supports decoding TiDB records and can write events directly to various downstream systems, including message queues, S3 files, and MySQL-compatible relational databases.
- The tool uses etcd extensively for persistent state storage and as a consensus service to maintain the current state of the CDC cluster.
- The presentation emphasizes the importance of preserving transaction consistency and enabling real-time data capture and exchange for the development of the data processing ecosystem.
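The watermark mechanism described in the list above can be illustrated with a minimal sketch. The types and names below (`Event`, `Merger`, `Flush`) are illustrative assumptions, not TiCDC's actual API: each region reports the largest timestamp below which it has sent every committed transaction, the global watermark is the minimum of those per-region values, and buffered events with smaller commit timestamps are sorted and released downstream.

```go
package main

import (
	"fmt"
	"sort"
)

// Event is a single committed key-value change received from one region (illustrative).
type Event struct {
	Key      string
	Value    string
	CommitTs uint64
}

// Merger buffers events from all regions and tracks per-region resolved timestamps.
type Merger struct {
	resolved map[int]uint64 // regionID -> resolved timestamp reported by that region
	buffer   []Event        // events not yet released downstream
}

func NewMerger(regionIDs []int) *Merger {
	m := &Merger{resolved: make(map[int]uint64)}
	for _, id := range regionIDs {
		m.resolved[id] = 0
	}
	return m
}

// AddEvent buffers a committed change received from a region.
func (m *Merger) AddEvent(e Event) { m.buffer = append(m.buffer, e) }

// UpdateResolved records that a region has sent everything with commit ts <= ts.
func (m *Merger) UpdateResolved(regionID int, ts uint64) { m.resolved[regionID] = ts }

// Flush computes the global watermark (the minimum per-region resolved timestamp)
// and returns the buffered events up to it, sorted by commit timestamp.
func (m *Merger) Flush() (uint64, []Event) {
	watermark, first := uint64(0), true
	for _, ts := range m.resolved {
		if first || ts < watermark {
			watermark, first = ts, false
		}
	}
	var ready, pending []Event
	for _, e := range m.buffer {
		if e.CommitTs <= watermark {
			ready = append(ready, e)
		} else {
			pending = append(pending, e)
		}
	}
	sort.Slice(ready, func(i, j int) bool { return ready[i].CommitTs < ready[j].CommitTs })
	m.buffer = pending
	return watermark, ready
}

func main() {
	m := NewMerger([]int{1, 2})
	m.AddEvent(Event{"a", "1", 5})
	m.AddEvent(Event{"b", "2", 9})
	m.UpdateResolved(1, 10)
	m.UpdateResolved(2, 7) // region 2 lags, so the global watermark is 7
	watermark, ready := m.Flush()
	fmt.Println("watermark:", watermark, "events released:", ready)
}
```

With this scheme, no event is forwarded until every region has confirmed it has nothing older to send, which is what preserves transaction ordering by commit timestamp across regions.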
The presentation provides an example of how high availability works in the CDC cluster. When a new node is added, the first node is elected as the owner of the cluster and assigns the tables to be replicated across the available nodes. The owner also initiates table migration if the workload becomes significantly imbalanced. If a node fails, the owner reassigns the tables that the dead node was replicating to a surviving node. If the owner itself goes down, a new owner is elected to maintain the current state of the cluster.
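Since the cluster state lives in etcd, the owner election and table assignment described above can be sketched with etcd's election primitive (`go.etcd.io/etcd/client/v3`). This is a simplified illustration under assumed key layouts, not TiCDC's real schema: the key prefixes, node names, and the `assignTables` helper are hypothetical. Each capture node campaigns for ownership; whichever node wins writes table-to-node assignments into etcd for the others to pick up, and if the owner dies its lease expires and another node's campaign succeeds.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
	"go.etcd.io/etcd/client/v3/concurrency"
)

// assignTables spreads table IDs across the live capture nodes round-robin
// and persists the assignment in etcd (hypothetical key layout).
func assignTables(ctx context.Context, cli *clientv3.Client, nodes []string, tables []int) error {
	for i, tableID := range tables {
		node := nodes[i%len(nodes)]
		key := fmt.Sprintf("/cdc/assignment/table/%d", tableID) // hypothetical key
		if _, err := cli.Put(ctx, key, node); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// The session's lease keeps the ownership key alive; if this node dies,
	// the lease expires and another node's Campaign call succeeds.
	session, err := concurrency.NewSession(cli, concurrency.WithTTL(10))
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	election := concurrency.NewElection(session, "/cdc/owner") // hypothetical prefix
	ctx := context.Background()

	// Block until this node becomes the owner of the CDC cluster.
	if err := election.Campaign(ctx, "capture-node-1"); err != nil {
		log.Fatal(err)
	}
	log.Println("elected as owner, assigning tables")

	nodes := []string{"capture-node-1", "capture-node-2"}
	tables := []int{101, 102, 103, 104}
	if err := assignTables(ctx, cli, nodes, tables); err != nil {
		log.Fatal(err)
	}
	log.Println("table assignments written to etcd")
}
```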
Data processing is evolving to be stream-oriented to enable interconnection between subsystems or microservices, but it can be challenging for distributed data stores, such as TiKV, to stream data effectively. Zixiong Liu and his team have successfully achieved distributed event streaming from TiKV with low computational cost, low latency, and no single point of failure. It is now possible to produce a stream of updates from TiKV that, with suitable deduplication, is ordered by commit timestamp. In this talk, Zixiong Liu will cover the techniques used in the design of TiKV that facilitate data streaming, as well as the implementation of the distributed computation performed on the exported TiKV data so that it can be converted into formats suitable for consumption by third-party data solutions.