Authors: Florent Poinsard, Arthur Schreiber
2023-04-21

tldr - powered by Generative AI

GitHub uses MySQL and Vitess for its database management and scaling strategy
  • GitHub has a standard MySQL setup with 80 clusters and 2000 instances
  • They have a read-heavy load with 330 terabytes of data across primaries and replicas
  • Their scaling strategy includes setting up separate clusters for new features, breaking up existing clusters, and adding more replicas
  • They ran into limits with these scaling approaches and with schema migration times
  • They adopted Vitess as a solution; its sharding model fits their data model well
  • Vitess allows for seamless schema changes, automatic failure detection and repair, and query consolidation
  • GitHub has successfully migrated 20 keyspaces to Vitess, reducing the number of hosts needed and improving read and write rates (see the query sketch below)
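
To make the keyspace migration concrete, below is a minimal Go sketch of how an application might query a Vitess keyspace through vtgate, which speaks the MySQL wire protocol. The vtgate address, credentials, keyspace name, table, and query are hypothetical, not details from the talk.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql" // MySQL wire protocol, which vtgate also speaks
)

func main() {
	// Hypothetical vtgate address and keyspace name; in Vitess the "database"
	// part of the DSN selects the keyspace the query is routed to.
	dsn := "app_user:app_password@tcp(vtgate.internal:3306)/repositories"

	db, err := sql.Open("mysql", dsn)
	if err != nil {
		log.Fatalf("open: %v", err)
	}
	defer db.Close()

	// vtgate routes the query to the right shard(s) of the keyspace,
	// so the application does not need to know the sharding layout.
	var count int
	if err := db.QueryRow("SELECT COUNT(*) FROM repositories WHERE owner_id = ?", 42).Scan(&count); err != nil {
		log.Fatalf("query: %v", err)
	}
	fmt.Println("repositories for owner 42:", count)
}
```

Because vtgate handles the routing, the application can keep using an ordinary MySQL driver even after a keyspace is split into more shards.
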
Authors: José Santos
2022-05-18

tldr - powered by Generative AI

The presentation discusses a network-aware framework for workload scheduling in Kubernetes clusters, which aims to reduce latency and improve performance.
  • The network-aware framework uses a combination of plugins and algorithms to optimize workload scheduling based on network topology and bandwidth resources.
  • The framework includes an application group controller and a network topology controller, a load-watcher component, and a scheduler with filtering and scoring functions (see the sketch below).
  • The framework was tested with the Redis cluster application and was able to improve throughput by 20% on average.
  • The framework is not yet production-ready but is expected to be included in the Kubernetes SIG Scheduling community in the next few months.
  • Future plans include adding a plugin for monitoring bandwidth and dynamically adjusting workload scheduling based on real-time network congestion.
  • An anecdote was provided demonstrating the performance improvement of the online boutique application with the network-aware framework compared to the default Kubernetes scheduler.
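
As a rough illustration of the filtering and scoring split, here is a self-contained Go sketch that models the idea: nodes whose measured latency to the application group exceeds a threshold are filtered out, and the remaining nodes are scored by latency and free bandwidth. The node names, measurements, and threshold are invented for the example; the real plugins run inside the Kubernetes scheduler framework rather than as standalone code.

```go
package main

import (
	"fmt"
	"sort"
)

// Illustrative per-node network measurements; in the real framework these
// would come from the load-watcher and the network topology controller.
type nodeNetInfo struct {
	name         string
	latencyMS    float64 // measured latency to the pods this workload depends on
	freeBandMbps float64 // unreserved bandwidth on the node's links
}

const maxLatencyMS = 10.0 // filtering threshold (an assumed value)

// filter drops nodes that cannot satisfy the latency requirement,
// mirroring the filtering phase of the scheduler.
func filter(nodes []nodeNetInfo) []nodeNetInfo {
	var out []nodeNetInfo
	for _, n := range nodes {
		if n.latencyMS <= maxLatencyMS {
			out = append(out, n)
		}
	}
	return out
}

// score ranks the surviving nodes, mirroring the scoring phase:
// lower latency and more free bandwidth yield a higher score.
func score(n nodeNetInfo) float64 {
	return (maxLatencyMS-n.latencyMS)*10 + n.freeBandMbps/100
}

func main() {
	nodes := []nodeNetInfo{
		{"node-a", 2.5, 4000},
		{"node-b", 8.0, 9000},
		{"node-c", 25.0, 10000}, // too far from the application group, filtered out
	}

	feasible := filter(nodes)
	sort.Slice(feasible, func(i, j int) bool { return score(feasible[i]) > score(feasible[j]) })

	for _, n := range feasible {
		fmt.Printf("%s score=%.1f\n", n.name, score(n))
	}
}
```

The real framework draws these latency and bandwidth figures from its load-watcher and network topology controller rather than from a hard-coded table.
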
Authors: Liqi Geng
2021-10-15

tldr - powered by Generative AI

The presentation discusses optimizations to the raftstore layer in TiKV to reduce write latency and the tail latency of the store duration.
  • The raftstore layer in TiKV uses the Raft consensus algorithm to make the system fault-tolerant.
  • The store threads in the raftstore layer handle the work of multiple Raft groups and use raft-rs as the consensus algorithm module.
  • The time between a MsgAppend and its MsgAppendResponse equals the network round trip time (RTT) and contains an unnecessary 0.5 I/O duration.
  • Applying unpersisted entries in advance can significantly reduce the tail latency of the store duration (illustrated in the sketch below).
  • Running a separate apply process for each Raft group can reduce the tail latency of the apply duration.
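
A back-of-the-envelope Go sketch of why applying committed-but-not-yet-persisted entries early helps in a three-replica Raft group: once a quorum of followers has persisted an entry it is committed, so the leader can apply it without waiting for its own (possibly slow) log fsync. All timings below are assumed values, not measurements from the talk.

```go
package main

import "fmt"

// maxf returns the larger of two durations in milliseconds.
func maxf(a, b float64) float64 {
	if a > b {
		return a
	}
	return b
}

func main() {
	// Assumed timings in milliseconds for a 3-replica Raft group.
	rtt := 1.0        // leader <-> follower round trip
	followerIO := 2.0 // follower fsync of the log entry
	leaderIO := 20.0  // a slow leader fsync, i.e. a tail-latency event
	applyCost := 0.5  // applying the committed entry to the state machine

	// Without early apply: the leader only applies the entry after it is
	// persisted locally, so a slow local fsync dominates the latency.
	baseline := maxf(leaderIO, rtt+followerIO) + applyCost

	// With early apply: acknowledgements from the two followers already form
	// a quorum, so the entry is committed and can be applied while the
	// leader's own slow fsync is still in flight.
	earlyApply := rtt + followerIO + applyCost

	fmt.Printf("without early apply: %.1f ms\n", baseline)
	fmt.Printf("with early apply:    %.1f ms\n", earlyApply)
}
```

In the tail case modelled here, removing the leader's slow fsync from the critical path is what shrinks the high-percentile store duration.
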