
Authors: Shuning Chen, Ping Yu

To meet the requirements of multi-tenancy and Change Data Capture (for RawKV), TiKV introduces two significant changes: it separates the data space into logical sub-ranges for different tenants, and it adds a timestamp as a key suffix for MVCC. But these changes bring great challenges: region management becomes a bottleneck for region lookup, since multi-tenancy brings many more regions, and it is difficult to limit the blast radius among tenants; and the TSO service becomes a bottleneck for performance and resilience, since every write requires a timestamp. To make multi-tenancy elastic and resilient, we first refactor region management and the TSO service as microservices, and isolate tenants according to their scale and QoS. Second, we implement a TSO cache in TiKV that acquires timestamps in batches for performance and tolerates service interruption during PD faults and failovers. We also discuss how to handle the causal consistency issues introduced by the TSO cache.
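The batched-TSO idea can be sketched as follows. This is a minimal illustration with hypothetical names, not TiKV's actual implementation: a client reserves a contiguous range of timestamps from the oracle in one round trip and serves subsequent requests from the cached range.

```go
package main

import (
	"fmt"
	"sync"
)

// tsoBatch holds a contiguous range of timestamps reserved from the
// timestamp oracle in a single round trip.
type tsoBatch struct {
	next, limit uint64 // next unused timestamp; exclusive upper bound
}

// TSOCache amortizes oracle round trips by reserving timestamps in
// batches. If the oracle is briefly unavailable, callers can keep
// allocating from whatever remains of the cached range.
type TSOCache struct {
	mu        sync.Mutex
	batch     tsoBatch
	batchSize uint64
	// fetch reserves n timestamps and returns the start of the range
	// [start, start+n).
	fetch func(n uint64) (uint64, error)
}

func NewTSOCache(batchSize uint64, fetch func(uint64) (uint64, error)) *TSOCache {
	return &TSOCache{batchSize: batchSize, fetch: fetch}
}

// Next returns a monotonically increasing timestamp, contacting the
// oracle only when the current batch is exhausted.
func (c *TSOCache) Next() (uint64, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.batch.next >= c.batch.limit {
		start, err := c.fetch(c.batchSize)
		if err != nil {
			return 0, err // oracle unreachable and cache empty
		}
		c.batch = tsoBatch{next: start, limit: start + c.batchSize}
	}
	ts := c.batch.next
	c.batch.next++
	return ts, nil
}

func main() {
	var allocated uint64 = 100 // pretend the oracle's clock starts at 100
	calls := 0
	oracle := func(n uint64) (uint64, error) {
		calls++
		start := allocated
		allocated += n
		return start, nil
	}
	cache := NewTSOCache(4, oracle)
	for i := 0; i < 6; i++ {
		ts, _ := cache.Next()
		fmt.Println(ts)
	}
	// Six timestamps with batch size 4 cost only two round trips.
	fmt.Println("oracle round trips:", calls)
}
```

Note the trade-off the abstract hints at: cached timestamps can be handed out while the oracle is down, but because they were allocated in advance, the client must reason about causal consistency itself rather than relying on the oracle's global ordering.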
Authors: Rodrigo Serra Inacio, Willian Saavedra Moreira Costa

tldr - powered by Generative AI

Cloud Metrics is a scalable and resilient platform for monitoring both the systems and environments of a bank. The key to building this platform was isolation and reducing noise between tenants. The main components used were Kubernetes, Prometheus, Grafana, and Alertmanager. The infrastructure was built on EKS and hosted in São Paulo, Brazil. Users access their metrics through Grafana and Prometheus instances. Each tenant has its own account and bucket to store its metrics.
Authors: Faseela K, Lin Sun

The presentation discusses using the Istio service mesh for multi-tenancy and how it can be configured with a single control plane or multiple control planes.
  • Istio service mesh is important for resource-saving and identity isolation in multi-tenancy models
  • Recent enhancements make it easy to configure using the revisions feature and discovery selectors
  • Multiple control planes allow for separate versions and lifecycle management for different applications
  • Mixed multi-tenancy models are possible depending on the organization's requirements
  • Argo CD can be used for deploying and syncing resources in the cluster
Authors: Tasha Drew, Fei Guo, Ryan Bezdicek, Adrian Ludwin

Join the maintainers and leaders of the upstream Kubernetes working group for Multi-Tenancy for an overview of the tools, documentation, tests, and capabilities available for sharing Kubernetes clusters between teams and users. We'll also save time for audience questions, so bring your multi-tenancy hopes, dreams, and woes!
Authors: Srinivas Malladi

Multi-tenancy for Argo Workflows and Argo CD at Adobe
  • Adobe's internal developer platform standardizes best practices and consolidates engineering efforts across various internal developer teams while providing a flexible CI/CD experience
  • GitOps is an architectural paradigm that continuously reconciles a defined (Git-declared) state with the live state of a running system
  • Argo CD is an example of GitOps tooling that supports tracking of Kubernetes manifests in Git and supports their deployment and synchronization to a namespace on a cluster
  • Argo Workflows is a workflow engine that can run CI/CD pipelines on a Kubernetes cluster
  • Multi-tenancy is achieved through the isolation of each component of developer CI/CD workflows and the restriction of application deployment with Argo CD AppProjects and RBAC
Authors: Bryan Boreham, Alvin Lin

Cortex is a time-series data store based on Prometheus. Cortex adds: scalability (run across dozens of servers to handle millions of samples per second); availability (if one server fails, work is redirected to others); multi-tenancy (store data from different groups or customers, segregated so a user from one tenant cannot see another tenant's data); and durability (use cloud stores such as S3 to reduce the chance of data loss). This session will provide an overview of Cortex, an update on recent news from the project, and a run-through of the top 5 tips for running Cortex in production.
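Cortex identifies tenants by the `X-Scope-OrgID` HTTP header on write and query requests; data stored under one tenant ID is invisible to every other tenant. A minimal sketch of scoping a request to a tenant (the endpoint URL below is a hypothetical example):

```go
package main

import (
	"fmt"
	"net/http"
)

// newTenantRequest builds a Cortex API request scoped to a single
// tenant. Cortex reads the tenant from the X-Scope-OrgID header, so
// each team or customer only ever sees its own series.
func newTenantRequest(method, url, tenantID string) (*http.Request, error) {
	req, err := http.NewRequest(method, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("X-Scope-OrgID", tenantID)
	return req, nil
}

func main() {
	// Hypothetical Cortex query-frontend address.
	req, err := newTenantRequest(http.MethodGet,
		"http://cortex.example.com/prometheus/api/v1/query?query=up", "team-a")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("X-Scope-OrgID")) // prints "team-a"
}
```

In practice the header is usually injected by an authenticating reverse proxy in front of Cortex rather than by each client, so tenants cannot simply claim another tenant's ID.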
Authors: Lukas Gentele

Multi-tenancy is a hot topic in the Kubernetes community. A lot of operators have started to think about lowering cost and consolidating workloads in large, multi-tenant clusters rather than creating 1000s of micro-managed, small clusters. Namespaces are a great way to separate tenants in shared clusters. But what if tenants need to install their own CRDs, run workloads across multiple namespaces or even require different versions of the Kubernetes API server? Virtual clusters are an exciting new approach that extends namespace-based multi-tenancy to address such advanced use cases. At its core, virtual clusters let you run Kubernetes clusters on top of other Kubernetes clusters by provisioning isolated Kubernetes control planes for each tenant (i.e. separate Kube API server, data store (etcd), controller manager etc). This talk will explain how virtual clusters work, show what implementations are available today, and demonstrate fascinating, real-world use cases for virtual clusters.
Authors: Juraci Paixão Kröhling

The versatility of OpenTelemetry Collector and its various deployment patterns
  • OpenTelemetry Collector can be used for tracing, metrics, and logs
  • Deployment patterns include fan out, normalizer, sidecar, multi-cluster, and multi-tenant
  • Collectors can be chained together and customized for specific needs
  • Mix and match components and collector instances for optimal deployment
Authors: Yuan Chen, Alex Wang

The presentation discusses the elastic quota and job queue components of the Kubernetes scheduler and their compatibility with various workload management systems.
  • The elastic quota and job queue components are part of the Kubernetes scheduler and have been extensively tested.
  • The components are compatible with various workload management systems and can be configured to meet specific needs.
  • The goal is to make the components production-ready and widely adopted.
  • The presentation mentions Alibaba and Apple as early adopters of the components.
  • The components can be used for scheduling multiple jobs at the same time and ensuring that resources are not exceeded.
  • The presentation also discusses the possibility of using the components for nomad-style scheduling and SLA-driven scheduling.
Authors: Jim Bugwadia, Tasha Drew, Fei Guo, Adrian Ludwin

Applications need multi-tenancy. Shared services need multi-tenancy. Internal users need multi-tenancy. Tenancy requires segmentation at all layers of the infrastructure and services stack, not to mention surrounding capabilities like chargeback, service priority, and cost optimization. Where is it all going? What is the future of multi-tenancy? Join the leads of the upstream working group for multi-tenancy to find out! We will discuss how we see users and enterprises leveraging multi-tenancy, the tools and capabilities our group and the rest of the Kubernetes upstream community have been building to make multi-tenancy … tenable … and answer audience questions.