Authors: Srinivasan Parthasarathy, Shubham Chaudhary
2022-10-27

You have a principled process for releasing your Kubernetes app that involves load testing, benchmarking, and validation of service-level objectives (SLOs). But will your app perform well when your cluster is subject to compute, memory, I/O, or network stress? In this talk, we will explore a novel approach that combines chaos injection for probing weaknesses in your Kubernetes infrastructure with load testing, benchmarking, and performance validation with SLOs for your app. The core thrust of our approach is flexibility combined with simplicity. Your app may be cluster-local or externally exposed, may implement an HTTP or a gRPC endpoint, may have been specified using built-in or custom Kubernetes resources, may use any type of horizontal or vertical autoscaling, may use any CD/GitOps process for deployment, and you may be interested in probing your cluster by injecting compute, memory, I/O, network, or any other type of chaos. Regardless of these variations, this talk will demonstrate a dead simple way to automatically launch the unified "chaos + performance validation" experiment whenever the app is updated, and automatically notify an event receiver with metrics and SLO validation results once the experiment is completed.
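A minimal sketch of what the load-test-and-validate half of such an experiment might look like, assuming a hypothetical cluster-local HTTP endpoint and illustrative SLO thresholds; the chaos-injection step is only indicated by a comment, and this is not the tooling shown in the talk:

```python
# Minimal sketch: load-test an HTTP endpoint while chaos is active,
# then validate SLOs on the collected metrics.
# The endpoint URL, request count, and SLO thresholds are illustrative only.
import time
import statistics
import concurrent.futures
import urllib.request

URL = "http://my-app.default.svc.cluster.local:8080/healthz"  # hypothetical app endpoint
REQUESTS = 200
SLO_P99_MS = 500.0      # example SLO: 99th-percentile latency under 500 ms
SLO_ERROR_RATE = 0.01   # example SLO: less than 1% errors

def probe(_):
    """Send one request and return (latency_ms, ok)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            ok = 200 <= resp.status < 300
    except Exception:
        ok = False
    return (time.perf_counter() - start) * 1000.0, ok

# In a real experiment a chaos tool would stress CPU/memory/network on the
# nodes here; this sketch only performs the load and SLO-validation part.
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(probe, range(REQUESTS)))

latencies = sorted(ms for ms, _ in results)
p99 = statistics.quantiles(latencies, n=100)[98]   # approximate p99
error_rate = sum(1 for _, ok in results if not ok) / len(results)

print(f"p99={p99:.1f} ms, error_rate={error_rate:.2%}")
print("SLOs satisfied" if p99 <= SLO_P99_MS and error_rate <= SLO_ERROR_RATE
      else "SLO violation detected")
```

In practice this check would be wrapped in a job that is launched on every app update and that posts the metrics and the pass/fail verdict to an event receiver.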
Authors: Kornilios Kourtis
2021-10-15

tldr - powered by Generative AI

The presentation discusses the importance of tail latency and overhead metrics in performance evaluation, as well as the need for system configuration and multiple experiments to increase confidence in results. The speaker also recommends various tools and resources for performance validation.
  • Tail latency becomes more important as scale grows (see the sketch after this list)
  • Consider overhead metrics such as CPU and memory utilization
  • Interpreting performance metrics correctly helps identify bottlenecks
  • System configuration should isolate systems to avoid unwanted interference
  • Multiple experiments increase confidence in results
  • Netperf, kubenetbench, and BPF tools are useful for benchmarking
  • Resources for performance validation include books by Brendan Gregg and the Linux kernel documentation
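A small sketch of the tail-latency and multiple-experiments advice above: compute p99 per run and report the spread across runs. The latency samples are synthetic here; in practice they would come from a benchmarking tool such as netperf.

```python
# Summarise tail latency (p99) per run and report spread across runs.
import statistics
import random

def percentile(samples, q):
    """Nearest-rank percentile of a list of latencies (ms)."""
    ordered = sorted(samples)
    idx = max(0, int(round(q / 100.0 * len(ordered))) - 1)
    return ordered[idx]

random.seed(0)
# Five repeated experiments with 10k synthetic latency samples each.
runs = [[random.lognormvariate(0, 0.5) for _ in range(10_000)] for _ in range(5)]

p99s = [percentile(run, 99) for run in runs]
print("p99 per run (ms):", [f"{p:.2f}" for p in p99s])
print(f"p99 across runs: mean={statistics.mean(p99s):.2f} ms, "
      f"stdev={statistics.stdev(p99s):.2f} ms")
```

A large spread across runs is itself a signal: it usually points to insufficient isolation or interference rather than to the system under test.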
Authors: Alex Chircop, Raffaele Spazzoli
2021-10-13

tldr - powered by Generative AI

The presentation discusses the role of the Technical Oversight Committee (TOC) in providing education and guidance to end-users in the storage space. It also highlights the TOC's involvement in the process of moving projects into the CNCF ecosystem and the importance of cloud-native disaster recovery.
  • The TOC's main function is to provide education and guidance to end-users in the storage space through white papers and project reviews
  • The TOC helps in moving projects into the CNCF ecosystem through due diligence reviews and outreach
  • Cloud-native disaster recovery is an alternative to traditional disaster recovery that uses active-active deployment and autonomous triggering of the disaster recovery procedure (a minimal trigger sketch follows this list)
  • Cloud-native disaster recovery requires a high level of maturity and testing to ensure its effectiveness
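A minimal sketch of what "autonomous triggering" could look like: poll the health of the primary site and promote the standby after repeated failures. The site URL, thresholds, and promotion step are illustrative and not taken from the talk.

```python
# Watchdog sketch: fail over automatically when the primary stops responding.
import time
import urllib.request

PRIMARY_HEALTH = "https://primary.example.com/healthz"  # hypothetical primary site
FAILURE_THRESHOLD = 3    # consecutive failed probes before failing over
PROBE_INTERVAL_S = 10

def healthy(url):
    """Return True if the health endpoint answers with a 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False

def promote_standby():
    # In a real cluster this would flip DNS/load-balancer weights or scale up
    # the standby deployment; here it is just a placeholder action.
    print("Primary unhealthy: promoting standby site")

failures = 0
while True:
    failures = 0 if healthy(PRIMARY_HEALTH) else failures + 1
    if failures >= FAILURE_THRESHOLD:
        promote_standby()
        break
    time.sleep(PROBE_INTERVAL_S)
```

The maturity requirement noted above applies directly here: thresholds, probe placement, and the promotion procedure all need regular testing before such a trigger can be trusted to act on its own.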
Conference:  Transform X 2021
Authors: Fei-Fei Li
2021-10-07

tldr - powered by Generative AI

The presentation discusses the creation of BEHAVIOR (Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments), a large-scale and diverse simulation environment and benchmark for robot learning.
  • The goal is to create a robot learning simulation environment and benchmark that mimics the real world as closely as possible
  • The environment is large-scale and diverse, with a thousand activities or tasks, 50 large-scale real-world scenes, 8 scene types, more than 2,000 object categories, 3,000 object models, and a large number of objects per activity
  • The tasks are complex, long-horizon, and multi-step, with standardized and flexible evaluation metrics (a toy metric sketch follows this list)
  • The approach is human-centered, drawing the candidate tasks people actually do from time-use surveys such as those from the U.S. Bureau of Labor Statistics and Eurostat
  • The simulation environment offers photorealistic rendering, kinematic and dynamic extended object states, flexible materials, deformable bodies, realistic fluids, thermal effects, realistic action execution, and object distributions
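A toy illustration of how a standardized metric for a long-horizon household task could be computed, assuming goal conditions expressed as boolean predicates over the final simulator state; the predicate names and world state are made up and this is not the benchmark's actual implementation.

```python
# Score an episode by the fraction of goal predicates left satisfied.
from typing import Callable, Dict

State = Dict[str, bool]  # toy world state: predicate string -> truth value

def task_success(state: State, goal: Dict[str, Callable[[State], bool]]) -> float:
    """Return the fraction of goal predicates satisfied in the final state."""
    satisfied = sum(1 for check in goal.values() if check(state))
    return satisfied / len(goal)

# Toy "set the table" task with three goal predicates.
goal = {
    "plate_on_table":  lambda s: s.get("on(plate, table)", False),
    "fork_on_table":   lambda s: s.get("on(fork, table)", False),
    "cup_not_in_sink": lambda s: not s.get("in(cup, sink)", True),
}

final_state = {"on(plate, table)": True, "on(fork, table)": True, "in(cup, sink)": True}
print(f"partial success: {task_success(final_state, goal):.2f}")  # -> 0.67
```

A fractional score like this rewards partial progress on multi-step activities, which is what makes the metric usable across tasks of very different length and difficulty.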