Authors: Liz Rice
2023-04-21

tldr - powered by Generative AI

The presentation discusses how Cilium and its ClusterMesh feature can simplify connectivity across multiple clusters in a cloud-agnostic way, enabling connectivity between services spread across clouds, load balancing requests across backends in multiple clusters, connectivity between Kubernetes and legacy workloads, mutually-authenticated, encrypted connections between services, and multi-cluster network policies. The presentation also addresses challenges related to IP address management, scale, and observability of multi-cluster networks, and how Cilium can help.
  • Cilium and its ClusterMesh feature can simplify connectivity across multiple clusters in a cloud-agnostic way
  • Connectivity between services spread across clouds
  • Load balancing requests across backends in multiple clusters
  • Connectivity between Kubernetes and legacy workloads
  • Mutually-authenticated, encrypted connections between services
  • Multi-cluster network policies
  • Challenges related to IP address management, scale, and observability of multi-cluster networks, and how Cilium can help
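The load-balancing point above can be made concrete. A minimal sketch, assuming a ClusterMesh is already established between the clusters: Cilium treats identically-named Services annotated as "global" as one logical service, spreading requests across backends in every connected cluster (the service name here is hypothetical).

```yaml
# Hypothetical Service, deployed with the same name in each cluster.
# The annotation marks it "global" so Cilium load-balances across
# backends in all ClusterMesh-connected clusters.
apiVersion: v1
kind: Service
metadata:
  name: checkout                      # hypothetical service name
  annotations:
    service.cilium.io/global: "true"
spec:
  selector:
    app: checkout
  ports:
    - port: 80
```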
Authors: Rey Lejano
2023-04-21

tldr - powered by Generative AI

The presentation discusses the challenges of edge computing and how to solve them using Kubernetes and Cloud Native principles.
  • Resource and physical constraints are challenges in edge computing
  • Kubernetes and Cloud Native principles can bring automation and consistency to edge devices
  • K3s is a Kubernetes distribution specifically designed for resource-constrained environments
  • K3s includes batteries-included features such as containerd, runC, CNI plugins, CoreDNS, and Klipper-lb (its built-in service load balancer)
  • The Linux Foundation's State of the Edge provides a vendor-neutral platform for edge computing research
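As a sketch of how the batteries-included approach stays flexible on constrained edge hardware: K3s reads a single config file, and bundled components can be switched off when a deployment replaces them (the choice of disabled components below is illustrative).

```yaml
# Hypothetical /etc/rancher/k3s/config.yaml: K3s ships containerd,
# CoreDNS, Klipper-lb, and Traefik by default, but lets you disable
# the pieces you replace to save resources at the edge.
disable:
  - servicelb   # Klipper-lb, the built-in service load balancer
  - traefik     # the bundled ingress controller
```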
Authors: Hung-Ying Tai, Vivian Hu
2023-04-21

tldr - powered by Generative AI

The presentation discusses the post-pandemic rise of lightweight microservices and the need for a lighter, more efficient way to manage them. The solution presented is the WebAssembly System Interface (WASI), used to build a more lightweight and efficient infrastructure.
  • The rise of lightweight microservices has created a need for a more efficient way to manage them
  • Current technology is not efficient enough for the large number of microservices required by modern applications
  • WebAssembly System Interface (WASI) provides a more lightweight and efficient infrastructure for managing microservices
  • WASI enables non-blocking sockets, supports domain name lookup, and extends the current API to allow for more functionality
  • WASI can be integrated with various databases and frameworks, including MySQL, MariaDB, PostgreSQL, and Redis
  • The use of WASI can lead to a more efficient and lightweight infrastructure for managing microservices
Authors: Roland Kool, Ricardo Rocha, Piotr Szczesniak, Christian Huening, Rania Mohamed
2023-04-21

tldr - powered by Generative AI

The challenges of securing and governing communication between services running in multiple clusters or different infrastructure can be addressed through the use of service mesh and gateway API solutions in a distributed, heterogeneous environment.
  • The shift from data centers on premises to cloud and multi-cloud and hybrid environment has created new challenges in securing and governing communication between services
  • Service mesh and gateway API solutions provide a way to address these challenges by offering a shared trust anchor, identity framework, and policies for selective communication
  • Examples of service mesh solutions include Linkerd and Istio, while the Kubernetes Gateway API offers a portable solution for multi-cluster communication
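The portability claim in the last bullet can be sketched with a small manifest. The Gateway API expresses routing as standard Kubernetes resources rather than vendor-specific annotations; the names below are hypothetical.

```yaml
# Hypothetical HTTPRoute: routing policy as a portable resource that
# works across conforming Gateway API implementations.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: payments-route          # hypothetical
spec:
  parentRefs:
    - name: shared-gateway      # a Gateway managed by the platform team
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /payments
      backendRefs:
        - name: payments
          port: 8080
```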
Authors: Barun Acharya
2023-04-20

Containers and orchestrators are being rapidly adopted worldwide for the advantages they provide, but cyber attacks on them have risen just as quickly. With the recent wave of zero days, there is an ever more pressing need to enforce security in containers. Even with static analyzers in place to scan for known vulnerabilities, a new vulnerability can pop up at any time, or you can be compromised at runtime, which may end in losses. We should reduce the attack surface as much as possible to limit these unknown unknowns. This talk is about how one can be a minimalist about their workloads, from choosing the right node images, to reducing dependencies in containers, to minimizing risks at runtime. We will explore optimized operating systems, RBAC, Docker Slim, network policies, Security Context, and tooling around Mandatory Access Control, and how they can help you on your path to becoming a minimalist with your workloads and securing them.
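As a sketch of the runtime-minimalism measures the abstract lists, a restrictive container securityContext drops everything a workload does not need (which fields to tighten further depends on the workload):

```yaml
# Hypothetical container-spec fragment: a restrictive securityContext
# shrinks the runtime attack surface.
securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop:
      - ALL
```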
Authors: Edidiong Asikpo
2023-04-20

tldr - powered by Generative AI

The use of Telepresence, an open source CNCF tool, has improved the developer experience, accelerated the inner dev loop, and reduced staging environment compute costs for cloud-native companies. Three case studies are presented to illustrate this point.
  • Building and testing microservice-based applications becomes difficult when running everything locally is no longer feasible due to resource requirements.
  • Moving to the cloud for testing is a solution, but synchronizing local changes with remote Kubernetes environments can be challenging.
  • Telepresence improves the developer experience by allowing developers to test their code changes against external dependencies without the fear of things going wrong or not matching up with production.
  • Telepresence eliminates the need to constantly build, deploy, and test images, which speeds up the inner dev loop.
  • The use of Telepresence has positively impacted the development workflow of companies such as Culture Code, Voice Flow, and a fintech company in the APAC region.
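The inner-dev-loop speedup above comes from intercepting cluster traffic instead of rebuilding images. An illustrative workflow (the workload name is hypothetical):

```
# Bridge the laptop and the cluster network
telepresence connect

# Route the cluster's traffic for the "orders" workload to a
# process running locally on port 8080
telepresence intercept orders --port 8080
```

Local code changes are then exercised by real in-cluster traffic without a build-push-deploy cycle.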
Authors: Vladimir Kovacik, Greg Smith
2023-04-20

A brief story of how we came to use Vitess/Kubernetes to power some of the biggest entertainment franchises on the planet. A few years ago we started thinking about: “What would it look like to run a database on Kubernetes?” We had just migrated most of our workloads from VMs to Linux system containers. This unlocked a lot of performance potential, while being a mostly drop-in replacement. As our fleet grew and the on-call burden started to rear its head, we did some requirements gathering for running these databases using our new Kubernetes-based platform. We ended up testing a parallel track using several open source technologies. Months into the testing there was a very clear winner which met our requirements: Vitess. We spent the last few months of the year building a proof of concept for one of our smaller services, and launched it with that year’s major titles. The success of this spurred an increased interest in Vitess across Demonware/Activision, leading to many larger services adopting it the following year. This talk will mainly be about the transitional phases of moving from our classic database stack to Vitess. We will give a high-level overview of the experience, what we learned, and some interesting points worth sharing with the wider community.
Authors: Brandon Smith, Howard Hao
2023-04-20

tldr - powered by Generative AI

The presentation discusses the challenges of bringing legacy applications into the modern cloud while reducing costs and the importance of effectively tuning and monitoring Windows containers for optimal performance.
  • Legacy applications need to be brought into the modern cloud to reduce costs and improve business value
  • Windows containers are more efficient than traditional Windows Server VMs
  • Effective tuning and monitoring of Windows containers is essential for optimal performance
  • Performance analysis should be easily accessible and updated guidance should be provided
  • Collaboration between businesses and Microsoft can help improve Windows performance
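As a sketch of the starting point for running such workloads, a pod spec fragment schedules a containerized Windows application onto Windows nodes and caps its resources, which the tuning and monitoring above then refine (the app name is hypothetical):

```yaml
# Hypothetical pod-spec fragment for a legacy Windows workload.
nodeSelector:
  kubernetes.io/os: windows          # schedule onto Windows nodes
containers:
  - name: legacy-app                 # hypothetical
    image: mcr.microsoft.com/windows/servercore/iis
    resources:
      limits:
        cpu: "2"
        memory: 4Gi
```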
Authors: Rodrigo Campos Catelin, Marga Manterola
2023-04-20

tldr - powered by Generative AI

The presentation discusses the benefits and challenges of using Kubernetes for Cloud Native applications.
  • Kubernetes can automate tasks and make applications more resilient
  • Automatic health checking and load balancing are important features of Kubernetes
  • Kubernetes is a complex abstraction layer that requires learning and debugging
  • Deploying applications as Kubernetes deployments involves writing YAML files that specify desired state
  • Connecting backend and frontend pods in Kubernetes requires service objects
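The last two bullets can be sketched together. A minimal example, with hypothetical names and image: a Deployment declares the desired state for backend pods, and a Service gives the frontend a stable name through which to reach them.

```yaml
# Hypothetical Deployment: desired state for the backend pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels: { app: backend }
  template:
    metadata:
      labels: { app: backend }
    spec:
      containers:
        - name: backend
          image: example/backend:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
# Service: a stable virtual endpoint the frontend can resolve by name.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```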
Authors: Emily Fox
2023-04-20

tldr - powered by Generative AI

The importance of succession planning and knowledge transfer in Cloud native projects
  • Cloud native projects are experiencing turnover and external factors that impact the community and bring innovation, but also strain and change on projects
  • Maintainers and contributors may experience burnout and imposter syndrome, making succession planning and knowledge transfer crucial
  • Scaling knowledge glaciers and distributing knowledge through documentation, community engagement, and contribution is key
  • Establishing trust within the community and contributing back to the project helps with succession planning and ensures diversity and innovation in the project
  • Designing communities into layers of leadership, including canopies of maintainers, sub canopies of technical leads, and ground cover of new contributions, is important for year-round blooms and successive plantings