Conference:  Defcon 31
Authors: good_pseudonym

PBX (Private Branch Exchange) and UC (Unified Communications) servers are the big communication brokers in enterprise environments, where they typically live on-prem. They handle everything needed for internal and external communications, including voice, video, conferencing, and messaging. But a broader scope also means a broader attack surface. In this talk, we'll give an overview of PBX/UC systems and the attack surface they expose, as well as several bugs that we recently found in two popular PBX/UC products. The journey includes deep-diving into Java's Runtime.exec(), decrypting encrypted PHP, bypassing license restrictions, pretending to be a phone, and (of course) getting some shells.
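The Runtime.exec() detour is worth a sketch. This minimal illustration (ours, not the speakers') shows why exec(String) matters to bug hunters: it splits the command string with a plain StringTokenizer on whitespace, so shell quoting and metacharacters are never interpreted the way a shell would:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

public class ExecTokenization {
    // Mimics how Runtime.exec(String) splits a command line:
    // a plain whitespace StringTokenizer, with no shell quoting rules.
    static List<String> tokenize(String command) {
        List<String> argv = new ArrayList<>();
        StringTokenizer st = new StringTokenizer(command);
        while (st.hasMoreTokens()) {
            argv.add(st.nextToken());
        }
        return argv;
    }

    public static void main(String[] args) {
        // Quotes are NOT honored: the single-quoted "script" is broken
        // into separate argv entries instead of one sh -c argument.
        System.out.println(tokenize("sh -c 'id > /tmp/pwn'"));
        // -> [sh, -c, 'id, >, /tmp/pwn']
    }
}
```

This is why exploitation through exec(String) often behaves very differently from classic shell injection, and why the exec(String[]) overload, where the attacker controls a single argv element, is a separate case to analyze.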
Authors: David Perez Rodriguez

Everybody either knows what Kubernetes is or has heard of it. It's a critical component of the scalable, highly available, distributed design of most cloud-based production systems. So why bother understanding how it behaves outside the cloud provider you commonly use? That was the situation in this project, which aimed to build an IoT system handling terabytes of data entirely on-prem due to business needs. As expected, things did not behave the same as in the cloud provider: lots of kube-api errors, missed heartbeats, and database operators rolling-restarting deployments because of them. But the root cause was well hidden from sight: etcd performance was poor on-prem. etcd requires sustained high performance, which depends on two factors: latency and throughput. In this on-prem environment, latency suffered from the hardware's initial design. How do you measure etcd performance? Benchmarks to the rescue! Learn about this experience: what a benchmark is, what latency and throughput are, and how to effectively measure etcd performance through benchmarks so you can correctly test your infrastructure whenever a brand-new Kubernetes cluster is created, particularly on-prem, and take advantage of the full potential of the Kubernetes environment.
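The latency that dominates etcd health is disk sync latency: every write is fdatasync'd to the write-ahead log before it is acknowledged. As a rough, self-contained sketch of that measurement (our own illustration; the talk's actual tooling and record sizes are not specified here), one can time small synced appends the way etcd's WAL does and look at the tail:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;

public class FsyncLatencyProbe {
    // Appends a small WAL-sized record, fdatasyncs it, and records the
    // per-operation latency. Record size and iteration count below are
    // illustrative assumptions, not etcd defaults.
    public static long[] probe(Path file, int iterations, int recordBytes)
            throws IOException {
        long[] micros = new long[iterations];
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            ByteBuffer buf = ByteBuffer.allocate(recordBytes);
            for (int i = 0; i < iterations; i++) {
                buf.clear();
                long start = System.nanoTime();
                ch.write(buf);
                ch.force(false); // fdatasync-like: flush data, not metadata
                micros[i] = (System.nanoTime() - start) / 1_000;
            }
        }
        Arrays.sort(micros);
        return micros; // sorted ascending; index into it for percentiles
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("wal-probe", ".bin");
        long[] lat = probe(tmp, 200, 2048); // 200 x 2 KiB synced appends
        System.out.printf("p50=%dus p99=%dus%n",
                lat[lat.length / 2], lat[(int) (lat.length * 0.99)]);
        Files.deleteIfExists(tmp);
    }
}
```

If the p99 here is high on the target disks, etcd will miss heartbeats regardless of how the cluster above it is tuned, which is exactly the kind of hidden on-prem bottleneck the abstract describes.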
Authors: Christopher Dziomba, Marcel Fest

tldr - powered by Generative AI

Deutsche Telekom shares their experience in implementing a network fabric for on-prem bare metal Kubernetes cloud that supports their internal Cluster-as-a-Service offering.
  • Deutsche Telekom faced challenges in implementing Kubernetes at scale and speed in a complex on-prem environment on bare metal.
  • The legacy network, and the accumulated legacy around it, were among their biggest obstacles.
  • They reimagined and implemented a network fabric for on-prem bare metal Kubernetes cloud that supports their internal Cluster-as-a-Service offering.
  • Their cloud is hosting clusters where some of their most demanding applications like 5G core are running.
  • They are building an internal GitOps-based Kubernetes cluster-as-a-service platform almost exclusively using open source components.
  • They want to reliably build Kubernetes clusters with well-defined APIs for their customers and integrate network functions into the platform.
  • They work upstream first and want to work with the community to build and contribute back.
  • They use BGP and IP fabrics to manage network traffic flow.
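To make the last point concrete, here is a hypothetical FRR-style sketch of how a leaf in an eBGP IP fabric might announce its node and pod prefixes to the spines. The ASN, interfaces, and prefixes are illustrative assumptions, not Deutsche Telekom's actual design:

```
! Hypothetical leaf config in an eBGP IP fabric (FRR syntax).
! Unnumbered eBGP sessions over the fabric uplinks via a peer-group.
router bgp 65101
 bgp router-id 10.0.0.11
 neighbor SPINES peer-group
 neighbor SPINES remote-as external
 neighbor swp1 interface peer-group SPINES
 neighbor swp2 interface peer-group SPINES
 address-family ipv4 unicast
  network 10.0.0.11/32     ! node loopback
  network 10.244.1.0/24    ! pod CIDR assumed for this node
 exit-address-family
```

In a setup like this, pod and service reachability rides on ordinary routing: each node's prefixes are advertised into the fabric, so traffic flow is managed by BGP path selection rather than overlay tunnels.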