Authors: Kim Wuyts
2023-02-15

tldr - powered by Generative AI

The presentation discusses the importance of threat modeling in ensuring privacy and security in software development. It highlights the different approaches and resources available for successful threat modeling.
  • Threat modeling is crucial for ensuring privacy and security in software development
  • There are different approaches and resources available for successful threat modeling, such as the Threat Modeling Manifesto, LINDDUN, and STRIDE
  • Threat modeling should be done early in the development cycle, but it's never too late to do it
  • Threat modeling should be a continuous process and the output should be used as input for subsequent steps
  • Threat modeling can be easy and fun, as illustrated by the example of analyzing a doll's privacy risks
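One common way to make an approach like STRIDE concrete is the "STRIDE-per-element" step: mapping each element type in a data-flow diagram to the threat categories that typically apply to it. A minimal sketch of that mapping (the function name and the simplified applicability chart are illustrative assumptions, not from the talk):

```python
# STRIDE threat categories, keyed by their usual one-letter codes.
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Simplified STRIDE-per-element chart: which categories commonly apply
# to each data-flow-diagram element type (assumption: abbreviated version).
APPLICABLE = {
    "external_entity": {"S", "R"},
    "process": {"S", "T", "R", "I", "D", "E"},
    "data_store": {"T", "R", "I", "D"},
    "data_flow": {"T", "I", "D"},
}

def threats_for(element_type: str) -> list[str]:
    """Return the STRIDE threat names applicable to a DFD element type."""
    return sorted(STRIDE[code] for code in APPLICABLE[element_type])
```

Enumerating threats element by element like this is what makes the process repeatable, and usable as input for subsequent steps.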
Authors: Rob Van der Veer
2023-02-15

tldr - powered by Generative AI

The presentation discusses the importance of treating AI systems as professional software and applying traditional software development life cycle approaches to ensure security and privacy. It provides 10 commandments for AI security and privacy, covering AI life cycle, model attacks, and protection.
  • AI systems should be treated as professional software and maintained using traditional software development life cycle approaches
  • 10 commandments for AI security and privacy include involving AI applications and data scientists in existing software security programs, documenting experimentation, implementing unit testing, and protecting source data and development
  • Model attacks can be carried out through data poisoning, adversarial examples, and model inversion, and can be prevented through techniques such as data sanitization and model robustness
  • Protection measures for AI systems include secure storage and access control for source data, encryption, and versioning
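To illustrate the adversarial-example attack class listed above, here is a hedged sketch (not from the talk; the toy logistic model and helper names are assumptions) of the fast gradient sign method, which nudges each input feature in the direction that increases the model's loss:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def predict(w: list[float], x: list[float]) -> float:
    """Probability of the positive class under a toy logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def fgsm(w: list[float], x: list[float], y: int, eps: float) -> list[float]:
    """Fast gradient sign method against the logistic model above.

    For cross-entropy loss, the gradient w.r.t. the input x is
    (sigmoid(w.x) - y) * w; stepping eps in its sign increases the loss.
    """
    err = predict(w, x) - y
    return [xi + eps * math.copysign(1.0, err * wi) for xi, wi in zip(x, w)]
```

Defenses such as data sanitization and robustness training aim to make exactly this kind of small, targeted perturbation ineffective.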
Authors: Anna Westelius, Sponsor: Lyft
2022-11-17

In this talk, we’ll discuss scaling security programs through technology and secure-by-defaults in an evolving engineering ecosystem. We’ll share lessons learned from “paving roads” for security over the years, how to find opportunities, create shared accountability with engineering partners, and ultimately reduce security risks.
Authors: Frederic Branczyk, Han Kang, Elana Hashman, David Ashpole
2021-10-13

tldr - powered by Generative AI

The presentation discusses the role of SIG Instrumentation in maintaining and improving observability in Kubernetes through metrics, logging, and auto-scaling.
  • SIG Instrumentation is responsible for maintaining and improving observability in Kubernetes through metrics, logging, and auto-scaling
  • Structured logging is being implemented to improve the logging infrastructure in Kubernetes
  • Projects such as kube-state-metrics, metrics-server, and prometheus-adapter are being maintained to generate and expose metrics for Kubernetes objects
  • Auto-scaling can be done based on any metric using projects such as prometheus-adapter
  • SIG Instrumentation reviews new additions and changes related to metrics to ensure high quality
  • Deprecated command line options related to log file handling will be removed in Kubernetes 1.26
  • SIG Instrumentation also maintains the klog logging implementation itself
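As a sketch of the "auto-scaling on any metric" point above, a HorizontalPodAutoscaler (autoscaling/v2) can target a Prometheus-derived metric exposed through prometheus-adapter; the workload and metric names here are hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app        # hypothetical workload
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # custom metric served via prometheus-adapter
      target:
        type: AverageValue
        averageValue: "100"
```

The HPA controller reads the custom metric through the custom metrics API that prometheus-adapter implements, so any metric Prometheus scrapes can drive scaling decisions.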
Conference: Transform X 2021
Authors: Aerin Kim
2021-10-07

tldr - powered by Generative AI

ML linters and other mechanisms enhance labeler productivity when labeling complex images and scenes, resulting in higher quality data for customers.
  • Quality is important in ML and affects precision, recall, and IoU (intersection over union).
  • Scale AI published four papers this year, including a dataset on Fitzpatrick skin type and a Reddit comment and reply dataset.
  • Scale AI's 3D annotation platform and ML-powered linters catch incorrect annotations.
  • ML linters and other mechanisms improve labeler productivity and result in higher quality data for customers.
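The IoU metric mentioned above can be sketched in a few lines; the box format (x1, y1, x2, y2) and function name are illustrative assumptions:

```python
def iou(box_a: tuple, box_b: tuple) -> float:
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

An ML linter for annotation quality could, for example, flag any labeled box whose IoU against a model prediction falls below a threshold, prompting a human review.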
Authors: Aaron Rinehart
2021-09-24

Hope isn’t a strategy. Likewise, perfection isn’t a plan. The systems we are responsible for are failing as a normal function of how they operate, whether we like it or not, whether we see it or not. Security chaos engineering is about increasing confidence that our security mechanisms are effective at performing under the conditions for which we designed them.

Through continuous security experimentation, we become better prepared as an organization and reduce the likelihood of being caught off guard by unforeseen disruptions. Security chaos engineering serves as a foundation for developing a learning culture around how organizations build, operate, instrument, and secure their systems. The goal of these experiments is to move security in practice from subjective assessment to objective measurement. Chaos experiments allow security teams to reduce the “unknown unknowns” and replace “known unknowns” with information that can drive improvements to security posture.

In this session, Aaron Rinehart, the O’Reilly author and pioneer behind security chaos engineering, will introduce the discipline, share best practices and experiences in applying it to create highly secure, performant, and resilient distributed systems, and show how you can implement it as a practice at your organization to proactively discover system weaknesses before they become an advantage for a malicious adversary.
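A minimal sketch of what turning a subjective assumption into an objective measurement might look like in code (hypothetical; not from the session): an experiment that verifies ports expected to be closed actually refuse connections, producing a pass/fail result per port:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; True only if something accepts it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def closed_port_experiment(host: str, expected_closed: list[int]) -> dict[int, bool]:
    """Return {port: passed}, where passed means the port was closed as expected.

    A failing entry is evidence the security control (firewall rule,
    service configuration) did not behave as designed.
    """
    return {p: not port_is_open(host, p) for p in expected_closed}
```

Run continuously, even a check this small replaces "we believe the firewall blocks that port" with a measured result that can drive improvements when it fails.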