Conference: DEF CON 31
Authors: Xavier Cadena
2023-08-01

Large Language Models are already revolutionizing the software development landscape. As hackers, we can only do what we've always done: embrace the machine and use it to do our bidding. There are many valid criticisms of GPT models for writing code, such as hallucinated functions, an inability to reason about architecture, training on amateur code, and limited context windows. None of these matters much when writing fuzz tests. This presentation will delve into the integration of LLMs into fuzz testing, providing attendees with the insights and tools necessary to transform and automate their security assessment strategies.

The presentation will kick off with an introduction to LLMs: how they work, their potential use cases and challenges for hackers, prompt-writing tips, and the deficiencies of current models. We will then provide a high-level overview of the purpose, goals, and obstacles of fuzzing, why this research was undertaken, and why we chose to start with 'memory safe' Python. From there we will explore efficient usage of LLMs for coding and the primary benefits LLMs offer for security work, paving the way for a comprehensive understanding of how LLMs can automate tasks traditionally performed by humans in fuzz testing engagements.

We will then introduce FuzzForest, an open source tool that harnesses the power of LLMs to automatically write, fix, and triage fuzz tests on Python code. A thorough discussion of how FuzzForest works will follow, with a focus on the challenges faced during development and our solutions. The highlight of the talk will showcase the results of running the tool on the 20 most popular open-source Python libraries, which identified dozens of bugs. We will end the talk with an analysis of efficacy and ask whether we'll all be replaced by a SecurityGPT model soon.

To maximize the benefits of this talk, attendees should possess a fundamental understanding of fuzz testing, programming languages, and basic AI concepts. However, a high-level refresher will be provided to ensure a smooth experience for all participants.
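The abstract does not reproduce FuzzForest's internals, but for readers unfamiliar with Python fuzzing, the sketch below shows the kind of harness an LLM might be asked to generate. It uses Google's Atheris coverage-guided fuzzer; the target library (the standard-library json module) and the harness shape are illustrative assumptions, not FuzzForest output.

```python
# Illustrative only: a minimal Atheris harness of the kind an LLM could be
# prompted to write. The target library (json) is a stand-in.
import sys

import atheris

with atheris.instrument_imports():
    import json

def TestOneInput(data: bytes) -> None:
    fdp = atheris.FuzzedDataProvider(data)
    text = fdp.ConsumeUnicodeNoSurrogates(4096)
    try:
        json.loads(text)
    except json.JSONDecodeError:
        pass  # expected parse errors are not bugs; crashes and hangs are

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```

Writing, repairing, and triaging harnesses like this one at scale is the part of the workflow the talk describes automating with an LLM.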
Authors: Kapil Thangavelu, Jorge Castro
2023-04-20

tldr - powered by Generative AI

Cloud Custodian is an open source rules engine for cloud management that allows users to define policies and take actions on resources (a minimal policy sketch follows the bullets below). The presentation covers updates on the project's development and roadmap for 2023, as well as the contribution process.
  • Updates on the project's development include new core maintainers, new providers, and improvements to policy authoring experiences
  • The roadmap for 2023 includes improving policy authoring experiences, adding policy tracing and debugging capabilities, and expanding built-in policy testing
  • The contribution process involves running everything from the Makefile, understanding the source tree layout, and running tests and metrics against the different providers
  • C7n-left currently only works with Terraform, but the team is working on adding support for CloudFormation
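To make the "define policies and take actions" point concrete, here is a minimal sketch of what a Custodian-style policy document can look like, parsed from Python purely for illustration. The policy name, filter, and action shown are assumptions; consult the Cloud Custodian schema before relying on them.

```python
# Illustrative only: a tiny Custodian-style policy, loaded with PyYAML just to
# show its shape. Field values are assumptions to verify against the Cloud
# Custodian documentation (normally you would run: custodian run -s out policy.yml).
import yaml  # pip install pyyaml

POLICY_DOC = """
policies:
  - name: stop-untagged-instances        # hypothetical policy name
    resource: aws.ec2                    # resource type the policy targets
    filters:
      - "tag:Environment": absent        # match instances missing the tag
    actions:
      - stop                             # action applied to matching resources
"""

policy = yaml.safe_load(POLICY_DOC)
for p in policy["policies"]:
    print(f"{p['name']}: {p['resource']}, "
          f"{len(p['filters'])} filter(s), {len(p['actions'])} action(s)")
```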
Authors: Kevin Patrick Hannon
2023-04-18

As Kubernetes matures, many users are exploring whether and how they should run multi-cluster workloads. Armada is a Sandbox project in the CNCF whose main focus is enabling batch processing across multiple Kubernetes clusters. Armada defines APIs for integration and has released a Python API; a primary goal of these APIs is to enable integrations with other open source projects. The users at G-Research wanted to use orchestration software to schedule multiple tasks on Armada, so the open source team decided to build an Airflow Operator that allows G-Research users to take advantage of Airflow's integrations. In this talk, we will briefly introduce Armada and our integration with Airflow. With this integration, Airflow is now able to schedule jobs on multiple Kubernetes clusters.
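As a rough illustration of the integration described above, the sketch below shows how an Armada job submission might sit inside an Airflow DAG. To avoid guessing at the real operator's signature, the Armada call is represented by a hypothetical placeholder function behind a plain PythonOperator; the actual integration ships a dedicated Armada operator.

```python
# Illustrative sketch only: a plain Airflow DAG whose task stands in for an
# Armada job submission. submit_to_armada() is a hypothetical placeholder.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def submit_to_armada(**context):
    # Placeholder: in the real integration this is where the Armada Python API
    # (or the Armada Airflow operator) would submit a batch job to a queue.
    print("submitting batch job to Armada queue 'default' (placeholder)")

with DAG(
    dag_id="armada_batch_example",       # hypothetical DAG name
    start_date=datetime(2023, 4, 1),
    schedule_interval=None,              # trigger manually for this sketch
    catchup=False,
) as dag:
    submit = PythonOperator(
        task_id="submit_armada_job",
        python_callable=submit_to_armada,
    )
```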
Authors: Sam Stepanyan
2023-02-16

tldr - powered by Generative AI

Nettacker: An Automated Penetration Testing Framework
  • Nettacker is a free and open-source automated reconnaissance and penetration testing tool
  • It can scan networks for vulnerabilities, discover expired SSL certificates, and find subdomains hosting vulnerable versions of content management systems
  • Nettacker can be used by both attackers and defenders, and has been helpful for bug bounty research
  • The tool uses YAML modules and is written in Python
  • Nettacker can be automated using GitHub Actions and Docker containers
  • Automated scans can be scheduled to run regularly and generate reports as artifacts
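As a rough sketch of the containerized, scheduled scans described in the last two bullets, the snippet below drives one scan from Python. The Docker image name and the CLI flags are assumptions made for illustration; verify them against the OWASP Nettacker documentation (or its --help output) before use, and in CI the same command would typically sit behind a GitHub Actions cron trigger.

```python
# Illustrative only: image name and CLI flags are assumptions; check the OWASP
# Nettacker docs before use. Scan only targets you are authorized to test.
import os
import subprocess

def run_scan(target: str) -> None:
    """Run one containerized Nettacker scan and keep the report as an artifact."""
    os.makedirs("results", exist_ok=True)
    subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", f"{os.getcwd()}/results:/results",
            "owasp/nettacker",             # assumed image name
            "-i", target,                  # assumed flag: scan target
            "-m", "port_scan",             # assumed flag: module to run
            "-o", "/results/report.html",  # assumed flag: report location
        ],
        check=True,
    )

if __name__ == "__main__":
    run_scan("example.com")
```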
Conference: CloudOpen 2022
Authors: Jaehyun Sim
2022-06-23

tldr - powered by Generative AI

The presentation discusses the challenges faced in managing a Python Package Index (PyPI) server in a cloud-native environment and explores different options for hosting a PyPI server.
  • The speaker discusses the challenges of managing a PyPI server in a cloud-native environment
  • The speaker explores different options for hosting a PyPI server, including public PyPI, self-hosted PyPI, and cloud-based PyPI solutions
  • The speaker emphasizes the importance of portability, security, resiliency, and speed in a PyPI hosting solution
  • The speaker shares an anecdote about the challenges of managing a tangled codebase with embedded machine learning models in multiple services
  • The speaker suggests separating the machine learning model portion of the codebase into different repositories and managing them separately as packages in a PyPI server
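As a concrete illustration of that last suggestion, the sketch below shows a minimal setup.py for a model library that has been split into its own repository and is ready to publish to a self-hosted index. The package name, dependencies, and index URL are invented placeholders.

```python
# Illustrative only: a minimal setup.py for a model package split out of the
# monolith. Names, versions, and the index URL are placeholders.
from setuptools import find_packages, setup

setup(
    name="acme-churn-model",              # hypothetical package name
    version="0.1.0",
    description="Standalone churn-prediction model for a private PyPI index",
    packages=find_packages(),
    install_requires=[
        "scikit-learn>=1.0",              # whatever the model code actually needs
    ],
    python_requires=">=3.8",
)

# Build and publish to the self-hosted index (URL is a placeholder):
#   python -m build
#   twine upload --repository-url https://pypi.internal.example.com/ dist/*
```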
Authors: Jeff Zemerick
2022-06-23

tldr - powered by Generative AI

Bringing NLP capabilities to Apache Solr through ONNX and OpenNLP
  • Apache OpenNLP is a Java-based NLP tool that has been around for over a decade and offers various capabilities such as tokenization, document classification, and named entity recognition
  • Apache Solr depends on Apache Lucene for search functionality, and Apache Lucene has a dependency on Apache OpenNLP for some NLP operations
  • The ONNX Runtime allows for the use of deep learning models across programming languages, architectures, and platforms, enabling the use of NLP services created in other languages
  • The speaker demonstrates how a deep learning model trained using PyTorch or TensorFlow can be used for inference from a Java search stack of Apache OpenNLP, Apache Lucene, and Apache Solr (a minimal export sketch follows this list)
  • The speaker discusses the challenges and relationships between OpenNLP, Lucene, and Solr, and provides resources for attendees to get started with these open source projects
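On the Python side, the export step referenced above might look something like the sketch below: a toy classifier is exported to an .onnx file that the Java stack (OpenNLP, Lucene, Solr) can then load through ONNX Runtime. The model architecture, tensor names, and shapes here are stand-ins, not the models from the talk.

```python
# Illustrative only: export a toy PyTorch classifier to ONNX so a Java stack
# can run it via ONNX Runtime. Architecture and names are stand-ins.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, vocab_size: int = 1000, num_labels: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 64)
        self.fc = nn.Linear(64, num_labels)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        pooled = self.embed(token_ids).mean(dim=1)  # average the token embeddings
        return self.fc(pooled)

model = TinyClassifier()
model.eval()

dummy_input = torch.randint(0, 1000, (1, 16))       # batch of 1, 16 token ids
torch.onnx.export(
    model,
    dummy_input,
    "classifier.onnx",                               # file handed to the Java side
    input_names=["token_ids"],
    output_names=["logits"],
    dynamic_axes={"token_ids": {0: "batch", 1: "sequence"}},
)
```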
Authors: Eden Federman
2021-10-13

tldr - powered by Generative AI

Effortless Profiling on Kubernetes
  • Profiling is the act of analyzing the performance of applications in order to improve poorly performing sections of code
  • Flame graphs are a popular way to visualize a profile (a minimal stdlib profiling sketch follows this list)
  • The challenges of profiling include runtime overhead and the need to modify code
  • kubectl-flame is a tool that aims to make profiling effortless by removing the need for code modifications and by profiling without requiring a new deployment
  • The future of profiling includes ephemeral containers, eBPF, and continuous profiling tools
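For readers new to the terms in the first two bullets, the sketch below is a minimal, stdlib-only illustration of what profiling produces: timing data attributed to functions, the same data a flame graph renders visually. It is not how kubectl-flame works; that tool attaches to already-running pods precisely so you can skip this kind of in-code instrumentation.

```python
# Illustrative only: stdlib profiling of a toy workload. The sorted stats are
# the kind of data a flame graph visualizes.
import cProfile
import io
import pstats

def slow_sum(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

def workload() -> int:
    return sum(slow_sum(20_000) for _ in range(200))

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())  # top 5 functions by cumulative time
```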
Authors: Himanshu Dwivedi
2021-09-24

Abstract: This talk will discuss one of the many methods used in the wild to target Shadow APIs and export large volumes of data with a few clicks of a button (or a few lines of Python code :). Attendees will learn about a very basic yet not-so-obvious problem in securing data, and how hackers are using creative methods to steal large volumes of data.