
Good Bot, Bad Bot: Characterizing Automated Browsing Activity

2021-09-24

Authors: Nick Nikiforakis


Abstract

As the web keeps increasing in size, the number of vulnerable and poorly-managed websites increases commensurately. Attackers rely on armies of malicious bots to discover these vulnerable websites, compromise their servers, and exfiltrate sensitive user data. It is, therefore, crucial for the security of the web to understand the population and behavior of malicious bots.

In this presentation, we will report on the design, implementation, and results of Aristaeus, a system for deploying large numbers of honeysites, i.e., websites that exist for the sole purpose of attracting and recording bot traffic. Through a seven-month-long experiment with 100 dedicated honeysites, Aristaeus recorded 26.4 million requests sent by more than 287K unique IP addresses, with 76,396 of them belonging to clearly malicious bots. By analyzing the type of requests and payloads that these bots send, we discover that the average honeysite received more than 37,753 requests each month, with more than 50% of these requests attempting to brute-force credentials, fingerprint the deployed web applications, and exploit large numbers of different vulnerabilities. By comparing the declared identity of these bots with their TLS handshakes and HTTP headers, we uncover that more than 86.2% of bots claim to be Mozilla Firefox or Google Chrome, yet are built on simple HTTP libraries and command-line tools (a minimal sketch of this consistency check appears below).

Outline: This talk is all about bot traffic on the web. The presentation will be broken up as follows:

- Background: What are web bots? What is the difference between benign and malicious bots? What are malicious bots after? (exploiting vulnerabilities, stealing backups, scraping, etc.)
- Discovering bots on our web applications: How can we differentiate bots from users? How can we differentiate between benign and malicious bots?
- Details about our main experiment: a network of 100 honeysites, running different types of web applications, for the sole purpose of attracting web-bot requests. How we built it, how we recorded data, and the different techniques we used for identifying bots (client fingerprinting, payload classification, TLS-stack fingerprinting, etc.)
- Results of the experiment: the number of bots, their geographical distribution, how many bots are malicious and how many are benign, and whether bot activity increases or decreases over time. Do bots run JavaScript? Do they use command-line tools, or are they instrumenting full-fledged browsers? We also show how our system-generated blocklist of malicious-bot IP addresses outperforms very popular OSINT lists (see the second sketch below).

Learning Objectives for Attendees

- Being able to describe what malicious bots are after
- Knowing multiple techniques for fingerprinting malicious bots
- Understanding the basics of deploying bot-catching infrastructure in their organization as a new source of blocklisting
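To make the identity check from the abstract concrete, here is a minimal, hypothetical sketch: it compares the browser family a client claims in its User-Agent header against the browser family its TLS fingerprint is known to come from. The JA3-style hashes and the lookup table are placeholders for illustration, not values or code from Aristaeus itself.

```python
# Hedged sketch: flag clients whose claimed browser does not match the
# browser family associated with their TLS fingerprint. The JA3-style
# hash values below are placeholders, not real digests.

KNOWN_BROWSER_FINGERPRINTS = {
    "placeholder-chrome-ja3-hash": "Chrome",
    "placeholder-firefox-ja3-hash": "Firefox",
}

def claimed_family(user_agent: str):
    """Return the browser family a client claims in its User-Agent."""
    if "Firefox/" in user_agent:
        return "Firefox"
    if "Chrome/" in user_agent:
        return "Chrome"
    return None

def is_impersonating(user_agent: str, ja3_hash: str) -> bool:
    """True when a client claims Chrome/Firefox but its TLS stack disagrees."""
    claimed = claimed_family(user_agent)
    actual = KNOWN_BROWSER_FINGERPRINTS.get(ja3_hash)
    # A client claiming a major browser while presenting an unknown or
    # mismatched TLS fingerprint is likely a simple HTTP library or
    # command-line tool wearing a browser User-Agent.
    return claimed is not None and actual != claimed

# A bot claiming Firefox while shaking hands like a command-line tool:
print(is_impersonating(
    "Mozilla/5.0 (Windows NT 10.0; rv:91.0) Gecko/20100101 Firefox/91.0",
    "unseen-fingerprint",
))  # -> True
```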
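Similarly, the second sketch illustrates how a honeysite's logs could be turned into an IP blocklist of the kind the results compare against OSINT lists. The log format and the two request signatures are assumptions made for the example, not the paper's actual classification rules.

```python
# Illustrative sketch: collect source IPs that sent at least one clearly
# malicious request to a honeysite, and emit them as a blocklist.
import re

# Hypothetical signatures for clearly malicious requests: credential
# brute-forcing endpoints and a classic path-traversal probe.
MALICIOUS_PATTERNS = [
    re.compile(r"POST /(wp-login\.php|xmlrpc\.php)"),  # brute-force attempts
    re.compile(r"\.\./\.\./"),                         # path traversal
]

def build_blocklist(log_lines):
    """Return the sorted set of IPs behind clearly malicious requests."""
    blocklist = set()
    for line in log_lines:
        # Assumed log format for the demo: "IP METHOD PATH"
        ip, _, request = line.partition(" ")
        if any(p.search(request) for p in MALICIOUS_PATTERNS):
            blocklist.add(ip)
    return sorted(blocklist)

logs = [
    "203.0.113.7 POST /wp-login.php",
    "198.51.100.2 GET /index.html",
    "203.0.113.9 GET /cgi-bin/../../etc/passwd",
]
print(build_blocklist(logs))  # -> ['203.0.113.7', '203.0.113.9']
```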

Materials: