Conference:  DEF CON 31
Authors: Corynne McSherry (Legal Director, Electronic Frontier Foundation); Cooper Quintin (Senior Staff Technologist, Electronic Frontier Foundation); Hannah Zhao (Staff Attorney, Electronic Frontier Foundation); Mario Trujillo (Staff Attorney, Electronic Frontier Foundation); Rory Mir (Associate Director of Community Organizing, Electronic Frontier Foundation)
2023-08-01

Electronic Frontier Foundation (EFF) is thrilled to return to DEF CON 31 to answer your burning questions on pressing digital rights issues. Our panelists will provide updates on current EFF work, including the fight against government surveillance and protecting creative expression, before turning it over to attendees to pose questions and receive insights from our panelists on the intersection of technology and civil liberties. This is a valuable opportunity to learn from policy experts and engage in a lively discussion rooted in the problems you face. This year you’ll meet: Corynne McSherry, EFF's Legal Director specializing in intellectual property and free speech; Hannah Zhao, staff attorney focusing on criminal justice and privacy issues; Mario Trujillo, staff attorney with expertise in privacy law; Rory Mir, Associate Director of Community Organizing; and Cooper Quintin, security researcher and public interest technologist with the EFF Threat Lab.
Conference:  DEF CON 31
Authors: Corynne McSherry (Legal Director, Electronic Frontier Foundation); Daly Barnett (Staff Technologist, Electronic Frontier Foundation); India McKinney (Director of Federal Affairs, EFF); Kate Bertash (Founder, Digital Defense Fund)
2023-08-01

In the year since the Supreme Court overturned federal legal protections for reproductive rights, people seeking, providing, and supporting reproductive healthcare are grappling with the challenges of digital surveillance. Multiple services and apps track our movements and communications, and that data can be used by law enforcement and private parties to police and punish abortion access. Lawsuits and prosecutions are already underway and are likely to increase as states continue to pass or expand anti-abortion laws and undermine legal protections for online expression and privacy. But the fight is far from over. At the state and federal level, lawmakers, activists, and technologists are taking steps to establish and shore up legal and practical protections for secure and private healthcare access. This panel brings together legal and security experts to lead a discussion about defending reproductive justice in the digital age: what has already been accomplished, what's coming, and how hackers can help. It will build on and update a discussion held last year, also led by EFF and DDF.
Conference:  Black Hat Asia 2023
Authors: Chrisando Ryan Pardomuan Siahaan, Andry Chowanda
2023-05-12

Have you ever wondered whether screen-sharing could pose a threat to your privacy? Or perhaps wondered whether it is truly safe to keep screen sharing active when typing passwords, even if they're masked on-screen? Think about it: during video meetings, we frequently share our screens, giving our audience a real-time view of the characters and symbols as we type them. Some of us don't even bother to stop screen sharing while typing passwords, believing that since the password is masked (hidden) on the screen, there is no potential threat to our privacy. However, while this behavior may not matter to human audiences, a computer vision model observing the screen-sharing session can gain a lot of information. It can determine the precise time a certain character is typed, how often we make mistakes in our typing, and even the delay between one character we type and the next. These metrics, unique to everyone, can be used to identify our characteristic typing behavior. This way, an adversary can easily impersonate a victim's typing behavior without the need to install additional software/hardware such as keyloggers. In this presentation, we'll unveil the exploitation algorithms to extract an individual's typing behavior from a recorded screen-sharing video. We'll also demonstrate a staggering 67% chance that an attacker can mimic a victim's typing behavior and deceive a keystroke biometric authentication system to steal the victim's access or identity, just by using a recorded screen-sharing video. Furthermore, we'll demonstrate how an attacker could possibly recover one's typed password by using the mimicked typing pattern. Finally, we'll highlight some recommendations on how to prevent our keystrokes from being mimicked and stolen, although we believe there isn't yet a silver-bullet approach that could completely eliminate the risks.
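The abstract does not disclose the speakers' extraction pipeline, but the downstream idea is easy to illustrate. Below is a minimal, hypothetical Python sketch that assumes keypress timestamps have already been recovered from video frames: it derives inter-key "flight times", builds a per-victim profile, and accepts or rejects a candidate typing sample. The samples, tolerance, and scoring rule are invented for illustration and are not the speakers' algorithm.

```python
import statistics

def flight_times(timestamps_ms):
    """Delays between consecutive keystrokes, derived from per-character timestamps."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def profile(samples):
    """Per-position mean and stdev of flight times across several typing samples."""
    per_position = list(zip(*[flight_times(s) for s in samples]))
    return [(statistics.mean(p), statistics.stdev(p)) for p in per_position]

def matches(candidate, enrolled, tolerance=2.0):
    """Accept if every flight time lies within `tolerance` standard deviations."""
    return all(abs(f - mu) <= tolerance * max(sd, 1.0)
               for f, (mu, sd) in zip(flight_times(candidate), enrolled))

# Hypothetical timestamps (ms) at which each character appears on the shared screen
victim_samples = [
    [0, 180, 340, 560, 700, 930],
    [0, 170, 350, 540, 710, 950],
    [0, 190, 330, 570, 690, 920],
]
enrolled = profile(victim_samples)

replayed_from_video = [0, 175, 345, 555, 705, 935]   # attacker mimics the cadence
robotic_paste       = [0, 50, 100, 150, 200, 250]    # uniform timing is rejected

print(matches(replayed_from_video, enrolled))  # True
print(matches(robotic_paste, enrolled))        # False
```

Comparing per-position flight times against an enrolled mean and standard deviation is only one simple form of keystroke-dynamics matching; real systems typically use richer features such as dwell times and error rates.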
Conference:  Black Hat Asia 2023
Authors: Guangdong Bai, Qing Zhang, Guangshuai Xia
2023-05-11

In recent years, most countries and territories have put in place strict regulations for user privacy protection. Checking and monitoring the privacy policy compliance of mobile applications thus has become essential for users, app developers and device manufacturers. Nonetheless, this is a challenging task, as modern mobile operating systems like Android contain multiple channels through which third-party apps can obtain sensitive information. Besides the official APIs regulated by the platform's permission system, apps can exploit other channels such as native calls, Java reflection, Binder services, WebView and even vulnerabilities. Existing techniques based on static and dynamic analysis often fail to cover all possible channels. Network traffic analysis is also ineffective when the sensitive data are only sent over the network after encryption. In this session, we will address this challenging task using a low-level detection method. Our work is inspired by the fact that almost all sensitive information is encoded into a String before it is passed to the application level. We thus hook the String constructor at the native level, where our approach is able to monitor and check all strings constructed on the mobile device. This strategy is straightforward yet comprehensive, as any string constructed from sensitive information can be monitored regardless of the channel a malicious app used to obtain it. We implement this approach in a tool and use it to analyze pre-installed apps on some Android devices. Our tool finds that many of them collect user information in many scenarios, such as clipboard contents and Wi-Fi information. Some apps even use previously unknown channels to obtain sensitive user information. Our investigation finds that these channels are caused by OEM manufacturers' improper control over the permissions of their customized APIs. We have submitted these issues to relevant manufacturers, who have acknowledged our findings.
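The speakers hook the String constructor at the native level; that tooling is not shown here. As a rough, hypothetical stand-in for string-level monitoring, the sketch below uses Frida's Python bindings to watch one string-producing call (StringBuilder.toString) in a single target app and report strings matching a sensitive pattern. The package name and the 'ssid' pattern are invented, and this approach is far noisier and less complete than the native hook described in the talk.

```python
import frida

# Frida script (JavaScript) injected into the target app. Hooking
# StringBuilder.toString only sees strings built through this one code path,
# unlike the talk's device-wide native hook on the String constructor.
JS = r"""
Java.perform(function () {
  var SB = Java.use('java.lang.StringBuilder');
  SB.toString.implementation = function () {
    var s = this.toString();            // call the original implementation
    var text = '' + s;                  // coerce to a plain JS string
    if (text.indexOf('ssid') !== -1) {  // hypothetical sensitive-data pattern
      send(text);                       // report the match to the Python side
    }
    return s;
  };
});
"""

def on_message(message, data):
    if message.get("type") == "send":
        print("[sensitive string observed]", message["payload"])

# Attach to a hypothetical app on a USB-connected device and load the hook.
device = frida.get_usb_device()
session = device.attach("com.example.target")
script = session.create_script(JS)
script.on("message", on_message)
script.load()
input("Hook installed; press Enter to detach\n")
session.detach()
```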
Conference:  Black Hat Asia 2023
Authors: Paul Gerste
2023-05-11

Privacy-oriented webmail providers like Proton Mail, Tutanota, and Skiff offer an easy way to secure communications. Even non-technical people can send end-to-end encrypted emails, which is especially useful for high-risk users such as journalists, whistleblowers, and political activists, but also privacy-seeking internauts. End-to-end encryption becomes irrelevant when there are vulnerabilities in the client. That's why we had a closer look and found critical vulnerabilities in Proton Mail, Tutanota, and Skiff that could have been used to steal emails, impersonate victims, and in one case even execute code remotely! This talk presents the technical details of these vulnerabilities. We will use three case studies to show how we found and exploited serious flaws with unconventional methods. Come and see an adventure about mXSS, parser differentials, and modern CSS coming to the rescue during exploitation. Warning: may contain exploit demos and traces of popped calcs!
Authors: Kim Wuyts
2023-02-15

tldr - powered by Generative AI

The presentation discusses the importance of threat modeling in ensuring privacy and security in software development. It highlights the different approaches and resources available for successful threat modeling.
  • Threat modeling is crucial for ensuring privacy and security in software development
  • There are different approaches and resources available for successful threat modeling, such as the Threat Modeling Manifesto, LINDDUN, and STRIDE
  • Threat modeling should be done early in the development cycle, but it's never too late to do it
  • Threat modeling should be a continuous process and the output should be used as input for subsequent steps
  • Threat modeling can be easy and fun, as illustrated by the example of analyzing a doll's privacy risks
Authors: Rob Van der Veer
2023-02-15

tldr - powered by Generative AI

The presentation discusses the importance of treating AI systems as professional software and applying traditional software development life cycle approaches to ensure security and privacy. It provides 10 commandments for AI security and privacy, covering AI life cycle, model attacks, and protection.
  • AI systems should be treated as professional software and maintained using traditional software development life cycle approaches
  • 10 commandments for AI security and privacy include involving AI applications and data scientists in existing software security programs, documenting experimentation, implementing unit testing, and protecting source data and development
  • Model attacks can be carried out through data poisoning, adversarial examples, and model inversion, and can be prevented through techniques such as data sanitization and model robustness (a toy adversarial-example sketch follows this list)
  • Protection measures for AI systems include secure storage and access control for source data, encryption, and versioning
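As a concrete, hedged illustration of the "adversarial examples" attack class listed above (not material from the talk), the sketch below perturbs an input to a toy logistic-regression model in the direction that increases its loss, the fast-gradient-sign idea. The weights, data, and epsilon are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)          # hypothetical trained weights of a linear classifier
b = 0.1
x = rng.normal(size=8)          # a benign input

def predict(x):
    """Sigmoid probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

y_true = 1.0
p = predict(x)

# Gradient of the binary cross-entropy loss w.r.t. the input is (p - y) * w
grad_x = (p - y_true) * w
eps = 0.5
x_adv = x + eps * np.sign(grad_x)   # FGSM-style step: move in the loss-increasing direction

print("clean prediction:      ", predict(x))
print("adversarial prediction:", predict(x_adv))   # pushed away from the true class
```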
Authors: Adarsh Nair, Greeshma M R
2022-11-17

The metaverse is the concept that, rather than just viewing digital content, users can immerse themselves in a space where the digital and physical worlds merge. Because of advances in digital technology, we are opening ourselves up to the possibility of being in a universe that is infinite. To mould this virtual environment in this new era of digital inquiry, it is necessary to make use of technology that focuses on privacy. However, just as there are inherent risks and security issues with the Internet as it exists today, there will be risks that need to be addressed as we move forward into a world of digital connection. Cybercriminals will inevitably be part of the metaverse, and attempts to steal people's personal information and identities will be made. Identity theft, unauthorized data collection, ransomware attacks, social engineering attacks, impacts on mental health and perception, and an increase in deepfakes are a few of the risks this paper presents. Identity theft could become even more prevalent in the metaverse unless strong security measures are enacted; it is already a multibillion-dollar industry, with the number of cases increasing by more than 50% over 2020's figures. Hackers can use virtual reality headsets and controllers to steal personal information such as fingerprints, iris scans, and facial geometry from the people who wear them. Ransomware attackers could deny you access to your bank accounts or other critical platforms. The metaverse requires us to give up more personal information than we are used to sharing on the internet today, and this greatly raises the risk. People can be psychologically manipulated into revealing private information through social engineering. Hackers wishing to sell personal information on the Dark Web could potentially profit from the vast amounts of personal data that will be stored in the metaverse. Because the metaverse is an immersive experience, manipulated or disturbing visual content spread by malicious actors can have a greater impact than content consumed via the current web. People's perception of the actual world can be affected by the foundational technologies in virtual and augmented reality, according to a study by Stanford University researchers. Deepfakes of metaverse avatars could become more convincing, a threat to a society that thrives on information consumption. As we move toward a world where nearly everything is done digitally, the risks of digital interaction will also increase. Usernames and passwords are no longer sufficient to prevent cyberattacks in a metaverse context; a comprehensive authentication solution can offer more secure interaction and a better user experience.
Authors: Lisa Nee
2022-11-17

Quantum computing is a fast-growing technology that brings both rewards and risks. In the wrong hands, threat actors could break encryption that would otherwise take weeks or months to crack. On the other side is quantum cryptography, which, while still in development, could notify both the sender and the recipient of any eavesdropping. That capability may ease privacy concerns about transfers of data to the US, which are subject to the US Patriot Act's provisions enabling government seizure of data without legal proceedings or notice. This discussion will introduce a high-level, basic understanding of quantum computing, international data transfer issues, and quantum cryptography as a potential privacy solution, and it will open the question of whether, if and when such technology becomes available, access to it is part of an individual's privacy right or instead poses a serious threat to national security and anti-terrorism efforts.
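The eavesdropping-notification property mentioned above comes from quantum key distribution: measuring a qubit in the wrong basis disturbs it, so an interceptor raises the error rate that the sender and recipient observe when they compare part of their key. The purely classical toy simulation below illustrates that statistic for a BB84-style exchange; the parameters are illustrative and this is not material from the talk.

```python
import random

def run_bb84(n_bits=2000, eavesdrop=False):
    """Return the observed error rate on the sifted key of a toy BB84 exchange."""
    random.seed(42)
    alice_bits  = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.randint(0, 1) for _ in range(n_bits)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [random.randint(0, 1) for _ in range(n_bits)]

    bob_results = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            e_basis = random.randint(0, 1)
            # Eve's measurement randomizes the bit whenever her basis is wrong,
            # and the qubit is re-sent in Eve's basis.
            bit = bit if e_basis == a_basis else random.randint(0, 1)
            a_basis = e_basis
        # Bob recovers the bit only when his basis matches the incoming one
        bob_results.append(bit if b_basis == a_basis else random.randint(0, 1))

    # Keep only the positions where Alice's and Bob's bases matched (sifted key)
    sifted = [(a, b) for a, b, ab, bb
              in zip(alice_bits, bob_results, alice_bases, bob_bases) if ab == bb]
    errors = sum(1 for a, b in sifted if a != b)
    return errors / len(sifted)

print("error rate without Eve:", run_bb84(eavesdrop=False))  # ~0%
print("error rate with Eve:   ", run_bb84(eavesdrop=True))   # ~25%, which reveals the tap
```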
Conference:  Transform X 2021
Authors: Chris Hazard
2021-10-07

tldr - powered by Generative AI

The presentation discusses the importance of privacy in data synthesis and the use of synthetic data to enhance privacy while unlocking the value of data. It also highlights the challenges and potential risks associated with synthetic data and the need for proper application of privacy techniques.
  • Privacy affects behavior and is crucial for building trust and value in a brand
  • Synthetic data can be used to unlock the value of data while maintaining privacy
  • Proper application of privacy techniques is necessary to avoid potential risks and challenges associated with synthetic data
  • Synthetic data can be generated using various techniques such as Bayesian networks and GANs
  • Synthetic data sets should be generated with distributions that have the same analytic outcome as the original data (a toy sketch follows this list)
  • Synthetic data sets should be generated with caution to avoid leaking private information
  • Synthetic data sets can be generated multiple times with different levels of fidelity as long as privacy is maintained
  • Validation of privacy and value is necessary when using synthetic data
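As a minimal, hypothetical sketch of the "same analytic outcome" point flagged in the list above, the snippet below fits a simple parametric model (a multivariate Gaussian) to an invented dataset, samples a synthetic dataset from it, and checks that an example analysis, the correlation between two columns, comes out roughly the same. Richer generators (Bayesian networks, GANs) and the privacy validation the talk calls for are not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

# Pretend this is the sensitive original data: two correlated numeric columns
original = rng.multivariate_normal(mean=[40, 900],
                                   cov=[[80, 300], [300, 40000]],
                                   size=5_000)

# "Train" the generator: estimate the joint distribution's parameters
mu = original.mean(axis=0)
cov = np.cov(original, rowvar=False)

# Release samples drawn from the fitted model instead of the real rows
synthetic = rng.multivariate_normal(mean=mu, cov=cov, size=5_000)

# Check that an example analysis gives roughly the same answer on both datasets
print("correlation (original): ", np.corrcoef(original, rowvar=False)[0, 1])
print("correlation (synthetic):", np.corrcoef(synthetic, rowvar=False)[0, 1])
```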