Preventing an OWASP Top 10 in the world of AI


Author: Aaron Ansari


Why secure design, logging and monitoring, and strong authentication matter in AI technology
  • API breaches can lead to manipulation of AI engines, which can be dangerous in financial and loan decisions
  • Insecure design and injection can compromise AI engines
  • Broken authentication can lead to unauthorized access and manipulation of AI engines
  • Logging and monitoring are crucial for compliance and abuse prevention
  • Human involvement and fine-tuning are necessary for effective decision-making in AI technology
API breaches can allow attackers to manipulate AI engines, which is especially dangerous in financial and lending decisions. For example, if an AI engine denies a loan application and the output simply states that the loan was not approved, a human reviewer has no way of knowing why. This lack of transparency is risky in itself and can create compliance problems.
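One way to address the transparency gap described above is to log each decision together with the reasons behind it. The sketch below is a minimal, hypothetical illustration (the rule thresholds, field names, and `decide_loan` function are assumptions for the example, not part of the talk): a toy stand-in for an AI loan model that emits a structured audit record a human reviewer or compliance team could inspect.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("loan_audit")

def decide_loan(applicant: dict) -> dict:
    """Toy rule-based stand-in for an AI loan model: returns a decision
    plus the reasons behind it, so a reviewer can see *why* it was made."""
    reasons = []
    if applicant["credit_score"] < 640:
        reasons.append("credit_score below 640")
    if applicant["debt_to_income"] > 0.43:
        reasons.append("debt-to-income ratio above 43%")
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant["id"],
        "decision": "denied" if reasons else "approved",
        "reasons": reasons,  # the transparency a bare approved/denied printout lacks
    }
    audit_log.info(json.dumps(record))  # structured entry for compliance review
    return record

result = decide_loan({"id": "A-1001", "credit_score": 610, "debt_to_income": 0.5})
```

A real AI model's internals are far less legible than these rules, but the pattern is the same: whatever rationale the system can surface should be captured in the audit log at decision time, not reconstructed afterwards.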


Abstract: According to McKinsey & Company, companies that fully absorb AI could double their cash flow by 2030. As AI continues to be deployed in complex settings (healthcare, transportation, and financial services), policy makers have warned against the potential abuse of AI and ML for cybercriminals' gain. At the same time, the cybersecurity community has highlighted the benefits of using these algorithms to identify and defend against threats by automating the detection of and response to attempted attacks.

To prevent a future where OWASP releases a Top 10 for AI threats, we need to broaden the conversation around how AI systems can themselves be secured, not just how they weaken or augment data and network security. In this session, the speaker will present the benefits of utilizing this emerging technology while illustrating some of its vulnerabilities. He will demonstrate how a simple AI chatbot, like those used by so many companies today, can be easily manipulated, and he will offer suggestions for protecting the algorithms from being compromised.

The conversation will include practical ideas on how an organization should structure its AI program, including:
  • Whether to utilize Human In The Loop (HITL) to ensure that a person controls when to start or stop any action performed by an AI system
  • How best to lock down AI based on data classification policies
  • Why it is important to analyze log data in real time to provide AI threat monitoring, event correlation, and incident response
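The Human In The Loop idea mentioned in the abstract can be sketched as a simple approval gate that sits between the AI system and any action it wants to perform. This is an illustrative sketch only; the `hitl_gate` function, the risk labels, and the action names are assumptions for the example, not details from the talk.

```python
from typing import Callable

def hitl_gate(action: str, risk: str, approver: Callable[[str], bool]) -> bool:
    """Return True only if the action is allowed to run.

    Low-risk actions proceed automatically; anything else requires an
    explicit yes from a human approver (e.g. a review queue or UI),
    so a person controls when the AI system starts or stops an action."""
    if risk == "low":
        return True
    return bool(approver(action))

# A human reviewer who rejects everything, standing in for a real review UI
deny_all = lambda action: False

low_risk_ok = hitl_gate("answer_faq", "low", deny_all)        # True: runs without review
high_risk_ok = hitl_gate("transfer_funds", "high", deny_all)  # False: blocked by the human
```

The design point is that the gate, not the model, decides whether an action executes: even a manipulated AI engine cannot carry out a high-risk action without a person signing off.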