Wednesday
Room 2
11:40 - 12:40
(UTC±00)
Talk (60 min)
"Looks Good to Me": A Practical Guide to Handling AI-Generated Code
AI coding assistants like GitHub Copilot, ChatGPT, and Cursor are reshaping how we build software—and open source is no exception. These tools can now generate code, submit pull requests, and even review and merge them automatically. But what’s the cost?
Open source maintainers are increasingly overwhelmed by “almost correct” AI-generated PRs that introduce subtle bugs, create security vulnerabilities, or fail to follow contributor guidelines. Meanwhile, the rise of AI-generated bug bounty reports (“AI slop”) risks burying maintainers and undermining the spirit of responsible disclosure, potentially pushing projects to abandon their bug bounty programs altogether.
In this talk, we’ll explore a practical, security-first framework for using AI in software development and contribution workflows. I will cover guidelines from well-respected communities such as the Linux Kernel, OpenSSF, and OWASP, along with real-world (non-fictional) practices from industry leaders. We will cover how to craft AI prompts with security and compliance in mind, governance templates for managing AI-generated contributions, and strategies for handling AI-generated vulnerability reports without shutting down your bug bounty program.