Wednesday
Room 1
09:00 - 10:00
(UTC±00)
Talk (60 min)
Keynote: The dangers of probably-working software
Software used to be predictable. You could trace the logic, reason about behaviour, and prove the results. Better tools have made us faster and allowed us to build more with less effort. But the further we step away from the code, the less control we really have.
This is not a new problem, but it's more relevant than ever. Generative AI has dropped the barrier to entry dramatically, and it's never been easier to produce probably-working software with a single prompt.

So how do we avoid sleepwalking into brittle, opaque systems that only appear correct? When is "good enough" actually good enough? And when the result always looks right, how do we know when to step in?

Tags: AI, DevOps, Testing, Tools, Ethics, People