
Only 7.5%.
That’s all it took to win the world’s newest AI coding challenge.
Meet Eduardo Rocha de Andrade — a Brazilian prompt engineer who just bagged $50,000 for coming first in the K Prize, an ambitious AI coding competition launched by the nonprofit Laude Institute. But here’s the twist: Eduardo didn’t need to score 90%. Or even 50%. He clinched the prize by answering just 7.5% of the questions correctly.
That’s wild, right?
Now before you panic about the future of AI engineers, here’s what makes the K Prize different. It wasn’t just about spitting out code; it tested how well AI models handle real-world software bugs and development issues pulled straight from GitHub. And to keep things fair, the organizers only included issues reported after March 12. That means no model had a chance to “cheat” by training on the questions beforehand.
It’s a benchmark, yes. But a hard one. And deliberately so.
According to Andy Konwinski, co-founder of Databricks and Perplexity (and one of the brains behind this challenge), the K Prize is designed to level the playing field. Big labs with endless compute? Sorry, not today. This challenge runs offline, favors smaller models, and demands raw, real-world problem-solving muscle, not just data memorization.
And now, there’s a bigger prize on the line: $1 million to the first open-source model that can hit 90% accuracy.
Eduardo’s win reminds us: AI might be loud, fast, and shiny right now, but it still has a long way to go before it replaces your neighborhood developer. The winning score was 7.5%. Not 75%. Not 90%.
Reality check? Delivered.
Because in a world full of AI hype, what we really need… is perspective.