🤔 Interesting read-up on how to incorporate coding agents and testing into a software project:
I proposed “vibe engineering” as the grown up version of vibe coding, where expert programmers use coding agents in a professional and responsible way to produce high quality, reliable results.
Interview by Dwarkesh Patel with Richard Sutton on LLMs, AGI and why he believes AGI will not be born from using LLMs.
In his essay The Bitter Lesson, he argues for not building AI from mimicking human behavior in our systems: "breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning."
When a programmer runs the hacked version of NX, the malware drops the exploit into their GitHub and runs that code. The malware stole a lot of people’s login keys and, apparently, their crypto wallets.
Here’s the novel bit — the malware code doesn’t steal your logins or crypto directly. Instead, it sends a prompt to Cursor, Claude Code, or any other AI coding bot on your computer, and it tells them to steal your stuff.
There is a bit of schadenfreude in the article, but the NX-case makes a great cautionary tale.
Would love to know the source for a ChatGPT answer. Wondering what this means for Google and StackOverflow. Would be great if we could discuss further on these answers to improve our collective understanding... 🤔