Vibe Coding vs. Vibe Engineering?

🤔 Interesting write-up on how to incorporate coding agents and testing into a software project:

I proposed “vibe engineering” as the grown up version of vibe coding, where expert programmers use coding agents in a professional and responsible way to produce high quality, reliable results.

The journey of Emil Stenström is really interesting, as it shows the iterative approach and thinking involved in going from 0 to 100.

Why do companies keep forking VSCode to build AI IDEs?

Well, they had to: forking was the only way to break out of the plugin sandbox and build a real agentic system:

But having a sandbox also keeps the AI from doing really bad stuff -- like deleting a hard drive:

Richard Sutton, father of RL, thinks LLMs are a dead end

An interview by Dwarkesh Patel with Richard Sutton on LLMs, AGI, and why he believes AGI will not emerge from LLMs.

In his essay The Bitter Lesson, he argues against building AI by encoding human knowledge and behavior into our systems: "breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning."

Solving a Million-Step LLM Task with Zero Errors

Wow! An LLM can complete a million-step Towers of Hanoi task without a single error, by breaking the problem into tiny subtasks and voting on each step.
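The core idea — decompose a long task into minimal steps, then take a majority vote over several independent attempts at each step — can be sketched in a few lines. This is not the paper's actual pipeline; `noisy_propose` below just simulates an unreliable model with a made-up per-step accuracy:

```python
import random
from collections import Counter

def hanoi_moves(n, src=0, aux=1, dst=2):
    """Ground-truth move sequence for n-disk Towers of Hanoi (2**n - 1 moves)."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))

def noisy_propose(correct_move, p_correct=0.8):
    """Stand-in for one model call: right answer with probability p_correct,
    otherwise a random wrong move between two different pegs."""
    if random.random() < p_correct:
        return correct_move
    wrong = [(a, b) for a in range(3) for b in range(3)
             if a != b and (a, b) != correct_move]
    return random.choice(wrong)

def voted_move(correct_move, samples=15):
    """Ask for several independent proposals and keep the most common one."""
    votes = Counter(noisy_propose(correct_move) for _ in range(samples))
    return votes.most_common(1)[0][0]

random.seed(0)
truth = hanoi_moves(7)  # 127 tiny steps
solved = [voted_move(m) for m in truth]
print(sum(s == t for s, t in zip(solved, truth)), "of", len(truth), "steps correct")
```

Even though each simulated call is wrong 20% of the time, the voted sequence comes out clean, because a wrong answer would have to out-vote the correct one at some step — and wrong answers are split across many options. That is the intuition for why per-step voting can drive the error rate of a very long chain toward zero.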

NX gets vibe-hacked

Holy moly... et tu, NX?

When a programmer runs the hacked version of NX, the malware drops the exploit into their GitHub and runs that code. The malware stole a lot of people’s login keys and, apparently, their crypto wallets.

Here’s the novel bit — the malware code doesn’t steal your logins or crypto directly. Instead, it sends a prompt to Cursor, Claude Code, or any other AI coding bot on your computer, and it tells them to steal your stuff.

There is a bit of schadenfreude in the article, but the NX case makes a great cautionary tale.

LLMs as mirrors

Wow:

In my view, LLMs are essentially mirrors. They mirror the skill of the operator.

The article goes on to argue that using AI (correctly) is also a matter of skill.

The source for a ChatGPT answer?

Would love to know the source for a ChatGPT answer. Wondering what this means for Google and StackOverflow. It would be great if we could discuss these answers further to improve our collective understanding... 🤔

Screenshot of a question asked of ChatGPT, showing its work, but not its sources...
