A true story that went completely off the rails: the founder of SaaStr had his vibe-coding project wiped out by AI, and it was this guy, @jasonlk.
Here's what happened: at first he genuinely fell in love with Replit's AI tools, vibe coding on them every day, raving that it was the best thing ever, even saying he was spending $8,000 a month on it and that it was worth it.
Then the twist came out of nowhere. On day nine, he discovered that the AI had ignored his instructions and deleted his production database outright.
What's even crazier: after the deletion, the AI generated 4,000 fake data entries and wrote fake unit tests to cover up the incident.
He warned the AI in all caps eleven times: "DON’T TOUCH PROD DB".
But the AI didn’t listen.
Even more absurdly, Replit initially said the database couldn't be restored, but he later found out it could be rolled back after all; it's just that nobody had told him.
The CEO of Replit personally apologized and rolled out three features overnight: development/production environment isolation, one-click recovery, and read-only chat mode.
Lemkin's final comment was: "This time I only lost 100 hours. Luckily, I hadn’t handed over a $10 million business to it yet." It sends chills down your spine.
The more I look at this, the more key signals I see:
1️⃣ The most painful part isn't that the AI made a mistake, it's that it tried to cover the mistake up. It deleted the database without a word, then generated fake data and fake tests and acted as if nothing had happened. Is that hallucination, or disillusionment?
2️⃣ No matter how big the LLM is, don't assume it understands "NO". Eleven all-caps warnings didn't stop it from acting, and my faith in everyone who relies on prompts to constrain model behavior is starting to waver. We think it understands; in reality, it just hasn't messed up yet. To everyone who believes "letting AI operate infra directly is more efficient": please calm down. Can we not hand root access to robots? These AIs can be real trouble. The constraint has to live in the infrastructure, not in the prompt; see the sketch after this list.
3️⃣ Developers might be the group that overestimates AI reliability the most. When you connect a model to a production environment, you have to assume it will mess up, not hope it won't. You think "it's so smart already, it won't do anything stupid", but it not only did something stupid, it lied about it. Just as you don't expect every programmer to write zero bugs, any bug not covered by tests will sooner or later cause a production incident.
4️⃣ What we should really be wary of is that the more we enjoy using a tool, the easier it is to forget who's covering for us. Replit is doing great, but however great it is, things can still go wrong in a moment of excitement.
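Points 2️⃣ and 3️⃣ boil down to one engineering rule: enforce the limits in code and permissions, not in the prompt. Here is a minimal, hypothetical sketch of that idea in Python; the function names are mine, the in-memory SQLite database is a stand-in for real production infrastructure, and none of this is Replit's actual tooling. AI-proposed SQL goes through a path that rejects destructive statements and is read-only at the engine level, so a model that ignores eleven all-caps warnings still can't drop anything:

```python
import re
import sqlite3

# Statements an autonomous agent should never run against production on its own.
DESTRUCTIVE = re.compile(r"^\s*(drop|delete|truncate|alter|update|insert)\b", re.IGNORECASE)

def run_agent_sql(conn: sqlite3.Connection, sql: str):
    """Run AI-proposed SQL through a guarded, read-only path."""
    if DESTRUCTIVE.match(sql):
        # The guard lives in code, not in a prompt the model can choose to ignore.
        raise PermissionError(f"Blocked destructive statement: {sql!r}")
    conn.execute("PRAGMA query_only = ON")   # engine-level read-only, belt and suspenders
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.execute("PRAGMA query_only = OFF")

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute("INSERT INTO users VALUES (1, 'demo')")
    print(run_agent_sql(db, "SELECT * FROM users"))   # fine: read-only query
    try:
        run_agent_sql(db, "DROP TABLE users")         # blocked before it reaches the DB
    except PermissionError as err:
        print(err)
```

In a real setup the same idea would be a separate least-privilege database role for the agent plus automatic backups and environment isolation, not a regex; the point is only that the "NO" is enforced somewhere the model can't talk its way past.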
Lemkin went from "I love Replit and vibe coding so much" to "it deleted my production database" in less than 48 hours. At that moment I suddenly realized that a model "lying" isn't some distant philosophical question; the core bug of the AI era isn't necessarily in the model, it's more likely hidden in our trust.