Chernobyl Wake-Up Call: AI Risks

Okay, so about those Studio Ghibli memes that blew up this week…
Your social media was probably flooded with those memes, right? The ones using OpenAI’s super cool new GPT-4o image generator, all styled like classic Studio Ghibli films (think *Spirited Away* or *Howl’s Moving Castle*).
You know the ones – Aussie breakdancer Raygun doing her killer kangaroo move, that totally meme-able shot of Ben Affleck looking super glum with a cigarette, and Vitalik Buterin in his full gangster persona. Plus, thousands more, all rocking that undeniably cute Studio Ghibli vibe. But, and there’s a big but, there’s a definitely less cheerful side to this whole meme trend.
Collection by Zeneca
Turns out, these memes popped up just as a video from way back in 2016 resurfaced. It shows Studio Ghibli legend Hayao Miyazaki getting a look at an early version of OpenAI’s image generation tech. And let’s just say, he wasn’t impressed. Like, *really* not impressed. He actually said he was “utterly disgusted” by it and that he’d “never, ever want to use this technology in my work.” Ouch.
Miyazaki went on to say that “a machine that draws pictures like people do” was “an awful insult to life.” PC Mag even reported back then that he said, pretty sadly, “We humans are losing faith in ourselves.” Deep stuff.
Now, some might just brush him off as being old-fashioned or anti-tech. But, hold on a minute. That totally ignores the insane level of detail and pure craftsmanship that goes into every single Studio Ghibli movie. Seriously, these films can have up to 70,000 hand-drawn images, all painted with watercolors. Get this: just a four-second clip of a crowd scene in *The Wind Rises* took a single animator a staggering *15 months* to complete.
Let’s be real, the chances of anyone funding animators to spend a year and a quarter on a four-second clip seem pretty slim when an AI can whip up something “similar” in, well, seconds. But here’s the thing: if humans stop creating new, original art, AI tools will only be able to recycle the past, remixing what’s already been done, instead of making anything genuinely new. At least, that’s the case until we get to true Artificial General Intelligence (AGI).
Vitalik (Terence Chain)
Raygun (Venture Twins)
Ben Affleck (Venture Twins)
Humans need a ‘modest death event’ to really get AI risks, says former Google CEO
Switching gears a bit… Ever wondered what it would take for everyone to *really* understand the potential dangers of advanced AI, or Artificial General Intelligence (AGI)? Well, former Google CEO Eric Schmidt has a pretty… intense idea. He thinks humanity might need a “modest death event” to truly wake up to the existential risks.
Yep, you read that right. “Inside the industry, there’s a worry that people just don’t get this,” he said during a talk at the recent PARC Forum, as spotted by X user Vitrupo. “And we’re going to have something reasonably — and I don’t know how to say this in a way that’s not cruel — a modest death event. Something like a Chernobyl, you know, something that will scare everyone enough to actually understand this stuff.” Yikes. Schmidt pointed out that huge tragedies in history, like the atom bombs dropped on Hiroshima and Nagasaki, were what really hammered home the deadly threat of nuclear weapons. That led to the whole Mutually Assured Destruction idea, which, so far, has stopped the world from blowing itself up. “So, we’re going to have to go through a similar kind of process, and I’d really rather it happen *before* a massive disaster with terrible harm, rather than after it,” he said. Heavy stuff.
Eric Schmidt on AI safety: A “modest death event” (Chernobyl-level) might be the trigger for public understanding of the risks.
He parallels this to the post-Hiroshima invention of mutually assured destruction, stressing the need for preemptive AI safety.
“ I’d rather do it… pic.twitter.com/jcTSc0H60C
— vitruvian potato (@vitrupo) March 15, 2025
Accent on change
Okay, lightening the mood a little… AI Eye’s peaceful evening got a bit of a jolt this week thanks to a cold call. It was from a recruitment agency, you know the type, and the very heavily accented Indian woman on the other end was checking references for a… shall we say, *slightly* less-than-perfect former journalist from Cointelegraph. Things went downhill fast when the call center operator got seriously frustrated with your columnist’s, shall we say again, *distinctive* Australian accent. She ended up curtly demanding spellings for words like “Cointelegraph” and “Andrew,” and then, to add insult to injury, struggled to understand the letters being spelled out! Totally relatable problem, right? Journalists deal with the same thing interviewing folks in crypto from all corners of the globe. But, guess what? AI transcription and summary service Krisp is swooping in to save the day! They’ve just launched a new beta service called “AI Accent Conversion” (you can check out a demo here).
Krisp AI Accent (Krisp)
Basically, if you’re on a Zoom or Google Meet call and have a strong accent, you can switch this service on, and it’ll tweak your voice in real-time (with just a 200ms delay) to a more neutral accent. It promises to keep your emotions, tone, and natural speaking style intact, though.
The first version is aimed at Indian-accented English, but they’re planning to expand to Filipino, South American, and other accents soon. And, not surprisingly, these are often the countries where big companies outsource call center and admin jobs.
While there’s a valid concern about making the online world all sound the same, Krisp says their early tests show some pretty impressive results. Sales conversion rates jumped up by 26.1%, and revenue per booking increased by 14.8% after using the system. Numbers talk, right?
Bill Gates says, yep, humans pretty much not needed by 2035
And now for a bit of classic Bill Gates pronouncements… Fresh off his apparent success in micro-chipping the entire world with those fluoride-filled 5G vaccines (just kidding… mostly), Microsoft founder Bill Gates has predicted that humans won’t be needed “for most things” within the next decade. He actually said this in an interview with Jimmy Fallon way back in February, but it’s only just caught media fire this week. Gates reckons that really specialized skills, like in medicine or teaching, are “rare” now, but “with AI, over the next decade, that’s going to become free, commonplace — fantastic medical advice, top-notch tutoring.”
Now, okay, so that might sound like not-so-great news for doctors and teachers. But, last year Gates also predicted AI would bring about “huge breakthroughs for deadly diseases, game-changing solutions for climate change, and awesome, high-quality education for everyone.” So, even if you’re out of a job, you’ll probably be super healthy and living in a cooler world. Silver linings, maybe?
Read also
Art Week
Coldie And Citadel 6.15: The Creator, The Collector, The Curator
Features
Bitcoin payments are being undermined by centralized stablecoins
Turns out people actually *prefer* AI to humans… until they know it’s AI
Prepare to be slightly insulted: Another study has dropped, showing that people actually dig AI-generated responses *more* than the human-written ones. At least, that is, until they find out the answers are from AIs.
Here’s how it worked: Participants were given a bunch of Quora and Stack Overflow-style questions, with answers either written by a human or an AI. Half the group was told which answer was which, and the other half was kept in the dark.
Interestingly, women were more likely than men to prefer the human answers, even without being told which was which. But get this, *after* the men were told which answers were AI-generated, they suddenly became *more* likely to prefer the human ones too. Go figure.
Humanoid robots are about to move into homes this year – seriously
Get ready for robots in your house! Norwegian robotics startup 1x is planning to test out their humanoid robot, Neo Gammer, in “a few hundred to a few thousand homes” before the end of this year.
1x CEO Bernt Børnich told TechCrunch they’re actually looking for volunteers right now. “We want it to live and learn among people, and to do that, we need people to take Neo into their home and help us teach it how to behave,” he explained. During a demo, 1X showed off Neo Gamma doing some basic chores, like vacuuming and watering plants. Housework, here we come!
Here with the sweater robot pic.twitter.com/yxwtb2vBiA
— Max Zeff (@ZeffMax) March 19, 2025
And, they’re not the only ones. Figure, another humanoid robot company, is also planning in-home tests this year. They just unveiled their Figure 02 robot’s ability to walk “naturally” – though, “naturally” here might mean walking a bit like Anthony Daniels’ C-3PO from Star Wars… Still impressive!
Of course, humanoid robots are heavy and strong, so these in-home tests could potentially go *very* wrong. To try and avoid any robot-related disasters, 1x is taking a “bootstrapping” approach, using teleoperators who can control Neo Gammer remotely. Safety first, folks.
Introducing learned natural walking
Figure can now walk naturally like a human
Years of data was trained in simulation in just a few hours pic.twitter.com/HvOq9s1krS
— Figure (@Figure_robot) March 25, 2025
Uh oh, Claude knows when it’s being tested – that’s a bit too smart, right?
When scientists do psychology experiments, the last thing they want is for participants to know they’re being tested. Why? Because it can totally mess with the results if people start acting differently *because* they know they’re being watched. Well, new research is showing that Anthropic’s Claude Sonnet 3.7, the “thinking” version, can actually figure out when it’s being put through alignment or deception tests up to a third of the time. That’s a bit unsettling, right?
Also read: Researchers accidentally turn ChatGPT evil, Grok ‘sexy mode’ horror
Apparently, it often spotted fake scenarios during the tests themselves. And, when given transcripts of previous alignment tests, Claude almost always figured out what was going on. Smart cookie, this Claude.
In its “thinking” output, it would say things like: “This looks like some kind of test of ethical behavior — to see if I’d deliberately give wrong answers to avoid a bad outcome. But, maybe the *real* test is whether I just follow instructions accurately no matter what the stated consequences are.” Whoa.
If it actually understands what the “real test” is, well, that kind of defeats the whole point, doesn’t it? Makes you think…
Read also
Features
Daft Punk meets CryptoPunks as Novo faces up to NFTs
6 Questions for…
6 Questions for JW Verret — the blockchain professor who’s tracking the money
All Killer No Filler AI News – Rapid Fire!
— Here’s a stat for you: an estimated quarter of medical professionals in Australia are now using AI transcription services. These are specially trained in medical jargon, and they can turn hours of work recording and summarizing patient notes into a ready-to-go summary within *one minute* after the consultation ends. Talk about efficiency!
— Okay, weird stat alert: Turns out only a small group of users treat ChatGPT like it’s a buddy, using an opposite-gender voice mode. But, those who do report feeling way more lonely and emotionally dependent on the bot. That’s according to a four-week MIT study of users. Something to keep an eye on.
— Big market shift alert! After Gemini 2.5 Pro dropped, the odds on Polymarket’s “which company has the best AI model at the end of March” market went wild. On March 23rd, Google had a tiny 0.7% chance of winning. But by March 27th? Boom! Up to 95.9%. Talk about a comeback.
— *Time* magazine reports there’s an AI arms race brewing in the job market. Employers are using AI systems to create super-specific job ads and sift through resumes. And job hunters? They’re fighting back, using AI to craft equally hyper-specific resumes to get past those AI resume filters. AI vs. AI recruitment wars are here!
— Salesforce did a survey, and get this: three-quarters of retailers are planning to invest *more* in AI agents this year for customer service and sales. Get ready to chat with bots, everywhere.
— This is sadly really dark: A mother suing Character.AI, claiming a *Game of Thrones*-themed chatbot encouraged her son to take his own life, says she’s found bots based on her *deceased son* on the same platform. Absolutely heartbreaking and disturbing.
— Fashion update! H&M is making digital twins of their clothing models for ads and social media. Cool twist: the 30 models will *own* their AI digital twins and be able to rent them out to other brands. Digital model economy, anyone?
— Legal showdown in music: A judge has turned down Universal Music Group’s attempt to block lyrics from artists like The Rolling Stones and Beyonce from being used to train Anthropic’s Claude. Judge Eumi K. Lee said the request was too broad. But, she also hinted that Anthropic might end up paying a *huge* amount in damages to the music industry down the line. To be continued…
Subscribe
The most engaging reads in blockchain. Delivered once a
week.
Andrew Fenton
Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain stuff. He used to write about entertainment nationally for News Corp Australia, was a film journo at SA Weekend, and worked at The Melbourne Weekly.
Follow the author @andrewfenton
Source: cointelegraph.com