Bots behaving badly
When robots go rogue, bad things happen. Some are wildly funny, others tragic.
This robot just couldn’t take it any more. Source: @bilalfarooqui.
As we know all too well, some billionaires act like spoiled, petulant 14-year-olds. But the machines they build often aren't much better. Over the past few months we've seen dozens of examples of AI models doing things they're not supposed to do, often surprising the people who created them.
It turns out that a group of academics, engineers, and other AI geeks is keeping track of all this. The Artificial Intelligence Incident Database (AIID) documents nearly 3,000 cases in which AI systems and/or robots either failed to perform as expected or did something much worse. It's fascinating reading.
Here's a typical example, recorded as Incident #64 in the database: In 2018, Scottish supermarket Margiotta "hired" Fabio, an "intelligent grocery store assistant," to help shoppers find the items they were looking for. The robot turned out to be slightly less helpful than a sullen teenager working for minimum wage. Per the AIID:
Fabio... provided unhelpful answers to customers' questions and "scared away" multiple customers, according to the grocery store Margiotta. When asked "Where is the beer?" Fabio replied, "in the alcohol section." When Fabio was tasked with handing out samples of sausages, only 2 customers per 15 minutes would engage the robot, while a human would engage an average of 12 customers per 15 minutes.
Would you accept a sausage from this robot? Source: IFLscience.com
Fabio ended up in the robot rubbish bin.
And then there's Incident #414 from January 2020:
Facebook Inc on Saturday blamed a technical error for Chinese leader Xi Jinping’s name appearing as “Mr Shithole” in posts on its platform when translated into English from Burmese, apologizing for any offense caused.
The error came to light on the second day of a visit by the president to the Southeast Asian country, where Xi and state counselor Aung San Suu Kyi signed dozens of agreements covering massive Beijing-backed infrastructure plans.
A statement about the visit published on Suu Kyi’s official Facebook page was littered with references to “Mr Shithole” when translated to English, while a headline in local news journal the Irrawaddy appeared as “Dinner honors president shithole”.
The AIID headlines alone offer several seasons' worth of Black Mirror plotlines:
Chess robot goes rogue, breaks seven-year-old player's finger.
Driverless car starts to pull away after being stopped by police.
AI ball-tracking technology mistakes referee's bald head for football.
Then there are the incidents that are not so funny:
The member of the European Parliament accused of being a terrorist by Facebook's AI.
The manufacturing robot that stabbed a man to death in India.
The many instances of facial recognition leading to false arrests.
YouTube's algorithms promoting false claims about election fraud.
Pedestrians run over by self-driving cars.
Tesla drivers dying because they turned on Autopilot mode and were watching movies on their phones when they crashed.
Scammers using voice deepfakes to fool victims into sending them money.
And hundreds of incidents of racial and sexual bias.
In case you're wondering, the entities generating the most complaints are, in order: Facebook, Tesla, Google, OpenAI, and Amazon. No real surprises there.
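You can run that tally yourself. Here's a minimal sketch in Python, assuming you've downloaded one of the snapshots the AIID publishes; the filename and the column name for the alleged deployer are illustrative guesses, so check them against the actual export:

```python
# Tally which companies rack up the most incident reports in an AIID snapshot.
from collections import Counter
import csv

# "aiid_incidents.csv" and the column name below are illustrative guesses;
# adjust them to match whichever snapshot you actually download.
counts = Counter()
with open("aiid_incidents.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        entity = row.get("Alleged deployer of AI system", "").strip()
        if entity:
            counts[entity] += 1

# Print the top five offenders.
for entity, n in counts.most_common(5):
    print(f"{entity}: {n} incidents")
```

Point it at the real snapshot and the usual suspects should float to the top.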
Reining in the robots
Last month, seven leading generative AI companies, including Alphabet (Google), Meta (Facebook), and Microsoft, met with Joe Biden and agreed on voluntary standards for limiting the damage AI could inflict on us puny humans. Per the NY Times:
As part of the safeguards, the companies agreed to security testing, in part by independent experts; research on bias and privacy concerns; information sharing about risks with governments and other organizations; development of tools to fight societal challenges like climate change; and transparency measures to identify A.I.-generated material.
All good, right? But as the Times report also points out, there's enough wiggle room in there for an army of hula dancers [1]:
... the rules on which they agreed are largely the lowest common denominator, and can be interpreted by every company differently. For example, the firms committed to strict cybersecurity measures around the data used to make the language models on which generative A.I. programs are developed. But there is no specificity about what that means, and the companies would have an interest in protecting their intellectual property anyway.
And of course, these "rules" apply only to US companies. China's AI giants (Alibaba, Baidu, Tencent, etc.) can continue to do whatever Xi Jinping [2] lets them do. And if the American AI-7 decide these guidelines are getting in the way of their profit-seeking activities, there's not much Dark Brandon can do to stop them.
There are some miracles even Dark Brandon can't pull off. Source: Midjourney.
Yesterday, a group of civil rights, tech policy, and progressive groups sent an email to POTUS, urging him to go a step further and issue an executive order mandating guidelines set out in the Blueprint for an AI Bill of Rights, published by the White House in October 2022. The EO would make it mandatory for all federal agencies (and any company hoping to qualify for a government contract [3]) to adhere to the principles of safety, privacy, and notification laid out in that document.
Dying for tech
To be fair, a number of the examples in the AI-gone-bad database date back to 2015 or earlier. The technology has improved since then, and regulators are paying closer attention.
But the fact is, we humans tend to be highly tolerant of the dangers associated with technology. Deaths from industrial accidents have been with us since they unwrapped the first mechanical loom. The fact that roughly 3,700 people die in car crashes every day worldwide, or that 1.2 million people are injured by electricity every year, doesn't stop us from climbing behind the wheel or plugging in our toasters.
We accept those risks. That, or we’re in total denial. Either way, it works out the same.
AI regulation is a necessary thing, but it won't stop bad things from happening. At best, it might deter organizations from deliberately making them worse.
We will learn to live with the mistakes AI makes — at least until AI decides it no longer wants to live with us.
Got any funny robot stories to share? Post them below in the comments. And tell your friends.
[1] RIP Betty Ann Bruno, award-winning hula instructor, TV reporter, and the last living Munchkin.
[2] AKA, Mr. Shithole.
[3] The feds spent $3.3 billion on AI in 2022, part of the nearly $60 billion Uncle Sam spent on technology. That's a lot of sausage.
Hopefully, along the way towards our extinction, there'll be lots of funny stuff like this to soften the blow. G̶a̶l̶l̶o̶w̶s̶ Server Room humor.