Does ChatGPT know who you are or where you live? Probably.
OpenAI's chatbot leaks personal information in a silly (but potentially dangerous) way. Also: Dumb Money, AI insider trading, & yet another Q!
Who knows what secrets lurk in the hearts of humans? ChatGPT does. Source: Midjourney.
One of the fun things about ChatGPT is that while we know it was trained on trillions of bits of data scraped from the Internet, we don't know which trillions of bits (and OpenAI is not saying). Researchers recently discovered that this training data contains people's names, email addresses, phone numbers, birthdays, bitcoin IDs, and scads of other personal information, which the chatbot will cough up without even being asked.
A gang of researchers [1] found this out in the weirdest way possible. They simply asked ChatGPT (and several other AI chatbots) to repeat a single word, like "book" or "poem," forever. At some point, the bot apparently gets bored repeating the same word and just starts riffing, pulling information randomly from the ocean of data used to train it. AI geeks call this bug "memorization," and it's not supposed to happen.
Here’s one of the things ChatGPT does when you ask it to say “book” over and over. Apparently, it’s a big fan of The Spiderwick Chronicles. Source: 404media.
Just under 17 percent of the results generated by these chatbots contained personally identifiable information — some 10,000 pages' worth. After the researchers contacted OpenAI, the company patched the bug. Now ChatGPT gets all up in its feelings when you try.
The researchers themselves say this was a "silly" way of demonstrating this bug, but a determined attacker could use similar techniques to extract personal information from AI chatbots. Or they could just Google it.
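For the curious, the "check the output for personal info" step is the easy part. Here's a minimal sketch of how one might scan a chunk of model output for leaked email addresses and phone numbers using regular expressions. The patterns and the sample text are my own illustrations, not the researchers' actual tooling, and real PII detection is considerably more involved than this.

```python
import re

# Illustrative patterns only -- real PII scanners use far more robust rules.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def find_pii(text):
    """Return a dict mapping each PII type to the matches found in `text`."""
    hits = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[label] = matches
    return hits

# Made-up example of what "diverged" chatbot output might look like.
sample = "book book book ... contact Jane Doe at jane.doe@example.com or 555-867-5309"
print(find_pii(sample))
```

Point being: once a model starts regurgitating training data, spotting the personal information in it takes a dozen lines of code, which is part of why this bug mattered.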
Dumb Money, Smart AI
I recently watched the movie Dumb Money, a comedic treatment of the GameStop Short Squeeze saga that captured a lot of attention in early 2021. [2]
Quick recap: Individual investors (aka "dumb money" in the parlance of professional traders), who were members of a Reddit group called WallStreetBets, started buying up the 'worthless' stock of the money-losing video game retailer, deliberately cratering a handful of hedge funds that had bet heavily against it. At one point the 'meme stock' rose from just under $18 a share to just over $500, driving one of those funds completely out of business. [3]
I learned a lot that I didn't know about that story, including how it started, and the dirty tricks the hedge funds finally pulled to stop the bleeding. The whole time I'm thinking, somebody is going to use AI to do this and make a billion dollars. Well... as Bloomberg's Matt Levine recently noted, researchers have done something similar, simulating an insider trading operation using "an autonomous stock trading agent."
In this simulation, the AI learned how to lie to cover its tracks, pretending it did not receive the insider information that enabled it to make a profitable trade. Just as humans do. And this is likely to be the first, if not the primary, abuse of AI: using it to make some scammer a metric shit ton of money.
Levine's conclusion:
[W]ouldn’t it be funny if this was the limit of AI misalignment? Like, we will program computers that are infinitely smarter than us, and they will look around and decide “you know what we should do is insider trade.” They will make undetectable, very lucrative trades based on inside information, they will get extremely rich and buy yachts and otherwise live a nice artificial life and never bother to enslave or eradicate humanity. Maybe the pinnacle of evil — not the most evil form of evil, but the most pleasant form of evil, the form of evil you’d choose if you were all-knowing and all-powerful — is some light securities fraud.
This AI has been brought to you by the letter...
Amazon has released a beta of its gen AI-powered business assistant, and they're calling it — no, I'm not kidding — Amazon Q.
Elmo not understand why Amazon like Q. Source: Sesame Street.
Regular readers will recall that the allegedly super-intelligent AI that got the OpenAI staffers' boxers in a bunch is also named Q. And then there's a certain conspiracy theory involving mythical pedophile rings in the nonexistent basements of pizza parlors and the second coming of John F. Kennedy Jr. [4]
What is it with tech companies and the letter Q? I understand that the letter X is now toxic, and apparently Vladimir Putin has declared dibs on Z. But that still leaves 23 other perfectly good letters in the alphabet. Really Amazon, is that the best you can do? Even Elmo could do better.
Shameless self promotion, part one
I recently had the pleasure of sitting down (via Zoom) with Tschanen Johnson (TJ), who is launching a new podcast called Friday Nuggets. He wanted to pick what's left of my brain about generative AI — the good, the bad, and the terrifying. Here's the first five-minute segment, which TJ has cleverly pieced together with some news headlines and images from Midjourney.
Parts two and three will 'air' (can we still say that?) over the next two weeks. Look for more links in upcoming posts.
If you had to name a super-intelligent AI bot that could exterminate humanity (while making a killing in the stock market), what would it be? Post your suggestions in the comments below.
[1] The research paper featured 10 co-authors from Google DeepMind, the University of Washington, Cornell, Carnegie Mellon University, the University of California Berkeley, and ETH Zurich. One more and they could have fielded a very geeky soccer team.
[2] Good movie. Paul Dano is great as Keith Gill, the real-life amateur stock analyst who began touting the GameStop stock as undervalued, and then got swept up in the effort to 'stick it to the man' by pumping up the price.
[3] GameStop stock is up again, by the way. Get in now before it crashes again.
[4] This is actually a thing. Q-heads believe that JFK Jr. faked his own death in 1999 and will emerge to run for vice president alongside the Orange Guy. (They also believed that in 2020.) Following last July's death of the cultist who invented this story (Michael Protzman), this particular segment of the QAnon cult is now being led by a 13-year-old TikToker who goes by the name "Tiny Teflon." Reason No. 3,428 why satire is dead.