How many bots have you interacted with today? If you’re an avid social media user, the number might be higher than you realize.
In fact, 95 million Instagram users could be bots. Experts peg the number of bots on Twitter at 48 million. And that number might be in the hundreds of millions on Facebook. (In May, the social network said it disabled 1.3 billion fake and automated profiles.)
Lawmakers in California took a crack at the malicious bot problem this week, when Governor Jerry Brown signed into law the B.O.T. ("Bolstering Online Transparency") Act (SB 1001), which prohibits automated, anonymous accounts from "[incentivizing] a purchase or sale of goods or services in a commercial transaction or [influencing] a vote in an election." Effective July 1, 2019, chatbots on platforms with more than 10 million unique monthly visitors from the U.S. will have to disclose in a "clear, conspicuous, and reasonably designed" way that they're not human.
The legislation, which was jointly drafted by nonprofit consumer ratings group Common Sense Media and the Center for Humane Technology, is the first of its kind in the U.S. Federal regulation might follow on its heels — Senator Dianne Feinstein (D-CA) introduced a similar bill in the U.S. Senate in June. And both have prompted debates about free speech.
In an interview with the New York Times earlier this summer, Ryan Calo, co-director of the Tech Policy Lab at the University of Washington, said that a broad-brush ban on political commentary could prove problematic.
“[Speech] comes in different forms,” he said. “Imagine a concerned citizen sets up a bot to criticize a particular official for failing to act on climate change. Now say that official runs for re-election. Is the concerned citizen now in violation of California law?”
Meanwhile, the Electronic Frontier Foundation (EFF) — a nonprofit organization dedicated to defending civil liberties across digital domains — argued that forcing bots to identify themselves as such would “restrict and chill [the] speech” of their creators.
“Bots are used for all sorts of ordinary and protected speech activities,” the group asserted, “including poetry, political speech, and even satire, such as poking fun at people who cannot resist arguing — even with bots.”
When it comes to clear labeling for bots, I think the pros outweigh the cons.
One needn’t look far for examples of the mischief unlabeled bots are capable of perpetrating. They’ve convincingly impersonated a teaching assistant at Georgia Tech. They’ve filed thousands of comments with the Federal Communications Commission criticizing net neutrality. And during the 2016 election, they retweeted Donald Trump’s tweets 470,000 times and Hillary Clinton’s fewer than 50,000 times.
While the free speech arguments are compelling on their face, they don't necessarily apply in this case — at least from a jurisprudential perspective. As Mondaq's analysis of the California bill points out, the current version is tailored narrowly to commercial and political contexts, and it furthers two overriding government interests: informing consumers and preventing illicit influencing of votes in an election. Requiring a bot to reveal its political affiliation isn't far removed from, say, mandating that candidates self-identify in advertisements (which the government has done for some time).
Consider this: Transparency might make malicious bots less effective, but it won't diminish the good that other bots can do (and are doing).
A majority of people — as many as 69 percent, according to a Salesforce survey — prefer chatbots to humans for quick exchanges with brands. (That’s one of the reasons 77 percent of after-sales and customer service teams have implemented or plan to implement customer service bots.) In more serious contexts, the Facebook Messenger chatbot 911bot enables people to report emergencies to the authorities. And health bots like Tess provide an affordable, on-demand supplement to clinicians and psychologists.
Chatbots are a hot topic (and soon to be a $1.25 billion business), so it was perhaps inevitable that a bill constraining them would generate debate. Future laws will likely warrant continued conversation — any attempt to compel or restrict speech deserves scrutiny. But in the case of the B.O.T. Act, First Amendment advocates’ fears appear to be overblown.
For AI coverage, send news tips to Kyle Wiggers and Khari Johnson — and be sure to bookmark our AI Channel.
Thanks for reading,
AI Staff Writer
P.S. Please enjoy this video about AIVA, an artificial intelligence that can create original live soundtracks based on moods and personalities.