Lawmakers and tech companies have been scrambling to catch up.
Now, with a new presidential election season starting and information continuing to emerge regarding bots’ malicious influence, legislators have begun proposing regulations.
In New Jersey, Assemblyman Andrew Zwicker (D., Middlesex) this fall introduced a bill to require upfront identification of online communication bots — a term derived from the word robots to describe automated accounts that generate messages, particularly on social media. The bill won approval from an Assembly committee this month, and Zwicker is hopeful it could get a full vote early next year.
“I’m very strongly opposed to using technology to hide your true intentions, to use technology to deceive people in a way that is unfair to the person who doesn’t know what’s going on,” Zwicker said in an interview. “And I believe if that is your intent — to deceive people — you should disclose you are not a human being.”
But legal experts and technologists warned that this proposal, and others like it, might not address the problems it seeks to solve, while also raising troubling questions about free speech: What exactly is a bot, and when is its speech distinguishable from the speech of its creator? What is political speech? Could disclosure lead to a loss of anonymity online? Could disclosure in the United States lead to censorship elsewhere?
On its face, the New Jersey bill is straightforward and takes up less than two pages: You can’t use a bot posing as a human to try to deceptively influence people’s purchases or votes, and bot accounts must identify themselves as such.
“The average American is every single day online and doing things,” said Zwicker, who chairs the Assembly science and technology committee and works at Princeton University’s plasma physics laboratory. “As big of a story as this has been in general in 2018, I think it’s going to continue to be a bigger story in 2019 and beyond, and it’s beholden on us to get a handle on the right public policies for just the everyday working person.”
Legal and technological experts, while recognizing the desire to combat the malicious use of bots, said Zwicker’s proposal raises complicated free-speech concerns.
One of the most concrete examples is the potential unmasking of anonymous accounts, said Ryan Calo, a law professor at the University of Washington whose work on emerging technologies includes a paper this year examining bot disclosure issues.
A human behind an account accused of being a bot could be forced to reveal his or her identity.
“So while on its face it doesn’t require someone to say who they are, as enforced it has that potential, and it creates a tool to unmask people just by calling them bots,” Calo said.
Even if that doesn’t happen, Calo said, he worries about the chilling effect: Which accounts might never be created? Whose speech might never be heard?
Calo’s coauthor, Madeline Lamo, speaking generally, said bot disclosure also raises questions of whether the government is unconstitutionally compelling speech.
Forcing disclosure also creates a structure that companies or other governments could exploit to censor some accounts, she said. For example, if bot disclosures are required in the United States, another country could use that to identify and completely block bots.
“Any regulation we do will have a ripple effect around the world,” Lamo said. “So if you are requiring bots that interact with the United States or a certain state here to disclose they are bots, you implement a structure that enables other entities, governments, etc., that don’t value free speech in the same way to use and manipulate that information.”
There are also practical concerns.
For one, Calo said, if the concern is a foreign country using bots to interfere with an election, then the real problem is the foreign country, not the technology.
And bots that effectively alter discourse largely do so at scale, such as by flooding a hashtag to hijack the conversation or retweeting fringe views to make them seem mainstream.
“Knowing something is a bot doesn’t stop it from swamping and skewing discourse,” Calo said.
In addition, regulating bots as they exist now could fail to encompass what bots do in the future. On the other hand, such regulations could also unintentionally restrict technologies that have yet to appear.
“We know how today’s technology works,” said Jeremy Gillula, technology projects director at the nonprofit Electronic Frontier Foundation, a civil liberties group focused on the digital world. “It’s a lot harder to predict how regulation will affect technology in the future.” The group is neutral on this bill.
Massaro agreed that unintended consequences are a serious concern. That means lawmakers must be flexible, willing to adapt the law as circumstances evolve.
Lawmakers, she said, “should always err on the side of caution when the downsides of legislation may include serious liberty losses or other harms. They should walk, not run, into the shadows.”
Zwicker said he recognizes his proposal is imperfect, but he hopes that the conversation it has sparked can help lead to reasonable measures to limit bad actors while still protecting freedoms and allowing new technology to flourish.
“I’m going to do what I can, but I don’t know the absolute right answer,” Zwicker said.
The committee vote this month was unanimous, but the bill has not been scheduled for a floor vote in the full Assembly.