Naked Security

Facebook and Twitter may be forced to identify bots

If passed, the bill would give platforms 72 hours to investigate reports of bots seeking to mislead Californians and to remove or disclose them.

Twitter and Facebook are all too aware that they’ve been infiltrated by Russia-backed bots.
Twitter, for its part, has purged tens of thousands of accounts associated with Russia’s meddling in the 2016 US presidential election. The company also said it would email notifications to hundreds of thousands of US users that followed any of the accounts created by the Russian government-linked propaganda factory known as the Internet Research Agency (IRA), and has said that it’s trying to get better at detecting and blocking suspicious accounts. (As of January, it said it was detecting and blocking approximately 523,000 suspicious logins daily for being automatically generated).
That’s not good enough, according to California lawmakers. They’ve introduced a bill that would give online platforms such as Facebook and Twitter three days to investigate whether a given account is a bot, to disclose that it’s a bot if it is in fact auto-generated, or to remove the bot outright.
The bill would make it illegal for anyone to use an automated account to mislead the citizens of California or to interact with them without disclosing that they’re dealing with a bot. Once somebody reports an illegally undisclosed bot, the clock would start ticking for the social media platform on which it’s found. The platforms would also be required to submit a bimonthly report to the state’s Attorney General detailing bot activity and what corrective actions were taken.
According to Bloomberg, the legislation is slated to run through a pair of California committees later this month.
Bloomberg quoted Shum Preston, the national director of advocacy and communications at Common Sense Media and a major supporter of the bill. Preston said that California’s on a bit of a guilt trip, given how the social media platforms that have been used as springboards to stir up political and social unrest are parked in its front yard:

California feels a bit guilty about how our hometown companies have had a negative impact on society as a whole. We are looking to regulate in the absence of the federal government. We don’t think anything is coming from Washington.

New York is also tired of waiting for the Feds to push social media companies into fixing the bot problem. Governor Andrew Cuomo is backing a bill that would require transparency on who pays for political ads on social media.


Proposed legislation at the Federal level includes the bipartisan-supported Honest Ads Act, a proposal to regulate online political ads the same way as television, radio and print, with disclaimers from sponsors.
California’s proposed bill instead targets the processes that disseminate the content in the first place, but the online platforms say it can be tough to tell human accounts from bot accounts run by ever more sophisticated technologies.
But there are signs to look out for. Twitter has said it’s developed techniques for identifying malicious automation, such as near-instantaneous replies to tweets, non-random tweet timing, and coordinated engagement. It’s also improved the phone verification process and introduced new challenges, including reCAPTCHAs, to validate that a human is in control of an account.
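The signals the article mentions lend themselves to simple heuristics. Below is a minimal sketch, assuming only the two behaviors named above (near-instantaneous replies and non-random tweet timing); the thresholds and function names are illustrative assumptions, not Twitter's actual detection system.

```python
# Illustrative heuristics for the bot signals described above.
# All cutoffs are assumptions for demonstration, not real Twitter thresholds.
from statistics import pstdev

def reply_latency_signal(latencies_sec, fast_cutoff=2.0, fast_ratio=0.5):
    """Flag accounts where most replies arrive within seconds of the tweet
    they respond to -- faster than a human could plausibly read and type."""
    if not latencies_sec:
        return False
    fast = sum(1 for t in latencies_sec if t < fast_cutoff)
    return fast / len(latencies_sec) >= fast_ratio

def timing_regularity_signal(post_times_sec, max_stdev=5.0):
    """Flag accounts that post at eerily regular intervals; human posting
    gaps vary widely, while scheduled automation often does not."""
    gaps = [b - a for a, b in zip(post_times_sec, post_times_sec[1:])]
    if len(gaps) < 2:
        return False
    return pstdev(gaps) <= max_stdev

def looks_automated(latencies_sec, post_times_sec):
    """Combine the two signals: either one is enough to warrant a closer look."""
    return (reply_latency_signal(latencies_sec)
            or timing_regularity_signal(post_times_sec))
```

Real systems would combine many more signals (and weight them statistically), but even toy rules like these show why pure automation is detectable: machines are fast and regular in ways humans rarely are.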
In January, Twitter said that its other plans for 2018 included:

  • Investing further in machine-learning capabilities that help detect and mitigate the effect on users of fake, coordinated, and automated account activity.
  • Limiting the ability of users to perform coordinated actions across multiple accounts in TweetDeck and via the Twitter API.
  • Continuing the expansion of its developer onboarding process to better manage the use cases for developers building on Twitter’s API. This, Twitter said, will help improve how it enforces policies on restricted uses of developer products, including rules on the appropriate use of bots and automation.

Researchers have also been working to come up with a set of tell-tale signs that indicate when non-humans are posting. A 2017 study estimated that as many as 15% of Twitter accounts are bots.
That paper, from researchers at Indiana University and the University of Southern California, also outlines a proposed framework to detect bot-like behavior with the help of machine learning. The data and metadata they took into consideration included social media users’ friends, tweet content and sentiment, and network patterns. One behavioral characteristic they noticed, for example, was that humans tend to interact more with human-like accounts than they do with bot-like ones, on average. Humans also tend to friend each other at a higher rate than bot accounts.
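The behavioral features the researchers describe can be sketched as a simple scoring function. This is a hand-weighted illustration of the two observations above (interaction partners and friending rates), not the paper's actual machine-learning model; the dictionary field names and weights are assumptions.

```python
# Illustrative bot-likeness score built from the behavioral features noted
# above. The paper fits features like these with supervised machine learning;
# here the weights are hand-picked purely for demonstration.

def bot_likeness_score(account):
    """Return a 0..1 score; higher means more bot-like."""
    # Humans tend to interact with human-like accounts more than bots do.
    human_interactions = account.get("interactions_with_humans", 0)
    total_interactions = max(account.get("total_interactions", 1), 1)
    frac_human_partners = human_interactions / total_interactions

    # Humans also friend each other at higher rates than bot accounts do.
    friends = account.get("friends", 0)
    followers = max(account.get("followers", 1), 1)
    friend_ratio = min(friends / followers, 1.0)

    # Weighted combination: both features inverted so that low human
    # interaction and low friending push the score toward 1 (bot-like).
    score = 0.6 * (1 - frac_human_partners) + 0.4 * (1 - friend_ratio)
    return round(score, 3)
```

In the real framework, a classifier learns such weights from labeled accounts rather than having them set by hand, and it draws on hundreds of features including tweet content and sentiment.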
Mind you, not all bots are bad. Take Emoji Aquarium: it’s a bot that shows you a tiny aquarium “full of interesting fishies” every few hours.


Good bots are also useful: they help keep weather, sports, and other news updated in real-time, and they can help find the best price on a product or track down stolen content.
And then there’s Bot Hertzberg: the bot created by California Senator Bob Hertzberg to highlight the issue. Hertzberg introduced the pending California bot bill.
Here’s what human Senator Hertzberg, as quoted by Bloomberg, said about his bill:

We need to know if we are having debates with real people or if we’re being manipulated. Right now, we have no law, and it’s just the Wild West.

And here’s what his bot says in its bio:

I am a bot. Automated accounts like mine are made to misinform & exploit users. But unlike most bots, I’m transparent about being a bot! #SB1001 #BotHertzberg


8 Comments

I’m thinking, if people in CA can’t tell they are talking to a bot, they shouldn’t be using the Internet. Alexa agrees with us

This is privacy and freedom of speech. You can say what you want. You can larp fake news all you want. This law is unconstitutional.
The only way they can operate with a law like this is to have some sort of ministry of truth that tells YOU what is legit and what is not legit news. Totally not going to see something like that abused by either side right?
Freedom means that the freedom to be dumb is also there. As smart people you simply need to be able to understand that a forum which is open to anyone may have knuckleheads in there that mislead. Verify sources, don’t believe everything you read, do your own research. Stop trying to pass laws that strip freedoms and put in place government filtering which can be subject to abuse by those who hold the power to do so. You know, not something that politicians ever do right?

Not exactly – the deal with bots is that they aren’t real accounts (in other words, they weren’t created in accordnce with the terms of service that other users might reasonably expect to be enforced).
If you open a bank account with fake ID, even if you aren’t a crook and only ever deposit lawfully-acquired, tax-paid money, then the bank is obliged to report your bogosity if it finds out. In other words, this isn’t so much to do with what you say, but how you go about sneakily wangling ways to say it.

If they “weren’t created in accordance with the terms of service ” (I fixed the missing “a”) then they are already breaking the TOS, and potentially criminal, no additional laws needed.

Didn’t know there was anything criminal about lying to a site. Maybe get you kicked off, but not criminal prosecution.

I think the answer is (depends on jurisdiction) that “it’s underscored by what your motivation was” – like the difference in England between trespass (e.g. where you take a shortcut across someone’s land even though you know it’s private – a civil matter that the police can’t and won’t get involved in) and criminal trespass (e.g. where you go onto their property with nefarious or disruptive intent – where the cops can attend, arrest you and cart you off).

Here in the USA you are pretty safe if you pay taxes. That’s what screwed up Al Capone; he got busted for income tax evasion. EULAs are like most contracts, there to protect the company, not the user.
I guess they can’t check the “I’m not a bot” box….

This type of legislation could be questionable. The Internet is FULL of bots doing useful things that we depend on. How is a ‘bot’ defined legally? The difference between a ‘good’ bot and a ‘bad’ bot? I think, like other prohibition issues, this may be a losing battle. How do you determine the country of origin and what they are doing?
Both of these companies knew this was going on, where were my protections? In the companies’ bank accounts.
The US has not protected its citizens from any cyber threats via legislation compared to the European countries. Unfortunately some laws may not work as expected. This may be much more complex than it sounds.

