
Lawmaker urges regulating Facebook and Google’s algorithms

Once again, lawmakers demonstrate profound lack of understanding of how the internet works

The mighty but mysterious algorithms that power tech companies such as Google and Facebook could face regulation under a future British Labour government, a member of its shadow cabinet has suggested.

The idea was floated by Chi Onwurah, shadow minister for industrial strategy, who was interviewed by the Guardian in response to a recent letter she wrote to the newspaper on the topic of the alleged negative influence of search and news-selection algorithms.

Onwurah told the newspaper:

“The outcomes of algorithms are regulated – the companies which use them have to meet employment law and competition law. The question is, how do we make that regulation effective when we can’t see the algorithm?”

A consultation paper due for publication early next year would ask for suggestions as to how such regulation might work, she said.

The implication of her comments is that tech companies could be required to reveal their algorithms in enough detail for UK regulators to decide whether adjustments are needed to serve public interest as defined by a future media law.

Barely a year ago this would have been seen as an extraordinary idea, so what has changed?

Algorithms are, in essence, just glorified weighting systems used by proprietary software that constantly watches the web. When Google periodically tweaks its algorithm, it tells search engine optimisation (SEO) mavens which new variables matter most – for instance, Google favouring sites that use HTTPS instead of insecure HTTP in search results.
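To make that concrete, here is a minimal, hypothetical sketch of ranking as a weighting system. The feature names, weights and pages are invented for illustration – they are not Google’s actual signals – but they show how a small bump for HTTPS can be enough to re-order results:

```python
# A toy illustration (not Google's actual algorithm) of ranking as a
# weighted scoring system: each page gets a score from hypothetical
# signals, and tweaking the weights changes the ordering.

# Hypothetical feature weights; the small HTTPS bonus mirrors the kind
# of tweak described above.
WEIGHTS = {
    "keyword_relevance": 0.6,
    "inbound_links": 0.3,
    "uses_https": 0.1,
}

def score(page):
    """Weighted sum of a page's (made-up) feature values."""
    return sum(WEIGHTS[name] * page.get(name, 0.0) for name in WEIGHTS)

pages = [
    {"url": "http://example.org",  "keyword_relevance": 0.9, "inbound_links": 0.7, "uses_https": 0.0},
    {"url": "https://example.com", "keyword_relevance": 0.8, "inbound_links": 0.6, "uses_https": 1.0},
]

# Rank results by descending score; change WEIGHTS and the order changes.
for page in sorted(pages, key=score, reverse=True):
    print(page["url"], round(score(page), 3))
```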

For a while people marvelled at the cleverness of machines, but gradually they noticed that algorithms were making decisions about what people could and could not see based on criteria that were either nakedly commercial or merely inscrutable.

By the time the recent US election cycle started filling up with clickbait and fake news sites, people had started asking what the algorithms thought they were playing at.

Facebook was accused of carelessly promoting Russia’s war of disinformation, while journalists noticed that Google sometimes returns disturbing search results for questions such as “Did the Holocaust really happen?”

Inarguably, algorithms are important, but the question is whether they should – or can – be regulated by law.

Google and Facebook don’t make their algorithms public because, if they did, websites would figure out how to game them, at which point (the argument goes) they’d stop working. They also change – frequently. There is no single algorithm, only a sequence of evolving rules.

Regulating this sounds like nailing jelly to a wall. And that’s before we even get to the question of whose law applies. Internet companies are as global as the internet itself. EU law might have some influence, but UK law? Forget it.

An unsettling final possibility is that what lies at the root of fake news and hate sites is not, in fact, the algorithms that appear to make them prominent, but the audiences they have found.

Countering this will take investment, new ideas and better business models – fiddling with algorithms in the cauldron of hate could turn out to be a case of missing the point.


18 Comments

This is a howl. The British spooks are reacting to the false narrative of fake news by demanding to be the arbiter of what constitutes fake news. Perhaps when they get the algorithms, they’ll establish the Ministry of Truth to manage them.

I guess no one remembers reading 1984.

There was no book named 1984, citizen, report to the library for remedial education.

???
Sure there was. From Wikipedia:
“Nineteen Eighty-Four, often published as 1984, is a dystopian novel by English author George Orwell published in 1949.”

Errrrrr, I think the OP’s comment was satirical. (In the world of 1984, you wouldn’t have been able to get the book 1984, would you?)

I’m not sure why algorithms are important to governments. If they want certain outcomes, they should legislate those outcomes. If an algorithm fails to deliver those outcomes, then the company is either punished or has to explain why the desired outcome is impossible. If the outcomes are achieved, then how the company did it isn’t important to government.

You’re right – measuring outcomes (content) is more logical than inputs (algorithms). The problem is that if you ask for outcomes to be different then you are, in effect, asking that the input be changed too. That could quickly turn into a complex back and forth that never quite achieves its objectives.

One reason why governments – or, more precisely, public servants responsible for fair play – might very well be interested in “algorithms” (a rather broad word that in this context seems to embrace business practices, commercial decision making, data processing, machine learning, and much more) is to ensure that these “algorithms” are not creating unfair and anti-competitive outcomes under the guise of some sort of scientific correctness. And, indeed, if the algorithms are secret, it can be very hard to determine that there is no jiggery-pokery going on, so there is a school of thought that says, “If the algorithm is secret, you should assume that it is flawed in some important way.” Ask anyone who has ever tried to figure out how well a cryptographic algorithm meets its security claims.

I grant you the possibility of “jiggery-pokery” by Google or Facebook if you grant me the certainty of the same from a government approved “algorithm”. I’ll risk the former.

Well, US government-approved algorithms include, for example, AES and SHA-3, which emerged from a public competition open to the whole world, were subjected to scrutiny from anyone who wanted to be involved, and couldn’t easily have been much more open.

So, for all that there may be some – or many, if you feel that way – government-approved algorithms that are suspect, or tainted, or otherwise to be avoided, you are rewriting history if you say that applies in all cases. The point is not so much what gets approved, but how the process of approving it works. And the US government can be surprisingly consultative when it feels like it.

While companies and governments debate how and why to censor our access to information, I’m thinking we need to just go back to using newsgroups (young people won’t have a clue what that means). AGQx forever!!!

There is a problem here, even if it’s not readily apparent, and it’s a major issue.
We already have laws in many parts of the world against what is termed “indirect discrimination”: a decision-making process that discriminates against a certain minority group, even as an unintended consequence of something else, is illegal.

Machine learning is essentially unexaminable, and it presumably takes considerable effort for Google to devise a conscious countermeasure for something their algorithm has discovered for itself.

So if Google’s self-modifying algorithms learn incidental racism, sexism, islamophobia etc as an indirect result of the data, that’s already illegal. But it’s next to impossible to verify. And still illegal. What if Google’s algorithm eventually decides that $BIGOTRY_OF_CHOICE is of overall profit to Google, because the extra revenue generated by right-wing dog-whistle sites more than compensates for the loss of click-through from the targets of $BIGOTRY_OF_CHOICE?

At the end of the day, it does become necessary to stifle technological innovation if that innovation would cause harm, and however much I am fascinated by machine learning and algorithms, auditability of decision-making processes is currently quite important in ensuring equality.
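For what it’s worth, the kind of audit being described needn’t require reading the model itself – an outcome check can be run on the decisions it produces. The sketch below is a minimal illustration with entirely made-up data and group labels, using the 80% “four-fifths” figure only as an example threshold:

```python
from collections import defaultdict

# Minimal sketch of an outcome-based audit for indirect discrimination:
# compare favourable-outcome rates across groups and flag large gaps,
# using the 80% "four-fifths" rule of thumb purely as an illustration.
# The decisions and group labels are made up for the example.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
favourable = defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    favourable[group] += ok

rates = {g: favourable[g] / totals[g] for g in totals}
best = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / best
    verdict = "FLAG for review" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio vs best={ratio:.2f} -> {verdict}")
```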

“it presumably takes considerable effort for Google to devise a conscious countermeasure for something their algorithm has discovered for itself.”

They seemed quite capable of fixing things when they got all the bad publicity for labelling those pictures of black teenagers as gorillas. I suspect they like to play the “helpless” card as it allows them to clutch to their corporate giblets the figleaf that they aren’t being evil.

The British government has zero say over what American companies do. Their algorithms are their business. Google’s algorithms are what made them a lot better than every other search engine on the Internet. So go away British government.

Unfortunately this isn’t just the British government.

One of the most respected academic organizations associated with computing, the Institute of Electrical and Electronics Engineers (IEEE), publishes an un-refereed “popular” journal called “Edge.” Astoundingly, an article in this month’s magazine called for “Algorithm Governance” because programmers might find out too much about you by sifting the data you’ve put online in ways the author felt inappropriate. The article actually called for government intervention.

Makes me glad that I dropped out of the IEEE Computer Society last January after 22 years of membership. I’m not associated with this irrational thinking.

You can find the article by searching for “IEEE Edge,” registering (free), and searching this month’s issue (It’s not April 1, is it?). Duck and John: PM me if you would like me to send scanned images.

Algorithm governance? I’m all ears so please send something along.

We already have “algorithm governance”, so the concept can hardly be a surprise. In some countries, of course, it works more along the lines of algorithm guidance, as in NIST’s guidelines for secure hashing, password storage, and so on. Algorithm governance, or at least attempts at governing and re-governing algorithms, is a part of life in many fields of IT, notably cryptography and electronic gambling.

Some public servants, for example, want to regulate to make encryption weaker so they can allegedly fight crime more easily. (IMO, this is so seriously ill-informed that it ought to disqualify those very public servants from being in a position where they can exert any influence on cryptography ever again, a disqualification condition that would itself rather ironically be “algorithm governance”, where the algorithm in question is job selection.)

Other public servants, bless their hearts, want to make sure that we don’t repeat the cryptographic blunders of the past by regulating against the use of, or at least squeezing us all very hard to make us stop using, outdated encryption technology, e.g. “broken” ciphers and hashes such as RC4 and MD5.
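As a concrete, if simplified, illustration of that second kind of governance: a policy layer can simply refuse to use hashes that guidance has deprecated. The allow/deny lists below are illustrative, not quoted from any particular standard:

```python
import hashlib

# A small sketch of "algorithm governance" in practice: a policy layer
# that refuses to use hash algorithms that guidance has deprecated.
# The allow/deny lists are illustrative, not quoted from any standard.
DENYLIST = {"md5", "sha1"}           # widely deprecated
ALLOWLIST = {"sha256", "sha3_256"}   # examples of currently recommended hashes

def governed_hash(name, data):
    """Hash data with the named algorithm, but only if policy permits it."""
    if name in DENYLIST or name not in ALLOWLIST:
        raise ValueError(f"hash algorithm {name!r} is not permitted by policy")
    return hashlib.new(name, data).hexdigest()

print(governed_hash("sha256", b"hello"))   # fine
# governed_hash("md5", b"hello")           # would raise ValueError
```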

It’s not really surprising that, in a world where we have public servants who want to fine Facebook half a million Euros a day for fake news (I’d put the effort into getting what you might call “critical thinking”, or even simply “how to do the most basic sorts of research”, into the school syllabus, but what do I know?), we have people who want to regulate the golden-egg-laying machinery of the online ad giants.

I’m not saying that these algorithms do lead to anti-competitive behaviour, but it’s hard to argue that they never could.
