#REGRETSREPORTER


In the early days of the Internet's emergence, there was a magical algorithm that became the competitive advantage of savvy e-commerce websites like amazon.com and the IP of my first startup.

It was a magically trained collaborative filtering algorithm that we built for WAP XML sites and the first mobile internet portals of the European media companies. A recommendation engine, as it was called, would observe people's behaviour online and draw inductive and deductive conclusions about the likelihood of someone wanting something immediately or in the near future. These "algos" were not even artificial intelligence per se, but were based on a precursor model of object-oriented databases. I lost you there, I know. Basically, they were functional equations that derived conclusions of the form "people who bought that book also bought this other one, which is why we are recommending it to you" or "people who look for sports news on a Monday morning will likely do so every Monday morning, so let us reorder the news links and put sports at the top" - this is what our recommendation engine did. Being automatically recommended products or digital consumables became the best thing since sliced bread, until YouTube was bought by Google in 2006 and the game changed to a radically more perverse type of recommendation, one that today has prompted governments and regulators to propose bills that would put an end to the harm that social media platforms and their recommendation systems inflict on our online behaviour.
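To make the "people who bought that book also bought this other one" logic concrete, here is a minimal sketch of item-to-item collaborative filtering based on purchase co-occurrence. The data and the `purchases` and `recommend` names are hypothetical illustrations, not the actual engine described above:

```python
from collections import defaultdict
from itertools import permutations

# Hypothetical purchase histories: user -> set of items bought.
purchases = {
    "alice": {"book_a", "book_b"},
    "bob":   {"book_a", "book_b", "book_c"},
    "carol": {"book_a", "book_c"},
}

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(lambda: defaultdict(int))
for basket in purchases.values():
    for item, other in permutations(basket, 2):
        co_counts[item][other] += 1

def recommend(item, top_n=3):
    """Return the items most often bought together with `item`, best first."""
    neighbours = co_counts[item]
    return sorted(neighbours, key=neighbours.get, reverse=True)[:top_n]

print(recommend("book_a"))  # e.g. ['book_b', 'book_c']
```

Real engines of that era scaled this same idea with precomputed similarity tables over millions of baskets, but the principle is exactly the "bought that, also bought this" rule: no machine intelligence, just counting and ranking.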


As early as 2009, researchers began to uncover how Google's algorithms could be detrimental to users. I have been collaborating in research on predatory, discriminatory and radicalising platforms since 2012. These harms exploded when Fake News came to dominate the social media platforms, and reputable media outlets like the New York Times, the Washington Post and the Wall Street Journal, along with consumer protection agencies, began to denounce recommendation practices that would be illegal in the analogue world. Worse, parents began to report how their teenage children were being bullied, induced to starve or self-harm, radicalised, or made to fall prey to pederasts, mostly on… YouTube, the world's preferred video platform, broadcasting around 700 million hours of content each day, 70% of which is driven by its A.I. recommendation algorithm.


A predatory recommendation system is trained to detect your addictions, weaknesses and psychological insecurities. Gamblers will be served ads enticing them to bet online or buy tickets to casinos. People with depression will find themselves hunted down by ads for shopping sprees and airline tickets to unaffordable destinations. Discriminatory systems are the next level of evil, because they have been designed to select specific people from among job applicants, or to present job descriptions and salaries that change according to the person accessing the site. The victims are typically ethnic minorities and, of course, women. How do they work, you may ask? The system presents white male candidates with higher salaries for the same positions, or with positions that are never offered to women or ethnic minorities. How do you like them apples! Radicalisation is perhaps the most detrimental to humans, because it can literally ruin lives forever. It preys on teens and even children at their most vulnerable ages. The recommendation algorithms serve them sexualised content disguised as cartoons, or content that convinces them to act and think in pernicious ways. They instigate people to believe outlandish claims that stir up their emotions and prey on their insecurities, like videos that try to discredit activism or scientific information, or that claim certain targeted politicians are involved in conspiracies.




Perhaps you have never been served videos that started with a typical marketing pre-roll ad promoting a detergent or a car and ended up being ISIS propaganda showing the gruesome decapitation of prisoners, but such pairings became so recurrent that big consumer goods companies decided to stop running their marketing campaigns on the platform. Recommended content is targeted at people over time and based on their daily use of not just one platform, but of as many as the system can possibly log onto your individual profile.


This is why governments like the United Kingdom's, and organisations such as the Mozilla Foundation, are pushing for new regulation and practices to stop algorithmic recommendations that are detrimental to consumers and illegal. Above all, they will seek to ensure that the technology companies offering such online products, and running the platforms where people are targeted and abused, take responsibility and are held legally accountable.


The proposed Online Safety Bill in the U.K. is still in draft and will be debated in Parliament this year. A principal concern the new bill addresses is protecting young people and clamping down on racist abuse online, as well as financial fraud on social media and dating apps, for example protecting people from romance scams and fake investment opportunities. The government believes it is time for tech companies to be held to account and for the British people to be protected from harm. Tech companies that fail to do so will face penalties. How hefty these will be is still unknown, but at least the companies will be held accountable for what happens under their roof.


For 25 years the Internet has been a digital Wild West where transgressors and tech companies went unpunished because of the lack of regulatory frameworks. This is just the beginning of equalising the analogue and digital worlds (companies in scope will have a duty of care towards their users, so that what is unacceptable offline will also be unacceptable online).

Out of inertia, some commentators have remarked that this may open the door to government censorship, but the bill is not a guide to censoring "content"; it targets illegal actions that in the analogue world would be considered criminal acts. In fact, the real censorship has turned out to come from the social media companies themselves, which have arbitrarily silenced or blocked users, revealing how much power tech companies currently hold over one of our core democratic rights, freedom of expression. Is it legal to ban Donald Trump from social media? Facebook decided it is, because according to its chief executive, Trump's posts infringed its community guidelines. The same goes for Twitter, which banned Trump on a permanent basis. But what happens when journalists or certain individuals are socially silenced for their opinions on the whims of a tech CEO or woke campaigners? Under the new legislation, if someone feels their content has been taken down unfairly, they will have the right to appeal.


As I have repeated for almost 20 years now: A.I. is a tool and a force for good and progress, but it has also allowed stakeholders to abuse the lack of regulation in exchange for ruthless profits, and it needs to operate within a legal framework, especially when people's lives and wellbeing are at stake.