Beset by foes on all sides and traitors from within, Facebook’s billionaire ruler Mark Zuckerberg is poised to rename his kingdom, putting distance between the tainted social media platform and the empire’s broadening ambit.
The Verge got the scoop this week from an anonymous but well-informed insider. It already had a lead from the Zuck earlier this year, when he told the tech news site that Facebook would shift over the next few years from being seen as a social media company to a metaverse company.
For those without a penchant for cyberpunk sci-fi, the term “metaverse” was coined by novelist Neal Stephenson in his 1992 novel Snow Crash to describe a virtual-world escape from a dystopian real world – think The Matrix, but with consent.
Suitably surprised this is gathering international headlines? You should be, since it’s straight out of Google’s playbook: the search giant set up a holding company – Alphabet – whose name no one outside those buying Google stock ever uses.
The Zuck does need to change the narrative about his social media empire. This month’s massive outage was embarrassing, but that’s nothing compared to the Wall Street Journal’s Facebook Files series.
Based on internal documents leaked by whistleblower Frances Haugen, the Facebook chief is again being asked to front up to policymakers, this time over the impact Instagram has on teens’ mental health and what the social media giant knew about it.
Harvesting information
The Facebook kingdom has been under siege ever since the Cambridge Analytica data-harvesting scandal emerged in 2018 – and that’s still playing out, with Zuckerberg now added to a privacy suit the firm is facing.
Even the White House is breathing down big tech’s neck, in what could be the biggest overhaul of antitrust regulation since the breakup of John D Rockefeller’s Standard Oil in 1911.
US president Joe Biden had fighting words for big tech when issuing a pro-competition executive order in July, describing the small number of dominant internet platforms as using “their power to exclude market entrants, to extract monopoly profits and to gather intimate personal information that they can exploit for their own advantage”.
It’s very easy to get fixated on the ills caused by big tech, but these are simply tools we’ve increasingly adopted into our everyday lives.
It’s hard to forget the images of Tahrir Square spewing forth on Twitter during the pro-democracy protests of the Arab Spring a decade ago – protests that have unfortunately been followed by waves of violence and instability – or the harrowing video of George Floyd’s murder that further propelled the Black Lives Matter movement in the US.
In the same vein, the platforms have been a melting pot of extremist hatred and misinformation, targeting the angry and disenfranchised.
For New Zealand, that boiled over in the wake of the Christchurch mosque attacks, when prime minister Jacinda Ardern launched the Christchurch Call to stamp out extremist and hateful content on social media platforms.
Even the investment community got on board, with major fund managers banding together to use their heft and lean on the highly profitable big tech platforms, which appeared blind to the potential long-term damage of a laissez-faire approach to content moderation.
Disappointing and frustrating
The global investor initiative – spearheaded by the NZ Superannuation Fund – came to an end this week with ongoing frustration and disappointment that Facebook, Google-parent Alphabet, and Twitter all ducked for cover when their shareholders wanted to meet the big tech firms’ boards.
However, the investors did pressure Facebook into tightening up an oversight committee charter by explicitly including a focus on the sharing of content that violates its policies, a measure the Super Fund said orientates the social media group towards preventing problems rather than simply putting out fires.
It will be a hard slog for the likes of Facebook, whose artificial intelligence struggles to identify what it deems hate speech or excessive violence. The Super Fund tasked NZ consultancy Brainbox with gauging whether big tech’s changes are up to the scale of the problem.
Brainbox’s full report is well worth a read for anyone interested in content moderation, which is shaping up to be one of the thornier legal and political issues facing modern societies.
Brainbox found the big tech firms have made reasonable efforts to reduce the spread of misinformation and graphic content during a major event – such as the mosque attacks – but doesn’t think it can be stamped out entirely. Nor does it predict any let-up in the need for content moderation, unless there’s a wide-scale move to end-to-end encrypted messaging platforms, which can’t be moderated anyway.
Being seen to do something
The report also looks at where increasingly concerned nations are heading in their moves to regulate social media content, concluding the strongest case for intervention is in requiring transparency and the auditing of systems.
Brainbox doesn’t endorse content-specific standards unless the content is deemed illegal. It also warns heavy-handed regulation can lead to practices that undermine human rights, having been “persuaded by the weight of expert criticism that the likely effect of the current regulatory trajectory is highly concerning from a human rights perspective”.
The report is positive about Europe’s proposed Digital Services Act, but says much of the upcoming legislation demonstrates “regulators believe content moderation is a simple rather than complex exercise; that accurate moderation at scale is possible within exceedingly short timeframes; and that automation can accurately and safely achieve this. None of this is correct”.
New Zealand only recently dodged its own regulatory overreach with the Films, Videos, and Publications Classification (Urgent Interim Classification of Publications and Prevention of Online Harm) Amendment Bill.
The proposed law gives the chief censor powers to quickly classify livestreamed content as objectionable and to issue takedown notices – a seemingly sensible response to the Christchurch shootings. However, it also included a clause to set up a government-backed web filter to block content deemed objectionable.
Managing public opinion
The ill-defined and technically ineffective clause – agreed to by cabinet’s social wellbeing committee in December 2019 on the view that public opinion on filters was shifting – went down like a lead balloon in submissions to the parliamentary select committee reviewing the legislation.
The committee, chaired mostly by National’s Barbara Kuriger, removed the filter provision, saying the lack of detail was a “significant concern for us and for submitters”.
The government has taken that on board: when the bill completed its second reading earlier this week, internal affairs minister Jan Tinetti said cabinet had agreed to remove those provisions and all references to the filter.
Tinetti may have just been taking up her predecessor Tracey Martin’s bill, but her contortions are a lesson for the Facebook chief.
It may be a minor issue in a far-flung corner of the Facebook realm, but the seeds of discontent have taken root across Zuckerberg’s metaverse. A new name had best be accompanied by some substance.