World Economic Forum: Use AI to “Moderate” Online Speech

The World Economic Forum (WEF), organiser of the annual Bond villain conclave and slaver fest in Davos, Switzerland, published on 2022-08-10 an “Opinion” piece titled “The solution to online abuse? AI plus human intelligence” by one Inbal Goldberger, identified as “VP of Trust & Safety at ActiveFence”. ActiveFence, in turn, is an Israeli company which sells “The end-to-end tool stack for agile, scalable, and efficient Trust & Safety teams”.

The WEF prefixes the article with the disclaimer (bold in the original):

Readers: Please be aware that this article has been shared on websites that routinely misrepresent content and spread misinformation. We ask you to note the following:

1) The content of this article is the opinion of the author, not the World Economic Forum.

2) Please read the piece for yourself. The Forum is committed to publishing a wide array of voices and misrepresenting content only diminishes open conversations.

But then, the WEF did, did it not, exercise editorial discretion in deciding to publish this opinion piece instead of, for example, one on the challenges to economic development of sub-Saharan Africa posed by the low mean IQ of its population.

As the internet has evolved, so has the dark world of online harms. Trust and safety teams (the teams typically found within online platforms responsible for removing abusive content and enforcing platform policies) are challenged by an ever-growing list of abuses, such as child abuse, extremism, disinformation, hate speech and fraud; and increasingly advanced actors misusing platforms in unique ways.

The solution, however, is not as simple as hiring another roomful of content moderators or building yet another block list. Without a profound familiarity with different types of abuse, an understanding of hate group verbiage, fluency in terrorist languages and nuanced comprehension of disinformation campaigns, trust and safety teams can only scratch the surface.

A more sophisticated approach is required. By uniquely combining the power of innovative technology, off-platform intelligence collection and the prowess of subject-matter experts who understand how threat actors operate, scaled detection of online abuse can reach near-perfect precision.

While AI provides speed and scale and human moderators provide precision, their combined efforts are still not enough to proactively detect harm before it reaches platforms. To achieve proactivity, trust and safety teams must understand that abusive content doesn’t start and stop on their platforms. Before reaching mainstream platforms, threat actors congregate in the darkest corners of the web to define new keywords, share URLs to resources and discuss new dissemination tactics at length. These secret places where terrorists, hate groups, child predators and disinformation agents freely communicate can provide a trove of information for teams seeking to keep their users safe.
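Stripped of the marketing language, what Goldberger describes is a two-stage triage pipeline: a model scores content at machine scale, the confident cases are handled automatically, and only the ambiguous middle band goes to human moderators. Here is a minimal sketch in Python; the keyword scorer and the thresholds are illustrative stand-ins of my own, since the article names no specific model or parameters:

```python
# Sketch of an "AI plus human intelligence" triage pipeline.
# The scorer below is a toy stand-in for a trained classifier, and
# the thresholds are assumptions for illustration only.

from dataclasses import dataclass
from typing import List

# Hypothetical block list standing in for a trained model's vocabulary.
FLAGGED_TERMS = {"scam", "attack"}

@dataclass
class Decision:
    text: str
    score: float   # 0.0 = clearly benign, 1.0 = certain violation
    action: str    # "allow", "remove", or "human_review"

def score(text: str) -> float:
    """Toy scorer: fraction of words found on the block list."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    return sum(1 for w in words if w in FLAGGED_TERMS) / len(words)

def moderate(items: List[str],
             remove_above: float = 0.5,
             review_above: float = 0.1) -> List[Decision]:
    """Route each item: auto-remove, queue for a human, or allow."""
    decisions = []
    for text in items:
        s = score(text)
        if s >= remove_above:
            action = "remove"        # model is confident: act at machine scale
        elif s >= review_above:
            action = "human_review"  # ambiguous: a moderator supplies precision
        else:
            action = "allow"
        decisions.append(Decision(text, s, action))
    return decisions

if __name__ == "__main__":
    for d in moderate(["lovely weather in Davos today",
                       "this scam will attack you",
                       "scam scam attack"]):
        print(f"{d.action:12s} {d.score:.2f}  {d.text}")
```

Note that everything in the pipeline turns on where the thresholds are set and who writes the block list: the same plumbing that removes fraud at scale removes anything else its operators score as a “harm”.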

And so, AI can allow surveillance, not just of the big content platforms, but of “the darkest corners of the web”, to identify and “stop threats rising online before they reach users.”

Why, they might call it “Total Information Awareness”.

4 Likes

Thank you for this. I find it deeply chilling and demoralising, but it is clearly something to be aware of in these more than trying times.

3 Likes

Will a return to the peer-to-peer web (3.0) preclude this truly hateful totalitarian censorship? I sure hope so.

3 Likes

Isn’t it funny how those who claim “these aren’t our ideas; we are just making them available on our platform” are the same people who cheer when certain voices are shut out from voicing opinions on other platforms because “it’s their platform and they don’t want to be associated with those ideas”? Well, which is it, Klaus? God, I can’t wait to feed these people feet first to a wood chipper.

3 Likes

The development of peer-to-peer technologies, both at the hardware transport layer (5G, large satellite constellations) and in software (IPv6 fixed addresses, distributed and redundant file storage, self-hosting appliances), provides the prerequisites for getting away from the centralised “data silos” which have emerged, and from a network with chokepoints at which content censorship and/or surveillance can be imposed. But whether these technologies will be allowed to mature and flourish, or will be eradicated in their infancy by coercive states, is very much uncertain today. The hope was that they would develop “under the radar” and become ubiquitous and unstoppable before the legacy powers realised the threat and mobilised to attack it, much as happened with open access to the Internet and encryption technology for privacy in the 1990s.
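To make the “distributed and redundant file storage” point concrete: the core idea is content addressing (as in IPFS and similar systems), in which a document’s address is the hash of its contents, so it can be fetched from any peer and verified by any client, leaving no central server at which censorship can be applied. A minimal sketch in Python, where the in-memory dictionary is a stand-in for content replicated across real peers:

```python
# Sketch of content addressing. The "network" dict stands in for
# blocks replicated across many independent peers.

import hashlib

network: dict[str, bytes] = {}   # address -> content

def publish(content: bytes) -> str:
    """Store content under its own SHA-256 hash; that hash is its address."""
    address = hashlib.sha256(content).hexdigest()
    network[address] = content
    return address

def fetch(address: str) -> bytes:
    """Fetch from any peer; the hash check makes tampering detectable."""
    content = network[address]
    if hashlib.sha256(content).hexdigest() != address:
        raise ValueError("peer returned content that does not match its address")
    return content

if __name__ == "__main__":
    addr = publish(b"samizdat pamphlet")
    print(addr)          # the document's permanent, location-independent name
    print(fetch(addr))   # any replica matching the hash is authentic
```

Because the address commits to the content, a censor cannot lean on a single host; suppression requires hunting down every replica.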

But there is cause for concern this time. See my recent post on the main page, “U.S. Sanctions Tornado Cash, Dutch Arrest Software Developer. Begun, the Crypto Wars Have”. It is a stunning example of the attempt to criminalise software development, the use of software tools, and the developers who write them.

5 Likes

I had not previously discovered your linked post.

From that post:
The Dutch public prosecutor’s office for serious fraud, environmental crime and asset confiscation.

I actually had to search that name to determine if they really are that out in the open about their charter or if it was your terminology.

Asset confiscation should be added to every government agency’s name. The Federal Bureau of Investigation and asset confiscation. The Internal Revenue Service and asset confiscation. The Federal Waterfowl and asset confiscation. The Drug Enforcement Administration and asset confiscation.

If you are a Federal agency and not confiscating people’s property, you better step up your game or risk losing funding.

5 Likes