“Web Environment Integrity”—Google Embraces the Digital Imprimatur

When I wrote “The Digital Imprimatur” almost twenty years ago (published on 2003-09-13), I was motivated by the push for mandated digital rights management with hardware enforcement, attacks on anonymity on the Internet, the ability to track individuals’ use of the Internet, and mandated back-doors that defeated encryption and other means of preserving privacy against government and corporate surveillance. This was informed by the “crypto wars” of the 1990s and, in particular, the attempt by the Clinton regime in the U.S. to mandate the “Clipper chip”, with “key escrow” and a built-in backdoor which would allow “law enforcement” to intercept all private communications foolish enough to use it. In 2003, with terrorism hysteria in full force and the so-called PATRIOT Act seen as just the first act in an epochal grab by coercive government of the privacy rights of its once-citizens, now subjects and consumers, I warned of an emerging alliance between Big Brother and Big Media (the term “Big Tech” not having yet emerged) to “put the Internet genie back in the bottle”. This was exemplified by a project underway at Microsoft, code-named “Palladium” and later officially dubbed the “Next-Generation Secure Computing Base for Windows”, which envisioned forcing personal computer manufacturers, as a condition of being able to run Microsoft Windows, to incorporate hardware-level protection blocking access by the nominal owner of the computer to “sealed storage” containing proprietary programs and data. This would not only enforce digital rights management and prevent software piracy, but also allow Microsoft to block the use of non-Microsoft operating systems which did not bear the Microsoft signature at boot time. Such systems could contain back-doors to eavesdrop on users’ private information and communications, with the user unable to detect their presence or circumvent their operation.

In “The Digital Imprimatur”, I predicted that “the overwhelming majority of new computer systems sold in the year 2010” would include Trusted Computing functionality. Well, in this case even I underestimated the ham-fisted blundering ignorance of Microsoft designers, the utter incompetence of their software implementation, and the zombie-shuffling pace of government action in the United States; by a combination of all of these, we managed to dodge this bullet in the 2000s. But while my forecasts are often (way) too early, they have a way of eventually coming to pass.

That was then. This is now. Slavers are stupid, but very persistent. “Look out, here it comes again!”

This time it’s called “Web Environment Integrity” (WEI), and it comes, not from Microsoft but from the company that traded in their original slogan of “Don’t be evil” for “What the Hell, evil pays a lot better!”—Google.

So, what is WEI? Let’s start with a popular overview from Ars Technica.

Google’s newest proposed web standard is… DRM? Over the weekend the Internet got wind of this proposal for a “Web Environment Integrity API”. The explainer is authored by four Googlers, including at least one person on Chrome’s “Privacy Sandbox” team, which is responding to the death of tracking cookies by building a user-tracking ad platform right into the browser.

The intro to the Web Integrity API starts out: “Users often depend on websites trusting the client environment they run in. This trust may assume that the client environment is honest about certain aspects of itself, keeps user data and intellectual property secure, and is transparent about whether or not a human is using it.”

The goal of the project is to learn more about the person on the other side of the web browser, ensuring they aren’t a robot and that the browser hasn’t been modified or tampered with in any unapproved ways. The intro says this data would be useful to advertisers to better count ad impressions, stop social network bots, enforce intellectual property rights, stop cheating in web games, and help financial transactions be more secure.

Google’s plan is that, during a webpage transaction, the web server could require you to pass an “environment attestation” test before you get any data. At this point your browser would contact a “third-party” attestation server, and you would need to pass some kind of test. If you passed, you would get a signed “IntegrityToken” that verifies your environment is unmodified and points to the content you wanted unlocked. You bring this back to the web server, and if the server trusts the attestation company, you get the content unlocked and finally get a response with the data you wanted.
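
Reduced to code, the handshake described above looks something like the following minimal TypeScript sketch. The header names and the attester endpoint are hypothetical (the explainer does not specify a wire format); only the shape of the exchange matters here.

```typescript
// Hedged sketch of the WEI flow as described above; all names are invented.
async function fetchWithAttestation(url: string): Promise<Response> {
  // 1. First request: a WEI-enforcing server refuses and issues a challenge.
  const first = await fetch(url);
  if (first.status !== 403) return first; // site does not require attestation
  const challenge = first.headers.get("X-WEI-Challenge") ?? "";

  // 2. The browser asks a third-party attester to vouch for its environment.
  const attested = await fetch("https://attester.example/attest", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ challenge, contentBinding: url }),
  });
  const { integrityToken } = await attested.json(); // signed by the attester

  // 3. Retry, presenting the signed IntegrityToken; if the server trusts
  //    this attester, it unlocks the content.
  return fetch(url, { headers: { "X-WEI-Integrity-Token": integrityToken } });
}
```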

Now let’s take a peek at the cited “Web Environment Integrity Explainer” posted on GitHub.

Users often depend on websites trusting the client environment they run in. This trust may assume that the client environment is honest about certain aspects of itself, keeps user data and intellectual property secure, and is transparent about whether or not a human is using it. This trust is the backbone of the open internet, critical for the safety of user data and for the sustainability of the website’s business.

Mother of babbling God—this is the first paragraph of the document. The fundamental principle of designing any network application, known, applied, and, when forgotten, inevitably exploited for at least half a century now, is “Don’t trust the client.” A server communicating over a network with a client not under its control must always assume the client is malicious, verify everything independently, guard against “man in the middle” attacks, and defend itself from denial of service attacks. To “assume that the client environment is honest” might be called the Original Sin of network application security, and yet here, in a paper authored by four Google employees, in the very first paragraph, they assert “This trust is the backbone of the open internet, critical for the safety of user data and for the sustainability of the website’s business.”
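
For readers who have not run into the principle, here it is in its classic form: a minimal TypeScript/Node sketch of a hypothetical shop endpoint. The server ignores everything the client asserts about the transaction and re-derives it from its own records.

```typescript
// "Don't trust the client": the server re-derives anything the client
// asserts. Hypothetical shop endpoint, for illustration only.
import { createServer } from "node:http";

const PRICES: Record<string, number> = { widget: 19, gadget: 42 };

createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    // A malicious client may send { item: "widget", price: 0.01 }; the
    // claimed price is ignored. The authoritative value lives server-side.
    const order = JSON.parse(body);
    const price = PRICES[order.item];
    if (price === undefined) {
      res.writeHead(400).end("unknown item");
      return;
    }
    res.end(`charged ${price}`);
  });
}).listen(8080);
```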

Now let’s look at:

Some examples of scenarios where users depend on client trust include:

  • Users like visiting websites that are expensive to create and maintain, but they often want or need to do it without paying directly. These websites fund themselves with ads, but the advertisers can only afford to pay for humans to see the ads, rather than robots. This creates a need for human users to prove to websites that they’re human, sometimes through tasks like challenges or logins.
  • Users want to know they are interacting with real people on social websites but bad actors often want to promote posts with fake engagement (for example, to promote products, or make a news story seem more important). Websites can only show users what content is popular with real people if websites are able to know the difference between a trusted and untrusted environment.
  • Users playing a game on a website want to know whether other players are using software that enforces the game’s rules.
  • Users sometimes get tricked into installing malicious software that imitates software like their banking apps, to steal from those users. The bank’s internet interface could protect those users if it could establish that the requests it’s getting actually come from the bank’s or other trustworthy software.

The means required to implement this “trust” spell the end of privacy: tattooing an “Internet licence plate” on users that identifies everything they do (and denies them access should they offend the powers that be), locking out their ability to customise software running on computers that are their own property and to filter the content presented to them, reducing them to passive consumers of whatever their corporate and government masters permit them to see.

Congratulations, Google, you’ve just invented U.S. television in the age of the three networks and before the VCR. I wonder if this time they’ll even allow you to turn down the volume when a commercial is playing.

Among the Goals of the explainer we find:

Allow web servers to evaluate the authenticity of the device and honest representation of the software stack and the traffic from the device.

This is the “computer certificate” I discussed in “The Digital Imprimatur”. It verifies that the user is running a top-to-bottom “software stack” in which all components are signed by their vendors, have not been modified in any way, and are running on hardware that refuses to run non-certified software. What does this mean if you’re a monopolist such as, say, Google, who wishes to remain one until the stars burn out?
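
Before answering, it is worth seeing how small and one-sided the verification machinery is. Here is one plausible shape for the server side of such a scheme, a hedged sketch assuming an Ed25519-signed token; the token format and the attester list are my own inventions, as the explainer leaves both unspecified.

```typescript
// Hedged sketch: verifying that an IntegrityToken was signed by an attester
// the site trusts. Token format and attester list are hypothetical.
import { createPublicKey, verify } from "node:crypto";

// The site's allow-list of attesters, provisioned out of band.
const TRUSTED_ATTESTERS: Record<string, string> = {
  "attester.example":
    "-----BEGIN PUBLIC KEY-----\n<base64 key material>\n-----END PUBLIC KEY-----",
};

interface IntegrityToken {
  attester: string;  // which attester signed this token
  payload: string;   // base64 claims: device, software stack, content binding
  signature: string; // base64 Ed25519 signature over the payload
}

function verifyIntegrityToken(token: IntegrityToken): boolean {
  const pem = TRUSTED_ATTESTERS[token.attester];
  if (!pem) return false; // unknown attester: no trust, no content
  return verify(
    null, // Ed25519 keys imply the algorithm
    Buffer.from(token.payload, "base64"),
    createPublicKey(pem),
    Buffer.from(token.signature, "base64"),
  );
}
```

Note where the power sits: whoever controls the allow-list of attesters decides whose hardware and software count as “authentic”.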

Look back at the first bullet point of “examples of scenarios” above. There goes your ad blocker, serf. No more web crawlers by Google competitors, as they’re not certified as human. Then, perhaps, the browser (certified unmodified) won’t let you scroll past the auto-play ad without spending a specified time with it on screen. Use another browser? Forget it—Google won’t grant it a certificate that allows it to access sites using Web Integrity unless it also reduces its users to passive consumers. What about renegade Web sites that do not require WEI certification for access? Why, Google will simply designate them “insecure” and decline to index them, rendering them effectively invisible on the Internet.

Once every user has been forced to use WEI-compliant hardware and software, the screw can be turned tighter still. Google’s browser and others they certify can refuse to display content from sites which do not enforce WEI attestation for access, and cloud service providers, content distribution networks, and security front-ends such as Cloudflare can refuse to transmit content from sites which decline to require WEI.

From there, it’s a baby step for WEI-certified browsers to refuse to display content which does not bear a document certificate signed by an authorised “trust provider”. The Digital Imprimatur will now be fully implemented and “all that we have known and cared for, will sink into the abyss of a new Dark Age made more sinister, and perhaps more protracted, by the lights of perverted science.”

10 Likes

I have read “The Digital Imprimatur” several times. As a non-techie, some of it escapes me. Nonetheless, I find this post terrifying - as it describes a perfection of “Brave New World” and “1984”. Since you are, as you say, usually ahead of the game (I believe that assertion), would you please describe some countermeasures individuals can take - lest we all become the babbling, uninformed idiots they want? Are there any?

4 Likes

Well. As I re-read these materials, I am thinking the answer to my question about what individuals can do is “nothing”. So, maybe a better question is: can those sufficiently incensed and horrified by this - those with the requisite skills and finance - organize an independent, parallel and open ecosystem for like-minded people (i.e. those with actual functional minds, capable and desirous of critical thought)? I desperately hope this is possible, lest I literally lose sleep over the prospect of this “integrity” future as the only imaginable one.

3 Likes

I’m not sure what is the most promising direction to pursue to defeat or evade the dystopian future toward which the alliance between Big Brother and Big Tech is driving us. When I wrote The Digital Imprimatur in 2003, in an era when the grass roots had recently (we thought) won the crypto wars so decisively that it appeared the snoopers had left the field, it was easier to be optimistic we could do it again, and even Microsoft did not command the high ground the way Google does today (77% or more of the browser market when you include Chrome and the other browsers [including Microsoft Edge and Opera] that use Google’s Chromium engine to display Web content, 92% of the search market, control of the most widely used mobile operating system [Android], etc.). The concentration of Web services in the cloud service providers (Amazon AWS, Microsoft Azure, and Google Cloud) now means that three Big Tech companies, all seemingly on board with the woke slaver agenda, control 65% of the computing and network access resources used by the Web, and can act as choke points to require compliance with whatever policies they dictate.

It seems to me there are two avenues to resist this.

Technological: Decentralisation

This is the strategy described in George Gilder’s 2018 book, Life after Google (link is to my review). Gilder argues that the evolution of technology itself (the migration of compute power from concentration in vast data centres to individual machines at the edge of the network) and the devolution of network architecture from the highly centralised form it has become (the antithesis of the design goals of the ARPANET and early Internet) toward distributed grid communication systems enabled by 5G mobile communications are increasingly tilting the economic advantage away from the centralised data silos, enabling an era in which individuals and companies at the edge of the network own their own data and communicate with peers without a behemoth at the centre of the spider web. In my review of Life after Google I link to some projects underway in 2018 moving in this direction; some are now defunct, but others have replaced them, sharing the goal of “routing around” censorship, cancellation, and propaganda.

See the Wikipedia page “Comparison of software and protocols for distributed social networking” for a list of active and defunct projects in that space. The page “Comparison of distributed file systems” lists projects, many in production today, to liberate bulk data storage from corporate data silos.

Political: Trust-Busting

The time may have come for some old-fashioned Bull Moose in the china (or should I say China?) shop trust-busting. Google is uniquely positioned to bring their version of the Digital Imprimatur into being through the combined power of market share in browser engine, search, cloud services, and mobile operating systems. There is no “network effect” requiring these largely unrelated businesses to be controlled by one company, and there are many ways in which that company can exert “dark synergy” to block opposition to plans that further reinforce its dominance. The impact of Google on the world’s flow of information is far beyond that of IBM and AT&T when they were broken up. Maybe it’s time to put Google in the crosshairs.

8 Likes

Yes. I have begun writing my congress-sponges (how is it they all wind up millionaires?) about necessary antitrust actions. These include not only the tech giants, but also Visa, which seems to think it can de-bank anyone it doesn’t like. If legal actions aren’t forthcoming, some enterprising civil libertarians may well implement some informal remedies, a.k.a. “self help”. “No justice, no peace” seems to scale nicely, laterally, to these issues.

I have recently installed (surprisingly, I managed it myself!) network attached storage (NAS) from Synology. It turned out to be such an excellent company, I just bought and set up their new router, which has enabled me to segment and secure my home networks (secure, guest, IoT, video surveillance). Such modern routers include sophisticated user-defined firewalls, which permit “micro-segmentation”. I am just getting into that, but I am wondering if this isn’t part of an evolving de-centralization? If it were made easy and secure by third parties, I could imagine offering online access to some of my network storage. Does this represent some portion (times scores of millions of others) of the kind of decentralization necessary to supplant googlefacebookamazon et al. data silos?

4 Likes

Another part of the puzzle needed to restore peer-to-peer operations on the Internet and get rid of the data silos is rolling out, albeit with glacial speed in some areas. This is the IPv6 Internet protocol, which replaces the original 32-bit IPv4 addresses with 128-bit addresses: around 3.4×10³⁸ of them, enough to assign a unique address to every atom on the surface of the Earth with plenty to spare. This will solve the IP address exhaustion problem which caused Internet service providers to assign “floating” addresses to customers, and will restore permanent, fixed addresses which can accept incoming traffic from anywhere in the world. (A firewall provides security by blocking access to all but specifically opened ports.)
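
For scale, the arithmetic is easy to check exactly with BigInt; a throwaway TypeScript sketch (the surface-area figure is the usual rough value for the Earth):

```typescript
// Scale of the IPv6 address space, in exact integer arithmetic.
const ipv6Total = 2n ** 128n; // 340282366920938463463374607431768211456 ≈ 3.4e38
const earthSurfaceM2 = 510_100_000_000_000n; // ≈ 5.1e14 m² of Earth's surface
// ≈ 6.7e23 addresses per square metre: roughly Avogadro's number.
console.log(ipv6Total / earthSurfaceM2);
```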

With IPv6, there is no problem with individual mail, Web, or conferencing servers talking directly to one another with end-to-end encryption, cutting out the silo in the middle and the ability to snoop on the traffic. This is one of the things that is motivating efforts like the encryption ban in the UK’s ironically named “Online Safety Bill”, which is being pushed because the snoopers fear their subjects’ communications going dark to them.

Fourmilab.ch has supported IPv6 since February 2017, and Scanalyst has since its launch in October 2021. Every Swisscom DSL customer has IPv6 capability simply by enabling it in their router. (It may be enabled by default now: it wasn’t when mine was installed several years ago.)

6 Likes

In the meantime, the FTC is taking on Meta (21.92% Republican donations) and Microsoft (23.4% Republican donations).

She’s publicly railed against Amazon (28.90% Republican donations), but it wouldn’t be appropriate in a public function.

But she’s not going against Google/Alphabet (15.61% Republican donations), Netflix (3% Republican donations), or Disney (11% Republican donations).

Before Google got their hands into OpenSecrets, the political agenda was even more extreme:

6 Likes