Is Anybody There? Rise and Fall of an Early Metaverse

At this writing, it appears that the recent wave of hype about the “metaverse”, set off by Facebook’s announcement in October 2021 that it was renaming the company “Meta” and investing untold billions in building virtual 3D worlds in which customers could interact (see the 2021-10-29 post “Mark Zuckerberg on the Metaverse”), is receding. The re-badged “Meta” announced two waves of layoffs, in November 2022 and March 2023, totaling more than 20,000 jobs, and proclaimed 2023 its “Year of Efficiency”. The metaverse no longer appears to be the Current Thing: Meta is pivoting to artificial intelligence, with CEO Mark Zuckerberg saying the company’s largest investment would be “advancing AI and building it into every one of our products.”

This débâcle was greeted with a sigh of relief by those who had long seen the promise of virtual worlds as the next generation of “user interface” and advocated their development, but dreaded the dystopian Hell of a virtual world built by Facebook: one bristling with corporate advertising, data mining at the expense of users’ privacy, attention-span-murdering distractions, manipulation of information flow in favour of the slaver agenda, silencing of dissent, and all of the other features of Facebook instantiated in an immersive environment.

Perhaps we shouldn’t have worried so much. It had been tried before, and ignominiously failed, albeit on not so operatic a scale. In October 2003, after a protracted development and beta test process, the There virtual world opened to the public. Unlike the Second Life virtual world, which had opened a few months earlier in June 2003, There offered a stripped-down, more cartoon-like environment and limited ability to build in-world. There had an economy with its own currency, which could be purchased for US$, but no ability to convert silly money earned in the virtual world back into government funny money.

The main focus of the company was signing up major brands to have a presence in There, where they could sell zero-cost pixels bearing their logos to idiots: Coca-Cola, Nike, and Levi’s were among those who bought in. The respect the developers had for the users they expected to make all of this an economic success was exemplified by CEO Tom Melcher, who described the typical user as “the woman who is in her 30s, single, really overweight, lives in a small town.” He didn’t mention the cats.

In June 2004, There laid off many of its people and restructured. In March 2010, the whole thing shut down.

Meanwhile, Second Life will celebrate its 20th birthday in June 2023. Its focus on user-created objects and environments, powerful scripting facilities allowing the construction of autonomous objects and simulations, and an in-world economy which allows buying, selling, paying, and earning, with conversion to and from US$ at a floating exchange rate, has attracted a modest (estimated between 800,000 and 900,000) but loyal user base. The company that operates it, Linden Lab, is profitable and continues development of the system. Perhaps, at the present stage of development, this modest, open approach, trusting the users to create the content and activities that make the world interesting, is more appropriate than the top-down exploitative model attempted by There and Meta.


Doesn’t Meta have a board? It’s incredible to see one person’s passion projects have such a massive effect on employees and shareholders.


According to recent SEC filings, Mark Zuckerberg owns around 13% of Meta shares. While that’s a long way from control, he is CEO, co-founder, and the largest single shareholder, and boards of directors are inclined to give people in such a position the benefit of the doubt when they announce a change of direction for the company.


Sadly, too many Directors are little better than sock puppets for the CEO: chosen by the CEO because they are good buds and/or hardly independent thinkers, and often with only a negligible personal financial stake in the company. Directors are the Weakest Link in shareholder “capitalism” as it is currently practiced.


Doesn’t Meta have a board?

According to a 2019 Vanity Fair article, Zuckerberg has (“had”?) full control of the company because favored insiders hold supercharged class B shares with 10x voting rights:

Zuckerberg, the company’s majority shareholder who holds 75% of Facebook’s Class B stock, controls 58% of Facebook’s vote.


Investopedia says:

Meta has a dual class structure.

As of October 21, 2022, Meta had 2,248,672,204 shares of Class A common stock outstanding. The stock represents one vote per share and trades under the ticker “META” on the Nasdaq Global Select Market.

Also as of October 21, 2022, Meta had 402,876,470 shares of Class B common stock outstanding. The Class B stock represents 10 votes per share and is owned by Zuckerberg, management, and directors. This gives Zuckerberg and his management effective control of the company.

Thus, Class B shareholders collectively have almost twice the voting power of all Class A common stockholders combined.
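The arithmetic behind the “almost twice” claim can be checked directly from the share counts quoted above (a minimal sketch; the counts are those from the October 21, 2022 figures):

```python
# Share counts quoted from the October 21, 2022 figures.
class_a_shares = 2_248_672_204  # Class A: 1 vote per share
class_b_shares = 402_876_470    # Class B: 10 votes per share

class_a_votes = class_a_shares * 1
class_b_votes = class_b_shares * 10

# Ratio of Class B voting power to Class A voting power.
ratio = class_b_votes / class_a_votes
print(f"Class A votes: {class_a_votes:,}")
print(f"Class B votes: {class_b_votes:,}")
print(f"Class B / Class A voting power: {ratio:.2f}")
```

The ratio works out to roughly 1.8, confirming that Class B holders command nearly double the votes of all Class A holders together despite holding far fewer shares.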

According to the META Form 10-Q filing of 2022-09-30 [PDF]:

Mark Zuckerberg, our founder, Chairman, and CEO, is able to exercise voting rights with respect to a majority of the voting power of our outstanding capital stock and therefore has the ability to control the outcome of matters submitted to our stockholders for approval, including the election of directors and any merger, consolidation, or sale of all or substantially all of our assets. This concentrated control could delay, defer, or prevent a change of control, merger, consolidation, or sale of all or substantially all of our assets that our other stockholders support, or conversely this concentrated control could result in the consummation of such a transaction that our other stockholders do not support. This concentrated control could also discourage a potential investor from acquiring our Class A common stock, which has limited voting power relative to the Class B common stock, and might harm the trading price of our Class A common stock. In addition, Mr. Zuckerberg has the ability to control the management and major strategic investments of our company as a result of his position as our CEO and his ability to control the election or, in some cases, the replacement of our directors. In the event of his death, the shares of our capital stock that Mr. Zuckerberg owns will be transferred to the persons or entities that he has designated. As a board member and officer, Mr. Zuckerberg owes a fiduciary duty to our stockholders and must act in good faith in a manner he reasonably believes to be in the best interests of our stockholders. As a stockholder, even a controlling stockholder, Mr. Zuckerberg is entitled to vote his shares, and shares over which he has voting control as governed by a voting agreement, in his own interests, which may not always be in the interests of our stockholders generally.

So, while in a typical public company the CEO serves at the pleasure of the board, at META the board serves at the pleasure of the CEO/majority (vote) shareholder. These multiple-class common stock capital structures with preferential voting rights used to be considered shady and were, as I recall, outright banned by the New York Stock Exchange (which somehow overlooked Ford Motor Company, whose such structure kept control within the family). Now, it’s “anything goes”, and go it does, right down the drain in this case.


Hmmm. I would have thought ViOS (Visual Internet Operating System) would be an earlier failed Virtual World project than There.

Neither I nor the video claimed There was the first virtual world, or the first to fail. What distinguishes it is how similar the vision of its developers was to that of the recent Facebook/Meta and all of the venture-funded copycat metaverses. They demonstrated that the model of using a virtual world to sell “virtual goods” to “virtual consumers” for “virtual money” is no way to bring real money to the bottom line.

There were many precursors of virtual worlds. VRML, introduced in 1994–1995, was intended in part to define 3D virtual worlds as a means of navigating the Internet, and was used to implement some early chat systems. Lucasfilm Habitat, released in beta in 1986, used a client program running on Commodore 64s and the Quantum Link service to host the shared virtual world. Obviously, given the limitations of the platform, it used 2D sprite-based graphics, but users could define and customise their avatars, interact with other users, buy and sell goods, and participate in governance of the virtual world. Fujitsu licensed it in 1990 and marketed it in Japan, then later built an extended version of its own called Dreamscape, with Microsoft Windows and MacOS clients. Habitat and its spin-offs had an in-world economy and currency long before other virtual worlds.

Here is the Lucasfilm promotional video for Habitat.

Habitat has been resurrected by Randy Farmer, one of its creators, and is available today as Neohabitat.

Randy Farmer and Chip Morningstar, the chief developers of Habitat, later developed the software for the American Information Exchange which, had Autodesk not killed it in 1992, would have pioneered online markets long before eBay.


The second season of Halt and Catch Fire is a good dramatized portrayal of early virtual worlds and that period in general.

It’s interesting to read Morningstar’s essay about the state of academia: it seems that, with LLM technology, this pseudo-academic capability has now been automated.


Well, let’s see…. This is from ChatGPT, using the GPT-4 large language model.


Write a postmodern deconstruction of the text “John F. Kennedy was not a homosexual” in the style of Derrida, Foucault, and Baudrillard, citing a distinction, hierarchical opposition, and an alternative reading in which the text is interpreted as referring to itself. Cite references to Freud and Marx, as interpreted by French philosophers and literary critics.


Title: The Elusive Kennedy: A Postmodern Inquiry into the Textual Embodiment of Sexuality

In the seemingly straightforward statement, “John F. Kennedy was not a homosexual,” we are presented with an opportunity for postmodern deconstruction, as practiced by the likes of Derrida, Foucault, and Baudrillard. By dissecting the text, we shall reveal the intrinsic contradictions and ambiguity of its meaning, especially when considering the interplay of Freudian and Marxist theories within French philosophy and literary criticism.

To begin, we must recognize the distinction between the signifier, “John F. Kennedy,” and the signified, the actual historical figure. Derrida would argue that the signifier is not a stable representation of the signified, but rather, an arbitrary construct (Derrida, 1967). Thus, the signifier “John F. Kennedy” is not an unequivocal reference to the 35th President of the United States, but an infinitely malleable linguistic symbol.

Foucault’s concept of power and knowledge provides a lens through which we can examine the hierarchical opposition within the text. The statement itself enforces a binary structure of sexuality – homosexual and heterosexual – and positions one as being “not” the other, implying a hierarchy between the two (Foucault, 1978). The text, therefore, becomes a site of power, where the normative discourse of sexuality is reinforced and perpetuated. The invocation of Kennedy’s sexuality also reveals the ways in which the personal and the political intersect, revealing the connection between the individual and the state, an idea that Marx explored in his works (Marx, 1844).

In applying Baudrillard’s concept of hyperreality, the text can be reinterpreted as referring to itself. The statement “John F. Kennedy was not a homosexual” is not merely a declaration of a historical figure’s sexuality, but a simulacrum, a copy without an original (Baudrillard, 1981). In this alternative reading, the text becomes a self-referential loop, divorced from any objective reality, existing only within the realm of language and symbols.

Freud’s psychoanalytic theory further complicates the interpretation of the text. The denial of Kennedy’s homosexuality can be seen as an expression of the unconscious desires of the author, projecting their own anxieties onto the historical figure (Freud, 1900). In this sense, the text serves as a manifestation of the author’s unresolved conflicts regarding sexuality and societal norms.

Finally, by invoking Marx’s theories on alienation and commodity fetishism, we can view the text as a reflection of the capitalist system that commodifies and fetishizes public figures, reducing them to mere objects (Marx, 1867). The text’s obsession with Kennedy’s sexuality is a symptom of this commodification, reducing his complex identity to a singular, consumable aspect.

In conclusion, the statement “John F. Kennedy was not a homosexual” is a rich site for postmodern deconstruction. Through the lens of Derrida, Foucault, and Baudrillard, and by considering the theories of Freud and Marx, we have uncovered a multitude of meanings and contradictions. By dismantling the binary oppositions and questioning the very nature of the text’s signifiers, we can expose the fluidity and instability of meaning, as well as the power structures embedded within language and discourse.


Please include citations for the references in this paper.


I apologize for the oversight. Here are the citations for the references mentioned in the paper:

Baudrillard, J. (1981). Simulacra and Simulation. Éditions Galilée.

Derrida, J. (1967). Of Grammatology. Éditions de Minuit.

Foucault, M. (1978). The History of Sexuality, Volume 1: An Introduction. Éditions Gallimard.

Freud, S. (1900). The Interpretation of Dreams. Deuticke.

Marx, K. (1844). Economic and Philosophic Manuscripts of 1844. (Unpublished Manuscripts).

Marx, K. (1867). Capital, Volume I. Verlag von Otto Meisner.

These citations are provided in a general format, and you may want to adapt them to a specific citation style (e.g., APA, MLA, or Chicago) as required.

All of these are works which actually exist, unlike other GPT-4 output, which has been caught making up the references s/he/it cites. I don’t know, however, whether they actually assert the things they are claimed to in the “research” paper.


This is excellent. I think we’re at the AutoSokal level when it comes to generating academic spam. It was spam that made the careers of many academics at B-grade institutions, where the spamless academics were left behind, not having enough publications to “score”.

I wonder what kind of review GPT could generate to save the time of volunteer peer reviewers. I’ve seen some promising automatic reviews, even of the political biases in newspaper and magazine articles.


Assuming unbiased AI reviews (a big assumption), most papers wouldn’t make the cut. According to Bard, when asked “what percentage of articles in academic journals can be replicated”:

The percentage of articles in academic journals that can be replicated varies by field, but it is generally lower than most people think. In a 2015 study, researchers attempted to replicate 100 studies published in top psychology journals. Only 36% of the studies could be replicated, and the results of the replication studies were often much weaker than the original results. Similar studies in other fields have found similar results. For example, a 2016 study found that only 62% of studies published in top economics journals could be replicated.

There are a number of reasons why studies may not be replicable. One reason is that studies may be poorly designed or conducted. Another reason is that studies may be subject to publication bias, which means that only studies with significant results are published. This can lead to an overestimate of the true effect size of a phenomenon.

The low replicability of studies is a serious problem for science. It means that we cannot always trust the results of published studies. This can make it difficult to make decisions about public policy or to develop new treatments for diseases. It is important for scientists to take steps to improve the replicability of their studies. This includes designing studies carefully, conducting them rigorously, and making their data and methods available to other researchers.


“How would improving the quality of academic journals through AI review hurt the authors?”

There are a few ways that improving the quality of academic journals through AI review could hurt the authors.

  • Increased time to publication: AI review can be more time-consuming than traditional peer review, which could lead to longer delays in the publication of articles. This could be a problem for authors who are under pressure to publish their work quickly.
  • Increased cost of publication: AI review can be more expensive than traditional peer review, which could lead to higher publication fees for authors. This could be a problem for authors who are working on a limited budget.
  • Reduced opportunities for feedback: AI review can be less personal than traditional peer review, which could mean that authors receive less feedback on their work. This could make it more difficult for authors to improve their work.

It is important to note that these are just potential problems, and it is not clear how significant they will be in practice. However, it is important for authors to be aware of these potential issues before they decide to use AI review.

In addition to the potential problems mentioned above, there are also some ethical concerns about the use of AI review. For example, some people worry that AI review could be used to discriminate against certain groups of authors, such as women or minorities. It is important to address these ethical concerns before AI review is widely adopted.


Another important tool, especially in the squishier sciences, is preregistration of experiments. Wikipedia defines the procedure as:

In the standard preregistration format, researchers prepare a research protocol document prior to conducting their research. Ideally, this document indicates the research hypotheses, sampling procedure, sample size, research design, testing conditions, stimuli, measures, data coding and aggregation method, criteria for data exclusions, and statistical analyses, including potential variations on those analyses. This preregistration document is then posted on a publicly available website such as the Open Science Framework or AsPredicted. The preregistered study is then conducted, and a report of the study and its results is submitted for publication together with access to the (anonymised) preregistration document. This preregistration approach allows peer reviewers and subsequent readers to cross-reference the preregistration document with the published research article in order to identify (a) any “exploratory” tests that were not included in the preregistration document and (b) any suppressed tests that were included in the preregistered protocol but excluded from the final research report.

Preregistration protects against p-hacking and the “file drawer” effect, in which researchers collect a large body of data and then select and publish the subset which exhibits the level of significance they are looking for, or do not submit for publication results which falsify an effect they were testing.
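A toy simulation (my own illustration, not from the discussion above) shows why undisclosed multiple testing of this kind is so corrosive. Under a true null hypothesis, p-values are uniformly distributed on [0, 1], so a researcher who quietly tries twenty independent tests and reports only “significant” results will stumble on a publishable false positive in roughly two out of three studies:

```python
import random

random.seed(42)

TESTS_PER_STUDY = 20   # hypotheses quietly tried per "study"
STUDIES = 10_000       # number of simulated studies
ALPHA = 0.05           # conventional significance threshold

# Under a true null hypothesis, the p-value is uniform on [0, 1],
# so a fresh p-value can be simulated as random.random().
def study_finds_something():
    """True if at least one of the tests comes out 'significant'."""
    return any(random.random() < ALPHA for _ in range(TESTS_PER_STUDY))

hits = sum(study_finds_something() for _ in range(STUDIES))
rate = hits / STUDIES
print(f"Fraction of all-null studies yielding a 'publishable' result: {rate:.2f}")
# Theoretical value: 1 - 0.95**20, about 0.64
```

Preregistration defeats this by forcing every planned test into the open before the data are collected, so the twenty attempts cannot masquerade as one.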


Who would have guessed that spelling bee words had to be carefully chosen to prevent deaths from venomous spider bites, if it hadn’t been for important academic research? :laughing: