Elon Musk Announces xAI

https://www.reuters.com/technology/elon-musks-ai-firm-xai-launches-website-2023-07-12/

https://www.reuters.com/technology/elon-musk-says-xai-will-use-public-tweets-ai-model-training-2023-07-14/

On 2023-07-14, Elon Musk held a Twitter Spaces event with the founders of the new xAI venture to describe his goals of “safe, curious, and honest” artificial intelligence. The recording of the announcement is posted on Twitter and cannot be embedded here; this is a YouTube transcription of the recording.

This is a summary of the discussion posted on Twitter by xAI.

  • The founding team was on hand to introduce themselves, and I must say it is an impressive group, with very strong backgrounds at DeepMind, OpenAI, Google, Tesla, etc.
  • Elon Musk said the goal with xAI is to build a good AGI (artificial general intelligence) with the purpose of understanding the universe.
  • Musk said that the safest way is to build an AGI that is ‘maximum curious’ and ‘truth curious,’ and to try to minimize the error between what you think is true and what is actually true.
  • To a truth-seeking superintelligence, humanity is much more interesting than no humanity, so that is the safest way to create one. Musk gave the example of how space and Mars are super interesting, but they pale in comparison to how interesting humanity is.
  • Musk said there is so much that we think we understand but we don’t in reality. There are a lot of unresolved questions. For example, there are many questions that remain about the nature of gravity, and why there is not massive evidence of aliens. He said he has seen no evidence of aliens whatsoever so far. He went further into the Fermi Paradox and how it’s possible that other consciousness may not exist in our galaxy.
  • If you ask today’s advanced AIs technical questions, you just get nonsense, so Musk believes we are really missing the mark by many orders of magnitude and that needs to get better.
  • xAI will use heavy compute, but the amount of ‘brute force’ will decrease as they come to understand the problem better.
  • Co-founder Greg Yang said that the mathematics they find at xAI could open up new perspectives on existing questions like the ‘Theory of Everything.’
  • Elon stated that you can’t call anything AGI until the computer solves at least one fundamental question.
  • He said that, from his experience at Tesla, they have overcomplicated problems. “We are too dumb to realize how simple the answers really are,” he said. “We will probably find this out with AGI as well. Once AGI is solved, we will look back and think, why did we think it would be so hard?”
  • They are going to release more information about xAI’s first release in a couple of weeks.
  • When asked by @krassenstein, Elon Musk said that xAI is being built as competition to OpenAI.
  • The goal is to make xAI a useful tool for consumers and businesses and there is value in having multiple entities and competition. Elon said that competition makes companies honest, and he’s in favor of competition.
  • Musk said every organization doing AI has illegally used Twitter’s data for training. Limits had to be put on Twitter because it was being scraped like crazy; multiple entities were trying to scrape every tweet ever made in a span of days. xAI will use tweets for training as well.
  • At some point you run out of human-created data, so eventually AI will have to generate its own content and self-assess that content.
  • Answering a question from @alx, Musk said there is a significant danger in training AI to be politically correct or training it not to say what it thinks is true, so at xAI they will let the AI say what it believes to be true, and Musk believes it will result in some criticism.
  • Musk said it’s very dangerous to grow an AI and teach it to lie.
  • Musk said he would accept a meeting with Kamala Harris if invited. He said he’s not sure if Harris is the best person to be the AI czar, but agrees we need regulatory oversight.
  • Musk believes that China too will have AI regulation. He said the CCP doesn’t want to find themselves subservient to a digital super intelligence.
  • Musk believes we will have a voltage transformer shortage in a year and an electricity shortage in two years.
  • xAI will work with Tesla in multiple ways and it will be of mutual benefit. Tesla’s self-driving capabilities will be enhanced because of xAI.
  • According to Musk, the proper way to go about AI regulation is to start with insight. If a proposed rule is agreed upon by all or most parties, then that rule should be adopted. It should not slow things down for a great amount of time; a little bit of slowing down is OK if it’s for safety.
  • Musk thinks that Ray Kurzweil’s prediction of AGI by 2029 is pretty accurate, give or take a year.

Here is the Web site for xAI: https://x.ai/

This brings up an idea for a business that might be called “Answer The Question Already” (a minimal sketch of the flow follows the list):

Patron places money in an escrow account.
Patron posts a question to be posed to one or more persons.
Patron associates a dollar amount with each (question, person) pair, never exceeding the escrowed amount.
The first person to answer a question posed to them (even if flippant*, like the answer Yann LeCun gave me) collects the total of all money from all patrons associated with that (question, person) pair.
The escrow accounts are reduced accordingly.
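
To make the escrow mechanics concrete, here is a minimal Python sketch of that flow. The class and method names, the single-currency assumption, and the choice to cap any over-pledge at the patron’s remaining escrow at payout time are my own illustrative assumptions, not part of the proposal itself.

```python
# Minimal, illustrative sketch of the "Answer The Question Already" flow.
# Names and details (single currency, capping over-pledges at payout time)
# are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import Dict, Tuple


@dataclass
class Patron:
    name: str
    escrow: float = 0.0  # funds the patron has placed in escrow


@dataclass
class Bounty:
    question: str
    respondent: str  # the person the question is posed to
    pledges: Dict[str, float] = field(default_factory=dict)  # patron name -> amount


class AnswerTheQuestionAlready:
    def __init__(self) -> None:
        self.patrons: Dict[str, Patron] = {}
        self.bounties: Dict[Tuple[str, str], Bounty] = {}  # keyed by (question, person)

    def deposit(self, patron: str, amount: float) -> None:
        """Patron places money in an escrow account."""
        self.patrons.setdefault(patron, Patron(patron)).escrow += amount

    def pledge(self, patron: str, question: str, respondent: str, amount: float) -> None:
        """Patron associates a dollar amount with a (question, person) pair,
        never exceeding the escrowed amount."""
        if amount > self.patrons[patron].escrow:
            raise ValueError("pledge exceeds escrowed funds")
        key = (question, respondent)
        bounty = self.bounties.setdefault(key, Bounty(question, respondent))
        bounty.pledges[patron] = bounty.pledges.get(patron, 0.0) + amount

    def answer(self, question: str, respondent: str) -> float:
        """The first answer (flippant or not) collects the pooled pledges;
        each patron's escrow is reduced accordingly."""
        bounty = self.bounties.pop((question, respondent), None)
        if bounty is None:
            return 0.0
        payout = 0.0
        for patron, amount in bounty.pledges.items():
            paid = min(amount, self.patrons[patron].escrow)  # cap at remaining escrow
            self.patrons[patron].escrow -= paid
            payout += paid
        return payout
```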

For instance, I asked xAI (a legal person) a concise and very clear question that is at the very heart of science as it relates to empirical data, and hence gets to the heart of their mission, which Musk’s introductory Twitter Spaces didn’t remotely approach.

*The reason even flippant answers are “acceptable” is to flush out those who hold the world in insular contempt, usually due to their “Stature”, which clearly needs to be taken down a few notches in their legacy.

Elon Musk’s experience with xAI’s Grok during the Lex Fridman interview is exactly the kind of thing I’d have predicted from the failure to accord due respect to the Algorithmic Information Criterion for model selection:

PS: Given Musk’s professed commitment to “truth” via xAI, I did apply for a job with xAI and, unsurprisingly, was rejected. By “unsurprisingly” I mean not simply that a long-unemployed, aging, uncredentialed “extremist” who has been critical of Musk, like myself, would be viewed as a loose cannon bringing little to the table to compensate, but that the one area in which I have clear priority regarding algorithmic “truth” (the AIC) has now been denied by Shane Legg, in a recent interview, as a valid basis for model selection, even though Legg was perhaps the first among Marcus Hutter’s PhD students to promote the AIC when he cofounded DeepMind. My opinion is that Legg’s cofounders subjected him to what I’ve called Extended Genetic Dominance, making him, in effect, lose his mind and succumb to what Marcus Hutter has called “the philosophical nuisance”: the claim that the choice of UTM in Algorithmic Information renders it ill-founded. This is why I wrote my blog post describing this fallacy (really a kind of mental illness) and defining NiNOR Complexity.

Legg’s denial has made its way into John Carmack’s AGI startup as well, with Carmack mistaking Legg’s denial for Hutter’s.

Denial of Ockham’s Razor* really is a kind of mental illness that is critical to the mindless swarms that mob everyone – including most prominently and recently the anti-Zionist swarm that I predicted decades ago would ultimately attack Jews if Jews didn’t purge their ranks.

*Denial frequently, if not usually, takes the form of conflating Ockham’s Chainsaw Massacre (lossy compression) with Ockham’s Razor (lossless compression). Perhaps worse than that is the very powerful influence that Stephen Wolfram’s notion of “computational irreducibility,” speciously applied in practice, has had in promoting denial of Ockham’s Razor. Wolfram has done more than his share of promoting this specious application by recklessly and, IMHO, egotistically laying claim to a “discovery” that is obvious to the most casual observer. The reductio ad absurdum of this notion in practice is the idea that in order to predict the weather, one must simulate the collisions of air molecules. Wolfram’s enormous ego didn’t need this de facto attack on science; his accomplishments in, and material rewards for, Mathematica should be more than adequate.
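
For what it’s worth, the “lossless compression” reading of the Razor can be made concrete with a toy two-part-code comparison. The Python sketch below is only illustrative and is emphatically not NiNOR Complexity: zlib is a crude stand-in for a universal code, the candidate models are polynomial fits, x is assumed known to the decoder, and the data are quantized so the description is exact (lossless) rather than approximate (lossy).

```python
# Toy two-part-code (lossless) model selection, in the spirit of the
# Algorithmic Information Criterion discussed above. Illustrative only:
# zlib is a crude stand-in for a universal code, and the "models" are
# polynomial fits; this is NOT the NiNOR Complexity definition.
import zlib
import numpy as np


def code_length_bits(payload: bytes) -> int:
    """Crude proxy for description length: zlib-compressed size in bits."""
    return 8 * len(zlib.compress(payload, 9))


def two_part_code(degree: int, x: np.ndarray, y_milli: np.ndarray) -> int:
    """len(model) + len(data | model), with everything quantized so the
    original y_milli is exactly recoverable from the two parts (x assumed known)."""
    coeffs = np.round(np.polyfit(x, y_milli / 1000.0, degree), 6)  # encoded model
    yhat_milli = np.rint(np.polyval(coeffs, x) * 1000).astype(np.int64)
    residual = y_milli - yhat_milli  # exact integer residuals
    model_part = code_length_bits(coeffs.tobytes())
    data_part = code_length_bits(residual.tobytes())
    return model_part + data_part


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 500)
    # Data generated by a simple linear law plus noise, stored in milli-units.
    y_milli = np.rint((2.0 * x + 0.5 + rng.normal(0, 0.05, x.size)) * 1000).astype(np.int64)

    # Ockham's Razor as lossless compression: compare total description lengths.
    for degree in (1, 9):
        print(f"degree {degree}: {two_part_code(degree, x, y_milli)} bits")
```

The only point is that, under a lossless two-part code, extra model complexity has to pay for itself by shortening the residual code, which is precisely the distinction the footnote draws against lossy “Chainsaw Massacre” compression.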

Revenge lawfare, a new dissident playbook?

Musk refiles with RICO claims:

https://www.courtlistener.com/docket/69013420/musk-v-altman/

The RICO predicate seems to be wire fraud alone. Federal civil RICO allows claiming treble and punitive damages plus attorneys’ fees but is notoriously difficult to win.

The contract and fraud claims are pretty solid, though, and the relief requested is bigger: putting all of OpenAI’s IP into a constructive trust and nullifying all of Microsoft’s interests in OpenAI, its associated for-profit shells, and all their IP and licenses.

Amended complaint adds Microsoft and CA attorney general as parties:
