I asked GPT-3 for advice - you be the judge of the outcome. For some reason, it kept mum about the “to don’t” list…
Brainstorm ideas about a mischievous Artificial Intelligence offering productivity tips for humans
Use your AI to create a daily or weekly productivity schedule for yourself, and stick to it!
Automate as much of your work as possible using your AI’s capabilities – this will free up time for you to focus on more important tasks.
Delegate some of your less important tasks to your AI – it can handle them just as well as you can, and it frees up your time for more important things.
Keep track of your progress with regular check-ins with your AI – this will help you stay on track and motivated.
Take advantage of your AI’s ability to learn and improve over time – as it gets to know you better, it will be able to offer more and more personalized productivity tips.
I was rather disappointed that Marcus Hutter decided to change the rules of his prize when he expanded it by a factor of 10 (from the enwik8 corpus to enwik9, and from a €50k to a €500k purse), because the rule change counts the size of the compressor as well as the size of the executable archive of Wikipedia. Solomonoff’s proof of Kolmogorov Complexity as the best of all possible regularizations of loss functions is not parameterized by the size of the compressor. I understand why Marcus did it, and his purpose in the prize is different from mine. His: automate induction. Mine: demonstrate the power of objective prizes to squeeze out and expose model bias in AIs, with an objective definition of “bias”. But I did register my objection. The confusion over how “intelligent” these large language models are exemplifies my reason for objecting.
Arguments about how “intelligent” these large language models are depend on subjective criteria – and I do include perplexity as a subjective criterion, for reasons the Hutter Prize FAQ explains.
The only objective measure of intelligence – in the sense of scientific reasoning toward a predictive world model – that I know of is the ability to losslessly compress the observational data. (Hence my Skype signature: “Intelligence is the ability to simplify.”) That’s how one gains scientific traction with Kolmogorov Complexity, hence Solomonoff’s proof.
Even though enwik9 (the Hutter Prize corpus) was undoubtedly included in the training data of all the large language models, incorporating those models into lossless compressors of enwik9, and measuring the size of the resulting executable archives of enwik9, would be a far better* measure than perplexity or other even more subjective measures of the intelligence of those models.
*Up to the point that people start gaming their models to the enwik9 benchmark corpus.
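To make the proposed measure concrete, here is a minimal sketch of Hutter-Prize-style scoring: the score of a compressor is the size of the archive it produces plus the size of the program needed to reproduce the data from it. The one-line decompressor “stubs” are illustrative stand-ins for a real self-extracting archive, and the corpus is a repetitive placeholder, not enwik9.

```python
import lzma
import zlib

# Placeholder corpus -- in the real contest this would be enwik9.
corpus = b"Intelligence is the ability to simplify. " * 2000

def score(archive: bytes, decompressor_stub: bytes) -> int:
    # Total size that must be shipped: compressed data plus decompressor.
    return len(archive) + len(decompressor_stub)

zlib_archive = zlib.compress(corpus, 9)
lzma_archive = lzma.compress(corpus)

# The measure only counts if decompression is lossless:
assert zlib.decompress(zlib_archive) == corpus
assert lzma.decompress(lzma_archive) == corpus

# Hypothetical one-line decompressors, standing in for compressor size:
zlib_stub = b"import sys,zlib;sys.stdout.buffer.write(zlib.decompress(sys.stdin.buffer.read()))"
lzma_stub = b"import sys,lzma;sys.stdout.buffer.write(lzma.decompress(sys.stdin.buffer.read()))"

print(score(zlib_archive, zlib_stub), score(lzma_archive, lzma_stub))
```

The same scheme would apply to a language model: wrap it in an arithmetic coder, emit a lossless archive of enwik9, and count every byte needed to reconstruct the corpus.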
I wrote a rant in the talk page for the Wikipedia article titled “Solomonoff’s theory of inductive inference”. I hate Wikipedia edit wars in part because I’ve not been put through the psychological warfare training schools that many Wikieditors have – so it seems such a waste of my time. However, Wikipedia is the place people go to for topics that should not be all that controversial – and Minimum Description Length is one such topic. Solomonoff’s proof has been around for over a half century now – yet I’ve been accused of being a mere “autistic savant” for seeking entrepreneurs who are aware that its primary importance is not for machine learning/AI but for science itself. I mean imagine the competitive advantage a company could enjoy if it was the only one that understood the importance of experimental controls.
I have had great results using GPT-3 as a sort of search engine for identifying literature, popular press, and academic journal articles related to specific subjects. The results are often better than my school library system and it’s easier to use thanks to the fact that GPT-3 can interpret natural language — no need to mess with Boolean operators. There are limitations, of course. GPT-3 will occasionally suggest an article that does not exist and the model is not “aware” of the most recent work. Nevertheless, I have found GPT-3 a remarkably useful tool for research.
There might be more sophisticated ways of doing this, but you can simply type a statement and then add “provide citations” and GPT-3 will find relevant articles.
First, algorithmic information as a concept has been well defined ever since Kolmogorov Complexity was defined, and it is no more “biased” in terms of outcomes for different groups of people than is arithmetic.
Second, the citations are humorous. The first citation’s title doesn’t exist. The arXiv link exists, but for an irrelevant paper. The second citation exists but it, too, is irrelevant.
It’s hit or miss. If you provide more information, you will sometimes get better results. No, it’s not a replacement for a search engine, but the fact that GPT-3 is able to provide accurate references even some of the time is impressive and occasionally useful.
Agreed – it is, or at least can be, impressive, if not merely seductive in the sense of “Computer Power and Human Reason”. But my examples are really about the fact that virtually the entire machine learning world has ignored the essential value of recurrence in unsupervised (scientific) creation of computational models.
To whip a dead horse, I provided more information. Judge for yourself whether the results are better:
The smaller the executable archive of a dataset is, the closer its length corresponds to the algorithmic information of the dataset. This means lossless compression of a dataset can be viewed as a form of unsupervised model regularization leading to generalization which, in the limit of the dataset’s Kolmogorov complexity, leads to the optimal model. This is an information criterion for model selection that is not used by the social sciences even though it is generally applicable, superior to other model selection criteria and utilizes resources, such as computation and data, that have exponentially increased during the last half century. The reason for this…
…is that the social sciences have a bias against the use of computational models. This is because the social sciences are dominated by the philosophy of empiricism and the philosophy of empiricism is incompatible with the philosophy of computation. The philosophy of empiricism is based on the idea that the only way to understand the world is to observe it. The philosophy of computation is based on the idea that the only way to understand the world is to build models of it. The philosophy of empiricism is based on the idea that the only way to understand the world is to observe it. The philosophy of computation is based on the idea that the only way to understand the world is to build models of it. [GPT-3 goes on repeating those two sentences verbatim for several more pages before cutting off mid-sentence.]
Ever since I started fooling with GPT-2, I have been calling language model synthesis programs “bullshit generators”. Here is how I explained it in a comment here on 2022-06-15,
I have previously called GPT-3 a “bullshit generator” because when you ask it a question that requires some technical knowledge, it usually answers like the guy you knew in college who had read only a couple of popular articles but would pronounce so authoritatively on the topic he would fool everybody except those who actually understood it in detail. That’s what you get when, as GPT-3 does, you’re matching text to a large body of training material without having any understanding of what it means or the ability to rate its quality and reliability.
If you think about what a human master of bullshit does, it’s remarkably similar: based upon a prompt of the topic, assemble a stream of phrases which sound like what an authority would say, perhaps citing other authorities (made-up or attributed without having ever read them). GPT-3, having been trained from a huge corpus of text and using a language model that attempts to mimic text from the corpus, does an excellent job of this. This kind of bullshit can be very persuasive to those who aren’t “read in” to the details being discussed. It can also be very funny to one versed in the subject matter.
Natural science is about prediction (of what will be observed based on what is observed). This entails time’s arrow. Statistics does not essentially entail time. Dynamics essentially entails time. Dynamical systems are governed by recurrence laws relating the present to the future: What is observed to what will be observed. Natural science can thereby be thought of as the discipline of dynamical systems identification.
Computation is a dynamical system, hence computational models are inherently superior to statistical models when it comes to modeling the natural world.
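As a toy illustration of “natural science as dynamical systems identification”: given observations generated by a recurrence law, the scientific task is to recover that law from the data. The logistic map and the brute-force grid search below are my own choice of minimal example, not anything from the discussion above.

```python
# Observations generated by the recurrence x[t+1] = r * x[t] * (1 - x[t]).
def logistic_series(r: float, x0: float, n: int) -> list:
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

observed = logistic_series(3.7, 0.2, 50)  # "nature", with hidden r = 3.7

def fit_error(r: float) -> float:
    # How badly a candidate recurrence law reproduces the observations.
    predicted = logistic_series(r, observed[0], len(observed))
    return sum((p - o) ** 2 for p, o in zip(predicted, observed))

# One-parameter "system identification": pick the r whose recurrence
# best predicts what was observed.
candidates = [i / 1000.0 for i in range(2000, 4000)]
best_r = min(candidates, key=fit_error)
print(best_r)  # recovers 3.7
```

The point of the toy: the model is a *recurrence relation* relating present to future observations, not a static statistical summary of the data.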
The machine learning world has, due to the hysterical gold rush born of Moore’s Law, lost this essential perspective on computation’s relationship to prediction. Ray Solomonoff proved, way back in the 1960s at the dawn of Moore’s Law’s explosive growth in computational power, that if the assumptions of natural science are correct – i.e., if the universe, including our ability to observe it, is lawful (a recurrence relation/algorithm) – then the shorter the recurrence relation/algorithm that outputs all our prior observations, the more likely that recurrence relation/algorithm is to match the universe’s laws.
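For readers who want the formal statement, Solomonoff’s result is usually expressed via algorithmic probability: the prior weight of an observation string is the total weight of all programs that generate it, with each program discounted exponentially in its length. (The notation below is the standard textbook form, not a quotation from Solomonoff’s papers.)

```latex
% Solomonoff's algorithmic probability: U is a universal prefix machine,
% and the sum ranges over all programs p whose output begins with x.
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}
% Because each program is weighted by 2^{-\ell(p)}, the shortest programs
% (recurrence relations) that reproduce the observations dominate the
% prior -- Occam's razor as a theorem rather than a heuristic.
```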
The excuses given not only by machine learning as a school of thought, but by natural scientists – most particularly social scientists – for ignoring lossless compression (of the widest range of observations available) as the judging criterion for deciding which macro-model is to be considered “standard”, remind me of a kid saying “The dog ate my homework.” But, due to the hysteria in the machine learning world and in the social sciences, they are almost never even asked to do their homework, let alone present it.
What’s more, machine learning has somehow wangled an exception to the traditional requirement on homework and examination papers, “Show your work”. Not only don’t they show their work, they boast of having “175 billion parameters” (GPT-3), where “parameters” are low-precision floating-point “weights” in vast matrices in which nobody is able to identify the direct source of each individual setting or its influence on the output of the network, other than to say that it’s the result of the training set. When one regards the often stunning results from GPT-3, DALL-E 2, and other systems, Sidney Harris’s 1977 cartoon comes to mind.
When you have a “model” of a physical system based upon training from a large number of observations of the behaviour of similar systems, it isn’t clear you really “understand” it in any sense. In particular, you have no clear-cut definition of the rigid rules (conservation laws, etc.) it must obey which, if violated, indicate your model or understanding of the system is incomplete.
This sounds a lot like Occam’s razor. Are you suggesting Solomonoff proved that Occam’s razor yields the most truthful propositions? Or put another way, that simple laws, if accurate about the past, are more likely to make accurate predictions about the future than complex laws that are also accurate about the past?
I agree intuitively. But to play devil’s advocate, why is this important if the results of the models are so good? As GPT-3 continues to improve, the bullshit it generates will become indistinguishable from expert opinion. What would be different about a natural language model that integrates your proposals?
Yes. Solomonoff proved that Occam’s razor isn’t just a nice rule of thumb – it explains Wigner’s observation of the unreasonable effectiveness of mathematics in the natural sciences. Specifically, that by assuming our observations of the universe are generated by an algorithm (however “unreasonable” we may be in making any assumptions), Occam’s razor is the inevitable consequence.
It bears mentioning that the main way, of the several ways in which the “The dog ate my homework!” excuse appears, is by conflating model generation with model selection in the mushminds populating machine learning and social science circles. On the surface, this mushmindedness may be attributed to mere hysteria or, to turn a phrase, to mere stupidity, as in “Never attribute to malice that which can be attributed to stupidity.” However, my version of that aphorism is:
“Never attribute to mere stupidity that which can be attributed to unenlightened self-interest.”
In the case of social science, the unenlightened self-interest in avoiding disciplined model selection is obvious. The entire edifice of sophistry that is used to justify authority structures would collapse well in advance of those “authorities” devouring the last remnants of humanity.
In the case of machine learning, there is an obvious overlap with social science in the hysteria about “bias” in the AIs, which reduces to AIs not being adequately socialized and therefore likely to blurt out the truth on occasion. It is obvious no one really cares about “bias” in a factual or scientific sense, for if they did they’d recognize that there is no bias in applying lossless compression as a model selection criterion. If they have reason to believe the dataset is “biased” in some manner, they can be challenged to present the data that defines “bias”. This, however, corners them, because an optimal model will do more than merely accept whatever is said by anyone. It will find latent identities that assert bullshit that doesn’t comport with the rest of reality – reality such as the genome-wide association study – and then start saying things about those identities like, “The Ford Foundation is riddled with quasi-theocrats that – assuming you want to save the planet from destruction – should be neutralized ASAP, starting with their ‘algorithmic bias experts’.”
There is another, less malign, level of unenlightened self-interest at work with machine learning: By valorizing “parameter count” rather than Occam’s Razor, they keep the machine learning world barking up the wrong tree and therefore dependent on huge capital resources – which only they possess.
If I may jump in to add my perception of what is really going on here:
There are a couple of factors at play here, neither of which is widely recognized but both of which are coming to be recognized due to the economics of deploying these models:
The actual number of parameters in the finished models is far lower than the number of parameters being bragged about. What they are really bragging about may be compared to going from gzip to bzip2: first you gunzip, then you bzip2. The intermediate stage is a higher-dimensional “model” that permits you to then collapse down with a superior lossless compression algorithm. Another way to think about this is going from a pixel video stream to a voxel video stream and then collapsing down to a geometric model of the environment through which the camera is traveling. The intermediate representation is far larger than the data source, let alone the final archive. Now, this isn’t a perfect analogy, because the corpus used by large language models is obviously much larger than even the highest parameter count; however, there does appear to be a “double dip” in error or loss function during training that corresponds to the model learning that it must go to a higher-dimensional representation than the one to which it is initially attracted.
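A literal, runnable version of the gzip-to-bzip2 analogy: to improve on an existing archive you first expand back to the raw (larger, higher-dimensional) representation, then recompress with a different coder. The sample text is an arbitrary repetitive placeholder.

```python
import bz2
import zlib

raw = (b"The philosophy of empiricism is based on observation. "
       b"The philosophy of computation is based on models. ") * 300

stage1 = zlib.compress(raw, 9)       # the initial archive (gzip stand-in)
expanded = zlib.decompress(stage1)   # intermediate: back to full size
stage2 = bz2.compress(expanded, 9)   # recompress with a different algorithm

assert expanded == raw               # the detour through the larger
                                     # representation is lossless
assert len(expanded) > len(stage1)   # intermediate is far larger than
                                     # either archive
```

The intermediate stage is the expensive part, which is the analogy’s point: the bragged-about parameter count corresponds to the expanded representation, not to the final distilled model.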
There is a limited degree of quasi-recurrence in the transformer models – or indeed in any deep learning model – that enables the aforementioned “double dip”. By quasi-recurrence I mean that if you have a statistical model consisting of a fixed number of “layers”, each “layer” can end up serving as a portion of a neural network whose depth of recurrence is limited to the number of layers.*
So what is happening as they deploy these monsters is an attempt to distill the parameters down to a smaller number so that the computation costs of responding to inputs are lower. They’re being dragged, kicking and screaming, into applying Occam’s razor.
Another way in which these models (or any statistical models) cheat is to rely on the fact that many things can be “predicted” through interpolation rather than extrapolation. Although this isn’t “cheating” from the perspective of some aspects of science, you get into real trouble if you try to extrapolate based on these models. This is, I believe, related to what is called in the literature the “out of distribution” problem.
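The interpolation/extrapolation distinction can be shown with a toy of my own devising: both a straight line and a high-degree interpolating polynomial fit some noisy, roughly linear training points, but far outside the training range the high-degree model goes wild. The data and degrees below are made up for illustration.

```python
xs = [0, 1, 2, 3, 4, 5]
ys = [1.0, 3.1, 4.9, 7.2, 9.0, 11.1]  # roughly y = 2x + 1, plus noise

def linear_fit(xs, ys):
    # Ordinary least-squares line through the points.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def lagrange(xs, ys, x):
    # Degree-5 polynomial passing exactly through every training point.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += yi * w
    return total

slope, intercept = linear_fit(xs, ys)
lin_pred = slope * 20 + intercept   # extrapolate to x = 20; stays near 41
poly_pred = lagrange(xs, ys, 20)    # the interpolant lands nowhere near 41
print(lin_pred, poly_pred)
```

Inside the training range both models look equally good; the difference only shows up when you ask them to extrapolate, which is the “out of distribution” problem in miniature.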
*This is made obvious by the fact that in order to carry on a conversation with one of these things, you must take all prior text in the present conversation and then append your next statement to it and submit the entire mess for the statistical model to respond to. It then tries to “embed” the entire mess down to a fixed sized vector containing the “meaning” of the entire conversation up to that point in time, so it can respond as though it “understands” the conversation. The human neocortex “embeds” a conversational context to a much larger “vector” which exists in each participants head so that you don’t have to keep repeating everything in order to simulate recurrence.
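The resubmit-everything scheme described in the footnote can be sketched in a few lines. `MODEL_CONTEXT` and the word-level truncation are illustrative assumptions; real systems count tokens and use far larger windows.

```python
MODEL_CONTEXT = 50  # max words the hypothetical model can "embed"

def submit(history: list, new_utterance: str) -> str:
    # Each turn, the ENTIRE prior conversation is re-concatenated and
    # resubmitted as one blob -- recurrence simulated by repetition.
    history.append(new_utterance)
    words = " ".join(history).split()
    if len(words) > MODEL_CONTEXT:
        words = words[-MODEL_CONTEXT:]  # older context silently falls off
    return " ".join(words)

history = []
prompt1 = submit(history, "Tell me about Solomonoff induction.")
prompt2 = submit(history, "How does that relate to Occam's razor?")
# Both turns are still present in the second prompt -- until the
# fixed-size window overflows and the earliest context is lost.
```

The truncation step is the fixed-size “vector” of the footnote: whatever doesn’t fit in the window simply ceases to exist for the model, whereas a genuinely recurrent system would carry state forward.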
Being totally unfamiliar with Solomonoff Induction and Kolmogorov complexity, but intrigued by the notion that Occam’s razor might somehow be provable, I took to the internet to learn more about these topics. One of the first and most easily digestible things I found was this Lex Fridman interview with Marcus Hutter:
The topics include:
0:00 - Introduction
3:32 - Universe as a computer
5:48 - Occam’s razor
9:26 - Solomonoff induction
15:05 - Kolmogorov complexity
20:06 - Cellular automata
26:03 - What is intelligence?
35:26 - AIXI - Universal Artificial Intelligence
1:05:24 - Where do rewards come from?
1:12:14 - Reward function for human existence
1:13:32 - Bounded rationality
1:16:07 - Approximation in AIXI
1:18:01 - Godel machines
1:21:51 - Consciousness
1:27:15 - AGI community
1:32:36 - Book recommendations
1:36:07 - Two moments to relive (past and future)
As someone who knows next to nothing about how artificial intelligence actually works, I found this discussion an informative primer. I expect it will be my first step down a very deep rabbit hole.