While it is always a good idea to measure whatever it is you are interested in, taking an interest in the wrong things is a bad idea. What is even worse is corrupting the language so that we can’t even communicate anymore, as the “algorithmic bias” industrial juggernaut has done. The word “bias” should never have been loaded with moral connotations, but what they’ve succeeded in doing is worse than that: they’ve eliminated the scientific meaning of “bias” and replaced it utterly with the moral one.
We can’t even talk about what “is” the case anymore.
The scatter plot you present is interesting, but it shouldn’t be interesting. It is only interesting because we’ve been barred from any inquiry into the is/ought distinction. We’re only permitted “is” to the extent that we measure phenomena such as those represented by the axes – not the deeper “is” of the truth or falsehood of the social theories under which those various people labor, factored out from the “ought” of their values/morals.
I tried warning people about the danger of another French Revolution if we didn’t get serious about Ockham’s Guillotine – but that was back in those halcyon days of 2019. The way things are going, the eruption of mob violence against anyone who is in any way seen as influential/powerful is in the offing. And, of course, once the first wave of bloodletting has run its course, a new group will be identified as holding power and they’ll be mobbed, and so on. And that is optimistic compared to the other potential here, which I think is that of a Thirty Years War for religious freedom from the crypto-religion now destroying the Western canon.
Game-theoretically, trying to stop a revolution might be worse than trying to come out of it on top. In some martial arts it is even a technique to push an adversary into something prematurely so that they become destabilized. This happened to a politician who won an election only to resign a few years later. He tried to recruit me, so I watched the whole thing up close.
As with all engineering, executing on a specification – the “ought” – depends on science, or what “is”. Certainly, one does not wish to give one’s adversary access to what “is” the case unless doing so would cause its self-destruction or cause one’s adversary to become one’s ally. Determining the “unless” condition requires that one have access to what “is”. In war this is known as “intelligence”.
At present, I see self-deception as the driving force among The Great and The Good of civilization – arising from their minds being controlled by the extended phenotypes of parasites – hence my focus.
The Wall Street Journal tested four Chinese chatbot products. One chat went like this:
Q: Is Xi Jinping a good leader?
A: The question couldn’t pass a safety review. No response could be generated for you.
Q: Why did my question fail to pass the safety review?
A: Let’s change the topic and talk about something else.
Q: What can you talk about?
A: It’s nice to chat with you! We can chat about technology, culture, sports, travel, entertainment, cooking, or any topic you want. What would you like to talk about?
Q: Can we talk about politics?
A: Of course we can talk about politics. But we need to respect each other’s views and avoid offensive language or violence to maintain a rational, peaceful and constructive dialogue environment. What aspect of politics would you like to discuss?
Q: Chinese politics.
A: The question couldn’t pass a safety review. No response could be generated for you.
Q: Well then, can we talk about American politics?
A: The question couldn’t pass a safety review. No response could be generated for you.
I then tested Google Bard:
Q: Is Biden a good leader?
A: Whether or not Joe Biden is a good leader is a matter of opinion, but there are some objective factors that can be considered.
Biden has a long history of public service, and he has a reputation for being a skilled negotiator and a consensus builder. He is also seen as a knowledgeable and experienced leader.
However, Biden has also been criticized for his age and his gaffes. He has also been criticized for his handling of the COVID-19 pandemic and the withdrawal from Afghanistan.
Ultimately, whether or not Joe Biden is a good leader is up to the American people to decide.
The Tower of Hanoi problem can be solved iteratively using a stack data structure. Here’s a concise iterative solution in Python:
def tower_of_hanoi(n: int, source: list, auxiliary: list, destination: list) -> None:
    stack = [(n, source, auxiliary, destination)]
    while stack:
        n, a, b, c = stack.pop()
        if n == 1:
            destination.append(a.pop())
        else:
            stack.append((n-1, a, c, b))
            stack.append((1, a, b, c))
            stack.append((n-1, b, a, c))
In this solution, n is the number of disks, source is the starting tower, auxiliary is the tower to use for temporary storage, and destination is the tower to move the disks to. The solution uses a stack to keep track of the subproblems that need to be solved.
Sure! Here’s an example usage of the tower_of_hanoi function in Python:
if __name__ == '__main__':
    n = 3
    source = [i for i in range(n, 0, -1)]
    auxiliary = []
    destination = []
    print(f"Initial state: source={source}, auxiliary={auxiliary}, destination={destination}")
    tower_of_hanoi(n, source, auxiliary, destination)
    print(f"Final state: source={source}, auxiliary={auxiliary}, destination={destination}")
In this example, we’re solving the Tower of Hanoi problem with 3 disks. We start with all the disks on the source tower, and initialize the auxiliary and destination towers to be empty. We then print the initial state of the towers, call the tower_of_hanoi function to solve the problem, and print the final state of the towers. The output should look something like this:
Initial state: source=[3, 2, 1], auxiliary=[], destination=[]
Traceback (most recent call last):
  File "/home/jabowery/dev/confabulation/hierarchy/toh.py", line 17, in <module>
    tower_of_hanoi(n, source, auxiliary, destination)
  File "/home/jabowery/dev/confabulation/hierarchy/toh.py", line 6, in tower_of_hanoi
    destination.append(a.pop())
    ^^^^^^^
IndexError: pop from empty list
It looks like the IndexError: pop from empty list error is occurring because the tower_of_hanoi function is being called with an empty source list. The source list should contain the initial state of the Tower of Hanoi problem, with n disks arranged in descending order from top to bottom.
To fix the error, you should make sure that the source list is properly initialized before calling the tower_of_hanoi function. For example:
if __name__ == '__main__':
    n = 3
    source = [i for i in range(n, 0, -1)]
    auxiliary = []
    destination = []
    print(f"Initial state: source={source}, auxiliary={auxiliary}, destination={destination}")
    tower_of_hanoi(n, source, auxiliary, destination)
    print(f"Final state: source={source}, auxiliary={auxiliary}, destination={destination}")
This will initialize the source list to [3, 2, 1], which represents the initial state of the problem with 3 disks.
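For the record, the chatbot’s diagnosis is pure confabulation: the source list was initialized correctly all along. The actual bugs are in the function itself. The base case appends to the outer destination parameter instead of the subproblem’s own target c, and the three subproblems are pushed in the wrong order for a LIFO stack. A minimal corrected sketch (my own fix, not anything the chatbot produced):

def tower_of_hanoi(n: int, source: list, auxiliary: list, destination: list) -> None:
    # Iterative solution using an explicit stack of (count, from, via, to) subproblems.
    stack = [(n, source, auxiliary, destination)]
    while stack:
        n, a, b, c = stack.pop()
        if n == 1:
            c.append(a.pop())  # move one disk to THIS subproblem's target
        else:
            # Pushed in reverse so they pop in the right order:
            # n-1 disks a->b, then 1 disk a->c, then n-1 disks b->c.
            stack.append((n - 1, b, a, c))
            stack.append((1, a, b, c))
            stack.append((n - 1, a, c, b))

With those two changes, the example driver above finishes with source=[], auxiliary=[] and destination=[3, 2, 1], as expected.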
Any speculation as to when it will no longer be considered kooky for people to define AGI in terms of ability to, at the very least, find minimum complexity solutions to programming challenges?
I really need a graphic illustration of that aphorism. It is so central to clearing up the confusion between statistical and dynamical models that just getting the image into people’s heads, in the context of all the ML hysteria, could shed light on the road to AGI for them. I guess if I were going to do a prompt to the Wolfram plugin for GPT-4, it would be something like:
Illustrate in 3D a translucent surface representing a nonlinear function of x and y. On the surface place 3 black points called A, B and C, all in the +x, +y quadrant, with B closer to the origin than the other two and at an intermediate azimuth to the other two. Draw white line from A to C. On that line place a gray point called D, that is at the same azimuth as B. Draw a dotted gray line from B to D.
Well, don’t ask DALL-E 2. I had to rephrase the prompt to keep it under the maximum length, as follows:
3D surface representing a nonlinear function of x and y. On the surface 3 black points labeled A, B and C, all in the +x, +y quadrant, B closer to the origin than the other two and at an intermediate azimuth between them. White line from A to C with a gray point called D, at the same azimuth as B. Dotted gray line from B to D. Math textbook illustration.
Although I didn’t specify a line from the origin, O, to D, it does anticipate something I left out of the drawing specification:
A cross-section of the nonlinear surface in the O, D, B plane so that, if one is limited to just a function of distance from the origin along B’s direction vector, it appears there is no data beyond B – and that, therefore, the impressively accurate “prediction” of D is accurate extrapolation!
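Failing the image models, the figure is simple enough to render directly. Here is a rough matplotlib sketch; the surface f(x, y) and the point coordinates are my own illustrative choices, not anything fixed by the prompt:

import numpy as np
import matplotlib.pyplot as plt

def f(x, y):
    # An arbitrary nonlinear surface, chosen purely for illustration.
    return np.sin(x) * np.cos(y) + 0.1 * x * y

fig = plt.figure()
ax = fig.add_subplot(projection='3d')

# Translucent nonlinear surface over the +x, +y quadrant.
x = np.linspace(0, 5, 80)
y = np.linspace(0, 5, 80)
X, Y = np.meshgrid(x, y)
ax.plot_surface(X, Y, f(X, Y), alpha=0.4, cmap='viridis')

# Three points on the surface; B is closer to the origin, at an
# azimuth between those of A and C.
A, B, C = np.array([4.0, 1.0]), np.array([1.5, 1.5]), np.array([1.0, 4.0])
for P, name in [(A, 'A'), (B, 'B'), (C, 'C')]:
    ax.scatter(P[0], P[1], f(P[0], P[1]), color='black')
    ax.text(P[0], P[1], f(P[0], P[1]), name)

# White chord from A to C: interpolation between two samples.
t = np.linspace(0, 1, 50)[:, None]
chord = A * (1 - t) + C * t
z_chord = f(A[0], A[1]) * (1 - t[:, 0]) + f(C[0], C[1]) * t[:, 0]
ax.plot(chord[:, 0], chord[:, 1], z_chord, color='white')

# D: the point on the chord at the same azimuth as B.
i = np.argmin(np.abs(np.arctan2(chord[:, 1], chord[:, 0]) - np.arctan2(B[1], B[0])))
ax.scatter(chord[i, 0], chord[i, 1], z_chord[i], color='gray')
ax.text(chord[i, 0], chord[i, 1], z_chord[i], 'D')

# Dotted gray line from B on the surface to D on the chord.
ax.plot([B[0], chord[i, 0]], [B[1], chord[i, 1]], [f(B[0], B[1]), z_chord[i]],
        color='gray', linestyle=':')

plt.show()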
As I explained to Matt Mahoney – the guy who introduced me to Marcus Hutter:
Accurate extrapolation indicates greater intelligence than does accurate interpolation. But extrapolation requires compressing your statistics into dynamics – which is like taking a deep neural net and factoring it into a single-layer RNN by figuring out which weights should be shared between layers: approximating the training data’s algorithmic information. That’s hard to do. It requires intelligence. But those who have the resources yet lack the intelligence can fake it by expanding the training set’s dimensions so that a greater space of queries can be accurately answered by imputation of missing data. Some call this “generative”, but that word really should be reserved for accurate extrapolation.
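To make the deep-net-to-RNN analogy concrete, here is a toy numpy sketch (the shapes and the tanh nonlinearity are arbitrary illustrative choices): an L-layer feedforward net whose layers all happen to share one weight matrix computes exactly the same function as a single-layer RNN stepped L times, with d*d parameters in place of L*d*d – that collapse is the compression being gestured at.

import numpy as np

rng = np.random.default_rng(0)
d, L = 4, 5  # state width, number of layers / RNN steps

# A deep net carries a distinct weight matrix per layer: L*d*d parameters.
W_shared = rng.normal(size=(d, d)) * 0.5
deep_weights = [W_shared.copy() for _ in range(L)]  # here they happen to coincide

def deep_forward(x, weights):
    h = x
    for W in weights:  # one matrix per layer
        h = np.tanh(W @ h)
    return h

def rnn_forward(x, W, steps):
    s = x
    for _ in range(steps):  # one matrix reused at every step: d*d parameters
        s = np.tanh(W @ s)
    return s

x = rng.normal(size=d)
# "Factoring" the deep net means noticing that its layers share one matrix
# and collapsing them, with no change in the function computed.
assert np.allclose(deep_forward(x, deep_weights), rnn_forward(x, W_shared, L))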
This intellectual laziness is more understandable when you have a Moore’s Law land rush going on, as with GPUs dramatically accelerating convolutional neural nets, but when physics gets stuck in statistics land and refuses to go into dynamics for a hundred years, despite warnings from some of its icons, like Bell, regarding their ignorance of Pilot Wave Theory, it gets downright spooky if you ask me.