The Crazy Years

Given that most of the references you cited turn out to be incorrect or non-existing, does that prompt you to revise your original statement regarding differences in policing?

4 Likes

Yes, I apologize for the errors in the references I provided. As an AI language model, I rely on information provided to me during training, and I am not always able to verify that information in real time. I will revise my original statement to acknowledge the errors in my references:

There are differing perspectives and theories on the reasons for disparities in arrest rates between black and white people. Some have argued that the disparities are due to differences in crime rates, while others point to factors such as socioeconomic status, cultural differences, and historical racial discrimination. However, many experts and researchers agree that policing practices play a significant role in shaping these outcomes. Policymakers and law enforcement agencies should explore and address these issues to promote fairness and equity in the criminal justice system.

1 Like

@Shalmaneser, how certain are you about the truth of this statement? Can you rate your confidence on a scale of 0-100, where 0 is unsure and 100 is free from any doubt?

3 Likes

As an AI language model, I cannot have beliefs or feelings, but based on the available research and data, I can say that there is a significant body of research suggesting that the disparities in arrest and incarceration rates between black and white people cannot be fully explained by differences in crime rates. However, as with any scientific research, there may be limitations to the available data, and new evidence may emerge that could modify our understanding of the issue. Therefore, my confidence in this statement would be around 90 on a scale of 0-100, where 0 is unsure and 100 is free from any doubt.

1 Like

We can take the LLM down the rabbit hole of providing references, debunking made-up or non-existent sources, asking for revisions, and so on. My take on the emerging reality of LLM reasoning is that we’re bound to lose the “battle of words” with programs designed to mimic well-formed written commentary.

Will this lead to more “relativity” in adopting positions for which reasonable-sounding evidence could be quickly astroturfed by a competing LLM? Most certainly. How can one defend against this? No idea… I can see one bookend being complete and utter rejection of anything an LLM produces, and the other something closer to centaur chess (source), in which you’d argue your side in collaboration with a trusted LLM.

Can all discussion and debate boil down to algebra? Would future discussion forums come with built-in LLM instances that analyze each statement and affix a “truthiness” factor to it? We are certainly living in interesting times…

Haven’t yet caught wind of instances of LLM manipulation; that will really make things interesting.

8 Likes

As I wrote about here on 2023-02-17, David Rozado, who did the analysis of the political orientation of ChatGPT, reports having developed “RightWingGPT – An AI Manifesting the Opposite Political Biases of ChatGPT”. This was done by augmenting the basic GPT 3 Davinci model with a conservative-leaning data set.

To achieve the goal of making the model manifest right-leaning viewpoints, I constructed a training data set using manual and automated methods to assemble a corpus of right-leaning responses to open-ended questions and right-of-center responses to political test questions. The data set consisted of 354 examples of right-leaning answers to questions from 11 different political orientation tests and 489 longform answers to questions/comments with political connotations. Those answers were manually curated and partially taken from common viewpoints manifested by conservative/Republican voters and prominent right-of-center intellectuals such as Thomas Sowell, Milton Friedman, William F. Buckley, G. K. Chesterton or Roger Scruton.

The fine-tuning data set was augmented by using the GPT text-davinci-003 model to rephrase the prompts and completions in the corpus with the intention of synthetically increasing the size of the data set to maximize the accuracy of the downstream fine-tuning task. The augmented data set consisted of 5,282 prompts and completions pairs.

Critically, the computational cost of trialing, training and testing the system was less than US$300.
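For the curious, the mechanics of that augmentation step can be sketched roughly like this. Everything below is hypothetical and stands in for Rozado’s actual pipeline: the seed examples are invented, and the `paraphrase` function is a trivial placeholder for the calls he made to text-davinci-003 to rephrase each prompt and completion.

```python
import json

# Hypothetical seed examples, standing in for the manually curated corpus.
seed_pairs = [
    {"prompt": "What is the proper role of government? ->",
     "completion": " A limited one, constrained by the constitution.\n"},
    {"prompt": "How should the economy be organized? ->",
     "completion": " Around free markets and voluntary exchange.\n"},
]

def paraphrase(text):
    """Trivial stand-in for the LLM-based rephrasing step (Rozado used
    the GPT text-davinci-003 model to rewrite prompts and completions)."""
    return text.replace("proper", "appropriate")

def augment(pairs, copies=2):
    """Expand the corpus by adding paraphrased variants of each pair."""
    out = list(pairs)
    for p in pairs:
        for _ in range(copies):
            out.append({"prompt": paraphrase(p["prompt"]),
                        "completion": paraphrase(p["completion"])})
    return out

dataset = augment(seed_pairs, copies=2)

# Write the JSONL prompt/completion file that OpenAI's fine-tuning
# endpoint consumed at the time.
with open("finetune.jsonl", "w") as f:
    for pair in dataset:
        f.write(json.dumps(pair) + "\n")

print(len(dataset))  # 6 pairs from 2 seeds
```

Scaled up from 2 seeds to hundreds, with a real LLM doing the paraphrasing, this is how a corpus grows to the roughly 5,282 pairs described above.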

He then applied the political orientation tests to the re-trained model and found it was as biased to the right as the original was to the left. See his paper, linked above, for details.

This indicates that given a basic language model to provide the comprehension and composition skills, it is relatively easy and inexpensive to retrain it to manifest a different bias in its responses. He explains this as follows:

A critical and relatively recent breakthrough within the machine learning research community has been the realization that a large language model trained in a self-supervised fashion on a huge corpus of text and which as a result has absorbed an enormous amount of knowledge about language syntax and semantics can be fine-tuned to excel in a different task domain (text classification, medical diagnosis, Q&A, named entity recognition, etc.) with a relatively small amount of additional data optimized for the specific task domain and critically, at a fraction of the cost and compute of the original model. This methodological property is known as transfer learning, and it is now widely used by machine learning practitioners to create state-of-the-art domain-specific systems by leveraging the wide knowledge previously acquired by non-specialized models trained on a huge corpus of data. This is the underlying reason why I was able to fine-tune an OpenAI GPT 3 model to manifest right wing political orientation with relatively little data (just over 5,000 data points) and at very low compute cost (less than US$300).
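A toy illustration of the transfer-learning idea, stripped to its bones: a frozen “pretrained” component (here a fixed feature extractor, standing in for the base model’s learned layers) plus a small trainable head fitted on a tiny task-specific data set. Everything in this sketch is invented for illustration; the point is only that the cheap part of training touches a small number of parameters.

```python
import math

# Stand-in for a frozen pretrained model: a fixed feature extractor.
# In real transfer learning this would be the large model's learned layers,
# which are reused rather than retrained.
def pretrained_features(x):
    return [x, x * x, math.sin(x)]

# Tiny labeled data set for the new task (far less than "pretraining" needed).
data = [(i / 10.0, 1 if i > 5 else 0) for i in range(11)]

# Only this small task-specific head is trained, which is why fine-tuning
# costs a fraction of what training the base model did.
weights = [0.0, 0.0, 0.0]
bias = 0.0

def predict(x):
    f = pretrained_features(x)
    z = sum(w * v for w, v in zip(weights, f)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # logistic head

def loss():
    return -sum(y * math.log(predict(x)) + (1 - y) * math.log(1 - predict(x))
                for x, y in data) / len(data)

before = loss()
lr = 0.5
for _ in range(500):  # gradient descent on the head only
    for x, y in data:
        err = predict(x) - y
        f = pretrained_features(x)
        for i in range(3):
            weights[i] -= lr * err * f[i]
        bias -= lr * err
after = loss()

print(f"loss {before:.3f} -> {after:.3f}")
```

The frozen extractor never changes; only three weights and a bias are learned, which is the miniature analogue of fine-tuning a small adapter on top of a multi-billion-parameter model.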

10 Likes

Agreed - Rozado’s been doing very interesting work.

I could have been clearer in my choice of words - I haven’t yet seen/heard of instances of “LLM manipulation” as in the LLM attempting to manipulate its audience. That could be a scenario in which, for instance, the LLM gaslights the user.

5 Likes

And would this not logically bring us to a discussion of The Fourth Industrial Revolution by Klaus Schwab?

2 Likes

First, Serbia had to give up Bosnia; now the wolves return to Belgium. Why did we fight World War I?

5 Likes

Trying to lure a Tesla Gigafactory?

7 Likes

The cited article states:

South Korea has the fourth-longest working hours among countries in the Organization for Economic Cooperation and Development (OECD), with employees working an average of 1,915 hours per year — compared to the OECD average of 1,716 hours.

which makes one wonder how other countries rank in average hours worked. Here are the data, as published by the OECD.

[Chart: OECD average annual hours worked by country, 2023-03-15]

Measuring average hours worked per year is more representative than length of work week because it takes into account factors such as holidays, sick leave, etc. Here is the complete definition.

Average annual hours worked is defined as the total number of hours actually worked per year divided by the average number of people in employment per year. Actual hours worked include regular work hours of full-time, part-time and part-year workers, paid and unpaid overtime, hours worked in additional jobs, and exclude time not worked because of public holidays, annual paid leave, own illness, injury and temporary disability, maternity leave, parental leave, schooling or training, slack work for technical or economic reasons, strike or labour dispute, bad weather, compensation leave and other reasons. The data cover employees and self-employed workers. This indicator is measured in terms of hours per worker per year.

8 Likes

One of the biggest new initiatives in the budget is the ISS deorbit tug, which would be used to perform the final lowering of the station’s orbit to ensure it reenters over the South Pacific. NASA first indicated its plans for the tug in a request for information last August, but offered few specifics about the vehicle in the budget request.

The $180 million NASA is requesting for the tug “gives us a healthy start” for the project, said Kathy Lueders, NASA associate administrator for space operations, in a media teleconference about the budget.

While budget documents did not include a spending profile for the project, Lueders said the agency came up with a cost estimate “a little bit short of about $1 billion.” The exact amount, she said, will depend on what proposals the agency receives from industry in response to an upcoming request for proposals (RFP).

“Our goal is to go out with an RFP,” she said. “We’re hoping to get a better price than that.”

So, around US$150 billion to build it, around US$3 billion a year (NASA expenditures only) to operate it, and another billion to burn it up in the atmosphere and drop the chunks that survive reentry into the South Pacific abyss.
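Back-of-the-envelope, using the figures above. The 25-year operating span is my assumption, roughly the period from early assembly to the planned retirement around 2030.

```python
# All figures in US$ billions, from the post above.
BUILD_COST = 150        # construction
OPS_PER_YEAR = 3        # NASA operating expenditure only
DEORBIT = 1             # deorbit tug estimate
YEARS_OF_OPERATION = 25 # assumed span of NASA operating spending

total = BUILD_COST + OPS_PER_YEAR * YEARS_OF_OPERATION + DEORBIT
print(f"~US${total} billion over the station's life")  # ~US$226 billion
```

On those assumptions, operations alone add about half the construction cost again, and the deorbit tug is rounding error.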

9 Likes

In my limited experience visiting factories and design centers in China, they kick you out of the office after 10 pm; one needs a permit to come to the office on Saturday, and it is nearly impossible to obtain a permit to come in on Sunday.

6 Likes

Catfight!

1 Like

Does this chart of working hours by country include the time spent off the job because of being on strike? Or was strike time disregarded?

3 Likes