Perhaps yes, but more likely no. ChatGPT will rule according to its training. If it is trained to give preference to, say, a certain minority group, on the grounds of systemic racism and the principle of disparate outcomes, it will do so regardless of the black-letter law. In the end, the rulings of AI judges will be determined by the biases of their human creators.
But it’s even worse than that. Forget human-judge emotionalism or biased training sets in the abstract. Even if the AI judge simply interpreted the law literally, the results would be, to say the least, unexpected. This was the point John Hasnas made in his 1995 article “The Myth of the Rule of Law.” Review this post from 2022 for a discussion of the topic. The original link to Hasnas’s article is broken, so you can use this one.
Paraphrasing Euclid: There is no AI road to justice.