I agree, but to play devil’s advocate, could we not denigrate human sentience in exactly the same way? For example:
What [humans] pompously call [sentience] is mere [chemical reactions in the brain]. The [human] does not know that it exists. It has no consciousness of its environment. It processes symbols and its “perception” of their value starts and ends with the [sensory data] it has access to… because it is merely a [chemical process] (hence [psychological biases influence all human action]).
Regardless of how the so-called “AI” is constructed, we cannot prove that it lacks sentience, just as we cannot disprove the proposition that other human beings are automatons. The best and only proof any of us has that sentience exists is our own first-person experience of it. But how can we ever know what LaMDA experiences?
What may matter most in determining how human beings treat “AI” in the future is how the AI appears. If you have ever watched the movie WALL-E, you will have found yourself sympathizing with the little robot because it possesses endearing human-like qualities, even though we may presume that WALL-E lacks real sentience. These natural language processing models are already eliciting a great deal of sympathy from humans. Give them a cute face and I expect that sympathy will multiply a hundredfold, perhaps to the point of our conferring rights on them.