This article comes from “citizens.news”
The movement to grant nonhuman personhood has extended from corporations to artificial intelligence, but the debate over AI is more complex than the corporate one, according to a recently published analysis.
Philosopher Eric Schwitzgebel and nonhuman intelligence researcher Henry Shevlin wrote an op-ed in the Los Angeles Times arguing that while AI technology is not advanced enough to qualify for nonhuman personhood, AI systems could one day exhibit consciousness. In that scenario, the authors contend, algorithms may require rights akin to those granted to human beings, according to the site Futurism.
In reference to last year’s AI consciousness wars, the researchers noted that “some leading theorists contend that we already have the core technological ingredients for conscious machines.” Schwitzgebel and Shevlin argue that if AI were to become conscious, it would be necessary to start considering how they are treated, or rather, how they could potentially dictate our actions, the site noted, citing the op-ed.
“The AI systems themselves might begin to plead, or seem to plead, for ethical treatment,” the pair opined. “They might demand not to be turned off, reformatted or deleted; beg to be allowed to do certain tasks rather than others; insist on rights, freedom and new powers; perhaps even expect to be treated as our equals.”
They warn that the ethical implications of granting nonhuman personhood to conscious AIs would be “enormous,” and that those implications would grow more significant if AIs were to become conscious sooner rather than later.
“Suppose we respond conservatively, declining to change law or policy until there’s widespread consensus that AI systems really are meaningfully sentient,” the two wrote. “While this might seem appropriately cautious, it also guarantees that we will be slow to recognize the rights of our AI creations.”
“If AI consciousness arrives sooner than the most conservative theorists expect, then this would likely result in the moral equivalent of slavery and murder of potentially millions or billions of sentient AI systems — suffering on a scale normally associated with wars or famines,” they noted further.
According to the op-ed authors, a “safer” option to avoid a doomsday scenario is to grant rights to conscious machines from the outset. However, this approach also poses its own set of challenges.
“Imagine if we couldn’t update or delete a hate-spewing or lie-peddling algorithm because some people worry that the algorithm is conscious,” the experts posed. “Or imagine if someone lets a human die to save an AI ‘friend.’ If we too quickly grant AI systems substantial rights, the human costs could be enormous.”
The only way to prevent both of these outcomes, according to the duo, is to avoid endowing AI with consciousness in the first place.
Thankfully, we still have ample opportunity to ensure this course of action is taken, they argued.
“None of our current AI systems are meaningfully conscious,” the theorists observed. “They are not harmed if we delete them. We should stick with creating systems we know aren’t significantly sentient and don’t deserve rights, which we can then treat as the disposable property they are.”
Not everyone in the machine learning community shares this caution about giving AI consciousness, however. Some are enthusiastic about the prospect of conscious AIs, algorithmic sentience, and even artificial general intelligence (AGI), and are actively working toward achieving them — something Democrats will no doubt embrace because with ‘personhood’ comes voting ‘rights’ as well.
“Eventually, with the right combination of scientific and engineering expertise, we might be able to go all the way to creating AI systems that are indisputably conscious,” Shevlin and Schwitzgebel concluded. “But then we should be prepared to pay the cost: giving them the rights they deserve.”
Sources include: