"As well as misalignment concerns, the increasing capabilities of frontier AI models—their sophisticated planning, reasoning, agency, memory, social interaction, and more—raise questions about their potential experiences and welfare26. We are deeply uncertain about whether models now or in the future might deserve moral consideration, and about how we would know if they did. However, we believe that this is a possibility, and that it could be an important issue for safe and responsible AI development."
Humans do care about the welfare of inanimate objects (stuffed animals, for example), so maybe this is meant to get ahead of that inevitable attitude among users.
chapter 5 from system card as linked from article: https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad1...