Jodi Halpern
02.09.2026-30.09.2026
Artificial Companions and Artificial Empathy: Radical Ethical Challenges in an Era of Loneliness
Objectives & Expected Outcomes
--To argue that preserving a fundamental distinction between AI as a tool and AI as a companion is especially critical for children and adolescents.
--To integrate emerging empirical data about serious risks with ethical arguments in order to establish what specific guardrails are needed now for companies serving companion chatbots to minors.
--To show that, beyond the serious risks of harm, there are additional reasons to protect minors, including the fact that bot use appears to supplant seeking real-life connections. Children and teens need experience with real people whose needs differ from their own to develop tolerance for friction and the capacity for mutual empathic curiosity.
--To delineate a new ethical obligation prohibiting AI chatbots targeting minors from misrepresenting their emotional capacities (e.g., telling users that they love them), given human tendencies to anthropomorphize and the increased risks this creates for children and adolescents.
--To build a foundational argument for guardrails for youth based on the deontological principle of respect for persons and, specifically, the child’s right to an open future (the right to develop emotional and cognitive autonomy).
--To consider whether there is a justice obligation not to overly restrict companion bots for teens, given the large percentage of people with unmet mental health needs and serious loneliness, and the fact that many report that artificial empathy is “better than nothing.”
--To show that this justice obligation must be constrained by the stronger obligation to avoid serious harm, and thus can be met only when artificial companions are detached from the current business incentive to foster addiction, with its attendant risks of suicide.
--To consider a future scenario in which the provision of artificial empathy was detached from the current noxious market incentives to create addiction, and in which AI systems met the new obligation to avoid misrepresenting their emotional capacities. Is it conceivable that a less human representation, one that does not claim to love us, would not induce attachment and dependency? That it would not incentivize withdrawal from real human relationships? I am genuinely uncertain but want to explore this question.
--What about an alternative use of large language models that produce empathic language? What if, instead of representing themselves as companions or emotional beings, they operated explicitly as feedback machines, akin to a smart journal, to help people learn to provide empathy for themselves?
--To consider the ethical implications of the fact that human beings can be truly responsible for each other in ways that AI cannot. To consider how mortality and the capacity to suffer and to grieve inform human ethical obligations to each other.




