A Duty to Deceive? Using New and Deceptive Technologies to Enhance the Lives of Dementia Patients
In this paper I propose a new type of therapy for dementia patients, AIIT, which uses artificially intelligent programs to mimic the likeness of a dementia patient's spouse or relative, in hopes of providing comfort and an alternative to patients constantly reliving grief or being lied to in unsatisfactory ways. I establish the moral permissibility of AIIT through the moral parity claim, a conditional claim stating that if current dementia care practices are permissible, then AIIT ought to be as well; in other words, AIIT is no more morally problematic than current dementia care practices. To defend this claim I evaluate and compare AIIT to current practices with respect to three potential harms. I first find AIIT to be less harmful to dementia patients than current practices, since AIIT better preserves patients' beliefs, emotions, and desires. I then conclude that AIIT does not pose a unique harm to the impersonated person, since 1) many theories of wellbeing do not support the possibility of the deceased being harmed, and 2) people with dementia rarely form new impressions, so harms to the impersonated person are extremely unlikely. Finally, I claim that AIIT would not cause additional harms to society, given that current practices already harm relatives in a similar manner, already have the potential to pose problems if used outside of dementia care, and do not differ from AIIT with respect to affecting trust in the medical system. Having established moral parity, I conclude by pushing for a stronger claim, the superior option claim, which states that AIIT is morally permissible outright. I defend this claim by arguing for the antecedent of the moral parity claim: I deny that we have an obligation not to deceive dementia patients, since their condition prevents them from apprehending the world accurately.