2023 ASEE Annual Conference & Exposition

This paper investigates the implications of competing definitions of ‘personhood’ for technology, specifically artificial intelligence (AI) agents, and the ways in which their legal and moral status may evolve over time. This exploration was the initial basis for a course in a liberal studies program; the basic structure of that course, including readings, will be presented. An important starting point for the course and its discussions was to examine historical, philosophical, and religious definitions of a person. One of the more natural points of comparison was, and continues to be, how we regard the status and rights of animals. The question becomes one of setting boundaries for categorization. For example, is the boundary about intellectual capacity? How would that be defined? What are the defining hallmarks of cognition: language, logic, or something else? What are the role and importance of physically embodied sensation and perception? Which of these features do AI agents possess, or are they likely to possess? The animal rights movement and legal protections for pets and animals may serve as a template for exploring what may eventually become likely for such artificial agents. The capacity to feel, both pleasure and pain, has figured in arguments about regulating our relationship with the living world and about how far ownership and domination may extend. It is also useful to recall earlier understandings of the rights, humanity, and personhood of women, children, and slaves, and the ways in which those understandings have evolved in Western thought and legal systems. Certainly the personhood of artificial lifeforms has been a staple of science fiction books, television, and film since Frankenstein, but the import of such moral thought experiments is often dismissed as irrelevant when discussing the status of artificial agents and the ways in which moral guidance will be instilled into such semi-autonomous beings. Isaac Asimov’s laws of robotics are used only marginally as a starting point for such discussions; however, seeing what is missing in his statement of the problem can be productive. Generally, Western thought gives primacy to individual interaction and decision making and places little emphasis on the expanding circles of obligation from family to kin, tribe, nation, and humanity as a whole. The issues raised here are not merely a sterile intellectual exercise; they have real consequences as we wrestle with programming decision making into agents such as autonomous cars and with prioritizing the associated legal and moral goods and virtues.

Authors
  1. Dr. Suzanne Keilson, Loyola University, Maryland
