When a student submits a conceptual sketch in response to an architectural design problem, the instructor may presume that the student researched a few precedents and then formulated an original idea. How should the instructor react when an artificial intelligence (AI) art generator created or influenced the image? AI art generators create new architectural representations, or adapt existing ones, from a text prompt within seconds. High-quality graphic solutions from text-to-image models are now confronting the academy. OpenAI's DALL-E 2 and Midjourney are two popular text-to-image art generators. Web crawlers regularly scrape the internet to archive digital data; research companies acquire the data, then compile and pair billions of images with their associated text descriptors into massive datasets. When a natural language processor interprets a prompt such as 'Pompidou rendering inspired by Mies', the deep learning algorithm seeks out the patterns associated with the input. The output takes the form of architectural representations: design visualizations composited and transformed to illustrate the requested version of a building. Although AI generators make art more accessible to the public, they invite controversy from the art community regarding attribution. This paper discusses the ethical and legal implications surrounding AI art generators and copyright, describes how the generators operate, considers positions on AI's place in the creative process, and concludes with suggested best practices for engaging AI art in the architectural design curricula.
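The prompt-to-pattern matching described above can be sketched, very loosely, as retrieval over paired captions. The toy below (all captions, identifiers, and the `match_prompt` helper are invented for illustration, and word-overlap scoring is a deliberate simplification; real diffusion models synthesize new pixels rather than retrieve stored images) only illustrates the image-text pairing idea:

```python
# Toy illustration of how paired image-text data lets a generator associate
# a prompt with learned visual concepts. This is an analogy, NOT the actual
# mechanism of DALL-E 2 or Midjourney.

# A miniature "dataset": captions paired with image identifiers, standing in
# for the billions of scraped image-text pairs described in the abstract.
dataset = {
    "glass and steel high-tech facade of the Pompidou": "img_001",
    "minimalist pavilion inspired by Mies van der Rohe": "img_002",
    "baroque cathedral interior rendering": "img_003",
}

def match_prompt(prompt: str) -> str:
    """Score each caption by word overlap with the prompt and return the
    identifier of the best-matching paired image."""
    prompt_words = set(prompt.lower().split())

    def overlap(caption: str) -> int:
        return len(prompt_words & set(caption.lower().split()))

    best_caption = max(dataset, key=overlap)
    return dataset[best_caption]

print(match_prompt("Pompidou rendering inspired by Mies"))
```

A production text-to-image system replaces this lookup with a learned mapping from text embeddings to image features, which is why it can produce novel composites rather than returning a stored picture.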
Published in the ASEE document repository at peer.asee.org.