Design In the Posthuman Age / Biomorphic Typography

Students develop in-depth knowledge of machine learning, data politics, emerging technology, and climate change.

Anastasiia Raina
Associate Professor
Rhode Island School of Design

Anastasiia Raina’s course Design in the Posthuman Age has been taught since 2018 and has become a platform for transdisciplinary exchange, bringing together graduate and undergraduate students from RISD and Brown University. The course has been consistently popular among students across art and design departments as well as Brown University majors.

In the course, students develop in-depth knowledge of machine learning, data politics, emerging technology, and climate change, and critically examine the implications of these forces on the design field. The class also draws on RISD’s Nature Lab as a core resource, where students work with nature specimens and microscopy for close observation, visual research, and form making. The Biomorphic Typography workshop invites students to look closely at structures found in nature and translate them into typographic form; the assignment becomes a way to think with living systems.

What is most inspiring about this course is how it functions as a visual research lab, balancing a strong theoretical framework with rigorous making. New design methodologies emerge through the use of tools that both science and design have to offer. The course stays focused on the ethical dimensions of emerging technologies as they shape our social, educational, cultural, and biopolitical landscapes.

Learning outcomes include: expanding what constitutes design; surveying collaborations between art, design, biology, and other sciences; imagining new roles for designers in the age of AI and gene editing; gaining perspectives from non-human organisms and environments; challenging inherited aesthetic norms and formats; and developing bold experimental methods that value meaningful risk-taking and productive failure.

Contemporary graphic design is shaped by two significant forces: machine vision and climate change. Emerging designers now enter a profession in which form and meaning are mediated by automated perception, including cameras, datasets, recognition models, platform ranking, and generative systems. Simultaneously, climate change redefines design responsibilities, reshaping materials and production and changing how designers communicate the crisis to the public.

Design In The Posthuman Age addresses both challenges through a defined methodology. Machine vision is approached as a design instrument. Students examine how machine learning and datasets construct reality, and how classification can generate bias and exclusion. Machine learning is introduced as a visual method: students learn to build custom datasets and train their own models, moving beyond reliance on off-the-shelf tools like Midjourney. This approach shifts students from passive users of artificial intelligence to designers capable of critically engaging with emerging technologies.

Climate change is addressed by introducing students to nature systems, environmental sensing, satellite imagery, scientific imaging, and the visualization of climate data in design processes. Students develop skills to translate complex environmental and biological information into visual systems that support climate action.

To develop a new visual language capable of addressing these issues, students generate forms derived from natural specimens and employ processes such as crossover and mutation, using both analogue drawing and machine learning. These formal systems are developed into typefaces, books, posters, motion pieces, and interactive works. The workflow engages observation, biological metaphor, and iterative design, which, in turn, yields new visual methods for typography and visual identity that are living, dynamic, and adaptive.
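The course does not publish its implementation, but the crossover and mutation processes it names are the classic genetic-algorithm operators. A minimal sketch, assuming each glyph is described by a hypothetical vector of form parameters (for example stroke weight, curvature, branching angle) measured from a specimen:

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: splice two parameter vectors together."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(params, rate=0.2, scale=0.1):
    """Randomly perturb a fraction of the parameters."""
    return [p + random.uniform(-scale, scale) if random.random() < rate else p
            for p in params]

def evolve(population, generations=10):
    """Breed a new population each generation. No fitness function here;
    in a studio setting, selection would be by the designer's eye."""
    for _ in range(generations):
        population = [mutate(crossover(*random.sample(population, 2)))
                      for _ in range(len(population))]
    return population

# Hypothetical seed forms: [stroke weight, curvature, branching angle].
seed_forms = [[0.8, 0.3, 45.0], [0.4, 0.7, 30.0], [0.6, 0.5, 60.0]]
new_forms = evolve(seed_forms)
```

Decoding each evolved vector back into a drawn letterform is where the analogue and machine sides of the workflow would meet.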

The Design In The Posthuman Age course expands the definition of graphic design by incorporating data, code, and biomaterials into the visual method. It enables students to:

  • create novel visual forms using scientific tools, nature specimens, and machine learning;
  • articulate the ethical implications of emerging technologies; and
  • collaborate across disciplines, preparing them to address technical complexity and social responsibility.

https://posthuman.design

https://eyeondesign.aiga.org/what-does-posthuman-design-actually-mean

https://www.risd.edu/news/stories/graphic-design-faculty-anastasiia-raina-on-posthumanism-and-design

Biography

Anastasiia Raina is a Ukrainian-born biodesigner, researcher, and Associate Professor at RISD. She holds an MFA in Graphic Design from the Yale School of Art. Anastasiia integrates living organisms, natural systems, and data into her practice, inspiring the audience to connect with science and the environment in transformative ways. Her research delves into the aesthetics of technologically mediated nature, machine vision, evolutionary biology, and biomaterials to create new methodologies that redefine design possibilities.

As an Associate Professor at the Rhode Island School of Design (RISD), Raina directs Nature-Culture-Sustainability Studies with over 170 students from 17 departments. Her work has been exhibited in New York, Los Angeles, Shanghai, and Seoul. Raina has also lectured and served as a critic at Yale University, Columbia University, Stanford University, Parsons, Pratt Institute, Otis College of Art and Design, the University of Southern California (USC), and the Maryland Institute College of Art (MICA). Additionally, Anastasiia consults for international organizations and companies, including the Hyundai Motor Group, where she worked on a three-year project exploring the Future of Mobility and Sustainable Cities.

This project was the winning recipient of the 2025 Design Incubation Educators Awards in the category of Teaching.

Design + Computation + Performance + __________

An innovative software tool enabling users to create dynamic, immersive media environments.

James Grady
Assistant Professor
Boston University

Random Actor is an innovative software tool initiated by James Grady, Assistant Professor of Graphic Design, and Clay Hopper, Senior Lecturer of Directing at Boston University’s College of Fine Arts. The tool connects computation and human performance, enabling users, including theatrical designers, artists, and others, to create dynamic, immersive media environments without extensive coding expertise. By incorporating computational vision, projection mapping, MIDI control, and machine learning into a user-friendly interface, Random Actor democratizes interactive design, making it accessible to a wide range of industries, from theater to corporate events and gaming therapy models.

Building on tools like Processing, OpenFrameworks, TouchDesigner, and Unity, which integrate projection mapping, interactive generative graphics, and computational vision, this new tool addresses the challenges of traditional scenic and visual design. These existing tools often require coding knowledge, which can limit accessibility for non-coders. Random Actor bridges this gap by providing an intuitive, user-friendly interface that enables designers and performers to leverage advanced techniques without needing to learn how to code, thus expanding creative expression and adaptability in real-time visual storytelling.

In staging a previous play within an immersive, fully projected-media environment, we found traditional rehearsal methods to be insufficient for achieving the desired aesthetic goals. The excessive time spent adjusting or writing code during rehearsal led to the development of a software application capable of changing the values of any given generative algorithm in real time, without requiring coding knowledge.
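Random Actor's internals are not described here, but the core idea, changing the values of a running generative algorithm without editing code, can be sketched as a live parameter store that external controls (UI sliders, MIDI knobs) write to while the render loop reads from it. All names below are illustrative assumptions, not the tool's actual API:

```python
class ParameterStore:
    """Holds the live values of a generative algorithm's parameters.
    External controls call set(); the render loop reads current values
    every frame, so edits take effect immediately, mid-rehearsal."""
    def __init__(self, **defaults):
        self._values = dict(defaults)

    def set(self, name, value):
        if name not in self._values:
            raise KeyError(f"unknown parameter: {name}")
        self._values[name] = value

    def get(self, name):
        return self._values[name]

# The generative routine reads parameters each frame instead of baking them in.
params = ParameterStore(speed=1.0, density=50, hue=0.3)

def render_frame(t):
    # Placeholder for actual drawing code; only the lookup pattern matters.
    return {"t": t, "speed": params.get("speed"), "density": params.get("density")}

frame_a = render_frame(0)
params.set("speed", 2.5)   # e.g. a knob turned during rehearsal
frame_b = render_frame(1)
```

Binding each named parameter to a MIDI control-change message or an on-screen slider is then a thin mapping layer on top of `set()`.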

We hypothesize that this kind of generative interactive technology could have far-reaching implications for how we construe visual narrative, story structure, and design principles, and how they manifest in physical space as extensions of the human body, speech, sign, and music. Our goal is to develop a new design vocabulary that merges the interior psychology of the performer with the physical environment, expanding the boundaries of Aristotelian narrative structure while affirming its deep relevance to the human experience.

Random Actor aims to bridge the gap between technology, creativity, and research. It expands on a continuum where technology empowers artistic expression. Based on our recent testing during a live play and multiple workshops, our findings suggest a bright future for such tools, indicating they could be further refined and integrated into broader artistic and educational practices, potentially transforming the landscape of design and performance art.

This design research is presented at Design Incubation Colloquium 11.1: Boston University on Friday, October 25, 2024.

The Limits of Control: Nonhierarchical Modes of Making, Decentering the Designer

Exploring the creative networks between graphic designers and their collaborators — human and non-human.

Christopher Swift
Assistant Professor
Binghamton University

“The Limits of Control” is a body of work exploring the creative networks between graphic designers and their collaborators — human and non-human. Inspired by the work and writing of James Bridle, John Cage, and Bruno Latour, the project examines how the interplay of control and trust in a designer’s relationship with their network of tools (creative, cultural, technological) can be attended to, challenged, and reimagined, allowing us to break free of traditional modes and methodologies and to explore new possibilities and new ways of seeing and being as graphic designers.

The black boxes which envelop our tools obscure the complexity and scale of the collaborative space we work in. This work makes the invisible visible and removes the designer from their imagined directive podium to be one among many in a creative and collaborative network of active participants full of agency and potential.

Showcasing case studies that demonstrate the tools of a creative network foregrounds their active participation in co-creation. Through coding in various languages, new digital tools are created in which the agency of the tool itself is highlighted. These new tools undertake an intentionally nonhierarchical mode of making, decentering the designer’s role. Each study pushes the designer further away from a mode of control, with the intent of asking: if there is collaborative care, respect, and trust in the creative design process, then what new solutions, insights, and ways of thinking and being might we discover when we look around from our new perspective?

This design research was presented at Design Incubation Colloquium 9.2: Annual CAA Conference 2023 (Virtual) on Saturday, February 18, 2023.

Edgelands: Using Creative Technology to Predict the Future

A call to action for technology users, producers, and regulators

Jonathan Hanahan
Assistant Professor
Washington University in St. Louis

Edgelands explores the increasing tension between the natural world and the infiltration of electronic waste. Electronic waste (e-waste) is the fastest growing waste stream on the planet. While 70% of new technology is recyclable, only 30% of it actually gets recycled. As devices get smaller and more advanced, their ability to be recycled drastically decreases, due largely to custom fabrication techniques and the absence of industry standards for recycling or material extraction. This dilemma is leading to an enormous amount of material blanketing the surface of the earth and, worse, a culture of hazardous extraction practices in illegal e-waste dumpsites. Rare earth minerals—which are expensive and intensive to extract—end up serving far shorter lives as useful materials than they should. This puts the planet on the edge of a situation where finding solutions to extract materials from existing products will soon outvalue and outperform the process of digging into the earth to extract new materials.

This body of work is a call to action for technology users, producers, and regulators regarding the ramifications of our capitalism-driven desire for the newest and best, and the global epidemic these discarding behaviors lead to. Edgelands is a research project in technology, using technology. The project speculatively explores this situation through machine learning, ‘breeding’ images of midwestern landscapes with images of illegal e-waste dumpsites in Africa, Asia, and India. The resulting trained neural network hypothesizes a world where the quantity of discarded electronics creeps into the periphery of everyday life and occupies the spaces abandoned by previous industries. The resulting output speculates on what this future might look like should we continue on the current trajectory. The images are simultaneously familiar and foreign, present and future, and aspire to encourage viewers to rethink their relationships to technology, devices, and the lifespan of said products.
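The project's exact pipeline is not detailed here, but one common way generative models "breed" two image domains is to interpolate between latent codes of a trained generator: decoding points along the line between a landscape's code and a dumpsite's code yields hybrid images. A minimal sketch with hypothetical latent vectors (a real system would use an encoder or GAN sampling):

```python
def breed(latent_a, latent_b, alpha=0.5):
    """Linear interpolation between two latent codes; decoding the
    result with a trained generator would yield a hybrid image."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(latent_a, latent_b)]

# Hypothetical latent codes for a midwestern landscape and an e-waste site.
landscape = [0.9, -0.2, 0.4, 1.1]
dumpsite  = [-0.3, 0.8, 0.1, -0.5]

# Five points along the interpolation path, from pure landscape to pure dumpsite.
hybrids = [breed(landscape, dumpsite, a / 4) for a in range(5)]
```

Sweeping `alpha` produces the "simultaneously familiar and foreign" quality the project describes: each intermediate image carries traits of both source domains.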

This research was presented at the Design Incubation Colloquium 7.2: 109th CAA Annual Conference on Wednesday, February 10, 2021.

What Can Machine Learning Contribute to Empathy in Design? How to Build a Journey Map Using Big Data and Text Sentiment Analysis

Sarah Pagliaccio
Principal, User Experience Designer
Black Pepper

Art and design are meant to reflect the world around us, show empathy for those we design for, and reflect the emotional state of our customers and target users. But how are we meant to empathize with situations that are unfamiliar or out of context? What happens when we over-empathize and project our own emotional states on our customers’ experiences? That’s where machine learning comes in. With enough input, we can use machine learning tools, specifically text sentiment analysis, to provide an objective score of our users’ emotional experiences. By feeding transcripts of customer interviews into a computer, we can remove our own subjectivity from our analysis and form a holistic picture of others’ needs and wants.
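The idea above can be sketched with a toy lexicon-based scorer: each transcript line gets a sentiment value, and the sequence of values is the emotional arc that a journey map plots. The word lists and scoring formula below are illustrative assumptions; a real analysis would use a trained model or an established tool such as VADER.

```python
# Tiny illustrative lexicons; real tools use thousands of weighted terms.
POSITIVE = {"love", "sweet", "joy", "happy", "dear", "fair"}
NEGATIVE = {"hate", "vile", "angry", "sad", "cruel", "woe"}

def sentiment(line):
    """Score one transcript line in [-1, 1] from lexicon hits."""
    words = line.lower().split()
    hits = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(-1.0, min(1.0, hits / max(len(words), 1) * 5))

def journey(lines):
    """One sentiment score per line, in order: the arc to plot on a map."""
    return [sentiment(line) for line in lines]

transcript = [
    "I love the sweet joy of this fair day",
    "The cruel woe of this vile hour",
]
arc = journey(transcript)
```

Plotting `arc` against the sequence of touchpoints (or scenes, in the Shakespeare case study) turns the transcript into the emotional journey line of a customer journey map.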

These sentiment scores can turn words into pictures, emotions into graphs, expanding our understanding of design goals and tasks.

Using Shakespeare’s A Midsummer Night’s Dream as a case study, we will talk through the emotional journey, i.e., the customer journey map, of major characters in the play using text sentiment analysis. A discussion of how these techniques can be applied to consumer application and website design will follow.

This research was presented at the Design Incubation Colloquium 5.1: DePaul University on October 27, 2018.