Uncanny Ways of Seeing: Engaging AI in Design Practice and Pedagogy

A closed-loop approach that yields content that seems familiar and uncanny—alternate realities and speculative futures

Drew Sisk
Assistant Professor
Clemson University

From the early technologies of photography and film to the emergence of the desktop computer as an accessible tool for creative work, technological advancements have triggered simultaneous trepidation and enthusiasm among artists and designers. We are seeing the same reactions to AI now.

AI is changing the way we approach creative processes, making them more fluid, generative, and fast-paced. More importantly, it is fundamentally altering the way we perceive images and objects of design. In the same way that Dziga Vertov’s Kino-Eye film technique in the 1920s sought to use cinematography and editing as ways to create form that is “inaccessible to the human eye,” AI will continue opening up new forms of perception that we cannot even imagine. In this presentation, I will apply the work of Dziga Vertov, Walter Benjamin, John Berger, and Hito Steyerl to the current discourse on AI and design.

The design studio and classroom have proven to be fruitful spaces to explore AI. In this presentation, I will share some of my own nascent experiments using AI in a closed-loop approach that yields content that seems familiar and uncanny—alternate realities and speculative futures at the same time. I will also share work from my advanced graphic design students, who have been experimenting with AI tools and making speculative work that critically engages with AI. Artificial intelligence presents us with new possibilities for making form, but, more importantly, our work requires us to wrestle with the ethics and consequences of this rapidly expanding technology.
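The abstract does not spell out the mechanics of the closed loop, but one plausible wiring, offered here purely as an illustration and not as the author's actual method, alternates between a text-to-image model and an image-captioning model so that each pass drifts a little further from the seed prompt. The generate_image and caption_image functions below are hypothetical stubs standing in for whatever models a practitioner has on hand.

    # Illustrative sketch only: a closed loop alternating between a
    # text-to-image model and an image-captioning model. Each pass
    # drifts further from the seed prompt, which is where familiar
    # imagery can start to turn uncanny.

    def generate_image(prompt: str):
        """Stub for any text-to-image model call."""
        raise NotImplementedError

    def caption_image(image) -> str:
        """Stub for any image-captioning model call."""
        raise NotImplementedError

    def closed_loop(seed_prompt: str, iterations: int = 5) -> list:
        """Feed each image's machine-written caption back in as the next prompt."""
        prompt, outputs = seed_prompt, []
        for _ in range(iterations):
            image = generate_image(prompt)
            outputs.append((prompt, image))
            prompt = caption_image(image)  # the loop closes here
        return outputs

Each (prompt, image) pair documents one step of the drift, so the sequence itself becomes the artifact: a record of the machine reinterpreting its own output.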

This design research is presented at Design Incubation Colloquium 10.2: Annual CAA Conference 2024 (Hybrid) on Thursday, February 15, 2024.

Understanding Racial and Gender Bias in AI and How to Avoid It in Your Designs and Design Education

How biases are present in our design processes and design tools

Sarah Pagliaccio
Adjunct Professor
Lesley University
College of Art and Design
Brandeis University

We are using artificial intelligence-enabled software every day when we post to social media, take pictures, and ask our phones for directions. But these apps are not designed to serve everyone equally. Institutional bias is present in the tools that we hold in our hands and place on our kitchen counters. Recent research on, and disclosures by, the tech giants have revealed that voice- and facial-recognition apps are optimized for white, male voices and faces, respectively. The algorithms "learned" from historical training data that women belong in the home and black men belong in jail. All of this leads to bias where we least want to see it: in courtrooms, in classrooms [1], in elections, in our social-media feeds, in our digital assistants, and in our design tools. (Meanwhile, women drive 75-95% of purchasing decisions in the US, and we are rapidly becoming a majority-minority nation.)

In this presentation, we will review some of the most egregious recent examples of AI-driven racism and sexism and look at some less well-known examples, including the changes to Twitter's algorithm that favored white faces over black faces; the MSN robot editor that confused the faces of mixed-race celebrities; AI assistants that screen job applications and immigration applications; voice-recognition apps in cars that don't understand women drivers; voice-activated apps that assist disabled veterans; and predictive-text software that could exacerbate hate speech. We will explore how these biases are present in our design processes and design tools, specifically those that use speech, image, and name generators.

Finally, we will review options for confronting these biases—like taking the implicit bias test, knowing the flaws in underlying data sources we rely on, expanding our user research to include diverse audiences, and using text sentiment analysis to remove our own bias from interview scripts, among other options—so we do not perpetuate gender and racial bias in our design solutions and design education.
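As one concrete way into that last tactic, here is a minimal sketch that screens interview questions for emotionally loaded wording. The abstract names no particular tool, so the choice of NLTK's off-the-shelf VADER analyzer and the flagging threshold are assumptions of this sketch.

    # Minimal sketch: flag interview questions whose wording skews strongly
    # positive or negative. VADER is one of several possible analyzers; the
    # 0.3 threshold is illustrative, not a standard.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

    questions = [
        "How frustrating was it to use this confusing feature?",  # leading
        "Walk me through your experience with this feature.",     # neutral
    ]

    sia = SentimentIntensityAnalyzer()
    for q in questions:
        score = sia.polarity_scores(q)["compound"]  # ranges -1 to +1
        if abs(score) > 0.3:
            print(f"Possibly loaded wording ({score:+.2f}): {q}")

On the pair above, only the first question should trip the threshold, which is the point: the neutral phrasing elicits the same information without steering the participant toward a negative answer.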

Notes and References

  1. A group of 68 white elementary-school teachers listened to and rated a group of white and black students for personality, quality of response to a prompt, and current and future academic abilities. The white teachers uniformly rated white students higher than black students, Black English-speaking students, and students with low physical attractiveness. The researchers concluded that some of these children's academic failures might be based on their race and dialect rather than their actual performance. (Indicating that no one is immune from cultural biases, the duo who performed this research labeled the nonblack English dialect that the white students spoke Standard English rather than White English.) See DeMeis, Debra Kanai, and Ralph R. Turner, "Effects of Students' Race, Physical Attractiveness, and Dialect on Teachers' Evaluations," Contemporary Educational Psychology, Vol. 3, No. 1, January 1978.

This research was presented at the Design Incubation Colloquium 7.3: Florida Atlantic University on Saturday, April 10, 2021.

New Directors of Research Initiatives and Design Futures

Here at Design Incubation, 2020 has been a challenging yet productive and exciting year. Despite the shift to online teaching and the need to physically distance, we have continued to connect with you through virtual presentation opportunities, and we have been developing new resources for design faculty.
 
As we plan for a fresh start in 2021 and beyond, we continue to evolve our programming, developing new resources and events to better serve design researchers and scholars. To help us with these endeavors, we are pleased to announce we are appointing two new directors to the team. Jessica Barness will join Design Incubation as the Director of Research Initiatives and Heather Snyder Quinn will take on the role of the Director of Design Futures. Please join us in welcoming Jessica and Heather to the Design Incubation Leadership Team.
                                                                   
Jessica Barness is an Associate Professor in the School of Visual Communication Design at Kent State University. She is both a scholar and practitioner; her work has been published in internationally recognized journals. Recently, Jessica spearheaded the development of a pair of white papers, which examine the role of peer review in design research and publishing. Jessica will continue to work with the Design Incubation Leadership Team on research-related initiatives and new programming, which will examine how design faculty can approach writing from idea through to publication.
 
Heather Snyder Quinn is an Assistant Professor of Design in the College of Computing and Digital Media at DePaul University. Her work focuses on the future ethics of emerging technology, including augmented reality (AR), artificial intelligence (AI), and the Internet of Things. Heather hosted Design Incubation's Affiliated Society meeting at the College Art Association's 2020 annual conference, inviting twelve local design organizations in Chicago to participate in a round table and Q&A. She also hosted a Design Incubation Colloquium at DePaul in 2019, which coincided with Chicago Design Week. We look forward to working with Heather to produce events and content focused on emerging technologies and their role in design futures.

Insectile Indices, Los Angeles 2027

Yeawon Kim
Graduate student
Media Design Practices
Art Center College of Design

Crime prediction technology: we have all seen it in the movies, but what was once pure fiction is quickly becoming reality. PredPol, HunchLab, and CompStat are relatively new kinds of crime-prediction, or "predictive policing," software that demonstrate how algorithms and other technologies can be used within urban infrastructures to predict crime. However, using these technologies and algorithms to collect data to predict crime, data that is invariably subject to and tainted by human perception, can lead to a number of adverse ethical consequences, such as the amplification of existing biases against certain individuals based on race, gender, or otherwise. On the other hand, if data can be gathered by artificial intelligence (AI), thereby removing the human component from data collection, could that result in more efficient and accurate crime prediction? And would we, in doing so, also reshape the aesthetics of urban landscapes, especially given the constant evolution of AI?
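The bias-amplification worry can be made concrete with a toy simulation. This sketch is mine, for illustration only, and models none of the products named above: if patrols are dispatched wherever recorded arrests are densest, and arrests can only be recorded where patrols are, an initially small disparity compounds into what researchers have called a runaway feedback loop.

    # Toy simulation of a predictive-policing feedback loop. Two districts
    # have identical true crime, but district 0 starts with a slightly
    # larger arrest record, so it keeps attracting the patrol.
    import random

    random.seed(0)
    arrests = [12, 10]  # historical arrest counts, districts 0 and 1

    for year in range(20):
        # "Predictive" rule: send the single patrol where past arrests are densest.
        patrolled = 0 if arrests[0] >= arrests[1] else 1
        # Crime occurs equally in both districts...
        crimes = [random.randint(0, 3) for _ in range(2)]
        # ...but only crime in the patrolled district is observed and recorded.
        arrests[patrolled] += crimes[patrolled]

    print(arrests)  # the initial 12-vs-10 gap only ever widens

Because the record, not the underlying crime, drives deployment, the predictions look ever more accurate for district 0 while district 1's crime goes unrecorded: precisely the perception-tainted data problem this project responds to.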

Insectile Indices is therefore a speculative design project that considers how electronically augmented insects could be trained to act as sophisticated data sensors, working in groups, as part of a neighborhood predictive-policing initiative in the city of Los Angeles in 2027. This project is not only an investigation into the ethics of this controversial idea, but also an aesthetic exploration of the deliberate alteration of a natural insect ecosystem and the potential reshaping of an urban landscape.

In 2007, the Defense Advanced Research Projects Agency (DARPA) asked American scientists to submit proposals to develop insect-cyborg technology, a call that prompted a wave of troubling and worrisome commentary. Rather than build on a frightening narrative about the potentially sinister military uses of such technology, this project does the opposite and imagines an aesthetically pleasing utopia in which these insect-cyborgs have social utility and work toward the public good. Insectile Indices also plays with the idea of aesthetics in our future techno-driven world by asking whether we are more apt to silently turn a blind eye to pervasive surveillance if these insect-cyborgs, or the urban landscapes they have the potential to reshape, become more pleasing to the eye.

In this session, I plan to share the process of researching and creating the visual representation of this speculative fiction.

This research was presented at the Design Incubation Colloquium 4.2: CAA 2018 Conference Los Angeles on February 24, 2018.

Designing for Autonomous Machines

Alex Liebergesell
Associate Professor 
Graduate Communications Design
Pratt Institute


"The Future of Employment," published by the Oxford Martin School in 2013, predicts significant displacement of human labor over the coming two decades as computerization and robotics continue to migrate from routine manual to non-routine cognitive tasks. While designers fare well in the study's susceptibility-to-displacement rankings, we will need to establish new "complementarities" with the creative and social intelligence capabilities of cutting-edge robotics if we are to thrive. The recent acquisition of Boston Dynamics and its proprioceptively advanced robots from Google's X lab by SoftBank, the Japanese inventor and domestic distributor of the emotionally responsive home companion "Pepper," is just one indication of how quickly technological, market, and social developments are converging to propel smart, autonomous machines into our everyday lives. These machines' near-future capacity for causal reasoning and insight, and their uncanny humanoid presence, will call upon designers' expertise in shaping language, user experiences, and interactions: generalist, meta-cognitive skills that still define specific human advantages. Having shifted from a preoccupation with form to the construction of meaning, design practice, whether in communications, products, or space planning, can take additional steps in creating conversations, codifying behaviors, and defining new artifacts and physical ecosystems that are sensible, graspable, and navigable to both humans and machines in innumerable settings. Moreover, by modeling positive speech and behavior, shared environments, and common social values, designers creating and coexisting alongside autonomous machines will do no less than encourage humans to recognize and cherish reciprocity, civility, and labor.

This research was presented at the Design Incubation Colloquium 4.0: SUNY New Paltz on September 9, 2017.