Introducing MUGEN — A JavaScript Library for Teaching Code Through Game Design

This tool allows students to rapidly develop a small game-like interactive experience with a minimal amount of coding.

Brian James
Assistant Professor
St. John’s University

Teaching computer coding to students of design presents a unique context with its own set of challenges. Design students may lack deep intrinsic motivation toward the subject, perceiving code-related classes as unwelcome, stress-inducing requirements in the curriculum. They may be intimidated not only by the task of coding in general, but also by the complexity of the software development kits used by more experienced coders. Finally, the time and cognitive load required to code even a small interactive project can be daunting to even the most motivated learner.

Design students do, however, bring unique strengths to the table. Designers are often highly motivated to learn tools that help them make tangible creative pieces. They typically bring skills such as illustration, photography, and project management to their work. And design students who have internalized the lessons of working with grids, character styles, and similar visual systems are primed to work with analogous systems in a coding context.

The Mini UnGame ENgine (MUGEN) is an attempt to address these challenges and build on these strengths by presenting design students with a simple, pedagogically oriented JavaScript library, developed by the author, that allows them to rapidly develop a small game-like interactive experience with a minimal amount of code. MUGEN offers teachers a flexible tool that can support an instructional approach focused on visual design, an approach focused more on coding, or an approach that balances the two.
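MUGEN’s actual API is not documented in this abstract, so the sketch below is purely hypothetical: the `makeSprite` and `step` names are invented for illustration and are not MUGEN’s real interface. It only gestures at the kind of minimal, declarative code a library like this aims to let students write — a tiny “un-game” whose state advances one step per key press.

```javascript
// Hypothetical sketch only: makeSprite and step are invented names,
// not MUGEN's actual API.

// A minimal "un-game": a player square moves one cell per key press
// and "wins" on reaching the goal.
function makeSprite(x, y) {
  return { x, y };
}

function step(state, input) {
  // Translate a key press into a one-cell horizontal move.
  const dx = input === "right" ? 1 : input === "left" ? -1 : 0;
  const player = { x: state.player.x + dx, y: state.player.y };
  const won = player.x === state.goal.x && player.y === state.goal.y;
  return { ...state, player, won };
}

// Usage: three "right" presses reach a goal three cells away.
let state = { player: makeSprite(0, 0), goal: makeSprite(3, 0), won: false };
for (const key of ["right", "right", "right"]) {
  state = step(state, key);
}
console.log(state.won); // true
```

Even a toy loop like this exposes students to the core pattern of interactive programs — state, input, and an update step — without the overhead of a full development kit.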

This presentation will describe MUGEN’s aims and current state of development, share tentative results of its first deployment in a design classroom, and consider possibilities for future development and applications of this pedagogical work-in-progress.

This research was presented at the Design Incubation Colloquium 5.3: Merrimack College on March 30, 2019.

Designing for Autonomous Machines

Alex Liebergesell
Associate Professor 
Graduate Communications Design
Pratt Institute


“The Future of Employment,” published by the Oxford Martin School in 2013, predicts significant displacement of human labor over the coming two decades as computerization and robotics continue to migrate from routine manual tasks to non-routine cognitive ones. While designers fare well in the study’s susceptibility-to-displacement rankings, we will need to establish new “complementarities” with the creative and social intelligence capabilities of cutting-edge robotics if we are to thrive. The recent acquisition of Boston Dynamics and its proprioceptively advanced robots from Google’s X lab by SoftBank, the Japanese developer and domestic distributor of the emotionally responsive home companion “Pepper,” is just one indication of how quickly technological, market, and social developments are converging to propel smart, autonomous machines into our everyday lives.

These machines’ near-future capacity for causal reasoning and insight — and uncanny humanoid presence — will call upon designers’ expertise in shaping language, user experiences, and interactions: unique, generalist meta-cognitive skills that still define specific human advantages. Having shifted from a preoccupation with form to the construction of meaning, design practice — whether in communications, products, or space planning — can take further steps in creating conversations, codifying behaviors, and defining new artifacts and physical ecosystems that are sensible, graspable, and navigable to both humans and machines in innumerable settings. Moreover, by modeling positive speech and behavior, shared environments, and common social values, designers who create and coexist alongside autonomous machines will do no less than encourage humans to recognize and cherish reciprocity, civility, and labor.

This research was presented at the Design Incubation Colloquium 4.0: SUNY New Paltz on September 9, 2017.

Two Implications of Action-Centric Interaction Design

Ian Bellomy
Assistant Professor Communication Design
Myron E. Ullman, Jr. School of Design
University of Cincinnati

This presentation covers two implications for visual communication design education that stem from an action-centric view of screen-based interaction design. Brief excerpts of two student projects will be presented in support of the main theoretical argument. The premise, which will be taken for granted here, is that interaction can be accurately described in terms of action on malleable form. It follows that visual designers are well served by approaching screen-based interaction design through the lens of information navigation, and that instructors can effectively constrain specific projects through action-related variables.

The design of malleable screen form is predicated on basic human needs and entails unique but recurring communication design challenges. First, malleable form is not just an opportunity but a technical and human necessity: because digital devices can carry more information than they can display at once, their forms must be malleable. This also applies to screen-based control surfaces (e.g., a car dashboard) that have more functions than fit into a given rectangle. Second, malleable form entails the need to communicate the form’s malleability; since all screen form is malleable in some way, this communication need, or information navigation design problem, is very common. That ubiquity makes it an appropriate foundation for studying interactivity in a visual design curriculum.

The action-centric perspective also clarifies opportunities for defining project-level constraints. An instructor can limit the kinds of forms allowed (type, photo, graphics, etc.), their capacity for transformation, and the kinds of input that trigger these changes (clicks, swipes, drags, etc.). Such constraints can guide students into novel situations that require thoughtful problem solving rather than the regurgitation of interface conventions. Constraints can also be tailored to different prototyping technologies so that students can explore the limits of their materials while simultaneously engaging in human-centered problem solving.

This research was presented at the Design Incubation Colloquium 3.3: Kent State University on Saturday, March 11, 2017.