Reclaiming the Human in HCI: a user-first design approach

Author: Donny K. Tusler

Title attributed to Sonia Tiwari

What is Human-Computer Interaction?

“Human-Computer Interaction (HCI) is a multidisciplinary field of study focusing on the design of computer technology and, in particular, the interaction between humans (the users) and computers. While initially concerned with computers, HCI has since expanded to cover almost all forms of information technology design,” according to the Interaction Design Foundation. And here is more information about some of the history of HCI.

What we should note about this definition is that the technology or tool comes first in thought, not the human factor, despite “Human” leading the name. As we enter the realm of the human side, the topics of user experience (UX) and user interface (UI) come into play. There is a whole debate about the differences between them.

What is UX and UI?

User Experience (UX) is one of those fields that just emerged and has flooded Information and Communication Technology (ICT). UX is a cross-disciplinary field defined as the overall experience of a person using a product, such as a website or computer application, especially in terms of how easy or pleasing it is to use. A person’s “ease of use” is one side of the perspective. However, UX is cross-disciplinary because it involves other perspectives, such as ease of learning, ease of access, workflow or pathways, cognitive load, visual culture, iconography, design, familiarity, and much more. A degree that aligns with UX is Human-Centered Design (HCD), a design and management framework that develops solutions to problems by involving the human perspective in all steps of the problem-solving process. These two perspectives don’t seem to match exactly; “ease of use” and “problem-solving” are two of the perspectives of UX work. However, they both take a humanist approach to technology. Where do we evolve from “ease of use” and “solving problems”? Am I writing about good UX or bad UX design? How do we improve UX and make it better?

In my attempt to better understand the differences, I found an article highlighting The 3 Big Differences Between UX and UI Design.


  1. UX deals with the purpose and functionality of the product. UI deals with the quality of the interaction that the end-user has with the product.
  2. UI design has an artistic component as it relates to the design and interface with the product. It affects what the end-user sees, hears, and feels. UX has more of a social component for market research and communicating with clients to understand what their needs are.
  3. UX focuses on project management and analysis through the entire cycle of ideation, development, and delivery. UI has more of a technical component, producing the design components for the finished product.

Understanding the differences among these four topics I have mentioned (HCI, HCD, UX, and UI) is important, but one prevailing similarity stands out: HUMAN! Human experience, human interaction, human-centered…

Designing technology or tools for us to use has an effect on us, whether the design feels completely new or familiar. New and different can be exciting or scary to some, but that is another debate. The familiar is far more meaningful to humans, because familiarity gives the designer the possibility of evoking a feeling of affinity toward a new product. And we want the positive side of the affinity spectrum.

Been and being done…

Is it the new hardware to enable VR and AR? Is it new hardware integrated into our skin and biology? Is it new hardware to influence our senses, like taste and smell? Maybe, but I don’t think so! I think it is already here, and we have vilified the evolution of UX. This is the perfect time to discuss it: Facebook has been all over the news for its data use, and is now sitting before a congressional committee.

Remember, in 2014 Forbes called it: “Facebook Manipulated 689,003 Users’ Emotions For Science”! Manipulation is defined as (1) handling or controlling (a tool, mechanism, etc.), typically in a skillful manner; (2) controlling or influencing (a person or situation) cleverly, unfairly, or unscrupulously. Well, I don’t know how skillful, clever, unfair, or unscrupulous they were being without a qualitative interview of all the people participating in the implementation. I am more interested in what they proved! Because this is the next step in UX: the affective use of technology. We see products appear such as Jibo, Google Assistant, and even a bonsai tree that follows you like a pet. Affective technology and mood have been studied for over two decades in educational technology. Norman (2002) is one of the oldest articles I remember studying and using in my Master’s design paper on the development of a “mood application” to collect data prior to app use. How does one rate user satisfaction beyond user feedback based on a one-time reflection of use, capture long-term use, and assess the context of using an application?

UX and UI work in the paradigm of implicit learning, and more specifically, what I see as sequence learning. Through UX design, the simplification and familiarity of navigation allow users to learn with ease how to navigate and locate information within a website, and to play games. But Facebook’s alteration of its algorithms provided evidence that the type of content affects our mood. The question arises: can we interpret this as implicit learning affecting our moods? Furthermore, Wikipedia cites a study on adapting paradigms to change stereotypes, which speaks to implicit learning as a contributing factor in moods and depression, related to prejudice and stereotyping. Facebook doesn’t mention implicit learning, but it affected the mood of Facebook participants.

The content was a factor, but what are the intermediating factors in a device? To name a few: text, color, dpi, iconography, and interface design. These are factors designers are able to control, related to the aesthetics of software. Classical artists move our emotions with poetry, paintings, sculpture, dance, and music. Designers use the same knowledge of composition, line, symmetry, and more when designing interfaces. But designers also tread into the world of how complex combinations of design interact with people. I will never forget what my art professor, Mark Messersmith, said to our class: “Artists create art for themselves, and designers create for others.” This has always stayed with me.

For a more tangible example, museum learning science coined the term “silent pedagogy” for what is implied in the museum environment (Eisner & Dobbs, 1988). For example, the rope in front of an artwork is a barrier meaning do not touch, hard floors and no seating mean move along and don’t loiter, and cold rooms encourage you to keep moving. But there are practical reasons for some of these design choices: hard floors accommodate carts wheeling equipment and exhibitions, no seating means nothing in the way of the carts, and cold rooms reduce the humidity that damages the artwork. The same miscommunication occurs within the realm of UX/UI design. One of my favorites is “wasted real estate,” which is a valid concern, especially on mobile devices. Yet we forget the “aesthetic impact,” which implicitly affects so many of us. Are UX and UI designers designing with humans in mind? Or are they missing the “aesthetic impact” occurring, because of the implicitness of the medium?

Some design perspectives?

Currently, designers work with design concepts such as Skeuomorphic Design and Flat Design. Skeuomorphic design provides realistic references, depth, and more visual complexity. Flat Design does the opposite: fewer realistic references, no depth, less complexity, and more of an emotive feel. The branding potential of a company with Flat Design is much greater, but it needs to be done well, and only once. Flat Design works at an implicit level and allows people to build a unique “association,” a brand, with the company. For example, Instagram’s icon is no longer a camera, but a recognizable logo unique to them. I remember seeing a blog post directed at the UX/UI audience, asking what a phone, a camera, or a keyboard even looks like anymore.


For fun, here are some logo mistakes! They clearly provide examples of the implicit meaning found in visual imagery, and of how interpretation may go awry!

Google is calling the next evolution Material Design. However, this is a tool for designers to express mood and branding. My argument is: what is the difference? Isn’t this just another WYSIWYG, and maybe competition for Photoshop? Both are great tools for a specific market. But how long until UX/UI designers and programmers work together to create the next evolution of design in the evolving world of the “internet of things”?

Facebook has already started allowing color customization in Messenger, along with the Facebook Reaction buttons. Has anyone else caught on? In 2014, Facebook affected the mood of users, and then in 2015 the Reaction button came to life. I have a feeling Facebook hasn’t even taken it to the next level yet! Or maybe they are exploring more. The launch of their AR/VR messaging is in development and likely premature for the mainstream market. But they are preparing, and making this an open-source effort with their developer site.

I hope people read this and we can skip the next 30 versions or updates where developers experiment and crowd-source users for feature updates. Could our designers let the users have some fun with what the learning sciences call “affinity spaces”? To reinforce the point: understanding human-computer interaction and translating the “aesthetic impacts” from implicit meaning to explicit meaning is what this post is about. But this means we need to understand the “aesthetic impacts” in the real world.

Analogical views of reality’s aesthetics in our tangible world:

(Image: Kline presentation on leadership)

I attended a Creativity Conference in Sarasota, Florida, where I saw Michael Kline display some impressive leadership skills used to facilitate a session. He intentionally modified the environment to modify its implicit meaning and evoke a mood among the participants. Everyone was in a circle, with no table; there was a moderator (who was not the facilitator), and Michael (our speaker) was the facilitator, with his shoes off, sitting on the floor. The proof is in the picture! Having clear sight lines between the people, with no objects blocking, made each person the focus, allowing for openness between each person. The facilitator on the ground was a symbolic abdication of authority, causing him to look up to the participants. This provided equity between the roles of our speaker and the participants. Philosophically, Michael was being humble, showing a modest or low estimate of his own importance.

Michael still led and spoke on the topic of leadership, but there was an implicit message being created through the symbolic behavioral conduct and the construction of the environment. Every part of the context was purposeful, creating an implicit communication of equity within the group to increase the propensity for collaboration among everyone. Looking at the picture, one would not assume Michael, the speaker, was conducting the session, but rather this odd participant. Most importantly, I ask you to take note of the gentleman in the black suit, hugging his clipboard instead of using it as a tool to record information. If we follow the symbolic and implicit meanings Michael was attempting to demonstrate for us, our gentleman in the black suit was not open to the demonstration. To clarify, there may be many reasons for placing the clipboard against his stomach: an upset stomach, a stain on his shirt he wished to hide; one may imagine a multitude of reasons from the picture.
However, having attended in person, I can say the gentleman in black asked questions and was not open to Michael’s responses, despite all the symbolic attempts at delivery. All three times, Michael offered answers to the question…

Now, I am only scraping the surface of what he demonstrated and explained about leadership in facilitating collaboration. Within distance education, there is a theory called the Community of Inquiry, which provides a framework to identify elements of the educational experience (social presence, teaching presence, and learner presence) in online learning environments. These elements of the educational experience rotate among the users (Tusler, 2017). In this session, Kline had another member mimic drawings on a large drawing pad, mirroring the content on the opposite side of the circle. There was no single PowerPoint, no technology becoming the main focus. He was symbolically diffusing the focus of the list, and diffusing who disseminates the knowledge, through another participant. This was an attempt to create a peer-to-peer learning environment.

Additionally, there is an assigned moderator who, using a chime, has the ability to pause the group discussion to allow for deep thought, to repeat a topic, or to intervene in the discussion. Here, Kline is making symbolic and implicit gestures to create equity and “facilitate” learning through collaboration, via assigned roles and shared responsibilities.

For the purpose of this blog post, these examples demonstrate the analogical interpretations used to create collaborative environments by focusing on the individuals. UX and UI designers have the intuition and analogical-thought training to initiate Humanist-Centered Interaction in the same manner Kline facilitates a meeting: with a human-first mentality. UX and UI designers working with programmers will be able to create features within existing platforms that allow users to communicate an aesthetic through the platforms.

For example, this is being attempted virtually with Shindig. How well it is being done is the question. Can it be improved? Every person is in a square box, which creates a mood and feeling. Participants float and move as the moderator speaks. How does that make you feel? The black-and-white auditorium evokes a familiar feeling for those of us who have attended an auditorium class. However, do we want the reminder of a large auditorium, or an intimate, cozy coffee shop with our classmates? There is a loss of assessing the affected mood when we design in the tech industry. Do we get lost in the programming, the code, the branding, and the holistic scope of managing and finishing a project? In the tech industry, do we go for the cool wow effect?

Evolving affective features, under the control of the user:

Stronger Affinity

UX designers and programmers need to allow users to design their own “affinity spaces” through the composition features of different tools. What is an “affinity space”? Gee (2004) is credited with coining the phrase: an affinity space is a virtual environment where “the content organization is continually transformed by the interactional organization of the space.” A primary feature is “fan-produced websites,” which are socially constructed based on a mutual social interest (p. 83). I used this definition in one of my papers. To simplify, however: there is a social connectivity among users, and those users have an affinity for the social software. Affinity itself is defined as (1) a spontaneous or natural liking or sympathy for someone or something, and (2) a similarity of characteristics suggesting a relationship, especially a resemblance in structure between animals, plants, or languages. For example, if a person chooses to use a calendar app on their phone and sends out a “Save the Date” with a URL link to the event, and the group responds yes, no, or maybe, with a note apologizing as to why they can’t come, that is some serious “affinity” among a social group using the available features. A calendar app is an “affinity space” because this virtual space defies time and place, and creates a possible future for those invited to “Save the Date” who are socially networked. Conversely, if a person mails out traditional paper invites, the space remains tangible and no virtual space is created, meaning there is no “affinity space” for people to interact in. The liking of the virtual space over the analog mail meets definition (1); the similarity of characteristics between the analog and virtual calendars meets definition (2). Personally, I love wedding websites with all the information: where to shop, the date, and pictures celebrating a couple’s union. I lose paper invitations, and they kill trees!
But I digress with all this learning-science talk; I hope this gives you a practical description of an “affinity space.” My point is that UX designers can allow users a new level of creative, personalized use to increase their level of “affinity” to an application.

Composition

Ms. Mary Stribley made a beautiful page explaining the “10 rules of composition all designers live by” for your quick review.

As digital fluency rises, so will the ability to adjust features and create multiple settings. Designers understand composition and how it affects users, and they can give this power to users as simple features. They might first provide default settings to set a tone for a video call, a learning management space, or a video conference: professional, formal, casual, cozy, bestie, personalized, default, and so on. Penn State tread into this within their content management system, Evolution, changing the theme color and header images for different colleges for user recognition. However, more features should be provided to users to allow the growing digitally fluent population “to play and have fun,” which our R&D group is finding is the future of learning.

Color: Provide different colors for meeting rooms, set by the moderator, to help both themselves and the participants. Color affects our moods, and there are numerous mood charts with various perspectives: red makes us hungry, blue calms us, yellow excites us, and more. Designers can provide background colors to set the mood. Here is an image Fotolia has with the various colors associated with a boardroom icon. If you joined a meeting and saw a color icon first, with a glow around the frame, a mood would be established: a seminar on the topic of love, a class in school colors, or a meeting opening with a brand logo to set the mood.
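To make this concrete, a platform could expose these mood presets as a simple, moderator-selectable theme. Here is a minimal sketch in TypeScript; the preset names, color values, and function are my own hypothetical illustrations, not any real platform's API.

```typescript
// Hypothetical mood presets a moderator might choose for a meeting room.
type MoodPreset = "professional" | "casual" | "energetic" | "calm";

interface RoomTheme {
  background: string; // hex color behind participant tiles
  frameGlow: string;  // glow color around the video frame
}

// Illustrative color associations only (muted blues = formal,
// warm neutrals = cozy, yellows = excitement, soft greens = calm).
const MOOD_THEMES: Record<MoodPreset, RoomTheme> = {
  professional: { background: "#1f2a38", frameGlow: "#3b6ea5" },
  casual:       { background: "#f4e8d0", frameGlow: "#d9a05b" },
  energetic:    { background: "#fff3b0", frameGlow: "#f25c05" },
  calm:         { background: "#dcebe3", frameGlow: "#5b8a72" },
};

// The moderator sets the preset; every participant sees the themed room.
function themeForRoom(preset: MoodPreset): RoomTheme {
  return MOOD_THEMES[preset];
}
```

The point of the sketch is how little machinery this takes: the mood charts designers already use can be encoded as a lookup table, and the choice handed to the user rather than fixed by the platform.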

Hierarchy: Snapchat layers AR over the photo, but why? Why can’t we layer the live image over the AR? Imagine a “Sage on the Stage” setting to focus on a speaker when there is no item being screen-shared, or even when there is one. Or to proclaim a didactic lesson.

Currently, we place everyone in a square box and line them up in a row, or throw them in a corner and move them around on our screen. But why can’t the participants be in a circle, an oval, a cloud, a sun, or any other flat-design shape with different colors behind them?
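As a sketch of how simple the alternative could be: spacing participant tiles evenly around a circle is just equal angular steps around a center point. The names below are hypothetical illustrations, not any video platform's API.

```typescript
interface TilePosition {
  x: number;
  y: number;
}

// Evenly space `count` participant tiles on a circle of radius `r`
// centered at (cx, cy): tile i sits at angle 2*pi*i/count.
function circleLayout(count: number, cx: number, cy: number, r: number): TilePosition[] {
  const positions: TilePosition[] = [];
  for (let i = 0; i < count; i++) {
    const angle = (2 * Math.PI * i) / count; // equal angular spacing
    positions.push({
      x: cx + r * Math.cos(angle),
      y: cy + r * Math.sin(angle),
    });
  }
  return positions;
}
```

Each position could then anchor a tile via CSS transforms, with the background shape and color behind the circle carrying the mood, as argued above.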


Implied Meanings: Imagine a “Round Table” setting to evoke equity and depth. This could start as a primer picture and one day move into the VR realm.


Conversely, the square and the rectangle evoke different meanings and moods on a website. Designers and visual artists go through rigorous critiques to learn this, and many artists have a highly intuitive understanding of visual composition.


(Image: DataCamp email advertisement)

This is possible, but it requires collaboration between programmers and designers/artists. For example, DataCamp recently provided a class on mathematical art. Such a pattern could be the foundation of an AR background, with the face of an individual, via facial recognition, layered and integrated into or upon the frames. The shapes create depth on the screen, with composition lines drawing your gaze. Why would we not provide the moderator the ability to modify a meeting space, or to give a presentation in the clouds, all to evoke a mood? I know, valuable real estate on the screen! But are we using this valuable real estate well?

Then, however, it is not only about the user experience or a humanistic view of the technology, but about how the data being presented makes us feel. Furthermore, designing in this humanistic manner now would prepare us for VR spaces! I know, I know, this is more labor-intensive on the programming side, but it would set you apart from other software platforms.

I am not the only one putting humans first, but I will use learning science to back up the perspective:

Looking Forward:

Maybe when we obtain better haptic technology, the next level of VR will evolve. But for now, we need to create these humanistic features on the screens we have and allow the user to control them. The features would be intentionally labeled, giving our users control, and would alleviate the idea of “manipulation.” Manipulation only occurs when you don’t feel like you have a choice. In research, “participant bias” is always a worry, but those who study the humanities, artists, and designers understand this humanistic world and can help us take our technology to the next evolution. The human-centered designing I propose, combined with “mobile VR split screen,” would make mobile VR appealing within current platforms. There is no reason features can’t be provided to users to choose their access. YouTube and other media platforms are already implementing the split-screen option for VR goggles.

Summarizing:

We need to use all the screen space effectively to create an experience for our users. How we use our living space creates quality in our lives; interior design and home makeover shows wouldn’t exist if this “aesthetic impact” didn’t exist. UX and UI designers have to tap into the rules of composition and look at classic rules of art, like the Golden Mean, to create quality screen space and increase the affinity of an application. We already know people want quick bits of information. I am guilty of providing too much information at once for modern-day digital users. We are all learning on the go. This means it is not about how much information we can squeeze into a screen, no matter its size or amount of visual real estate. What matters is the quality of the experience created and whether users stay deeply engaged!
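As a sketch of how a designer might apply the Golden Mean programmatically: the golden ratio phi splits a length so that the whole is to the larger part as the larger part is to the smaller. The hypothetical helper below (the function name is my own illustration) divides a screen dimension into a major content pane and a minor sidebar in that ratio.

```typescript
// The golden ratio: phi = (1 + sqrt(5)) / 2, approximately 1.618.
const PHI = (1 + Math.sqrt(5)) / 2;

// Split a total width (or height) into [major, minor] panes
// whose ratio major/minor equals phi and whose sum equals total.
function goldenSplit(total: number): [number, number] {
  const major = total / PHI; // e.g. ~618px of a 1000px-wide screen
  return [major, total - major];
}
```

A layout engine could use such a split to place the primary video feed against a chat sidebar, giving the screen the classical proportion rather than an arbitrary one.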

References:

Albers, P., Pace, C. L., & Odo, D. M. (2016). From affinity and beyond: Online literacy collaborations. Journal of Literacy Research, 48(2), 1-30.

Eisner, E. W., & Dobbs, S. M. (1988). Silent pedagogy: How museums help visitors experience exhibitions. Art Education, 41(4), 6-15.

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical Thinking, Cognitive Presence, and Computer Conferencing in Distance Education. American Journal of Distance Education. Retrieved from  http://cde.athabascau.ca/coi_site/documents/Garrison_Anderson_Archer_CogPres_Final.pdf

Gee, J. P. (2004). Affinity spaces. In Situated language and learning (pp. 79-89). New York: Routledge.

Kline, M. (2018, March). Collaborative Leadership and Engagement: A Leader in Every Chair. Session presented at the meeting of Creativity Conference XV, Sarasota, Florida.

Norman, D. A. (2002). Emotion and design: Attractive things work better. Interactions Magazine, ix (4), 36 – 42. Retrieved from http://www.jnd.org/dn.mss/emotion_design_at.html

Tusler, D. (2017, March 3). What is the most important component of a Community of Inquiry? Medium. Retrieved from https://medium.com/@dktguy/what-is-the-most-important-components-of-a-community-of-inquiry-426b569c3f2c

Sample images borrowed from Colour box and Fotofolio
