The mood by touch project began with my interest in VR, multi-modality and affective engineering. I wanted to try to create an experience that instills emotion through stimuli in three different modalities – visual, haptic and auditive. The project was built in Unity as one base world in four different variations inspired by Plutchik’s Wheel of Emotions (1). To make sure that what I actually created was in line with what others would feel, I did some academic research on how to instill anger, fear, sadness and hope with visual, auditive and haptic stimuli.
What is important to know about emotion is that it is hypothesized to be pre-cognitive, i.e. occurring “before thinking”. Emotion is either innate (i.e. present from birth, genetic if you will) or nurtured by life experiences, even though it can also be instilled and regulated by cognitive processes. Most of the research conducted on instilling emotion looks at correlations between very basic features of visual, auditive and haptic stimuli and certain self-reported responses.
That emotion is instilled in response to stimuli was hypothesized by Darwin as early as the 19th century, when he observed that humans seem able to attribute feelings to animals based on their sounds. Similar findings were reported by researchers from Vrije Universiteit Brussel and Ruhr-Universität Bochum just a couple of years ago (2). However, instead of testing the ability to attribute specific emotions to animals, that study aimed to find out whether participants could differentiate between high- and low-arousal states.
This is consistent with another article I found, Current Emotion Research in Music Psychology (3), which reviews the state of the field and notes that the dimensions with a clear correlation between intended and actually instilled emotion are limited to arousal and valence – roughly, wakefulness and pleasantness. If arousal is at rock bottom you are asleep, and if it’s peaking you’re really tense. If valence is high you’re feeling some kind of positive feeling, and if valence is low, some kind of negative feeling.
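To make the two-dimensional model concrete, here is a minimal sketch of how a point in the arousal–valence plane maps to a quadrant label. The labels are illustrative quadrant names in the style of circumplex models, not terms taken from the cited article:

```python
# Hypothetical sketch: classifying a point in the 2D arousal-valence
# space into one of four quadrant labels. Labels are my own
# illustrative choices, not from the cited review.

def quadrant(arousal: float, valence: float) -> str:
    """Both inputs range from -1.0 (low) to 1.0 (high)."""
    if arousal >= 0:
        return "excited/happy" if valence >= 0 else "tense/angry"
    return "calm/relaxed" if valence >= 0 else "sad/depressed"

# High arousal combined with negative valence lands in the tense corner:
print(quadrant(0.8, -0.6))  # -> tense/angry
```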
These two dimensions didn’t give me the granularity I had hoped for, since I wanted to pinpoint specific feelings from the Wheel of Emotions (1), but at least I had a formalized starting point from which to continue my work.
The 2D emotion wheel (4):
Having learned what I had to work with, I moved on to finding out how to instill low/high arousal and valence. My first finding, from the same article on current research in music psychology, was that while we are quite good at picking up the arousal of an arousing stimulus and vice versa, we’re quite bad at detecting the valence of a stimulus (3). For arousal in sound, however, there seems to be a correlation with pitch and frequency: high-pitched, high-frequency sounds are high in arousal, while low-pitched, low-frequency sounds are low in arousal. As the tools for designing more advanced haptics in Unity are also based on frequency and pitch, I decided to use these same principles to design the haptics.
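The pitch/frequency principle above can be sketched as a tiny mapping from a target arousal level to parameters that could drive both a synth patch and a haptic pulse. This is my own illustrative mapping, and the numeric ranges are assumptions chosen for the sketch, not values from the research:

```python
# Illustrative sketch (my own mapping, not from the cited research):
# a target arousal level in [0.0, 1.0] is turned into a pitch and a
# pulse rate. Low arousal -> low, slow tones; high arousal -> high,
# rapid ones. The ranges (110-880 Hz, 0.5-8.0 pulses/s) are assumptions.

def arousal_to_params(arousal: float) -> dict:
    """Map arousal in [0.0, 1.0] to hypothetical sound/haptic settings."""
    return {
        "pitch_hz": 110 + arousal * (880 - 110),       # low hum -> high tone
        "pulse_rate_hz": 0.5 + arousal * (8.0 - 0.5),  # slow -> rapid pulses
    }

# A calm scene sits at the bottom of both ranges:
print(arousal_to_params(0.0))  # -> {'pitch_hz': 110.0, 'pulse_rate_hz': 0.5}
```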
And much as sound can be perceived as sharp or round, shapes with similar visual properties seem to instill similar states of arousal. I used a couple of blogs as a foundation for the visual design (5, 6).
In another literature review, Experimental Methods for Inducing Basic Emotions: A Qualitative Review (7), the researchers had tried to map emotions with a bit more granularity to certain auditive and visual specifics. The study discusses the relation between “real things” and instilled emotion, e.g. that spiders, snakes and other threats are usually what instills fear, and that happy facial expressions can be used to instill happiness. On audio, it yielded some usable insights:
Happiness: fast tempo, major harmonies, dance-like rhythm
Fear: classical music, film scores (fearful music can also co-induce surprise and anxiety)
Sadness: slow tempo, low volume and minor key
This gave me some more dimensions to work with as a basis for the sound design.
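The bullet points above can be captured as a small lookup table to consult while composing. The field names are my own, and any structure beyond the review’s wording is an assumption for the sketch:

```python
# The review's audio insights as a lookup table. Field names and the
# dict structure are my own; the values paraphrase the bullet list above.

AUDIO_DESIGN = {
    "happiness": {"tempo": "fast", "key": "major", "rhythm": "dance-like"},
    "fear": {"style": "classical music / film scores",
             "co_induced": ["surprise", "anxiety"]},
    "sadness": {"tempo": "slow", "volume": "low", "key": "minor"},
}

# Composing a sad theme? Check the table:
print(AUDIO_DESIGN["sadness"])  # -> {'tempo': 'slow', 'volume': 'low', 'key': 'minor'}
```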
To determine which color schemes to use, I simply used Picular (8), a site by Future Memories that works as a sort of “Google for colors”: you search for a word and see which colors it’s typically associated with.
Concept and design
From the research I conducted, I had a rough idea of how to instill arousal through shape, color, sound and haptics. So at this point I started to fuse them all by creating specifications for each mood I was aiming to convey. Although I knew I couldn’t instill specific feelings in a sure-fire way, I still decided to move forward with the very specific emotions of “Hope”, “Anxiety” and “Despair”, combining my research with some intuition to try to convey the feelings properly.
Then I went ahead and made some moodboards with colors, pictures and songs that should convey the emotions I was looking to instill:
I decided that all the music should be soundscape-esque ambient. But I soon realised that the music was taking me somewhere other than despair and anxiety, so I dropped those themes and made four new ones: anger, fear, sadness and hope. I made some 3D sketches in Cinema 4D to settle on color palettes to go with the music I had created. Then I worked some more on the music, to fit even better with the visual moods I had managed to convey.
I didn’t do many iterations in the concept phase, but I went back and forth a lot between different techniques to get things to match in terms of quality, setting and technicalities. Since I was planning to work with quite simple geometry to begin with, I decided to work in Unity right off the bat when making the actual project. Before this project I had only dabbled in Unity and was a bit reluctant to do anything more than necessary in it, but the decision to build the actual scenes in Unity turned out to be quite a good one, as it sped up my workflow a lot.
Time was also pressing at this point, so instead of designing four individual scenes, I went with one single base scene with four variations. The landscapes, post-processing, lighting, sound design and some of the modelling are done by myself, but the vegetation consists of assets that I’ve bought.
It looks and sounds like this:
I’m quite happy with the visual and auditive results.
At this point the Unity project is playable in VR, but I haven’t yet designed the ‘hub world’ where the user touches spheres associated with the worlds of the corresponding moods/emotions (hence the name “mood by touch”), nor have I implemented the haptics. Hence there hasn’t been a lot of programming in this project so far, but when that is done I will update this blog.
Even though it’s not fully completed, it’s been a fun project where I’ve tried to combine academic research in psychology with my aesthetic and technical know-how. Hopefully, when the project is done, I’ll be able to test it on some people to see if I’ve successfully conveyed the emotions I was seeking to convey. If I do, that will be a separate article.
’Til then, take care, and don’t hesitate to ask me questions about the project! 🙂