Design Domain Part 2 is mostly dedicated to the technical realization of the idea we developed during Part 1. A few days before the second part launched, I managed to reflect on my first idea.
I originally wanted to translate emotions through typeface animations. This idea was very vague and perhaps too general, so I decided to focus on something else, but I didn't really know what.
After a few days of reflection, I decided I wanted to use a physical input, like voice or heartbeat, to trigger the animations.


I spent the first week trying to set things up in my head and come up with a precise, clear idea. After the first tutorial on Monday, I thought about which physical input I should use: heart rate, voice, keyboard, etc.

I decided to stick with voice input and potentially use speech recognition as well.
On Thursday, during our individual tutorials, I explained the idea of how voice and speech recognition could animate typefaces. I wasn't very sure what kind of visual output it should be, though. Is it going to be a single animated word, or the whole sentence we said, written differently? Is it going to be playful or not? Will it be abstract or very literal? What are the intentions behind it?


I had a very serious idea for this project and wanted to create a very serious piece of work. Considering my technical knowledge and the project time frame, I quickly realized I would need far more time and technical ability to create what I had imagined.

Jen rightly made me think of creating something quite abstract and playful. Emotions are very delicate to talk about, and I didn't want to aim for something "serious" and end up with something totally messed up, incoherent and false. I committed to this voluntarily simple stance of literally translating very vague parameters into graphics.
I identified these parameters as voice parameters. I wanted to create something very literal: if you say "I'm happy", for example, a happiness-evoking visual will appear. The sentence can be more elaborate as well.

I think the piece simply recreates the mood you want to express, literally, with visuals. As emotions are very complex, diving into the intricacies of the concept and all the aspects it can have would be very long, complicated, and definitely too much for a two-week project.

I spent a lot of time thinking about the whole concept and slowly started to panic because too many ideas kept popping up every hour, and I wasn't sure whether my original idea was the best, or any good at all.
I finally stuck to the simple, childlike visual feedback of our voice.

The flow of ideas I noted in my sketchbook.



At the start of week 2 I began coding in p5.js. I experimented with Jen's voice recognition sketch and played with it. As I wasn't really sure of the final output, I mostly experimented with various things using speech recognition.

I tried some experiments featuring type distortion driven by the volume and started sticking to this idea. But I didn't see the purpose of it, or how it would connect to my first idea.
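Although that distortion experiment is gone, the core logic I was playing with can be sketched as a plain function, independent of p5.js rendering (all names here are hypothetical, not my actual code): given a normalized microphone level, compute a vertical jitter offset per character, so louder input shakes the letters more.

```javascript
// Hypothetical sketch of volume-driven type distortion logic.
// level is a normalized mic reading in 0..1; maxJitter is the largest
// vertical displacement (in pixels) a character can get at full volume.
function jitterOffsets(text, level, maxJitter = 20) {
  const offsets = [];
  for (let i = 0; i < text.length; i++) {
    // Deterministic pseudo-random value per character index,
    // so each letter jitters differently but reproducibly.
    const r = Math.sin(i * 12.9898) * 43758.5453;
    const unit = r - Math.floor(r); // fractional part, in 0..1
    offsets.push((unit * 2 - 1) * maxJitter * level);
  }
  return offsets;
}
```

In a p5.js draw() loop, each character of the word would then be drawn at `(x, baselineY + offsets[i])` with `text()`, re-sampling the mic level every frame.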

I only settled on my final idea on Wednesday. As I said, I was very limited by my basic knowledge of p5.js, and I really understood that distorting typefaces is hard, takes a lot of time, and was very ambitious for such a short time frame. So I decided to "hack" my final output. As I had no idea how to work with vector type within p5.js, and I wanted quite unusual letterforms, I decided to use, and also create, special fonts for the visual feedback.

I created two typefaces for the project, inspired by some work I had done previously.

Two of my posters from last year

The two typefaces created in Illustrator

Once I had the typefaces, I had to create different environments corresponding to a very large spectrum of emotions. I set these environments by changing the background color and the text fill color. The aim was to create very simple and straightforward visual feedback to the user's voice.
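The environment idea boils down to a lookup from an emotion word to a background color and a text fill. A minimal sketch of that mapping (the color values below are illustrative placeholders, not the ones from my sketch):

```javascript
// Each recognized emotion word maps to an "environment":
// a background color plus a text fill color.
const environments = {
  happiness: { background: '#FFD900', fill: '#000000' },
  sadness:   { background: '#1B2A6B', fill: '#FFFFFF' },
  love:      { background: '#FF2D55', fill: '#FFFFFF' },
};

// Fall back to a neutral environment when no emotion word is recognized.
function environmentFor(word) {
  return environments[word] || { background: '#FFFFFF', fill: '#000000' };
}
```

In the p5.js draw loop, each frame would then call `background(env.background)` and `fill(env.fill)` before drawing the text.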


Very sadly, I lost everything I had coded on Thursday evening due to a p5.js bug (the editor crashed and erased the project). I had to recode everything from scratch, and I didn't have the time to recreate exactly the same thing with exactly the same features.
My current code features limited visual feedback, responding only to the few words I managed to set up for the exhibition. The previous sketch featured lexical field recognition, allowing the user to speak freely about something and get visual feedback depending on what was actually being said. I had also managed to set up an FFT analyzer, allowing me to analyze (very approximately) some voice characteristics such as pitch and volume. These characteristics also had an impact on the visual output; for example, if the voice was loud, the text size would increase.
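The loud-voice-bigger-text behavior reduces to a simple linear mapping. Here is a hedged reconstruction of that logic as a standalone function (the size range is an assumption; the input is a 0..1 amplitude reading, like the one p5.sound's `Amplitude.getLevel()` returns):

```javascript
// Map a normalized amplitude reading (0..1) linearly onto a text size
// range, clamping out-of-range input so noisy readings can't explode
// the type size.
function textSizeForLevel(level, minSize = 24, maxSize = 120) {
  const clamped = Math.min(1, Math.max(0, level));
  return minSize + clamped * (maxSize - minSize);
}
```

In p5.js this value would be passed straight to `textSize()` each frame, so the type visibly swells as the speaker gets louder.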

Another thing that didn't go well: the speech recognition worked perfectly in English until I lost the code. I might have made some mistakes while re-coding the sketch, but I don't remember touching anything related to language. For mysterious reasons, it currently only recognizes French.

My final sketch features visual responses for these words: love, angriness, happiness, sadness, anxiety. It only works if you deliberately speak those words with a French accent.


As I said, the final sketch has speech recognition and does trigger a visual response if certain words are pronounced (with a French accent, apparently). If none of the words I set are pronounced, it simply writes everything you say with no special visual response. In a way, this translates an emotionless state of the user, as nothing is triggered. Of course, that's not really the case here, as the sketch only detects five words (emotional states can't be summed up in five words).
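The trigger logic itself is just a keyword scan over the recognized transcript. A minimal sketch of it (the function name is hypothetical; the word list is the one from the sketch, and `null` stands for the "emotionless" case where the raw transcript is displayed with no special visual):

```javascript
// The five emotion words the final sketch responds to.
const triggerWords = ['love', 'angriness', 'happiness', 'sadness', 'anxiety'];

// Scan a recognized transcript and return the first matching emotion
// word, or null when nothing is triggered.
function detectEmotion(transcript) {
  const words = transcript.toLowerCase().split(/\s+/);
  for (const w of words) {
    if (triggerWords.includes(w)) return w;
  }
  return null;
}
```

In the sketch, a non-null result would switch the environment and typeface, while a null result just prints the transcript as plain text.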

Here are the visual outputs :


As Design Domain is a School of Design-wide project, the Open Studio exhibition was scheduled to showcase every student's answer to the subject. We therefore had to think about a setup for our work.
I talked about it with Jen and was thinking about setting up a big screen with a proper directional microphone. But as I started to panic when I lost my code, I didn't really have the time to sort this out. Instead, I decided to set up my MacBook Pro with headphones (with a microphone on the headphone cable).
In hindsight, I think the headphones weren't a bad idea, as they canceled the surrounding noise and allowed users to focus on what they wanted to say. The setup was really minimalist and straightforward, just like the work I created; I think it's important to have the two universes (physical and digital) match.
I tried to make the user focus mainly on the screen and their voice, so the experience of self-representation would be more accurate.

I think I should also change the initial state message, as it wasn't very intuitive, or maybe add an instruction sheet.

Here's the final setup: a computer and a pair of noise-canceling headphones equipped with a microphone.

Overview of the setup area. It shows the need for isolation for a true experience.

And the small description.


For the assignment, we have to submit a documentation video of our work (two minutes maximum). I started by recording myself testing the speech recognition, and then some of my peers trying the work during the Open Studio. I'm also going to shoot a proper demonstration where the sketch is working.



The first part of Design Domain was really intriguing for me, and I loved being completely free to create almost anything answering this year's subject. I know it wasn't supposed to be our first project, but it did kind of mess up my ideas. As it was my first project, I had absolutely no technical knowledge, and I knew that Part 2 would be much harder.

So as I started Part 2, I was very destabilized by the fact that I didn't really know how to technically realize my idea. That's why I started changing my idea every time I stumbled upon an obstacle. I think I lost a lot of precious time trying to figure out the whole meaning of my project when it was right in front of my nose. I kept wanting to do something elaborate and serious.
Fortunately, Jen sparked the idea that it could be very simple and playful. It doesn't have to be a thesis about emotion translated into type, just some simple visual feedback about roughly identified emotions. A childlike vision of emotion.

If I had to summarize the concept briefly, I think I just wanted to showcase a very childlike approach to how we sound emotionally: who we are as humans, what we look like, what we feel, and to raise awareness around identifying emotions.

I think it directly relates to "what do we look like?" and "who are we?", as the work approaches both questions.


I think there's a lot to say here. As I clearly ended up with a very pared-down version of what I originally wanted, the whole thing would need a lot of code additions, and of course more, and more elaborate, visual feedback. I was quite happy with the final setup during the Open Studio. It would be perfect if I had the time to build a big, clean setup and create a kind of immersive experience where users are confronted with themselves; something quite private, so there's no judgment or background noise going on.


Overall, I really liked the project despite having a lot of difficulties. I think it's a superb opportunity to create a big, ambitious project with a more considered concept. Sadly, I didn't manage my time and ideas well; I kept feeling lost until the very last minute. I also had the code accident, which messed up a lot of things as well.
I know the summative is due very soon, but I would love to push this project further, at least for myself. I think it's very interesting.
Hopefully, 2020's Design Domain will be much better, as I have learned from my mistakes.