Sense & Sensibility is a three-week project focused on machine learning, AI, and computer vision. We had to create a piece of software around the keyword "awareness".



We launched the project with an overview of the deliverables and the expectations of the brief, then started looking at some references in machine learning and AI.
In the afternoon we had our first workshop, focused on discovering the different technologies of ML and AI. We looked at Teachable Machine and how to link it with P5.js.


The first week was rich in workshops and discussion points. It was nice to discover this technology through so many workshops. Over the following days, we were taught the basics of Teachable Machine, Runway ML, ML5.js, and some computer vision techniques using Processing.

We then had a seminar on AI and computer consciousness. I think it was a really nice addition to the usual workshops we have in projects; I liked the opportunity to discuss ideas, concepts, and thoughts around the project's theme. It was really helpful.

Some exploration during the workshop sessions.


Over the week, I started to gather some references and inspirations, but I was a bit lost about the brief. I didn't know whether the brief was meant to be broad, stemming from this keyword of awareness, or whether we had to focus on the concept of computer consciousness.
I gathered some interesting references tied to the theme, and some visually interesting references not directly related to the AI/ML theme.

I really like the "Don't touch your face" application, and it inspired me to do something web-based, like a fun tool. I also found some inspiration for the variable font I wanted to make and how to interact with it.

Additionally, there are two projects by students from CIID, one exploring sound awareness and the other visual awareness. Next to those are two experimental photographs by Jean-Vincent Simonet and one of the custom textures I made last year. Everything inspired me more or less during my exploration and experimentation phase, and I still sketched some ideas. Since I focused a lot on physical artwork last year, I thought I'd like to create something more graphic, or in some way tied to graphic design. Given the length of the project, I also wanted to invest a lot of work into something more graphic and "finished".

I started to think about what type of interaction is possible with the new technologies we discovered.

ML and AI can either GENERATE, CONTROL, or ANSWER.

I then started to write down different raw ideas:

- a small series of programs exploring awareness in a fun way (through the same graphic interface)

- Variable graphics controlled by the body -> maybe do something around accessibility for disabled people

- Morphing shapes

I then realized I really wanted to play with the graphic design theme and create something more like a "useful / fun" tool. I wanted to explore how AI or ML could help us enhance our creativity with new tools, or test the limits of AI's or ML's creativity in creating graphics for us.

So I started writing down ideas again:

- Control a typeface with body or face

- Control a poster with gestures and sound

- Generate gradients or graphics (textures, shapes, assets)

- Train a model to distort or rework a picture

I had nothing really fixed in my mind, but I liked these ideas and wanted to explore them.


For the first tutorial, I asked about the brief and whether it was focused only on this consciousness area or not, and then talked through my current ideas and how I wanted to tie them to something more graphic. Jen's feedback helped me a lot in reflecting on these concepts. Although I did not have any idea of how I could pair this with AI or ML, I was reassured to keep exploring these ideas and concepts around graphic design. Jen also gave me some tools and tips to explore for my ideas, like regression with ML5 and some examples of ML5 applications from her teaching with Andreas. These were very helpful examples, as I could explore several of them and adapt my idea to what is technologically possible.



Over the weekend and the following week, I started to sort and explore my different ideas. The two most promising ideas were a variable font and a texture generator.

On one side, I started to gather some interesting textures to later train a model on them; on the other, I started working on how I could effectively control a variable font with ML5 regression.
Using Jen's and Andreas' examples on the matter, I discovered PoseNet and thought it would be very interesting to use poses to control a variable typeface. So I started to play with different sketches using regression, PoseNet, and so on. To start, I looked up an old example I used in my typographic project last year, which allows a variable font to be controlled with different inputs, in this case mouse coordinates. Here's the link.
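In code terms, that mouse-driven control boils down to mapping a coordinate onto the axis range. Here's a rough sketch of the idea; `mouseToAxis` is my own illustrative helper (not from the original example), and it assumes the weight axis runs from 100 to 200 like the instances used later in this project:

```javascript
// Map a mouse x position across the canvas linearly onto a
// variable-font axis range (assumed here to be 100-200).
function mouseToAxis(mouseX, canvasWidth, minAxis = 100, maxAxis = 200) {
  // Clamp to the canvas, then map linearly onto the axis range.
  const t = Math.min(Math.max(mouseX / canvasWidth, 0), 1);
  return minAxis + t * (maxAxis - minAxis);
}

// Inside a p5 sketch this value would drive the font via CSS, e.g.:
// textEl.style('font-variation-settings', "'wght' " + mouseToAxis(mouseX, width));
```

The commented line shows the standard CSS `font-variation-settings` property, which is how browsers expose a variable font's axes.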

Then I looked at Andreas' example with PoseNet and regression that controls a chair. This was almost what I needed to do, although I found the code and interface a bit too complex and slightly misleading. The fact that the user could train the model in the same sketch was confusing, as I thought it was the only way of doing it. Then I discovered the PoseNet regression tutorial by The Coding Train. I managed to use both Andreas' and The Coding Train's examples to control a variable font.
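The common step in these regression examples is turning each PoseNet pose into one flat array of numbers that an ml5 neural network can train on. A minimal sketch of that step, assuming the usual PoseNet pose shape (`{ keypoints: [{ position: { x, y } }, ...] }`); `poseToInputs` is my own name for it:

```javascript
// Flatten PoseNet keypoints into the flat [x0, y0, x1, y1, ...]
// input array that an ml5 regression model expects.
function poseToInputs(pose) {
  const inputs = [];
  for (const kp of pose.keypoints) {
    inputs.push(kp.position.x, kp.position.y);
  }
  return inputs;
}
```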

- Andreas' example, adapted.

- The Coding Train example, adapted.

The second example is already trained and works with the stiff/flexible pose model. It works if the pose is visible from head to toe (around 1-2 m from your webcam).

At this point, I didn't really know what my idea was; I just wanted to explore how I could technically control a font with poses. I managed to adapt the examples quite easily. The next step was, as Jen suggested, to go back and adapt the concept to this computer-awareness theme. Although, as I explored and experimented, the idea matured, and I ended up wanting to create something that looked like a photo-booth concept.


Aside from working on these sketches, I started to look at how variable fonts actually work, as I wanted to create a custom one for this project. (It was my goal for the typographic project, but I didn't manage to do it in such a short project.) Following a tutorial from the Glyphs developers on variable fonts, I started to work on a super-abstract shape that behaved like a variable font.

To create a variable font, you create two instances (for example 'light' and 'bold') and declare them as part of an axis, attributing an axis value to each. Take a weight axis, for example: maybe 100 for light and 200 for bold. When you export the font as a variable font, software like Photoshop, Illustrator, or a web browser can read the font file and extract the vectors and points. In a variable font, the vectors and points evolve depending on the instance, varying between 100 and 200 in this case.
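The idea behind those in-between states can be sketched in code. `interpolatePoint` is a hypothetical helper (not part of any font tooling): every outline point of an in-between instance is a linear blend of the matching points in the 100 and 200 instances.

```javascript
// Blend one outline point between the two declared instances.
// axisValue = 100 gives the 'light' point, 200 the 'bold' point,
// anything in between is calculated by linear interpolation.
function interpolatePoint(p100, p200, axisValue) {
  const t = (axisValue - 100) / (200 - 100);
  return {
    x: p100.x + t * (p200.x - p100.x),
    y: p100.y + t * (p200.y - p100.y),
  };
}
```

This is why the morphing is only partially controllable: you design the two extremes, and the maths fills in everything else.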

Here's a small experiment of a character morphing between the two instances (100 and 200).

Usually, variable fonts control weight, width, height, and so on; I wanted to push the boundaries of this and explore how the shape could really morph. It's very interesting because the morphing is only partially controllable: you only declare the two states (light and bold, for example), and everything in between is calculated.

My plan was first to control this value with PoseNet, which was fine to do. Then I wanted the ability to download the variable-font instance shown for your pose (this instance would usually sit between both extremes, which would give very interesting results). I did loads of research on how that actually works; as far as I got, I couldn't find any simple way to do it, especially all within P5.js. So I had to find a workaround for this issue relatively quickly.



After this second week of experiments and exploration, I abandoned the idea of the texture generator trained on images in Runway. Even if the idea was interesting, I didn't have the time to produce thousands of these custom textures. (I did try to train on a few images, but it didn't work at all, so I didn't save it.)
I stuck with the idea of a variable font controlled by body postures. Over the weekend I stressed out because I thought the whole concept was too simple and didn't tie in with the project theme. I think I was just stressing because I couldn't see what my final work could look like given all the technical issues I was facing. I was lost both in my idea and concept and in all the technical issues.

So I started to reflect on each step of the project: my ideas, inspirations, references, and research. I came back to this question: "Can we enhance our creativity with newly generated tools?" I tied this to my current idea of playing with a variable font and thought: how can I make a font generated from what the computer sees? How can I generate a new font using PoseNet?

And for some very weird and unexpected reason, I thought of the photo-booth concept. Maybe because of the word "pose"? I found it super interesting to tie my current idea to this quite old concept. So how could I make a sort of "font booth", where people pose and get a unique font based on how the computer sees them?
This was a huge relief, as I finally had a clear approach to what I wanted to do with this idea.


On Monday of the third week, I had my last tutorial, where I explained my progress so far with my explorations and some small concerns about my idea. Jen pointed out some key points to consider further with my updated idea: which pose maps to which font weight, how I could present this successfully and intuitively in a P5 sketch, what the font should look like, and so on.

This really helped me again, as I was once more reassured that I was on the right path, and it made me think about all the graphic and conceptual choices I had to consider for the final work.


After the tutorial, I spent the afternoon refining every little detail about my idea and noted everything.

So the concept would be to create a sort of online photo booth where you can download a unique font based on your pose after a capture action. It would work just like a normal photo booth: pose, countdown, and get your picture (a unique font in this case).

The font is generated from your pose. The interesting part is how the computer sees the poses and maps them to a font instance. I will go over my pose choices when covering the font design.


Quickly after this final reflection, I went back to the code I had experimented with earlier. As I now knew exactly what I wanted to do, I had less trouble choosing which technology, process, and model to use for my work.

I started by taking the PoseNet regression sketch by Dan Shiffman that I had adapted, and formatting it to match all my requirements. I used his data-collection and training sketch as well to generate my model and upload it to my main sketch. I still had the concern about being able to download an instance of the font. So far it was only possible to download a PNG of the font, like a saved frame. Then I thought I could just upload each instance file to my sketch and have it download the specific instance given the pose, but apparently that's not possible with P5. So I exported, uploaded, and linked each instance (100 static font files) to a specific link using a custom link shortener, so that I only had to create a button calling the right instance of the font.
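The workaround amounts to rounding the predicted weight to the nearest exported static instance and building its download link. A rough sketch of that, with a made-up base URL (in the real app each file sat behind the link shortener):

```javascript
// Round a continuous predicted weight (100-200) to the nearest
// pre-exported static instance file and build its link.
// The base URL is hypothetical, for illustration only.
function instanceUrl(weight, base = 'https://example.com/font-') {
  const clamped = Math.min(Math.max(Math.round(weight), 100), 200);
  return base + clamped + '.otf';
}

// A p5 button could then point at it, e.g.:
// const btn = createA(instanceUrl(predictedWeight), 'Download your font');
```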

All the technicalities of the project were resolved gradually, and I finally managed to have a pretty much working prototype of my app, although it didn't look convincing, as I was only using one character of my custom test font and no styling at all.

I later added the final trained model. I recorded 3-4 stiff poses (instances between 100 and 125) and 4-5 flexible poses (instances around 175 to 200).


During this project, Jen ran an OpenSkill workshop that taught us how to go beyond the P5.js canvas and add HTML and CSS to the webpage. I thought it would be the perfect opportunity to enhance my idea and create a nice interface for my project.

I designed a simple mockup in Sketch. I took into account the fact that I couldn't code a very sophisticated design, so I went for a very modern and simple approach to the web design. The mockup helped me place my elements, but then I started exploring different graphic styles with CSS alone. That's how I found it more interesting to have a rounded button rather than plain text, and so on.

Sadly, I couldn't manage to do this, as P5's instance mode is relatively complex... So I had to implement all the styling in the P5 sketch with the .style() function. It was very tricky at first to understand how the whole sketch behaved, but then it became quite simple to handle with the .style() function.
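To keep all those .style() calls manageable, the styling can be grouped in one place. `applyStyles` is my own small helper (not a p5 API); it works with any object exposing `.style(property, value)`, like a p5 element:

```javascript
// Apply a map of CSS properties to a p5-style element via its
// .style(property, value) method, so the styling lives in one place.
function applyStyles(el, styles) {
  for (const [prop, value] of Object.entries(styles)) {
    el.style(prop, value);
  }
  return el;
}

// Usage in the sketch, e.g.:
// applyStyles(button, { 'border-radius': '50%', 'font-size': '18px' });
```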

Here's the app running with half the styling.

In the end, I managed to get the design I wanted with all this additional CSS and HTML. Here's the final version. There are some differences from the mockup I designed, because I revised a lot of aspects of the design while testing the sketch.


Aside from the web design, I still had to design the variable font. I started with some experiments, as shown previously, mostly to test and explore how variable fonts behave. Then I started to sketch some ideas and think about what kind of style I wanted.

I also had to think about what would actually vary: the weight, the shape, the width. While reflecting on my idea, I discussed with Théo whether I should do something super abstract and experimental, or a variable font going from a "normal" state to an experimental state. I thought the approach from normal to experimental was more interesting, as I could map this concept to poses quite effectively.

So by choosing this concept for the variable font, I almost automatically found my pose concept (the model I would need to train later on). If the pose is relatively stiff, the sharp instance is dominant; if the pose is more flexible, the experimental instance of the font is dominant. This mapping is somewhat literal, but I think it pushes this concept of being playful in front of the camera.

Here's the sharp to the flexible instance.

Here's the font evolving from the two instances.

As I settled on my idea and resolved some technical issues quite late, I didn't have time to design all the basic characters. That is something, though, that I really want to finish and expand.


As stated in previous parts, I had a lot of problems during this project.

The first main one was figuring out the final idea. I think not knowing the technology was a bit overwhelming at some points, and I guess the first project in quite a long time was maybe stressing me out.

Then I had an issue figuring out how to enable a download for each font instance, which I resolved quite easily after all.

And finally, the instance mode of P5: I couldn't manage to make it work, so I had to style everything within the P5 sketch.

I think a lot of the little technical issues were resolved by finding workarounds. They work, but I don't think they are optimal. One way of improving the final work would be to fix all these issues without a restrictive timeframe.


My final piece is a website where any user can interact and play with the variable font and generate their own unique static font based on their pose. There's a small camera feed with a counter, just like a photo booth. The font in the middle of the website evolves live depending on the user's pose.

This lets the user play a bit before hitting the "I'm ready" button. When it is pushed, a little countdown starts before the pose value is captured and used to set the font-weight value. Once the value is obtained, a new button appears with a link to the right font instance.
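That capture flow can be sketched as a tiny state machine; `createBooth` is my own illustrative naming, not code from the app. The idea: tick once per second during the countdown, and when it reaches zero, freeze the current predicted weight as the final font instance.

```javascript
// A minimal sketch of the photo-booth flow: count down, then
// capture the live predicted weight as the final font value.
function createBooth(countdownSeconds = 3) {
  let remaining = countdownSeconds;
  let captured = null; // final weight once the countdown ends
  return {
    tick(currentWeight) {
      if (captured !== null) return captured; // already captured
      remaining -= 1;
      if (remaining <= 0) captured = currentWeight;
      return captured; // null while still counting down
    },
    get remaining() {
      return Math.max(remaining, 0);
    },
  };
}
```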

The user can either download it or restart the process. Additionally, there are some indications on how to use the app at the bottom right or in the about section with credits.

The idea was to create a tool with machine learning that would help designers, or anyone, obtain a relatively new and unique graphic asset such as a font, quickly and in a fun, unusual way.
This can also be seen (depending on how people look at it) as literally a font booth, where you pose for fun and get a unique font depending on yourself and how you behave on camera. This ties quite well with my question of "how can a computer help us in our design practice, and how can it enhance it with these newly generated tools?"

I uploaded and published the final app with GitHub Pages, updating it through the GitHub repository, as it is a simple and free way of putting it online.

You can access the published and latest version of the web app here.


I reflected a lot during this project, probably because it was the first of the year after quite a long break. This reflection actually helped me a lot. I often felt lost and overwhelmed, as I didn't want to fail this project, but these reflection phases helped me find solutions and adapt my idea.

Towards the end, I felt my idea wasn't as good as I had thought, probably because I was focusing way too much on making sure it technically worked. The closer I got to completion, the less sense my concept made to me, and I started to dislike the idea more and more. I think this was mainly because of the big technical load behind it.


Even though my project works, there are still a lot of improvements I want to make. As I said: fix the workarounds, finish the font, and why not add multiple fonts or more axes to the current one.

As Jen said during the review, it could be interesting to extend this idea of a photo booth and display, alongside the font, the actual corresponding pose.

And there are probably a lot more interesting little add-ons that I haven't figured out yet, but that's definitely a project I want to keep alive and feed from time to time. And if I take this to the next level, I guess I'll put it on a custom domain.


I was excited to start working again and found the project exciting too. AI and machine learning are very new technologies, and creating something with them was really fun and interesting. There's a very broad range of applications, and I was very happy to find my way to this font-booth kind of concept.

I was also quite impressed by how well it went given the current times; it was indeed destabilizing at first but became the new normal quite fast. I liked having the Miro and Padlet tools as well as the seminars and multiple contact points.

Overall it was a very nice project, stressful as it was the first in a long time, but very fun and exciting to work on!