DESIGN DOMAIN PART 2 (Y2)
"The subjectivity of consideration of an object as a material vitality"
LAUNCH & SYMPOSIUM
Design Domain part two kicked off with another symposium. I attended the talks by Jessica Inn and Caroline Till, both of which were interesting. Since I was happy with my proposal, I didn't reflect much on how those talks could feed into this particular project, but I found their points of view and working processes very interesting and worth keeping in mind for the future.
PART ONE PROPOSAL
My idea for this Design Domain edition is to create a small machine that, based on people's faces, generates a short identity description/affirmation in a fortune-cookie format. I want to question the potential of these sentences to influence people (or not) in their future. In part one I based my research on the Barnum effect and object-oriented ontology. The Barnum effect describes how easily people can be influenced by a description or projection of themselves that appears tailored to them (horoscopes, astrology). Object-oriented ontology is a theory that questions how non-human things (objects) can have a really important impact on humans.
From these two theories, I wanted to see how a small piece of paper with a tailored description/projection of someone's identity could influence them, and how much they would care about the piece of paper itself.
The proposal explains everything in more detail. As indicated there, I wanted to use a thermal printer and facial recognition technologies, though I wasn't exactly sure it was doable, or which face recognition process I should use for this particular idea (Processing, p5.js + API, Runway ML, etc.).
Some explorations I did with different facial recognition and machine learning processes, trying to find the best fit for my idea.
I used my proposal a lot as a base for the two weeks, and it really helped me figure out little details I had missed or forgotten during the "break" between the two parts.
IDEAS & RESEARCH
For the second part, I started preparing a few weeks in advance by reading articles and essays and looking for the best possible technologies and tools to realize this project.
I asked Jen about the thermal printer and whether there were any major technical obstacles to my idea. I was able to order an Adafruit printer a week in advance, so I would get it on time and could start the setup as soon as the project started. For the technical part, Jen reassured me that it was possible to drive the printer from Processing or p5.js, so I didn't have to worry about learning a whole new technology or coding language.
A week after part one, I discovered an art/photo magazine called Foam whose latest edition was about 'PLAY'. It was a very nice coincidence, and I thought it would be a perfect source of references and reflection for the second part of Design Domain. It features some very interesting essays about the playfulness of AI and the playfulness of appropriation, and it helped me position my idea in a more playful way.
The intro of the Foam magazine
I re-read my proposal after those readings and realized I had to tweak some elements of my initial idea. I prepared some questions to ask Jen during our group tutorial, as I wasn't sure about some parameters of my idea.
We had a group tutorial on Tuesday. It helped me consider which parts of my idea I should revise based on Jen's feedback. I also asked some questions: should my sentences be AI-generated? Should I focus more on a playful and happy theme for these identities? Plus some other technical points. It led to interesting discussions with Jen and other students and really helped me head in the right direction with my final concept.
REFLECTION & FINAL CONCEPT IDEA
As mentioned before, the tutorial helped me figure out the last details of my final concept before starting to create and build the piece. I was uncertain about some parameters, especially the feedback sentences and how my facial recognition system would work with them.
The final idea, which differs slightly from the proposal, is a generator of happy identity descriptions/projections based on facial landmarks detected by an AI. A short sentence describes who you are and is then printed on a piece of paper that the user can keep. With this piece, I want to question whether this biased and fake identity can influence the user, and whether the object (the small piece of paper) actually makes a difference and has more impact than the same feedback displayed on a screen.
After settling on my final idea, I could start to work on the different parts of my project. For this one, I wanted to try another way of working: dividing tasks and working on them in parallel. I think it was the most efficient approach, as I had a lot to build and code for this project.
I had to get the printer working with Arduino, link it to Processing over serial, send data from Processing to Arduino, create my facial recognition system, build the final product and figure out the staging.
As I wrote in my proposal, even though my idea is mostly a physical object, I still wanted to add a touch of graphic design by creating a visual identity for the project. As explained there, I chose experimental fonts and created a set of colorful shapes that would remind the user of a playful environment. I would figure out later how to implement this on the final product.
My proposal cover featuring the playful environment
Another visual from my proposal
Some references I used to create this
I then started to work with the printer I ordered, following the tutorial Adafruit provides on their website. It was really helpful, and I easily got the printer working with Arduino, though I had a bit of trouble figuring out the baud rate. I experimented a bit by printing different types of text and adding the playful shapes to the prints. I then looked at how to link it to Processing with some very simple code, and managed to trigger the printing process over serial with a basic binary response. Later I would have to work out how to transmit data from Processing to Arduino and on to the printer, in order to print the right sentence while the facial recognition was running.
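Conceptually, that "basic binary response" boils down to writing a single byte over serial. Here is a minimal plain-Java sketch of the idea; in the real Processing sketch the port would be a processing.serial.Serial object, and the '1'/'0' protocol below is just an assumption for illustration:

```java
import java.io.IOException;
import java.io.OutputStream;

// Plain-Java sketch of the binary print trigger. A generic OutputStream
// stands in for the serial port so the logic runs anywhere.
public class PrintTrigger {
    private final OutputStream port;

    public PrintTrigger(OutputStream port) {
        this.port = port;
    }

    // Send a single byte: '1' asks the Arduino to print, '0' stays idle.
    public void send(boolean shouldPrint) {
        try {
            port.write(shouldPrint ? '1' : '0');
            port.flush();
        } catch (IOException e) {
            throw new RuntimeException("serial write failed", e);
        }
    }
}
```

On the Arduino side, the sketch would simply read the incoming byte and call the printer library when it sees '1'.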
Following the Adafruit tutorial to get the printer up and running.
Troubleshooting some errors with the printer
Final experiments with successful results
During the first week I really struggled to find the right tool for facial recognition. I first looked at Processing and FaceOSC; then, after some discussion with Jen, we agreed on working with p5.js and a facial recognition system. I initially wanted an app that detected face landmarks as key points; from the key-point coordinates I would be able to create conditions and map different profiles to sentences. That turned out to be quite complicated, and I remembered that Runway ML had some recognition models. I thought it would be much better to use that system and attribute answers to specific words: if the model detects the word "hair", for example, the action could be triggered.
I chose to work with the DenseCap model, which gives me text description data and is a bit more detailed than other models like MobileNet or YOLACT. The DenseCap example in the Processing Runway ML library didn't work, probably because the model had been updated but the example hadn't. Jen kindly submitted a fix request, and it was working a few hours later, just before the weekend.
Thankfully I could experiment a bit with the DenseCap example over the weekend and start linking everything together.
Different face recognition systems in Processing/p5.js/Runway
PROTOTYPING & EXPERIMENT
After our Monday morning session, I started to gather all the code snippets I had accumulated over the previous weeks, such as the Processing-to-Arduino data sketch, the Arduino print sketch and the DenseCap sketch. I began combining them into a final set of sketches regrouping every feature (facial recognition data, sending data to Arduino and printing via Arduino).
After successfully combining them, I still had a lot of adjustments to make to the printing process, the data I was gathering and the way I was using it.
I managed to get the data from the facial recognition into Processing, but I had to code an efficient and less biased system to compare the data and use it as triggers for my piece. I set a dozen rules based on key face features (nose, mouth, hair, etc.). If one of these features was detected, it would trigger the action of sending the data to Arduino and printing it.
Once I managed to create this, I attributed a placeholder sentence to each of my rules to check that each rule printed its own unique sentence when triggered. I kept working on it and added a random pick from the DenseCap data so it wouldn't always be the same first word being detected. This was to counter the formulaic structure of the model's output, as it always started like "the woman/man has/is...".
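The random pick can be sketched roughly like this in plain Java (Processing is Java-based; the skip list below is my guess at the caption boilerplate, not the actual code I used):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

// Sketch of the random word pick used to counter DenseCap's formulaic
// captions ("the woman has...", "the man is...").
public class CaptionPicker {
    // Boilerplate words that open almost every DenseCap caption
    // (illustrative assumption).
    static final List<String> SKIP =
            Arrays.asList("the", "a", "woman", "man", "has", "is");

    // Return one random non-boilerplate word from the caption,
    // or "" if nothing usable is left.
    static String pick(String caption, Random rng) {
        List<String> candidates = new ArrayList<>();
        for (String word : caption.toLowerCase().split("\\s+")) {
            if (!SKIP.contains(word)) {
                candidates.add(word);
            }
        }
        if (candidates.isEmpty()) return "";
        return candidates.get(rng.nextInt(candidates.size()));
    }
}
```

Collecting the candidates first keeps the pick uniform instead of biased towards the start of the caption.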
Once this part was functioning, I had to lay everything out properly on the paper, and for some reason I had massive trouble doing this. Using just Arduino was fine, but Processing made it a bit harder; I printed almost 5 m of paper testing layouts. It turned out Arduino was running the print function twice and printing everything in duplicate. I didn't get it at first, so I was really confused, and I ran far too many tests given that the solution was simply to add a delay to the printing function. I finally got the right data printed with my visual identity around it.
The last step was to add an external camera (a small webcam) to the whole thing, which was quite easy at first, but then I couldn't manage to get this camera as a source in Processing. Jen kindly helped me fix this really quickly so I could move on to the next step.
As explained, my code was initially split into multiple sketches. On one side I had two different Arduino sketches: one did the simple print actions triggered by a mouse press in Processing, the other handled the text data link between Arduino and Processing.
On the other side, in Processing, I had multiple sketches that I used to experiment with and prototype my piece: one dedicated to transmitting simple text data to Arduino, another to getting the data from the Runway DenseCap model, and one last to clean the text data and select one word from a sentence.
Then I had to combine everything, which I expected to be super complicated, as these things rarely go as planned, but it was actually very easy and worked perfectly the first time.
To sum up the process: on one side I have the Runway model DenseCap, which detects elements in a camera shot and creates text descriptions of them. With a Processing example, which Jen got fixed, I could get the text data from Runway directly into Processing. In Processing I set conditions on a dozen words (face features). If these words are detected in the data from Runway, a print action is triggered. This print action is transmitted to the Arduino board over serial, and the Arduino passes it on to the printer.
To compare words, I used the equals() function in Processing. I compared, one by one and in random order, every word Runway was sending to Processing. As soon as one matched a condition, the action would execute, transmitting the data and the print command to Arduino. I added a delay so the printer wouldn't print continuously.
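A minimal sketch of this equals()-based matching, with made-up feature/sentence pairs standing in for my real rules:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the word comparison: every word coming from Runway is checked
// against the condition words, and the first match returns the sentence
// to send to the Arduino. The pairs below are illustrative examples only.
public class FeatureMatcher {
    static final Map<String, String> RULES = new LinkedHashMap<>();
    static {
        RULES.put("glasses", "you have nice glasses");
        RULES.put("hair", "your hair looks great today");
        RULES.put("mouth", "that smile suits you");
    }

    // Return the sentence for the first detected word that equals a
    // condition word, or null when nothing matches.
    static String match(String[] detectedWords) {
        for (String word : detectedWords) {
            for (Map.Entry<String, String> rule : RULES.entrySet()) {
                // Same idea as in the Processing sketch: a plain equals() check.
                if (word.equals(rule.getKey())) return rule.getValue();
            }
        }
        return null;
    }
}
```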
Runway DenseCap model running
Code snippet, provided by Jen, to send the data to Arduino
Little function to clean the Runway data and select from it randomly within Processing (in order to get a wider range of possible outcomes)
Arduino code to print
The second tutorial focused a bit more on the staging (at least for me). As I had finished all the technical parts of the piece, I had to think about the writing of my sentences and the staging. We discussed different staging possibilities, with stickers or raster engraving on an acrylic box.
We also discussed whether there should be any feedback when the system was triggered. I could have set up an LED to notify the user that the machine was working, but when testing the final setup, the response was almost always immediate. I also tried putting the LED inside the box as a notification, but as the box was already really exposed to light, the LED's blinking was barely noticeable.
I decided to go with the raster engraving on the acrylic to carry my playful visual identity, and without the LED, as I thought it wasn't really necessary after all. The piece itself was quite straightforward and the response almost immediate.
Since the concept is based on playful identity descriptions, I had to write a bunch of them, one for each face feature I set as a condition (hair, mouth, moustache, etc.). I first wanted them to be generated by an AI, as that would be less biased and probably more interesting to have: really playful and probably making less sense. But I couldn't spare the time with the deadline so close, so I decided to write them myself. As discussed during the first tutorial, I wrote these sentences with a happy and playful touch. I think it adds far more impact than plain or even negative sentences, and I obviously didn't feel comfortable writing bad things about people.
During the first week, I booked a laser-cut slot for Wednesday so I could be sure to have a product ready, with Thursday left for adjustments if needed. Also, as discussed in the last tutorial, I followed Jen's recommendation to use raster engraving on the laser cut to display my visual identity instead of stickers. I created the file in parallel with working on the final code.
As I've explained, I wanted a very simple, straightforward and intuitive artifact. Product design is something that interests me a lot, but I definitely underestimated the process of designing an object and sourcing the materials. I chose to work with simple laser-cut clear acrylic, which I think was the best option I had.
I chose a simple cube with holes on top for the printer and the camera, as well as one at the back for the cable. I then added my visual identity elements onto the box as raster engraving. I would have preferred some kind of sticker at first, but I was really happy with the final result: keeping everything to black-and-white elements made it more straightforward and visually light, which I think is very important when designing physical objects.
During this project I ran into a lot of technical problems, but I chose to troubleshoot as many as I could myself, because I think it really helps me understand things better, especially here, as I was discovering a lot of new technology. I had trouble with the serial communication and with the printer itself, and I struggled to find the facial recognition system best suited to my concept and then to make it work with Processing. The technical workload for this project was big but still manageable, as I really wanted to learn new things. Part of the problem was also having to reconsider things once I realized I wouldn't be able to do them; I had to act quickly and maybe rushed things. This is not ideal, but I definitely learned my lesson and should plan better next time.
I only sought help when I really couldn't troubleshoot something myself. Having so many issues and resolving most of them on my own was really empowering and motivating. It was a really interesting process that I shall reuse for upcoming projects.
I also somehow messed up the laser-cut dimensions for the printer hole, and while trying to insert the printer, I broke the acrylic. I had to re-laser-cut it on Thursday morning so I could finish the piece.
The final product is a clear box with raster engraving on its sides reminding the user of a playful environment (the word 'play' and some playful, childish shapes). On top sit the thermal printer and a small webcam. The box contains the wiring to the Arduino board: one power cable for the printer and two USB cables for the camera and the Arduino. The printer is linked to the Arduino for data transmission and plugged into a socket for power.
As soon as someone steps into the camera shot, Runway's DenseCap model detects key features such as the mouth, the eyes, glasses, etc. The Processing sketch gets this data and reviews, in random order, every word given by Runway. The first word that equals one of the 15 words I set as conditions triggers the printing of one sentence.
If glasses are detected in the shot, for example, it prints "you have nice glasses". Each time something is triggered, there is a 15-20 second delay to allow the printer to print and the user to tear off the paper. After the delay has passed, the loop resets and waits for another face feature to be detected, and so on.
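This delay logic can be sketched as a simple cooldown check. In Processing the timestamp would come from millis(); here the current time is passed in so the logic is easy to test, and the 15-second value is just the lower bound of the delay described above:

```java
// Sketch of the 15-20 second cooldown that stops the printer from firing
// on every frame.
public class PrintCooldown {
    static final long COOLDOWN_MS = 15_000;
    private long lastPrintAt = -COOLDOWN_MS; // allows the very first print

    // Return true (and arm the cooldown) only if enough time has passed
    // since the last print was triggered.
    public boolean tryPrint(long nowMs) {
        if (nowMs - lastPrintAt < COOLDOWN_MS) return false;
        lastPrintAt = nowMs;
        return true;
    }
}
```

A check like this also happens to solve the duplicate-print problem I described earlier, since a second trigger arriving right after the first is simply ignored.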
Some close ups of the final piece
STAGING & PRESENTATION
Staging for the project was quite straightforward, I think. My final piece was a small box connected to a computer. The only requirements were a good light source for the camera to work efficiently, and a support neither too low nor too high, as the camera angle wasn't very flexible.
I tried multiple heights of plinths and tables around the studio and found one a little over a meter high, which was perfect for the webcam angle.
Some staging experiment
I chose to stage it on a small plinth near the window in the studio. The location was pretty nice and met all my needs. The box featured the playful visual elements, users could interact with the piece without any problems and, thankfully, there were no technical issues with the piece itself.
The full statement accompanying the piece
Output gathered during the open studio
I thought the documentation video would be similar to what we had to do for Control, so I planned a small video with only shots of the final product. I realized only just before the submission that we had to cover the whole conception of the project, and sadly I hadn't recorded much before then. I edited a two-minute documentation out of the footage I had and added a better-shot full demo of the piece at the end.
This is the final video.
Plan of shooting the documentation video
As this project was separated into two parts, I had the opportunity to reflect on it in between. I think I was quite happy with my proposal, as I really extended my research and experiments at the time, something I clearly missed last year. It really helped me construct a strong concept, and I was ready to work on it without major changes during part two.
Still, I did some further reading and research on the concept before starting this part, and it helped me question parts of the original idea. Should the sentence feedback be written entirely by me, or generated by an AI? Should it be positive feedback, or can it be negative? What is the true concept behind this idea? Does it still make sense after such a long pause?
I asked myself questions like these for a while between the two parts, and realized when discussing with friends and family that the concept was still worth pursuing. Some other questions remained unanswered until the tutorials with Jen, which really helped me carry on.
Towards the end, I felt my idea wasn't as good as I had thought, probably because I was focusing far too much on making it work technically. The closer I got to completion, the less sense my concept made to me, and I started to dislike the idea more and more. I think this was mainly because of the big technical load behind it, and also because I had to rush the "product" design.
Some staging/product ideas I had during the proposal part.
I also thought of a big screen, but video feedback was a bit too much to handle
Though when I completed it, I was quite happy to see it working well and people interacting with it in quite a playful way.
To be honest, at this point I can't figure out whether my project was good or not. I feel I executed the technical part quite well but slightly missed the concept while doing it. I feel it wasn't that clear, in a way. Or maybe I don't have enough hindsight yet?
From people's feedback, I think I managed to communicate the main concept, which is reassuring.
I also realized that working on such technical stuff, especially code, made me miss doing graphic design. This idea was really focused on making a physical piece rather than a graphic one. Despite having created a small visual identity, I now really miss graphic work and feel I want to go back to it for a while. I guess it's a sort of ping-pong thing; it would be interesting to find a nice balance.
If I had the opportunity to improve this piece, the first thing would be to add a more complex facial feature recognition. DenseCap was certainly very detailed, but not always in the way I wanted, especially for facial features. I would also add more sentences to be output depending on the face; so far I had 12 possible outcomes. And I would improve the staging and "product" design by creating a dedicated environment.
This year's Design Domain was quite successful, I think. I felt happy with both my idea and my final work, and I found the theme very interesting and the speakers really inspiring. It motivated me to explore unknown technologies and creative processes. Although the workload for my idea was quite heavy, I had the opportunity to learn a lot of things on my own, which was really rewarding, and seeing a final working product was very satisfying!
Overall, I really liked working on this year's Design Domain edition. The theme was good, and I was happy with my submission. It was really nice to have such a long period for research, followed by an intense period of work to realize the concepts from our proposals.
Some examples where the paper was kept and probably had an impact on people.