The first week of the Web Fundamentals project was dedicated to learning P5.js. Craig Steele and Jen ran multiple workshops for us so we could learn the basics of P5 and how HTML and CSS work together.
During those workshops I learned the differences between Processing and P5, as well as some features unique to P5 like the DOM library, which lets the user interact with objects such as sliders and buttons.
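As a quick illustration of the DOM library (a minimal sketch written here for reference, not the workshop code): `createSlider` and `createButton` attach HTML elements to the page that the sketch can read on every frame.

```javascript
// Minimal p5.js DOM sketch: a slider controls a circle's size,
// a button resets it to the default.
let sizeSlider;
let resetButton;

function setup() {
  createCanvas(400, 400);
  sizeSlider = createSlider(10, 200, 50); // min, max, default value
  resetButton = createButton('reset');
  resetButton.mousePressed(() => sizeSlider.value(50));
}

function draw() {
  background(220);
  circle(width / 2, height / 2, sizeSlider.value());
}
```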


Since the brief was centered on something "web-based", or something that refers to or evokes web culture, I immediately thought about using data from the internet. I knew that APIs were the key to linking data to an app. During my previous studies in France I had the opportunity to work with APIs, but in a different type of application. I wanted to get back to them and explore their different uses.

First I looked for interesting APIs that were available and the types of data they offered. I did some research and ended up on a Medium article about the best APIs available today.

I went through a lot of them, and every time I had to ask myself some questions:

- what type of data is this?
- how can I use it?
- what would be interesting to do with it?
- does the final visual output have to be related somehow to the data?
- or should it be abstract?

I wanted to explore the Spotify or SoundCloud API because they are a bit more complex than simple weather data. I could access many more types of information, which I thought would be very interesting to work with.

I started to work on the Spotify API through the Spotify Developer Portal. I mostly did a lot of research into what I could access with the API and how to access the data.

I managed to query some simple data via their portal using an API developer key and a song ID.
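For reference, a sketch of that kind of query (the `/v1/tracks/{id}` endpoint is part of Spotify's public Web API; the token and track ID below are placeholders, and `fetchTrack` is a helper name I'm inventing here):

```javascript
// Query the Spotify Web API for a track's metadata.
// A real access token comes from the Spotify Developer Portal;
// 'YOUR_TOKEN' and the track ID passed in are placeholders.
function trackUrl(trackId) {
  return 'https://api.spotify.com/v1/tracks/' + encodeURIComponent(trackId);
}

async function fetchTrack(trackId, accessToken) {
  const response = await fetch(trackUrl(trackId), {
    headers: { Authorization: 'Bearer ' + accessToken },
  });
  if (!response.ok) throw new Error('Spotify API error: ' + response.status);
  return response.json(); // track name, artists, duration_ms, etc.
}
```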

As I worked on the Spotify API, I also realised it might be frustrating to set up the API with P5.js and end up with a very simple sketch at the end of the week, given the timeframe.
So, as I was already thinking about something involving sound, I started to explore the P5 sound library and its capabilities.


In a previous Processing sketch, I made a grid of rectangles change angle depending on the amplitude of sound recorded live from the microphone.
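Recreated in p5.js, that idea looks roughly like this (my own reconstruction, not the original sketch; `mapAmpToAngle` is a helper name I'm introducing):

```javascript
// Mic amplitude drives the tilt of a grid of planes (p5.js + p5.sound).
let mic, amp;

// Pure helper: map an amplitude level (0..1) to a tilt angle (0..PI/2).
function mapAmpToAngle(level) {
  return Math.min(Math.max(level, 0), 1) * Math.PI / 2;
}

function setup() {
  createCanvas(400, 400, WEBGL);
  mic = new p5.AudioIn();
  mic.start();
  amp = new p5.Amplitude();
  amp.setInput(mic);
}

function draw() {
  background(30);
  const angle = mapAmpToAngle(amp.getLevel());
  for (let x = -150; x <= 150; x += 50) {
    for (let y = -150; y <= 150; y += 50) {
      push();
      translate(x, y, 0);
      rotateX(angle); // louder input -> steeper tilt
      plane(30, 30);
      pop();
    }
  }
}
```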

From there, I thought about a sketch where the visual output evolves depending on a music track that the user could distort using sound effects.

I presented this idea and the preceding research during the first tutorial on Monday. I showed Jen and Craig a previous Processing sketch where the grid evolves depending on the amplitude of the mic, and how I adapted it in P5.js. The result was much the same, though I couldn't reproduce some features like "PeasyCam".

Comparison of the V1 on P5, and the original sketch on Processing.

During the tutorial, Jen questioned the use of a music track and made me consider other sources of audio, and how and why the audio could interact with the sketch and the effects. The aim wasn't to recreate a VJ tool.
This really good feedback from Jen and Craig made me rethink my source of audio and how I could use it.

While taking a step back, I went through one of my older projects from my degree in France. I stumbled upon a project about Frank Gehry's "Deconstructivism" concept, which I studied during art history lectures.
(I'm not sure this is the right term for the concept, but I can't find a translation of the French explanation.)

Deconstructivism is the idea of defying the current rules of construction and modifying them. Frank Gehry applied this to his buildings, which is why they all have a unique look that no other architect comes close to.

A comparison of Gehry's work(1st) and Norman Foster's work(2nd).

After remembering all of this from my art history lectures, I thought it would be quite interesting to apply this concept to sound. How can sound be "deconstructed", and how would it actually sound? Would it be like Frank Gehry's work compared to other architects' work, or would the result be similar?
I wanted to get at the true composition of the sound.


I started by setting up an audio input and a record function that the user could trigger with simple buttons.
Then I added sliders and paired them to sound effects from the sound library. It was quite difficult to set up; I couldn't manage to add a reverb effect because of a buffer error I couldn't resolve. With time running out, I focused on three simple effects: a delay, a low-pass filter, and a speed effect.
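A sketch, under my own assumptions about the wiring, of how those three effects can be paired to sliders with p5.sound (the reverb that failed for me is left out; slider ranges are my guesses):

```javascript
// Record the mic into a sound file, then play it back through
// three slider-controlled effects: delay, low-pass filter, playback rate.
let mic, recorder, soundFile, delay, lowPass;
let delaySlider, filterSlider, rateSlider;

function setup() {
  createCanvas(400, 200);
  mic = new p5.AudioIn();
  mic.start();

  recorder = new p5.SoundRecorder();
  recorder.setInput(mic);
  soundFile = new p5.SoundFile(); // empty file to record into

  delay = new p5.Delay();
  lowPass = new p5.LowPass();

  delaySlider = createSlider(0, 1, 0.3, 0.01);  // delay time (s)
  filterSlider = createSlider(40, 10000, 5000); // cutoff frequency (Hz)
  rateSlider = createSlider(0.25, 2, 1, 0.05);  // playback speed

  createButton('record').mousePressed(() => recorder.record(soundFile));
  createButton('stop').mousePressed(() => recorder.stop());
  createButton('play').mousePressed(() => {
    soundFile.disconnect(); // route through the effect chain only
    lowPass.process(soundFile);
    delay.process(soundFile, delaySlider.value(), 0.5, 2300);
    soundFile.play();
  });
}

function draw() {
  background(220);
  // Update the effects from the sliders every frame.
  lowPass.freq(filterSlider.value());
  soundFile.rate(rateSlider.value());
}
```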

The 3 sliders and button

Once it was all set up, I had to focus on a visual output for my sketch. As I had been working on my evolving grid sketch earlier, I thought it would be nice to use it as the visual output.
Sadly, in P5.js on my web browser (Chrome, for reference), the 3D evolving grid didn't run as smoothly as expected: there was a lot of lag and it crashed multiple times.

So I went for a much simpler visual output: each time the amplitude went above 0.6, random lines were drawn.
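That output boils down to logic like this (a minimal reconstruction; 0.6 is the threshold mentioned above, and `shouldDraw` is a helper name of mine):

```javascript
// Draw random lines whenever the mic amplitude crosses a threshold.
let mic, amp;
const THRESHOLD = 0.6;

// Pure helper: decide whether this frame should draw.
function shouldDraw(level, threshold) {
  return level > threshold;
}

function setup() {
  createCanvas(400, 400);
  background(255);
  mic = new p5.AudioIn();
  mic.start();
  amp = new p5.Amplitude();
  amp.setInput(mic);
}

function draw() {
  // No background() call here, so the lines accumulate over time.
  if (shouldDraw(amp.getLevel(), THRESHOLD)) {
    stroke(random(255), random(255), random(255));
    line(random(width), random(height), random(width), random(height));
  }
}
```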

It also worked perfectly on my phone.


I went with this final sketch for the project. I think I would have been able to make it better if I had arrived at the idea and the reflection process sooner. The main idea was to analyze the sound of our environment and deconstruct it. I wanted to "show" how sound could be interpreted differently and how we could get unexpected results that don't look or sound like anything we've heard before.

It triggers a part of our brain that sometimes makes us feel fear, because we can't make a connection with anything we're used to hearing. I think it somehow resembles Frank Gehry's concept: his buildings are so unique and unexpected that we can't relate them to anything else.

My sketch has the same concept but reversed: we take a sound from our environment that we know and deconstruct it to get a sound that doesn't resemble anything we're used to hearing.

Obviously, it is not as functional as I wanted it to be because of the limited number of effects I've put into the sketch; it's more like a beta version.


So if I had more time, I would definitely develop the visual output and make it more relevant to the sound, maybe something like the sound profile spectrum in the example below.

I think it would be very interesting to see how the sound profile evolves as we "deconstruct" the sound. For this to work, I would need to clean up my code and make it simpler; because of the time limit, I went for rough, heavy functions that could be simplified.

I also would have worked on the aesthetics of the buttons and sliders using HTML and CSS. Right now they sit on top of the sketch, which is not ideal.

I think I will definitely try to improve it afterward.


I really liked this project because of the wide exploration we could undertake. I think it was really interesting to link creative coding with a subject such as "the web". There are a lot of possibilities for exploration, and each one is very unique, as the final presentations demonstrated.
I really enjoyed exploring the sound library and seeing a glimpse of what it is able to do. I'll definitely continue exploring this field on my own and maybe use it for future projects!