p(ii)tch is a series i started in May 2021 on the new tezos nft platform hicetnunc aka hen.
it wasn't supposed to be a series at the start but it quickly turned into one.
i had already done ascii/ansi animations before but i felt the urge to push it further.
i ended up with 100 animated pieces and, as a by-product of the exercise, a very flexible procedural system in houdini. everything was designed and built while sometimes producing one animation per day, which was physically very demanding.
the total number of editions generated is 6279. They are nearly all sold out and on the secondary market now.
p(ii)tch also lives on opensea with more pieces and collaborations.
you can buy a catalogue of p(ii)tch here (0.5 tz)
There are 11 unique editions, 3 of them commissions. It was the first time I experienced a very dynamic secondary market, so I experimented with all kinds of drops and edition sizes. I learned a lot.
the last drop happened on Oct 25, 2021
I was never very concerned about having a consistent style, so I wanted to go through that exercise with p(ii)tch. every time I got bored or felt I had found a recipe, I tried to move to something different and explore new ideas, but always inside a set of fixed constraints.
p(ii)tch started at variable resolution. It stabilized at 700x1000 at #18.1, which is quite small, but I wanted to be able to produce these animations fast and easily to stay agile. It moved up to 910x1300 at #53 and finished at 1280x1828 at #64. The average turnaround was a day and a half for a finished animation, but I usually had several going in parallel at different stages of the process. I would usually pick one every mint day and finalize it.
The content follows no particular logic and was picked day by day from a large pile of internet downloads. There are a lot of homages to musicians or movies I love, but also purely visual samples. I also had the great pleasure to collaborate with a lot of great artists and friends.
building a p(ii)tch gif is done in 4 steps:
- loops: creation of a looping image sequence from footage, using optical flow.
- stipples: creation of several stippling sequences. these are sequences of point clouds (.ply) created from the loop images.
- 3d renders: creation of several layers of cg renders, done in houdini and rendered with mantra.
- compositing: 2d compositing of the 3d layers, done in nuke.
looping of a sequence is done using custom code that uses optical flow to "warp and morph" images together.
It's not easy to find good loops when working with live footage, but it can be very relaxing. you can see a lot of examples here. read about the method in detail here. the code is accessible on github.
typically i would select footage, cut it down into fragments where i felt there could be interesting loops, and try different parameters on them. eventually i would stabilize the footage with 2d tracking to help the process.
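In broad strokes, the warp-and-morph blend can be sketched like this. A minimal numpy-only sketch, assuming a dense flow field between the seam frames has already been computed (e.g. with a method like Farneback's); every function name here is mine, not from the actual code:

```python
import numpy as np

def warp(img, flow, t):
    """Warp img a fraction t along a dense (h, w, 2) flow field
    (nearest-neighbour sampling keeps the sketch dependency-free)."""
    h, w = img.shape[:2]
    ys, xs = np.indices((h, w)).astype(np.float32)
    sx = np.clip(np.round(xs + flow[..., 0] * t), 0, w - 1).astype(int)
    sy = np.clip(np.round(ys + flow[..., 1] * t), 0, h - 1).astype(int)
    return img[sy, sx]

def morph_seam(frames, fwd, bwd, n_blend):
    """Rewrite the last n_blend frames so the sequence loops back to frames[0].
    fwd: flow from frames[-n_blend] towards frames[0]; bwd: the reverse flow."""
    a, b = frames[-n_blend], frames[0]
    out = []
    for i in range(n_blend):
        t = i / float(n_blend)
        wa = warp(a, fwd, t)          # push the seam frame towards frame 0
        wb = warp(b, bwd, 1.0 - t)    # pull frame 0 back towards the seam
        out.append((1.0 - t) * wa + t * wb)  # crossfade the two warps
    return frames[:-n_blend] + out
```

warping in both directions and crossfading hides the seam better than a plain dissolve, because features travel along the flow instead of ghosting.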
p(ii)tch #38 [s(uu)rf] was a very difficult one. i had to manually edit every single frame to get it right. it's one of my favorites :)
Sometimes i would simply fade pictures into each other to achieve the loop if it was visually pleasing. for more chaotic animations I could simply cut.
Stippling is a drawing technique in which areas of light and shadow are created using nothing but dots. The basic idea is simple: For darker areas, you apply a greater number of dots and keep them close together.
i'm using the Linde-Buzo-Gray algorithm, a method that distributes points based on Voronoi diagrams. you can find the initial code here. I added optical flow to keep temporal consistency across the animation. the process is repeated at different resolutions: typically i would generate stippling sequences from 700 pixels down to 40 pixels. the output is a set of point clouds written in the .ply format, which houdini reads natively. .ply is handy as it comes in both an ascii and a binary flavour and is capable of storing arbitrary attributes on points.
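The linked implementation has many refinements, but the heart of this family of methods is a darkness-weighted Voronoi relaxation: assign every pixel to its nearest stipple point, then move each point to the darkness-weighted centroid of its cell. A brute-force sketch of that core idea (my own simplification, not the actual code):

```python
import numpy as np

def relax(points, darkness, iters=5):
    """Weighted Voronoi relaxation: points drift towards dark image areas.
    points: (N, 2) float array of (x, y); darkness: (h, w), 1 = black."""
    h, w = darkness.shape
    ys, xs = np.indices((h, w))
    pix = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    wgt = darkness.ravel().astype(float)
    for _ in range(iters):
        # brute-force nearest stipple point for every pixel
        d2 = ((pix[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        owner = d2.argmin(axis=1)
        for i in range(len(points)):
            cell = owner == i
            total = wgt[cell].sum()
            if total > 0:  # empty or fully-white cells keep their point
                points[i] = (pix[cell] * wgt[cell, None]).sum(axis=0) / total
    return points
```

the full LBG method additionally splits and removes points to reach a target density; the temporal consistency mentioned above comes on top of this, by advecting points along the flow between frames before relaxing again.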
notably, i track a unique label on each point for as long as the stippling process doesn't replace it with another. i can then use this label later to keep color or geometry (letters) consistent across the animation. the initial footage color and the optical flow motion vectors (backward and forward) are also stored for each point (some pieces use motion blur).
p(ii)tch #86 [ja(ja)] collaboration with @auniseiva
I use houdini to render the different cg layers composing a piece.
The first pieces of the series [1..18] used a simple rectangular grid of points. typographic elements are copied onto the points with houdini's stamping mechanism. some layers also use only points.
fonts are imported in .otf or .ttf format. each glyph is converted to polygons and sorted by area. a mapping between a point's original luminance and glyph area then composes the final geometry.
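In spirit, the luminance-to-area mapping can be as small as this (my own reduction of the idea, with hypothetical names):

```python
def luminance_to_glyph(glyph_areas, lum):
    """Pick a glyph index for a point: dark points (lum near 0) get the
    glyph with the most ink area, bright points the one with the least."""
    order = sorted(range(len(glyph_areas)), key=lambda i: glyph_areas[i])
    k = min(int((1.0 - lum) * len(order)), len(order) - 1)
    return order[k]
```

sorting by area rather than by codepoint is what makes any font usable as a tonal palette, the same way ascii art ramps order characters by density.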
p(ii)tch #3 [d(ii)p]
The system is designed to accept any point cloud source: grid, stippling, but also 3d generated. font geometry can be a function of the initial footage luminance, of another image source, or simply random. the stippling points' labels can drive randomness that is either consistent across frames or chaotic, changing every frame.
Initially i made the choice to work with fully saturated colors: rgb values were converted to hls color space with saturation forced to 1.
color variations were created using only color shift nodes or random color palettes.
At some point [#18] i also started to use black & white footage and added some new techniques to manipulate colors, notably ramps mapped on luminance. i also combined several layers with different saturations and hues to create more subtle variations.
the random colors had two particular seeds i used all along the collection, one cold and one warm. they were usually adapted to the global color ambiance via hue shift.
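Forcing saturation and rotating hue is easy to state precisely; in python the standard colorsys module does it in two lines (a sketch of the color logic only, not of the actual houdini/nuke nodes):

```python
import colorsys

def saturate_and_shift(rgb, hue_shift=0.0):
    """Convert to hls, force saturation to 1, optionally rotate the hue."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb((h + hue_shift) % 1.0, l, 1.0)
```

applied to a dull reddish grey like (0.5, 0.25, 0.25), this snaps it to a fully saturated red at the same lightness; a hue_shift of 1/3 then rotates it to green.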
p(ii)tch #10 [and(yy)]
Most of my picture manipulation pipeline operates via command line programs, but I also use off-the-shelf software. In this case I used nuke for ease and speed. Compositing is mainly achieved using grading, hue shift and glow nodes; layers are usually combined in screen or over mode. I also used a lot of simple radial and circular ramps to mix layers. Working in full 32-bit float all along was also very important.
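The blend modes and ramps mentioned above have exact definitions, and in 32-bit float they are short; a numpy sketch (function names are mine):

```python
import numpy as np

def screen(a, b):
    """Screen blend: invert, multiply, invert back; brightens like stacked light."""
    return 1.0 - (1.0 - a) * (1.0 - b)

def over(fg, fg_alpha, bg):
    """Premultiplied 'over': the foreground composited onto the background."""
    return fg + bg * (1.0 - fg_alpha)

def radial_ramp(h, w, cx, cy, radius):
    """A 0..1 circular ramp, usable as a mask when mixing two layers."""
    ys, xs = np.indices((h, w), dtype=np.float32)
    d = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    return np.clip(d / radius, 0.0, 1.0)
```

in float there is no clamping until delivery, which is what lets glows and streaks survive repeated grading without banding.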
A lens deformation is applied at the end of the compositing tree; light streaks, created with simple edge extrapolation nodes, also define the visual signature of the series.
Compositing was usually the most comfortable part of the process as it allows fast feedback and thus a lot of iterations.
I used to mint the pieces straight after compositing so I wouldn't overthink things.
p(ii)tch #46 [l(iii)fe]
I had the pleasure to work with other artists and friends on some pieces. they would typically send me footage they had shot themselves. I was stoked to be able to do this kind of collaboration and that people gave me this level of trust. For me those pieces are the most valuable; every single one brings back memories and happiness.
the opensea collection features a lot of other collaborations with the same artists. check it out.
thanks for your interest, and lots of love to everyone, artists, friends and collectors, who helped me all along this trip :)