portr(aa) - luluxxx

a curated, semi-generative, neural-network-assisted, automatic portrait generator

Minting is now closed. you can still explore the collection here or find the pieces on the secondary market on opensea or looksrare.

the project

i was asked to think about an AI/generative series of 500-1000 pieces, so i came up with this idea of generating transformative portraits in an automated way. i already had a pipeline to do that, but the process involved a lot of technical and creative decisions along the way. the goal was to automate those decisions, explore the system randomly and curate the output.

The system takes a set of photographic images as input; they are abstracted locally through various digital filters while being transformed through a neural network using style-transfer techniques. the result then goes through a super-resolution and color-grading process to generate a final output.
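
as a very rough illustration, the flow looks like this in python; every stage below is a trivial stand-in (simple PIL operations), not the actual pipeline code:

```python
# rough structural sketch of the pipeline; each stage is a trivial stand-in,
# not the author's actual implementation.
from PIL import Image, ImageEnhance, ImageFilter

def abstract_locally(img):            # stands in for the local digital filters
    return img.filter(ImageFilter.GaussianBlur(2))

def style_transfer(img):              # stands in for the neural-network pass
    return img                        # see the style-transfer sketch further down

def super_resolution(img, factor=2):  # stands in for the real upscaler
    return img.resize((img.width * factor, img.height * factor), Image.LANCZOS)

def color_grade(img):                 # stands in for grading / calibration
    return ImageEnhance.Contrast(img).enhance(1.1)

def generate(path):
    img = Image.open(path).convert("RGB")
    return color_grade(super_resolution(style_transfer(abstract_locally(img))))
```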

the final collection consists of 2000 pictures at a resolution of 1600x2365 pixels.

it has four traits: source [SRC], style [ST], abstraction [ABS] and cartouche.

style transfer

"style transfert" also called "deepart" was first described by his creators Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge in sept. 2015 . I discovered it on DeepArt.io back then. the initial open source code was released by Justin Johnson on github. 

in 2016, Manuel Ruder, Alexey Dosovitskiy and Thomas Brox made consistent and stable stylized video sequences possible using optical flow analysis.

style transfer allows the creation of new images by using an algorithm to redraw one image (the content) with the stylistic elements of another image (the style). it uses neural network technology to achieve that, so i guess we can call it "artificial intelligence".
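
for reference, a minimal sketch of that optimization in pytorch, in the spirit of the original Gatys method; the layer choices, weights, file names and iteration count here are illustrative guesses, not the settings used for portr(aa):

```python
# minimal Gatys-style transfer sketch (pytorch); layers, weights and iteration
# count are illustrative, not the project's actual settings.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def load(path, size=512):
    t = transforms.Compose([transforms.Resize(size),
                            transforms.CenterCrop(size),
                            transforms.ToTensor()])
    return t(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def features(x, layers=(1, 6, 11, 20, 29)):  # relu1_1 .. relu5_1 in vgg19.features
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            out.append(x)
    return out

def gram(f):
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

content, style = load("source.jpg"), load("style_ST2.png")  # hypothetical files
target = content.clone().requires_grad_(True)
opt = torch.optim.Adam([target], lr=0.02)
style_grams = [gram(f) for f in features(style)]
content_feat = features(content)[2].detach()

for _ in range(300):
    opt.zero_grad()
    feats = features(target)
    loss = F.mse_loss(feats[2], content_feat) \
         + 1e3 * sum(F.mse_loss(gram(f), g) for f, g in zip(feats, style_grams))
    loss.backward()
    opt.step()
```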

my pipeline is historically based on these projects. i have added a lot of things over the years, essentially making it iterative and pyramidal. i also added a lot of pre- and post-processing to achieve better quality. it has been an endless experiment for years now.
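
the "iterative and pyramidal" idea can be sketched like this: stylize at a coarse scale, upscale the result, and use it as the starting point for the next, finer pass (stylize_pass is a placeholder for any single-scale pass such as the one above):

```python
# hedged sketch of an iterative, pyramidal pass structure; `stylize_pass` is a
# placeholder for one single-scale style-transfer step.
from PIL import Image

def stylize_pass(img: Image.Image) -> Image.Image:
    return img  # placeholder: one style-transfer pass at the current scale

def pyramid_stylize(src: Image.Image, widths=(400, 800, 1600), passes=2):
    out = src.resize((widths[0], int(widths[0] * src.height / src.width)))
    for w in widths:
        h = int(w * src.height / src.width)
        out = out.resize((w, h), Image.LANCZOS)   # upscale the previous level
        for _ in range(passes):                   # iterate at this scale
            out = stylize_pass(out)
    return out
```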

styles

portr(aa) uses 7 styles [ST0..ST6]. styles are created procedurally by a layered system which generates graphical patterns at different scales, used as style images for the style transfer. i've been developing these styles for several years now. ST5 and ST6 are hybridizations of the first ones and i didn't use them very much.
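
a minimal sketch of the idea (not the actual ST0..ST6 generator): stack simple random fields at several scales and blend them into one style image:

```python
# minimal sketch of a layered pattern generator: random fields at several
# scales blended into one style image (not the actual ST0..ST6 system).
import numpy as np
from PIL import Image

rng = np.random.default_rng(7)

def noise_layer(size, cells):
    # a coarse random RGB grid upsampled to `size`; `cells` sets the pattern scale
    g = (rng.random((cells, cells, 3)) * 255).astype(np.uint8)
    return np.asarray(Image.fromarray(g).resize((size, size), Image.BICUBIC),
                      dtype=float)

def style_image(size=512, scales=(4, 16, 64)):
    weights = rng.dirichlet(np.ones(len(scales)))       # random blend weights
    img = sum(w * noise_layer(size, s) for w, s in zip(weights, scales))
    return Image.fromarray(np.clip(img, 0, 255).astype(np.uint8))

style_image().save("style_demo.png")
```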

the distribution of styles across the full collection is shown here.

results with the ST2 style (the plant).

sources

i wanted to use a curated set of images (sources) as inputs to the system. i wasn't sure at the beginning how many i would need to get enough variety in the output. the idea was to require as little work as possible when injecting new sources. the only manual work involved was framing to a specific ratio and generating two b&w masks: one to identify some facial features very quickly, and another to blend color variations. it would take a few minutes to inject a new picture, and i did it constantly while developing the system. in the end, 110 different sources were used [SRC].
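
the color-blend mask is the easiest part to illustrate: a grayscale mask acts as a per-pixel weight between the original and a color variation (file names here are hypothetical):

```python
# minimal sketch of mask-driven blending: a grayscale mask weights a colour
# variation against the original, per pixel. file names are hypothetical.
from PIL import Image

src  = Image.open("source.jpg").convert("RGB")
var  = Image.open("source_colorvariant.jpg").convert("RGB")
mask = Image.open("source_blendmask.png").convert("L")   # white = take the variant

blended = Image.composite(var, src, mask)  # intermediate grays blend smoothly
blended.save("source_blended.jpg")
```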

results with the ST3 style (purple haze).

abstraction

i experimented with various algorithms to generate abstraction in parts of the pictures. they include directional and fragment blur, point-based inpainting, fractal generation, different triangulation techniques, and various local and global geometric transformations like twirls and ripples. i spent a lot of renders figuring out the limits of my parameter space, in which order i wanted to combine the algorithms, and at what stage of the process they should be applied to give the most interesting results.
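
as one concrete example, a twirl can be implemented by inverse-mapping each pixel's sampling position through a rotation whose angle falls off with distance from a centre point; a minimal sketch:

```python
# minimal twirl sketch: rotate each pixel's sampling position by an angle that
# fades to zero at `radius` (nearest-neighbour sampling for brevity).
import numpy as np
from PIL import Image

def twirl(img, cx, cy, radius, strength=2.5):
    a = np.asarray(img)
    h, w = a.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = xx - cx, yy - cy
    r = np.hypot(dx, dy)
    theta = strength * np.clip(1 - r / radius, 0, 1) ** 2
    cos, sin = np.cos(theta), np.sin(theta)
    sx = np.clip(cx + cos * dx - sin * dy, 0, w - 1).astype(int)
    sy = np.clip(cy + sin * dx + cos * dy, 0, h - 1).astype(int)
    return Image.fromarray(a[sy, sx])

img = Image.open("source.jpg").convert("RGB")
twirl(img, img.width / 2, img.height / 2, radius=300).save("twirled.jpg")
```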

i also used coherent line drawing to generate a "sketch" of lines that i would more or less add to or subtract from the pictures. it helps define some details and gives a drawing feel.
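
as a rough stand-in for coherent line drawing, a simple difference-of-gaussians edge pass blended back onto the picture gives the same kind of drawing feel:

```python
# rough stand-in for the coherent-line "sketch": difference-of-gaussians edges
# drawn back onto the picture (the real pipeline uses coherent line drawing).
from PIL import Image, ImageChops, ImageFilter

def add_sketch_lines(img, threshold=8, amount=0.5):
    g = img.convert("L")
    dog = ImageChops.subtract(g.filter(ImageFilter.GaussianBlur(1)),
                              g.filter(ImageFilter.GaussianBlur(3)))
    lines = dog.point(lambda v: 255 if v > threshold else 0)  # binary line mask
    inked = Image.composite(Image.new("RGB", img.size, "black"), img, lines)
    return Image.blend(img, inked, amount)                    # soften the lines
```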

i kept track of all parameters throughout the experiment and tried to generate a global abstraction parameter for each result. every significant parameter value was normalized to [0,100] between its min/max, then the values were weighted together. in the final collection the abstraction [ABS] ranges from 15 to 87. i was curious to see if it would make sense when looking at the results, and to be honest it only very vaguely shows. so i would say that the abstraction parameter is a bit ... abstract :)
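
in code, the scoring boils down to something like this (parameter names, bounds and weights are made up):

```python
# sketch of the [ABS] score: clamp each significant parameter to its observed
# min/max, rescale to [0,100], then take a weighted average. names are made up.
def abstraction_score(params, bounds, weights):
    total = wsum = 0.0
    for name, value in params.items():
        lo, hi = bounds[name]
        norm = 100.0 * (min(max(value, lo), hi) - lo) / (hi - lo)
        total += weights[name] * norm
        wsum += weights[name]
    return total / wsum

# e.g. abstraction_score({"twirl": 2.1, "blur": 12.0},
#                        {"twirl": (0, 5), "blur": (0, 20)},
#                        {"twirl": 2.0, "blur": 1.0})   # -> 48.0
```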

results with the ST4 style (bloody garden).

cartouche

I wanted to introduce a purely random trait. so at the very end of the pipeline i added this little box under the picture, printing a color palette extracted from the picture (k-means, by decreasing occurrence; i love color palettes) and the values of the different traits. black and white backgrounds are random, while orange marks a selection of early results that i reincorporated into the collection near the end [1039-1552]. some of them were generated with values outside the final parameter space, but i wanted to keep them.
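
the palette itself is easy to sketch: cluster the pixels with k-means and order the swatches by how many pixels each cluster captures:

```python
# sketch of the cartouche palette: k-means over the pixels, swatches sorted by
# decreasing occurrence (cluster size). k and the downscale are illustrative.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def palette(path, k=6):
    px = np.asarray(Image.open(path).convert("RGB").resize((128, 128)))
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(px.reshape(-1, 3))
    counts = np.bincount(km.labels_, minlength=k)
    return km.cluster_centers_.round().astype(int)[np.argsort(-counts)]  # RGB rows
```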

distribution of the cartouche trait in the final collection.

0[black] 1[white] 2[orange]

early results. (cartouche = 2)

color and grading

one of the most difficult parts was managing colors and contrast automatically. at every step of the process i had to calibrate black & white values and gamma correction to keep things on track. i also used linear color transfer (a PCA method based on the covariance matrix and mean vector) to facilitate style transfer, and did a lot of histogram manipulation (local equalization). all of this was adjusted for each style using configuration files.
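
a minimal sketch of that kind of linear color transfer, matching the source pixels' mean vector and covariance matrix to a reference's (one common formulation; the exact variant used in the pipeline may differ):

```python
# minimal linear color transfer sketch: shift the source pixels so their mean
# vector and covariance matrix match a reference image's.
import numpy as np
from PIL import Image

def sqrtm_psd(m):
    # matrix square root of a symmetric positive semi-definite 3x3 matrix
    vals, vecs = np.linalg.eigh(m)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.T

def color_transfer(src, ref):
    a = np.asarray(src.convert("RGB"), dtype=float)
    flat = a.reshape(-1, 3)
    b = np.asarray(ref.convert("RGB"), dtype=float).reshape(-1, 3)
    # maps cov(flat) onto cov(b): t = cov(b)^1/2 @ cov(flat)^-1/2
    t = sqrtm_psd(np.cov(b.T)) @ np.linalg.inv(sqrtm_psd(np.cov(flat.T)))
    out = (flat - flat.mean(0)) @ t.T + b.mean(0)
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8).reshape(a.shape))
```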

i also used 3D LUTs based on classic film stock templates (Kodak, Fuji, Polaroid), picked randomly, to achieve various gradings and calibrations at the finalize stage.
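
applying a .cube LUT can be sketched like this; nearest-neighbour lookup for brevity (real grading interpolates trilinearly), and the parser is deliberately naive:

```python
# minimal .cube LUT sketch: parse the table, then grade with nearest-neighbour
# lookup (real pipelines interpolate trilinearly; parser is deliberately naive).
import numpy as np
from PIL import Image

def load_cube(path):
    size, rows = 0, []
    for line in open(path):
        parts = line.split()
        if parts[:1] == ["LUT_3D_SIZE"]:
            size = int(parts[1])
        elif len(parts) == 3:
            try:
                rows.append([float(v) for v in parts])
            except ValueError:
                pass
    return np.asarray(rows).reshape(size, size, size, 3)  # axes: blue, green, red

def apply_lut(img, lut):
    n = lut.shape[0]
    idx = np.clip(np.rint(np.asarray(img)[..., :3] / 255.0 * (n - 1)),
                  0, n - 1).astype(int)
    out = lut[idx[..., 2], idx[..., 1], idx[..., 0]]       # index as [b][g][r]
    return Image.fromarray(np.clip(out * 255, 0, 255).astype(np.uint8))
```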

this gave me a limited set of results to choose from during the curating process (usually 4). all of this was done during the generation of the final collection.

methodology

the project was done over a period of approximately 2 months. my pipeline was already command-line based, but i used to do the compositing work manually in gimp and nuke, so i had to automate those steps.

all picture processing and compositing was moved to the command line using the gmic library. everything was wrapped up with perl scripts and ascii config files. while building the procedural system i also started to investigate the aesthetic i wanted to achieve.
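
the wrapping idea, sketched in python rather than perl: read an ascii config file, then shell out to the gmic cli (the gmic arguments here are illustrative; exact commands depend on the gmic version and on the real pipeline):

```python
# sketch of the wrapping idea (the project used perl): read an ascii config
# file and shell out to the gmic cli. the gmic arguments are illustrative.
import subprocess

def read_config(path):
    cfg = {}
    for line in open(path):
        line = line.split("#")[0].strip()      # allow comments
        if line:
            key, value = line.split("=", 1)
            cfg[key.strip()] = value.strip()
    return cfg

cfg = read_config("style_ST2.cfg")             # hypothetical config file
subprocess.run(["gmic", cfg["input"],
                "-blur", cfg.get("blur", "2"),
                "-o", cfg["output"]], check=True)
```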

once the prototype started to work correctly, i experimented with the abstraction part and explored the parameter space.

then, over approximately 2 weeks, i would generate 100 frames each night and curate them the next day, until i reached 2000 pictures.

i had to build a lot of tools to keep track of everything and move the collection to its final state, with a final picture and a .json file for each piece.


technical notes about the minting page

i had to learn everything from scratch to build the minting page: html/css/javascript/node.js. i used ethers.js to access web3 functionality. the server runs on digitalocean.com (NYC1) using their "app" feature, static files are distributed through their CDN, and i use mongodb to store connection data. the collection's smart contract is built on top of the openzeppelin ERC721 libraries and includes the EIP-2981 NFT Royalty Standard; royalties are set to 25%. deployment was done with hardhat. all final images and json metadata are stored on arweave permanent storage. token generation is monitored using graphQL. thank you @devcryptodude and @joseph_AT69 for the support and guidance. it would not have been possible without you!

thank you for your interest. luluxxx, february 2022.
