Experimental movies

Yi Ching, The Book of Changes (1)

The Yi Jing, which appeared some 3,500 years ago, was a divination tool that used wooden sticks. Over time, it became the basis of philosophical and moral thought that attempted to understand the world and its transformations. These were represented in the hexagrams, which depicted the unfolding of events according to dynamic categories.

Displaying the Yin Yang symbols, the trigrams and the hexagrams required the creation of 485 images. These were generated with Python code written for this project using OpenCV and NumPy, then edited in Premiere Pro.
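To give an idea of the approach, here is a minimal sketch in the same spirit, assuming only OpenCV and NumPy (an illustration, not the project's actual script): it draws one hexagram as six stacked lines, solid for yang and broken for yin.

import cv2
import numpy as np

def draw_hexagram(lines, size=600):
    """Draw a hexagram: lines holds six values read bottom to top,
    1 = yang (solid bar), 0 = yin (broken bar)."""
    img = np.zeros((size, size, 3), dtype=np.uint8)   # black background
    x0, x1, thickness = 60, size - 60, 30
    mid = size // 2
    for i, bit in enumerate(lines):
        y = size - 110 - i * 80                       # bottom line drawn first
        if bit:                                       # yang: one solid bar
            cv2.rectangle(img, (x0, y), (x1, y + thickness),
                          (255, 255, 255), -1)
        else:                                         # yin: two bars, central gap
            cv2.rectangle(img, (x0, y), (mid - 25, y + thickness),
                          (255, 255, 255), -1)
            cv2.rectangle(img, (mid + 25, y), (x1, y + thickness),
                          (255, 255, 255), -1)
    return img

# Hexagram 1, "The Creative" (six yang lines), and
# hexagram 2, "The Receptive" (six yin lines).
cv2.imwrite("hexagram_01.png", draw_hexagram([1, 1, 1, 1, 1, 1]))
cv2.imwrite("hexagram_02.png", draw_hexagram([0, 0, 0, 0, 0, 0]))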

The musical pieces were generated using artificial intelligence algorithms:

• Amper Music, from a New York company specializing in music generation;

• Drumify, one of the tools of the Magenta Studio suite, which comes from the Google Brain Team initiative; it was applied to the aria of the Goldberg Variations (BWV 988) by Johann Sebastian Bach to form a drum solo.

Yi Ching, The Book of Changes (2)

Yi Ching, The Book of Changes (3)

4500 images in AI

This project is based on an artificial intelligence algorithm that takes reference images (here, “texture” images of Montreal) and transfers their “statistical” characteristics onto the content of other images, transforming the latter into the style of the reference pictures. “Style” models are thus produced: artificial neural networks built from several layers of pixel analysis, nineteen layers in the case of the VGG19 model used for this work. One pass of processing is applied per image, with a greater or lesser impact depending on the number of iterations performed.
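To illustrate that layered analysis, here is a minimal sketch, assuming TensorFlow/Keras and a hypothetical file name montreal_texture.jpg, of how style “statistics” (Gram matrices) can be extracted from VGG19 layers; the pipeline actually used for this work may differ.

import tensorflow as tf

# A minimal sketch, assuming TensorFlow/Keras; "montreal_texture.jpg" is a
# hypothetical file name standing in for one of the Montreal texture images.
vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
vgg.trainable = False

style_layers = ["block1_conv1", "block2_conv1", "block3_conv1",
                "block4_conv1", "block5_conv1"]
extractor = tf.keras.Model(
    vgg.input, [vgg.get_layer(n).output for n in style_layers])

def gram_matrix(features):
    # Style is summarized as correlations between feature channels.
    f = tf.reshape(features, (-1, features.shape[-1]))
    n = tf.cast(tf.shape(f)[0], tf.float32)
    return tf.matmul(f, f, transpose_a=True) / n

def load(path, size=512):
    img = tf.io.decode_image(tf.io.read_file(path), channels=3)
    img = tf.image.resize(tf.cast(img, tf.float32), (size, size))
    return tf.keras.applications.vgg19.preprocess_input(img)[tf.newaxis, ...]

# Gram matrices of the style image: the optimization loop (not shown)
# iteratively pushes a content image's features toward these targets.
style_targets = [gram_matrix(f) for f in extractor(load("montreal_texture.jpg"))]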

My living room

Based on real-time mapping captures made with a Stereolabs ZED 2 stereo camera and on screen captures from 3D model visualization tools.

I invite the viewer to see an imaginary space through the eyes of a stereo camera and to recreate this space. In the first part of the video, I superimpose images of 2D reality onto those of a 3D digitization of the space, whose distance calculations are transposed into color-coded pixels. I quickly add another visual layer in which polygon edges are generated to reconstruct the 3D space. In the second part, I suggest a “nebulous” state, referring to a kind of cosmogony representing a state of reconstruction of the acquired data from antagonistic forces, organization and disorder, to recreate a small world, my imaginary living room, with an appearance of reality, but an altered reality.

I used a ZED 2 stereo camera from Stereolabs, which I would call “semi-industrial”. It is normally used with programming in Python, but optimally in C++, often with the OpenCV library. Unfortunately, this camera is not a mature product: it was not possible to use the built-in features for recording mapping results. After contacting the manufacturer, I was told “this is a known problem caused by lost frames while saving the SVO and it will be soon fixed”… I turned to a demonstration executable called ZEDfu (for ZED fusion), which presents three windows when capturing 3D vectors (a capture sketch in Python follows the list):
• The actual image;
• The mapping of distances, where pixel colors show greater or lesser distances by transposing them onto a color scale;
• A vector matrix.
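For reference, this is roughly what direct capture looks like with the ZED SDK's Python API; a minimal sketch, assuming the pyzed package, including a depth-to-color transposition similar to ZEDfu's second window (illustrative only; this is not the route taken for the final video):

import pyzed.sl as sl
import numpy as np
import cv2

# A minimal sketch with the ZED SDK Python API (pyzed); illustrative only,
# since the final captures were made with ZEDfu and OBS instead.
zed = sl.Camera()
init = sl.InitParameters()
init.depth_mode = sl.DEPTH_MODE.ULTRA
if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Could not open the ZED 2 camera")

image, depth = sl.Mat(), sl.Mat()
if zed.grab(sl.RuntimeParameters()) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_image(image, sl.VIEW.LEFT)        # the actual image
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)  # per-pixel distances

    # Transpose distances onto a color scale, like ZEDfu's second window.
    d = np.nan_to_num(depth.get_data(), nan=0.0)
    norm = cv2.normalize(d, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imwrite("depth_colormap.png", cv2.applyColorMap(norm, cv2.COLORMAP_JET))
zed.close()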

With OBS software, I captured the images shown in real time while mapping the room with ZEDfu. ZEDfu also compiles the data accumulated during the mapping to produce 3D files (mesh.obj and mesh_raw.ply) at the end of the operation.

In the first thirty seconds of my study video, I used the captures of the three screens made during the initial mapping. All three captures were cropped in Adobe Premiere with varying transparency. For the second part of the work, the 3D files mesh.obj and mesh_raw.ply were exploited with the viewer included in the ZEDfu executable and the 3D Viewer from Microsoft in order to record animated videos (in rotation) of the 3D models produced.
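Those mesh files can also be inspected programmatically. A minimal sketch, assuming the open3d Python package and the file names above:

import open3d as o3d

# Load the mesh that ZEDfu writes at the end of a mapping session
# and display it in an interactive window.
mesh = o3d.io.read_triangle_mesh("mesh.obj")
mesh.compute_vertex_normals()  # normals are needed for shaded display
o3d.visualization.draw_geometries([mesh])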

In the manner of…

Here again, the project is based on an artificial intelligence style-transfer algorithm, applied to a video of rain in a gutter to transform it “in the manner of…” well-known paintings.

The “content” images to be edited come from a one-minute video of a gutter during rain; 1800 images were extracted from my recording using Adobe Premiere Pro 2021. In a similar way, I took six images of well-known paintings, chosen somewhat arbitrarily, placed them in Premiere for 10 seconds each and re-exported them as 1800 “style” images.

As mentioned earlier, each content image is processed by the algorithm together with a style image. To create a progression in the strength of the transformation, the number of iterations rises and falls over the ten seconds of each sub-sequence: one iteration at the first second, increasing to fifty iterations at five seconds, then decreasing until the tenth second, and so on for the six sub-sequences. The images are then integrated back into Premiere.
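That schedule can be written down explicitly. A minimal sketch, assuming 30 frames per second and a linear rise and fall (the exact curve is not specified above):

FPS, PEAK, LOW = 30, 50, 1  # assumptions: 30 fps, 1-to-50 iteration range

def iterations_for_frame(frame):
    # Position, in seconds, within the current 10-second sub-sequence.
    t = (frame % (10 * FPS)) / FPS
    if t <= 5:  # ramp up: one iteration at the start, fifty at five seconds
        return round(LOW + (PEAK - LOW) * t / 5)
    return round(PEAK - (PEAK - LOW) * (t - 5) / 5)  # ramp back down

# One value per extracted frame, across all six sub-sequences.
schedule = [iterations_for_frame(f) for f in range(1800)]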

The choice is more or less arbitrary; I wanted to use well-known paintings that stand out for their colors and their patterns (often determined by the brushstrokes):
• Guernica by Pablo Picasso
• Composition VIII by Wassily Kandinsky
• Water Lilies and Japanese Bridge by Claude Monet
• The Scream by Edvard Munch
• The Starry Night by Vincent van Gogh
• The Great Wave off Kanagawa by Hokusai

Walking