Our understanding of the world is informed as much by comparative data as by our memory. In a digital landscape proliferating with imagery, the media have converted themselves into producers, influencing and creating the representation of our world. Exposure to misinformation poses a great risk to our trust in access to knowledge, revealing how much that trust has been mediated by the illusion of a democratisation of technology. As new technologies emerging from recent research in AI and machine learning develop increasingly intricate means of manipulating imagery, we will see a flood of tailored information, adjusted to the factors that make up each specific audience.

Once the physical world is mediated by computer-generated data, belief in this substitute, fictional world will create even more embodied “filter bubbles”. This will apply not only to official documentation, but to anything we have openly shared online. In possession of personal data shared through Facebook, Google Analytics, Amazon Data Exchange, etc., corporations will be able to redesign, enhance and exploit our memories, making us more susceptible to marketing, advertising and propaganda.


What makes us us? Beyond outside environmental and cultural factors, memories are the most integral part of our consciousness. Learning from past experiences, we create the present version of our personality. There is, however, a dark side to this: suggestive information, misinformation, ideas or thoughts can contaminate or alter memory, which may lead to the formation of false memories that in turn influence our conscious awareness. As long as this happens on an individual level, the harm is not visible; but the moment false memories contaminate larger groups, they can spread through whole communities, leading to an alteration of facts.

My research deals with the question of the transformation of collective memory in the age of deep learning. The worldwide implementation of self-learning algorithms will undoubtedly lead towards a society whose knowledge and access to information are predefined by the classification and collection of data. As these processes improve with every iteration, we will witness an aggravated influence of neural networks with each new generation. The true danger lies in mechanisms more subtle than visible ones such as deepfakes or text-to-speech synthesis. Machine learning has the power to bypass our defences and to influence and control us without our even noticing. It will not only enhance reality, but also distort and manipulate it to fit its own judgment. Knowing that this technology far surpasses current media-manipulation techniques such as photo fabrication, editing and fake news, it is not hard to imagine that our minds will be jeopardised by visual alterations of our past.


Starting from harmful alterations of imagery, this project questions the ease of creating new memories of works that never existed in the first place. By mixing fiction with facts, the manipulated imagery creates an entity of its own: a database independent from criticism, in a permanent state of being an original and a copy at the same time. The project uses GANs trained on multiple datasets, harvested from the internet or created from personal data.
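The adversarial mechanism behind these images can be sketched in a few lines: a generator invents images from random noise while a discriminator judges them against the scraped dataset, and each improves against the other. The sketch below is purely illustrative and assumed, not the project's actual training code; the toy 32×32 image size, network widths and random stand-in data are all hypothetical placeholders for the real poster dataset.

```python
# Minimal, illustrative GAN sketch (assumed setup, not the project's code).
# A generator maps random noise to small images; a discriminator learns to
# tell generated images from real dataset images.
import torch
import torch.nn as nn

LATENT_DIM = 64          # size of the random noise vector fed to the generator
IMG_SHAPE = (3, 32, 32)  # toy 32x32 RGB images standing in for movie posters
IMG_SIZE = 3 * 32 * 32

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_SIZE), nn.Tanh(),   # pixel values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_SIZE, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),       # probability that input is "real"
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    """One adversarial update on a batch of real (flattened) images."""
    n = real_batch.size(0)
    real_labels = torch.ones(n, 1)
    fake_labels = torch.zeros(n, 1)

    # Discriminator: push real images toward 1, generated images toward 0.
    fake_batch = generator(torch.randn(n, LATENT_DIM)).detach()
    d_loss = (bce(discriminator(real_batch), real_labels) +
              bce(discriminator(fake_batch), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 for its fakes.
    g_loss = bce(discriminator(generator(torch.randn(n, LATENT_DIM))),
                 real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Stand-in "dataset": random tensors where scraped poster images would go.
real_images = torch.rand(16, IMG_SIZE) * 2 - 1
d_loss, g_loss = train_step(real_images)
sample = generator(torch.randn(1, LATENT_DIM)).view(IMG_SHAPE)
```

In practice the project's generators would be convolutional and trained for many thousands of such steps over the harvested datasets; after training, sampling fresh noise yields images of "works" that never existed.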

GAN-generated movie posters based on a dataset of 50,000 images.