TASK
In this project, we worked in small teams to create a digital, responsive graphic novel that combines AI-generated imagery, storytelling, and web technologies. Using tools like DALL·E 3, Stable Diffusion, or Midjourney, we developed a consistent visual style and built a coherent narrative with recurring characters. Text and dialogue could be generated with language models such as GPT. The final product is a responsive web page built with HTML, CSS, and JavaScript, possibly enhanced with animation, interaction, or sound to support the story. The result should provide an intuitive, cross-device reading experience. Deliverables include the finished webpage, a README describing the story, process, and tools used, a visual map of the narrative flow, and a suitable LICENSE file.
IDEA / STORY
Our idea was not simply to create a traditional graphic novel in digital form, but to tell the story in a slightly different way: in the style of a stop-motion film. Content-wise, we came up with a small love story between two trees that is ultimately destroyed by human interference. This became a creative and aesthetic exploration of the relationship between humans and nature, and of humanity's relentless impact on it. An important aspect of our concept is that we used AI to generate and edit the story in a way that makes it loopable. Integrated text inserts guide the reader/viewer more explicitly through the narrative. We also aimed to preserve the fundamental character of a comic or graphic novel by varying the size of the frames.
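A minimal sketch of how such a loopable sequence could be wired up in JavaScript, assuming hypothetical scene and text entries (the actual assets and wording differ): image scenes and text inserts alternate, and advancing past the last entry wraps back to the first, so the story never "ends".

```javascript
// Sketch of a loopable story sequence: image scenes and text inserts
// alternate, and advancing past the last entry wraps to the first.
// All scene names and texts here are illustrative placeholders.
class StorySequence {
  constructor(entries) {
    this.entries = entries;
    this.index = 0;
  }
  current() {
    return this.entries[this.index];
  }
  next() {
    // Modulo wraparound is what makes the narrative loop seamlessly.
    this.index = (this.index + 1) % this.entries.length;
    return this.current();
  }
}

const story = new StorySequence([
  { type: "text",  content: "Two trees grow side by side..." },
  { type: "scene", src: "scene-01.png" },
  { type: "scene", src: "scene-02.png" },
  { type: "text",  content: "Then the machines arrive." },
  { type: "scene", src: "scene-03.png" },
]);
```

In the browser, a timer or scroll handler would call `next()` and swap the displayed panel or text insert accordingly.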
PROCESS
First, we asked ourselves which tools we could use to bring our idea to life. After some experimentation, we ultimately decided to use Midjourney. However, since even Midjourney didn’t always deliver the best results, we had to edit the images afterward using Adobe Photoshop. To manage the workload more efficiently, we decided to split the tasks: Linda and Kirsten were responsible for generating the images and scenes, while Ivo (that’s me) took care of everything related to code and text.
PROBLEMS / SOLUTIONS / THOUGHTS
Our first problem was choosing a tool, as the sheer number of image-generation programs quickly overwhelmed us. After some time and experimentation, we ultimately chose Midjourney. However, Midjourney has a weakness when it comes to generating consistent images. We were able to work around this problem using Adobe Photoshop, which allowed us to manually create beautiful frame-by-frame animations. Another obstacle was that Midjourney lacks a classic image-to-image model; instead, we used Midjourney's eraser tool to create subtle differences between the frames. In summary, while Midjourney is one of the better tools for AI-generated images, we were only able to achieve our goal by taking a roundabout approach. The image generation process was also very demanding in the long run, as Midjourney often misunderstood our prompts, requiring many manual adjustments. Personally, I found it difficult that the creative output came from the AI rather than from us. Nevertheless, we achieved our goal of creating a short animated film.
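The frame-by-frame playback described above can be sketched in a few lines of JavaScript. This is a hedged illustration, not our exact implementation: the file names are placeholders, and the frame rate is arbitrary. A set of near-identical frames (the Photoshop-edited Midjourney outputs) is cycled at a fixed interval to produce the stop-motion effect.

```javascript
// Sketch of frame-by-frame playback: near-identical frames are cycled
// at a fixed frame rate to create a stop-motion feel.
// File names and the fps value are illustrative placeholders.
function makeAnimator(frames, fps) {
  let index = 0;
  return {
    delay: 1000 / fps,  // milliseconds between frames, for setInterval
    tick() {
      // Return the frame to display now, then advance with wraparound.
      const frame = frames[index];
      index = (index + 1) % frames.length;
      return frame;
    },
  };
}

// In the browser (not runnable here), this would drive an <img> element:
// const anim = makeAnimator(["tree-01.png", "tree-02.png", "tree-03.png"], 8);
// setInterval(() => { img.src = anim.tick(); }, anim.delay);
```

Keeping the timing logic separate from the DOM like this also makes it easy to test the wraparound behavior on its own.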