Ancient Vision is developed for PR201 as a group project limited to its contributors. It aims to create a web-based platform that lets users transform modern images into styles mimicking ancient artworks using deep learning techniques.
Ancient Vision’s goal is to apply styles from historical art movements (such as Renaissance, Baroque, Impressionist, and Ancient Egyptian) to user-uploaded images through deep learning models. The project replicates the intricate features of these art styles while keeping conversion fast enough for real-time preview.
- Image Upload: Users can upload images in formats such as JPEG and PNG.
- Style Selection: Users can choose from ancient art styles, including Renaissance, Baroque, Impressionist, and Egyptian.
- AI-Based Style Transfer: The platform utilizes Neural Style Transfer (NST) and Generative Adversarial Networks (GANs) to apply the chosen art style to the uploaded image.
- Customization Controls: Users can adjust the intensity of the applied style with sliders controlling brushstroke thickness, texture, and colour filters.
- Preview Function: A real-time preview is provided before the final conversion.
- High-Resolution Outputs: The converted images retain high resolution and quality.
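The intensity slider described above can be implemented as a simple linear blend between the original and the fully stylized image. A minimal sketch (the function name and the 0-to-1 intensity scale are illustrative assumptions, not the project's actual API):

```python
import numpy as np

def apply_style_intensity(original: np.ndarray, stylized: np.ndarray,
                          intensity: float) -> np.ndarray:
    """Linearly blend the stylized image back toward the original.

    intensity=0.0 returns the original, intensity=1.0 the fully
    stylized result; values in between soften the effect, which is
    what a style-strength slider typically controls.
    """
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be in [0, 1]")
    blended = ((1.0 - intensity) * original.astype(np.float32)
               + intensity * stylized.astype(np.float32))
    # Clamp back to the valid 8-bit range before converting.
    return np.clip(blended, 0, 255).astype(np.uint8)
```

Because the blend happens after style transfer, the slider can update the preview without re-running the model.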
- Convolutional Neural Networks (CNNs): Used for feature extraction from both the uploaded images and ancient artworks.
- Generative Adversarial Networks (GANs): Generate images that mimic the texture and colour balance of classical art styles.
- Neural Style Transfer (NST): Blends the content of the input image with the selected ancient art style.
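In the classic NST formulation (Gatys et al.), the style of an image is represented by Gram matrices of CNN feature maps, and the style loss compares those matrices between the generated image and the reference artwork. A minimal NumPy sketch, assuming the feature maps have already been extracted by a CNN such as VGG (the function names and normalization are illustrative):

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Channel-to-channel correlations of a feature map.

    features has shape (channels, height, width); the result is a
    (channels, channels) matrix capturing texture, not layout.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(gen_features: np.ndarray, style_features: np.ndarray) -> float:
    """Mean squared difference between the two Gram matrices."""
    diff = gram_matrix(gen_features) - gram_matrix(style_features)
    return float(np.mean(diff ** 2))
```

Minimizing this loss (together with a content loss on deeper-layer features) is what blends the input image's content with the chosen art style.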
- Image Upload: Users upload an image to the platform.
- Style Selection: Users choose the art style they want to apply.
- AI Style Transfer: Deep learning models apply the chosen style.
- Customization: Users can adjust the intensity and other visual elements of the style.
- Preview and Download: The platform provides a preview before downloading the final high-resolution image.
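The upload step restricts inputs to JPEG and PNG; one robust way to enforce that server-side is to check the files' magic bytes rather than trusting the filename extension. A small sketch (the function name is an illustrative assumption):

```python
from typing import Optional

def detect_image_format(data: bytes) -> Optional[str]:
    """Identify JPEG or PNG uploads by their leading magic bytes.

    Returns "jpeg", "png", or None so unsupported uploads can be
    rejected before they reach the style-transfer pipeline.
    """
    if data.startswith(b"\xff\xd8\xff"):        # JPEG SOI marker
        return "jpeg"
    if data.startswith(b"\x89PNG\r\n\x1a\n"):   # PNG signature
        return "png"
    return None
```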
This project is for academic purposes only and is restricted to the contributors listed above. It is not intended for public use or distribution.