2860. A new implementation combining U-Net and ResNet-type architectures for image-to-image translation
Invited abstract in session MC-4: New Trends in Generative Adversarial Networks and Deep Neural Networks, stream Recent Advancements in AI.
Monday, 12:30-14:00, Room: 1001 (building: 202)
Authors (first author is the speaker)
1. Erkut ARICAN, Bahcesehir University
2. Khaled Al Hariri, Computer Engineering, Bahçeşehir University
Abstract
Image-to-image translation is an area of computer vision that has become popular recently, with various studies proposing different implementations. Given a large dataset, image-to-image translation can produce different versions of the same image, each showing how the image would look in a different type of scene. This study explores the field and offers a new implementation intended to provide further insight into how images can be transformed. Our implementation is inspired by a well-known image-to-image translation project but differs in its generator architecture and normalization method. It uses a Conditional Generative Adversarial Network, which consists of a generator and a discriminator. For the generator, we combine the U-Net architecture with a ResNet-type architecture instead of using a U-Net alone. For the discriminator, we use the PatchGAN architecture, which divides the image into patches and classifies each patch as either real or fake. Generated images are currently evaluated manually, and the computed Structural Similarity Index is planned to be used in an automated evaluation system. We plan to continue improving our output and to evaluate our results in additional ways.
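As an informal illustration only (not the authors' code), the sketch below shows one way a pix2pix-style conditional GAN can pair a U-Net-shaped generator whose bottleneck uses ResNet-type residual blocks with a PatchGAN discriminator. All layer counts, channel sizes, class names, and the choice of instance normalization are assumptions made for this sketch, not details taken from the abstract.

```python
# Minimal sketch of a hybrid U-Net/ResNet generator and a PatchGAN
# discriminator for conditional image-to-image translation.
# All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """ResNet-type block: two 3x3 convs with an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)


class HybridUNetGenerator(nn.Module):
    """U-Net encoder/decoder with residual blocks in the bottleneck."""
    def __init__(self, in_ch=3, out_ch=3, base=64, n_res=4):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1),
                                   nn.LeakyReLU(0.2, True))
        self.down2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1),
                                   nn.InstanceNorm2d(base * 2), nn.LeakyReLU(0.2, True))
        self.resblocks = nn.Sequential(*[ResidualBlock(base * 2) for _ in range(n_res)])
        self.up1 = nn.Sequential(nn.ConvTranspose2d(base * 4, base, 4, 2, 1),
                                 nn.InstanceNorm2d(base), nn.ReLU(True))
        self.up2 = nn.Sequential(nn.ConvTranspose2d(base * 2, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        d1 = self.down1(x)                    # encoder feature, skip source 1
        d2 = self.down2(d1)                   # encoder feature, skip source 2
        b = self.resblocks(d2)                # ResNet-type bottleneck
        u1 = self.up1(torch.cat([b, d2], 1))  # U-Net skip: concatenate encoder feature
        return self.up2(torch.cat([u1, d1], 1))


class PatchGANDiscriminator(nn.Module):
    """Scores overlapping patches of (input, output) pairs as real or fake."""
    def __init__(self, in_ch=6, base=64):
        super().__init__()
        self.model = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(base, base * 2, 4, 2, 1),
            nn.InstanceNorm2d(base * 2), nn.LeakyReLU(0.2, True),
            nn.Conv2d(base * 2, 1, 4, 1, 1),  # one logit per image patch
        )

    def forward(self, src, tgt):
        return self.model(torch.cat([src, tgt], 1))


if __name__ == "__main__":
    G, D = HybridUNetGenerator(), PatchGANDiscriminator()
    x = torch.randn(1, 3, 256, 256)           # conditioning (input) image
    fake = G(x)
    print(fake.shape, D(x, fake).shape)       # [1, 3, 256, 256], [1, 1, 63, 63]
```

For the SSIM-based evaluation mentioned in the abstract, a standard library routine such as skimage.metrics.structural_similarity could be applied to pairs of generated and target images; the abstract does not specify which implementation the authors intend to use.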
Keywords
- Artificial Intelligence
- Machine Learning
- Computer Science/Applications
Status: accepted