Hello! This post can be regarded as a revision of deep image inpainting for my old friends and an introduction to deep image inpainting for newcomers. I have written more than 10 posts on deep learning approaches to image inpainting. It’s time to briefly review what we have learned and also to offer newcomers a fast track to join us for fun!

What is Image Inpainting?

Figure 1. Examples of Image Inpainting Applications. Image by Jiahui Yu et al. from their paper, DeepFill v2 [13]

Image inpainting is the task of filling missing pixels in an image such that the completed image is realistic-looking and follows the original (true) context. Some applications such as unwanted object(s) removal and interactive image editing are shown in Figure…


Review: Free-Form Image Inpainting with Gated Convolution

Figure 1. Some free-form inpainting results by using DeepFill v2. Note that optional user sketch input is allowed for interactive editing. Image by Jiahui Yu et al. from their paper [1]

Hello guys! Welcome back! Today, we are going to dive into a very practical generative deep image inpainting approach named DeepFill v2. As mentioned in my previous post, this paper can be regarded as an enhanced version of DeepFill v1, Partial Convolution, and EdgeConnect. Simply speaking, the Contextual Attention (CA) layer proposed in DeepFill v1 and the concept of user guidance (optional user sketch input) introduced in EdgeConnect are embedded in DeepFill v2. Also, Partial Convolution (PConv) is modified into Gated Convolution (GConv), in which the rule-based mask update is replaced by a learnable gating that is passed to the next convolution layer. With…
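To make the gating idea concrete, here is a minimal PyTorch sketch of a gated convolution layer, assuming the common formulation output = activation(feature_conv(x)) ⊙ sigmoid(gating_conv(x)). The layer width, input channels, and the ELU activation below are illustrative choices, not the exact DeepFill v2 configuration.

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Minimal sketch of a gated convolution (GConv) layer.
    Unlike the rule-based mask update in partial convolution,
    the soft gate here is learned from data."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature_conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gating_conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.activation = nn.ELU()

    def forward(self, x):
        feat = self.activation(self.feature_conv(x))
        gate = torch.sigmoid(self.gating_conv(x))   # learnable soft mask in [0, 1]
        return feat * gate

# illustrative usage: masked image + binary mask + optional sketch channel = 5 input channels
x = torch.randn(1, 5, 256, 256)
y = GatedConv2d(5, 32)(x)
print(y.shape)  # torch.Size([1, 32, 256, 256])
```

The key design point is that the gate is recomputed at every layer from the features themselves, so "validity" becomes a soft, learnable notion instead of a hand-crafted rule.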


Hi guys:) Today, I would like to share how to install Anaconda and PyTorch (with/without GPU) in Windows 10 such that you can run different deep learning-based applications. Let’s start!

Image by the author. The image of the tiger is captured from [here] (CC0 Public Domain)

1. Install Anaconda

The first step is to install Anaconda so that you can create different environments for different applications. Note that different applications may require different libraries. For example, some may require OpenCV 3 and some require OpenCV 4. So, it is better to create a separate environment for each application.
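As a side note, once PyTorch is installed inside one of these environments (the steps are covered later in the post), a quick check like the following confirms whether the GPU build can actually see your card. This is just a generic sanity check, not a step from the official installation guide.

```python
# Quick sanity check after installing PyTorch in the active conda environment.
import torch

print(torch.__version__)                  # installed PyTorch version
print(torch.cuda.is_available())          # True only if the GPU build found a usable CUDA device
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first detected GPU
```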

Please click [here] to go to the official website of Anaconda. Then click “Download” as shown below.


Review - EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning

Figure 1. Some inpainting results by using the proposed approach (EdgeConnect). Left: input corrupted/masked images. Middle: completed edge maps (black: edges computed from the valid regions using the Canny edge detector; blue: edges generated for the missing regions using an edge generator). Right: filled images using the proposed EdgeConnect. Image by Kamyar Nazeri et al. from their paper [1]

Hello 👋 :). Today, we are going to dive into an inspirational deep image inpainting paper named EdgeConnect. Simply speaking, this paper adopts a very straightforward, easy-to-difficult approach to image inpainting. It first predicts the skeleton (i.e. edges/lines) of the missing regions, then fills in colors according to the generated skeleton. This is the “Lines first, Color next” approach. Figure 1 shows some of the inpainting results and the predicted edge maps using the proposed method. Do they look realistic? Let’s dive deeper into this paper and grasp its core ideas!
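As a rough illustration of the “lines first” stage, the sketch below prepares the edge-generator input from a masked image using OpenCV’s Canny detector. The file names, thresholds, and the interface of the edge generator itself are assumptions for illustration, not the paper’s exact pipeline.

```python
# Minimal sketch: build the "known" edge map that the edge generator conditions on.
import cv2
import numpy as np

img = cv2.imread('input.png')                               # hypothetical file name
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE) > 127   # True = missing pixel

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200) / 255.0                   # thresholds are a common default choice
edges[mask] = 0.0                                           # trust edges from valid pixels only

# A (hypothetical) edge generator would take (gray, edges, mask) and output a
# completed edge map; the second stage then fills colors guided by that edge map.
```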

Motivation

As mentioned in my…


Image Inpainting for Irregular Holes Using Partial Convolutions

Figure 1. Some inpainting results by using Partial Convolutions. Image by Guilin Liu et al. from their paper [1]

Hi. Today, I would like to talk about a good deep image inpainting paper that breaks some limitations of previous inpainting work. In short, most previous papers assume that the missing region(s) is/are regular (i.e. a central missing rectangular hole or multiple small rectangular holes), whereas this paper proposes a Partial Convolution (PConv) layer to deal with irregular holes. Figure 1 shows some inpainting results using the proposed PConv. Do they look good? Let’s grasp the main idea of PConv together!
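To make the idea concrete before the details, here is a minimal PyTorch sketch of a partial convolution layer, assuming the usual formulation: convolve over valid pixels only, re-normalise by the number of valid positions under the kernel, then update the mask with a hard rule. The bias handling and the paper’s exact hyper-parameters are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Minimal sketch of a partial convolution layer with rule-based mask update:
    a location becomes valid if at least one valid pixel was under the kernel."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=False)
        # fixed all-ones kernel used to count valid pixels under each window
        self.register_buffer('ones', torch.ones(1, 1, kernel_size, kernel_size))
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):                 # mask: (B, 1, H, W), 1 = valid
        out = self.conv(x * mask)               # ignore the hole pixels
        valid = F.conv2d(mask, self.ones, stride=self.stride, padding=self.padding)
        out = out * (self.ones.numel() / valid.clamp(min=1))   # re-normalisation
        new_mask = (valid > 0).float()          # rule-based mask update
        return out * new_mask, new_mask

# illustrative usage with a random irregular mask
x = torch.randn(1, 3, 256, 256)
mask = (torch.rand(1, 1, 256, 256) > 0.3).float()
y, m = PartialConv2d(3, 32)(x, mask)
print(y.shape, m.shape)
```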

Motivation

First of all, previous deep image inpainting approaches treat missing pixels and valid pixels the same in…


Review: Image Inpainting via Generative Multi-column Convolutional Neural Networks

Hello guys! Long time no see! Today, we are going to talk about another inpainting paper, Image Inpainting via Generative Multi-column CNNs (GMCNN). The network architecture used in this paper is similar to those of the papers we have introduced before. The main contribution of this paper lies in several modifications to the loss function.
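To give a feel for the “multi-column” part of the architecture, the sketch below runs the same masked input through parallel branches with different kernel sizes and concatenates their features. The branch widths and kernel sizes are illustrative choices, not the paper’s exact configuration, and the loss modifications (the paper’s main contribution) are not shown here.

```python
import torch
import torch.nn as nn

class MultiColumnEncoder(nn.Module):
    """Sketch of the multi-column idea: parallel branches with different kernel
    sizes (hence different receptive fields) look at the same masked input,
    and their features are concatenated before decoding."""
    def __init__(self, in_ch=4, width=32):
        super().__init__()
        def branch(k):
            return nn.Sequential(
                nn.Conv2d(in_ch, width, k, padding=k // 2), nn.ReLU(inplace=True),
                nn.Conv2d(width, width, k, padding=k // 2), nn.ReLU(inplace=True),
            )
        self.branches = nn.ModuleList([branch(3), branch(5), branch(7)])

    def forward(self, x):                       # x: masked image + mask channel
        return torch.cat([b(x) for b in self.branches], dim=1)

feats = MultiColumnEncoder()(torch.randn(1, 4, 256, 256))
print(feats.shape)  # torch.Size([1, 96, 256, 256])
```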

Short Recall

As mentioned in my previous posts, how to make use of the information given by the remaining pixels in an image is crucial to superior image inpainting. A very straightforward approach to image inpainting is to directly copy the most similar image patches found in the image…


Review: Generative Image Inpainting with Contextual Attention

Welcome back guys! Happy to see you guys:) Last time, we saw how copy-and-paste can be embedded in CNNs for deep image inpainting. Did you get the main idea? If yes, good! If not, don’t worry! Today, we are going to dive into a breakthrough in deep image inpainting in which contextual attention is proposed. By using contextual attention, we can effectively borrow information from distant spatial locations to reconstruct the local missing pixels. This idea is more or less the same as copy-and-paste. Let’s see how they do that together!
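As a rough, single-image sketch of what “borrowing from distant locations” looks like in code, the function below matches hole features against background patches by normalised cross-correlation, softmaxes the scores, and pastes back a weighted combination of background patches. The shapes, patch size, and softmax scale are illustrative, and the real layer adds extra machinery (e.g. attention propagation and multi-scale handling) that is omitted here.

```python
import torch
import torch.nn.functional as F

def contextual_attention(fg, bg, mask, patch=3, softmax_scale=10.0):
    """Toy single-image contextual attention: match hole features against
    background patches (cosine similarity), softmax the scores, then paste
    back a weighted combination of background patches."""
    # extract background patches and reshape them into convolution kernels
    kernels = F.unfold(bg, patch, padding=patch // 2)                 # (1, C*p*p, N)
    n = kernels.shape[-1]
    kernels = kernels.transpose(1, 2).reshape(n, bg.shape[1], patch, patch)
    norm = kernels.flatten(1).norm(dim=1).clamp(min=1e-4).view(-1, 1, 1, 1)
    # similarity scores between every location in fg and every bg patch
    scores = F.conv2d(fg, kernels / norm, padding=patch // 2)         # (1, N, H, W)
    attn = F.softmax(scores * softmax_scale, dim=1)                   # attention over bg patches
    # "paste": deconvolve the attention map with the original bg patches
    out = F.conv_transpose2d(attn, kernels, padding=patch // 2) / (patch * patch)
    return out * mask + fg * (1 - mask)                               # replace only the hole

# illustrative shapes: 64-channel feature maps at 32x32 with a 16x16 hole
fg = torch.randn(1, 64, 32, 32)     # features of the image containing the hole
bg = torch.randn(1, 64, 32, 32)     # features of the known (background) region
mask = torch.zeros(1, 1, 32, 32)
mask[..., 8:24, 8:24] = 1           # 1 = missing
print(contextual_attention(fg, bg, mask).shape)   # torch.Size([1, 64, 32, 32])
```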

Recall

In my previous post, I have introduced…


Hello everyone:) Welcome back!! Today, we will dive into a more specific deep image inpainting technique, Deep Feature Rearrangement. This technique combines the advantages of modern data-driven CNNs with those of the conventional copy-and-paste inpainting approach. Let’s learn and enjoy together!
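To illustrate the copy-and-paste flavour at the feature level, here is a toy sketch that, for every hole location, finds the most similar known-region feature (by cosine similarity) and copies it over. This is only meant to convey the idea of rearranging deep features; it is not the paper’s actual procedure.

```python
import torch
import torch.nn.functional as F

def rearrange_features(feat, mask):
    """Toy feature-level copy-and-paste: for each hole location, copy the
    feature vector of the most similar known location (cosine similarity)."""
    b, c, h, w = feat.shape
    flat = feat.view(b, c, -1)                                # (B, C, HW)
    unit = F.normalize(flat, dim=1)                           # unit-length feature vectors
    sim = torch.bmm(unit.transpose(1, 2), unit)               # (B, HW, HW) cosine similarities
    hole = mask.view(b, 1, -1) > 0.5                          # (B, 1, HW), True = missing
    sim = sim.masked_fill(hole, -1e4)                         # never borrow from the hole itself
    best = sim.argmax(dim=2)                                  # best known location for every position
    copied = torch.gather(flat, 2, best.unsqueeze(1).expand(b, c, -1))
    out = torch.where(hole, copied, flat)                     # replace hole features only
    return out.view(b, c, h, w)

# illustrative usage on a small feature map with a square hole
feat = torch.randn(1, 64, 16, 16)
mask = torch.zeros(1, 1, 16, 16)
mask[..., 4:12, 4:12] = 1
print(rearrange_features(feat, mask).shape)   # torch.Size([1, 64, 16, 16])
```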

Recall

This is my fifth post related to deep image inpainting. In my first post, I introduced the objective of image inpainting and the first GAN-based image inpainting method. In my second post, we went through an improved version of the first GAN-based image inpainting method in which a texture network is employed to enhance the local texture details. In my…


Welcome back guys:) Today, I would like to revise the deep image inpainting approaches we have talked about so far. I also want to review one more image inpainting paper to consolidate our knowledge of deep image inpainting. Let’s learn and enjoy!

Recall

Here, let’s first briefly recall what we have learnt from previous posts.

Context Encoder (CE) [1] is the first GAN-based inpainting algorithm in the literature. It emphasizes the importance of understanding the context of the entire image for the task of inpainting, and a (channel-wise) fully-connected layer is used to achieve this. …
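For readers curious about what a channel-wise fully-connected layer looks like, here is a minimal PyTorch sketch: each channel gets its own dense mapping over spatial locations, so information can propagate across the whole feature map far more cheaply than a full fully-connected layer across all channels. The bottleneck size used below is an illustrative assumption.

```python
import torch
import torch.nn as nn

class ChannelWiseFC(nn.Module):
    """Sketch of a channel-wise fully-connected layer: one (H*W x H*W) dense
    mapping per channel, with no connections across channels."""
    def __init__(self, channels, height, width):
        super().__init__()
        self.h, self.w = height, width
        # one weight matrix per channel
        self.weight = nn.Parameter(torch.randn(channels, height * width, height * width) * 0.01)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)                 # (B, C, HW)
        out = torch.einsum('bcn,cnm->bcm', flat, self.weight)  # per-channel dense map
        return out.view(b, c, h, w)

# illustrative usage on a small encoder bottleneck
layer = ChannelWiseFC(512, 4, 4)
y = layer(torch.randn(2, 512, 4, 4))
print(y.shape)  # torch.Size([2, 512, 4, 4])
```

A 1x1 convolution typically follows such a layer to let information flow across channels again.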


Welcome back guys, I hope that the previous posts have aroused your curiosity about deep generative models for image inpainting. If you are new here, I highly recommend you skim through the previous posts here and here. As announced in the previous post, we will dive into another milestone in deep image inpainting today! Are you ready? Let’s start :)

*Image Inpainting and Image Completion represent the same task

Recall

Here is just a short recall of what we have learnt previously.

  • For image inpainting, texture details of the filled pixels are important. The valid pixels and the…

Chu-Tak Li

DO IT FIRST. ONLY U CAN DEFINE YOURSELF. I have started my PhD journey accidentally. To know more about me at: https://chutakcode.wixsite.com/website
