UniColor: A Unified Framework for Multi-Modal Colorization with Transformer
    ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2022)
 Zhitong Huang\(^{1*}\), Nanxuan Zhao\(^{2*}\), Jing Liao\(^{1\dagger}\)
\(^1\): City University of Hong Kong, Hong Kong SAR, China   \(^2\): University of Bath, Bath, United Kingdom
\(^*\): Both authors contributed equally to this research    \(^\dagger\): Corresponding author
Abstract
  
  
We propose UniColor, the first unified framework to support colorization in multiple modalities, including both unconditional and conditional ones such as stroke, exemplar, text, and even a mix of them. Rather than learning a separate model for each type of condition, we introduce a two-stage colorization framework that incorporates various conditions into a single model. In the first stage, multi-modal conditions are converted into a common representation of hint points. In particular, we propose a novel CLIP-based method to convert text to hint points. In the second stage, we propose a Transformer-based network composed of Chroma-VQGAN and Hybrid-Transformer to generate diverse and high-quality colorization results conditioned on hint points. Both qualitative and quantitative comparisons demonstrate that our method outperforms state-of-the-art methods in every control modality and further enables multi-modal colorization that was not feasible before. Moreover, we design an interactive interface that shows the effectiveness of our unified framework in practical use, including automatic colorization, hybrid-control colorization, local recolorization, and iterative color editing.
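The CLIP-based conversion of text to hint points can be illustrated with a small sketch: the grayscale image is split into patches, CLIP scores every patch against the prompt, and the centres of the best-matching patches become hint locations (the colour named in the prompt would then be assigned to them). The model name, patch size, and top-k below are assumptions for illustration, not the released implementation.

```python
# Sketch only: locate hint points for a text prompt via CLIP patch-prompt similarity.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def text_to_hint_locations(gray_img: Image.Image, prompt: str, patch: int = 64, topk: int = 5):
    """Return (row, col) centres of the image patches that best match the prompt."""
    w, h = gray_img.size
    patches, centres = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            patches.append(gray_img.crop((x, y, x + patch, y + patch)).convert("RGB"))
            centres.append((y + patch // 2, x + patch // 2))
    inputs = processor(text=[prompt], images=patches, return_tensors="pt", padding=True)
    with torch.no_grad():
        scores = model(**inputs).logits_per_text[0]  # prompt-to-patch similarity, shape (n_patches,)
    best = scores.topk(min(topk, len(patches))).indices.tolist()
    return [centres[i] for i in best]
```

For example, `text_to_hint_locations(gray, "a red car")` would return the locations most related to the car region, which can then be filled with the red hint colour.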
 
Methods
  
    
      Two-Stage Method
          
  
      Our framework consists of two stages. In the first stage, all of the different conditions (stroke, exemplar, and text) are converted to a common form of hint points. In the second stage, diverse results are generated automatically, either from scratch or conditioned on the hint points.
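As a concrete example of the first stage, a colour stroke can be snapped onto a coarse grid so that every grid cell the stroke passes through yields one hint point. The grid size and data layout below are illustrative assumptions rather than the released code.

```python
# Sketch only: convert a user-drawn colour stroke into grid-aligned hint points.
import numpy as np

def stroke_to_hints(stroke_rgb: np.ndarray, stroke_mask: np.ndarray, cell: int = 16):
    """stroke_rgb: (H, W, 3) float colours; stroke_mask: (H, W) bool, True where the stroke is drawn."""
    h, w = stroke_mask.shape
    hints = []  # each hint: ((row, col) of the cell centre, mean stroke colour in that cell)
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            m = stroke_mask[y:y + cell, x:x + cell]
            if m.any():
                color = stroke_rgb[y:y + cell, x:x + cell][m].mean(axis=0)
                hints.append(((y + cell // 2, x + cell // 2), color))
    return hints
```

Exemplar and text conditions are reduced to the same ((row, col), colour) representation, so the second-stage network only ever sees hint points regardless of the original modality.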
     
   
  
    
      Network Architecture
          
  
      Architecture of the unified diverse colorization network. The network consists of two sub-nets: (a) a Chroma-VQGAN that disentangles the chroma representation from the continuous gray features and quantizes it into discrete tokens, and (b) a Hybrid-Transformer that generates diverse colorization results from the unified hint-point conditions and the continuous gray features.
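The figure can be read as two modules, sketched below in PyTorch under assumed channel sizes, codebook size, and layer counts (these values are illustrative, not the paper's): the Chroma-VQGAN encodes the gray image into continuous features and the chroma channels into discrete codebook tokens, while the Hybrid-Transformer predicts chroma tokens from the gray features, the hint tokens, and previously generated tokens.

```python
# Structural sketch only: shapes and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn

class ChromaVQGAN(nn.Module):
    """Continuous gray encoder + quantized chroma encoder + decoder back to chroma (ab)."""
    def __init__(self, dim=256, codebook_size=512):
        super().__init__()
        self.gray_enc = nn.Sequential(nn.Conv2d(1, dim, 4, 4), nn.ReLU(), nn.Conv2d(dim, dim, 4, 4))
        self.chroma_enc = nn.Sequential(nn.Conv2d(2, dim, 4, 4), nn.ReLU(), nn.Conv2d(dim, dim, 4, 4))
        self.codebook = nn.Embedding(codebook_size, dim)
        self.dec = nn.Sequential(nn.ConvTranspose2d(2 * dim, dim, 4, 4), nn.ReLU(),
                                 nn.ConvTranspose2d(dim, 2, 4, 4))

    def quantize(self, z):
        # nearest-codebook-entry lookup (straight-through gradient trick omitted for brevity)
        b, c, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, c)
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=1)
        zq = self.codebook(idx).reshape(b, h, w, c).permute(0, 3, 1, 2)
        return zq, idx.reshape(b, h, w)

    def forward(self, gray, chroma):
        f_gray = self.gray_enc(gray)                        # continuous gray features
        zq, idx = self.quantize(self.chroma_enc(chroma))    # discrete chroma tokens
        ab = self.dec(torch.cat([f_gray, zq], dim=1))       # reconstructed chroma channels
        return ab, idx, f_gray

class HybridTransformer(nn.Module):
    """Predicts a distribution over chroma tokens at every location (causal masking omitted)."""
    def __init__(self, dim=256, codebook_size=512, n_layers=4):
        super().__init__()
        self.tok = nn.Embedding(codebook_size + 1, dim)     # +1 reserves an "unknown / no hint" token
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(dim, codebook_size)

    def forward(self, gray_feat, hint_tokens, prev_tokens):
        # flatten the spatial grids to sequences and fuse the three streams by addition
        g = gray_feat.flatten(2).transpose(1, 2)
        x = g + self.tok(hint_tokens) + self.tok(prev_tokens)
        return self.head(self.blocks(x))                    # logits over the chroma codebook
```

At sampling time, drawing different token sequences from these logits is what yields the diverse colorizations shown in the results.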
     
   
  
 
  
  
Results
  
    
      Unconditional
          
  
      Comparison with unconditional colorization methods: InstColor [Su et al. 2020], ChromaGAN [Vitoria et al. 2020], and ColTran [Kumar et al. 2021]. Our model can generate diverse results for each of the input grayscale images.
     
   
  
    
      Stroke-Based
          
  
      Comparison with the stroke-based method (User-guided [Zhang et al. 2017]).
     
   
  
    
      Exemplar-Based
          
  
      Comparison with exemplar-based methods (Deep Exp. [He et al. 2018] and Exp. Video [Zhang et al. 2019]). The reference images are shown in the bottom-right corner of each input grayscale image.
     
   
  
    
      Text-Based
          
  
      Comparison with the text-based colorization method Learn-Color-Lang [Manjunatha et al. 2018]. We show the comparison in the first group, and our diverse results in the second group.
     
   
  
 
  
  
  
  
Application
  
    
      Hybrid Control
          
  
      Colorization of legacy photos with hybrid controls. In each row, we first show the input grayscale photo, followed by four design cases under different hybrid conditions. We indicate stroke-based conditions with blue boundaries, exemplar-based conditions with green boundaries, and text-based conditions with orange boundaries.
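One way to realize such hybrid control is to reduce every condition to hint points first and then merge them, letting a later (higher-priority) condition override an earlier one that falls in the same grid cell. The function below is a hypothetical illustration of that merging step, not part of the released interface.

```python
# Sketch only: merge hint points coming from different modalities for hybrid control.
def merge_hints(*hint_groups, cell: int = 16):
    """Each group is a list of ((row, col), color); later groups win ties within a grid cell."""
    merged = {}
    for group in hint_groups:                 # e.g. exemplar hints, then text hints, then strokes
        for (r, c), color in group:
            merged[(r // cell, c // cell)] = ((r, c), color)
    return list(merged.values())

# usage (names are illustrative): hints = merge_hints(exemplar_hints, text_hints, stroke_hints)
```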
     
   
  
    
      Recolorization
          
  
      Our framework also supports recolorization. For each design case, the middle column shows the selected region (upper row) and the input condition (lower row).
     
   
  
    
      Iterative Editing
          
  
      Iterative editing on legacy photos. Users can iteratively adjust colors with various conditions.
     
   
  