DewarpNet: Single-Image Document Unwarping With Stacked 3D and 2D Regression Networks

Sagnik Das*     Ke Ma*     Zhixin Shu     Dimitris Samaras     Roy Shilkrot

Stony Brook University, New York, USA

Abstract: Capturing document images with hand-held devices in unstructured environments is a common practice nowadays. However, “casual” photos of documents are usually unsuitable for automatic information extraction, mainly due to physical distortion of the document paper, as well as various camera positions and illumination conditions. In this work, we propose DewarpNet, a deep learning approach for document image unwarping from a single image. Our insight is that the 3D geometry of the document not only determines the warping of its texture but also causes the illumination effects. Therefore, our novelty lies in the explicit modeling of the 3D shape of the document paper in an end-to-end pipeline. Also, we contribute the largest and most comprehensive dataset for document image unwarping to date – Doc3D. This dataset features multiple ground-truth annotations, including 3D shape, surface normals, UV maps, and albedo images. Training with Doc3D, we demonstrate state-of-the-art performance for DewarpNet with extensive qualitative and quantitative evaluations. Our network also significantly improves OCR performance on captured document images, decreasing character error rate by 42% on average.
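To illustrate the final unwarping step described above: once a network predicts a backward map (telling each output pixel where to sample from in the warped input photo), the rectified document is produced by a simple remapping. The sketch below is not the authors' implementation; it is a minimal NumPy version assuming a backward map with normalized coordinates and nearest-neighbor sampling (the paper's pipeline uses learned regression networks and bilinear sampling).

```python
import numpy as np

def unwarp(image, bmap):
    """Unwarp a document image given a backward map.

    image: (H, W, C) array, the warped input photo.
    bmap:  (Ho, Wo, 2) array of normalized (x, y) coordinates in [0, 1];
           bmap[i, j] gives the location in `image` that output pixel
           (i, j) samples from (nearest-neighbor, for brevity).
    """
    H, W = image.shape[:2]
    # Scale normalized coordinates to pixel indices and clamp to bounds.
    xs = np.clip(np.round(bmap[..., 0] * (W - 1)).astype(int), 0, W - 1)
    ys = np.clip(np.round(bmap[..., 1] * (H - 1)).astype(int), 0, H - 1)
    return image[ys, xs]
```

With an identity backward map this returns the input unchanged; a map predicted from a curled page would pull the distorted texture back onto a flat rectangle.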

PDF   Supplementary   Code   Demo   Dataset  

Cite Our Work

If this project is useful to you, please consider citing our paper:

@InProceedings{Das_2019_ICCV,
author = {Das, Sagnik and Ma, Ke and Shu, Zhixin and Samaras, Dimitris and Shilkrot, Roy},
title = {DewarpNet: Single-Image Document Unwarping With Stacked 3D and 2D Regression Networks},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}


If you have any questions about the project, please feel free to contact:

Sagnik Das [email]

Ke Ma [email]

© CVLab@StonyBrook