Monocular depth cues, such as shading, are fundamental for resolving three-dimensional information, including an object's shape. Animal colour patterns may exploit this mechanism of depth perception, generating illusions of depth for functions such as camouflage. Reconstructing the potential percept produced by false depth cues is challenging, especially for non-human animal viewers. Here, we take a novel step towards solving this problem by leveraging state-of-the-art computer vision algorithms typically used for three-dimensional scene reconstruction. We used two approaches to single-image monocular depth estimation: intrinsic image decomposition and deep learning. We first examined how these models performed on images of natural three-dimensional surfaces that moth wing patterns may mimic. We then applied the models to the wing patterns of six moth species (Lepidoptera) with varying amounts of potential depth information. For one species, we also performed a multi-view reconstruction of the wing pattern to reveal the true (flat) wing shape. Intrinsic image decomposition, which is based on Retinex theory, was sensitive to both real depth cues and high-contrast patterns, whereas the deep-learning models responded only to moths with strong pictorial depth cues. Both approaches reveal how the interpretation of visual cues depends not only on the information available, but also on experience with the natural world.
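To make the Retinex-based idea concrete, the following is a minimal sketch (not the paper's actual pipeline) of single-scale Retinex decomposition: the Gaussian-smoothed log-image approximates slowly varying illumination (shading, a depth cue), and the residual approximates surface reflectance (the pattern itself). All function and parameter names here are illustrative assumptions; it also shows why a high-contrast flat pattern can leak into the shading estimate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=15.0):
    """Split a grayscale image into reflectance and shading estimates.

    Single-scale Retinex approximation: illumination is assumed to
    vary smoothly, so the low-pass (Gaussian-blurred) log-image is
    taken as shading and the residual as reflectance.
    """
    log_img = np.log1p(image.astype(float))
    log_shading = gaussian_filter(log_img, sigma=sigma)
    log_reflectance = log_img - log_shading
    return log_reflectance, log_shading

# Toy example: a flat striped "wing pattern" under a smooth
# illumination gradient mimicking three-dimensional shading.
x = np.linspace(0.0, 1.0, 128)
pattern = (np.sin(40 * x[None, :]) > 0).astype(float)  # high-contrast stripes
gradient = np.tile(x, (128, 1))                        # smooth shading cue
img = 0.2 + 0.8 * pattern * gradient
refl, shad = single_scale_retinex(img, sigma=15.0)
```

Because the stripes in this toy image are sharper than the Gaussian scale, most of their contrast lands in `refl`, while `shad` captures the smooth gradient; a real high-contrast moth pattern near the filter scale would partially contaminate the shading channel, illustrating the sensitivity noted above.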