Current state-of-the-art point rendering techniques such as splat rendering generally require very high-resolution point clouds to produce high-quality, photorealistic renderings. Such point clouds can be time-consuming to acquire and often require expensive high-end scanners. This paper proposes a novel deep learning-based approach that generates high-resolution, photorealistic point renderings from low-resolution point clouds. More specifically, we propose to use co-registered high-quality photographs as ground-truth data to train a deep neural network for point-based rendering. The proposed method generates high-quality point renderings very efficiently and can be used for interactive navigation of large-scale 3D scenes as well as for image-based localization. Extensive quantitative evaluations on both synthetic and real datasets show that the proposed method outperforms state-of-the-art methods.
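The supervised setup described in the abstract, training a network so that its rendering of a low-resolution point cloud matches a co-registered photograph, can be illustrated with a toy sketch. The architecture, feature dimensions, and optimizer below are illustrative assumptions, not the authors' actual pipeline; a tiny MLP stands in for the deep rendering network, and random arrays stand in for point features and photograph colors.

```python
import numpy as np

# Toy sketch of photograph-supervised point rendering (assumed setup,
# not the paper's architecture): map per-point features to pixel
# colours and minimise the L2 error against "ground truth" photo
# colours via gradient descent.

rng = np.random.default_rng(0)

# Fake data: 256 points, each with a 6-D feature (e.g. position +
# normal), and a target RGB colour sampled from the reference photo.
X = rng.normal(size=(256, 6))
Y = rng.uniform(size=(256, 3))

# One-hidden-layer MLP trained with plain gradient descent.
W1 = rng.normal(scale=0.1, size=(6, 32))
W2 = rng.normal(scale=0.1, size=(32, 3))
lr = 0.05

losses = []
for _ in range(200):
    H = np.maximum(X @ W1, 0.0)      # ReLU hidden layer
    pred = H @ W2                    # predicted pixel colours
    err = pred - Y                   # residual vs. the photograph
    losses.append(float(np.mean(err ** 2)))

    # Manual backpropagation through both layers.
    gW2 = H.T @ err / len(X)
    gH = (err @ W2.T) * (H > 0)
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

# The photometric loss should decrease as training proceeds.
```

In the real method the "prediction" would be a full rendered image compared against the registered photograph, but the training signal is the same: a per-pixel photometric loss driving the network weights.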
The Visual Computer – Springer Journals
Published: May 11, 2018