Team develops image deep learning technology to present VR and AR screens more vividly and realistically


The research team of Professor Jin Kyong-hwan from the Department of Electrical Engineering and Computer Science at Daegu Gyeongbuk Institute of Science and Technology (DGIST) has developed an image-processing deep learning technology that uses less memory while improving restored image quality by 3 dB over existing technologies.

Developed through joint research with Choi Kwang-pyo of Samsung Research, the technology reduces on-screen aliasing compared with existing signal processing-based image interpolation (bicubic interpolation), producing a more natural video output. In particular, it can clearly restore the high-frequency parts of an image, which helps display a natural picture when using VR or AR.

Bicubic interpolation, an image interpolation technique based on signal processing, reconstructs an image at a desired size by estimating the value at each specified position from its neighboring pixels. It has the advantage of saving memory and processing time, but it degrades quality and distorts the image.
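To make the baseline concrete, the sketch below implements the 1-D cubic convolution kernel (Keys, 1981) that bicubic interpolation applies along each image axis: each output value is a weighted sum of the four nearest samples. This is a generic illustration of the classical method the article describes, not code from the DGIST team.

```python
import math

def cubic_kernel(x, a=-0.5):
    # Keys (1981) cubic convolution kernel; a = -0.5 is the common choice
    # underlying standard bicubic interpolation.
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interp1d_cubic(samples, t):
    """Interpolate a 1-D signal at fractional position t from 4 neighbors."""
    i = math.floor(t)
    total = 0.0
    for k in range(i - 1, i + 3):
        idx = min(max(k, 0), len(samples) - 1)  # clamp indices at the borders
        total += samples[idx] * cubic_kernel(t - k)
    return total
```

Applying this kernel first along rows and then along columns yields bicubic interpolation of a 2-D image; the fixed, low-pass kernel is what makes the method cheap but prone to blurring high frequencies.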

To solve this problem, deep learning-based super-resolution image conversion technologies have been developed, but most are based on convolutional neural networks, which estimate the values between pixels inaccurately and can therefore distort the image. Implicit neural representation technology has attracted attention as a way to overcome these disadvantages, but it has its own drawback: it cannot capture high-frequency components and increases memory use and processing time.
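The idea behind an implicit neural representation is that a network maps a continuous coordinate directly to a pixel value, so the image can be queried at any resolution. The following is a minimal generic sketch with random (untrained) weights, purely to show the coordinate-to-value interface; it is not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny coordinate MLP: maps a continuous (x, y) position to an RGB value.
# Weights are random here for illustration; in practice they are trained so
# that implicit_rgb reproduces the target image at any queried resolution.
W1 = rng.normal(size=(2, 64))
b1 = np.zeros(64)
W2 = rng.normal(size=(64, 3))
b2 = np.zeros(3)

def implicit_rgb(coords):
    """coords: (N, 2) positions in [0, 1]^2 -> (N, 3) RGB values."""
    h = np.maximum(coords @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2

# Query the same "image" on an 8x8 and a 32x32 grid: no fixed pixel grid is
# baked into the model, only continuous coordinates.
lo = np.stack(np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8)), -1).reshape(-1, 2)
hi = np.stack(np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32)), -1).reshape(-1, 2)
```

A plain ReLU MLP like this is known to be biased toward smooth outputs, which is exactly the missing-high-frequency weakness the article describes.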

Professor Jin Kyong-hwan's research team has developed a technology that decomposes the image into multiple frequency components so that the characteristics of the high-frequency components can be expressed, and then reassigns coordinates to the decomposed frequencies using implicit neural representation technology so that the image can be rendered more clearly.

It can be described as a new technology that combines Fourier analysis, a classical signal-processing technique, with implicit neural representation. By resolving the essential frequency components when restoring an image through the network, it improves on implicit neural representations that previously could not restore high-frequency components.
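A common way to give a coordinate network access to high frequencies is to encode each coordinate with sine and cosine features before it enters the network. The sketch below shows this generic Fourier feature encoding; the frequency bank here is a placeholder, not the learned local frequency estimation of the published method (Lee et al., arXiv:2207.01831).

```python
import numpy as np

def fourier_features(coords, freqs):
    """Map continuous coordinates to sine/cosine features.

    coords: (N, d) positions; freqs: (d, m) frequency bank.
    Returns (N, 2m): [sin(2*pi*coords@freqs), cos(2*pi*coords@freqs)].
    """
    proj = 2.0 * np.pi * coords @ freqs
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

# Two 2-D coordinates encoded with an identity frequency bank
# (one unit frequency per axis, chosen only for illustration).
coords = np.array([[0.0, 0.0], [0.25, 0.5]])
freqs = np.eye(2)
encoded = fourier_features(coords, freqs)
```

Feeding `encoded` rather than raw coordinates into an MLP lets the network fit oscillatory (high-frequency) image content that a plain coordinate input would smooth away, which is the role frequency decomposition plays in the technique described above.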

Prof. Jin Kyong-hwan said, "The technology developed this time is excellent because it shows higher restoration performance and consumes less memory than the technology used in the existing image warping field. We hope the technology will be used for image quality restoration and image editing in the future, and that it will contribute to both academia and industry."

More information:
Jaewon Lee et al, Learning Local Implicit Fourier Representation for Image Warping, arXiv (2022). DOI: 10.48550/arxiv.2207.01831

Provided by DGIST (Daegu Gyeongbuk Institute of Science and Technology)

Quote: Team develops image deep learning technology to present VR and AR displays more vividly and realistically (December 14, 2022) Retrieved December 14, 2022 from 2022-12-team-image-deep-technology-vr.html

This document is subject to copyright. Except for fair use for purposes of private study or research, no part may be reproduced without written permission. The content is provided for information only.
