Researchers Use GPU to Train Invisible AI Keyboard
Released September 14, 2021

An invisible AI-enabled keyboard created in South Korea accelerates one of the most ubiquitous tasks of our time: grinding through texts on mobile phones.

The Invisible Mobile Keyboard, or IMK, created by researchers at the Korea Advanced Institute of Science and Technology, lets users blast through 51.6 words per minute while reporting lower strain levels.

To put the keyboard to work, users start typing. No keyboard pops up to obscure the screen. Typed words appear in the appropriate text box. Or users can opt to view a stream of text to check accuracy.

Rather than presenting users with an on-screen keyboard, IMK takes advantage of its user’s memory of where each imaginary key is in relation to all the others.

The heart of the system is what the researchers call a Self-Attention Neural Character Decoder, trained on GPUs and fueled by a vast user input dataset.

It consists of two decoder models.

The geometric decoder takes in a touch input sequence and converts it into a character sequence based on where each touch lands on the invisible keyboard layout.

The semantic decoder then corrects decoding errors in the character sequence estimated by the geometric decoder by taking semantic context into account.
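The two-stage idea can be illustrated with a minimal sketch. This is not the KAIST implementation (which uses self-attention networks trained on large datasets): the key positions, the toy vocabulary, and both decoding functions here are illustrative assumptions. A nearest-key lookup stands in for the geometric decoder, and an edit-distance match against a vocabulary stands in for the semantic decoder's language-model correction.

```python
import math

# Hypothetical key centers (x, y) for a few keys of an invisible QWERTY layout.
KEY_CENTERS = {
    "q": (0, 0), "w": (1, 0), "e": (2, 0), "r": (3, 0), "t": (4, 0),
    "a": (0, 1), "s": (1, 1), "d": (2, 1), "f": (3, 1), "g": (4, 1),
}

def geometric_decode(touches):
    """Map each touch point to the nearest imagined key (geometric stage)."""
    def nearest(point):
        return min(KEY_CENTERS, key=lambda k: math.dist(point, KEY_CENTERS[k]))
    return "".join(nearest(p) for p in touches)

VOCAB = {"sad", "see", "are", "tag"}  # toy stand-in for a language model

def semantic_correct(word):
    """Replace an out-of-vocabulary decode with the closest vocabulary word
    (a stand-in for the semantic decoder's correction step)."""
    if word in VOCAB:
        return word

    def distance(a, b):  # simple recursive Levenshtein distance
        if not a:
            return len(b)
        if not b:
            return len(a)
        cost = a[-1] != b[-1]
        return min(distance(a[:-1], b) + 1,
                   distance(a, b[:-1]) + 1,
                   distance(a[:-1], b[:-1]) + cost)

    return min(VOCAB, key=lambda v: distance(word, v))

# Imprecise touches aimed at "sad": the second touch drifts toward "q".
touches = [(1.1, 0.9), (0.3, 0.3), (2.2, 1.1)]
raw = geometric_decode(touches)       # -> "sqd"
print(raw, "->", semantic_correct(raw))  # prints: sqd -> sad
```

The geometric stage alone misreads the drifted touch as "q"; the semantic stage recovers "sad" because it is the only vocabulary word one edit away. The real system learns both stages jointly instead of using fixed key centers and a lookup table.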

The researchers trained the semantic decoder on the One Billion Word Benchmark, created by Cambridge University, the University of Edinburgh and Google in 2014.

The researchers then fine-tuned the two models together. Both were trained with the open-source PyTorch 1.4.0 library on a GeForce GTX 1080 Ti GPU.

Users can type whether an app is in landscape or portrait mode. Either way, the whole screen is available for text input.

The result: users could type 157.5% faster with the Invisible Mobile Keyboard than with third-party soft keyboards on their smartphones.

Featured image: vintage19_something, some rights reserved.

The post Researchers Use GPU to Train Invisible AI Keyboard appeared first on The Official NVIDIA Blog.



-- Source: http://feedproxy.google.com/~r/nvidiablog/~3/5JKFUz9VOtg/