A new study by a group of British researchers has revealed that artificial intelligence models can determine what users type on their keyboards – such as passwords – with very high accuracy, simply by listening to and analyzing the sounds of the keystrokes.
The study, published at the IEEE (Institute of Electrical and Electronics Engineers) European Symposium on Security and Privacy, warned that this technique poses a major threat to user security, because it can steal data through the microphones built into the electronic devices we use throughout the day.
But how does this technique work? What are the risks? And how can they be mitigated?
The researchers built an artificial intelligence model that recognizes the sounds of typing on the keyboard of an Apple MacBook Pro. After training the model on keystrokes recorded by a nearby phone, it could determine which key was pressed with an accuracy of up to 95%, based solely on the sound of the keystroke.
The researchers also found that when the sound-classification model was trained on audio captured from the computer during Zoom calls, prediction accuracy fell only slightly, to 93% – still an alarmingly high figure, and a record for this attack medium.
To collect the training data, the researchers pressed each of 36 keys on the MacBook Pro's keyboard 25 times, using different fingers and varying degrees of pressure, and recorded the sound of each press both with a smartphone placed near the keyboard and through a Zoom call running on the computer.
They then produced waveforms and spectrogram images from the recordings, showing the distinctive differences between keys, and applied data-processing steps to isolate and augment the signals used to identify each keystroke.
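The isolation-and-spectrogram step described above can be sketched roughly as follows. This is a minimal illustration, not the study's actual pipeline: the function names, window sizes, and thresholds are assumptions, and the "keystrokes" are synthetic noise bursts standing in for real recordings.

```python
import numpy as np
from scipy import signal

def isolate_keystrokes(audio, sr, win_ms=10, threshold=0.1, clip_ms=300):
    """Find high-energy bursts (candidate keystrokes) via short-time energy thresholding."""
    win = int(sr * win_ms / 1000)
    energy = np.array([np.sum(audio[i:i + win] ** 2)
                       for i in range(0, len(audio) - win, win)])
    energy = energy / (energy.max() + 1e-12)
    clips, half, i = [], int(sr * clip_ms / 2000), 0
    while i < len(energy):
        if energy[i] > threshold:
            centre = i * win + win // 2
            clips.append(audio[max(0, centre - half):centre + half])
            i += clip_ms // win_ms  # skip past the rest of this keystroke
        else:
            i += 1
    return clips

def spectrogram_features(clip, sr):
    """Turn one keystroke clip into a log-magnitude spectrogram image."""
    f, t, S = signal.spectrogram(clip, fs=sr, nperseg=256, noverlap=192)
    return np.log(S + 1e-12)

# Synthetic demo: two 10 ms bursts of noise in a second of silence,
# standing in for two keystrokes captured by a nearby microphone.
sr = 44100
audio = np.zeros(sr)
rng = np.random.default_rng(0)
for start in (0.2, 0.6):
    s = int(start * sr)
    audio[s:s + sr // 100] = rng.normal(0, 1, sr // 100)

clips = isolate_keystrokes(audio, sr)
print(len(clips))  # two keystrokes detected
```

Each resulting spectrogram is then treated as an image that a classifier can learn to associate with a specific key.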
When the model was tested on this data, it identified the correct key from smartphone recordings with 95% accuracy, from Zoom call recordings with 93%, and from Skype call recordings with 91.7% – lower, but still very high and worrying.
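The study's classifier is a deep image model trained on keystroke spectrograms; as a hypothetical, much-simplified stand-in, the toy below illustrates the evaluation setup – 36 keys, 25 presses each, split into training and test presses – using synthetic "spectral fingerprints" and a nearest-centroid classifier. The features and classifier here are assumptions for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)
NUM_KEYS, PRESSES = 36, 25  # 36 keys, 25 presses per key, as in the study

# Synthetic stand-in: each key gets a characteristic "spectral fingerprint"
# plus per-press noise; real features would be keystroke spectrograms.
fingerprints = rng.normal(0, 1, (NUM_KEYS, 64))
X = np.repeat(fingerprints, PRESSES, axis=0) \
    + rng.normal(0, 0.3, (NUM_KEYS * PRESSES, 64))
y = np.repeat(np.arange(NUM_KEYS), PRESSES)

# Split: 20 presses per key for training, 5 for testing.
train = np.tile(np.arange(PRESSES) < 20, NUM_KEYS)
Xtr, ytr, Xte, yte = X[train], y[train], X[~train], y[~train]

# Nearest-class-centroid classifier (toy stand-in for the paper's deep model).
centroids = np.array([Xtr[ytr == k].mean(axis=0) for k in range(NUM_KEYS)])
pred = np.argmin(((Xte[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == yte).mean()
print(f"top-1 key accuracy: {accuracy:.1%}")
```

The point of the sketch is the threat model: once per-key sound signatures are learned, classifying a fresh keystroke reduces to matching it against known examples.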
The researchers say that with the growing use of video-conferencing tools such as Zoom, the spread of devices with built-in microphones, and the rapid development of artificial intelligence technologies, these attacks could harvest large amounts of user data, since passwords, discussions, messages, and other sensitive information can be captured with ease.
Unlike other side-channel attacks, which require special conditions and are constrained by data rate and distance, acoustic attacks have become much simpler thanks to the abundance of microphone-equipped devices capable of high-quality audio recording, especially given the rapid development of machine learning.
This is certainly not the first study of sound-based cyberattacks: many studies have shown how vulnerabilities in the microphones of smart devices and voice assistants, such as Alexa, Siri, and Google Assistant, can be exploited in cyberattacks. The real danger here is how accurate the AI models have become.
The researchers say they used the most advanced methods and artificial intelligence models available in their study, achieving the highest accuracy reported for such attacks to date.
Dr Ehsan Toreini, who was involved in the study at the University of Surrey, said: “These attacks and models will become more accurate over time, and as smart devices with microphones become more common in homes, there is an urgent need for public discussions about the governance of artificial intelligence.”
The researchers advised users concerned about these attacks to vary the way they type passwords – for example, using the Shift key to mix uppercase and lowercase letters with numbers and symbols – so that an eavesdropper cannot recover the whole password.
They also recommend using biometric authentication or password manager apps, so that sensitive information never has to be typed manually.
Other potential defenses include software that plays back fake keystroke sounds, or white noise that masks the sound of keys being pressed.
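The white-noise defense can be illustrated with a toy sketch: a keystroke is easy to isolate because its transient stands far above a silent background, and adding continuous noise removes that contrast. All signals here are synthetic, and the peak-to-background metric is an illustrative assumption rather than anything from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
sr = 44100

# Toy "keystroke" recording: one 10 ms transient in a second of silence.
audio = np.zeros(sr)
audio[sr // 2 : sr // 2 + sr // 100] = rng.normal(0, 0.5, sr // 100)

# Defensive masking: add continuous white noise so the keystroke transient
# no longer stands out against the background.
masked = audio + rng.normal(0, 0.5, sr)

def peak_to_background_ratio(x, sr):
    """Energy of the loudest 10 ms window relative to the median window."""
    win = sr // 100
    e = np.array([np.sum(x[i:i + win] ** 2)
                  for i in range(0, len(x) - win, win)])
    return e.max() / (np.median(e) + 1e-12)

print(peak_to_background_ratio(audio, sr))   # huge: keystroke trivially isolated
print(peak_to_background_ratio(masked, sr))  # small: transient blends into noise
```

A keystroke detector relying on energy thresholds has little to work with once the transient sits barely above the noise floor.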
In addition to the mechanisms proposed by the researchers, a Zoom spokesperson commented on the study to BleepingComputer, advising users to manually adjust the background noise suppression feature in the Zoom app, mute their microphone by default when joining a meeting, and mute it while typing during a meeting, to help protect their information against such attacks.