As if password authentication’s coffin needed any more nails, researchers in the UK have discovered yet another way to hammer one in. The technique, developed at Durham University, the University of Surrey, and Royal Holloway University of London, builds on previous work to produce a more accurate way to guess your password by listening to the sound of you typing it on your keyboard.
The slight differences in the sound each key makes are an unintentional leak of information, known as a “side channel”. Computers typically have lots of side channels, such as noises, heat, and changes in electromagnetic emissions, which can be hoovered up and analysed by adversaries to learn more about what’s happening on the computer.
Side channel research can get a little far-fetched and impractical at times, but it serves a useful purpose in improving our knowledge about what’s possible. However, this research is firmly rooted in the practical, starting with the decision to monitor sound rather than something more exotic.
The ubiquity of keyboard acoustic emanations not only makes them a readily available attack vector, but also prompts victims to underestimate (and therefore not try to hide) their output. For example, when typing a password, people will regularly hide their screen but will do little to obfuscate their keyboard’s sound.
The researchers also used real-world attack scenarios, such as snooping on a laptop keyboard using the microphone of a smartphone in the same room, and capturing the sound of typing over a Zoom call.
As in so much cybersecurity research, Artificial Intelligence takes centre stage. The new password-busting technique uses Deep Learning (a form of AI that mimics the learning process of the human brain) to determine which of a keyboard’s 36 keys is being pressed. The algorithm was trained on 25 presses of each key on an Apple laptop, made with different fingers and varying pressure. The sound of each key press was processed extensively and turned into an image, which was then fed into a deep learning algorithm designed for image classification.
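To make the pipeline concrete, here is a minimal sketch of the approach described above, not the researchers’ own code: it cuts a recording into individual keystrokes, converts each one into a mel-spectrogram “image”, and classifies it with a small convolutional network standing in for the paper’s model. The file name, segment length, and network layout are illustrative assumptions, and it uses librosa for audio processing and PyTorch for the classifier.

```python
# Sketch only: isolate keystrokes, turn each into a spectrogram image,
# and classify the image as one of 36 keys (a-z plus 0-9).
import librosa
import numpy as np
import torch
import torch.nn as nn

N_KEYS = 36             # letters a-z plus digits 0-9, as in the study
SEGMENT_SECONDS = 0.33  # rough window around each keystroke (assumption)

def keystroke_spectrograms(wav_path):
    """Cut a recording into per-keystroke mel-spectrogram 'images'."""
    y, sr = librosa.load(wav_path, sr=None, mono=True)
    # Onset detection roughly locates each key press in the recording.
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
    half = int(SEGMENT_SECONDS * sr / 2)
    images = []
    for o in onsets:
        segment = y[max(0, o - half): o + half]
        mel = librosa.feature.melspectrogram(y=segment, sr=sr, n_mels=64)
        images.append(librosa.power_to_db(mel, ref=np.max))
    return images

class KeystrokeCNN(nn.Module):
    """Tiny image classifier standing in for the paper's deep learning model."""
    def __init__(self, n_classes=N_KEYS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):  # x: (batch, 1, n_mels, time)
        return self.classifier(self.features(x).flatten(1))

# Usage: classify the first detected keystroke (untrained model, so the
# prediction is meaningless until the network is trained on labelled presses).
images = keystroke_spectrograms("typing.wav")
model = KeystrokeCNN()
x = torch.tensor(images[0], dtype=torch.float32).unsqueeze(0).unsqueeze(0)
predicted_key = model(x).argmax(dim=1)
```

In practice the interesting work is in the training data and the audio clean-up, not the network: the researchers recorded labelled presses of every key and processed the sound heavily before classification, which is what the hypothetical `keystroke_spectrograms` step gestures at.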
Did it work? Yes, even over Zoom.
The method presented in this paper achieved a top-1 classification accuracy of 95% on phone-recorded laptop keystrokes, representing improved results for classifiers not utilising language models and the second best accuracy seen across all surveyed literature. When implemented on the Zoom-recorded data, the method resulted in 93% accuracy, an improved result for classifiers using such applications as attack vectors.
Research like this always raises the question: should you be worried? Most people have far more basic password problems to concern themselves with before fixing their noisy keyboards. However, not all targets of cybercrime are created equal, and there are nation state agencies willing to spend millions to compromise specific people. It is not difficult to imagine an intelligence agency using a technique like this, and it isn’t too far-fetched for industrial espionage either.
As the researchers point out, a deep learning engine trained on one laptop could probably guess passwords on other laptops of the same model, meaning “a successful attack on a single laptop could prove viable on a large number of devices.” It’s a hint that techniques like this could even be commodified one day.
If you are concerned about this, it’s worth noting that entering a password with a password manager makes almost no sound. However, if you think you’re genuinely at risk from techniques like this, you should be looking beyond passwords to things like passkeys anyway.