Brain-computer interfaces (BCIs) have to be one of my favourite pieces of futuristic tech around today. These are wonderfully named pieces of kit that allow a brain to interface with a computer directly. BCIs work wonders in the medical community, giving patients with even very limited mobility, such as those with locked-in syndrome, the tools to communicate with the world again.

The very idea of controlling computers with our minds is some seriously next-generation stuff, and I can’t wait to see it come to gaming. Until then, I’ll be out here celebrating every win I can find in the field, and that includes this implant spotted by Ars Technica that gives users the ability to speak at an almost natural pace.

Before his death in 2018, Stephen Hawking became as famous for his distinctive computer voice as he did for his contributions to science. What many don’t know is that it actually took a while for Hawking to turn a thought into speech. His system used a sensor that detected movements in his cheek muscle to select characters on a screen mounted on his glasses. While ingenious, this usually took Hawking about a minute to speak one word.

This new technology being worked on by neuroprosthetics researchers at UC Davis bypasses these – not to be tongue in cheek – older methods by connecting a neural prosthesis straight to the brain. It also doesn’t break words down into an alphabet for selection, and instead translates the brain signal directly into sounds. This is a more natural way of speaking, doesn’t rely on the user being able to spell, and is clearly much faster than what we’ve previously achieved.

The first tests of this tech required 256 microelectrodes to be implanted into the patient’s ventral precentral gyrus, a region at the front of the brain that oversees the vocal tract muscles. The signal then gets sent to a neural decoder powered by an AI algorithm. Because the algorithm isn’t just trained on text, it works much faster than ones looking for letters. It also allowed for more nuance, like changing pitch to indicate a question or a particular tone, and it was even able to use sounds like “hmm” naturally in conversation.
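If you’re wondering what “decoding straight into sounds” looks like in practice, here’s a rough Python sketch of the general shape of that kind of pipeline: a trained model maps each short window of electrode activity to acoustic parameters, which a vocoder would then turn into audio. To be clear, the model, the sizes, and the names below are my own illustrative assumptions, not the UC Davis team’s actual system.

```python
# Toy sketch of the general idea: decode each short window of electrode
# activity straight into acoustic parameters, rather than classifying letters.
# The shapes, the linear stand-in model, and the parameter count here are
# illustrative assumptions only.
import numpy as np

N_ELECTRODES = 256   # one feature per implanted microelectrode
N_ACOUSTIC = 32      # e.g. pitch, loudness, and spectral parameters for a vocoder
FRAME_MS = 10        # decode a fresh frame every ~10 ms for low latency

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(N_ACOUSTIC, N_ELECTRODES))  # stand-in for a trained decoder

def decode_frame(firing_rates: np.ndarray) -> np.ndarray:
    """Map one short frame of neural activity to acoustic parameters."""
    return W @ firing_rates  # a real system would use a trained neural network here

# Streaming loop: each decoded frame would be handed off to a vocoder for playback.
for _ in range(5):
    frame = rng.poisson(lam=3.0, size=N_ELECTRODES).astype(float)  # fake spike counts
    acoustic = decode_frame(frame)
    print(acoustic[:4])  # first few acoustic parameters for this frame
```

The point of the frame-by-frame setup is that nothing waits for a whole word to be spelled out, which is where the speed comes from.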

But perhaps most impressively, it was able to do this basically instantly. The latency delivered by this method was measured to be around 10 milliseconds. That’s about half a millennial pause, so no time at all, really.

The tech still has some serious limitations and is being worked on, but the results so far are promising: in testing, a patient went from being nearly unintelligible to holding complete scripted conversations that others were able to understand. When it came to unscripted speech, listeners still got about half of what the patient was trying to say, which feels like a huge step up from nothing.

Next, the team is looking to test its methods with improved tech. 256 is a fairly small number of electrodes for a task like this. For example, other interfaces like this one from the co-founder of Neuralink use 4,096 electrodes – though those are non-invasive, which means they’re likely further away from the information, and that can present its own problems. We’ve also seen things like these between-hair-follicle electrodes, which claim to get closer while still being non-invasive, and could be great for a task like this.

Obviously the goal here is restoring speech and agency to those who need it, so any and all efforts in testing and advancement of the tech are very welcome. I’m hoping I’ll never need this tech in a medical capacity, and can instead look forward to the time when it eventually comes to gaming. I can’t wait to think my way through dialog options one day in my gaming future.
