Oh, so joysticks aren’t good enough for you anymore, eh? You want to go all Jedi warrior and THROW TRUCKS with your MIND, do you?
Not kidding. Real thing.
The preorder form for Throw Trucks With Your Mind is 404-Page-Not-Found at this point. But in 2013, TTWYM was an early neurogame in which you donned a headset that enabled you to hurl trucks or other virtual objects simply by focused thinking.
Neurogaming, which has existed in nascent form for a few years now, involves the use of brain–computer interfaces (BCIs) such as electroencephalography (EEG) headsets.
Hurling virtual trucks isn’t the only thing that’s got researchers interested in deciphering brainwaves, of course.
Other companies have worked on designing games to assist patients with ailments ranging from Alzheimer’s disease to attention deficit hyperactivity disorder (ADHD).
At last year’s NeuroGaming Conference and Expo in San Francisco, Akili Interactive demonstrated a seemingly standard iPad game that measures the differences in brain activity between a normal child and one with ADHD or autism. As The Verge reported, the company claimed that it measures 65 different pieces of data every second through gameplay.
And then, of course, there’s security.
Scientists have been looking at unique brain activity as a potential authentication factor since at least 2007.
Recently, over the span of just 10 months, claims for so-called brainprints as an authentication biometric have climbed from 97% accuracy in testing to 100% (give or take the details of how such a system behaves when it fails, or when it's confronted with a person it has never seen).
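To make the idea concrete, here's a minimal sketch of how a brainprint check could work in principle: record a batch of EEG responses at enrollment, boil them down to a feature vector, then accept a later recording only if it matches the stored template closely enough. The feature extraction, the correlation metric and the 0.9 threshold below are illustrative assumptions, not details of the studies mentioned above.

```python
import numpy as np

def extract_features(epochs: np.ndarray) -> np.ndarray:
    """Collapse a stack of EEG epochs (trials x samples) into one feature vector.
    Real systems use much richer features (ERP components, spectral power, ...);
    averaging across trials is just the simplest stand-in."""
    return epochs.mean(axis=0)

def enroll(epochs: np.ndarray) -> np.ndarray:
    """Build and store the user's 'brainprint' template at enrollment time."""
    return extract_features(epochs)

def verify(template: np.ndarray, fresh_epochs: np.ndarray, threshold: float = 0.9) -> bool:
    """Accept if the new recording correlates strongly with the stored template.
    Where you set the threshold decides the trade-off between false accepts and
    false rejects - exactly the fine print behind any '100% accuracy' claim."""
    probe = extract_features(fresh_epochs)
    r = np.corrcoef(template, probe)[0, 1]
    return bool(r >= threshold)

# Toy usage with synthetic data: 40 trials of 256 samples each.
rng = np.random.default_rng(0)
template = enroll(rng.normal(size=(40, 256)))
print(verify(template, rng.normal(size=(40, 256))))  # random noise: almost certainly False
```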
But researchers say there’s one thing missing in this push to wire our brains to our online lives, be it for authentication or for gaming: plans to secure whatever gets scooped out of our wetware.
At the University of Washington, researchers have been trying to get out in front of this for a few years.
NPR quoted Howard Chizeck, a professor of electrical engineering, in a 2014 article about what happens when marketers manage to scan our brains.
Sound futuristic? It’s happening faster than we thought it would, he said:
A couple of the new products that have shown up are already along the pathway that I think we thought were a couple of years away.
Chizeck is working with a fellow engineer, Tamara Bonaci, and her team to study how invasive these brain sensors could become.
For the past few years, they’ve been working on a study, funded by the National Science Foundation, in which they strap BCIs to humans and have them play a video game the researchers have named “Flappy Whale.”
The team recently strapped Motherboard’s Victoria Turk into a BCI swimcap to play Flappy Whale, which is based on the addictive game Flappy Bird.
As she played, guiding a flopping blue whale through the on-screen course using arrow keys, the researchers were doing two things: they were recording her brainwaves, and they were flashing subliminal messages at her.
In Turk’s words:
As I happily played, trying to increase my dismal top score, something unusual happened. The logos for American banks started appearing: Chase, Citibank, Wells Fargo – each flickering in the top-right of the screen for just milliseconds before disappearing again. Blink and you’d miss them.
This is an approximation of the brain hacking that Chizeck and his team envision: hackers (or marketers) insert images into games or apps and record your brain’s nonvolitional response through the BCI.
Doing so would reveal, for example, which bank a target banks with, or which images they react to strongly. Take it into different realms, as the researchers have done, and you can see where this could get creepy and embarrassing.
Those subliminal bank logo images could be swapped with sexual images of men and women, for example. You wouldn’t even know when your brain was leaking your most intimate information, including your kinks or your sexual orientation. Ditto for images of religious icons or politicians.
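Mechanically, the attack has two halves: a game (or any app) that flashes probe images for a frame or so and logs exactly when each one appeared, and a recorder that later slices the raw EEG into short windows following each flash. Below is a rough sketch of that plumbing; the probe list, the flash rate and the render_frame callback are hypothetical, purely for illustration.

```python
import random
import time

# Hypothetical probes an attacker might embed in a game like Flappy Whale.
PROBES = ["chase_logo.png", "citibank_logo.png", "wellsfargo_logo.png"]

stimulus_log = []  # (timestamp, probe) pairs, aligned later with the EEG recording

def maybe_flash_probe(render_frame):
    """On roughly 1 in 20 frames, overlay a probe for a single ~16 ms frame
    and note exactly when it appeared."""
    if random.random() < 0.05:
        probe = random.choice(PROBES)
        stimulus_log.append((time.monotonic(), probe))
        render_frame(overlay=probe)   # hypothetical renderer: draws the probe in a corner
    else:
        render_frame(overlay=None)

def cut_epochs(eeg, eeg_start_time, sample_rate=256, window_s=0.8):
    """Slice the raw EEG (a 1-D array of samples) into 0-800 ms windows
    following each logged stimulus, labelled by which probe was shown."""
    epochs = []
    for t, probe in stimulus_log:
        start = int((t - eeg_start_time) * sample_rate)
        epochs.append((probe, eeg[start:start + int(window_s * sample_rate)]))
    return epochs
```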
Bonaci:
Broadly speaking, the problem with brain-computer interfaces is that, with most of the devices these days, when you’re picking up electric signals to control an application… the application is not only getting access to the useful piece of EEG needed to control that app; it’s also getting access to the whole EEG. And that whole EEG signal contains rich information about us as persons.
Bonaci’s team hasn’t perfected mind-reading, per se, but they have managed to pick up on personal preferences, and the research is still ongoing.
Here’s Bonaci again:
It’s been known in neuroscience for a while now that if a person has a strong emotional response to one of the presented stimuli, then on average 300 milliseconds after they saw a stimulus there is going to be a positive peak hidden within their EEG signal.
A peak, by itself, doesn’t reveal whether a person’s reaction is positive or negative. But given enough time, enough computer power and enough questions, Bonaci suggests, you can learn an awful lot about a person.
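In code, that's roughly what an eavesdropper would do with epochs collected as sketched above: average the trials for each stimulus to suppress noise, measure the size of any positive deflection in a window around 300 milliseconds after onset, and rank the stimuli by it. The sampling rate, window edges and baseline choice here are assumptions for illustration, not Bonaci's actual pipeline.

```python
import numpy as np

SAMPLE_RATE = 256  # Hz, assumed

def p300_score(epochs: np.ndarray) -> float:
    """Average all epochs for one stimulus, then return the peak amplitude in a
    250-450 ms post-stimulus window relative to an early baseline. A large score
    suggests the brain treated that stimulus as meaningful."""
    erp = epochs.mean(axis=0)                       # averaging suppresses non-stimulus-locked noise
    lo, hi = int(0.25 * SAMPLE_RATE), int(0.45 * SAMPLE_RATE)
    baseline = erp[: int(0.10 * SAMPLE_RATE)].mean()
    return float(erp[lo:hi].max() - baseline)

def rank_stimuli(epochs_by_stimulus: dict) -> list:
    """Order stimuli by P300 score; the top entry is the one the viewer reacted
    to most strongly - e.g. the logo of their own bank."""
    return sorted(epochs_by_stimulus,
                  key=lambda s: p300_score(np.asarray(epochs_by_stimulus[s])),
                  reverse=True)
```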
The researchers suggest that in the not-too-distant future, brain malware might hitch a ride on neurogames, much like the fake, malware-laced Pokémon apps that sprang up in app stores around the same time the real game was released.
As it is, researchers demonstrated in a 2013 paper that side-channel attacks against BCIs could extract private information such as PINs, credit card details, addresses, and birthdays “leaked” from brain signals.
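Put those pieces together and that result is easy to picture as a guessing game: flash candidate digits (or birth years, or card prefixes) and see which candidate the brain singles out. The toy below fakes the recordings entirely, injecting a small bump around 300 ms after onset for the one “recognised” digit, just to show the shape of the attack; nothing in it comes from the 2013 paper itself.

```python
import numpy as np

rng = np.random.default_rng(1)
SAMPLE_RATE = 256  # Hz, assumed; each epoch covers 0-800 ms after the flash

def fake_epochs(recognised: bool, trials: int = 30, samples: int = 205) -> np.ndarray:
    """Synthetic stand-in for real recordings: epochs for the recognised digit
    get a small positive bump injected around 275-350 ms after onset."""
    data = rng.normal(0.0, 1.0, size=(trials, samples))
    if recognised:
        data[:, 70:90] += 2.0
    return data

def p300_score(epochs: np.ndarray) -> float:
    erp = epochs.mean(axis=0)
    return float(erp[64:115].max() - erp[:26].mean())  # 250-450 ms window vs. early baseline

# Suppose the first digit of the victim's PIN is 7.
epochs_by_digit = {d: fake_epochs(recognised=(d == 7)) for d in range(10)}
best_guess = max(epochs_by_digit, key=lambda d: p300_score(epochs_by_digit[d]))
print("Attacker's guess for the first PIN digit:", best_guess)  # almost certainly 7
```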
Brainwave malware aside, Bonaci and Chizeck believe that we have more to fear from advertisers thirstily inserting their proboscises between our ears and draining away our privacy: imagine how finely targeted marketing could become if they could simply tap directly into our brains.
The Washington researchers have suggested a few ways we might tackle the future assault on the privacy of our brainwaves: one is a policy-oriented approach, in which lawyers, ethicists, and engineers figure out what’s acceptable to do with the data and then come up with something like an app store certificate for the apps that behave themselves accordingly.
The other approach is technical: it would entail filtering signals so apps could only access the specific data they require.
From their 2014 paper:
Unintended information leakage is prevented by never transmitting and never storing raw neural signals and any signal components that are not explicitly needed for the purpose of BCI communication and control.
A student in the lab is currently experimenting with such filtering.
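Here is a minimal sketch of what such a filter might look like, assuming a game that only needs band power from two motor-cortex channels to steer the whale: the raw samples never leave the filter, so stimulus-locked responses like a P300 to a flashed logo are simply never available to the application. The channel names, frequency bands and block interface are illustrative assumptions, not the UW lab's actual design.

```python
import numpy as np

SAMPLE_RATE = 256  # Hz, assumed

# The only output this hypothetical game is allowed to receive: mean power in the
# mu (8-13 Hz) and beta (13-30 Hz) bands over two motor-cortex channels.
ALLOWED_CHANNELS = ["C3", "C4"]
ALLOWED_BANDS = {"mu": (8, 13), "beta": (13, 30)}

def band_power(signal: np.ndarray, lo: float, hi: float) -> float:
    """Mean spectral power of `signal` between lo and hi Hz (simple FFT estimate)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    mask = (freqs >= lo) & (freqs < hi)
    return float(spectrum[mask].mean())

def bci_filter(raw_block: dict) -> dict:
    """Take one block of raw multichannel EEG ({channel name: samples}) and emit
    only the whitelisted features. Raw samples are neither stored nor forwarded,
    which is the 'never transmit, never store' idea from the quoted paper."""
    return {
        f"{ch}_{band}": band_power(np.asarray(raw_block[ch], dtype=float), lo, hi)
        for ch in ALLOWED_CHANNELS
        for band, (lo, hi) in ALLOWED_BANDS.items()
    }

# Toy usage: one second of synthetic EEG per channel.
rng = np.random.default_rng(2)
block = {ch: rng.normal(size=SAMPLE_RATE) for ch in ["C3", "C4", "Cz", "Pz"]}
print(bci_filter(block))  # only C3/C4 mu and beta power ever reach the game
```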
delayedthoughtengineering
As built-in cameras become more high-resolution, the same kind of “brain scanning” can occur in more conventional software by measuring changes in the pupil and iris, which are well known to react to things an individual likes and doesn’t like. Even eyelids can betray one’s feelings. In other words, direct brain scanning is not needed when other, subtler changes in the body can be measured.
Of course, external “tells” might not be 100% accurate, but applied across a large population they will generally be right often enough that the information gathered on most individuals is accurate enough to be useful to bad actors.
So, as always, remember to put a cover over your webcam when you aren’t using it.
Wowo Canuck
The voices show up too well trained in many cases of ‘schizophrenia’ for it to be schizophrenia; it would take time for a piece of someone’s brain to learn ventriloquism and other arts, and that is not what is observed. Also, you see the same tactics being used in many cases of ‘schizophrenia’. Voices growing up isolated from each other would take to very different forms of behavior; instead, you see many cases of extreme verbal abuse from the outset. You can verify both of these facts by going to schizophrenia forums and looking for accounts of when the voices first started. There are also numerous patents describing mind-control and thought-reading technologies. It is time for the mental health community to start separating the schizophrenics from the victims of mental-technology abuse.