By Justin Koplow
What do Elon Musk, Mark Zuckerberg, and a horde of zombies have in common? They are all coming for your braaains. Communicating thoughts to manipulate devices is moving from the realm of science-fiction toward reality thanks to ongoing development of brain-computer interface (BCI) technology. But before we fully blow the lid – or the skull – off of this matter, we need to take great care to understand where this technology could lead.
- BCI could lead to a new assault on a fundamental limit of privacy intrusion.
It seems to be a scholarly imperative to reference George Orwell’s maxim from 1984 that in a world of pervasive surveillance “nothing was your own except the few cubic centimetres inside your skull.” Not living in Eurasia or Oceania (or do we?), I’ve never been inclined to accept that statement as the maximum extent of privacy, much less of data protection. But I have taken some comfort that the skull represented, at least, the minimum amount of privacy we each enjoy. Even the oft-repeated stories of Target targeting ads for baby items at a woman who, herself, did not yet know she was pregnant represent the excellence of algorithms as predictors of behavior, or even of reality. But these algorithms are not perfect, nor are they free of bias. And as good as they might become, they are based on analysis of extrinsic factors and behaviors.
Brain-computer interfaces (a.k.a. neural-control interfaces (NCI), mind-machine interfaces (MMI), direct neural interfaces (DNI), or brain-machine interfaces (BMI)) fundamentally change that input structure. These technologies interpret the electrical impulses and signals created when the brain operates. It’s closer to a Fitbit for your brain than it is to artificial intelligence.
And like a Fitbit, that knowledge can be fine – if you are aware of and agree to how it is used; and, even more fundamentally, if you trust those who have access to the data. It’s one thing for a company to misuse how many calories you burned in an hour; it’s another for it to leak your actual deepest, darkest secrets.
- BCI has benefits, but a new field of privacy and ethical concepts needs to be developed.
The possibilities of BCI technology are abundant. Perhaps foremost, approximately 5.4 million people in the United States are living with paralysis, according to the Reeve Foundation (yes, that Christopher Reeve). BCIs have offered a tetraplegic the ability to control an artificial hand, as well as a computer cursor. Research with the Department of Veterans Affairs is exploring technologies to control robotic prosthetic limbs. Other forms of BCI technology have helped restore the vision of adults with acquired blindness, enabling one man to regain the ability to drive a car. These applications can demonstrably and dramatically improve many lives and deserve to be developed.
Most of these technologies have required “invasive BCIs” – sensors implanted on or near the brain. Obviously, this requires a great deal of care and expertise, putting the technologies far from ready for public consumption. But more elective uses of BCI are coming, too, as the technology comes unfettered from implants. Imagine being able to control your keyboard or mouse or remote control with only your mind. Such technology, especially in combination with interfaces like Google Glass, could free the employee from a desk environment.
- Maybe tinfoil hats will be the hot fashion trend of 2020.
Continuing the product development cycle, not only will BCI evolve to be “better, faster, cheaper” but it will also become inter-personal. While perhaps still years off, technology to access or broadcast memories, thoughts, or intentions of others – and to do so remotely, even discreetly – seems increasingly possible. The question for today is how we will deal with these issues when they come to pass. As reported by Sigal Samuel for Vox, Marcello Ienca, a neuroethicist and researcher at ETH Zurich, has proposed four rights he would see protected in law to avoid the exploitation of thoughts, which Ienca calls “neurocapitalism.”
- Right to cognitive liberty. That neurotechnology should be applied only with the consent of the data subject. For example, consider what the EU has done in the General Data Protection Regulation to regulate the validity of consent.
- Right to mental privacy. Similarly, that brain data should be shared or publicized only with the consent of the data subject. For example, consider the US Fifth Amendment and the uses of lie detector technology.
- Right to mental integrity. That the data subject should not be harmed through the operation of neurotechnology. For example, consider the examples and concerns of hacking into IoT devices, including connected cars or pacemakers, to seize control.
- Right to psychological continuity. Similarly, that “ownership,” in the sense of control of the neurotechnology – but also the technology’s alterations of personality – should remain with the data subject. For example, consider the addictive and personality-altering aspects of tobacco or opioids.
Even more fundamentally (and more speculatively), if we come to find ourselves in a world where all of our deepest, darkest thoughts can be readily accessed by others, how do we engage in society? Would the “end of secrets” begin an age of greater acceptance? Or would we go to ever greater lengths to defend against prying eyes, perhaps fully removing ourselves from society? What about law enforcement or national security? Would perfect awareness lead to greater safety or be used as a means to sniff out dissent against authoritarian control?
So what is to be done? Facial recognition systems present similar challenges to our notions of privacy and, in response, several jurisdictions have banned (or are considering banning) their use. Commercial and civil society groups have also proposed principles for the ethical use of facial recognition. To be clear, even the bans are seen as interim steps to buy time while more robust solutions are developed. It is foolish to try to stop entirely the development of technology; the Clearview AI developments and the Chinese gene-editing experiments demonstrate that there are always those looking to push the envelope. Ultimately, comprehensive and clear rules with the force of law are required to ensure that all actors comport with the reasonable expectations of society – and can be punished when they do not.
Justin Koplow, Three Thoughts on Privacy in the Coming Era of Brain-Computer Interface, SMU Sci. & Tech. L. Rev. Blog (2020), https://smulawjournals.org/stlr/2020/04/01/three-thoughts-on-privacy-in-the-coming-era-of-brain-computer-interface/.
About the Author
Justin Koplow is an adjunct professor at the Southern Methodist University Dedman School of Law and serves as the Senior Legal Counsel, Assistant Vice President – Data Protection Officer for AT&T. He previously worked in Assistant General Counsel positions at the Central Intelligence Agency and the Office of the Director of National Intelligence. He is a Certified Information Privacy Professional through the International Association of Privacy Professionals. He graduated cum laude from Georgetown University with a Bachelor of Arts in Russian and received a Juris Doctor degree from Georgetown Law. At SMU Dedman School of Law, he teaches a course titled Reflections of Global Privacy in Black Mirror.