I recently got an invitation to watch a debate between the UMBC philosophy and computer science departments regarding the ethics of brain-computer interfaces.

BCIs are those interfaces that seek to bypass human motor and sensory systems through direct connectivity to the central nervous system. The field is nascent, but it’s absolutely technologically feasible, and sci-fi fans the world over could probably enumerate the milestones leading to a dystopian future of human-powered machines and malevolent AIs. I’m generally more optimistic and endlessly intrigued by the philosophical questions such technology raises.

My thoughts on the subject are roughly summarized by the following thought snippets:

  • We’re brains in vats. Or at least, according to Descartes and a bunch of others, there’s no epistemologically sound argument to be made that we aren’t. There is nothing innately unethical about our circumstances as brains in vats, despite the so-called “deceptive deity” and “evil genius”. Therefore, assuming a technologically complete BCI, there are no ethical issues to be considered. After all, in an engineering ethics context, ethical problems dissipate as technical hurdles are overcome.
  • So what does it mean to be “technologically complete”? Well, the brain must not be able to tell it’s in a vat. This presents monumental technical questions, and the research toward answering those questions is fraught with ethical dilemmas. But then, so is any research field; hence our illustrious IRBs, right?
  • But the ethics of any given research question or methodology is too isolated to warrant much discussion. Plus, the details of those questions and methodologies are needed to make any meaningful assessment. So then, what are the deeper questions? Perhaps the following:
    • Is it ethical to bypass human motor and sensory systems?
    • What if those systems are not available to begin with?
    • Are our bodies an intrinsic part of what it means to be human?
    • If the body is intrinsic to humanity, then how do we conceptualize the myriad, variously-abled bodies that exist?
    • If the body is extrinsic, then how do we operationally define being human? I’m personally happy with cogito ergo sum, but how about everyone else?
    • If the complexities of the mind and its workings are core to our definition of being human, then what role does a direct computer interface play? Is it augmentative, transformative, or one and then the other?
    • If BCIs are augmentative, then we already have a framework by which to assess their ethics. They’re just tools.
    • If BCIs are transformative, and the mind is what defines us as human, then we have to ask if it’s ethical to change ourselves into something else. Is it ethical, in a sense, to take control and precipitate our own evolution? If so, how do we set out to do so responsibly?

That last bit is the real crux for me. The question of how far our influence over our own progression as a species can ethically stretch is a significant one. It might be too simplistic to proclaim that because we can, we should. I’d like to say that altruistic endeavors are self-justifying. But when we’re talking about endeavors with potentially ontological-level changes, with what framework can we even measure that altruism?