Dr Bruce Long

Will AI and AI-based neurological enhancement teach us the best ethical theory? Should we commit?

Updated: Oct 22, 2019

Objective

When AI and transhumanism meet, will a transhuman being whose perception and/or cognition has been enhanced by strong AI literally teach us how to do ethics? Can we develop a meta-ethical theory that allows us to evaluate the ethical theories that an AI-enhanced intelligence might propose as optimal? How, if at all, would it matter which conception of the nature of information was used?


The objective of this research project is to assess whether it is possible to use AI to develop, and reliably and safely apply, a meta-ethical theory in order to ratify an AI-proposed optimal normative ethics. A larger associated objective is to determine whether the choice of conception of information is salient, and, if so, in what way.



Background


Ethical questions about how to handle medium-to-strong artificial intelligences are already a hot topic in ethical theory and in philosophy generally.

There have been numerous recent proposals by industry technology leaders and scientists to accelerate the introduction of neurological implants for the digital cybernetic augmentation and enhancement of human cognitive and sensory-perceptual functions. Coupled with advances in areas such as retinotopic neural-computer interfaces and animal memory implants, these proposals make the prospect of weak, and prospectively strong (or at least stronger grades of weak), AI-based personal cognitive and perceptual enhancement more ethically pressing.


The history of ethical and moral theory is the story of failure and partial success. As with much of the social sciences, there is little we can do to satisfy the urge for a deductive, or even an inductive, solution. In fact, there isn’t really even a way to apply any statistics, nor, famously, any kind of calculus, to the problem (and Kantian ethicists and virtue ethicists are practised at reminding us of the fact!). At least not without committing ourselves to Millian premises that are open to argument.


The marked differences between normative ethical theories and their theories of the good demonstrate that either pluralism about ethics and morality is the only solution for human societies, or else we simply do not have the ability to determine the optimal normative ethical theory. Or else, perhaps, there objectively isn’t such a thing, and, owing to some fact about sentient psychologies or group epistemic updating that we do not understand, there never can be.

It’s already apparent that weak AI algorithms present new ethical challenges. It is one thing to ask how a self-driving automobile should choose who lives and dies in an accident. It’s another thing to ask how a strong AI, or an AI-enhanced human cognition with prospectively better ethical insights, would, or should, do this.


It is not clear that a human cognition enhanced by even a weak AI implant will make decisions in the same way. However, it is not impossible (it’s certainly conceivable) that an AI-enhanced transhuman person might tell us that there is in fact an optimal normative ethical calculus or system. They may even give it to us.



Problems and Open Questions


How would, or should, we deal with the following scenario:


We develop a strong AI enhancement for human cognition, and the enhanced person informs us that there is an optimal ethical theory. Delighted, we ask the enhanced person to regale us with the details, only to discover (in accordance with many science fiction plots!) that we simply cannot understand the theory or how its premises and hypotheses could possibly deliver the claimed or promised ethical outcomes.


Even if unenhanced cognition were never capable of understanding the theory and its premises, should we nevertheless commit ourselves to its application? Would it be ethical to do so without a proper understanding of why or how it works? We behave this way with many other artificial systems (or at least most of us do, most of the time), but ethical systems would seem to carry different epistemic, and ethical, demands where their ratification is concerned.


We may not even be able to believe that a normative theory developed by an AI-enhanced cognition would work. There may not be appropriate levels of credence available to ground a commitment on that basis. Submitting to it would then also potentially involve suspension of disbelief on a very large scale.
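To make the point concrete, here is a minimal decision-theoretic sketch in Python. The function name, payoff values, and status-quo baseline are purely illustrative assumptions, not anything proposed in the project itself; the point is only that whether committing to an opaque, AI-proposed ethical theory beats the status quo hinges on a credence threshold, and the scenario described is precisely one in which we have no principled way to locate ourselves relative to that threshold.

```python
# A minimal decision-theoretic sketch (illustrative values only): the
# rationality of committing to an opaque, AI-proposed ethical theory
# depends entirely on a credence we may have no principled way to fix.

def expected_utility_of_commitment(credence: float,
                                   benefit_if_it_works: float = 100.0,
                                   harm_if_it_fails: float = -300.0) -> float:
    """Expected utility of adopting the theory, given a credence that it works."""
    return credence * benefit_if_it_works + (1.0 - credence) * harm_if_it_fails

STATUS_QUO_UTILITY = 0.0  # baseline: keep our existing, imperfect ethical practice

# Sweep over possible credences: the verdict flips at a threshold (0.75 with
# these payoffs), but nothing in the scenario tells us which side of that
# threshold we are actually on.
for credence in (0.5, 0.7, 0.75, 0.8, 0.9):
    eu = expected_utility_of_commitment(credence)
    verdict = "commit" if eu > STATUS_QUO_UTILITY else "decline"
    print(f"credence={credence:.2f}  expected utility={eu:7.1f}  -> {verdict}")
```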


There is a very real sense in which the least capable cognitive and ethical agents would be required to ratify or decline an ethical system proposed by the most capable cognitive and ethical agents. It’s not clear that this is epistemically, ethically, or operationally a good, or even an advisable, outcome.

Perhaps most interestingly, would the meta-ethical and normative-ethical answers be different for unenhanced humans than for AI-enhanced transhumans? Such an outcome would suggest not only that a broad pluralism about ethics is the optimal and correct approach, but that this pluralism has a naturalistic, and indeed a reductive, basis.


Do we let the AI-enhanced transhuman being ‘have their head’? How would we even conduct an appropriate risk analysis if we did not understand the inputs and outputs adequately?


How would we decide?


Here is another scenario:


The strong-AI-enhanced transhuman not only delivers a normative ethical solution for us, but, realising that we are a bit too dull to comprehend it, also develops a large-scale simulation that we can watch ‘roll out’ in real time (perhaps with some of the features ‘dumbed down’ a bit, and with a good deal of coarse abstraction over finitary details). Everything in the simulation seems to demonstrate an improvement for everyone in the simulated society. However, there is still a problem. There remains an enormous epistemic gap for us: we simply cannot ascertain whether the simulation is a reliable predictor.


We now have the solution, and an apparent demonstration of its effectiveness, but we’re still no better equipped epistemically to decide whether to commit to it.


The project that our institute is interested in pursuing is one that involves meta-ethics for AI. Is it possible to develop such a meta-ethics without AI, and how would we know how to deploy it to determine whether an AI-generated normative ethical theory was correct? We also want to determine what kinds of assumptions and premises about the nature of information and information processing would be salient, and whether understanding how to treat information would make a difference to the outcomes.


It’s possible that the wrong conception of information could lead us to make the wrong decision. It’s also possible that it may not matter as much as we suspect. Either way, in the scenarios above, it is probably important to know which.



