There are many philosophical views about, and treatments of, artificial intelligence and its ethical problems. Some approaches embody scientistic enthusiasm and anticipate transhuman outcomes. They accept premises such as the extended-cognition thesis, according to which our cognition is already augmented by technologies like smartphones and the internet. Normative ethics of information, computing, and AI done against this metaphilosophical background takes seriously anticipated problems like the singularity: the point in history at which strong AI attains human-level intelligence and then surpasses it. Other theorists are more conservative and cautious. The philosopher and cognitive scientist Daniel Dennett, for example, has warned that philosophical treatments of the singularity are currently little more than fanciful speculation.
Either way, existing low-grade unsupervised and Bayesian black-box algorithms in machine learning and deep learning applications, which are far from any realisation of the singularity or of human-level intelligence, already have significant ethical implications. (The question of AI consciousness is another, albeit related, question altogether, and not necessarily the same as the question of self-awareness.) One implication is that sophisticated deep learning and machine learning training algorithms, like those used in marketing, web, scientific, and medical diagnostic systems, are already so cumulatively internally complex as to be either effectively, or else actually, epistemically opaque.
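To make "cumulatively internally complex" concrete, here is a rough back-of-the-envelope parameter count for a deliberately modest fully connected network. The layer sizes are illustrative assumptions, not drawn from any particular deployed system; real diagnostic and web-scale models are orders of magnitude larger.

```python
# A rough sense of the scale behind "cumulative internal complexity":
# parameter counts for a deliberately modest fully connected network.
# Layer sizes are illustrative assumptions only.

layer_sizes = [784, 256, 128, 10]  # e.g. an MNIST-sized input, two hidden layers

params = sum(
    n_in * n_out + n_out  # weight matrix plus bias vector per layer
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
)
print(f"trainable parameters: {params:,}")  # -> 235,146
```

Even this toy architecture has over two hundred thousand interacting trained values, each individually inspectable and collectively unreadable.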
By epistemic opacity I mean something subtly different from epistemic inaccessibility. Human computer scientists and programmers can access and analyse the code of such systems as they run, using memory snapshots and dumps, and sophisticated development environments that can step through and monitor code execution along with the inputs, outputs, and activity of the program's functions and methods. However, in many cases there is no way, at certain levels of abstraction, to determine what the logic of the trained system is actually doing, and why. The information is technically epistemically accessible on a causal and interactive basis, but it is opaque to epistemically useful or explanatory analysis nonetheless.
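A minimal sketch of this access-without-explanation distinction: a tiny network trained on XOR in plain NumPy (the task, sizes, seed, and learning rate are all illustrative assumptions). Every parameter and every intermediate activation can be dumped and stepped through, yet nothing in the dumped arrays reads as the rule the network has learned.

```python
# Causally accessible but explanatorily opaque: a toy XOR network.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                      # plain full-batch gradient descent
    h = np.tanh(X @ W1 + b1)               # hidden activations
    out = sigmoid(h @ W2 + b2)             # network output
    d_out = (out - y) * out * (1 - out)    # gradient through the sigmoid
    d_h = (d_out @ W2.T) * (1 - h**2)      # gradient through tanh
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(0)

# Full "causal and interactive" access: we can dump every parameter
# and every activation for any input we like...
print(W1, b1, W2, b2)
print(np.tanh(X @ W1 + b1))
# ...but nothing in these arrays states the rule "output 1 iff exactly
# one input is 1"; that logic is smeared across the weights.
print(out.round(2).ravel())  # typically close to [0, 1, 1, 0]
```

Scaled up from a few dozen weights to millions or billions, this gap between what can be inspected and what can be explained is the opacity at issue.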
As recursively self-modifying unsupervised and Bayesian deep learning training algorithms become more sophisticated, and as some existing systems accumulate enormous banks of trained data, this epistemic opacity increases. As unsupervised training algorithms and deep reinforcement learning become more recursion-capable, facilitating greater self-modification, epistemic opacity increases further. Even without reconfigurable hardware-firmware like field-programmable gate arrays, or more radical wetware-bioware platforms, software alone can prospectively implement the equivalent of new wetware neuron types with completely different functional parameters. Humans, by comparison, can only evolve new neuron types over long periods, and have no conscious cognitive control over that process. These more speculative concerns aside, existing recursively self-modifying black-box training algorithms are epistemically opaque. This presents immediate and long-term challenges for policymakers and lawmakers, and in this project I investigate some of the reasons why this is so, and what the specific implications are.
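To illustrate in miniature what software-level self-modification can look like, the following hedged sketch grows its own hidden layer whenever training progress stalls, so the final architecture is a product of the run itself rather than of any prior design. The task, thresholds, and growth step are illustrative assumptions, not any real system's values.

```python
# A hedged sketch of software-level "self-modification": a training
# loop that widens its own hidden layer when the loss plateaus.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 4))                 # toy inputs
y = (np.sin(X.sum(axis=1)) > 0).astype(float).reshape(-1, 1)

hidden = 2                                   # start deliberately small
W1, W2 = rng.normal(size=(4, hidden)), rng.normal(size=(hidden, 1))

def loss():
    return float(((np.tanh(X @ W1) @ W2 - y) ** 2).mean())

prev = loss()
for step in range(2000):
    h = np.tanh(X @ W1)
    err = h @ W2 - y
    W2 -= 0.01 * (h.T @ err)
    W1 -= 0.01 * (X.T @ ((err @ W2.T) * (1 - h**2)))
    if step % 200 == 199:
        cur = loss()
        if prev - cur < 1e-3:                # progress has stalled:
            W1 = np.hstack([W1, rng.normal(size=(4, 2))])  # add new units
            W2 = np.vstack([W2, rng.normal(size=(2, 1))])
            hidden += 2
        prev = cur

print(f"final hidden width: {hidden}")       # an artefact of the run itself
```

Here even the model's shape, not just its weights, is an emergent product of the training dynamics, which compounds the difficulty of explaining after the fact why the trained system is configured as it is.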
Interesting Related Readings
Anderson, M., & Anderson, S. L. (2019). How should AI be developed, validated, and implemented in patient care? AMA Journal of Ethics. https://doi.org/10.1001/amajethics.2019.125
Baum, S. D. (2020). Social choice ethics in artificial intelligence. AI and Society. https://doi.org/10.1007/s00146-017-0760-1
Bjerring, J. C., & Busch, J. (2020). Artificial Intelligence and Patient-Centered Decision-Making. Philosophy and Technology. https://doi.org/10.1007/s13347-019-00391-6
Bostrom, N., & Yudkowsky, E. (2011). The ethics of artificial intelligence. Draft for The Cambridge Handbook of Artificial Intelligence. Cambridge University Press. https://doi.org/10.1017/CBO9781139046855.020
Bryson, J. J., & Kime, P. P. (2011). Just an artifact: Why machines are perceived as moral agents. IJCAI International Joint Conference on Artificial Intelligence. https://doi.org/10.5591/978-1-57735-516-8/IJCAI11-276
de Swarte, T., Boufous, O., & Escalle, P. (2019). Artificial intelligence, ethics and human values: the cases of military drones and companion robots. Artificial Life and Robotics. https://doi.org/10.1007/s10015-019-00525-1
Etzioni, A., & Etzioni, O. (2017). Incorporating Ethics into Artificial Intelligence. Journal of Ethics. https://doi.org/10.1007/s10892-017-9252-2
Floridi, L. (2005a). Information ethics, its nature and scope. ACM SIGCAS Computers and Society. https://doi.org/10.1145/1111646.1111649
Floridi, L. (2005b). The ontological interpretation of informational privacy. Ethics and Information Technology. https://doi.org/10.1007/s10676-006-0001-7
Floridi, L. (2016). Faultless responsibility: on the nature and allocation of moral responsibility for distributed moral actions. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences, 374(2083). https://doi.org/10.1098/rsta.2016.0112
Floridi, L. (2018). Soft ethics and the governance of the digital. Philosophy and Technology. https://doi.org/10.1007/s13347-018-0303-9
Floridi, L., & Savulescu, J. (2006). Information ethics: Agents, artefacts and new cultural perspectives. Ethics and Information Technology. https://doi.org/10.1007/s10676-006-9106-2
Fyffe, R. (2015). The value of information: Normativity, epistemology, and LIS in Luciano Floridi. portal: Libraries and the Academy. https://doi.org/10.1353/pla.2015.0020
Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines. https://doi.org/10.1007/s11023-020-09517-8
Hooker, J. (2018). Ethics of artificial intelligence. In Taking Ethics Seriously. https://doi.org/10.4324/9781315097961-14
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence. https://doi.org/10.1038/s42256-019-0088-2
Keskinbora, K. H. (2019). Medical ethics considerations on artificial intelligence. Journal of Clinical Neuroscience. https://doi.org/10.1016/j.jocn.2019.03.001
Kim, M. P., Ghorbani, A., & Zou, J. (2019). Multiaccuracy: Black-box post-processing for fairness in classification. AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. https://doi.org/10.1145/3306618.3314287
Korb, K. B. (2007). Ethics of AI. In Encyclopedia of Information Ethics and Security. https://doi.org/10.4018/978-1-59140-987-8.ch042
McDermott, D. (2008). Why ethics is a high hurdle for AI. North American Conference on Computers and Philosophy (NA-CAP).
Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence. https://doi.org/10.1038/s42256-019-0114-4
Nuffield Council on Bioethics. (2018). AI in healthcare and research. Bioethics Briefing Note.
Schuklenk, U. (2020). On the ethics of AI ethics. Bioethics. https://doi.org/10.1111/bioe.12716
Steen, M. (2015). Upon Opening the Black Box and Finding It Full. Science, Technology, & Human Values. https://doi.org/10.1177/0162243914547645
The Cambridge handbook of information and computer ethics. (2010). Choice Reviews Online. https://doi.org/10.5860/choice.48-1520
Turilli, M., & Floridi, L. (2009). The ethics of information transparency. Ethics and Information Technology, 11(2), 105–112. https://doi.org/10.1007/s10676-009-9187-9
Volkman, R. (2010). Why information ethics must begin with virtue ethics. Metaphilosophy. https://doi.org/10.1111/j.1467-9973.2010.01638.x
Advanced Architectures
Arulkumaran, K., Deisenroth, M. P., Brundage, M., & Bharath, A. A. (2017). Deep reinforcement learning: A brief survey. IEEE Signal Processing Magazine. https://doi.org/10.1109/MSP.2017.2743240
Dalca, A. V., Yu, E., Golland, P., Fischl, B., Sabuncu, M. R., & Eugenio Iglesias, J. (2019). Unsupervised Deep Learning for Bayesian Brain MRI Segmentation. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). https://doi.org/10.1007/978-3-030-32248-9_40
Damiano, L., & Stano, P. (2018). Synthetic biology and artificial intelligence: Grounding a cross-disciplinary approach to the synthetic exploration of (Embodied) cognition. Complex Systems. https://doi.org/10.25088/ComplexSystems.27.3.199
Guo, Y., Liu, Y., Oerlemans, A., Lao, S., Wu, S., & Lew, M. S. (2016). Deep learning for visual understanding: A review. Neurocomputing. https://doi.org/10.1016/j.neucom.2015.09.116
Kalnikaité, V., & Whittaker, S. (2007). Software or wetware? Discovering when and why people use digital prosthetic memory. Conference on Human Factors in Computing Systems - Proceedings. https://doi.org/10.1145/1240624.1240635
Kozma, R., Pino, R. E., & Pazienza, G. E. (2012). Advances in Neuromorphic Memristor Science and Applications. Springer. https://doi.org/10.1007/978-94-007-4491-2
Manin, D. Y., & Manin, Y. I. (2017). Cognitive networks: Brains, internet, and civilizations. In Humanizing Mathematics and its Philosophy: Essays Celebrating the 90th Birthday of Reuben Hersh. https://doi.org/10.1007/978-3-319-61231-7_9
Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks. https://doi.org/10.1016/j.neunet.2014.09.003
Skokowski, P. (2009). Networks with attitudes. AI and Society. https://doi.org/10.1007/s00146-007-0175-5