The Vatican Has A Plan To Help Keep AI In Control

While regulators and tech stakeholders debate the development and deployment of AI, the Vatican is also jumping into the fray with its own exhaustive set of ethical principles to guide everyone involved. The Vatican's Dicastery for Culture and Education partnered with the Markkula Center for Applied Ethics to establish a body, the Institute for Technology, Ethics and Culture (ITEC), which has now released a 140-page handbook detailing how ethics should guide disruptive tech like AI.

"Major guardrails are absolutely necessary, and countries and governments will implement them in time," says Father Brendan McGuire, pastor of St. Simon Parish in Los Altos, as quoted by Santa Clara University (SCU). He adds that the book is written to help people at all stages of the work, whether they're writing code or drafting a technical manual, and that it covers everything from leadership virtues to organizational culture. Co-author Ann Skeet is quoted by SCU as saying that "ethics is more important than technology."

The ITEC says it reached out to tech leaders, government officials, academics, and researchers, with a focus on steering technology toward treating human impact as a top priority. Naturally, the guidebook, which contains one anchoring principle, seven guiding principles, and 46 specifying principles, centers the discourse on humanity and its civilizational well-being as cutting-edge tech like generative AI makes rapid inroads into our lives. This isn't the Vatican's first brush with AI concerns. In 2020, Pope Francis called for AI regulation based on "algor-ethics," as reported by Reuters.

Vatican talks AI

Titled "Ethics In The Age Of Disruptive Technologies," the ITEC handbook cites the AI principles of Microsoft, Google, and IBM as models for its own ethics lexicon. Notably, Google worked in part with the Markkula Center for Applied Ethics when drafting its own well-publicized AI principles, though the company has faced some well-known controversies related to the technology.

Broadly, the ITEC principles say AI should respect human dignity and rights, should empower rather than oppress, and should promote human well-being at all costs. Justice, accessibility, diversity, and equity are among the inclusive principles to be followed at all stages. Notably, the handbook also stresses that Earth belongs to all forms of life, and as such, AI development practices should be sustainable, avoid harming the planet's biodiversity, and take climate effects into account.

Accountability, spanning users, corporations, and developers, also takes the forefront among the ITEC principles, alongside transparency and oversight. These are of critical importance because the development of large AI models is a murky world of data privacy and copyright violations. The Getty lawsuit against Stability AI is arguably a glaring example, as is artists' concern about the potential for AI to replace them. Notably, the authors warn that some of the principles can come into conflict with one another, as none of them are absolute; in such cases, humanity's well-being takes priority.