Skip to main content

What qualities of a tool make it more useful to ministries?

Private & Secure

An overriding ethic of data stewardship should guide builders of AI systems (Genesis 1:28). Developers and users are called to act as stewards, ensuring systems have robust security measures (Deut 22:8) and are beneficial for everyone. This means stewarding the data supplied or generated by users, and caring for their data as a way of caring for them. Furthermore, loving one's neighbor (Matthew 22:39) means protecting others by securing personal and communal data from theft, breach, or misuse, thereby upholding justice and the common good.

Personally Identifiable Information (PII) should never be included in public-facing training data, and almost never be ingested in private, institutional AI systems. With internal systems, builders and executives should exceedingly justify any inclusion of PII.

Risk of Data Pairing

AI systems could conceivably collect various data sets that separately do not represent a breach in privacy but together would qualify as surveillance and/or an invasion of privacy. Builders should be mindful of how various data sets might pair together and take measures to guard against such risks.

AI systems should ensure privacy and security to the highest degree:

  • Informed consent is a bare minimum. Collecting sensitive personal data should be preceded by expressed and well-informed consent by users, allowing individuals to retain agency over their data. We agree with the ERLC, that informed consent is not the "only necessary ethical standard for the collection, manipulation, or exploitation of personal data."

  • Institutional transparency should be a goal. Institutions should be as transparent with users, as those users' data is to the institution. This begins with informed consent, but may go well beyond it.

  • Protect your neighbor's data as you would protect your own. Because data represents people, storing personal data securely aligns with the Christian principles of stewardship and love of neighbor. Christian developers and data managers are bound to God's command to "love your neighbor" and so must decide whether they would do to their friends and loved ones the things they are considering doing with the data at hand (Matthew 7:12).

  • Ensure security. Builders must ensure that AI systems can securely store personal data and can reasonably prevent its disclosure, misuse, or inadvertent inclusion in training data.

Accountable

Technology is intrinsically relational. Therefore, (1) those responsible (creators and users) must determine (2) who they are accountable to and (3) what they are accountable for.

Who Is Responsible

Humans must bear responsibility for what AI does—including its decision-making—and how it operates—including how it determines its output. Individuals and teams that are empowered to deploy AI must also be accountable for that system—and those accountable must also be empowered.

Following SIL, we affirm that a specific person must be deemed responsible for how an AI system operates, and that a different person must be responsible "for monitoring the effects of the AI usage on the people and processes in the areas where it is used." Given their responsibility, both should have the authority to pause, restrict, or terminate AI's operation within their domain of responsibilities.

Accountable To Whom?

Builders have a responsibility—commensurate with their control over AI—to account for the various ways that it might harm those persons or groups (Romans 14:12).

God

Accountable parties must take responsibility for how AI systems relate to what is true. This purview includes aspects of human dignity, bias, and other Biblical principles.

Others

Builders—both as individuals and within organizations—must consider how AI mediates their relationships to a host of others. This includes accountability to:

  • Authorities—Consider local authorities and relevant laws wherever AI systems are implemented
  • Users—Seek to adequately notify users when an AI system will be decommissioned
  • Data / Content Sources—Commit to giving credit with appropriate citations or references
  • The marginalized and vulnerable
  • Those represented by the data sets
  • Those who may encounter the system without knowing it
  • Employees and colleagues
  • Audiences
  • Donors
  • Other organizations
  • "Enemies" who might wish to harm you (Romans 12:14)

To this end, deployers should maintain a feedback system to understand and address grievances.

Accountable For What?

Transparency

AI builders and organizations are accountable to clearly communicate to employees, partners, and users when AI is being used (Philippians 2:3), and especially when and how their data is collected, stored, and used by AI systems. As Praxis writes, "institutions and the systems they deploy [should] become more transparent, while persons and their individual information become more protected." This responsibility also means that, following the Rome Call, "in principle, AI systems must be explainable."

Justifying AI

AI is not suitable in all cases—maybe not in most cases. Therefore, executives should provide clear justification for why AI is the best solution to a given problem, and why a less complex solution would not suffice. Justification should cover many of the areas outlined here, including efficacy, ethics, environmental care, and mission alignment.

Continuous Improvement

Builders should stay up to date on AI technical and ethical standards and should incorporate best practices into AI operations. Builders should regularly review AI systems in light of both technical and ethical standards. They should also seek to confirm that their models consistently produce results that align to current benchmarks—for generative AI, benchmark reviews should include alignment to Christian faith statements.

Aligned & Accurate

Mission Alignment

AI builders and deployers must ask, "To what end are we seeking to deploy our AI system? What future will such systems create?" In other words, does a given AI system align with the stated goals? We must demonstrate that deploying an AI system will support the mission in both outcomes and process.

Bias Awareness

AI bias risks maligning Christian mission. Along with the ERLC, "We affirm that, as a tool created by humans, AI will be inherently subject to bias and that these biases must be accounted for, minimized, or removed through continual human oversight and discretion." For this reason, builders are accountable to pursue AI systems that consistently represent legally protected classes in fair ways—it should neither over- nor under-represent them, nor should it misrepresent or mislabel such groups. In light of Jesus' command to love, Christian builders should dream about how they can extend these requirements beyond legal minimums and mere fairness to actual blessing (Romans 12:14).

Reliability

Builders should seek to build AI systems that work reliably and "do not create or act according to bias" (following the Rome Call). Unreliable or inaccurate outputs should be considered harmful. While users should be reminded to responsibly check accuracy, builders should not knowingly push biases or inaccuracies downstream onto users—that's both unethical and inefficient.

Empowered

Empowerment and accountability should go hand-in-hand. Individuals and teams that deploy AI systems should also be accountable for them—and those who are accountable must also be the decision-makers empowered to withhold, alter, or deploy AI systems.

User Empowerment

Users must also be empowered. To this end, deployers should maintain a feedback system to understand and address grievances.

Policy Making

In AI policy making, those developing ethics guidelines should clearly identify the individuals, departments, and organizations who are empowered and accountable. This clarity will encourage deeper consideration of AI systems.

Pre-deployment Considerations

Before AI deployments, builders and organizations should imagine the far-reaching potential consequences, including abuse, misuse, and unintended consequences of appropriate use. One practice is to do a "pre-mortem" by imagining, "This AI system failed in 1, 5, or 10 years—why did it fail?" With these reflections, builders might see ways they can implement better guardrails, preventative measures, or adequate warnings across the deployment's ecosystem.

Dependency Awareness

AI systems will likely create dependencies. Builders and organizations must determine whether such dependencies pose a risk for them, and to what degree. Without such consideration, they will fail to count the cost.