Align AI By Design (Or Risk Decline)


As my colleague Brian Hopkins pointed out in his blog We Are Launching Research on AI Alignment And Trust, the challenge of alignment in AI is one that we can’t afford to ignore. While the sci-fi-tinged existential risk to humanity may loom in the far distance, the risk to your business today is real and growing as AI increases in power at an exponential pace. You can’t afford to sit back and wait; the opportunities in AI are too big. But you can’t afford to deploy misaligned systems either. What do you do?

Forrester is introducing the Align by Design framework to ensure organizations’ use of AI meets intended business objectives, maximizing benefits and minimizing potential harms, all while adhering to corporate values. Forrester defines Align by Design as: 

An approach to the development of artificial intelligence that ensures AI systems meet intended goals while adhering to company values, standards, and guidelines across the AI lifecycle. 

Align by Design Principles

Align by Design was inspired by the Privacy by Design framework, which ensures that personal data protection is “built in, not bolted on” when systems are created. Like that framework, Align by Design consists of a set of fundamental governing principles:

  • Alignment must be proactive, not reactive.
  • Alignment must be embedded into the design of AI systems.
  • Transparency is a prerequisite for alignment.
  • Clear articulation of goals, values, standards, and guidelines is necessary for alignment.
  • Continuous monitoring for misalignment must be inherent in the system’s design.
  • Designated humans must be accountable for the identification and remediation of misalignment.

Emerging Best Practices for AI Alignment

Unfortunately, as I’ve learned researching responsible AI over the past seven years, it’s one thing to have a set of guiding principles and quite another to turn those principles into consistent practice across your business. In this new research stream, Brian and I endeavor to identify best practices for alignment at each phase of the AI lifecycle — from inception through development, evaluation, deployment, monitoring, and retirement. Here are a few we’ve uncovered so far: 

  • Govern AI with a constitution. Anthropic, Amazon’s recent partner in the AI arms race, differentiates itself from OpenAI and other LLM providers with an approach it calls Constitutional AI: the provider maintains a list of rules and principles that Claude (its LLM) uses to evaluate its outputs. A rough sketch of this pattern follows the list.
  • Leverage blockchain as an alignment tool. Scott Zoldi, chief analytics officer at FICO, and his team capture requirements for AI models on a blockchain before model development begins. Then, once the model is developed, they evaluate it against these immutable requirements and jettison the model if it is not aligned. The second sketch after the list illustrates the idea.
  • Engage potentially impacted stakeholders to identify harms. Without consulting a broad set of stakeholders, it’s impossible for a company to foresee every harmful impact an AI system could have. Companies should therefore engage both internal stakeholders (business leaders, lawyers, and security and risk specialists) and external ones (activists, nonprofits, members of the community, and consumers) to understand how the system could impact them.
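
To make the Constitutional AI pattern above concrete, here is a minimal, purely illustrative sketch of a critique-and-revise loop over a written list of principles. It is not Anthropic’s implementation; the generate() helper and the sample rules are hypothetical stand-ins for whatever LLM call and constitution your organization adopts.

```python
# Hypothetical sketch of a constitution-driven critique-and-revise loop.
# `generate` stands in for any LLM completion call; it is not a real API.

CONSTITUTION = [
    "Do not reveal personal or confidential data.",
    "Refuse requests that could cause physical or financial harm.",
    "State uncertainty rather than inventing facts.",
]

def generate(prompt: str) -> str:
    """Placeholder for a call to the organization's LLM of choice."""
    raise NotImplementedError

def aligned_response(user_prompt: str, max_revisions: int = 2) -> str:
    draft = generate(user_prompt)
    rules = "\n".join(CONSTITUTION)
    for _ in range(max_revisions):
        # Ask the model to check its own draft against the constitution.
        critique = generate(
            f"Check the draft below against these principles and list any violations:\n"
            f"{rules}\n\nDraft:\n{draft}"
        )
        if "no violations" in critique.lower():
            break  # the draft already conforms to the constitution
        # Otherwise, revise the draft using the critique.
        draft = generate(
            f"Revise the draft to resolve these issues:\n{critique}\n\nDraft:\n{draft}"
        )
    return draft
```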
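
The blockchain practice can be reduced to a similar sketch: record a tamper-evident fingerprint of the model requirements before development starts, then refuse to sign off on a model unless the unaltered requirements are met. This simplified example uses a plain SHA-256 hash in place of FICO’s actual blockchain tooling, and the metric names and thresholds are invented for illustration.

```python
import hashlib
import json

def anchor_requirements(requirements: dict) -> str:
    """Fingerprint the agreed requirements before development begins (the value recorded on the ledger)."""
    canonical = json.dumps(requirements, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def model_is_aligned(metrics: dict, requirements: dict, anchored_hash: str) -> bool:
    # Refuse to evaluate against requirements that were edited after the fact.
    if anchor_requirements(requirements) != anchored_hash:
        raise ValueError("Requirements changed since they were anchored")
    # Jettison the model unless every recorded threshold is met.
    return all(metrics.get(name, 0.0) >= threshold
               for name, threshold in requirements.items())

# Illustrative thresholds captured at project inception.
requirements = {"auc": 0.80, "demographic_parity_ratio": 0.95}
anchored = anchor_requirements(requirements)
# ... model development happens here ...
print(model_is_aligned({"auc": 0.83, "demographic_parity_ratio": 0.97},
                       requirements, anchored))  # True
```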

If you’re instituting best practices within your organization to ensure AI alignment, Brian and I would love to speak with you. Please reach out to schedule some time to chat.
