Towards Building and Scaling Trusted AI Solutions

Mondweep Chakravorthy
Over 50 countries signed up to a declaration on 'inclusive and sustainable AI' at the Paris AI Action Summit on 11 February 2025. There were five broad themes:
Public Interest AI - define, build and deliver critical open AI infrastructure for the global AI sector, for beneficial social, economic and environmental outcomes for the public good
Future of Work - promote socially responsible use of artificial intelligence through sustained social dialogue
Innovation and culture - build sustainable, innovative ecosystems that work with all economic sectors, especially the creative and cultural industries
Trust in AI - consolidate mechanisms to build trust in AI based on a scientific consensus on safety and security issues
Global AI governance - shape an inclusive and effective framework of international governance on AI
A noted nuance, though, has been around the theme of AI safety, which has been the focus since the Bletchley Summit a couple of years ago. Whereas the first rules of the EU's AI Act (a risk-based approach to AI adoption) have started to take effect, JD Vance, the US Vice President, said: "I am not here to talk about AI safety, which was the title of the conference a couple of years ago; I am here to talk about AI technology". Understandably, there have been concerns around over-regulation stifling AI's potential.
As organisations increasingly implement AI use cases and the global order on AI takes shape, I thought about a possible framework for navigating the tension between opportunity and risk. I am calling this framework HIVE: the Human-centric, Inclusive, Values-driven and Ethical AI Framework. I believe the 'Ethical' part is essential if AI adoption and scale are to enhance trust and serve the public interest. Hence, the framework emphasises a self-regulating, ethical approach to AI adoption.
Let me illustrate this with an example...
Recently, some organisations have begun implementing blanket, generic policies stating that if applicants do not consent to the use of AI in processing job applications, their applications may not be reviewed. Policies of this kind raise a number of concerns, from forced consent and the right to refuse automated decision-making to potential exclusion from the process.
A more nuanced and ethical approach to the same use case could have been to narrow its scope - that is, to improve explainability and transparency for candidates in the decision process (a code sketch of such opt-out options follows this list) - e.g.:
clarifying to candidates what aspects of their data the AI is utilising
explaining how the AI assesses that data
providing candidates with options to opt out of certain AI processing steps, along with alternative pathways to provide that information
providing traceability and insights into the selection and decision making process
guiding the candidates to showcase their best verifiable fit for the advertised position
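To make the opt-out idea concrete, here is a minimal, illustrative Python sketch of how per-step consent with alternative pathways might be modelled. All names and steps here are hypothetical, not a prescribed implementation:

```python
# Illustrative only: per-step consent so a candidate can opt out of individual
# AI processing steps and be routed to an alternative pathway, rather than
# being excluded from the process outright.
from dataclasses import dataclass, field

@dataclass
class ProcessingStep:
    name: str          # e.g. "cv_parsing"
    data_used: str     # which candidate data the step touches
    alternative: str   # human/manual pathway if the candidate opts out

@dataclass
class CandidateConsent:
    candidate_id: str
    opted_out: set = field(default_factory=set)

PIPELINE = [
    ProcessingStep("cv_parsing", "CV text", "manual review by a recruiter"),
    ProcessingStep("skills_matching", "stated skills", "structured questionnaire"),
]

def route(step: ProcessingStep, consent: CandidateConsent) -> str:
    """Return how this step will be handled for this candidate."""
    if step.name in consent.opted_out:
        return f"{step.name}: opted out -> {step.alternative}"
    return f"{step.name}: AI processing of {step.data_used} (disclosed to candidate)"

consent = CandidateConsent("cand-001", opted_out={"skills_matching"})
for step in PIPELINE:
    print(route(step, consent))
```

The point of the structure is that an opt-out changes the pathway, never the outcome of whether the application is reviewed.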
The sections below describe the key components of the proposed framework and how they may manifest in real world scenarios. The Appendix section elaborates on each of the steps further with some examples.
HIVE: Human-centric, Inclusive, Values-driven, and Ethical AI Framework
Your thoughts and suggestions are welcome.
(If this sounds interesting and you are looking for help adopting and scaling AI to grow your business and support your customers, I can help you. Let's chat. I help CIOs of organisations with global operations achieve revenue growth of over £15M through targeted digital transformation with emerging AI technology and the delivery of complex projects and programmes.)
Appendix
Diving into the various steps of the framework (focussing on the EU market, though the principles are likely desirable globally):
Phase 1: Envisioning & Ethical Framing
Problem & Opportunity Definition: Example: Define the specific use case (as narrow as possible, rather than a generic, broad one). E.g. it may be more reasonable to build an AI lie-detector product to fast-track entry at a border checkpoint (assuming the data subject consents) than a generic lie-detector product that can be deployed to monitor Zoom calls.
Ethical Charter Creation: Example: Respecting Fundamental Human Rights
Initial RIA - Risk Impact Assessment: Example: Identify privacy risks in data collection
Stakeholder Mapping: Example: List regulators, ethics experts, user groups
Regulatory Analysis: Example: Review EU AI Act requirements for high-risk AI
Success Metrics - Ethical & Functional: Example: Define a 'fair access' metric alongside accuracy (a sketch follows this list)
Team Formation - Including Ethics Roles: Example: Assign Data Protection Officer, Ethics Lead roles
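As an illustration of defining an ethical metric alongside a functional one, here is a minimal sketch on synthetic data. The 'fair access' definition shown, a demographic-parity ratio, is one possible choice, not the framework's prescribed metric:

```python
# Track an ethical metric (parity of selection rates across groups) next to
# the usual functional metric (accuracy), so both are success criteria.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def selection_rate(y_pred, groups, group):
    picks = [p for p, g in zip(y_pred, groups) if g == group]
    return sum(picks) / len(picks)

def demographic_parity_ratio(y_pred, groups):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = {g: selection_rate(y_pred, groups, g) for g in set(groups)}
    return min(rates.values()) / max(rates.values())

# Synthetic example: 1 = selected, 0 = not selected
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(f"accuracy: {accuracy(y_true, y_pred):.2f}")
print(f"fair-access (parity) ratio: {demographic_parity_ratio(y_pred, groups):.2f}")
```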
Phase 2: Sprint 0 - Foundation & Guardrail Setup
Detailed RIA & Mitigation Planning: Example: Plan technical controls for data anonymisation (one such control is sketched after this list)
Data Governance Framework: Example: Set up data access protocols, GDPR compliance
Explainability Strategy: Example: Choose XAI methods for user transparency
Safety Engineering & Security Considerations: Example: Plan for adversarial attacks, data breaches
Regulatory Checklist: Example: Create checklist based on EU AI Act, GDPR
Ethical Review Board/Process: Example: Establish board with independent ethics experts
Secure Tooling & Infrastructure Setup: Example: Use EU-based cloud, privacy-preserving tools
Sprint 0 Backlog - Foundation & Guardrails: Example: Prioritise data governance, RIA tasks for Sprint 0
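One possible technical control for the anonymisation planning above, sketched under the assumption that keyed-hash pseudonymisation is an acceptable measure for the use case. The key handling shown (an environment variable) is illustrative only, not a recommendation for production secret management:

```python
# Replace direct identifiers with a stable keyed hash before data leaves the
# governed boundary; the link back to the person is held separately, under
# the data governance framework.
import hashlib
import hmac
import os

# In practice the key would live in a proper secrets manager.
PSEUDONYMISATION_KEY = os.environ.get("PSEUDO_KEY", "demo-key-only").encode()

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym for a direct identifier."""
    digest = hmac.new(PSEUDONYMISATION_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"email": "applicant@example.com", "skills": ["python", "sql"]}
safe_record = {"subject_ref": pseudonymise(record["email"]), "skills": record["skills"]}
print(safe_record)  # no direct identifier remains in the working dataset
```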
Phase 3: Iterative Development Sprints
Sprint Planning - Guardrail Checkpoint: Example: Review sprint backlog against ethical charter
Development & Testing - Agile: Example: Develop bias-mitigated algorithm, unit tests
Data Iteration & QA: Example: Curate EU-representative datasets, data quality checks
Bias Detection & Mitigation: Example: Run fairness audits, apply debiasing techniques (one technique is sketched after this list)
Explainability Implementation & Testing: Example: Integrate XAI module, test user-facing explanations
Safety & Robustness Testing: Example: Test against EU threat scenarios, security tests
Regulatory Checks & Documentation: Example: Update compliance checklist, document design choices
Mid-Sprint Ethical Review: Example: Ethics board reviews progress, provides feedback
Sprint Review & Retrospective - Ethical Reflection: Example: Review ethical metrics, reflect on ethical challenges
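As one example of a debiasing technique a sprint might apply after a fairness audit flags a skew, here is a minimal sketch of instance reweighing, in the spirit of Kamiran and Calders' method. Synthetic data only, not a production implementation:

```python
# Weight each (group, label) cell so that group membership and outcome look
# statistically independent in the reweighted training set: w = P(g)P(y)/P(g,y).
from collections import Counter

def reweigh(groups, labels):
    n = len(labels)
    g_counts = Counter(groups)
    y_counts = Counter(labels)
    gy_counts = Counter(zip(groups, labels))
    return [
        (g_counts[g] / n) * (y_counts[y] / n) / (gy_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 0, 0, 1]
for g, y, w in zip(groups, labels, reweigh(groups, labels)):
    print(f"group={g} label={y} weight={w:.2f}")
```

Under-represented (group, outcome) combinations get weights above 1, over-represented ones below 1, which a training loop can then consume as sample weights.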
Phase 4: Testing & Validation Pre-Release
Comprehensive Bias & Fairness Audits: Example: Independent auditor assesses bias using EU datasets (a sample audit check follows this list)
Robustness & Security Testing: Example: Penetration testing against EU-specific threats
Explainability Validation: Example: User studies in EU to validate explanation clarity
Regulatory Compliance Audit: Example: Legal audit against EU AI Act, GDPR
User Acceptance Testing - UAT, Including Ethical Considerations: Example: UAT with diverse EU users, ethical scenario testing
Independent Ethical Review: Example: Ethics board final sign-off, public statement
Documentation & Transparency - Ethical Considerations Included: Example: Create multi-lingual user docs, transparency reports
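One concrete check an independent bias audit might run is the 'four-fifths' adverse-impact test, sketched below with illustrative data. The 80% threshold comes from US hiring guidance and is used here only as an example heuristic, not as an EU AI Act requirement:

```python
# Flag any group whose selection rate falls below 80% of the most-selected
# group's rate; flagged groups go to the auditor for deeper review.

def four_fifths_check(outcomes_by_group, threshold=0.8):
    rates = {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}
    best = max(rates.values())
    return {g: (r / best >= threshold, r) for g, r in rates.items()}

audit_sample = {
    "group_a": [1, 1, 0, 1, 0],  # 60% selected
    "group_b": [1, 0, 0, 0, 0],  # 20% selected -> flagged
}
for group, (passed, rate) in four_fifths_check(audit_sample).items():
    print(f"{group}: rate={rate:.0%} {'PASS' if passed else 'FLAG for review'}")
```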
Phase 5: Deployment & Continuous Monitoring Post-Release
Phased Rollout & Monitoring: Example: Pilot deployment in one EU region, monitor metrics
Performance Monitoring - Ethical & Functional Metrics: Example: Dashboard to track fairness, explainability, accuracy
Bias Drift Monitoring: Example: Regularly check for bias drift in EU user data (a monitoring sketch follows this list)
User Feedback & Ethical Incident Reporting: Example: Multi-lingual channels for user feedback, incident logging
Regular Ethical Reviews & Audits: Example: Annual ethics audit by independent board
Regulatory Updates & Adaptation: Example: Monitor EU AI Act updates, adapt governance
Transparency & Communication Updates: Example: Publish annual transparency report for EU stakeholders
Iterative Improvement & Refinement: Example: Use feedback, audit results for next development cycle
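A minimal sketch of how bias-drift monitoring could be automated post-release, assuming a parity-ratio baseline captured at release sign-off. Window size, tolerance and data are illustrative:

```python
# Compare each new window of production decisions against the sign-off
# baseline and alert when the parity ratio drifts beyond tolerance.

def parity_ratio(decisions):
    """decisions: (group, outcome) pairs; returns min/max selection-rate ratio."""
    rates = {}
    for g in {g for g, _ in decisions}:
        outcomes = [o for grp, o in decisions if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return min(rates.values()) / max(rates.values())

BASELINE_RATIO = 0.90  # measured at release sign-off
TOLERANCE = 0.10       # drift beyond this triggers an alert

weekly_window = [("a", 1), ("a", 1), ("a", 0), ("b", 0), ("b", 0), ("b", 1)]
current = parity_ratio(weekly_window)
if BASELINE_RATIO - current > TOLERANCE:
    print(f"ALERT: parity ratio drifted to {current:.2f} (baseline {BASELINE_RATIO:.2f})")
else:
    print(f"OK: parity ratio {current:.2f} within tolerance")
```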
Continuous Learning & Adaptation Loop
Data Analysis & Monitoring: Example: Analyse ethical performance data, identify trends
Regulatory Changes & Updates: Example: Track new EU regulations, legal interpretations
Societal Feedback & Ethical Discourse: Example: Monitor public discussions, ethical debates on AI
Technological Advancements & Best Practices: Example: Research new XAI, privacy-enhancing technologies
Project Team Composition
Roles: A blend of technical expertise and ethical awareness is crucial. Beyond suitable product, project and programme management roles, the key roles should include:
AI/ML Engineers: To develop and deploy the AI models.
Data Scientists: To analyse data, identify biases, and ensure data quality.
Ethics Officer/Representative: To guide ethical considerations and ensure alignment with the framework.
Legal/Compliance Expert: To advise on relevant regulations and compliance requirements.
Business Analyst: To bridge the gap between technical implementation and business objectives.
UX/UI Designers: To ensure user interfaces are designed with ethical considerations in mind.
Team Size: This will depend on the project's complexity, but smaller, focused teams (e.g., 5-10 people) can be more agile and efficient.
Working and Delivery Cadence: Agile methodologies with short sprints (e.g., 2-4 weeks) can ensure rapid iteration and adaptation.
Key Ceremonies and Processes
Sprint Planning: Incorporate ethical considerations into sprint goals and tasks.
Daily Stand-ups: Briefly discuss ethical challenges or concerns alongside progress updates.
Sprint Reviews: Demonstrate not only functionality but also how the product adheres to ethical guidelines.
Sprint Retrospectives: Reflect on ethical challenges encountered and identify areas for improvement.
Ethics Reviews: Conduct dedicated reviews at key milestones to assess ethical implications and ensure alignment with the framework.
Documentation: Maintain clear documentation of ethical considerations, decisions, and risk mitigation strategies.
Maturing and Tailoring the Framework
Start Simple, Iterate: Begin with the core principles and gradually incorporate more detailed aspects of the framework as the team gains experience.
Feedback Loops: Encourage feedback from all team members and stakeholders to identify areas for improvement and adaptation.
Case Studies and Knowledge Sharing: Document successful implementations and challenges encountered to build organisational knowledge and promote best practices.
Metrics and Measurement: Track relevant metrics, such as bias detection rates, fairness assessments, and user feedback, to measure the effectiveness of the framework and identify areas for improvement.
External Collaboration: Engage with industry peers, researchers, and ethical experts to stay informed about best practices and emerging trends in AI ethics.
Demonstrating Adherence
Transparency: Clearly communicate commitment to ethical AI principles and the use of the framework.
Audits and Assessments: Conduct regular internal and external audits to assess compliance and identify areas for improvement.
Reporting: Publish regular reports on the company's AI ethics practices, including progress, challenges, and learnings.
Certifications: Consider pursuing relevant certifications or standards, such as ISO/IEC TR 24028, to demonstrate adherence to recognised ethical AI guidelines.