April 21, 2025

How to Build an Ethical AI Framework

Cynthia Valencia and Farah Iyer

Our teams are using AI everywhere these days—collaborating, supporting customers, analyzing data, managing projects. At Obsidian, we see it firsthand, both in our own work and in how our customers are adopting AI across their organizations.

As a security company, we take protecting data and using AI responsibly seriously. It's not just about the AI tools we build—it's about our approach to AI in everything we do. We're always thinking about how to use AI effectively while keeping data secure and systems trustworthy. That's why we wrote this post: to share our framework for responsible AI adoption that puts security and ethics first.

More Than Just Checking Boxes

AI ethics goes well beyond checking compliance boxes. Aligning with standards like ISO/IEC 42001:2023 (the first AI management system standard) gives you a good foundation, but real ethical AI requires a deeper commitment. It's not just about meeting regulations—it's about building trust and making sure your innovation stands the test of time.

Before you dive in, ask yourself:

  • How does your current compliance approach handle AI systems?
  • Where do ethics overlap with your existing controls?
  • What's missing between what regulations require and what ethics demand?

Integrating with your existing compliance frameworks should feel natural, not forced. When you align your AI ethics with standards you already know—like ISO 27001, ISO 27701, and SOC 2 Type II—you create a comprehensive approach that strengthens both your compliance position and your ethical standing.

Who's Steering the Ship?

A strong AI governance framework acts as your organization's compass for ethical AI operations. As you build your governance structure, think about:

  • Who will champion AI ethics within your organization?
  • How will ethical considerations be integrated into decision-making?
  • What escalation paths exist for ethical concerns?
  • How will you measure the effectiveness of your governance?

Your policy development should build on what you already have while recognizing that AI brings unique challenges. Consider how your current policies might need to evolve to address:

  • Guidelines for developing and deploying AI
  • Boundaries for decision-making and who provides oversight
  • How you'll manage risks
  • Ways humans stay in the loop and provide oversight

Let's Talk About Your AI

Building trust means being transparent. As you develop your documentation approach, ask yourself:

  • How will you explain AI decisions to your stakeholders in a way they'll understand?
  • What level of detail makes sense for different audiences?
  • How will you respond when someone asks for an explanation?
  • What documentation standards do you need to establish?

Your transparency framework should cover:

  • The principles and architecture behind your model design
  • Where your training data comes from and how you selected it
  • The logic and parameters that drive decision-making
  • What limitations and constraints your system has
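One way to keep those four areas consistent across systems is a lightweight, structured record per model—sometimes called a model card. Here is a minimal sketch; the schema, field names, and example system are hypothetical illustrations, not a prescribed standard:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Lightweight transparency record for one AI system (hypothetical schema)."""
    name: str
    design_principles: list[str]      # principles and architecture behind the model
    training_data_sources: list[str]  # where training data comes from and why
    decision_logic: str               # the logic and parameters driving decisions
    known_limitations: list[str]      # limitations and constraints of the system

    def summary(self) -> str:
        """A short explanation suitable for non-technical stakeholders."""
        return (
            f"{self.name} was designed around: {', '.join(self.design_principles)}. "
            f"It was trained on: {', '.join(self.training_data_sources)}. "
            f"Known limitations: {', '.join(self.known_limitations)}."
        )

card = ModelCard(
    name="Ticket-routing assistant",
    design_principles=["human review of low-confidence routing"],
    training_data_sources=["two years of anonymized internal support tickets"],
    decision_logic="text classifier; auto-routes only above a confidence threshold",
    known_limitations=["English-only", "unreliable on brand-new product areas"],
)
print(card.summary())
```

The `summary()` method is the answer to "how will you respond when someone asks for an explanation?"—the same record can feed a detailed technical appendix for auditors and a plain-language paragraph for everyone else.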

Keeping Humans in the Loop

AI should enhance, not replace, human judgment. Consider these key questions:

  • Where are the critical points for human oversight?
  • How will you ensure meaningful human involvement?
  • What training do oversight teams need?
  • How will you document human decisions in AI processes?
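To make "meaningful human involvement" concrete, one common pattern is to gate high-impact AI outputs behind an explicit human approval step and log every decision either way. This is only an illustrative sketch—the risk threshold, scores, and function names below are assumptions, not part of any standard:

```python
from datetime import datetime, timezone

# Hypothetical cutoff: recommendations scored at or above this always need human sign-off.
RISK_THRESHOLD = 0.7

audit_log: list[dict] = []

def decide(ai_recommendation: str, risk_score: float, human_approver) -> str:
    """Apply the AI recommendation directly for low-risk cases; otherwise
    require an explicit human decision. Log the outcome in both paths."""
    if risk_score >= RISK_THRESHOLD:
        outcome = human_approver(ai_recommendation)  # human makes the final call
        decided_by = "human"
    else:
        outcome = ai_recommendation
        decided_by = "ai"
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "recommendation": ai_recommendation,
        "risk_score": risk_score,
        "decided_by": decided_by,
        "outcome": outcome,
    })
    return outcome

# Low-risk: the AI recommendation is applied automatically, but still logged.
decide("approve refund", risk_score=0.2, human_approver=lambda r: r)
# High-risk: routed to a human, who overrides the AI in this example.
decide("close account", risk_score=0.9, human_approver=lambda r: "escalate to manager")
print([entry["decided_by"] for entry in audit_log])  # → ['ai', 'human']
```

The audit log doubles as your answer to the documentation question above: every decision records who made it, on what recommendation, and at what risk level.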

Getting Started: Your AI Ethics Roadmap

The journey to ethical AI starts with understanding where you are right now. Key things to think about include:

Assessment:

  • What AI systems are you currently using or planning to use?
  • How do these systems affect your different stakeholders?
  • Where are your biggest ethical risks?
  • What resources will you need to make this happen?
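The assessment questions above usually start with a simple inventory: list every AI system, note who it affects, assign a rough ethical risk rating, and sort so remediation effort goes to the riskiest systems first. A toy sketch—the systems and ratings here are invented for illustration:

```python
# Each entry: (system, stakeholders affected, ethical risk rating 1-5)
inventory = [
    ("resume screening assistant", ["job applicants", "recruiters"], 5),
    ("meeting-notes summarizer", ["employees"], 2),
    ("customer-support chatbot", ["customers", "support team"], 4),
]

# Highest-risk systems first, so they get attention (and resources) first.
for system, stakeholders, risk in sorted(inventory, key=lambda entry: -entry[2]):
    print(f"risk {risk}: {system} (affects {', '.join(stakeholders)})")
```

Even a spreadsheet version of this gives you the raw material for the development questions that follow: the high-risk rows tell you which policies to write first and which stakeholders need a seat at the table.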

Development:

  • How can you build on the governance structures you already have?
  • What new policies and procedures will you need to create?
  • Who needs to be part of developing this?
  • How will you know if you're succeeding?

Growing with Your AI

AI ethics isn't a set-it-and-forget-it process—it's a constantly evolving journey. Stay on top of emerging standards and best practices, regularly revisit and update your frameworks, talk with industry peers and ethics experts, and remain flexible while staying true to your core principles.

Ask yourself periodically:

  • How has our AI usage changed?
  • Do our ethical principles still make sense for where we are now?
  • What new challenges have popped up that we didn't anticipate?
  • How well are our current controls actually working?

This ongoing reflection ensures your ethical approach grows alongside your AI implementation.

About Obsidian Security

Obsidian Security is the premier security solution designed to drastically reduce the attack surface area of SaaS applications by 85% on average. With contextual user activity data, configuration posture, and a rich understanding of 3rd party integrations in SaaS, the Obsidian platform reduces incident response times by 10x and streamlines compliance with internal policies and industry regulations. Notable Fortune 500 companies trust Obsidian Security to secure SaaS applications, such as Salesforce, GitHub, ServiceNow, Workday, and Atlassian. Headquartered in Southern California, Obsidian Security is a privately held company backed by Menlo Ventures, Norwest Venture Partners, Greylock Partners, IVP, GV, and Wing. For more information, request a demo.

Get Started

Start in minutes and secure your critical SaaS applications with continuous monitoring and data-driven insights.

Get a demo