This Responsible AI Statement is effective as of January 23, 2026.

Responsible AI Statement

Scope

This Responsible AI Statement applies to Rangam’s use of artificial intelligence within its products, services, and internal operations where AI is used to support workforce, staffing, and related business functions.

It does not apply to:

  1. Third-party technologies or tools not controlled by Rangam
  2. Client-specific configurations or uses outside Rangam’s direct control
  3. Research, pilots, or experimental features unless otherwise stated

Rangam Responsible AI Statement

Rangam Consultants Inc. uses artificial intelligence (AI) to support workforce solutions, improve operational efficiency, enhance decision support, and deliver superior recruitment outcomes for our clients and candidates. We are committed to using AI responsibly, ethically, transparently, and in full compliance with applicable laws and regulations across all jurisdictions where we operate.

Our approach to AI governance is informed by and designed to comply with recognized global frameworks and regulatory requirements, including:

  1. ISO/IEC 42001:2023 - AI Management Systems standard
  2. NIST AI Risk Management Framework - Comprehensive AI risk management
  3. EU Artificial Intelligence Act (AI Act) - European Union AI regulation
  4. General Data Protection Regulation (GDPR) - EU data protection law
  5. California Consumer Privacy Act (CCPA) - California privacy rights
  6. EEOC AI and Algorithmic Fairness Guidance - US employment discrimination prevention
  7. SOC 2 Type II - Security and operational controls

Human Oversight

Rangam’s AI systems are designed to assist human decision-making.

Where AI is used in hiring or workforce-related contexts, it is implemented with appropriate human oversight and does not operate as the sole basis for employment decisions.

Fairness and Risk Management

We take reasonable measures to identify and mitigate risks related to bias, fairness, and unintended outcomes. AI systems are regularly tested and evaluated to support fair treatment of all candidates regardless of protected characteristics, including race, gender, age, disability, or geographic location.

Rangam conducts systematic bias testing on AI systems used in recruitment. Testing is performed quarterly for high-risk systems and before any major system updates. We monitor discriminatory patterns and maintain fairness standards aligned with EEOC requirements. Systems that fail to meet fairness thresholds are suspended until issues are resolved.

AI systems are continuously evaluated and monitored as part of our broader risk management and governance processes. Identified risks are documented and assigned mitigation strategies with clear accountability.

Transparency

We aim to provide clear, appropriate information about how AI is used within our products and services. Candidates are informed when AI systems are involved in recruitment processes and are provided with information about:

  1. How AI is used in the recruitment process
  2. What data is processed by AI systems
  3. How decisions are made and reviewed
  4. How to request human review or exercise their rights

AI-generated outputs are intended to support decision-making and are subject to review and validation. We maintain comprehensive documentation for all AI systems including system design, data sources, and performance testing results.

Safety and Non-Maleficence

Rangam designs its AI systems so that they do not cause harm, intentionally or unintentionally, to individuals or groups, and strives to ensure they are beneficial, trustworthy, and aligned with human values.

Data Protection and Security

AI systems at Rangam are developed and operated in alignment with applicable data protection and information security requirements. Personal data is processed for defined purposes, protected through appropriate safeguards, and handled in accordance with relevant privacy laws.

We conduct Data Protection Impact Assessments (DPIAs) for all AI systems that process personal data and involve high-risk activities.

Regulatory Alignment

Rangam monitors and adapts to evolving AI-related regulations and guidance across jurisdictions.

AI use cases are reviewed in line with applicable legal, contractual, and compliance obligations.

Continuous Review

Our AI governance practices are periodically reviewed and updated to reflect changes in technology, regulation, and business requirements.

Use of Generative AI

When generative AI technologies are used, they are applied in a controlled manner and subject to internal review processes. Output is not intended to replace professional judgment.

Commitment to Responsible Use

Rangam’s use of AI is guided by principles of responsibility, accountability, and respect for individuals. AI is used to support business objectives while maintaining appropriate safeguards for people, data, and systems.

Important Note

AI technologies continue to evolve. Rangam’s AI practices may change over time to reflect regulatory developments, technological advancements, and operational needs.

Policy Review and Updates

This Responsible AI Statement may be reviewed and updated from time to time to reflect changes in applicable laws or regulations, developments in AI governance frameworks, or changes in Rangam’s business or technology practices.

Rangam anticipates reviewing this statement at least annually, and sooner where required by law or where appropriate.