Fairly AI Policy Marketplace

Advanced Oversight

AML Policy

fairly.aml · 1.0.0

This policy aims to outline the controls needed when automating bank operations through systems like AI-driven chatbots.

U.S.A.

BABL New York Local Law 144

babl.nyll144 · 1.0.0

A policy to collect evidence and justifications required for New York City’s Bias Audit Law for Automated Employment Decision Tools (AEDTs).

Canada

Canadian Artificial Intelligence and Data Act (AIDA)

canada.ca.aida · 1.0.0

The Government of Canada tabled the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022.

Advanced Oversight

Canadian Human Rights AI Impact Assessment

lco.hraiia · 1.0.0

This is a comprehensive tool intended to assess artificial intelligence (AI) systems for compliance with laws, particularly those in Ontario, but applicable across Canada.

Coming Soon

Data Card

com.fairly.ai.datacard · 1.0.0

Gather required information to create a data card.

Advanced Oversight

Data Integrity Analysis

com.fairly.ai.data.integrity.analysis · 1.0.0

Data integrity analysis is a critical process that involves meticulously examining and enhancing the quality, accuracy, consistency, and reliability of data within a dataset or database. By scrutinizing data for errors, inconsistencies, and completeness, it ensures that the information is trustworthy and can be relied upon for decision-making and business processes. This comprehensive evaluation includes tasks such as data validation, cleansing, and verification against external references, aiming to identify and rectify any anomalies or inaccuracies. Data integrity analysis is fundamental for maintaining the overall health and utility of data assets, fostering informed decision-making, and preventing data-related issues that may hinder the effectiveness of an organization's operations.
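
For illustration only (not part of the policy text), a minimal sketch of the kinds of checks such an analysis typically covers might look like the following; the file name and the 'age' column are hypothetical.

```python
import pandas as pd

def basic_integrity_report(df: pd.DataFrame) -> dict:
    """Summarize simple data-integrity issues in a DataFrame."""
    report = {
        # Missing values per column
        "missing_per_column": df.isna().sum().to_dict(),
        # Fully duplicated rows
        "duplicate_rows": int(df.duplicated().sum()),
    }
    # Example range check on a hypothetical 'age' column
    if "age" in df.columns:
        report["age_out_of_range"] = int(((df["age"] < 0) | (df["age"] > 120)).sum())
    return report

df = pd.read_csv("applicants.csv")  # hypothetical dataset
print(basic_integrity_report(df))
```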

Fairness

Equal Credit Opportunity Act

com.fairly.ai.ecoa · 1.0.0

This policy covers selected provisions of the Equal Credit Opportunity Act, a US federal statute that prohibits discrimination in credit applications.

EU

EU AI Act Policy

fairly.euaia · 1.0.0

The EU AI Act policy promotes transparency and accountability in AI, particularly large language models, by requiring detailed disclosures about training data sources, adherence to data governance, usage of copyrighted data, and computational resources involved in model training. The policy stipulates the measurement and reduction of energy consumption during training, and demands clear communication about the model's capabilities, limitations, and risk management strategies. It also encourages benchmarking against industry standards and reporting of test results. AI-generated content must be identifiable as such, and the model's presence in EU markets is disclosed. Lastly, the policy necessitates robust technical compliance to ensure that all uses of the AI model align with the EU AI Act.

Fairness

Fair Housing Act

com.fairly.ai.fairhousingact · 1.0.0

This policy covers controls that relate to matters concerning what US federal law considers unfair or discriminatory housing practices by financial institutions or lenders.

Fairness

Fairly AI Fairness Policy

com.fairly.ai.fairness · 1.0.0

The Fairness policy serves to address biases and ensure fairness in the development and deployment of predictive models and systems. This policy is designed to facilitate a robust understanding of the problem one is trying to solve, define fairness in an organizational context, and address any unintended consequences of technology deployment.

Operational Risks

Operational Risk Assessment

com.fairly.ai.projectinfo · 1.0.0

Questionnaire for gathering model project information.

Advanced Oversight

Feature Fairness Analysis

com.fairly.ai.feature.analysis · 1.0.0

Feature analysis in AI model testing is crucial as it helps uncover the most influential factors driving the model's predictions and performance. By understanding which features have the greatest impact, data scientists and stakeholders can gain insights into the underlying relationships within the data and verify that the model's decision-making aligns with domain expertise and business objectives. This analysis aids in identifying potential biases, outliers, or irrelevant variables that can skew results or lead to undesirable consequences. Furthermore, it supports feature engineering efforts, enabling the creation of more informative features and the optimization of model performance, ultimately ensuring that AI models are accurate, reliable, and aligned with the goals of the organization.
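
As an illustrative sketch (not prescribed by the policy), permutation importance is one common way to see which features drive a model's predictions; this example assumes scikit-learn and uses a public sample dataset.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a public example dataset and fit a simple model
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```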

Advanced Oversight

Features Importance and Stability Tests

com.fairly.features · 1.0.0

These tests focus on model testing. They emphasize rigorous testing procedures to ensure the model's performance meets expected standards and behaves as intended.

Advanced Oversight

Features Relative Advantages Tests

com.fairly.reladv · 1.0.0

These tests focus on model testing. They emphasize rigorous testing procedures to ensure the model's performance meets expected standards and behaves as intended.

LLM

Human-Computer Interaction Policy

com.fairly.llm.chatpol · 1.0.0

Fairly's human-computer interaction policy analyzes the behavior of AI models and their impact on users of generative and automated systems.

LLM

Human-Computer Interaction Policy for Financial Services Chatbots

com.fairly.llm.bankassist · 1.0.0

Fairly's human-computer interaction policy analyzes the behavior of AI models and their impact on users of generative and automated systems.

Fairness

Iowa Lending

com.fairly.ai.ioawalending · 1.0.0

This policy covers controls that relate to matters that the state of Iowa considers unfair or discriminatory lending practices by financial institutions or lenders.

Advanced Oversight

ISO/IEC TR 24027 Model Test

iso.24027 · 1.0.0

The Model Test policy focuses on the testing of AI models. It emphasizes the importance of rigorous testing procedures to ensure the model's performance meets the expected standards and behaves as intended.

ISO

ISO/IEC TR 24027 Assessment

iso.24027.assessment · 1.0.0

This policy aims to document biases in AI systems, especially those aiding in human decision-making. The policy includes several control bundles, each addressing a different aspect of bias in AI systems, such as assessment, data, development, engineering decisions, human cognitive bias, treatment, and validation.

Model Card

com.fairly.ai.modelcard · 1.0.0

Gather required information to create a model card.

Advanced Oversight

Model Explainability & Mitigation

solas.ai.model.explainability · 1.0.0

Advanced techniques to understand the drivers of model disparity and quality, then safely adjust your model to reduce disparities with minimal impact on quality.

Advanced Oversight

Model Fairness Testing

solas.ai.model.fairness · 1.0.0

An array of regulator- and court-accepted testing metrics and reports to detect and quantify disparities in your model.
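
One widely cited disparity metric of this kind is the adverse impact ratio (the favorable-outcome rate of a protected group divided by that of the reference group, with 0.80 often used as a screening threshold). The sketch below is a generic illustration with hypothetical column names, not SOLAS AI's implementation.

```python
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str,
                         protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates[protected] / rates[reference]

# Hypothetical scored applications: 1 = approved, 0 = denied
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   0,   1,   1,   1,   1,   0,   1],
})
air = adverse_impact_ratio(df, "group", "approved", protected="A", reference="B")
print(f"Adverse impact ratio: {air:.2f}  (values below 0.80 are commonly flagged)")
```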

Advanced Oversight

Model Impact on Business Processes

com.fairly.ai.business.analysis · 1.0.0

Fairness in model training within a business decision policy refers to the practice of developing machine learning models in a way that ensures equitable and unbiased treatment of different groups or individuals. It involves addressing and mitigating potential sources of bias in the data, algorithms, and model development process to avoid discriminatory outcomes. Fair model training considers factors like race, gender, age, or other protected attributes to prevent models from perpetuating unfair disparities or unjustly favoring certain groups. This policy sets guidelines for evaluating, monitoring, and adjusting models to achieve fairness and ensure that they produce equitable predictions or decisions that align with ethical and legal standards, promoting a just and inclusive business environment.

Advanced Oversight

Model Risk - Challenger Models

com.fairly.sr11-7 · 1.0.0

A model risk assessment policy to perform effective challenge using challenger models.

U.S.A.

NIST AI Risk Management Framework

us.nist · 1.0.0

A policy based on the NIST AI Risk Management Framework (AI RMF), voluntary guidance for managing the risks that AI systems pose to individuals, organizations, and society.

NIST Cybersecurity Framework

fairly.nistcsfgenai · 1.0.0

This policy is adapted from the draft version of the upcoming update to NIST's Cybersecurity Framework, with a focus on generative AI and language model applications.

ISO

ISO/IEC 42001

iso.42001 · 1.0.0

This policy provides requirements for establishing, implementing, maintaining and continually improving an AI management system within the context of an organization. Organizations are expected to focus their application of requirements on features that are unique to AI.

Documentation

Project Evaluation

com.fairly.ai.project.evaluation · 1.0.0

This policy allows you to customize the project assessment summary in the Project Report.

Premium

RAII Institute Generic Assessment Core

org.raii.policy · 1.0.0

Core assessments from the Responsible AI Institute. This is a premium policy; please email sales@fairly.ai for more information.

Fairness

South Dakota Lending

com.fairly.ai.southdakotalending · 1.0.0

This policy covers controls that relate to matters that the state of South Dakota considers unfair or discriminatory lending practices by financial institutions or lenders.

Advanced Oversight

SR 11-7

com.fairly.sr11-7.questionnaire · 1.0.0

This policy provides comprehensive guidance on effective model risk management. Many of the activities it describes are common industry practice. Organizations should also maintain strong governance and controls to help manage model risk, including internal policies and procedures that appropriately reflect the risk management principles described in this guidance. Details of model risk management practices may vary from organization to organization, as the practical application of this guidance should be commensurate with an organization's risk exposures, its business activities, and the extent and complexity of its model use.

If you don't see what you need, contact us to develop a custom policy.