Cognitive technologies have immense power to influence decisions that affect a government agency's mission, its staff, and the lives of the citizens it serves. Karsun recognizes that such technologies raise important challenges that we must address clearly, thoughtfully, and proactively. The following principles set out our commitment to developing technology responsibly:
- Fairness by Design: We design AI solutions that drive reliable, fair business decisions by ensuring that AI/ML models used in decision-making are unbiased with respect to protected attributes such as gender, race, and age.
- Explainability by Design: We design AI solutions that track the decision-making process and the rationale behind each decision or prediction.
- Human-Centered AI Design: We embrace human-centered design principles for all AI solutions, following best practices that foster an inclusive AI ecosystem serving citizens and communities.
- Transparency & Privacy: We strive to make high-level conceptual and implementation details of our cognitive solutions accessible to everyone while ensuring privacy protection.
- Methodical Data Monitoring: We convene internal and external groups of management, security, and policy stakeholders with appropriate clearances to monitor data compliance.
- Smart Data Fencing: We explicitly build a smart fence around the data features used for AI modeling: a process-based digital boundary that governs what AI systems can and cannot do with the data they generate or acquire.
- Data Stewards and Data Councils: To reduce the prospect of inadequately trained AI networks, Karsun nominates data stewards and data councils, who establish and verify that data sets are appropriate and match the business case. We recognize that bad data poses risks for human workers and can amplify human biases. Our data council apparatus provides additional human oversight and helps us detect flaws in data sets early in the process.
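As a concrete illustration of the fairness-by-design principle above, one widely used check is the demographic-parity gap: the difference in positive-decision rates across groups defined by a protected attribute. The sketch below is a minimal, hypothetical example; the function name, toy data, and any tolerance threshold are illustrative assumptions, not a description of Karsun's actual tooling.

```python
# Minimal sketch of a demographic-parity check for a binary decision model.
# All names and data here are illustrative.

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rate across groups."""
    counts = {}  # group -> (total, positives)
    for d, g in zip(decisions, groups):
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + (1 if d else 0))
    rates = [pos / n for n, pos in counts.values()]
    return max(rates) - min(rates)

# Toy data: approval decisions alongside a protected attribute.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
# Group A is approved at 3/4, group B at 1/4, so the gap is 0.5;
# a review process might flag any model whose gap exceeds a set tolerance.
```

In practice such a check would run as one gate among many in a model-review pipeline, alongside the human oversight the data councils provide.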
Karsun recognizes that AI has the potential to change our lives for the better. However, there is no standard definition of fairness for decisions made by humans or machines. Identifying appropriate fairness criteria for a solution requires accounting for user experience and for cultural, social, historical, gender, racial, political, legal, and ethical considerations, which may involve trade-offs. Addressing fairness and inclusion in AI is an active area of research, and we recognize that unbiased data is essential to implementing successful AI programs and avoiding significant risks in the future.
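The trade-offs mentioned above are not merely philosophical: two common fairness criteria can disagree on the very same predictions. The hypothetical toy example below contrasts demographic parity (equal approval rates across groups) with equal opportunity (equal true-positive rates across groups); the data are invented purely to show that satisfying one criterion does not imply the other.

```python
# Toy illustration: the same predictions can satisfy demographic parity
# while violating equal opportunity. Data are fabricated for illustration.

def positive_rate(preds):
    """Fraction of cases receiving a positive decision."""
    return sum(preds) / len(preds)

def true_positive_rate(labels, preds):
    """Fraction of truly-positive cases receiving a positive decision."""
    positives = [p for y, p in zip(labels, preds) if y == 1]
    return sum(positives) / len(positives)

# Two groups with identical approval rates but different true outcomes.
y_a, p_a = [1, 1, 0, 0], [1, 1, 0, 0]
y_b, p_b = [1, 1, 1, 0], [1, 1, 0, 0]

parity_gap = abs(positive_rate(p_a) - positive_rate(p_b))
tpr_gap = abs(true_positive_rate(y_a, p_a) - true_positive_rate(y_b, p_b))
# parity_gap is 0 (both groups approved at 1/2), yet tpr_gap is 1/3
# (group A's qualified applicants are all approved; group B's are not).
```

Because the criteria can pull in different directions, choosing which one governs a given solution is exactly the kind of contextual, multi-stakeholder judgment described above.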