Within the context of Karsun's existing ethics policy, we will use the following responsible AI principles to develop and use AI solutions. We understand the challenges embedded in these principles, yet we set this stretch goal to demonstrate our commitment to excellence.

Transparency 

Transparency and explainability needs vary by use case and audience. We will involve governance, risk, and compliance (GRC) professionals, ethicists, and relevant stakeholders (including those affected by AI decisions, as required) to determine their transparency and explainability requirements, and we will apply appropriate machine learning (ML) algorithms and techniques to create auditable analytical outputs and models, thereby increasing trust in and adoption of them.

In cases where we need complete model transparency, we use ML techniques such as decision trees, monotonic gradient boosting machines (GBMs), rule-based models, super-sparse linear integer models (SLIMs), linear regression, and logistic regression to create “white-box” models. In cases where post-hoc interpretability suffices, we apply techniques such as partial dependence plots (PDPs), individual conditional expectation (ICE) plots, Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and counterfactual explanations to approximate the inner workings of opaque models and show how individual decisions are made. We also use model intelligence platforms like Imandra.ai and ML platforms like H2O.ai to gain insight into model interpretability, detect bias, evaluate fairness, capture model lineage, and document ML reports.
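As a concrete illustration of the post-hoc approach, the sketch below produces SHAP attributions for an otherwise opaque gradient boosting model. It is a minimal example assuming the open-source shap library and a public scikit-learn dataset, not a prescription for any particular engagement.

```python
# Minimal post-hoc explanation sketch using SHAP (assumes `pip install shap`).
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
import shap

# Train an "opaque" tree ensemble on a public regression dataset.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])  # shape: (200, n_features)

# Global summary: which features drive predictions, and in which direction.
shap.summary_plot(shap_values, X.iloc[:200])
```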

Fairness

data + algorithm = model
Bias in, bias out

ML algorithms can learn to discriminate based on gender, age, sexual orientation, or any other perceived differences between groups of people. ML models are not just pattern identifiers but pattern amplifiers: continuously learning models perpetuate and reinforce their own decisions. We aspire to create models that are fundamentally sound, assessable, inclusive, and reversible to protect against harmful bias. We seek to identify and correct harmful algorithmic bias (due to incomplete data) and historical bias or inequity (in the data) in our models.

We will strive to (1) ensure training data is representative of the population to which the analytical outputs are applied, (2) attain just outcomes by convening diverse groups of stakeholders (since those most likely to spot bias are those most likely to feel its impact), and (3) empathize with those affected by model decisions. We evaluate our models for bias and assess the overall fairness and inclusivity of their output.
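As one concrete bias check of the kind described above, the sketch below computes the demographic parity difference: the gap in favorable-outcome rates between groups defined by a sensitive attribute. The predictions and group labels are hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Gap between the highest and lowest positive-prediction rates
    across groups; 0.0 means all groups receive favorable outcomes
    at the same rate."""
    rates = {g: y_pred[sensitive == g].mean() for g in np.unique(sensitive)}
    return max(rates.values()) - min(rates.values())

# Hypothetical binary predictions for two groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```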

Governance

ML models are probabilistic, and their performance can decay over time. We will continuously monitor and optimize our models by learning from new data, but we will also have a plan in place to deactivate them if they become biased, offensive, or discriminatory.
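One common way to operationalize this monitoring is the population stability index (PSI), which compares the live score distribution against the training-time baseline. The sketch below uses synthetic scores and the conventional, but tunable, alert threshold of 0.2.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time (expected) and a live (actual)
    score distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) on empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores observed at training time
live = rng.normal(0.5, 1.0, 10_000)      # shifted scores from production
psi = population_stability_index(baseline, live)
print(f"PSI = {psi:.2f}")  # PSI > 0.2 is a common trigger for review or rollback
```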

We will strive to (1) create a view of which teams are using what data, how, and in which models, (2) make sure that data is reliable and collected in accordance with regulations, and (3) maintain a centralized understanding of which models are used for which business processes.
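A centralized view like this can start small. The sketch below shows one hypothetical shape for a registry record linking teams, data, models, and business processes; the field names and values are illustrative, not a mandated schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    """One entry in a hypothetical central model inventory."""
    model_id: str
    owning_team: str
    business_process: str
    data_sources: list[str]
    regulatory_basis: str        # e.g., consent, contract, statute
    last_fairness_review: date

registry: dict[str, ModelRecord] = {}

def register(record: ModelRecord) -> None:
    registry[record.model_id] = record

register(ModelRecord(
    model_id="loan-risk-v3",
    owning_team="credit-analytics",
    business_process="loan underwriting",
    data_sources=["applications_db", "bureau_feed"],
    regulatory_basis="customer consent",
    last_fairness_review=date(2024, 1, 15),
))
```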

Reliability

We will abide by all regulatory, privacy, security, and AI ethics requirements of our clients. We will shift testing left to catch issues before they reach production, and we will test for model drift and degradation.
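Shifting testing left can be as simple as a CI quality gate that blocks a candidate model from being promoted when it falls below an agreed performance floor. The sketch below is a pytest-style check using synthetic data and a hypothetical AUC threshold.

```python
# Run with pytest as part of CI, before the model is promoted.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

AUC_FLOOR = 0.80  # hypothetical acceptance bar agreed with stakeholders

def test_candidate_model_meets_auc_floor():
    # Synthetic stand-in for a project's real training and hold-out data.
    X, y = make_classification(n_samples=2000, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    assert auc >= AUC_FLOOR, f"AUC {auc:.3f} is below the release floor"
```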

We will bring software development discipline to data science (ModelOps) to ensure the reproducibility of ML experiments. We build for the future, securing the ability to integrate new technologies as they emerge.
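A first step toward reproducible experiments is pinning every source of randomness and deriving a replayable identifier from the run configuration. The sketch below shows one such pattern; the configuration fields are illustrative.

```python
import hashlib
import json
import random

import numpy as np

config = {
    "seed": 42,
    "model": "GradientBoostingRegressor",
    "params": {"n_estimators": 200, "max_depth": 3},
    "training_data_version": "2024-01-15",  # hypothetical dataset tag
}

# Pin every source of randomness so the run can be replayed exactly.
random.seed(config["seed"])
np.random.seed(config["seed"])

# A content hash of the configuration doubles as an experiment ID;
# storing it with the metrics lets anyone reproduce the exact run.
run_id = hashlib.sha256(
    json.dumps(config, sort_keys=True).encode()
).hexdigest()[:12]
print(f"experiment {run_id}: {config}")
```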

Accountability

Developing an AI system often requires multiple inputs from various internal teams as well as from third parties such as data providers, data labelers, and service providers. Each link in the AI supply chain introduces potential vulnerabilities to the security and stability of the entire system. To combat these risks, we will adopt a holistic approach to building AI systems that includes processes for maintaining accountability, both internally and among suppliers. We will ensure accountability by operationalizing the process for documenting the creation of AI systems across their lifecycle.
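One way to operationalize that documentation is a model card recorded with each system, covering its purpose, data provenance (including third-party providers and labelers), and known limitations. The sketch below uses illustrative field names and values, not a mandated schema.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Lightweight lifecycle documentation for one AI system."""
    name: str
    version: str
    intended_use: str
    training_data: str           # provenance, incl. third-party sources
    labeling_process: str        # who labeled the data, and how
    known_limitations: str
    internal_owner: str
    external_suppliers: list[str]

card = ModelCard(
    name="document-triage",
    version="1.2.0",
    intended_use="route incoming documents to review queues",
    training_data="internal corpus plus licensed third-party feed",
    labeling_process="vendor-labeled with 10% internal spot-checks",
    known_limitations="English-only; degrades on scanned handwriting",
    internal_owner="ml-platform team",
    external_suppliers=["data provider A", "labeling vendor B"],
)
```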
