Any work that makes predictions about human beings, that is, any model that takes in information about an individual and makes a judgment about them, will be approached with special consideration and vetted to ensure that subgroup populations in the dataset are fairly represented and addressed. Our aim is to reduce as much as possible the algorithmic bias in our models that originates from systemic biases present throughout society.
We will not work on the following:
AI applications related to weapons, offensive military technology, and addictive products, including but not limited to gambling and nicotine products.
We are also broadly opposed to surveillance without informed consent, and we will not create models that identify individuals using biometric data for the purposes of surveillance. This includes, but is not limited to, facial recognition, gait recognition, and iris recognition.
We will also not create models for business use cases using overly personal data without proper disclosures or approvals, nor using data that has been harvested deceptively.
We respect, credit, and treat fairly the people we work with to build AI solutions, including but not limited to end users, data annotators, and data subjects.
As we enter the age of AI, we aspire to lead by example in building only ethical and responsible AI systems, and to help our clients do the same. We also seek ways to ensure that all companies developing AI understand their responsibility to build only ethical and responsible AI and to avoid unintended consequences. We do this because it is aligned with our values, because living our values makes us a better company, and because we want to help create a future we want to live in.