KUNGFU.AI’s Approach to Ethics

As a company, KUNGFU.AI takes ethics seriously.

We build technology that can actively make decisions, take actions, and create impact. We work in a multitude of domains on products that impact people in the real world. Our ethics are especially important given that, as a consultancy, we create work to be used by other companies. Our work will cease to be our own once we provide it to our clients. This imbues us with a responsibility to ensure that our work behaves the way we would behave, and performs its tasks in a manner that reflects its creators’ values.

In order to be proactive about this obligation, we have undertaken the not-small effort of quantifying and developing approaches to making our work ethical, and to determining what kinds of work we will and will not do. The first step we’ve undertaken is to scope our concerns within our organization. To do this, we completed a survey to establish where we stand as a company.

As individuals, we all hold varying sensibilities with regard to the kinds of work we will and will not be a party to. In an effort to understand these sensibilities, we conducted an internal poll of employees, soliciting opinions on a swathe of domains.

The survey consisted of questions posed as follows: ‘Please provide your opinion on which industries, organizations, groups, and products KUNGFU.AI should avoid.’

The categories were:

- smoking products
- law enforcement technology (surveillance, predictive policing, etc.)
- guns
- offensive military
- industries that impact the environment (oil and gas, etc.)
- data privacy
- dual-use cybersecurity (offense/defense)
- practices affected by bias (healthcare, insurance, etc.)
- politics (political parties, candidates, etc.)
- religion (religious organizations, religious leaders, etc.)
- non-democratic states

For each of these topics, respondents could reply with a number from 0 to 2: 0 corresponded to ‘Don’t mind working on this’, 1 to ‘I would want to raise a flag’, and 2 to ‘I would not want to work on this’.

Voting 0 for any of these responses did not indicate that somebody broadly condoned those industries or use-cases; it simply meant that the individual didn’t mind engaging with and considering those kinds of work.

As individuals, we had varying overall sensibilities to the ‘areas of concern’. Some of us were generally unbothered by the various domains; others had significant objections to many of them. The number of ‘would not work on this’ responses per respondent ranged from 0 of the 11 topics to 6 of the 11 topics, and most respondents registered fewer than four direct objections.

When taking the average of responses, the highest-scoring issues were, in order, ‘offensive military’, ‘non-democratic states’, ‘guns’, and ‘smoking products’.

The next set of domains trailed these four by a significant margin in average score.

This result was echoed when looking at the number of responses in the ‘2 – I would not work on this’ category for each domain. The order changes only slightly, with ‘guns’ and ‘non-democratic states’ swapping places.
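The two tallies described above, the per-topic average score and the count of ‘2’ responses, can be sketched in a few lines of Python. The respondent names and scores below are made up for illustration; they are not our actual survey data.

```python
# Minimal sketch of the survey tally, using hypothetical responses.
# Each respondent rates each topic 0 (don't mind), 1 (raise a flag),
# or 2 (would not work on this).
from statistics import mean

# Hypothetical responses: {respondent: {topic: score}}
responses = {
    "alice": {"guns": 2, "offensive military": 2, "smoking products": 1},
    "bob":   {"guns": 1, "offensive military": 2, "smoking products": 0},
    "carol": {"guns": 2, "offensive military": 2, "smoking products": 2},
}

topics = sorted({topic for r in responses.values() for topic in r})

for topic in topics:
    scores = [r[topic] for r in responses.values()]
    avg = mean(scores)          # basis for the average-score ranking
    hard_nos = scores.count(2)  # basis for the 'would not work on this' ranking
    print(f"{topic}: average={avg:.2f}, hard_nos={hard_nos}")
```

With real data, sorting topics by either `avg` or `hard_nos` would reproduce the two orderings discussed above.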

These results indicate two things. Firstly, we as a company have rough accord about the topics most people do not wish to participate in: we broadly agree that we do not care to work for companies dealing in guns, offensive military weapons, or smoking products, and that we do not feel comfortable assisting non-democratic states in any way.

Secondly, we realize that there is no limit to the amount of nuance that can be applied when thinking about these topics. For instance, we broadly agree that we will not work on offensive military applications. But how do you define an offensive military application? Does it mean only that we won’t build the explosives or bullets? Does it mean we won’t help build trucks that might carry weapons? Does it mean we won’t work for an aerospace giant that builds a plane that delivers weapons to a target? Does it mean that we won’t work for the military in any regard?

Breaking these topics down and answering such questions might be a good way to identify the boundary region in these domains.

One proposal for vetting incoming inquiries, raised during a weekly ethics meeting, was to provide an optional team-feedback opportunity to assign an ‘ethics score’ to proposed work. This score would be the result of a questionnaire filled out by team members after being apprised of the proposed work and engaging in a discussion about the details of the implementation. The benefit of this approach is that it provides a flexible, structured way to measure how projects relate to our values. One drawback is that it may lead to excessive time spent vetting projects that would never come to fruition regardless. Another is that such discussions are inherently subjective, and can be swayed by the presentation of facts (hence why lawyers make the big bucks). Ultimately there is an inherent tension between bringing in business and bringing in the right business, the kind that is in line with our values.

Moreover, with a subjective system, values are more likely to be ‘flexed’ in the face of hardship or challenge.
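To make the ethics-score idea concrete, here is a minimal sketch of how questionnaire responses might be aggregated. The 0–10 scale, the sample ratings, and the review threshold are all assumptions for the example, not an agreed-upon policy.

```python
# Illustrative sketch of the proposed 'ethics score' questionnaire.
# Scale, ratings, and threshold below are assumptions, not real policy.

def ethics_score(ratings: list[int]) -> float:
    """Average of team members' 0-10 comfort ratings for a proposed project."""
    return sum(ratings) / len(ratings)

REVIEW_THRESHOLD = 5.0  # assumed cutoff below which a project triggers wider discussion

score = ethics_score([8, 6, 3, 7])      # four hypothetical team ratings
needs_discussion = score < REVIEW_THRESHOLD
```

The interesting design question is not the arithmetic but where the threshold sits, which is exactly where the subjectivity discussed above creeps back in.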

Another proposal is to define a set of binary questions that can be posed to qualify incoming leads.

For instance, let’s say that we collectively decide that the ‘line’ for the ‘offensive military’ question is the following: ‘Does this project in any way enable the increase in efficiency or capability of a system whose express purpose is injury, death, or destruction of physical property?’ If the answer to this question is ‘yes’, the project is immediately rejected. If the answer is ‘no’, the project might still raise other problems, but it is not disqualified on the grounds of the ‘offensive military’ domain.

The benefit of such a system is that it is direct and clear. It provides guidance in the form of diagnostic questions, and it should enable salespeople to disqualify inbound work and provide engineers with the peace of mind that their work does not infringe on their value system. The drawback is that such binary questions are rarely easy to formulate, and the honest answers are not always a clean yes or no. Still, these kinds of questions are the basis of systems of law around the world, and of scientific diagnostic tools.
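The gate described above can be sketched as a small lookup: any ‘yes’ to a disqualifying question rejects the lead outright, while a ‘no’ only clears that one domain. The question text and domain names below are illustrative assumptions, not an actual vetting checklist.

```python
# Hedged sketch of the binary-question gate. Domain names and question
# wording are assumptions for illustration, not actual vetting criteria.

DISQUALIFYING_QUESTIONS = {
    "offensive military": (
        "Does this project in any way enable the increase in efficiency or "
        "capability of a system whose express purpose is injury, death, or "
        "destruction of physical property?"
    ),
    # ... one question per domain of concern ...
}

def qualify_lead(answers: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (qualified, reasons).

    A 'yes' (True) to any disqualifying question rejects the project
    immediately; a 'no' only means the project is not disqualified on
    that particular domain's grounds.
    """
    reasons = [domain for domain in DISQUALIFYING_QUESTIONS
               if answers.get(domain, False)]
    return (not reasons, reasons)
```

For example, `qualify_lead({"offensive military": True})` returns `(False, ["offensive military"])`, while an all-‘no’ answer sheet returns `(True, [])` and the lead proceeds to any remaining checks.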

Ultimately, any system we come to use will be the result of many mistakes made in pursuit of our goal of defining, and acting in accordance with, our shared values. We’ll keep you posted as we continue to hone our system, and we absolutely welcome your thoughts and feedback as we continue to grapple with this issue as an industry.