Ensuring Ethical AI at Persuaide
Ethical Use of Artificial Intelligence
Using AI in a noble and non-detrimental way is a challenge. Combine ethical AI with persuasion technology, and you may wonder whether the people behind Persuaide are serious. After all, there have been reports of technological interference (the Brexit campaign, the US elections) in which artificial intelligence had a profound impact.
In this article, we explain how our technology platform can influence individual communication. At the same time, we emphasize our commitment to using Persuaide for noble purposes only. This is part of Persuaide’s moral code regarding responsible and ethical AI:
“We are committed to responsible AI. We believe that artificial intelligence is powerful and should only be used ethically and for the benefit of human life. It should never be used to the detriment of one individual over another. First, to ensure transparency, we commit to openly educating society about our work and the theories behind our algorithms. Second, we commit to working closely with our customers to ensure that our software is used only for the benefit of all humanity and never to the advantage of one individual over another.”
In the following, we deliver on the transparency mentioned above by educating society about our work and the theories behind our central algorithms. We then explain how we ensure that our software is used only for the benefit of all humanity. Let us begin.
Research and Work regarding Persuaide
Persuaide is a platform that allows everyone to communicate better with the help of artificial intelligence. Our research and work are grounded in sound academic evidence.
To understand how persuasion and technology (i.e., natural language processing and generation) can be combined, we conducted a literature review of the interdependencies between the two fields. In brief, we identified some fifty determinants that explain how people differ in their susceptibility to persuasion and to specific types of communication. These cluster into four broad categories: benevolence, linguistic appropriacy, logical argumentation, and trustworthiness. Persuaide accounts for these determinants and implements them in its codebase.
Next, we wanted to understand how humans actually adapt texts. We therefore conducted an experiment (read the study here) that allowed us to identify the adaptations that make a text more persuasive. From our findings, we theorize that successful persuasion in natural language generation results from aligning a text’s tonal formality, emotionality, and comprehensiveness with the recipient in a structured and logically coherent manner. Persuaide accounts for such adaptations and implements them in its codebase.
Theories underlying Persuaide
Our work draws on four theories: the Theory of Cognitive Dissonance (Festinger 1957), Language Expectancy Theory (Burgoon et al. 2002), Probabilistic Models (Wyer 1970), and Balance Theory (Heider 1958).
Festinger’s Theory of Cognitive Dissonance focuses on the relations between a persuadee’s cognitions, such as perceived benevolence, which can be dissonant, consonant, or irrelevant to one another. The magnitude of dissonance determines one’s motivation to reduce it. The more consonant the relationship, the higher the chances that persuasion succeeds.
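Festinger’s magnitude of dissonance is commonly formalized as the ratio of dissonant to total relevant cognitions, each weighted by importance. A minimal sketch of that textbook ratio (the function name and weighting inputs are illustrative assumptions, not part of Persuaide’s codebase):

```python
def dissonance_magnitude(dissonant, consonant):
    """Ratio of dissonant cognitions to all relevant cognitions.

    `dissonant` and `consonant` are lists of importance weights,
    one per cognition. Returns a value in [0, 1]; higher values
    mean stronger motivation to reduce the dissonance.
    """
    d, c = sum(dissonant), sum(consonant)
    if d + c == 0:
        return 0.0  # no relevant cognitions, no dissonance
    return d / (d + c)
```

A persuader would aim to lower this ratio, e.g., by adding consonant cognitions to the message.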
Language Expectancy Theory treats written and spoken language as a rule-based system. Expectations are largely shaped by sociological and cultural norms, and preferences tend to reflect societal standards and cultural values. Violating these expectations inhibits persuasion, e.g., when a persuader uses language considered socially unacceptable.
The third central theory pertains to probabilistic models. These models are based on the rules of formal logic and probability: they predict a person’s belief in the conclusion of a line of reasoning from their beliefs in its premises, which can be ascertained as subjective probabilities.
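Wyer’s subjective probability model expresses this as P(B) = P(A)·P(B|A) + P(¬A)·P(B|¬A), where A is a premise and B the conclusion. A minimal sketch (the function and argument names are our own, chosen for illustration):

```python
def belief_in_conclusion(p_a, p_b_given_a, p_b_given_not_a):
    """Wyer's subjective probability model:
    P(B) = P(A) * P(B|A) + P(not A) * P(B|not A).

    All arguments are subjective probabilities in [0, 1];
    the return value is the predicted belief in conclusion B.
    """
    return p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a
```

For example, a persuadee who is certain of the premise (P(A) = 1) believes the conclusion exactly as much as they accept the inference P(B|A).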
Lastly, Balance Theory focuses on the relationship between persuader and persuadee. The persuadee’s attitude towards the persuader can be balanced or unbalanced; if it is balanced, the persuader’s chances of success are greater.
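Heider’s classic P-O-X formalization makes this concrete: a triad of persuadee (P), persuader (O), and attitude object (X) is balanced when the product of the three sentiment signs is positive. A small sketch of that textbook rule (names are illustrative; this is not Persuaide’s implementation):

```python
def is_balanced(p_likes_o, o_likes_x, p_likes_x):
    """Heider's P-O-X triad: each argument is +1 (positive
    sentiment) or -1 (negative sentiment). The triad is balanced
    iff the product of the three signs is positive."""
    return p_likes_o * o_likes_x * p_likes_x > 0
```

For instance, if the persuadee likes the persuader and the persuader endorses the object, balance pressures the persuadee towards liking the object too.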
Making Persuaide Beneficial to Humanity
This part explains how we ensure that our software is used for the benefit of all humanity. Admittedly, we cannot fully control how humans interact with our platform. As a precaution, we have implemented measures that inhibit inappropriate use. This way, we prevent misuse of Persuaide and foster debiasing. We take three steps:
Step one: every data point that we process is checked for politically inappropriate or potentially offensive language. We have collated a wide range of inappropriate terms and expressions; to keep our algorithms clean and prevent model drift, we reject processing whenever such content is present.
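A term-list check like the one in step one can be sketched as follows. The term set and function name are hypothetical placeholders, not our actual blocklist:

```python
# Placeholder entries standing in for a curated blocklist.
INAPPROPRIATE_TERMS = {"slur_example", "offensive_phrase"}


def reject_if_inappropriate(text):
    """Raise an error instead of processing text that contains
    any collated inappropriate term; return the text otherwise."""
    lowered = text.lower()
    for term in INAPPROPRIATE_TERMS:
        if term in lowered:
            raise ValueError("rejected: inappropriate content detected")
    return text
```

Rejecting early keeps flagged text out of all downstream models, which is what prevents it from contributing to model drift.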
Step two: we have deployed deep neural networks that scan texts for hate speech and subconscious biases. Our technology classifies linguistically complex phenomena such as irony, sarcasm, subversive violence, hatred, and cultural unacceptability.
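Conceptually, this scan thresholds per-category scores produced by a neural classifier. A minimal sketch, assuming a hypothetical `scores` mapping computed upstream by such a network (the category names and cutoff are illustrative):

```python
CATEGORIES = ("hate_speech", "irony", "sarcasm", "bias")
THRESHOLD = 0.5  # illustrative decision cutoff


def flag_text(scores):
    """Return the categories whose classifier score exceeds the
    threshold. `scores` maps category name -> model probability
    and stands in for the output of a neural classifier head."""
    return [c for c in CATEGORIES if scores.get(c, 0.0) > THRESHOLD]
```

Texts with one or more flagged categories would then be rejected or routed for review rather than processed further.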
Step three: we use anonymization algorithms so that personal data never enters our deep neural networks. Our complex, large-scale transformer architectures make tracing texts back to individuals computationally expensive and therefore practically impossible. On top of that, masking and alteration algorithms ensure that entities such as personal names, company names, and geographical locations are always anonymized.
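The masking part of this step can be sketched as replacing recognized entity mentions with typed placeholders. The entity spans are assumed to come from an upstream named-entity recognizer, and all names here are illustrative:

```python
def mask_entities(text, entities):
    """Replace each recognized entity surface form with a typed
    placeholder, e.g. 'Alice' -> '[PERSON]'. `entities` maps the
    surface form to its entity label (PERSON, ORG, LOC, ...)."""
    for surface, label in entities.items():
        text = text.replace(surface, "[" + label + "]")
    return text
```

Only the masked text reaches the neural networks, so the original identities are never part of the training or inference data.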
Despite these three steps, we cannot guarantee a 100% bias-free algorithmic approach. Nonetheless, internal experiments conducted by ethical-AI experts, supplemented by consultations with a digital privacy lawyer, indicate that we filter out 99% of inappropriate content. In a nutshell, we confidently state that we use state-of-the-art measures to keep Persuaide ethical. Persuaide is for the greater good of humanity.
At Persuaide, we are on a mission to help everybody become better at communication.
Your team from Persuaide
- Burgoon, M., Denning, V. P., & Roberts, L. (2002). Language expectancy theory. In The persuasion handbook: Developments in theory and practice (pp. 117–136).
- Festinger, L. (1957). A theory of cognitive dissonance (Vol. 2). Stanford University Press.
- Heider, F. (1958). The psychology of interpersonal relations. Wiley.
- Wyer, R. S. (1970). Quantitative prediction of belief and opinion change: A further test of a subjective probability model. Journal of Personality and Social Psychology, 16(4), 559.