The following message has been written by Converseon’s CEO & Founder, Rob Key.
Recently, the White House released the "Blueprint for an AI Bill of Rights," which identifies principles to guide the design, use, and deployment of AI models and systems in order to protect Americans in the digital age. As a leader in conversation-based decision intelligence through the use of autoNLP and advanced AI technologies, we would be remiss not to offer a statement.
As an organization that began applying AI ("machine learning") to unstructured social data in 2008, we have strong viewpoints on the ethical (and effective) application of the technology. For several years now we have advocated for greater transparency and agreement on how models are built (with input from multiple labelers with diverse backgrounds), how they are used, and, perhaps most importantly, how well they perform in the real world.
Most consumers of models find them to be black boxes, inscrutable as to how well they classify data (their accuracy). Many would say that general-purpose, one-size-fits-all automated sentiment analysis, for example, isn't very good (and we might agree). But when we ask exactly how well (or poorly) it performs, they are often not equipped with an answer. This is typically for one reason: most NLP vendors do not (or cannot) provide performance scores, usually F1, which combines precision and recall. The information is simply not available.
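For readers unfamiliar with the metric: F1 is the harmonic mean of precision (what fraction of the items a model labels positive really are positive) and recall (what fraction of the truly positive items the model finds). A minimal sketch in Python, using made-up counts purely for illustration:

```python
# Illustrative only: how an F1 score is computed from a model's
# classification results on a labeled evaluation set.
# The counts below are invented example numbers, not real vendor data.

true_positives = 80   # mentions correctly classified as, say, negative sentiment
false_positives = 20  # mentions wrongly classified as negative
false_negatives = 40  # genuinely negative mentions the model missed

precision = true_positives / (true_positives + false_positives)  # 0.80
recall = true_positives / (true_positives + false_negatives)     # ~0.67
f1 = 2 * precision * recall / (precision + recall)               # ~0.73

print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```

A score of 1.0 would mean perfect classification; the lower the score, the more misclassified mentions are feeding the decisions made downstream.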
Yet, in our view, the social listening industry (alongside customer experience and voice of the customer) has an obligation to "get it right": to accurately reflect the needs, wishes, wants, and dreams that people express, and not mistranslate them through poor AI. After all, organizations are making decisions about ESG investments, improving access and customer experience, serving and communicating with diverse communities, and building new products based on this data.
With that said, we fully support the White House's release of the blueprint. We specifically advocate the following: that every model ("classification") comes with a clear performance score for those who request it; that model building embraces multiple viewpoints in labeling and strives for the highest precision; that models are tested not just once but continually, against new data they haven't seen, to protect against algorithmic drift; and that there is a clear human in the loop to intervene when things go awry, with the clear ability to modify models when they do.
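To make the drift-testing point concrete, here is a minimal sketch of what ongoing re-testing might look like. It assumes a deployed classifier with a scikit-learn-style predict method and a freshly labeled evaluation sample; the function names and the 0.75 floor are hypothetical illustrations, not a description of any vendor's implementation:

```python
from sklearn.metrics import f1_score  # scikit-learn's standard F1 implementation

F1_FLOOR = 0.75  # hypothetical minimum acceptable score; agree on this per use case


def alert_human_reviewer(score: float) -> None:
    # Hypothetical escalation hook: in practice this might open a ticket
    # or notify the team responsible for the model.
    print(f"F1 dropped to {score:.2f}; flagging for human review")


def check_for_drift(model, fresh_texts, fresh_labels) -> float:
    """Re-score a deployed classifier on newly labeled data it has never seen."""
    predictions = model.predict(fresh_texts)
    score = f1_score(fresh_labels, predictions, average="macro")
    if score < F1_FLOOR:
        alert_human_reviewer(score)  # human in the loop when things go awry
    return score
```

The key design choice is that the evaluation data is new: a model can hold its original test score indefinitely while quietly degrading on the language people actually use today.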
There is much talk in our category about "AI," but far too little discussion of how to do it right and why that matters. I hope this statement helps spur further discussion on the topic so that we can work together for the greater good (and greater impact).
You can read the full blueprint here: https://www.whitehouse.gov/ostp/ai-bill-of-rights/