I was reading an HBR article last night, “Why AI Failed to Live Up to Its Potential During the Pandemic”, and felt that I needed to write about some of the issues it raises here and expand on them a little. I believe that the limitations it mentions around the performance of AI are real, but that they actually have very little to do with the pandemic itself.
Also, they are not likely to go away for a long time.
The main premise of the article was:
“The Covid-19 pandemic was the perfect moment for AI to, literally, save the world. There was an unprecedented convergence of the need for fast, evidence-based decisions and large-scale problem solving with datasets spilling out of every country in the world … AI was — in theory — the ideal tool. AI could be deployed to make predictions, enhance efficiencies, and free up staff through automation; it could help rapidly process vast amounts of information and make lifesaving decisions.
Or, that was the idea at least. But what actually happened is that AI mostly failed.”
So what did they attribute this failure to? As you will see, none of these causes necessarily relate to the pandemic.
1. Bad Datasets
To quote “AI decision-making tools are only as good as the data used to train the underlying algorithms. If the datasets are bad, the algorithms make poor decisions.”
‘Bad datasets’ really refers to data that is flawed, incorrect or rife with inaccuracies, AND also to datasets that are woefully too small to train an AI.

When researching this topic I found some really interesting opinions on just what constitutes an appropriately sized ‘dataset’ to train an AI to work effectively. There seems to be much dissension about this – interesting, when surely there should be a precise requirement for accuracy?
I decided that Google should be a reasonably competent authority to rely upon. The short answer is that for effective AI decisions, you need a LOT of data. Google provides some examples.
For Google ‘Smart Reply’ to work effectively – to simply suggest suitable responses to your emails (a fairly simple application of AI) – required 238 million records in the dataset used to train the AI.

The AI deployed for Google Translate is trained on trillions of records!

The upshot of all of this: if anyone is endeavoring to sell you a solution that claims to use AI to provide you with insights, it will not be effective UNLESS you can provide a dataset at this sort of scale. And also recall that the data provided needs to be clean and accurate, not rife with errors and mistakes.

A few hundred or a few thousand account records, plus the history of contact with them, is not going to be anywhere near enough to generate any type of accurate sales insights (for example).
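To make the point about scale concrete, here is a minimal sketch in Python (using scikit-learn on purely synthetic data) of how a model’s accuracy typically climbs as the training set grows. Everything here – the features, the task, the numbers – is invented for illustration and has nothing to do with Google’s actual systems.

```python
# Minimal sketch: how training-set size affects model accuracy.
# Synthetic data stands in for real business records; all numbers
# are illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A noisy binary-classification problem, e.g. "will this lead convert?"
X, y = make_classification(n_samples=100_000, n_features=20,
                           n_informative=5, flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

# Train on progressively larger slices and watch accuracy move.
for n in (100, 1_000, 10_000, 80_000):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    print(f"{n:>6} training records -> "
          f"test accuracy {model.score(X_test, y_test):.3f}")
```

On a run like this you would typically see accuracy improve sharply over the early steps and then flatten out; the point is that a few hundred records leaves you well short of where the curve settles.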
2. Discriminatory Datasets
“Even when there was data available, the predictions and decisions recommended by health care management algorithms led to potentially highly discriminatory decisions”
This is a major problem in the use of AI across society. There is bias in the collection of data in many areas – for example, Google searches and Google image searches.

As datasets are often historical, collected from periods when access to technology was skewed towards certain demographics or psychographics, the data within them is also skewed.

The linked article covering Google searches suggests searching for the term ‘CEO’ and noticing how the results are far more likely to offer up a white male stereotype (due to out-of-date or biased datasets) than a woman or a person of color.
This bias becomes an issue in social security data, health data and many other areas where AI is being deployed now.
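A small illustration of the mechanism: if the historical labels themselves encode a preference for one group, a model trained on them will reproduce that preference even when two records are otherwise identical. All field names and numbers below are invented for the example.

```python
# Minimal sketch: a model trained on historically skewed data simply
# reproduces the skew. All fields and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical historical data: group A was favoured in the past, so the
# outcome label correlates with group membership, not just with merit.
group = rng.integers(0, 2, size=n)   # 0 = group A, 1 = group B
merit = rng.normal(size=n)           # the signal we *should* be learning
label = ((merit + 2.0 * (group == 0)) > 1.0).astype(int)

X = np.column_stack([group, merit])
model = LogisticRegression().fit(X, label)

# Identical merit, different group -> very different predictions.
for g, name in ((0, "A"), (1, "B")):
    p = model.predict_proba([[g, 1.0]])[0, 1]
    print(f"group {name}, same merit: P(positive outcome) = {p:.2f}")
```

The model is not malicious; it is simply faithful to a dataset that was skewed before it ever arrived.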
3. Human Error
“The quality of any AI system cannot be decoupled from people and organizations.”
In addition to very large, unbiased datasets being critical for effective decision-making by an AI, there’s also the issue of data entry errors.

The adage ‘garbage in, garbage out’, which first surfaced in 1957, is just as relevant now as when it was first coined.
“Many problems can arise when carrying out data due diligence. Usually, the datasets under scrutiny fall short in at least one of these categories”
(Source: Getting Started With AI)
If your company data is rife with errors, any chance at all of meaningful insights from an AI is lost.
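For anyone doing their own data due diligence, here is a minimal sketch of the kinds of basic checks involved – duplicates, missing values, impossible values – using pandas on a toy CRM-style table. The column names are hypothetical; substitute your own schema.

```python
# Minimal sketch of basic data due-diligence checks on CRM-style records.
# Column names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "opportunity_id": [1, 2, 2, 3, 4],
    "value":          [5000, -100, -100, None, 250000],
    "close_date":     ["2021-03-01", "2021-13-40", "2021-13-40",
                       "2021-06-15", None],
})

# 1. Duplicates: the same opportunity entered twice.
print("duplicate rows:", df.duplicated().sum())

# 2. Missing values: fields that were never filled in.
print("missing values per column:\n", df.isna().sum())

# 3. Impossible values: negative deal sizes, unparseable dates.
print("negative deal values:", (df["value"] < 0).sum())
print("bad dates:",
      pd.to_datetime(df["close_date"], errors="coerce").isna().sum())
```

None of these checks need AI, and running them first is usually far more valuable than anything layered on top of dirty data.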
I am often asked whether we use ‘AI’ in Sentia to suggest which opportunities are most likely to convert, the value of leads, etc. We do not. Our Sentia application does score leads, accounts and opportunities for clients, and generates insights on how sales teams should spend their days, but we rely on algorithms, not ‘AI’, to do so – for all of the reasons listed above.

Other products in our space do attempt this, and I am asked why they are not effective. Firstly, the datasets available in most companies’ CRM/ERP/MIS systems are very modest in size. Secondly, the data stored in a CRM, for example, has usually been entered by sales teams, who rarely make any particular effort to be accurate and timely.

Pipeline data, for example, can be corrupted by sales teams entering imaginary opportunities to satisfy pressure from sales management and CFOs. Or they may enter opportunity data but very rarely update its stage, close dates and values.

Activity data – i.e. recorded outreach to leads, accounts and opportunities – is not always captured in full. Any AI in this environment has zero chance of delivering insights that make sense, and the first few times those insights are generated and are obviously flawed, the team will simply stop looking at them.
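For contrast, here is a minimal sketch of what a deterministic, rule-based score looks like: every point is traceable to an explicit rule, so it stays explainable even on small or imperfect datasets. This is a hypothetical illustration of the general approach, not Sentia’s actual algorithm; all weights, thresholds and field names are invented.

```python
# Minimal sketch of a transparent, rule-based lead score.
# Hypothetical illustration only - not Sentia's actual logic.
from dataclasses import dataclass

@dataclass
class Lead:
    value: float             # estimated deal size
    days_since_contact: int  # recency of last recorded activity
    stage: str               # e.g. "qualified", "proposal", "negotiation"

STAGE_WEIGHT = {"qualified": 1, "proposal": 2, "negotiation": 3}

def score(lead: Lead) -> float:
    """Deterministic score: every point is traceable to a rule."""
    s = STAGE_WEIGHT.get(lead.stage, 0) * 10.0        # pipeline progress
    s += min(lead.value / 10_000, 10)                 # capped deal-size bonus
    s -= max(lead.days_since_contact - 14, 0) * 0.5   # staleness penalty
    return s

leads = [Lead(50_000, 3, "proposal"), Lead(5_000, 60, "qualified")]
for lead in sorted(leads, key=score, reverse=True):
    print(lead, "-> score", round(score(lead), 1))
```

A scheme like this can still be wrong, but it is wrong in ways a sales team can inspect and argue with – which is rarely true of a model trained on too little, too-dirty data.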
The promise of AI is widely touted, and no doubt in time some of the above limitations will be resolved or work-arounds will be created.

Until then it remains, for most businesses anyway, a future technology that may add value but isn’t quite there yet.
https://www.linkedin.com/pulse/ai-failing-live-up-its-potential-david-forder/