Companies looking to position themselves in the AI market will quickly need to satisfy buyers' ethical questions – even though the demand curve looks steep. Buying the 'wrong AI' could do serious damage to brand equity.
The AI market is on track to grow to $100bn by 2025 – six-fold from 2019’s figure of $16.4bn.
Research from analyst house Omdia – Artificial Intelligence Software Market Forecasts – suggests that while the pandemic forced some sectors to slow their AI efforts, the market is still set for stellar growth as many companies have been forced to accelerate their AI adoption.
“Economic effects from the COVID-19 pandemic have widened the dichotomy between early AI adopters—the ‘AI haves’—and the trailing followers—the ‘AI have nots,’” said Omdia senior analyst, Neil Dunay.
“Industries that have pioneered AI deployments and have the largest AI investments are likely to continue to invest in what they view as proven, indispensable technology for cost cutting, revenue generation, and enhancing customer experience.”
But ensuring the AI industry grows safely poses challenges – not only for its makers but also for its buyers. As the industry develops, more questions are being raised about the ethics and biased foundations of some AI technologies.
For example, just this week, The Bennett Institute for Public Policy at the University of Cambridge issued a report listing the themes technology buyers need to consider, reflecting the general public's concerns about AI:
- Privacy and surveillance
- Bias, discrimination, and injustice in algorithmic decision-making
- Encoding of ethical assumptions in autonomous vehicle systems
- Artificial general intelligence as an existential risk to humanity
- Software user interface design as an impediment to human flourishing
- Job displacement from machine-learning and robotics
- Monetary compensation for personal data use
Report co-author Sam Gilbert notes:
“By giving ethics boards a formal role in governance structures, and giving individuals transparency and control over how their personal data is collected, stored, and used, tech companies can begin to transcend ‘ethics washing’.”
In a similar vein, in a paper released Thursday, researchers from Google’s DeepMind and the University of Oxford proposed rebuilding the AI industry on a foundation of anti-colonialism to avoid algorithmic exploitation.
These are just two interventions from major institutions within a single week, and many more are likely to follow to ensure AI companies don’t ignore deep philosophical, political, and social questions.