Three qualities define when a technology has matured, when it has truly come into its own and begun to reach its potential. I’m drawing on past technological advances that have transformed into basic utilities, the things we need to function in our everyday lives. Electricity and mobile phones are two fairly recent examples that come to mind.
No matter what the technology, it will not change the way we live our lives or run our businesses until it matures and becomes easy, cheap, and ubiquitous.
These qualities are benchmarks that can help us define AI’s progress from computer science curiosity to requisite tech tool. Though they sound subjective and fuzzy, if we break each benchmark into its constituent elements, they are measurable. Doing so helps us figure out what AI can do for us, where it needs to go, and how it fits into the models and processes of a wide range of businesses. It helps us boil down a vast number of options and nuances into a few handy rules of thumb to guide our decision making.
Benchmark 1: Easy
“Easy” in this context means fewer skills and less time are required. This may seem paradoxical for a bleeding-edge technology like machine learning and AI, but it is an achievable goal.
To reduce the skills required, you need an easy-to-use interface and flawless user experience. Everything you need should be at your fingertips. You should never be required to alter the back-end or open another tool to complete a task. That’s not easy. Instead, you need a product that is intuitive and covers most use cases.
Removing complexity is a critical but insufficient component of “easy.” Efficiency, the reduction of time spent, is an equally important measure. If it takes fifteen hours to wash your car, then it’s not easy, even if it doesn’t require complex or specialized skills.
The efficiency of your AI platform can be determined by looking at how much of the data science pipeline is streamlined and automated. Each step left as an exercise for the user (data cleansing, feature engineering, model selection, optimization, DevOps, and deployment to production) adds complexity and time.
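To make the idea of a streamlined pipeline concrete, here is a minimal sketch using scikit-learn, where cleansing, preprocessing, and model optimization are wired into a single object the user fits once. The dataset is synthetic and the model choices are illustrative, not a recommendation.

```python
# A sketch of automating pipeline steps so they are not left as an
# exercise for the user. Assumes scikit-learn; data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X[::17, 0] = np.nan  # simulate missing values that need cleansing

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # data cleansing
    ("scale", StandardScaler()),                   # feature preparation
    ("model", LogisticRegression(max_iter=1000)),  # the model itself
])

# Model selection and optimization handled by one search object.
search = GridSearchCV(pipeline, {"model__C": [0.1, 1.0, 10.0]}, cv=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
search.fit(X_train, y_train)
print("held-out accuracy:", round(search.score(X_test, y_test), 2))
```

The point is not the specific estimator but the shape: every step that a platform folds into one `fit` call is a step the user no longer spends time or specialized skills on.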
Benchmark 2: Cheap
“Cheap” is relative. Some projects demand sizable resources, but what matters is the return on investment. If the benefits gained are significantly larger than the investment, it’s cheap.
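The rule of thumb can be written down as simple arithmetic. The numbers below are hypothetical, purely to illustrate the ratio:

```python
# ROI as net gain divided by cost; the figures are made up.
def roi(benefit, investment):
    """Return on investment as a ratio of net gain to cost."""
    return (benefit - investment) / investment

# A $2M project that returns $10M in benefits:
print(roi(10_000_000, 2_000_000))  # 4.0, i.e. a 400% return
```

By this measure, a large project with a strong return is “cheap,” while a small project that returns less than it costs is not.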
Complexity and inconsistency are the enemies of cheap. They will quickly eat up your technology budget. If every time you ask your AI software platform to build a model it delivers a different technology, such as random forests or neural networks, you will need to deploy different skills, hardware, and software. It’s not cheap to hire, train, and maintain multiple skill sets or build redundant infrastructure based on the whims of your AI platform.
When you only need to support a single infrastructure, you reduce cost and complexity. When you reduce the skills required to use and maintain your systems, hiring and training your workforce becomes easier and much cheaper. And when you can dramatically increase the output of your existing team, your return on investment will skyrocket. In this case, AI becomes immensely cheaper.
Benchmark 3: Ubiquitous
“Ubiquitous,” by definition, means you are using it everywhere. If your AI efforts are limited by the data supported (images or text) or by the types of problems you can solve (supervised or unsupervised learning), then it’s not ubiquitous. If you can only refine pre-built models to force-fit them to your business, then it’s not ubiquitous. And if the cost and complexity of building models is so high that you are reserving your resources for only a handful of the highest-impact projects, then it’s not ubiquitous.
The biggest blocks to broad adoption of AI in businesses are data and understanding. A number of companies struggle with finding and preparing the data they need to build effective models. Those that do run into roadblocks to development because executives, regulators, or other stakeholders cannot understand how the model works or why it makes certain decisions. To be ubiquitous, your AI tools need to help with these common problems, not add to them.
Data quality has a significant impact on the accuracy, and even the feasibility, of AI models. Many of the tools available to data scientists today expect pristine levels of cleanliness and massive numbers of observations to work properly. On top of that, many platforms are limited in the size of dataset they can handle. This alone causes a number of companies to hit a brick wall before even starting.
The software you choose should be able to build robust models with the data you have on hand. If you need to spend months preparing data for training, that same effort will be required for predictions. More data generally helps, and any platform you choose should be able to support you as your data grows. But it should also be able to at least tell you whether gathering more data is worth the effort. Nothing prevents adoption more than spending six months gathering, cleaning, and preparing data only to find your hypothesis was wrong.
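One established way a platform can answer the “is more data worth it?” question is a learning curve: train on growing subsets of the data and check whether the validation score is still improving. A minimal sketch with scikit-learn, on a synthetic dataset chosen purely for illustration:

```python
# Learning curve: if the validation score has plateaued, gathering
# more data is unlikely to be worth the effort. Synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

sizes, _, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
)

for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n} samples -> validation score {score:.3f}")
```

If the last few points on the curve are flat, the six months of extra data gathering can be skipped; if the curve is still climbing, more data is a sound investment.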
Finally, the value of AI to your business depends on being able to use it! If, after all that work, you can’t put a model into production because you can’t explain why it works, then you have wasted time and resources. The solutions you choose should help explain how the predictions are being made and which features (data) impact the outcome. When you can apply AI to all areas of your business and quickly put those models into production, then AI becomes ubiquitous.
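Explaining which features drive a model’s predictions can be sketched with permutation importance: shuffle each feature in turn and measure how much the score drops. This uses scikit-learn on synthetic data; it is one illustrative technique among several, not the only way to explain a model.

```python
# Permutation importance as a simple explainability sketch.
# Assumes scikit-learn; the data and model are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data where only a couple of features are informative.
X, y = make_classification(n_samples=500, n_features=6, n_informative=2,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Shuffling a feature breaks its relationship to the target;
# the resulting score drop estimates that feature's importance.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = np.argsort(result.importances_mean)[::-1]
print("features ranked by importance:", ranked)
```

A ranked list like this is the kind of artifact that lets executives, regulators, and other stakeholders see which data actually drives the outcome.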
When AI is accessible to a broad range of skill sets, from Ph.D.’s to subject matter experts, it becomes easy. When more people can deliver AI faster, it becomes cheap. When you can apply AI across your company, it becomes ubiquitous. Only then will AI become the next utility.