[Ed. Note: We have heard from a range of AI practitioners for their predictions on AI Trends in 2021. Here are predictions from a selection of those contributors.]
From Florian Douetteau, CEO and co-founder of Dataiku:
Inclusive engineering will begin to make its way into the mainstream to support diversity. To ensure diversity is baked into their AI plans, companies must also commit the time and resources to practice inclusive engineering. This includes, but certainly isn’t limited to, doing whatever it takes to collect and use diverse datasets. It also means creating an experience that welcomes more people into the field, looking at everything from education to hiring practices.
There will be more of an organizational commitment to putting humans and diversity at the center of AI development. Companies that want to truly reduce bias and foster diversity will look to include people who are representative of those who will use the algorithms. While most training datasets have been developed against a small percentage of the population, companies will now look to expand their scope and design training datasets that are all-inclusive. The more inclusive the group building the AI and the datasets, the lower the risk of bias.
AI experimentation will become more strategic. Experimentation takes place throughout the entire model development process; usually every important decision or assumption comes with at least some experiment or prior research to justify it. Experimentation can take many shapes, from building full-fledged predictive ML models to running statistical tests or charting data. Trying every combination of hyperparameters, feature handling, and so on quickly becomes intractable. Therefore, we’ll begin to see organizations define a time and/or computation budget for experiments, as well as an acceptability threshold for the usefulness of the model.
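The budgeting idea above can be sketched in a few lines. Everything here is hypothetical for illustration (the scoring function, the parameter grid, the threshold values), but it shows how a trial budget, a time budget, and an acceptability threshold bound an otherwise open-ended search:

```python
import random
import time

def budgeted_search(evaluate, sample_params, time_budget_s=5.0,
                    max_trials=50, good_enough=0.95):
    """Randomly sample hyperparameters until the time budget, the
    trial budget, or the acceptability threshold is reached."""
    best_score, best_params = float("-inf"), None
    deadline = time.monotonic() + time_budget_s
    for _ in range(max_trials):
        if time.monotonic() >= deadline:
            break                      # time/computation budget exhausted
        params = sample_params()
        score = evaluate(params)
        if score > best_score:
            best_score, best_params = score, params
        if best_score >= good_enough:
            break                      # model is "useful enough"; stop early
    return best_params, best_score

# Toy stand-in for model training: score peaks at lr = 0.1 (made up).
def evaluate(params):
    return 1.0 - abs(params["lr"] - 0.1)

def sample_params():
    return {"lr": random.choice([0.001, 0.01, 0.1, 0.5])}

best_params, best_score = budgeted_search(evaluate, sample_params)
```

In a real pipeline, `evaluate` would train and validate a model, and the budget would be set per project rather than hard-coded.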
From Ryohei Fujimaki, Ph.D., Founder & CEO of dotData:
AI Automation will Accelerate Digital Transformation Initiatives: “While the first wave of digital transformation focused on the digitization of products and services, the second wave – and what we will begin to see much more of in the coming year – will focus on using AI to optimize organizational efficiencies, generate deeper data-driven insights, and automate intelligent business decision-making. One of the key reasons that this is happening now is the availability of AI and ML automation platforms that make it possible for organizations to implement AI quickly and easily without investing in a data science team.”
More AI in BI: “As organizations face increased pressure to optimize their workflows, more and more businesses will begin asking BI teams to develop and manage AI/ML models. Because BI teams are closer to the business use cases than data scientists, the life cycle from requirement to working model will be accelerated.”
Kendall Clark, founder & CEO of Stardog:
Making Knowledge “Machine-understandable”: “The reality of digital transformation is that most “data-driven” efforts are doomed to fail, primarily because machines are not humans! Human decision-making is based on contextual intelligence, and in order to successfully automate, machines need to know what we know. One technology that is helping organizations address this need is the enterprise knowledge graph (EKG), a modern data integration approach that allows organizations to discover hidden facts and relationships through inferences that would otherwise be impossible to catch at scale.”
Semantic Graphs and the New Data Integration Landscape: “Relational data was never designed to support complex business processes with changing requirements. Relational data integration is an artifact of where data management was 20 years ago; relational systems were simply never meant to represent large-scale information systems.”
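As a rough illustration of the kind of inference a knowledge graph enables (a toy sketch with made-up facts and plain tuples as subject–predicate–object triples, not Stardog's actual engine), materializing the transitive closure of a single relationship already surfaces facts no one stated explicitly:

```python
# Hypothetical facts, stored as (subject, predicate, object) triples.
triples = {
    ("acme_corp", "subsidiary_of", "globex"),
    ("globex",    "subsidiary_of", "initech"),
    ("initech",   "headquartered_in", "berlin"),
}

def infer_transitive(triples, predicate):
    """Materialize the transitive closure of one predicate, adding
    facts that follow from chains of explicitly stated facts."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(inferred):
            for (b2, p2, c) in list(inferred):
                if p1 == p2 == predicate and b == b2 and a != c:
                    fact = (a, predicate, c)
                    if fact not in inferred:
                        inferred.add(fact)
                        changed = True
    return inferred

facts = infer_transitive(triples, "subsidiary_of")
# The hidden fact (acme_corp, subsidiary_of, initech) is now discoverable.
```

Production knowledge graphs express this kind of rule declaratively (e.g., in OWL or a rules language) and scale it far beyond a nested loop, but the principle, deriving unstated relationships from stated ones, is the same.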
Eliano Marques, EVP Data & AI, Protegrity:
Privacy-preserving techniques, synthetic data, and data generalization will drive “Responsible AI”:
“Over the past few years, data sharing has been on the rise as organizations seek to do more with data and advance their AI and machine learning capabilities. Thankfully, amidst this backdrop, innovators have also recognized the need for “Responsible AI,” which prioritizes privacy and requires greater governance over the decisions made by AI models.
While there is an awareness today of what technologies can make AI safer and more responsible, research on emerging techniques for multi-party computation will be a priority in 2021, particularly as organizations seek out new ways to share data without compromising security.”
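One classic building block of multi-party computation is additive secret sharing. The sketch below (hypothetical values, an illustrative modulus) shows how two organizations could jointly compute a sum without either revealing its input:

```python
import random

PRIME = 2_147_483_647  # modulus for the shares; illustrative choice

def share(secret, n_parties=3):
    """Split a secret into n additive shares modulo PRIME.
    Any subset of n-1 shares reveals nothing about the secret."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine the shares to recover the secret."""
    return sum(shares) % PRIME

# Each party secret-shares its private value; the parties then add
# shares component-wise and only the combined total is ever revealed.
a_shares = share(1200)   # organization A's private input
b_shares = share(800)    # organization B's private input
joint = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
total = reconstruct(joint)   # 2000, with neither input disclosed
```

Real MPC protocols add authenticated shares, malicious-security checks, and multiplication protocols on top of this primitive, but additive sharing is the core idea behind "sharing data without compromising security."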
“Companies should look to implement privacy-preserving solutions – such as those that deliver differential privacy and k-anonymity – to guarantee additional privacy of individuals’ data while also reducing bias in ML algorithms. Data generalization, a technique that replaces low-level values (e.g., a numerical age) with higher-level concepts (e.g., young or elderly), is one potential option for reducing bias. Synthetic data capabilities – such as a machine learning model that generates proxy data based on real data, which can then be shared without revealing sensitive information – are also a viable approach to privacy preservation. These techniques are fairly fresh in the industry, and generating awareness around them will be critical in the next couple of years.”
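Two of the techniques named above can be illustrated in a few lines of Python. The age bands and the epsilon value here are arbitrary choices for the sketch, not recommendations:

```python
import random

def generalize_age(age):
    """Data generalization: replace a precise value with a broader concept."""
    if age < 30:
        return "young"
    if age < 65:
        return "adult"
    return "elderly"

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count via the Laplace mechanism: the true
    count plus Laplace(1/epsilon) noise (a count query has sensitivity 1).
    A Laplace sample is the difference of two exponential samples."""
    true_count = sum(1 for v in values if predicate(v))
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [23, 35, 41, 67, 70, 29]            # made-up records
coarse = [generalize_age(a) for a in ages]  # safe to share in aggregate
noisy_seniors = dp_count(ages, lambda a: a >= 65)
```

Smaller epsilon means more noise and stronger privacy; choosing it is a policy decision, not just a technical one.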
Anil Kaul, CEO of Absolutdata:
Hyperautomation: “Business-driven hyperautomation is a disciplined approach that organizations use to rapidly identify, vet and automate as many approved business and IT processes as possible. Although hyperautomation has been trending at an unrelenting pace for the past few years, the pandemic has heightened demand with the sudden requirement for everything to be “digital first.””
“Hyperautomation is now inevitable and irreversible. Everything that can and should be automated will be automated. The acceleration of digital business requires efficiency, speed, and democratization. Hyperautomation often results in the creation of a digital twin of the organization (DTO), allowing organizations to visualize how functions, processes and key performance indicators interact to drive value. The DTO then becomes an integral part of the hyperautomation process, providing real-time, continuous intelligence about the organization and driving significant business opportunities.”
Digital Twins for almost everything: “A digital twin is a virtualized model of a process, product or service. The pairing of the virtual and physical worlds allows data analysis and system monitoring to help identify problems before they even occur, preventing downtime, uncovering new opportunities and even planning for the future through simulations. This generation of digital twins allows businesses not only to model and visualize a business asset, but also to make predictions, take action in real time and use current technologies such as AI and ML to augment and act on data in clever ways.”
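A minimal sketch of the pattern, with a hypothetical `PumpTwin` class and made-up numbers: the twin mirrors a sensor reading from the physical asset, then simulates forward to flag an overheat before it happens:

```python
class PumpTwin:
    """Hypothetical digital twin of a pump: mirrors sensor readings
    and simulates future state to surface problems early."""

    def __init__(self, temp_limit=90.0):
        self.temp = 20.0            # last known temperature, Celsius
        self.temp_limit = temp_limit

    def sync(self, sensor_temp):
        """Update the virtual model from the latest physical reading."""
        self.temp = sensor_temp

    def hours_until_overheat(self, degrees_per_hour):
        """Simulate forward to predict a failure before it occurs."""
        if degrees_per_hour <= 0:
            return None             # no warming trend, no predicted failure
        return max(0.0, (self.temp_limit - self.temp) / degrees_per_hour)

twin = PumpTwin()
twin.sync(sensor_temp=75.0)            # reading streamed from the real pump
eta = twin.hours_until_overheat(5.0)   # hours left at the observed trend
```

Real digital twins replace the linear extrapolation with physics models or learned ML predictors, and sync continuously from telemetry rather than one reading at a time.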