62% Of US Consumers Like Using Chatbots To Interact With Businesses

Recent surveys, studies, forecasts and other quantitative assessments of the progress of AI highlighted the growth in consumers’ acceptance of chatbots, especially for help with routine tasks; questions about medical AI algorithms missing the worst patient outcomes; and new predictions for 2020 and beyond about the future of AI and work.

AI business and consumer adoption

Consumers in Australia, the UK, and France report the highest levels of chatbot usage, with more than 70% of respondents having used one to interact with a brand in the past year; the US and Germany are slightly behind at just over 50%; among people who have interacted with chatbots in the past year, 80% used them for customer care, up from 67% the previous year; among Americans, interest in using messaging to interact with brands rose significantly, from 52% in 2018 to 62% this year; the European countries surveyed had the highest average demand for messaging, at 65% of respondents; this capability is of most interest to younger demographics, with more than 70% of people 18-34 years old wanting the option to message with businesses; consumers are gaining confidence in the ability of bots to help with routine tasks—for example, more than 50% of respondents said they’d prefer a bot over a human agent to tell them their account balance or update an address; conversely, consumer confidence in bots is lower for more complex tasks—just 15% said they would want a bot to assist with correcting a mistake on a bill [LivePerson survey of more than 5,000 adults in six countries]

64% of surveyed organizations plan to increase their AI investment over the coming year; 77% agree that while AI increasingly peels off simpler customer service requests, it also increases the need for agents to develop additional skills to deal with more complex and higher-value customer inquiries; 74% of respondents state that the number of agents will either grow or stay the same this year; 79% believe that AI would enable contact centers to deliver consistent, timely and contextually relevant experiences [NICE inContact and Forrester Consulting online survey of 307 organizations in the US, the UK, and Australia]

AI business impact

Manpower France collects 1.3 million invoices from 80,000 companies annually. After nine months of testing Sidetrade’s Aimie, a traditional machine learning-based tool, Manpower found that its collections increased 12% [Fortune]

The first eight IT teams at Fannie Mae to receive Moogsoft’s AIOps tool have seen a 35% reduction in IT incidents over the past 12 months; the teams using the AIOps tool have cut the time needed to resolve problems by between 25% and 75%, depending on the issue; Fannie expects that when it deploys the AI system to all business units and the system gets better at pinpointing root causes, monthly incidents will decline by 50% to 60% over the next year [WSJ]

AI research results

Familial hypercholesterolaemia (FH) is a common genetic disorder that carries a 20-times higher risk of life-threatening cardiovascular disease, yet today less than 10% of the 1.3 million Americans born with FH are diagnosed. The FIND FH screening algorithm was trained on data from 939 clinically diagnosed individuals and 83,136 individuals presumed free of FH. The model was then applied to a national health-care encounter database (170 million individuals) and an integrated health-care delivery system dataset (174,000 individuals). Of the cases reviewed by FH experts, 87% in the national database and 77% in the health-care delivery system dataset were categorized as having a high enough clinical suspicion of FH to warrant guideline-based clinical evaluation and treatment [The Lancet Digital Health]

Ancient texts and inscriptions are often damaged, and their illegible parts must be restored by specialists known as epigraphists. A deep neural network trained to help fill in the missing pieces achieved a 30.1% character error rate, compared with 57.3% for human epigraphists [DeepMind]
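Character error rate here is presumably the standard metric: the Levenshtein (edit) distance between the restored text and the ground truth, divided by the length of the ground truth. A minimal sketch of that metric, assuming the conventional definition (the function name and example strings are illustrative, not from the DeepMind work):

```python
def char_error_rate(predicted: str, reference: str) -> float:
    """CER = Levenshtein edit distance / length of the reference text."""
    m, n = len(predicted), len(reference)
    # prev[j] holds the edit distance between predicted[:i-1] and reference[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if predicted[i - 1] == reference[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / n if n else 0.0

# A 10-character "inscription" restored with 3 wrong characters:
print(char_error_rate("abcdefgxyz", "abcdefghij"))  # 3 errors / 10 chars -> 0.3
```

By this measure, the reported 30.1% means roughly three of every ten restored characters were wrong, versus nearly six of ten for the human specialists.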

Researchers argue that average performance across a broad test set of medical AI algorithms is far less important than how these algorithms perform in detecting the smaller subset of disease features associated with the worst patient outcomes. They contend that human doctors, by contrast, tend to be highly attuned to these outliers, even if humans tend to perform worse than the machines on average across all disease types. Most medical AI algorithms are not rigorously tested on these outlier disease subsets and, as a result, may misrepresent how safe they are. Medical AI ought to be assessed for its impact on actual patient care and patient outcomes—much in the way drugs are tested today—not just on how well it performs on a broad test set [Fortune and arXiv]

Researchers at the Mayo Clinic used facial-recognition software to match photographs of 84 volunteers to unidentified MRI images showing outlines of their heads; the facial-recognition software correctly identified 70 (83%) of the study participants [The New York Times and New England Journal of Medicine]

Using a dataset of 41,000 infection cases extracted from a large repository of more than 7 million hospital patient encounters, Royal Philips and the U.S. Department of Defense have developed an AI tool that screens vital signs and other biomarkers to predict the likelihood of infection up to 48 hours ahead of clinical suspicion [Healthcare IT News]

The future of work, now

Emerging technologies aren’t actually going to replace the over 1 million warehouse workers in the US anytime soon. But over the next 10 years, the technology may make their lives harder [Recode and UC Berkeley]

The Life of Data, the fuel for AI

On average, data volumes are growing at a rate of 63% a month; 12% report that their data volumes are growing at 100% or more per month; more than 20% of those surveyed report drawing from 1,000 or more data sources; more than 90% said it is challenging to make data available in a format usable for analytics; data portability (45%) and scalability (46%) top the list of perceived benefits of a modern approach to data transformation [Matillion and IDG survey of 200 IT, data science, and data engineering professionals at North American organizations with at least 1,000 employees]

85% of managed service providers (MSPs) reported attacks against small-to-medium-sized businesses (SMBs) over the last two years, up from 79% in 2018; 64% of MSPs report experiencing a loss of business productivity for their SMB clients while 45% report business-threatening downtime. The average cost of that downtime is $141,000, a more than 200% increase from 2018 [Datto survey of more than 1,400 managed service providers worldwide]

AI market forecasts

The market for data-labelling services may triple to $5 billion by 2023, according to Redpoint Ventures [The Economist]

The market for AI-underwritten insurance premiums will grow from $1.3 billion to $20 billion worldwide by 2024 [Juniper]

Asia/Pacific (excluding Japan) spending on AI systems will reach $6.2 billion in 2019, recording an increase of almost 54% when compared to 2018; spending on AI systems will increase to $21.4 billion by 2023 with a compound annual growth rate (CAGR) of 39.6% over the 2018-23 forecast period [IDC]
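IDC's CAGR figure can be sanity-checked from the endpoints: the 2018 base is implied by the ~54% rise to $6.2 billion in 2019, and the compound rate follows from growth to $21.4 billion over the five-year 2018-23 span (small differences come from rounding in the reported figures):

```python
# Implied 2018 base: $6.2B in 2019 was a ~54% increase over 2018
base_2018 = 6.2 / 1.54          # ~ $4.03B
end_2023 = 21.4                 # forecast spending in $B
years = 5                       # 2018 -> 2023

cagr = (end_2023 / base_2018) ** (1 / years) - 1
print(f"{cagr:.1%}")            # 39.7%, consistent with IDC's reported 39.6%
```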

The future of AI, predicted

By 2023, the number of people with disabilities employed will triple due to AI and emerging technologies, reducing barriers to access; by 2024, AI identification of emotions will influence more than half of the online advertisements you see; by 2023, a self-regulating association for oversight of AI and machine learning designers will be established in at least four of the G7 countries; by 2023, individual activities will be tracked digitally by an “Internet of Behavior” to influence benefit and service eligibility for 40% of people worldwide [Gartner]

By 2022, 30% of organizations using AI for decision-making will come up against “shadow AI”—data outside the ownership of the IT department that is used to build AI models—as the biggest risk to effective and ethical decisions; by 2023, over 75% of large organizations are expected to hire AI specialists in behavior forensics, privacy and customer trust to reduce brand and reputation risk; by 2022, 30% of all cyberattacks will involve “poisoning” the training data that is used to build AI systems, stealing AI models and infiltrating AI models with samples that cause them to make the wrong decisions; by 2022, 30% of organizations will have invested in explainable AI—technology that can explain how AI algorithms come to their conclusions [Gartner]

China has about 200 million CCTV cameras in use today, predicted to rise 213% to 626 million by 2020; Mainland China is home to eight of the world’s 10 most heavily monitored cities [South China Morning Post]

The future of AI in the US, envisioned

“To execute a human-centered national AI strategy, we propose that the US government build a new AI ecosystem across education, research and entrepreneurship, with an investment of at least $120 billion over ten years”—Fei-Fei Li and John Etchemendy, “We Need a National Vision for AI,” Stanford University

AI quotable quotes

“It’s a big thing to integrate [causality] into AI. Current approaches to machine learning assume that the trained AI system will be applied on the same kind of data as the training data. In real life it is often not the case”—Yoshua Bengio

“We are not going to get intelligence as general as humans’ with supervision or with multi-task learning. We are going to have to have something else”—Yann LeCun

“…automatic labeling [as fake news] of news stories raises… many questions about fairness and algorithm transparency, suggesting that it is likely that the final call will still depend on an expert at the end point for a long time”—Julio C. S. Reis, André Correia, Fabrício Murai, Adriano Veloso, and Fabrício Benevenuto, Supervised Learning for Fake News Detection

AI “mimicking the brain” (or maybe not) quote of the week

“[Mind-reading by machines] would require a full understanding of the language of the brain. And to be very clear, we don’t fully understand the language of the brain”—John Dylan Haynes, professor of neuroscience at the Charité Universitätsmedizin in Berlin