Why AI matters: Six reasons you should give a fig
Right now there are a few other macro-preoccupations that deserve our attention. Nonetheless, whilst we’re busy dealing with these, AI advancement proceeds apace. Here are six reasons why each of us should give a fig.
Originally published at theaigroupie.com
1. AI will permeate and enhance more and more of our daily lives
AI already plays a central behind-the-scenes role in our lives. I’m not about to rattle off some futuristic day-in-the-life. The future is already here.
When you wake up, you may find yourself instructing your AI-powered smart speaker to adjust the heating before you attend to a few chores. You order some goods online. They’re available for 24-hour delivery because an AI algorithm has anticipated local demand and has had them warehoused in a local distribution centre. You order some groceries. They’re delivered by a van whose route has been planned by an AI algorithm, tweaked during the day as the algorithm observes local traffic volumes.
You access your phone via AI facial recognition and start attending to emails. A portion of these are sufficiently undemanding for you to respond without modifying the AI-generated smart replies. Against your better instincts, you put in a call to a telecoms company to chase a new broadband installation. The AI natural language interface listens to your query and routes you to the appropriate team, a team whose size has been dimensioned for the day by an AI algorithm predicting today’s call volume.
You get into your car and drive to the station. AI gets your car out of the tight spot you’re boxed into and parks it when you arrive. You get on the train, read a news feed that’s been curated for you by AI and watch a tennis highlights video compiled by AI. The train is re-platformed before it reaches its destination by an AI algorithm that anticipated delays. You leave the train and walk to work listening to AI-curated music through the latest AI-enhanced noise-cancelling headphones. You walk through the comms zone of a mobile tower that was repaired overnight after an AI algorithm predicted a high likelihood of an impending fault. Your day has barely started, and yet it has already been actively shaped by AI.
AI is increasingly underpinning the architecture of the lives we lead. OK, but so what? After all, how many of us really understand how our 24/7 companion, the mobile phone, works, let alone AI? For most intents and purposes, we don’t need to.
Unfortunately, owing to the other five reasons, AI isn’t just another under-the-bonnet technology of which we can happily remain ignorant.
2. AI will reshape industries and the role of public vs. private
IDC forecasts that global spending on AI will grow from $50bn in 2020 to more than $110bn in 2024. More rooted in the here and now, the McKinsey “Global Survey: The state of AI in 2020” found that “A small contingent of respondents coming from a variety of industries attribute 20 percent or more of their organizations’ earnings before interest and taxes (EBIT) to AI.” If I’ve read their numbers right, that “small contingent” is 8% of the surveyed companies. That’s pretty significant given the recency of the tech and the non-trivial organisational and capability requirements for success. It’s also pretty significant given that AI today is largely about point solutions to point problems in data-rich environments.
There are already sectors in which AI is a prerequisite to compete. Online search, online advertising and online content consumption are obvious ones. Less obvious are on-demand transportation and ride-sharing, cybersecurity and, increasingly, online retailing and warehouse management. The list is going to grow. As the billionaire investor Mark Cuban bluntly put it:
“If you don’t know AI, you’re the equivalent of somebody in 1999 saying, ‘I’m sure this internet thing will be OK, but I don’t give a s — -.’”
— MARK CUBAN
There is a real possibility of a winner-takes-most phenomenon hitting those industries where AI becomes an essential tool for bringing down costs, improving efficiencies and enhancing products, services and their marketing. AI resources (compute and talent) are expensive. The returns to experience (in the form of developing robust data management environments, developing and managing production machine learning solutions, integrating these into redesigned business processes, etc.) are material. For many corporates — and indeed some governments — these resources and experience gains will be inaccessible except via third parties. The best positioned will sometimes be the usual suspects. We should expect to see the tech giants providing ever more vertically-bespoke AI services to a range of industries.
The holy grail of AI R&D is to break the key limitations of today’s machine learning: to have it work as effectively in small-data environments as in big ones, to learn through interacting with the world rather than from labelled training data, to break the present paradigm of simple prespecified goal-optimisation and, ultimately, to produce solutions that are general-purpose rather than narrowly focused. If these limitations are overcome, the scope of applicability of AI — and the corresponding returns to AI investments — will be staggering. And there are fewer and fewer entities capable of funding this research.
OpenAI, the company that released the leading-edge language model GPT-3 in 2020 (see “AI Synthetic Media: What to expect and what it means”), regards itself as one of the smaller players in the AI research space. It was originally funded to the tune of c.$1bn. Sam Altman, its CEO, was asked how much it was going to cost to develop Artificial General Intelligence. His answer:
“We will spend whatever it takes… I don’t know what that number is, but it’s going to be vastly more than a billion dollars.”
— SAM ALTMAN, CEO OPENAI
DeepMind — Google’s London-based AI lab — made a loss of $649m in 2019 and had a $1.5bn debt to its parent company waived. Google in many ways represents the ideal parent: the returns to implementing AI improvements in its core search and advertising business are huge, and it has realised AI gains in a wide range of other areas as well. The investment sums that will seem vast and inaccessible to many corporates and governments are rational and remunerative to Facebook, Google and their peers. McKinsey estimated that in 2016 the tech giants collectively spent $20–30bn on AI, 90 percent of it on R&D. It would be no surprise if the same figure today were much larger.
AI compute and talent are concentrating in a small number of commercial entities and a handful of universities. AI-driven, commercially provided services — and possibly, eventually, public services — with huge social impact (e.g., the filtering algorithms that curate personalised online content and arguably influence our personal beliefs and opinions) will increasingly be owned or managed by private corporations with little accountability to the public. The governance of AI is something we should all take an interest in.
3. AI will shape the future of employment
I have three young daughters. Aside from the usual, slightly maniacal insistence that they get very good at things I’m not very good at but always wanted to be, there is this more material consideration of how they should best equip themselves for the future. What are the skills that will best serve them when so many doomsayers are predicting the displacement of so many jobs by AI?
Professor Geoff Hinton — one of the Godfathers of deep learning — famously killed off the profession of radiologist in 2016 when forecasting the impact of AI on image analysis:
“If you work as a radiologist you’re like the coyote that’s already over the edge of the cliff but hasn’t yet looked down so doesn’t realise there’s no ground underneath him… People should stop training radiologists now… It’s completely obvious that within 5 years deep learning is going to do better than radiologists… It might be 10 years.”
— PROFESSOR GEOFF HINTON (IN 2016)
Febrile words for a computer scientist.
I’m not (yet) asserting that Hinton got it wrong, but I am suggesting that the profession of radiologist, fascinatingly, is going to be a strong litmus test of the extent of AI’s automation impact on jobs more broadly. In many ways, it is a canary in the coal mine.
In the face of AI, radiology has everything going for it. Radiologists don’t just look at images: they offer highly complementary value-add to technology by overseeing an entire diagnostic pathway, including patient-facing diagnostic roles (ultrasound, fluoroscopy, biopsy, etc.), consulting with other physicians, and ultimately contributing to a high-consequence treatment decision in which the technical image diagnostic is often not the only consideration. Health budgets aside, innate demand for radiology (and medical services in general) is highly elastic — our reserve price for a little bit more life is very high. If radiology becomes more cost and time efficient and more accurate, demand is likely to grow. Proactive scanning for as yet unidentified conditions might even become feasible. Radiology (at least at present) requires a high qualification level, mitigating the possible effects on wages of a flood of lower-skilled labour working with technology to perform the same service. And even before the impact of Covid, there was an acute shortage of radiologists in many countries. In the UK, pre-Covid demand for CT and MRI scans was growing at 9% a year — three times the growth rate of the radiologist workforce. AI may well prove to be the only means by which radiology is able to cope.
Other professions may be less fortunate. Take external audit — the provision of an independent review of a company’s accounts by a third-party accounting firm. Parts of the audit process are susceptible to automation by AI, bringing down costs and improving accuracy. In contrast to radiology, one could argue that the opportunity to add value around the technology is more limited — auditors are constrained in the amount of advice they can provide to clients by the need to preserve independence. Demand for audit services may well be price inelastic — clients only want so much “audit” and indeed would very much prefer to pay less than they pay today. Auditors, I suspect, often compete on little more than price. Automation-driven cost efficiencies may not translate into a growth in demand but rather be delivered, in whole or in part, to clients in the form of lower prices. Audit may well be a profession facing a decline in overall employment and, possibly, wages as a consequence of automation.
This analysis probably betrays a fair amount of ignorance of the audit profession (and of radiology) but it shouldn’t be a surprise if the impact of AI automation on employment levels and wages plays out very differently in different sectors. The MIT economist David Autor in his 2015 paper “Why are there still so many jobs?” (ref. 1) showed how automation in the US has led to a polarisation of employment growth — with growth in the two broad classes of jobs most difficult to automate away (high-end jobs requiring problem-solving, intuition, creativity and persuasion and, at the other end of the wage spectrum, more manual roles requiring situational adaptability, visual recognition and in-person interactions — cleaners, hairdressers, personal services, etc.) but slower or negative growth amongst the highly automatable middle-skilled ranks. AI is likely to further increase the extent of polarisation as it becomes capable of replicating the aforementioned most-difficult-to-automate-away skillsets.
Often ignored in this debate is the prospect of the new job creation that AI is stimulating. The World Economic Forum’s 2020 “Future of Jobs” Survey forecasts that, across the 26 countries surveyed (which collectively represent 80% of world GDP), by 2025 85 million jobs may be displaced by machines, but that 97 million new roles may emerge to support this new division of labour. The top-ranked new roles include data scientists, machine learning specialists, big data engineers, digital marketers and process automation specialists. The top-line numbers are encouraging, but they do mean that, on average, 3.7 million new roles with these new skills (97 million spread across the 26 countries) will need to be filled per country over the next five years — and most will not be filled through reskilling. Over time, the shape of a nation’s workforce — and the demands this will place on education and training — is going to change, potentially radically.
4. AI will be a critical tool in the toolbox needed to address the mega-issues facing humanity
As far as catastrophic risks facing humanity go, you can take your pick. From climate change to biodiversity loss, water scarcity to food insecurity, plastics pollution to the newly experienced risk of pandemics, there is no shortage of challenges facing us as we strive to survive the 21st century.
AI’s great strength is specialisation. Whilst the evolutionary process built humans to be general purpose machines — jacks-and-jills-of-all-trades — AI trumps us by being very, very good at ultra-specialisation: finding and exploiting patterns in data one problem at a time. And in the business of existential threat aversion, ultra-specialisation has a role.
Take climate change, for example. Commercial and residential buildings account for 18% of greenhouse gas emissions (ref. 2), driven by building heating and cooling systems. Machine learning will be an important element in controlling building energy consumption — a number of companies are already doing this, with energy reductions in the range of 20–50%. The same will be true for some aspects of industrial energy consumption. Datacentres used only 1% of global electricity in 2018, but a worst-case estimate has datacentre power consumption growing by a factor of 40 by 2030. Machine learning will play an essential role in ensuring that this does not come to pass: DeepMind reduced the energy consumption of Google’s datacentres by 30% using a machine learning system that now runs autonomously. Machine learning will also play a role in optimising the energy output of renewable alternatives.
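To give a feel for the shape of such a system, here is a minimal sketch of a model-predictive cooling loop in Python. It is an illustration under invented assumptions, not a description of DeepMind’s actual controller: a stand-in “learned” model predicts the energy and temperature consequences of candidate cooling setpoints, and the controller picks the least energy-hungry setpoint that keeps temperatures safe. The function names and numbers (predict_energy_kw, the setpoint grid, the 30°C limit) are all hypothetical.

```python
# A minimal, hypothetical model-predictive cooling loop. The "learned"
# models below are invented stand-ins: a real system would train them
# on telemetry. None of this is DeepMind's actual controller.

def predict_energy_kw(setpoint_c: float, it_load_kw: float) -> float:
    """Stand-in for a learned regressor: predicted total facility power
    for a given cooling setpoint and current IT load. Toy rule: colder
    setpoints cost more energy."""
    return it_load_kw * (1.0 + 0.04 * (27.0 - setpoint_c))

def predict_max_temp_c(setpoint_c: float, it_load_kw: float) -> float:
    """Stand-in for a learned thermal model: predicted hottest rack-inlet
    temperature under this setpoint and load."""
    return setpoint_c + 0.002 * it_load_kw

def choose_setpoint(it_load_kw: float, temp_limit_c: float = 30.0) -> float:
    """Pick the least energy-hungry setpoint whose predicted worst-case
    temperature stays inside the safety limit."""
    candidates = [18.0 + 0.5 * i for i in range(20)]  # 18.0 ... 27.5 degC
    safe = [s for s in candidates
            if predict_max_temp_c(s, it_load_kw) <= temp_limit_c]
    return min(safe, key=lambda s: predict_energy_kw(s, it_load_kw))

print(choose_setpoint(it_load_kw=2000.0))  # -> 26.0 under these toy models
```

Real systems differ in almost every detail, but the pattern of learning a model of the plant and then optimising setpoints against it is the essence of how machine learning cuts energy use here.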
Road transport accounts for 12% of global emissions. As electric vehicles are rolled out, machine learning will be used to analyse travel patterns and optimally site charging locations, to model charging behaviours so that grid operators can predict and manage the new load, and to manage and maximise battery life. AI is just starting to be deployed to manage and redesign city transport systems, improving flow and thereby reducing emissions — see Vivacity’s case studies to get a feel for the use cases already in operation (who knew “Urban Computing” would one day be a thing?).
Machine learning will be behind the computer vision tools needed for the remote sensing of emissions by satellites — so that emissions regulations can be credibly set and enforced. The same goes for monitoring deforestation, and for monitoring the health of global peatlands (which sequester twice as much carbon as all of the world’s forests). Longer term, machine learning will play a role in identifying suitable underground locations for carbon sequestration and actively monitoring their health.
There are many, many other machine learning use cases to address climate change (ref. 3) — the above are only exemplars. Machine learning will play similarly important roles in tackling all of the mega-issues we face. AI is not going to solve world hunger any time soon, but — like it or not — without it we will face a higher risk of extinction.
5. AI will help us understand ourselves better and force us to think more clearly about who we are
The brain has long been a source of inspiration for AI. Some of the features of convolutional neural networks — the deep neural networks designed for image processing — were directly inspired by the brain’s visual system. There are many other examples. Deep learning’s phenomenal success means that the pendulum is now swinging in the other direction: lessons from deep learning are providing new insights into how the brain works, particularly with respect to the visual and auditory cortices. Some go even further and argue (ref. 4) that the only way to progress from understanding the workings of small handfuls of brain neurons to understanding the vast hierarchies of different neural agglomerations within the brain is to adopt the conceptual design framework of deep learning itself. Hubris? In the coming years, we’re going to find out.
Beyond providing insights to the field of neuroscience, AI deployment in spaces of moral ambiguity will force us to face up to moral dilemmas that have plagued us for centuries and perhaps force us to get more clarity on human objectives and priorities. The advent of self-driving cars is one notorious example. As others have written (ref. 5):
“… the advent of self-driving cars would force us to treat as unproblematic the normative regulation of interactions that are inescapably tragic. More concretely, we will need to programme these cars’ algorithms as if there existed uncontroversial answers to the moral dilemmas that they might well face once on the road.”
— R. RODOGNO AND M. NØRSKOV (2019)
Put another way, how should a self-driving car trade off the safety of its occupants against the safety of pedestrians? Such wranglings with human morality will be writ large if the field of autonomous AI weaponry takes off.
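To see why writing the algorithm forces the issue, consider this deliberately crude sketch. Every number in it, including the occupant_weight parameter, is invented for illustration; the point is that whichever manoeuvre a planner picks, it has implicitly committed to a numerical moral weight.

```python
# Toy illustration only: encoding the occupant-vs-pedestrian trade-off
# forces an explicit moral parameter. All probabilities are invented.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    p_occupant_harm: float    # estimated probability of harming occupants
    p_pedestrian_harm: float  # estimated probability of harming pedestrians

def expected_cost(m: Manoeuvre, occupant_weight: float) -> float:
    """Weighted expected harm. occupant_weight = 1 treats everyone equally;
    occupant_weight > 1 privileges the people inside the car. Choosing this
    number IS the moral dilemma, now expressed as a constant in code."""
    return occupant_weight * m.p_occupant_harm + m.p_pedestrian_harm

options = [
    Manoeuvre("swerve into barrier", p_occupant_harm=0.30, p_pedestrian_harm=0.01),
    Manoeuvre("brake in lane",       p_occupant_harm=0.02, p_pedestrian_harm=0.40),
]

for weight in (1.0, 10.0):  # two different moral stances
    best = min(options, key=lambda m: expected_cost(m, weight))
    print(f"occupant_weight={weight}: choose '{best.name}'")
```

The uncomfortable observation is that shipping the car at all means shipping some value of that weight, which is exactly the point Rodogno and Nørskov are making.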
Even more philosophically, up until now the human race has regarded itself as being the only species on the planet endowed with intelligence (the feats of chimpanzees and bottlenose dolphins notwithstanding). We are soon going to have a peer — or more accurately, many peers. Whilst it will be a very long time before AGI (see here for a working definition) arrives — if it ever does — AI will increasingly seem to mimic specific aspects of our intelligence. One by one, we will find ourselves with AI bedfellows in the distinct realms of human creativity, dialogue, navigation of the physical world, perhaps even reasoning and long-term planning. These narrowly focused bedfellows may very well in some instances materially surpass our own competencies. How will we react? How will we cope?
6. AI risks are real
The Partnership on AI recently launched an AI Incident Database comprising reports of a diverse range of instances in which AI systems have malfunctioned. The database allows those implementing AI to anticipate possible harms and design in mitigants. The incident list makes for a sobering read.
The crudeness of present-day AI systems is responsible for many of the reported incidents. Dependence on historical training data can lead to decision-making biases (e.g., racist chatbots). The over-simplicity of the objective a model is optimised against can lead to unintended consequences (e.g., a recommendation system for children’s content “churning out blood, suicide and cannibalism”). Sometimes the weak link is the human operator (see “Tesla driver killed while using autopilot was watching Harry Potter, witness says”). Sometimes the rules that the model objective implicitly encodes do not accord with acceptable social norms (e.g., prioritising Covid vaccinations for high-level doctors rather than patient-facing frontline medical staff). Sometimes the AI is simply not good enough (see “AI mistakes referee’s bald head for football”).
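The second failure mode above, an over-simple objective, is easy to show in miniature. In this hypothetical sketch (the catalogue, watch-time scores and suitability flags are all invented), the optimiser does its job flawlessly; it is the objective that is broken:

```python
# Toy illustration of objective over-simplicity: the optimiser is fine,
# but "maximise predicted watch time" ignores suitability, so unsuitable
# content wins. The catalogue and numbers below are invented.

catalogue = [
    # (title, predicted_watch_minutes, suitable_for_children)
    ("Gentle animal documentary", 4.1, True),
    ("Counting-song cartoon",     3.5, True),
    ("Shock-content knock-off",   9.8, False),  # lurid content holds attention
]

def naive_rank(items):
    """Optimises exactly what it was asked to: watch time, nothing else."""
    return sorted(items, key=lambda item: item[1], reverse=True)

def safer_rank(items):
    """Same optimiser; the objective now encodes the real constraint."""
    return sorted([i for i in items if i[2]], key=lambda item: item[1], reverse=True)

print(naive_rank(catalogue)[0][0])  # -> "Shock-content knock-off"
print(safer_rank(catalogue)[0][0])  # -> "Gentle animal documentary"
```

The fix is not a better optimiser but a better objective, which is precisely why these incidents keep recurring.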
Addressing the above requires exceptional care to be taken in model design, deployment and ongoing performance monitoring. “Societal” AI risks, on the other hand, will not be so easy to manage away.
Let’s take just one of these as an example. Should ever more discerning image recognition technology be pervasively rolled out without adequate checks and balances, our routes through public and private spaces, our activities, and even our emotions and the words we utter will be betrayed. AI algorithms already outperform humans in face-based judgements of sexual orientation, personality and even political persuasion (see ref. 6 — although note that the accuracy requirements for real deployments would be much higher, as the recent spate of wrongful arrests based on face recognition has demonstrated). It’s no surprise that there is now a corresponding flurry of regulatory interest. The issue is — as with so many AI risks — a thorny one. How do we both access the benefits of facial recognition tech (which already has many useful applications across industries) and preserve privacy?
This balancing act will play out over time in a wide variety of other AI application domains: in regulating online platform filtering algorithms (see “Belief-ghettos and Groupthink: Bursting the myth of the myth of filter bubbles”), deepfakes and digital identity rights (see “AI Synthetic Media: what to expect and what it means”), AI explainability, social scoring applications, autonomous weaponry and many, many others.
The challenge is that AI innovation is vastly outpacing regulatory innovation. Indeed, so fast is the pace of AI advancement that a new piece of legislation or industry regulation risks being outdated and ineffective by the time it comes into force. This constant sense of playing catch-up (which, if it isn’t already, will become a source of malaise amongst legislators and regulators) is a risk in itself — possibly provoking crude and far-reaching attempts to legislate away the problem.
AI regulatory arbitrage between nations — and its differential effects on innovation rates — may become a consideration. The winner-takes-most phenomenon described above may also play out on the global stage. Ian Hogarth wrote a widely respected perspective in 2018 on “AI Nationalism” in which he argued that a new kind of geopolitics will emerge as a result of the huge returns to AI investments, and that:
“AI policy will become the single most important area of government policy.”
— IAN HOGARTH
How nations manage the risks of AI without throwing out the proverbial baby with the bathwater is going to be one of the key get-rights over the coming years. Much as I’m generally disheartened by hyperbole, in this case it is entirely reasonable to suggest that the stakes are extremely high.
So, to recap, the six reasons why AI matters and why it’s going to matter more as time passes:
- AI will permeate and enhance more and more of our daily lives
- AI will reshape industries and the role of public vs. private
- AI will shape the future of employment
- AI will be a critical tool in the toolbox needed to address the mega-issues facing humanity
- AI will help us understand ourselves better and force us to think more clearly about who we are
- AI risks are real
Michael Faraday, when demonstrating electromagnetic induction — the discovery that led to the widespread production of that great omni-technology, electricity — was asked by a sceptical audience member, “What good is it?” He replied, “What good is a newborn baby?”
AI’s impacts will be so wide-ranging that it too has been described as an omni-technology, albeit one that is now entering a toddler phase of first applications and impacts. Somehow we need to navigate through all of the above and reap the benefits of AI whilst minimising its potential for adverse outcomes. AI is going to be a mega-trend in its own right — one whose early life trials and tribulations should stimulate and engage us all.
References
- David H. Autor. Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, Vol. 29, No. 3, Summer 2015, pp. 3–30
- Our World in Data. Sector by sector: where do global greenhouse gas emissions come from? September 2020. https://ourworldindata.org/ghg-emissions-by-sector
- D. Rolnick et al. Tackling climate change with machine learning. arXiv:1906.05433v2. 2019
- B.A. Richards et al. A deep learning framework for neuroscience. Nature Neuroscience, Vol. 22, pp. 1761–1770, 2019
- R. Rodogno and M. Nørskov. The automation of ethics: The case of self-driving cars. 2019
- M. Kosinski. Facial recognition technology can expose political orientation from naturalistic facial images. Scientific Reports, Vol. 11, Article 100, 2021
Sign up to the newsletter at https://www.theaigroupie.com to access regular big-picture views on AI.