What AI Really Means For The World Of Work
Can you clarify exactly what AI is?
Artificial Intelligence (AI) aims for machines to mimic human intelligence and decision-making capabilities. It is an umbrella concept that includes several other AI-related terms, such as machine learning, deep learning, natural language processing (NLP), neural networks, computer vision and cognitive computing. The new area getting a lot of attention and excitement right now is clearly Generative AI (such as ChatGPT), where algorithms learn patterns from existing information in order to generate new data in the form of text, images or other media. Large Language Models (LLMs) are a form of Generative AI.
And how new is it?
It isn't. The first chatbot, ELIZA, was built in the mid-1960s, but what has happened recently is that the capability and usability of AI have significantly expanded. Enabled by AI, computers can already hear (e.g. Alexa), talk (e.g. Google Translate), write (ChatGPT), decide (on a financial product or a job), see (facial recognition or radiography), create (images or songs), inform (Waze or digital assistants), and move (driverless cars). Many of these were areas of competitive advantage for humans, and as AI continues to expand, the trends propelling it forward are not stopping. You no longer need a computer science degree to work or interact with AI, and this increased usability is leading to a boom in creativity for and with AI.
What are the main challenges or limitations currently faced by AI?
There are still many challenges, but the pace of development and discussion to overcome limitations is also happening very rapidly. Although increased accessibility is good, most businesses run on legacy infrastructure – and while many want to integrate these new tools, this is likely to take time. There is also a rush taking place to get access to leading AI providers. Usually, it takes time for businesses to develop processes and people to capitalise on opportunities. In the case of AI, some may be fearful of implementing technology they do not trust or understand, let alone technology that could substitute their roles. There are still many concerns around using AI responsibly, including: the risk of unreliability, such as hallucinations, where AI models fabricate information – which may be helpful for creative processes but is highly risky for factual areas; questions around copyright; use of data; fairness and bias; suitability; security; concentration risks of providers; and transparency. Many companies will need appropriate guardrails around these issues, and these will take time.
Tell us more about the regulatory restrictions...
In addition to company guardrails for responsible AI, many industries are highly regulated, and AI tools will need to have adequate risk management, transparency or explainability and, in time, auditability. For example, while AI offers significant opportunity in medicine in theory, a prognosis needs to be explained and, often, handled with care.
One certainty is that more AI regulation is coming. As governments try to get up to speed with developments, they are likely to try to find the balance between pro-innovation growth policies and mitigating potential risks. The risks are high: we do not fully understand how all AI results are formed, AI is growing exponentially, it is increasingly entangled with many areas of our lives, and it is on a path to surpass human intelligence.
An open letter from the Centre for AI Safety in May 2023, signed by more than 350 executives, researchers and engineers working in AI warned that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” This complexity, uncertainty and evolving regulations could all slow down AI implementation.
Can you expand more on the advantage of AI – particularly in the workplace?
There are many advantages offered by AI, including: better handling of big data; efficient pattern recognition; faster decision making; the automation of repetitive tasks; reduction in human error; improvements in process efficiency; and 24/7 availability.
AI is a GPT, or General Purpose Technology, as the steam engine, electricity and the internet were before it. A GPT affects the entire economy, and AI will be used by most countries, sectors and firms over time. GPTs do not come along very often. The term 'AI' was first coined in 1956 and, after years of experiments, development and false starts, what is happening now is the commercialisation and industrialisation of AI across multiple economies.
Breaking this down, one of the biggest opportunities is improving customer experience and engagement via digital assistants (e.g. chatbots and voice assistants, 24/7/365), real-time language translation, improved customer analytics and, in time, hyper-personalisation delivering tailored solutions. Businesses are also attracted to new opportunities, from better targeting and lead detection to improved sales and marketing (such as personalised messages) and greater client segmentation, upselling or cross-selling. AI can also improve search, summarisation and the generation of insights for research, or identify trends in customer data.
A second large area of opportunity is greater operational efficiency. AI can help automate routine and mundane tasks and processes, including many back-office functions. A third large area of opportunity is in risk, including legal and compliance. AI is helping assess and detect risk, reducing errors, fraud and theft. While AI is being used by bad actors to increase cyber-crime, it is also used to detect and mitigate cyber-crime, often in real time. AI can also now help draft contracts, check content complies with communication standards or regulations, translate policies and help auditors source data.
Are the efficiency and productivity savings as significant as they're made out to be?
The potential is huge, but it will take time. The old adage that people often overestimate the potential of technology in the short term but underestimate it in the long term may also hold true for AI. A recent McKinsey publication suggests generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually to the global economy across 63 use cases they analysed, with 75% of this coming from customer operations, marketing and sales, software engineering and R&D.
In addition to productivity, one of the new hopes is that generative AI unleashes huge waves of creativity or intelligence. Some are even drawing parallels with Gutenberg’s printing press in 1440. Time will tell if AI does produce a needed boost to productivity, innovation and economic growth, but excitement at the prospects is already driving capital into the area. An investment wave in AI could accelerate the opportunities. And this could also influence politicians keen to see more economic growth after the GFC (Global Financial Crisis) and a post pandemic cost of living crisis.
ChatGPT has garnered a lot of attention in recent months – what’s your view on its potential?
ChatGPT is built on a Large Language Model that can produce content, answer queries and assist with tasks, with an easy-to-use interface that is accessible to all. ChatGPT burst onto the scene at the end of November 2022 to become the fastest-adopted product of all time, reaching 100m users in just two months and captivating many. This included Bill Gates, who said, "I knew I had just seen the most important advancement in technology since the graphical user interface."
ChatGPT's capability can be extraordinary, and while users can have human-like conversations with this chatbot, it has the power to do much more. ChatGPT was able to pass many high-level exams, such as a US Medical exam, a US Bar exam and an MBA exam. When GPT-4 was introduced just four months later, in March 2023, the improvement shown in passing multiple exams was significant and beyond the capability of most humans. The rate of progress in AI systems has been extraordinary.
However, Large Language Models do come with a health warning that their output is not always reliable, but there is a lot of work taking place to improve or build on the technology. For example, GPT-4 is helping power what are called 'domain models', which ingest and use a company's proprietary data to give more reliable answers. And while ChatGPT is well known, there are other LLMs with great capabilities, and more are being developed.
Should employees really be worried about being out of a job?
Techno-optimists believe AI augments and amplifies, rather than substitutes, human workers. Techno-pessimists believe the opposite, with AI building on years of automation and a new trend towards remote digital labour. A paper from Korinek and Juelfs (2022) highlights that since the Industrial Revolution, technological progress has on average benefitted everyone in the economy, but this does not mean this is the future path and there is already evidence that automation has led to declining real wages, particularly for less educated workers.
The new fear is that Generative AI will negatively affect more educated knowledge workers. A study by Eloundou et al. (2023) found that 80% of the US workforce could have at least 10% of their work tasks affected by LLMs, so the impact on jobs is likely to reach a vast array of professions, from lawyers to writers to coders. This includes creatives – an area that, a year ago, most thought would be out of reach for computers that excelled at routine tasks. This may be good news for companies, given wages make up the majority of costs, but less good news for workers. This issue requires solutions.
However, despite decades of fear over technological unemployment, it is worth reminding ourselves that US unemployment is at a 53-year low. New jobs keep popping up, with opportunities in the green, healthcare and technology sectors looking particularly promising ahead. While AI will substitute for some jobs, it is creating others, such as prompt engineers, AI ethicists and virtual world designers. Whether you sit in the techno-optimist or techno-pessimist camp, one certainty is that many knowledge worker jobs will be changed by AI. Companies have a big part to play in helping navigate this, as they are the ones hiring, investing in and implementing new technology such as AI. If we can see now that AI is at a tipping point, let's upskill as many people as possible to be augmented rather than replaced by AI.
What are some of the ethical considerations and potential risks associated with the widespread adoption of AI?
On one hand, AI can be a force for good, but the same technology in the wrong hands can be a force for ill. For example, AI can help create new medicines – or new bioweapons. It can help create useful new content – or misinformation factories of spurious information. The latter is of particular concern heading into elections.
Another area of concern is that AI is the first invention that is likely at some point to surpass us in terms of intelligence. We need to heed the warning of 350 executives, researchers and engineers working in AI that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” It is already a problem that many AI systems are black boxes and we do not know how they got to their answers, or produce surprising results (known as emergence) that we do not understand.
What are the prospects for achieving human-level intelligence in machines?
While it seems almost inevitable that machines will reach human-level intelligence, the truth is no-one knows when. The explosion of generative AI in the last year has accelerated timelines for AGI (Artificial General Intelligence) and AI Singularity (a new level of intelligence that surpasses humans), but we are likely talking decades rather than the end of this century. Some leading voices suggest 5-20 years is possible. However, given AI can already see, hear, speak and move, it's probably better to think about the incremental use cases and improvements AI is making rather than one explicit point of singularity. The implications of AGI need to be widely discussed while we can still shape the outcome.
How do you envision the role of AI in addressing global challenges such as climate change, resource scarcity, or public health crises?
It's clear AI can be used for good across multiple areas and most SDGs (Sustainable Development Goals). One could argue limited AI resources would be best pointed at the areas of maximum good for society, rather than just towards profit motives, but in practice both can co-exist.
Healthcare is one of, if not the, greatest areas of opportunity for AI. AI is already being used in a number of medical technology products and will be huge for biopharma, where costs could be reduced and timelines for new medicines accelerated. This was showcased in vaccines for Covid-19. AI will also move us towards a world of greater personalised medicine and more focus on prevention rather than just treating the sick. It is likely to increase access to health knowledge 24/7 and create more data-led insights. Consumer adoption of wearables is a simple example of this, but a much bigger opportunity is increasing health knowledge for millions of people who currently have poor access, in their own language and, if needed, with a voice interface.
There are multiple ways AI can help with climate change, from measurements and data-led insights, to better early warning systems, more efficient systems and the reduction of waste, or helping produce better materials such as batteries. And as noted, efforts towards Responsible AI can also learn much from some of the collective progress made towards climate change goals.
Education is also one of the greatest opportunities for AI. As a knowledge industry, AI can help provide greater access, at lower costs, with more personalisation, flexibility and measurement. This is important everywhere but particularly in emerging markets. Knowledge can be a great equaliser in society. AI could also reduce the admin burden on teachers and address what will be a growing need for lifelong learning delivery and finding new jobs. So, while there is a real concern over the implications of generative AI on coursework assignments, AI has so many positives for education.
What are some emerging technologies or research areas that will shape the future of AI?
There are so many AI innovations taking place it can be hard to keep up, but let me highlight a few. Generative AI is moving into music, video and game creation. Virtual assistants could become a huge growth category, too, helping with everything from simple tasks to acting as personal tutors or health coaches. A next stage for LLMs is agents such as Auto-GPT, which act on your instructions by interacting directly with tools such as APIs or robots to execute activities.
However, as exciting as Generative AI is, there are many other areas of AI innovation taking place, including Edge AI (given the proliferation of connected devices), Digital Twins, and Quantum AI. The developments taking place in Quantum Computing are very interesting and represent the next paradigm shift in computing that will accelerate machine learning even further. The pace of change is not slowing. While Quantum Computing is complicated, Low Code/No Code is making AI simpler, more accessible and offers significant time and cost savings.
Do you have any final words about how this might impact society?
AI can be a huge force for good while also bringing with it significant new risks. For example, there are questions over the distribution of the gains from AI, or indeed the distribution of jobs. One given that I am particularly interested in is that a lot of jobs will need to change, and how all stakeholders (i.e. employees, employers, governments, educators) help manage this process will be key. Another given is that the digitisation of our lives, which accelerated in the pandemic, will continue. The innovation and progress taking place in AI is phenomenal, and this is accelerating its adoption and the continued digitisation of our economies, companies, lives and societies. Given this impacts us all, we all have to shape AI's direction towards good, positive, human-centric outcomes, while putting in place the guardrails to mitigate the downsides. That balance requires a greater understanding of the issues and people voting for the future they want to be part of.
*DISCLAIMER: The opinions included in this article are exclusively the personal views of the author. They neither represent the opinion of SheerLuxe or that of Citibank.