AI revolution: what it means for productivity, investing and financial advice
In this Q&A, part of our AI revolution series, our Schroders experts and the Lead Responsible AI Ambassador at Microsoft UK discuss AI’s potential in a range of areas.
Why is everyone so excited about AI at the moment?
Charlotte Wood, Head of Innovation and Fintech Alliances, Schroders: “We’re excited about how this technology can give people access to information that already exists faster and easier, whether or not they would previously have had visibility of it. There's huge opportunity for this tech to enable people to do things that they wouldn't have been able to do before.”
Alex Tedder, Head of Global and Thematic Equities, Schroders: “Financial markets are particularly excited about the application of generative AI to businesses and the productivity gains that can be realised.”
Can you put some numbers to these potential productivity gains?
Alex Tedder: “There are around 1 billion knowledge workers – i.e. people who add value through their knowledge – globally. If we assume a knowledge worker earns, say, $15,000 a year (obviously it is more than this in the West, but much less in emerging markets), we end up with a global wage bill of $15 trillion a year.
“Now, let’s assume 15% of the work these knowledge workers do is displaced by AI. In theory, that is a saving of $2.25 trillion annually from applying AI to certain parts of the knowledge spectrum. For sure, not all of this will translate into revenues for the companies that supply generative AI models. But even on a conservative basis the annual addressable market could be around $450bn.
“These are big numbers, and that is without productivity gains. You can see how the same logic can be applied at the company level. The potential for cost savings and productivity gains is significant, and that’s why financial market participants are very excited.”
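The back-of-envelope arithmetic above can be sketched in a few lines. The inputs (1 billion knowledge workers, a $15,000 blended average wage, 15% displacement, and an implied ~20% of savings captured as vendor revenue) are the article's illustrative assumptions, not forecasts:

```python
# Illustrative sketch of the addressable-market arithmetic described above.
# All figures are the article's own assumptions, not estimates of ours.

knowledge_workers = 1_000_000_000        # ~1 billion globally
avg_wage = 15_000                        # USD per worker per year (blended)
global_wage_bill = knowledge_workers * avg_wage         # $15 trillion

displaced_share = 0.15                   # 15% of knowledge work displaced
annual_saving = global_wage_bill * displaced_share      # $2.25 trillion

capture_rate = 0.20                      # conservative share captured as revenue
addressable_market = annual_saving * capture_rate       # ~$450bn

print(f"Wage bill: ${global_wage_bill / 1e12:.2f}tn")   # $15.00tn
print(f"Saving:    ${annual_saving / 1e12:.2f}tn")      # $2.25tn
print(f"Market:    ${addressable_market / 1e9:.0f}bn")  # $450bn
```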
Won't these cost savings and productivity gains be offset by social misery and potentially unrest as people lose jobs?
Adelina Balasa, Lead Responsible AI Ambassador, Microsoft: “Research suggests 65% of knowledge workers actually prefer to delegate some of their work to AI to be more productive, and leaders are twice as likely to be concerned with productivity than with cutting jobs. Of course, businesses can automate as much as they like, but you can’t have a successful business that relies just on AI. You always need a human in the loop, especially in large language models, to make crucial decisions.”
Alex Tedder: “I think AI is actually going to raise the standard of living in certain parts of the world, developing countries especially. AI enables a level of knowledge sharing that simply hasn’t been possible before, which will be hugely beneficial for institutions and individuals.
“I also see AI as a positive thing in the developed world, where ageing populations are creating labour shortages, particularly in countries like Japan. AI can allow fewer people to become more productive and effectively offset the loss of workers that results from changing demographics.”
What could AI mean for the investment industry?
Charlotte Wood: “There is so much data that investment teams have to consume every day, and as humans we are limited in how much of this data we can usefully consume and factor into investment decisions. AI presents an opportunity to vastly improve data consumption and application, thereby improving investment decisions and client outcomes.”
Alex Tedder: “AI is going to play into how people allocate capital between different asset classes, different regions and different sectors. And that’s where it gets interesting: there’ll be winners and losers at the corporate level from this in terms of how they adopt AI and how they implement it – how successful they are in improving productivity and creativity.
“Financial markets have been very efficient in pricing the potential impact of AI, particularly in what it could mean for revenue growth in the software and semiconductor sectors. What the market hasn’t really done yet is taken a step back and thought about what it will mean in terms of adding value in other sectors or industries. And at the corporate level, this is where it gets very interesting.”
Looking at financial advisers specifically, how can AI be used in their day-to-day business?
Adelina Balasa: “AI’s value comes from its ability to help the financial adviser understand my situation faster, communicate with me quicker, and give me a more personalised experience.
“But I wouldn’t take financial advice from AI without a human in the loop. The type of AI we’re talking about shouldn’t be relied upon to generate numerical figures, only to extract, understand and process them. I would take generic advice like ‘diversification is a good thing’, but not personal advice that hasn’t been vetted by a human.
“Large language models can handle numbers, but that’s because they learn about them from existing data and content, not because they truly understand them or can apply that understanding to new situations.
“In fact, some AI models have content safety systems built in that can detect if you are asking ‘What should I do in this specific situation?’. The AI model will give you a general answer, but the content safety system, in addition to detecting and blocking inappropriate language, will also add in at the end that you should ask this advice of a professional.”
Charlotte Wood: “The really amazing thing about this technology is that you don’t have to be a data scientist to use it in your everyday life or in your job. For financial advisers, AI can be used to augment client interaction – for example, it can check that the adviser asked all the right questions of their client.
“Because of this technology’s accessibility, it’s also more readily available to smaller companies that, for example, may not have been able to afford to hire teams of data scientists to deploy machine learning in the past.
“It can also make it easier to personalise information, to help clients interpret data and to disseminate it – areas where AI can have a massive impact.”
What are some of the limitations of, and risks associated with, generative AI?
Adelina Balasa: “Most large language models have been built using open source data from the internet, which doesn’t help you with data lineage – i.e. you can’t necessarily verify where the information is coming from and whether that’s a trusted source. This is why responsible AI best practice is to combine generative AI models with other data solutions such as search engines, which can give you the data traceability and the transparency you need to trust the answer.
“Even with other data solutions attached, generative AI can sometimes ‘hallucinate’ – i.e. it can make up things that aren’t true, rendering the generated content nondeterministic – and this is where you need to adopt specific responsible AI frameworks and prompt engineering techniques to mitigate hallucinations.
“In generative AI models, you can implement a threshold where you can dictate how creative the AI should or shouldn’t be. If you’re writing a poem, then you can push the threshold to its maximum creativity, but if you want to extract numbers, then you can tell it to be exact and non-creative.”
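The “creativity threshold” described above usually corresponds to the sampling temperature: the model's raw scores (logits) for candidate tokens are rescaled before being turned into probabilities, so a low temperature makes the top choice near-certain while a high one flattens the distribution. A minimal sketch, with made-up logits for three hypothetical tokens:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores (logits) into sampling probabilities.
    Lower temperature sharpens the distribution (more deterministic,
    'exact' output); higher temperature flattens it (more creative)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                     # hypothetical token scores

low = softmax_with_temperature(logits, 0.1)  # top token is near-certain
high = softmax_with_temperature(logits, 10)  # probabilities near-uniform

print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

In practice this is exposed as a single parameter (often named `temperature`) on the model API, set low for extraction tasks and higher for creative ones.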
Charlotte Wood: “Once you’ve put data into something like ChatGPT, you no longer have control of what’s being done with that data. OpenAI has the right to store it and use it in future.
“I would definitely encourage people to be cautious about using a public version of generative AI. And certainly don’t put your clients’ details into it because it’s effectively like posting them on the internet directly.”
Who should be held accountable to make sure AI is used responsibly?
Adelina Balasa: “It’s a shared responsibility between organisations that provide technology, organisations that use the technology, and governments. There are actually quite a few guidelines already in place in Europe – for example the EU AI Act – and in the UK we have an AI White Paper and transparency best practices. But I do think that governments need to make actual legislation that everyone needs to follow so AI technology is trustworthy.”
How is Schroders using generative AI?
Charlotte Wood: “We want to get this type of technology into people’s hands in a safe way and enable them to use it on Schroders-specific data because we think that’s where the real value lies. So we built an internal version of ChatGPT, which has been rolled out across the company, but with some extra functionality, like being able to drop documents in and ask questions against that specific document. Importantly though, the data from this technology is not being fed into the internet, so it’s not public information. It’s being stored entirely by Schroders.”
Please note: Any reference to sectors/countries/stocks/securities are for illustrative purposes only and not a recommendation to buy or sell any financial instrument/securities or adopt any investment strategy.
Issued in the Channel Islands by Cazenove Capital which is part of the Schroders Group and is a trading name of Schroders (C.I.) Limited, licensed and regulated by the Guernsey Financial Services Commission for banking and investment business; and regulated by the Jersey Financial Services Commission. Nothing in this document should be deemed to constitute the provision of financial, investment or other professional advice in any way. Past performance is not a guide to future performance. The value of an investment and the income from it may go down as well as up and investors may not get back the amount originally invested. This document may include forward-looking statements that are based upon our current opinions, expectations and projections. We undertake no obligation to update or revise any forward-looking statements. Actual results could differ materially from those anticipated in the forward-looking statements. All data contained within this document is sourced from Cazenove Capital unless otherwise stated.