AI revolution: what’s the impact on connectivity, education and health?
Generative AI has enormous potential but it risks reinforcing existing inequalities, leading to lost opportunity and productivity. We look at some of the human rights issues for investors to consider.
The release of ChatGPT earlier this year has prompted the widespread take-up of generative AI capabilities. The potential applications of AI are varied and are still being discovered. But there are human rights implications too.
In particular, the use of generative AI may exacerbate existing digital divides, both within and between countries. There are also implications for other important areas of human rights such as education and healthcare. Below, we summarise the key issues for investors to consider in these three areas.
What does AI mean for digital inclusion?
Digital access and the growth of communication technologies have made significant contributions to economic development. This has also raised the stakes for an individual or community's ability to participate in and contribute to the economy. As technology evolves, the digital divide prevents equal participation for low-income households, people with disabilities, rural communities, and older adults.
The costs of unequal access are reflected in untapped job growth and economic output. Digital access connects people to education and employment opportunities, while also enabling businesses to reach new markets and customers. Connectivity is also needed for communities to take advantage of work-from-home employment models, which could help boost contributions to local economies (new jobs, tax revenues).
There are clear ethical concerns in deploying advanced technology such as generative AI while many people still lack basic internet or broadband access. And to reap the benefits of generative AI, it is not only access but high-speed connectivity that is needed.
The problem is one that has been recognised by governments and businesses. In the US, for example, the Infrastructure Investment and Jobs Act (2021) includes $1.5 billion of funding for digital literacy programmes. From the corporate side, Cisco is innovating to help simplify networks and reduce both operational and upfront capital costs.
Government policy, private capital, and cross-border cooperation all play a vital role in the placement of digital infrastructure and the spread of connectivity. As investors, there are some important questions to ask of companies operating in this field.
- Is the company exposed to regions where demand is underserved or growing?
- How is the company responding to changing customer needs resulting from demographic and structural trends in the population (e.g. an ageing population)?
- How reliable is the company’s network?
- How reliant is the company on public subsidies and lobbying, and how experienced is it in navigating them?
- Could regulatory focus affect the company's long-term profit margins?
What does AI mean for the future of education?
Education is both a basic human right and a forward-looking indicator of economic growth. AI has significant potential to relieve teachers of some administrative tasks, allowing them to focus instead on effective instruction.
Generative AI can also help provide personalised, instant feedback to students, which can be too time-consuming for a teacher with a typical class size. Take, for example, AI-enabled technology developed for maths tutoring. An AI model observes the student as they work on mathematical problems on a computer. This allows for immediate feedback if and when a student missteps in the process. The AI model can get them back on track while highlighting the specific steps that may have been missed or miscalculated.
However, to maximise the benefits, both students and teachers will need to have a solid foundation in digital literacy. The danger is that adoption of AI could isolate students in communities where digital literacy requires improvement.
Deploying AI to enhance learning is one focus when it comes to education; another is providing curricula to develop AI-related skills. Around half of US states require that high schools offer a computer science course. Meanwhile a number of countries (including China, Portugal and Qatar) mandate this across primary and middle schools, as well as high schools.
Education advancements are largely driven by government efforts. But investors need to be aware that companies require a pipeline of talent if they are going to capitalise on technology innovations like AI. Community engagement, particularly with local and regional universities, remains a tool to ensure that there is support for future workforce development.
How is AI being used in healthcare?
AI has the scope to improve efficiency and accuracy for a number of clinical and administrative applications within the healthcare sector.
The healthcare sector is rich in data, which makes the deployment of generative AI both exciting and worrisome. AI technology in public health requires large datasets. The collection, storage, and sharing of medical data raises ethical questions related to safety, governance, and privacy.
AI technologies based on high-quality data can improve the speed and accuracy of diagnosis, improve the quality of care and reduce subjective decision-making. They can also help reduce diagnostic errors and biases by clinicians. However, while an exciting development for potential early diagnosis, treatment, and overall wellbeing, society has signalled that it is not yet fully ready to trust AI in medicine.
Biases can occur in AI algorithms. For example, if an algorithm is trained on a dataset in which male patients are overrepresented, symptoms of illnesses and diseases that are more frequently experienced by men can lead to the misdiagnosis of women. It will be essential to build more awareness of race and gender differences for improved risk prediction and more capable diagnostics.
Nonetheless, the question is not if but when AI tools will become part of routine clinical practice. Questions for investors in the area of healthcare AI to consider include:
- Does the company have a clear policy on transparency of trials?
- Does the company conduct bias training at all levels?
- Does the company have sufficient protocols to protect patient data?
- How much is the company investing in products/services that cater to growing demand for better health management, prediction and prevention?
- Is the company ahead of AI-related regulation?
This article is issued by Cazenove Capital which is part of the Schroders Group and a trading name of Schroder & Co. Limited, 1 London Wall Place, London EC2Y 5AU. Authorised by the Prudential Regulation Authority and regulated by the Financial Conduct Authority and the Prudential Regulation Authority.
Nothing in this document should be deemed to constitute the provision of financial, investment or other professional advice in any way. Past performance is not a guide to future performance. The value of an investment and the income from it may go down as well as up and investors may not get back the amount originally invested.
This document may include forward-looking statements that are based upon our current opinions, expectations and projections. We undertake no obligation to update or revise any forward-looking statements. Actual results could differ materially from those anticipated in the forward-looking statements.
All data contained within this document is sourced from Cazenove Capital unless otherwise stated.