
Artificial Intelligence (AI) is not a new development; however, the introduction of ChatGPT has brought forward a plethora of opportunities and concerns in fields ranging from politics and conflict to social life. This article will analyse the impact of AI with a special focus on ChatGPT, considering not only the immediate impact but also the ripple effects of such a technology.

ChatGPT, an AI chatbot created by OpenAI, is the latest technological advancement to capture the world's attention, offering quick and accurate answers to users' queries in a conversational format that is easy to understand. The broad applicability of such AI tools has attracted significant interest from a global audience.

These new technologies are known as Large Language Models (LLMs). They are AI tools that can read, summarise and translate text into different languages or simpler wording, and predict the next words in a sentence, allowing them to imitate the way humans write and speak. Their key limitation is that they do not understand the words themselves or the meaning behind them.

This new technology offers incredible opportunities across a range of sectors such as marketing, education and health, but with these opportunities come additional risks and potential consequences for broader society.

The key benefit that chat-based AI presents is an increase in productivity and efficiency, both in the workplace and in broader social settings. A simple prompt can elicit an accurate and in-depth summary, saving hours of research needed to produce a concise overview. From writing articles to responding to customer inquiries in a conversational manner, the applications of ChatGPT are seemingly limitless.

The increase in productivity and efficiency translates into greater cost-effectiveness within companies. Employees can use their time more effectively on high-effort or time-consuming tasks and speed up both internal and external processes.

A prime example lies in the medical sector, where various applications have been theorised though not yet put into practice. Because AI tools can process large amounts of data, they can be used to support physicians' decision-making, including spotting abnormalities in CT scans that the human eye can miss, and provide invaluable support to an industry already stretched thin.

Despite the benefits of adopting AI in various sectors, the process of developing the tool has already raised ethical questions for stakeholders. While OpenAI prides itself on ChatGPT's safeguards, building them has taken an emotional and physical toll on the workers who train the tool to protect its users from harmful or discriminatory information. Kenyan workers reported reviewing material including racism, depictions of sexual violence and exploitative content involving children, all for US$2 an hour. These working conditions have been widely labelled exploitative, and the safeguarding process psychologically harmful to the workers.

Beyond the harmful social and psychological consequences already reported, another area of concern is the environmental impact of developing and running these tools. The training of GPT-3, the precursor to ChatGPT, emitted 550 tonnes of carbon dioxide, equivalent to taking 550 round trips between New York and San Francisco. While the AI industry is not the largest contributor to global emissions, the data centres behind AI-enabled search engines already account for one per cent of the world's greenhouse gas emissions, a significant share. Emissions from data centres are only expected to rise with the increasing popularity of AI technology and cloud computing.

Another OpenAI product to gain worldwide attention is DALL-E, a text-to-image generator able to produce photorealistic images from a range of prompts, including CCTV footage of Jesus Christ stealing a bike. DALL-E offers opportunities and risks similar to ChatGPT. Google and OpenAI acknowledge that DALL-E poses risks of encouraging bullying and harassment, generating images that reproduce racist or gender stereotypes, and spreading misinformation or eroding public trust with seemingly real images. For instance, DALL-E mini presents a range of white men in suits when asked to produce an image of a CEO, and images of only white women when asked to produce an image of a woman.

The primary criticism of LLMs is their high propensity to replicate the existing biases they ‘learn’ from content across the web. The lack of regulation around what information is used and how LLMs are developed poses grave consequences for society, which will see these biases perpetuated if LLMs remain unregulated.

All of these benefits and limitations have implications for the peacefulness of society; however, these implications will only become clearer as LLMs are broadly adopted. While the increase in business productivity has the potential to raise countries' economic output, the risks to social and environmental wellbeing cannot be ignored. This is a time of great change for almost all industries. To ensure that this change is sustainable and peaceful, the pillars of Positive Peace must be used as a guiding principle for this emerging field.

AUTHOR


Sanskriti Baxi

Communication Associate

Vision of Humanity

Vision of Humanity is brought to you by the Institute for Economics and Peace (IEP), by staff in our global offices in Sydney, New York, The Hague, Harare and Mexico. Alongside maps and global indices, we present fresh perspectives on current affairs reflecting our editorial philosophy.