The name of this blog, Rainbow Juice, is intentional.
The rainbow signifies unity from diversity. It is holistic. The arch suggests the idea of looking at over-arching concepts: the big picture. To create a rainbow requires air, fire (the sun), and water (raindrops), and us to see it from the earth.
Juice suggests an extract; hence rainbow juice is extracting the elements from the rainbow, translating them and making them accessible to us. Juice also refreshes us and here it symbolises our nutritional quest for understanding, compassion and enlightenment.

Tuesday, 11 June 2024

Large Language Models: Destroyers of Communication

This blog is on a topic I know little about. However, I do know three things. First, I know how to listen to my inner-tutor (my intuition) and second, I know how to listen to experts in the field. Third, and most importantly, I know how important trust is to the building and maintenance of community and global well-being.

Large Language Models (LLMs) are computational models that generate and process language. Probably the best-known application of LLMs is ChatGPT.

When I listen to my intuition and to the thoughts of experts, I grow increasingly wary and sceptical of LLMs, and of Artificial Intelligence (AI) in general.

AI poses many threats and risks to humanity and the rest of the world. To mention just a few here:

1. The energy used by AI is doubling every 100 days.
2. Studies now show that LLMs are learning to lie and deceive.
3. Universities and other institutions are being challenged by issues of plagiarism and originality of thought because of LLMs.
4. Inbuilt bias (gender, race, class, sexuality, etc.) occurs in the ‘harvesting of words’ [1] that LLMs undertake.
5. AI undermines democracy and privacy.

I want to focus here on another issue, that of trust.

First though, we must discuss what communication is and is not. Communication is not simply the passing on of information. As with many words in the English language, communication comes to us via Latin, in this case, communicare. In Latin this word can be translated as: to share, divide out, join, unite, participate in, impart, inform.

Communicare itself derives from communis, meaning in common. Com = with, together, and unis = oneness, union.

When we understand this, we realise that communication is much, much more than simply passing on information.

Communication is a means to commune, a way of building and maintaining relationships. Wholesome communication is a cornerstone of healthy communities.

This is what AI destroys.

Relationships are built on trust, and that is what LLMs undermine.

A 2023 study undertaken by the University of Queensland, of over 17,000 people across 17 countries, showed that three out of five people were wary of trusting AI systems. [2]

Not trusting AI systems is one thing. Not trusting each other is another. AI does nothing to mitigate the already high levels of mistrust and polarisation in the world. It may indeed exacerbate them.

The reason for this is that LLMs are not a communication tool. They are simply an information tool. They pass on information without regard to the veracity of what they glean and generate.

Let me pose a scenario, one likely to become more prevalent in the future. Suppose I am in communication with someone and have built a relationship of trust with that person. When I read something from them, I do not question whether what I read is that person’s own ideas and thoughts.

But then, what happens if I discover that that person has begun to use ChatGPT (or other AI) to generate what they write? Will I accept that the words are indeed those of the person I am in communication with?

If all I am interested in is the information in what I read, then I will possibly be accepting of the material.

However, if my concern is more about maintaining a relationship with that person (including the exchange of information), then knowing that the material has been generated by AI, and not by the person themselves, is likely to diminish my trust in future interactions with them. In other words, the communication between us is seriously undermined.

LLMs, and AI generally, are poised to damage levels of trust between people. When trust is destabilised, relationships founder and polarisation follows.

Already, the levels of inter-personal (and inter-national) trust are decreasing, and polarisation is increasing. [3] AI exacerbates this trend.

AI is destroying true communication.

Notes:

1. Harvesting of words: a term used by Tracey Spicer in a public presentation on Artificial Intelligence, 8 June 2024. Spicer is the author of Man-Made: How the bias of the past is being built into the future, Simon & Schuster, Australia, 2023.

2. Gillespie, N., Lockey, S., Curtis, C., Pool, J., & Akbari, A., Trust in Artificial Intelligence: A Global Study, The University of Queensland and KPMG Australia, 2023. doi:10.14264/00d3c94

3. For example, in the US less than 40% of people felt that ‘most people can be trusted.’ Dan Vallone et al., Two Stories of Distrust in America, More in Common, New York, 2021.

