Globe, the Geneva Graduate Institute Review
28 March 2023

ChatGPT: Large models, expert skills and academia

Oana Ichim, Senior Post-doctoral Researcher at the TechHub, looks at how AI models like ChatGPT are changing the digital landscape and how humans can adapt.

ChatGPT is not a sudden creation enabled by advancements in artificial intelligence. It has a genesis, and it rests on an impressive number of precursors and contingencies, all of which allow a clearer understanding of how it works and a better assessment of its added value.

As the name suggests, generative AI produces, or generates, text, images, music, speech, code or video. Behind this concept lie machine-learning techniques that have evolved over the past decade, allowing systems to learn from a large corpus of data and produce specific outputs from it. ChatGPT is a type of large language model (LLM) that uses deep learning to generate human-like text. GPT actually stands for “generative pretrained transformer”: “generative” because it can generate new text based on the input received, “pretrained” because it is trained on a large corpus of text data before being fine-tuned for specific tasks, and “transformer” because it uses a transformer-based neural network architecture to process input text and generate output text. Large models are trained on massive datasets of books, articles and websites; ChatGPT is one such LLM, trained with a technique that is innovative in state-of-the-art LLMs: reinforcement learning from human feedback (RLHF).
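
To make the mechanics concrete, here is a minimal sketch in Python of how a generative pretrained transformer turns learned probabilities into text. It assumes the open-source Hugging Face transformers library and the publicly released GPT-2 checkpoint, an early and much smaller relative of the model behind ChatGPT, not ChatGPT itself. At every step, the model assigns a probability to each token in its vocabulary and “writes” by sampling from that distribution, one token at a time.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# GPT-2 stands in here for illustration; ChatGPT's own weights are not public.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Generative AI produces"
inputs = tokenizer(prompt, return_tensors="pt")

# The model outputs one score (logit) per vocabulary token for the next
# position; softmax turns those scores into next-token probabilities.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
probs = torch.softmax(logits, dim=-1)

# Show the five most likely continuations of the prompt.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p:.3f}")

# Generating text is nothing more than sampling from this distribution
# token by token and feeding the result back in as the new input.
output = model.generate(**inputs, max_new_tokens=20, do_sample=True,
                        top_k=50, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))

Nothing in this loop involves meaning: the fluency of the output emerges entirely from statistical regularities absorbed during pretraining, a point worth keeping in mind when assessing what such models can and cannot do.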

OpenAI has not yet disclosed the full details of ChatGPT’s creation, but it acknowledges that its models can still generate toxic and biased outputs. All of the above sketches the background against which both realistic and futile expectations regarding ChatGPT may arise.

ChatGPT is indeed one of the largest LLMs, trained with innovative techniques. However, LLMs assemble text from probabilities learned during training and are incapable of (re)producing innovative text or knowledge. In short, this means that ChatGPT does not understand its own responses; it can be tricked into answering a question it does not actually understand because it is built from a lot of data about which it knows nothing. And as ChatGPT is a “product” of OpenAI, it will soon have a competitor. Competition in this field is all about talent and compute power: who has the largest cloud supercomputing technology, the “coolest geek” and the richest investor.

How can academia cope with this new technology?
In light of the above, ChatGPT is neither a far-reaching opportunity nor a clear threat, but a mix of both.

Rethink exams (essays and take-homes)
While ChatGPT can create student essays, it cannot, as has been shown, understand them or draft coherent plans. Exams will have to be redesigned away from summarisation and towards different goals, such as ordering information overload, extracting overall themes, discovering new perspectives, identifying values and symbols, and creating analogies. Students’ skills will have to be reoriented: if ChatGPT can provide the “bullet points”, students will have to provide the criteria or the yardstick that “hold the bullet points together”. While ChatGPT may state facts, students should locate values, including across disciplines. It is up to students to “leverage” the volume of information against the various human insights that help them “play” with that information.

Create dedicated infrastructure for managing disruptive technologies
Academic institutions have to develop structures for managing disruptive technologies, and especially for exploring ways to test and use them. Universities “cradle” the talent that AI giants and start-ups strive for. It is thus crucial that specialised internal structures keep universities updated on the latest developments in AI research and innovation while, at the same time, lobbying for partnerships with various stakeholders in the AI environment. A strategic partnership between universities and key investors may “shake” the AI economic landscape, orienting innovation away from commercial “predators” and towards more responsible stakeholders.

Develop curricula on the epistemological implications of technologies
There is no better place than academia to start voicing concerns and raising awareness of AI’s contingencies. It is wrong to argue that professors are on their way to disappearing, or that ChatGPT threatens academia. Professors are not unknowledgeable; they are merely unprepared for what is called the “data deluge”, and it is up to them to start ordering knowledge and fitting together the pieces of this disjointed monster of information. Academia needs to develop courses that harness the very attractiveness of these technologies.

Reconsider the rules of authorship and co-authorship
If articles or essays are built from sources available on GitHub, or if students participate in conferences as the “experts” in a field while the computing is done by their collaborators, clear rules should delimit the extent of each contribution; even clearer rules should state the role of those who gather and curate the data as opposed to those who use and transform it.
 
Consolidate research centres and invest in an academic brand 
Research centres have to consolidate academia’s distinctive brand in the face of new digital technologies and adapt it, not transform it. It is crucial to remember that the humanities have not fundamentally changed their approach in decades, even as technology has altered the entire world around them. It is imperative to abandon our resistance to AI and engage with the topic so as to be capable of producing relevant publications.

Learn more about the Institute's TechHub.

This article was published in Globe #31, the Institute Review.