How can we optimise the use of large language models to create a smarter and more inclusive society?
Under the leadership of Copenhagen Business School and the Max Planck Institute for Human Development (MPIB) in Berlin, researchers are investigating large language models, known as LLMs, and their role in the online information environment. They examine how these models interact with collective intelligence, highlighting the opportunities and risks they pose for human abilities such as collective deliberation, decision-making and problem-solving. An article on this has been published in the journal Nature Human Behaviour.

Berlin, 20 September 2024. The right topic in the right place at the right time has always exerted a pull. Whether people, objects or nature, outstanding features and unique qualities attract attention, and in today's networked world that attraction can give rise to collective intelligence. Even once people are physically fully grown and have reached a basic physical and mental maturity, development is not over; growth simply continues elsewhere.


This individual development is complemented by the power of the collective, which likewise requires a certain maturity. Some people find their footing earlier, others later. What they all have in common is that knowledge, drawn from the right combination of information at the right time and in the right place, shapes how the individual and the collective can and will develop.

This collective intelligence is the driving force behind all kinds of groups, teams and communities, including those made possible by the Internet, the authors explain. An article on the use of large language models has now been published in the scientific journal Nature Human Behaviour. It describes how LLMs can improve collective intelligence and discusses their potential impact on teams and society.

Large language models (LLMs) are artificial intelligence systems that analyse and generate text using large data sets and deep learning techniques. Artificially generated texts can therefore have a considerable influence on how societies develop. According to researchers at Copenhagen Business School and the Max Planck Institute for Human Development (MPIB) in Berlin, the question is how LLMs can be used to enhance this collective intelligence.


"As large language models increasingly shape the information and decision-making landscape, it is important to find a balance between utilising their potential and safeguarding against risks. Our article shows how human collective intelligence can be enriched by LLMs, but also the potential negative consequences," says Ralph Hertwig, co-author of the article.

There is potential, for example, to break down barriers by offering translation services or writing aids, allowing people from different backgrounds to take part in discussions on an equal footing. But the authors also point to disadvantages. "As LLMs learn from information available online, there is a risk that the views of minorities are not represented in the responses generated by LLMs. This can create a false sense of consensus and exclude some perspectives," emphasises Jason Burton, lead author of the study.
"The value of this article is that it demonstrates why we need to think proactively about how LLMs are changing the online information environment and, by extension, our collective intelligence, for better or worse," summarises co-author Joshua Becker.

The authors call for more transparency in the creation of LLMs, including disclosure of the sources of training data, and suggest that LLM developers should be subject to external scrutiny and monitoring. This would allow a better understanding of how LLMs are actually developed and help curb harmful developments.

Original publication:

Burton, J. W., Lopez-Lopez, E., Hechtlinger, S., Rahwan, Z., Aeschbach, S., Bakker, M. A., Becker, J. A., Berditchevskaia, A., Berger, J., Brinkmann, L., Flek, L., Herzog, S. M., Huang, S. S., Kapoor, S., Narayanan, A., Nussberger, A.-M., Yasseri, T., Nickl, P., Almaatouq, A., Hahn, U., Kurvers, R. H., Leavy, S., Rahwan, I., Siddarth, D., Siu, A., Woolley, A. W., Wulff, D. U., & Hertwig, R. (2024). How large language models can reshape collective intelligence. Nature Human Behaviour. Advance online publication. https://www.nature.com/articles/s41562-024-01959-9

Further information:

https://www.mpib-berlin.mpg.de/pressemeldungen/llm-und-kollektive-intelligenz

Image source: kangso-eun / Pixabay

