adesso BLOG
20.09.2024 By Sascha Windisch and Immo Weber
GraphRAG: Utilising complex data relationships for more efficient LLM queries
Companies and public authorities often face the challenge of finding relevant information in huge amounts of data. Retrieval Augmented Generation (RAG) is a relatively new technique for the targeted retrieval of local domain knowledge, but it often fails to aggregate complex information that is distributed across many sources. This is where GraphRAG comes into play. We present it in detail in this blog post.
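To make the graph idea a little more concrete, here is a minimal, hypothetical sketch rather than the approach from the post itself: instead of fetching isolated text chunks, the retriever walks the neighbourhood of an entity in a knowledge graph, so that facts spread across several sources are aggregated into one context block before the LLM is prompted. The graph content, the `graph_retrieve` helper and the prompt are invented for illustration, and the sketch assumes the `networkx` library is available.

```python
# Minimal sketch of graph-based retrieval (hypothetical data, not the article's code).
import networkx as nx

graph = nx.Graph()
graph.add_edge("adesso", "GraphRAG", fact="adesso published a blog post on GraphRAG.")
graph.add_edge("GraphRAG", "knowledge graph", fact="GraphRAG retrieves context from a knowledge graph.")
graph.add_edge("knowledge graph", "LLM", fact="The aggregated subgraph is passed to the LLM as context.")

def graph_retrieve(start: str, hops: int = 2) -> list[str]:
    """Collect the 'fact' attribute of every edge whose endpoints lie within a few hops of the start entity."""
    reachable = nx.single_source_shortest_path_length(graph, start, cutoff=hops)
    return [data["fact"] for u, v, data in graph.edges(data=True) if u in reachable and v in reachable]

context = "\n".join(graph_retrieve("adesso"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: What is GraphRAG?"
print(prompt)  # A real pipeline would now send this prompt to the LLM.
```

In a real system, the graph itself would typically be built automatically, for example by letting an LLM extract entities and relations from the source documents, so that a query can aggregate facts that no single text chunk contains.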
Read more
19.09.2024 By Ellen Tötsch
Down the Rabbit Hole: LLMs and the search for the perfect answer
A lot has happened since ChatGPT brought about the breakthrough for Large Language Models (LLMs). What has remained is our desire to supplement these language models with further knowledge. There is no longer a one-size-fits-all solution, but there are numerous possibilities. This blog post provides an overview of the various options for optimising LLMs.
Read more
02.09.2024 By Siver Rajab
Entity Linking: How Large Language Models are Revolutionising Data Processing
In the world of data processing, there are various approaches to improving efficiency and accuracy. One particularly promising approach is the use of Large Language Models (LLMs) to improve entity linking, that is, connecting mentions in text to the entities they refer to. In this blog post, I will highlight the new possibilities and advantages of this technology.
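As a rough illustration of the kind of task involved, and explicitly not the approach described in the post, an LLM can be prompted to disambiguate a mention by choosing one entry from a small candidate list drawn from a knowledge base. The sentence, the candidate IDs and the prompt below are invented.

```python
# Hedged sketch of prompting an LLM for entity linking (illustrative, invented data).
mention = "Paris announced new climate targets."
candidates = {
    "city-paris-fr": "Paris, capital city of France",
    "city-paris-tx": "Paris, city in Texas, United States",
    "myth-paris": "Paris, figure in Greek mythology",
}

options = "\n".join(f"- {cid}: {desc}" for cid, desc in candidates.items())
prompt = (
    "Link the mention in the sentence to exactly one candidate entity.\n"
    f"Sentence: {mention}\n"
    f"Candidates:\n{options}\n"
    "Answer with the candidate ID only."
)
print(prompt)  # The model's reply would then be validated against the candidate list.
```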
Read more
27.08.2024 By Sascha Windisch and Immo Weber
"From RAGs to Riches": The path from simple to advanced Retrieval Augmented Generation
Artificial intelligence is developing rapidly and Retrieval Augmented Generation (RAG) in particular has attracted a lot of attention recently. Large language models such as ChatGPT show their full potential when they are enriched with domain-specific knowledge through RAG. Despite this potential, users often face challenges. In this blog post, we look at the transition from basic to advanced RAG approaches and show how typical problems can be overcome.
Read more
29.02.2024 By Sascha Windisch and Immo Weber
Retrieval Augmented Generation: LLM on steroids
Large Language Models (LLMs), above all ChatGPT, have taken all areas of computer science by storm over the past year. Because they are trained on a broad range of data, LLMs are fundamentally application-agnostic. Despite their extensive knowledge, however, they have gaps, particularly in highly specialised applications, and in the worst case they only appear to close these gaps by hallucinating. To reduce this risk, "Retrieval Augmented Generation" (RAG) has become established.
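For readers new to the term, here is a minimal, hedged sketch of the basic RAG pattern, illustrative only and not the implementation discussed in the post: domain documents are indexed, the passages most similar to the question are retrieved, and only that retrieved text is handed to the LLM as grounding context. TF-IDF stands in for a real embedding model, the documents and query are invented, and the sketch assumes scikit-learn is installed.

```python
# Hedged sketch of the retrieval step in a minimal RAG pipeline (invented data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The internal travel policy allows business-class flights over six hours.",
    "Expense reports must be submitted within 30 days of the trip.",
    "The cafeteria is open from 8 a.m. to 3 p.m. on weekdays.",
]
query = "When do I have to hand in my expense report?"

# TF-IDF takes the place of a proper embedding model in this toy example.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Retrieve the closest document and use it as grounding context for the LLM prompt.
scores = cosine_similarity(query_vector, doc_vectors)[0]
best_doc = documents[scores.argmax()]

prompt = (
    "Answer the question using only the context below.\n"
    f"Context: {best_doc}\n"
    f"Question: {query}"
)
print(prompt)  # A real pipeline would now send this prompt to the LLM.
```

Because the model only sees passages retrieved from the local knowledge base, its answer can be grounded in those sources instead of relying solely on whatever it may or may not remember from training.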
Read more
16.01.2024 By Azza Baatout and Marc Mezger
LLM operationalisation: a strategic approach for companies
The world of artificial intelligence is developing at a breathtaking speed, and large language models (LLMs) are at the forefront of this revolution. LLM operationalisation is an essential part of this development and offers companies the opportunity not only to push the boundaries of technology, but also to set new standards for human–machine interaction. We explain why this is the case in our blog post.
Read more
15.12.2023 By Marc Mezger
Mistral and Phi – a revolution based on small (fine-tuned) language models?
In the world of artificial intelligence (AI), it has often been assumed that larger models are better. However, recent research shows that small language models (SLMs), which were previously considered to be merely an intermediate step on the path towards larger models, outperform or at least match the performance of large language models (LLMs) in various applications. In my blog post, I explore this point and present a variety of small language models. I will also take a look at the pros and cons of SLMs in a direct comparison with LLMs.
Read more
23.10.2023 By Lilian Do Khac
Machine-generated text summarisation in Aleph Alpha Luminous using R: part 1
Running large language models via a fancy, ready-to-use interface increases accessibility for many people, even those who do not know any programming languages. Part one of my blog series focuses on technical aspects such as specifying the scope of the summary, prompt engineering and quality factors.
Read more
10.07.2023 By Marc Mezger
Quickstart with a Europe-based large language model: Aleph Alpha’s ‘Luminous’
Given the attention that large language models such as OpenAI’s ChatGPT are currently attracting, and how widely they are being used to solve natural language processing problems, this blog post gives an introduction to the AI models of the German company Aleph Alpha. I will explain why it is so important to have AI companies based in Europe and why the everyday use of ChatGPT, as the product of a US company, can be problematic.
Read more