
Why multi-agent AI tackles complexities LLMs can’t


Image credit: VentureBeat with DALL-E 3

The introduction of ChatGPT has brought large language models (LLMs) into widespread use across both tech and non-tech industries. This popularity is primarily due to two factors:

  1. LLMs as a knowledge storehouse: LLMs are trained on a vast amount of internet data and are updated at regular intervals (for example, GPT-3, GPT-3.5, GPT-4, GPT-4o and others);
  2. Emergent abilities: As LLMs grow larger, they display abilities not found in smaller models.

Does this mean we have already reached human-level intelligence, which we call artificial general intelligence (AGI)? Gartner defines AGI as a form of AI that possesses the ability to understand, learn and apply knowledge across a wide range of tasks and domains. The road to AGI is long, with one key hurdle being the auto-regressive nature of LLM training, which predicts each word based only on the sequence that came before it. As one of the pioneers in ...
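
The auto-regressive constraint is easy to see in code. Below is a minimal, illustrative sketch of the generation loop, assuming a hypothetical predict_next_token function as a stand-in for a real model's forward pass: each new token is predicted only from the tokens already in the context, and that prediction is appended before the next step.

```python
# Minimal sketch of auto-regressive generation: each new token is predicted
# only from the sequence of tokens that came before it.
# `predict_next_token` is a hypothetical stand-in for a real language model.

from typing import List


def predict_next_token(context: List[str]) -> str:
    # Hypothetical model call: a real LLM would run a forward pass and return
    # a probability distribution over its vocabulary. Here we just return a
    # canned continuation for illustration.
    canned = {"The": "road", "road": "to", "to": "AGI", "AGI": "is", "is": "long"}
    return canned.get(context[-1], "<eos>")


def generate(prompt: List[str], max_new_tokens: int = 10) -> List[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        next_token = predict_next_token(tokens)  # conditioned only on past tokens
        if next_token == "<eos>":
            break
        tokens.append(next_token)                # the prediction becomes new context
    return tokens


print(" ".join(generate(["The"])))  # -> "The road to AGI is long"
```

Because every step conditions only on what has already been produced, the model cannot revise earlier choices or plan ahead, which is the limitation the article points to.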

