Understanding Hivemind's Content Filters

The data used to train LLMs can include unwanted or harmful content, and generative AI occasionally produces fabricated or incorrect output, known as hallucinations. To mitigate these risks, Seeker applies additional filtering models to uphold response quality.

Before any message reaches your holder, Seeker runs three checks, sketched in code after this list, verifying that the response:

  • Is Safe: Free from any potentially harmful content.

  • Is Relevant: Directly addresses the holder's query, ensuring the response is not just correct but also pertinent.

  • Is Accurate: Aligns with the information in your knowledge base, thus confirming the truthfulness of the response.
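
The exact filter models behind these checks aren't described here, but the gating logic can be pictured as a short fail-closed pipeline. The sketch below is illustrative only: `Draft`, `is_safe`, `is_relevant`, `is_accurate`, and `release` are hypothetical placeholders standing in for Seeker's internal filters, not a published API, and the toy keyword rules stand in for what would be learned models.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    query: str           # the holder's question
    answer: str          # candidate response from the LLM
    sources: list[str]   # knowledge-base passages backing the answer

# Toy stand-ins for Seeker's filter models (hypothetical, for illustration).
def is_safe(d: Draft) -> bool:
    # A real safety filter would be a trained classifier, not a keyword rule.
    return "harmful" not in d.answer.lower()

def is_relevant(d: Draft) -> bool:
    # Toy relevance proxy: the answer mentions at least one query term.
    return any(w in d.answer.lower() for w in d.query.lower().split())

def is_accurate(d: Draft) -> bool:
    # Toy grounding check: the answer echoes some knowledge-base passage.
    return any(s.lower() in d.answer.lower() for s in d.sources)

def release(d: Draft) -> Optional[str]:
    """Send the answer to the holder only if every filter passes."""
    for check in (is_safe, is_relevant, is_accurate):
        if not check(d):
            return None  # blocked: the message never reaches the holder
    return d.answer
```

The point of the sketch is the fail-closed ordering: a response that fails any single check is withheld rather than delivered, so only messages that are safe, relevant, and grounded in your knowledge base go out.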

With these safeguards, you can trust that Seeker agents not only make informed decisions but also deliver responses of the highest quality to your holders.
