RAGFlow is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs
The all-in-one AI productivity accelerator. On-device and privacy-first, with no annoying setup or configuration.
Context7 Platform: Up-to-date code documentation for LLMs and AI code editors
An open-source, extensible AI agent that goes beyond code suggestions: install, execute, edit, and test with any LLM
Integrate cutting-edge LLM technology quickly and easily into your apps
An autonomous agent that conducts deep research on any data using any LLM provider
Open-source AI orchestration framework for building context-engineered, production-ready LLM applications. Design modular pipelines and agent workflows with explicit control over retrieval, routing, memory, and generation. Built for scalable agents, RAG, multimodal applications, semantic search, and conversational systems.
Convert any URL to an LLM-friendly input with a simple prefix https://r.jina.ai/
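The prefix convention above can also be applied from code; a minimal sketch (the helper name and the target URL are illustrative, not part of the service):

```python
def to_reader_url(url: str) -> str:
    """Prepend the r.jina.ai prefix so the target page is returned as LLM-friendly text."""
    return "https://r.jina.ai/" + url

# Fetching the resulting URL (e.g. with curl or requests) returns the page as clean text.
print(to_reader_url("https://example.com"))
```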
Python SDK for AI agent monitoring, LLM cost tracking, benchmarking, and more. Integrates with most LLMs and agent frameworks including CrewAI, Agno, OpenAI Agents SDK, Langchain, Autogen, AG2, and CamelAI
A Model Context Protocol server that provides read-only access to MySQL databases. This server enables LLMs to inspect database schemas and execute read-only queries.
A Model Context Protocol (MCP) server for ATLAS, a Neo4j-powered task management system for LLM Agents - implementing a three-tier architecture (Projects, Tasks, Knowledge) to manage complex workflows. Now with Deep Research.
A Model Context Protocol (MCP) server that provides secure, read-only access to BigQuery datasets. Enables Large Language Models (LLMs) to safely query and analyze data through a standardized interface.
A Model Context Protocol server providing tools to fetch and convert web content for usage by LLMs
<p align="center"> <img height="100" width="100" alt="LlamaIndex logo" src="https://ts.llamaindex.ai/square.svg" /> </p> <h1 align="center">LlamaIndex.TS</h1> <h3 align="center"> Data framework for your LLM application. </h3>
Structured outputs for LLMs
Interface between LLMs and your data
LLM framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data.
Agent Framework / shim to use Pydantic with LLMs
Library to easily interface with LLM API providers
A Model Context Protocol server providing tools to read, search, and manipulate Git repositories programmatically via LLMs
A Model Context Protocol server providing tools for time queries and timezone conversions for LLMs
Building applications with LLMs through composability