LLPhant is a PHP library for building Generative AI applications in PHP. Created by Maxime Thoonsen, it provides a unified interface for working with multiple LLM providers including OpenAI, Anthropic, Mistral, LM Studio, and Ollama. The library is inspired by LangChain and LLamaIndex, bringing similar patterns to PHP developers.
Multi-Provider LLM Support
LLPhant abstracts away provider differences so you can switch between AI services with minimal code changes. Whether you are using OpenAI's GPT models, Anthropic's Claude, Mistral, LM Studio, or running local models through Ollama, the interface remains consistent:
// OpenAI
$chat = new OpenAIChat();
$response = $chat->generateText('What is the capital of France?');

// Anthropic Claude
$chat = new AnthropicChat(new AnthropicConfig(AnthropicConfig::CLAUDE_3_5_SONNET));
$response = $chat->generateText('What is the capital of France?');

// Local models via Ollama
$config = new OllamaConfig();
$config->model = 'llama2';
$chat = new OllamaChat($config);
The library also supports streaming responses for real-time chat interfaces, token usage tracking for cost monitoring, and vision capabilities for image analysis.
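As a sketch of the streaming case: LLPhant's chat classes expose a `generateStreamOfText()` method that returns a PSR-7 stream, which you can read incrementally and flush to the client. Treat the exact method name and return type as something to verify against the current LLPhant documentation.

```php
<?php
// Streaming sketch (requires llphant installed and an OPENAI_API_KEY
// in the environment; method name per LLPhant's documented API).
use LLPhant\Chat\OpenAIChat;

$chat = new OpenAIChat();

// Returns a Psr\Http\Message\StreamInterface that yields tokens as
// the model produces them, instead of waiting for the full answer.
$stream = $chat->generateStreamOfText('Tell me a short story.');

while (!$stream->eof()) {
    echo $stream->read(32); // flush partial output to the client as it arrives
}
```

This is what makes "typing" chat UIs possible: the browser receives the first tokens within a fraction of a second rather than after the whole completion finishes.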
Embeddings and Vector Storage
LLPhant includes a complete pipeline for building Retrieval-Augmented Generation (RAG) applications. You can read documents from various sources (PDF, Word, text files), split them into chunks, generate embeddings, and store them in your preferred vector database:
// Read and process documents
$reader = new FileDataReader(__DIR__ . '/documents');
$documents = $reader->getDocuments();

// Split into chunks for embedding
$splitDocuments = DocumentSplitter::splitDocuments($documents, 800);

// Generate embeddings
$embeddingGenerator = new OpenAI3SmallEmbeddingGenerator();
$embeddedDocuments = $embeddingGenerator->embedDocuments($splitDocuments);

// Store in PostgreSQL with pgvector
$vectorStore = new DoctrineVectorStore($entityManager, Document::class);
$vectorStore->addDocuments($embeddedDocuments);

// Search for similar content
$embedding = $embeddingGenerator->embedText('search query');
$results = $vectorStore->similaritySearch($embedding, 5);
Vector store support includes Doctrine (PostgreSQL with pgvector), Redis, Elasticsearch, MongoDB, ChromaDB, Qdrant, Milvus, AstraDB, OpenSearch, Pinecone, and Typesense.
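Whichever store you pick, the core operation of `similaritySearch` is the same: rank stored chunk embeddings by how closely their vectors point in the same direction as the query embedding. The plain-PHP cosine similarity below is an illustration of that idea only; real stores like pgvector or Qdrant do this with optimized indexes over vectors with thousands of dimensions.

```php
<?php
// Illustration: how a similarity search ranks chunks (not LLPhant code).

function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;
    foreach ($a as $i => $v) {
        $dot   += $v * $b[$i];
        $normA += $v * $v;
        $normB += $b[$i] * $b[$i];
    }
    return $dot / (sqrt($normA) * sqrt($normB));
}

// A query embedding is compared against every stored chunk embedding;
// the top-k highest scores are the most relevant chunks.
$query  = [0.9, 0.1, 0.0];
$chunks = [
    'pricing page' => [0.8, 0.2, 0.1],
    'contact page' => [0.0, 0.1, 0.9],
];

$scores = array_map(fn (array $e): float => cosineSimilarity($query, $e), $chunks);
arsort($scores); // highest similarity first
```

Here `'pricing page'` ranks first because its embedding points in nearly the same direction as the query, which is exactly what `similaritySearch($embedding, 5)` returns: the five nearest stored chunks.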
Question Answering with RAG
The QuestionAnswering class handles the entire RAG workflow: retrieving relevant documents from your vector store and generating contextualized responses:
use LLPhant\Query\SemanticSearch\QuestionAnswering;

$qa = new QuestionAnswering($vectorStore, $embeddingGenerator, $chat);
$response = $qa->answerQuestion('What are the main topics covered in the documentation?');
You can customize the system message template to control how the AI uses retrieved context, add guardrails for safety, and implement multi-query transformations to improve retrieval quality.
Function Calling and Tools
LLPhant supports function calling (tools), allowing your LLM to interact with external APIs and services. Define your tools as PHP classes and the LLM can decide when to invoke them based on the conversation context.
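A minimal sketch of that pattern follows. The `WeatherService` class and its method are hypothetical examples, and the registration calls in the trailing comments (`FunctionBuilder::buildFunctionInfo`, `addTool`) follow LLPhant's documented function-calling API but should be checked against the current release before use.

```php
<?php
// A tool is just a PHP class whose method the LLM may choose to invoke.
class WeatherService
{
    /** Returns a canned forecast; a real tool would call a weather API. */
    public function currentWeather(string $city): string
    {
        return "It is sunny in {$city}.";
    }
}

// Registration sketch (requires llphant and an API key, shown for context):
//
// use LLPhant\Chat\FunctionInfo\FunctionBuilder;
//
// $tool = FunctionBuilder::buildFunctionInfo(new WeatherService(), 'currentWeather');
// $chat->addTool($tool);
// $chat->generateText('What is the weather in Paris?');
// // The model sees the tool's name, description, and parameters, and may
// // decide to call currentWeather('Paris') instead of answering directly.
```

The key design point is that the LLM never executes code itself: it emits a structured request ("call `currentWeather` with `city = Paris`"), the library invokes your method, and the result is fed back into the conversation.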
You can learn more about LLPhant and find detailed documentation at llphant.readthedocs.org and in the GitHub repository.