API Reference
Complete API reference for Mindwave's core classes, methods, and interfaces.
Overview
This reference covers Mindwave's public API, including facades, core classes, and interfaces. For configuration details, see the Configuration Reference.
Facades
Mindwave Facade
The main entry point for Mindwave operations.
Namespace: Mindwave\Mindwave\Facades\Mindwave
llm()
Get an LLM driver instance.
use Mindwave\Mindwave\Facades\Mindwave;
// Get default LLM driver
$llm = Mindwave::llm();
// Get specific driver
$llm = Mindwave::llm('openai');
$llm = Mindwave::llm('anthropic');
$llm = Mindwave::llm('mistral');
// Make a simple LLM call
$response = Mindwave::llm()->generateText('What is Laravel?');
Parameters:
string|null $driver - Driver name (null for default)
Returns: LlmDriver instance
prompt()
Create a new PromptComposer instance.
// Create prompt composer
$response = Mindwave::prompt()
->section('system', 'You are helpful')
->section('user', 'Hello')
->run();
Returns: PromptComposer instance
See Prompt Composer for full documentation.
brain()
Get a Brain (vector store) instance.
// Get default brain
$brain = Mindwave::brain();
// Get named brain
$brain = Mindwave::brain('documentation');
// Store embeddings
$brain->remember('Laravel is a PHP framework', ['id' => 1]);
// Search
$results = $brain->recall('PHP framework', limit: 5);
Parameters:
string|null $name - Brain name (null for default)
Returns: Brain instance
See Brain for full documentation.
stream()
Create a streaming response.
// Simple streaming
Mindwave::stream('Tell me about Laravel')
->respond();
// With model override
Mindwave::stream('Tell me about Laravel')
->model('gpt-4-turbo')
->respond();
Parameters:
string $prompt - Text prompt
Returns: StreamedTextResponse instance
See Streaming for full documentation.
LLM Drivers
LlmDriver Interface
Namespace: Mindwave\Mindwave\LLM\Contracts\LlmDriver
All LLM drivers implement this interface.
chat()
Make a chat completion request.
$response = Mindwave::llm()->chat([
['role' => 'system', 'content' => 'You are helpful'],
['role' => 'user', 'content' => 'Hello'],
]);
echo $response->content;
Parameters:
array $messages - Array of messages with role and content
array $options - Optional parameters (model, temperature, etc.)
Returns: Provider-specific response object
generateText()
Generate text from a simple prompt.
$response = Mindwave::llm()->generateText('What is Laravel?');
echo $response->content;
Parameters:
string $prompt - Text prompt
array $options - Optional parameters
Returns: Provider-specific response object
stream()
Stream a text completion.
$stream = Mindwave::llm()->stream('Tell me a story');
foreach ($stream as $delta) {
echo $delta;
}
Parameters:
string $prompt - Text prompt
array $options - Optional parameters
Returns: Generator yielding text chunks
streamText()
Alias for stream().
Parameters:
string $prompt - Text prompt
array $options - Optional parameters
Returns: Generator yielding text chunks
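Because streamText() is documented above as an alias for stream(), it can be used interchangeably; a minimal sketch mirroring the stream() example:

```php
// streamText() is an alias for stream(): same parameters, same generator.
$stream = Mindwave::llm()->streamText('Tell me a story');

foreach ($stream as $delta) {
    echo $delta; // each $delta is a text chunk
}
```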
setOptions()
Set default options for this driver instance.
$llm = Mindwave::llm()
->setOptions([
'model' => 'gpt-4-turbo',
'temperature' => 0.7,
'max_tokens' => 2000,
]);
Parameters:
array $options - Options to set
Returns: self
PromptComposer
Namespace: Mindwave\Mindwave\PromptComposer\PromptComposer
Token-aware prompt assembly with automatic fitting.
Methods
section()
Add a section to the prompt.
$composer->section(
name: 'system',
content: 'You are helpful',
priority: 100,
shrinker: null,
metadata: []
);
Parameters:
string $name - Section identifier
string|array $content - Section content
int $priority - Priority (0-100, default: 50)
string|null $shrinker - Shrinker strategy (null, 'truncate', 'compress')
array $metadata - Additional metadata
Returns: self
context()
Add a context section (convenience method).
// Plain text
$composer->context('Some context information');
// With ContextSource
$composer->context($source, priority: 60, query: 'search term', limit: 5);
// With ContextPipeline
$composer->context($pipeline, priority: 60, limit: 10);
Parameters:
string|array|ContextSource|ContextPipeline $content - Context content
int $priority - Priority (default: 50)
string|null $query - Search query (for sources)
int $limit - Result limit (for sources)
Returns: self
model()
Set the model for token counting.
$composer->model('gpt-4-turbo');
Parameters:
string $model - Model identifier
Returns: self
reserveOutputTokens()
Reserve tokens for the model's response.
$composer->reserveOutputTokens(1000);
Parameters:
int $tokens - Number of tokens to reserve
Returns: self
fit()
Apply auto-fit algorithm to stay within token budget.
$composer->fit();
Returns: self
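To show how fit() interacts with section priorities and shrinkers, here is an illustrative sketch composed only from methods documented in this section; $longDocument is a placeholder variable, and 'truncate' is one of the shrinker names listed under section():

```php
use Mindwave\Mindwave\Facades\Mindwave;

// High-priority sections are preserved; lower-priority sections are
// shrunk or dropped first when the prompt exceeds the token budget.
$composer = Mindwave::prompt()
    ->model('gpt-4-turbo')
    ->reserveOutputTokens(1000)
    ->section('system', 'You are helpful', priority: 100)
    ->section('context', $longDocument, priority: 40, shrinker: 'truncate')
    ->fit();

if ($composer->isFitted()) {
    $response = $composer->run();
}
```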
toMessages()
Convert to messages array format.
$messages = $composer->toMessages();
Returns: array
toText()
Convert to plain text format.
$text = $composer->toText();
Returns: string
run()
Execute the prompt with the configured LLM.
$response = $composer->run(['temperature' => 0.7]);
Parameters:
array $options - Optional LLM parameters
Returns: LLM response object
getTokenCount()
Get current total token count.
$tokens = $composer->getTokenCount();
Returns: int
getAvailableTokens()
Get available token budget.
$available = $composer->getAvailableTokens();
Returns: int
isFitted()
Check if prompt has been fitted.
if (!$composer->isFitted()) {
$composer->fit();
}
Returns: bool
getSections()
Get all sections.
$sections = $composer->getSections();
Returns: array<Section>
registerShrinker()
Register a custom shrinker.
$composer->registerShrinker('custom', new CustomShrinker());
Parameters:
string $name - Shrinker name
ShrinkerInterface $shrinker - Shrinker instance
Returns: self
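The ShrinkerInterface signature is not documented on this page, so the sketch below is an assumption: it supposes a single shrink() method taking the content and a target token count, and the namespace in the use statement is guessed from the conventions above. Adjust both to the actual interface.

```php
use Mindwave\Mindwave\PromptComposer\Contracts\ShrinkerInterface; // namespace assumed

// Hypothetical shrinker that keeps only the head of the content.
// The shrink() signature is an assumption, not the documented interface.
class HeadShrinker implements ShrinkerInterface
{
    public function shrink(string $content, int $targetTokens): string
    {
        // Rough heuristic: ~4 characters per token for English text
        return substr($content, 0, $targetTokens * 4);
    }
}

$composer->registerShrinker('head', new HeadShrinker());
```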
Context Discovery
ContextSource Interface
Namespace: Mindwave\Mindwave\Context\Contracts\ContextSource
All context sources implement this interface.
search()
Search the source and return ranked results.
$results = $source->search('Laravel', limit: 5);
Parameters:
string $query - Search query
int $limit - Maximum results (default: 5)
Returns: ContextCollection
getName()
Get the source name.
$name = $source->getName();
Returns: string
initialize()
Initialize the source (create indexes, etc.).
$source->initialize();
Returns: void
cleanup()
Clean up resources (delete indexes, close connections).
$source->cleanup();
Returns: void
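Putting the four interface methods together, here is a sketch of a custom source backed by an in-memory array. It uses ContextItem::make() as documented below; how a ContextCollection is constructed is an assumption (a make() factory taking an array of items), so adapt that call to the actual API.

```php
use Mindwave\Mindwave\Context\Contracts\ContextSource;
use Mindwave\Mindwave\Context\ContextCollection;
use Mindwave\Mindwave\Context\ContextItem;

class ArrayContextSource implements ContextSource
{
    public function __construct(private array $documents) {}

    public function search(string $query, int $limit = 5): ContextCollection
    {
        $items = [];
        foreach ($this->documents as $id => $text) {
            // Naive substring match; a real source would rank results
            if (stripos($text, $query) !== false) {
                $items[] = ContextItem::make(
                    content: $text,
                    score: 1.0,
                    source: $this->getName(),
                    metadata: ['id' => $id],
                );
            }
        }

        // ContextCollection::make() is assumed here
        return ContextCollection::make(array_slice($items, 0, $limit));
    }

    public function getName(): string
    {
        return 'array';
    }

    public function initialize(): void {}

    public function cleanup(): void {}
}
```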
ContextCollection
Namespace: Mindwave\Mindwave\Context\ContextCollection
Collection of context items with token management.
formatForPrompt()
Format collection for prompt inclusion.
// Numbered format (default)
$text = $collection->formatForPrompt('numbered');
// Markdown format
$text = $collection->formatForPrompt('markdown');
// JSON format
$text = $collection->formatForPrompt('json');
Parameters:
string $format - Format type ('numbered', 'markdown', 'json')
Returns: string
deduplicate()
Remove duplicate items.
$deduplicated = $collection->deduplicate();
Returns: self
rerank()
Re-rank by relevance score.
$reranked = $collection->rerank();
Returns: self
truncateToTokens()
Truncate to fit within token budget.
$truncated = $collection->truncateToTokens(1000, 'gpt-4');
Parameters:
int $maxTokens - Token budget
string $model - Model for token counting
Returns: self
getTotalTokens()
Get total token count.
$tokens = $collection->getTotalTokens('gpt-4');
Parameters:
string $model - Model for token counting
Returns: int
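Since each of these methods returns the collection, they chain naturally. A sketch of a typical post-processing pipeline using only the ContextCollection methods documented above ($source is any ContextSource):

```php
$results = $source->search('Laravel routing', limit: 20);

$context = $results
    ->deduplicate()                     // remove duplicate items
    ->rerank()                          // order by relevance score
    ->truncateToTokens(1500, 'gpt-4');  // enforce a token budget

echo $context->formatForPrompt('markdown');
echo $context->getTotalTokens('gpt-4');
```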
ContextItem
Namespace: Mindwave\Mindwave\Context\ContextItem
Individual context item (readonly).
Properties
readonly class ContextItem
{
public string $content; // Item content
public float $score; // Relevance score (0.0-1.0)
public string $source; // Source name
public array $metadata; // Additional metadata
}
make()
Create a new ContextItem.
$item = ContextItem::make(
content: 'Laravel is a framework',
score: 0.95,
source: 'docs',
metadata: ['id' => 1]
);
Parameters:
string $content - Item content
float $score - Relevance score
string $source - Source name
array $metadata - Additional data
Returns: ContextItem
Observability
Trace Model
Namespace: Mindwave\Mindwave\Observability\Models\Trace
Eloquent model for trace records.
Attributes
string $trace_id - Unique trace identifier
int $total_input_tokens - Total input tokens across spans
int $total_output_tokens - Total output tokens across spans
float $estimated_cost - Total estimated cost in USD
int $span_count - Number of spans in trace
string $status - Trace status ('ok', 'error')
Carbon $start_time - Trace start time
Carbon $end_time - Trace end time
int $duration - Duration in nanoseconds
Relationships
// Get spans for this trace
$trace->spans;
Methods
// Get duration in milliseconds
$ms = $trace->getDurationInMilliseconds();
// Get duration in seconds
$seconds = $trace->getDurationInSeconds();
Span Model
Namespace: Mindwave\Mindwave\Observability\Models\Span
Eloquent model for span records.
Attributes
string $span_id - Unique span identifier
string $trace_id - Parent trace ID
string $name - Span name
string $operation_name - Operation type ('chat', 'embeddings', etc.)
string $provider_name - LLM provider ('openai', 'anthropic', etc.)
string $request_model - Model used
int $input_tokens - Input token count
int $output_tokens - Output token count
string $status_code - Status ('ok', 'error')
int $duration - Duration in nanoseconds
Scopes
// Find spans by operation
Span::operation('chat')->get();
// Find spans by provider
Span::provider('openai')->get();
// Find spans by model
Span::model('gpt-4')->get();
// Find slow spans (>5000ms by default)
Span::slow()->get();
Span::slow(10000)->get(); // Custom threshold
// Find spans with errors
Span::withErrors()->get();
Related Documentation
- Configuration Reference - Full configuration options
- Vector Stores Reference - Vector store implementations
- Embeddings Reference - Embedding providers
- Core Documentation - Getting started guide
For detailed usage examples, see:
- Prompt Composer - Token-aware prompt building
- Context Discovery - Context aggregation
- Streaming - Real-time responses
- Tracing - Observability and monitoring