AI Rule & Stream Generation
The eKuiper Manager integrates advanced LLM-powered assistants to bridge the gap between industrial requirements and complex stream processing syntax. By leveraging OpenRouter-compatible models (like Gemini 1.5 Flash and Llama 3.3), the platform allows users to generate, analyze, and optimize data flows using natural language.
Overview
Generating stream definitions and SQL rules manually can be error-prone, especially when dealing with complex nested JSON payloads or diverse industrial protocols (MQTT, HTTP, Neuron). The AI Assistant suite provides:
- Natural Language to SQL: Convert "Alert me if temperature exceeds 50 degrees for 5 minutes" into valid eKuiper SQL.
- Context-Aware Generation: Rules are generated based on your actual existing streams and shared connections.
- Automated Stream Discovery: Define data sources by describing the physical device or topic.
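As a concrete sketch of the first capability, the temperature prompt above might translate into a rule payload like the following. The stream name `boiler_stream`, the rule ID, and the window parameters are illustrative assumptions, not guaranteed model output:

```python
# Hypothetical generated rule for the prompt "Alert me if temperature
# exceeds 50 degrees for 5 minutes". A 5-minute tumbling window fires
# only when every reading in the window stayed above 50.
generated_rule = {
    "id": "temp_alert_rule",
    "sql": (
        "SELECT min(temperature) AS min_temp FROM boiler_stream "
        "GROUP BY TUMBLINGWINDOW(ss, 300) HAVING min(temperature) > 50"
    ),
    # Sink is illustrative; the generator would pick from your configured sinks.
    "actions": [
        {"mqtt": {"server": "tcp://127.0.0.1:1883", "topic": "alerts/temp"}}
    ],
}
```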
AI Stream Generation
The Stream Generator automates the creation of eKuiper Source definitions. It infers appropriate data types, formats, and protocols based on user descriptions.
Usage
Users can provide a simple prompt like: "Create an MQTT stream for a vibration sensor on topic factory/node1/vibe with frequency and amplitude fields."
The AI will return a structured JSON configuration ready for the eKuiper API:
```json
{
  "name": "vibration_sensor_stream",
  "sourceType": "mqtt",
  "datasource": "factory/node1/vibe",
  "format": "json",
  "fields": [
    { "name": "frequency", "type": "float" },
    { "name": "amplitude", "type": "float" }
  ],
  "description": "Stream for monitoring industrial vibration telemetry."
}
```
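A configuration in this shape maps naturally onto an eKuiper `CREATE STREAM` statement. The translation below is a sketch of that mapping, not the manager's actual implementation; consult the eKuiper stream API for the authoritative syntax:

```python
# Sketch: derive a CREATE STREAM statement from the generated JSON config.
config = {
    "name": "vibration_sensor_stream",
    "sourceType": "mqtt",
    "datasource": "factory/node1/vibe",
    "format": "json",
    "fields": [
        {"name": "frequency", "type": "float"},
        {"name": "amplitude", "type": "float"},
    ],
}

# "frequency float, amplitude float"
field_list = ", ".join(f"{f['name']} {f['type']}" for f in config["fields"])

create_sql = (
    f"CREATE STREAM {config['name']} ({field_list}) "
    f"WITH (DATASOURCE=\"{config['datasource']}\", "
    f"FORMAT=\"{config['format']}\", TYPE=\"{config['sourceType']}\")"
)
```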
Supported Protocols & Types:
- Sources: `mqtt`, `httppull`, `httppush`, `memory`, `neuron`, `edgex`, `file`, `redis`, `simulator`
- Data Types: `bigint`, `float`, `string`, `boolean`, `datetime`, `bytea`, `array`, `struct`
AI Rule Generation
The Rule Generator is "Context-Aware," meaning it analyzes your current eKuiper environment (existing streams, active shared connections, and sink templates) before proposing a rule.
Contextual Logic
Unlike generic AI tools, this generator receives a snapshot of your system metadata:
- Available Streams: Ensures the `FROM` clause uses existing stream names.
- Sink Metadata: Suggests valid actions (e.g., sending data to a specific SQL database or MQTT broker already configured in your system).
- Shared Connections: Prefers reusing existing connection definitions to maintain resource efficiency.
API Interface
Endpoint: POST /api/ai/rule-gen
Request Body:
| Field | Type | Description |
| :--- | :--- | :--- |
| messages | Array | The conversation history between the user and AI. |
| context | Object | Metadata including streams, connections, and sinkMetadata. |
| modelName | String | (Optional) The specific AI model to use. |
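Putting the table together, a request body might look like the sketch below. The field names come from the table above; the stream, connection, and sink values are hypothetical placeholders:

```python
# Illustrative request body for POST /api/ai/rule-gen.
request_body = {
    # Conversation history; the last user turn carries the current ask.
    "messages": [
        {"role": "user",
         "content": "Alert me if vibration amplitude exceeds 0.8"}
    ],
    # Snapshot of system metadata so the model grounds its answer
    # in streams and connections that actually exist.
    "context": {
        "streams": ["vibration_sensor_stream"],
        "connections": [{"id": "plant_mqtt", "type": "mqtt"}],
        "sinkMetadata": [{"type": "mqtt"}],
    },
    "modelName": "google/gemini-flash-1.5",  # optional override
}
```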
Response Schema:
```js
{
  "message": string,      // Explanation for the technician
  "ruleId": string,       // Suggested unique ID
  "sql": string,          // The generated eKuiper SQL
  "actions": Array<any>,  // Array of Sink configurations
  "options": Object       // QoS, EventTime, and other rule options
}
```
Logic & Topology Analysis
For existing rules, the AI can perform a "Reverse Analysis" to help maintenance teams understand complex logic without reading raw SQL.
Topology Node Mapping
The AI analyzes the Rule SQL and its execution topology to provide plain-English descriptions for every node in the graph.
- Source Nodes: Explains what data is entering.
- Filter Nodes: Explains conditions (e.g., "Filters for values above 100").
- Aggregator Nodes: Explains math (e.g., "Calculates 10-minute rolling average").
- Sink Nodes: Explains the final destination.
Execution Plan (Explain) Analysis
The manager can pass the eKuiper EXPLAIN plan (JSON) to the AI to identify:
- Potential performance bottlenecks.
- Optimization hints for complex joins or windowing.
- Data flow pathing.
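To give the AI a compact view of the plan, a client can walk the `EXPLAIN` output and collect the operator chain. The nested structure and operator names below are assumptions for illustration; actual eKuiper `EXPLAIN` output may differ:

```python
# Hypothetical EXPLAIN plan: each node has an operator name and children.
plan = {
    "op": "ProjectPlan",
    "children": [
        {"op": "WindowPlan", "children": [
            {"op": "FilterPlan", "children": [
                {"op": "DataSourcePlan", "children": []},
            ]},
        ]},
    ],
}

def collect_ops(node):
    """Depth-first list of operator names, root first."""
    ops = [node["op"]]
    for child in node.get("children", []):
        ops.extend(collect_ops(child))
    return ops
```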
Configuration & Models
The AI features require an API key from OpenRouter. By default, the system uses google/gemini-flash-1.5 for its balance of speed and reasoning, but supports a variety of high-performance models including:
- meta-llama/llama-3.3-70b-instruct
- qwen/qwen-2.5-vl-7b-instruct
- google/gemini-2.0-flash-exp
Models can be toggled via the UI settings to optimize for either cost-efficiency or deep logical reasoning.