- AI-powered content generation – automatically create engaging copy, blog posts, and marketing materials
- Smart chatbots and assistants – build conversational interfaces that understand context and provide helpful responses
- Intelligent form handling – auto-complete forms, validate inputs, and provide smart suggestions
- Content analysis and insights – understand user behavior, sentiment, and engagement patterns
- Automated workflows – trigger AI actions based on user interactions and data changes
- Multilingual support – automatically translate content and provide localized experiences
- Image and document processing – extract text, analyze content, and generate descriptions from uploaded files
- Smart recommendations – suggest relevant content, products, or actions based on user preferences
- Automated testing and optimization – AI-driven A/B testing and performance improvements
Enabling Superun AI
For the best experience, we recommend using Superun AI with your Supabase integration for data persistence and user management. By default, Superun AI is enabled for your project and ready to use. This means Superun automatically adds AI functionality when you request it in your prompts or through the visual editor. You can manage AI behavior in your project settings.
Default AI model
Superun AI uses Gemini 2.5 Flash as the default model for most tasks. If you want to use a different model or combination of models, you can specify your choice directly in your prompts or through the AI settings panel.
AI Integration Options
The default setting for AI integration is Always available, meaning Superun AI will be used automatically in your projects when requested. You can configure AI behavior in your project settings:
- Always available: Superun automatically performs AI actions when requested, without additional confirmation.
- Ask for confirmation: Superun asks for your approval before executing AI-powered features.
- Disabled: AI features are not available for the current project.
Usage and Pricing
Superun AI uses the same Credits system as the rest of Superun. When you use AI features in conversations or build flows, they simply consume Credits from your project balance; there is no separate AI subscription or standalone AI invoice. You can top up your Credits as needed, and all AI usage is counted together with your other Superun activity. There is no per-model pricing UI or per-request line-item breakdown; instead, you just see your remaining Credits. You can view and manage your Credits balance in the Credits section of your project dashboard.
Supported AI Models
Superun AI uses Gemini 2.5 Flash as its default model, but you can specify different models in your prompts or through the AI settings.
| Model | Description | Best For |
|---|---|---|
| GPT-5 | Top-tier model for complex decisions, deep analysis, creative work, and vision understanding. Features advanced reasoning capabilities, highest accuracy, and multimodal understanding. Most capable but slowest and most expensive option. | Complex decision making, deep analysis, creative work, vision understanding, accuracy-critical applications, advanced reasoning tasks |
| GPT-5 Mini | Best value for daily office work, email replies, and meeting notes. Optimized balance of performance and cost, delivering strong capabilities at a lower price point than GPT-5. | Daily office work, email replies, meeting notes, business workflows, general assistants, cost-effective production use |
| GPT-5 Nano | Ultra-fast for high-frequency queries, real-time chat, and quick Q&A. Lightweight model designed for speed and efficiency, making it the cheapest and fastest GPT-5 variant. | High-frequency queries, real-time chat, quick Q&A, summaries, classification, high-volume simple tasks, latency-sensitive applications |
| Gemini 2.5 Pro | Long-context expert for processing entire books and massive documents. Features exceptional reasoning capabilities, extremely large context window, and advanced multimodal understanding. Best for complex tasks but slower and most expensive. | Processing entire books, massive documents, deep reasoning, advanced coding, research, complex multimodal tasks, long-context analysis |
| Gemini 2.5 Flash (default) | Fast responses for daily chat, text translation, and image understanding. Balanced model offering good reasoning, speed, and cost efficiency. Optimized for general-purpose tasks with mid-range pricing. | Daily chat, text translation, image understanding, assistants, analysis, general workflows, balanced performance needs |
| Gemini 2.5 Flash Lite | Fastest and cheapest for simple Q&A and high-frequency usage. Lightweight model optimized for speed and cost, handling simple tasks at scale with reduced reasoning depth. | Simple Q&A, high-frequency usage, high-volume lightweight tasks, classification, summarization, translation, cost-sensitive applications |
| Llama-3.1-8b-Instant | Lightning-fast for customer service and high-concurrency real-time chat. Compact 8B parameter model optimized for low latency and high throughput in concurrent scenarios. | Customer service, high-concurrency real-time chat, quick responses, high-throughput applications, low-latency requirements |
| Llama-3.3-70b-Versatile | Versatile assistant for complex task handling and deep conversations. 70B parameter model with strong reasoning capabilities, well-balanced for diverse applications requiring both intelligence and efficiency. | Complex task handling, deep conversations, multi-step problem solving, versatile applications, balanced reasoning needs |
| GPT-OSS-120b | Deep thinking model for research analysis and precise reasoning. Large-scale open-source model with advanced reasoning capabilities, optimized for complex problem-solving and academic research. | Research analysis, precise reasoning, complex problem solving, academic research, deep analysis, scientific computing |
| GPT-OSS-20b | Balanced choice combining reasoning capability with speed. Medium-scale open-source model offering good reasoning performance with faster response times, suitable for general-purpose applications. | General-purpose tasks, balanced reasoning and speed, versatile applications, everyday use cases, production workloads |
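To make the table concrete, here is a minimal sketch of the kind of server-side chat call that model choice ultimately boils down to, where swapping the model name trades cost and latency for reasoning depth. Everything in it is an assumption for illustration: the gateway URL, request and response shape, and model identifier strings are not a documented Superun API, and in practice you simply name the model in your prompt or in the AI settings panel.

```typescript
// Minimal sketch only: the gateway URL, request/response shape, and model ID
// strings are illustrative assumptions, not a documented Superun API.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

async function chat(
  messages: ChatMessage[],
  model: string = "gemini-2.5-flash" // default model per the table above (assumed identifier)
): Promise<string> {
  const response = await fetch("https://ai-gateway.example.com/v1/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.AI_GATEWAY_KEY ?? ""}`, // hypothetical secret
    },
    body: JSON.stringify({ model, messages }),
  });
  if (!response.ok) throw new Error(`AI request failed: ${response.status}`);
  const data = await response.json();
  return data.reply as string; // assumed response field
}

// Swapping only the model ID changes the cost/latency/quality trade-off, e.g.:
// await chat(messages, "gpt-5");      // deep analysis, slowest and most expensive
// await chat(messages, "gpt-5-nano"); // quick, high-volume tasks
```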
Best and most cost-effective choices
By Intelligence Level
- Highest intelligence (deep reasoning, most expensive): GPT-5, Gemini 2.5 Pro, GPT-OSS-120b
  - Best for: Complex decision making, research analysis, advanced coding, academic work
- High intelligence (strong reasoning, balanced cost): GPT-5 Mini, Gemini 2.5 Flash, Llama-3.3-70b-Versatile, GPT-OSS-20b
  - Best for: General assistants, business workflows, complex task handling, versatile applications
- Lightweight (fast, cost-effective): GPT-5 Nano, Gemini 2.5 Flash Lite, Llama-3.1-8b-Instant
  - Best for: High-frequency queries, simple Q&A, high-volume tasks, real-time chat
By Use Case
- Daily office work: GPT-5 Mini, Gemini 2.5 Flash
- High-concurrency real-time chat: Llama-3.1-8b-Instant, GPT-5 Nano
- Research and analysis: GPT-5, Gemini 2.5 Pro, GPT-OSS-120b
- Cost-sensitive high-volume tasks: GPT-5 Nano, Gemini 2.5 Flash Lite
- Balanced production use: GPT-5 Mini, Gemini 2.5 Flash, GPT-OSS-20b
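If you ever route requests programmatically, the use-case guidance above can be collapsed into a simple lookup, sketched below. The model names mirror this page's table; treating them as selectable identifiers, and the helper itself, are assumptions for illustration, since normally you express the same choice in your prompts or AI settings.

```typescript
// Sketch of the use-case recommendations above as a lookup table.
// The model names mirror this page's table; using them directly as
// selectable identifiers is an assumption for illustration.
type UseCase =
  | "daily-office-work"
  | "realtime-chat"
  | "research-and-analysis"
  | "cost-sensitive-high-volume"
  | "balanced-production";

const recommendedModels: Record<UseCase, string[]> = {
  "daily-office-work": ["GPT-5 Mini", "Gemini 2.5 Flash"],
  "realtime-chat": ["Llama-3.1-8b-Instant", "GPT-5 Nano"],
  "research-and-analysis": ["GPT-5", "Gemini 2.5 Pro", "GPT-OSS-120b"],
  "cost-sensitive-high-volume": ["GPT-5 Nano", "Gemini 2.5 Flash Lite"],
  "balanced-production": ["GPT-5 Mini", "Gemini 2.5 Flash", "GPT-OSS-20b"],
};

// Return the first recommendation for a use case, falling back to the default model.
function pickModel(useCase: UseCase): string {
  return recommendedModels[useCase][0] ?? "Gemini 2.5 Flash";
}

// Example: pickModel("realtime-chat") === "Llama-3.1-8b-Instant"
```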

