Overview
Superun integrates with a range of AI providers, letting you add intelligent features to your applications. From text generation and image creation to code assistance and data analysis, AI integration opens up a wide range of possibilities for your Superun apps.
What is AI Integration?
AI integration in Superun enables you to do the following:
- Text generation: Create content, summaries, and responses using language models
- Image generation: Generate images from text descriptions
- Code assistance: Get help with coding tasks and debugging
- Data analysis: Analyze and interpret data with AI
- Conversational AI: Build chatbots and virtual assistants
- Content moderation: Automatically moderate user-generated content
Supported AI Providers
OpenAI
- GPT models: Text generation, completions, and conversation
- DALL-E: Image generation from text descriptions
- Whisper: Speech-to-text transcription
- Embeddings: Text embeddings for similarity and search
Anthropic
- Claude: An advanced language model for complex reasoning
- Claude Instant: A faster, lighter model for quick tasks
- Code generation: Specialized support for coding tasks
Getting Started
1. Choose an AI Provider
Choose the AI provider that best fits your needs:
- OpenAI: Best for general-purpose text generation
- Anthropic: Well suited to complex reasoning and coding
- Google AI: A good fit for multimodal applications
- Replicate: Well suited to open-source models
2. Get API Keys
For each provider, you will need to:
- Create an account
- Generate an API key
- Configure usage limits and billing
3. Configure in Superun
In your Superun project:
- Go to Settings → Integrations
- Find your chosen AI provider
- Enter your API key
- Configure model preferences
- Click Save
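The examples below import an OpenAI client from ./lib/ai. That module is not shown in this guide; one possible shape, assuming the official openai npm package and an API key supplied through an environment variable rather than hard-coded in client-side code, is:
// ./lib/ai (illustrative sketch, not part of Superun itself)
import OpenAI from 'openai';

// Read the key configured in Settings → Integrations from the environment;
// never hard-code or expose it in the browser.
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export { client as OpenAI };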
Text Generation
Basic Text Generation
import { OpenAI } from './lib/ai';

const generateText = async (prompt, model = 'gpt-3.5-turbo') => {
  try {
    const response = await OpenAI.chat.completions.create({
      model: model,
      messages: [
        {
          role: 'user',
          content: prompt,
        },
      ],
      max_tokens: 1000,
      temperature: 0.7,
    });
    return response.choices[0].message.content;
  } catch (error) {
    console.error('Error generating text:', error);
    throw error;
  }
};
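For example, a single call might look like this (the prompt text is illustrative):
const tagline = await generateText('Write a one-sentence tagline for a travel app');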
Conversation with Context
const generateConversation = async (messages, model = 'gpt-3.5-turbo') => {
  try {
    const response = await OpenAI.chat.completions.create({
      model: model,
      messages: messages,
      max_tokens: 1000,
      temperature: 0.7,
    });
    return response.choices[0].message.content;
  } catch (error) {
    console.error('Error generating conversation:', error);
    throw error;
  }
};

// Usage
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is the capital of France?' },
  { role: 'assistant', content: 'The capital of France is Paris.' },
  { role: 'user', content: 'What is the population of Paris?' },
];
const response = await generateConversation(messages);
Content Generation
const generateContent = async (type, topic, length = 'medium') => {
  const prompts = {
    blog: `Write a ${length} blog post about ${topic}`,
    email: `Write a professional email about ${topic}`,
    summary: `Summarize the following content: ${topic}`,
    translation: `Translate the following text to English: ${topic}`,
  };
  const prompt = prompts[type] || prompts.blog;
  return await generateText(prompt);
};
Image Generation
Generating Images with DALL-E
import { OpenAI } from './lib/ai';

const generateImage = async (prompt, size = '1024x1024', quality = 'standard') => {
  try {
    const response = await OpenAI.images.generate({
      model: 'dall-e-3',
      prompt: prompt,
      size: size,
      quality: quality,
      n: 1,
    });
    return response.data[0].url;
  } catch (error) {
    console.error('Error generating image:', error);
    throw error;
  }
};
Image Variations
// Note: in the official OpenAI SDK, the variations endpoint expects the image
// to be uploaded as a file (DALL-E 2 only); adjust this if your wrapper
// accepts URLs directly.
const createImageVariation = async (imageUrl, size = '1024x1024') => {
  try {
    const response = await OpenAI.images.createVariation({
      image: imageUrl,
      size: size,
      n: 1,
    });
    return response.data[0].url;
  } catch (error) {
    console.error('Error creating image variation:', error);
    throw error;
  }
};
Image Editing
const editImage = async (imageUrl, maskUrl, prompt, size = '1024x1024') => {
  try {
    const response = await OpenAI.images.edit({
      image: imageUrl,
      mask: maskUrl,
      prompt: prompt,
      size: size,
      n: 1,
    });
    return response.data[0].url;
  } catch (error) {
    console.error('Error editing image:', error);
    throw error;
  }
};
Code Generation
Generating Code
const generateCode = async (description, language = 'javascript') => {
  const prompt = `Write ${language} code for: ${description}`;
  try {
    const response = await OpenAI.chat.completions.create({
      model: 'gpt-4',
      messages: [
        {
          role: 'system',
          content: `You are an expert ${language} developer. Write clean, well-commented code.`,
        },
        {
          role: 'user',
          content: prompt,
        },
      ],
      max_tokens: 2000,
      temperature: 0.3,
    });
    return response.choices[0].message.content;
  } catch (error) {
    console.error('Error generating code:', error);
    throw error;
  }
};
Code Review
const reviewCode = async (code, language = 'javascript') => {
  const prompt = `Review this ${language} code and provide feedback on:
- Code quality
- Potential bugs
- Performance improvements
- Best practices
Code:
\`\`\`${language}
${code}
\`\`\``;
  return await generateText(prompt, 'gpt-4');
};
Debugging Code
const debugCode = async (code, error, language = 'javascript') => {
  const prompt = `Debug this ${language} code. The error is: ${error}
Code:
\`\`\`${language}
${code}
\`\`\`
Provide:
- Explanation of the error
- Fixed code
- Prevention tips`;
  return await generateText(prompt, 'gpt-4');
};
Data Analysis
Analyzing Data
const analyzeData = async (data, analysisType = 'general') => {
  const prompts = {
    general: `Analyze this data and provide insights: ${JSON.stringify(data)}`,
    trends: `Identify trends in this data: ${JSON.stringify(data)}`,
    anomalies: `Find anomalies in this data: ${JSON.stringify(data)}`,
    summary: `Summarize this data: ${JSON.stringify(data)}`,
  };
  const prompt = prompts[analysisType] || prompts.general;
  return await generateText(prompt, 'gpt-4');
};
Generating Reports
const generateReport = async (data, reportType = 'executive') => {
  const prompts = {
    executive: `Create an executive summary report for this data: ${JSON.stringify(data)}`,
    technical: `Create a technical analysis report for this data: ${JSON.stringify(data)}`,
    marketing: `Create a marketing insights report for this data: ${JSON.stringify(data)}`,
  };
  const prompt = prompts[reportType] || prompts.executive;
  return await generateText(prompt, 'gpt-4');
};
Conversational AI
Chatbot Implementation
import { useState } from 'react';

const Chatbot = () => {
  const [messages, setMessages] = useState([]);
  const [input, setInput] = useState('');
  const [loading, setLoading] = useState(false);

  const sendMessage = async (message) => {
    const newMessages = [...messages, { role: 'user', content: message }];
    setMessages(newMessages);
    setLoading(true);
    try {
      const response = await generateConversation(newMessages);
      setMessages([...newMessages, { role: 'assistant', content: response }]);
    } catch (error) {
      console.error('Error sending message:', error);
    } finally {
      setLoading(false);
    }
  };

  const handleSubmit = (e) => {
    e.preventDefault();
    if (input.trim()) {
      sendMessage(input);
      setInput('');
    }
  };

  return (
    <div className="chatbot">
      <div className="messages">
        {messages.map((message, index) => (
          <div key={index} className={`message ${message.role}`}>
            {message.content}
          </div>
        ))}
        {loading && <div className="message assistant">Thinking...</div>}
      </div>
      <form onSubmit={handleSubmit}>
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Type your message..."
        />
        <button type="submit" disabled={loading}>
          Send
        </button>
      </form>
    </div>
  );
};
Voice Assistant
import { useState } from 'react';

const VoiceAssistant = () => {
  const [isListening, setIsListening] = useState(false);
  const [transcript, setTranscript] = useState('');

  const startListening = () => {
    // Web Speech API (webkit-prefixed in Chromium-based browsers)
    const recognition = new window.webkitSpeechRecognition();
    recognition.continuous = false;
    recognition.interimResults = false;
    recognition.lang = 'en-US';
    recognition.onstart = () => setIsListening(true);
    recognition.onresult = (event) => {
      const transcript = event.results[0][0].transcript;
      setTranscript(transcript);
      // Process the transcript with AI
      processVoiceInput(transcript);
    };
    recognition.onend = () => setIsListening(false);
    recognition.start();
  };

  const processVoiceInput = async (text) => {
    try {
      const response = await generateText(`Process this voice input: ${text}`);
      // Handle the response (e.g., speak it back, execute command)
      speak(response);
    } catch (error) {
      console.error('Error processing voice input:', error);
    }
  };

  const speak = (text) => {
    const utterance = new SpeechSynthesisUtterance(text);
    window.speechSynthesis.speak(utterance);
  };

  return (
    <div className="voice-assistant">
      <button
        onClick={startListening}
        disabled={isListening}
        className={isListening ? 'listening' : ''}
      >
        {isListening ? 'Listening...' : 'Start Voice Input'}
      </button>
      {transcript && <p>You said: {transcript}</p>}
    </div>
  );
};
Content Moderation
Text Moderation
const moderateText = async (text) => {
  const prompt = `Moderate this text for inappropriate content. Return only "SAFE" or "UNSAFE" with a brief reason:
Text: "${text}"`;
  try {
    const response = await generateText(prompt, 'gpt-3.5-turbo');
    // Check the prefix: "UNSAFE" also contains the substring "SAFE",
    // so a plain includes('SAFE') check would always pass.
    const isSafe = response.trim().startsWith('SAFE');
    const reason = response.replace(/^(SAFE|UNSAFE):\s*/, '');
    return {
      isSafe,
      reason,
      originalText: text,
    };
  } catch (error) {
    console.error('Error moderating text:', error);
    return { isSafe: false, reason: 'Error during moderation', originalText: text };
  }
};
Image Moderation
const moderateImage = async (imageUrl) => {
  try {
    // The default moderation model is text-only; use an image-capable
    // moderation model and pass the URL as an image input.
    const response = await OpenAI.moderations.create({
      model: 'omni-moderation-latest',
      input: [
        { type: 'image_url', image_url: { url: imageUrl } },
      ],
    });
    const result = response.results[0];
    return {
      isSafe: !result.flagged,
      categories: result.categories,
      scores: result.category_scores,
    };
  } catch (error) {
    console.error('Error moderating image:', error);
    return { isSafe: false, categories: {}, scores: {} };
  }
};
Advanced Features
Function Calling
const callFunction = async (prompt, functions) => {
  try {
    // Note: functions / function_call is the legacy parameter pair; newer
    // OpenAI SDK versions also support tools / tool_choice.
    const response = await OpenAI.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: prompt }],
      functions: functions,
      function_call: 'auto',
    });
    const message = response.choices[0].message;
    if (message.function_call) {
      const functionName = message.function_call.name;
      const functionArgs = JSON.parse(message.function_call.arguments);
      return {
        functionName,
        arguments: functionArgs,
        content: message.content,
      };
    }
    return { content: message.content };
  } catch (error) {
    console.error('Error calling function:', error);
    throw error;
  }
};
Embeddings
const createEmbedding = async (text) => {
  try {
    const response = await OpenAI.embeddings.create({
      model: 'text-embedding-ada-002',
      input: text,
    });
    return response.data[0].embedding;
  } catch (error) {
    console.error('Error creating embedding:', error);
    throw error;
  }
};

const findSimilarText = async (query, texts) => {
  const queryEmbedding = await createEmbedding(query);
  const textEmbeddings = await Promise.all(
    texts.map(text => createEmbedding(text))
  );
  // Calculate cosine similarity
  const similarities = textEmbeddings.map((embedding, index) => ({
    text: texts[index],
    similarity: cosineSimilarity(queryEmbedding, embedding),
  }));
  return similarities.sort((a, b) => b.similarity - a.similarity);
};
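The cosineSimilarity helper used in findSimilarText is not defined in the snippet above; a minimal implementation, to be declared alongside findSimilarText, could look like this:
// Cosine similarity between two equal-length embedding vectors
const cosineSimilarity = (a, b) => {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
};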
Best Practices
Performance Optimization
- Cache responses: Cache AI responses to avoid repeated API calls (see the sketch after this list)
- Batch requests: Combine multiple requests where possible
- Use the right model: Choose the model that fits your use case
- Respect rate limits: Stay within each provider's API rate limits
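As a rough illustration of response caching (the in-memory Map and cache key scheme are assumptions, not a Superun API), a thin wrapper around generateText might look like:
const responseCache = new Map();

const cachedGenerateText = async (prompt, model = 'gpt-3.5-turbo') => {
  const key = `${model}:${prompt}`;
  // Return the cached answer for an identical prompt instead of calling the API again
  if (responseCache.has(key)) {
    return responseCache.get(key);
  }
  const result = await generateText(prompt, model);
  responseCache.set(key, result);
  return result;
};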
Cost Management
- Monitor usage: Track API usage and costs
- Optimize prompts: Write efficient prompts that keep token usage down
- Use caching: Cache responses to avoid duplicate requests
- Set limits: Enforce usage limits per user (see the sketch after this list)
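A minimal sketch of a per-user usage cap (the in-memory counter and limit value are illustrative; a real app would persist usage and reset it on a schedule):
const usageByUser = new Map();
const DAILY_LIMIT = 50;

const generateTextForUser = async (userId, prompt) => {
  const count = usageByUser.get(userId) || 0;
  if (count >= DAILY_LIMIT) {
    throw new Error('Daily AI usage limit reached');
  }
  usageByUser.set(userId, count + 1);
  return await generateText(prompt);
};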
Security
- Validate input: Sanitize user input before sending it to the AI (see the sketch after this list)
- Handle errors: Implement proper error handling
- Protect API keys: Never expose API keys in client-side code
- Filter content: Apply content filtering to AI output
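As a sketch of input validation before a prompt reaches the model (the helper name and length cap are illustrative):
const sanitizePrompt = (input, maxLength = 2000) => {
  if (typeof input !== 'string' || !input.trim()) {
    throw new Error('Prompt must be a non-empty string');
  }
  // Trim whitespace and cap length to limit token usage and abuse
  return input.trim().slice(0, maxLength);
};

const safeGenerateText = async (userInput) => {
  return await generateText(sanitizePrompt(userInput));
};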
Troubleshooting
Common Issues
Q: AI responses are inconsistent
A: Try:
- Adjusting the temperature parameter
- Using more specific prompts
- Providing more context in the conversation
Q: You are hitting API rate limits
A: Try:
- Queueing requests
- Exponential backoff (see the sketch below)
- Per-user rate limits
- Caching
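One way to add exponential backoff around any of the helpers above (a sketch; it assumes the thrown error exposes an HTTP status code):
const withRetry = async (fn, maxRetries = 3) => {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      const isRateLimit = error && error.status === 429;
      if (!isRateLimit || attempt === maxRetries) throw error;
      const delay = 1000 * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
};

// Usage
const text = await withRetry(() => generateText('Summarize this article: ...'));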
Q: Responses are too slow
A: Try:
- Using a faster model for simple tasks
- Caching responses
- Shortening prompts
- Streaming long responses
Q: Moderation results are inaccurate
A: Try:
- Fine-tuning the moderation prompt
- Using a different model
- Adding custom rules
- Combining multiple moderation methods
Need Help?
Check our FAQ for common AI integration questions and troubleshooting tips.

