feat: Multi-model AI chat and image generation in Dashboard

This commit is contained in:
Wayne Sutton
2026-01-01 22:00:46 -08:00
parent 4cfbb2588a
commit a9f56d9c04
35 changed files with 2859 additions and 179 deletions

View File

@@ -2,6 +2,8 @@
After forking this repo, update these files with your site information. Choose one of two options:
**Important**: Keep your `fork-config.json` file after configuring. The `sync:discovery` commands will use it to update discovery files (`AGENTS.md`, `CLAUDE.md`, `public/llms.txt`) with your configured values.
---
## Option 1: Automated Script (Recommended)
@@ -16,6 +18,8 @@ cp fork-config.json.example fork-config.json
The file `fork-config.json` is gitignored, so your configuration stays local and is not committed. The `.example` file remains as a template.
**Keep this file**: Even after running `npm run configure`, keep the `fork-config.json` file. Future sync commands will use it to maintain your configuration.
### Step 2: Edit fork-config.json
```json
@@ -1070,7 +1074,7 @@ rightSidebar: true
## AI Chat Configuration
Configure the AI writing assistant powered by Anthropic Claude.
Configure the AI writing assistant. The Dashboard AI Agent supports multiple providers (Anthropic, OpenAI, Google) and includes image generation.
### In fork-config.json
@@ -1079,6 +1083,19 @@ Configure the AI writing assistant powered by Anthropic Claude.
"aiChat": {
"enabledOnWritePage": false,
"enabledOnContent": false
},
"aiDashboard": {
"enableImageGeneration": true,
"defaultTextModel": "claude-sonnet-4-20250514",
"textModels": [
{ "id": "claude-sonnet-4-20250514", "name": "Claude Sonnet 4", "provider": "anthropic" },
{ "id": "gpt-4o", "name": "GPT-4o", "provider": "openai" },
{ "id": "gemini-2.0-flash", "name": "Gemini 2.0 Flash", "provider": "google" }
],
"imageModels": [
{ "id": "gemini-2.0-flash-exp-image-generation", "name": "Nano Banana", "provider": "google" },
{ "id": "imagen-3.0-generate-002", "name": "Nano Banana Pro", "provider": "google" }
]
}
}
```
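For reference, the `aiDashboard` block above could be typed in TypeScript roughly as follows. This is a sketch: the field values are taken from the JSON above, but the type names (`AIDashboardConfig`, `AIModelEntry`) are hypothetical and may not match the actual definitions in `siteConfig.ts`.

```typescript
// Hypothetical TypeScript shape for the aiDashboard config block shown above.
type AIProvider = "anthropic" | "openai" | "google";

interface AIModelEntry {
  id: string;       // provider-specific model ID
  name: string;     // label shown in the model dropdown
  provider: AIProvider;
}

interface AIDashboardConfig {
  enableImageGeneration: boolean;
  defaultTextModel: string;
  textModels: AIModelEntry[];
  imageModels: AIModelEntry[];
}

const aiDashboard: AIDashboardConfig = {
  enableImageGeneration: true,
  defaultTextModel: "claude-sonnet-4-20250514",
  textModels: [
    { id: "claude-sonnet-4-20250514", name: "Claude Sonnet 4", provider: "anthropic" },
    { id: "gpt-4o", name: "GPT-4o", provider: "openai" },
    { id: "gemini-2.0-flash", name: "Gemini 2.0 Flash", provider: "google" },
  ],
  imageModels: [
    { id: "gemini-2.0-flash-exp-image-generation", name: "Nano Banana", provider: "google" },
    { id: "imagen-3.0-generate-002", name: "Nano Banana Pro", provider: "google" },
  ],
};
```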
@@ -1092,14 +1109,36 @@ aiChat: {
enabledOnWritePage: true, // Show AI chat toggle on /write page
enabledOnContent: true, // Allow AI chat on posts/pages via frontmatter
},
aiDashboard: {
enableImageGeneration: true, // Enable image generation tab in Dashboard AI Agent
defaultTextModel: "claude-sonnet-4-20250514", // Default model for chat
textModels: [
{ id: "claude-sonnet-4-20250514", name: "Claude Sonnet 4", provider: "anthropic" },
{ id: "gpt-4o", name: "GPT-4o", provider: "openai" },
{ id: "gemini-2.0-flash", name: "Gemini 2.0 Flash", provider: "google" },
],
imageModels: [
{ id: "gemini-2.0-flash-exp-image-generation", name: "Nano Banana", provider: "google" },
{ id: "imagen-3.0-generate-002", name: "Nano Banana Pro", provider: "google" },
],
},
```
**Environment Variables (Convex):**
- `ANTHROPIC_API_KEY` (required): Your Anthropic API key
| Variable | Provider | Features |
| --- | --- | --- |
| `ANTHROPIC_API_KEY` | Anthropic | Claude Sonnet 4 chat |
| `OPENAI_API_KEY` | OpenAI | GPT-4o chat |
| `GOOGLE_AI_API_KEY` | Google | Gemini 2.0 Flash chat + image generation |
**Optional system prompt variables:**
- `CLAUDE_PROMPT_STYLE`, `CLAUDE_PROMPT_COMMUNITY`, `CLAUDE_PROMPT_RULES` (optional): Split system prompts
- `CLAUDE_SYSTEM_PROMPT` (optional): Single system prompt fallback
**Note:** Only configure the API keys for providers you want to use. If a key is not set, users see a helpful setup message when they try to use that model.
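The lazy validation described above can be sketched like this. This is a simplified version of the pattern; the actual implementation lives in `convex/aiChatActions.ts`.

```typescript
// Sketch of lazy API key lookup: the key is only read when the
// matching provider's model is actually used, so unused providers
// need no key configured at all.
type Provider = "anthropic" | "openai" | "google";

const ENV_VARS: Record<Provider, string> = {
  anthropic: "ANTHROPIC_API_KEY",
  openai: "OPENAI_API_KEY",
  google: "GOOGLE_AI_API_KEY",
};

function getApiKeyForProvider(provider: Provider): string | null {
  // Returns null instead of throwing, so the caller can show a
  // friendly setup message for just the provider that was requested.
  return process.env[ENV_VARS[provider]] ?? null;
}
```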
**Frontmatter Usage:**
Enable AI chat on posts/pages:
@@ -1114,6 +1153,13 @@ aiChat: true
Requires `rightSidebar: true` and `siteConfig.aiChat.enabledOnContent: true`.
**Dashboard AI Agent Features:**
- **Chat Tab:** Multi-model selector with lazy API key validation
- **Image Tab:** AI image generation with aspect ratio selection (1:1, 16:9, 9:16, 4:3, 3:4)
- Images stored in Convex storage with session tracking
- Gallery view of recent generated images
---
## Posts Display Configuration
@@ -1205,11 +1251,15 @@ Update these files:
3. Run `npm run dev` to test locally
4. Deploy to Netlify when ready
**Note**: Keep your `fork-config.json` file. When you run `npm run sync:discovery` or `npm run sync:all`, it reads from `fork-config.json` to update discovery files with your site information.
---
## Syncing Discovery Files
Discovery files (`AGENTS.md` and `public/llms.txt`) can be automatically updated with your current app data.
Discovery files (`AGENTS.md`, `CLAUDE.md`, and `public/llms.txt`) can be automatically updated with your current app data.
**How it works**: The sync:discovery script reads from `fork-config.json` (if it exists) to get your site name, URL, and GitHub info. This ensures your configured values are preserved when updating discovery files.
### Commands


TASK.md
View File

@@ -3,16 +3,29 @@
## To Do
- [ ] fix site config link
- [ ] dashboard agent Nano Banana
- [ ] npm package
## Current Status
v2.5.0 ready. Social footer icons can now display in header navigation.
v2.6.0 ready. Multi-model AI chat and image generation in Dashboard.
## Completed
- [x] Multi-model AI chat and image generation in Dashboard
- [x] AI Agent section with tab-based UI (Chat and Image Generation tabs)
- [x] Multi-model selector for text chat (Claude Sonnet 4, GPT-4o, Gemini 2.0 Flash)
- [x] Lazy API key validation with friendly setup instructions per provider
- [x] Image generation with Nano Banana (gemini-2.0-flash-exp-image-generation) and Nano Banana Pro (imagen-3.0-generate-002)
- [x] Aspect ratio selector for images (1:1, 16:9, 9:16, 4:3, 3:4)
- [x] Generated images stored in Convex storage with session tracking
- [x] New `aiDashboard` configuration in siteConfig.ts
- [x] New `convex/aiImageGeneration.ts` for Gemini image generation
- [x] New `aiGeneratedImages` table in schema for tracking generated images
- [x] Updated aiChatActions.ts with multi-provider support (Anthropic, OpenAI, Google)
- [x] Updated AIChatView.tsx with selectedModel prop
- [x] CSS styles for AI Agent tabs, model selectors, and image display
- [x] Updated files.md, changelog.md, TASK.md, changelog-page.md
- [x] Social footer icons in header navigation
- [x] Added `showInHeader` option to `siteConfig.socialFooter` config
- [x] Exported `platformIcons` from SocialFooter.tsx for reuse

View File

@@ -4,6 +4,49 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [2.6.0] - 2026-01-01
### Added
- Multi-model AI chat support in Dashboard
- Model dropdown selector to choose between Anthropic (Claude Sonnet 4), OpenAI (GPT-4o), and Google (Gemini 2.0 Flash)
- Lazy API key validation: errors only shown when user tries to use a specific model
- Each provider has friendly setup instructions with links to get API keys
- AI Image Generation tab in Dashboard
- Generate images using Gemini models (Nano Banana and Nano Banana Pro)
- Aspect ratio selector (1:1, 16:9, 9:16, 4:3, 3:4)
- Generated images stored in Convex storage with session tracking
- Markdown-rendered error messages with setup instructions
- New `aiDashboard` configuration in siteConfig
- `enableImageGeneration`: Toggle image generation tab
- `defaultTextModel`: Set default AI model for chat
- `textModels`: Configure available text chat models
- `imageModels`: Configure available image generation models
### Technical
- Updated `convex/aiChatActions.ts` to support multiple AI providers
- Added `callAnthropicApi`, `callOpenAIApi`, `callGeminiApi` helper functions
- Added `getProviderFromModel` to determine provider from model ID
- Added `getApiKeyForProvider` for lazy API key retrieval
- Added `getNotConfiguredMessage` for provider-specific setup instructions
- Updated `src/components/AIChatView.tsx` with `selectedModel` prop
- Updated `src/pages/Dashboard.tsx` with new `AIAgentSection`
- Tab-based UI for Chat and Image Generation
- Model dropdowns with provider labels
- Aspect ratio selector for image generation
- Added CSS styles for AI Agent section in `src/styles/global.css`
- `.ai-agent-tabs`, `.ai-agent-tab` for tab navigation
- `.ai-model-selector`, `.ai-model-dropdown` for model selection
- `.ai-aspect-ratio-selector` for aspect ratio options
- `.ai-generated-image`, `.ai-image-error`, `.ai-image-loading` for image display
### Environment Variables
- `ANTHROPIC_API_KEY`: Required for Claude models
- `OPENAI_API_KEY`: Required for GPT-4o
- `GOOGLE_AI_API_KEY`: Required for Gemini text chat and image generation
## [2.5.0] - 2026-01-01
### Added

View File

@@ -15,7 +15,7 @@ authorImage: "/images/authors/markdown.png"
# About This Markdown Framework
An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs.. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify.
An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify.
## How It Works

View File

@@ -309,6 +309,8 @@ Once configuration is complete:
3. **Test locally**: Run `npm run dev` and verify your site name, footer, and metadata
4. **Push to git**: Commit all changes and push to trigger a Netlify rebuild
**Important**: Keep your `fork-config.json` file. The `sync:discovery` and `sync:all` commands read from it to update discovery files (`AGENTS.md`, `CLAUDE.md`, `public/llms.txt`) with your configured values. Without it, these files would revert to placeholder values.
## Existing content
The configuration script only updates site-level settings. It does not modify your markdown content in `content/blog/` or `content/pages/`. Your existing posts and pages remain unchanged.

View File

@@ -1536,7 +1536,7 @@ Content is stored in localStorage only and not synced to the database. Refreshin
## AI Agent chat
The site includes an AI writing assistant (Agent) powered by Anthropic Claude API. Agent can be enabled in two places:
The site includes an AI writing assistant (Agent) that supports multiple AI providers. Agent can be enabled in three places:
**1. Write page (`/write`)**
@@ -1566,11 +1566,39 @@ aiChat: true # Enable Agent in right sidebar
---
```
**3. Dashboard AI Agent (`/dashboard`)**
The Dashboard includes a dedicated AI Agent section with a tab-based UI for Chat and Image Generation.
**Chat Tab features:**
- Multi-model selector: Claude Sonnet 4, GPT-4o, Gemini 2.0 Flash
- Per-session chat history stored in Convex
- Markdown rendering for AI responses
- Copy functionality for AI responses
- Lazy API key validation (errors only shown when user tries to use a specific model)
**Image Tab features:**
- AI image generation with two models:
- Nano Banana (gemini-2.0-flash-exp-image-generation) - Experimental model
- Nano Banana Pro (imagen-3.0-generate-002) - Production model
- Aspect ratio selection: 1:1, 16:9, 9:16, 4:3, 3:4
- Images stored in Convex storage with session tracking
- Gallery view of recent generated images
**Environment variables:**
Agent requires the following Convex environment variables:
Agent requires API keys for the providers you want to use. Set these in Convex environment variables:
| Variable | Provider | Features |
| --- | --- | --- |
| `ANTHROPIC_API_KEY` | Anthropic | Claude Sonnet 4 chat |
| `OPENAI_API_KEY` | OpenAI | GPT-4o chat |
| `GOOGLE_AI_API_KEY` | Google | Gemini 2.0 Flash chat + image generation |
**Optional system prompt variables:**
- `ANTHROPIC_API_KEY` (required): Your Anthropic API key for Claude API access
- `CLAUDE_PROMPT_STYLE` (optional): First part of system prompt
- `CLAUDE_PROMPT_COMMUNITY` (optional): Second part of system prompt
- `CLAUDE_PROMPT_RULES` (optional): Third part of system prompt
@@ -1581,7 +1609,10 @@ Agent requires the following Convex environment variables:
1. Go to [Convex Dashboard](https://dashboard.convex.dev)
2. Select your project
3. Navigate to Settings > Environment Variables
4. Add `ANTHROPIC_API_KEY` with your API key value
4. Add API keys for the providers you want to use:
- `ANTHROPIC_API_KEY` for Claude
- `OPENAI_API_KEY` for GPT-4o
- `GOOGLE_AI_API_KEY` for Gemini and image generation
5. Optionally add system prompt variables (`CLAUDE_PROMPT_STYLE`, etc.)
6. Deploy changes
@@ -1592,11 +1623,11 @@ Agent requires the following Convex environment variables:
- Chat history is stored per-session, per-context in Convex (aiChats table)
- Page content can be provided as context for AI responses
- Chat history limited to last 20 messages for efficiency
- If API key is not set, Agent displays "API key is not set" error message
- API key validation is lazy: errors only appear when you try to use a specific model
**Error handling:**
If `ANTHROPIC_API_KEY` is not configured in Convex environment variables, Agent displays a user-friendly error message: "API key is not set". This helps identify when the API key is missing in production deployments.
If an API key is not configured for a provider, Agent displays a user-friendly setup message with instructions when you try to use that model. Only configure the API keys for providers you want to use.
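Putting the pieces together, the per-model error path can be sketched as follows. This is a simplified illustration, not the actual implementation: the helper names mirror `getProviderFromModel` and `getNotConfiguredMessage` in `convex/aiChatActions.ts`, and the message text here is abbreviated.

```typescript
type Provider = "anthropic" | "openai" | "google";

// Map a model ID prefix to its provider, mirroring getProviderFromModel.
function providerFor(model: string): Provider {
  if (model.startsWith("claude")) return "anthropic";
  if (model.startsWith("gpt")) return "openai";
  if (model.startsWith("gemini")) return "google";
  return "anthropic"; // default fallback
}

// When no key is set for the selected model's provider, return a
// setup message instead of throwing, so the user only sees
// instructions for the model they actually tried to use.
function respond(model: string, env: Record<string, string | undefined>): string {
  const provider = providerFor(model);
  const envVar = {
    anthropic: "ANTHROPIC_API_KEY",
    openai: "OPENAI_API_KEY",
    google: "GOOGLE_AI_API_KEY",
  }[provider];
  if (!env[envVar]) {
    return `**${provider} is not configured.** Set \`${envVar}\` in Convex.`;
  }
  return "(call provider API here)";
}
```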
## Dashboard

View File

@@ -3,10 +3,10 @@ title: "About"
slug: "about"
published: true
order: 2
excerpt: "An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs.."
excerpt: "An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs."
---
An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs.. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify.
An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify.
## What makes it a dev sync system?

View File

@@ -10,6 +10,58 @@ layout: "sidebar"
All notable changes to this project.
![](https://img.shields.io/badge/License-MIT-yellow.svg)
## v2.6.0
Released January 1, 2026
**Multi-model AI chat and image generation in Dashboard**
- AI Agent section with tab-based UI (Chat and Image Generation tabs)
- Multi-model selector for text chat
- Claude Sonnet 4 (Anthropic)
- GPT-4o (OpenAI)
- Gemini 2.0 Flash (Google)
- Lazy API key validation: errors only shown when user tries to use a specific model
- Each provider has friendly setup instructions with links to get API keys
- AI Image Generation tab
- Generate images using Gemini models (Nano Banana and Nano Banana Pro)
- Aspect ratio selector (1:1, 16:9, 9:16, 4:3, 3:4)
- Generated images stored in Convex storage with session tracking
- Markdown-rendered error messages with setup instructions
- New `aiDashboard` configuration in siteConfig
- `enableImageGeneration`: Toggle image generation tab
- `defaultTextModel`: Set default AI model for chat
- `textModels`: Configure available text chat models
- `imageModels`: Configure available image generation models
**Technical details:**
- New file: `convex/aiImageGeneration.ts` for Gemini image generation action
- New table: `aiGeneratedImages` in schema for tracking generated images
- Updated `convex/aiChatActions.ts` with multi-provider support
- Added `callAnthropicApi`, `callOpenAIApi`, `callGeminiApi` helper functions
- Added `getProviderFromModel` to determine provider from model ID
- Added `getApiKeyForProvider` for lazy API key retrieval
- Added `getNotConfiguredMessage` for provider-specific setup instructions
- Updated `src/components/AIChatView.tsx` with `selectedModel` prop
- Updated `src/pages/Dashboard.tsx` with new AI Agent section
- Tab-based UI for Chat and Image Generation
- Model dropdowns with provider labels
- Aspect ratio selector for image generation
- CSS styles for AI Agent section in `src/styles/global.css`
- `.ai-agent-tabs`, `.ai-agent-tab` for tab navigation
- `.ai-model-selector`, `.ai-model-dropdown` for model selection
- `.ai-aspect-ratio-selector` for aspect ratio options
- `.ai-generated-image`, `.ai-image-error`, `.ai-image-loading` for image display
**Environment Variables:**
- `ANTHROPIC_API_KEY`: Required for Claude models
- `OPENAI_API_KEY`: Required for GPT-4o
- `GOOGLE_AI_API_KEY`: Required for Gemini text chat and image generation
Updated files: `convex/aiImageGeneration.ts`, `convex/aiChatActions.ts`, `convex/aiChats.ts`, `convex/schema.ts`, `src/components/AIChatView.tsx`, `src/pages/Dashboard.tsx`, `src/config/siteConfig.ts`, `src/styles/global.css`, `files.md`, `TASK.md`, `changelog.md`, `content/pages/changelog-page.md`
## v2.5.0
Released January 1, 2026

View File

@@ -4,7 +4,7 @@ slug: "docs"
published: true
order: 0
layout: "sidebar"
aiChat: false
aiChat: true
rightSidebar: true
showFooter: true
---
@@ -1096,11 +1096,34 @@ When `requireAuth` is `false`, the dashboard is open access. When `requireAuth`
### AI Agent
- Dedicated AI chat section separate from the Write page
- Uses Anthropic Claude API (requires `ANTHROPIC_API_KEY` in Convex environment)
The Dashboard includes a dedicated AI Agent section with tab-based UI for Chat and Image Generation.
**Chat Tab:**
- Multi-model selector: Claude Sonnet 4, GPT-4o, Gemini 2.0 Flash
- Per-session chat history stored in Convex
- Markdown rendering for AI responses
- Copy functionality for AI responses
- Lazy API key validation (errors only shown when user tries to use a specific model)
**Image Tab:**
- AI image generation with two models:
- Nano Banana (gemini-2.0-flash-exp-image-generation) - Experimental model
- Nano Banana Pro (imagen-3.0-generate-002) - Production model
- Aspect ratio selection: 1:1, 16:9, 9:16, 4:3, 3:4
- Images stored in Convex storage with session tracking
- Gallery view of recent generated images
**Environment Variables (Convex):**
| Variable | Description |
| --- | --- |
| `ANTHROPIC_API_KEY` | Required for Claude Sonnet 4 |
| `OPENAI_API_KEY` | Required for GPT-4o |
| `GOOGLE_AI_API_KEY` | Required for Gemini 2.0 Flash and image generation |
**Note:** Only configure the API keys for models you want to use. If a key is not set, users see a helpful setup message when they try to use that model.
### Newsletter Management

View File

@@ -10,6 +10,7 @@
import type * as aiChatActions from "../aiChatActions.js";
import type * as aiChats from "../aiChats.js";
import type * as aiImageGeneration from "../aiImageGeneration.js";
import type * as contact from "../contact.js";
import type * as contactActions from "../contactActions.js";
import type * as crons from "../crons.js";
@@ -31,6 +32,7 @@ import type {
declare const fullApi: ApiFromModules<{
aiChatActions: typeof aiChatActions;
aiChats: typeof aiChats;
aiImageGeneration: typeof aiImageGeneration;
contact: typeof contact;
contactActions: typeof contactActions;
crons: typeof crons;

View File

@@ -9,8 +9,20 @@ import type {
TextBlockParam,
ImageBlockParam,
} from "@anthropic-ai/sdk/resources/messages/messages";
import OpenAI from "openai";
import { GoogleGenAI, Content } from "@google/genai";
import FirecrawlApp from "@mendable/firecrawl-js";
import type { Id } from "./_generated/dataModel";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";
// Model validator for multi-model support
const modelValidator = v.union(
v.literal("claude-sonnet-4-20250514"),
v.literal("gpt-4o"),
v.literal("gemini-2.0-flash")
);
// Type for model selection
type AIModel = "claude-sonnet-4-20250514" | "gpt-4o" | "gemini-2.0-flash";
// Default system prompt for writing assistant
const DEFAULT_SYSTEM_PROMPT = `You are a helpful writing assistant. Help users write clearly and concisely.
@@ -82,14 +94,229 @@ async function scrapeUrl(url: string): Promise<{
}
}
/**
* Get provider from model ID
*/
function getProviderFromModel(model: AIModel): "anthropic" | "openai" | "google" {
if (model.startsWith("claude")) return "anthropic";
if (model.startsWith("gpt")) return "openai";
if (model.startsWith("gemini")) return "google";
return "anthropic"; // Default fallback
}
/**
* Get API key for a provider, returns null if not configured
*/
function getApiKeyForProvider(provider: "anthropic" | "openai" | "google"): string | null {
switch (provider) {
case "anthropic":
return process.env.ANTHROPIC_API_KEY || null;
case "openai":
return process.env.OPENAI_API_KEY || null;
case "google":
return process.env.GOOGLE_AI_API_KEY || null;
}
}
/**
* Get not configured message for a provider
*/
function getNotConfiguredMessage(provider: "anthropic" | "openai" | "google"): string {
const configs = {
anthropic: {
name: "Claude (Anthropic)",
envVar: "ANTHROPIC_API_KEY",
consoleUrl: "https://console.anthropic.com/",
consoleName: "Anthropic Console",
},
openai: {
name: "GPT (OpenAI)",
envVar: "OPENAI_API_KEY",
consoleUrl: "https://platform.openai.com/api-keys",
consoleName: "OpenAI Platform",
},
google: {
name: "Gemini (Google)",
envVar: "GOOGLE_AI_API_KEY",
consoleUrl: "https://aistudio.google.com/apikey",
consoleName: "Google AI Studio",
},
};
const config = configs[provider];
return (
`**${config.name} is not configured.**\n\n` +
`To enable this model, add your \`${config.envVar}\` to the Convex environment variables.\n\n` +
`**Setup steps:**\n` +
`1. Get an API key from [${config.consoleName}](${config.consoleUrl})\n` +
`2. Add it to Convex: \`npx convex env set ${config.envVar} your-key-here\`\n` +
`3. For production, set it in the [Convex Dashboard](https://dashboard.convex.dev/)\n\n` +
`See the [Convex environment variables docs](https://docs.convex.dev/production/environment-variables) for more details.`
);
}
/**
* Call Anthropic Claude API
*/
async function callAnthropicApi(
apiKey: string,
model: string,
systemPrompt: string,
messages: Array<{
role: "user" | "assistant";
content: string | Array<ContentBlockParam>;
}>
): Promise<string> {
const anthropic = new Anthropic({ apiKey });
const response = await anthropic.messages.create({
model,
max_tokens: 2048,
system: systemPrompt,
messages,
});
const textContent = response.content.find((block) => block.type === "text");
if (!textContent || textContent.type !== "text") {
throw new Error("No text content in Claude response");
}
return textContent.text;
}
/**
* Call OpenAI GPT API
*/
async function callOpenAIApi(
apiKey: string,
model: string,
systemPrompt: string,
messages: Array<{
role: "user" | "assistant";
content: string | Array<ContentBlockParam>;
}>
): Promise<string> {
const openai = new OpenAI({ apiKey });
// Convert messages to OpenAI format
const openaiMessages: ChatCompletionMessageParam[] = [
{ role: "system", content: systemPrompt },
];
for (const msg of messages) {
if (typeof msg.content === "string") {
if (msg.role === "user") {
openaiMessages.push({ role: "user", content: msg.content });
} else {
openaiMessages.push({ role: "assistant", content: msg.content });
}
} else {
// Convert content blocks to OpenAI format
const content: Array<{ type: "text"; text: string } | { type: "image_url"; image_url: { url: string } }> = [];
for (const block of msg.content) {
if (block.type === "text") {
content.push({ type: "text", text: block.text });
} else if (block.type === "image" && "source" in block && block.source.type === "url") {
content.push({ type: "image_url", image_url: { url: block.source.url } });
}
}
if (msg.role === "user") {
openaiMessages.push({
role: "user",
content: content.length === 1 && content[0].type === "text" ? content[0].text : content,
});
} else {
// Assistant messages only support string content in OpenAI
const textContent = content.filter(c => c.type === "text").map(c => (c as { type: "text"; text: string }).text).join("\n");
openaiMessages.push({ role: "assistant", content: textContent });
}
}
}
const response = await openai.chat.completions.create({
model,
max_tokens: 2048,
messages: openaiMessages,
});
const textContent = response.choices[0]?.message?.content;
if (!textContent) {
throw new Error("No text content in OpenAI response");
}
return textContent;
}
/**
* Call Google Gemini API
*/
async function callGeminiApi(
apiKey: string,
model: string,
systemPrompt: string,
messages: Array<{
role: "user" | "assistant";
content: string | Array<ContentBlockParam>;
}>
): Promise<string> {
const ai = new GoogleGenAI({ apiKey });
// Convert messages to Gemini format
const geminiMessages: Content[] = [];
for (const msg of messages) {
const role = msg.role === "assistant" ? "model" : "user";
if (typeof msg.content === "string") {
geminiMessages.push({
role,
parts: [{ text: msg.content }],
});
} else {
// Convert content blocks to Gemini format
const parts: Array<{ text: string } | { inlineData: { mimeType: string; data: string } }> = [];
for (const block of msg.content) {
if (block.type === "text") {
parts.push({ text: block.text });
}
// Note: Gemini handles images differently, would need base64 encoding
// For now, skip image blocks in Gemini
}
if (parts.length > 0) {
geminiMessages.push({ role, parts });
}
}
}
const response = await ai.models.generateContent({
model,
contents: geminiMessages,
config: {
systemInstruction: systemPrompt,
maxOutputTokens: 2048,
},
});
const textContent = response.candidates?.[0]?.content?.parts?.find(
(part: { text?: string }) => part.text
);
if (!textContent || !("text" in textContent)) {
throw new Error("No text content in Gemini response");
}
return textContent.text as string;
}
/**
* Generate AI response for a chat
* Calls Claude API and saves the response
* Supports multiple AI providers: Anthropic, OpenAI, Google
*/
export const generateResponse = action({
args: {
chatId: v.id("aiChats"),
userMessage: v.string(),
model: v.optional(modelValidator),
pageContext: v.optional(v.string()),
attachments: v.optional(
v.array(
@@ -105,17 +332,14 @@ export const generateResponse = action({
},
returns: v.string(),
handler: async (ctx, args) => {
// Get API key - return friendly message if not configured
const apiKey = process.env.ANTHROPIC_API_KEY;
// Use default model if not specified
const selectedModel: AIModel = args.model || "claude-sonnet-4-20250514";
const provider = getProviderFromModel(selectedModel);
// Get API key for the selected provider - lazy check only when model is used
const apiKey = getApiKeyForProvider(provider);
if (!apiKey) {
const notConfiguredMessage =
"**AI chat is not configured on production.**\n\n" +
"To enable AI responses, add your `ANTHROPIC_API_KEY` to the Convex environment variables.\n\n" +
"**Setup steps:**\n" +
"1. Get an API key from [Anthropic Console](https://console.anthropic.com/)\n" +
"2. Add it to Convex: `npx convex env set ANTHROPIC_API_KEY your-key-here`\n" +
"3. For production, set it in the [Convex Dashboard](https://dashboard.convex.dev/)\n\n" +
"See the [Convex environment variables docs](https://docs.convex.dev/production/environment-variables) for more details.";
const notConfiguredMessage = getNotConfiguredMessage(provider);
// Save the message to chat history so it appears in the conversation
await ctx.runMutation(internal.aiChats.addAssistantMessage, {
@@ -172,15 +396,15 @@ export const generateResponse = action({
// Build messages array from chat history (last 20 messages)
const recentMessages = chat.messages.slice(-20);
const claudeMessages: Array<{
const formattedMessages: Array<{
role: "user" | "assistant";
content: string | Array<ContentBlockParam>;
}> = [];
// Convert chat messages to Claude format
// Convert chat messages to provider-agnostic format
for (const msg of recentMessages) {
if (msg.role === "assistant") {
claudeMessages.push({
formattedMessages.push({
role: "assistant",
content: msg.content,
});
@@ -230,7 +454,7 @@ export const generateResponse = action({
}
}
claudeMessages.push({
formattedMessages.push({
role: "user",
content:
contentParts.length === 1 && contentParts[0].type === "text"
@@ -282,7 +506,7 @@ export const generateResponse = action({
}
}
claudeMessages.push({
formattedMessages.push({
role: "user",
content:
newMessageContent.length === 1 && newMessageContent[0].type === "text"
@@ -290,27 +514,26 @@ export const generateResponse = action({
: newMessageContent,
});
// Initialize Anthropic client
const anthropic = new Anthropic({
apiKey,
});
// Call the appropriate AI provider
let assistantMessage: string;
// Call Claude API
const response = await anthropic.messages.create({
model: "claude-sonnet-4-20250514",
max_tokens: 2048,
system: systemPrompt,
messages: claudeMessages,
});
// Extract text content from response
const textContent = response.content.find((block) => block.type === "text");
if (!textContent || textContent.type !== "text") {
throw new Error("No text content in Claude response");
try {
switch (provider) {
case "anthropic":
assistantMessage = await callAnthropicApi(apiKey, selectedModel, systemPrompt, formattedMessages);
break;
case "openai":
assistantMessage = await callOpenAIApi(apiKey, selectedModel, systemPrompt, formattedMessages);
break;
case "google":
assistantMessage = await callGeminiApi(apiKey, selectedModel, systemPrompt, formattedMessages);
break;
}
} catch (error) {
const errorMessage = error instanceof Error ? error.message : "Unknown error";
assistantMessage = `**Error from ${provider}:** ${errorMessage}`;
}
const assistantMessage = textContent.text;
// Save the assistant message to the chat
await ctx.runMutation(internal.aiChats.addAssistantMessage, {
chatId: args.chatId,

View File

@@ -354,3 +354,60 @@ export const getChatsBySession = query({
},
});
/**
* Save generated image metadata (internal - called from action)
*/
export const saveGeneratedImage = internalMutation({
args: {
sessionId: v.string(),
prompt: v.string(),
model: v.string(),
storageId: v.id("_storage"),
mimeType: v.string(),
},
returns: v.id("aiGeneratedImages"),
handler: async (ctx, args) => {
const imageId = await ctx.db.insert("aiGeneratedImages", {
sessionId: args.sessionId,
prompt: args.prompt,
model: args.model,
storageId: args.storageId,
mimeType: args.mimeType,
createdAt: Date.now(),
});
return imageId;
},
});
/**
* Get recent generated images for a session (internal - called from action)
*/
export const getRecentImagesInternal = internalQuery({
args: {
sessionId: v.string(),
limit: v.number(),
},
returns: v.array(
v.object({
_id: v.id("aiGeneratedImages"),
_creationTime: v.number(),
sessionId: v.string(),
prompt: v.string(),
model: v.string(),
storageId: v.id("_storage"),
mimeType: v.string(),
createdAt: v.number(),
})
),
handler: async (ctx, args) => {
const images = await ctx.db
.query("aiGeneratedImages")
.withIndex("by_session", (q) => q.eq("sessionId", args.sessionId))
.order("desc")
.take(args.limit);
return images;
},
});

convex/aiImageGeneration.ts Normal file
View File

@@ -0,0 +1,230 @@
"use node";
import { v } from "convex/values";
import type { Id } from "./_generated/dataModel";
import { action } from "./_generated/server";
import { internal } from "./_generated/api";
// Type for images returned from internal query
type GeneratedImageRecord = {
_id: Id<"aiGeneratedImages">;
_creationTime: number;
sessionId: string;
prompt: string;
model: string;
storageId: Id<"_storage">;
mimeType: string;
createdAt: number;
};
import { GoogleGenAI } from "@google/genai";
// Image model validator
const imageModelValidator = v.union(
v.literal("gemini-2.0-flash-exp-image-generation"),
v.literal("imagen-3.0-generate-002")
);
// Aspect ratio validator
const aspectRatioValidator = v.union(
v.literal("1:1"),
v.literal("16:9"),
v.literal("9:16"),
v.literal("4:3"),
v.literal("3:4")
);
/**
* Generate an image using Gemini's image generation API
* Stores the result in Convex storage and returns metadata
*/
export const generateImage = action({
args: {
sessionId: v.string(),
prompt: v.string(),
model: imageModelValidator,
aspectRatio: v.optional(aspectRatioValidator),
},
returns: v.object({
success: v.boolean(),
storageId: v.optional(v.id("_storage")),
url: v.optional(v.string()),
error: v.optional(v.string()),
}),
handler: async (ctx, args) => {
// Check for API key - return friendly error if not configured
const apiKey = process.env.GOOGLE_AI_API_KEY;
if (!apiKey) {
return {
success: false,
error:
"**Gemini Image Generation is not configured.**\n\n" +
"To use image generation, add your `GOOGLE_AI_API_KEY` to the Convex environment variables.\n\n" +
"**Setup steps:**\n" +
"1. Get an API key from [Google AI Studio](https://aistudio.google.com/apikey)\n" +
"2. Add it to Convex: `npx convex env set GOOGLE_AI_API_KEY your-key-here`\n" +
"3. For production, set it in the [Convex Dashboard](https://dashboard.convex.dev/)\n\n" +
"See the [Convex environment variables docs](https://docs.convex.dev/production/environment-variables) for more details.",
};
}
try {
const ai = new GoogleGenAI({ apiKey });
// Configure generation based on model
let imageBytes: Uint8Array;
let mimeType = "image/png";
if (args.model === "gemini-2.0-flash-exp-image-generation") {
// Gemini Flash experimental image generation
const response = await ai.models.generateContent({
model: args.model,
contents: [{ role: "user", parts: [{ text: args.prompt }] }],
config: {
responseModalities: ["image", "text"],
},
});
// Extract image from response
const parts = response.candidates?.[0]?.content?.parts;
const imagePart = parts?.find(
(part) => {
const inlineData = part.inlineData as { mimeType?: string; data?: string } | undefined;
return inlineData?.mimeType?.startsWith("image/");
}
);
const inlineData = imagePart?.inlineData as { mimeType?: string; data?: string } | undefined;
if (!imagePart || !inlineData || !inlineData.mimeType || !inlineData.data) {
return {
success: false,
error: "No image was generated. Try a different prompt.",
};
}
mimeType = inlineData.mimeType;
imageBytes = base64ToBytes(inlineData.data);
} else {
// Imagen 3.0 model
const response = await ai.models.generateImages({
model: args.model,
prompt: args.prompt,
config: {
numberOfImages: 1,
aspectRatio: args.aspectRatio || "1:1",
},
});
const image = response.generatedImages?.[0];
if (!image || !image.image?.imageBytes) {
return {
success: false,
error: "No image was generated. Try a different prompt.",
};
}
mimeType = "image/png";
imageBytes = base64ToBytes(image.image.imageBytes);
}
// Store the image in Convex storage
const blob = new Blob([imageBytes as BlobPart], { type: mimeType });
const storageId = await ctx.storage.store(blob);
// Get the URL for the stored image
const url = await ctx.storage.getUrl(storageId);
// Save metadata to database
await ctx.runMutation(internal.aiChats.saveGeneratedImage, {
sessionId: args.sessionId,
prompt: args.prompt,
model: args.model,
storageId,
mimeType,
});
return {
success: true,
storageId,
url: url || undefined,
};
} catch (error) {
const errorMessage = error instanceof Error ? error.message : "Unknown error";
// Check for specific API errors
if (errorMessage.includes("quota") || errorMessage.includes("rate")) {
return {
success: false,
error: "**Rate limit exceeded.** Please try again in a few moments.",
};
}
if (errorMessage.includes("safety") || errorMessage.includes("blocked")) {
return {
success: false,
error: "**Image generation blocked.** The prompt may have triggered content safety filters. Try rephrasing your prompt.",
};
}
return {
success: false,
error: `**Image generation failed:** ${errorMessage}`,
};
}
},
});
/**
* Get recent generated images for a session
*/
export const getRecentImages = action({
args: {
sessionId: v.string(),
limit: v.optional(v.number()),
},
returns: v.array(
v.object({
_id: v.id("aiGeneratedImages"),
prompt: v.string(),
model: v.string(),
url: v.union(v.string(), v.null()),
createdAt: v.number(),
})
),
handler: async (ctx, args): Promise<Array<{
_id: Id<"aiGeneratedImages">;
prompt: string;
model: string;
url: string | null;
createdAt: number;
}>> => {
const images: GeneratedImageRecord[] = await ctx.runQuery(internal.aiChats.getRecentImagesInternal, {
sessionId: args.sessionId,
limit: args.limit || 10,
});
// Get URLs for each image
const imagesWithUrls = await Promise.all(
images.map(async (image: GeneratedImageRecord) => ({
_id: image._id,
prompt: image.prompt,
model: image.model,
url: await ctx.storage.getUrl(image.storageId),
createdAt: image.createdAt,
}))
);
return imagesWithUrls;
},
});
/**
* Helper to convert base64 string to Uint8Array
*/
function base64ToBytes(base64: string): Uint8Array {
const binaryString = atob(base64);
const bytes = new Uint8Array(binaryString.length);
for (let i = 0; i < binaryString.length; i++) {
bytes[i] = binaryString.charCodeAt(i);
}
return bytes;
}
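Since this file runs in the Convex Node runtime (`"use node"`), `Buffer.from` offers an equivalent decode in one step; a sketch of the alternative (the hypothetical name `base64ToBytesNode` is for illustration only):

```typescript
// Node-runtime equivalent of base64ToBytes: decode base64 with Buffer,
// then wrap the result in a Uint8Array for use as a BlobPart.
function base64ToBytesNode(base64: string): Uint8Array {
  return new Uint8Array(Buffer.from(base64, "base64"));
}
```

The `atob` loop above works in any JavaScript runtime; the `Buffer` variant is shorter but Node-only.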


@@ -5,7 +5,7 @@ import { rssFeed, rssFullFeed } from "./rss";
const http = httpRouter();
// Site configuration
// Site configuration - update these for your site (or run npm run configure)
const SITE_URL = process.env.SITE_URL || "https://www.markdown.fast";
const SITE_NAME = "markdown sync framework";
@@ -100,7 +100,7 @@ http.route({
site: SITE_NAME,
url: SITE_URL,
description:
"An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs.. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify.",
"An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify.",
posts: posts.map((post: { title: string; slug: string; description: string; date: string; readTime?: string; tags: string[] }) => ({
title: post.title,
slug: post.slug,
@@ -223,7 +223,7 @@ http.route({
site: SITE_NAME,
url: SITE_URL,
description:
"An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs.. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify.",
"An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify.",
exportedAt: new Date().toISOString(),
totalPosts: fullPosts.length,
posts: fullPosts,


@@ -1,11 +1,11 @@
import { httpAction } from "./_generated/server";
import { api } from "./_generated/api";
// Site configuration for RSS feed
// Site configuration for RSS feed - update these for your site (or run npm run configure)
const SITE_URL = process.env.SITE_URL || "https://www.markdown.fast";
const SITE_TITLE = "markdown sync framework";
const SITE_DESCRIPTION =
"An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs.. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify.";
"An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify.";
// Escape XML special characters
function escapeXml(text: string): string {


@@ -149,6 +149,18 @@ export default defineSchema({
.index("by_session_and_context", ["sessionId", "contextId"])
.index("by_session", ["sessionId"]),
// AI generated images from Gemini image generation
aiGeneratedImages: defineTable({
sessionId: v.string(), // Anonymous session ID from localStorage
prompt: v.string(), // User's image prompt
model: v.string(), // Model used: "gemini-2.0-flash-exp-image-generation" or "imagen-3.0-generate-002"
storageId: v.id("_storage"), // Convex storage ID for the generated image
mimeType: v.string(), // Image MIME type: "image/png" or "image/jpeg"
createdAt: v.number(), // Timestamp when image was generated
})
.index("by_session", ["sessionId"])
.index("by_createdAt", ["createdAt"]),
// Newsletter subscribers table
// Stores email subscriptions with unsubscribe tokens
newsletterSubscribers: defineTable({


@@ -35,7 +35,7 @@ A brief description of each file in the codebase.
| File | Description |
| --------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `siteConfig.ts` | Centralized site configuration (name, logo, blog page, posts display with homepage post limit and read more link, featured section with configurable title via featuredTitle, GitHub contributions, nav order, inner page logo settings, hardcoded navigation items for React routes, GitHub repository config for AI service raw URLs, font family configuration, right sidebar configuration, footer configuration with markdown support, social footer configuration, homepage configuration, AI chat configuration, newsletter configuration with admin and notifications, contact form configuration, weekly digest configuration, stats page configuration with public/private toggle, dashboard configuration with optional WorkOS authentication via requireAuth, image lightbox configuration with enabled toggle) |
| `siteConfig.ts` | Centralized site configuration (name, logo, blog page, posts display with homepage post limit and read more link, featured section with configurable title via featuredTitle, GitHub contributions, nav order, inner page logo settings, hardcoded navigation items for React routes, GitHub repository config for AI service raw URLs, font family configuration, right sidebar configuration, footer configuration with markdown support, social footer configuration, homepage configuration, AI chat configuration, aiDashboard configuration with multi-model support for text chat and image generation, newsletter configuration with admin and notifications, contact form configuration, weekly digest configuration, stats page configuration with public/private toggle, dashboard configuration with optional WorkOS authentication via requireAuth, image lightbox configuration with enabled toggle) |
### Pages (`src/pages/`)
@@ -48,7 +48,7 @@ A brief description of each file in the codebase.
| `TagPage.tsx` | Tag archive page displaying posts filtered by a specific tag. Includes view mode toggle (list/cards) with localStorage persistence |
| `AuthorPage.tsx` | Author archive page displaying posts by a specific author. Includes view mode toggle (list/cards) with localStorage persistence. Author name clickable in posts links to this page. |
| `Write.tsx` | Three-column markdown writing page with Cursor docs-style UI, frontmatter reference with copy buttons, theme toggle, font switcher (serif/sans/monospace), localStorage persistence, and optional AI Agent mode (toggleable via siteConfig.aiChat.enabledOnWritePage). When enabled, Agent replaces the textarea with AIChatView component. Includes scroll prevention when switching to Agent mode to prevent page jump. Title changes to "Agent" when in AI chat mode. |
| `Dashboard.tsx` | Centralized dashboard at `/dashboard` for content management and site configuration. Features include: Posts and Pages list views with filtering, search, pagination, items per page selector; Post/Page editor with markdown editor, live preview, draggable/resizable frontmatter sidebar (200px-600px), independent scrolling, download markdown; Write Post/Page sections with full-screen writing interface; AI Agent section (dedicated chat separate from Write page); Newsletter management (all Newsletter Admin features integrated); Content import (Firecrawl UI); Site configuration (Config Generator UI for all siteConfig.ts settings); Index HTML editor; Analytics (real-time stats dashboard); Sync commands UI with buttons for all sync operations; Header sync buttons for quick sync; Dashboard search; Toast notifications; Command modal; Mobile responsive design. Uses Convex queries for real-time data, localStorage for preferences, ReactMarkdown for preview. Optional WorkOS authentication via siteConfig.dashboard.requireAuth. When requireAuth is false, dashboard is open access. When requireAuth is true and WorkOS is configured, dashboard requires login. Shows setup instructions if requireAuth is true but WorkOS is not configured. |
| `Dashboard.tsx` | Centralized dashboard at `/dashboard` for content management and site configuration. Features include: Posts and Pages list views with filtering, search, pagination, items per page selector; Post/Page editor with markdown editor, live preview, draggable/resizable frontmatter sidebar (200px-600px), independent scrolling, download markdown; Write Post/Page sections with full-screen writing interface; AI Agent section with tab-based UI for Chat and Image Generation, multi-model selector (Claude Sonnet 4, GPT-4o, Gemini 2.0 Flash), image generation with Nano Banana models and aspect ratio selection; Newsletter management (all Newsletter Admin features integrated); Content import (Firecrawl UI); Site configuration (Config Generator UI for all siteConfig.ts settings); Index HTML editor; Analytics (real-time stats dashboard); Sync commands UI with buttons for all sync operations; Header sync buttons for quick sync; Dashboard search; Toast notifications; Command modal; Mobile responsive design. Uses Convex queries for real-time data, localStorage for preferences, ReactMarkdown for preview. Optional WorkOS authentication via siteConfig.dashboard.requireAuth. When requireAuth is false, dashboard is open access. When requireAuth is true and WorkOS is configured, dashboard requires login. Shows setup instructions if requireAuth is true but WorkOS is not configured. |
| `Callback.tsx` | OAuth callback handler for WorkOS authentication. Handles redirect from WorkOS after user login, exchanges authorization code for user information, then redirects to dashboard. Only used when WorkOS is configured. |
| `NewsletterAdmin.tsx` | Three-column newsletter admin page for managing subscribers and sending newsletters. Left sidebar with navigation and stats, main area with searchable subscriber list, right sidebar with send newsletter panel and recent sends. Access at /newsletter-admin, configurable via siteConfig.newsletterAdmin. |
@@ -118,7 +118,8 @@ A brief description of each file in the codebase.
| `rss.ts` | RSS feed generation (update SITE_URL/SITE_TITLE when forking, uses www.markdown.fast) |
| `auth.config.ts` | Convex authentication configuration for WorkOS. Defines JWT providers for WorkOS API and user management. Requires WORKOS_CLIENT_ID environment variable in Convex. Optional - only needed if using WorkOS authentication for dashboard. |
| `aiChats.ts` | Queries and mutations for AI chat history (per-session, per-context storage). Handles anonymous session IDs, per-page chat contexts, and message history management. Supports page content as context for AI responses. |
| `aiChatActions.ts` | Anthropic Claude API integration action for AI chat responses. Requires ANTHROPIC_API_KEY environment variable in Convex. Uses claude-sonnet-3-5-20240620 model. System prompt configurable via environment variables (CLAUDE_PROMPT_STYLE, CLAUDE_PROMPT_COMMUNITY, CLAUDE_PROMPT_RULES, or CLAUDE_SYSTEM_PROMPT). Includes error handling for missing API keys with user-friendly error messages. Supports page content context and chat history (last 20 messages). |
| `aiChatActions.ts` | Multi-provider AI chat action supporting Anthropic (Claude Sonnet 4), OpenAI (GPT-4o), and Google (Gemini 2.0 Flash). Requires respective API keys: ANTHROPIC_API_KEY, OPENAI_API_KEY, GOOGLE_AI_API_KEY. Lazy API key validation only shows errors when user attempts to use a specific model. System prompt configurable via environment variables. Supports page content context and chat history (last 20 messages). |
| `aiImageGeneration.ts` | Gemini image generation action using Google AI API. Supports gemini-2.0-flash-exp-image-generation (Nano Banana) and imagen-3.0-generate-002 (Nano Banana Pro) models. Features aspect ratio selection (1:1, 16:9, 9:16, 4:3, 3:4), Convex storage integration, and session-based image tracking. Requires GOOGLE_AI_API_KEY environment variable. |
| `newsletter.ts` | Newsletter mutations and queries: subscribe, unsubscribe, getSubscriberCount, getActiveSubscribers, getAllSubscribers (admin), deleteSubscriber (admin), getNewsletterStats, getPostsForNewsletter, wasPostSent, recordPostSent, scheduleSendPostNewsletter, scheduleSendCustomNewsletter, scheduleSendStatsSummary, getStatsForSummary. |
| `newsletterActions.ts` | Newsletter actions (Node.js runtime): sendPostNewsletter, sendCustomNewsletter, sendWeeklyDigest, notifyNewSubscriber, sendWeeklyStatsSummary. Uses AgentMail SDK for email delivery. Includes markdown-to-HTML conversion for custom emails. |
| `contact.ts` | Contact form mutations and actions: submitContact, sendContactEmail (AgentMail API), markEmailSent. |


@@ -104,6 +104,19 @@
"enabledOnWritePage": false,
"enabledOnContent": false
},
"aiDashboard": {
"enableImageGeneration": true,
"defaultTextModel": "claude-sonnet-4-20250514",
"textModels": [
{ "id": "claude-sonnet-4-20250514", "name": "Claude Sonnet 4", "provider": "anthropic" },
{ "id": "gpt-4o", "name": "GPT-4o", "provider": "openai" },
{ "id": "gemini-2.0-flash", "name": "Gemini 2.0 Flash", "provider": "google" }
],
"imageModels": [
{ "id": "gemini-2.0-flash-exp-image-generation", "name": "Nano Banana", "provider": "google" },
{ "id": "imagen-3.0-generate-002", "name": "Nano Banana Pro", "provider": "google" }
]
},
"newsletter": {
"enabled": false,
"agentmail": {


@@ -12,7 +12,7 @@
<!-- SEO Meta Tags -->
<meta
name="description"
content="An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs.. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify."
content="An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify."
/>
<meta name="author" content="markdown sync publishing framework" />
<meta
@@ -28,7 +28,7 @@
<meta property="og:title" content="markdown sync publishing framework" />
<meta
property="og:description"
content="An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs.. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify."
content="An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify."
/>
<meta property="og:type" content="website" />
<meta property="og:url" content="https://www.markdown.fast/" />
@@ -48,7 +48,7 @@
<meta name="twitter:title" content="markdown sync publishing framework" />
<meta
name="twitter:description"
content="An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs.. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify."
content="An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify."
/>
<meta
name="twitter:image"
@@ -79,7 +79,7 @@
"@type": "WebSite",
"name": "markdown sync framework",
"url": "https://www.markdown.fast",
"description": "An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs.. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify.",
"description": "An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify.",
"author": {
"@type": "Organization",
"name": "markdown sync framework",

package-lock.json (generated, 838 lines): file diff suppressed because it is too large.


@@ -26,6 +26,7 @@
},
"dependencies": {
"@anthropic-ai/sdk": "^0.71.2",
"@google/genai": "^1.0.1",
"@convex-dev/aggregate": "^0.2.0",
"@convex-dev/workos": "^0.0.1",
"@mendable/firecrawl-js": "^1.21.1",


@@ -5,7 +5,7 @@ Type: page
Date: 2026-01-02
---
An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs.. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify.
An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs. Write markdown, sync from the terminal. Your content is instantly available to browsers, LLMs, and AI agents. Built on Convex and Netlify.
## What makes it a dev sync system?


@@ -8,6 +8,58 @@ Date: 2026-01-02
All notable changes to this project.
![](https://img.shields.io/badge/License-MIT-yellow.svg)
## v2.6.0
Released January 1, 2026
**Multi-model AI chat and image generation in Dashboard**
- AI Agent section with tab-based UI (Chat and Image Generation tabs)
- Multi-model selector for text chat
- Claude Sonnet 4 (Anthropic)
- GPT-4o (OpenAI)
- Gemini 2.0 Flash (Google)
- Lazy API key validation: errors only shown when user tries to use a specific model
- Each provider has friendly setup instructions with links to get API keys
- AI Image Generation tab
- Generate images using Gemini models (Nano Banana and Nano Banana Pro)
- Aspect ratio selector (1:1, 16:9, 9:16, 4:3, 3:4)
- Generated images stored in Convex storage with session tracking
- Markdown-rendered error messages with setup instructions
- New `aiDashboard` configuration in siteConfig
- `enableImageGeneration`: Toggle image generation tab
- `defaultTextModel`: Set default AI model for chat
- `textModels`: Configure available text chat models
- `imageModels`: Configure available image generation models
**Technical details:**
- New file: `convex/aiImageGeneration.ts` for Gemini image generation action
- New table: `aiGeneratedImages` in schema for tracking generated images
- Updated `convex/aiChatActions.ts` with multi-provider support
- Added `callAnthropicApi`, `callOpenAIApi`, `callGeminiApi` helper functions
- Added `getProviderFromModel` to determine provider from model ID
- Added `getApiKeyForProvider` for lazy API key retrieval
- Added `getNotConfiguredMessage` for provider-specific setup instructions
- Updated `src/components/AIChatView.tsx` with `selectedModel` prop
- Updated `src/pages/Dashboard.tsx` with new AI Agent section
- Tab-based UI for Chat and Image Generation
- Model dropdowns with provider labels
- Aspect ratio selector for image generation
- CSS styles for AI Agent section in `src/styles/global.css`
- `.ai-agent-tabs`, `.ai-agent-tab` for tab navigation
- `.ai-model-selector`, `.ai-model-dropdown` for model selection
- `.ai-aspect-ratio-selector` for aspect ratio options
- `.ai-generated-image`, `.ai-image-error`, `.ai-image-loading` for image display
**Environment Variables:**
- `ANTHROPIC_API_KEY`: Required for Claude models
- `OPENAI_API_KEY`: Required for GPT-4o
- `GOOGLE_AI_API_KEY`: Required for Gemini text chat and image generation
Updated files: `convex/aiImageGeneration.ts`, `convex/aiChatActions.ts`, `convex/aiChats.ts`, `convex/schema.ts`, `src/components/AIChatView.tsx`, `src/pages/Dashboard.tsx`, `src/config/siteConfig.ts`, `src/styles/global.css`, `files.md`, `TASK.md`, `changelog.md`, `content/pages/changelog-page.md`
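The provider routing and lazy key lookup described above can be sketched as follows. This is a hypothetical illustration based on the helper names in the changelog; the actual implementations in `convex/aiChatActions.ts` may differ:

```typescript
type Provider = "anthropic" | "openai" | "google";

// Map a model ID to its provider by prefix.
function getProviderFromModel(modelId: string): Provider {
  if (modelId.startsWith("claude-")) return "anthropic";
  if (modelId.startsWith("gpt-")) return "openai";
  if (modelId.startsWith("gemini-")) return "google";
  throw new Error(`Unknown model: ${modelId}`);
}

// Lazy lookup: the env var is only read when the user actually
// selects a model from that provider, so a missing key surfaces
// as a per-model setup message instead of a global failure.
function getApiKeyForProvider(provider: Provider): string | undefined {
  const envVar = {
    anthropic: "ANTHROPIC_API_KEY",
    openai: "OPENAI_API_KEY",
    google: "GOOGLE_AI_API_KEY",
  }[provider];
  return process.env[envVar];
}
```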
## v2.5.0
Released January 1, 2026


@@ -1092,11 +1092,34 @@ When `requireAuth` is `false`, the dashboard is open access. When `requireAuth`
### AI Agent
- Dedicated AI chat section separate from the Write page
- Uses Anthropic Claude API (requires `ANTHROPIC_API_KEY` in Convex environment)
The Dashboard includes a dedicated AI Agent section with tab-based UI for Chat and Image Generation.
**Chat Tab:**
- Multi-model selector: Claude Sonnet 4, GPT-4o, Gemini 2.0 Flash
- Per-session chat history stored in Convex
- Markdown rendering for AI responses
- Copy functionality for AI responses
- Lazy API key validation (errors only shown when user tries to use a specific model)
**Image Tab:**
- AI image generation with two models:
- Nano Banana (gemini-2.0-flash-exp-image-generation) - Experimental model
- Nano Banana Pro (imagen-3.0-generate-002) - Production model
- Aspect ratio selection: 1:1, 16:9, 9:16, 4:3, 3:4
- Images stored in Convex storage with session tracking
- Gallery view of recent generated images
**Environment Variables (Convex):**
| Variable | Description |
| --- | --- |
| `ANTHROPIC_API_KEY` | Required for Claude Sonnet 4 |
| `OPENAI_API_KEY` | Required for GPT-4o |
| `GOOGLE_AI_API_KEY` | Required for Gemini 2.0 Flash and image generation |
**Note:** Only configure the API keys for models you want to use. If a key is not set, users see a helpful setup message when they try to use that model.
### Newsletter Management


@@ -303,6 +303,8 @@ Once configuration is complete:
3. **Test locally**: Run `npm run dev` and verify your site name, footer, and metadata
4. **Push to git**: Commit all changes and push to trigger a Netlify rebuild
**Important**: Keep your `fork-config.json` file. The `sync:discovery` and `sync:all` commands read from it to update discovery files (`AGENTS.md`, `CLAUDE.md`, `public/llms.txt`) with your configured values. Without it, these files would revert to placeholder values.
## Existing content
The configuration script only updates site-level settings. It does not modify your markdown content in `content/blog/` or `content/pages/`. Your existing posts and pages remain unchanged.


@@ -46,7 +46,7 @@ This is the homepage index of all published content.
- **[Footer](/raw/footer.md)**
- **[Home Intro](/raw/home-intro.md)**
- **[Docs](/raw/docs.md)**
- **[About](/raw/about.md)** - An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs..
- **[About](/raw/about.md)** - An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs.
- **[Projects](/raw/projects.md)**
- **[Contact](/raw/contact.md)**
- **[Changelog](/raw/changelog.md)**


@@ -1529,7 +1529,7 @@ Content is stored in localStorage only and not synced to the database. Refreshin
## AI Agent chat
The site includes an AI writing assistant (Agent) powered by Anthropic Claude API. Agent can be enabled in two places:
The site includes an AI writing assistant (Agent) that supports multiple AI providers. Agent can be enabled in three places:
**1. Write page (`/write`)**
@@ -1559,11 +1559,39 @@ aiChat: true # Enable Agent in right sidebar
---
```
**3. Dashboard AI Agent (`/dashboard`)**
The Dashboard includes a dedicated AI Agent section with a tab-based UI for Chat and Image Generation.
**Chat Tab features:**
- Multi-model selector: Claude Sonnet 4, GPT-4o, Gemini 2.0 Flash
- Per-session chat history stored in Convex
- Markdown rendering for AI responses
- Copy functionality for AI responses
- Lazy API key validation (errors only shown when user tries to use a specific model)
**Image Tab features:**
- AI image generation with two models:
- Nano Banana (gemini-2.0-flash-exp-image-generation) - Experimental model
- Nano Banana Pro (imagen-3.0-generate-002) - Production model
- Aspect ratio selection: 1:1, 16:9, 9:16, 4:3, 3:4
- Images stored in Convex storage with session tracking
- Gallery view of recent generated images
**Environment variables:**
Agent requires the following Convex environment variables:
Agent requires API keys for the providers you want to use. Set these in Convex environment variables:
| Variable | Provider | Features |
| --- | --- | --- |
| `ANTHROPIC_API_KEY` | Anthropic | Claude Sonnet 4 chat |
| `OPENAI_API_KEY` | OpenAI | GPT-4o chat |
| `GOOGLE_AI_API_KEY` | Google | Gemini 2.0 Flash chat + image generation |
**Optional system prompt variables:**
- `ANTHROPIC_API_KEY` (required): Your Anthropic API key for Claude API access
- `CLAUDE_PROMPT_STYLE` (optional): First part of system prompt
- `CLAUDE_PROMPT_COMMUNITY` (optional): Second part of system prompt
- `CLAUDE_PROMPT_RULES` (optional): Third part of system prompt
@@ -1574,7 +1602,10 @@ Agent requires the following Convex environment variables:
1. Go to [Convex Dashboard](https://dashboard.convex.dev)
2. Select your project
3. Navigate to Settings > Environment Variables
4. Add `ANTHROPIC_API_KEY` with your API key value
4. Add API keys for the providers you want to use:
- `ANTHROPIC_API_KEY` for Claude
- `OPENAI_API_KEY` for GPT-4o
- `GOOGLE_AI_API_KEY` for Gemini and image generation
5. Optionally add system prompt variables (`CLAUDE_PROMPT_STYLE`, etc.)
6. Deploy changes
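As an alternative to the Convex Dashboard, the same keys can be set from the terminal with the Convex CLI (the values below are placeholders):

```shell
# Set only the keys for the providers you plan to use
npx convex env set ANTHROPIC_API_KEY sk-ant-your-key-here
npx convex env set OPENAI_API_KEY sk-your-key-here
npx convex env set GOOGLE_AI_API_KEY your-google-ai-key-here
```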
@@ -1585,11 +1616,11 @@ Agent requires the following Convex environment variables:
- Chat history is stored per-session, per-context in Convex (aiChats table)
- Page content can be provided as context for AI responses
- Chat history limited to last 20 messages for efficiency
- If API key is not set, Agent displays "API key is not set" error message
- API key validation is lazy: errors only appear when you try to use a specific model
**Error handling:**
If `ANTHROPIC_API_KEY` is not configured in Convex environment variables, Agent displays a user-friendly error message: "API key is not set". This helps identify when the API key is missing in production deployments.
If an API key is not configured for a provider, Agent displays a user-friendly setup message with instructions when you try to use that model. Only configure the API keys for providers you want to use.
## Dashboard


@@ -440,12 +440,14 @@ function updatePostTsx(config: ForkConfig): void {
console.log("\nUpdating src/pages/Post.tsx...");
updateFile("src/pages/Post.tsx", [
// Match any existing SITE_URL value (https://...)
{
search: /const SITE_URL = "https:\/\/markdowncms\.netlify\.app";/,
search: /const SITE_URL = "https:\/\/[^"]+";/,
replace: `const SITE_URL = "${config.siteUrl}";`,
},
// Match any existing SITE_NAME value
{
search: /const SITE_NAME = "markdown sync framework";/,
search: /const SITE_NAME = "[^"]+";/,
replace: `const SITE_NAME = "${config.siteName}";`,
},
]);
@@ -456,22 +458,31 @@ function updateConvexHttp(config: ForkConfig): void {
console.log("\nUpdating convex/http.ts...");
updateFile("convex/http.ts", [
// Match any existing SITE_URL value with process.env fallback
{
search: /const SITE_URL = process\.env\.SITE_URL \|\| "https:\/\/markdowncms\.netlify\.app";/,
search: /const SITE_URL = process\.env\.SITE_URL \|\| "https:\/\/[^"]+";/,
replace: `const SITE_URL = process.env.SITE_URL || "${config.siteUrl}";`,
},
// Match any existing SITE_NAME value (line 10)
{
search: /const SITE_NAME = "markdown sync framework";/,
search: /const SITE_NAME = "[^"]+";/,
replace: `const SITE_NAME = "${config.siteName}";`,
},
// Match any existing siteUrl in generateMetaHtml function
{
search: /const siteUrl = process\.env\.SITE_URL \|\| "https:\/\/markdowncms\.netlify\.app";/,
search: /const siteUrl = process\.env\.SITE_URL \|\| "https:\/\/[^"]+";/,
replace: `const siteUrl = process.env.SITE_URL || "${config.siteUrl}";`,
},
// Match any existing siteName in generateMetaHtml function
{
search: /const siteName = "markdown sync framework";/,
search: /const siteName = "[^"]+";/,
replace: `const siteName = "${config.siteName}";`,
},
// Update the description in API responses
{
search: /"An open-source publishing framework[^"]*"/g,
replace: `"${config.siteDescription}"`,
},
]);
}
@@ -480,14 +491,17 @@ function updateConvexRss(config: ForkConfig): void {
console.log("\nUpdating convex/rss.ts...");
updateFile("convex/rss.ts", [
// Match any existing SITE_URL value with process.env fallback
{
search: /const SITE_URL = process\.env\.SITE_URL \|\| "https:\/\/markdowncms\.netlify\.app";/,
search: /const SITE_URL = process\.env\.SITE_URL \|\| "https:\/\/[^"]+";/,
replace: `const SITE_URL = process.env.SITE_URL || "${config.siteUrl}";`,
},
// Match any existing SITE_TITLE value
{
search: /const SITE_TITLE = "markdown sync framework";/,
search: /const SITE_TITLE = "[^"]+";/,
replace: `const SITE_TITLE = "${config.siteName}";`,
},
// Match any existing SITE_DESCRIPTION value (multiline)
{
search: /const SITE_DESCRIPTION =\s*"[^"]+";/,
replace: `const SITE_DESCRIPTION =\n "${config.siteDescription}";`,
@@ -500,89 +514,94 @@ function updateIndexHtml(config: ForkConfig): void {
console.log("\nUpdating index.html...");
const replacements: Array<{ search: string | RegExp; replace: string }> = [
// Meta description
// Meta description (match any content)
{
search: /<meta\s*name="description"\s*content="[^"]*"\s*\/>/,
replace: `<meta\n name="description"\n content="${config.siteDescription}"\n />`,
},
// Meta author
// Meta author (match any content)
{
search: /<meta name="author" content="[^"]*" \/>/,
replace: `<meta name="author" content="${config.siteName}" />`,
},
// Open Graph title
// Open Graph title (match any content)
{
search: /<meta property="og:title" content="[^"]*" \/>/,
replace: `<meta property="og:title" content="${config.siteName}" />`,
},
// Open Graph description
// Open Graph description (match any content)
{
search: /<meta\s*property="og:description"\s*content="[^"]*"\s*\/>/,
replace: `<meta\n property="og:description"\n content="${config.siteDescription}"\n />`,
},
// Open Graph URL
// Open Graph URL (match any https URL)
{
search: /<meta property="og:url" content="https:\/\/markdowncms\.netlify\.app\/" \/>/,
search: /<meta property="og:url" content="https:\/\/[^"]*" \/>/,
replace: `<meta property="og:url" content="${config.siteUrl}/" />`,
},
// Open Graph site name
// Open Graph site name (match any content)
{
search: /<meta property="og:site_name" content="[^"]*" \/>/,
search: /<meta property="og:site_name" content="[^"]*"\s*\/>/,
replace: `<meta property="og:site_name" content="${config.siteName}" />`,
},
// Open Graph image
// Open Graph site name with newline formatting
{
search: /<meta\s*property="og:image"\s*content="https:\/\/markdowncms\.netlify\.app[^"]*"\s*\/>/,
replace: `<meta\n property="og:image"\n content="${config.siteUrl}/images/og-default.svg"\n />`,
search: /<meta\s*property="og:site_name"\s*content="[^"]*"\s*>/,
replace: `<meta\n property="og:site_name"\n content="${config.siteName}"\n >`,
},
// Twitter domain
// Open Graph image (match any https URL)
{
search: /<meta\s*property="og:image"\s*content="https:\/\/[^"]*"\s*\/>/,
replace: `<meta\n property="og:image"\n content="${config.siteUrl}/images/og-default.png"\n />`,
},
// Twitter domain (match any domain)
{
search: /<meta property="twitter:domain" content="[^"]*" \/>/,
replace: `<meta property="twitter:domain" content="${config.siteDomain}" />`,
},
// Twitter URL
// Twitter URL (match any https URL)
{
search: /<meta property="twitter:url" content="https:\/\/markdowncms\.netlify\.app\/" \/>/,
search: /<meta property="twitter:url" content="https:\/\/[^"]*" \/>/,
replace: `<meta property="twitter:url" content="${config.siteUrl}/" />`,
},
// Twitter title
// Twitter title (match any content)
{
search: /<meta name="twitter:title" content="[^"]*" \/>/,
replace: `<meta name="twitter:title" content="${config.siteName}" />`,
},
// Twitter description
// Twitter description (match any content)
{
search: /<meta\s*name="twitter:description"\s*content="[^"]*"\s*\/>/,
replace: `<meta\n name="twitter:description"\n content="${config.siteDescription}"\n />`,
},
// Twitter image
// Twitter image (match any https URL)
{
search: /<meta\s*name="twitter:image"\s*content="https:\/\/markdowncms\.netlify\.app[^"]*"\s*\/>/,
replace: `<meta\n name="twitter:image"\n content="${config.siteUrl}/images/og-default.svg"\n />`,
search: /<meta\s*name="twitter:image"\s*content="https:\/\/[^"]*"\s*\/>/,
replace: `<meta\n name="twitter:image"\n content="${config.siteUrl}/images/og-default.png"\n />`,
},
// JSON-LD name
// JSON-LD name (match any value)
{
search: /"name": "markdown sync framework"/g,
replace: `"name": "${config.siteName}"`,
search: /"name": "[^"]+",\s*\n\s*"url":/g,
replace: `"name": "${config.siteName}",\n "url":`,
},
// JSON-LD URL
// JSON-LD URL (match any https URL)
{
search: /"url": "https:\/\/markdowncms\.netlify\.app"/g,
search: /"url": "https:\/\/[^"]+"/g,
replace: `"url": "${config.siteUrl}"`,
},
// JSON-LD description
// JSON-LD description (match any content)
{
search: /"description": "An open-source publishing framework[^"]*"/,
search: /"description": "[^"]+"/,
replace: `"description": "${config.siteDescription}"`,
},
// JSON-LD search target
// JSON-LD search target (match any URL)
{
search: /"target": "https:\/\/markdowncms\.netlify\.app\/\?q=\{search_term_string\}"/,
search: /"target": "https:\/\/[^"]+\/\?q=\{search_term_string\}"/,
replace: `"target": "${config.siteUrl}/?q={search_term_string}"`,
},
// Page title
// Page title (match any title content)
{
search: /<title>markdown "sync" framework<\/title>/,
search: /<title>[^<]+<\/title>/,
replace: `<title>${config.siteTitle}</title>`,
},
];
@@ -733,25 +752,30 @@ function updateOpenApiYaml(config: ForkConfig): void {
const githubUrl = `https://github.com/${config.githubUsername}/${config.githubRepo}`;
updateFile("public/openapi.yaml", [
// Match any title ending with API
{
search: /title: markdown sync framework API/,
search: /title: .+ API/,
replace: `title: ${config.siteName} API`,
},
// Match any GitHub contact URL
{
search: /url: https:\/\/github\.com\/waynesutton\/markdown-site/,
search: /url: https:\/\/github\.com\/[^\/]+\/[^\s]+/,
replace: `url: ${githubUrl}`,
},
// Match any server URL (production server line)
{
search: /- url: https:\/\/markdowncms\.netlify\.app/,
replace: `- url: ${config.siteUrl}`,
search: /- url: https:\/\/[^\s]+\n\s+description: Production server/,
replace: `- url: ${config.siteUrl}\n description: Production server`,
},
// Match any example site name
{
search: /example: markdown sync framework/g,
replace: `example: ${config.siteName}`,
search: /example: .+\n\s+url:/g,
replace: `example: ${config.siteName}\n url:`,
},
// Match any example URL (for site URL)
{
search: /example: https:\/\/markdowncms\.netlify\.app/g,
replace: `example: ${config.siteUrl}`,
search: /example: https:\/\/[^\s]+\n\s+posts:/,
replace: `example: ${config.siteUrl}\n posts:`,
},
]);
}
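The generalized patterns above replace hard-coded defaults with catch-all regexes, so the script keeps working on re-runs after the values have already been customized. A minimal self-contained sketch of that approach (hypothetical `config` values, simplified from the `updateFile` helper used in this commit):

```typescript
// Sketch: generalized search/replace so re-running still matches edited values.
const config = { siteUrl: "https://example.com", siteName: "My Site" }; // hypothetical

const source = [
  'const SITE_URL = "https://markdowncms.netlify.app";',
  'const SITE_NAME = "old name";',
].join("\n");

const replacements: Array<{ search: RegExp; replace: string }> = [
  // Match any https URL, not just the original default
  { search: /const SITE_URL = "https:\/\/[^"]+";/, replace: `const SITE_URL = "${config.siteUrl}";` },
  // Match any existing name
  { search: /const SITE_NAME = "[^"]+";/, replace: `const SITE_NAME = "${config.siteName}";` },
];

let result = source;
for (const { search, replace } of replacements) {
  result = result.replace(search, replace);
}
// result now carries the configured values regardless of what was there before
```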

View File

@@ -2,13 +2,17 @@
/**
* Discovery Files Sync Script
*
* Reads siteConfig.ts and Convex data to update discovery files.
* Reads fork-config.json (if available), siteConfig.ts, and Convex data to update discovery files.
* Run with: npm run sync:discovery (dev) or npm run sync:discovery:prod (prod)
*
* This script updates:
* - AGENTS.md (project overview and current status sections)
* - CLAUDE.md (current status section for Claude Code)
* - public/llms.txt (site info, API endpoints, GitHub links)
*
* IMPORTANT: If fork-config.json exists, it will be used as the source of truth.
* This ensures that after running `npm run configure`, subsequent sync:discovery
* commands will use your configured values.
*/
import fs from "fs";
@@ -33,12 +37,48 @@ const PROJECT_ROOT = process.cwd();
const PUBLIC_DIR = path.join(PROJECT_ROOT, "public");
const ROOT_DIR = PROJECT_ROOT;
// Fork config interface (matches fork-config.json structure)
interface ForkConfig {
siteName: string;
siteTitle: string;
siteDescription: string;
siteUrl: string;
siteDomain: string;
githubUsername: string;
githubRepo: string;
contactEmail?: string;
bio?: string;
gitHubRepoConfig?: {
owner: string;
repo: string;
branch: string;
contentPath: string;
};
}
// Load fork-config.json if it exists
function loadForkConfig(): ForkConfig | null {
try {
const configPath = path.join(PROJECT_ROOT, "fork-config.json");
if (fs.existsSync(configPath)) {
const content = fs.readFileSync(configPath, "utf-8");
const config = JSON.parse(content) as ForkConfig;
console.log("Using configuration from fork-config.json");
return config;
}
} catch (error) {
console.warn("Could not load fork-config.json, falling back to siteConfig.ts");
}
return null;
}
// Site config data structure
interface SiteConfigData {
name: string;
title: string;
bio: string;
description?: string;
siteUrl?: string; // Added to pass URL from fork-config.json
gitHubRepo?: {
owner: string;
repo: string;
@@ -47,8 +87,39 @@ interface SiteConfigData {
};
}
// Load site config from siteConfig.ts using regex
// Cached fork config
let cachedForkConfig: ForkConfig | null | undefined = undefined;
// Get fork config (cached)
function getForkConfig(): ForkConfig | null {
if (cachedForkConfig === undefined) {
cachedForkConfig = loadForkConfig();
}
return cachedForkConfig;
}
// Load site config - prioritizes fork-config.json over siteConfig.ts
function loadSiteConfig(): SiteConfigData {
// First try fork-config.json
const forkConfig = getForkConfig();
if (forkConfig) {
return {
name: forkConfig.siteName,
title: forkConfig.siteTitle,
bio: forkConfig.bio || forkConfig.siteDescription,
description: forkConfig.siteDescription,
siteUrl: forkConfig.siteUrl,
gitHubRepo: forkConfig.gitHubRepoConfig || {
owner: forkConfig.githubUsername,
repo: forkConfig.githubRepo,
branch: "main",
contentPath: "public/raw",
},
};
}
// Fall back to siteConfig.ts
console.log("No fork-config.json found, reading from siteConfig.ts");
try {
const configPath = path.join(
PROJECT_ROOT,
@@ -94,14 +165,14 @@ function loadSiteConfig(): SiteConfigData {
: undefined;
return {
name: nameMatch?.[1] || "markdown sync framework",
title: titleMatch?.[1] || "markdown sync framework",
name: nameMatch?.[1] || "Your Site Name",
title: titleMatch?.[1] || "Your Site Title",
bio:
bioMatch?.[1] ||
"An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs..",
"Your site description here.",
description:
bioMatch?.[1] ||
"An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs..",
"Your site description here.",
gitHubRepo,
};
}
@@ -110,30 +181,51 @@ function loadSiteConfig(): SiteConfigData {
}
return {
name: "markdown sync framework",
title: "markdown sync framework",
bio: "An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs..",
description:
"An open-source publishing framework built for AI agents and developers to ship websites, docs, or blogs..",
name: "Your Site Name",
title: "Your Site Title",
bio: "Your site description here.",
description: "Your site description here.",
};
}
// Get site URL from environment or config
function getSiteUrl(): string {
return (
process.env.SITE_URL || process.env.VITE_SITE_URL || "https://markdown.fast"
);
// Get site URL from fork-config.json, environment, or siteConfig
function getSiteUrl(siteConfig?: SiteConfigData): string {
// 1. Check fork-config.json (via siteConfig)
if (siteConfig?.siteUrl) {
return siteConfig.siteUrl;
}
// 2. Check fork-config.json directly
const forkConfig = getForkConfig();
if (forkConfig?.siteUrl) {
return forkConfig.siteUrl;
}
// 3. Check environment variables
if (process.env.SITE_URL) {
return process.env.SITE_URL;
}
if (process.env.VITE_SITE_URL) {
return process.env.VITE_SITE_URL;
}
// 4. Return placeholder (user should configure)
return "https://yoursite.example.com";
}
// Build GitHub URL from repo config or fallback
// Build GitHub URL from repo config or fork-config.json
function getGitHubUrl(siteConfig: SiteConfigData): string {
if (siteConfig.gitHubRepo) {
return `https://github.com/${siteConfig.gitHubRepo.owner}/${siteConfig.gitHubRepo.repo}`;
}
return (
process.env.GITHUB_REPO_URL ||
"https://github.com/waynesutton/markdown-site"
);
// Check fork-config.json directly
const forkConfig = getForkConfig();
if (forkConfig) {
return `https://github.com/${forkConfig.githubUsername}/${forkConfig.githubRepo}`;
}
// Check environment variable
if (process.env.GITHUB_REPO_URL) {
return process.env.GITHUB_REPO_URL;
}
// Return placeholder
return "https://github.com/yourusername/your-repo";
}
// Update CLAUDE.md with current status
@@ -326,9 +418,9 @@ async function syncDiscoveryFiles() {
// Initialize Convex client
const client = new ConvexHttpClient(convexUrl);
// Load site configuration
// Load site configuration (uses fork-config.json if available)
const siteConfig = loadSiteConfig();
const siteUrl = getSiteUrl();
const siteUrl = getSiteUrl(siteConfig);
console.log(`Site: ${siteConfig.name}`);
console.log(`Title: ${siteConfig.title}`);

View File

@@ -33,6 +33,7 @@ interface AIChatViewProps {
pageContent?: string; // Optional page content for context
onClose?: () => void; // Optional close handler
hideAttachments?: boolean; // Hide image/link attachment buttons (for right sidebar)
selectedModel?: string; // Selected AI model ID (e.g., "claude-sonnet-4-20250514", "gpt-4o", "gemini-2.0-flash")
}
export default function AIChatView({
@@ -40,6 +41,7 @@ export default function AIChatView({
pageContent,
onClose,
hideAttachments = false,
selectedModel,
}: AIChatViewProps) {
// State
const [inputValue, setInputValue] = useState("");
@@ -334,6 +336,7 @@ export default function AIChatView({
await generateResponse({
chatId,
userMessage: message || "",
model: selectedModel as "claude-sonnet-4-20250514" | "gpt-4o" | "gemini-2.0-flash" | undefined,
pageContext: hasLoadedContext ? undefined : pageContent,
attachments:
attachmentsToSend.length > 0 ? attachmentsToSend : undefined,

View File

@@ -192,25 +192,26 @@ export default function Layout({ children }: LayoutProps) {
{/* Desktop search and theme (visible on desktop only) */}
<div className="desktop-controls desktop-only">
{/* Social icons in header (if enabled) */}
{siteConfig.socialFooter?.enabled && siteConfig.socialFooter?.showInHeader && (
<div className="header-social-links">
{siteConfig.socialFooter.socialLinks.map((link) => {
const IconComponent = platformIcons[link.platform];
return (
<a
key={link.platform}
href={link.url}
target="_blank"
rel="noopener noreferrer"
className="header-social-link"
aria-label={`Follow on ${link.platform}`}
>
<IconComponent size={18} weight="regular" />
</a>
);
})}
</div>
)}
{siteConfig.socialFooter?.enabled &&
siteConfig.socialFooter?.showInHeader && (
<div className="header-social-links">
{siteConfig.socialFooter.socialLinks.map((link) => {
const IconComponent = platformIcons[link.platform];
return (
<a
key={link.platform}
href={link.url}
target="_blank"
rel="noopener noreferrer"
className="header-social-link"
aria-label={`Follow on ${link.platform}`}
>
<IconComponent size={18} weight="regular" />
</a>
);
})}
</div>
)}
{/* Search button with icon */}
<button
onClick={openSearch}

View File

@@ -114,6 +114,22 @@ export interface AIChatConfig {
enabledOnContent: boolean; // Allow AI chat on posts/pages via frontmatter aiChat: true
}
// AI Model configuration for Dashboard multi-model support
export interface AIModelOption {
id: string; // Model identifier (e.g., "claude-sonnet-4-20250514", "gpt-4o")
name: string; // Display name (e.g., "Claude Sonnet 4", "GPT-4o")
provider: "anthropic" | "openai" | "google"; // Provider for the model
}
// AI Dashboard configuration
// Controls multi-model AI chat and image generation in the Dashboard
export interface AIDashboardConfig {
enableImageGeneration: boolean; // Enable image generation tab
defaultTextModel: string; // Default model ID for text chat
textModels: AIModelOption[]; // Available text models
imageModels: AIModelOption[]; // Available image generation models
}
// Newsletter signup placement configuration
// Controls where signup forms appear on the site
export interface NewsletterSignupPlacement {
@@ -323,6 +339,9 @@ export interface SiteConfig {
// Image lightbox configuration (optional)
imageLightbox?: ImageLightboxConfig;
// AI Dashboard configuration (optional)
aiDashboard?: AIDashboardConfig;
}
// Default site configuration
@@ -645,6 +664,46 @@ export const siteConfig: SiteConfig = {
imageLightbox: {
enabled: true, // Set to false to disable image lightbox
},
// AI Dashboard configuration
// Multi-model AI chat and image generation in the Dashboard
// Requires API keys in Convex environment variables:
// - ANTHROPIC_API_KEY for Claude models
// - OPENAI_API_KEY for OpenAI models
// - GOOGLE_AI_API_KEY for Gemini models (chat and image generation)
aiDashboard: {
enableImageGeneration: true, // Enable image generation tab
defaultTextModel: "claude-sonnet-4-20250514", // Default model for text chat
textModels: [
{
id: "claude-sonnet-4-20250514",
name: "Claude Sonnet 4",
provider: "anthropic",
},
{
id: "gpt-4o",
name: "GPT-4o",
provider: "openai",
},
{
id: "gemini-2.0-flash",
name: "Gemini 2.0 Flash",
provider: "google",
},
],
imageModels: [
{
id: "gemini-2.0-flash-exp-image-generation",
name: "Nano Banana",
provider: "google",
},
{
id: "imagen-3.0-generate-002",
name: "Nano Banana Pro",
provider: "google",
},
],
},
};
// Export the config as default for easy importing

View File

@@ -1,6 +1,6 @@
import { useState, useCallback, useMemo, useEffect, useRef } from "react";
import { Link } from "react-router-dom";
import { useQuery, useMutation } from "convex/react";
import { useQuery, useMutation, useAction } from "convex/react";
import { api } from "../../convex/_generated/api";
import type { Id } from "../../convex/_generated/dataModel";
import { useTheme } from "../context/ThemeContext";
@@ -52,6 +52,10 @@ import {
ClockCounterClockwise,
TrendUp,
SidebarSimple,
Image,
ChatText,
SpinnerGap,
CaretDown,
} from "@phosphor-icons/react";
import siteConfig from "../config/siteConfig";
import AIChatView from "../components/AIChatView";
@@ -2383,9 +2387,226 @@ published: false
}
function AIAgentSection() {
const [activeTab, setActiveTab] = useState<"chat" | "image">("chat");
const [selectedTextModel, setSelectedTextModel] = useState(
siteConfig.aiDashboard?.defaultTextModel || "claude-sonnet-4-20250514"
);
const [selectedImageModel, setSelectedImageModel] = useState(
siteConfig.aiDashboard?.imageModels?.[0]?.id || "gemini-2.0-flash-exp-image-generation"
);
const [aspectRatio, setAspectRatio] = useState<"1:1" | "16:9" | "9:16" | "4:3" | "3:4">("1:1");
const [imagePrompt, setImagePrompt] = useState("");
const [isGeneratingImage, setIsGeneratingImage] = useState(false);
const [generatedImage, setGeneratedImage] = useState<{ url: string; prompt: string } | null>(null);
const [imageError, setImageError] = useState<string | null>(null);
const [showImageModelDropdown, setShowImageModelDropdown] = useState(false);
const [showTextModelDropdown, setShowTextModelDropdown] = useState(false);
const generateImage = useAction(api.aiImageGeneration.generateImage);
const textModels = siteConfig.aiDashboard?.textModels || [
{ id: "claude-sonnet-4-20250514", name: "Claude Sonnet 4", provider: "anthropic" as const },
];
const imageModels = siteConfig.aiDashboard?.imageModels || [
{ id: "gemini-2.0-flash-exp-image-generation", name: "Nano Banana", provider: "google" as const },
];
const enableImageGeneration = siteConfig.aiDashboard?.enableImageGeneration ?? true;
const handleGenerateImage = async () => {
if (!imagePrompt.trim() || isGeneratingImage) return;
setIsGeneratingImage(true);
setImageError(null);
setGeneratedImage(null);
try {
const sessionId = localStorage.getItem("ai_chat_session_id") || crypto.randomUUID();
localStorage.setItem("ai_chat_session_id", sessionId); // persist so later calls reuse the same session
const result = await generateImage({
sessionId,
prompt: imagePrompt,
model: selectedImageModel as "gemini-2.0-flash-exp-image-generation" | "imagen-3.0-generate-002",
aspectRatio,
});
if (result.success && result.url) {
setGeneratedImage({ url: result.url, prompt: imagePrompt });
setImagePrompt("");
} else if (result.error) {
setImageError(result.error);
}
} catch (error) {
setImageError(error instanceof Error ? error.message : "Failed to generate image");
} finally {
setIsGeneratingImage(false);
}
};
const selectedTextModelName = textModels.find(m => m.id === selectedTextModel)?.name || "Claude Sonnet 4";
const selectedImageModelName = imageModels.find(m => m.id === selectedImageModel)?.name || "Nano Banana";
return (
<div className="dashboard-ai-section">
<AIChatView contextId="dashboard-agent" />
{/* Tabs */}
<div className="ai-agent-tabs">
<button
className={`ai-agent-tab ${activeTab === "chat" ? "active" : ""}`}
onClick={() => setActiveTab("chat")}
>
<ChatText size={18} weight="bold" />
<span>Chat</span>
</button>
{enableImageGeneration && (
<button
className={`ai-agent-tab ${activeTab === "image" ? "active" : ""}`}
onClick={() => setActiveTab("image")}
>
<Image size={18} weight="bold" />
<span>Image</span>
</button>
)}
</div>
{/* Chat Tab */}
{activeTab === "chat" && (
<div className="ai-agent-chat-container">
{/* Model Selector */}
<div className="ai-model-selector">
<span className="ai-model-label">Model:</span>
<div className="ai-model-dropdown-container">
<button
className="ai-model-dropdown-trigger"
onClick={() => setShowTextModelDropdown(!showTextModelDropdown)}
>
<span>{selectedTextModelName}</span>
<CaretDown size={14} weight="bold" />
</button>
{showTextModelDropdown && (
<div className="ai-model-dropdown">
{textModels.map((model) => (
<button
key={model.id}
className={`ai-model-option ${selectedTextModel === model.id ? "selected" : ""}`}
onClick={() => {
setSelectedTextModel(model.id);
setShowTextModelDropdown(false);
}}
>
<span className="ai-model-name">{model.name}</span>
<span className="ai-model-provider">{model.provider}</span>
</button>
))}
</div>
)}
</div>
</div>
<AIChatView contextId="dashboard-agent" selectedModel={selectedTextModel} />
</div>
)}
{/* Image Generation Tab */}
{activeTab === "image" && enableImageGeneration && (
<div className="ai-agent-image-container">
{/* Image Model Selector */}
<div className="ai-model-selector">
<span className="ai-model-label">Model:</span>
<div className="ai-model-dropdown-container">
<button
className="ai-model-dropdown-trigger"
onClick={() => setShowImageModelDropdown(!showImageModelDropdown)}
>
<span>{selectedImageModelName}</span>
<CaretDown size={14} weight="bold" />
</button>
{showImageModelDropdown && (
<div className="ai-model-dropdown">
{imageModels.map((model) => (
<button
key={model.id}
className={`ai-model-option ${selectedImageModel === model.id ? "selected" : ""}`}
onClick={() => {
setSelectedImageModel(model.id);
setShowImageModelDropdown(false);
}}
>
<span className="ai-model-name">{model.name}</span>
<span className="ai-model-provider">{model.provider}</span>
</button>
))}
</div>
)}
</div>
</div>
{/* Aspect Ratio Selector */}
<div className="ai-aspect-ratio-selector">
<span className="ai-model-label">Aspect:</span>
<div className="ai-aspect-ratio-options">
{(["1:1", "16:9", "9:16", "4:3", "3:4"] as const).map((ratio) => (
<button
key={ratio}
className={`ai-aspect-ratio-option ${aspectRatio === ratio ? "selected" : ""}`}
onClick={() => setAspectRatio(ratio)}
>
{ratio}
</button>
))}
</div>
</div>
{/* Generated Image Display */}
{generatedImage && (
<div className="ai-generated-image">
<img src={generatedImage.url} alt={generatedImage.prompt} />
<p className="ai-generated-image-prompt">{generatedImage.prompt}</p>
</div>
)}
{/* Error Display */}
{imageError && (
<div className="ai-image-error">
<ReactMarkdown remarkPlugins={[remarkGfm]}>{imageError}</ReactMarkdown>
</div>
)}
{/* Loading State */}
{isGeneratingImage && (
<div className="ai-image-loading">
<SpinnerGap size={32} weight="bold" className="ai-image-spinner" />
<span>Generating image...</span>
</div>
)}
{/* Prompt Input */}
<div className="ai-image-input-container">
<textarea
className="ai-image-input"
value={imagePrompt}
onChange={(e) => setImagePrompt(e.target.value)}
onKeyDown={(e) => {
if (e.key === "Enter" && !e.shiftKey) {
e.preventDefault();
handleGenerateImage();
}
}}
placeholder="Describe the image you want to generate..."
rows={3}
disabled={isGeneratingImage}
/>
<button
className="ai-image-generate-button"
onClick={handleGenerateImage}
disabled={!imagePrompt.trim() || isGeneratingImage}
>
{isGeneratingImage ? (
<SpinnerGap size={18} weight="bold" className="ai-image-spinner" />
) : (
<Image size={18} weight="bold" />
)}
<span>Generate</span>
</button>
</div>
</div>
)}
</div>
);
}

View File

@@ -16,8 +16,8 @@ import { ArrowLeft, Link as LinkIcon, Twitter, Rss, Tag } from "lucide-react";
import { useState, useEffect } from "react";
import siteConfig from "../config/siteConfig";
// Site configuration
const SITE_URL = "https://wwwmarkdown.fast";
// Site configuration - update these for your site (or run npm run configure)
const SITE_URL = "https://www.markdown.fast";
const SITE_NAME = "markdown sync framework";
const DEFAULT_OG_IMAGE = "/images/og-default.svg";

View File

@@ -408,7 +408,9 @@ body {
color: var(--text-secondary);
padding: 4px;
border-radius: 4px;
transition: color 0.2s ease, background-color 0.2s ease;
transition:
color 0.2s ease,
background-color 0.2s ease;
}
.header-social-link:hover {
@@ -1369,7 +1371,7 @@ body {
height: 28px;
border-radius: 10%;
object-fit: cover;
border: 1px solid var(--border-color);
border: none;
}
.post-author-name {
@@ -8999,6 +9001,605 @@ body {
border-radius: 8px;
}
/* AI Dashboard Tabs */
.dashboard-ai-tabs {
display: flex;
gap: 0.25rem;
padding: 0.25rem;
background: var(--bg-secondary);
border-radius: 8px;
margin-bottom: 0.75rem;
}
.dashboard-ai-tab {
flex: 1;
display: flex;
align-items: center;
justify-content: center;
gap: 0.5rem;
padding: 0.625rem 1rem;
border: none;
background: transparent;
color: var(--text-secondary);
font-size: var(--font-size-sm);
font-weight: 500;
border-radius: 6px;
cursor: pointer;
transition: all 0.15s ease;
}
.dashboard-ai-tab:hover {
color: var(--text-primary);
background: var(--bg-tertiary);
}
.dashboard-ai-tab.active {
background: var(--bg-primary);
color: var(--text-primary);
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.05);
}
.dashboard-ai-chat-container {
flex: 1;
display: flex;
flex-direction: column;
min-height: 0;
}
/* AI Chat Model Selector */
.ai-chat-header-left {
display: flex;
align-items: center;
gap: 0.75rem;
}
.ai-chat-model-selector {
position: relative;
display: inline-flex;
align-items: center;
}
.ai-chat-model-select {
appearance: none;
background: var(--bg-secondary);
border: 1px solid var(--border-color);
border-radius: 6px;
padding: 0.375rem 1.75rem 0.375rem 0.625rem;
font-size: var(--font-size-xs);
color: var(--text-primary);
cursor: pointer;
font-family: inherit;
}
.ai-chat-model-select:focus {
outline: none;
border-color: var(--accent-color);
}
.ai-chat-model-select:disabled {
opacity: 0.6;
cursor: not-allowed;
}
.ai-chat-model-caret {
position: absolute;
right: 0.5rem;
pointer-events: none;
color: var(--text-secondary);
}
/* Image Generation Panel */
.dashboard-image-gen {
flex: 1;
display: flex;
flex-direction: column;
gap: 1rem;
padding: 1rem;
background: var(--bg-primary);
border: 1px solid var(--border-color);
border-radius: 8px;
overflow-y: auto;
}
.dashboard-image-gen-controls {
display: flex;
flex-direction: column;
gap: 1rem;
}
.dashboard-image-gen-row {
display: flex;
flex-direction: column;
gap: 0.5rem;
}
.dashboard-image-gen-label {
font-size: var(--font-size-sm);
font-weight: 500;
color: var(--text-primary);
}
.dashboard-image-gen-select {
appearance: none;
background: var(--bg-secondary);
border: 1px solid var(--border-color);
border-radius: 6px;
padding: 0.625rem 0.875rem;
font-size: var(--font-size-sm);
color: var(--text-primary);
cursor: pointer;
font-family: inherit;
background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='12' height='12' viewBox='0 0 12 12'%3E%3Cpath fill='%23666' d='M2 4l4 4 4-4'/%3E%3C/svg%3E");
background-repeat: no-repeat;
background-position: right 0.75rem center;
}
.dashboard-image-gen-select:focus {
outline: none;
border-color: var(--accent-color);
}
.dashboard-image-gen-select:disabled {
opacity: 0.6;
cursor: not-allowed;
}
.dashboard-image-gen-aspect-buttons {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
}
.dashboard-image-gen-aspect-btn {
padding: 0.5rem 0.875rem;
border: 1px solid var(--border-color);
background: var(--bg-secondary);
color: var(--text-secondary);
font-size: var(--font-size-sm);
border-radius: 6px;
cursor: pointer;
transition: all 0.15s ease;
}
.dashboard-image-gen-aspect-btn:hover {
background: var(--bg-tertiary);
color: var(--text-primary);
}
.dashboard-image-gen-aspect-btn.active {
background: var(--accent-color);
border-color: var(--accent-color);
color: white;
}
.dashboard-image-gen-aspect-btn:disabled {
opacity: 0.6;
cursor: not-allowed;
}
.dashboard-image-gen-textarea {
width: 100%;
padding: 0.75rem;
border: 1px solid var(--border-color);
border-radius: 8px;
background: var(--bg-secondary);
color: var(--text-primary);
font-size: var(--font-size-sm);
font-family: inherit;
resize: vertical;
min-height: 80px;
}
.dashboard-image-gen-textarea:focus {
outline: none;
border-color: var(--accent-color);
}
.dashboard-image-gen-textarea:disabled {
opacity: 0.6;
cursor: not-allowed;
}
.dashboard-image-gen-textarea::placeholder {
color: var(--text-tertiary);
}
.dashboard-image-gen-button {
display: flex;
align-items: center;
justify-content: center;
gap: 0.5rem;
padding: 0.75rem 1.5rem;
background: var(--accent-color);
color: white;
border: none;
border-radius: 8px;
font-size: var(--font-size-sm);
font-weight: 500;
cursor: pointer;
transition: all 0.15s ease;
}
.dashboard-image-gen-button:hover:not(:disabled) {
opacity: 0.9;
}
.dashboard-image-gen-button:disabled {
opacity: 0.6;
cursor: not-allowed;
}
.dashboard-image-gen-error {
padding: 1rem;
background: var(--bg-error, rgba(239, 68, 68, 0.1));
border: 1px solid var(--border-error, rgba(239, 68, 68, 0.3));
border-radius: 8px;
color: var(--text-error, #ef4444);
font-size: var(--font-size-sm);
}
.dashboard-image-gen-error p {
margin: 0 0 0.5rem;
}
.dashboard-image-gen-error p:last-child {
margin-bottom: 0;
}
.dashboard-image-gen-error a {
color: inherit;
text-decoration: underline;
}
.dashboard-image-gen-result {
display: flex;
flex-direction: column;
gap: 0.75rem;
align-items: center;
padding: 1rem;
background: var(--bg-secondary);
border-radius: 8px;
}
.dashboard-image-gen-preview {
max-width: 100%;
max-height: 400px;
border-radius: 8px;
object-fit: contain;
}
.dashboard-image-gen-download {
display: flex;
align-items: center;
gap: 0.5rem;
padding: 0.625rem 1rem;
background: var(--bg-primary);
border: 1px solid var(--border-color);
border-radius: 6px;
color: var(--text-primary);
font-size: var(--font-size-sm);
cursor: pointer;
transition: all 0.15s ease;
}
.dashboard-image-gen-download:hover {
background: var(--bg-tertiary);
}
/* AI Agent Tabs */
.ai-agent-tabs {
display: flex;
gap: 0.25rem;
padding: 0.25rem;
background: var(--bg-secondary);
border-radius: 8px;
margin-bottom: 0.75rem;
}
.ai-agent-tab {
flex: 1;
display: flex;
align-items: center;
justify-content: center;
gap: 0.5rem;
padding: 0.625rem 1rem;
border: none;
background: transparent;
color: var(--text-secondary);
font-size: var(--font-size-sm);
font-weight: 500;
border-radius: 6px;
cursor: pointer;
transition: all 0.15s ease;
}
.ai-agent-tab:hover {
color: var(--text-primary);
background: var(--bg-tertiary);
}
.ai-agent-tab.active {
background: var(--bg-primary);
color: var(--text-primary);
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.05);
}
/* AI Agent Chat Container */
.ai-agent-chat-container {
flex: 1;
display: flex;
flex-direction: column;
min-height: 0;
}
.ai-agent-chat-container .ai-chat-view {
flex: 1;
}
/* AI Agent Image Container */
.ai-agent-image-container {
flex: 1;
display: flex;
flex-direction: column;
gap: 1rem;
padding: 1rem;
background: var(--bg-primary);
border: 1px solid var(--border-color);
border-radius: 8px;
overflow-y: auto;
}
/* AI Model Selector */
.ai-model-selector {
display: flex;
align-items: center;
gap: 0.75rem;
}
.ai-model-label {
font-size: var(--font-size-sm);
font-weight: 500;
color: var(--text-secondary);
}
.ai-model-dropdown-container {
position: relative;
}
.ai-model-dropdown-trigger {
display: flex;
align-items: center;
gap: 0.5rem;
padding: 0.5rem 0.75rem;
background: var(--bg-secondary);
border: 1px solid var(--border-color);
border-radius: 6px;
color: var(--text-primary);
font-size: var(--font-size-sm);
cursor: pointer;
transition: all 0.15s ease;
}
.ai-model-dropdown-trigger:hover {
background: var(--bg-tertiary);
border-color: var(--text-tertiary);
}
.ai-model-dropdown {
position: absolute;
top: calc(100% + 4px);
left: 0;
min-width: 200px;
background: var(--bg-primary);
border: 1px solid var(--border-color);
border-radius: 8px;
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.15);
z-index: 100;
overflow: hidden;
}
.ai-model-option {
display: flex;
justify-content: space-between;
align-items: center;
width: 100%;
padding: 0.625rem 0.875rem;
background: transparent;
border: none;
color: var(--text-primary);
font-size: var(--font-size-sm);
cursor: pointer;
transition: background 0.15s ease;
text-align: left;
}
.ai-model-option:hover {
background: var(--bg-secondary);
}
.ai-model-option.selected {
background: var(--bg-tertiary);
}
.ai-model-name {
font-weight: 500;
}
.ai-model-provider {
font-size: var(--font-size-xs);
color: var(--text-tertiary);
text-transform: capitalize;
}
/* AI Aspect Ratio Selector */
.ai-aspect-ratio-selector {
display: flex;
align-items: center;
gap: 0.75rem;
}
.ai-aspect-ratio-options {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
}
.ai-aspect-ratio-option {
padding: 0.5rem 0.875rem;
border: 1px solid var(--border-color);
background: var(--bg-secondary);
color: var(--text-secondary);
font-size: var(--font-size-sm);
border-radius: 6px;
cursor: pointer;
transition: all 0.15s ease;
}
.ai-aspect-ratio-option:hover {
background: var(--bg-tertiary);
color: var(--text-primary);
}
.ai-aspect-ratio-option.selected {
background: var(--accent-color);
border-color: var(--accent-color);
color: white;
}
/* AI Generated Image Display */
.ai-generated-image {
display: flex;
flex-direction: column;
gap: 0.75rem;
align-items: center;
padding: 1rem;
background: var(--bg-secondary);
border-radius: 8px;
}
.ai-generated-image img {
max-width: 100%;
max-height: 400px;
border-radius: 8px;
object-fit: contain;
}
.ai-generated-image-prompt {
font-size: var(--font-size-sm);
color: var(--text-secondary);
text-align: center;
margin: 0;
font-style: italic;
}
/* AI Image Error */
.ai-image-error {
padding: 1rem;
background: var(--bg-error, rgba(239, 68, 68, 0.1));
border: 1px solid var(--border-error, rgba(239, 68, 68, 0.3));
border-radius: 8px;
color: var(--text-error, #ef4444);
font-size: var(--font-size-sm);
}
.ai-image-error p {
margin: 0 0 0.5rem;
}
.ai-image-error p:last-child {
margin-bottom: 0;
}
.ai-image-error a {
color: inherit;
text-decoration: underline;
}
/* AI Image Loading */
.ai-image-loading {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
gap: 1rem;
padding: 3rem;
color: var(--text-secondary);
}
.ai-image-spinner {
animation: spin 1s linear infinite;
}
@keyframes spin {
from {
transform: rotate(0deg);
}
to {
transform: rotate(360deg);
}
}
/* AI Image Input Container */
.ai-image-input-container {
display: flex;
flex-direction: column;
gap: 0.75rem;
margin-top: auto;
}
.ai-image-input {
width: 100%;
padding: 0.75rem;
border: 1px solid var(--border-color);
border-radius: 8px;
background: var(--bg-secondary);
color: var(--text-primary);
font-size: var(--font-size-sm);
font-family: inherit;
resize: vertical;
min-height: 80px;
}
.ai-image-input:focus {
outline: none;
border-color: var(--accent-color);
}
.ai-image-input:disabled {
opacity: 0.6;
cursor: not-allowed;
}
.ai-image-input::placeholder {
color: var(--text-tertiary);
}
.ai-image-generate-button {
display: flex;
align-items: center;
justify-content: center;
gap: 0.5rem;
padding: 0.75rem 1.5rem;
background: var(--accent-color);
color: white;
border: none;
border-radius: 8px;
font-size: var(--font-size-sm);
font-weight: 500;
cursor: pointer;
transition: all 0.15s ease;
align-self: flex-end;
}
.ai-image-generate-button:hover:not(:disabled) {
opacity: 0.9;
}
.ai-image-generate-button:disabled {
opacity: 0.6;
cursor: not-allowed;
}
/* Import URL Section */
.dashboard-import-section {
max-width: 600px;