feat: migrate from Redis to SQLite for event handling and notifications
@@ -8,6 +8,7 @@ NODE_ENV=production
 HOST=0.0.0.0
 PORT=4321
 DATABASE_URL=sqlite://data/gitea-mirror.db
+# Note: Redis is no longer required as SQLite is used for all functionality
 
 # Security
 JWT_SECRET=change-this-to-a-secure-random-string-in-production
README.md (77 lines changed)
@@ -124,19 +124,15 @@ docker compose -f docker-compose.dev.yml up -d
 
 ##### Using Pre-built Images from GitHub Container Registry
 
-If you want to run the container directly without Docker Compose, you'll need to set up a Redis instance separately:
+If you want to run the container directly without Docker Compose:
 
 ```bash
-# First, start a Redis container
-docker run -d --name gitea-mirror-redis redis:alpine
-
 # Pull the latest multi-architecture image
 docker pull ghcr.io/arunavo4/gitea-mirror:latest
 
-# Run the application with a link to the Redis container
-# Note: The REDIS_URL environment variable is required and must point to the Redis container
-docker run -d -p 4321:4321 --link gitea-mirror-redis:redis \
-  -e REDIS_URL=redis://redis:6379 \
+# Run the application with a volume for persistent data
+docker run -d -p 4321:4321 \
+  -v gitea-mirror-data:/app/data \
   ghcr.io/arunavo4/gitea-mirror:latest
 ```
 
@@ -254,7 +250,7 @@ Key configuration options include:
 - Scheduling options for automatic mirroring
 
 > [!IMPORTANT]
-> **Redis is a required component for Gitea Mirror** as it's used for job queuing and caching.
+> **SQLite is the only database required for Gitea Mirror**, handling both data storage and real-time event notifications.
 
 ## 🚀 Development
 
@@ -360,8 +356,7 @@ docker compose -f docker-compose.dev.yml up -d
 
 - **Frontend**: Astro, React, Shadcn UI, Tailwind CSS v4
 - **Backend**: Bun
-- **Database**: SQLite (default) or PostgreSQL
-- **Caching/Queue**: Redis
+- **Database**: SQLite (handles both data storage and event notifications)
 - **API Integration**: GitHub API (Octokit), Gitea API
 
 ## Contributing
@@ -439,48 +434,34 @@ Try the following steps:
 > external: true
 > ```
 
-### Redis Connection Issues
+### Database Persistence
 
-> [!CAUTION]
-> If the application fails to connect to Redis with errors like `ECONNREFUSED 127.0.0.1:6379`, ensure:
->
-> 1. The Redis container is running:
-> ```bash
-> docker ps | grep redis
-> ```
-> 2. The `REDIS_URL` environment variable is correctly set to `redis://redis:6379` in your Docker Compose file.
-> 3. Both the application and Redis containers are on the same Docker network.
-> 4. If running without Docker Compose, ensure you've started a Redis container and linked it properly:
-> ```bash
-> # Start Redis container
-> docker run -d --name gitea-mirror-redis redis:alpine
-> # Run application with link to Redis
-> docker run -d -p 4321:4321 --link gitea-mirror-redis:redis \
->   -e REDIS_URL=redis://redis:6379 \
->   ghcr.io/arunavo4/gitea-mirror:latest
-> ```
-
-#### Improving Redis Connection Resilience
-
 > [!TIP]
-> For better Redis connection handling, you can modify the `src/lib/redis.ts` file to include retry logic and better error handling:
-
-```typescript
-import { RedisClient } from "bun";
-
-// Connect to Redis using REDIS_URL environment variable or default to redis://redis:6379
-const redisUrl = process.env.REDIS_URL ?? "redis://redis:6379";
-
-console.log(`Connecting to Redis at: ${redisUrl}`);
-
-const redis = new RedisClient(redisUrl, { autoReconnect: true });
-
-redis.onconnect = () => console.log("Redis client connected");
-redis.onclose = err => {
-  if (err) console.error("Redis client error:", err);
-};
-```
+> The application uses SQLite for all data storage and event notifications. Make sure the database file is properly mounted when using Docker:
+>
+> ```bash
+> # Run with a volume for persistent data storage
+> docker run -d -p 4321:4321 \
+>   -v gitea-mirror-data:/app/data \
+>   ghcr.io/arunavo4/gitea-mirror:latest
+> ```
+
+#### Database Maintenance
+
+> [!TIP]
+> For database maintenance, you can use the provided scripts:
+>
+> ```bash
+> # Check database integrity
+> bun run check-db
+>
+> # Fix database issues
+> bun run fix-db
+>
+> # Reset user accounts (for development)
+> bun run reset-users
+> ```
 
 > [!NOTE]
docker-compose.dev.yml

@@ -51,7 +51,6 @@ services:
       - gitea-mirror-data:/app/data
     depends_on:
       - gitea
-      - redis
     environment:
       - NODE_ENV=development
       - DATABASE_URL=file:data/gitea-mirror.db
@@ -75,7 +74,6 @@ services:
       - GITEA_ORGANIZATION=${GITEA_ORGANIZATION:-github-mirrors}
       - GITEA_ORG_VISIBILITY=${GITEA_ORG_VISIBILITY:-public}
       - DELAY=${DELAY:-3600}
-      - REDIS_URL=redis://redis:6379
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:4321/"]
       interval: 30s
@@ -85,16 +83,7 @@ services:
     networks:
       - gitea-network
 
-  redis:
-    image: redis:7-alpine
-    container_name: redis
-    restart: unless-stopped
-    ports:
-      - "6379:6379"
-    volumes:
-      - redis-data:/data
-    networks:
-      - gitea-network
-
 # Define named volumes for data persistence
 volumes:
@@ -102,8 +91,6 @@ volumes:
   gitea-config: # Gitea config volume
   gitea-mirror-data: # Gitea Mirror database volume
-
-  redis-data:
 
 # Define networks
 networks:
   gitea-network:
docker-compose.yml

@@ -19,8 +19,6 @@ services:
       - "4321:4321"
     volumes:
       - gitea-mirror-data:/app/data
-    depends_on:
-      - redis
     environment:
       - NODE_ENV=production
       - DATABASE_URL=file:data/gitea-mirror.db
@@ -44,7 +42,6 @@ services:
       - GITEA_ORGANIZATION=${GITEA_ORGANIZATION:-github-mirrors}
       - GITEA_ORG_VISIBILITY=${GITEA_ORG_VISIBILITY:-public}
       - DELAY=${DELAY:-3600}
-      - REDIS_URL=redis://redis:6379
     healthcheck:
       test: ["CMD", "wget", "--no-verbose", "--tries=3", "--spider", "http://localhost:4321/"]
       interval: 30s
@@ -53,16 +50,6 @@ services:
       start_period: 15s
     profiles: ["production"]
 
-  redis:
-    image: redis:7-alpine
-    container_name: redis
-    restart: unless-stopped
-    ports:
-      - "6379:6379"
-    volumes:
-      - redis-data:/data
-
 # Define named volumes for database persistence
 volumes:
   gitea-mirror-data: # Database volume
-  redis-data:
@@ -113,6 +113,20 @@ if [ ! -f "/app/data/gitea-mirror.db" ]; then
     timestamp TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
     FOREIGN KEY (user_id) REFERENCES users(id)
   );
+
+  CREATE TABLE IF NOT EXISTS events (
+    id TEXT PRIMARY KEY,
+    user_id TEXT NOT NULL,
+    channel TEXT NOT NULL,
+    payload TEXT NOT NULL,
+    read INTEGER NOT NULL DEFAULT 0,
+    created_at INTEGER NOT NULL DEFAULT (strftime('%s','now')),
+    FOREIGN KEY (user_id) REFERENCES users(id)
+  );
+
+  CREATE INDEX IF NOT EXISTS idx_events_user_channel ON events(user_id, channel);
+  CREATE INDEX IF NOT EXISTS idx_events_created_at ON events(created_at);
+  CREATE INDEX IF NOT EXISTS idx_events_read ON events(read);
 EOF
 echo "Database initialized with required tables."
 fi
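The events table above is what stands in for Redis pub/sub: publishing an event is an INSERT, subscribing is a periodic SELECT of unread rows followed by marking them read. A minimal sketch of that cycle with bun:sqlite (the database path and example IDs here are assumptions, not taken from the commit):

```typescript
import { Database } from "bun:sqlite";
import { randomUUID } from "crypto";

const db = new Database("data/gitea-mirror.db"); // path is an assumption
const userId = "user-1";                         // example id
const channel = `mirror-status:${userId}`;

// "Publish": an event is just a row in the events table.
db.query(
  "INSERT INTO events (id, user_id, channel, payload) VALUES (?, ?, ?, ?)"
).run(randomUUID(), userId, channel, JSON.stringify({ status: "mirrored" }));

// "Subscribe": poll unread rows in order, then mark them read.
const rows = db.query(
  "SELECT * FROM events WHERE user_id = ? AND channel = ? AND read = 0 ORDER BY created_at"
).all(userId, channel);

db.query(
  "UPDATE events SET read = 1 WHERE user_id = ? AND channel = ? AND read = 0"
).run(userId, channel);

console.log(rows.map((r: any) => JSON.parse(r.payload)));
```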
package.json

@@ -16,6 +16,8 @@
     "check-db": "bun scripts/manage-db.ts check",
     "fix-db": "bun scripts/manage-db.ts fix",
     "reset-users": "bun scripts/manage-db.ts reset-users",
+    "migrate-db": "bun scripts/migrate-db.ts",
+    "cleanup-redis": "bun scripts/cleanup-redis.ts",
     "preview": "bunx --bun astro preview",
     "start": "bun dist/server/entry.mjs",
     "start:fresh": "bun run cleanup-db && bun run manage-db init && bun dist/server/entry.mjs",
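With these two scripts registered, the intended upgrade path for an existing Redis-based install appears to be `bun run migrate-db` to add the events table to an existing database (see scripts/migrate-db.ts below), followed by `bun run cleanup-redis` to delete the now-unused src/lib/redis.ts. Fresh installs get the table from the container init script above and from manage-db's init path instead.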
scripts/check-events.ts (new file, 38 lines)

@@ -0,0 +1,38 @@
+#!/usr/bin/env bun
+/**
+ * Script to check events in the database
+ */
+
+import { Database } from "bun:sqlite";
+import path from "path";
+import fs from "fs";
+
+// Define the database path
+const dataDir = path.join(process.cwd(), "data");
+if (!fs.existsSync(dataDir)) {
+  console.error("Data directory not found:", dataDir);
+  process.exit(1);
+}
+
+const dbPath = path.join(dataDir, "gitea-mirror.db");
+if (!fs.existsSync(dbPath)) {
+  console.error("Database file not found:", dbPath);
+  process.exit(1);
+}
+
+// Open the database
+const db = new Database(dbPath);
+
+// Check if the events table exists
+const tableExists = db.query("SELECT name FROM sqlite_master WHERE type='table' AND name='events'").get();
+
+if (!tableExists) {
+  console.error("Events table does not exist");
+  process.exit(1);
+}
+
+// Get all events
+const events = db.query("SELECT * FROM events").all();
+
+console.log("Events in the database:");
+console.log(JSON.stringify(events, null, 2));
scripts/cleanup-redis.ts (new file, 33 lines)

@@ -0,0 +1,33 @@
+#!/usr/bin/env bun
+/**
+ * Cleanup script to remove Redis-related files and code
+ * This script should be run when migrating from Redis to SQLite
+ */
+
+import fs from "fs";
+import path from "path";
+
+// Files to remove
+const filesToRemove = [
+  "src/lib/redis.ts"
+];
+
+// Remove files
+console.log("Removing Redis-related files...");
+for (const file of filesToRemove) {
+  const filePath = path.join(process.cwd(), file);
+  if (fs.existsSync(filePath)) {
+    fs.unlinkSync(filePath);
+    console.log(`Removed: ${file}`);
+  } else {
+    console.log(`File not found: ${file}`);
+  }
+}
+
+console.log("\nRedis cleanup completed successfully");
+console.log("\nReminder: You should also remove Redis from your Docker Compose files and environment variables.");
+console.log("The following files have been updated to use SQLite instead of Redis:");
+console.log("- src/lib/helpers.ts");
+console.log("- src/pages/api/sse/index.ts");
+console.log("\nNew files created:");
+console.log("- src/lib/events.ts");
scripts/manage-db.ts

@@ -35,6 +35,7 @@ async function ensureTablesExist() {
     "repositories",
     "organizations",
     "mirror_jobs",
+    "events",
   ];
 
   for (const table of requiredTables) {
@@ -148,6 +149,24 @@ async function ensureTablesExist() {
         )
       `);
       break;
+    case "events":
+      db.exec(`
+        CREATE TABLE events (
+          id TEXT PRIMARY KEY,
+          user_id TEXT NOT NULL,
+          channel TEXT NOT NULL,
+          payload TEXT NOT NULL,
+          read INTEGER NOT NULL DEFAULT 0,
+          created_at INTEGER NOT NULL DEFAULT (strftime('%s','now')),
+          FOREIGN KEY (user_id) REFERENCES users(id)
+        )
+      `);
+      db.exec(`
+        CREATE INDEX idx_events_user_channel ON events(user_id, channel);
+        CREATE INDEX idx_events_created_at ON events(created_at);
+        CREATE INDEX idx_events_read ON events(read);
+      `);
+      break;
   }
   console.log(`✅ Table '${table}' created successfully.`);
 }
@@ -362,6 +381,24 @@ async function initializeDatabase() {
     )
   `);
+
+  db.exec(`
+    CREATE TABLE IF NOT EXISTS events (
+      id TEXT PRIMARY KEY,
+      user_id TEXT NOT NULL,
+      channel TEXT NOT NULL,
+      payload TEXT NOT NULL,
+      read INTEGER NOT NULL DEFAULT 0,
+      created_at INTEGER NOT NULL DEFAULT (strftime('%s','now')),
+      FOREIGN KEY (user_id) REFERENCES users(id)
+    )
+  `);
+
+  db.exec(`
+    CREATE INDEX IF NOT EXISTS idx_events_user_channel ON events(user_id, channel);
+    CREATE INDEX IF NOT EXISTS idx_events_created_at ON events(created_at);
+    CREATE INDEX IF NOT EXISTS idx_events_read ON events(read);
+  `);
+
   // Insert default config if none exists
   const configCountResult = db.query(`SELECT COUNT(*) as count FROM configs`).get();
   const configCount = configCountResult?.count || 0;
scripts/migrate-db.ts (new file, 53 lines)

@@ -0,0 +1,53 @@
+#!/usr/bin/env bun
+/**
+ * Database migration script to add the events table
+ * This script should be run when upgrading from a version that used Redis
+ */
+
+import { Database } from "bun:sqlite";
+import fs from "fs";
+import path from "path";
+
+// Define the database path
+const dataDir = path.join(process.cwd(), "data");
+if (!fs.existsSync(dataDir)) {
+  fs.mkdirSync(dataDir, { recursive: true });
+}
+
+const dbPath = path.join(dataDir, "gitea-mirror.db");
+if (!fs.existsSync(dbPath)) {
+  console.error("Database file not found:", dbPath);
+  process.exit(1);
+}
+
+// Open the database
+const db = new Database(dbPath);
+
+// Check if the events table already exists
+const tableExists = db.query("SELECT name FROM sqlite_master WHERE type='table' AND name='events'").get();
+
+if (tableExists) {
+  console.log("Events table already exists, skipping migration");
+  process.exit(0);
+}
+
+// Create the events table
+console.log("Creating events table...");
+db.exec(`
+  CREATE TABLE events (
+    id TEXT PRIMARY KEY,
+    user_id TEXT NOT NULL,
+    channel TEXT NOT NULL,
+    payload TEXT NOT NULL,
+    read INTEGER NOT NULL DEFAULT 0,
+    created_at INTEGER NOT NULL DEFAULT (unixepoch()),
+    FOREIGN KEY (user_id) REFERENCES users(id)
+  );
+
+  -- Create indexes for efficient querying
+  CREATE INDEX idx_events_user_channel ON events(user_id, channel);
+  CREATE INDEX idx_events_created_at ON events(created_at);
+  CREATE INDEX idx_events_read ON events(read);
+`);
+
+console.log("Migration completed successfully");
@@ -66,6 +66,18 @@ export const users = sqliteTable("users", {
     .default(new Date()),
 });
+
+// New table for event notifications (replacing Redis pub/sub)
+export const events = sqliteTable("events", {
+  id: text("id").primaryKey(),
+  userId: text("user_id").notNull().references(() => users.id),
+  channel: text("channel").notNull(),
+  payload: text("payload", { mode: "json" }).notNull(),
+  read: integer("read", { mode: "boolean" }).notNull().default(false),
+  createdAt: integer("created_at", { mode: "timestamp" })
+    .notNull()
+    .default(new Date()),
+});
 
 const githubSchema = configSchema.shape.githubConfig;
 const giteaSchema = configSchema.shape.giteaConfig;
 const scheduleSchema = configSchema.shape.scheduleConfig;
@@ -140,3 +140,15 @@ export const organizationSchema = z.object({
 });
 
 export type Organization = z.infer<typeof organizationSchema>;
+
+// Event schema (for SQLite-based pub/sub)
+export const eventSchema = z.object({
+  id: z.string().uuid().optional(),
+  userId: z.string().uuid(),
+  channel: z.string().min(1),
+  payload: z.any(),
+  read: z.boolean().default(false),
+  createdAt: z.date().default(() => new Date()),
+});
+
+export type Event = z.infer<typeof eventSchema>;
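A minimal usage sketch of the new zod schema; the import path is hypothetical, since the file name is not shown in this diff:

```typescript
import { eventSchema, type Event } from "./schema"; // import path is hypothetical

// parse() fills in the declared defaults (read: false, createdAt: now).
const event: Event = eventSchema.parse({
  userId: "7f9c0e7a-8a68-4b5e-9a1d-1c2b3d4e5f60", // any UUID
  channel: "mirror-status:7f9c0e7a-8a68-4b5e-9a1d-1c2b3d4e5f60",
  payload: { status: "mirrored", repositoryName: "test-repo" },
});
```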
src/lib/events.ts (new file, 130 lines)

@@ -0,0 +1,130 @@
+import { v4 as uuidv4 } from "uuid";
+import { db, events } from "./db";
+import { eq, and, gt } from "drizzle-orm";
+
+/**
+ * Publishes an event to a specific channel for a user
+ * This replaces Redis pub/sub with SQLite storage
+ */
+export async function publishEvent({
+  userId,
+  channel,
+  payload,
+}: {
+  userId: string;
+  channel: string;
+  payload: any;
+}): Promise<string> {
+  try {
+    const eventId = uuidv4();
+    console.log(`Publishing event to channel ${channel} for user ${userId}`);
+
+    // Insert the event into the SQLite database
+    await db.insert(events).values({
+      id: eventId,
+      userId,
+      channel,
+      payload: JSON.stringify(payload),
+      createdAt: new Date(),
+    });
+
+    console.log(`Event published successfully with ID ${eventId}`);
+    return eventId;
+  } catch (error) {
+    console.error("Error publishing event:", error);
+    throw new Error("Failed to publish event");
+  }
+}
+
+/**
+ * Gets new events for a specific user and channel
+ * This replaces Redis subscribe with SQLite polling
+ */
+export async function getNewEvents({
+  userId,
+  channel,
+  lastEventTime,
+}: {
+  userId: string;
+  channel: string;
+  lastEventTime?: Date;
+}): Promise<any[]> {
+  try {
+    console.log(`Getting new events for user ${userId} in channel ${channel}`);
+    if (lastEventTime) {
+      console.log(`Looking for events after ${lastEventTime.toISOString()}`);
+    }
+
+    // Build the query
+    let query = db
+      .select()
+      .from(events)
+      .where(
+        and(
+          eq(events.userId, userId),
+          eq(events.channel, channel),
+          eq(events.read, false)
+        )
+      )
+      .orderBy(events.createdAt);
+
+    // Add time filter if provided
+    if (lastEventTime) {
+      query = query.where(gt(events.createdAt, lastEventTime));
+    }
+
+    // Execute the query
+    const newEvents = await query;
+    console.log(`Found ${newEvents.length} new events`);
+
+    // Mark events as read
+    if (newEvents.length > 0) {
+      console.log(`Marking ${newEvents.length} events as read`);
+      await db
+        .update(events)
+        .set({ read: true })
+        .where(
+          and(
+            eq(events.userId, userId),
+            eq(events.channel, channel),
+            eq(events.read, false)
+          )
+        );
+    }
+
+    // Parse the payloads
+    return newEvents.map(event => ({
+      ...event,
+      payload: JSON.parse(event.payload as string),
+    }));
+  } catch (error) {
+    console.error("Error getting new events:", error);
+    return [];
+  }
+}
+
+/**
+ * Cleans up old events to prevent the database from growing too large
+ * Should be called periodically (e.g., daily via a cron job)
+ */
+export async function cleanupOldEvents(maxAgeInDays: number = 7): Promise<number> {
+  try {
+    const cutoffDate = new Date();
+    cutoffDate.setDate(cutoffDate.getDate() - maxAgeInDays);
+
+    // Delete events older than the cutoff date
+    const result = await db
+      .delete(events)
+      .where(
+        and(
+          eq(events.read, true),
+          gt(cutoffDate, events.createdAt)
+        )
+      );
+
+    return result.changes || 0;
+  } catch (error) {
+    console.error("Error cleaning up old events:", error);
+    return 0;
+  }
+}
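The module's own comment says cleanupOldEvents should run periodically, but this commit does not wire up a scheduler. One possible wiring, sketched as an in-process timer (the import path and interval are assumptions):

```typescript
import { cleanupOldEvents } from "@/lib/events"; // path alias is an assumption

const DAY_MS = 24 * 60 * 60 * 1000;

// Once a day, delete events that are both read and older than 7 days.
setInterval(async () => {
  const removed = await cleanupOldEvents(7);
  console.log(`Event cleanup removed ${removed} old events`);
}, DAY_MS);
```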
src/lib/helpers.ts

@@ -1,7 +1,7 @@
 import type { RepoStatus } from "@/types/Repository";
 import { db, mirrorJobs } from "./db";
 import { v4 as uuidv4 } from "uuid";
-import { redisPublisher } from "./redis";
+import { publishEvent } from "./events";
 
 export async function createMirrorJob({
   userId,
@@ -40,10 +40,16 @@ export async function createMirrorJob({
   };
 
   try {
+    // Insert the job into the database
     await db.insert(mirrorJobs).values(job);
+
+    // Publish the event using SQLite instead of Redis
     const channel = `mirror-status:${userId}`;
-    await redisPublisher.publish(channel, JSON.stringify(job));
+    await publishEvent({
+      userId,
+      channel,
+      payload: job
+    });
+
     return jobId;
   } catch (error) {
src/lib/redis.ts (deleted)

@@ -1,39 +0,0 @@
-import { RedisClient } from "bun";
-
-// Connect to Redis using REDIS_URL environment variable or default to redis://redis:6379
-// This ensures we have a fallback URL when running with Docker Compose
-const redisUrl = process.env.REDIS_URL ?? "redis://localhost:6379";
-
-console.log(`Connecting to Redis at: ${redisUrl}`);
-
-// Configure Redis client with connection options and retry logic
-function createClient() {
-  const client = new RedisClient(redisUrl, {
-    autoReconnect: true,
-    connectTimeout: 30000, // Increase timeout to 30 seconds
-    retryStrategy: (attempt: number) => {
-      // Exponential backoff with jitter
-      const delay = Math.min(Math.pow(2, attempt) * 100, 10000);
-      console.log(`Redis connection attempt ${attempt}, retrying in ${delay}ms`);
-      return delay;
-    },
-  });
-
-  // Set up event handlers
-  client.onconnect = () => console.log("Redis client connected successfully");
-  client.onclose = (err: Error | null) => {
-    if (err) {
-      console.error("Redis connection error:", err);
-      console.log("Redis will attempt to reconnect automatically");
-    } else {
-      console.log("Redis connection closed");
-    }
-  };
-
-  return client;
-}
-
-// Create Redis clients with improved error handling
-export const redis = createClient();
-export const redisPublisher = createClient();
-export const redisSubscriber = createClient();
src/pages/api/sse/index.ts

@@ -1,5 +1,5 @@
 import type { APIRoute } from "astro";
-import { redisSubscriber } from "@/lib/redis";
+import { getNewEvents } from "@/lib/events";
 
 export const GET: APIRoute = async ({ request }) => {
   const url = new URL(request.url);
@@ -11,13 +11,13 @@ export const GET: APIRoute = async ({ request }) => {
 
   const channel = `mirror-status:${userId}`;
   let isClosed = false;
-  let connectionAttempts = 0;
-  const MAX_ATTEMPTS = 5;
-  const RETRY_DELAY = 1000; // 1 second
+  const POLL_INTERVAL = 2000; // Poll every 2 seconds
 
   const stream = new ReadableStream({
     start(controller) {
       const encoder = new TextEncoder();
+      let lastEventTime: Date | undefined = undefined;
+      let pollIntervalId: ReturnType<typeof setInterval> | null = null;
 
       // Function to send a message to the client
       const sendMessage = (message: string) => {
@@ -29,98 +29,63 @@ export const GET: APIRoute = async ({ request }) => {
         }
       };
 
-      // Function to handle Redis connection and subscription
-      const connectToRedis = () => {
+      // Function to poll for new events
+      const pollForEvents = async () => {
         if (isClosed) return;
 
         try {
-          // Set up message handler for Bun's Redis client
-          redisSubscriber.onmessage = (message, channelName) => {
-            if (isClosed || channelName !== channel) return;
-            sendMessage(`data: ${message}\n\n`);
-          };
-
-          // Send initial connection message
-          sendMessage(": connecting to Redis...\n\n");
-
-          // Use a try-catch block specifically for the subscribe operation
-          let subscribed = false;
-          try {
-            // Bun's Redis client expects a string for the channel
-            // We need to wrap this in a try-catch because it can throw if Redis is down
-            subscribed = redisSubscriber.subscribe(channel);
-
-            if (subscribed) {
-              // If we get here, subscription was successful
-              sendMessage(": connected\n\n");
-
-              // Reset connection attempts on successful connection
-              connectionAttempts = 0;
-
-              // Send a heartbeat every 30 seconds to keep the connection alive
-              const heartbeatInterval = setInterval(() => {
-                if (!isClosed) {
-                  sendMessage(": heartbeat\n\n");
-                } else {
-                  clearInterval(heartbeatInterval);
-                }
-              }, 30000);
-            } else {
-              throw new Error("Failed to subscribe to Redis channel");
-            }
-          } catch (subscribeErr) {
-            // Handle subscription error
-            console.error("Redis subscribe error:", subscribeErr);
-
-            // Retry connection if we haven't exceeded max attempts
-            if (connectionAttempts < MAX_ATTEMPTS) {
-              connectionAttempts++;
-              const nextRetryDelay = RETRY_DELAY * Math.pow(2, connectionAttempts - 1);
-              console.log(`Retrying Redis connection (attempt ${connectionAttempts}/${MAX_ATTEMPTS}) in ${nextRetryDelay}ms...`);
-
-              // Send retry message to client
-              sendMessage(`: retrying connection (${connectionAttempts}/${MAX_ATTEMPTS}) in ${nextRetryDelay}ms...\n\n`);
-
-              // Wait before retrying
-              setTimeout(connectToRedis, nextRetryDelay);
-            } else {
-              // Max retries exceeded, send error but keep the connection open
-              console.error("Max Redis connection attempts exceeded");
-              sendMessage(`data: {"error": "Redis connection failed after ${MAX_ATTEMPTS} attempts"}\n\n`);
-
-              // Set up a longer retry after max attempts
-              setTimeout(() => {
-                connectionAttempts = 0; // Reset counter for a fresh start
-                sendMessage(": attempting to reconnect after cooling period...\n\n");
-                connectToRedis();
-              }, 30000); // Try again after 30 seconds
-            }
+          console.log(`Polling for events for user ${userId} in channel ${channel}`);
+
+          // Get new events from SQLite
+          const events = await getNewEvents({
+            userId,
+            channel,
+            lastEventTime,
+          });
+
+          console.log(`Found ${events.length} new events`);
+
+          // Send events to client
+          if (events.length > 0) {
+            // Update last event time
+            lastEventTime = events[events.length - 1].createdAt;
+
+            // Send each event to the client
+            for (const event of events) {
+              console.log(`Sending event: ${JSON.stringify(event.payload)}`);
+              sendMessage(`data: ${JSON.stringify(event.payload)}\n\n`);
+            }
           }
         } catch (err) {
-          // This catches any other errors outside the subscribe operation
-          console.error("Redis connection error:", err);
-          sendMessage(`data: {"error": "Redis connection error"}\n\n`);
-
-          // Still attempt to retry
-          if (connectionAttempts < MAX_ATTEMPTS) {
-            connectionAttempts++;
-            setTimeout(connectToRedis, RETRY_DELAY * Math.pow(2, connectionAttempts - 1));
-          }
+          console.error("Error polling for events:", err);
+          sendMessage(`data: {"error": "Error polling for events"}\n\n`);
         }
       };
 
-      // Start the initial connection
-      connectToRedis();
+      // Send initial connection message
+      sendMessage(": connected\n\n");
+
+      // Start polling for events
+      pollForEvents();
+
+      // Set up polling interval
+      pollIntervalId = setInterval(pollForEvents, POLL_INTERVAL);
+
+      // Send a heartbeat every 30 seconds to keep the connection alive
+      const heartbeatInterval = setInterval(() => {
+        if (!isClosed) {
+          sendMessage(": heartbeat\n\n");
+        } else {
+          clearInterval(heartbeatInterval);
+        }
+      }, 30000);
 
       // Handle client disconnection
       request.signal?.addEventListener("abort", () => {
         if (!isClosed) {
           isClosed = true;
-          try {
-            redisSubscriber.unsubscribe(channel);
-          } catch (err) {
-            console.error("Error unsubscribing from Redis:", err);
-          }
+          if (pollIntervalId) {
+            clearInterval(pollIntervalId);
+          }
           controller.close();
         }
@@ -128,14 +93,7 @@ export const GET: APIRoute = async ({ request }) => {
     },
     cancel() {
       // Extra safety in case cancel is triggered
-      if (!isClosed) {
-        isClosed = true;
-        try {
-          redisSubscriber.unsubscribe(channel);
-        } catch (err) {
-          console.error("Error unsubscribing from Redis:", err);
-        }
-      }
+      isClosed = true;
+      if (pollIntervalId) {
+        clearInterval(pollIntervalId);
+      }
     },
   });
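Nothing about SSE consumption changes for clients with this migration; the browser still opens the same stream and receives `data:` frames, now fed by the polling loop above. A minimal browser-side consumer sketch (the userId query parameter is inferred from the truncated hunk, which reads it from the request URL):

```typescript
// Browser-side consumer for the polling-backed SSE endpoint.
const userId = "7f9c0e7a-8a68-4b5e-9a1d-1c2b3d4e5f60"; // example UUID
const source = new EventSource(`/api/sse?userId=${userId}`);

source.onmessage = (e: MessageEvent) => {
  // Each message carries a payload published via publishEvent().
  const job = JSON.parse(e.data);
  console.log("mirror status update:", job);
};

source.onerror = () => {
  // EventSource reconnects automatically; log for visibility.
  console.warn("SSE connection lost, retrying...");
};
```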
src/pages/api/test-event.ts (new file, 56 lines)

@@ -0,0 +1,56 @@
+import type { APIRoute } from "astro";
+import { publishEvent } from "@/lib/events";
+import { v4 as uuidv4 } from "uuid";
+
+export const POST: APIRoute = async ({ request }) => {
+  try {
+    const body = await request.json();
+    const { userId, message, status } = body;
+
+    if (!userId || !message || !status) {
+      return new Response(
+        JSON.stringify({
+          error: "Missing required fields: userId, message, status",
+        }),
+        { status: 400 }
+      );
+    }
+
+    // Create a test event
+    const eventData = {
+      id: uuidv4(),
+      userId,
+      repositoryId: uuidv4(),
+      repositoryName: "test-repo",
+      message,
+      status,
+      timestamp: new Date(),
+    };
+
+    // Publish the event
+    const channel = `mirror-status:${userId}`;
+    await publishEvent({
+      userId,
+      channel,
+      payload: eventData,
+    });
+
+    return new Response(
+      JSON.stringify({
+        success: true,
+        message: "Event published successfully",
+        event: eventData,
+      }),
+      { status: 200 }
+    );
+  } catch (error) {
+    console.error("Error publishing test event:", error);
+    return new Response(
+      JSON.stringify({
+        error: "Failed to publish event",
+        details: error instanceof Error ? error.message : String(error),
+      }),
+      { status: 500 }
+    );
+  }
+};
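A quick way to exercise the whole path end to end is to POST to this new test route while an SSE stream is open for the same user; the payload should arrive on the stream within one poll interval. A sketch, assuming the dev server on port 4321 (the field values are examples):

```typescript
// Fires a test event through the SQLite-backed publish/poll pipeline.
const res = await fetch("http://localhost:4321/api/test-event", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    userId: "7f9c0e7a-8a68-4b5e-9a1d-1c2b3d4e5f60", // example UUID
    message: "hello from test-event",
    status: "mirrored", // example status value
  }),
});
console.log(await res.json());
```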