Compare commits


25 Commits

Author SHA1 Message Date
Arunavo Ray
fe6bcc5288 chore: bump version to 3.13.2 2026-03-15 14:11:22 +05:30
ARUNAVO RAY
e26ed3aa9c fix: rewrite migration 0009 for SQLite compatibility and add migration validation (#230)
SQLite rejects ALTER TABLE ADD COLUMN with expression defaults like
DEFAULT (unixepoch()), which Drizzle-kit generated for the imported_at
column. This broke upgrades from v3.12.x to v3.13.0 (#228, #229).

Changes:
- Rewrite migration 0009 using table-recreation pattern (CREATE, INSERT
  SELECT, DROP, RENAME) instead of ALTER TABLE
- Add migration validation script with SQLite-specific lint rules that
  catch known invalid patterns before they ship
- Add upgrade-path testing with seeded data and verification fixtures
- Add runtime repair for users whose migration record may be stale
- Add explicit migration validation step to CI workflow

Fixes #228
Fixes #229
2026-03-15 14:10:06 +05:30
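The failure mode described in this commit is easy to reproduce with any SQLite driver. A minimal sketch using Python's bundled sqlite3 module (with `strftime('%s','now')` standing in for `unixepoch()`, since older SQLite builds lack the latter) shows both the rejected ALTER TABLE and the table-recreation workaround:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE repositories (id TEXT PRIMARY KEY)")

# ALTER TABLE ... ADD COLUMN only accepts constant defaults; an expression
# default such as (unixepoch()) or (strftime('%s','now')) is rejected.
try:
    con.execute(
        "ALTER TABLE repositories "
        "ADD COLUMN imported_at INTEGER DEFAULT (strftime('%s','now'))"
    )
except sqlite3.OperationalError as exc:
    print(exc)  # "Cannot add a column with non-constant default"

# The table-recreation pattern sidesteps the restriction, because CREATE
# TABLE does allow expression defaults.
con.executescript("""
CREATE TABLE __new_repositories (
    id TEXT PRIMARY KEY NOT NULL,
    imported_at INTEGER DEFAULT (strftime('%s','now')) NOT NULL
);
INSERT INTO __new_repositories (id) SELECT id FROM repositories;
DROP TABLE repositories;
ALTER TABLE __new_repositories RENAME TO repositories;
""")
```

A migration lint rule like the one this commit adds can catch exactly this pattern (an expression default inside `ALTER TABLE ... ADD COLUMN`) before a release ships.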
Arunavo Ray
efb96b6e60 chore: bump version to 3.13.1 2026-03-15 09:54:44 +05:30
Arunavo Ray
342cafed0e fix: force Go 1.25.8 toolchain and update x/crypto for git-lfs build
The git-lfs go.mod contains a `toolchain go1.25.3` directive which
causes Go to auto-download and use Go 1.25.3 instead of our installed
1.25.8. Set GOTOOLCHAIN=local to force using the installed version.

Also update golang.org/x/crypto to latest before building to resolve
CVE-2025-47913 (needs >= 0.43.0, was pinned at 0.36.0).
2026-03-15 09:35:50 +05:30
Arunavo Ray
fc7c6b59d7 docs: update README to reference Gitea/Forgejo as supported targets 2026-03-15 09:33:41 +05:30
Arunavo Ray
a77ec0447a chore: bump version to 3.13.0 2026-03-15 09:28:01 +05:30
Arunavo Ray
82b5ac8160 fix: build git-lfs from source with Go 1.25.8 to resolve remaining CVEs
Git-lfs v3.7.1 pre-built binaries use Go 1.25.3, which is affected by
CVE-2025-68121 (critical), CVE-2026-27142, CVE-2026-25679, CVE-2025-61729,
CVE-2025-61726, and CVE-2025-47913 (golang.org/x/crypto).

Since no newer git-lfs release exists, compile from source in a dedicated
build stage using Go 1.25.8 (latest patched release). Only the final
binary is copied into the runner image.
2026-03-15 09:22:50 +05:30
ARUNAVO RAY
299659eca2 fix: resolve CVEs, upgrade to Astro v6, and harden API security (#227)
* fix: resolve CVEs, upgrade to Astro v6, and harden API security

Docker image CVE fixes:
- Install git-lfs v3.7.1 from GitHub releases (Go 1.25) instead of
  Debian apt (Go 1.23.12), fixing CVE-2025-68121 and 8 other Go stdlib CVEs
- Strip build-only packages (esbuild, vite, rollup, svgo, tailwindcss)
  from production image, eliminating 9 esbuild Go stdlib CVEs

Dependency upgrades:
- Astro v5 → v6 (includes Vite 7, Zod 4)
- Remove legacy content config (src/content/config.ts)
- Update HealthResponse type for simplified health endpoint
- npm overrides for fast-xml-parser ≥5.3.6, devalue ≥5.6.2,
  node-forge ≥1.3.2, svgo ≥4.0.1, rollup ≥4.59.0

API security hardening:
- /api/auth/debug: dev-only, require auth, remove user-creation POST,
  strip trustedOrigins/databaseConfig from response
- /api/auth/check-users: return boolean hasUsers instead of exact count
- /api/cleanup/auto: require authentication, remove per-user details
- /api/health: remove OS version, memory, uptime from response
- /api/config: validate Gitea URL protocol (http/https only)
- BETTER_AUTH_SECRET: log security warning when using insecure defaults
- generateRandomString: replace Math.random() with crypto.getRandomValues()
- hashValue: add random salt and timing-safe verification

* repositories: migrate table to tanstack

* Revert "repositories: migrate table to tanstack"

This reverts commit a544b29e6d.

* fixed lock file
2026-03-15 09:19:24 +05:30
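The last two hardening bullets describe standard techniques. A sketch of the same ideas in Python (the project's actual implementation is TypeScript, and these helper names are illustrative, not the project's API):

```python
import hashlib
import hmac
import secrets

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def generate_random_string(length: int = 32) -> str:
    # secrets draws from the OS CSPRNG, analogous to replacing
    # Math.random() with crypto.getRandomValues()
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def hash_value(value: str) -> str:
    # Random per-value salt, stored alongside the digest
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return f"{salt}:{digest}"

def verify_value(value: str, stored: str) -> bool:
    salt, digest = stored.split(":", 1)
    candidate = hashlib.sha256((salt + value).encode()).hexdigest()
    # Constant-time comparison avoids leaking match position via timing
    return hmac.compare_digest(candidate, digest)
```

The salt means two hashes of the same value never match each other, and the constant-time comparison defeats timing side channels during verification.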
ARUNAVO RAY
6f53a3ed41 feat: add importedAt-based repository sorting (#226)
* repositories: add importedAt sorting

* repositories: use tanstack table for repo list
2026-03-15 08:52:45 +05:30
ARUNAVO RAY
1bca7df5ab feat: import repo topics and description into Gitea (#224)
* lib: sync repo topics and descriptions

* lib: harden metadata sync for existing repos
2026-03-15 08:22:44 +05:30
Arunavo Ray
b5210c3916 updated packages 2026-03-15 07:52:34 +05:30
ARUNAVO RAY
755647e29c scripts: add startup repair progress logs (#223) 2026-03-14 17:44:52 +05:30
dependabot[bot]
018c9d1a23 build(deps): bump devalue (#220) 2026-03-13 00:17:30 +05:30
Arunavo Ray
c89011819f chore: sync version to 3.12.5 2026-03-07 07:00:30 +05:30
ARUNAVO RAY
c00d48199b fix: gracefully handle SAML-protected orgs during GitHub import (#217) (#218) 2026-03-07 06:57:28 +05:30
ARUNAVO RAY
de28469210 nix: refresh bun deps and ci flake trust (#216) 2026-03-06 12:31:51 +05:30
github-actions[bot]
0e2f83fee0 chore: sync version to 3.12.4 2026-03-06 05:10:04 +00:00
ARUNAVO RAY
1dd3dea231 fix preserve strategy fork owner routing (#215) 2026-03-06 10:15:47 +05:30
Arunavo Ray
db783c4225 nix: reduce bun install CI stalls 2026-03-06 09:41:22 +05:30
github-actions[bot]
8a4716bdbd chore: sync version to 3.12.3 2026-03-06 03:35:40 +00:00
Arunavo Ray
9d37966c10 ci: only run nix flake check when nix files change 2026-03-06 09:03:32 +05:30
Arunavo Ray
ac16ae56ea ci: increase workflow timeouts to 25m and upgrade CodeQL Action to v4 2026-03-06 08:55:11 +05:30
Arunavo Ray
df3e665978 fix: bump Bun to 1.3.10 and harden startup for non-AVX CPUs (#213)
Bun 1.3.9 crashes with a segfault on CPUs without AVX support due to a
WASM IPInt bug (oven-sh/bun#27340), fixed in 1.3.10 via oven-sh/bun#26922.

- Bump Bun from 1.3.9 to 1.3.10 in Dockerfile, CI workflows, and packageManager
- Skip env config script when no GitHub/Gitea env vars are set
- Make startup scripts (env-config, recovery, repair) fault-tolerant so
  a crash in a non-critical script doesn't abort the entrypoint via set -e
2026-03-06 08:19:44 +05:30
github-actions[bot]
8a26764d2c chore: sync version to 3.12.2 2026-03-05 04:34:51 +00:00
ARUNAVO RAY
ce365a706e ci: persist release version to main (#212) 2026-03-05 09:55:59 +05:30
52 changed files with 4573 additions and 1684 deletions


@@ -45,6 +45,7 @@ This workflow builds Docker images on pushes and pull requests, and pushes to Gi
- Creates multiple tags for each image (latest, semver, sha)
- Auto-syncs `package.json` version from `v*` tags during release builds
- Validates release tags use semver format before building
+- After tag builds succeed, writes the same version back to `main/package.json`
### Docker Security Scan (`docker-scan.yml`)


@@ -24,7 +24,7 @@ jobs:
build-and-test:
name: Build and Test Astro Project
runs-on: ubuntu-latest
-timeout-minutes: 10
+timeout-minutes: 25
steps:
- name: Checkout repository
@@ -33,7 +33,7 @@ jobs:
- name: Setup Bun
uses: oven-sh/setup-bun@v1
with:
-bun-version: '1.3.6'
+bun-version: '1.3.10'
- name: Check lockfile and install dependencies
run: |
@@ -48,6 +48,12 @@ jobs:
- name: Run tests
run: bun test --coverage
- name: Check Drizzle migrations
run: bun run db:check
+- name: Validate migrations (SQLite lint + upgrade path)
+run: bun test:migrations
- name: Build Astro project
run: bunx --bun astro build


@@ -36,7 +36,7 @@ env:
jobs:
docker:
runs-on: ubuntu-latest
-timeout-minutes: 10
+timeout-minutes: 25
permissions:
contents: write
@@ -253,8 +253,49 @@ jobs:
# Upload security scan results to GitHub Security tab
- name: Upload Docker Scout scan results to GitHub Security tab
-uses: github/codeql-action/upload-sarif@v3
+uses: github/codeql-action/upload-sarif@v4
if: always()
continue-on-error: true
with:
sarif_file: scout-results.sarif
+sync-version-main:
+name: Sync package.json version back to main
+if: startsWith(github.ref, 'refs/tags/v')
+runs-on: ubuntu-latest
+needs: docker
+permissions:
+contents: write
+steps:
+- name: Checkout default branch
+uses: actions/checkout@v4
+with:
+ref: ${{ github.event.repository.default_branch }}
+- name: Update package.json version on main
+env:
+TAG_VERSION: ${{ github.ref_name }}
+TARGET_BRANCH: ${{ github.event.repository.default_branch }}
+run: |
+if [[ ! "$TAG_VERSION" =~ ^v[0-9]+\.[0-9]+\.[0-9]+([.-][0-9A-Za-z.-]+)?(\+[0-9A-Za-z.-]+)?$ ]]; then
+echo "::error::Release tag '${TAG_VERSION}' is invalid. Expected semver tag format like v1.2.3 or v1.2.3-rc.1"
+exit 1
+fi
+APP_VERSION="${TAG_VERSION#v}"
+echo "Syncing ${TARGET_BRANCH}/package.json to ${APP_VERSION}"
+jq --arg version "${APP_VERSION}" '.version = $version' package.json > package.json.tmp
+mv package.json.tmp package.json
+if git diff --quiet -- package.json; then
+echo "package.json on ${TARGET_BRANCH} already at ${APP_VERSION}; nothing to commit."
+exit 0
+fi
+git config user.name "github-actions[bot]"
+git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
+git add package.json
+git commit -m "chore: sync version to ${APP_VERSION}"
+git push origin "HEAD:${TARGET_BRANCH}"
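The semver guard in the `run` step above can be sanity-checked outside CI; the identical pattern is accepted by Python's re engine (bash `[[ =~ ]]` uses POSIX ERE, which agrees with Python re for this expression):

```python
import re

# Same pattern as the workflow's tag-validation check
SEMVER_TAG = re.compile(
    r"^v[0-9]+\.[0-9]+\.[0-9]+([.-][0-9A-Za-z.-]+)?(\+[0-9A-Za-z.-]+)?$"
)

# Tags CI accepts: plain releases, pre-releases, build metadata
for tag in ["v1.2.3", "v3.13.2", "v1.2.3-rc.1", "v1.2.3+build.5"]:
    assert SEMVER_TAG.match(tag), tag

# Tags CI rejects with ::error:: and exit 1
for tag in ["1.2.3", "v1.2", "latest", "v1.2.3 "]:
    assert not SEMVER_TAG.match(tag), tag
```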


@@ -40,13 +40,13 @@ env:
FAKE_GITHUB_PORT: 4580
GIT_SERVER_PORT: 4590
APP_PORT: 4321
-BUN_VERSION: "1.3.6"
+BUN_VERSION: "1.3.10"
jobs:
e2e-tests:
name: E2E Integration Tests
runs-on: ubuntu-latest
-timeout-minutes: 10
+timeout-minutes: 25
steps:
- name: Checkout repository


@@ -21,7 +21,7 @@ jobs:
yamllint:
name: Lint YAML
runs-on: ubuntu-latest
-timeout-minutes: 10
+timeout-minutes: 25
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
@@ -36,7 +36,7 @@ jobs:
helm-template:
name: Helm lint & template
runs-on: ubuntu-latest
-timeout-minutes: 10
+timeout-minutes: 25
steps:
- uses: actions/checkout@v4
- name: Setup Helm


@@ -5,18 +5,18 @@ on:
branches: [main, nix]
tags:
- 'v*'
-paths-ignore:
-- 'README.md'
-- 'docs/**'
-- 'www/**'
-- 'helm/**'
+paths:
+- 'flake.nix'
+- 'flake.lock'
+- 'bun.nix'
+- '.github/workflows/nix-build.yml'
pull_request:
branches: [main]
-paths-ignore:
-- 'README.md'
-- 'docs/**'
-- 'www/**'
-- 'helm/**'
+paths:
+- 'flake.nix'
+- 'flake.lock'
+- 'bun.nix'
+- '.github/workflows/nix-build.yml'
permissions:
contents: read
@@ -24,7 +24,11 @@ permissions:
jobs:
check:
runs-on: ubuntu-latest
-timeout-minutes: 10
+timeout-minutes: 45
+env:
+NIX_CONFIG: |
+accept-flake-config = true
+access-tokens = github.com=${{ secrets.GITHUB_TOKEN }}
steps:
- uses: actions/checkout@v4
@@ -36,11 +40,11 @@ jobs:
uses: DeterminateSystems/magic-nix-cache-action@main
- name: Check flake
-run: nix flake check
+run: nix flake check --accept-flake-config
- name: Show flake info
-run: nix flake show
+run: nix flake show --accept-flake-config
- name: Build package
if: github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/v')
-run: nix build --print-build-logs
+run: nix build --print-build-logs --accept-flake-config


@@ -1,6 +1,6 @@
# syntax=docker/dockerfile:1.4
-FROM oven/bun:1.3.9-debian AS base
+FROM oven/bun:1.3.10-debian AS base
WORKDIR /app
RUN apt-get update && apt-get install -y --no-install-recommends \
python3 make g++ gcc wget sqlite3 openssl ca-certificates \
@@ -26,18 +26,49 @@ COPY bun.lock* ./
RUN bun install --production --omit=peer --frozen-lockfile
# ----------------------------
-FROM oven/bun:1.3.9-debian AS runner
+# Build git-lfs from source with patched Go to resolve Go stdlib CVEs
+FROM debian:trixie-slim AS git-lfs-builder
+RUN apt-get update && apt-get install -y --no-install-recommends \
+wget ca-certificates git make \
+&& rm -rf /var/lib/apt/lists/*
+ARG GO_VERSION=1.25.8
+ARG GIT_LFS_VERSION=3.7.1
+RUN ARCH="$(dpkg --print-architecture)" \
+&& wget -qO /tmp/go.tar.gz "https://go.dev/dl/go${GO_VERSION}.linux-${ARCH}.tar.gz" \
+&& tar -C /usr/local -xzf /tmp/go.tar.gz \
+&& rm /tmp/go.tar.gz
+ENV PATH="/usr/local/go/bin:/root/go/bin:${PATH}"
+# Force using our installed Go (not the version in go.mod toolchain directive)
+ENV GOTOOLCHAIN=local
+RUN git clone --branch "v${GIT_LFS_VERSION}" --depth 1 https://github.com/git-lfs/git-lfs.git /tmp/git-lfs \
+&& cd /tmp/git-lfs \
+&& go get golang.org/x/crypto@latest \
+&& go mod tidy \
+&& make \
+&& install -m 755 /tmp/git-lfs/bin/git-lfs /usr/local/bin/git-lfs
+# ----------------------------
+FROM oven/bun:1.3.10-debian AS runner
WORKDIR /app
RUN apt-get update && apt-get install -y --no-install-recommends \
-git git-lfs wget sqlite3 openssl ca-certificates \
-&& git lfs install \
+git wget sqlite3 openssl ca-certificates \
&& rm -rf /var/lib/apt/lists/*
+COPY --from=git-lfs-builder /usr/local/bin/git-lfs /usr/local/bin/git-lfs
+RUN git lfs install
COPY --from=pruner /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/docker-entrypoint.sh ./docker-entrypoint.sh
COPY --from=builder /app/drizzle ./drizzle
+# Remove build-only packages that are not needed at runtime
+# (esbuild, vite, rollup, tailwind, svgo — all only used during `astro build`)
+RUN rm -rf node_modules/esbuild node_modules/@esbuild \
+node_modules/rollup node_modules/@rollup \
+node_modules/vite node_modules/svgo \
+node_modules/@tailwindcss/vite \
+node_modules/tailwindcss
ENV NODE_ENV=production
ENV HOST=0.0.0.0
ENV PORT=4321


@@ -1,7 +1,7 @@
<p align="center">
<img src=".github/assets/logo.png" alt="Gitea Mirror Logo" width="120" />
<h1>Gitea Mirror</h1>
-<p><i>Automatically mirror repositories from GitHub to your self-hosted Gitea instance.</i></p>
+<p><i>Automatically mirror repositories from GitHub to your self-hosted Gitea/Forgejo instance.</i></p>
<p align="center">
<a href="https://github.com/RayLabsHQ/gitea-mirror/releases/latest"><img src="https://img.shields.io/github/v/tag/RayLabsHQ/gitea-mirror?label=release" alt="release"/></a>
<a href="https://github.com/RayLabsHQ/gitea-mirror/actions/workflows/astro-build-test.yml"><img src="https://img.shields.io/github/actions/workflow/status/RayLabsHQ/gitea-mirror/astro-build-test.yml?branch=main" alt="build"/></a>
@@ -19,7 +19,7 @@ docker compose -f docker-compose.alt.yml up -d
# Access at http://localhost:4321
```
-First user signup becomes admin. Configure GitHub and Gitea through the web interface!
+First user signup becomes admin. Configure GitHub and Gitea/Forgejo through the web interface!
<p align="center">
<img src=".github/assets/dashboard.png" alt="Dashboard" width="600" />
@@ -28,7 +28,7 @@ First user signup becomes admin. Configure GitHub and Gitea through the web inte
## ✨ Features
-- 🔁 Mirror public, private, and starred GitHub repos to Gitea
+- 🔁 Mirror public, private, and starred GitHub repos to Gitea/Forgejo
- 🏢 Mirror entire organizations with flexible strategies
- 🎯 Custom destination control for repos and organizations
- 📦 **Git LFS support** - Mirror large files with Git LFS
@@ -199,12 +199,12 @@ bun run dev
1. **First Time Setup**
- Navigate to http://localhost:4321
- Create admin account (first user signup)
-- Configure GitHub and Gitea connections
+- Configure GitHub and Gitea/Forgejo connections
2. **Mirror Strategies**
- **Preserve Structure**: Maintains GitHub organization structure
-- **Single Organization**: All repos go to one Gitea organization
-- **Flat User**: All repos under your Gitea user account
+- **Single Organization**: All repos go to one Gitea/Forgejo organization
+- **Flat User**: All repos under your Gitea/Forgejo user account
- **Mixed Mode**: Personal repos in one org, organization repos preserve structure
3. **Customization**
@@ -217,13 +217,13 @@ bun run dev
### Git LFS (Large File Storage)
Mirror Git LFS objects along with your repositories:
- Enable "Mirror LFS" option in Settings → Mirror Options
-- Requires Gitea server with LFS enabled (`LFS_START_SERVER = true`)
+- Requires Gitea/Forgejo server with LFS enabled (`LFS_START_SERVER = true`)
- Requires Git v2.1.2+ on the server
### Metadata Mirroring
-Transfer complete repository metadata from GitHub to Gitea:
+Transfer complete repository metadata from GitHub to Gitea/Forgejo:
- **Issues** - Mirror all issues with comments and labels
-- **Pull Requests** - Transfer PR discussions to Gitea
+- **Pull Requests** - Transfer PR discussions to Gitea/Forgejo
- **Labels** - Preserve repository labels
- **Milestones** - Keep project milestones
- **Wiki** - Mirror wiki content
@@ -243,7 +243,7 @@ Gitea Mirror provides powerful automatic synchronization features:
#### Features (v3.4.0+)
- **Auto-discovery**: Automatically discovers and imports new GitHub repositories
- **Repository cleanup**: Removes repositories that no longer exist in GitHub
-- **Proper intervals**: Mirrors respect your configured sync intervals (not Gitea's default 24h)
+- **Proper intervals**: Mirrors respect your configured sync intervals (not Gitea/Forgejo's default 24h)
- **Smart scheduling**: Only syncs repositories that need updating
- **Auto-start on boot** (v3.5.3+): Automatically imports and mirrors all repositories when `SCHEDULE_ENABLED=true` or `GITEA_MIRROR_INTERVAL` is set - no manual clicks required!
@@ -254,7 +254,7 @@ Navigate to the Configuration page and enable "Automatic Syncing" with your pref
**🚀 Set it and forget it!** With these environment variables, Gitea Mirror will automatically:
1. **Import** all your GitHub repositories on startup (no manual import needed!)
-2. **Mirror** them to Gitea immediately
+2. **Mirror** them to Gitea/Forgejo immediately
3. **Keep them synchronized** based on your interval
4. **Auto-discover** new repos you create/star on GitHub
5. **Clean up** repos you delete from GitHub
@@ -284,16 +284,16 @@ CLEANUP_DRY_RUN=false # Set to true to test without changes
- **Auto-Start**: When `SCHEDULE_ENABLED=true` or `GITEA_MIRROR_INTERVAL` is set, the service automatically imports all GitHub repositories and mirrors them on startup. No manual "Import" or "Mirror" button clicks required!
- The scheduler checks every minute for tasks to run. The `GITEA_MIRROR_INTERVAL` determines how often each repository is actually synced. For example, with `8h`, each repo syncs every 8 hours from its last successful sync.
- **Large repo bootstrap**: For first-time mirroring of large repositories (especially with metadata/LFS), avoid very short intervals (for example `5m`). Start with a longer interval (`1h` to `8h`) or temporarily disable scheduling during the initial import/mirror run, then enable your regular interval after the first pass completes.
-- **Why this matters**: If your Gitea instance takes a long time to complete migrations/imports, aggressive schedules can cause repeated retries and duplicate-looking mirror attempts.
+- **Why this matters**: If your Gitea/Forgejo instance takes a long time to complete migrations/imports, aggressive schedules can cause repeated retries and duplicate-looking mirror attempts.
**🛡️ Backup Protection Features**:
- **No Accidental Deletions**: Repository cleanup is automatically skipped if GitHub is inaccessible (account deleted, banned, or API errors)
- **Archive Never Deletes Data**: The `archive` action preserves all repository data:
-- Regular repositories: Made read-only using Gitea's archive feature
-- Mirror repositories: Renamed with `archived-` prefix (Gitea API limitation prevents archiving mirrors)
+- Regular repositories: Made read-only using Gitea/Forgejo's archive feature
+- Mirror repositories: Renamed with `archived-` prefix (Gitea/Forgejo API limitation prevents archiving mirrors)
- Failed operations: Repository remains fully accessible even if marking as archived fails
-- **Manual Sync on Demand**: Archived mirrors stay in Gitea with automatic syncs disabled; trigger `Manual Sync` from the Repositories page whenever you need fresh data.
-- **The Whole Point of Backups**: Your Gitea mirrors are preserved even when GitHub sources disappear - that's why you have backups!
+- **Manual Sync on Demand**: Archived mirrors stay in Gitea/Forgejo with automatic syncs disabled; trigger `Manual Sync` from the Repositories page whenever you need fresh data.
+- **The Whole Point of Backups**: Your Gitea/Forgejo mirrors are preserved even when GitHub sources disappear - that's why you have backups!
- **Strongly Recommended**: Always use `CLEANUP_ORPHANED_REPO_ACTION=archive` (default) instead of `delete`
## Troubleshooting
@@ -309,7 +309,7 @@ For existing pull-mirror repositories, changing the GitHub token in Gitea Mirror
If sync logs show authentication failures (for example `terminal prompts disabled`), do one of the following:
1. In Gitea/Forgejo, open repository **Settings → Mirror Settings** and update the mirror authorization password/token.
-2. Or delete and re-mirror the repository from Gitea Mirror so it is recreated with current credentials.
+2. Or delete and re-mirror the repository so it is recreated with current credentials.
### Re-sync Metadata After Changing Mirror Options
@@ -334,7 +334,7 @@ If your Gitea/Forgejo server has `mirror.MIN_INTERVAL` set to a higher value (fo
To avoid this:
1. Set Gitea Mirror interval to a value greater than or equal to your server `MIN_INTERVAL`.
-2. Do not rely on manual per-repository mirror interval edits in Gitea/Forgejo, because Gitea Mirror will overwrite them on sync.
+2. Do not rely on manual per-repository mirror interval edits in Gitea/Forgejo, as they will be overwritten on sync.
## Development
@@ -356,13 +356,13 @@ bun run build
- **Frontend**: Astro, React, Shadcn UI, Tailwind CSS v4
- **Backend**: Bun runtime, SQLite, Drizzle ORM
-- **APIs**: GitHub (Octokit), Gitea REST API
+- **APIs**: GitHub (Octokit), Gitea/Forgejo REST API
- **Auth**: Better Auth with session-based authentication
## Security
### Token Encryption
-- All GitHub and Gitea API tokens are encrypted at rest using AES-256-GCM
+- All GitHub and Gitea/Forgejo API tokens are encrypted at rest using AES-256-GCM
- Encryption is automatic and transparent to users
- Set `ENCRYPTION_SECRET` environment variable for production deployments
- Falls back to `BETTER_AUTH_SECRET` if not set
@@ -456,13 +456,13 @@ Gitea Mirror can also act as an OIDC provider for other applications. Register O
## Known Limitations
### Pull Request Mirroring Implementation
-Pull requests **cannot be created as actual PRs** in Gitea due to API limitations. Instead, they are mirrored as **enriched issues** with comprehensive metadata.
+Pull requests **cannot be created as actual PRs** in Gitea/Forgejo due to API limitations. Instead, they are mirrored as **enriched issues** with comprehensive metadata.
**Why real PR mirroring isn't possible:**
-- Gitea's API doesn't support creating pull requests from external sources
+- Gitea/Forgejo's API doesn't support creating pull requests from external sources
- Real PRs require actual Git branches with commits to exist in the repository
- Would require complex branch synchronization and commit replication
-- The mirror relationship is one-way (GitHub → Gitea) for repository content
+- The mirror relationship is one-way (GitHub → Gitea/Forgejo) for repository content
**How we handle Pull Requests:**
PRs are mirrored as issues with rich metadata including:
@@ -476,7 +476,7 @@ PRs are mirrored as issues with rich metadata including:
- 🔀 Base and head branch information
- ✅ Merge status tracking
-This approach preserves all important PR information while working within Gitea's API constraints. The PRs appear in Gitea's issue tracker with clear visual distinction and comprehensive details.
+This approach preserves all important PR information while working within Gitea/Forgejo's API constraints. The PRs appear in the issue tracker with clear visual distinction and comprehensive details.
## Contributing

bun.lock (1125 changes): diff suppressed because it is too large

bun.nix (30 changes)

@@ -881,6 +881,10 @@
url = "https://registry.npmjs.org/@oslojs/encoding/-/encoding-1.1.0.tgz";
hash = "sha512-70wQhgYmndg4GCPxPPxPGevRKqTIJ2Nh4OkiMWmDAVYsTQ+Ta7Sq+rPevXyXGdzr30/qZBnyOalCszoMxlyldQ==";
};
+"@playwright/test@1.58.2" = fetchurl {
+url = "https://registry.npmjs.org/@playwright/test/-/test-1.58.2.tgz";
+hash = "sha512-akea+6bHYBBfA9uQqSYmlJXn61cTa+jbO87xVLCWbTqbWadRVmhxlXATaOjOgcBaWU4ePo0wB41KMFv3o35IXA==";
+};
"@radix-ui/number@1.1.1" = fetchurl {
url = "https://registry.npmjs.org/@radix-ui/number/-/number-1.1.1.tgz";
hash = "sha512-MkKCwxlXTgz6CFoJx3pCwn07GKp36+aZyu/u2Ln2VrA5DcdyCZkASEDBTd8x5whTQQL5CiYf4prXKLcgQdv29g==";
@@ -1385,6 +1389,10 @@
url = "https://registry.npmjs.org/@types/node/-/node-22.15.23.tgz";
hash = "sha512-7Ec1zaFPF4RJ0eXu1YT/xgiebqwqoJz8rYPDi/O2BcZ++Wpt0Kq9cl0eg6NN6bYbPnR67ZLo7St5Q3UK0SnARw==";
};
+"@types/node@25.3.2" = fetchurl {
+url = "https://registry.npmjs.org/@types/node/-/node-25.3.2.tgz";
+hash = "sha512-RpV6r/ij22zRRdyBPcxDeKAzH43phWVKEjL2iksqo1Vz3CuBUrgmPpPhALKiRfU7OMCmeeO9vECBMsV0hMTG8Q==";
+};
"@types/react-dom@19.2.3" = fetchurl {
url = "https://registry.npmjs.org/@types/react-dom/-/react-dom-19.2.3.tgz";
hash = "sha512-jp2L/eY6fn+KgVVQAOqYItbF0VY/YApe5Mz2F0aykSO8gx31bYCZyvSeYxCHKvzHG5eZjc+zyaS5BrBWya2+kQ==";
@@ -1565,9 +1573,9 @@
url = "https://registry.npmjs.org/astring/-/astring-1.9.0.tgz";
hash = "sha512-LElXdjswlqjWrPpJFg1Fx4wpkOCxj1TDHlSV4PlaRxHGWko024xICaa97ZkMfs6DRKlCguiAI+rbXv5GWwXIkg==";
};
-"astro@5.17.3" = fetchurl {
-url = "https://registry.npmjs.org/astro/-/astro-5.17.3.tgz";
-hash = "sha512-69dcfPe8LsHzklwj+hl+vunWUbpMB6pmg35mACjetxbJeUNNys90JaBM8ZiwsPK689SAj/4Zqb1ayaANls9/MA==";
+"astro@5.18.0" = fetchurl {
+url = "https://registry.npmjs.org/astro/-/astro-5.18.0.tgz";
+hash = "sha512-CHiohwJIS4L0G6/IzE1Fx3dgWqXBCXus/od0eGUfxrZJD2um2pE7ehclMmgL/fXqbU7NfE1Ze2pq34h2QaA6iQ==";
};
"axobject-query@4.1.0" = fetchurl {
url = "https://registry.npmjs.org/axobject-query/-/axobject-query-4.1.0.tgz";
@@ -2093,6 +2101,10 @@
url = "https://registry.npmjs.org/fresh/-/fresh-2.0.0.tgz";
hash = "sha512-Rx/WycZ60HOaqLKAi6cHRKKI7zxWbJ31MhntmtwMoaTeF7XFH9hhBp8vITaMidfljRQ6eYWCKkaTK+ykVJHP2A==";
};
+"fsevents@2.3.2" = fetchurl {
+url = "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz";
+hash = "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==";
+};
"fsevents@2.3.3" = fetchurl {
url = "https://registry.npmjs.org/fsevents/-/fsevents-2.3.3.tgz";
hash = "sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==";
@@ -2913,6 +2925,14 @@
url = "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz";
hash = "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==";
};
+"playwright-core@1.58.2" = fetchurl {
+url = "https://registry.npmjs.org/playwright-core/-/playwright-core-1.58.2.tgz";
+hash = "sha512-yZkEtftgwS8CsfYo7nm0KE8jsvm6i/PTgVtB8DL726wNf6H2IMsDuxCpJj59KDaxCtSnrWan2AeDqM7JBaultg==";
+};
+"playwright@1.58.2" = fetchurl {
+url = "https://registry.npmjs.org/playwright/-/playwright-1.58.2.tgz";
+hash = "sha512-vA30H8Nvkq/cPBnNw4Q8TWz1EJyqgpuinBcHET0YVJVFldr8JDNiU9LaWAE1KqSkRYazuaBhTpB5ZzShOezQ6A==";
+};
"postcss@8.5.3" = fetchurl {
url = "https://registry.npmjs.org/postcss/-/postcss-8.5.3.tgz";
hash = "sha512-dle9A3yYxlBSrt8Fu+IpjGT8SY8hN0mlaA6GY8t0P5PjIOZemULz/E2Bnm/2dcUOena75OTNkHI76uZBNUUq3A==";
@@ -3405,6 +3425,10 @@
url = "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz";
hash = "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==";
};
+"undici-types@7.18.2" = fetchurl {
+url = "https://registry.npmjs.org/undici-types/-/undici-types-7.18.2.tgz";
+hash = "sha512-AsuCzffGHJybSaRrmr5eHr81mwJU3kjw6M+uprWvCXiNeN9SOGwQ3Jn8jb8m3Z6izVgknn1R0FTCEAP2QrLY/w==";
+};
"undici@7.22.0" = fetchurl {
url = "https://registry.npmjs.org/undici/-/undici-7.22.0.tgz";
hash = "sha512-RqslV2Us5BrllB+JeiZnK4peryVTndy9Dnqq62S3yYRRTj0tFQCwEniUy2167skdGOy3vqRzEvl1Dm4sV2ReDg==";


@@ -139,16 +139,29 @@ fi
# Initialize configuration from environment variables if provided
echo "Checking for environment configuration..."
-if [ -f "dist/scripts/startup-env-config.js" ]; then
-echo "Loading configuration from environment variables..."
-bun dist/scripts/startup-env-config.js
-ENV_CONFIG_EXIT_CODE=$?
-elif [ -f "scripts/startup-env-config.ts" ]; then
-echo "Loading configuration from environment variables..."
-bun scripts/startup-env-config.ts
-ENV_CONFIG_EXIT_CODE=$?
+# Only run the env config script if relevant env vars are set
+# This avoids spawning a heavy Bun process on memory-constrained systems
+HAS_ENV_CONFIG=false
+if [ -n "$GITHUB_USERNAME" ] || [ -n "$GITHUB_TOKEN" ] || [ -n "$GITEA_URL" ] || [ -n "$GITEA_USERNAME" ] || [ -n "$GITEA_TOKEN" ]; then
+HAS_ENV_CONFIG=true
+fi
+if [ "$HAS_ENV_CONFIG" = "true" ]; then
+if [ -f "dist/scripts/startup-env-config.js" ]; then
+echo "Loading configuration from environment variables..."
+bun dist/scripts/startup-env-config.js || ENV_CONFIG_EXIT_CODE=$?
+ENV_CONFIG_EXIT_CODE=${ENV_CONFIG_EXIT_CODE:-0}
+elif [ -f "scripts/startup-env-config.ts" ]; then
+echo "Loading configuration from environment variables..."
+bun scripts/startup-env-config.ts || ENV_CONFIG_EXIT_CODE=$?
+ENV_CONFIG_EXIT_CODE=${ENV_CONFIG_EXIT_CODE:-0}
+else
+echo "Environment configuration script not found. Skipping."
+ENV_CONFIG_EXIT_CODE=0
+fi
else
-echo "Environment configuration script not found. Skipping."
+echo "No GitHub/Gitea environment variables found, skipping env config initialization."
ENV_CONFIG_EXIT_CODE=0
fi
@@ -161,17 +174,15 @@ fi
# Run startup recovery to handle any interrupted jobs
echo "Running startup recovery..."
+RECOVERY_EXIT_CODE=0
if [ -f "dist/scripts/startup-recovery.js" ]; then
echo "Running startup recovery using compiled script..."
-bun dist/scripts/startup-recovery.js --timeout=30000
-RECOVERY_EXIT_CODE=$?
+bun dist/scripts/startup-recovery.js --timeout=30000 || RECOVERY_EXIT_CODE=$?
elif [ -f "scripts/startup-recovery.ts" ]; then
echo "Running startup recovery using TypeScript script..."
-bun scripts/startup-recovery.ts --timeout=30000
-RECOVERY_EXIT_CODE=$?
+bun scripts/startup-recovery.ts --timeout=30000 || RECOVERY_EXIT_CODE=$?
else
echo "Warning: Startup recovery script not found. Skipping recovery."
-RECOVERY_EXIT_CODE=0
fi
# Log recovery result
@@ -185,17 +196,15 @@ fi
# Run repository status repair to fix any inconsistent mirroring states
echo "Running repository status repair..."
+REPAIR_EXIT_CODE=0
if [ -f "dist/scripts/repair-mirrored-repos.js" ]; then
echo "Running repository repair using compiled script..."
-bun dist/scripts/repair-mirrored-repos.js --startup
-REPAIR_EXIT_CODE=$?
+bun dist/scripts/repair-mirrored-repos.js --startup || REPAIR_EXIT_CODE=$?
elif [ -f "scripts/repair-mirrored-repos.ts" ]; then
echo "Running repository repair using TypeScript script..."
-bun scripts/repair-mirrored-repos.ts --startup
-REPAIR_EXIT_CODE=$?
+bun scripts/repair-mirrored-repos.ts --startup || REPAIR_EXIT_CODE=$?
else
echo "Warning: Repository repair script not found. Skipping repair."
-REPAIR_EXIT_CODE=0
fi
# Log repair result
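The `cmd || VAR=$?` idiom introduced throughout this entrypoint rewrite is what keeps a failing helper script from aborting the whole container under `set -e`. A small Python harness (assuming a POSIX `sh` is available) demonstrates the difference:

```python
import subprocess

# Under `set -e`, a bare failing command aborts the whole script...
aborted = subprocess.run(
    ["sh", "-c", "set -e\nfalse\necho reached"],
    capture_output=True, text=True,
)

# ...but `cmd || VAR=$?` records the exit code without tripping set -e,
# so later steps (and the final exec) still run.
tolerant = subprocess.run(
    ["sh", "-c", "set -e\nfalse || RC=$?\necho rc=${RC:-0}\necho reached"],
    capture_output=True, text=True,
)

print(aborted.returncode, repr(aborted.stdout))   # non-zero exit, no "reached"
print(tolerant.returncode, repr(tolerant.stdout)) # exit 0, "rc=1" then "reached"
```

Note that `||` only assigns `RC` on failure, which is why the rewritten entrypoint also initializes each `*_EXIT_CODE` variable to 0 (or expands it with `${VAR:-0}`) before use.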


@@ -328,6 +328,7 @@ git push origin vX.Y.Z
5. **CI version sync (automatic)**:
- On `v*` tags, release CI updates `package.json` version in the build context from the tag (`vX.Y.Z` -> `X.Y.Z`), so Docker release images always report the correct app version.
+- After the release build succeeds, CI commits the same `package.json` version back to `main` automatically.
## Contributing


@@ -0,0 +1,149 @@
CREATE TABLE `__new_repositories` (
`id` text PRIMARY KEY NOT NULL,
`user_id` text NOT NULL,
`config_id` text NOT NULL,
`name` text NOT NULL,
`full_name` text NOT NULL,
`normalized_full_name` text NOT NULL,
`url` text NOT NULL,
`clone_url` text NOT NULL,
`owner` text NOT NULL,
`organization` text,
`mirrored_location` text DEFAULT '',
`is_private` integer DEFAULT false NOT NULL,
`is_fork` integer DEFAULT false NOT NULL,
`forked_from` text,
`has_issues` integer DEFAULT false NOT NULL,
`is_starred` integer DEFAULT false NOT NULL,
`is_archived` integer DEFAULT false NOT NULL,
`size` integer DEFAULT 0 NOT NULL,
`has_lfs` integer DEFAULT false NOT NULL,
`has_submodules` integer DEFAULT false NOT NULL,
`language` text,
`description` text,
`default_branch` text NOT NULL,
`visibility` text DEFAULT 'public' NOT NULL,
`status` text DEFAULT 'imported' NOT NULL,
`last_mirrored` integer,
`error_message` text,
`destination_org` text,
`metadata` text,
`imported_at` integer DEFAULT (unixepoch()) NOT NULL,
`created_at` integer DEFAULT (unixepoch()) NOT NULL,
`updated_at` integer DEFAULT (unixepoch()) NOT NULL,
FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON UPDATE no action ON DELETE no action,
FOREIGN KEY (`config_id`) REFERENCES `configs`(`id`) ON UPDATE no action ON DELETE no action
);
--> statement-breakpoint
INSERT INTO `__new_repositories` (
`id`,
`user_id`,
`config_id`,
`name`,
`full_name`,
`normalized_full_name`,
`url`,
`clone_url`,
`owner`,
`organization`,
`mirrored_location`,
`is_private`,
`is_fork`,
`forked_from`,
`has_issues`,
`is_starred`,
`is_archived`,
`size`,
`has_lfs`,
`has_submodules`,
`language`,
`description`,
`default_branch`,
`visibility`,
`status`,
`last_mirrored`,
`error_message`,
`destination_org`,
`metadata`,
`imported_at`,
`created_at`,
`updated_at`
)
SELECT
`repositories`.`id`,
`repositories`.`user_id`,
`repositories`.`config_id`,
`repositories`.`name`,
`repositories`.`full_name`,
`repositories`.`normalized_full_name`,
`repositories`.`url`,
`repositories`.`clone_url`,
`repositories`.`owner`,
`repositories`.`organization`,
`repositories`.`mirrored_location`,
`repositories`.`is_private`,
`repositories`.`is_fork`,
`repositories`.`forked_from`,
`repositories`.`has_issues`,
`repositories`.`is_starred`,
`repositories`.`is_archived`,
`repositories`.`size`,
`repositories`.`has_lfs`,
`repositories`.`has_submodules`,
`repositories`.`language`,
`repositories`.`description`,
`repositories`.`default_branch`,
`repositories`.`visibility`,
`repositories`.`status`,
`repositories`.`last_mirrored`,
`repositories`.`error_message`,
`repositories`.`destination_org`,
`repositories`.`metadata`,
COALESCE(
(
SELECT MIN(`mj`.`timestamp`)
FROM `mirror_jobs` `mj`
WHERE `mj`.`user_id` = `repositories`.`user_id`
AND `mj`.`status` = 'imported'
AND (
(`mj`.`repository_id` IS NOT NULL AND `mj`.`repository_id` = `repositories`.`id`)
OR (
`mj`.`repository_id` IS NULL
AND `mj`.`repository_name` IS NOT NULL
AND (
lower(trim(`mj`.`repository_name`)) = `repositories`.`normalized_full_name`
OR lower(trim(`mj`.`repository_name`)) = lower(trim(`repositories`.`name`))
)
)
)
),
`repositories`.`created_at`,
unixepoch()
) AS `imported_at`,
`repositories`.`created_at`,
`repositories`.`updated_at`
FROM `repositories`;
--> statement-breakpoint
DROP TABLE `repositories`;
--> statement-breakpoint
ALTER TABLE `__new_repositories` RENAME TO `repositories`;
--> statement-breakpoint
CREATE INDEX `idx_repositories_user_id` ON `repositories` (`user_id`);
--> statement-breakpoint
CREATE INDEX `idx_repositories_config_id` ON `repositories` (`config_id`);
--> statement-breakpoint
CREATE INDEX `idx_repositories_status` ON `repositories` (`status`);
--> statement-breakpoint
CREATE INDEX `idx_repositories_owner` ON `repositories` (`owner`);
--> statement-breakpoint
CREATE INDEX `idx_repositories_organization` ON `repositories` (`organization`);
--> statement-breakpoint
CREATE INDEX `idx_repositories_is_fork` ON `repositories` (`is_fork`);
--> statement-breakpoint
CREATE INDEX `idx_repositories_is_starred` ON `repositories` (`is_starred`);
--> statement-breakpoint
CREATE INDEX `idx_repositories_user_imported_at` ON `repositories` (`user_id`,`imported_at`);
--> statement-breakpoint
CREATE UNIQUE INDEX `uniq_repositories_user_full_name` ON `repositories` (`user_id`,`full_name`);
--> statement-breakpoint
CREATE UNIQUE INDEX `uniq_repositories_user_normalized_full_name` ON `repositories` (`user_id`,`normalized_full_name`);
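The `COALESCE` in the backfill above resolves `imported_at` with a three-step precedence: the earliest matching `mirror_jobs` row with status `'imported'`, then the row's own `created_at`, then the current time. A plain TypeScript sketch of that precedence (the function and parameter names are illustrative, not from the codebase):

```typescript
// Mirrors the SQL COALESCE(MIN(job timestamps), created_at, unixepoch()).
function backfillImportedAt(
  jobTimestamps: number[],  // timestamps of matching 'imported' mirror_jobs rows
  createdAt: number | null, // repositories.created_at
  now: number,              // unixepoch() fallback
): number {
  const earliestJob = jobTimestamps.length > 0 ? Math.min(...jobTimestamps) : null;
  return earliestJob ?? createdAt ?? now;
}

console.log(backfillImportedAt([900, 1200], 1000, 2000)); // 900 (earliest job wins)
console.log(backfillImportedAt([], 1000, 2000));          // 1000 (falls back to created_at)
console.log(backfillImportedAt([], null, 2000));          // 2000 (falls back to "now")
```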

File diff suppressed because it is too large

@@ -64,6 +64,13 @@
"when": 1761802056073,
"tag": "0008_serious_thena",
"breakpoints": true
},
{
"idx": 9,
"version": "6",
"when": 1773542995732,
"tag": "0009_nervous_tyger_tiger",
"breakpoints": true
}
]
}

@@ -49,6 +49,20 @@
bunNix = ./bun.nix;
};
# bun2nix defaults to isolated installs on Linux, which can be
# very slow in CI for larger dependency trees and may appear to hang.
# Use hoisted linker and fail fast on lockfile drift.
bunInstallFlags = if pkgs.stdenv.hostPlatform.isDarwin then [
"--linker=hoisted"
"--backend=copyfile"
"--frozen-lockfile"
"--no-progress"
] else [
"--linker=hoisted"
"--frozen-lockfile"
"--no-progress"
];
# Let the bun2nix hook handle dependency installation via the
# pre-fetched cache, but skip its default build/check/install
# phases since we have custom ones.

@@ -1,7 +1,7 @@
{
"name": "gitea-mirror",
"type": "module",
"version": "3.10.1",
"version": "3.13.2",
"engines": {
"bun": ">=1.2.9"
},
@@ -34,6 +34,7 @@
"start": "bun dist/server/entry.mjs",
"start:fresh": "bun run cleanup-db && bun run manage-db init && bun dist/server/entry.mjs",
"test": "bun test",
"test:migrations": "bun scripts/validate-migrations.ts",
"test:watch": "bun test --watch",
"test:coverage": "bun test --coverage",
"test:e2e": "bash tests/e2e/run-e2e.sh",
@@ -44,14 +45,18 @@
},
"overrides": {
"@esbuild-kit/esm-loader": "npm:tsx@^4.21.0",
"devalue": "^5.5.0"
"devalue": "^5.6.4",
"fast-xml-parser": "^5.5.5",
"node-forge": "^1.3.3",
"svgo": "^4.0.1",
"rollup": ">=4.59.0"
},
"dependencies": {
"@astrojs/check": "^0.9.6",
"@astrojs/mdx": "4.3.13",
"@astrojs/node": "9.5.4",
"@astrojs/react": "^4.4.2",
"@better-auth/sso": "1.4.19",
"@astrojs/check": "^0.9.7",
"@astrojs/mdx": "5.0.0",
"@astrojs/node": "10.0.1",
"@astrojs/react": "^5.0.0",
"@better-auth/sso": "1.5.5",
"@octokit/plugin-throttling": "^11.0.3",
"@octokit/rest": "^22.0.1",
"@radix-ui/react-accordion": "^1.2.12",
@@ -73,13 +78,14 @@
"@radix-ui/react-tabs": "^1.1.13",
"@radix-ui/react-tooltip": "^1.2.8",
"@tailwindcss/vite": "^4.2.1",
"@tanstack/react-table": "^8.21.3",
"@tanstack/react-virtual": "^3.13.19",
"@types/canvas-confetti": "^1.9.0",
"@types/react": "^19.2.14",
"@types/react-dom": "^19.2.3",
"astro": "^5.18.0",
"astro": "^6.0.4",
"bcryptjs": "^3.0.3",
"better-auth": "1.4.19",
"better-auth": "1.5.5",
"buffer": "^6.0.3",
"canvas-confetti": "^1.9.4",
"class-variance-authority": "^0.7.1",
@@ -89,8 +95,8 @@
"drizzle-orm": "^0.45.1",
"fuse.js": "^7.1.0",
"jsonwebtoken": "^9.0.3",
"lucide-react": "^0.575.0",
"nanoid": "^3.3.11",
"lucide-react": "^0.577.0",
"nanoid": "^5.1.6",
"next-themes": "^0.4.6",
"react": "^19.2.4",
"react-dom": "^19.2.4",
@@ -109,15 +115,15 @@
"@testing-library/jest-dom": "^6.9.1",
"@testing-library/react": "^16.3.2",
"@types/bcryptjs": "^3.0.0",
"@types/bun": "^1.3.9",
"@types/bun": "^1.3.10",
"@types/jsonwebtoken": "^9.0.10",
"@types/node": "^25.3.2",
"@types/node": "^25.5.0",
"@types/uuid": "^11.0.0",
"@vitejs/plugin-react": "^5.1.4",
"@vitejs/plugin-react": "^6.0.1",
"drizzle-kit": "^0.31.9",
"jsdom": "^28.1.0",
"tsx": "^4.21.0",
"vitest": "^4.0.18"
"vitest": "^4.1.0"
},
"packageManager": "bun@1.3.3"
"packageManager": "bun@1.3.10"
}

@@ -15,33 +15,40 @@ import { repoStatusEnum } from "@/types/Repository";
const isDryRun = process.argv.includes("--dry-run");
const specificRepo = process.argv.find(arg => arg.startsWith("--repo-name="))?.split("=")[1];
const isStartupMode = process.argv.includes("--startup");
const requestTimeoutMs = parsePositiveInteger(process.env.GITEA_REPAIR_REQUEST_TIMEOUT_MS, 15000);
const progressInterval = parsePositiveInteger(process.env.GITEA_REPAIR_PROGRESS_INTERVAL, 100);
async function checkRepoInGitea(config: any, owner: string, repoName: string): Promise<boolean> {
try {
if (!config.giteaConfig?.url || !config.giteaConfig?.token) {
return false;
}
type GiteaLookupResult = {
exists: boolean;
details: any | null;
timedOut: boolean;
error: string | null;
};
const response = await fetch(
`${config.giteaConfig.url}/api/v1/repos/${owner}/${repoName}`,
{
headers: {
Authorization: `token ${config.giteaConfig.token}`,
},
}
);
return response.ok;
} catch (error) {
console.error(`Error checking repo ${owner}/${repoName} in Gitea:`, error);
return false;
function parsePositiveInteger(value: string | undefined, fallback: number): number {
const parsed = Number.parseInt(value ?? "", 10);
if (!Number.isFinite(parsed) || parsed <= 0) {
return fallback;
}
return parsed;
}
async function getRepoDetailsFromGitea(config: any, owner: string, repoName: string): Promise<any> {
function isTimeoutError(error: unknown): boolean {
if (!(error instanceof Error)) {
return false;
}
return error.name === "TimeoutError" || error.name === "AbortError";
}
async function getRepoDetailsFromGitea(config: any, owner: string, repoName: string): Promise<GiteaLookupResult> {
try {
if (!config.giteaConfig?.url || !config.giteaConfig?.token) {
return null;
return {
exists: false,
details: null,
timedOut: false,
error: "Missing Gitea URL or token in config",
};
}
const response = await fetch(
@@ -50,16 +57,41 @@ async function getRepoDetailsFromGitea(config: any, owner: string, repoName: str
headers: {
Authorization: `token ${config.giteaConfig.token}`,
},
signal: AbortSignal.timeout(requestTimeoutMs),
}
);
if (response.ok) {
return await response.json();
return {
exists: true,
details: await response.json(),
timedOut: false,
error: null,
};
}
return null;
if (response.status === 404) {
return {
exists: false,
details: null,
timedOut: false,
error: null,
};
}
return {
exists: false,
details: null,
timedOut: false,
error: `Gitea API returned HTTP ${response.status}`,
};
} catch (error) {
console.error(`Error getting repo details for ${owner}/${repoName}:`, error);
return null;
return {
exists: false,
details: null,
timedOut: isTimeoutError(error),
error: error instanceof Error ? error.message : String(error),
};
}
}
@@ -99,6 +131,8 @@ async function repairMirroredRepositories() {
.from(repositories)
.where(whereConditions);
const totalRepos = repos.length;
if (repos.length === 0) {
if (!isStartupMode) {
console.log("✅ No repositories found that need repair");
@@ -109,13 +143,25 @@ async function repairMirroredRepositories() {
if (!isStartupMode) {
console.log(`📋 Found ${repos.length} repositories to check:`);
console.log("");
} else {
console.log(`Checking ${totalRepos} repositories for status inconsistencies...`);
console.log(`Request timeout: ${requestTimeoutMs}ms | Progress interval: every ${progressInterval} repositories`);
}
const startedAt = Date.now();
const configCache = new Map<string, any>();
let checkedCount = 0;
let repairedCount = 0;
let skippedCount = 0;
let errorCount = 0;
let timeoutCount = 0;
let giteaErrorCount = 0;
let giteaErrorSamples = 0;
let startupSkipWarningCount = 0;
for (const repo of repos) {
checkedCount++;
if (!isStartupMode) {
console.log(`🔍 Checking repository: ${repo.name}`);
console.log(` Current status: ${repo.status}`);
@@ -124,13 +170,29 @@ async function repairMirroredRepositories() {
try {
// Get user configuration
const config = await db
.select()
.from(configs)
.where(eq(configs.id, repo.configId))
.limit(1);
const configKey = String(repo.configId);
let userConfig = configCache.get(configKey);
if (config.length === 0) {
if (!userConfig) {
const config = await db
.select()
.from(configs)
.where(eq(configs.id, repo.configId))
.limit(1);
if (config.length === 0) {
if (!isStartupMode) {
console.log(` ❌ No configuration found for repository`);
}
errorCount++;
continue;
}
userConfig = config[0];
configCache.set(configKey, userConfig);
}
if (!userConfig) {
if (!isStartupMode) {
console.log(` ❌ No configuration found for repository`);
}
@@ -138,7 +200,6 @@ async function repairMirroredRepositories() {
continue;
}
const userConfig = config[0];
const giteaUsername = userConfig.giteaConfig?.defaultOwner;
if (!giteaUsername) {
@@ -153,25 +214,59 @@ async function repairMirroredRepositories() {
let existsInGitea = false;
let actualOwner = giteaUsername;
let giteaRepoDetails = null;
let repoRequestTimedOut = false;
let repoRequestErrored = false;
// First check user location
existsInGitea = await checkRepoInGitea(userConfig, giteaUsername, repo.name);
if (existsInGitea) {
giteaRepoDetails = await getRepoDetailsFromGitea(userConfig, giteaUsername, repo.name);
const userLookup = await getRepoDetailsFromGitea(userConfig, giteaUsername, repo.name);
existsInGitea = userLookup.exists;
giteaRepoDetails = userLookup.details;
if (userLookup.timedOut) {
timeoutCount++;
repoRequestTimedOut = true;
} else if (userLookup.error) {
giteaErrorCount++;
repoRequestErrored = true;
if (!isStartupMode || giteaErrorSamples < 3) {
console.log(` ⚠️ Gitea lookup issue for ${giteaUsername}/${repo.name}: ${userLookup.error}`);
giteaErrorSamples++;
}
}
// If not found in user location and repo has organization, check organization
if (!existsInGitea && repo.organization) {
existsInGitea = await checkRepoInGitea(userConfig, repo.organization, repo.name);
const orgLookup = await getRepoDetailsFromGitea(userConfig, repo.organization, repo.name);
existsInGitea = orgLookup.exists;
if (existsInGitea) {
actualOwner = repo.organization;
giteaRepoDetails = await getRepoDetailsFromGitea(userConfig, repo.organization, repo.name);
giteaRepoDetails = orgLookup.details;
}
if (orgLookup.timedOut) {
timeoutCount++;
repoRequestTimedOut = true;
} else if (orgLookup.error) {
giteaErrorCount++;
repoRequestErrored = true;
if (!isStartupMode || giteaErrorSamples < 3) {
console.log(` ⚠️ Gitea lookup issue for ${repo.organization}/${repo.name}: ${orgLookup.error}`);
giteaErrorSamples++;
}
}
}
if (!existsInGitea) {
if (!isStartupMode) {
console.log(` ⏭️ Repository not found in Gitea - skipping`);
} else if (repoRequestTimedOut || repoRequestErrored) {
if (startupSkipWarningCount < 3) {
console.log(` ⚠️ Skipping ${repo.name}; Gitea was slow/unreachable during lookup`);
startupSkipWarningCount++;
if (startupSkipWarningCount === 3) {
console.log(` Additional slow/unreachable lookup warnings suppressed; progress logs will continue`);
}
}
}
skippedCount++;
continue;
@@ -241,22 +336,43 @@ async function repairMirroredRepositories() {
if (!isStartupMode) {
console.log("");
} else if (checkedCount % progressInterval === 0 || checkedCount === totalRepos) {
const elapsedSeconds = Math.floor((Date.now() - startedAt) / 1000);
console.log(
`Repair progress: ${checkedCount}/${totalRepos} checked | repaired=${repairedCount}, skipped=${skippedCount}, errors=${errorCount}, timeouts=${timeoutCount} | elapsed=${elapsedSeconds}s`
);
}
}
if (isStartupMode) {
// In startup mode, only log if there were repairs or errors
const elapsedSeconds = Math.floor((Date.now() - startedAt) / 1000);
console.log(
`Repository repair summary: checked=${checkedCount}, repaired=${repairedCount}, skipped=${skippedCount}, errors=${errorCount}, timeouts=${timeoutCount}, elapsed=${elapsedSeconds}s`
);
if (repairedCount > 0) {
console.log(`Repaired ${repairedCount} repository status inconsistencies`);
}
if (errorCount > 0) {
console.log(`Warning: ${errorCount} repositories had errors during repair`);
}
if (timeoutCount > 0) {
console.log(
`Warning: ${timeoutCount} Gitea API requests timed out. Increase GITEA_REPAIR_REQUEST_TIMEOUT_MS if your Gitea instance is under heavy load.`
);
}
if (giteaErrorCount > 0) {
console.log(`Warning: ${giteaErrorCount} Gitea API requests failed with non-timeout errors.`);
}
} else {
console.log("📊 Repair Summary:");
console.log(` Checked: ${checkedCount}`);
console.log(` Repaired: ${repairedCount}`);
console.log(` Skipped: ${skippedCount}`);
console.log(` Errors: ${errorCount}`);
console.log(` Timeouts: ${timeoutCount}`);
if (giteaErrorCount > 0) {
console.log(` Gitea API Errors: ${giteaErrorCount}`);
}
if (isDryRun && repairedCount > 0) {
console.log("");
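The env-driven tuning introduced in this diff (`GITEA_REPAIR_REQUEST_TIMEOUT_MS`, `GITEA_REPAIR_PROGRESS_INTERVAL`) relies on `parsePositiveInteger` falling back to a default for anything that is not a positive integer. A standalone usage sketch of that behavior, with the same shape as the script's helper:

```typescript
// Invalid, negative, or missing input falls back to the supplied default,
// so a misconfigured env var can never disable the timeout entirely.
function parsePositiveInteger(value: string | undefined, fallback: number): number {
  const parsed = Number.parseInt(value ?? "", 10);
  if (!Number.isFinite(parsed) || parsed <= 0) {
    return fallback;
  }
  return parsed;
}

console.log(parsePositiveInteger("250", 15000));     // 250
console.log(parsePositiveInteger("-1", 15000));      // 15000
console.log(parsePositiveInteger("abc", 15000));     // 15000
console.log(parsePositiveInteger(undefined, 15000)); // 15000
```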

@@ -0,0 +1,212 @@
#!/usr/bin/env bun
import { Database } from "bun:sqlite";
import { readFileSync } from "fs";
import path from "path";
type JournalEntry = {
idx: number;
tag: string;
when: number;
breakpoints: boolean;
};
type Migration = {
entry: JournalEntry;
statements: string[];
};
type UpgradeFixture = {
seed: (db: Database) => void;
verify: (db: Database) => void;
};
type TableInfoRow = {
cid: number;
name: string;
type: string;
notnull: number;
dflt_value: string | null;
pk: number;
};
const migrationsFolder = path.join(process.cwd(), "drizzle");
const migrations = loadMigrations();
const latestMigration = migrations.at(-1);
/**
* Known SQLite limitations that Drizzle-kit's auto-generated migrations
* can violate. Each rule is checked against every SQL statement.
*/
const SQLITE_LINT_RULES: { pattern: RegExp; message: string }[] = [
{
pattern: /ALTER\s+TABLE\s+\S+\s+ADD\s+(?:COLUMN\s+)?\S+[^;]*DEFAULT\s*\(/i,
message:
"ALTER TABLE ADD COLUMN with an expression default (e.g. DEFAULT (unixepoch())) " +
"is not allowed in SQLite. Use the table-recreation pattern instead " +
"(CREATE new table, INSERT SELECT, DROP old, RENAME).",
},
{
pattern: /ALTER\s+TABLE\s+\S+\s+ADD\s+(?:COLUMN\s+)?\S+[^;]*DEFAULT\s+CURRENT_(TIME|DATE|TIMESTAMP)\b/i,
message:
"ALTER TABLE ADD COLUMN with DEFAULT CURRENT_TIME/CURRENT_DATE/CURRENT_TIMESTAMP " +
"is not allowed in SQLite. Use the table-recreation pattern instead.",
},
];
function loadMigrations(): Migration[] {
const journalPath = path.join(migrationsFolder, "meta", "_journal.json");
const journal = JSON.parse(readFileSync(journalPath, "utf8")) as {
entries: JournalEntry[];
};
return journal.entries.map((entry) => {
const migrationPath = path.join(migrationsFolder, `${entry.tag}.sql`);
const statements = readFileSync(migrationPath, "utf8")
.split("--> statement-breakpoint")
.map((statement) => statement.trim())
.filter(Boolean);
return { entry, statements };
});
}
function assert(condition: unknown, message: string): asserts condition {
if (!condition) {
throw new Error(message);
}
}
function runMigration(db: Database, migration: Migration) {
db.run("BEGIN");
try {
for (const statement of migration.statements) {
db.run(statement);
}
db.run("COMMIT");
} catch (error) {
try {
db.run("ROLLBACK");
} catch {
// Ignore rollback errors so the original failure is preserved.
}
throw error;
}
}
function runMigrations(db: Database, selectedMigrations: Migration[]) {
for (const migration of selectedMigrations) {
runMigration(db, migration);
}
}
function seedPre0009Database(db: Database) {
// Seed every existing table so ALTER TABLE paths run against non-empty data.
db.run("INSERT INTO users (id, email, username, name) VALUES ('u1', 'u1@example.com', 'user1', 'User One')");
db.run("INSERT INTO configs (id, user_id, name, github_config, gitea_config, schedule_config, cleanup_config) VALUES ('c1', 'u1', 'Default', '{}', '{}', '{}', '{}')");
db.run("INSERT INTO accounts (id, account_id, user_id, provider_id, access_token, refresh_token, id_token, access_token_expires_at, refresh_token_expires_at, scope) VALUES ('acct1', 'acct-1', 'u1', 'github', 'access-token', 'refresh-token', 'id-token', 2000, 3000, 'repo')");
db.run("INSERT INTO events (id, user_id, channel, payload) VALUES ('evt1', 'u1', 'sync', '{\"status\":\"queued\"}')");
db.run("INSERT INTO mirror_jobs (id, user_id, repository_id, repository_name, status, message, timestamp) VALUES ('job1', 'u1', 'r1', 'owner/repo', 'imported', 'Imported repository', 900)");
db.run("INSERT INTO organizations (id, user_id, config_id, name, avatar_url, public_repository_count, private_repository_count, fork_repository_count) VALUES ('org1', 'u1', 'c1', 'Example Org', 'https://example.com/org.png', 1, 0, 0)");
db.run("INSERT INTO repositories (id, user_id, config_id, name, full_name, normalized_full_name, url, clone_url, owner, organization, default_branch, created_at, updated_at, metadata) VALUES ('r1', 'u1', 'c1', 'repo', 'owner/repo', 'owner/repo', 'https://example.com/repo', 'https://example.com/repo.git', 'owner', 'Example Org', 'main', 1000, 1100, '{\"issues\":true}')");
db.run("INSERT INTO sessions (id, token, user_id, expires_at) VALUES ('sess1', 'session-token', 'u1', 4000)");
db.run("INSERT INTO verification_tokens (id, token, identifier, type, expires_at) VALUES ('vt1', 'verify-token', 'u1@example.com', 'email', 5000)");
db.run("INSERT INTO verifications (id, identifier, value, expires_at) VALUES ('ver1', 'u1@example.com', '123456', 6000)");
db.run("INSERT INTO oauth_applications (id, client_id, client_secret, name, redirect_urls, type, user_id) VALUES ('app1', 'client-1', 'secret-1', 'Example App', '[\"https://example.com/callback\"]', 'confidential', 'u1')");
db.run("INSERT INTO oauth_access_tokens (id, access_token, refresh_token, access_token_expires_at, refresh_token_expires_at, client_id, user_id, scopes) VALUES ('oat1', 'oauth-access-token', 'oauth-refresh-token', 7000, 8000, 'client-1', 'u1', '[\"repo\"]')");
db.run("INSERT INTO oauth_consent (id, user_id, client_id, scopes, consent_given) VALUES ('consent1', 'u1', 'client-1', '[\"repo\"]', true)");
db.run("INSERT INTO sso_providers (id, issuer, domain, oidc_config, user_id, provider_id) VALUES ('sso1', 'https://issuer.example.com', 'example.com', '{}', 'u1', 'provider-1')");
db.run("INSERT INTO rate_limits (id, user_id, provider, `limit`, remaining, used, reset, retry_after, status, last_checked) VALUES ('rl1', 'u1', 'github', 5000, 4999, 1, 9000, NULL, 'ok', 8500)");
}
function verify0009Migration(db: Database) {
const repositoryColumns = db.query("PRAGMA table_info(repositories)").all() as TableInfoRow[];
const importedAtColumn = repositoryColumns.find((column) => column.name === "imported_at");
assert(importedAtColumn, "Expected repositories.imported_at column to exist after migration");
assert(importedAtColumn.notnull === 1, "Expected repositories.imported_at to be NOT NULL");
assert(importedAtColumn.dflt_value === "unixepoch()", `Expected repositories.imported_at default to be unixepoch(), got ${importedAtColumn.dflt_value ?? "null"}`);
const existingRepo = db.query("SELECT imported_at FROM repositories WHERE id = 'r1'").get() as { imported_at: number } | null;
assert(existingRepo?.imported_at === 900, `Expected existing repository imported_at to backfill from mirror_jobs timestamp 900, got ${existingRepo?.imported_at ?? "null"}`);
db.run("INSERT INTO repositories (id, user_id, config_id, name, full_name, normalized_full_name, url, clone_url, owner, default_branch) VALUES ('r2', 'u1', 'c1', 'repo-two', 'owner/repo-two', 'owner/repo-two', 'https://example.com/repo-two', 'https://example.com/repo-two.git', 'owner', 'main')");
const newRepo = db.query("SELECT imported_at FROM repositories WHERE id = 'r2'").get() as { imported_at: number } | null;
assert(typeof newRepo?.imported_at === "number" && newRepo.imported_at > 0, "Expected new repository insert to receive imported_at from the column default");
const importedAtIndex = db
.query("SELECT name FROM sqlite_master WHERE type = 'index' AND tbl_name = 'repositories' AND name = 'idx_repositories_user_imported_at'")
.get() as { name: string } | null;
assert(importedAtIndex?.name === "idx_repositories_user_imported_at", "Expected repositories imported_at index to exist after migration");
}
const latestUpgradeFixtures: Record<string, UpgradeFixture> = {
"0009_nervous_tyger_tiger": {
seed: seedPre0009Database,
verify: verify0009Migration,
},
};
function lintMigrations(selectedMigrations: Migration[]) {
const violations: string[] = [];
for (const migration of selectedMigrations) {
for (const statement of migration.statements) {
for (const rule of SQLITE_LINT_RULES) {
if (rule.pattern.test(statement)) {
violations.push(`[${migration.entry.tag}] ${rule.message}\n Statement: ${statement.slice(0, 120)}...`);
}
}
}
}
assert(
violations.length === 0,
`SQLite lint found ${violations.length} violation(s):\n\n${violations.join("\n\n")}`,
);
}
function validateMigrations() {
assert(latestMigration, "No migrations found in drizzle/meta/_journal.json");
// Lint all migrations for known SQLite pitfalls before running anything.
lintMigrations(migrations);
const emptyDb = new Database(":memory:");
try {
runMigrations(emptyDb, migrations);
} finally {
emptyDb.close();
}
const upgradeFixture = latestUpgradeFixtures[latestMigration.entry.tag];
assert(
upgradeFixture,
`Missing upgrade fixture for latest migration ${latestMigration.entry.tag}. Add one in scripts/validate-migrations.ts.`,
);
const upgradeDb = new Database(":memory:");
try {
runMigrations(upgradeDb, migrations.slice(0, -1));
upgradeFixture.seed(upgradeDb);
runMigration(upgradeDb, latestMigration);
upgradeFixture.verify(upgradeDb);
} finally {
upgradeDb.close();
}
console.log(
`Validated ${migrations.length} migrations from scratch and upgrade path for ${latestMigration.entry.tag}.`,
);
}
try {
validateMigrations();
} catch (error) {
console.error("Migration validation failed:");
console.error(error instanceof Error ? error.stack ?? error.message : String(error));
process.exit(1);
}
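The first lint rule in `SQLITE_LINT_RULES` exists to catch exactly the statement class that broke the v3.12.x -> v3.13.0 upgrade. The regex can be exercised in isolation; the two statements below are illustrative examples, not copied from a generated migration:

```typescript
// The expression-default pattern from SQLITE_LINT_RULES, tested standalone.
const expressionDefault =
  /ALTER\s+TABLE\s+\S+\s+ADD\s+(?:COLUMN\s+)?\S+[^;]*DEFAULT\s*\(/i;

// The kind of statement Drizzle-kit generated for imported_at (rejected by SQLite):
const bad = "ALTER TABLE `repositories` ADD `imported_at` integer DEFAULT (unixepoch()) NOT NULL;";
// A constant default is allowed by SQLite and must not be flagged:
const ok = "ALTER TABLE `repositories` ADD `notes` text DEFAULT '';";

console.log(expressionDefault.test(bad)); // true  -> lint violation
console.log(expressionDefault.test(ok));  // false -> allowed
```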

@@ -124,19 +124,31 @@ export function ConfigTabs() {
if (!user?.id) return;
setIsSyncing(true);
try {
const result = await apiRequest<{ success: boolean; message?: string }>(
const result = await apiRequest<{ success: boolean; message?: string; failedOrgs?: string[]; recoveredOrgs?: number }>(
`/sync?userId=${user.id}`,
{ method: 'POST' },
);
result.success
? toast.success(
'GitHub data imported successfully! Head to the Repositories page to start mirroring.',
)
: toast.error(
`Failed to import GitHub data: ${
result.message || 'Unknown error'
}`,
if (result.success) {
toast.success(
'GitHub data imported successfully! Head to the Repositories page to start mirroring.',
);
if (result.failedOrgs && result.failedOrgs.length > 0) {
toast.warning(
`${result.failedOrgs.length} org${result.failedOrgs.length > 1 ? 's' : ''} failed to import (${result.failedOrgs.join(', ')}). Check the Organizations tab for details.`,
);
}
if (result.recoveredOrgs && result.recoveredOrgs > 0) {
toast.success(
`${result.recoveredOrgs} previously failed org${result.recoveredOrgs > 1 ? 's' : ''} recovered successfully.`,
);
}
} else {
toast.error(
`Failed to import GitHub data: ${
result.message || 'Unknown error'
}`,
);
}
} catch (error) {
toast.error(
`Error importing GitHub data: ${

@@ -248,6 +248,11 @@ export function OrganizationList({
</div>
</div>
{/* Error message for failed orgs */}
{org.status === "failed" && org.errorMessage && (
<p className="text-xs text-destructive line-clamp-2">{org.errorMessage}</p>
)}
{/* Destination override section */}
<div>
<MirrorDestinationEditor
@@ -304,6 +309,13 @@ export function OrganizationList({
/>
</div>
{/* Error message for failed orgs */}
{org.status === "failed" && org.errorMessage && (
<div className="mb-4 p-3 rounded-md bg-destructive/10 border border-destructive/20">
<p className="text-sm text-destructive">{org.errorMessage}</p>
</div>
)}
{/* Repository statistics */}
<div className="mb-4">
<div className="flex items-center gap-4 text-sm">
@@ -313,7 +325,7 @@ export function OrganizationList({
{org.repositoryCount === 1 ? "repository" : "repositories"}
</span>
</div>
{/* Repository breakdown - only show non-zero counts */}
{(() => {
const counts = [];
@@ -326,7 +338,7 @@ export function OrganizationList({
if (org.forkRepositoryCount && org.forkRepositoryCount > 0) {
counts.push(`${org.forkRepositoryCount} ${org.forkRepositoryCount === 1 ? 'fork' : 'forks'}`);
}
return counts.length > 0 ? (
<div className="flex items-center gap-3 text-xs text-muted-foreground">
{counts.map((count, index) => (
@@ -415,7 +427,7 @@ export function OrganizationList({
)}
</>
)}
{/* Dropdown menu for additional actions */}
{org.status !== "mirroring" && (
<DropdownMenu>
@@ -426,7 +438,7 @@ export function OrganizationList({
</DropdownMenuTrigger>
<DropdownMenuContent align="end">
{org.status !== "ignored" && (
<DropdownMenuItem
<DropdownMenuItem
onClick={() => org.id && onIgnore && onIgnore({ orgId: org.id, ignore: true })}
>
<Ban className="h-4 w-4 mr-2" />
@@ -449,7 +461,7 @@ export function OrganizationList({
</DropdownMenu>
)}
</div>
<div className="flex items-center gap-2 justify-center">
{(() => {
const giteaUrl = getGiteaOrgUrl(org);

@@ -51,6 +51,15 @@ import { useLiveRefresh } from "@/hooks/useLiveRefresh";
import { useConfigStatus } from "@/hooks/useConfigStatus";
import { useNavigation } from "@/components/layout/MainLayout";
const REPOSITORY_SORT_OPTIONS = [
{ value: "imported-desc", label: "Recently Imported" },
{ value: "imported-asc", label: "Oldest Imported" },
{ value: "updated-desc", label: "Recently Updated" },
{ value: "updated-asc", label: "Oldest Updated" },
{ value: "name-asc", label: "Name (A-Z)" },
{ value: "name-desc", label: "Name (Z-A)" },
] as const;
export default function Repository() {
const [repositories, setRepositories] = useState<Repository[]>([]);
const [isInitialLoading, setIsInitialLoading] = useState(true);
@@ -63,6 +72,7 @@ export default function Repository() {
status: "",
organization: "",
owner: "",
sort: "imported-desc",
});
const [isDialogOpen, setIsDialogOpen] = useState<boolean>(false);
const [selectedRepoIds, setSelectedRepoIds] = useState<Set<string>>(new Set());
@@ -999,6 +1009,7 @@ export default function Repository() {
status: "",
organization: "",
owner: "",
sort: filter.sort || "imported-desc",
});
};
@@ -1139,6 +1150,33 @@ export default function Repository() {
</SelectContent>
</Select>
</div>
{/* Sort Filter */}
<div className="space-y-2">
<label className="text-sm font-medium flex items-center gap-2">
<span className="text-muted-foreground">Sort</span>
</label>
<Select
value={filter.sort || "imported-desc"}
onValueChange={(value) =>
setFilter((prev) => ({
...prev,
sort: value,
}))
}
>
<SelectTrigger className="w-full h-10">
<SelectValue placeholder="Sort repositories" />
</SelectTrigger>
<SelectContent>
{REPOSITORY_SORT_OPTIONS.map((option) => (
<SelectItem key={option.value} value={option.value}>
{option.label}
</SelectItem>
))}
</SelectContent>
</Select>
</div>
</div>
<DrawerFooter className="gap-2 px-4 pt-2 pb-4 border-t">
@@ -1241,6 +1279,27 @@ export default function Repository() {
</SelectContent>
</Select>
<Select
value={filter.sort || "imported-desc"}
onValueChange={(value) =>
setFilter((prev) => ({
...prev,
sort: value,
}))
}
>
<SelectTrigger className="w-[190px] h-10">
<SelectValue placeholder="Sort repositories" />
</SelectTrigger>
<SelectContent>
{REPOSITORY_SORT_OPTIONS.map((option) => (
<SelectItem key={option.value} value={option.value}>
{option.label}
</SelectItem>
))}
</SelectContent>
</Select>
<Button
variant="outline"
size="icon"

@@ -1,11 +1,19 @@
import { useMemo, useRef } from "react";
import Fuse from "fuse.js";
import {
getCoreRowModel,
getFilteredRowModel,
getSortedRowModel,
useReactTable,
type ColumnDef,
type ColumnFiltersState,
type SortingState,
} from "@tanstack/react-table";
import { useVirtualizer } from "@tanstack/react-virtual";
import { FlipHorizontal, GitFork, RefreshCw, RotateCcw, Star, Lock, Ban, Check, ChevronDown, Trash2, X } from "lucide-react";
import { SiGithub, SiGitea } from "react-icons/si";
import type { Repository } from "@/lib/db/schema";
import { Button } from "@/components/ui/button";
import { formatDate, formatLastSyncTime, getStatusColor } from "@/lib/utils";
import { formatLastSyncTime } from "@/lib/utils";
import type { FilterParams } from "@/types/filter";
import { Skeleton } from "@/components/ui/skeleton";
import { useGiteaConfig } from "@/hooks/useGiteaConfig";
@@ -46,6 +54,30 @@ interface RepositoryTableProps {
onDismissSync?: ({ repoId }: { repoId: string }) => Promise<void>;
}
function getTimestamp(value: Date | string | null | undefined): number {
if (!value) return 0;
const timestamp = new Date(value).getTime();
return Number.isNaN(timestamp) ? 0 : timestamp;
}
function getTableSorting(sortOrder: string | undefined): SortingState {
switch (sortOrder ?? "imported-desc") {
case "imported-asc":
return [{ id: "importedAt", desc: false }];
case "updated-desc":
return [{ id: "updatedAt", desc: true }];
case "updated-asc":
return [{ id: "updatedAt", desc: false }];
case "name-asc":
return [{ id: "fullName", desc: false }];
case "name-desc":
return [{ id: "fullName", desc: true }];
case "imported-desc":
default:
return [{ id: "importedAt", desc: true }];
}
}
export default function RepositoryTable({
repositories,
isLoading,
@@ -120,40 +152,89 @@ export default function RepositoryTable({
return `${baseUrl}/${repoPath}`;
};
const hasAnyFilter = Object.values(filter).some(
(val) => val?.toString().trim() !== ""
);
const hasAnyFilter = [
filter.searchTerm,
filter.status,
filter.owner,
filter.organization,
].some((val) => val?.toString().trim() !== "");
const filteredRepositories = useMemo(() => {
let result = repositories;
const columnFilters = useMemo<ColumnFiltersState>(() => {
const next: ColumnFiltersState = [];
if (filter.status) {
result = result.filter((repo) => repo.status === filter.status);
next.push({ id: "status", value: filter.status });
}
if (filter.owner) {
result = result.filter((repo) => repo.owner === filter.owner);
next.push({ id: "owner", value: filter.owner });
}
if (filter.organization) {
result = result.filter(
(repo) => repo.organization === filter.organization
);
next.push({ id: "organization", value: filter.organization });
}
if (filter.searchTerm) {
const fuse = new Fuse(result, {
keys: ["name", "fullName", "owner", "organization"],
threshold: 0.3,
});
result = fuse.search(filter.searchTerm).map((res) => res.item);
}
return next;
}, [filter.status, filter.owner, filter.organization]);
return result;
}, [repositories, filter]);
const sorting = useMemo(() => getTableSorting(filter.sort), [filter.sort]);
const columns = useMemo<ColumnDef<Repository>[]>(
() => [
{
id: "fullName",
accessorFn: (row) => row.fullName,
},
{
id: "owner",
accessorFn: (row) => row.owner,
filterFn: "equalsString",
},
{
id: "organization",
accessorFn: (row) => row.organization ?? "",
filterFn: "equalsString",
},
{
id: "status",
accessorFn: (row) => row.status,
filterFn: "equalsString",
},
{
id: "importedAt",
accessorFn: (row) => getTimestamp(row.importedAt),
enableGlobalFilter: false,
enableColumnFilter: false,
},
{
id: "updatedAt",
accessorFn: (row) => getTimestamp(row.updatedAt),
enableGlobalFilter: false,
enableColumnFilter: false,
},
],
[]
);
const table = useReactTable({
data: repositories,
columns,
state: {
globalFilter: filter.searchTerm ?? "",
columnFilters,
sorting,
},
getCoreRowModel: getCoreRowModel(),
getFilteredRowModel: getFilteredRowModel(),
getSortedRowModel: getSortedRowModel(),
});
const visibleRepositories = table
.getRowModel()
.rows.map((row) => row.original);
const rowVirtualizer = useVirtualizer({
count: filteredRepositories.length,
count: visibleRepositories.length,
getScrollElement: () => tableParentRef.current,
estimateSize: () => 65,
overscan: 5,
@@ -162,7 +243,11 @@ export default function RepositoryTable({
// Selection handlers
const handleSelectAll = (checked: boolean) => {
if (checked) {
const allIds = new Set(filteredRepositories.map(repo => repo.id).filter((id): id is string => !!id));
const allIds = new Set(
visibleRepositories
.map((repo) => repo.id)
.filter((id): id is string => !!id)
);
onSelectionChange(allIds);
} else {
onSelectionChange(new Set());
@@ -179,8 +264,9 @@ export default function RepositoryTable({
onSelectionChange(newSelection);
};
const isAllSelected = filteredRepositories.length > 0 &&
filteredRepositories.every(repo => repo.id && selectedRepoIds.has(repo.id));
const isAllSelected =
visibleRepositories.length > 0 &&
visibleRepositories.every((repo) => repo.id && selectedRepoIds.has(repo.id));
const isPartiallySelected = selectedRepoIds.size > 0 && !isAllSelected;
// Mobile card layout for repository
@@ -235,7 +321,7 @@ export default function RepositoryTable({
{/* Status & Last Mirrored */}
<div className="flex items-center justify-between">
<Badge
className={`capitalize
${repo.status === 'imported' ? 'bg-yellow-500/10 text-yellow-600 hover:bg-yellow-500/20 dark:text-yellow-400' :
repo.status === 'mirrored' || repo.status === 'synced' ? 'bg-green-500/10 text-green-600 hover:bg-green-500/20 dark:text-green-400' :
@@ -250,7 +336,7 @@ export default function RepositoryTable({
{repo.status}
</Badge>
<span className="text-xs text-muted-foreground">
{formatLastSyncTime(repo.lastMirrored)}
{formatLastSyncTime(repo.lastMirrored ?? null)}
</span>
</div>
</div>
@@ -379,7 +465,7 @@ export default function RepositoryTable({
Ignore Repository
</Button>
)}
{/* External links */}
<div className="flex gap-2">
<Button variant="outline" size="default" className="flex-1 h-10 min-w-0" asChild>
@@ -510,7 +596,7 @@ export default function RepositoryTable({
{hasAnyFilter && (
<div className="mb-4 flex items-center gap-2">
<span className="text-sm text-muted-foreground">
Showing {filteredRepositories.length} of {repositories.length} repositories
Showing {visibleRepositories.length} of {repositories.length} repositories
</span>
<Button
variant="ghost"
@@ -521,6 +607,7 @@ export default function RepositoryTable({
status: "",
organization: "",
owner: "",
sort: filter.sort || "imported-desc",
})
}
>
@@ -529,7 +616,7 @@ export default function RepositoryTable({
</div>
)}
{filteredRepositories.length === 0 ? (
{visibleRepositories.length === 0 ? (
<div className="text-center py-8">
<p className="text-muted-foreground">
{hasAnyFilter
@@ -550,12 +637,12 @@ export default function RepositoryTable({
className="h-5 w-5"
/>
<span className="text-sm font-medium">
Select All ({filteredRepositories.length})
Select All ({visibleRepositories.length})
</span>
</div>
{/* Repository cards */}
{filteredRepositories.map((repo) => (
{visibleRepositories.map((repo) => (
<RepositoryCard key={repo.id} repo={repo} />
))}
</div>
@@ -601,13 +688,14 @@ export default function RepositoryTable({
position: "relative",
}}
>
{rowVirtualizer.getVirtualItems().map((virtualRow, index) => {
const repo = filteredRepositories[virtualRow.index];
{rowVirtualizer.getVirtualItems().map((virtualRow) => {
const repo = visibleRepositories[virtualRow.index];
if (!repo) return null;
const isLoading = loadingRepoIds.has(repo.id ?? "");
return (
<div
key={index}
key={virtualRow.key}
ref={rowVirtualizer.measureElement}
style={{
position: "absolute",
@@ -670,7 +758,7 @@ export default function RepositoryTable({
{/* Last Mirrored */}
<div className="h-full p-3 flex items-center flex-[1]">
<p className="text-sm">
{formatLastSyncTime(repo.lastMirrored)}
{formatLastSyncTime(repo.lastMirrored ?? null)}
</p>
</div>
@@ -680,7 +768,7 @@ export default function RepositoryTable({
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<Badge
variant="destructive"
className="cursor-help capitalize"
>
@@ -693,7 +781,7 @@ export default function RepositoryTable({
</Tooltip>
</TooltipProvider>
) : (
<Badge
className={`capitalize
${repo.status === 'imported' ? 'bg-yellow-500/10 text-yellow-600 hover:bg-yellow-500/20 dark:text-yellow-400' :
repo.status === 'mirrored' || repo.status === 'synced' ? 'bg-green-500/10 text-green-600 hover:bg-green-500/20 dark:text-green-400' :
@@ -784,7 +872,7 @@ export default function RepositoryTable({
<div className={`h-1.5 w-1.5 rounded-full ${isLiveActive ? 'bg-emerald-500' : 'bg-primary'}`} />
<span className="text-sm font-medium text-foreground">
{hasAnyFilter
? `Showing ${filteredRepositories.length} of ${repositories.length} repositories`
? `Showing ${visibleRepositories.length} of ${repositories.length} repositories`
: `${repositories.length} ${repositories.length === 1 ? 'repository' : 'repositories'} total`}
</span>
</div>


@@ -1,4 +0,0 @@
import { defineCollection, z } from 'astro:content';
// Export empty collections since docs have been moved
export const collections = {};


@@ -7,6 +7,7 @@ const FILTER_KEYS: (keyof FilterParams)[] = [
"membershipRole",
"owner",
"organization",
"sort",
"type",
"name",
];


@@ -91,35 +91,17 @@ export const giteaApi = {
// Health API
export interface HealthResponse {
status: "ok" | "error";
status: "ok" | "error" | "degraded";
timestamp: string;
version: string;
latestVersion: string;
updateAvailable: boolean;
database: {
connected: boolean;
message: string;
};
system: {
uptime: {
startTime: string;
uptimeMs: number;
formatted: string;
};
memory: {
rss: string;
heapTotal: string;
heapUsed: string;
external: string;
systemTotal: string;
systemFree: string;
};
os: {
platform: string;
version: string;
arch: string;
};
env: string;
recovery?: {
status: string;
jobsNeedingRecovery: number;
};
error?: string;
}


@@ -19,8 +19,23 @@ export const ENV = {
},
// Better Auth secret for authentication
BETTER_AUTH_SECRET:
process.env.BETTER_AUTH_SECRET || "your-secret-key-change-this-in-production",
get BETTER_AUTH_SECRET(): string {
const secret = process.env.BETTER_AUTH_SECRET;
const knownInsecureDefaults = [
"your-secret-key-change-this-in-production",
"dev-only-insecure-secret-do-not-use-in-production",
];
if (!secret || knownInsecureDefaults.includes(secret)) {
if (process.env.NODE_ENV === "production") {
console.error(
"\x1b[31m[SECURITY WARNING]\x1b[0m BETTER_AUTH_SECRET is missing or using an insecure default. " +
"Set a strong secret: openssl rand -base64 32"
);
}
return secret || "dev-only-insecure-secret-do-not-use-in-production";
}
return secret;
},
// Server host and port
HOST: process.env.HOST || "localhost",
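The `BETTER_AUTH_SECRET` change above replaces a static default with a lazy getter that validates on access. A minimal standalone sketch of that pattern, with environment access injected and names simplified (the getter shape is the point, not the exact strings):

```typescript
// Defaults that must never be used as a real secret.
const KNOWN_INSECURE_DEFAULTS = [
  "your-secret-key-change-this-in-production",
  "dev-only-insecure-secret-do-not-use-in-production",
];

// Returns the configured secret, or a clearly-labeled dev fallback.
// Warns loudly when production runs with a missing/insecure value,
// but does not crash the process.
function resolveAuthSecret(env: { secret?: string; nodeEnv?: string }): string {
  const { secret, nodeEnv } = env;
  if (!secret || KNOWN_INSECURE_DEFAULTS.includes(secret)) {
    if (nodeEnv === "production") {
      console.error(
        "[SECURITY WARNING] auth secret is missing or using an insecure default. " +
          "Set a strong secret: openssl rand -base64 32",
      );
    }
    return secret || "dev-only-insecure-secret-do-not-use-in-production";
  }
  return secret;
}
```

Using a getter instead of a module-level constant means the warning fires every time the value is read in a misconfigured deployment, not just once at import time.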


@@ -35,13 +35,54 @@ if (process.env.NODE_ENV !== "test") {
// Create drizzle instance with the SQLite client
db = drizzle({ client: sqlite });
/**
* Fix migration records that were marked as applied but whose DDL actually
* failed (e.g. the v3.13.0 release where ALTER TABLE with expression default
* was rejected by SQLite). Without this, Drizzle skips the migration on
* retry because it thinks it already ran.
*
* Drizzle tracks migrations by `created_at` (= journal timestamp) and only
* looks at the most recent record. If the last recorded timestamp is >= the
* failed migration's timestamp but the expected column is missing, we delete
* stale records so the migration re-runs.
*/
function repairFailedMigrations() {
try {
const migrationsTableExists = sqlite
.query("SELECT name FROM sqlite_master WHERE type='table' AND name='__drizzle_migrations'")
.get();
if (!migrationsTableExists) return;
// Migration 0009 journal timestamp (from drizzle/meta/_journal.json)
const MIGRATION_0009_TIMESTAMP = 1773542995732;
const lastMigration = sqlite
.query("SELECT id, created_at FROM __drizzle_migrations ORDER BY created_at DESC LIMIT 1")
.get() as { id: number; created_at: number } | null;
if (!lastMigration || Number(lastMigration.created_at) < MIGRATION_0009_TIMESTAMP) return;
// Migration 0009 is recorded as applied — verify the column actually exists
const columns = sqlite.query("PRAGMA table_info(repositories)").all() as { name: string }[];
const hasImportedAt = columns.some((c) => c.name === "imported_at");
if (!hasImportedAt) {
console.log("🔧 Detected failed migration 0009 (imported_at column missing). Removing stale record so it can re-run...");
sqlite.prepare("DELETE FROM __drizzle_migrations WHERE created_at >= ?").run(MIGRATION_0009_TIMESTAMP);
}
} catch (error) {
console.warn("⚠️ Migration repair check failed (non-fatal):", error);
}
}
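The repair check above reduces to a pure decision: given the last recorded migration timestamp and the table's columns, should the stale records be deleted so migration 0009 re-runs? A standalone sketch of that decision with the SQLite access abstracted away:

```typescript
// Migration 0009 journal timestamp (from drizzle/meta/_journal.json).
const MIGRATION_0009_TIMESTAMP = 1773542995732;

// True when migration 0009 is recorded as applied but the column it
// should have created is missing, i.e. the DDL actually failed and
// Drizzle would otherwise skip it on retry.
function needsMigrationRepair(
  lastRecordedTimestamp: number | null,
  columnNames: string[],
): boolean {
  if (lastRecordedTimestamp === null) return false; // no migrations recorded yet
  if (lastRecordedTimestamp < MIGRATION_0009_TIMESTAMP) return false; // 0009 never ran
  return !columnNames.includes("imported_at");
}
```

Separating the decision from the I/O is what makes the real function safe to wrap in a broad non-fatal try/catch: the only destructive step (the `DELETE`) runs exactly when this predicate holds.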
/**
* Run Drizzle migrations
*/
function runDrizzleMigrations() {
try {
console.log("🔄 Checking for pending migrations...");
// Check if migrations table exists
const migrationsTableExists = sqlite
.query("SELECT name FROM sqlite_master WHERE type='table' AND name='__drizzle_migrations'")
@@ -51,9 +92,12 @@ if (process.env.NODE_ENV !== "test") {
console.log("📦 First time setup - running initial migrations...");
}
// Fix any migrations that were recorded but actually failed (e.g. v3.13.0 bug)
repairFailedMigrations();
// Run migrations using Drizzle migrate function
migrate(db, { migrationsFolder: "./drizzle" });
console.log("✅ Database migrations completed successfully");
} catch (error) {
console.error("❌ Error running migrations:", error);


@@ -0,0 +1,26 @@
import { expect, test } from "bun:test";
function decodeOutput(output: ArrayBufferLike | Uint8Array | null | undefined) {
if (!output) {
return "";
}
return Buffer.from(output as ArrayBufferLike).toString("utf8");
}
test("migration validation script passes", () => {
const result = Bun.spawnSync({
cmd: ["bun", "scripts/validate-migrations.ts"],
cwd: process.cwd(),
stdout: "pipe",
stderr: "pipe",
});
const stdout = decodeOutput(result.stdout);
const stderr = decodeOutput(result.stderr);
expect(
result.exitCode,
`Migration validation script failed.\nstdout:\n${stdout}\nstderr:\n${stderr}`,
).toBe(0);
});


@@ -181,6 +181,7 @@ export const repositorySchema = z.object({
errorMessage: z.string().optional().nullable(),
destinationOrg: z.string().optional().nullable(),
metadata: z.string().optional().nullable(), // JSON string for metadata sync state
importedAt: z.coerce.date(),
createdAt: z.coerce.date(),
updatedAt: z.coerce.date(),
});
@@ -395,6 +396,9 @@ export const repositories = sqliteTable("repositories", {
destinationOrg: text("destination_org"),
metadata: text("metadata"), // JSON string storing metadata sync state (issues, PRs, releases, etc.)
importedAt: integer("imported_at", { mode: "timestamp" })
.notNull()
.default(sql`(unixepoch())`),
createdAt: integer("created_at", { mode: "timestamp" })
.notNull()
@@ -410,6 +414,7 @@ export const repositories = sqliteTable("repositories", {
index("idx_repositories_organization").on(table.organization),
index("idx_repositories_is_fork").on(table.isForked),
index("idx_repositories_is_starred").on(table.isStarred),
index("idx_repositories_user_imported_at").on(table.userId, table.importedAt),
uniqueIndex("uniq_repositories_user_full_name").on(table.userId, table.fullName),
uniqueIndex("uniq_repositories_user_normalized_full_name").on(table.userId, table.normalizedFullName),
]);


@@ -72,10 +72,21 @@ mock.module("./gitea", () => {
const mirrorStrategy =
config?.githubConfig?.mirrorStrategy ||
(config?.giteaConfig?.preserveOrgStructure ? "preserve" : "flat-user");
const configuredGitHubOwner =
(config?.githubConfig?.owner || config?.githubConfig?.username || "")
.trim()
.toLowerCase();
const repoOwner = repository?.owner?.trim().toLowerCase();
switch (mirrorStrategy) {
case "preserve":
return repository?.organization || config?.giteaConfig?.defaultOwner || "giteauser";
if (repository?.organization) {
return repository.organization;
}
if (configuredGitHubOwner && repoOwner && repoOwner !== configuredGitHubOwner) {
return repository.owner;
}
return config?.giteaConfig?.defaultOwner || "giteauser";
case "single-org":
return config?.giteaConfig?.organization || config?.giteaConfig?.defaultOwner || "giteauser";
case "mixed":
@@ -99,7 +110,7 @@ mock.module("./gitea", () => {
return mockDbSelectResult[0].destinationOrg;
}
return config?.giteaConfig?.defaultOwner || "giteauser";
return mockGetGiteaRepoOwner({ config, repository });
});
return {
isRepoPresentInGitea: mockIsRepoPresentInGitea,
@@ -376,6 +387,7 @@ describe("Gitea Repository Mirroring", () => {
describe("getGiteaRepoOwner - Organization Override Tests", () => {
const baseConfig: Partial<Config> = {
githubConfig: {
owner: "testuser",
username: "testuser",
token: "token",
preserveOrgStructure: false,
@@ -484,6 +496,18 @@ describe("getGiteaRepoOwner - Organization Override Tests", () => {
expect(result).toBe("giteauser");
});
test("preserve strategy: personal repos owned by another user keep source owner namespace", () => {
const repo = {
...baseRepo,
owner: "nice-user",
fullName: "nice-user/test-repo",
organization: undefined,
isForked: true,
};
const result = getGiteaRepoOwner({ config: baseConfig, repository: repo });
expect(result).toBe("nice-user");
});
test("preserve strategy: org repos go to same org name", () => {
const repo = { ...baseRepo, organization: "myorg" };
const result = getGiteaRepoOwner({ config: baseConfig, repository: repo });
@@ -589,4 +613,26 @@ describe("getGiteaRepoOwner - Organization Override Tests", () => {
expect(result).toBe("FOO");
});
test("getGiteaRepoOwnerAsync preserves external personal owner for preserve strategy", async () => {
const configWithUser: Partial<Config> = {
...baseConfig,
userId: "user-id",
};
const repo = {
...baseRepo,
owner: "nice-user",
fullName: "nice-user/test-repo",
organization: undefined,
isForked: true,
};
const result = await getGiteaRepoOwnerAsync({
config: configWithUser,
repository: repo,
});
expect(result).toBe("nice-user");
});
});


@@ -138,14 +138,35 @@ export const getGiteaRepoOwner = ({
// Get the mirror strategy - use preserveOrgStructure for backward compatibility
const mirrorStrategy = config.githubConfig.mirrorStrategy ||
(config.giteaConfig.preserveOrgStructure ? "preserve" : "flat-user");
const configuredGitHubOwner =
(
config.githubConfig.owner ||
(config.githubConfig as typeof config.githubConfig & { username?: string }).username ||
""
)
.trim()
.toLowerCase();
switch (mirrorStrategy) {
case "preserve":
// Keep GitHub structure - org repos go to same org, personal repos to user (or override)
// Keep GitHub structure:
// - org repos stay in the same org
// - personal repos owned by other users keep their source owner namespace
// - personal repos owned by the configured account go to defaultOwner
if (repository.organization) {
return repository.organization;
}
// Use personal repos override if configured, otherwise use username
const normalizedRepoOwner = repository.owner.trim().toLowerCase();
if (
normalizedRepoOwner &&
configuredGitHubOwner &&
normalizedRepoOwner !== configuredGitHubOwner
) {
return repository.owner;
}
// Personal repos from the configured GitHub account go to the configured default owner
return config.giteaConfig.defaultOwner;
case "single-org":
@@ -353,6 +374,161 @@ export const checkRepoLocation = async ({
return { present: false, actualOwner: expectedOwner };
};
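The preserve-strategy branch above resolves the target Gitea owner from three cases. A simplified standalone sketch of that resolution as a pure function, with the config and repository narrowed to just the fields the decision reads (names here are illustrative, not the real signature):

```typescript
interface PreserveOwnerInputs {
  organization: string | null | undefined; // source GitHub org, if any
  repoOwner: string; // owner of the source repo
  configuredGitHubOwner: string; // owner/username from the GitHub config
  defaultOwner: string; // giteaConfig.defaultOwner
}

// preserve strategy:
// - org repos stay in the same org
// - personal repos owned by other users keep their source owner namespace
// - personal repos owned by the configured account go to defaultOwner
function resolvePreserveOwner(i: PreserveOwnerInputs): string {
  if (i.organization) return i.organization;
  const repoOwner = i.repoOwner.trim().toLowerCase();
  const configured = i.configuredGitHubOwner.trim().toLowerCase();
  if (repoOwner && configured && repoOwner !== configured) return i.repoOwner;
  return i.defaultOwner;
}
```

Note the comparison is case-insensitive but the returned namespace preserves the original casing of `repoOwner`, matching how the real code returns `repository.owner` untouched.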
const sanitizeTopicForGitea = (topic: string): string =>
topic
.trim()
.toLowerCase()
.replace(/[^a-z0-9-]+/g, "-")
.replace(/-+/g, "-")
.replace(/^-+/, "")
.replace(/-+$/, "");
const normalizeTopicsForGitea = (
topics: string[],
topicPrefix?: string
): string[] => {
const normalizedPrefix = topicPrefix ? sanitizeTopicForGitea(topicPrefix) : "";
const transformedTopics = topics
.map((topic) => sanitizeTopicForGitea(topic))
.filter((topic) => topic.length > 0)
.map((topic) => (normalizedPrefix ? `${normalizedPrefix}-${topic}` : topic));
return [...new Set(transformedTopics)];
};
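The topic pipeline above can be exercised standalone; the two functions are reproduced verbatim so the example is self-contained:

```typescript
// Gitea topics allow lowercase alphanumerics and dashes; everything
// else collapses to single dashes, with leading/trailing dashes trimmed.
const sanitizeTopicForGitea = (topic: string): string =>
  topic
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9-]+/g, "-")
    .replace(/-+/g, "-")
    .replace(/^-+/, "")
    .replace(/-+$/, "");

// Sanitize each topic, drop empties, apply the optional prefix, dedupe.
const normalizeTopicsForGitea = (
  topics: string[],
  topicPrefix?: string,
): string[] => {
  const prefix = topicPrefix ? sanitizeTopicForGitea(topicPrefix) : "";
  const transformed = topics
    .map((topic) => sanitizeTopicForGitea(topic))
    .filter((topic) => topic.length > 0)
    .map((topic) => (prefix ? `${prefix}-${topic}` : topic));
  return [...new Set(transformed)];
};
```

Deduplication happens after sanitization, so distinct source topics that collapse to the same slug (e.g. `"Hello World!"` and `"hello-world"`) produce a single Gitea topic.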
const getSourceRepositoryCoordinates = (repository: Repository) => {
const delimiterIndex = repository.fullName.indexOf("/");
if (
delimiterIndex > 0 &&
delimiterIndex < repository.fullName.length - 1
) {
return {
owner: repository.fullName.slice(0, delimiterIndex),
repo: repository.fullName.slice(delimiterIndex + 1),
};
}
return {
owner: repository.owner,
repo: repository.name,
};
};
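The `fullName` parsing above prefers the first `/` as the owner/repo split and only falls back to the stored fields when the name has no usable delimiter. A sketch with `Repository` narrowed to the three fields the function reads:

```typescript
interface RepoNameFields {
  fullName: string;
  owner: string;
  name: string;
}

// Split fullName at the first "/" when both halves are non-empty;
// otherwise fall back to the stored owner/name columns.
function getSourceRepositoryCoordinates(repository: RepoNameFields) {
  const delimiterIndex = repository.fullName.indexOf("/");
  if (delimiterIndex > 0 && delimiterIndex < repository.fullName.length - 1) {
    return {
      owner: repository.fullName.slice(0, delimiterIndex),
      repo: repository.fullName.slice(delimiterIndex + 1),
    };
  }
  return { owner: repository.owner, repo: repository.name };
}
```

Using the *first* `/` matters for repo names that themselves contain slashes after renames; a degenerate `fullName` like `"owner/"` or `"/repo"` takes the fallback path rather than yielding an empty half.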
const fetchGitHubTopics = async ({
octokit,
repository,
}: {
octokit: Octokit;
repository: Repository;
}): Promise<string[] | null> => {
const { owner, repo } = getSourceRepositoryCoordinates(repository);
try {
const response = await octokit.request("GET /repos/{owner}/{repo}/topics", {
owner,
repo,
headers: {
Accept: "application/vnd.github+json",
},
});
const names = (response.data as { names?: unknown }).names;
if (!Array.isArray(names)) {
console.warn(
`[Metadata] Unexpected topics payload for ${repository.fullName}; skipping topic sync.`
);
return null;
}
return names.filter((topic): topic is string => typeof topic === "string");
} catch (error) {
console.warn(
`[Metadata] Failed to fetch topics from GitHub for ${repository.fullName}: ${
error instanceof Error ? error.message : String(error)
}`
);
return null;
}
};
const syncRepositoryMetadataToGitea = async ({
config,
octokit,
repository,
giteaOwner,
giteaRepoName,
giteaToken,
}: {
config: Partial<Config>;
octokit: Octokit;
repository: Repository;
giteaOwner: string;
giteaRepoName: string;
giteaToken: string;
}): Promise<void> => {
const giteaBaseUrl = config.giteaConfig?.url;
if (!giteaBaseUrl) {
return;
}
const repoApiUrl = `${giteaBaseUrl}/api/v1/repos/${giteaOwner}/${giteaRepoName}`;
const authHeaders = {
Authorization: `token ${giteaToken}`,
};
const description = repository.description?.trim() || "";
try {
await httpPatch(
repoApiUrl,
{ description },
authHeaders
);
console.log(
`[Metadata] Synced description for ${repository.fullName} to ${giteaOwner}/${giteaRepoName}`
);
} catch (error) {
console.warn(
`[Metadata] Failed to sync description for ${repository.fullName} to ${giteaOwner}/${giteaRepoName}: ${
error instanceof Error ? error.message : String(error)
}`
);
}
if (config.giteaConfig?.addTopics === false) {
return;
}
const sourceTopics = await fetchGitHubTopics({ octokit, repository });
if (sourceTopics === null) {
console.warn(
`[Metadata] Skipping topic sync for ${repository.fullName} because GitHub topics could not be fetched.`
);
return;
}
const topics = normalizeTopicsForGitea(
sourceTopics,
config.giteaConfig?.topicPrefix
);
try {
await httpPut(
`${repoApiUrl}/topics`,
{ topics },
authHeaders
);
console.log(
`[Metadata] Synced ${topics.length} topic(s) for ${repository.fullName} to ${giteaOwner}/${giteaRepoName}`
);
} catch (error) {
console.warn(
`[Metadata] Failed to sync topics for ${repository.fullName} to ${giteaOwner}/${giteaRepoName}: ${
error instanceof Error ? error.message : String(error)
}`
);
}
};
export const mirrorGithubRepoToGitea = async ({
octokit,
repository,
@@ -376,6 +552,23 @@ export const mirrorGithubRepoToGitea = async ({
// Get the correct owner based on the strategy (with organization overrides)
let repoOwner = await getGiteaRepoOwnerAsync({ config, repository });
const mirrorStrategy = config.githubConfig.mirrorStrategy ||
(config.giteaConfig.preserveOrgStructure ? "preserve" : "flat-user");
const configuredGitHubOwner = (
config.githubConfig.owner ||
(config.githubConfig as typeof config.githubConfig & { username?: string }).username ||
""
)
.trim()
.toLowerCase();
const normalizedRepoOwner = repository.owner.trim().toLowerCase();
const isExternalPersonalRepoInPreserveMode =
mirrorStrategy === "preserve" &&
!repository.organization &&
!repository.isStarred &&
normalizedRepoOwner !== "" &&
configuredGitHubOwner !== "" &&
normalizedRepoOwner !== configuredGitHubOwner;
// Determine the actual repository name to use (handle duplicates for starred repos)
let targetRepoName = repository.name;
@@ -427,36 +620,66 @@ export const mirrorGithubRepoToGitea = async ({
});
if (isExisting) {
console.log(
`Repository ${targetRepoName} already exists in Gitea under ${repoOwner}. Updating database status.`
);
// Update database to reflect that the repository is already mirrored
await db
.update(repositories)
.set({
status: repoStatusEnum.parse("mirrored"),
updatedAt: new Date(),
lastMirrored: new Date(),
errorMessage: null,
mirroredLocation: `${repoOwner}/${targetRepoName}`,
})
.where(eq(repositories.id, repository.id!));
// Append log for "mirrored" status
await createMirrorJob({
userId: config.userId,
repositoryId: repository.id,
repositoryName: repository.name,
message: `Repository ${repository.name} already exists in Gitea`,
details: `Repository ${repository.name} was found to already exist in Gitea under ${repoOwner} and database status was updated.`,
status: "mirrored",
const { getGiteaRepoInfo, handleExistingNonMirrorRepo } = await import("./gitea-enhanced");
const existingRepoInfo = await getGiteaRepoInfo({
config,
owner: repoOwner,
repoName: targetRepoName,
});
console.log(
`Repository ${repository.name} database status updated to mirrored`
);
return;
if (existingRepoInfo && !existingRepoInfo.mirror) {
console.log(`Repository ${targetRepoName} exists but is not a mirror. Handling...`);
await handleExistingNonMirrorRepo({
config,
repository,
repoInfo: existingRepoInfo,
strategy: "delete", // Can be configured: "skip", "delete", or "rename"
});
} else if (existingRepoInfo?.mirror) {
console.log(
`Repository ${targetRepoName} already exists in Gitea under ${repoOwner}. Updating database status.`
);
await syncRepositoryMetadataToGitea({
config,
octokit,
repository,
giteaOwner: repoOwner,
giteaRepoName: targetRepoName,
giteaToken: decryptedConfig.giteaConfig.token,
});
// Update database to reflect that the repository is already mirrored
await db
.update(repositories)
.set({
status: repoStatusEnum.parse("mirrored"),
updatedAt: new Date(),
lastMirrored: new Date(),
errorMessage: null,
mirroredLocation: `${repoOwner}/${targetRepoName}`,
})
.where(eq(repositories.id, repository.id!));
// Append log for "mirrored" status
await createMirrorJob({
userId: config.userId,
repositoryId: repository.id,
repositoryName: repository.name,
message: `Repository ${repository.name} already exists in Gitea`,
details: `Repository ${repository.name} was found to already exist in Gitea under ${repoOwner} and database status was updated.`,
status: "mirrored",
});
console.log(
`Repository ${repository.name} database status updated to mirrored`
);
return;
} else {
console.warn(
`[Mirror] Repository ${repoOwner}/${targetRepoName} exists but mirror status could not be verified. Continuing with mirror creation flow.`
);
}
}
console.log(`Mirroring repository ${repository.name}`);
@@ -520,6 +743,13 @@ export const mirrorGithubRepoToGitea = async ({
(orgError.message.includes('Permission denied') ||
orgError.message.includes('Authentication failed') ||
orgError.message.includes('does not have permission'))) {
if (isExternalPersonalRepoInPreserveMode) {
throw new Error(
`Cannot create/access namespace "${repoOwner}" for ${repository.fullName}. ` +
`Refusing fallback to "${config.giteaConfig.defaultOwner}" in preserve mode to avoid cross-owner overwrite.`
);
}
console.warn(`[Fallback] Organization creation/access failed. Attempting to mirror to user account instead.`);
// Update the repository owner to use the user account
@@ -603,6 +833,15 @@ export const mirrorGithubRepoToGitea = async ({
}
);
await syncRepositoryMetadataToGitea({
config,
octokit,
repository,
giteaOwner: repoOwner,
giteaRepoName: targetRepoName,
giteaToken: decryptedConfig.giteaConfig.token,
});
const metadataState = parseRepositoryMetadataState(repository.metadata);
let metadataUpdated = false;
const skipMetadataForStarred =
@@ -1049,36 +1288,66 @@ export async function mirrorGitHubRepoToGiteaOrg({
});
if (isExisting) {
console.log(
`Repository ${targetRepoName} already exists in Gitea organization ${orgName}. Updating database status.`
);
// Update database to reflect that the repository is already mirrored
await db
.update(repositories)
.set({
status: repoStatusEnum.parse("mirrored"),
updatedAt: new Date(),
lastMirrored: new Date(),
errorMessage: null,
mirroredLocation: `${orgName}/${targetRepoName}`,
})
.where(eq(repositories.id, repository.id!));
// Create a mirror job log entry
await createMirrorJob({
userId: config.userId,
repositoryId: repository.id,
repositoryName: repository.name,
message: `Repository ${targetRepoName} already exists in Gitea organization ${orgName}`,
details: `Repository ${targetRepoName} was found to already exist in Gitea organization ${orgName} and database status was updated.`,
status: "mirrored",
const { getGiteaRepoInfo, handleExistingNonMirrorRepo } = await import("./gitea-enhanced");
const existingRepoInfo = await getGiteaRepoInfo({
config,
owner: orgName,
repoName: targetRepoName,
});
console.log(
`Repository ${targetRepoName} database status updated to mirrored in organization ${orgName}`
);
return;
if (existingRepoInfo && !existingRepoInfo.mirror) {
console.log(`Repository ${targetRepoName} exists but is not a mirror. Handling...`);
await handleExistingNonMirrorRepo({
config,
repository,
repoInfo: existingRepoInfo,
strategy: "delete", // Can be configured: "skip", "delete", or "rename"
});
} else if (existingRepoInfo?.mirror) {
console.log(
`Repository ${targetRepoName} already exists in Gitea organization ${orgName}. Updating database status.`
);
await syncRepositoryMetadataToGitea({
config,
octokit,
repository,
giteaOwner: orgName,
giteaRepoName: targetRepoName,
giteaToken: decryptedConfig.giteaConfig.token,
});
// Update database to reflect that the repository is already mirrored
await db
.update(repositories)
.set({
status: repoStatusEnum.parse("mirrored"),
updatedAt: new Date(),
lastMirrored: new Date(),
errorMessage: null,
mirroredLocation: `${orgName}/${targetRepoName}`,
})
.where(eq(repositories.id, repository.id!));
// Create a mirror job log entry
await createMirrorJob({
userId: config.userId,
repositoryId: repository.id,
repositoryName: repository.name,
message: `Repository ${targetRepoName} already exists in Gitea organization ${orgName}`,
details: `Repository ${targetRepoName} was found to already exist in Gitea organization ${orgName} and database status was updated.`,
status: "mirrored",
});
console.log(
`Repository ${targetRepoName} database status updated to mirrored in organization ${orgName}`
);
return;
} else {
console.warn(
`[Mirror] Repository ${orgName}/${targetRepoName} exists but mirror status could not be verified. Continuing with mirror creation flow.`
);
}
}
console.log(
@@ -1137,6 +1406,7 @@ export async function mirrorGitHubRepoToGiteaOrg({
wiki: shouldMirrorWiki || false,
lfs: config.giteaConfig?.lfs || false,
private: repository.isPrivate,
description: repository.description?.trim() || "",
};
// Add authentication for private repositories
@@ -1159,6 +1429,15 @@ export async function mirrorGitHubRepoToGiteaOrg({
}
);
await syncRepositoryMetadataToGitea({
config,
octokit,
repository,
giteaOwner: orgName,
giteaRepoName: targetRepoName,
giteaToken: decryptedConfig.giteaConfig.token,
});
const metadataState = parseRepositoryMetadataState(repository.metadata);
let metadataUpdated = false;
const skipMetadataForStarred =


@@ -287,6 +287,7 @@ export async function getGithubRepositories({
lastMirrored: undefined,
errorMessage: undefined,
importedAt: new Date(),
createdAt: repo.created_at ? new Date(repo.created_at) : new Date(),
updatedAt: repo.updated_at ? new Date(repo.updated_at) : new Date(),
}));
@@ -348,6 +349,7 @@ export async function getGithubStarredRepositories({
lastMirrored: undefined,
errorMessage: undefined,
importedAt: new Date(),
createdAt: repo.created_at ? new Date(repo.created_at) : new Date(),
updatedAt: repo.updated_at ? new Date(repo.updated_at) : new Date(),
}));
@@ -369,7 +371,7 @@ export async function getGithubOrganizations({
}: {
octokit: Octokit;
config: Partial<Config>;
}): Promise<GitOrg[]> {
}): Promise<{ organizations: GitOrg[]; failedOrgs: { name: string; avatarUrl: string; reason: string }[] }> {
try {
const { data: orgs } = await octokit.orgs.listForAuthenticatedUser({
per_page: 100,
@@ -392,30 +394,47 @@ export async function getGithubOrganizations({
return true;
});
const organizations = await Promise.all(
const failedOrgs: { name: string; avatarUrl: string; reason: string }[] = [];
const results = await Promise.all(
filteredOrgs.map(async (org) => {
const [{ data: orgDetails }, { data: membership }] = await Promise.all([
octokit.orgs.get({ org: org.login }),
octokit.orgs.getMembershipForAuthenticatedUser({ org: org.login }),
]);
try {
const [{ data: orgDetails }, { data: membership }] = await Promise.all([
octokit.orgs.get({ org: org.login }),
octokit.orgs.getMembershipForAuthenticatedUser({ org: org.login }),
]);
const totalRepos =
orgDetails.public_repos + (orgDetails.total_private_repos ?? 0);
return {
name: org.login,
avatarUrl: org.avatar_url,
membershipRole: membership.role as MembershipRole,
isIncluded: false,
status: "imported" as RepoStatus,
repositoryCount: totalRepos,
createdAt: new Date(),
updatedAt: new Date(),
};
} catch (error: any) {
// Capture organizations that return 403 (SAML enforcement, insufficient token scope, etc.)
if (error?.status === 403) {
const reason = error?.message || "access denied";
console.warn(
`Failed to import organization ${org.login} - ${reason}`,
);
failedOrgs.push({ name: org.login, avatarUrl: org.avatar_url, reason });
return null;
}
throw error;
}
}),
);
return organizations;
return {
organizations: results.filter((org): org is NonNullable<typeof org> => org !== null),
failedOrgs,
};
} catch (error) {
throw new Error(
`Error fetching organizations: ${
@@ -475,6 +494,7 @@ export async function getGithubOrganizationRepositories({
lastMirrored: undefined,
errorMessage: undefined,
importedAt: new Date(),
createdAt: repo.created_at ? new Date(repo.created_at) : new Date(),
updatedAt: repo.updated_at ? new Date(repo.updated_at) : new Date(),
}));
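The per-organization try/catch in this file follows a common pattern: map each item to a result-or-null, collect expected failures in a side list, and filter out the nulls afterwards. A minimal standalone sketch of that pattern (`fetchAll`, `fetchOrg`, and the reduced types are illustrative stand-ins for the Octokit calls, not names from the codebase):

```typescript
type OrgResult = { name: string };
type FailedOrg = { name: string; reason: string };

// Collect 403 failures without aborting the whole batch; rethrow
// anything unexpected so real errors still surface.
async function fetchAll(
  names: string[],
  fetchOrg: (name: string) => Promise<OrgResult>,
): Promise<{ organizations: OrgResult[]; failedOrgs: FailedOrg[] }> {
  const failedOrgs: FailedOrg[] = [];
  const results = await Promise.all(
    names.map(async (name) => {
      try {
        return await fetchOrg(name);
      } catch (error: any) {
        if (error?.status === 403) {
          failedOrgs.push({ name, reason: error?.message ?? "access denied" });
          return null; // filtered out below
        }
        throw error;
      }
    }),
  );
  return {
    organizations: results.filter((o): o is OrgResult => o !== null),
    failedOrgs,
  };
}
```

Returning `null` and filtering with a type guard keeps the success array's type narrow while preserving per-item failure details.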


@@ -28,6 +28,7 @@ function sampleRepo(overrides: Partial<GitRepo> = {}): GitRepo {
status: 'imported',
lastMirrored: undefined,
errorMessage: undefined,
importedAt: new Date(),
createdAt: new Date(),
updatedAt: new Date(),
};


@@ -56,6 +56,7 @@ export function normalizeGitRepoToInsert(
status: 'imported',
lastMirrored: repo.lastMirrored ?? null,
errorMessage: repo.errorMessage ?? null,
importedAt: repo.importedAt || new Date(),
createdAt: repo.createdAt || new Date(),
updatedAt: repo.updatedAt || new Date(),
};


@@ -0,0 +1,68 @@
import { describe, expect, test } from "bun:test";
import type { Repository } from "@/lib/db/schema";
import { sortRepositories } from "@/lib/repository-sorting";
function makeRepo(overrides: Partial<Repository>): Repository {
return {
id: "id",
userId: "user-1",
configId: "config-1",
name: "repo",
fullName: "owner/repo",
normalizedFullName: "owner/repo",
url: "https://github.com/owner/repo",
cloneUrl: "https://github.com/owner/repo.git",
owner: "owner",
organization: null,
mirroredLocation: "",
isPrivate: false,
isForked: false,
forkedFrom: null,
hasIssues: true,
isStarred: false,
isArchived: false,
size: 1,
hasLFS: false,
hasSubmodules: false,
language: null,
description: null,
defaultBranch: "main",
visibility: "public",
status: "imported",
lastMirrored: null,
errorMessage: null,
destinationOrg: null,
metadata: null,
importedAt: new Date("2026-01-01T00:00:00.000Z"),
createdAt: new Date("2020-01-01T00:00:00.000Z"),
updatedAt: new Date("2026-01-01T00:00:00.000Z"),
...overrides,
};
}
describe("sortRepositories", () => {
test("defaults to recently imported first", () => {
const repos = [
makeRepo({ id: "a", fullName: "owner/a", importedAt: new Date("2026-01-01T00:00:00.000Z") }),
makeRepo({ id: "b", fullName: "owner/b", importedAt: new Date("2026-03-01T00:00:00.000Z") }),
makeRepo({ id: "c", fullName: "owner/c", importedAt: new Date("2025-12-01T00:00:00.000Z") }),
];
const sorted = sortRepositories(repos, undefined);
expect(sorted.map((repo) => repo.id)).toEqual(["b", "a", "c"]);
});
test("supports name and updated sorting", () => {
const repos = [
makeRepo({ id: "a", fullName: "owner/zeta", updatedAt: new Date("2026-01-01T00:00:00.000Z") }),
makeRepo({ id: "b", fullName: "owner/alpha", updatedAt: new Date("2026-03-01T00:00:00.000Z") }),
makeRepo({ id: "c", fullName: "owner/middle", updatedAt: new Date("2025-12-01T00:00:00.000Z") }),
];
const nameSorted = sortRepositories(repos, "name-asc");
expect(nameSorted.map((repo) => repo.id)).toEqual(["b", "c", "a"]);
const updatedSorted = sortRepositories(repos, "updated-desc");
expect(updatedSorted.map((repo) => repo.id)).toEqual(["b", "a", "c"]);
});
});


@@ -0,0 +1,40 @@
import type { Repository } from "@/lib/db/schema";
export type RepositorySortOrder =
| "imported-desc"
| "imported-asc"
| "updated-desc"
| "updated-asc"
| "name-asc"
| "name-desc";
function getTimestamp(value: Date | string | null | undefined): number {
if (!value) return 0;
const timestamp = new Date(value).getTime();
return Number.isNaN(timestamp) ? 0 : timestamp;
}
export function sortRepositories(
repositories: Repository[],
sortOrder: string | undefined,
): Repository[] {
const order = (sortOrder ?? "imported-desc") as RepositorySortOrder;
return [...repositories].sort((a, b) => {
switch (order) {
case "imported-asc":
return getTimestamp(a.importedAt) - getTimestamp(b.importedAt);
case "updated-desc":
return getTimestamp(b.updatedAt) - getTimestamp(a.updatedAt);
case "updated-asc":
return getTimestamp(a.updatedAt) - getTimestamp(b.updatedAt);
case "name-asc":
return a.fullName.localeCompare(b.fullName, undefined, { sensitivity: "base" });
case "name-desc":
return b.fullName.localeCompare(a.fullName, undefined, { sensitivity: "base" });
case "imported-desc":
default:
return getTimestamp(b.importedAt) - getTimestamp(a.importedAt);
}
});
}
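One detail worth noting in the comparator above: any unrecognised sort key falls through the `default` case and behaves like `imported-desc`. A trimmed-down sketch of just that fallback, with the `Repository` shape reduced to the relevant field (names here are illustrative):

```typescript
type Repo = { id: string; importedAt: Date };

// Reduced sketch of the fallback: anything other than "imported-asc"
// sorts newest-imported first, mirroring the switch's default branch.
function sortByImported(repos: Repo[], order?: string): Repo[] {
  const desc = order !== "imported-asc";
  return [...repos].sort((a, b) =>
    desc
      ? b.importedAt.getTime() - a.importedAt.getTime()
      : a.importedAt.getTime() - b.importedAt.getTime(),
  );
}
```

Spreading into a new array before sorting (as the real function also does) keeps the input list unmutated.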


@@ -11,9 +11,11 @@ export function cn(...inputs: ClassValue[]) {
export function generateRandomString(length: number): string {
const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
const randomValues = new Uint32Array(length);
crypto.getRandomValues(randomValues);
let result = '';
for (let i = 0; i < length; i++) {
result += chars.charAt(Math.floor(Math.random() * chars.length));
result += chars.charAt(randomValues[i] % chars.length);
}
return result;
}
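The fix above swaps `Math.random()` for a CSPRNG, though `randomValues[i] % chars.length` still carries a slight modulo bias (2^32 is not a multiple of 62). In a Node runtime, `crypto.randomInt` performs rejection sampling internally, so each index is uniform. An alternative sketch under that assumption (not what the patch ships):

```typescript
import { randomInt } from "node:crypto";

// Unbiased variant: randomInt(n) rejection-samples internally,
// so every character is drawn uniformly from the alphabet.
function generateRandomStringUnbiased(length: number): string {
  const chars =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
  let result = "";
  for (let i = 0; i < length; i++) {
    result += chars.charAt(randomInt(chars.length));
  }
  return result;
}
```

For a 62-character alphabet the bias in the shipped version is tiny (about 4 in 2^32 per draw), so this is a hardening note rather than a correctness issue.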


@@ -160,10 +160,23 @@ export function generateSecureToken(length: number = 32): string {
}
/**
* Hashes a value using SHA-256 (for non-reversible values like API keys for comparison)
* Hashes a value using SHA-256 with a random salt (for non-reversible values like API keys)
* @param value The value to hash
* @returns Hex encoded hash
* @returns Salt and hash in format "salt:hash"
*/
export function hashValue(value: string): string {
return crypto.createHash('sha256').update(value).digest('hex');
const salt = crypto.randomBytes(16).toString('hex');
const hash = crypto.createHash('sha256').update(salt + value).digest('hex');
return `${salt}:${hash}`;
}
/**
* Verifies a value against a salted hash produced by hashValue()
* Uses constant-time comparison to prevent timing attacks
*/
export function verifyHash(value: string, saltedHash: string): boolean {
const [salt, expectedHash] = saltedHash.split(':');
if (!salt || !expectedHash) return false;
const actualHash = crypto.createHash('sha256').update(salt + value).digest('hex');
return crypto.timingSafeEqual(Buffer.from(actualHash, 'hex'), Buffer.from(expectedHash, 'hex'));
}
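A round-trip of the two helpers above, re-implemented standalone with `node:crypto` for illustration. One extra length guard is added here (an assumption, not part of the patch) so `timingSafeEqual` cannot throw on a malformed stored hash:

```typescript
import crypto from "node:crypto";

// Standalone copies of hashValue/verifyHash from the patch above.
function hashValue(value: string): string {
  const salt = crypto.randomBytes(16).toString("hex");
  const hash = crypto.createHash("sha256").update(salt + value).digest("hex");
  return `${salt}:${hash}`;
}

function verifyHash(value: string, saltedHash: string): boolean {
  const [salt, expectedHash] = saltedHash.split(":");
  if (!salt || !expectedHash) return false;
  const actualHash = crypto
    .createHash("sha256")
    .update(salt + value)
    .digest("hex");
  const a = Buffer.from(actualHash, "hex");
  const b = Buffer.from(expectedHash, "hex");
  // Length guard added here: timingSafeEqual throws on unequal lengths.
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}
```

Because the salt is random per call, two hashes of the same value differ, which is why lookups must go through `verifyHash` rather than comparing stored strings directly.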


@@ -7,17 +7,10 @@ export const GET: APIRoute = async () => {
const userCountResult = await db
.select({ count: sql<number>`count(*)` })
.from(users);
const userCount = userCountResult[0].count;
if (userCount === 0) {
return new Response(JSON.stringify({ error: "No users found" }), {
status: 404,
headers: { "Content-Type": "application/json" },
});
}
const hasUsers = userCountResult[0].count > 0;
return new Response(JSON.stringify({ userCount }), {
return new Response(JSON.stringify({ hasUsers }), {
status: 200,
headers: { "Content-Type": "application/json" },
});
@@ -27,4 +20,4 @@ export const GET: APIRoute = async () => {
headers: { "Content-Type": "application/json" },
});
}
};
};


@@ -1,79 +1,42 @@
import type { APIRoute } from "astro";
import { auth } from "@/lib/auth";
import { db } from "@/lib/db";
import { users } from "@/lib/db/schema";
import { nanoid } from "nanoid";
import { ENV } from "@/lib/config";
import { requireAuthenticatedUserId } from "@/lib/auth-guards";
export const GET: APIRoute = async ({ request, locals }) => {
// Only available in development
if (ENV.NODE_ENV === "production") {
return new Response(JSON.stringify({ error: "Not found" }), {
status: 404,
headers: { "Content-Type": "application/json" },
});
}
export const GET: APIRoute = async ({ request }) => {
try {
// Get Better Auth configuration info
const authResult = await requireAuthenticatedUserId({ request, locals });
if ("response" in authResult) return authResult.response;
const info = {
baseURL: auth.options.baseURL,
basePath: auth.options.basePath,
trustedOrigins: auth.options.trustedOrigins,
emailPasswordEnabled: auth.options.emailAndPassword?.enabled,
userFields: auth.options.user?.additionalFields,
databaseConfig: {
usePlural: true,
provider: "sqlite"
}
};
return new Response(JSON.stringify({
success: true,
config: info
config: info,
}), {
status: 200,
headers: { "Content-Type": "application/json" },
});
} catch (error) {
// Log full error details server-side for debugging
console.error("Debug endpoint error:", error);
// Only return safe error information to the client
return new Response(JSON.stringify({
success: false,
error: error instanceof Error ? error.message : "An unexpected error occurred"
error: "An unexpected error occurred",
}), {
status: 500,
headers: { "Content-Type": "application/json" },
});
}
};
export const POST: APIRoute = async ({ request }) => {
try {
// Test creating a user directly
const userId = nanoid();
const now = new Date();
await db.insert(users).values({
id: userId,
email: "test2@example.com",
emailVerified: false,
username: "test2",
// Let the database handle timestamps with defaults
});
return new Response(JSON.stringify({
success: true,
userId,
message: "User created successfully"
}), {
status: 200,
headers: { "Content-Type": "application/json" },
});
} catch (error) {
// Log full error details server-side for debugging
console.error("Debug endpoint error:", error);
// Only return safe error information to the client
return new Response(JSON.stringify({
success: false,
error: error instanceof Error ? error.message : "An unexpected error occurred"
}), {
status: 500,
headers: { "Content-Type": "application/json" },
});
}
};


@@ -6,19 +6,23 @@
import type { APIRoute } from 'astro';
import { runAutomaticCleanup } from '@/lib/cleanup-service';
import { createSecureErrorResponse } from '@/lib/utils';
import { requireAuthenticatedUserId } from '@/lib/auth-guards';
export const POST: APIRoute = async ({ request }) => {
export const POST: APIRoute = async ({ request, locals }) => {
try {
const authResult = await requireAuthenticatedUserId({ request, locals });
if ("response" in authResult) return authResult.response;
console.log('Manual cleanup trigger requested');
// Run the automatic cleanup
const results = await runAutomaticCleanup();
// Calculate totals
const totalEventsDeleted = results.reduce((sum, result) => sum + result.eventsDeleted, 0);
const totalJobsDeleted = results.reduce((sum, result) => sum + result.mirrorJobsDeleted, 0);
const errors = results.filter(result => result.error);
return new Response(
JSON.stringify({
success: true,
@@ -28,7 +32,6 @@ export const POST: APIRoute = async ({ request }) => {
totalEventsDeleted,
totalJobsDeleted,
errors: errors.length,
details: results,
},
}),
{


@@ -38,6 +38,24 @@ export const POST: APIRoute = async ({ request, locals }) => {
);
}
// Validate Gitea URL format and protocol
if (giteaConfig.url) {
try {
const giteaUrl = new URL(giteaConfig.url);
if (!['http:', 'https:'].includes(giteaUrl.protocol)) {
return new Response(
JSON.stringify({ success: false, message: "Gitea URL must use http or https protocol." }),
{ status: 400, headers: { "Content-Type": "application/json" } }
);
}
} catch {
return new Response(
JSON.stringify({ success: false, message: "Invalid Gitea URL format." }),
{ status: 400, headers: { "Content-Type": "application/json" } }
);
}
}
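The validation block above relies on two behaviours of the WHATWG `URL` constructor: it throws on malformed input, and it exposes the scheme via `protocol`, letting an allowlist reject things like `javascript:` URLs. A standalone sketch of the same check (`validateGiteaUrl` is an illustrative name, not a helper in the codebase):

```typescript
// Returns an error message, or null when the URL is acceptable.
function validateGiteaUrl(url: string): string | null {
  try {
    const parsed = new URL(url); // throws on malformed input
    if (!["http:", "https:"].includes(parsed.protocol)) {
      return "Gitea URL must use http or https protocol.";
    }
    return null;
  } catch {
    return "Invalid Gitea URL format.";
  }
}
```

Note that `parsed.protocol` includes the trailing colon (`"https:"`), which is why the allowlist entries do too.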
// Fetch existing config
const existingConfigResult = await db
.select()


@@ -49,7 +49,7 @@ export const GET: APIRoute = async ({ request, locals }) => {
.select()
.from(repositories)
.where(and(...conditions))
.orderBy(sql`name COLLATE NOCASE`);
.orderBy(sql`${repositories.importedAt} DESC`, sql`name COLLATE NOCASE`);
const response: RepositoryApiResponse = {
success: true,


@@ -1,14 +1,9 @@
import type { APIRoute } from "astro";
import { jsonResponse, createSecureErrorResponse } from "@/lib/utils";
import { db } from "@/lib/db";
import { ENV } from "@/lib/config";
import { getRecoveryStatus, hasJobsNeedingRecovery } from "@/lib/recovery";
import os from "os";
import { httpGet } from "@/lib/http-client";
// Track when the server started
const serverStartTime = new Date();
// Cache for the latest version to avoid frequent GitHub API calls
interface VersionCache {
latestVersion: string;
@@ -23,18 +18,6 @@ export const GET: APIRoute = async () => {
// Check database connection by running a simple query
const dbStatus = await checkDatabaseConnection();
// Get system information
const systemInfo = {
uptime: getUptime(),
memory: getMemoryUsage(),
os: {
platform: os.platform(),
version: os.version(),
arch: os.arch(),
},
env: ENV.NODE_ENV,
};
// Get current and latest versions
const currentVersion = process.env.npm_package_version || "unknown";
const latestVersion = await checkLatestVersion();
@@ -50,7 +33,7 @@ export const GET: APIRoute = async () => {
overallStatus = "degraded";
}
// Build response
// Build response (no OS/memory details to avoid information disclosure)
const healthData = {
status: overallStatus,
timestamp: new Date().toISOString(),
@@ -59,9 +42,11 @@ export const GET: APIRoute = async () => {
updateAvailable: latestVersion !== "unknown" &&
currentVersion !== "unknown" &&
compareVersions(currentVersion, latestVersion) < 0,
database: dbStatus,
recovery: recoveryStatus,
system: systemInfo,
database: { connected: dbStatus.connected },
recovery: {
status: recoveryStatus.status,
jobsNeedingRecovery: recoveryStatus.jobsNeedingRecovery,
},
};
return jsonResponse({
@@ -125,55 +110,6 @@ async function getRecoverySystemStatus() {
}
}
/**
* Get server uptime information
*/
function getUptime() {
const now = new Date();
const uptimeMs = now.getTime() - serverStartTime.getTime();
// Convert to human-readable format
const seconds = Math.floor(uptimeMs / 1000);
const minutes = Math.floor(seconds / 60);
const hours = Math.floor(minutes / 60);
const days = Math.floor(hours / 24);
return {
startTime: serverStartTime.toISOString(),
uptimeMs,
formatted: `${days}d ${hours % 24}h ${minutes % 60}m ${seconds % 60}s`,
};
}
/**
* Get memory usage information
*/
function getMemoryUsage() {
const memoryUsage = process.memoryUsage();
return {
rss: formatBytes(memoryUsage.rss),
heapTotal: formatBytes(memoryUsage.heapTotal),
heapUsed: formatBytes(memoryUsage.heapUsed),
external: formatBytes(memoryUsage.external),
systemTotal: formatBytes(os.totalmem()),
systemFree: formatBytes(os.freemem()),
};
}
/**
* Format bytes to human-readable format
*/
function formatBytes(bytes: number): string {
if (bytes === 0) return '0 Bytes';
const k = 1024;
const sizes = ['Bytes', 'KB', 'MB', 'GB', 'TB'];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
}
/**
* Compare semantic versions
* Returns:


@@ -1,6 +1,6 @@
import type { APIRoute } from "astro";
import { db, organizations, repositories, configs } from "@/lib/db";
import { eq } from "drizzle-orm";
import { eq, and } from "drizzle-orm";
import { v4 as uuidv4 } from "uuid";
import { createMirrorJob } from "@/lib/helpers";
import {
@@ -47,13 +47,14 @@ export const POST: APIRoute = async ({ request, locals }) => {
const octokit = createGitHubClient(decryptedToken, userId, githubUsername);
// Fetch GitHub data in parallel
const [basicAndForkedRepos, starredRepos, gitOrgs] = await Promise.all([
const [basicAndForkedRepos, starredRepos, orgResult] = await Promise.all([
getGithubRepositories({ octokit, config }),
config.githubConfig?.includeStarred
? getGithubStarredRepositories({ octokit, config })
: Promise.resolve([]),
getGithubOrganizations({ octokit, config }),
]);
const { organizations: gitOrgs, failedOrgs } = orgResult;
// Merge and de-duplicate by fullName, preferring starred variant when duplicated
const allGithubRepos = mergeGitReposPreferStarred(basicAndForkedRepos, starredRepos);
@@ -89,6 +90,7 @@ export const POST: APIRoute = async ({ request, locals }) => {
status: repo.status,
lastMirrored: repo.lastMirrored ?? null,
errorMessage: repo.errorMessage ?? null,
importedAt: repo.importedAt,
createdAt: repo.createdAt,
updatedAt: repo.updatedAt,
}));
@@ -108,8 +110,27 @@ export const POST: APIRoute = async ({ request, locals }) => {
updatedAt: new Date(),
}));
// Prepare failed org records for DB insertion
const failedOrgRecords = failedOrgs.map((org) => ({
id: uuidv4(),
userId,
configId: config.id,
name: org.name,
normalizedName: org.name.toLowerCase(),
avatarUrl: org.avatarUrl,
membershipRole: "member" as const,
isIncluded: false,
status: "failed" as const,
errorMessage: org.reason,
repositoryCount: 0,
createdAt: new Date(),
updatedAt: new Date(),
}));
let insertedRepos: typeof newRepos = [];
let insertedOrgs: typeof newOrgs = [];
let insertedFailedOrgs: typeof failedOrgRecords = [];
let recoveredOrgCount = 0;
// Transaction to insert only new items
await db.transaction(async (tx) => {
@@ -119,18 +140,62 @@ export const POST: APIRoute = async ({ request, locals }) => {
.from(repositories)
.where(eq(repositories.userId, userId)),
tx
.select({ normalizedName: organizations.normalizedName })
.select({ normalizedName: organizations.normalizedName, status: organizations.status })
.from(organizations)
.where(eq(organizations.userId, userId)),
]);
const existingRepoNames = new Set(existingRepos.map((r) => r.normalizedFullName));
const existingOrgNames = new Set(existingOrgs.map((o) => o.normalizedName));
const existingOrgMap = new Map(existingOrgs.map((o) => [o.normalizedName, o.status]));
insertedRepos = newRepos.filter(
(r) => !existingRepoNames.has(r.normalizedFullName)
);
insertedOrgs = newOrgs.filter((o) => !existingOrgNames.has(o.normalizedName));
insertedOrgs = newOrgs.filter((o) => !existingOrgMap.has(o.normalizedName));
// Update previously failed orgs that now succeeded
const recoveredOrgs = newOrgs.filter(
(o) => existingOrgMap.get(o.normalizedName) === "failed"
);
for (const org of recoveredOrgs) {
await tx
.update(organizations)
.set({
status: "imported",
errorMessage: null,
repositoryCount: org.repositoryCount,
avatarUrl: org.avatarUrl,
membershipRole: org.membershipRole,
updatedAt: new Date(),
})
.where(
and(
eq(organizations.userId, userId),
eq(organizations.normalizedName, org.normalizedName),
)
);
}
recoveredOrgCount = recoveredOrgs.length;
// Insert or update failed orgs (only update orgs already in "failed" state — don't overwrite good state)
insertedFailedOrgs = failedOrgRecords.filter((o) => !existingOrgMap.has(o.normalizedName));
const stillFailedOrgs = failedOrgRecords.filter(
(o) => existingOrgMap.get(o.normalizedName) === "failed"
);
for (const org of stillFailedOrgs) {
await tx
.update(organizations)
.set({
errorMessage: org.errorMessage,
updatedAt: new Date(),
})
.where(
and(
eq(organizations.userId, userId),
eq(organizations.normalizedName, org.normalizedName),
)
);
}
// Batch insert repositories to avoid SQLite parameter limit (dynamic by column count)
const sample = newRepos[0];
@@ -148,9 +213,10 @@ export const POST: APIRoute = async ({ request, locals }) => {
// Batch insert organizations (they have fewer fields, so we can use larger batches)
const ORG_BATCH_SIZE = 100;
if (insertedOrgs.length > 0) {
for (let i = 0; i < insertedOrgs.length; i += ORG_BATCH_SIZE) {
const batch = insertedOrgs.slice(i, i + ORG_BATCH_SIZE);
const allNewOrgs = [...insertedOrgs, ...insertedFailedOrgs];
if (allNewOrgs.length > 0) {
for (let i = 0; i < allNewOrgs.length; i += ORG_BATCH_SIZE) {
const batch = allNewOrgs.slice(i, i + ORG_BATCH_SIZE);
await tx.insert(organizations).values(batch);
}
}
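The batched inserts above exist because SQLite caps the number of bound parameters per statement (commonly 999 on older builds, 32766 on newer ones), and each inserted row consumes one parameter per column. A minimal sketch of the chunking step (`chunkRows` is illustrative; the route derives its batch size from the column count rather than hard-coding it):

```typescript
// Split rows into fixed-size batches so each INSERT stays under
// SQLite's bound-parameter limit (batchSize * columns <= limit).
function chunkRows<T>(rows: T[], batchSize: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < rows.length; i += batchSize) {
    batches.push(rows.slice(i, i + batchSize));
  }
  return batches;
}
```

Each batch then becomes one `tx.insert(...).values(batch)` call inside the transaction, so a failure still rolls back the whole import.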
@@ -189,6 +255,8 @@ export const POST: APIRoute = async ({ request, locals }) => {
newRepositories: insertedRepos.length,
newOrganizations: insertedOrgs.length,
skippedDisabledRepositories: allGithubRepos.length - mirrorableGithubRepos.length,
failedOrgs: failedOrgs.map((o) => o.name),
recoveredOrgs: recoveredOrgCount,
},
});
} catch (error) {


@@ -187,6 +187,7 @@ export const POST: APIRoute = async ({ request, locals }) => {
status: "imported" as RepoStatus,
lastMirrored: null,
errorMessage: null,
importedAt: new Date(),
createdAt: repo.created_at ? new Date(repo.created_at) : new Date(),
updatedAt: repo.updated_at ? new Date(repo.updated_at) : new Date(),
};


@@ -155,6 +155,7 @@ export const POST: APIRoute = async ({ request, locals }) => {
errorMessage: null,
mirroredLocation: "",
destinationOrg: null,
importedAt: new Date(),
createdAt: repoData.created_at
? new Date(repoData.created_at)
: new Date(),


@@ -75,6 +75,7 @@ export interface GitRepo {
lastMirrored?: Date;
errorMessage?: string;
importedAt: Date;
createdAt: Date;
updatedAt: Date;
}


@@ -7,6 +7,7 @@ export interface FilterParams {
membershipRole?: MembershipRole | ""; //membership role in orgs
owner?: string; // owner of the repos
organization?: string; // organization of the repos
sort?: string; // repository sort order
type?: string; //types in activity log
name?: string; // name in activity log
}


@@ -1,7 +1,7 @@
{
"name": "www",
"type": "module",
"version": "1.1.0",
"version": "1.2.0",
"scripts": {
"dev": "astro dev",
"build": "astro build",
@@ -9,20 +9,20 @@
"astro": "astro"
},
"dependencies": {
"@astrojs/mdx": "^4.3.13",
"@astrojs/react": "^4.4.2",
"@astrojs/mdx": "^5.0.0",
"@astrojs/react": "^5.0.0",
"@radix-ui/react-slot": "^1.2.4",
"@splinetool/react-spline": "^4.1.0",
"@splinetool/runtime": "^1.12.60",
"@splinetool/runtime": "^1.12.69",
"@tailwindcss/vite": "^4.2.1",
"@types/canvas-confetti": "^1.9.0",
"@types/react": "^19.2.14",
"@types/react-dom": "^19.2.3",
"astro": "^5.17.3",
"astro": "^6.0.4",
"canvas-confetti": "^1.9.4",
"class-variance-authority": "^0.7.1",
"clsx": "^2.1.1",
"lucide-react": "^0.575.0",
"lucide-react": "^0.577.0",
"react": "^19.2.4",
"react-dom": "^19.2.4",
"tailwind-merge": "^3.5.0",
@@ -31,5 +31,5 @@
"devDependencies": {
"tw-animate-css": "^1.4.0"
},
"packageManager": "pnpm@10.24.0"
"packageManager": "pnpm@10.32.1"
}

www/pnpm-lock.yaml (generated) — 902 lines changed; diff suppressed because it is too large.