
Scalability Benchmarks

GitHub vs Coregit API latency benchmarks with reproducible test methodology.

GitHub vs Coregit — Head-to-Head

Real-world benchmarks comparing GitHub REST API and Coregit API for identical operations. All tests run from the same machine, same network, same time window.

Test conditions: Kazakhstan → US-East, private repos with authentication, April 2026.

Write Operations

Coregit's atomic multi-file commit is the key differentiator. GitHub requires at least N+3 sequential API calls per commit (N blob uploads, then tree creation, commit creation, and a ref update — plus an extra call to read HEAD when the parent SHA isn't already known). Coregit does it in 1 call.

| Operation | GitHub | Coregit | API Calls | Winner |
| --- | --- | --- | --- | --- |
| Commit 1 file | 2,217 ms | 2,148 ms | 4 vs 1 | Coregit ~1:1 |
| Commit 5 files | 4,829 ms | 3,456 ms | 8 vs 1 | Coregit 1.4x |
| Commit 10 files | 8,387 ms | 4,183 ms | 13 vs 1 | Coregit 2.0x |
| Commit 100 files | 72,064 ms | 19,769 ms | 105 vs 1 | Coregit 3.6x |

Coregit's advantage grows with file count:

  • 1 file: ~parity (single API call vs 4, but Coregit does more server-side)
  • 5 files: 1.4x faster
  • 10 files: 2.0x faster
  • 100 files: 3.6x faster
  • 1,000 files (projected): ~10x+ faster
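The call-count arithmetic above can be sketched as a back-of-envelope model. The ~690 ms per-call figure is an assumption derived from the 100-file row (72,064 ms over 105 calls), not a measured constant, and the simple N+3 formula slightly undercounts the 105 calls actually observed for 100 files:

```shell
# Rough model: GitHub's commit time grows with the sequential call chain,
# while Coregit stays at one round-trip plus server-side work.
# per_call_ms is an assumed average, derived from the 100-file row above.
per_call_ms=690
for n in 1 5 10 100 1000; do
  calls=$((n + 3))   # N blob uploads + tree + commit + ref update (minimum)
  echo "commit $n files: ~$calls GitHub calls = ~$((calls * per_call_ms)) ms vs 1 Coregit call"
done
```

At 1,000 files the modeled GitHub chain approaches 700 seconds, which is why the projected gap keeps widening.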

Read Operations

| Operation | GitHub | Coregit | Winner |
| --- | --- | --- | --- |
| Read file (warm) | 735 ms | 800 ms | GitHub 1.1x |
| List tree | 797 ms | 752 ms | Coregit 1.1x |
| List commits (warm) | 829 ms | 474 ms | Coregit 1.7x |

Coregit uses KV-cached flat tree maps and commit lists. First request populates the cache; all subsequent requests hit cache.

Other Operations

| Operation | GitHub | Coregit | Winner |
| --- | --- | --- | --- |
| Create repo | 713 ms | 2,500 ms | GitHub 3.5x |
| Create branch | 689 ms | 1,794 ms | GitHub 2.6x |
| Diff branches (warm) | 738 ms | 752 ms | ~Parity |

GitHub wins on single-call operations thanks to its monolithic architecture with in-memory caches (zero network hops between components). Coregit is a distributed system on Cloudflare Workers, where each component (KV, R2, Durable Objects, Hyperdrive) sits behind a separate network call. Diff performance was brought to parity with shared tree caching: subtrees common to both branches are fetched once.

Architecture

Coregit's performance stack:

  • Cloudflare Workers — serverless compute at edge or near backend (Smart Placement)
  • R2 — git object storage (content-addressed, immutable)
  • KV — caching layer (tree maps, auth, repo metadata, commit lists, embeddings)
  • Durable Objects — rate limiting, session state, per-repo hot layer
  • Hyperdrive — PostgreSQL connection pooling and query caching for Neon

Caching layers

| Layer | Latency | What's cached | TTL |
| --- | --- | --- | --- |
| In-memory (per-request) | 0 ms | Git objects (32 MB cap) | Request lifetime |
| Edge Cache API | < 5 ms | Git objects (immutable by SHA) | 1 year |
| Hyperdrive | 0-5 ms | DB query results | 60s |
| KV (global edge) | 5-15 ms | Auth, repo metadata, flat trees, commit lists, embeddings | 60s-forever |
| R2 | 50-200 ms | All git objects, refs, packfiles | Permanent |

Session API (Zero-Wait Protocol)

For AI agents doing many operations in sequence, the Session API eliminates per-request auth overhead:

  1. POST /v1/session — auth once, get session ID
  2. All subsequent requests carry the X-Session-Id header — auth is validated by a Durable Object (~1 ms)
  3. DELETE /v1/session/:id — close session, flush pending writes
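As a sketch, the three steps above might look like this with curl. The base URL, the `COREGIT_KEY` variable, the auth header scheme, and the response shape are assumptions for illustration, not documented values:

```shell
# Hypothetical wrappers for the Session API flow.
# COREGIT_API and COREGIT_KEY are placeholders; consult the API
# reference for the real base URL, auth scheme, and response fields.
COREGIT_API="${COREGIT_API:-https://api.example.com}"

open_session() {   # 1. authenticate once; the session ID comes back in the body
  curl -sX POST "$COREGIT_API/v1/session" -H "Authorization: Bearer $COREGIT_KEY"
}
session_get() {    # 2. per-request auth via X-Session-Id (Durable Object, ~1 ms)
  curl -s "$COREGIT_API$2" -H "X-Session-Id: $1"
}
close_session() {  # 3. close the session and flush pending writes
  curl -sX DELETE "$COREGIT_API/v1/session/$1"
}
```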

Methodology

All benchmarks use the same pattern:

# Timing with millisecond precision
ms() { python3 -c "import time; print(int(time.time()*1000))"; }
t1=$(ms); <operation>; t2=$(ms); echo "$((t2-t1)) ms"

Benchmarks use gh api for GitHub and the cgt CLI for Coregit (create repo uses gh repo create). Both tools authenticate with API keys against private repos. Each operation runs 3 times; the median is reported.
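The timing one-liner above can be wrapped into a median-of-3 harness like the following. This is a sketch; `bench` is a helper name introduced here, not part of either CLI:

```shell
ms() { python3 -c "import time; print(int(time.time()*1000))"; }

# Run a command 3 times and print the median wall-clock time in ms,
# mirroring the methodology described above.
bench() {
  times=""
  for run in 1 2 3; do
    t1=$(ms); "$@" >/dev/null 2>&1; t2=$(ms)
    times="$times $((t2 - t1))"
  done
  # one duration per line, sorted; line 2 of 3 is the median
  echo "$times" | tr ' ' '\n' | grep -v '^$' | sort -n | sed -n '2p'
}

bench sleep 0.2   # prints the median of 3 runs, in ms
```

Swap `sleep 0.2` for any `gh api` or `cgt` invocation to reproduce a row from the tables above.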

For multi-file commits on GitHub, each blob is created with a separate POST /repos/:owner/:repo/git/blobs call, followed by tree creation, commit creation, and ref update — matching the minimum required API calls.

Reproduce

# GitHub: commit 10 files (13 API calls)
for i in $(seq 1 10); do
  gh api repos/OWNER/REPO/git/blobs -f content="file $i" -f encoding=utf-8
done
# + create tree + create commit + update ref
# (plus a get-HEAD call when the parent SHA is not already known)

# Coregit: commit 10 files (1 API call)
cgt commit my-repo -b main -m "add 10 files" \
  -f f1.ts:='1' -f f2.ts:='2' ... -f f10.ts:='10'
