Themida GitHub
OPEN SOURCE · AGPL-3.0

Compliance findings
at the line level.

Themida reads your code and tells you which line triggers a GDPR or EU AI Act issue — with the legal article, severity, and a working fix you can paste into a PR.

Self-host in 5 minutes
No signup, no telemetry
Bring your own LLM key
Critical app/data/user-dao.js:41

Password hashed with broken MD5 algorithm

var passwordHash = crypto
  .createHash('md5')
  .update(userPassword)
  .digest('hex');

MD5 has been broken since 2004. Collisions take seconds.

Legal reference
GDPR Art. 5(1)(f), 32(1)(a)
Maximum fine
€20M or 4% revenue

Suggested fix

import bcrypt from 'bcrypt';
const passwordHash = await bcrypt.hash(userPassword, 12);
~60s
A typical scan
10+
Rules shipping
3
LLM passes

The pipeline

Three passes, one report.

A small model goes hunting first, a larger one reads what it found, and a verifier drops the hallucinations. The result: high signal, low false positives, and a cost that lands somewhere between a coffee and not-quite-a-coffee.

01

Recon

A fast, cheap model walks the file tree and surfaces ~15 files most likely to contain compliance-sensitive code.

files scanned 487
flagged for deep scan 15
02

Deep scan

A larger model reads each suspect file line by line with the full rule pack in context. Findings include file, line, snippet, and a working fix.

Sample finding
CRITICAL · MD5 password hash
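A finding like the one above can be pictured as a small structured record. The shape below is illustrative only — field names are hypothetical, not the project's actual types (those live in the repo) — but it shows the pieces each finding carries: file, line, severity, legal references, and a paste-ready fix.

```typescript
// Hypothetical shape of a deep-scan finding; the real schema lives in the repo.
interface Finding {
  file: string;        // path relative to the scanned repo
  line: number;        // 1-indexed line of the violation
  severity: 'critical' | 'high' | 'medium' | 'low';
  title: string;       // short human-readable summary
  snippet: string;     // the offending code
  legalRefs: string[]; // e.g. GDPR articles
  fix: string;         // paste-ready replacement code
}

const sample: Finding = {
  file: 'app/data/user-dao.js',
  line: 41,
  severity: 'critical',
  title: 'Password hashed with broken MD5 algorithm',
  snippet: "crypto.createHash('md5')",
  legalRefs: ['GDPR Art. 5(1)(f)', 'GDPR Art. 32(1)(a)'],
  fix: 'await bcrypt.hash(userPassword, 12)',
};

console.log(`${sample.severity.toUpperCase()} ${sample.file}:${sample.line}`);
// → CRITICAL app/data/user-dao.js:41
```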
03

Verify

A final pass drops paths the model hallucinated and findings that are already mitigated nearby in the same file. Cuts false positives sharply.

raw findings 23
after verification 12
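End to end, the three passes compose into one function. The sketch below uses hypothetical function names and stand-in logic (a regex filter instead of a model call) purely to show the data flow — recon narrows, deep scan expands, verify prunes:

```typescript
// Hypothetical three-pass pipeline sketch. recon/deepScan/verify stand in
// for the real model-backed implementations.
type Finding = { file: string; line: number; title: string };

// Pass 1: a fast, cheap model narrows the file tree to the suspects.
// (Here: a trivial filename heuristic stands in for the model.)
function recon(files: string[]): string[] {
  return files.filter((f) => /user|auth|payment|export/.test(f));
}

// Pass 2: a larger model reads each suspect file with the rule pack in context.
function deepScan(suspects: string[]): Finding[] {
  return suspects.map((file) => ({ file, line: 41, title: 'MD5 password hash' }));
}

// Pass 3: drop hallucinated paths (and, in the real tool, findings
// already mitigated elsewhere in the same file).
function verify(findings: Finding[], realPaths: Set<string>): Finding[] {
  return findings.filter((f) => realPaths.has(f.file));
}

const files = ['app/data/user-dao.js', 'docs/README.md'];
const report = verify(deepScan(recon(files)), new Set(files));
console.log(report.length); // → 1
```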

Bring your own LLM

Any model.
Any provider.

The scanner ships adapters for Anthropic and any OpenAI-compatible endpoint, so you can plug in whatever backend you trust — including a local model. One env var switches the whole stack.

export LLM_PROVIDER=openai
export OPENAI_API_KEY=sk-…
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
# done.
Anthropic
Native adapter
OpenAI
Native adapter
OpenRouter
OpenAI-compatible
Groq
OpenAI-compatible
Together
OpenAI-compatible
vLLM · llama.cpp · Ollama
Local model · €0

Rule packs

Plain TypeScript.
Open for contribution.

Every rule pack is a single TypeScript file with a readable schema. Adding a framework is one file plus tests — and the kind of PR that's easiest to merge.

Browse src/lib/rules
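A rule pack in that spirit might look like the following. The field names here are illustrative, not the actual schema — browse src/lib/rules for the real thing — but they show the idea: one exported array of plain objects, each pairing a detectable pattern with its legal reference and a fix hint.

```typescript
// Hypothetical rule-pack shape, modeled on the description above;
// the actual schema lives in src/lib/rules.
interface Rule {
  id: string;
  severity: 'critical' | 'high' | 'medium' | 'low';
  legalRefs: string[];
  description: string; // what the deep-scan model should look for
  fixHint: string;     // guidance for the suggested fix
}

export const gdprRules: Rule[] = [
  {
    id: 'gdpr-weak-password-hash',
    severity: 'critical',
    legalRefs: ['GDPR Art. 5(1)(f)', 'GDPR Art. 32(1)(a)'],
    description: 'Passwords hashed with MD5 or SHA-1 instead of bcrypt/scrypt/argon2.',
    fixHint: 'Use bcrypt with a cost factor of at least 12.',
  },
];

console.log(gdprRules.map((r) => r.id).join(', '));
// → gdpr-weak-password-hash
```

Because each pack is plain data, a new framework really is one file plus tests: export another array, add it to the registry, done.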

Shipped

  • GDPR
  • EU AI Act

Wanted · contributions welcome

These don't ship yet — each one is a single TypeScript file away. Open an issue if you'd like to take one on.

  • HIPAA
  • SOC 2
  • ISO 27001
  • OWASP Top 10
  • PCI DSS

Run it yourself

Five minutes
to your first finding.

You bring your own LLM key, you control the API contract, you keep the findings. Files are sent to your provider under your key — nothing else leaves your laptop.

# 1. clone + install
git clone https://github.com/Nikolaospet/themida
cd themida
pnpm install

# 2. configure your LLM provider
cp .env.example .env.local
# edit .env.local — Anthropic, OpenAI,
# OpenRouter, Groq, or a local model

# 3. scan
pnpm dev:scan

One more thing

This is a personal project.

Themida isn't a company. There's no funding, no managed version, no waitlist. It's a hobby I'm building in the open because I find the problem interesting and I think devs are tired of compliance tools that don't actually read code. Contributions — especially rule packs and bug reports — are warmly welcomed.