Open Source · MIT License

Turn documents
into knowledge
that sticks

Upload any PDF, Word doc, or text file. CardForge reads it, understands it, and generates study-ready flashcards — powered by Anthropic Claude or your local Ollama model.

3
File formats
30
Cards per run
2
AI backends
Data Structures medium
Q03

What is the time complexity of searching in a balanced binary search tree?

tap to reveal answer ↓
Data Structures extended
Answer

O(log n) — a balanced BST halves the search space at each step. This holds for AVL trees and Red-Black trees. Worst case degrades to O(n) for unbalanced trees.

tap to flip back ↑
How it works

Three steps.
One result.

No setup required beyond an API key. Drop your files and let the AI do the reading.

01
📄
Upload your documents

Drag and drop one or more files — PDF, DOCX, or TXT. Multiple documents are combined and analysed together, so cross-document questions are possible.

02
⚙️
Configure generation

Choose your difficulty level, how many cards you want (up to 30), and whether to extend the scope with Ollama's local reasoning for deeper questions.

03
🧠
Study with confidence

Review all generated cards in grid or list view, filter Core vs Extended, then jump into fullscreen Study Mode with spaced-repetition scoring.
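The three steps above boil down to one request to the backend. As a sketch, here is how the generation options might be assembled client-side. The field names and the 30-card clamp location are assumptions for illustration, not CardForge's actual API:

```python
# Hypothetical request builder for the generation step.
# NOTE: field names here are assumptions; check the real backend routes.

def build_generation_request(difficulty: str, num_cards: int, use_ollama: bool) -> dict:
    """Assemble the options a generation request might carry."""
    if difficulty not in {"easy", "medium", "hard"}:
        raise ValueError(f"unknown difficulty: {difficulty}")
    return {
        "difficulty": difficulty,
        "num_cards": min(num_cards, 30),  # the UI caps a run at 30 cards
        "local_ai": use_ollama,           # toggles the Ollama backend
    }
```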

Features

Everything you need
nothing you don't

☁️
Cloud AI via Anthropic

Connects to Anthropic's claude-opus-4-5 model for high-quality, nuanced question generation. Understands context, tone, and terminology from your specific documents.

🖥️
Local AI via Ollama

Run llama3, mistral, phi3, or any Ollama-compatible model entirely on your machine. No data leaves your device. Status indicator confirms detection.

🔮
Extended question scope

When Local AI is on, Ollama generates inference, "what-if", application, and critical-thinking questions that go beyond the raw document text.

Cloud AI · claude-opus-4-5
Recall · Definition · Relationship · Analysis
Local AI · llama3
Recall · Definition · Relationship · Analysis · Inference · Application · Critical thinking · What-if
🎓
Session Complete
8
Known
2
Review
80%
Score
🃏
3D flip cards

Every card flips with a smooth CSS perspective animation. Browse all generated cards in a responsive grid or list view.

📚
Fullscreen Study Mode

One card at a time, fullscreen, distraction-free. Mark each card as "Got It" or "Review Again" and get a scored summary at the end.
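The scored summary is simple arithmetic over your "Got It" and "Review Again" marks, as in the 8-known, 2-review, 80% session shown above. A sketch (the function name is ours):

```python
def session_score(known: int, review: int) -> int:
    """Percentage of cards marked "Got It" in a study session."""
    total = known + review
    if total == 0:
        return 0  # empty session: nothing to score
    return round(100 * known / total)

# session_score(8, 2) -> 80, matching the sample session summary
```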

🔍
Core vs Extended filtering

Filter cards by type to focus on document-based core questions or Ollama-generated extended reasoning questions independently.

Difficulty levels

Study at
your pace

Choose how deep the AI goes. Each level changes how questions are framed — from surface recall all the way to synthesis and evaluation.

Easy
Recall & Definitions

Foundational questions that test basic recognition of facts, terms, and concepts from the document.

What is X?
Define Y
In which year did Z happen?
Medium
Understanding & Relationships

Questions that require comprehension, explanation, and understanding how concepts relate to each other.

Explain how X works
What is the relationship between X and Y?
Why does Z occur?
Hard
Analysis & Synthesis

Deep questions that push you to analyse, evaluate, compare edge cases, and apply knowledge to new scenarios.

Compare and contrast X vs Y
What are the limitations of Z?
How would you apply X in context Y?
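One plausible way to wire these levels into generation is a difficulty-to-framing map. The wording below is illustrative; the actual prompt text CardForge sends to the model is not shown here:

```python
# Illustrative mapping from difficulty level to question framing.
# The real prompts CardForge uses are an implementation detail.
PROMPT_FRAMING = {
    "easy": "Ask recall and definition questions using terms from the document.",
    "medium": "Ask questions about how concepts relate and why they occur.",
    "hard": "Ask questions that compare, probe limitations, and apply ideas to new scenarios.",
}

def framing_for(level: str) -> str:
    """Look up the question framing for a difficulty level."""
    return PROMPT_FRAMING[level.lower()]
```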
Tech stack

Built with the
right tools

Frontend
React 18
TypeScript · No UI lib
Backend
FastAPI
Python 3.9+ · Uvicorn
Cloud AI
Anthropic Claude
claude-opus-4-5
Local AI
Ollama
llama3 · mistral · phi3
PDF Parsing
pdfplumber
Text + table extraction
DOCX Parsing
python-docx
Word document support
HTTP Client
httpx
Async · Ollama bridge
Deploy
Docker Compose
One-command setup
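The parsing layer above can be sketched as a single dispatcher. `pdfplumber.open`/`extract_text()` and python-docx's `Document` are those libraries' real entry points; the dispatcher itself is our illustration, not CardForge's code:

```python
from pathlib import Path

def extract_text(path: str) -> str:
    """Pull plain text out of a PDF, DOCX, or TXT file."""
    suffix = Path(path).suffix.lower()
    if suffix == ".pdf":
        import pdfplumber  # handles text + table extraction
        with pdfplumber.open(path) as pdf:
            return "\n".join(page.extract_text() or "" for page in pdf.pages)
    if suffix == ".docx":
        from docx import Document  # python-docx
        return "\n".join(p.text for p in Document(path).paragraphs)
    if suffix == ".txt":
        return Path(path).read_text(encoding="utf-8")
    raise ValueError(f"unsupported format: {suffix}")
```

The lazy imports keep TXT handling dependency-free; PDF and DOCX support only require their parser when those formats actually show up.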
Quick start

Up and running
in minutes

Cloud AI Setup
# 1. Clone and enter
git clone https://github.com/likhith1542/cardforge
cd cardforge/backend

# 2. Install Python deps
pip install -r requirements.txt

# 3. Set your API key
export ANTHROPIC_API_KEY=sk-ant-...

# 4. Start backend + frontend
uvicorn main:app --reload --port 8000
# (new terminal)
cd ../frontend && npm install && npm start
Local AI Setup (Ollama)
# 1. Install Ollama
#    → https://ollama.ai

# 2. Pull a model
ollama pull llama3
# or: mistral, phi3, gemma

# 3. Start Ollama server
ollama serve

# 4. Toggle "Local AI" in the UI
#    Enter your model name
#    Green dot = detected ✓

Ready to study
smarter?

Open source, self-hostable, privacy-first when you want it. Clone the repo and have your first flashcards in under five minutes.