Week 6/40 — Making AI Features Feel Real (Caching + Derived Data)
Tech stack (this week)
FastAPI backend + SQLite
Frontend: React + TypeScript
New table: cached/derived analysis results
Hashing + upserts + cache validation (sketched below)
New topic: “AI engineering = state + data flow” (not just inference).
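To make “derived data” concrete, here is roughly what that table and the upsert could look like in SQLite. The column names match the task further down; the unique constraint and the helper name upsert_analysis are just choices for this sketch, not the one true layout.

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS analysis_results (
    entry_id    INTEGER NOT NULL,
    text_hash   TEXT    NOT NULL,
    result_json TEXT    NOT NULL,
    created_at  TEXT    NOT NULL DEFAULT (datetime('now')),
    UNIQUE (entry_id, text_hash)   -- one cached result per (entry, exact input)
);
"""

def upsert_analysis(conn: sqlite3.Connection, entry_id: int,
                    text_hash: str, result_json: str) -> None:
    # "Upsert": insert a fresh result, or overwrite the existing row for the
    # same (entry_id, text_hash) so re-analysis never duplicates cache rows.
    conn.execute(
        """
        INSERT INTO analysis_results (entry_id, text_hash, result_json)
        VALUES (?, ?, ?)
        ON CONFLICT (entry_id, text_hash) DO UPDATE SET
            result_json = excluded.result_json,
            created_at  = datetime('now')
        """,
        (entry_id, text_hash, result_json),
    )
    conn.commit()
```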
Why this week?
I wanted the analysis feature to behave like a real product: analyze once, reuse many times, survive restarts.
What I shipped
Persist analysis results instead of recomputing every time
Decide “recompute vs cached response” using hashing
Keep frontend and backend contracts aligned as schemas evolve
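One way to keep that last point honest is to make “cached vs fresh” part of the API response itself instead of something the frontend guesses. A minimal sketch, assuming a Pydantic response model on the FastAPI side; the field names are illustrative, and the TypeScript interface on the frontend simply mirrors them.

```python
from pydantic import BaseModel

class AnalysisResponse(BaseModel):
    entry_id: int
    result: dict       # the analysis payload (stored as result_json in SQLite)
    cached: bool       # True if served from analysis_results, False if freshly computed
    text_hash: str     # which version of the input text this result belongs to
```

When the schema evolves, both sides then break loudly at the model/interface boundary instead of silently drifting apart.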
The One Feature Rule
One feature: analysis caching + persistence.
Not doing: real ML model, evaluation harness, user feedback loop.
What I learned
The hard part wasn’t the analysis logic — it was designing data flow so frontend/backend/storage stay in sync.
Follow along (code it yourself): This week’s task
Task: Make the analysis feel like a product feature by caching it.
Add a new table
analysis_results with: entry_id, text_hash, result_json, created_at
On POST /analyze:
compute a hash of the input text
if the latest cached hash matches → return cached result
else compute + store + return
Add a UI hint: “(cached)” vs “(fresh)”.
Success criteria: analyze the same text twice → the second response comes from the cache. (A rough backend sketch of this flow follows.)
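If you want a starting point for the backend half, here is a sketch of the cached-vs-fresh branch. It assumes the analysis_results table and upsert_analysis() helper sketched earlier, the AnalysisResponse model from above, a hypothetical run_analysis() that does the actual analysis work, and a SQLite file at app.db; adjust the names to your own project and wire the route into your existing app.

```python
import hashlib
import json
import logging
import sqlite3

from fastapi import FastAPI
from pydantic import BaseModel

logger = logging.getLogger(__name__)
app = FastAPI()
DB_PATH = "app.db"   # assumption: wherever your SQLite file already lives

class AnalyzeRequest(BaseModel):
    entry_id: int
    text: str

def text_hash(text: str) -> str:
    # Stable content hash: same text -> same digest, even across restarts.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

@app.post("/analyze", response_model=AnalysisResponse)
def analyze(req: AnalyzeRequest) -> AnalysisResponse:
    h = text_hash(req.text)
    conn = sqlite3.connect(DB_PATH)
    try:
        # Look up the latest cached result for this entry.
        row = conn.execute(
            """
            SELECT text_hash, result_json FROM analysis_results
            WHERE entry_id = ?
            ORDER BY created_at DESC
            LIMIT 1
            """,
            (req.entry_id,),
        ).fetchone()

        # Latest cached hash matches the current input -> serve from cache.
        if row is not None and row[0] == h:
            logger.info("analyze entry_id=%s path=cached", req.entry_id)
            return AnalysisResponse(entry_id=req.entry_id, result=json.loads(row[1]),
                                    cached=True, text_hash=h)

        # Otherwise: compute, store (upsert), return fresh.
        logger.info("analyze entry_id=%s path=recompute", req.entry_id)
        result = run_analysis(req.text)   # hypothetical: your existing analysis logic
        upsert_analysis(conn, req.entry_id, h, json.dumps(result))
        return AnalysisResponse(entry_id=req.entry_id, result=result,
                                cached=False, text_hash=h)
    finally:
        conn.close()
```

The decision is made in exactly one place, and the two log lines tell you which branch actually ran when the “(cached)” hint in the UI looks wrong.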
Extra help & hints
The hash doesn’t need to be fancy: a stable string hash is enough for v0 (more on “stable” below).
Cache rule: “Same input → same output → don’t recompute.”
When debugging caching: log which path you took (cached vs recompute).
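One caveat on “stable”: Python’s built-in hash() is salted per process, so it is not a stable cache key across restarts, while hashlib digests are deterministic. A tiny illustration:

```python
import hashlib

text = "same input text"

# Built-in hash() for strings is salted per process (PYTHONHASHSEED), so this
# value can change every time the server restarts. Don't persist it as a cache key.
unstable = hash(text)

# hashlib digests are deterministic: same text -> same hex string on every run,
# which is exactly what "analyze once, reuse many times, survive restarts" needs.
stable = hashlib.sha256(text.encode("utf-8")).hexdigest()
print(stable)
```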