Week 6/40 — Making AI Features Feel Real (Caching + Derived Data)

Tech stack (this week)

  • FastAPI backend + SQLite

  • Frontend: React + TypeScript

  • New table: cached/derived analysis results

  • Hashing + upserts + cache validation

New topic: “AI engineering = state + data flow” (not just inference).

Why this week?

I wanted the analysis feature to behave like a real product: analyze once, reuse many times, survive restarts.

What I shipped

  • Persist analysis results instead of recomputing every time

  • Decide “recompute vs cached response” using hashing

  • Keep frontend and backend contracts aligned as schemas evolve
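One way to keep those contracts aligned is to make the cached/fresh distinction part of the response model itself. A minimal sketch, assuming pydantic (which FastAPI already depends on); the field names here are illustrative, not the post's actual schema:

```python
# Hypothetical response contract for POST /analyze.
# The `cached` flag is what lets the UI render "(cached)" vs "(fresh)".
from pydantic import BaseModel


class AnalysisResult(BaseModel):
    entry_id: int
    text_hash: str   # hash of the analyzed input text
    result: dict     # the analysis payload itself
    cached: bool     # True if served from the analysis_results table


r = AnalysisResult(
    entry_id=1, text_hash="ab12", result={"sentiment": "ok"}, cached=True
)
```

Mirroring this model as a TypeScript type on the frontend means a schema change breaks loudly in one place instead of silently in two.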

The One Feature Rule

One feature: analysis caching + persistence.
Not doing: real ML model, evaluation harness, user feedback loop.

What I learned

The hard part wasn’t the analysis logic — it was designing data flow so frontend/backend/storage stay in sync.

Follow along (code it yourself): This week’s task

Task: Make the analysis feel like a product feature by caching it.

  1. Add a new table analysis_results with:

    • entry_id

    • text_hash

    • result_json

    • created_at

  2. On POST /analyze:

    • compute a hash of the input text

    • if the latest cached hash matches → return cached result

    • else compute + store + return

  3. Add a UI hint: “(cached)” vs “(fresh)”.

Success criteria: Analyze the same text twice → second time is returned from cache.
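The three steps above can be sketched with stdlib-only pieces (sqlite3 + hashlib). This is an assumption-laden sketch, not the post's code: analyze() is a hypothetical placeholder for the real analysis, and in the actual app this logic would live inside the FastAPI POST /analyze handler:

```python
import hashlib
import json
import sqlite3

conn = sqlite3.connect(":memory:")  # the real app would use a file-backed DB
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS analysis_results (
        entry_id    INTEGER,
        text_hash   TEXT,
        result_json TEXT,
        created_at  TEXT DEFAULT CURRENT_TIMESTAMP
    )
    """
)


def text_hash(text: str) -> str:
    # Stable across processes and restarts, unlike Python's built-in hash()
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def analyze(text: str) -> dict:
    # Hypothetical placeholder for the real analysis logic
    return {"length": len(text)}


def analyze_with_cache(entry_id: int, text: str) -> tuple[dict, bool]:
    """Return (result, cached) — cached=True if served from the table."""
    h = text_hash(text)
    row = conn.execute(
        "SELECT result_json FROM analysis_results "
        "WHERE entry_id = ? AND text_hash = ? "
        "ORDER BY created_at DESC LIMIT 1",
        (entry_id, h),
    ).fetchone()
    if row:
        return json.loads(row[0]), True  # cache hit: skip recompute
    result = analyze(text)
    conn.execute(
        "INSERT INTO analysis_results (entry_id, text_hash, result_json) "
        "VALUES (?, ?, ?)",
        (entry_id, h, json.dumps(result)),
    )
    conn.commit()
    return result, False  # fresh: computed and stored
```

Calling analyze_with_cache twice with the same text should return cached=False the first time and cached=True the second, which is exactly the success criterion.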

Extra help & hints

  • The hash doesn’t need to be fancy: a stable string hash (e.g. SHA-256 of the text) is enough for v0. Avoid Python’s built-in hash(), which is salted per process and changes between runs.

  • Cache rule: “Same input → same output → don’t recompute.”

  • When debugging caching: log which path you took (cached vs recompute).
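To make the “stable hash” hint concrete, here is a small sketch: hashlib gives the same digest on every run and every machine, which is what lets a stored hash survive restarts (the built-in hash() does not, because it is salted per process):

```python
import hashlib


def stable_hash(text: str) -> str:
    # SHA-256 hex digest: deterministic across processes, machines, restarts
    return hashlib.sha256(text.encode("utf-8")).hexdigest()
```

stable_hash("same text") returns the same 64-character hex string today, tomorrow, and after a server restart, so it is safe to persist in the analysis_results table.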
