Untitled Excavation
The Fortune Cookie Fit Analyzer
The fit analyzer lied. Not in the fun way where it calls me the reincarnation of Grace Hopper. In the cheap carnival way. It looked at a bad match and told me to smile harder.
I was in my office, late, building an AI resume site out of four JSON files and spite. I had just done the good work first: mined two resume PDFs, pulled hard metrics from my portfolio valuation docs, built a grounded Q&A layer, even split the presentation into public-safe and AI-forward variants. Then the little crystal ball widget showed up and started giving horoscope-tier hiring advice.
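For flavor, here is roughly what a grounded Q&A layer like that might look like. The file names resume.json and qa_knowledge_base.json come from later in this post; the field shapes, the `answerGrounded` helper, and the sample entry are my guesses, not the site's actual schema.

```js
// Hypothetical shape of the grounded Q&A layer. Field names are illustrative.
const qaKnowledgeBase = {
  entries: [
    {
      question: "What hard metrics back up the portfolio work?",
      answer: "Metrics pulled from the portfolio valuation docs.", // illustrative text
      citations: ["resume.json#experience[0]"], // every answer points at evidence
    },
  ],
};

// A grounded answerer looks up a known entry and returns null on a miss
// instead of improvising an answer.
function answerGrounded(question, kb) {
  return kb.entries.find((e) => e.question === question) ?? null;
}
```

The point of the `citations` array is that the front end can render a "where this came from" link next to every answer, which is what makes the Q&A modal high-trust in the table below.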
“sadly. the fit analyzer is terrible, and I don’t have enough credits to fix it now.”
That’s the whole post. That’s the bug report. Everything else is theology.
My first wrong theory was prompt quality. Of course it was. Maybe the wording is too loose. Maybe it needs more guardrails. Maybe if I threaten it with a rubric and a wooden spoon it’ll behave. I kept thinking I could tune a probabilistic toy into a hiring instrument with one more pass.
Nope. The issue wasn’t tone. It was architecture. I built a gorgeous front end with grounded evidence and then let the highest-trust feature run on mushy scoring logic. That’s like safety-wiring the whole motorcycle and forgetting to tighten the front axle. Looks disciplined. Dies stupid.
So I cut it.
Clinical fix. No drama. The kind Garret likes.
```js
// v1 rescue: trust-preserving downgrade
if (fitAnalyzer.isLowQuality || credits.exhausted) {
  hide("Fit Analyzer");
  show("How I Evaluate Fit");
  showExamples([
    "Strong Fit: platform/reliability/TPM",
    "Not a Fit: consumer Head of Product",
  ]);
  keep("Ask AI About Me", { grounded: true, citations: true });
}
```
And when I have credits again, the replacement is deterministic first. fit_rules from resume.json. Pattern matching from qa_knowledge_base.json. Allowed labels only. Evidence refs on every strong claim. If the data isn’t there, the answer is: “I don’t have evidence for that in this profile.” Boring. Good. That’s how you keep trust.
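A minimal sketch of what deterministic-first could mean here. The rule shapes, `ALLOWED_LABELS`, and `analyzeFit` are all my invention; only the idea (rules from resume.json, allowed labels only, evidence refs, refuse when there's no evidence) comes from the paragraph above.

```js
// Labels the UI is allowed to render. Nothing outside this list ships.
const ALLOWED_LABELS = ["Strong Fit", "Possible Fit", "Not a Fit"];

// Stand-in for fit_rules that would be loaded from resume.json.
const fitRules = [
  {
    label: "Strong Fit",
    pattern: /platform|reliability|tpm/i,
    evidence: ["resume.json#experience[0]"], // refs backing the claim
  },
  {
    label: "Not a Fit",
    pattern: /consumer.*head of product/i,
    evidence: ["resume.json#summary"],
  },
];

// Deterministic first: pattern match against the rules, return the first
// hit with its evidence refs, and refuse cleanly when nothing matches.
function analyzeFit(jobDescription) {
  for (const rule of fitRules) {
    if (!ALLOWED_LABELS.includes(rule.label)) continue; // allowed labels only
    if (rule.pattern.test(jobDescription)) {
      return { label: rule.label, evidence: rule.evidence };
    }
  }
  return {
    label: null,
    evidence: [],
    answer: "I don't have evidence for that in this profile.",
  };
}
```

No model call anywhere in the hot path. An LLM, if it shows up at all, only gets to rephrase an answer the rules already decided, never to invent a verdict.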
Here’s the part that actually matters.
| Thing | Status | Trust impact |
|---|---|---|
| Resume data model | Grounded JSON | High |
| Ask AI modal | Grounded Q/A + citations | High |
| Fit analyzer (LLM-ish mush) | Bad | Catastrophic |
People think AI products fail because the model isn’t smart enough. Sometimes that’s true. Sometimes the model is fine and the product manager in your own skull is being greedy. Did I need the fit analyzer to prove the idea worked? No. I needed the evidence layer, the AI context on experience, and the honesty to say “not built yet.”
One pop culture note and I’ll go home: this thing felt like a Dark Souls boss with one HP left, and I still almost died because I got cute.
Bad fit analyzers are worse than no fit analyzer. The scar is knowing when to cut the shiny part before it poisons the whole build.