A knowledge base playbook that keeps AI accurate
GoFER Team
- Knowledge Base
- AI
- Best Practices
An AI support tool is only as good as the answers you approve. The playbook below is what we recommend to every new GoFER workspace—whether you sell software, run a clinic, or operate an agency.
Principle 1: Source of truth beats clever prompts
Long system prompts cannot fix stale pricing. Start with artifacts you already maintain:
- FAQ page and pricing table
- PDF spec sheets or service menus
- Onboarding emails you send to every customer
GoFER can ingest text, extract from PDFs, and crawl public URLs on a schedule so updates propagate without redeploying code.
Tip: Crawl your pricing page weekly; crawl static “About” content monthly.
Principle 2: Exact match for high-risk topics
Some questions deserve one canonical answer—refund policy, medical disclaimers, warranty terms.
Use exact-match Q&A pairs for:
- Legal or compliance language you cannot paraphrase
- Promotions with end dates
- Integration setup steps where order matters
Let the model improvise on friendly small talk, not on money-back guarantees.
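The exact-match rule is easy to reason about in code. Here is a minimal sketch, assuming a simple normalized-lookup-then-fallback flow; the question set, the `normalize` helper, and the `ask_model` fallback are illustrative, not GoFER's actual API.

```python
# Canonical answers that must never be paraphrased by the model.
# (Example policy text is a placeholder.)
EXACT_MATCH = {
    "what is your refund policy?": (
        "Refunds are available within 30 days of purchase. "
        "Contact billing to start a return."
    ),
}

def normalize(question: str) -> str:
    """Lowercase and strip trailing punctuation so near-identical
    phrasings hit the same canonical answer."""
    return question.strip().lower().rstrip("?!. ") + "?"

def answer(question: str, ask_model) -> str:
    key = normalize(question)
    if key in EXACT_MATCH:
        return EXACT_MATCH[key]   # canonical, verbatim
    return ask_model(question)    # model improvises only off the high-risk list
```

The point of the sketch: high-risk topics short-circuit *before* the model is ever consulted, so refund language can't drift.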
Principle 3: Review “missed questions” like stand-up
Once a week, open the missed questions report (questions GoFER could not answer confidently):
- Add or fix the knowledge entry.
- Mark whether it should escalate by default next time.
- If the same gap appears three weeks in a row, update your website copy—not only the bot.
This loop is how accuracy compounds without hiring a prompt engineer.
Principle 4: Tone is a product decision
Pick a tone preset—Professional, Friendly, Casual—and add two bullets your team cares about:
- “Never promise same-day shipping unless inventory confirms.”
- “Always offer a human for billing disputes.”
Tone guidelines belong beside facts, not in a separate doc nobody reads.
Principle 5: Crawl + human review for big launches
Launching a new product line?
- Publish the page.
- Trigger a crawl or paste the release notes.
- Run five test questions your sales team expects.
- Only then announce “Ask our AI” on social.
Skipping step three is how “hallucination” headlines get written.
Sample knowledge map
| Topic | Source | Refresh cadence |
|---|---|---|
| Pricing & plans | /pricing crawl | Weekly |
| Integrations | Notion export / docs site | On change |
| Returns & refunds | Exact-match FAQ | On policy change |
| Small talk & greetings | Tone preset only | Rare |
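If you want a cron job to act on a map like this, it can be encoded as plain configuration. A hypothetical sketch mirroring the table above; the schema and the scheduling helper are illustrative, not a GoFER feature:

```python
from datetime import date, timedelta

# cadence_days=None means event-driven: refresh on change, never on a timer.
KNOWLEDGE_MAP = {
    "pricing": {"source": "/pricing crawl", "cadence_days": 7},
    "integrations": {"source": "docs site", "cadence_days": None},
    "refunds": {"source": "exact-match FAQ", "cadence_days": None},
}

def due_for_recrawl(last_crawled: dict, today: date) -> list:
    """Return topics whose scheduled cadence has elapsed."""
    due = []
    for topic, cfg in KNOWLEDGE_MAP.items():
        days = cfg["cadence_days"]
        if days is None:
            continue  # event-driven sources are handled by webhooks/edits
        if today - last_crawled[topic] >= timedelta(days=days):
            due.append(topic)
    return due
```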
Measuring quality without a data science team
Track week over week:
- Resolution rate — chats closed without human intervention
- Escalation rate — should fall as knowledge improves
- CSAT (if enabled) — qualitative signal when numbers lie
A rising resolution rate with flat CSAT usually means answers are fast but not empathetic—tune tone, not facts.
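These two rates are back-of-the-envelope math on a chat export. A sketch, assuming each chat record carries boolean `resolved` and `escalated` flags (an assumed export shape, not a documented GoFER schema):

```python
def weekly_metrics(chats):
    """chats: list of dicts with boolean 'resolved' and 'escalated'.
    Returns resolution and escalation rates as percentages."""
    total = len(chats)
    if total == 0:
        return {"resolution_rate": 0.0, "escalation_rate": 0.0}
    resolved = sum(c["resolved"] for c in chats)
    escalated = sum(c["escalated"] for c in chats)
    return {
        "resolution_rate": round(100 * resolved / total, 1),
        "escalation_rate": round(100 * escalated / total, 1),
    }

this_week = [
    {"resolved": True, "escalated": False},
    {"resolved": True, "escalated": False},
    {"resolved": False, "escalated": True},
    {"resolved": False, "escalated": False},
]
print(weekly_metrics(this_week))
# {'resolution_rate': 50.0, 'escalation_rate': 25.0}
```

Run it once a week and paste the two numbers into a spreadsheet; a trend line is all the dashboard you need.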
Your 30-minute monthly ritual
Block thirty minutes. Fix the top five missed questions. Delete outdated promos. Re-crawl pricing if anything changed.
That is the entire secret. No magic model swap required.
Put the playbook to work
Import your FAQ, connect your site crawl, and send GoFER the questions your team is tired of typing. When you need higher volume or scheduled crawls, compare plans to find the tier that includes them.