Getting Started
How long does it take to get started?
15 minutes from signup to first insights. You can:
- Create a project (2 min)
- Add scenarios (3 min)
- Generate outputs (2 min)
- Rate outputs (5 min)
- Extract patterns (2 min)
Do I need to know how to code?
No. Sageloop is designed for Product Managers. You rate outputs based on your intuition. No coding, no technical knowledge required.
How many scenarios do I need?
Start with 15-30. This is enough to spot patterns without overwhelming yourself. If you’re testing narrow behavior (e.g., date parsing), 10-15 is okay; for complex behavior, 30-50 is ideal.
Rating & Feedback
How do I rate outputs?
Use the 5-star scale:
- 5 stars: Perfect
- 4 stars: Good
- 3 stars: Okay
- 2 stars: Problem
- 1 star: Unacceptable
Do I need to add feedback for every output?
Only for 1-2 star ratings. Feedback helps pattern extraction understand root causes. For 5-star outputs, feedback is optional.
What if I rate something wrong?
You can edit a rating anytime: click the pencil icon and choose a new value.
Can I have consistent standards with my team?
Yes. Have the team rate the same outputs independently, then discuss differences. This aligns your standards.
Pattern Extraction
What’s the minimum for extraction?
15 rated outputs. With fewer than that, you’ll get a low-confidence warning.
Why are my patterns low confidence?
Not enough data. Add 5-10 more scenarios, rate them, and re-run extraction. Low confidence means patterns might be noise, not signal.
What if no patterns are found?
Possible causes:
- All your ratings are 4-5 stars (no failures to cluster)
- Each failure is unique (no common root cause)
- Too few ratings (fewer than 15)
Can I trust suggested fixes?
Usually yes, but verify first. Review the extraction reasoning before applying fixes. The AI can hallucinate. Trust your judgment.
Team & Collaboration (Coming Soon)
Can I invite teammates?
Yes. Use the Share button in project settings. Team members can:
- Rate outputs
- Add feedback
- View insights
What if my team disagrees on ratings?
This is actually valuable! It reveals gaps between subjective and objective quality. Discuss the differences, align on standards, and document the final definition.
Who should rate outputs?
Ideally multiple people:
- PM (product knowledge)
- Designer (brand/tone)
- Support lead (customer perspective)
- Customer (actual user)
Scenarios
Can I edit scenarios?
Yes. Click the pencil icon and update the text.
What if I delete a scenario?
Its outputs and ratings are also deleted, and deleted scenarios can’t be recovered, so be careful.
Can I import scenarios?
Yes, three ways:
- One at a time (click “Add Scenario”)
- Bulk add (paste multiple, one per line)
- Import CSV (with a scenario_text column)
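A minimal sketch of what an importable CSV might look like and how a reader would pull scenarios out of it. The scenario_text column name comes from the answer above; the rows themselves are made-up examples, not a documented format.

```python
import csv
import io

# Example scenarios CSV: one scenario_text column, one scenario per row.
# The row contents are illustrative, not from Sageloop's docs.
csv_content = """scenario_text
User asks for a refund after 30 days
User pastes a date in DD/MM/YYYY format
User writes the request entirely in French
"""

# Parse it the way an importer might: one scenario per row.
reader = csv.DictReader(io.StringIO(csv_content))
scenarios = [row["scenario_text"] for row in reader]
print(scenarios)
```

If your scenarios contain commas or line breaks, wrap the cell in double quotes as usual for CSV.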
How do I organize 100+ scenarios?
Start with 15-30. Test them, get insights, iterate. Only add more scenarios when you identify new gaps.
Generation & Models
Can I regenerate specific scenarios?
Yes. Select the scenarios you want and click “Regenerate Selected”. Only those regenerate, which saves time.
How long does generation take?
30-60 seconds for 20 scenarios, depending on the model and API load.
Export
What can I export?
- Golden Examples: All 5-star outputs (Markdown or JSON)
- Test Suite: Pytest format for CI/CD
- Insights: Extracted patterns and recommendations
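The exact shape of the exported test suite isn’t shown here, so this is only a hypothetical sketch of what a pytest-style regression check over golden examples could look like. The JSON shape and the fake_model stand-in are assumptions.

```python
import json

# Hypothetical golden-examples export; this JSON shape is an assumption,
# not Sageloop's documented format.
golden_json = """[
  {"scenario": "User asks store hours",
   "expected": "We're open 9-5, Mon-Fri."}
]"""

def fake_model(scenario):
    # Stand-in for your real model call; replace with your own client.
    return "We're open 9-5, Mon-Fri."

# Pytest-style regression check: one assertion per golden example.
def test_golden_examples():
    for example in json.loads(golden_json):
        assert fake_model(example["scenario"]) == example["expected"]

test_golden_examples()  # pytest would collect this automatically in CI
```

In CI, a plain `pytest` invocation picks up any function named `test_*`, so a failing golden example fails the build.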
Can I use exports in CI/CD?
Yes. Export as pytest and integrate with your CI pipeline. Engineers run the exported tests to prevent regressions.
What format is best for sharing?
- For non-technical people: PDF or Markdown
- For engineers: JSON or pytest
- For stakeholders: Summary document + key metrics
Privacy & Security
Is my data private?
Yes. Projects are private by default. Only people you explicitly invite can see them.
Can I delete a project?
Yes, in project settings. Deletion is permanent.
Do you train on my data?
No. Your data is used only for your project.
Billing & Accounts
How do I change my plan?
Go to Billing and choose a new plan. Changes take effect immediately.
Troubleshooting
Generation is failing
Wait a few minutes and retry. Check that you have scenarios added.
Extraction is showing wrong patterns
Patterns might not be accurate if:
- Ratings are inconsistent
- Sample size is too small
- Each failure is unique
My success rate isn’t improving after fixes
The fix might not address the root cause. Try:
- Making larger prompt changes
- Being more specific in instructions
- Adding examples to the system prompt
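The last suggestion above can be sketched in code: embed a few of your 5-star outputs as few-shot examples inside the system prompt. The prompt text, the golden pair, and the OpenAI-style message dicts are all illustrative assumptions, not Sageloop output.

```python
# Base instructions plus few-shot examples drawn from 5-star outputs.
# Everything below is a made-up illustration of the technique.
base_prompt = "You are a support assistant. Be concise and polite."

golden_examples = [
    ("Where is my order?",
     "Sorry for the wait! Could you share your order number so I can check?"),
]

# Render each golden pair as an input/output example in the prompt.
few_shot = "\n\n".join(
    f"Example input: {q}\nExample output: {a}" for q, a in golden_examples
)

messages = [
    {"role": "system", "content": f"{base_prompt}\n\n{few_shot}"},
    {"role": "user", "content": "My package hasn't arrived."},
]
print(messages[0]["content"])
```

Re-rate a batch of outputs after a change like this: if the success rate still doesn’t move, the root cause likely lies elsewhere.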
Getting Help
Where’s the documentation?
You’re reading it! Check the sidebar for more guides.
How do I contact support?
Email [email protected].
How do I report a bug?
Email [email protected] with screenshots and details.
Still Have Questions?
- Email: [email protected]
- Check docs: Explore the guides and use cases