# Quick Start Guide

Get Unfold CI monitoring your tests in under 5 minutes. No infrastructure setup required!
## Prerequisites

- ✅ GitHub repository with tests
- ✅ Admin access to the repository
- ✅ GitHub Actions enabled (or willingness to enable them)
## Step 1: Install the GitHub App (1 minute)

1. **Click "Install"**
   - Select your organization or personal account
   - Choose repositories:
     - All repositories (recommended), OR
     - Only select repositories
2. **Authorize**
   - Review the requested permissions
   - Click "Install & Authorize"
   - You'll be redirected to the dashboard

✅ Done! Your repositories are now synced to Unfold CI.
## Step 2: Generate an API Key (1 minute)

1. **Log in to the Dashboard**
   - Visit app.unfoldci.com
   - Click "Sign in with GitHub"
2. **Navigate to Settings**
   - Click your profile icon (top right)
   - Select "Settings" → "API Keys"
3. **Generate a New Key**
   - Click "Generate New API Key"
   - Name it: "Production Workflow"
   - Scope: "Organization-wide" (or repository-specific)
   - Click "Create"
4. **Copy Your Key**

   ```
   unfold_sk_abc123def456...
   ```

   ⚠️ Save this immediately! You won't see it again.

✅ Done! You have your authentication key.
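Before wiring the key into CI, a quick local sanity check can catch a truncated paste. This is an illustrative sketch only, assuming nothing beyond the `unfold_sk_` prefix shown above; the helper name is hypothetical and does not validate the key against the API:

```python
def looks_like_unfold_key(key: str) -> bool:
    """Rough sanity check for an Unfold CI API key (assumed prefix).

    Only catches obvious paste errors, such as a missing prefix or an
    empty key body; it does NOT verify the key with the service.
    """
    key = key.strip()
    return key.startswith("unfold_sk_") and len(key) > len("unfold_sk_")

print(looks_like_unfold_key("unfold_sk_abc123def456"))  # True
print(looks_like_unfold_key("sk_abc123"))               # False
```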
## Step 3: Add the Secret to GitHub (1 minute)

1. **Go to Repository Settings**
   - Open your repository on GitHub
   - Click the "Settings" tab
2. **Add the Secret**
   - Navigate to "Secrets and variables" → "Actions"
   - Click "New repository secret"
   - Name: `UNFOLD_API_KEY`
   - Value: paste your API key
   - Click "Add secret"

✅ Done! Your workflow can now authenticate with Unfold CI.
## Step 4: Create the Workflow (2 minutes)

Create `.github/workflows/unfold-ci.yml` in your repository:

### For JavaScript/TypeScript (Jest)

```yaml
name: Unfold CI

on:
  push:
    branches: [ main ]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '18'
          cache: 'npm'
      - run: npm ci
      - name: Run tests
        run: npm test -- --json --outputFile=test-results.json
        continue-on-error: true
      - name: Upload to Unfold CI
        if: always()
        run: |
          curl -X POST https://api.unfoldci.com/api/test-results \
            -H "Authorization: Bearer ${{ secrets.UNFOLD_API_KEY }}" \
            -F "file=@test-results.json" \
            -F "repo=${{ github.repository }}" \
            -F "branch=${{ github.ref_name }}" \
            -F "commit=${{ github.sha }}" \
            -F "framework=jest"
```
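If you want to inspect the results file before it is uploaded, Jest's `--json` output exposes top-level counters (`numTotalTests`, `numPassedTests`, `numFailedTests`). A minimal Python sketch; the summary helper itself is hypothetical, not part of Unfold CI:

```python
import json

def summarize_jest_results(path: str) -> str:
    """Summarize a Jest --json results file by its top-level counters."""
    with open(path) as f:
        data = json.load(f)
    return (f"{data['numPassedTests']} passed, "
            f"{data['numFailedTests']} failed, "
            f"{data['numTotalTests']} total")

# Demonstrate with a tiny results file written locally:
sample = {"numTotalTests": 3, "numPassedTests": 2, "numFailedTests": 1}
with open("test-results.json", "w") as f:
    json.dump(sample, f)

print(summarize_jest_results("test-results.json"))  # 2 passed, 1 failed, 3 total
```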
### For Python (pytest)

```yaml
name: Unfold CI

on:
  push:
    branches: [ main ]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
          cache: 'pip'
      - run: |
          pip install -r requirements.txt
          pip install pytest pytest-json-report
      - name: Run tests
        run: pytest --json-report --json-report-file=test-results.json
        continue-on-error: true
      - name: Upload to Unfold CI
        if: always()
        run: |
          curl -X POST https://api.unfoldci.com/api/test-results \
            -H "Authorization: Bearer ${{ secrets.UNFOLD_API_KEY }}" \
            -F "file=@test-results.json" \
            -F "repo=${{ github.repository }}" \
            -F "branch=${{ github.ref_name }}" \
            -F "commit=${{ github.sha }}" \
            -F "framework=pytest"
```
### For C#/.NET

```yaml
name: Unfold CI

on:
  push:
    branches: [ main ]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - run: dotnet restore
      - run: dotnet build --no-restore
      - name: Run tests
        run: dotnet test --logger "json;LogFileName=test-results.json"
        continue-on-error: true
      - name: Upload to Unfold CI
        if: always()
        run: |
          TEST_FILE=$(find . -name "test-results.json" | head -n 1)
          curl -X POST https://api.unfoldci.com/api/test-results \
            -H "Authorization: Bearer ${{ secrets.UNFOLD_API_KEY }}" \
            -F "file=@$TEST_FILE" \
            -F "repo=${{ github.repository }}" \
            -F "branch=${{ github.ref_name }}" \
            -F "commit=${{ github.sha }}" \
            -F "framework=nunit"
```

✅ Done! Your workflow is configured.
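All three workflows end with the same `curl` upload. If you ever need to script it outside Actions, the form fields it sends can be assembled from the same environment variables GitHub exposes to runners. A sketch only: the field names mirror the `curl` flags above, `collect_fields` is a hypothetical helper, and the fallback values are placeholders for local experimentation:

```python
import os

def collect_fields(framework: str) -> dict:
    """Assemble the form fields the workflow's curl upload sends.

    GITHUB_REPOSITORY, GITHUB_REF_NAME, and GITHUB_SHA are set by the
    Actions runner; the defaults below are placeholders for local runs.
    """
    return {
        "repo": os.environ.get("GITHUB_REPOSITORY", "acme/widgets"),
        "branch": os.environ.get("GITHUB_REF_NAME", "main"),
        "commit": os.environ.get("GITHUB_SHA", "0" * 40),
        "framework": framework,
    }

fields = collect_fields("jest")
print(fields["framework"])
# POST these fields, plus the results file, to
# https://api.unfoldci.com/api/test-results with an
# "Authorization: Bearer <UNFOLD_API_KEY>" header.
```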
## Step 5: Trigger Your First Run (30 seconds)

### Option A: Push a Commit

```bash
git add .github/workflows/unfold-ci.yml
git commit -m "Add Unfold CI monitoring"
git push origin main
```

### Option B: Manual Trigger

1. Go to your repository on GitHub
2. Click the "Actions" tab
3. Select the "Unfold CI" workflow
4. Click the "Run workflow" button
5. Choose a branch and click "Run workflow"

Note: GitHub only shows the "Run workflow" button for workflows that declare a `workflow_dispatch` trigger; add `workflow_dispatch:` under `on:` in your workflow file to enable manual runs.

✅ Done! Your tests are running.
## Step 6: View Results (30 seconds)

1. **Wait for the Workflow**
   - Check the GitHub Actions tab
   - Wait for the workflow to complete (usually 1-3 minutes)
2. **Open the Dashboard**
   - Visit app.unfoldci.com
   - You should see:
     - ✅ Total tests monitored
     - ✅ Test health score
     - ✅ Recent test runs
3. **Verify the Data**
   - Click on your repository
   - View test details
   - Check the test distribution

✅ Congratulations! Unfold CI is now monitoring your tests.
## What Happens Next?

### Immediate (First Run)

- ✅ Tests are recorded in our system
- ✅ Baseline metrics are established
- ✅ Dashboard shows an initial health score

### After 3-5 Runs

- 🔍 Flake detection algorithm activates
- 📊 Pass/fail patterns are identified
- ⚠️ Potentially flaky tests are flagged

### When a Flaky Test Is Detected

- 🤖 AI analyzes the root cause
- 💡 A fix is generated automatically
- 🔀 A pull request is created in your repo
- 📧 A notification is sent (if configured)
## Testing the Full Flow

Want to see Unfold CI in action? Add a flaky test:

### Example Flaky Test (DO NOT USE IN PRODUCTION!)

```javascript
// JavaScript/Jest
test('example flaky test', () => {
  const shouldPass = Math.random() > 0.3; // Fails ~30% of the time
  expect(shouldPass).toBe(true);
});
```

```python
# Python/pytest
def test_example_flaky():
    import random
    assert random.random() > 0.3  # Fails ~30% of the time
```
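To convince yourself the example really fails about 30% of the time, you can simulate many runs locally. An illustrative sketch; the seed is fixed only to make the output repeatable:

```python
import random

def simulate_flaky_runs(runs: int, seed: int = 42) -> float:
    """Return the observed failure rate of the `random() > 0.3` check."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(runs) if not (rng.random() > 0.3))
    return failures / runs

rate = simulate_flaky_runs(10_000)
print(f"Observed failure rate: {rate:.1%}")  # close to 30%
```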
### Trigger Multiple Runs

```bash
# Push 5 commits to generate a pattern
for i in {1..5}; do
  git commit --allow-empty -m "Test run $i"
  git push
  sleep 60  # Wait between runs
done
```

### Watch the Magic

- Runs 1-2: Tests recorded, no issues detected
- Runs 3-4: Flake pattern emerges, test flagged
- Run 5: AI analyzes, generates a fix, creates a PR

Check the dashboard to see:

- Flake score increasing
- Root cause category assigned
- AI-generated PR link
## Viewing AI-Generated PRs

When Unfold CI creates a fix:

1. **Dashboard Notification**
   - The "Recent AI Fixes" section shows the new PR
   - Click to view details
2. **GitHub PR**
   - The PR appears in your repository and includes:
     - 📝 Detailed explanation of the issue
     - 🔧 Code fix with inline comments
     - 📊 Confidence score (0-100%)
     - 🤖 AI model used (GPT-4o)
3. **Review & Merge**
   - Review the changes
   - Test locally if needed
   - Click "Merge" if approved
   - Or request modifications in the comments
## Dashboard Features

### Main Dashboard

- **Health Score**: Overall test suite reliability (0-100%)
- **Critical Flaky**: High-priority tests to fix
- **Tests Monitored**: Total tests across all repos
- **Time Saved**: Estimated hours saved by auto-fixes

### Repository View

- **Test Distribution**: By status (passing/flaky/failing)
- **Pass Rate**: Historical trend
- **Flaky Tests**: Sorted by severity
- **Recent Runs**: Latest execution history

### AI Fixes View

- **Open PRs**: Pending AI-generated fixes
- **Merged PRs**: Successfully applied fixes
- **Fix Status**: Pending/merged/closed
- **Success Rate**: Track fix effectiveness

### Analytics

- **Trend Charts**: Flake rate over time
- **Root Causes**: Most common flaky patterns
- **ROI Metrics**: Time and cost savings
- **Comparison**: Across all your repositories
## Troubleshooting

### No Data in Dashboard

Check:

- Workflow completed successfully ✓
- Upload step succeeded (check the logs) ✓
- API key is correct ✓
- Dashboard page has been refreshed ✓

Fix:

```yaml
# Add debug output to the workflow
- name: Debug upload
  run: |
    echo "File exists:"
    ls -la test-results.json
    echo "Uploading..."
    curl -v -X POST https://api.unfoldci.com/api/test-results \
      -H "Authorization: Bearer ${{ secrets.UNFOLD_API_KEY }}" \
      -F "file=@test-results.json"
```
### Workflow Fails

Check:

- Test results file was generated ✓
- API key secret exists ✓
- Network access to api.unfoldci.com ✓

Fix:

```yaml
# Ensure the workflow continues even if tests fail
- name: Run tests
  run: npm test
  continue-on-error: true  # CRITICAL!

# Always upload, even if tests failed
- name: Upload
  if: always()  # CRITICAL!
  run: curl ...
```
### PR Not Created

Possible reasons:

- Not enough test runs yet (3-5 needed)
- Flake score too low (under a 5% failure rate)
- AI confidence too low (under 70%)
- A similar PR already exists
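The conditions above amount to a simple gate. This is only a sketch of the documented thresholds, not Unfold CI's actual implementation; the function name and signature are hypothetical:

```python
def should_create_pr(run_count: int, failure_rate: float, ai_confidence: float) -> bool:
    """Sketch of the documented gating conditions for AI-generated PRs.

    `failure_rate` and `ai_confidence` are fractions in [0, 1].
    A duplicate-PR check would also apply but is omitted here.
    """
    return (
        run_count >= 3            # enough runs to establish a pattern
        and failure_rate >= 0.05  # flake score above the 5% floor
        and ai_confidence >= 0.70 # AI confident enough in the fix
    )

print(should_create_pr(run_count=5, failure_rate=0.30, ai_confidence=0.87))  # True
print(should_create_pr(run_count=2, failure_rate=0.30, ai_confidence=0.87))  # False
```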
Check the dashboard:

- View test details
- Check the flake score
- Look for existing PRs
- Review the AI analysis status
## Next Steps

### Enable Notifications

Set up Slack/Discord notifications for new PRs:

1. Dashboard → Settings → Notifications
2. Connect a Slack/Discord webhook
3. Choose your notification preferences

### Add More Repositories

1. Dashboard → "Add Repository"
2. Or: GitHub Settings → Installed GitHub Apps → Configure
3. Select additional repositories
4. Sync in the dashboard
### Customize Detection

Configure flake detection thresholds:

1. Dashboard → Repository Settings
2. Adjust sensitivity:
   - **Sensitive**: Catch all potential flakes (5%+ failure)
   - **Balanced**: Standard detection (10%+ failure)
   - **Conservative**: Only obvious flakes (20%+ failure)
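For intuition, here is how the three sensitivity levels map a test's failure rate to a flagged/not-flagged decision. A sketch based only on the thresholds listed above; the function and dictionary are hypothetical:

```python
# Thresholds from the sensitivity levels above (fraction of runs failing)
THRESHOLDS = {"sensitive": 0.05, "balanced": 0.10, "conservative": 0.20}

def is_flagged(failure_rate: float, sensitivity: str = "balanced") -> bool:
    """Return True if a test's failure rate meets the chosen threshold."""
    return failure_rate >= THRESHOLDS[sensitivity]

print(is_flagged(0.08, "sensitive"))     # True  (8% >= 5%)
print(is_flagged(0.08, "balanced"))      # False (8% < 10%)
print(is_flagged(0.08, "conservative"))  # False (8% < 20%)
```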
### Historical Analysis

Backfill past test data:

1. Dashboard → Repository → "Analyze History"
2. Select a timeframe (7/30/90 days)
3. Unfold CI analyzes historical patterns
4. Detection accuracy improves
## Best Practices

### Workflow Triggers

Monitor all important branches:

```yaml
on:
  push:
    branches: [ main, develop, staging ]
  pull_request:
  schedule:
    - cron: '0 */6 * * *'  # Every 6 hours
```

### Test Organization

Structure tests for better analysis:

- Use descriptive test names
- Group related tests
- Include test context in names
- Add test tags/categories

### PR Review Process

When reviewing AI-generated PRs:

- ✅ Read the explanation carefully
- ✅ Understand the root cause
- ✅ Test the fix locally
- ✅ Check for edge cases
- ✅ Merge or provide feedback
## Real-World Example

Here's what a typical Unfold CI experience looks like:

### Day 1: Installation

- Installed the GitHub App ✓
- Generated an API key ✓
- Added the workflow ✓
- First test run complete ✓

### Days 2-3: Data Collection

- 15 workflow runs completed
- 247 tests monitored
- 3 tests flagged as potentially flaky
- No PRs yet (gathering data)

### Day 4: First AI Fix

- 🎉 First PR created!
- Test: `test_user_login_timeout`
- Root cause: network timing issue
- Fix: added retry logic with exponential backoff
- Confidence: 87%
- Merged after review ✓

### Week 2: ROI Visible

- 5 flaky tests fixed
- 12 hours of debugging time saved
- CI/CD reliability improved from 92% to 98%
- Team confidence in tests restored
## Getting Help

We're here to help! Most setup issues are resolved in under 5 minutes. Reach out anytime.

🎉 You're all set! Unfold CI is now monitoring your tests and will automatically create PRs to fix flaky tests as they're detected.