# Configuration

Configure your GitHub Actions workflow to send test results to UnfoldCI.
## Quick Start

Add UnfoldCI to your workflow with a single step:

```yaml
- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
```

That's it! UnfoldCI auto-detects JUnit XML files from most test frameworks.
## Storing the API Key

Store your API key as a GitHub Secret.

### Repository Secret

- Go to your repository on GitHub
- Navigate to **Settings → Secrets and variables → Actions**
- Click **New repository secret**
- Configure:
  - Name: `FLAKY_AUTOPILOT_KEY`
  - Value: Your API key (e.g., `unfold_ci_abc123...`)
- Click **Add secret**
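
If you prefer the GitHub CLI, the same secret can be created from a terminal (assuming `gh` is installed and authenticated; `owner/repo` is a placeholder):

```bash
# Prompts for the secret value, or reads it from stdin
gh secret set FLAKY_AUTOPILOT_KEY --repo owner/repo
```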
### Organization Secret

For multiple repositories, use an organization secret:

- Go to your GitHub organization
- Navigate to **Settings → Secrets and variables → Actions**
- Click **New organization secret**
- Configure:
  - Name: `FLAKY_AUTOPILOT_KEY`
  - Value: Your API key
  - Repository access: Select "All repositories" or choose specific ones
- Click **Add secret**
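
The CLI equivalent, assuming your organization is named `my-org`:

```bash
# --visibility all mirrors the "All repositories" option
gh secret set FLAKY_AUTOPILOT_KEY --org my-org --visibility all
```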
## GitHub Actions Workflow

Add the UnfoldCI action to your workflow after your tests run.
### Basic Configuration (Single Test Framework)

When using a single test framework (Jest, pytest, Vitest, etc.), you don't need `continue-on-error`. The framework runs all tests and writes complete XML results even when some tests fail.
```yaml
name: Tests with UnfoldCI

on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      # Your setup steps here (Node.js, Python, etc.)

      - name: Run tests
        run: npm test
        # No continue-on-error needed!
        # Jest/pytest run ALL tests and write complete XML even when some fail

      # UnfoldCI Flaky Test Detection
      - name: Analyze Flaky Tests
        if: always()  # ← Key: runs even if tests failed
        uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
        with:
          api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
          # results-path not needed! Auto-detects from common locations
```
**How it works:**

- The test framework runs all tests, even if some fail
- The XML contains complete results (all passes AND failures)
- `if: always()` ensures UnfoldCI runs regardless of test outcome
- CI status accurately reflects test results (✅ or ❌)
### Multiple Test Frameworks

If you have multiple test commands (e.g., JavaScript + Python), use `continue-on-error: true` so that all test suites run even if one fails, then add a final check step for accurate CI status.

**Why `continue-on-error` here?** Without it, a JavaScript test failure would stop the job, the Python tests step would be skipped entirely, and UnfoldCI wouldn't see those results.
```yaml
name: Tests with UnfoldCI

on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Setup steps...

      # Run all test suites
      - name: Run JavaScript tests
        id: js-tests
        run: npm test
        continue-on-error: true  # Let other tests run

      - name: Run Python tests
        id: py-tests
        run: pytest
        continue-on-error: true

      # UnfoldCI analyzes ALL results (auto-detects XML files from both frameworks)
      - name: Analyze Flaky Tests
        if: always()
        uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
        with:
          api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
          # results-path not needed! Auto-detects from common locations

      # Final check - shows accurate CI status
      - name: Check test results
        if: always()
        run: |
          if [ "${{ steps.js-tests.outcome }}" == "failure" ] || \
             [ "${{ steps.py-tests.outcome }}" == "failure" ]; then
            echo "❌ Tests failed"
            exit 1
          fi
          echo "✅ All tests passed"
```
**Why this pattern?**

| Step | Purpose |
|---|---|
| `continue-on-error: true` | Ensures all test suites run, even if one fails |
| `if: always()` on UnfoldCI | Analyzes results regardless of test outcome |
| Final check step | Sets the CI status users see (✅ or ❌) |
This gives you the best of both worlds: UnfoldCI gets complete test data, and you see accurate CI status at a glance.
## Supported Test Frameworks

UnfoldCI automatically detects JUnit XML output from:

| Framework | Language | Auto-Detected Locations |
|---|---|---|
| Jest | JavaScript/TypeScript | `**/junit.xml`, `**/junitresults*.xml`, `**/jest-junit.xml` |
| pytest | Python | `**/test-*.xml`, `**/*_test.xml`, `**/pytest-results.xml` |
| Vitest | JavaScript/TypeScript | `**/junit.xml` |
| Go test | Go | `**/report.xml` |
| JUnit/TestNG | Java | `**/surefire-reports/*.xml`, `**/surefire-reports/TEST-*.xml` |
| Gradle | Java/Kotlin | `**/build/test-results/**/*.xml` |
| Mocha | JavaScript | `**/mocha-*.xml`, `**/mocha-results.xml` |
| PHPUnit | PHP | `**/phpunit-results.xml` |
| xUnit/NUnit | C# | `**/TestResults/*.xml`, `**/xunit.xml`, `**/nunit-results.xml` |
Custom location? Specify it with `results-path`:

```yaml
with:
  results-path: 'my-custom-path/**/*.xml'
```
## Action Inputs

| Input | Required | Default | Description |
|---|---|---|---|
| `api-key` | No | — | UnfoldCI API key for authentication |
| `results-path` | No | Auto-detect | Glob pattern for JUnit XML. Leave empty to auto-detect from common locations. |
| `comment-on-pr` | No | `true` | Comment on PR when flaky tests are detected |
| `fail-on-test-failure` | No | `true` | Fail CI if any tests failed or errored |
| `min-tests` | No | `0` | Minimum expected tests (fails if fewer are found, which catches crashes) |
| `api-url` | No | Production URL | UnfoldCI API endpoint |
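
A sketch with every input set explicitly (the values are illustrative; in most cases only `api-key` is needed):

```yaml
- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    results-path: 'test-results/**/*.xml'  # override auto-detection
    comment-on-pr: true                    # comment on PRs when flakes are found
    fail-on-test-failure: true             # mirror test failures in CI status
    min-tests: 10                          # fail if fewer than 10 tests are found
```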
## Action Outputs

| Output | Description |
|---|---|
| `flakes_detected` | Number of flaky tests detected in this run |
| `tests_analyzed` | Total number of tests analyzed |
| `tests_passed` | Number of tests that passed |
| `tests_failed` | Number of tests that failed or errored |
| `tests_skipped` | Number of tests that were skipped |
| `dashboard_url` | URL to the UnfoldCI dashboard for this repository |
| `status` | Action status: `success`, `rate_limited`, `api_error`, `no_results`, or `no_tests` |
### Using Outputs

```yaml
- name: Analyze Flaky Tests
  id: flaky-analysis
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}

- name: Show Results
  if: always()
  run: |
    echo "Flaky tests detected: ${{ steps.flaky-analysis.outputs.flakes_detected }}"
    echo "Total tests analyzed: ${{ steps.flaky-analysis.outputs.tests_analyzed }}"
    echo "Dashboard: ${{ steps.flaky-analysis.outputs.dashboard_url }}"
```
## Framework Examples

### Jest (JavaScript/TypeScript)

Configure Jest to output JUnit XML:

```json
// package.json
{
  "scripts": {
    "test": "jest --ci --reporters=default --reporters=jest-junit"
  },
  "jest-junit": {
    "outputDirectory": "test-results",
    "outputName": "junit.xml"
  }
}
```

Install the reporter:

```bash
npm install --save-dev jest-junit
```
Workflow:

```yaml
- name: Setup Node.js
  uses: actions/setup-node@v4
  with:
    node-version: '20'

- name: Install dependencies
  run: npm ci

- name: Run tests
  id: tests
  run: npm test

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # results-path auto-detected from test-results/**/*.xml
```

**Note:** Jest writes XML results even when tests fail, so UnfoldCI will receive complete data. The CI status will correctly show ❌ if tests fail.
### pytest (Python)

Configure pytest to output JUnit XML:

```ini
# pytest.ini
[pytest]
addopts = --junitxml=test-results/junit.xml
```
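
If you keep pytest settings in `pyproject.toml` instead, the equivalent is:

```toml
[tool.pytest.ini_options]
addopts = "--junitxml=test-results/junit.xml"
```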
Workflow:

```yaml
- name: Setup Python
  uses: actions/setup-python@v4
  with:
    python-version: '3.11'

- name: Install dependencies
  run: pip install pytest

- name: Run tests
  id: tests
  run: pytest

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # results-path auto-detected from test-results/**/*.xml
```

**Note:** pytest writes XML results even when tests fail, so UnfoldCI will receive complete data.
### Go

Install go-junit-report to convert `go test` output to JUnit XML:

```bash
go install github.com/jstemmer/go-junit-report/v2@latest
```
Workflow (the install step is included so the `go-junit-report` binary is available on the runner):

```yaml
- name: Setup Go
  uses: actions/setup-go@v4
  with:
    go-version: '1.21'

- name: Install go-junit-report
  run: go install github.com/jstemmer/go-junit-report/v2@latest

- name: Run tests
  run: go test -v ./... 2>&1 | go-junit-report -set-exit-code > report.xml

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # results-path auto-detected from report.xml
```
### Vitest

Configure Vitest to output JUnit XML:

```ts
// vitest.config.ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    reporters: ['default', 'junit'],
    outputFile: {
      junit: 'test-results/junit.xml'
    }
  }
})
```
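
A workflow sketch for Vitest, mirroring the Jest example above (assuming tests run via `npm test`):

```yaml
- name: Setup Node.js
  uses: actions/setup-node@v4
  with:
    node-version: '20'

- name: Install dependencies
  run: npm ci

- name: Run tests
  run: npm test

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # results-path auto-detected from test-results/junit.xml
```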
## Zero Configuration Frameworks

Some frameworks output JUnit XML by default. No additional configuration needed!

### Maven (JUnit/TestNG)

Maven's Surefire plugin outputs JUnit XML automatically:
```yaml
- name: Setup Java
  uses: actions/setup-java@v4
  with:
    distribution: 'temurin'
    java-version: '17'
    cache: 'maven'

- name: Run tests
  run: mvn test

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # Auto-detects from target/surefire-reports/*.xml
```

Maven writes JUnit XML to `target/surefire-reports/` by default, so no setup is required.
### Gradle (JUnit/TestNG)

Gradle also outputs JUnit XML by default:

```yaml
- name: Setup Java
  uses: actions/setup-java@v4
  with:
    distribution: 'temurin'
    java-version: '17'
    cache: 'gradle'

- name: Run tests
  run: ./gradlew test

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # Auto-detects from build/test-results/**/*.xml
```

Gradle writes JUnit XML to `build/test-results/` by default, so no setup is required.
## Additional Framework Examples

### Mocha (JavaScript)

Install the JUnit reporter:

```bash
npm install --save-dev mocha-junit-reporter
```

Configure `.mocharc.json`:

```json
{
  "reporter": "mocha-junit-reporter",
  "reporterOptions": {
    "mochaFile": "test-results/mocha-results.xml",
    "outputs": true
  }
}
```
Workflow:

```yaml
- name: Setup Node.js
  uses: actions/setup-node@v4
  with:
    node-version: '20'

- name: Install dependencies
  run: npm ci

- name: Run tests
  run: npm test

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # Auto-detects from test-results/mocha-results.xml
```

For multiple reporters (console + JUnit), use the mocha-multi-reporters package, as sketched below.
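
A minimal sketch of that setup, following mocha-multi-reporters' conventions (the file name `multi-reporter-config.json` is arbitrary):

```json
// .mocharc.json
{
  "reporter": "mocha-multi-reporters",
  "reporterOptions": {
    "configFile": "multi-reporter-config.json"
  }
}
```

```json
// multi-reporter-config.json — enables console output plus JUnit XML
{
  "reporterEnabled": "spec, mocha-junit-reporter",
  "mochaJunitReporterReporterOptions": {
    "mochaFile": "test-results/mocha-results.xml"
  }
}
```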
### PHPUnit (PHP)

Configure `phpunit.xml`:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<phpunit>
  <logging>
    <junit outputFile="test-results/junit.xml"/>
  </logging>
</phpunit>
```

Or use the command-line flag:

```bash
vendor/bin/phpunit --log-junit test-results/junit.xml
```
Workflow:

```yaml
- name: Setup PHP
  uses: shivammathur/setup-php@v2
  with:
    php-version: '8.2'
    tools: composer

- name: Install dependencies
  run: composer install

- name: Run tests
  run: vendor/bin/phpunit

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # Auto-detects from test-results/junit.xml
```
### xUnit / NUnit / MSTest (.NET)

Install the JUnit logger in your test project:

```bash
dotnet add package JunitXml.TestLogger
```
Workflow:

```yaml
- name: Setup .NET
  uses: actions/setup-dotnet@v4
  with:
    dotnet-version: '8.0.x'

- name: Restore dependencies
  run: dotnet restore

- name: Run tests
  run: dotnet test --logger "junit;LogFilePath=test-results/junit.xml"

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # Auto-detects from test-results/junit.xml
```

The JunitXml.TestLogger package works with xUnit, NUnit, and MSTest.
## Advanced Configuration

### Import Path Resolution

For projects with custom import aliases, create a `.flaky-autopilot.json` file in your repository root:

```json
{
  "importResolver": {
    "aliases": {
      "@": "./src",
      "@utils": "./src/utils",
      "@tests": "./tests"
    },
    "pythonPaths": [".", "./src", "./tests"],
    "extensions": [".js", ".ts", ".jsx", ".tsx", ".py"]
  }
}
```
This helps UnfoldCI accurately calculate code hashes by resolving import dependencies.
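
For instance, with the aliases above, an import like the following (the module and symbol names are hypothetical) would be resolved before hashing:

```ts
// In a test file; "@utils/date" and formatDate are illustrative names
import { formatDate } from '@utils/date';
// Resolves to ./src/utils/date.ts via the "@utils" alias and ".ts" extension,
// so changes to that file feed into the test's code hash.
```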
### Python Path Configuration

For Python projects, configure the Python path in `pytest.ini`:

```ini
[pytest]
pythonpath = . src tests
```
## Security

### API Key Security

- Store API keys as GitHub Secrets only
- Never commit keys to your repository
- Rotate keys periodically from the dashboard
- Revoke unused keys immediately
### Permissions

The action requires:

- Read access to test result files

Optional but recommended:

- Write access to pull requests, so the action can post PR comments when `comment-on-pr` is enabled
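
A minimal sketch of the corresponding job-level `permissions` block, assuming PR comments are wanted (the `pull-requests: write` requirement is an assumption based on the `comment-on-pr` input):

```yaml
permissions:
  contents: read        # checkout and read test result files
  pull-requests: write  # assumed requirement for the action's PR comments
```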
## Next Steps
- Quick Start Guide — Complete working example
- Troubleshooting — Common issues and solutions