
Configuration

Configure your GitHub Actions workflow to send test results to UnfoldCI.

Quick Start

Add UnfoldCI to your workflow with a single step:

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}

That's it! UnfoldCI auto-detects JUnit XML files from most test frameworks.


Storing the API Key

Store your API key as a GitHub Secret.

Repository Secret

  1. Go to your repository on GitHub
  2. Navigate to Settings → Secrets and variables → Actions
  3. Click New repository secret
  4. Configure:
    • Name: FLAKY_AUTOPILOT_KEY
    • Value: Your API key (e.g., unfold_ci_abc123...)
  5. Click Add secret
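
If you use the GitHub CLI, the same repository secret can be created from a terminal (sketch; `gh` must be authenticated with access to the repository, and `key.txt` is an illustrative file name):

```shell
# Prompts for the secret value interactively
gh secret set FLAKY_AUTOPILOT_KEY

# Or read the value from a file instead of typing it
gh secret set FLAKY_AUTOPILOT_KEY < key.txt
```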

Organization Secret

For multiple repositories, use an organization secret:

  1. Go to your GitHub organization
  2. Navigate to Settings → Secrets and variables → Actions
  3. Click New organization secret
  4. Configure:
    • Name: FLAKY_AUTOPILOT_KEY
    • Value: Your API key
    • Repository access: Select "All repositories" or choose specific ones
  5. Click Add secret

GitHub Actions Workflow

Add the UnfoldCI action to your workflow after your tests run.

Basic Configuration (Single Test Framework)

When using a single test framework (Jest, pytest, Vitest, etc.), you don't need continue-on-error. The framework runs all tests and writes complete XML results even when some tests fail.

name: Tests with UnfoldCI

on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      # Your setup steps here (Node.js, Python, etc.)

      - name: Run tests
        run: npm test
        # No continue-on-error needed!
        # Jest/pytest run ALL tests and write complete XML even when some fail

      # UnfoldCI Flaky Test Detection
      - name: Analyze Flaky Tests
        if: always()  # ← Key: runs even if tests failed
        uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
        with:
          api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
          # results-path not needed! Auto-detects from common locations

How it works:

  • Test framework runs all tests, even if some fail
  • XML contains complete results (all passes AND failures)
  • if: always() ensures UnfoldCI runs regardless of test outcome
  • CI status accurately reflects test results (✅ or ❌)

Multiple Test Frameworks

If you have multiple test commands (e.g., JavaScript + Python), use continue-on-error: true to ensure all test suites run even if one fails. Then add a final check step for accurate CI status:

Why continue-on-error here? Without it, if JavaScript tests fail, the Python tests step would be skipped entirely and UnfoldCI wouldn't see those results.

name: Tests with UnfoldCI

on:
  push:
    branches: [main, master]
  pull_request:
    branches: [main, master]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      # Setup steps...

      # Run all test suites
      - name: Run JavaScript tests
        id: js-tests
        run: npm test
        continue-on-error: true  # Let other tests run

      - name: Run Python tests
        id: py-tests
        run: pytest
        continue-on-error: true

      # UnfoldCI analyzes ALL results (auto-detects XML files from both frameworks)
      - name: Analyze Flaky Tests
        if: always()
        uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
        with:
          api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
          # results-path not needed! Auto-detects from common locations

      # Final check - shows accurate CI status
      - name: Check test results
        if: always()
        run: |
          if [ "${{ steps.js-tests.outcome }}" == "failure" ] || \
             [ "${{ steps.py-tests.outcome }}" == "failure" ]; then
            echo "❌ Tests failed"
            exit 1
          fi
          echo "✅ All tests passed"

Why this pattern?

| Step | Purpose |
| --- | --- |
| continue-on-error: true | Ensures all test suites run, even if one fails |
| if: always() on UnfoldCI | Analyzes results regardless of test outcome |
| Final check step | Sets the CI status users see (✅ or ❌) |

This gives you the best of both worlds: UnfoldCI gets complete test data, and you see accurate CI status at a glance.

Supported Test Frameworks

UnfoldCI automatically detects JUnit XML output from:

| Framework | Language | Auto-Detected Locations |
| --- | --- | --- |
| Jest | JavaScript/TypeScript | **/junit.xml, **/junitresults*.xml, **/jest-junit.xml |
| pytest | Python | **/test-*.xml, **/*_test.xml, **/pytest-results.xml |
| Vitest | JavaScript/TypeScript | **/junit.xml |
| Go test | Go | **/report.xml |
| JUnit/TestNG | Java | **/surefire-reports/*.xml, **/surefire-reports/TEST-*.xml |
| Gradle | Java/Kotlin | **/build/test-results/**/*.xml |
| Mocha | JavaScript | **/mocha-*.xml, **/mocha-results.xml |
| PHPUnit | PHP | **/phpunit-results.xml |
| xUnit/NUnit | C# | **/TestResults/*.xml, **/xunit.xml, **/nunit-results.xml |

Custom location? Specify with results-path:

with:
  results-path: 'my-custom-path/**/*.xml'

Action Inputs

| Input | Required | Default | Description |
| --- | --- | --- | --- |
| api-key | No | | UnfoldCI API key for authentication |
| results-path | No | Auto-detect | Glob pattern for JUnit XML. Leave empty to auto-detect from common locations. |
| comment-on-pr | No | true | Comment on PR when flaky tests are detected |
| fail-on-test-failure | No | true | Fail CI if any tests failed or errored |
| min-tests | No | 0 | Minimum expected tests (fails if fewer found; catches crashes) |
| api-url | No | Production URL | UnfoldCI API endpoint |

Action Outputs

| Output | Description |
| --- | --- |
| flakes_detected | Number of flaky tests detected in this run |
| tests_analyzed | Total number of tests analyzed |
| tests_passed | Number of tests that passed |
| tests_failed | Number of tests that failed or errored |
| tests_skipped | Number of tests that were skipped |
| dashboard_url | URL to the UnfoldCI dashboard for this repository |
| status | Action status: success, rate_limited, api_error, no_results, or no_tests |

Using Outputs

- name: Analyze Flaky Tests
  id: flaky-analysis
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}

- name: Show Results
  if: always()
  run: |
    echo "Flaky tests detected: ${{ steps.flaky-analysis.outputs.flakes_detected }}"
    echo "Total tests analyzed: ${{ steps.flaky-analysis.outputs.tests_analyzed }}"
    echo "Dashboard: ${{ steps.flaky-analysis.outputs.dashboard_url }}"
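
The outputs can also drive later workflow logic. A minimal sketch that surfaces a warning annotation when anything flaky is found (the step name and annotation text are illustrative, not part of the action):

```yaml
- name: Warn on flaky tests
  if: always() && steps.flaky-analysis.outputs.flakes_detected > 0
  run: |
    echo "::warning::${{ steps.flaky-analysis.outputs.flakes_detected }} flaky test(s) detected"
    echo "Details: ${{ steps.flaky-analysis.outputs.dashboard_url }}"
```

Outputs are strings, but GitHub Actions expressions coerce them to numbers for the `>` comparison.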

Framework Examples

Jest (JavaScript/TypeScript)

Configure Jest to output JUnit XML:

// package.json
{
  "scripts": {
    "test": "jest --ci --reporters=default --reporters=jest-junit"
  },
  "jest-junit": {
    "outputDirectory": "test-results",
    "outputName": "junit.xml"
  }
}

Install the reporter:

npm install --save-dev jest-junit

Workflow:

- name: Setup Node.js
  uses: actions/setup-node@v4
  with:
    node-version: '20'

- name: Install dependencies
  run: npm ci

- name: Run tests
  id: tests
  run: npm test

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # results-path auto-detected from test-results/**/*.xml

Note: Jest writes XML results even when tests fail, so UnfoldCI will receive complete data. The CI status will correctly show ❌ if tests fail.

pytest (Python)

Configure pytest to output JUnit XML:

# pytest.ini
[pytest]
addopts = --junitxml=test-results/junit.xml
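
If you'd rather not edit pytest.ini, the same option can be passed per invocation on the command line:

```shell
pytest --junitxml=test-results/junit.xml
```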

Workflow:

- name: Setup Python
  uses: actions/setup-python@v4
  with:
    python-version: '3.11'

- name: Install dependencies
  run: pip install pytest

- name: Run tests
  id: tests
  run: pytest

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # results-path auto-detected from test-results/**/*.xml

Note: pytest writes XML results even when tests fail, so UnfoldCI will receive complete data.

Go

Install go-junit-report and output JUnit XML:

go install github.com/jstemmer/go-junit-report/v2@latest

Workflow:

- name: Setup Go
  uses: actions/setup-go@v4
  with:
    go-version: '1.21'

- name: Run tests
  run: go test -v ./... 2>&1 | go-junit-report -set-exit-code > report.xml

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # results-path auto-detected from report.xml

Vitest

Configure Vitest to output JUnit XML:

// vitest.config.ts
export default {
  test: {
    reporters: ['default', 'junit'],
    outputFile: {
      junit: 'test-results/junit.xml'
    }
  }
}
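
With the reporter configured in vitest.config.ts, the test script only needs to invoke Vitest in run (non-watch) mode. A typical package.json entry (assumed layout, adjust to your project):

```json
// package.json
{
  "scripts": {
    "test": "vitest run"
  }
}
```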

Zero Configuration Frameworks

Some frameworks output JUnit XML by default. No additional configuration needed!

Maven (JUnit/TestNG)

Maven's Surefire plugin outputs JUnit XML automatically:

- name: Setup Java
  uses: actions/setup-java@v4
  with:
    distribution: 'temurin'
    java-version: '17'
    cache: 'maven'

- name: Run tests
  run: mvn test

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # Auto-detects from target/surefire-reports/*.xml

Zero Config

Maven outputs JUnit XML to target/surefire-reports/ by default—no setup required!

Gradle (JUnit/TestNG)

Gradle also outputs JUnit XML by default:

- name: Setup Java
  uses: actions/setup-java@v4
  with:
    distribution: 'temurin'
    java-version: '17'
    cache: 'gradle'

- name: Run tests
  run: ./gradlew test

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # Auto-detects from build/test-results/**/*.xml

Zero Config

Gradle outputs JUnit XML to build/test-results/ by default—no setup required!


Additional Framework Examples

Mocha (JavaScript)

Install the JUnit reporter:

npm install --save-dev mocha-junit-reporter

Configure .mocharc.json:

{
  "reporter": "mocha-junit-reporter",
  "reporterOptions": {
    "mochaFile": "test-results/mocha-results.xml",
    "outputs": true
  }
}

Workflow:

- name: Setup Node.js
  uses: actions/setup-node@v4
  with:
    node-version: '20'

- name: Install dependencies
  run: npm ci

- name: Run tests
  run: npm test

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # Auto-detects from test-results/mocha-results.xml

Multi-Reporter

For multiple reporters (console + JUnit), use the mocha-multi-reporters package.

PHPUnit (PHP)

Configure phpunit.xml:

<?xml version="1.0" encoding="UTF-8"?>
<phpunit>
  <logging>
    <junit outputFile="test-results/junit.xml"/>
  </logging>
</phpunit>

Or use the command line flag:

vendor/bin/phpunit --log-junit test-results/junit.xml

Workflow:

- name: Setup PHP
  uses: shivammathur/setup-php@v2
  with:
    php-version: '8.2'
    tools: composer

- name: Install dependencies
  run: composer install

- name: Run tests
  run: vendor/bin/phpunit

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # Auto-detects from test-results/junit.xml

xUnit / NUnit / MSTest (.NET)

Install the JUnit logger in your test project:

dotnet add package JunitXml.TestLogger

Workflow:

- name: Setup .NET
  uses: actions/setup-dotnet@v4
  with:
    dotnet-version: '8.0.x'

- name: Restore dependencies
  run: dotnet restore

- name: Run tests
  run: dotnet test --logger "junit;LogFilePath=test-results/junit.xml"

- name: Analyze Flaky Tests
  if: always()
  uses: UnfoldAI-Labs/UnfoldCI-flaky-autopilot-action@v1
  with:
    api-key: ${{ secrets.FLAKY_AUTOPILOT_KEY }}
    # Auto-detects from test-results/junit.xml

Universal

The JunitXml.TestLogger package works with xUnit, NUnit, and MSTest.


Advanced Configuration

Import Path Resolution

For projects with custom import aliases, create a .flaky-autopilot.json file in your repository root:

{
  "importResolver": {
    "aliases": {
      "@": "./src",
      "@utils": "./src/utils",
      "@tests": "./tests"
    },
    "pythonPaths": [".", "./src", "./tests"],
    "extensions": [".js", ".ts", ".jsx", ".tsx", ".py"]
  }
}

This helps UnfoldCI accurately calculate code hashes by resolving import dependencies.
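
If the aliases come from TypeScript, the entries in .flaky-autopilot.json should simply mirror what your compiler already knows. For comparison, the equivalent tsconfig.json paths mapping (your actual layout may differ) would be:

```json
// tsconfig.json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["src/*"],
      "@utils/*": ["src/utils/*"],
      "@tests/*": ["tests/*"]
    }
  }
}
```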

Python Path Configuration

For Python projects, configure the Python path in pytest.ini:

[pytest]
pythonpath = . src tests

Security

API Key Security

  • Store API keys as GitHub Secrets only
  • Never commit keys to your repository
  • Rotate keys periodically from the dashboard
  • Revoke unused keys immediately

Permissions

The action requires:

  • Read access to test result files

