Testing
Testing skills help validate your MuleSoft flows by generating test cases and sample data, ensuring accuracy and functionality before deployment.
1. MUnit Test Generator
Generates MUnit tests for specific MuleSoft flows based on your selected repository or uploaded project.
- Flow-Aware Testing: Automatically detects flows from the selected repo or upload and generates appropriate MUnit scenarios.
- Highly Configurable: Specify VM arguments, choose runtime versions, and adjust test case complexity using provided controls.
- Coverage Ready: Optionally enable test coverage reporting for visibility into flow validation.
Supported Input Modes:
- With Repository: Connect a Git repo to select flows directly from your project.
- Upload from Computer: Upload a MuleSoft project folder to access local flow files.
Additional Features:
- VM Args: Define custom environment variables or JVM flags for test execution.
- Coverage Toggle: Enable or disable test coverage collection.
- Scenario Slider: Choose how many test cases to generate (up to 3).
- Flow Selector: Pick the specific flow you want to test.
- Runtime Configuration: Choose your Java, Maven, and Mule runtime versions.
- settings.xml Support: Upload a settings.xml file for Maven config adjustments.
- Notes (Optional): Add extra guidance or sample expectations to tailor test logic.
This capability helps you automate and accelerate test creation for faster and more confident Mule deployments.
2. Sample Data Generator
Generates realistic input-output samples from a provided DataWeave (DWL) script to help test or demonstrate data transformations.
- Quick Test Setup: Produces mock data that matches the structure defined by your DataWeave logic.
- Input Type Aware: Customize the data shape by selecting the appropriate input format (e.g., JSON, XML).
- Script-Driven Generation: Simply paste your DWL script, and the platform generates corresponding samples.
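To make the idea concrete, here is a minimal Python sketch of the kind of input/output sample pair the generator produces for a simple field-mapping script. The mapping itself is a hypothetical stand-in for a one-line DataWeave expression; the field names are illustrative, not taken from the product.

```python
import json

# Hypothetical mock input for a DWL script that maps
# {"firstName", "lastName"} to {"fullName"} (illustrative only).
def mock_input():
    return {"firstName": "Ada", "lastName": "Lovelace"}

def apply_mapping(payload):
    # Python stand-in for the DataWeave expression:
    #   { fullName: payload.firstName ++ " " ++ payload.lastName }
    return {"fullName": payload["firstName"] + " " + payload["lastName"]}

sample_in = mock_input()
sample_out = apply_mapping(sample_in)
print(json.dumps({"input": sample_in, "output": sample_out}))
```

The generator's value is producing such paired samples directly from the script, so you can demo or test a transformation without real upstream data.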
Supported Input Modes:
- No Repository: Paste the DWL script directly without any project context.
- With Repository: Select a DWL file from a connected repository and auto-generate sample data.
- Upload from Computer: Upload a Mule project folder that includes DWL files.
Additional Features:
- Input Format Selection: Choose from JSON, XML, CSV, YAML, TEXT, etc., to tailor the output.
- Notes (Optional): Add special considerations or sample data preferences to guide generation.
- VM Args: Provide optional runtime parameters if required for execution context.
- Runtime Configuration: Select Java, Maven, and Mule runtime versions.
- settings.xml Support: Upload a settings.xml file to influence configuration if applicable.
This capability is especially useful when you want to simulate flow behavior without needing actual backend calls or real input data.
3. Integration Test Agent
The Integration Test Agent is an AI-powered testing tool designed specifically for Mule 4 applications. It autonomously analyzes your MuleSoft flows, generates comprehensive test scenarios, executes integration tests, and provides detailed reports with actionable insights. Because this agent connects to your end systems and can modify data there, it is recommended to use it with test/dev instances only.
Key Benefits
- Automated Test Generation: Automatically analyzes your flows and creates relevant test cases
- Comprehensive Coverage: Tests API endpoints, scheduler flows, batch processes, and event-driven flows
- Multiple Test Scenarios: Generates happy path, edge case, and negative test scenarios
- Detailed Reporting: Provides test summaries, performance metrics, and debugging information
- Postman Collection Export: Exports all tests as Postman collections for easy sharing and reuse
Prerequisites and Setup
System Requirements:
Required:
- Access to Curie Platform: You need an active account with appropriate permissions
- Repository Access: The platform needs access to your repository (via version control or upload)
Optional (for Enhanced Features):
- Anypoint Exchange Configuration: Enables automatic CloudHub deployment for testing
  - If not configured, you can test pre-deployed applications by providing the deployment URL
  - Note: Exchange configuration is required for testing non-API (scheduled, triggered) flows
Configuration Steps:
You can configure Anypoint Exchange in the Integrations tab within your workspace settings. Before using the integration, ensure CurieTech AI has the necessary permissions to access your Anypoint Platform resources.
Steps to Grant Permissions:
- Log in to your Anypoint Platform account
- Navigate to Access Management
- Click Connected Apps
- Select your Organization Name
- Click Add Scopes
- Grant the required scopes for the following components:
  - Design Center
  - Exchange
  - Runtime Manager
  - General
  - API Manager
Important Note:
Without these permissions, CurieTech AI may not be able to:
- Publish assets
- Deploy applications
- Retrieve logs
Ensure all relevant scopes are enabled before testing the integration.
Configuration Requirements:
VM Arguments (Optional): If your MuleSoft application requires specific VM arguments (e.g., encryption keys, system properties), you'll need to provide them:
-Dkey=your_encryption_key -Denv=test
Deployment Information:
Option A: Anypoint Exchange Configured
- The agent will automatically deploy your application to CloudHub and clean up all deployments at the end of the task
- No deployment URL needed
Option B: Anypoint Exchange Not Configured
- You must deploy your application manually
- Provide the deployment URL when running the agent (e.g., https://your-app.cloudhub.io). The URL must be publicly accessible from the Curie server; include any client ID and secret required to reach it.
How to Use the Agent
Step 1: Access the Integration Test Agent
- Log into the Curie platform
- Navigate to the Agent Gallery or Testing Agents section
- Select Integration Test Agent
Step 2: Configure Your Test
Basic Configuration:
- Select Repository
  - Choose your MuleSoft repository from the dropdown
  - Select the branch you want to test (e.g., main, develop)
- Provide Deployment URL
  - If Anypoint Exchange is configured: the agent will deploy automatically (no URL needed)
  - If not configured: enter your pre-deployed application URL
  - Example: https://salesforce-testing.cloudhub.io
- Add VM Arguments (Optional)
  - Enter any required JVM arguments
  - Example: -Dkey=1122334455667788
- Additional Notes
  - Describe test scenarios, provide schemas, or specify the report format you want for the output. You can be creative here!
  - It is highly recommended to add schemas of the end systems you are testing, or data samples, to get accurate results
Advanced Configuration: Input/Output Testing
For more rigorous testing, you can enable Comparison Testing:
- Enable Comparison Testing Checkbox
  - This allows you to provide specific input examples and expected output
- Configure Flow Examples
  - For each flow you want to test:
    - Flow Name: e.g., salesforce-accountFlow
    - Input Example: Provide sample JSON/XML input, e.g. [{"Name": "Test Account", "Type": "Customer", "Industry": "Technology"}]
    - Expected Output Example: Provide the expected response, e.g. {"success": true, "id": "001ao00001p9hSvAAI", "created": true}
- Validation Behavior
  - The agent will execute the flow with your input
  - Compare actual output with expected output
  - Report matches and mismatches
  - Ignore non-deterministic fields (timestamps, auto-generated IDs)
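The comparison step above can be sketched as a deep-compare that skips volatile keys. This is an assumption about the mechanics, not the agent's actual implementation; the ignored field names are illustrative.

```python
# Sketch of expected-vs-actual comparison that tolerates
# non-deterministic fields (hypothetical field names).
IGNORED_KEYS = {"id", "timestamp", "createdDate"}

def compare(expected, actual, ignored=IGNORED_KEYS):
    """Return a list of mismatch descriptions, skipping volatile keys."""
    mismatches = []
    for key, want in expected.items():
        if key in ignored:
            continue
        got = actual.get(key, "<missing>")
        if got != want:
            mismatches.append(f"{key}: expected {want!r}, got {got!r}")
    return mismatches

expected = {"success": True, "id": "001ao00001p9hSvAAI", "created": True}
actual = {"success": True, "id": "001XYZ", "created": True}
print(compare(expected, actual))  # id differs but is ignored -> []
```

A mismatch on a deterministic field (e.g. `success`) would be reported, while a differing auto-generated `id` is not.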
Step 3: Run the Test
- Click Submit or Run Test
- The agent will:
  - Clone your repository
  - Analyze the flows
  - Deploy the application (if needed)
  - Execute comprehensive tests
  - Generate reports
Step 4: Monitor Progress
You'll see real-time progress updates:
- "Analyzing flows..."
- "Deploying application..."
- "Executing test scenarios..."
- "Generating reports..."
Typical execution time: 5-15 minutes depending on application complexity
Understanding the Outputs
1. Test Summary Report
The main report includes:
Test Results Table:
| Test Scenario | Status | Response Time | HTTP Status | Details |
|----------------------------------|---------|---------------|-------------|----------------|
| Create Account - Valid Data | ✅ PASSED | 0.85s | 200 | Account created|
| Query Accounts | ✅ PASSED | 1.02s | 200 | 42 accounts |
| Delete Account - Invalid ID | ✅ PASSED | 0.32s | 400 | Expected error |
Flow Overview:
- Flow name and description
- Flow type (API, scheduler, event-driven)
- Business logic summary
- Connected systems
Test Execution Details: For each test case:
- Request: The exact curl command used, e.g.:
  curl -X POST "https://app.cloudhub.io/account/create" \
    -H "Content-Type: application/json" \
    -d '{"Name": "Test Account", "Type": "Customer"}'
- Response: Full response data
- Performance Metrics: Response time, HTTP status
- Test Result: PASSED/FAILED with explanation
- Findings: Insights about the flow behavior
2. Postman Collection
The agent generates a ready-to-use Postman collection (postman_collection.json) containing:
- All executed HTTP requests
- Proper headers and authentication
- Request body examples
- Collection variables (base URL)
- Test scripts with assertions
- Pre-request scripts
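The exported file follows the Postman v2.1 collection format, so it is easy to inspect or post-process programmatically. A minimal sketch, using a hand-built collection with illustrative items in place of the real download:

```python
import json

# Illustrative stand-in for a downloaded postman_collection.json
# (Postman v2.1 collection structure).
collection = {
    "info": {"name": "Integration Tests"},
    "item": [
        {"name": "Create Account",
         "request": {"method": "POST", "url": {"raw": "{{baseUrl}}/account/create"}}},
        {"name": "Query Accounts",
         "request": {"method": "GET", "url": {"raw": "{{baseUrl}}/accounts"}}},
    ],
}

# In practice you would load the real file instead:
#   with open("postman_collection.json") as f:
#       collection = json.load(f)
for item in collection["item"]:
    req = item["request"]
    print(f'{req["method"]:5} {item["name"]} -> {req["url"]["raw"]}')
```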
How to Use:
- Download the postman_collection.json file from the results
- Import into Postman: File → Import → Upload Files
- Update collection variables if needed
- Run individual requests or entire collection
3. Individual Test Reports
Each test scenario has a detailed markdown report including:
- Test name and description
- Complete command with all parameters
- Full request and response payloads
- Error messages and stack traces (if failed)
- Performance data
- Debugging recommendations
4. Files Generated
After completion, you'll receive:
curie_test_results/
├── postman_collection.json # Postman collection with all tests
├── flow_name_1/
│ ├── test_summary.md # Comprehensive test report for flow 1
├── flow_name_2/
│ ├── test_summary.md # Comprehensive test report for flow 2
│ └── test_script.md # Additional test script added by the agent to test non-API flows
└── ...
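Since every flow gets its own subfolder with a test_summary.md, the reports are straightforward to gather in bulk. A small sketch that rebuilds the layout above in a throwaway directory and collects the summaries:

```python
import os
import tempfile

# Recreate the documented results layout in a temp dir so this is runnable.
root = os.path.join(tempfile.mkdtemp(), "curie_test_results")
for flow in ("flow_name_1", "flow_name_2"):
    os.makedirs(os.path.join(root, flow))
    with open(os.path.join(root, flow, "test_summary.md"), "w") as f:
        f.write(f"# Test summary for {flow}\n")

# Collect every per-flow test_summary.md under the results root.
summaries = sorted(
    os.path.join(dirpath, name)
    for dirpath, _, files in os.walk(root)
    for name in files
    if name == "test_summary.md"
)
print(len(summaries), "summary reports found")
```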
Advanced Features
1. Testing Non-API Flows
For flows that aren't exposed as HTTP endpoints (schedulers, event-driven):
What the agent does:
- Automatically creates temporary HTTP endpoints
- These endpoints trigger the flow logic for testing
- Tests are executed via the temporary endpoint
- Temporary code is removed after testing (or kept for debugging)
Example:
If you have a scheduler flow that processes data every 5 minutes, the agent creates a /test/scheduler-flow endpoint that you can call on-demand.
2. Multi-Flow Testing
Test multiple flows in a single session:
Test Specification:
salesforce-accountFlow, salesforce-contactFlow, salesforce-opportunityFlow
The agent will:
- Test each flow independently
- Generate separate reports for each
- Create a combined Postman collection
- Provide an overall summary
3. Comparison Testing with Input/Output Examples
When you provide input/output examples:
Benefits:
- Regression testing: Ensure flows behave consistently
- Contract validation: Verify API contract compliance
- Data transformation verification: Validate DataWeave logic
Report Format:
| Test Case | Input Match | Output Match | Status | Notes |
|------------------------|-------------|--------------|-------------|-----------------|
| Create Account Test 1 | ✓ | ✓ | ✅ PASSED | Perfect match |
| Create Account Test 2 | ✓ | Partial | ⚠️ WARNING | ID differs |
4. Performance Testing
Every test captures:
- Response Time: Total time to complete the request
- HTTP Status Code: Success/error codes
- Throughput: For bulk operations, items processed per second
Example Output:
Performance Metrics:
- Average Response Time: 0.87s
- 95th Percentile: 1.24s
- Success Rate: 98% (49/50 requests)
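Metrics like these can be derived from raw per-request samples. The sketch below uses made-up response times and statuses and a simple nearest-rank percentile; it is an illustration of the arithmetic, not the agent's internal calculation.

```python
# (response_time_seconds, http_status) samples - illustrative values only.
samples = [(0.85, 200), (1.02, 200), (0.32, 400), (0.90, 200), (1.24, 200)]

times = sorted(t for t, _ in samples)
avg = sum(times) / len(times)
# Nearest-rank 95th percentile over the sorted samples
p95 = times[min(len(times) - 1, int(0.95 * len(times)))]
ok = sum(1 for _, status in samples if status < 400)
rate = 100 * ok / len(samples)
print(f"avg={avg:.2f}s p95={p95:.2f}s success={rate:.0f}%")
```

Note that in the agent's own reports a 4xx response can still count as PASSED when the test expected that error; here success is naively defined as status < 400.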
5. DataWeave Testing
The agent can:
- Generate sample input data for DataWeave scripts
- Test transformations with various data formats (JSON, XML, CSV)
- Validate mapping logic
- Test edge cases (null values, empty arrays, special characters)
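The edge cases listed above are the ones that typically break naive mappings. As a Python stand-in for a simple DataWeave mapping (the field name is illustrative), a transformation that tolerates them looks like:

```python
# Python stand-in for a DWL mapping that upper-cases each item's "name",
# tolerating a null payload, empty arrays, and missing fields.
def map_names(payload):
    items = payload or []  # null payload -> treat as empty list
    return [{"name": (item.get("name") or "").upper()} for item in items]

print(map_names(None))                     # null payload
print(map_names([]))                       # empty array
print(map_names([{"name": "curie"}, {}]))  # missing field
```

The agent probes exactly these inputs so that a mapping which only works on the happy-path payload gets flagged before deployment.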
Example Walkthrough
Scenario: Testing a Salesforce Integration Application
Application: A MuleSoft app that creates, updates, and queries Salesforce accounts
Steps:
- Open Integration Test Agent
  - Select repository: salesforce-integration
  - Branch: main
- Configure Test
  - Deployment URL: https://salesforce-integration.cloudhub.io
  - VM Arguments: -Dsfdc.username=admin@example.com
  - Test Instructions:
    Test the following flows:
    - salesforce-createAccountFlow
    - salesforce-queryAccountsFlow
    Use various account types (Customer, Partner, Prospect) and validate responses.
- Enable Comparison Testing (Optional)
  - Flow: salesforce-createAccountFlow
  - Input: [{"Name": "Curie Test", "Type": "Customer", "Industry": "Technology"}]
  - Expected Output: {"success": true, "created": true}
- Submit and Wait
  - Agent analyzes flows (~2 min)
  - Executes test scenarios (~5 min)
  - Generates reports (~1 min)
- Review Results
  - Download test summary markdown
  - Import Postman collection
  - Review any failed tests
  - Share results with team
Expected Outputs:
- test_summary.md with 5-8 test scenarios
- postman_collection.json with all HTTP requests
- Performance metrics showing response times
- Validation results for comparison testing
This capability provides comprehensive integration testing for MuleSoft applications, ensuring your flows work correctly with real systems and providing detailed insights for debugging and optimization.