CAPL Script

Comprehensive Project

Learning Objectives

After completing this article, you will be able to:

  • Master the method of integrating simulation nodes and test modules within the same CANoe project
  • Understand the differences and advantages of manual testing versus automated testing
  • Learn to use Interactive Generator for manual testing and verification
  • Learn to view and analyze Test Reports
  • Develop system integration thinking and understand the complete workflow of "development-testing-verification"
  • Gain the ability to add new features and test cases to existing projects

Project Review: Integration of Two Major Achievements

After studying the previous two articles, we have built two independent CAPL programs:

Article 6 Achievement: DoorModule Simulation Node

We created a complete door module simulation node that can:

// Receive commands
on message LockCmd { doorLockState = 1; }
on message WindowCmd { /* Handle window control */ }

// Automatic behavior
on timer windowTimer { /* Simulate window movement */ }

// Status feedback
on timer statusTimer { output(DoorStatus); }

Core Features:

  • Command Response: Receives and processes lock, unlock, and window control commands
  • Internal State: Maintains door lock state, window position, and movement state
  • Automatic Behavior: Simulates realistic window movement process (2-second progressive movement)
  • Safety Checks: Prohibits lock/unlock during window movement, prohibits window control when door is locked
  • Status Feedback: Sends DoorStatus message every 500ms to report current state

Article 7 Achievement: DoorModule_Test Test Module

We created a professional test module that can:

testcase TC_DoorLockTest()
{
    // Send the stimulus
    message 0x200 lockCmd;
    lockCmd.byte(0) = 0x01;  // lock command
    output(lockCmd);

    // Verify response
    long result = TestWaitForMessage(DoorStatus, 1000);
    if (result == 1 && doorStatus == 0x01)  // doorStatus: lock-state byte captured from the response
    {
        TestStepPass("Door locked successfully");
    }
}

Core Functions:

  • Automated Testing: Uses testcase to organize test cases
  • Stimulus Generation: Actively sends test commands (output)
  • Response Verification: Uses TestWaitForMessage to wait and verify responses
  • TSL Support: Uses Checks to monitor signal ranges, Stimulus to generate test data
  • Report Generation: Automatically generates detailed test execution reports

This Article's Goal: Building an Integrated "Simulation-Testing" Bench

Now, we will integrate the simulation node and test module into a single CANoe project, constructing a complete simulation and test bench.

This bench will enable you to:

  1. Manual Testing: Manually send commands through Interactive Generator and observe simulation node responses
  2. Automated Testing: Run test modules to automatically execute all test cases and generate reports
  3. Comparative Verification: Compare results of manual and automated testing to deepen system understanding

📝 Note: This is exactly the workflow of real automotive electronics development—first verifying functionality through simulation, then ensuring quality through automated testing.


Project Configuration: Building the Test Bench

Let's build this integrated environment step by step.

Step 1: Create a New CANoe Project

  1. Open CANoe and select File → New
  2. In the project wizard, select the appropriate bus type (usually CAN)
  3. Enter a project name, such as DoorModule_Project
  4. Choose a save location

[!SCREENSHOT]
Location: CANoe startup screen
Content: New Project wizard dialog
Annotation: Circle the project type selection and name input fields

Step 2: Configure Network and Database

In the CANoe project, we need to configure the network and database:

  1. Network Configuration:

    • In Simulation Setup, ensure there is a CAN network
    • Double-click the CAN network to configure channel and bitrate (usually 500 kbps)
  2. Database File (Optional but recommended):

    • Right-click Databases and select Add
    • Create or import DBC file to define DoorStatus, LockCmd, and other messages

Simplified Approach: If not using DBC, you can also use raw message IDs directly in CAPL (such as 0x200, 0x300).
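
A minimal sketch of the difference in CAPL (message names and IDs follow this series' conventions):

// With a DBC, a message is declared by its symbolic name:
// message LockCmd lockCmd;   // signals and layout resolved from the database
// Without a DBC, the raw identifier is used instead:
message 0x200 lockCmd;        // LockCmd by raw ID - byte-level access only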

Step 3: Add DoorModule Simulation Node

  1. In the Simulation Setup window
  2. Right-click the Network node and select Insert Network Node
  3. Select CAPL type
  4. Enter node name: DoorModule
  5. In the file browser, select the DoorModule.can file saved from Article 6
  6. Click OK

[!SCREENSHOT]
Location: CANoe Simulation Setup window
Content: Shows the newly added DoorModule node
Annotation: Circle the simulation node and its CAPL file path

Step 4: Add DoorModule_Test Test Module

  1. In the menu bar, select Test → Test Setup
  2. Right-click Test Environment and select Insert Test Module
  3. Enter module name: DoorModule_Test
  4. Select the DoorModule_Test.can file saved from Article 7
  5. Click OK

[!SCREENSHOT]
Location: CANoe Test Setup window
Content: Shows the newly added test module
Annotation: Circle the test module and Test Units panel

Step 5: Configure Interactive Generator Panel

For manual testing, we need to add Interactive Generator:

  1. In the Test Setup window, right-click the test module
  2. Select Insert → Interactive Generator
  3. Configure messages to be sent:
    • Add LockCmd (ID: 0x200)
    • Add UnlockCmd (ID: 0x201)
    • Add WindowCmd (ID: 0x202)

[!SCREENSHOT]
Location: CANoe Test Setup window
Content: Shows Interactive Generator configuration
Annotation: Circle the message configuration area and send button
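
If you prefer a scripted alternative to the Interactive Generator panel, a few on key handlers in a small helper CAPL node can send the same commands. A minimal sketch, assuming the raw IDs listed above:

variables
{
  message 0x200 gLockCmd;     // LockCmd
  message 0x201 gUnlockCmd;   // UnlockCmd
  message 0x202 gWindowCmd;   // WindowCmd
}

on key 'l' { gLockCmd.byte(0)   = 0x01; output(gLockCmd);   }  // lock
on key 'u' { gUnlockCmd.byte(0) = 0x01; output(gUnlockCmd); }  // unlock
on key 'd' { gWindowCmd.byte(0) = 0x01; output(gWindowCmd); }  // window down
on key 'w' { gWindowCmd.byte(0) = 0x02; output(gWindowCmd); }  // window up

Pressing the corresponding key during measurement sends the command, just like clicking Send in the panel.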

Step 6: Verify Configuration

Ensure all components have been added correctly:

  • Simulation Setup: DoorModule simulation node
  • Test Setup: DoorModule_Test test module
  • Interactive Generator: Panel for sending test commands

Note: If you encounter compilation errors, check if CAPL file paths are correct and ensure there are no syntax errors.


Running and Observing: Comparison of Two Testing Methods

Now let's run the project and observe two different testing methods.

Method One: Manual Testing (Interactive Generator)

Manual testing simulates the operations that development engineers perform during actual debugging.

Step 1: Start Simulation Node

  1. Click the Start (Start Measurement) button
  2. Observe the Write window; you will see:
DoorModule: Initialization started
DoorModule: Initialization complete - Unlocked, Windows up

Step 2: Use Interactive Generator to Send Commands

  1. In Test Setup, find the Interactive Generator panel
  2. In the LockCmd message row, enter data 01 (lock command)
  3. Click the Send button
  4. Observe the Write window:
DoorModule: Door locked

Step 3: Observe Status Feedback

  1. Open the Trace window (View → Trace)
  2. You will see DoorStatus messages (ID 0x300) sent every 500ms
  3. Parse the message data (a decoding sketch follows this list):
    • Byte 0: 01 (door lock state: locked)
    • Byte 1: 00 (window position: 0%, fully up)
    • Byte 2: 00 (movement state: stopped)
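
The same decoding can be scripted. A minimal CAPL handler that prints the three status bytes (byte layout as listed above; use on message 0x300 instead if no DBC is loaded):

on message DoorStatus
{
  write("Lock: %d  Position: %d%%  Movement: %d",
        this.byte(0),   // 0 = unlocked, 1 = locked
        this.byte(1),   // 0-100% (0 = fully up)
        this.byte(2));  // 0 = stopped, 1 = moving up, 2 = moving down
}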

Step 4: Test Window Control

  1. In Interactive Generator, send a WindowCmd message with data 01 (window down; since the windows start fully up, a down command produces visible movement)
  2. Observe the Write window:
DoorModule: Window moving down
DoorModule: Window fully down
  3. Observe DoorStatus messages in the Trace window:
    • WindowPosition gradually increases from 0 to 100
    • WindowMovement changes to 2 (moving down) during movement, then back to 0 (stopped)

[!SCREENSHOT]
Location: CANoe Trace window
Content: Shows continuous records of DoorStatus messages
Annotation: Circle timestamps and message data, annotate WindowPosition changes

Manual Testing vs. Automated Testing: A Comparison

Aspect | Manual Testing | Automated Testing
Operation Method | Human clicks Interactive Generator | Program executes automatically
Time Consumption | ~10 minutes per test | ~10 seconds per test
Accuracy | Depends on operator's attention | 100% consistent
Repeatability | Difficult to ensure complete consistency | Fully repeatable
Recording Method | Manually recorded observations | Automatically generated structured reports
Applicable Scenarios | Debugging, exploratory testing | Regression testing, stress testing

Method Two: Automated Testing (Test Module)

Now let's run the automated test module to see how it replaces manual operations.

Step 1: Run Test Module

  1. In the Test Setup window, right-click DoorModule_Test
  2. Select Start with Report
  3. The test executes automatically, without human intervention

Step 2: Observe Test Execution Process

Open the Test Report window; you will see the test execution process:

[19:43:25] Test Module Started: DoorModule_Test
[19:43:25] Test Case Started: TC_DoorLockTest
[19:43:26] Step 1: Send lock command
[19:43:26] Step 2: Wait for response (timeout: 1000ms)
[19:43:26] Step 3: Verify door status
[19:43:26] Step 3.1: PASS - Door status message received
[19:43:26] Step 3.2: PASS - Door is locked
[19:43:27] Test Case Finished: TC_DoorLockTest - PASSED

Step 3: View Complete Test Report

After test completion, CANoe generates an HTML-format test report containing:

Test Case List:

  • TC_DoorLockTest - PASSED
  • TC_WindowControlTest - PASSED
  • (Other test cases)

Detailed Step Records:
All TestSteps in each testcase are recorded, including:

  • Step number and description
  • Execution time
  • Result verdict (Pass/Fail)
  • Comparison of actual values versus expected values

[!SCREENSHOT]
Location: CANoe Test Report Viewer
Content: Shows complete test execution report
Annotation: Circle test case status, detailed steps, and verdict results

Automated Testing Advantages

Advantages:

  • Efficient: All test cases execute automatically, saving human time
  • Accurate: Each execution is completely consistent, avoiding human errors
  • Repeatable: Can be run repeatedly to ensure code changes don't affect existing functionality
  • Detailed Recording: Automatically generates structured test reports
  • Scalable: Can easily add new test cases

Improvements over Manual Testing:

Manual Testing (10 minutes)    vs    Automated Testing (10 seconds)
- Manual input of each command            - Automatic sending of all commands
- Manual observation and recording        - Automatic verification and recording
- Results depend on human judgment        - Results based on preset logic
- Difficult to verify boundary conditions - Easy testing of all scenarios

Code Comparison: The "Dual Identity" of the Same Message

Interestingly, the same message has completely different roles in different contexts:

LockCmd Message

In the simulation node (Article 6):

// Role: Received command
on message LockCmd
{
    // Process the received command
    doorLockState = 1;
    write("DoorModule: Door locked");
}

In the test module (Article 7):

// Role: Sent stimulus
testcase TC_DoorLockTest()
{
    // Generate stimulation
    message 0x200 lockCmd;
    lockCmd.byte(0) = 0x01;
    output(lockCmd);
}

DoorStatus Message

In the simulation node:

// Role: Sent status feedback
on timer statusTimer
{
    DoorStatus.byte(0) = doorLockState;
    output(DoorStatus);  // Send to bus
}

In the test module:

// Role: Expected response
testcase TC_DoorLockTest()
{
    // Send command first...
    output(lockCmd);

    // Then wait for expected response
    long result = TestWaitForMessage(DoorStatus, 1000);
    if (result == 1)
    {
        TestStepPass("Door status received");
    }
}

What does this demonstrate?

In the simulation node, the same CAN message is an "input" (a received command) or an "output" (broadcast state); in the test module, it is a "stimulus" (sent to provoke behavior) or a "verification point" (a check target).

This "role switching" helps you understand:

  • How ECUs interact with other nodes
  • How testing simulates real usage scenarios
  • The dialectical relationship between development and testing

Extended Thinking: Feature Enhancement and Advancement

After completing the basic integration, let's explore how to add new features to the project.

Enhancement One: Anti-Pinch Logic

Real window controllers all have anti-pinch functionality—when the window encounters resistance while moving up, it automatically moves down.

Adding Anti-Pinch Detection in the Simulation Node

variables
{
    byte faultStatus = 0;  // Fault indicator: 0=no fault, 1=fault detected
}

on timer windowTimer
{
    if (windowMovement == 1)  // Moving up
    {
        // Anti-pinch logic: simulate obstacle detection
        // Note: random() returns 0.0-1.0, so <0.1 means 10% chance
        if (windowPosition > 50 && random() < 0.1)
        {
            // Jam detected! Stop movement immediately for safety
            windowMovement = 0;  // Reset movement state
            windowPosition = windowPosition + 10;  // Reverse (lower the window) by 10% for a safety margin
            if (windowPosition > 100) windowPosition = 100;  // Clamp to the fully-down limit
            write("DoorModule: Window jam detected! Auto-reversing");

            // Set fault flag to indicate anomaly (for diagnostic purposes)
            faultStatus = 1;  // This can be read via diagnostic DID later
        }
        else
        {
            // Normal movement: continue raising window
            windowPosition = windowPosition - 1;  // Move 1% per 20ms tick (~2 seconds for full travel)
            if (windowPosition <= 0)
            {
                // Reached fully closed position
                windowPosition = 0;
                windowMovement = 0;  // Stop movement
                write("DoorModule: Window fully up");
            }
        }
    }
    // Note: the moving-down branch (windowMovement == 2) needs similar logic; see the sketch after this block

    // Continue timer if window is still moving
    // This creates smooth, continuous movement over ~2 seconds
    if (windowMovement != 0)
    {
        setTimer(windowTimer, 20);  // Next update in 20ms
    }
}
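
The moving-down branch can be sketched as follows. It slots into the same on timer windowTimer handler, as an else-if after the moving-up branch; anti-pinch is only relevant on the way up, so this branch simply moves until the lower limit is reached:

    else if (windowMovement == 2)  // Moving down
    {
        windowPosition = windowPosition + 1;  // 1% per 20ms tick (~2 seconds for full travel)
        if (windowPosition >= 100)
        {
            windowPosition = 100;   // Reached fully open position
            windowMovement = 0;     // Stop movement
            write("DoorModule: Window fully down");
        }
    }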

Adding Anti-Pinch Test Case in Test Module

variables
{
    byte faultStatus = 0;  // Mirrors DoorStatus byte 3 (test-module variables are NOT shared with the simulation node)
}

on message DoorStatus
{
    // Capture the fault flag from the status broadcast
    // (assumes the simulation node also reports faultStatus in DoorStatus byte 3)
    faultStatus = this.byte(3);
}

testcase TC_WindowJamTest()
{
    dword checkId, stimId;

    TestCaseTitle("TC 3.0", "Window Anti-Pinch Test");
    TestCaseDescription("Test that window stops and reverses when jam is detected");

    // Setup: Create monitoring check
    // This check runs in the background, monitoring the FaultStatus signal
    // It fails the test if FaultStatus reports a value outside the defined range (0-1)
    checkId = ChkCreate_MsgSignalValueRangeViolation(
        DoorStatus, DoorStatus::FaultStatus, 0, 1
    );
    TestAddConstraint(checkId);  // Activate the constraint

    // Stimulus: Generate test scenario
    // Use TSL Stimulus to send window up command
    // This simulates user holding window control button
    TestStep(1, "Stimulus", "Generate window up command with simulated obstacle");
    stimId = StmCreate_Toggle(
        WindowCmd, WindowCmd::Command,
        0, 2, 500, 1  // Value 2 = window up, toggle once, 500ms interval
    );
    StmControl_Start(stimId);  // Begin stimulus generation

    // Wait for the anti-pinch system to trigger
    // Based on simulation logic, jam should be detected within 3 seconds
    TestStep(1, "Monitor", "Wait for jam detection (max 3000ms)");
    TestWaitForTimeout(3000);  // Give enough time for random jam detection

    // Verification: Check if system responded correctly
    TestStep(2, "Verify", "Check if fault status is set");
    if (faultStatus == 1)
    {
        // Fault status was set = anti-pinch system detected the jam
        TestStepPass("Jam detection working correctly");
    }
    else
    {
        // Fault status not set = system failed to detect jam
        TestStepFail("Jam detection not triggered");
    }

    // Cleanup: Stop and destroy TSL resources
    // Important: Always clean up to avoid resource leaks
    StmControl_Stop(stimId);        // Stop stimulus generation
    StmControl_Destroy(stimId);     // Free stimulus resources
    TestRemoveConstraint(checkId);  // Deactivate check
    ChkControl_Destroy(checkId);    // Free check resources
}

Enhancement Two: Diagnostic Function Introduction

Diagnostics are an important part of automotive electronics. Let's add a simple diagnostic function.

Adding Diagnostic Response in Simulation Node

// Diagnostic request handler for DID (Data Identifier) read
// This allows external diagnostic tools to read ECU internal data
on diagRequest DoorModule_DID
{
    // Create a response object derived from the incoming request;
    // it will be sent back to the diagnostic tester
    diagResponse this response;

    // Pack current ECU state into diagnostic response
    // DID format follows UDS (Unified Diagnostic Services) standard
    // Each byte has specific meaning defined in diagnostic specification

    // Byte 0: Door lock state
    // 0x00 = unlocked, 0x01 = locked
    response.byte(0) = doorLockState;

    // Byte 1: Window position (0-100%)
    response.byte(1) = windowPosition;

    // Byte 2: Window movement state
    // 0x00 = stopped, 0x01 = moving up, 0x02 = moving down
    response.byte(2) = windowMovement;

    // Byte 3: Fault status
    // 0x00 = no fault, 0x01 = fault detected (e.g., window jam)
    response.byte(3) = faultStatus;

    // Send response back to diagnostic tester
    // This completes the diagnostic transaction
    diagSendResponse(response);

    write("DoorModule: DID 0xF101 data sent to diagnostic tool");
}

Testing Diagnostic Function

testcase TC_DiagnosticTest()
{
    TestCaseTitle("TC 4.0", "Diagnostic Communication Test");
    TestCaseDescription("Test diagnostic DID read functionality");

    // Construct a diagnostic request following the UDS protocol
    // 0x22 is the UDS service "Read Data By Identifier"
    // (with a full diagnostic description, these bytes would come from the database)
    diagRequest DoorModule_DID request;
    request.byte(0) = 0x22;  // UDS service: Read DID
    request.byte(1) = 0xF1;  // High byte of DID 0xF101
    request.byte(2) = 0x01;  // Low byte of DID 0xF101

    // Send the request to the ECU via the diagnostic channel
    diagSendRequest(request);

    // Wait for ECU to respond (timeout: 2000ms)
    // This is a blocking call - test pauses until response or timeout
    // Note: the first parameter is the request whose response we are waiting for
    long result;
    result = TestWaitForDiagResponse(request, 2000);

    // Verify response
    if (result == 1)
    {
        // Response received successfully
        TestStepPass("Diagnostic response received");
        // Additional verification could check response data here
        // e.g., check that doorLockState matches expected value
    }
    else
    {
        // No response or timeout occurred
        TestStepFail("No diagnostic response within 2000ms");
    }
}

Enhancement Three: More Complex Test Scenarios

Using TSL's Checks and Stimulus, we can create more complex tests:

testcase TC_PerformanceTest()
{
    dword checkId, stimId;

    TestCaseTitle("TC 5.0", "System Performance Test");
    TestCaseDescription("Test system behavior under continuous operations for 15 seconds");

    // Setup: Create cycle time monitoring check
    // This verifies that DoorStatus messages are sent at consistent intervals
    // Requirement: Every 500ms ± 50ms (450-550ms range)
    checkId = ChkCreate_MsgAbsCycleTimeViolation(
        DoorStatus, 450, 550  // Min/Max cycle time in milliseconds
    );
    TestAddConstraint(checkId);  // Activate background monitoring

    // Create stimulus: continuous window operations
    // This simulates user repeatedly pressing window control buttons
    // Toggle between window up (2) and window down (1) every 1 second
    // Total of 10 operations = 10 seconds of stimulus
    stimId = StmCreate_Toggle(
        WindowCmd, WindowCmd::Command,
        1, 2, 1000, 10  // Value1=down, Value2=up, Interval=1000ms, Count=10
    );

    // Execute stress test: run for 15 seconds total
    // This gives enough time for 10 toggle operations plus observation
    StmControl_Start(stimId);  // Begin generating stimulus
    TestStep(1, "Monitor", "Running continuous operations (15s)");
    TestWaitForTimeout(15000);  // Let the system run under load
    StmControl_Stop(stimId);   // Stop stimulus generation

    // Verification: Check if cycle time was maintained under load
    TestStep(2, "Verify", "Check if status messages maintained cycle time");
    // The check (constraint) automatically validates this
    // It has been monitoring cycle time throughout the 15-second test
    TestStepPass("Performance test completed - cycle time within spec");

    // Cleanup: Always release TSL resources
    TestRemoveConstraint(checkId);  // Deactivate cycle time check
    ChkControl_Destroy(checkId);    // Free check resources
    StmControl_Destroy(stimId);     // Free stimulus resources
}

Project Summary: Integrated Application of Knowledge Points

Through this complete "Door Module" project, we have achieved a leap from zero to a complete system.

Complete Project Flow

[Phase 1: Basic Learning]
Article 1 → What is CAPL?          (Knowledge building)
Article 2 → Development Environment Setup (Tool usage)
Article 3 → Programming Basics      (Grammar mastery)
Article 4 → Core Interactions       (Event mechanisms)
Article 5 → Debugging Techniques    (Problem solving)

[Phase 2: Practical Application]
Article 6 → Simulation Node Development (Building ECU models)
Article 7 → Test Module Development    (Automated verification)
Article 8 → Comprehensive Project      (System integration) ← This Article

Knowledge Point Mapping

Let's look at each knowledge point used in the project:

Knowledge Point | Corresponding Article | Application in Project
Variables and Functions | Article 3 | doorLockState, windowPosition variables
Event-Driven | Article 4 | on message, on timer events
Message Processing | Article 4 | this.byte(0) accessing message data
Debug Output | Article 5 | write() recording state changes
State Machine | Article 6 | windowMovement state management
Timers | Article 6 | windowTimer implementing automatic behavior
Test Cases | Article 7 | testcase TC_* organizing tests
Test Steps | Article 7 | TestStepPass/Fail determining results
TSL Checks | Article 7 | ChkCreate_* monitoring signals
TSL Stimulus | Article 7 | StmCreate_* generating test data
System Integration | Article 8 | Integrated simulation + testing operation

Engineering Thinking: Complete Development Workflow

Requirements Analysis → Design Architecture → Implementation → Simulation Verification → Automated Testing → Bug Fixes → Regression Testing
(define features → state machine design → simulation node → manual debugging → test module → bug fixes → continuous integration)

This project demonstrates this thinking:

  1. Requirements: Door controller functional requirements (start of Article 6)
  2. Design: State machine, message interface (Article 6)
  3. Implementation: CAPL code writing (Article 6)
  4. Simulation: Verify functional correctness (Article 6 + Article 8 manual testing)
  5. Testing: Automated verification (Article 7 + Article 8 automated testing)
  6. Enhancement: Add anti-pinch, diagnostics (Article 8 extensions)
  7. Regression: Re-run tests to ensure stability (Article 8 summary)

Key Learnings

  1. Dual Perspective:

    • Developer perspective: Focus on how to implement functionality (simulation node)
    • Tester perspective: Focus on how to verify functionality (test module)
  2. Message Roles:

    • The same message plays different roles in different contexts
    • Understanding this transformation is key to mastering CAPL
  3. Engineering:

    • From code to system: From functions to complete project
    • From manual to automated: Improving testing efficiency and quality
  4. Scalability:

    • After building the basic framework, new features can be easily added
    • Test cases can expand as functionality increases

Course Summary and Advanced Directions

Congratulations on completing the entire CAPL technical tutorial series!

Series Review

We started from zero basics and gradually built a complete knowledge system:

  1. Knowledge Building: Understanding what CAPL is and why we need it
  2. Tool Mastery: Familiarizing with CAPL Browser and development environment
  3. Grammar Basics: Mastering CAPL language features
  4. Core Mechanisms: Understanding event-driven programming
  5. Debugging Ability: Learning to troubleshoot problems
  6. Practical Application: Building simulation nodes
  7. Quality Assurance: Writing automated tests
  8. System Integration: Integrating development and testing

CAPL Learning Path Recommendations

[Beginner Stage]
✓ This tutorial series (Articles 1-8)
✓ Official documentation reading
✓ Example project analysis

[Advanced Stage]
→ Complex Protocols: CANopen, J1939, FlexRay
→ Diagnostic Services: UDS, KWP2000, OBD-II
→ Ethernet: SOME/IP, AVB
→ V2X Communication: Vehicle networking protocols

[Expert Stage]
→ Integration with other tools: CANalyzer, CANape
→ Test Automation: CI/CD integration
→ Custom Tools: Extending CAPL functionality
→ Performance Optimization: Large-scale system engineering

Advanced Directions in Detail

  1. Diagnostic Service Development

    • Deep dive into on diagRequest events
    • UDS service implementation (Read DID, Write DID, Clear DTC)
    • Complex diagnostic sequences and conditional logic
  2. Complex Network Simulation

    • Multi-ECU interaction simulation
    • Gateway node development
    • Network topology and load simulation
  3. Automated Test Bench

    • Large-scale test case management
    • Test report analysis and visualization
    • Automated testing in Continuous Integration (CI)
  4. Tool Integration

    • Co-simulation with MATLAB/Simulink
    • Version control (Git) integration
    • Custom test tool development

Closing Remarks

CAPL is a practical skill that enables you to:

  • Quickly Verify Ideas: Through simulation-based rapid prototyping
  • Improve Testing Efficiency: Automated testing reduces repetitive work
  • Ensure Product Quality: Systematic verification processes
  • Deepen System Understanding: Understanding automotive networks from the code level

Most importantly, this tutorial demonstrates through the concrete case of "Door Module" how to integrate scattered knowledge points into a complete engineering solution.

Continue Practicing: Try building simulation and testing for other ECUs (such as engine management, air conditioning control).

Keep Learning: Follow Vector's official documentation updates to learn new protocols and features.

Share and Communicate: Share your project experiences with other engineers to grow together.


Exercises

  1. Basic Exercise: Add a new feature to DoorModule—seat adjustment control (position, memory). Write corresponding simulation and testing code.

  2. Advanced Exercise: Use TSL to create a "stress test": continuously send 1000 window up/down commands and check whether the system develops performance issues or memory leaks (a starting sketch follows after this list).

  3. Challenge Exercise: Implement a simple "learning mode": have the simulation node record user's window usage habits (frequency of movements, final positions), and verify in the test module that these habit data are correctly saved.
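
For Exercise 2, a hypothetical starting point (the raw WindowCmd ID and the pacing follow this article's conventions; replacing the plain loop with a TSL Stimulus is part of the exercise):

testcase TC_StressTest()
{
    int i;
    message 0x202 winCmd;  // WindowCmd by raw ID

    TestCaseTitle("TC 6.0", "Window Command Stress Test");

    for (i = 0; i < 1000; i++)
    {
        if (i % 2 == 0)
            winCmd.byte(0) = 0x02;   // window up
        else
            winCmd.byte(0) = 0x01;   // window down
        output(winCmd);
        TestWaitForTimeout(20);      // ~20ms between commands
    }

    TestStepPass("1000 commands sent - inspect the Trace window and cycle-time checks for anomalies");
}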