write-unit-tests

When to Use

Use this command when you need to:

  • Write unit tests for C# classes and methods using xUnit
  • Create comprehensive test coverage following best practices
  • Set up test projects with proper structure
  • Implement AAA (Arrange-Act-Assert) pattern tests
  • Handle mocking, stubbing, and test data management
  • Follow naming conventions and testing best practices

Prerequisites

  • xUnit packages installed (xunit, xunit.runner.visualstudio, Microsoft.NET.Test.Sdk)
  • Test project exists or needs to be created (.Tests suffix convention)
  • Code to be tested is available and well-structured
  • Moq or similar mocking framework for dependencies
  • FluentAssertions for better assertion syntax (recommended)

Execution Steps

Step 1: Analyze Code to Test

Examine the class/method that needs testing:

Identify:

  • Class name and namespace
  • Public methods to test
  • Dependencies (interfaces, services) that need mocking
  • Constructor parameters
  • Expected behaviors and edge cases
  • Return types and exceptions

Check existing tests:

  • Search for existing test files: grep -r "ClassName" src/*.Tests/ --include="*.cs"
  • Determine what tests are missing
  • Review test coverage gaps

Step 2: Set Up Test Project Structure

If test project doesn't exist, create it:

Create test project:

dotnet new xunit -n Managing.Application.Tests
dotnet add Managing.Application.Tests/Managing.Application.Tests.csproj reference Managing.Application/Managing.Application.csproj

Add required packages (the xunit template already references xunit, xunit.runner.visualstudio, and Microsoft.NET.Test.Sdk; add those only if missing):

dotnet add Managing.Application.Tests package xunit
dotnet add Managing.Application.Tests package xunit.runner.visualstudio
dotnet add Managing.Application.Tests package Microsoft.NET.Test.Sdk
dotnet add Managing.Application.Tests package Moq
dotnet add Managing.Application.Tests package FluentAssertions
dotnet add Managing.Application.Tests package AutoFixture

Step 3: Create Test Class Structure

Naming Convention:

  • Test class: [ClassName]Tests (e.g., TradingBotBaseTests)
  • Test method: [MethodName]_[Scenario]_[ExpectedResult] (e.g., Start_WithValidConfig_CallsLoadAccount)

File Structure:

src/
├── Managing.Application.Tests/
│   ├── TradingBotBaseTests.cs
│   ├── Services/
│   │   └── AccountServiceTests.cs
│   └── Helpers/
│       └── TradingBoxTests.cs

Step 4: Implement Test Methods (AAA Pattern)

For each test method, work through the three phases below; a complete example follows the list:

Arrange (Setup)

  • Create mock objects for dependencies
  • Set up test data and expected values
  • Configure mock behavior
  • Initialize system under test (SUT)

Act (Execute)

  • Call the method being tested
  • Capture results or exceptions
  • Execute the behavior to test

Assert (Verify)

  • Verify the expected outcome
  • Check return values, property changes, or exceptions
  • Verify interactions with mocks
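
Putting the three phases together, a minimal sketch (DiscountService and IPriceProvider are hypothetical stand-ins, not part of the codebase):

[Fact]
public void GetPrice_WithActiveDiscount_AppliesDiscountToBasePrice()
{
    // Arrange: mock the dependency, prepare data, create the SUT
    var priceProviderMock = new Mock<IPriceProvider>();
    priceProviderMock.Setup(x => x.GetBasePrice("SKU-1")).Returns(100m);
    var sut = new DiscountService(priceProviderMock.Object, discountRate: 0.1m);

    // Act: execute the behavior under test
    var price = sut.GetPrice("SKU-1");

    // Assert: verify the outcome and the interaction
    price.Should().Be(90m);
    priceProviderMock.Verify(x => x.GetBasePrice("SKU-1"), Times.Once);
}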

Step 5: Write Comprehensive Test Cases

Happy Path Tests:

  • Test normal successful execution
  • Verify expected return values
  • Check side effects on dependencies

Edge Cases:

  • Null/empty parameters
  • Boundary values
  • Invalid inputs

Error Scenarios:

  • Expected exceptions
  • Error conditions
  • Failure paths

Integration Points:

  • Verify correct interaction with dependencies
  • Test data flow through interfaces
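
As an illustration of the edge-case category above, a boundary-value test might look like this (PositionSizer is a hypothetical class used only for the example):

[Theory]
[InlineData(0)]   // lower boundary
[InlineData(-1)]  // just below the boundary
public void CalculateSize_WithNonPositiveBalance_ReturnsZero(decimal balance)
{
    // Arrange
    var sut = new PositionSizer();

    // Act
    var size = sut.CalculateSize(balance);

    // Assert
    size.Should().Be(0);
}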

Step 6: Handle Mocking and Stubbing

Using Moq:

// Arrange
var mockLogger = new Mock<ILogger<TradingBotBase>>();
var mockScopeFactory = new Mock<IServiceScopeFactory>();
// Note: LogInformation is an extension method, so Moq cannot Setup or Verify it
// directly; configure and verify the underlying ILogger.Log call instead

// Act
var bot = new TradingBotBase(mockLogger.Object, mockScopeFactory.Object, config);

// Assert
mockLogger.Verify(
    x => x.Log(
        LogLevel.Information,
        It.IsAny<EventId>(),
        It.IsAny<It.IsAnyType>(),
        It.IsAny<Exception>(),
        (Func<It.IsAnyType, Exception, string>)It.IsAny<object>()),
    Times.Once);

Set up common mock configurations (illustrated in the sketch after this list):

  • Logger mocks (verify logging calls)
  • Service mocks (setup return values)
  • Repository mocks (setup data access)
  • External service mocks (simulate API responses)
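
Representative setups for these categories (reusing mock names from the examples in this document; _exchangeClientMock and its interface are hypothetical):

// Service mock: return a canned value
_accountServiceMock.Setup(x => x.GetAccountByAccountNameAsync(It.IsAny<string>(), It.IsAny<bool>(), It.IsAny<bool>()))
                   .ReturnsAsync(new Account { Name = "test-account" });

// Repository mock: simulate data access
_repositoryMock.Setup(x => x.GetByNameAsync("test-account"))
               .ReturnsAsync(new Account { Name = "test-account" });

// External service mock: simulate an API failure
_exchangeClientMock.Setup(x => x.GetTickerAsync(It.IsAny<string>()))
                   .ThrowsAsync(new HttpRequestException("Exchange unavailable"));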

Step 7: Implement Test Data Management

Test Data Patterns:

  • Inline test data for simple tests
  • Private methods for complex test data setup
  • Test data builders for reusable scenarios (sketched after this list)
  • Theory data for parameterized tests
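
A minimal test data builder following the pattern above (AccountBuilder is illustrative; the Balance property is an assumption about the Account type):

public class AccountBuilder
{
    private string _name = "test-account";
    private decimal _balance = 1000m; // Balance is assumed for illustration

    public AccountBuilder WithName(string name) { _name = name; return this; }
    public AccountBuilder WithBalance(decimal balance) { _balance = balance; return this; }

    public Account Build() => new Account { Name = _name, Balance = _balance };
}

// Usage: var emptyAccount = new AccountBuilder().WithBalance(0m).Build();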

Using AutoFixture:

private readonly IFixture _fixture = new Fixture();

[Fact]
public async Task Start_WithValidConfig_SetsPropertiesCorrectly()
{
    // Arrange
    var config = _fixture.Create<TradingBotConfig>();
    var bot = new TradingBotBase(_loggerMock.Object, _scopeFactoryMock.Object, config);

    // Act
    await bot.Start(BotStatus.Saved);

    // Assert
    bot.Config.Should().Be(config);
}

Step 8: Add Proper Assertions

Using FluentAssertions:

// Value assertions
result.Should().Be(expectedValue);
result.Should().BeGreaterThan(0);
result.Should().NotBeNull();

// Collection assertions
positions.Should().HaveCount(1);
positions.Should().ContainSingle();

// Exception assertions (FluentAssertions style)
Func<Task> act = () => method.CallAsync();
await act.Should().ThrowAsync<ArgumentException>();

Common Assertion Types:

  • Equality: Should().Be(), Should().BeEquivalentTo()
  • Null checks: Should().NotBeNull(), Should().BeNull()
  • Collections: Should().HaveCount(), Should().Contain()
  • Exceptions: Should().Throw<>, Should().NotThrow()
  • Types: Should().BeOfType<>, Should().BeAssignableTo<>()
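
Note the difference between the two equality assertions: Be() uses Equals (reference equality for most classes), while BeEquivalentTo() compares member by member:

var expected = new Account { Name = "test-account" };
var actual = new Account { Name = "test-account" };

actual.Should().BeEquivalentTo(expected); // passes: same member values
// actual.Should().Be(expected);          // would fail unless Account overrides Equals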

Step 9: Handle Async Testing

Async Test Methods:

[Fact]
public async Task LoadAccount_WhenCalled_LoadsAccountFromService()
{
    // Arrange
    var expectedAccount = _fixture.Create<Account>();
    _accountServiceMock.Setup(x => x.GetAccountByAccountNameAsync(It.IsAny<string>(), It.IsAny<bool>(), It.IsAny<bool>()))
                      .ReturnsAsync(expectedAccount);

    // Act
    await _bot.LoadAccount();

    // Assert
    _bot.Account.Should().Be(expectedAccount);
}

Async Exception Testing:

[Fact]
public async Task LoadAccount_WithInvalidAccountName_ThrowsArgumentException()
{
    // Arrange
    _accountServiceMock.Setup(x => x.GetAccountByAccountNameAsync("InvalidName", It.IsAny<bool>(), It.IsAny<bool>()))
                      .ThrowsAsync(new ArgumentException("Account not found"));

    // Act & Assert
    await Assert.ThrowsAsync<ArgumentException>(() => _bot.LoadAccount());
}

Step 10: Add Theory Tests for Multiple Scenarios

Parameterized Tests:

[Theory]
[InlineData(BotStatus.Saved, "🚀 Bot Started Successfully")]
[InlineData(BotStatus.Stopped, "🔄 Bot Restarted")]
public async Task Start_WithDifferentPreviousStatuses_LogsCorrectMessage(BotStatus previousStatus, string expectedMessage)
{
    // Arrange
    _config.IsForBacktest = false;

    // Act
    await _bot.Start(previousStatus);

    // Assert
    _loggerMock.Verify(
        x => x.Log(
            LogLevel.Information,
            It.IsAny<EventId>(),
            It.Is<It.IsAnyType>((v, t) => v.ToString().Contains(expectedMessage)),
            It.IsAny<Exception>(),
            (Func<It.IsAnyType, Exception, string>)It.IsAny<object>()),
        Times.Once);
}
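
For test data that cannot be expressed as compile-time constants, [MemberData] can supply cases from a static member instead of [InlineData]:

public static IEnumerable<object[]> StartCases =>
    new List<object[]>
    {
        new object[] { BotStatus.Saved, "🚀 Bot Started Successfully" },
        new object[] { BotStatus.Stopped, "🔄 Bot Restarted" },
    };

[Theory]
[MemberData(nameof(StartCases))]
public async Task Start_WithDifferentPreviousStatuses_LogsCorrectMessage(BotStatus previousStatus, string expectedMessage)
{
    // Arrange
    _config.IsForBacktest = false;

    // Act
    await _bot.Start(previousStatus);

    // Assert: same logger verification as the [InlineData] version above
}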

Step 11: Implement Test Cleanup and Disposal

Test Cleanup:

public class TradingBotBaseTests : IDisposable
{
    private readonly MockRepository _mockRepository;

    public TradingBotBaseTests()
    {
        _mockRepository = new MockRepository(MockBehavior.Strict);
        // Setup mocks
    }

    public void Dispose()
    {
        _mockRepository.VerifyAll();
    }
}

Reset State Between Tests:

  • Clear static state
  • Reset mock configurations
  • Clean up test data
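
xUnit already creates a fresh instance of the test class for every test, so instance fields reset automatically; for asynchronous setup and teardown, implement IAsyncLifetime. A minimal sketch (CreateBotWithTestConfigAsync and Stop are hypothetical):

public class TradingBotBaseTests : IAsyncLifetime
{
    private TradingBotBase _bot;

    public async Task InitializeAsync()
    {
        // Runs before each test
        _bot = await CreateBotWithTestConfigAsync(); // hypothetical helper
    }

    public async Task DisposeAsync()
    {
        // Runs after each test
        await _bot.Stop(); // hypothetical cleanup method
    }
}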

Step 12: Run and Verify Tests

Run tests:

dotnet test src/Managing.Application.Tests/Managing.Application.Tests.csproj

Check coverage (requires the coverlet.msbuild package):

dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura

Verify test results:

  • All tests pass
  • No unexpected exceptions
  • Coverage meets requirements (typically >80%)

Step 13: Analyze Test Failures for Business Logic Issues

When tests fail unexpectedly, it may indicate business logic problems:

Create TODO.md Analysis:

# Document test failures that reveal business logic issues
# Analyze whether failures indicate bugs in implementation vs incorrect test assumptions

Key Indicators of Business Logic Issues:

  • Tests fail because actual behavior differs significantly from expected behavior
  • Core business calculations (P&L, fees, volumes) return incorrect values
  • Edge cases reveal fundamental logic flaws
  • Multiple related tests fail with similar patterns

Business Logic Failure Patterns:

  • Zero Returns: Methods return 0 when they should return calculated values
  • Null Returns: Methods return null when valid data is provided
  • Incorrect Calculations: Mathematical results differ from expected formulas
  • Validation Failures: Valid inputs are rejected or invalid inputs are accepted

Create TODO.md when:

  • Tests reveal potential bugs in business logic
  • Multiple tests fail with similar calculation errors
  • Core business metrics are not working correctly
  • Implementation behavior differs from business requirements

TODO.md Structure:

# [Component] Unit Tests - Business Logic Issues Analysis

## Test Results Summary
**Total Tests:** X
- **Passed:** Y ✅
- **Failed:** Z ❌

## Failed Test Categories & Potential Business Logic Issues
[List specific failing tests and analyze root causes]

## Business Logic Issues Identified
[Critical, Medium, Low priority issues]

## Recommended Actions
[Immediate fixes, investigation steps, test updates needed]

Best Practices for Unit Testing

Test Naming

  • ✅ Good: [MethodName]_[Scenario]_[ExpectedResult]
  • ❌ Avoid: Test1, MethodTest, CheckIfWorks

Test Structure

  • One logical assertion per test (Single Responsibility)
  • Clear Arrange-Act-Assert sections
  • Descriptive variable names

Mock Usage

  • Mock interfaces, not concrete classes
  • Verify important interactions
  • Avoid over-mocking (test behavior, not implementation)

Test Data

  • Use realistic test data
  • Test boundary conditions
  • Use factories for complex objects

Coverage Goals

  • Aim for >80% line coverage
  • Cover all public methods
  • Test error paths and edge cases

Test Organization

  • Group related tests in classes
  • Use base classes for common setup (see the sketch after this list)
  • Separate integration tests from unit tests
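
A shared base class can centralize mock creation; a minimal sketch following the names used in this document:

public abstract class BotTestBase
{
    protected readonly Mock<ILogger<TradingBotBase>> LoggerMock = new();
    protected readonly Mock<IServiceScopeFactory> ScopeFactoryMock = new();

    protected TradingBotBase CreateBot(TradingBotConfig config) =>
        new TradingBotBase(LoggerMock.Object, ScopeFactoryMock.Object, config);
}

public class TradingBotBaseTests : BotTestBase
{
    // Tests call CreateBot(...) instead of repeating constructor wiring
}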

Common Testing Patterns

Service Layer Testing

[Fact]
public async Task GetAccountByName_WithValidName_ReturnsAccount()
{
    // Arrange
    var accountName = "test-account";
    var expectedAccount = new Account { Name = accountName };
    _repositoryMock.Setup(x => x.GetByNameAsync(accountName))
                  .ReturnsAsync(expectedAccount);

    // Act
    var result = await _accountService.GetAccountByNameAsync(accountName);

    // Assert
    result.Should().Be(expectedAccount);
}

Repository Testing

[Fact]
public async Task SaveAsync_WithValidEntity_CallsSaveOnContext()
{
    // Arrange
    var entity = _fixture.Create<Account>();

    // Act
    await _repository.SaveAsync(entity);

    // Assert
    _contextMock.Verify(x => x.SaveChangesAsync(It.IsAny<CancellationToken>()), Times.Once);
}

Validation Testing

[Theory]
[InlineData(null)]
[InlineData("")]
[InlineData("   ")]
public async Task CreateAccount_WithInvalidName_ThrowsValidationException(string invalidName)
{
    // Arrange
    var request = new CreateAccountRequest { Name = invalidName };

    // Act & Assert
    await Assert.ThrowsAsync<ValidationException>(() => _accountService.CreateAccountAsync(request));
}

Error Handling

If test project creation fails:

  • Check .NET SDK installation
  • Verify project name follows conventions
  • Check for existing project conflicts

If package installation fails:

  • Clear NuGet cache: dotnet nuget locals all --clear
  • Check network connectivity
  • Verify package names and versions

If tests fail:

  • Debug individual test methods
  • Check mock configurations
  • Verify test data setup
  • Review assertion logic

If code changes break tests:

  • Update test expectations
  • Modify test data if needed
  • Review if behavior changes are intentional

Example Execution

User input: Create unit tests for TradingBotBase.Start() method

AI execution:

  1. Analyze code:

    • TradingBotBase class with Start(BotStatus) method
    • Dependencies: ILogger, IServiceScopeFactory
    • Different behaviors based on BotStatus parameter
  2. Create test class:

    public class TradingBotBaseTests
    {
        private readonly Mock<ILogger<TradingBotBase>> _loggerMock;
        private readonly Mock<IServiceScopeFactory> _scopeFactoryMock;
        private readonly TradingBotConfig _config;
    
        public TradingBotBaseTests()
        {
            _loggerMock = new Mock<ILogger<TradingBotBase>>();
            _scopeFactoryMock = new Mock<IServiceScopeFactory>();
            _config = new TradingBotConfig { IsForBacktest = false };
        }
    }
    
  3. Write individual tests:

    [Fact]
    public async Task Start_WithSavedStatus_LoadsAccountAndLogsStartupMessage()
    {
        // Arrange
        var bot = new TradingBotBase(_loggerMock.Object, _scopeFactoryMock.Object, _config);
    
        // Act
        await bot.Start(BotStatus.Saved);
    
        // Assert
        _loggerMock.Verify(
            x => x.Log(
                LogLevel.Information,
                It.IsAny<EventId>(),
                It.Is<It.IsAnyType>((v, t) => v.ToString().Contains("🚀 Bot Started Successfully")),
                It.IsAny<Exception>(),
                (Func<It.IsAnyType, Exception, string>)It.IsAny<object>()),
            Times.Once);
    }
    
  4. Add edge cases:

    [Fact]
    public async Task Start_WithBacktestConfig_SkipsAccountLoading()
    {
        // Arrange
        _config.IsForBacktest = true;
        var bot = new TradingBotBase(_loggerMock.Object, _scopeFactoryMock.Object, _config);
    
        // Act
        await bot.Start(BotStatus.Saved);
    
        // Assert
        bot.Account.Should().BeNull();
    }
    
  5. Run tests and verify:

    dotnet test --filter "TradingBotBaseTests"
    

Important Notes

  • AAA Pattern: Arrange-Act-Assert structure for clarity
  • Single Responsibility: One concept per test
  • Descriptive Names: Method_Scenario_Result naming convention
  • Mock Dependencies: Test in isolation
  • Realistic Data: Use meaningful test values
  • Async Testing: Use async Task for async methods
  • Theory Tests: Use [Theory] for multiple scenarios
  • ⚠️ Avoid Over-Mocking: Don't mock everything
  • ⚠️ Integration Tests: Separate from unit tests
  • 📦 Test Packages: xunit, Moq, FluentAssertions, AutoFixture
  • 🎯 Coverage: Aim for >80% coverage
  • 🔧 Build Tests: dotnet test command