Testing Guide¶
This guide covers test organization, execution, and best practices for the OpenEV Data API project.
Test Structure¶
```
tests/
├── unit/                       # Unit tests mirroring source structure
│   ├── ev_core/
│   │   ├── domain/
│   │   │   ├── battery_test.rs
│   │   │   ├── body_test.rs
│   │   │   ├── charging_test.rs
│   │   │   ├── enums_test.rs
│   │   │   ├── metadata_test.rs
│   │   │   ├── powertrain_test.rs
│   │   │   ├── range_test.rs
│   │   │   ├── sources_test.rs
│   │   │   ├── types_test.rs
│   │   │   └── vehicle_test.rs
│   │   ├── error_test.rs
│   │   └── validation_test.rs
│   ├── ev_etl/
│   │   ├── ingest/
│   │   │   ├── parser_test.rs
│   │   │   └── reader_test.rs
│   │   ├── merge/
│   │   │   └── strategy_test.rs
│   │   ├── output/
│   │   │   ├── csv_test.rs
│   │   │   ├── json_test.rs
│   │   │   ├── postgresql_test.rs
│   │   │   ├── sqlite_test.rs
│   │   │   ├── statistics_test.rs
│   │   │   └── xml_test.rs
│   │   ├── cli_test.rs
│   │   ├── error_test.rs
│   │   └── validate_test.rs
│   └── ev_server/
│       ├── api/
│       │   ├── health_test.rs
│       │   ├── makes_test.rs
│       │   ├── search_test.rs
│       │   └── vehicles_test.rs
│       ├── db/
│       │   └── sqlite_test.rs
│       ├── config_test.rs
│       ├── error_test.rs
│       └── models_test.rs
├── e2e/                        # End-to-end API tests (one file per endpoint)
│   ├── health/
│   │   └── health_check_test.rs
│   ├── makes/
│   │   ├── list_makes_test.rs
│   │   └── list_models_test.rs
│   ├── search/
│   │   └── search_vehicles_test.rs
│   └── vehicles/
│       ├── list_vehicles_test.rs
│       ├── get_vehicle_test.rs
│       ├── get_vehicle_by_code_test.rs
│       └── get_vehicle_variants_test.rs
├── integration/                # Integration tests (cross-component)
│   ├── etl_pipeline_test.rs
│   ├── database_test.rs
│   └── api_database_test.rs
└── others/                     # Other tests
    └── fixtures/
        └── sample_data.json
```
Test Categories¶
Unit Tests (tests/unit/)¶
Test individual functions and modules in isolation.
Structure: Mirrors the `crates/` directory structure. Each source file has a corresponding `_test.rs` file.
Coverage target: 100% of public functions and critical private functions.
Example:
```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_parse_vehicle_json() {
        let json = r#"{"make": {"slug": "tesla"}}"#;
        let result = parse_vehicle(json);
        assert!(result.is_ok());
    }
}
```
E2E Tests (tests/e2e/)¶
Test complete API workflows from request to response.
Structure: Organized by endpoint group (health/, makes/, search/, vehicles/). Each endpoint has its own test file testing all success and error scenarios.
Required scenarios per endpoint:

- ✅ Success response (200)
- ❌ Not Found (404) where applicable
- ❌ Bad Request (400) where applicable
- ❌ Internal Error (500) simulation
Example:
```rust
#[tokio::test]
async fn test_get_vehicle_by_code_success() {
    let response = client.get("/api/v1/vehicles/code/tesla:model_3:2024:model_3").await;
    assert_eq!(response.status(), 200);
}

#[tokio::test]
async fn test_get_vehicle_by_code_not_found() {
    let response = client.get("/api/v1/vehicles/code/invalid:code:2024:unknown").await;
    assert_eq!(response.status(), 404);

    let body: ProblemDetails = response.json().await;
    assert!(body.error_type.contains("not-found"));
}
```
Integration Tests (tests/integration/)¶
Test component interactions and data flows.
Focus:

- ETL pipeline end-to-end
- Database + API integration
- Cross-crate functionality
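As a rough illustration of what "cross-component" means here, the sketch below chains an ingest step into a merge step and asserts on the combined result. The `ingest` and `merge` functions are stand-ins invented for this example; they are not the real `ev_etl` API.

```rust
// Hypothetical stand-in for the ingest stage: one record per line.
fn ingest(raw: &str) -> Vec<String> {
    raw.lines()
        .map(|l| l.trim().to_string())
        .filter(|l| !l.is_empty())
        .collect()
}

// Hypothetical stand-in for the merge stage: combine and deduplicate.
fn merge(mut a: Vec<String>, b: Vec<String>) -> Vec<String> {
    a.extend(b);
    a.sort();
    a.dedup();
    a
}

#[cfg(test)]
mod tests {
    use super::*;

    // Integration-style test: exercises ingest -> merge as one flow
    // rather than each stage in isolation.
    #[test]
    fn test_etl_pipeline_merges_both_sources() {
        let source_a = ingest("tesla:model_3\n");
        let source_b = ingest("tesla:model_3\nnissan:leaf\n");
        let merged = merge(source_a, source_b);
        assert_eq!(merged.len(), 2, "duplicates should be collapsed during merge");
    }
}
```

The point is the shape of the test, not the logic: an integration test drives two or more real components through a shared data flow and asserts on the final output.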
Other Tests (tests/others/)¶
Tests that don't fit other categories:

- Performance benchmarks
- Test fixtures
- Helper utilities
Running Tests¶
All Tests¶
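To run the entire suite, a standard Cargo invocation covers every test target in the workspace (the `--workspace` flag is an assumption based on the multi-crate layout described above):

```shell
cargo test --workspace
```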
Specific Category¶
```shell
# Unit tests only
cargo test --test unit

# E2E tests only
cargo test --test e2e

# Integration tests only
cargo test --test integration
```
Specific Crate¶
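Cargo's `-p`/`--package` flag scopes the run to one crate; the crate names below follow the `ev_core`/`ev_etl`/`ev_server` layout shown earlier (adjust if the published package names differ):

```shell
# Run tests for a single crate
cargo test -p ev_core
cargo test -p ev_etl
cargo test -p ev_server
```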
With Output¶
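By default the Cargo test harness captures `stdout`/`stderr` of passing tests; pass `--nocapture` through to the harness to see printed output:

```shell
# Show println!/log output even for passing tests
cargo test -- --nocapture
```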
Watch Mode¶
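For a re-run-on-save loop, the third-party `cargo-watch` tool is a common choice (assumed here, not a built-in Cargo command):

```shell
# One-time install
cargo install cargo-watch

# Re-run tests whenever a source file changes
cargo watch -x test
```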
Test Conventions¶
- **Naming**: `test_<function_name>_<scenario>`
- **Assertions**: Use descriptive assertions with messages
- **Fixtures**: Place in `tests/others/fixtures/`
- **Database**: Use in-memory SQLite for unit tests
- **HTTP Client**: Use `reqwest` or test server utilities for e2e
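The naming and assertion conventions can be sketched as follows; `parse_year` is a hypothetical helper invented for this example, not part of the real crates:

```rust
// Hypothetical helper used only to demonstrate the conventions.
fn parse_year(s: &str) -> Result<u16, String> {
    s.parse::<u16>()
        .map_err(|e| format!("invalid year '{s}': {e}"))
}

#[cfg(test)]
mod tests {
    use super::*;

    // Naming: test_<function_name>_<scenario>
    #[test]
    fn test_parse_year_success() {
        let year = parse_year("2024").expect("'2024' should parse as a year");
        // Assertion message explains the expectation, not just the values.
        assert_eq!(year, 2024, "parsed year should match the input string");
    }

    #[test]
    fn test_parse_year_invalid_input() {
        assert!(parse_year("n/a").is_err(), "non-numeric input should fail");
    }
}
```

A failure message like `"parsed year should match the input string"` shows up directly in the test report, which is what the "descriptive assertions" convention is after.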
Coverage¶
Generate a coverage report using `cargo-llvm-cov` (recommended):

```shell
# Install the tool
cargo install cargo-llvm-cov

# Generate text summary
cargo llvm-cov --workspace

# Generate HTML report (interactive)
cargo llvm-cov --workspace --html
# Report will be at: target/llvm-cov/html/index.html

# Generate LCOV info (for CI)
cargo llvm-cov --workspace --lcov --output-path lcov.info
```
Adding New Tests¶
- Create the file in the appropriate directory, following the structure above
- Import the module under test
- Write tests covering success and error cases
- Run `cargo test` to verify
- Update this guide if adding new test categories