Add query monitoring with Sentry alerts + fix position list check in DB for backtest

This commit is contained in:
2025-10-10 00:15:02 +07:00
parent ffb98fe359
commit e4c2f8b7a5
24 changed files with 3340 additions and 179 deletions

SQL_MONITORING_README.md (new file, 336 lines)

@@ -0,0 +1,336 @@
# SQL Query Monitoring and Loop Detection System
## Overview
This SQL monitoring system was implemented to identify and resolve the SQL query loop issue that was causing DDoS-like load on the server. It provides detailed logging, performance monitoring, and automatic loop detection to help pinpoint the root cause of problematic database operations.
## Features
### 🔍 **Comprehensive SQL Query Logging**
- **Detailed Query Tracking**: Every SQL query is logged with timing, parameters, and execution context
- **Performance Metrics**: Automatic tracking of query execution times, row counts, and resource usage
- **Connection State Monitoring**: Tracks database connection open/close operations with timing
- **Error Logging**: Comprehensive error logging with stack traces and context information
### 🚨 **Automatic Loop Detection**
- **Pattern Recognition**: Identifies repeated query patterns that may indicate infinite loops
- **Frequency Analysis**: Monitors query execution frequency and detects abnormally high rates
- **Performance Thresholds**: Automatically flags slow queries and high-frequency operations
- **Real-time Alerts**: Immediate notification when potential loops are detected
### 📊 **Performance Monitoring**
- **Query Execution Statistics**: Tracks execution counts, average times, and performance trends
- **Resource Usage Monitoring**: Monitors memory, CPU, and I/O usage during database operations
- **Connection Pool Monitoring**: Tracks database connection pool health and usage
- **Transaction Monitoring**: Monitors transaction duration and rollback rates
### 🎯 **Smart Alerting System**
- **Configurable Thresholds**: Customizable thresholds for slow queries, high frequency, and error rates
- **Multi-level Alerts**: Different alert levels (Info, Warning, Error, Critical) based on severity
- **Contextual Information**: Alerts include repository name, method name, and query patterns
- **Automatic Escalation**: Critical issues are automatically escalated with detailed diagnostics
## Components
### 1. SqlQueryLogger
**Location**: `src/Managing.Infrastructure.Database/PostgreSql/SqlQueryLogger.cs`
Provides comprehensive logging for individual database operations:
- Operation start/completion logging
- Query execution timing and parameters
- Connection state changes
- Error handling and exception logging
- Performance issue detection
### 2. SqlLoopDetectionService
**Location**: `src/Managing.Infrastructure.Database/PostgreSql/SqlLoopDetectionService.cs`
Advanced loop detection and performance monitoring (the Sentry-integrated variant, `SentrySqlMonitoringService`, is the one wired into the repositories and the controller):
- Real-time query pattern analysis
- Execution frequency tracking
- Performance threshold monitoring
- Automatic cleanup of old tracking data
- Configurable detection rules
### 3. BaseRepositoryWithLogging
**Location**: `src/Managing.Infrastructure.Database/PostgreSql/BaseRepositoryWithLogging.cs`
Base class for repositories with integrated monitoring:
- Automatic query execution tracking
- Performance monitoring for all database operations
- Error handling and logging
- Loop detection integration
### 4. Enhanced ManagingDbContext
**Location**: `src/Managing.Infrastructure.Database/PostgreSql/ManagingDbContext.cs`
Extended DbContext with monitoring capabilities:
- Query execution tracking
- Performance metrics collection
- Loop detection integration
- Statistics and health monitoring
### 5. SqlMonitoringController
**Location**: `src/Managing.Api/Controllers/SqlMonitoringController.cs`
REST API endpoints for monitoring and management:
- Real-time query statistics
- Alert management
- Performance metrics
- Health monitoring
- Configuration management
## API Endpoints
### Get Query Statistics
```http
GET /api/SqlMonitoring/statistics
```
Returns comprehensive query execution statistics including:
- Loop detection statistics
- Context execution counts
- Active query patterns
- Performance metrics
### Get Alerts
```http
GET /api/SqlMonitoring/alerts
```
Returns current alerts and potential issues:
- High frequency queries
- Slow query patterns
- Performance issues
- Loop detection alerts
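A response might look like the following (illustrative values; field names follow the controller's anonymous result objects, assuming the default camelCase JSON serialization):
```json
{
  "alerts": [
    {
      "repository": "PostgreSqlTradingRepository",
      "method": "GetPositionsAsync",
      "queryPattern": "GetPositionsAsync()",
      "issues": ["High frequency: 25.0 executions/minute"],
      "executionCount": 75,
      "executionsPerMinute": 25.0,
      "averageExecutionTime": 1340.0,
      "lastExecution": "2025-10-10T00:14:58Z",
      "isActive": true
    }
  ],
  "alertCount": 1,
  "timestamp": "2025-10-10T00:15:02Z"
}
```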
### Clear Tracking Data
```http
POST /api/SqlMonitoring/clear-tracking
```
Clears all tracking data and resets monitoring counters.
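On success the endpoint returns a simple confirmation (illustrative timestamp; assuming the default camelCase JSON serialization):
```json
{
  "message": "All tracking data cleared successfully",
  "timestamp": "2025-10-10T00:15:02Z"
}
```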
### Get Query Details
```http
GET /api/SqlMonitoring/query-details/{repositoryName}/{methodName}
```
Returns detailed information about specific query patterns.
### Get Monitoring Health
```http
GET /api/SqlMonitoring/health
```
Returns overall monitoring system health status.
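An illustrative response (field names follow the controller's result object; values are examples):
```json
{
  "status": "Healthy",
  "totalTrackedQueries": 42,
  "activeQueries": 3,
  "slowQueries": 1,
  "highFrequencyQueries": 0,
  "contextQueryCount": 42,
  "timestamp": "2025-10-10T00:15:02Z",
  "isEnabled": true,
  "loggingEnabled": true,
  "sentryEnabled": true,
  "loopDetectionEnabled": true,
  "performanceMonitoringEnabled": true,
  "lastHealthCheck": "2025-10-10T00:15:02.0000000Z",
  "totalAlerts": 0
}
```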
## Configuration
### SqlMonitoringSettings
**Location**: `src/Managing.Infrastructure.Database/PostgreSql/SqlMonitoringSettings.cs`
Configuration keys (these are the keys consumed by `SentrySqlMonitoringService` and used in the `appsettings.json` files in this commit; a sketch of the settings class follows the list):
- **Enabled**: Master switch for all SQL monitoring
- **LoggingEnabled**: Enable detailed SQL logging
- **SentryEnabled**: Send alerts to Sentry
- **LoopDetectionEnabled**: Enable query loop detection
- **PerformanceMonitoringEnabled**: Enable performance monitoring
- **LoopDetectionWindowSeconds**: Time window used for loop detection (e.g. 60)
- **MaxQueryExecutionsPerWindow**: Maximum executions of a single query pattern per window (e.g. 100)
- **MaxMethodExecutionsPerWindow**: Maximum executions of a single method per window (e.g. 50)
- **LongRunningQueryThresholdMs**: Long-running query threshold in milliseconds (e.g. 1000)
- **SlowQueryThresholdMs**: Slow query threshold in milliseconds (e.g. 2000)
- **SentryAlertThreshold**: Multiplier applied to the execution limit before a critical Sentry alert is raised
- **LogSlowQueriesOnly** / **LogErrorsOnly**: Restrict logging to slow queries or errors only
- **DataRetentionMinutes**: How long tracking data is kept for the monitoring dashboard (e.g. 30)
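A minimal sketch of the settings class, assuming plain auto-properties for the keys above (the actual `SqlMonitoringSettings.cs` may declare different defaults or additional members):
```csharp
namespace Managing.Infrastructure.Databases.PostgreSql;

// Bound from the "SqlMonitoring" configuration section via IOptions<SqlMonitoringSettings>.
// Defaults shown here are illustrative, not taken from the real file.
public class SqlMonitoringSettings
{
    public bool Enabled { get; set; } = true;
    public bool LoggingEnabled { get; set; } = true;
    public bool SentryEnabled { get; set; } = true;
    public bool LoopDetectionEnabled { get; set; } = true;
    public bool PerformanceMonitoringEnabled { get; set; } = true;
    public int LoopDetectionWindowSeconds { get; set; } = 60;
    public int MaxQueryExecutionsPerWindow { get; set; } = 100;
    public int MaxMethodExecutionsPerWindow { get; set; } = 50;
    public int LongRunningQueryThresholdMs { get; set; } = 1000;
    public int SlowQueryThresholdMs { get; set; } = 2000;
    public int SentryAlertThreshold { get; set; } = 5;
    public bool LogSlowQueriesOnly { get; set; }
    public bool LogErrorsOnly { get; set; }
    public int DataRetentionMinutes { get; set; } = 30;
}
```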
## Usage Examples
### 1. Using Enhanced Repository
```csharp
public class MyRepository : BaseRepositoryWithLogging, IMyRepository
{
    public MyRepository(ManagingDbContext context, ILogger<SqlQueryLogger> logger, SentrySqlMonitoringService sentryMonitoringService)
        : base(context, logger, sentryMonitoringService)
    {
    }

    public async Task<User> GetUserAsync(string name)
    {
        return await ExecuteWithLoggingAsync(async () =>
        {
            // Your database operation here
            return await _context.Users.FirstOrDefaultAsync(u => u.Name == name);
        }, nameof(GetUserAsync), ("name", name));
    }
}
```
### 2. Manual Query Tracking
```csharp
// Track a specific query execution
_context.TrackQueryExecution("GetUserByName", TimeSpan.FromMilliseconds(150), "UserRepository", "GetUserAsync");
```
### 3. Monitoring API Usage
```bash
# Get current statistics
curl -X GET "https://your-api/api/SqlMonitoring/statistics"
# Get alerts
curl -X GET "https://your-api/api/SqlMonitoring/alerts"
# Clear tracking data
curl -X POST "https://your-api/api/SqlMonitoring/clear-tracking"
```
## Logging Output Examples
### Query Execution Log
```
[SQL-OP-START] a1b2c3d4 | PostgreSqlUserRepository.GetUserByNameAsync | Started at 14:30:15.123
[SQL-CONNECTION] a1b2c3d4 | PostgreSqlUserRepository.GetUserByNameAsync | Connection OPENED (took 5ms)
[SQL-QUERY] a1b2c3d4 | PostgreSqlUserRepository.GetUserByNameAsync | Executed in 25ms | Rows: 1
[SQL-CONNECTION] a1b2c3d4 | PostgreSqlUserRepository.GetUserByNameAsync | Connection CLOSED (took 2ms)
[SQL-OP-COMPLETE] a1b2c3d4 | PostgreSqlUserRepository.GetUserByNameAsync | Completed in 32ms | Queries: 1 | Result: User
```
### Loop Detection Alert
```
[SQL-LOOP-DETECTED] e5f6g7h8 | PostgreSqlTradingRepository.GetPositionsAsync | Pattern 'GetPositionsAsync()' executed 15 times | Possible infinite loop!
[SQL-LOOP-ALERT] Potential infinite loop detected in PostgreSqlTradingRepository.GetPositionsAsync with pattern 'GetPositionsAsync()'
```
### Performance Warning
```
[SQL-PERFORMANCE] PostgreSqlTradingRepository | GetPositionsAsync took 2500ms (threshold: 1000ms)
[SQL-QUERY-DETAILS] i9j0k1l2 | Query: SELECT * FROM Positions WHERE Status = @status | Parameters: {"status":"Active"}
```
## Troubleshooting
### Common Issues and Solutions
#### 1. High Query Frequency
**Symptoms**: Multiple queries executing rapidly
**Detection**: `[SQL-LOOP-DETECTED]` logs with high execution counts
**Solution**:
- Check for recursive method calls
- Verify loop conditions in business logic
- Review async/await patterns
#### 2. Slow Query Performance
**Symptoms**: Queries taking longer than expected
**Detection**: `[SQL-PERFORMANCE]` warnings
**Solution**:
- Review query execution plans
- Check database indexes
- Optimize query parameters
#### 3. Connection Issues
**Symptoms**: Connection timeouts or pool exhaustion
**Detection**: `[SQL-CONNECTION]` error logs
**Solution**:
- Review connection management
- Check connection pool settings
- Verify proper connection disposal
#### 4. Memory Issues
**Symptoms**: High memory usage during database operations
**Detection**: Memory monitoring alerts
**Solution**:
- Review query result set sizes
- Implement pagination (see the sketch below)
- Check for memory leaks in entity tracking
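A minimal EF Core pagination sketch, assuming the `Positions` set exposes the `Identifier` column used elsewhere in this commit; `pageIndex` and `pageSize` are hypothetical locals and the ordering is illustrative:
```csharp
// Fetch one page at a time instead of materializing the whole result set.
// AsNoTracking skips change tracking for read-only queries, reducing memory pressure.
var page = await _context.Positions
    .AsNoTracking()
    .OrderBy(p => p.Identifier)   // any unique column works; stable ordering keeps pages consistent
    .Skip(pageIndex * pageSize)   // skip the previous pages
    .Take(pageSize)               // load only one page
    .ToListAsync();
```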
## Integration Steps
### 1. Update Existing Repositories
Replace existing repository implementations with the enhanced base class:
```csharp
// Before
public class MyRepository : IMyRepository
{
    private readonly ManagingDbContext _context;
    // ...
}

// After
public class MyRepository : BaseRepositoryWithLogging, IMyRepository
{
    public MyRepository(ManagingDbContext context, ILogger<SqlQueryLogger> logger, SentrySqlMonitoringService sentryMonitoringService)
        : base(context, logger, sentryMonitoringService)
    {
    }
    // ...
}
```
### 2. Update Dependency Injection
The services are registered automatically in `Program.cs` (see the sketch after this list):
- `SentrySqlMonitoringService` as a singleton
- Enhanced `ManagingDbContext` with monitoring and EF Core query logging
- Repositories that inherit `BaseRepositoryWithLogging`
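Simplified from the registration added to `Program.cs` in this commit (connection string setup and Npgsql options are omitted; `postgreSqlConnectionString` comes from the surrounding configuration code):
```csharp
// Bind the "SqlMonitoring" section to SqlMonitoringSettings (IOptions pattern)
builder.Services.Configure<SqlMonitoringSettings>(builder.Configuration.GetSection("SqlMonitoring"));

// Single shared monitoring service for loop detection and Sentry alerts
builder.Services.AddSingleton<SentrySqlMonitoringService>();

// Scoped DbContext with EF Core command logging routed to the console for monitoring
builder.Services.AddDbContext<ManagingDbContext>((serviceProvider, options) =>
{
    options.UseNpgsql(postgreSqlConnectionString);
    options.EnableServiceProviderCaching();
    options.LogTo(msg =>
    {
        if (msg.Contains("Executed DbCommand") || msg.Contains("Executing DbCommand"))
            Console.WriteLine($"[EF-SQL] {msg}");
    }, LogLevel.Information);
}, ServiceLifetime.Scoped);
```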
### 3. Configure Monitoring Settings
Add configuration to `appsettings.json` (these keys match the `SqlMonitoring` sections added in this commit):
```json
{
  "SqlMonitoring": {
    "Enabled": true,
    "LoggingEnabled": true,
    "SentryEnabled": true,
    "LoopDetectionEnabled": true,
    "PerformanceMonitoringEnabled": true,
    "LoopDetectionWindowSeconds": 60,
    "MaxQueryExecutionsPerWindow": 100,
    "MaxMethodExecutionsPerWindow": 50,
    "LongRunningQueryThresholdMs": 1000,
    "SentryAlertThreshold": 5,
    "SlowQueryThresholdMs": 2000,
    "LogSlowQueriesOnly": false,
    "LogErrorsOnly": false,
    "DataRetentionMinutes": 30
  }
}
```
## Monitoring Dashboard
### Key Metrics to Monitor
1. **Query Execution Count**: Track total queries per minute
2. **Average Execution Time**: Monitor query performance trends
3. **Error Rate**: Track database error frequency
4. **Connection Pool Usage**: Monitor connection health
5. **Loop Detection Alerts**: Immediate notification of potential issues
### Alert Thresholds
- **Critical**: >50 queries/minute, >5 second execution time
- **Warning**: >20 queries/minute, >1 second execution time
- **Info**: Normal operation metrics
## Best Practices
### 1. Repository Design
- Always inherit from `BaseRepositoryWithLogging`
- Use `ExecuteWithLoggingAsync` for all database operations
- Include meaningful parameter names in logging calls
- Handle exceptions properly with logging
### 2. Performance Optimization
- Monitor slow queries regularly
- Implement proper indexing strategies
- Use pagination for large result sets
- Avoid N+1 query problems (see the sketch below)
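A short sketch of avoiding an N+1 pattern with an eager `Include`, mirroring how the trading repository loads related trades in this commit (the exact query is illustrative):
```csharp
// One round trip: positions and their open trades are joined server-side,
// instead of issuing one extra query per position while iterating the results.
var positions = await _context.Positions
    .AsNoTracking()
    .Include(p => p.OpenTrade)
    .ToListAsync();
```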
### 3. Error Handling
- Log all database errors with context
- Implement proper retry mechanisms
- Use circuit breaker patterns for external dependencies
- Monitor error rates and trends
### 4. Security Considerations
- Avoid logging sensitive data in query parameters
- Use parameterized queries to prevent SQL injection
- Implement proper access controls for monitoring endpoints
- Regular security audits of database operations
## Conclusion
This comprehensive SQL monitoring system provides the tools needed to identify and resolve the SQL script loop issue. The system offers:
- **Real-time monitoring** of all database operations
- **Automatic loop detection** with configurable thresholds
- **Performance tracking** with detailed metrics
- **Comprehensive logging** for debugging and analysis
- **REST API endpoints** for monitoring and management
- **Configurable settings** for different environments
The system is designed to be non-intrusive while providing maximum visibility into database operations, helping you quickly identify and resolve performance issues and potential infinite loops.


@@ -0,0 +1,319 @@
using Managing.Application.Abstractions.Services;
using Managing.Application.Shared;
using Managing.Infrastructure.Databases.PostgreSql;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
namespace Managing.Api.Controllers;
/// <summary>
/// Controller for monitoring SQL query performance and detecting potential loops
/// Provides endpoints to view query statistics and clear tracking data
/// Requires admin authorization for access
/// </summary>
[ApiController]
[Authorize]
[Route("api/[controller]")]
public class SqlMonitoringController : BaseController
{
private readonly SentrySqlMonitoringService _sentryMonitoringService;
private readonly ManagingDbContext _context;
private readonly ILogger<SqlMonitoringController> _logger;
private readonly IAdminConfigurationService _adminService;
public SqlMonitoringController(
SentrySqlMonitoringService sentryMonitoringService,
ManagingDbContext context,
ILogger<SqlMonitoringController> logger,
IUserService userService,
IAdminConfigurationService adminService) : base(userService)
{
_sentryMonitoringService = sentryMonitoringService;
_context = context;
_logger = logger;
_adminService = adminService;
}
/// <summary>
/// Checks if the current user is an admin
/// </summary>
/// <returns>True if the user is admin, False otherwise</returns>
private async Task<bool> IsUserAdmin()
{
try
{
var user = await GetUser();
if (user == null)
return false;
return _adminService.IsUserAdmin(user.Name);
}
catch (Exception ex)
{
_logger.LogError(ex, "Error checking if user is admin");
return false;
}
}
/// <summary>
/// Gets current SQL query execution statistics
/// </summary>
/// <returns>Query execution statistics</returns>
[HttpGet("statistics")]
public async Task<ActionResult<object>> GetQueryStatistics()
{
try
{
// Check if user is admin
if (!await IsUserAdmin())
{
return Forbid("Only administrators can access SQL monitoring statistics");
}
var loopDetectionStats = _sentryMonitoringService.GetQueryStatistics();
var contextStats = _context.GetQueryExecutionCounts();
var result = new
{
LoopDetectionStats = loopDetectionStats,
ContextStats = contextStats,
Timestamp = DateTime.UtcNow,
TotalTrackedQueries = loopDetectionStats.Count,
ActiveQueries = loopDetectionStats.Count(kvp => kvp.Value.IsActive)
};
_logger.LogInformation("[SQL-MONITORING] Query statistics retrieved: {Count} tracked queries", loopDetectionStats.Count);
return Ok(result);
}
catch (Exception ex)
{
_logger.LogError(ex, "[SQL-MONITORING] Error retrieving query statistics");
return StatusCode(500, "Error retrieving query statistics");
}
}
/// <summary>
/// Gets potential loop alerts and performance issues
/// </summary>
/// <returns>List of potential issues</returns>
[HttpGet("alerts")]
public async Task<ActionResult<object>> GetAlerts()
{
try
{
// Check if user is admin
if (!await IsUserAdmin())
{
return Forbid("Only administrators can access SQL monitoring alerts");
}
var stats = _sentryMonitoringService.GetQueryStatistics();
var alerts = new List<object>();
foreach (var kvp in stats)
{
var stat = kvp.Value;
var issues = new List<string>();
// Check for high execution frequency
if (stat.ExecutionsPerMinute > 20)
{
issues.Add($"High frequency: {stat.ExecutionsPerMinute:F1} executions/minute");
}
// Check for slow queries
if (stat.AverageExecutionTime.TotalMilliseconds > 1000)
{
issues.Add($"Slow query: {stat.AverageExecutionTime.TotalMilliseconds:F0}ms average");
}
// Check for many executions
if (stat.ExecutionCount > 50)
{
issues.Add($"High count: {stat.ExecutionCount} total executions");
}
if (issues.Any())
{
alerts.Add(new
{
Repository = stat.RepositoryName,
Method = stat.MethodName,
QueryPattern = stat.QueryPattern,
Issues = issues,
ExecutionCount = stat.ExecutionCount,
ExecutionsPerMinute = stat.ExecutionsPerMinute,
AverageExecutionTime = stat.AverageExecutionTime.TotalMilliseconds,
LastExecution = stat.LastExecution,
IsActive = stat.IsActive
});
}
}
var result = new
{
Alerts = alerts,
AlertCount = alerts.Count,
Timestamp = DateTime.UtcNow
};
if (alerts.Any())
{
_logger.LogWarning("[SQL-MONITORING] {Count} potential issues detected", alerts.Count);
}
return Ok(result);
}
catch (Exception ex)
{
_logger.LogError(ex, "[SQL-MONITORING] Error retrieving alerts");
return StatusCode(500, "Error retrieving alerts");
}
}
/// <summary>
/// Clears all SQL query tracking data
/// </summary>
/// <returns>Success status</returns>
[HttpPost("clear-tracking")]
public async Task<ActionResult> ClearTracking()
{
try
{
// Check if user is admin
if (!await IsUserAdmin())
{
return Forbid("Only administrators can clear SQL monitoring data");
}
_sentryMonitoringService.ClearAllTracking();
_context.ClearQueryTracking();
_logger.LogInformation("[SQL-MONITORING] All tracking data cleared");
return Ok(new { Message = "All tracking data cleared successfully", Timestamp = DateTime.UtcNow });
}
catch (Exception ex)
{
_logger.LogError(ex, "[SQL-MONITORING] Error clearing tracking data");
return StatusCode(500, "Error clearing tracking data");
}
}
/// <summary>
/// Gets detailed information about a specific query pattern
/// </summary>
/// <param name="repositoryName">Repository name</param>
/// <param name="methodName">Method name</param>
/// <returns>Detailed query information</returns>
[HttpGet("query-details/{repositoryName}/{methodName}")]
public async Task<ActionResult<object>> GetQueryDetails(string repositoryName, string methodName)
{
try
{
// Check if user is admin
if (!await IsUserAdmin())
{
return Forbid("Only administrators can access SQL query details");
}
var stats = _sentryMonitoringService.GetQueryStatistics();
var matchingQueries = stats.Where(kvp =>
kvp.Value.RepositoryName.Equals(repositoryName, StringComparison.OrdinalIgnoreCase) &&
kvp.Value.MethodName.Equals(methodName, StringComparison.OrdinalIgnoreCase))
.ToList();
if (!matchingQueries.Any())
{
return NotFound(new { Message = $"No queries found for {repositoryName}.{methodName}" });
}
var result = new
{
RepositoryName = repositoryName,
MethodName = methodName,
Queries = matchingQueries.Select(kvp => new
{
QueryPattern = kvp.Value.QueryPattern,
ExecutionCount = kvp.Value.ExecutionCount,
ExecutionsPerMinute = kvp.Value.ExecutionsPerMinute,
AverageExecutionTime = kvp.Value.AverageExecutionTime.TotalMilliseconds,
MinExecutionTime = kvp.Value.MinExecutionTime.TotalMilliseconds,
MaxExecutionTime = kvp.Value.MaxExecutionTime.TotalMilliseconds,
FirstExecution = kvp.Value.FirstExecution,
LastExecution = kvp.Value.LastExecution,
IsActive = kvp.Value.IsActive
}),
Timestamp = DateTime.UtcNow
};
return Ok(result);
}
catch (Exception ex)
{
_logger.LogError(ex, "[SQL-MONITORING] Error retrieving query details for {Repository}.{Method}", repositoryName, methodName);
return StatusCode(500, "Error retrieving query details");
}
}
/// <summary>
/// Gets a summary of SQL monitoring health
/// </summary>
/// <returns>Monitoring health summary</returns>
[HttpGet("health")]
public async Task<ActionResult<object>> GetMonitoringHealth()
{
try
{
// Check if user is admin
if (!await IsUserAdmin())
{
return Forbid("Only administrators can access SQL monitoring health");
}
var stats = _sentryMonitoringService.GetQueryStatistics();
var contextStats = _context.GetQueryExecutionCounts();
var activeQueries = stats.Count(kvp => kvp.Value.IsActive);
var slowQueries = stats.Count(kvp => kvp.Value.AverageExecutionTime.TotalMilliseconds > 1000);
var highFrequencyQueries = stats.Count(kvp => kvp.Value.ExecutionsPerMinute > 20);
var healthStatus = "Healthy";
if (highFrequencyQueries > 0 || slowQueries > 5)
{
healthStatus = "Warning";
}
if (highFrequencyQueries > 2 || slowQueries > 10)
{
healthStatus = "Critical";
}
var result = new
{
Status = healthStatus,
TotalTrackedQueries = stats.Count,
ActiveQueries = activeQueries,
SlowQueries = slowQueries,
HighFrequencyQueries = highFrequencyQueries,
ContextQueryCount = contextStats.Count,
Timestamp = DateTime.UtcNow,
// Add configuration status
isEnabled = _sentryMonitoringService.IsMonitoringEnabled(),
loggingEnabled = _sentryMonitoringService.IsLoggingEnabled(),
sentryEnabled = _sentryMonitoringService.IsSentryEnabled(),
loopDetectionEnabled = _sentryMonitoringService.IsLoopDetectionEnabled(),
performanceMonitoringEnabled = _sentryMonitoringService.IsPerformanceMonitoringEnabled(),
lastHealthCheck = DateTime.UtcNow.ToString("O"),
totalAlerts = 0 // TODO: Implement alert counting
};
return Ok(result);
}
catch (Exception ex)
{
_logger.LogError(ex, "[SQL-MONITORING] Error retrieving monitoring health");
return StatusCode(500, "Error retrieving monitoring health");
}
}
}


@@ -89,8 +89,15 @@ builder.Services.AddHttpClient("GmxHealthCheck")
builder.Services.AddSingleton<Web3ProxyHealthCheck>(sp =>
new Web3ProxyHealthCheck(sp.GetRequiredService<IHttpClientFactory>(), web3ProxyUrl));
// Add SQL Loop Detection Service with Sentry integration
// Configure SQL monitoring settings
builder.Services.Configure<SqlMonitoringSettings>(builder.Configuration.GetSection("SqlMonitoring"));
// Register SQL monitoring services
builder.Services.AddSingleton<SentrySqlMonitoringService>();
// Add PostgreSQL DbContext with improved concurrency and connection management
builder.Services.AddDbContext<ManagingDbContext>(options =>
builder.Services.AddDbContext<ManagingDbContext>((serviceProvider, options) =>
{
options.UseNpgsql(postgreSqlConnectionString, npgsqlOptions =>
{
@@ -114,8 +121,22 @@ builder.Services.AddDbContext<ManagingDbContext>(options =>
// Enable service provider caching for better performance
options.EnableServiceProviderCaching();
// Enable connection resiliency for backtest and high-load scenarios
options.LogTo(msg => Console.WriteLine(msg), LogLevel.Warning); // Log warnings for connection issues
// Enable comprehensive SQL query logging for monitoring and debugging
var logger = serviceProvider.GetRequiredService<ILogger<ManagingDbContext>>();
var sentryMonitoringService = serviceProvider.GetRequiredService<SentrySqlMonitoringService>();
options.LogTo(msg =>
{
// Log SQL queries with enhanced formatting
if (msg.Contains("Executed DbCommand") || msg.Contains("Executing DbCommand"))
{
Console.WriteLine($"[EF-SQL] {msg}");
}
else if (msg.Contains("Warning") || msg.Contains("Error"))
{
Console.WriteLine($"[EF-WARNING] {msg}");
}
}, LogLevel.Information); // Log all SQL operations for monitoring
}, ServiceLifetime.Scoped); // Explicitly specify scoped lifetime for proper request isolation
// Add specific health checks for databases and other services


@@ -44,5 +44,12 @@
"BaseUrl": "https://api.kaigen.managing.live",
"DebitEndpoint": "/api/credits/debit",
"RefundEndpoint": "/api/credits/refund"
},
"SqlMonitoring": {
"Enabled": true,
"LoggingEnabled": false,
"SentryEnabled": true,
"LoopDetectionEnabled": true,
"LogErrorsOnly": true
}
}


@@ -35,6 +35,13 @@
"ElasticConfiguration": {
"Uri": "http://elasticsearch:9200"
},
"SqlMonitoring": {
"Enabled": true,
"LoggingEnabled": true,
"SentryEnabled": true,
"LoopDetectionEnabled": true,
"LogSlowQueriesOnly": false
},
"RunOrleansGrains": true,
"AllowedHosts": "*"
}


@@ -84,6 +84,20 @@
"WorkerBundleBacktest": false,
"WorkerBalancesTracking": false,
"WorkerNotifyBundleBacktest": false,
"AdminUsers": "",
"AllowedHosts": "*"
"SqlMonitoring": {
"Enabled": true,
"LoggingEnabled": true,
"SentryEnabled": true,
"LoopDetectionEnabled": true,
"PerformanceMonitoringEnabled": true,
"LoopDetectionWindowSeconds": 60,
"MaxQueryExecutionsPesrWindow": 100,
"MaxMethodExecutionsPerWindow": 50,
"LongRunningQueryThresholdMs": 1000,
"SentryAlertThreshold": 5,
"SlowQueryThresholdMs": 2000,
"LogSlowQueriesOnly": false,
"LogErrorsOnly": false,
"DataRetentionMinutes": 30
}
}


@@ -308,6 +308,7 @@ public class TradingBotBase : ITradingBot
// Second, process all finished positions to ensure they are updated in the database
// TODO : This should be removed in the future, when we have a better way to handle positions
if (!Config.IsForBacktest)
foreach (var position in Positions.Values.Where(p => p.IsFinished()))
{
try


@@ -0,0 +1,223 @@
using System.Diagnostics;
using Microsoft.Extensions.Logging;
namespace Managing.Infrastructure.Databases.PostgreSql;
/// <summary>
/// Base repository class with comprehensive SQL query logging and monitoring
/// Provides automatic query tracking, loop detection, and performance monitoring
/// </summary>
public abstract class BaseRepositoryWithLogging
{
protected readonly ManagingDbContext _context;
protected readonly ILogger<SqlQueryLogger> _logger;
protected readonly SentrySqlMonitoringService _sentryMonitoringService;
protected readonly string _repositoryName;
protected BaseRepositoryWithLogging(ManagingDbContext context, ILogger<SqlQueryLogger> logger, SentrySqlMonitoringService sentryMonitoringService)
{
_context = context;
_logger = logger;
_sentryMonitoringService = sentryMonitoringService;
_repositoryName = GetType().Name;
}
/// <summary>
/// Executes a database operation with lightweight logging and monitoring
/// Only logs slow queries (>2000ms) and errors to minimize performance impact
/// </summary>
/// <typeparam name="T">Return type of the operation</typeparam>
/// <param name="operation">The database operation to execute</param>
/// <param name="methodName">Name of the calling method</param>
/// <param name="parameters">Parameters passed to the operation</param>
/// <returns>Result of the operation</returns>
protected async Task<T> ExecuteWithLoggingAsync<T>(
Func<Task<T>> operation,
string methodName,
params (string name, object value)[] parameters)
{
// Check if monitoring is enabled globally
if (!_sentryMonitoringService.IsMonitoringEnabled())
{
return await operation();
}
var stopwatch = Stopwatch.StartNew();
var queryPattern = GenerateQueryPattern(methodName, parameters);
try
{
var result = await operation();
stopwatch.Stop();
// Only log if slow query (>2000ms) and logging is enabled
if (stopwatch.Elapsed.TotalMilliseconds > 2000 && _sentryMonitoringService.IsLoggingEnabled())
{
_logger.LogWarning(
"[SLOW-SQL] {Repository}.{Method} | Pattern: {Pattern} | Time: {Time}ms",
_repositoryName, methodName, queryPattern, stopwatch.Elapsed.TotalMilliseconds);
// Send slow query alert to Sentry asynchronously if enabled
if (_sentryMonitoringService.IsSentryEnabled())
{
_ = Task.Run(() => SendSlowQueryToSentryAsync(queryPattern, stopwatch.Elapsed, methodName));
}
}
// Track query execution for loop detection if enabled (minimal overhead)
if (_sentryMonitoringService.IsLoopDetectionEnabled())
{
_context.TrackQueryExecution(queryPattern, stopwatch.Elapsed, _repositoryName, methodName);
}
return result;
}
catch (Exception ex)
{
stopwatch.Stop();
// Always log errors if logging is enabled
if (_sentryMonitoringService.IsLoggingEnabled())
{
_logger.LogError(ex,
"[SQL-ERROR] {Repository}.{Method} | Pattern: {Pattern} | Time: {Time}ms",
_repositoryName, methodName, queryPattern, stopwatch.Elapsed.TotalMilliseconds);
}
// Send SQL error to Sentry asynchronously if enabled
if (_sentryMonitoringService.IsSentryEnabled())
{
_ = Task.Run(() => SendSqlErrorToSentryAsync(queryPattern, stopwatch.Elapsed, ex, methodName));
}
throw;
}
}
/// <summary>
/// Executes a database operation with lightweight logging and monitoring (void return)
/// Only logs slow queries (>2000ms) and errors to minimize performance impact
/// </summary>
/// <param name="operation">The database operation to execute</param>
/// <param name="methodName">Name of the calling method</param>
/// <param name="parameters">Parameters passed to the operation</param>
protected async Task ExecuteWithLoggingAsync(
Func<Task> operation,
string methodName,
params (string name, object value)[] parameters)
{
// Check if monitoring is enabled globally
if (!_sentryMonitoringService.IsMonitoringEnabled())
{
await operation();
return;
}
var stopwatch = Stopwatch.StartNew();
var queryPattern = GenerateQueryPattern(methodName, parameters);
try
{
await operation();
stopwatch.Stop();
// Only log if slow query (>2000ms) and logging is enabled
if (stopwatch.Elapsed.TotalMilliseconds > 2000 && _sentryMonitoringService.IsLoggingEnabled())
{
_logger.LogWarning(
"[SLOW-SQL] {Repository}.{Method} | Pattern: {Pattern} | Time: {Time}ms",
_repositoryName, methodName, queryPattern, stopwatch.Elapsed.TotalMilliseconds);
// Send slow query alert to Sentry asynchronously if enabled
if (_sentryMonitoringService.IsSentryEnabled())
{
_ = Task.Run(() => SendSlowQueryToSentryAsync(queryPattern, stopwatch.Elapsed, methodName));
}
}
// Track query execution for loop detection if enabled (minimal overhead)
if (_sentryMonitoringService.IsLoopDetectionEnabled())
{
_context.TrackQueryExecution(queryPattern, stopwatch.Elapsed, _repositoryName, methodName);
}
}
catch (Exception ex)
{
stopwatch.Stop();
// Always log errors if logging is enabled
if (_sentryMonitoringService.IsLoggingEnabled())
{
_logger.LogError(ex,
"[SQL-ERROR] {Repository}.{Method} | Pattern: {Pattern} | Time: {Time}ms",
_repositoryName, methodName, queryPattern, stopwatch.Elapsed.TotalMilliseconds);
}
// Send SQL error to Sentry asynchronously if enabled
if (_sentryMonitoringService.IsSentryEnabled())
{
_ = Task.Run(() => SendSqlErrorToSentryAsync(queryPattern, stopwatch.Elapsed, ex, methodName));
}
throw;
}
}
/// <summary>
/// Generates a query pattern for tracking purposes
/// </summary>
/// <param name="methodName">Name of the method</param>
/// <param name="parameters">Method parameters</param>
/// <returns>Query pattern string</returns>
private string GenerateQueryPattern(string methodName, (string name, object value)[] parameters)
{
var paramStrings = parameters.Select(p => $"{p.name}={p.value?.GetType().Name ?? "null"}");
return $"{methodName}({string.Join(",", paramStrings)})";
}
/// <summary>
/// Logs a potential performance issue
/// </summary>
/// <param name="operation">Operation description</param>
/// <param name="duration">Operation duration</param>
/// <param name="threshold">Performance threshold</param>
protected void LogPerformanceIssue(string operation, TimeSpan duration, TimeSpan threshold)
{
if (duration > threshold)
{
_logger.LogWarning(
"[SQL-PERFORMANCE] {Repository} | {Operation} took {Duration}ms (threshold: {Threshold}ms)",
_repositoryName, operation, duration.TotalMilliseconds, threshold.TotalMilliseconds);
}
}
/// <summary>
/// Sends slow query alert to Sentry asynchronously (fire and forget)
/// </summary>
private async Task SendSlowQueryToSentryAsync(string queryPattern, TimeSpan executionTime, string methodName)
{
try
{
await _sentryMonitoringService.SendSlowQueryAlertAsync(_repositoryName, methodName, queryPattern, executionTime);
}
catch (Exception ex)
{
_logger.LogError(ex, "[SENTRY-ERROR] Failed to send slow query alert to Sentry");
}
}
/// <summary>
/// Sends SQL error to Sentry asynchronously (fire and forget)
/// </summary>
private async Task SendSqlErrorToSentryAsync(string queryPattern, TimeSpan executionTime, Exception exception, string methodName)
{
try
{
await _sentryMonitoringService.SendSqlErrorAlertAsync(_repositoryName, methodName, queryPattern, executionTime, exception);
}
catch (Exception ex)
{
_logger.LogError(ex, "[SENTRY-ERROR] Failed to send SQL error alert to Sentry");
}
}
}


@@ -1,14 +1,27 @@
using Managing.Infrastructure.Databases.PostgreSql.Entities;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;
namespace Managing.Infrastructure.Databases.PostgreSql;
public class ManagingDbContext : DbContext
{
private readonly ILogger<ManagingDbContext>? _logger;
private readonly SentrySqlMonitoringService? _sentryMonitoringService;
private readonly Dictionary<string, int> _queryExecutionCounts = new();
private readonly object _queryCountLock = new object();
public ManagingDbContext(DbContextOptions<ManagingDbContext> options) : base(options)
{
}
public ManagingDbContext(DbContextOptions<ManagingDbContext> options, ILogger<ManagingDbContext> logger, SentrySqlMonitoringService sentryMonitoringService)
: base(options)
{
_logger = logger;
_sentryMonitoringService = sentryMonitoringService;
}
public DbSet<AccountEntity> Accounts { get; set; }
public DbSet<UserEntity> Users { get; set; }
public DbSet<GeneticRequestEntity> GeneticRequests { get; set; }
@@ -607,7 +620,7 @@ public class ManagingDbContext : DbContext
{
try
{
var count = await Database.SqlQueryRaw<long>($"SELECT COUNT(*) FROM {tableName}").FirstOrDefaultAsync();
var count = await Database.SqlQueryRaw<long>($"SELECT COUNT(*) FROM \"{tableName}\"").FirstOrDefaultAsync();
stats[tableName] = count;
}
catch
@@ -638,4 +651,63 @@ public class ManagingDbContext : DbContext
// Add any additional configuration here if needed
}
/// <summary>
/// Tracks query execution for loop detection and performance monitoring
/// </summary>
/// <param name="queryPattern">Pattern or hash of the query</param>
/// <param name="executionTime">Time taken to execute the query</param>
/// <param name="repositoryName">Name of the repository executing the query</param>
/// <param name="methodName">Name of the method executing the query</param>
public void TrackQueryExecution(string queryPattern, TimeSpan executionTime, string repositoryName, string methodName)
{
if (_logger == null || _sentryMonitoringService == null) return;
// Track execution count for this query pattern
lock (_queryCountLock)
{
_queryExecutionCounts[queryPattern] = _queryExecutionCounts.GetValueOrDefault(queryPattern, 0) + 1;
}
// Check for potential loops with Sentry integration
var isLoopDetected = _sentryMonitoringService.TrackQueryExecution(repositoryName, methodName, queryPattern, executionTime);
// Log query execution details
var logLevel = executionTime.TotalMilliseconds > 1000 ? LogLevel.Warning : LogLevel.Debug;
_logger.Log(logLevel,
"[SQL-QUERY-TRACKED] {Repository}.{Method} | Pattern: {Pattern} | Time: {Time}ms | Count: {Count}",
repositoryName, methodName, queryPattern, executionTime.TotalMilliseconds,
_queryExecutionCounts[queryPattern]);
// Alert on potential loops
if (isLoopDetected)
{
_logger.LogError(
"[SQL-LOOP-ALERT] Potential infinite loop detected in {Repository}.{Method} with pattern '{Pattern}'",
repositoryName, methodName, queryPattern);
}
}
/// <summary>
/// Gets current query execution statistics
/// </summary>
public Dictionary<string, int> GetQueryExecutionCounts()
{
lock (_queryCountLock)
{
return new Dictionary<string, int>(_queryExecutionCounts);
}
}
/// <summary>
/// Clears query execution tracking data
/// </summary>
public void ClearQueryTracking()
{
lock (_queryCountLock)
{
_queryExecutionCounts.Clear();
}
_logger?.LogInformation("[SQL-TRACKING] Query execution counts cleared");
}
}


@@ -1,10 +1,12 @@
using System.Data;
using System.Diagnostics;
using Microsoft.EntityFrameworkCore;
namespace Managing.Infrastructure.Databases.PostgreSql;
/// <summary>
/// Helper class for managing PostgreSQL database connections in Entity Framework repositories
/// Enhanced with comprehensive logging and monitoring capabilities
/// </summary>
public static class PostgreSqlConnectionHelper
{
@@ -20,6 +22,27 @@ public static class PostgreSqlConnectionHelper
}
}
/// <summary>
/// Ensures the database connection is open with logging
/// </summary>
/// <param name="context">The DbContext to manage the connection for</param>
/// <param name="logger">SQL query logger for monitoring</param>
public static async Task EnsureConnectionOpenAsync(DbContext context, SqlQueryLogger logger)
{
var stopwatch = Stopwatch.StartNew();
if (context.Database.GetDbConnection().State != ConnectionState.Open)
{
await context.Database.OpenConnectionAsync();
stopwatch.Stop();
logger.LogConnectionStateChange("OPENED", stopwatch.Elapsed);
}
else
{
logger.LogConnectionStateChange("ALREADY_OPEN");
}
}
/// <summary>
/// Safely closes the database connection if it was opened by us
/// </summary>
@@ -31,4 +54,25 @@ public static class PostgreSqlConnectionHelper
await context.Database.CloseConnectionAsync();
}
}
/// <summary>
/// Safely closes the database connection with logging
/// </summary>
/// <param name="context">The DbContext to manage the connection for</param>
/// <param name="logger">SQL query logger for monitoring</param>
public static async Task SafeCloseConnectionAsync(DbContext context, SqlQueryLogger logger)
{
var stopwatch = Stopwatch.StartNew();
if (context.Database.GetDbConnection().State == ConnectionState.Open)
{
await context.Database.CloseConnectionAsync();
stopwatch.Stop();
logger.LogConnectionStateChange("CLOSED", stopwatch.Elapsed);
}
else
{
logger.LogConnectionStateChange("ALREADY_CLOSED");
}
}
}


@@ -5,18 +5,17 @@ using Managing.Domain.Trades;
using Managing.Domain.Users;
using Managing.Infrastructure.Databases.PostgreSql.Entities;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using static Managing.Common.Enums;
namespace Managing.Infrastructure.Databases.PostgreSql;
public class PostgreSqlTradingRepository : ITradingRepository
public class PostgreSqlTradingRepository : BaseRepositoryWithLogging, ITradingRepository
{
private readonly ManagingDbContext _context;
public PostgreSqlTradingRepository(ManagingDbContext context)
public PostgreSqlTradingRepository(ManagingDbContext context, ILogger<SqlQueryLogger> logger, SentrySqlMonitoringService sentryMonitoringService)
: base(context, logger, sentryMonitoringService)
{
_context = context;
}
#region Scenario Methods
@@ -267,6 +266,8 @@ public class PostgreSqlTradingRepository : ITradingRepository
#region Position Methods
public async Task<Position> GetPositionByIdentifierAsync(Guid identifier)
{
return await ExecuteWithLoggingAsync(async () =>
{
try
{
@@ -282,12 +283,13 @@ public class PostgreSqlTradingRepository : ITradingRepository
.FirstOrDefaultAsync(p => p.Identifier == identifier)
.ConfigureAwait(false);
return PostgreSqlMappers.Map(position);
return PostgreSqlMappers.Map(position ?? throw new InvalidOperationException("Position not found"));
}
finally
{
await PostgreSqlConnectionHelper.SafeCloseConnectionAsync(_context);
}
}, nameof(GetPositionByIdentifierAsync), ("identifier", identifier));
}
public IEnumerable<Position> GetPositions(PositionInitiator positionInitiator)
@@ -389,6 +391,12 @@ public class PostgreSqlTradingRepository : ITradingRepository
public async Task UpdatePositionAsync(Position position)
{
await ExecuteWithLoggingAsync(async () =>
{
try
{
await PostgreSqlConnectionHelper.EnsureConnectionOpenAsync(_context);
var entity = _context.Positions
.AsTracking()
.Include(p => p.OpenTrade)
@@ -435,6 +443,12 @@ public class PostgreSqlTradingRepository : ITradingRepository
await _context.SaveChangesAsync();
}
}
finally
{
await PostgreSqlConnectionHelper.SafeCloseConnectionAsync(_context);
}
}, nameof(UpdatePositionAsync), ("positionIdentifier", position.Identifier), ("positionStatus", position.Status));
}
/// <summary>
/// Updates a trade entity with data from a domain trade object


@@ -1,21 +1,20 @@
using Managing.Application.Abstractions.Repositories;
using Managing.Domain.Users;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Logging;
namespace Managing.Infrastructure.Databases.PostgreSql;
public class PostgreSqlUserRepository : IUserRepository
public class PostgreSqlUserRepository : BaseRepositoryWithLogging, IUserRepository
{
private readonly ManagingDbContext _context;
public PostgreSqlUserRepository(ManagingDbContext context)
public PostgreSqlUserRepository(ManagingDbContext context, ILogger<SqlQueryLogger> logger, SentrySqlMonitoringService sentryMonitoringService)
: base(context, logger, sentryMonitoringService)
{
_context = context;
}
public async Task<User> GetUserByAgentNameAsync(string agentName)
{
return await ExecuteWithLoggingAsync(async () =>
{
try
{
@@ -26,16 +25,19 @@ public class PostgreSqlUserRepository : IUserRepository
.FirstOrDefaultAsync(u => u.AgentName == agentName)
.ConfigureAwait(false);
return PostgreSqlMappers.Map(userEntity);
return PostgreSqlMappers.Map(userEntity ?? throw new InvalidOperationException("User not found"));
}
finally
{
// Always ensure the connection is closed after the operation
await PostgreSqlConnectionHelper.SafeCloseConnectionAsync(_context);
}
}, nameof(GetUserByAgentNameAsync), ("agentName", agentName));
}
public async Task<User> GetUserByNameAsync(string name)
{
return await ExecuteWithLoggingAsync(async () =>
{
try
{
@@ -46,16 +48,19 @@ public class PostgreSqlUserRepository : IUserRepository
.FirstOrDefaultAsync(u => u.Name == name)
.ConfigureAwait(false);
return PostgreSqlMappers.Map(userEntity);
return PostgreSqlMappers.Map(userEntity ?? throw new InvalidOperationException("User not found"));
}
finally
{
// Always ensure the connection is closed after the operation
await PostgreSqlConnectionHelper.SafeCloseConnectionAsync(_context);
}
}, nameof(GetUserByNameAsync), ("name", name));
}
public async Task<IEnumerable<User>> GetAllUsersAsync()
{
return await ExecuteWithLoggingAsync(async () =>
{
try
{
@@ -73,9 +78,12 @@ public class PostgreSqlUserRepository : IUserRepository
// Always ensure the connection is closed after the operation
await PostgreSqlConnectionHelper.SafeCloseConnectionAsync(_context);
}
}, nameof(GetAllUsersAsync));
}
public async Task SaveOrUpdateUserAsync(User user)
{
await ExecuteWithLoggingAsync(async () =>
{
try
{
@@ -115,5 +123,6 @@ public class PostgreSqlUserRepository : IUserRepository
Console.WriteLine(e);
throw new Exception("Cannot save or update user");
}
}, nameof(SaveOrUpdateUserAsync), ("userName", user.Name), ("userId", user.Id));
}
}


@@ -0,0 +1,573 @@
using System.Collections.Concurrent;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
namespace Managing.Infrastructure.Databases.PostgreSql;
/// <summary>
/// Enhanced SQL loop detection service with Sentry integration
/// Monitors query patterns and execution frequency, sending critical alerts to Sentry
/// </summary>
public class SentrySqlMonitoringService
{
private readonly ILogger<SentrySqlMonitoringService> _logger;
private readonly SqlMonitoringSettings _settings;
private readonly ConcurrentDictionary<string, QueryExecutionTracker> _queryTrackers;
private readonly Timer _cleanupTimer;
public SentrySqlMonitoringService(ILogger<SentrySqlMonitoringService> logger, IOptions<SqlMonitoringSettings> settings)
{
_logger = logger;
_settings = settings.Value;
_queryTrackers = new ConcurrentDictionary<string, QueryExecutionTracker>();
// Setup cleanup timer to remove old tracking data
_cleanupTimer = new Timer(CleanupOldTrackers, null, TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(1));
}
/// <summary>
/// Tracks a query execution and detects potential loops with Sentry integration
/// </summary>
/// <param name="repositoryName">Name of the repository executing the query</param>
/// <param name="methodName">Name of the method executing the query</param>
/// <param name="queryPattern">Pattern or hash of the query being executed</param>
/// <param name="executionTime">Time taken to execute the query</param>
/// <returns>True if a potential loop is detected</returns>
public bool TrackQueryExecution(string repositoryName, string methodName, string queryPattern, TimeSpan executionTime)
{
var key = $"{repositoryName}.{methodName}.{queryPattern}";
var now = DateTime.UtcNow;
var tracker = _queryTrackers.AddOrUpdate(key,
new QueryExecutionTracker
{
RepositoryName = repositoryName,
MethodName = methodName,
QueryPattern = queryPattern,
FirstExecution = now,
LastExecution = now,
ExecutionCount = 1,
TotalExecutionTime = executionTime,
MaxExecutionTime = executionTime,
MinExecutionTime = executionTime
},
(k, existing) =>
{
existing.LastExecution = now;
existing.ExecutionCount++;
existing.TotalExecutionTime += executionTime;
existing.MaxExecutionTime = existing.MaxExecutionTime > executionTime ? existing.MaxExecutionTime : executionTime;
existing.MinExecutionTime = existing.MinExecutionTime < executionTime ? existing.MinExecutionTime : executionTime;
return existing;
});
// Check for potential loop conditions
var timeSinceFirst = now - tracker.FirstExecution;
var executionsPerMinute = tracker.ExecutionCount / Math.Max(timeSinceFirst.TotalMinutes, 0.1);
var isLoopDetected = false;
var isCriticalAlert = false;
var reasons = new List<string>();
var sentryTags = new Dictionary<string, string>();
var sentryExtras = new Dictionary<string, object>();
// Check execution frequency
if (executionsPerMinute > 20)
{
isLoopDetected = true;
reasons.Add($"High frequency: {executionsPerMinute:F1} executions/minute");
if (executionsPerMinute > 50) // Critical frequency threshold
{
isCriticalAlert = true;
sentryTags["alert_level"] = "critical";
sentryTags["issue_type"] = "high_frequency_query";
}
}
// Check total execution count in window
if (tracker.ExecutionCount > _settings.MaxQueryExecutionsPerWindow)
{
isLoopDetected = true;
reasons.Add($"High count: {tracker.ExecutionCount} executions in {timeSinceFirst.TotalMinutes:F1} minutes");
if (tracker.ExecutionCount > _settings.SentryAlertThreshold * _settings.MaxQueryExecutionsPerWindow)
{
isCriticalAlert = true;
sentryTags["alert_level"] = "critical";
sentryTags["issue_type"] = "high_execution_count";
}
}
// Check for rapid successive executions
if (tracker.ExecutionCount > 5 && timeSinceFirst.TotalSeconds < 10)
{
isLoopDetected = true;
isCriticalAlert = true;
reasons.Add($"Rapid execution: {tracker.ExecutionCount} executions in {timeSinceFirst.TotalSeconds:F1} seconds");
sentryTags["alert_level"] = "critical";
sentryTags["issue_type"] = "rapid_execution";
}
// Check for consistently slow queries
if (tracker.ExecutionCount > 3 && tracker.AverageExecutionTime.TotalMilliseconds > 1000)
{
isLoopDetected = true;
reasons.Add($"Consistently slow: {tracker.AverageExecutionTime.TotalMilliseconds:F0}ms average");
if (tracker.AverageExecutionTime > TimeSpan.FromSeconds(5)) // Critical slow query threshold
{
isCriticalAlert = true;
sentryTags["alert_level"] = "critical";
sentryTags["issue_type"] = "slow_query";
}
}
// Prepare Sentry data
sentryTags["repository"] = repositoryName;
sentryTags["method"] = methodName;
sentryTags["query_pattern"] = queryPattern;
sentryTags["environment"] = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Unknown";
sentryExtras["execution_count"] = tracker.ExecutionCount;
sentryExtras["executions_per_minute"] = executionsPerMinute;
sentryExtras["average_execution_time_ms"] = tracker.AverageExecutionTime.TotalMilliseconds;
sentryExtras["min_execution_time_ms"] = tracker.MinExecutionTime.TotalMilliseconds;
sentryExtras["max_execution_time_ms"] = tracker.MaxExecutionTime.TotalMilliseconds;
sentryExtras["total_execution_time_ms"] = tracker.TotalExecutionTime.TotalMilliseconds;
sentryExtras["first_execution"] = tracker.FirstExecution.ToString("yyyy-MM-dd HH:mm:ss.fff");
sentryExtras["last_execution"] = tracker.LastExecution.ToString("yyyy-MM-dd HH:mm:ss.fff");
sentryExtras["time_window_minutes"] = timeSinceFirst.TotalMinutes;
sentryExtras["detection_reasons"] = string.Join("; ", reasons);
if (isLoopDetected)
{
_logger.LogWarning(
"[SQL-LOOP-DETECTED] {Repository}.{Method} | Pattern: {Pattern} | Count: {Count} | Reasons: {Reasons} | Avg Time: {AvgTime}ms",
repositoryName, methodName, queryPattern, tracker.ExecutionCount,
string.Join(", ", reasons), tracker.AverageExecutionTime.TotalMilliseconds);
// Log detailed execution history
_logger.LogWarning(
"[SQL-LOOP-DETAILS] {Repository}.{Method} | First: {First} | Last: {Last} | Min: {Min}ms | Max: {Max}ms | Total: {Total}ms",
repositoryName, methodName, tracker.FirstExecution.ToString("HH:mm:ss.fff"),
tracker.LastExecution.ToString("HH:mm:ss.fff"), tracker.MinExecutionTime.TotalMilliseconds,
tracker.MaxExecutionTime.TotalMilliseconds, tracker.TotalExecutionTime.TotalMilliseconds);
}
// Send to Sentry for critical alerts
if (isCriticalAlert)
{
SendCriticalAlertToSentry(repositoryName, methodName, queryPattern, reasons, sentryTags, sentryExtras);
}
else if (isLoopDetected)
{
SendWarningToSentry(repositoryName, methodName, queryPattern, reasons, sentryTags, sentryExtras);
}
return isLoopDetected;
}
/// <summary>
/// Sends a critical alert to Sentry for immediate attention
/// </summary>
private void SendCriticalAlertToSentry(string repositoryName, string methodName, string queryPattern,
List<string> reasons, Dictionary<string, string> tags, Dictionary<string, object> extras)
{
try
{
var message = $"CRITICAL SQL Loop Detected: {repositoryName}.{methodName}";
var exception = new InvalidOperationException($"Potential infinite SQL loop detected: {string.Join(", ", reasons)}");
// Add SQL-specific data to exception
exception.Data["Repository"] = repositoryName;
exception.Data["Method"] = methodName;
exception.Data["QueryPattern"] = queryPattern;
exception.Data["DetectionReasons"] = string.Join("; ", reasons);
var sentryId = SentrySdk.CaptureException(exception, scope =>
{
// Set tags for filtering and grouping
foreach (var tag in tags)
{
scope.SetTag(tag.Key, tag.Value);
}
// Set extra data for debugging
foreach (var extra in extras)
{
scope.SetExtra(extra.Key, extra.Value);
}
// Set fingerprint for better grouping
scope.SetFingerprint(new[] { "sql-loop-detection", repositoryName, methodName });
// Set level
scope.Level = SentryLevel.Error;
// Add breadcrumb
scope.AddBreadcrumb(
message: $"Critical SQL loop detected in {repositoryName}.{methodName}",
category: "sql-monitoring",
level: BreadcrumbLevel.Error,
data: new Dictionary<string, string>
{
["query_pattern"] = queryPattern,
["execution_count"] = extras["execution_count"].ToString(),
["executions_per_minute"] = extras["executions_per_minute"].ToString()
}
);
// Set user context if available
scope.SetExtra("repository", repositoryName);
scope.SetExtra("method", methodName);
scope.SetExtra("query_pattern", queryPattern);
scope.SetExtra("detection_time", DateTime.UtcNow);
scope.SetExtra("alert_type", "critical_loop_detection");
});
_logger.LogError(
"[SENTRY-CRITICAL] Sent critical SQL loop alert to Sentry: {SentryId} | {Repository}.{Method} | {Reasons}",
sentryId, repositoryName, methodName, string.Join(", ", reasons));
}
catch (Exception ex)
{
_logger.LogError(ex, "[SENTRY-ERROR] Failed to send critical alert to Sentry for {Repository}.{Method}",
repositoryName, methodName);
}
}
/// <summary>
/// Sends a warning to Sentry for monitoring purposes
/// </summary>
private void SendWarningToSentry(string repositoryName, string methodName, string queryPattern,
List<string> reasons, Dictionary<string, string> tags, Dictionary<string, object> extras)
{
try
{
var message = $"SQL Performance Warning: {repositoryName}.{methodName}";
var sentryId = SentrySdk.CaptureMessage(message, scope =>
{
// Set tags for filtering and grouping
foreach (var tag in tags)
{
scope.SetTag(tag.Key, tag.Value);
}
// Set extra data for debugging
foreach (var extra in extras)
{
scope.SetExtra(extra.Key, extra.Value);
}
// Set fingerprint for better grouping
scope.SetFingerprint(new[] { "sql-performance-warning", repositoryName, methodName });
// Set level
scope.Level = SentryLevel.Warning;
// Add breadcrumb
scope.AddBreadcrumb(
message: $"SQL performance warning in {repositoryName}.{methodName}",
category: "sql-monitoring",
level: BreadcrumbLevel.Warning,
data: new Dictionary<string, string>
{
["query_pattern"] = queryPattern,
["execution_count"] = extras["execution_count"].ToString(),
["executions_per_minute"] = extras["executions_per_minute"].ToString()
}
);
// Set context
scope.SetExtra("repository", repositoryName);
scope.SetExtra("method", methodName);
scope.SetExtra("query_pattern", queryPattern);
scope.SetExtra("detection_time", DateTime.UtcNow);
scope.SetExtra("alert_type", "performance_warning");
});
_logger.LogWarning(
"[SENTRY-WARNING] Sent SQL performance warning to Sentry: {SentryId} | {Repository}.{Method} | {Reasons}",
sentryId, repositoryName, methodName, string.Join(", ", reasons));
}
catch (Exception ex)
{
_logger.LogError(ex, "[SENTRY-ERROR] Failed to send warning to Sentry for {Repository}.{Method}",
repositoryName, methodName);
}
}
/// <summary>
/// Sends a custom performance metric to Sentry
/// </summary>
public void SendPerformanceMetricToSentry(string repositoryName, string methodName, string metricName,
double value, Dictionary<string, string> tags = null)
{
try
{
var sentryTags = tags ?? new Dictionary<string, string>();
sentryTags["repository"] = repositoryName;
sentryTags["method"] = methodName;
sentryTags["metric_name"] = metricName;
SentrySdk.AddBreadcrumb(
message: $"SQL Performance Metric: {metricName} = {value}",
category: "sql-performance",
level: BreadcrumbLevel.Info,
data: new Dictionary<string, string>
{
["repository"] = repositoryName,
["method"] = methodName,
["metric_name"] = metricName,
["value"] = value.ToString()
});
_logger.LogDebug("[SENTRY-METRIC] Sent performance metric to Sentry: {Metric} = {Value} for {Repository}.{Method}",
metricName, value, repositoryName, methodName);
}
catch (Exception ex)
{
_logger.LogError(ex, "[SENTRY-ERROR] Failed to send performance metric to Sentry");
}
}
/// <summary>
/// Gets current statistics for all tracked queries
/// </summary>
public Dictionary<string, QueryExecutionStats> GetQueryStatistics()
{
var stats = new Dictionary<string, QueryExecutionStats>();
var now = DateTime.UtcNow;
foreach (var kvp in _queryTrackers)
{
var tracker = kvp.Value;
var timeSinceFirst = now - tracker.FirstExecution;
stats[kvp.Key] = new QueryExecutionStats
{
RepositoryName = tracker.RepositoryName,
MethodName = tracker.MethodName,
QueryPattern = tracker.QueryPattern,
ExecutionCount = tracker.ExecutionCount,
FirstExecution = tracker.FirstExecution,
LastExecution = tracker.LastExecution,
AverageExecutionTime = tracker.AverageExecutionTime,
MinExecutionTime = tracker.MinExecutionTime,
MaxExecutionTime = tracker.MaxExecutionTime,
ExecutionsPerMinute = tracker.ExecutionCount / Math.Max(timeSinceFirst.TotalMinutes, 0.1),
IsActive = timeSinceFirst < TimeSpan.FromSeconds(_settings.LoopDetectionWindowSeconds)
};
}
return stats;
}
/// <summary>
/// Clears all tracking data
/// </summary>
public void ClearAllTracking()
{
_queryTrackers.Clear();
_logger.LogInformation("[SQL-LOOP-DETECTION] All tracking data cleared");
}
private void CleanupOldTrackers(object? state)
{
var now = DateTime.UtcNow;
var keysToRemove = new List<string>();
foreach (var kvp in _queryTrackers)
{
var timeSinceLastExecution = now - kvp.Value.LastExecution;
// Use configurable retention period for monitoring dashboard
// This allows users to see query statistics even if queries haven't been executed recently
var retentionPeriod = TimeSpan.FromMinutes(_settings.DataRetentionMinutes);
if (timeSinceLastExecution > retentionPeriod)
{
keysToRemove.Add(kvp.Key);
}
}
foreach (var key in keysToRemove)
{
_queryTrackers.TryRemove(key, out _);
}
if (keysToRemove.Count > 0)
{
_logger.LogDebug("[SQL-MONITORING] Cleaned up {Count} old trackers (retention: {RetentionMinutes} minutes)", keysToRemove.Count, _settings.DataRetentionMinutes);
}
}
/// <summary>
/// Sends slow query alert to Sentry asynchronously
/// </summary>
public async Task SendSlowQueryAlertAsync(string repositoryName, string methodName, string queryPattern, TimeSpan executionTime)
{
try
{
var message = $"Slow SQL Query: {repositoryName}.{methodName}";
var exception = new TimeoutException($"SQL query took {executionTime.TotalMilliseconds:F0}ms to execute");
var sentryId = SentrySdk.CaptureException(exception, scope =>
{
scope.SetTag("repository", repositoryName);
scope.SetTag("method", methodName);
scope.SetTag("alert_type", "slow_query");
scope.SetTag("environment", Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Unknown");
scope.SetExtra("query_pattern", queryPattern);
scope.SetExtra("execution_time_ms", executionTime.TotalMilliseconds);
scope.SetExtra("threshold_ms", 2000);
scope.SetFingerprint(new[] { "slow-query", repositoryName, methodName });
scope.Level = SentryLevel.Warning;
scope.AddBreadcrumb(
message: $"Slow SQL query in {repositoryName}.{methodName}",
category: "sql-monitoring",
level: BreadcrumbLevel.Warning,
data: new Dictionary<string, string>
{
["query_pattern"] = queryPattern,
["execution_time_ms"] = executionTime.TotalMilliseconds.ToString()
}
);
});
_logger.LogWarning(
"[SENTRY-SLOW-QUERY] Sent slow query alert to Sentry: {SentryId} | {Repository}.{Method} | {Time}ms",
sentryId, repositoryName, methodName, executionTime.TotalMilliseconds);
}
catch (Exception ex)
{
_logger.LogError(ex, "[SENTRY-ERROR] Failed to send slow query alert to Sentry");
}
}
/// <summary>
/// Sends SQL error alert to Sentry asynchronously
/// </summary>
public async Task SendSqlErrorAlertAsync(string repositoryName, string methodName, string queryPattern, TimeSpan executionTime, Exception exception)
{
try
{
var sentryId = SentrySdk.CaptureException(exception, scope =>
{
scope.SetTag("repository", repositoryName);
scope.SetTag("method", methodName);
scope.SetTag("alert_type", "sql_error");
scope.SetTag("environment", Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Unknown");
scope.SetExtra("query_pattern", queryPattern);
scope.SetExtra("execution_time_ms", executionTime.TotalMilliseconds);
scope.SetExtra("error_type", exception.GetType().Name);
scope.SetFingerprint(new[] { "sql-error", repositoryName, methodName, exception.GetType().Name });
scope.Level = SentryLevel.Error;
scope.AddBreadcrumb(
message: $"SQL error in {repositoryName}.{methodName}",
category: "sql-monitoring",
level: BreadcrumbLevel.Error,
data: new Dictionary<string, string>
{
["query_pattern"] = queryPattern,
["execution_time_ms"] = executionTime.TotalMilliseconds.ToString(),
["error_type"] = exception.GetType().Name
}
);
});
_logger.LogError(
"[SENTRY-SQL-ERROR] Sent SQL error alert to Sentry: {SentryId} | {Repository}.{Method} | {Error}",
sentryId, repositoryName, methodName, exception.Message);
}
catch (Exception ex)
{
_logger.LogError(ex, "[SENTRY-ERROR] Failed to send SQL error alert to Sentry");
}
}
/// <summary>
/// Checks if monitoring is enabled globally
/// </summary>
public bool IsMonitoringEnabled()
{
return _settings.Enabled;
}
/// <summary>
/// Checks if logging is enabled
/// </summary>
public bool IsLoggingEnabled()
{
return _settings.LoggingEnabled;
}
/// <summary>
/// Checks if Sentry integration is enabled
/// </summary>
public bool IsSentryEnabled()
{
return _settings.SentryEnabled;
}
/// <summary>
/// Checks if loop detection is enabled
/// </summary>
public bool IsLoopDetectionEnabled()
{
return _settings.LoopDetectionEnabled;
}
/// <summary>
/// Checks if performance monitoring is enabled
/// </summary>
public bool IsPerformanceMonitoringEnabled()
{
return _settings.PerformanceMonitoringEnabled;
}
/// <summary>
/// Checks if a query should be logged based on configuration
/// </summary>
public bool ShouldLogQuery(TimeSpan executionTime)
{
if (!_settings.LoggingEnabled) return false;
if (_settings.LogErrorsOnly) return false; // Only log errors, not normal queries
if (_settings.LogSlowQueriesOnly)
{
return executionTime.TotalMilliseconds > _settings.SlowQueryThresholdMs;
}
return true; // Log all queries if logging is enabled
}
public void Dispose()
{
_cleanupTimer?.Dispose();
}
private class QueryExecutionTracker
{
public string RepositoryName { get; set; } = string.Empty;
public string MethodName { get; set; } = string.Empty;
public string QueryPattern { get; set; } = string.Empty;
public DateTime FirstExecution { get; set; }
public DateTime LastExecution { get; set; }
public int ExecutionCount { get; set; }
public TimeSpan TotalExecutionTime { get; set; }
public TimeSpan MaxExecutionTime { get; set; }
public TimeSpan MinExecutionTime { get; set; }
public TimeSpan AverageExecutionTime =>
ExecutionCount > 0 ? TimeSpan.FromTicks(TotalExecutionTime.Ticks / ExecutionCount) : TimeSpan.Zero;
}
}

View File

@@ -0,0 +1,221 @@
using System.Collections.Concurrent;
using Microsoft.Extensions.Logging;
namespace Managing.Infrastructure.Databases.PostgreSql;
/// <summary>
/// Service for detecting potential SQL query loops and performance issues
/// Monitors query patterns and execution frequency to identify problematic operations
/// </summary>
public class SqlLoopDetectionService : IDisposable
{
private readonly ILogger<SqlLoopDetectionService> _logger;
private readonly ConcurrentDictionary<string, QueryExecutionTracker> _queryTrackers;
private readonly Timer _cleanupTimer;
private readonly TimeSpan _trackingWindow = TimeSpan.FromMinutes(5);
private readonly int _maxExecutionsPerWindow = 10;
private readonly TimeSpan _cleanupInterval = TimeSpan.FromMinutes(1);
public SqlLoopDetectionService(ILogger<SqlLoopDetectionService> logger)
{
_logger = logger;
_queryTrackers = new ConcurrentDictionary<string, QueryExecutionTracker>();
// Setup cleanup timer to remove old tracking data
_cleanupTimer = new Timer(CleanupOldTrackers, null, _cleanupInterval, _cleanupInterval);
}
/// <summary>
/// Tracks a query execution and detects potential loops
/// </summary>
/// <param name="repositoryName">Name of the repository executing the query</param>
/// <param name="methodName">Name of the method executing the query</param>
/// <param name="queryPattern">Pattern or hash of the query being executed</param>
/// <param name="executionTime">Time taken to execute the query</param>
/// <returns>True if a potential loop is detected</returns>
public bool TrackQueryExecution(string repositoryName, string methodName, string queryPattern, TimeSpan executionTime)
{
var key = $"{repositoryName}.{methodName}.{queryPattern}";
var now = DateTime.UtcNow;
var tracker = _queryTrackers.AddOrUpdate(key,
new QueryExecutionTracker
{
RepositoryName = repositoryName,
MethodName = methodName,
QueryPattern = queryPattern,
FirstExecution = now,
LastExecution = now,
ExecutionCount = 1,
TotalExecutionTime = executionTime,
MaxExecutionTime = executionTime,
MinExecutionTime = executionTime
},
(k, existing) =>
{
existing.LastExecution = now;
existing.ExecutionCount++;
existing.TotalExecutionTime += executionTime;
existing.MaxExecutionTime = existing.MaxExecutionTime > executionTime ? existing.MaxExecutionTime : executionTime;
existing.MinExecutionTime = existing.MinExecutionTime < executionTime ? existing.MinExecutionTime : executionTime;
return existing;
});
// Check for potential loop conditions
var timeSinceFirst = now - tracker.FirstExecution;
var executionsPerMinute = tracker.ExecutionCount / Math.Max(timeSinceFirst.TotalMinutes, 0.1);
var isLoopDetected = false;
var reasons = new List<string>();
// Check execution frequency
if (executionsPerMinute > 20)
{
isLoopDetected = true;
reasons.Add($"High frequency: {executionsPerMinute:F1} executions/minute");
}
// Check total execution count in window
if (tracker.ExecutionCount > _maxExecutionsPerWindow)
{
isLoopDetected = true;
reasons.Add($"High count: {tracker.ExecutionCount} executions in {timeSinceFirst.TotalMinutes:F1} minutes");
}
// Check for rapid successive executions
if (tracker.ExecutionCount > 5 && timeSinceFirst.TotalSeconds < 10)
{
isLoopDetected = true;
reasons.Add($"Rapid execution: {tracker.ExecutionCount} executions in {timeSinceFirst.TotalSeconds:F1} seconds");
}
// Check for consistently slow queries
if (tracker.ExecutionCount > 3 && tracker.AverageExecutionTime.TotalMilliseconds > 1000)
{
isLoopDetected = true;
reasons.Add($"Consistently slow: {tracker.AverageExecutionTime.TotalMilliseconds:F0}ms average");
}
if (isLoopDetected)
{
_logger.LogWarning(
"[SQL-LOOP-DETECTED] {Repository}.{Method} | Pattern: {Pattern} | Count: {Count} | Reasons: {Reasons} | Avg Time: {AvgTime}ms",
repositoryName, methodName, queryPattern, tracker.ExecutionCount,
string.Join(", ", reasons), tracker.AverageExecutionTime.TotalMilliseconds);
// Log detailed execution history
_logger.LogWarning(
"[SQL-LOOP-DETAILS] {Repository}.{Method} | First: {First} | Last: {Last} | Min: {Min}ms | Max: {Max}ms | Total: {Total}ms",
repositoryName, methodName, tracker.FirstExecution.ToString("HH:mm:ss.fff"),
tracker.LastExecution.ToString("HH:mm:ss.fff"), tracker.MinExecutionTime.TotalMilliseconds,
tracker.MaxExecutionTime.TotalMilliseconds, tracker.TotalExecutionTime.TotalMilliseconds);
}
return isLoopDetected;
}
/// <summary>
/// Gets current statistics for all tracked queries
/// </summary>
public Dictionary<string, QueryExecutionStats> GetQueryStatistics()
{
var stats = new Dictionary<string, QueryExecutionStats>();
var now = DateTime.UtcNow;
foreach (var kvp in _queryTrackers)
{
var tracker = kvp.Value;
var timeSinceFirst = now - tracker.FirstExecution;
stats[kvp.Key] = new QueryExecutionStats
{
RepositoryName = tracker.RepositoryName,
MethodName = tracker.MethodName,
QueryPattern = tracker.QueryPattern,
ExecutionCount = tracker.ExecutionCount,
FirstExecution = tracker.FirstExecution,
LastExecution = tracker.LastExecution,
AverageExecutionTime = tracker.AverageExecutionTime,
MinExecutionTime = tracker.MinExecutionTime,
MaxExecutionTime = tracker.MaxExecutionTime,
ExecutionsPerMinute = tracker.ExecutionCount / Math.Max(timeSinceFirst.TotalMinutes, 0.1),
IsActive = timeSinceFirst < _trackingWindow
};
}
return stats;
}
/// <summary>
/// Clears all tracking data
/// </summary>
public void ClearAllTracking()
{
_queryTrackers.Clear();
_logger.LogInformation("[SQL-LOOP-DETECTION] All tracking data cleared");
}
private void CleanupOldTrackers(object? state)
{
var now = DateTime.UtcNow;
var keysToRemove = new List<string>();
foreach (var kvp in _queryTrackers)
{
var timeSinceLastExecution = now - kvp.Value.LastExecution;
if (timeSinceLastExecution > _trackingWindow)
{
keysToRemove.Add(kvp.Key);
}
}
foreach (var key in keysToRemove)
{
_queryTrackers.TryRemove(key, out _);
}
if (keysToRemove.Count > 0)
{
_logger.LogDebug("[SQL-LOOP-DETECTION] Cleaned up {Count} old trackers", keysToRemove.Count);
}
}
public void Dispose()
{
_cleanupTimer?.Dispose();
}
private class QueryExecutionTracker
{
public string RepositoryName { get; set; } = string.Empty;
public string MethodName { get; set; } = string.Empty;
public string QueryPattern { get; set; } = string.Empty;
public DateTime FirstExecution { get; set; }
public DateTime LastExecution { get; set; }
public int ExecutionCount { get; set; }
public TimeSpan TotalExecutionTime { get; set; }
public TimeSpan MaxExecutionTime { get; set; }
public TimeSpan MinExecutionTime { get; set; }
public TimeSpan AverageExecutionTime =>
ExecutionCount > 0 ? TimeSpan.FromTicks(TotalExecutionTime.Ticks / ExecutionCount) : TimeSpan.Zero;
}
}
/// <summary>
/// Statistics for query execution tracking
/// </summary>
public class QueryExecutionStats
{
public string RepositoryName { get; set; } = string.Empty;
public string MethodName { get; set; } = string.Empty;
public string QueryPattern { get; set; } = string.Empty;
public int ExecutionCount { get; set; }
public DateTime FirstExecution { get; set; }
public DateTime LastExecution { get; set; }
public TimeSpan AverageExecutionTime { get; set; }
public TimeSpan MinExecutionTime { get; set; }
public TimeSpan MaxExecutionTime { get; set; }
public double ExecutionsPerMinute { get; set; }
public bool IsActive { get; set; }
}

View File

@@ -0,0 +1,77 @@
namespace Managing.Infrastructure.Databases.PostgreSql;
/// <summary>
/// Configuration settings for SQL query monitoring and loop detection
/// </summary>
public class SqlMonitoringSettings
{
/// <summary>
/// Whether SQL monitoring is enabled globally (default: true)
/// </summary>
public bool Enabled { get; set; } = true;
/// <summary>
/// Whether SQL query logging is enabled (default: true)
/// </summary>
public bool LoggingEnabled { get; set; } = true;
/// <summary>
/// Whether Sentry integration is enabled (default: true)
/// </summary>
public bool SentryEnabled { get; set; } = true;
/// <summary>
/// Whether loop detection is enabled (default: true)
/// </summary>
public bool LoopDetectionEnabled { get; set; } = true;
/// <summary>
/// Whether performance monitoring is enabled (default: true)
/// </summary>
public bool PerformanceMonitoringEnabled { get; set; } = true;
/// <summary>
/// Time window for loop detection in seconds (default: 60)
/// </summary>
public int LoopDetectionWindowSeconds { get; set; } = 60;
/// <summary>
/// Maximum query executions per window for loop detection (default: 100)
/// </summary>
public int MaxQueryExecutionsPerWindow { get; set; } = 100;
/// <summary>
/// Maximum method executions per window for loop detection (default: 50)
/// </summary>
public int MaxMethodExecutionsPerWindow { get; set; } = 50;
/// <summary>
/// Threshold for long-running queries in milliseconds (default: 1000)
/// </summary>
public int LongRunningQueryThresholdMs { get; set; } = 1000;
/// <summary>
/// Threshold for Sentry alerts (default: 5)
/// </summary>
public int SentryAlertThreshold { get; set; } = 5;
/// <summary>
/// Threshold for slow queries in milliseconds (default: 2000)
/// </summary>
public int SlowQueryThresholdMs { get; set; } = 2000;
/// <summary>
/// Whether to log only slow queries (reduces overhead) (default: false)
/// </summary>
public bool LogSlowQueriesOnly { get; set; } = false;
/// <summary>
/// Whether to log only errors (minimal overhead) (default: false)
/// </summary>
public bool LogErrorsOnly { get; set; } = false;
/// <summary>
/// Data retention period in minutes for monitoring dashboard (default: 30)
/// </summary>
public int DataRetentionMinutes { get; set; } = 30;
}

View File

@@ -0,0 +1,425 @@
using System.Diagnostics;
using System.Text.Json;
using Microsoft.Extensions.Logging;
using Sentry; // required for SentrySdk, SentryLevel, and BreadcrumbLevel (unless already covered by a global using)
namespace Managing.Infrastructure.Databases.PostgreSql;
/// <summary>
/// Comprehensive SQL query logger for monitoring and debugging database operations
/// Provides detailed logging with timing, parameters, and performance metrics
/// </summary>
public class SqlQueryLogger : IDisposable
{
private readonly ILogger<SqlQueryLogger> _logger;
private readonly Stopwatch _stopwatch;
private readonly string _operationId;
private readonly DateTime _startTime;
private readonly string _methodName;
private readonly string _repositoryName;
private readonly Dictionary<string, object> _parameters;
private readonly List<string> _executedQueries;
private bool _disposed = false;
public SqlQueryLogger(ILogger<SqlQueryLogger> logger, string repositoryName, string methodName)
{
_logger = logger;
_repositoryName = repositoryName;
_methodName = methodName;
_operationId = Guid.NewGuid().ToString("N")[..8]; // Short ID for correlation
_startTime = DateTime.UtcNow;
_stopwatch = Stopwatch.StartNew();
_parameters = new Dictionary<string, object>();
_executedQueries = new List<string>();
}
/// <summary>
/// Logs the start of a database operation
/// </summary>
public void LogOperationStart(params (string name, object value)[] parameters)
{
foreach (var (name, value) in parameters)
{
_parameters[name] = value;
}
_logger.LogInformation(
"[SQL-OP-START] {OperationId} | {Repository}.{Method} | Started at {StartTime}",
_operationId, _repositoryName, _methodName, _startTime.ToString("HH:mm:ss.fff"));
}
/// <summary>
/// Logs a SQL query execution with timing and parameters
/// </summary>
public void LogQueryExecution(string query, TimeSpan executionTime, int? rowsAffected = null, Exception? exception = null)
{
_executedQueries.Add(query);
var logLevel = exception != null ? LogLevel.Error :
executionTime.TotalMilliseconds > 1000 ? LogLevel.Warning : LogLevel.Information;
var logMessage = exception != null
? "[SQL-QUERY-ERROR] {OperationId} | {Repository}.{Method} | Query failed after {ExecutionTime}ms | Error: {Error}"
: "[SQL-QUERY] {OperationId} | {Repository}.{Method} | Executed in {ExecutionTime}ms | Rows: {RowsAffected}";
var args = new object[]
{
_operationId, _repositoryName, _methodName, executionTime.TotalMilliseconds,
rowsAffected ?? 0
};
if (exception != null)
{
args[4] = exception.Message;
_logger.LogError(exception, logMessage, args);
// Send SQL error to Sentry
SendSqlErrorToSentry(query, executionTime, exception, rowsAffected);
}
else
{
_logger.Log(logLevel, logMessage, args);
// Send slow query alert to Sentry
if (executionTime.TotalMilliseconds > 2000) // Critical slow query threshold
{
SendSlowQueryToSentry(query, executionTime, rowsAffected);
}
else if (executionTime.TotalMilliseconds > 1000) // Warning threshold
{
SendSlowQueryWarningToSentry(query, executionTime, rowsAffected);
}
}
// Log query details for slow queries or errors
if (executionTime.TotalMilliseconds > 500 || exception != null)
{
_logger.LogWarning(
"[SQL-QUERY-DETAILS] {OperationId} | Query: {Query} | Parameters: {Parameters}",
_operationId,
TruncateQuery(query, 500),
JsonSerializer.Serialize(_parameters, new JsonSerializerOptions { WriteIndented = false }));
}
}
/// <summary>
/// Logs the completion of a database operation with summary
/// </summary>
public void LogOperationComplete(object? result = null, Exception? exception = null)
{
_stopwatch.Stop();
var totalTime = _stopwatch.Elapsed;
var logLevel = exception != null ? LogLevel.Error :
totalTime.TotalMilliseconds > 2000 ? LogLevel.Warning : LogLevel.Information;
var logMessage = exception != null
? "[SQL-OP-ERROR] {OperationId} | {Repository}.{Method} | Failed after {TotalTime}ms | Queries: {QueryCount} | Error: {Error}"
: "[SQL-OP-COMPLETE] {OperationId} | {Repository}.{Method} | Completed in {TotalTime}ms | Queries: {QueryCount} | Result: {ResultType}";
var args = new object[]
{
_operationId, _repositoryName, _methodName, totalTime.TotalMilliseconds,
_executedQueries.Count, result?.GetType().Name ?? "void"
};
if (exception != null)
{
args[5] = exception.Message;
_logger.LogError(exception, logMessage, args);
}
else
{
_logger.Log(logLevel, logMessage, args);
}
// Log operation summary for long-running operations
if (totalTime.TotalMilliseconds > 1000 || _executedQueries.Count > 5)
{
_logger.LogWarning(
"[SQL-OP-SUMMARY] {OperationId} | Parameters: {Parameters} | Query Count: {QueryCount} | Total Time: {TotalTime}ms",
_operationId,
JsonSerializer.Serialize(_parameters, new JsonSerializerOptions { WriteIndented = false }),
_executedQueries.Count,
totalTime.TotalMilliseconds);
}
}
/// <summary>
/// Logs potential loop detection based on query patterns
/// </summary>
public void LogPotentialLoopDetection(string queryPattern, int occurrenceCount)
{
_logger.LogWarning(
"[SQL-LOOP-DETECTED] {OperationId} | {Repository}.{Method} | Pattern '{Pattern}' executed {Count} times | Possible infinite loop!",
_operationId, _repositoryName, _methodName, queryPattern, occurrenceCount);
// Send critical alert to Sentry for loop detection
SendLoopDetectionToSentry(queryPattern, occurrenceCount);
}
/// <summary>
/// Sends loop detection alert to Sentry
/// </summary>
private void SendLoopDetectionToSentry(string queryPattern, int occurrenceCount)
{
try
{
var message = $"SQL Loop Detection: {_repositoryName}.{_methodName}";
var exception = new InvalidOperationException($"Potential infinite SQL loop detected: {queryPattern} executed {occurrenceCount} times");
var sentryId = SentrySdk.CaptureException(exception, scope =>
{
scope.SetTag("operation_id", _operationId);
scope.SetTag("repository", _repositoryName);
scope.SetTag("method", _methodName);
scope.SetTag("query_pattern", queryPattern);
scope.SetTag("alert_type", "sql_loop_detection");
scope.SetTag("environment", Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Unknown");
scope.SetExtra("occurrence_count", occurrenceCount);
scope.SetExtra("operation_duration_ms", _stopwatch.Elapsed.TotalMilliseconds);
scope.SetExtra("parameters", JsonSerializer.Serialize(_parameters, new JsonSerializerOptions { WriteIndented = false }));
scope.SetExtra("executed_queries_count", _executedQueries.Count);
scope.SetExtra("start_time", _startTime.ToString("yyyy-MM-dd HH:mm:ss.fff"));
scope.SetFingerprint(new[] { "sql-loop-detection", _repositoryName, _methodName, queryPattern });
scope.Level = SentryLevel.Error;
scope.AddBreadcrumb(
message: $"SQL loop detected in {_repositoryName}.{_methodName}",
category: "sql-monitoring",
level: BreadcrumbLevel.Error,
data: new Dictionary<string, string>
{
["query_pattern"] = queryPattern,
["occurrence_count"] = occurrenceCount.ToString(),
["operation_id"] = _operationId
}
);
scope.SetExtra("operation_id", _operationId);
scope.SetExtra("repository", _repositoryName);
scope.SetExtra("method", _methodName);
scope.SetExtra("query_pattern", queryPattern);
scope.SetExtra("occurrence_count", occurrenceCount);
scope.SetExtra("start_time", _startTime);
scope.SetExtra("duration_ms", _stopwatch.Elapsed.TotalMilliseconds);
scope.SetExtra("parameters", JsonSerializer.Serialize(_parameters));
});
_logger.LogError(
"[SENTRY-LOOP-ALERT] Sent loop detection alert to Sentry: {SentryId} | {Repository}.{Method} | Pattern: {Pattern} | Count: {Count}",
sentryId, _repositoryName, _methodName, queryPattern, occurrenceCount);
}
catch (Exception ex)
{
_logger.LogError(ex, "[SENTRY-ERROR] Failed to send loop detection alert to Sentry");
}
}
/// <summary>
/// Logs connection state changes
/// </summary>
public void LogConnectionStateChange(string state, TimeSpan? duration = null)
{
var message = duration.HasValue
? "[SQL-CONNECTION] {OperationId} | {Repository}.{Method} | Connection {State} (took {Duration}ms)"
: "[SQL-CONNECTION] {OperationId} | {Repository}.{Method} | Connection {State}";
var args = duration.HasValue
? new object[] { _operationId, _repositoryName, _methodName, state, duration.Value.TotalMilliseconds }
: new object[] { _operationId, _repositoryName, _methodName, state };
_logger.LogInformation(message, args);
}
/// <summary>
/// Sends SQL error to Sentry
/// </summary>
private void SendSqlErrorToSentry(string query, TimeSpan executionTime, Exception exception, int? rowsAffected)
{
try
{
var sentryId = SentrySdk.CaptureException(exception, scope =>
{
scope.SetTag("operation_id", _operationId);
scope.SetTag("repository", _repositoryName);
scope.SetTag("method", _methodName);
scope.SetTag("alert_type", "sql_error");
scope.SetTag("environment", Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Unknown");
scope.SetExtra("query", TruncateQuery(query, 1000));
scope.SetExtra("execution_time_ms", executionTime.TotalMilliseconds);
scope.SetExtra("rows_affected", rowsAffected ?? 0);
scope.SetExtra("parameters", JsonSerializer.Serialize(_parameters, new JsonSerializerOptions { WriteIndented = false }));
scope.SetExtra("operation_duration_ms", _stopwatch.Elapsed.TotalMilliseconds);
scope.SetFingerprint(new[] { "sql-error", _repositoryName, _methodName, exception.GetType().Name });
scope.Level = SentryLevel.Error;
scope.AddBreadcrumb(
message: $"SQL error in {_repositoryName}.{_methodName}",
category: "sql-monitoring",
level: BreadcrumbLevel.Error,
data: new Dictionary<string, string>
{
["query"] = TruncateQuery(query, 200),
["execution_time_ms"] = executionTime.TotalMilliseconds.ToString(),
["operation_id"] = _operationId
}
);
scope.SetExtra("operation_id", _operationId);
scope.SetExtra("repository", _repositoryName);
scope.SetExtra("method", _methodName);
scope.SetExtra("query", TruncateQuery(query, 1000));
scope.SetExtra("execution_time_ms", executionTime.TotalMilliseconds);
scope.SetExtra("rows_affected", rowsAffected ?? 0);
scope.SetExtra("error_type", exception.GetType().Name);
scope.SetExtra("error_message", exception.Message);
});
_logger.LogError(
"[SENTRY-SQL-ERROR] Sent SQL error to Sentry: {SentryId} | {Repository}.{Method} | {Error}",
sentryId, _repositoryName, _methodName, exception.Message);
}
catch (Exception ex)
{
_logger.LogError(ex, "[SENTRY-ERROR] Failed to send SQL error to Sentry");
}
}
/// <summary>
/// Sends critical slow query alert to Sentry
/// </summary>
private void SendSlowQueryToSentry(string query, TimeSpan executionTime, int? rowsAffected)
{
try
{
var message = $"Critical Slow SQL Query: {_repositoryName}.{_methodName}";
var exception = new TimeoutException($"SQL query took {executionTime.TotalMilliseconds:F0}ms to execute");
var sentryId = SentrySdk.CaptureException(exception, scope =>
{
scope.SetTag("operation_id", _operationId);
scope.SetTag("repository", _repositoryName);
scope.SetTag("method", _methodName);
scope.SetTag("alert_type", "slow_query_critical");
scope.SetTag("environment", Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Unknown");
scope.SetExtra("query", TruncateQuery(query, 1000));
scope.SetExtra("execution_time_ms", executionTime.TotalMilliseconds);
scope.SetExtra("rows_affected", rowsAffected ?? 0);
scope.SetExtra("parameters", JsonSerializer.Serialize(_parameters, new JsonSerializerOptions { WriteIndented = false }));
scope.SetExtra("operation_duration_ms", _stopwatch.Elapsed.TotalMilliseconds);
scope.SetFingerprint(new[] { "slow-query-critical", _repositoryName, _methodName });
scope.Level = SentryLevel.Error;
scope.AddBreadcrumb(
message: $"Critical slow SQL query in {_repositoryName}.{_methodName}",
category: "sql-monitoring",
level: BreadcrumbLevel.Error,
data: new Dictionary<string, string>
{
["query"] = TruncateQuery(query, 200),
["execution_time_ms"] = executionTime.TotalMilliseconds.ToString(),
["operation_id"] = _operationId
}
);
scope.SetExtra("operation_id", _operationId);
scope.SetExtra("repository", _repositoryName);
scope.SetExtra("method", _methodName);
scope.SetExtra("query", TruncateQuery(query, 1000));
scope.SetExtra("execution_time_ms", executionTime.TotalMilliseconds);
scope.SetExtra("rows_affected", rowsAffected ?? 0);
scope.SetExtra("threshold_ms", 2000);
scope.SetExtra("severity", "critical");
});
_logger.LogError(
"[SENTRY-SLOW-QUERY] Sent critical slow query alert to Sentry: {SentryId} | {Repository}.{Method} | {Time}ms",
sentryId, _repositoryName, _methodName, executionTime.TotalMilliseconds);
}
catch (Exception ex)
{
_logger.LogError(ex, "[SENTRY-ERROR] Failed to send slow query alert to Sentry");
}
}
/// <summary>
/// Sends slow query warning to Sentry
/// </summary>
private void SendSlowQueryWarningToSentry(string query, TimeSpan executionTime, int? rowsAffected)
{
try
{
var message = $"Slow SQL Query Warning: {_repositoryName}.{_methodName}";
var sentryId = SentrySdk.CaptureMessage(message, scope =>
{
scope.SetTag("operation_id", _operationId);
scope.SetTag("repository", _repositoryName);
scope.SetTag("method", _methodName);
scope.SetTag("alert_type", "slow_query_warning");
scope.SetTag("environment", Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Unknown");
scope.SetExtra("query", TruncateQuery(query, 1000));
scope.SetExtra("execution_time_ms", executionTime.TotalMilliseconds);
scope.SetExtra("rows_affected", rowsAffected ?? 0);
scope.SetExtra("parameters", JsonSerializer.Serialize(_parameters, new JsonSerializerOptions { WriteIndented = false }));
scope.SetExtra("operation_duration_ms", _stopwatch.Elapsed.TotalMilliseconds);
scope.SetFingerprint(new[] { "slow-query-warning", _repositoryName, _methodName });
scope.Level = SentryLevel.Warning;
scope.AddBreadcrumb(
message: $"Slow SQL query warning in {_repositoryName}.{_methodName}",
category: "sql-monitoring",
level: BreadcrumbLevel.Warning,
data: new Dictionary<string, string>
{
["query"] = TruncateQuery(query, 200),
["execution_time_ms"] = executionTime.TotalMilliseconds.ToString(),
["operation_id"] = _operationId
}
);
scope.SetExtra("operation_id", _operationId);
scope.SetExtra("repository", _repositoryName);
scope.SetExtra("method", _methodName);
scope.SetExtra("query", TruncateQuery(query, 1000));
scope.SetExtra("execution_time_ms", executionTime.TotalMilliseconds);
scope.SetExtra("rows_affected", rowsAffected ?? 0);
scope.SetExtra("threshold_ms", 1000);
scope.SetExtra("severity", "warning");
});
_logger.LogWarning(
"[SENTRY-SLOW-QUERY-WARNING] Sent slow query warning to Sentry: {SentryId} | {Repository}.{Method} | {Time}ms",
sentryId, _repositoryName, _methodName, executionTime.TotalMilliseconds);
}
catch (Exception ex)
{
_logger.LogError(ex, "[SENTRY-ERROR] Failed to send slow query warning to Sentry");
}
}
private static string TruncateQuery(string query, int maxLength)
{
if (string.IsNullOrEmpty(query) || query.Length <= maxLength)
return query;
return query[..maxLength] + "... [TRUNCATED]";
}
public void Dispose()
{
if (!_disposed)
{
_stopwatch?.Stop();
_disposed = true;
}
}
}

View File

@@ -9,8 +9,8 @@ test('GMX Position Closing', async (t) => {
const result = await closeGmxPositionImpl(
sdk,
"ADA",
TradeDirection.Long
"ETH",
TradeDirection.Short
)
console.log('Position closing result:', result)
assert.ok(result, 'Position closing result should be defined')

View File

@@ -48,7 +48,7 @@ const LogIn = () => {
.user_CreateToken({
address: walletAddress,
message: message,
name: form.name,
name: user?.id,
signature: signature,
})
.then((data) => {
@@ -101,19 +101,6 @@ const LogIn = () => {
action="#"
onSubmit={handleSubmit(onSubmit)}
>
<div>
<label
htmlFor="name"
className="dark:text-white block mb-2 text-sm font-medium text-gray-900"
hidden={true}
>
Name
</label>
<input
className="bg-gray-50 border border-gray-300 text-gray-900 sm:text-sm rounded-lg focus:ring-primary-600 focus:border-primary-600 block w-full p-2.5 dark:bg-gray-700 dark:border-gray-600 dark:placeholder-gray-400 dark:text-white dark:focus:ring-blue-500 dark:focus:border-blue-500"
{...register('name')}
></input>
</div>
<button
type="submit"
className="btn bg-primary w-full text-white bg-primary-600 hover:bg-primary-700 focus:ring-4 focus:outline-none focus:ring-primary-300 font-medium rounded-lg text-sm px-5 py-2.5 text-center dark:bg-primary-600 dark:hover:bg-primary-700 dark:focus:ring-primary-800"

View File

@@ -69,7 +69,7 @@ export default function Table({
) as TableInstanceWithHooks<any>
// Calculez le total des valeurs dans la colonne USD
const total = data
const total = data && showTotal
? data
.reduce((sum: number, row: any) => {
return sum + (row.value || 0) // Si la valeur est undefined = 0

View File

@@ -3062,6 +3062,224 @@ export class SettingsClient extends AuthorizedApiBase {
}
}
export class SqlMonitoringClient extends AuthorizedApiBase {
private http: { fetch(url: RequestInfo, init?: RequestInit): Promise<Response> };
private baseUrl: string;
protected jsonParseReviver: ((key: string, value: any) => any) | undefined = undefined;
constructor(configuration: IConfig, baseUrl?: string, http?: { fetch(url: RequestInfo, init?: RequestInit): Promise<Response> }) {
super(configuration);
this.http = http ? http : window as any;
this.baseUrl = baseUrl ?? "http://localhost:5000";
}
sqlMonitoring_GetQueryStatistics(): Promise<FileResponse> {
let url_ = this.baseUrl + "/api/SqlMonitoring/statistics";
url_ = url_.replace(/[?&]$/, "");
let options_: RequestInit = {
method: "GET",
headers: {
"Accept": "application/octet-stream"
}
};
return this.transformOptions(options_).then(transformedOptions_ => {
return this.http.fetch(url_, transformedOptions_);
}).then((_response: Response) => {
return this.processSqlMonitoring_GetQueryStatistics(_response);
});
}
protected processSqlMonitoring_GetQueryStatistics(response: Response): Promise<FileResponse> {
const status = response.status;
let _headers: any = {}; if (response.headers && response.headers.forEach) { response.headers.forEach((v: any, k: any) => _headers[k] = v); };
if (status === 200 || status === 206) {
const contentDisposition = response.headers ? response.headers.get("content-disposition") : undefined;
let fileNameMatch = contentDisposition ? /filename\*=(?:(\\?['"])(.*?)\1|(?:[^\s]+'.*?')?([^;\n]*))/g.exec(contentDisposition) : undefined;
let fileName = fileNameMatch && fileNameMatch.length > 1 ? fileNameMatch[3] || fileNameMatch[2] : undefined;
if (fileName) {
fileName = decodeURIComponent(fileName);
} else {
fileNameMatch = contentDisposition ? /filename="?([^"]*?)"?(;|$)/g.exec(contentDisposition) : undefined;
fileName = fileNameMatch && fileNameMatch.length > 1 ? fileNameMatch[1] : undefined;
}
return response.blob().then(blob => { return { fileName: fileName, data: blob, status: status, headers: _headers }; });
} else if (status !== 200 && status !== 204) {
return response.text().then((_responseText) => {
return throwException("An unexpected server error occurred.", status, _responseText, _headers);
});
}
return Promise.resolve<FileResponse>(null as any);
}
sqlMonitoring_GetAlerts(): Promise<FileResponse> {
let url_ = this.baseUrl + "/api/SqlMonitoring/alerts";
url_ = url_.replace(/[?&]$/, "");
let options_: RequestInit = {
method: "GET",
headers: {
"Accept": "application/octet-stream"
}
};
return this.transformOptions(options_).then(transformedOptions_ => {
return this.http.fetch(url_, transformedOptions_);
}).then((_response: Response) => {
return this.processSqlMonitoring_GetAlerts(_response);
});
}
protected processSqlMonitoring_GetAlerts(response: Response): Promise<FileResponse> {
const status = response.status;
let _headers: any = {}; if (response.headers && response.headers.forEach) { response.headers.forEach((v: any, k: any) => _headers[k] = v); };
if (status === 200 || status === 206) {
const contentDisposition = response.headers ? response.headers.get("content-disposition") : undefined;
let fileNameMatch = contentDisposition ? /filename\*=(?:(\\?['"])(.*?)\1|(?:[^\s]+'.*?')?([^;\n]*))/g.exec(contentDisposition) : undefined;
let fileName = fileNameMatch && fileNameMatch.length > 1 ? fileNameMatch[3] || fileNameMatch[2] : undefined;
if (fileName) {
fileName = decodeURIComponent(fileName);
} else {
fileNameMatch = contentDisposition ? /filename="?([^"]*?)"?(;|$)/g.exec(contentDisposition) : undefined;
fileName = fileNameMatch && fileNameMatch.length > 1 ? fileNameMatch[1] : undefined;
}
return response.blob().then(blob => { return { fileName: fileName, data: blob, status: status, headers: _headers }; });
} else if (status !== 200 && status !== 204) {
return response.text().then((_responseText) => {
return throwException("An unexpected server error occurred.", status, _responseText, _headers);
});
}
return Promise.resolve<FileResponse>(null as any);
}
sqlMonitoring_ClearTracking(): Promise<FileResponse> {
let url_ = this.baseUrl + "/api/SqlMonitoring/clear-tracking";
url_ = url_.replace(/[?&]$/, "");
let options_: RequestInit = {
method: "POST",
headers: {
"Accept": "application/octet-stream"
}
};
return this.transformOptions(options_).then(transformedOptions_ => {
return this.http.fetch(url_, transformedOptions_);
}).then((_response: Response) => {
return this.processSqlMonitoring_ClearTracking(_response);
});
}
protected processSqlMonitoring_ClearTracking(response: Response): Promise<FileResponse> {
const status = response.status;
let _headers: any = {}; if (response.headers && response.headers.forEach) { response.headers.forEach((v: any, k: any) => _headers[k] = v); };
if (status === 200 || status === 206) {
const contentDisposition = response.headers ? response.headers.get("content-disposition") : undefined;
let fileNameMatch = contentDisposition ? /filename\*=(?:(\\?['"])(.*?)\1|(?:[^\s]+'.*?')?([^;\n]*))/g.exec(contentDisposition) : undefined;
let fileName = fileNameMatch && fileNameMatch.length > 1 ? fileNameMatch[3] || fileNameMatch[2] : undefined;
if (fileName) {
fileName = decodeURIComponent(fileName);
} else {
fileNameMatch = contentDisposition ? /filename="?([^"]*?)"?(;|$)/g.exec(contentDisposition) : undefined;
fileName = fileNameMatch && fileNameMatch.length > 1 ? fileNameMatch[1] : undefined;
}
return response.blob().then(blob => { return { fileName: fileName, data: blob, status: status, headers: _headers }; });
} else if (status !== 200 && status !== 204) {
return response.text().then((_responseText) => {
return throwException("An unexpected server error occurred.", status, _responseText, _headers);
});
}
return Promise.resolve<FileResponse>(null as any);
}
sqlMonitoring_GetQueryDetails(repositoryName: string, methodName: string): Promise<FileResponse> {
let url_ = this.baseUrl + "/api/SqlMonitoring/query-details/{repositoryName}/{methodName}";
if (repositoryName === undefined || repositoryName === null)
throw new Error("The parameter 'repositoryName' must be defined.");
url_ = url_.replace("{repositoryName}", encodeURIComponent("" + repositoryName));
if (methodName === undefined || methodName === null)
throw new Error("The parameter 'methodName' must be defined.");
url_ = url_.replace("{methodName}", encodeURIComponent("" + methodName));
url_ = url_.replace(/[?&]$/, "");
let options_: RequestInit = {
method: "GET",
headers: {
"Accept": "application/octet-stream"
}
};
return this.transformOptions(options_).then(transformedOptions_ => {
return this.http.fetch(url_, transformedOptions_);
}).then((_response: Response) => {
return this.processSqlMonitoring_GetQueryDetails(_response);
});
}
protected processSqlMonitoring_GetQueryDetails(response: Response): Promise<FileResponse> {
const status = response.status;
let _headers: any = {}; if (response.headers && response.headers.forEach) { response.headers.forEach((v: any, k: any) => _headers[k] = v); };
if (status === 200 || status === 206) {
const contentDisposition = response.headers ? response.headers.get("content-disposition") : undefined;
let fileNameMatch = contentDisposition ? /filename\*=(?:(\\?['"])(.*?)\1|(?:[^\s]+'.*?')?([^;\n]*))/g.exec(contentDisposition) : undefined;
let fileName = fileNameMatch && fileNameMatch.length > 1 ? fileNameMatch[3] || fileNameMatch[2] : undefined;
if (fileName) {
fileName = decodeURIComponent(fileName);
} else {
fileNameMatch = contentDisposition ? /filename="?([^"]*?)"?(;|$)/g.exec(contentDisposition) : undefined;
fileName = fileNameMatch && fileNameMatch.length > 1 ? fileNameMatch[1] : undefined;
}
return response.blob().then(blob => { return { fileName: fileName, data: blob, status: status, headers: _headers }; });
} else if (status !== 200 && status !== 204) {
return response.text().then((_responseText) => {
return throwException("An unexpected server error occurred.", status, _responseText, _headers);
});
}
return Promise.resolve<FileResponse>(null as any);
}
sqlMonitoring_GetMonitoringHealth(): Promise<FileResponse> {
let url_ = this.baseUrl + "/api/SqlMonitoring/health";
url_ = url_.replace(/[?&]$/, "");
let options_: RequestInit = {
method: "GET",
headers: {
"Accept": "application/octet-stream"
}
};
return this.transformOptions(options_).then(transformedOptions_ => {
return this.http.fetch(url_, transformedOptions_);
}).then((_response: Response) => {
return this.processSqlMonitoring_GetMonitoringHealth(_response);
});
}
protected processSqlMonitoring_GetMonitoringHealth(response: Response): Promise<FileResponse> {
const status = response.status;
let _headers: any = {}; if (response.headers && response.headers.forEach) { response.headers.forEach((v: any, k: any) => _headers[k] = v); };
if (status === 200 || status === 206) {
const contentDisposition = response.headers ? response.headers.get("content-disposition") : undefined;
let fileNameMatch = contentDisposition ? /filename\*=(?:(\\?['"])(.*?)\1|(?:[^\s]+'.*?')?([^;\n]*))/g.exec(contentDisposition) : undefined;
let fileName = fileNameMatch && fileNameMatch.length > 1 ? fileNameMatch[3] || fileNameMatch[2] : undefined;
if (fileName) {
fileName = decodeURIComponent(fileName);
} else {
fileNameMatch = contentDisposition ? /filename="?([^"]*?)"?(;|$)/g.exec(contentDisposition) : undefined;
fileName = fileNameMatch && fileNameMatch.length > 1 ? fileNameMatch[1] : undefined;
}
return response.blob().then(blob => { return { fileName: fileName, data: blob, status: status, headers: _headers }; });
} else if (status !== 200 && status !== 204) {
return response.text().then((_responseText) => {
return throwException("An unexpected server error occurred.", status, _responseText, _headers);
});
}
return Promise.resolve<FileResponse>(null as any);
}
}
export class TradingClient extends AuthorizedApiBase {
private http: { fetch(url: RequestInfo, init?: RequestInit): Promise<Response> };
private baseUrl: string;

View File

@@ -0,0 +1,182 @@
import {useMutation, useQuery, useQueryClient} from '@tanstack/react-query'
import useApiUrlStore from '../app/store/apiStore'
import {SqlMonitoringClient} from '../generated/ManagingApi'
// Interface for SQL monitoring statistics
interface SqlMonitoringStats {
loopDetectionStats: Record<string, any>
contextStats: Record<string, any>
timestamp: string
totalTrackedQueries: number
activeQueries: number
}
// Interface for SQL monitoring alerts
interface SqlMonitoringAlert {
id: string
type: string
message: string
timestamp: string
repository: string
method: string
severity: string
}
// Interface for monitoring health
interface MonitoringHealth {
isEnabled: boolean
loggingEnabled: boolean
sentryEnabled: boolean
loopDetectionEnabled: boolean
performanceMonitoringEnabled: boolean
lastHealthCheck: string
totalAlerts: number
activeQueries: number
}
// Interface for query details
interface QueryDetail {
repository: string
method: string
queryPattern: string
executionCount: number
averageExecutionTime: number
lastExecution: string
isActive: boolean
}
// Hook for SQL monitoring statistics
export const useSqlMonitoringStats = () => {
const { apiUrl } = useApiUrlStore()
const sqlMonitoringClient = new SqlMonitoringClient({}, apiUrl)
return useQuery({
queryKey: ['sqlMonitoring', 'statistics'],
queryFn: async () => {
try {
const response = await sqlMonitoringClient.sqlMonitoring_GetQueryStatistics()
const text = await response.data.text()
const data = JSON.parse(text) as SqlMonitoringStats
// Ensure the data has the expected structure
return {
loopDetectionStats: data.loopDetectionStats || {},
contextStats: data.contextStats || {},
timestamp: data.timestamp || new Date().toISOString(),
totalTrackedQueries: data.totalTrackedQueries || 0,
activeQueries: data.activeQueries || 0,
}
} catch (error) {
console.error('Error fetching SQL monitoring statistics:', error)
throw error
}
},
refetchInterval: 30000, // Refresh every 30 seconds
})
}
// Hook for SQL monitoring alerts
export const useSqlMonitoringAlerts = () => {
const { apiUrl } = useApiUrlStore()
const sqlMonitoringClient = new SqlMonitoringClient({}, apiUrl)
return useQuery({
queryKey: ['sqlMonitoring', 'alerts'],
queryFn: async () => {
try {
const response = await sqlMonitoringClient.sqlMonitoring_GetAlerts()
const text = await response.data.text()
const data = JSON.parse(text) as SqlMonitoringAlert[]
// Ensure we return an array
return Array.isArray(data) ? data : []
} catch (error) {
console.error('Error fetching SQL monitoring alerts:', error)
throw error
}
},
refetchInterval: 15000, // Refresh every 15 seconds
})
}
// Hook for monitoring health
export const useSqlMonitoringHealth = () => {
const { apiUrl } = useApiUrlStore()
const sqlMonitoringClient = new SqlMonitoringClient({}, apiUrl)
return useQuery({
queryKey: ['sqlMonitoring', 'health'],
queryFn: async () => {
try {
const response = await sqlMonitoringClient.sqlMonitoring_GetMonitoringHealth()
const text = await response.data.text()
const data = JSON.parse(text) as MonitoringHealth
// Ensure the data has the expected structure
return {
isEnabled: data.isEnabled || false,
loggingEnabled: data.loggingEnabled || false,
sentryEnabled: data.sentryEnabled || false,
loopDetectionEnabled: data.loopDetectionEnabled || false,
performanceMonitoringEnabled: data.performanceMonitoringEnabled || false,
lastHealthCheck: data.lastHealthCheck || new Date().toISOString(),
totalAlerts: data.totalAlerts || 0,
activeQueries: data.activeQueries || 0,
}
} catch (error) {
console.error('Error fetching SQL monitoring health:', error)
throw error
}
},
refetchInterval: 60000, // Refresh every minute
})
}
// Hook for query details
export const useSqlMonitoringQueryDetails = (repositoryName: string, methodName: string) => {
const { apiUrl } = useApiUrlStore()
const sqlMonitoringClient = new SqlMonitoringClient({}, apiUrl)
return useQuery({
queryKey: ['sqlMonitoring', 'queryDetails', repositoryName, methodName],
queryFn: async () => {
try {
const response = await sqlMonitoringClient.sqlMonitoring_GetQueryDetails(repositoryName, methodName)
const text = await response.data.text()
const data = JSON.parse(text) as QueryDetail[]
// Ensure we return an array
return Array.isArray(data) ? data : []
} catch (error) {
console.error('Error fetching SQL monitoring query details:', error)
throw error
}
},
enabled: !!repositoryName && !!methodName, // Only run if both parameters are provided
})
}
// Hook for clearing tracking data
export const useClearSqlMonitoringTracking = () => {
const { apiUrl } = useApiUrlStore()
const queryClient = useQueryClient()
const sqlMonitoringClient = new SqlMonitoringClient({}, apiUrl)
return useMutation({
mutationFn: async () => {
await sqlMonitoringClient.sqlMonitoring_ClearTracking()
},
onSuccess: () => {
// Invalidate all SQL monitoring queries to refresh data
queryClient.invalidateQueries({ queryKey: ['sqlMonitoring'] })
},
})
}
// Export types for use in components
export type {
SqlMonitoringStats,
SqlMonitoringAlert,
MonitoringHealth,
QueryDetail,
}

View File

@@ -9,6 +9,7 @@ import Theme from './theme'
import DefaultConfig from './defaultConfig/defaultConfig'
import UserInfoSettings from './UserInfoSettings'
import AccountFee from './accountFee/accountFee'
import SqlMonitoring from './sqlmonitoring/sqlMonitoring'
type TabsType = {
label: string
@@ -53,6 +54,11 @@ const tabs: TabsType = [
index: 7,
label: 'Health Checks',
},
{
Component: SqlMonitoring,
index: 8,
label: 'SQL Monitoring',
},
]
const Settings: React.FC = () => {

View File

@@ -0,0 +1,94 @@
# SQL Monitoring Dashboard
This component provides a comprehensive single-page dashboard for monitoring SQL query performance, loop detection, and system health in the Managing application.
## Features
### Overview Cards
- **Total Tracked Queries**: Shows the total number of queries tracked by the monitoring system
- **Active Queries**: Displays currently monitored queries
- **Total Alerts**: Shows the number of alerts generated by the system
- **Monitoring Status**: Indicates whether SQL monitoring is active or inactive
### System Health Section
- **Monitoring Status**: Overall health of the SQL monitoring system
- **Feature Status**: Individual status of monitoring features (logging, Sentry, loop detection, etc.)
- **Compact Layout**: All health indicators displayed in a responsive grid
### Recent Alerts Section
- **SQL Monitoring Alerts**: Real-time alerts for SQL performance issues
- **Severity Levels**: Critical, Warning, and Info alerts with color-coded badges
- **Repository and Method**: Shows which repository and method triggered the alert
- **Timestamp**: When the alert was generated
### Query Statistics Section
- **Query Statistics**: Detailed statistics about query execution patterns
- **Loop Detection Stats**: Information about detected query loops
- **Context Stats**: Additional context information about the monitoring system
### Information Panel
- **Usage Instructions**: Explains how the dashboard works
- **Auto-refresh Info**: Details about automatic data updates
- **Clear Data Instructions**: How to reset monitoring statistics
## API Integration
The component integrates with the following SQL monitoring endpoints (a usage sketch follows the list):
- `GET /api/SqlMonitoring/statistics` - Get query statistics
- `GET /api/SqlMonitoring/alerts` - Get monitoring alerts
- `GET /api/SqlMonitoring/health` - Get monitoring health status
- `POST /api/SqlMonitoring/clear-tracking` - Clear tracking data
- `GET /api/SqlMonitoring/query-details/{repositoryName}/{methodName}` - Get query details
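A minimal sketch of calling one of these endpoints through the generated `SqlMonitoringClient`. The endpoints return JSON wrapped in a blob response, so the payload is read as text and parsed, mirroring what the `useSqlMonitoring` hooks do:
```typescript
import {SqlMonitoringClient} from '../generated/ManagingApi'

// Sketch only: fetch the statistics endpoint and parse the blob payload as JSON.
// The empty config object and apiUrl argument mirror how the useSqlMonitoring hooks
// construct the client; the generated client returns a FileResponse with a Blob in `data`.
async function fetchSqlMonitoringStatistics(apiUrl: string) {
  const client = new SqlMonitoringClient({}, apiUrl)
  const response = await client.sqlMonitoring_GetQueryStatistics()
  const text = await response.data.text()
  return JSON.parse(text) // { loopDetectionStats, contextStats, timestamp, totalTrackedQueries, activeQueries }
}
```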
## Auto-refresh
- **Statistics**: Refreshes every 30 seconds
- **Alerts**: Refreshes every 15 seconds
- **Health**: Refreshes every minute (all three intervals are set via react-query's `refetchInterval`; see the sketch below)
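A minimal sketch of the polling pattern the hooks rely on. The hook name and the injected fetch function here are illustrative, not part of the codebase:
```typescript
import {useQuery} from '@tanstack/react-query'

// Illustrative polling hook: react-query re-runs the query on a fixed interval
// while a component using the hook is mounted. The monitoring hooks use 30s/15s/60s.
export const useMonitoringAlertsPolling = (fetchAlerts: () => Promise<unknown>) =>
  useQuery({
    queryKey: ['sqlMonitoring', 'alerts'],
    queryFn: fetchAlerts,
    refetchInterval: 15000, // alerts refresh every 15 seconds
  })
```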
## Admin Authorization
All SQL monitoring endpoints require admin authorization. Only users with admin privileges can access this dashboard.
## Mobile-Friendly Design
The dashboard is designed to be fully responsive and mobile-friendly:
- **Responsive Grid**: Overview cards adapt from 1 column on mobile to 4 columns on desktop
- **Compact Layout**: Health status indicators are arranged in a responsive grid
- **Horizontal Scrolling**: Tables have horizontal scroll on smaller screens
- **Touch-Friendly**: All interactive elements are appropriately sized for touch devices
- **Readable Text**: Font sizes and spacing optimized for mobile viewing
## Usage
1. Navigate to Settings → SQL Monitoring
2. View all monitoring information on a single page
3. Use the "Clear Tracking Data" button to reset monitoring statistics
4. Monitor alerts and statistics in real-time
5. Check system health status at a glance
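A minimal sketch of how the clear action is wired, assuming the `useClearSqlMonitoringTracking` hook from `useSqlMonitoring.ts` (a simplified version of the button used in the dashboard component):
```typescript
import React from 'react'
import {useClearSqlMonitoringTracking} from '../../../hooks/useSqlMonitoring'

// Simplified clear-tracking button: the mutation POSTs to /api/SqlMonitoring/clear-tracking
// and invalidates the cached 'sqlMonitoring' queries on success.
const ClearTrackingButton: React.FC = () => {
  const clearTracking = useClearSqlMonitoringTracking()
  return (
    <button onClick={() => clearTracking.mutate()} disabled={clearTracking.isPending}>
      {clearTracking.isPending ? 'Clearing...' : 'Clear Tracking Data'}
    </button>
  )
}

export default ClearTrackingButton
```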
## Configuration
The SQL monitoring system can be configured via `appsettings.json`:
```json
{
"SqlMonitoring": {
"Enabled": true,
"LoggingEnabled": true,
"SentryEnabled": true,
"LoopDetectionEnabled": true,
"PerformanceMonitoringEnabled": true,
"LoopDetectionWindowSeconds": 60,
"MaxQueryExecutionsPerWindow": 100,
"MaxMethodExecutionsPerWindow": 50,
"LongRunningQueryThresholdMs": 1000,
"SentryAlertThreshold": 5,
"SlowQueryThresholdMs": 2000,
"LogSlowQueriesOnly": false,
"LogErrorsOnly": false
}
}
```

View File

@@ -0,0 +1,311 @@
import React from 'react'
import {Table} from '../../../components/mollecules'
import {
useClearSqlMonitoringTracking,
useSqlMonitoringAlerts,
useSqlMonitoringHealth,
useSqlMonitoringStats,
} from '../../../hooks/useSqlMonitoring'
const SqlMonitoring: React.FC = () => {
// Use custom hooks for SQL monitoring data
const { data: statistics, isLoading: isLoadingStats } = useSqlMonitoringStats()
const { data: alerts, isLoading: isLoadingAlerts } = useSqlMonitoringAlerts()
const { data: health, isLoading: isLoadingHealth } = useSqlMonitoringHealth()
const clearTrackingMutation = useClearSqlMonitoringTracking()
const isLoading = isLoadingStats || isLoadingAlerts || isLoadingHealth
// Prepare statistics data for table
const statisticsData = React.useMemo(() => {
if (!statistics) return []
const stats: Array<{
type: string
key: string
value: string
timestamp: string
}> = []
// Add loop detection stats
if (statistics.loopDetectionStats) {
Object.entries(statistics.loopDetectionStats).forEach(([key, value]) => {
stats.push({
type: 'Loop Detection',
key,
value: typeof value === 'object' ? JSON.stringify(value) : String(value),
timestamp: statistics.timestamp,
})
})
}
// Add context stats
if (statistics.contextStats) {
Object.entries(statistics.contextStats).forEach(([key, value]) => {
stats.push({
type: 'Context',
key,
value: String(value),
timestamp: statistics.timestamp,
})
})
}
return stats
}, [statistics])
// Prepare alerts data for table
const alertsData = React.useMemo(() => {
if (!alerts || !Array.isArray(alerts)) return []
return alerts.map(alert => ({
id: alert.id || 'unknown',
type: alert.type || 'unknown',
message: alert.message || 'No message',
timestamp: alert.timestamp || new Date().toISOString(),
repository: alert.repository || 'unknown',
method: alert.method || 'unknown',
severity: alert.severity || 'info',
}))
}, [alerts])
// Define columns for statistics table
const statisticsColumns = React.useMemo(
() => [
{
Header: 'Type',
accessor: 'type',
disableSortBy: true,
disableFilters: true,
},
{
Header: 'Key',
accessor: 'key',
disableSortBy: true,
disableFilters: true,
},
{
Header: 'Value',
accessor: 'value',
disableSortBy: true,
disableFilters: true,
},
{
Header: 'Timestamp',
accessor: 'timestamp',
disableSortBy: true,
disableFilters: true,
},
],
[]
)
// Define columns for alerts table
const alertsColumns = React.useMemo(
() => [
{
Header: 'Severity',
accessor: 'severity',
Cell: ({ value }: { value: string }) => (
<span
className={`badge ${
value === 'Critical' || value === 'Error'
? 'badge-error'
: value === 'Warning'
? 'badge-warning'
: 'badge-info'
}`}
>
{value}
</span>
),
disableSortBy: true,
disableFilters: true,
},
{
Header: 'Type',
accessor: 'type',
disableSortBy: true,
disableFilters: true,
},
{
Header: 'Repository',
accessor: 'repository',
disableSortBy: true,
disableFilters: true,
},
{
Header: 'Method',
accessor: 'method',
disableSortBy: true,
disableFilters: true,
},
{
Header: 'Message',
accessor: 'message',
disableSortBy: true,
disableFilters: true,
},
{
Header: 'Timestamp',
accessor: 'timestamp',
disableSortBy: true,
disableFilters: true,
},
],
[]
)
return (
<div className="container mx-auto space-y-6">
{/* Header */}
<div className="flex flex-col sm:flex-row justify-between items-start sm:items-center gap-4">
<h2 className="text-2xl font-bold">SQL Monitoring Dashboard</h2>
<button
className="btn btn-outline btn-sm"
onClick={() => clearTrackingMutation.mutate()}
disabled={clearTrackingMutation.isPending}
>
{clearTrackingMutation.isPending ? 'Clearing...' : 'Clear Tracking Data'}
</button>
</div>
{isLoading ? (
<div className="flex justify-center">
<progress className="progress progress-primary w-56"></progress>
</div>
) : (
<div className="space-y-6">
{/* Overview Cards */}
<div className="grid grid-cols-1 sm:grid-cols-2 lg:grid-cols-4 gap-4">
<div className="stat bg-base-200 rounded-lg">
<div className="stat-title text-sm">Total Tracked Queries</div>
<div className="stat-value text-lg text-primary">
{statistics?.totalTrackedQueries || 0}
</div>
<div className="stat-desc text-xs">All time</div>
</div>
<div className="stat bg-base-200 rounded-lg">
<div className="stat-title text-sm">Active Queries</div>
<div className="stat-value text-lg text-secondary">
{statistics?.activeQueries || 0}
</div>
<div className="stat-desc text-xs">Currently monitored</div>
</div>
<div className="stat bg-base-200 rounded-lg">
<div className="stat-title text-sm">Total Alerts</div>
<div className="stat-value text-lg text-warning">
{alerts?.length || 0}
</div>
<div className="stat-desc text-xs">All alerts</div>
</div>
<div className="stat bg-base-200 rounded-lg">
<div className="stat-title text-sm">Monitoring Status</div>
<div className={`stat-value text-lg ${health?.isEnabled ? 'text-success' : 'text-error'}`}>
{health?.isEnabled ? 'Active' : 'Inactive'}
</div>
<div className="stat-desc text-xs">System status</div>
</div>
</div>
{/* Health Status */}
<div className="card bg-base-200">
<div className="card-body">
<h3 className="card-title text-lg">System Health</h3>
{health ? (
<div className="grid grid-cols-1 sm:grid-cols-2 lg:grid-cols-5 gap-4">
<div className="flex flex-col items-center">
<span className="text-sm font-medium mb-1">Monitoring</span>
<span className={`badge ${health.isEnabled ? 'badge-success' : 'badge-error'}`}>
{health.isEnabled ? 'Enabled' : 'Disabled'}
</span>
</div>
<div className="flex flex-col items-center">
<span className="text-sm font-medium mb-1">Logging</span>
<span className={`badge ${health.loggingEnabled ? 'badge-success' : 'badge-error'}`}>
{health.loggingEnabled ? 'Enabled' : 'Disabled'}
</span>
</div>
<div className="flex flex-col items-center">
<span className="text-sm font-medium mb-1">Sentry</span>
<span className={`badge ${health.sentryEnabled ? 'badge-success' : 'badge-error'}`}>
{health.sentryEnabled ? 'Enabled' : 'Disabled'}
</span>
</div>
<div className="flex flex-col items-center">
<span className="text-sm font-medium mb-1">Loop Detection</span>
<span className={`badge ${health.loopDetectionEnabled ? 'badge-success' : 'badge-error'}`}>
{health.loopDetectionEnabled ? 'Enabled' : 'Disabled'}
</span>
</div>
<div className="flex flex-col items-center">
<span className="text-sm font-medium mb-1">Performance</span>
<span className={`badge ${health.performanceMonitoringEnabled ? 'badge-success' : 'badge-error'}`}>
{health.performanceMonitoringEnabled ? 'Enabled' : 'Disabled'}
</span>
</div>
</div>
) : (
<div className="alert alert-error">
<svg xmlns="http://www.w3.org/2000/svg" className="stroke-current shrink-0 h-6 w-6" fill="none" viewBox="0 0 24 24">
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth="2" d="M10 14l2-2m0 0l2-2m-2 2l-2-2m2 2l2 2m7-2a9 9 0 11-18 0 9 9 0 0118 0z" />
</svg>
<span>Unable to fetch monitoring health data</span>
</div>
)}
</div>
</div>
{/* Alerts Section */}
<div className="card bg-base-200">
<div className="card-body">
<h3 className="card-title text-lg">Recent Alerts</h3>
{alertsData.length === 0 ? (
<div className="alert alert-info">
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" className="stroke-current shrink-0 w-6 h-6">
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth="2" d="M13 16h-1v-4h-1m1-4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z"></path>
</svg>
<span>No alerts found. The system is running smoothly!</span>
</div>
) : (
<div className="overflow-x-auto">
<Table
columns={alertsColumns}
data={alertsData}
showPagination={true}
showTotal={false}
/>
</div>
)}
</div>
</div>
{/* Statistics Section */}
<div className="card bg-base-200">
<div className="card-body">
<h3 className="card-title text-lg">Query Statistics</h3>
{statisticsData.length === 0 ? (
<div className="alert alert-warning">
<svg xmlns="http://www.w3.org/2000/svg" className="stroke-current shrink-0 h-6 w-6" fill="none" viewBox="0 0 24 24">
<path strokeLinecap="round" strokeLinejoin="round" strokeWidth="2" d="M12 9v2m0 4h.01m-6.938 4h13.856c1.54 0 2.502-1.667 1.732-2.5L13.732 4c-.77-.833-1.964-.833-2.732 0L3.732 16.5c-.77.833.192 2.5 1.732 2.5z" />
</svg>
<span>No statistics available yet</span>
</div>
) : (
<div className="overflow-x-auto">
<Table
columns={statisticsColumns}
data={statisticsData}
showPagination={true}
showTotal={false}
/>
</div>
)}
</div>
</div>
</div>
)}
</div>
)
}
export default SqlMonitoring