Description
RepoReady currently lacks robust error handling for network-related issues like timeouts, connection errors, and intermittent API failures. Users may experience cryptic error messages or hanging operations when network conditions are poor.
Current State
- ❌ No explicit timeout handling for GitHub API calls
- ❌ No retry logic for temporary failures
- ❌ Generic error messages for network issues
- ❌ No graceful degradation for slow connections
- ✅ Basic error handling exists in src/utils/github.ts
- ✅ Octokit handles some network errors automatically
Acceptance Criteria
Timeout Handling
- Set reasonable timeout values for all API calls (30-60 seconds)
- Implement custom timeout logic where needed (see the sketch after this list)
- Provide clear timeout error messages to users
- Allow timeout configuration via environment variables
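One option for the custom timeout logic is an AbortController: Octokit accepts a standard AbortSignal via its per-request options, so aborting cancels the underlying HTTP request rather than just abandoning the promise. The helper below is illustrative, not existing RepoReady code:

```typescript
// Hypothetical helper showing AbortController-based timeouts;
// Octokit forwards request.signal to the underlying fetch call
import { Octokit } from '@octokit/rest';

async function getRepoWithTimeout(
  octokit: Octokit,
  owner: string,
  repo: string,
  timeoutMs: number
) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await octokit.rest.repos.get({
      owner,
      repo,
      request: { signal: controller.signal }
    });
  } finally {
    clearTimeout(timer); // always clear so the timer can't keep the process alive
  }
}
```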
Retry Logic
- Implement exponential backoff for retryable errors
- Retry on specific HTTP status codes (500, 502, 503, 504)
- Limit retry attempts (3-5 attempts max)
- Skip retries for non-retryable errors (404, 401, 403)
User Experience
- Show progress indicators for long operations
- Provide helpful error messages with next steps
- Suggest solutions for common network issues
- Gracefully handle partial failures in batch operations (see the sketch after this list)
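For the last point, a minimal sketch of graceful partial-failure handling using Promise.allSettled; it assumes the GitHubService.getRepositoryInfo method sketched in the next section:

```typescript
// Hypothetical batch helper: failures are reported per-repository
// instead of aborting the whole batch on the first error
async function checkRepositories(
  github: GitHubService,
  repos: { owner: string; repo: string }[]
): Promise<void> {
  const results = await Promise.allSettled(
    repos.map(({ owner, repo }) => github.getRepositoryInfo(owner, repo))
  );

  results.forEach((result, i) => {
    const name = `${repos[i].owner}/${repos[i].repo}`;
    if (result.status === 'fulfilled') {
      console.log(`✅ ${name}`);
    } else {
      console.error(`❌ ${name}: ${(result.reason as Error).message}`);
    }
  });
}
```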
Implementation Suggestions
Enhanced GitHubService with Timeout Support
```typescript
// src/utils/github.ts - Enhanced version
import { Octokit } from '@octokit/rest';
import ora from 'ora';

export class GitHubService {
  private octokit: Octokit;
  private timeout: number;
  private maxRetries: number;
  private retryDelay: number;

  constructor(token?: string, options: {
    timeout?: number;
    maxRetries?: number;
    retryDelay?: number;
  } = {}) {
    this.timeout = options.timeout ?? parseInt(process.env.GITHUB_TIMEOUT ?? '30000', 10); // 30s default
    this.maxRetries = options.maxRetries ?? parseInt(process.env.GITHUB_MAX_RETRIES ?? '3', 10);
    this.retryDelay = options.retryDelay ?? parseInt(process.env.GITHUB_RETRY_DELAY ?? '1000', 10); // 1s base delay

    this.octokit = new Octokit({
      auth: token || process.env.GITHUB_TOKEN,
      request: {
        timeout: this.timeout,
        retries: 0 // We'll handle retries ourselves
      }
    });
  }

  private async withRetry<T>(
    operation: () => Promise<T>,
    context: string
  ): Promise<T> {
    let lastError: Error;
    for (let attempt = 1; attempt <= this.maxRetries; attempt++) {
      let timer: NodeJS.Timeout | undefined;
      try {
        return await Promise.race([
          operation(),
          new Promise<T>((_, reject) => {
            timer = setTimeout(() => {
              reject(new Error(`Operation timed out after ${this.timeout}ms: ${context}`));
            }, this.timeout);
          })
        ]);
      } catch (error) {
        lastError = error as Error;
        if (!this.isRetryableError(error) || attempt === this.maxRetries) {
          throw this.enhanceError(error, context, attempt);
        }
        const delay = this.calculateBackoffDelay(attempt);
        console.warn(`⚠️ ${context} failed (attempt ${attempt}/${this.maxRetries}), retrying in ${delay}ms...`);
        await this.sleep(delay);
      } finally {
        // Clear the timer so a settled race can't keep the process alive
        if (timer) clearTimeout(timer);
      }
    }
    throw this.enhanceError(lastError!, context, this.maxRetries);
  }

  private isRetryableError(error: any): boolean {
    // Retry on network errors and server errors
    if (error.code === 'ECONNRESET' || error.code === 'ETIMEDOUT') return true;
    if (error.status >= 500 && error.status < 600) return true;
    if (error.status === 429) return true; // Rate limit
    // Don't retry other client errors
    if (error.status >= 400 && error.status < 500) return false;
    return true;
  }

  private calculateBackoffDelay(attempt: number): number {
    // Exponential backoff with jitter: ~1s, ~2s, ~4s, ~8s...
    return Math.round(this.retryDelay * Math.pow(2, attempt - 1) + Math.random() * 1000);
  }

  private enhanceError(error: any, context: string, attempts: number): Error {
    const baseMessage = `Failed to ${context}`;

    if (error.message?.includes('timeout')) {
      return new Error(
        `${baseMessage}: Request timed out after ${this.timeout}ms. ` +
        'Try increasing the timeout with the GITHUB_TIMEOUT environment variable or check your internet connection.'
      );
    }
    if (error.status === 403 && error.message?.includes('rate limit')) {
      return new Error(
        `${baseMessage}: GitHub API rate limit exceeded. ` +
        'Use a personal access token with the --token flag for higher limits.'
      );
    }
    if (error.code === 'ECONNRESET' || error.code === 'ETIMEDOUT') {
      return new Error(
        `${baseMessage}: Network connection error after ${attempts} attempts. ` +
        'Please check your internet connection and try again.'
      );
    }
    if (error.status === 404) {
      return new Error(
        `${baseMessage}: Repository not found. Please check the repository name and your access permissions.`
      );
    }
    return new Error(`${baseMessage}: ${error.message} (after ${attempts} attempts)`);
  }

  private sleep(ms: number): Promise<void> {
    return new Promise(resolve => setTimeout(resolve, ms));
  }

  async getRepositoryInfo(owner: string, repo: string): Promise<RepositoryInfo> {
    return this.withRetry(async () => {
      const spinner = ora('Fetching repository information...').start();
      try {
        // Your existing implementation, but wrapped in retry logic
        const repoData = await this.octokit.rest.repos.get({ owner, repo });
        // ... rest of the existing implementation
        spinner.succeed('Repository information fetched successfully');
        return repositoryInfo;
      } catch (error) {
        spinner.fail('Failed to fetch repository information');
        throw error;
      }
    }, `fetch repository information for ${owner}/${repo}`);
  }
}
```

Environment Variable Configuration
```bash
# .env.example
GITHUB_TOKEN=your_token_here
GITHUB_TIMEOUT=30000      # 30 seconds
GITHUB_MAX_RETRIES=3      # 3 retry attempts
GITHUB_RETRY_DELAY=1000   # 1 second base delay
```

Enhanced CLI Error Messages
```typescript
// src/commands/evaluate.ts - Enhanced error handling
catch (error) {
  spinner.fail('Evaluation failed');

  if (error instanceof Error) {
    // Network/timeout specific handling
    if (error.message.includes('timeout')) {
      console.error('\n🕒 Request Timeout');
      console.error('The request took too long to complete. This might be due to:');
      console.error('• Slow internet connection');
      console.error('• GitHub API being slow');
      console.error('• Large repository with many files to check');
      console.error('\n💡 Try:');
      console.error('• Check your internet connection');
      console.error('• Use a GitHub token for better performance: --token YOUR_TOKEN');
      console.error('• Increase the timeout: GITHUB_TIMEOUT=60000 rr evaluate ...');
    } else if (error.message.includes('rate limit')) {
      console.error('\n⏱️ Rate Limit Exceeded');
      console.error('GitHub API rate limit has been exceeded.');
      console.error('\n💡 Try:');
      console.error('• Use a personal access token: --token YOUR_TOKEN');
      console.error('• Wait an hour for the rate limit to reset');
      console.error('• Create a token at: https://github.com/settings/tokens');
    } else if (error.message.includes('Network connection error')) {
      console.error('\n🌐 Network Connection Error');
      console.error('Unable to connect to GitHub after multiple attempts.');
      console.error('\n💡 Try:');
      console.error('• Check your internet connection');
      console.error('• Verify GitHub.com is accessible');
      console.error('• Try again in a few minutes');
    }

    console.error(`\n❌ Error: ${error.message}`);
  }

  process.exit(1);
}
```

Files to Modify
- src/utils/github.ts - Add timeout and retry logic
- src/commands/evaluate.ts - Enhanced error messages
- src/commands/create.ts - Enhanced error messages
- README.md - Document timeout configuration
- package.json - Add timeout-related dependencies if needed
Configuration Options
Environment Variables
- GITHUB_TIMEOUT - Request timeout in milliseconds (default: 30000)
- GITHUB_MAX_RETRIES - Maximum retry attempts (default: 3)
- GITHUB_RETRY_DELAY - Base delay between retries in milliseconds (default: 1000)
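Assuming the GitHubService sketch above, configuration would resolve in this order: explicit constructor options, then environment variables, then built-in defaults:

```typescript
import { GitHubService } from './utils/github';

// Explicit options take precedence over environment variables
const service = new GitHubService(process.env.GITHUB_TOKEN, {
  timeout: 60000,  // overrides GITHUB_TIMEOUT
  maxRetries: 5    // overrides GITHUB_MAX_RETRIES
});

// With no options, GITHUB_TIMEOUT / GITHUB_MAX_RETRIES / GITHUB_RETRY_DELAY
// are read from the environment, falling back to 30000 / 3 / 1000
const defaults = new GitHubService();
```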
CLI Options (Optional)
```bash
rr evaluate owner/repo --timeout 60000 --max-retries 5
```
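If these flags are added, they could be threaded through to GitHubService roughly as below; this assumes the CLI is built on commander, and the flag names mirror the command example above:

```typescript
// src/commands/evaluate.ts - hypothetical flag wiring (assumes commander)
import { Command } from 'commander';
import { GitHubService } from '../utils/github';

const program = new Command();

program
  .command('evaluate <repo>')
  .option('--token <token>', 'GitHub personal access token')
  .option('--timeout <ms>', 'request timeout in milliseconds', (v) => parseInt(v, 10))
  .option('--max-retries <n>', 'maximum retry attempts', (v) => parseInt(v, 10))
  .action(async (repo: string, opts) => {
    const [owner, name] = repo.split('/');
    // CLI flags win; GitHubService falls back to env vars, then defaults
    const github = new GitHubService(opts.token, {
      timeout: opts.timeout,
      maxRetries: opts.maxRetries
    });
    await github.getRepositoryInfo(owner, name);
  });

program.parse();
```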
Benefits
- 🔄 More reliable operation in poor network conditions
- ⏱️ Clear timeout behavior instead of hanging
- 🔁 Automatic recovery from temporary failures
- 📝 Better user experience with helpful error messages
- ⚙️ Configurable for different environments
- 🎯 Distinguishes between temporary and permanent errors
Testing Considerations
- Test with simulated network delays
- Test with intermittent connection failures
- Test timeout behavior
- Test retry logic with different error codes (see the sketch after this list)
- Mock slow API responses
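A sketch of the retry-logic tests, assuming a jest-style runner. It also assumes the retry helper is factored out as an exportable withRetry(operation, context, options) function; in the sketch above it is a private method, so this implies a small refactor for testability:

```typescript
// Hypothetical tests; withRetry is an assumed standalone export,
// not the private method from the GitHubService sketch above
import { withRetry } from '../src/utils/retry';

test('retries a 503 and succeeds on a later attempt', async () => {
  let attempts = 0;
  const flaky = async () => {
    attempts++;
    if (attempts < 3) throw Object.assign(new Error('Service Unavailable'), { status: 503 });
    return 'ok';
  };

  await expect(withRetry(flaky, 'fetch data', { maxRetries: 3, retryDelay: 10 })).resolves.toBe('ok');
  expect(attempts).toBe(3); // two failures, one success
});

test('does not retry a 404', async () => {
  let attempts = 0;
  const missing = async () => {
    attempts++;
    throw Object.assign(new Error('Not Found'), { status: 404 });
  };

  await expect(withRetry(missing, 'fetch data', { maxRetries: 3, retryDelay: 10 })).rejects.toThrow();
  expect(attempts).toBe(1); // non-retryable errors fail immediately
});
```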
Resources
Estimated Effort
Medium - Requires understanding async patterns and error handling strategies.
Great for contributors who want to improve reliability and user experience! 🔄