
Linux Kernel Anti-Pattern Detector

A comprehensive tool for detecting anti-patterns and potential issues in Linux kernel code.

🎯 Overview

This project provides automated static analysis tools to identify common anti-patterns, code smells, and potential issues in Linux kernel source code. The analysis covers 7 major categories including memory management, concurrency, security vulnerabilities, and code quality issues.

📊 Recent Analysis Results

Analysis Date: June 29, 2025
Kernel Version: Linux 6.16-rc4
Files Analyzed: 35,588
Total Issues Found: 3,122

Issue Distribution

  • 🔴 Critical Security Issues: 347 (11.1%)
  • 🟡 High Priority Issues: 2,670 (85.5%)
  • 🔵 Medium Priority Issues: 105 (3.4%)

Top Categories

  1. Concurrency Issues: 2,314 (74.1%) - Race conditions, deadlocks
  2. Memory Management: 356 (11.4%) - Memory leaks, use-after-free
  3. Security Vulnerabilities: 347 (11.1%) - Buffer overflows, format strings
  4. Code Quality: 92 (2.9%) - Magic numbers, code duplication

🚀 Quick Start

Prerequisites

  • Python 3.8+
  • Git
  • Conda (recommended for environment management)

Setup

# Clone the repository
git clone https://github.com/Mac-Huang/linux-kernel-anti-pattern-detector.git
cd linux-kernel-anti-pattern-detector

# Create and activate conda environment
conda create -n linux-kernel-anti-pattern-detector python=3.10 -y
conda activate linux-kernel-anti-pattern-detector

# Install dependencies
pip install -r requirements.txt
pip install -r requirements-kernel-analysis.txt

Run Analysis

# Full kernel analysis (clones kernel if needed)
python tools/detectors/detector.py --clone --output data/results.json

# Concurrency-specific analysis
python scripts/analysis/concurrency_analyzer.py

# View results interactively
python scripts/reporting/view_results.py --interactive

🗃️ Dataset Building Pipeline

✅ Successfully Implemented

The project includes a robust dataset building pipeline for training code intelligence models on Linux kernel bug fixes.

Dataset Format

{
  "input": {
    "original code": "code before the patch (extracted from a known bug-fix)",
    "instruction": "the commit message describing what this fix is"
  },
  "output": {
    "diff codes": "the unified diff patch for the bug fix"
  }
}
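
For illustration, a record in this shape can be assembled and written as one JSONL line. This is a minimal sketch; make_record is a hypothetical helper, not part of the pipeline's actual API:

import json

def make_record(original_code, commit_message, diff_patch):
    # Mirror the schema above: nested "input"/"output" objects with
    # "original code", "instruction", and "diff codes" keys.
    return {
        "input": {"original code": original_code, "instruction": commit_message},
        "output": {"diff codes": diff_patch},
    }

# One JSON object per line, matching the JSONL output format.
record = make_record("kfree(ptr);\nreturn ptr->len;",
                     "fix use-after-free in foo()",
                     "- return ptr->len;\n+ return len;")
print(json.dumps(record))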

Features

  • 🔍 Intelligent Bug Detection: Uses keyword-based filtering to identify bug-fix commits (see the sketch after this list)
  • 📝 Focused Code Extraction: Extracts relevant code context around bug fixes
  • 🔧 Diff Processing: Parses and formats unified diff patches
  • ⚡ Parallel Processing: Multi-threaded processing for large repositories
  • 📊 Quality Filtering: Only includes valid C source file modifications
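
A rough sketch of the keyword-filtering and diff-extraction steps, using plain git through subprocess. This is an illustration of the approach, not the project's actual implementation:

import subprocess

BUG_KEYWORDS = {"fix", "bug", "leak", "null", "overflow", "race", "deadlock", "crash"}

def bugfix_commits(repo_path):
    # Yield (sha, subject) for commits whose subject mentions a bug keyword.
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=%H %s"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in log.splitlines():
        sha, _, subject = line.partition(" ")
        if any(kw in subject.lower() for kw in BUG_KEYWORDS):
            yield sha, subject

def commit_diff(repo_path, sha):
    # Unified diff for one commit, restricted to C source files.
    return subprocess.run(
        ["git", "-C", repo_path, "show", "--pretty=format:", sha, "--", "*.c"],
        capture_output=True, text=True, check=True,
    ).stdout

# Example: for sha, subject in bugfix_commits("linux"): print(sha, subject)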

Usage

# Activate environment
conda activate linux-kernel-anti-pattern-detector

# Build test dataset (small sample)
cd dataset_builder
python build_dataset_demo.py

# Build full dataset (entire repository):
# set TEST_MODE = False in build_dataset_demo.py, then rerun
python build_dataset_demo.py

Output

  • File: dataset_builder/output/linux_bugfix_dataset.jsonl
  • Format: JSONL (one JSON object per line)
  • Content: Bug-fix commits with original code, commit messages, and diff patches
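
Reading the file back is one json.loads call per line; a minimal sketch:

import json

with open("dataset_builder/output/linux_bugfix_dataset.jsonl") as f:
    for line in f:
        record = json.loads(line)
        print(record["input"]["instruction"])  # the commit message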

Keywords Detected

  • Memory issues: leak, null, overflow, memory
  • Security: security, vulnerability, exploit, buffer
  • Concurrency: race, deadlock, lock
  • General bugs: fix, bug, error, failure, crash
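
This table maps naturally onto a category-to-keywords dictionary; a minimal sketch, in which categorize is a hypothetical helper:

BUG_KEYWORDS = {
    "memory": ["leak", "null", "overflow", "memory"],
    "security": ["security", "vulnerability", "exploit", "buffer"],
    "concurrency": ["race", "deadlock", "lock"],
    "general": ["fix", "bug", "error", "failure", "crash"],
}

def categorize(commit_message):
    # Return every keyword category the commit message matches.
    msg = commit_message.lower()
    return [cat for cat, words in BUG_KEYWORDS.items()
            if any(w in msg for w in words)]

print(categorize("fix: plug memory leak on error path"))  # ['memory', 'general']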

πŸ“ Project Structure

├── 📁 data/                    # Analysis results and logs
├── 📁 dataset_builder/         # Dataset building pipeline
│   ├── build_dataset.py        # Main dataset builder
│   ├── build_dataset_demo.py   # Test dataset builder
│   └── output/                 # Generated datasets
├── 📁 docs/                    # Documentation
├── 📁 reports/                 # Generated reports
├── 📁 scripts/                 # Analysis and utility scripts
├── 📁 src/                     # Core source code
├── 📁 tests/                   # Test files
├── 📁 tools/                   # Detection tools
└── 📁 linux/                   # Kernel source (cloned)

See PROJECT_STRUCTURE.md for detailed structure information.

📋 Available Reports

📄 Main Reports

🔒 Specialized Reports

🛠️ Usage Examples

Basic Analysis

# Run complete analysis
python tools/detectors/detector.py --clone --output data/results.json

# Quick summary
python scripts/utils/quick_summary.py

# Interactive results viewer
python scripts/reporting/view_results.py --interactive

Specialized Analysis

# Concurrency issues only
python scripts/analysis/concurrency_analyzer.py

# Kernel structure analysis
python scripts/analysis/analyze_kernel_structure.py

Custom Configuration

# Use custom config
python tools/detectors/detector.py --config tools/detectors/config.yaml

# Analyze specific kernel path
python tools/detectors/detector.py --kernel-path /path/to/kernel
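
The flags used throughout these examples suggest a command-line interface along these lines; this is a sketch of the assumed argparse wiring, not the shipped code:

import argparse

parser = argparse.ArgumentParser(description="Linux kernel anti-pattern detector")
parser.add_argument("--clone", action="store_true", help="clone the kernel source if missing")
parser.add_argument("--config", default="tools/detectors/config.yaml", help="detection rules file")
parser.add_argument("--kernel-path", default="linux", help="path to an existing kernel tree")
parser.add_argument("--output", default="data/results.json", help="where to write JSON results")
args = parser.parse_args()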

πŸ” Detection Categories

1. Memory Management

  • Memory leaks (kmalloc without kfree; see the sketch after these categories)
  • Use-after-free bugs
  • Double-free issues
  • Null pointer dereferences

2. Concurrency

  • Race conditions
  • Deadlocks
  • Missing locks
  • Double locking
  • Lock ordering violations

3. Security

  • Buffer overflows
  • Format string vulnerabilities
  • Privilege escalation
  • Information disclosure

4. Error Handling

  • Unchecked return values
  • Missing error handling
  • Ignored error codes
  • Wrong error propagation

5. Performance

  • O(n²) algorithms
  • Unnecessary memory allocation
  • Inefficient data structures
  • Cache miss patterns

6. Code Quality

  • Magic numbers
  • Hardcoded values
  • Complex functions
  • Code duplication

7. API Usage

  • Deprecated functions
  • Wrong API usage
  • Missing parameter validation
  • Incorrect flags
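
To make the first category concrete, the "kmalloc without kfree" check can be approximated by a per-function heuristic. This is a simplified sketch, not the detector's actual rule:

import re

def possible_leak(function_body: str) -> bool:
    # Heuristic: kmalloc with no kfree in the same function body suggests
    # a leak. Error paths and ownership transfer cause false positives,
    # so hits are candidates for review, not confirmed bugs.
    allocs = re.findall(r"\bkmalloc\s*\(", function_body)
    frees = re.findall(r"\bkfree\s*\(", function_body)
    return bool(allocs) and not frees

print(possible_leak("p = kmalloc(len, GFP_KERNEL);\nreturn p;"))  # True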

📊 Analysis Features

  • Pattern-based Detection: Regular expression matching with context awareness (see the sketch after this list)
  • Parallel Processing: Configurable concurrent analysis for performance
  • Detailed Reporting: JSON output with file locations and line numbers
  • Interactive Viewer: Browse and filter results by category and severity
  • Code Snippet Extraction: View actual code around detected issues
  • Severity Classification: Critical, High, Medium, Low priority levels
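
A stripped-down pattern scanner in that spirit, under the assumption that rules are regex/severity pairs as in the configuration below, might look like:

import re

RULES = [
    (re.compile(r"\bstrcpy\s*\("), "critical"),  # unbounded string copy
    (re.compile(r"\bkmalloc\s*\("), "high"),     # allocation to track
]

def scan_file(path):
    # Return (line number, severity, source line) for every rule match,
    # mirroring the JSON output's file locations and line numbers.
    findings = []
    with open(path, errors="ignore") as f:
        for lineno, line in enumerate(f, start=1):
            for pattern, severity in RULES:
                if pattern.search(line):
                    findings.append((lineno, severity, line.rstrip()))
    return findings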

🔧 Configuration

The analysis can be customized through tools/detectors/config.yaml:

detection_rules:
  memory_management:
    enabled: true
    severity: "high"
    patterns:
      - "kmalloc.*without.*kfree"
      - "use.*after.*free"
    directories: ["drivers", "kernel", "mm"]
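
Loading and honoring this file could be as simple as the sketch below, assuming PyYAML is among the installed dependencies:

import yaml

with open("tools/detectors/config.yaml") as f:
    config = yaml.safe_load(f)

rules = config["detection_rules"]["memory_management"]
if rules["enabled"]:
    print(rules["severity"], rules["patterns"], rules["directories"])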

📈 Performance

  • Analysis Speed: ~11 minutes for 35,588 files
  • Memory Usage: Configurable limits (default: 1GB)
  • Parallel Processing: Up to 4 concurrent analyzers
  • File Filtering: Excludes generated files and build artifacts
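
The 4-analyzer cap and the file-size filter combine naturally in a standard worker pool; a minimal sketch, where scan_file stands in for the per-file analysis:

from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def scan_file(path):
    # Placeholder for the per-file analysis (see the scanner sketch above).
    return []

def analyze_tree(kernel_path, max_workers=4):
    # Cap concurrency at 4 analyzers and skip files over 10 MB,
    # matching the limits documented in this README.
    files = [p for p in Path(kernel_path).rglob("*.c")
             if p.stat().st_size <= 10 * 1024 * 1024]
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(files, pool.map(scan_file, files)))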

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Add your changes following the project structure
  4. Test your changes (python -m pytest tests/)
  5. Commit your changes (git commit -m 'Add amazing feature')
  6. Push to the branch (git push origin feature/amazing-feature)
  7. Open a Pull Request

Development Guidelines

  • Analysis scripts: Add to scripts/analysis/
  • Reporting tools: Add to scripts/reporting/
  • Core detection: Add to src/detectors/
  • Configuration: Update tools/detectors/config.yaml

📚 Documentation

πŸ› Known Issues

  • Code snippet extraction: Some file paths may not match due to kernel cloning method
  • Large file handling: Files >10MB are skipped to prevent memory issues
  • Pattern accuracy: Some patterns may generate false positives

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • Linux kernel community for the source code
  • Static analysis research community
  • Contributors and maintainers

📞 Contact


This tool is designed to help improve Linux kernel code quality by identifying potential issues early in the development process. The analysis results should be used as guidance for improvement rather than definitive assessments of code quality.