Contributing to Dexray Insight
We welcome contributions to Dexray Insight! This guide outlines how to contribute to the project, including code contributions, documentation improvements, bug reports, and feature requests.
Getting Started
Development Environment Setup
Fork and Clone Repository:
# Fork the repository on GitHub, then clone your fork
git clone https://github.com/YOUR_USERNAME/Sandroid_Dexray-Insight.git
cd Sandroid_Dexray-Insight
Set Up Development Environment:
# Create virtual environment
python -m venv dexray-dev
source dexray-dev/bin/activate  # On Windows: dexray-dev\Scripts\activate

# Install in development mode
pip install -e .

# Install development dependencies
pip install -r requirements-dev.txt
pip install -r tests/requirements.txt
pip install -r docs/requirements.txt
Verify Installation:
# Test basic functionality
dexray-insight --version

# Run existing tests
make test

# Build documentation
cd docs && make html
Development Workflow
Create Feature Branch:
git checkout -b feature/your-feature-name
# or
git checkout -b bugfix/issue-description
Make Changes:
Follow the coding standards outlined below
Add tests for new functionality
Update documentation as needed
Ensure all tests pass
Test Your Changes:
# Run full test suite
make test

# Run specific test categories
pytest -m unit
pytest -m integration

# Run linting
make lint

# Test with sample APKs
dexray-insight sample.apk -s -d DEBUG
Commit Changes:
git add .
git commit -m "Add feature: brief description

- Detailed explanation of changes
- Any breaking changes
- Fixes #issue_number (if applicable)"
Push and Create Pull Request:
git push origin feature/your-feature-name
# Create pull request on GitHub
Types of Contributions
Code Contributions
New Analysis Modules:
Create new analysis modules to extend Dexray Insight’s capabilities:
from dexray_insight.core.base_classes import BaseAnalysisModule, BaseResult, register_module
from dexray_insight.core.base_classes import AnalysisContext, AnalysisStatus
from dataclasses import dataclass
from typing import Dict, Any, List
import time
@dataclass
class MyModuleResult(BaseResult):
findings: List[Dict[str, Any]] = None
analysis_summary: str = ""
def __post_init__(self):
if self.findings is None:
self.findings = []
@register_module('my_custom_module')
class MyCustomModule(BaseAnalysisModule):
def __init__(self, config: Dict[str, Any]):
super().__init__(config)
self.custom_setting = config.get('custom_setting', 'default_value')
def analyze(self, apk_path: str, context: AnalysisContext) -> MyModuleResult:
start_time = time.time()
try:
# Your analysis logic here
findings = self._perform_analysis(apk_path, context)
return MyModuleResult(
module_name='my_custom_module',
status=AnalysisStatus.SUCCESS,
execution_time=time.time() - start_time,
findings=findings,
analysis_summary=f"Found {len(findings)} items"
)
except Exception as e:
return MyModuleResult(
module_name='my_custom_module',
status=AnalysisStatus.FAILURE,
execution_time=time.time() - start_time,
error_message=str(e)
)
def get_dependencies(self) -> List[str]:
return ['apk_overview'] # Dependencies on other modules
def _perform_analysis(self, apk_path: str, context: AnalysisContext):
# Implementation details
pass
External Tool Integration:
Add support for new external analysis tools:
from dexray_insight.core.base_classes import BaseExternalTool
from typing import Dict, Any
import subprocess
import json
class MyExternalTool(BaseExternalTool):
def __init__(self, config: Dict[str, Any]):
super().__init__(config)
self.tool_path = config.get('path', 'my-tool')
self.timeout = config.get('timeout', 300)
def is_available(self) -> bool:
try:
subprocess.run([self.tool_path, '--version'],
capture_output=True, timeout=10)
return True
except (subprocess.TimeoutExpired, FileNotFoundError):
return False
def analyze_apk(self, apk_path: str, output_dir: str) -> Dict[str, Any]:
cmd = [self.tool_path, '--input', apk_path, '--output', output_dir]
result = subprocess.run(cmd, capture_output=True,
timeout=self.timeout, text=True)
if result.returncode != 0:
raise RuntimeError(f"Tool failed: {result.stderr}")
        # Parse tool output (this example assumes the tool emits JSON on stdout)
        return self._parse_output(result.stdout)

    def _parse_output(self, stdout: str) -> Dict[str, Any]:
        """Parse the tool's JSON output into a result dictionary."""
        return json.loads(stdout)
Utility Functions:
Add utility functions to support new analysis capabilities:
from typing import Any, Dict, List
import re
def extract_custom_patterns(text: str) -> List[str]:
"""Extract custom patterns from text"""
pattern = r'custom_pattern_regex_here'
return re.findall(pattern, text)
def validate_custom_data(data: Dict[str, Any]) -> bool:
"""Validate custom data structure"""
required_fields = ['field1', 'field2']
return all(field in data for field in required_fields)
Bug Reports and Feature Requests
Bug Reports:
When reporting bugs, please include:
Dexray Insight version: output of dexray-insight --version
Python version: output of python --version
Operating system: Linux/macOS/Windows version
APK information: size, framework, and whether the sample can be shared
Command used: the exact command that caused the issue
Error message: the full error output, captured with -d DEBUG
Expected behavior: what should have happened
Steps to reproduce: minimal steps to reproduce the issue
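The environment details above can be collected in one pass; this is an optional convenience, assuming dexray-insight is on your PATH:

```shell
# Collect environment details to paste into a bug report
dexray-insight --version
python --version
uname -a   # operating system details (use `ver` on Windows)
```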
Feature Requests:
For feature requests, please provide:
Use case: Why is this feature needed?
Proposed solution: How should it work?
Alternative solutions: Other approaches considered
Additional context: Any relevant background information
Documentation Improvements
Documentation improvements are always welcome:
Fix typos, grammar, or unclear explanations
Add examples and use cases
Improve API documentation
Update installation instructions
Add tutorials for specific workflows
# Work on documentation
cd docs
# Install documentation dependencies
pip install -r requirements.txt
# Build and view documentation locally
make serve
# Open http://localhost:8000
Testing Improvements
Help improve test coverage and quality:
Add test cases for edge conditions
Improve test fixtures and utilities
Add integration tests for new modules
Performance and stress testing
Cross-platform testing
# Run specific test categories
pytest -m unit tests/unit/
pytest -m integration tests/integration/
pytest -m synthetic tests/ -k synthetic
Coding Standards
Code Style
Follow Python PEP 8 with these specific guidelines:
General Style:
# Use descriptive variable names
analysis_results = perform_analysis() # Good
res = perform_analysis() # Avoid
# Use type hints
def analyze_apk(apk_path: str, config: Dict[str, Any]) -> AnalysisResult:
pass
# Document functions with docstrings
def extract_permissions(manifest_xml: str) -> List[str]:
"""Extract permissions from AndroidManifest.xml.
Args:
manifest_xml: Raw XML content of AndroidManifest.xml
Returns:
List of permission strings found in manifest
Raises:
ValueError: If manifest XML is invalid
"""
pass
Class Structure:
class AnalysisModule:
"""Analysis module for specific functionality.
This class provides analysis capabilities for [specific area].
It follows the BaseAnalysisModule interface and integrates with
the analysis framework.
"""
def __init__(self, config: Dict[str, Any]):
"""Initialize module with configuration."""
super().__init__(config)
self.logger = logging.getLogger(__name__)
def analyze(self, apk_path: str, context: AnalysisContext) -> BaseResult:
"""Perform analysis on APK file."""
# Implementation here
pass
Error Handling:
# Specific exception handling
try:
result = risky_operation()
except FileNotFoundError:
logger.error(f"APK file not found: {apk_path}")
return AnalysisResult(status=AnalysisStatus.FAILURE,
error_message="APK file not found")
except ValueError as e:
logger.error(f"Invalid APK format: {e}")
return AnalysisResult(status=AnalysisStatus.FAILURE,
error_message=f"Invalid APK: {e}")
Logging:
import logging

class MyModule:
    def __init__(self):
        self.logger = logging.getLogger(__name__)

    def analyze(self, file_path: str):
        self.logger.info("Starting analysis")
        self.logger.debug(f"Processing file: {file_path}")
        try:
            # Analysis code
            self.logger.debug("Analysis completed successfully")
        except Exception as e:
            self.logger.error(f"Analysis failed: {e}")
Testing Standards
Test Structure:
import pytest
from unittest.mock import Mock, patch
class TestMyModule:
"""Tests for MyModule functionality."""
@pytest.fixture
def module_instance(self, minimal_config):
"""Create module instance for testing."""
return MyModule(minimal_config)
@pytest.mark.unit
def test_should_extract_data_when_valid_input_provided(self, module_instance):
"""Test that data is extracted correctly with valid input."""
# Arrange
test_input = "valid test input"
expected_output = ["expected", "results"]
# Act
actual_output = module_instance.extract_data(test_input)
# Assert
assert actual_output == expected_output
@pytest.mark.unit
def test_should_handle_invalid_input_gracefully(self, module_instance):
"""Test that invalid input is handled gracefully."""
# Arrange
invalid_input = None
# Act & Assert
with pytest.raises(ValueError, match="Input cannot be None"):
module_instance.extract_data(invalid_input)
Test Coverage:
Unit tests should achieve >90% coverage for new code
Integration tests for module interactions
End-to-end tests for critical workflows
Performance tests for resource-intensive operations
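Coverage can be checked locally with pytest-cov; the package path below is an assumption about the project layout:

```shell
# Run unit tests with a line-coverage report (requires pytest-cov)
pytest -m unit --cov=dexray_insight --cov-report=term-missing
```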
Mock Usage:
@pytest.fixture
def mock_external_tool():
"""Mock external tool for testing."""
with patch('subprocess.run') as mock_run:
mock_run.return_value = Mock(
returncode=0,
stdout="tool output",
stderr=""
)
yield mock_run
Documentation Standards
API Documentation:
Use Google-style docstrings for all public functions and classes:
def analyze_strings(content: str, patterns: List[str]) -> Dict[str, List[str]]:
"""Analyze strings using specified patterns.
This function searches through the provided content using regex patterns
and returns categorized matches.
Args:
content: Text content to analyze
patterns: List of regex patterns to match against
Returns:
Dictionary mapping pattern names to lists of matches
Raises:
ValueError: If patterns list is empty
re.error: If regex patterns are invalid
Example:
>>> patterns = ['http[s]?://[^\\s]+', '\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Z|a-z]{2,}\\b']
>>> results = analyze_strings("Visit https://example.com or email test@example.com", patterns)
>>> print(results)
{'urls': ['https://example.com'], 'emails': ['test@example.com']}
"""
pass
User Documentation:
Use clear, concise language
Include practical examples
Provide complete command examples
Document all configuration options
Include troubleshooting sections
Review Process
Pull Request Guidelines
Before Submitting:
Ensure all tests pass: make test
Run linting: make lint
Update documentation if needed
Add changelog entry if applicable
Rebase on latest main branch
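The pre-submission steps above can be run as one sequence; this sketch assumes your fork's remote for the upstream repository is named origin and the default branch is main:

```shell
# Suggested pre-submission sequence
make test
make lint
git fetch origin
git rebase origin/main
```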
Pull Request Template:
## Description
Brief description of changes made.
## Type of Change
- [ ] Bug fix (non-breaking change fixing an issue)
- [ ] New feature (non-breaking change adding functionality)
- [ ] Breaking change (fix or feature causing existing functionality to change)
- [ ] Documentation update
## Testing
- [ ] Unit tests added/updated
- [ ] Integration tests added/updated
- [ ] Manual testing performed
- [ ] All tests pass
## Documentation
- [ ] Documentation updated
- [ ] API documentation updated
- [ ] Configuration documentation updated
## Checklist
- [ ] Code follows project style guidelines
- [ ] Self-review completed
- [ ] Meaningful commit messages
- [ ] No unnecessary files included
Review Criteria:
Reviewers will check:
Functionality: Does the code work as intended?
Code Quality: Follows coding standards and best practices?
Testing: Adequate test coverage and quality?
Documentation: Clear documentation and comments?
Performance: No significant performance regressions?
Security: No security vulnerabilities introduced?
Compatibility: Maintains backward compatibility?
Community Guidelines
Code of Conduct
We are committed to providing a welcoming and inclusive environment:
Be Respectful: Treat all community members with respect
Be Collaborative: Work together constructively
Be Inclusive: Welcome newcomers and diverse perspectives
Be Professional: Maintain professional communication
Focus on Learning: Help others learn and grow
Communication Channels
GitHub Issues: Bug reports, feature requests, questions
GitHub Discussions: General discussions, ideas, help
Pull Requests: Code contributions and reviews
Getting Help
If you need help contributing:
Check existing documentation and examples
Search through GitHub issues and discussions
Create a GitHub issue with your question
Provide context and specific details
Recognition
Contributors are recognized in several ways:
Listed in project contributors
Mentioned in release notes for significant contributions
Invited to be maintainers for sustained contributions
Release Process
The project follows semantic versioning (semver):
Major (X.0.0): Breaking changes
Minor (0.X.0): New features, backward compatible
Patch (0.0.X): Bug fixes, backward compatible
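The semver bump rules above can be expressed as a small illustrative sketch (the bump helper is hypothetical, not part of the project):

```python
def bump(version: str, part: str) -> str:
    """Increment the given part (major/minor/patch) of an X.Y.Z version string."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"   # breaking change: reset minor and patch
    if part == "minor":
        return f"{major}.{minor + 1}.0"  # new feature: reset patch
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"  # bug fix only
    raise ValueError(f"unknown part: {part}")

print(bump("1.4.2", "minor"))  # → 1.5.0
```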
Release Schedule:
Patch releases: As needed for critical bugs
Minor releases: Monthly or bi-monthly
Major releases: Quarterly or as needed
Release Checklist:
Update version numbers
Update changelog
Run full test suite
Build and test documentation
Create release tag
Deploy documentation
Announce release
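The tagging step of the checklist typically looks like this; the version number is illustrative and the tag format is an assumption:

```shell
# Create an annotated release tag and push it (version shown is an example)
git tag -a v1.2.3 -m "Release v1.2.3"
git push origin v1.2.3
```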
Future Development
Planned improvements and areas for contribution:
Short-term Goals:
Enhanced machine learning-based detection
Additional framework support (Kotlin Multiplatform, Unity)
Improved performance for large APKs
Enhanced CLI user experience
Long-term Goals:
Real-time analysis capabilities
Cloud-based analysis service
Integration with CI/CD pipelines
Advanced behavioral analysis
Thank you for contributing to Dexray Insight! Your contributions help make mobile application security analysis more accessible and effective for the community.