Metadata-Version: 2.4
Name: crawlerforge
Version: 1.0.12
Summary: Advanced Scrapy framework with multi-engine support
Home-page: https://github.com/fabiocantone/crawlerforge
Author: Fabio Cantone
Author-email: Fabio Cantone <fabio@cantone.me>
License: MIT
Keywords: scrapy,scraping,crawling,proxy,browser
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Requires-Python: >=3.8
Description-Content-Type: text/markdown
Dynamic: author
Dynamic: home-page
Dynamic: requires-python

Advanced Scrapy framework with multi-engine support and intelligent proxy management.

## Features

- **Multi-Engine Support**: HTTP (curl_cffi), Camoufox, Undetected Chrome
- **Intelligent Proxy Management**: API, file, database providers with auto-rotation
- **JSON Configuration**: Zero-code spider setup
- **Advanced Anti-Detection**: Human-like behaviors and stealth features
- **Flexible Data Extraction**: CSS, XPath, JSON, derived fields

## Installation

```bash
# Basic installation
pip install crawlerforge

# With browser support
pip install crawlerforge[browser]

# With all features
pip install crawlerforge[all]
```
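Note: on shells such as zsh that treat square brackets as glob patterns, quote the extras, e.g. `pip install 'crawlerforge[all]'`.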

## Quick Start

```bash
# Generate configuration
crawlerforge genconfig --template ecommerce --output config.json

# Run spider
crawlerforge crawl myspider -c config.json -o products.json
```

## Example Configuration

```json
{
  "engine": "camoufox",
  "start_url": ["https://example.com/sitemap.xml"],
  "products_list_selector": ".product",
  "fields": {
    "name": {"type": "text", "tags": [".title::text"], "required": true},
    "price": {"type": "price", "tags": [".price::text"], "required": true}
  }
}
```
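
The sketch below is a minimal, hypothetical example (standard library only, not part of the CrawlerForge API) of loading and sanity-checking such a configuration before starting a crawl; the key names (`engine`, `start_url`, `fields`, `required`, `tags`) simply mirror the example above.

```python
import json

# Keys shown in the example configuration above; purely illustrative.
REQUIRED_KEYS = {"engine", "start_url", "fields"}


def load_config(path: str) -> dict:
    """Load a JSON spider config and check the keys used in the example."""
    with open(path, encoding="utf-8") as fh:
        config = json.load(fh)

    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"config is missing required keys: {sorted(missing)}")

    # Every field marked "required" should at least define its selector tags.
    for name, spec in config.get("fields", {}).items():
        if spec.get("required") and not spec.get("tags"):
            raise ValueError(f"required field {name!r} has no selector tags")

    return config


if __name__ == "__main__":
    config = load_config("config.json")
    print(f"Loaded config for engine {config['engine']!r} "
          f"with {len(config['fields'])} fields")
```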

## Documentation

Visit [GitHub](https://github.com/fabiocantone/crawlerforge) for full documentation and examples.

## License

MIT License

## Publishing

```bash
# 1. Install build tools
pip install build twine

# 2. Build the package
python -m build

# 3. Upload to TestPyPI (testing)
python -m twine upload --repository testpypi dist/*

# 4. Upload to PyPI (production)
python -m twine upload dist/*

# 5. Install from PyPI
pip install crawlerforge

# 6. Install from GitHub (development)
pip install git+https://github.com/fabiocantone/crawlerforge.git
```
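
After uploading to TestPyPI, the release can be verified by installing from the test index, with the regular index added as a fallback so dependencies still resolve:

```bash
# Verify the TestPyPI release in a clean environment
pip install --index-url https://test.pypi.org/simple/ \
    --extra-index-url https://pypi.org/simple/ crawlerforge
```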
