# littledl

High-performance download library with IDM-style multi-threaded chunked downloading, intelligent scheduling, and resume support.

## Features

- Multi-threaded chunked downloads with intelligent scheduling
- Direct file writing (no temporary files)
- Resume support for interrupted downloads
- Real-time speed monitoring with ETA
- Multiple authentication methods (Basic, Bearer, Digest, API Key, OAuth2)
- Full proxy support (HTTP, HTTPS, SOCKS5, system proxy auto-detect)
- Speed limiting (token bucket, leaky bucket, adaptive)
- Cross-platform (Windows, macOS, Linux, FreeBSD)

## Installation

```bash
pip install littledl
# or
uv add littledl
```

# Getting Started

## Prerequisites

- Python 3.10 or higher
- pip or uv package manager

## Basic Usage

### Synchronous Download

```python
from littledl import download_file_sync

path = download_file_sync("https://example.com/file.zip")
```

### Asynchronous Download

```python
import asyncio
from littledl import download_file

async def main():
    path = await download_file(
        "https://example.com/file.zip",
        save_path="./downloads",
        filename="my_file.zip",
    )

asyncio.run(main())
```

# Configuration

## DownloadConfig

```python
from littledl import DownloadConfig

config = DownloadConfig(
    enable_chunking=True,
    max_chunks=16,
    chunk_size=4 * 1024 * 1024,  # 4MB
    buffer_size=64 * 1024,        # 64KB
    timeout=300,
    resume=True,
    verify_ssl=True,
)
```

## Configuration Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| enable_chunking | bool | True | Enable multi-threaded chunked download |
| max_chunks | int | 16 | Maximum number of concurrent chunks |
| chunk_size | int | 4MB | Default size for each chunk |
| buffer_size | int | 64KB | Disk write buffer size |
| timeout | float | 300 | Read/write timeout in seconds |
| resume | bool | True | Enable resume support |
| verify_ssl | bool | True | Verify SSL certificates |
| fallback_to_single_on_failure | bool | True | Auto fallback to single-stream mode on chunked failure |
| enable_adaptive | bool | True | Enable adaptive network scheduling |
| enable_hybrid_turbo | bool | True | Enable hybrid turbo download with AIMD congestion control and smart chunk fallback |
| hybrid_aimd_increase_step | int | 1 | Target worker increase step size (Additive Increase) |
| hybrid_aimd_decrease_factor | float | 0.5 | Factor used to multiply target workers on speed decline (Multiplicative Decrease) |
| hybrid_speedup_threshold | float | 0.08 | Minimum relative speedup threshold required to trigger AIMD |
| hybrid_slow_chunk_ratio | float | 0.45 | Threshold ratio defining extremely slow chunks |
| verify_hash | bool | False | Verify downloaded file hash |

<details>
<summary>Methods</summary>

- `apply_style(style: Any) -> "DownloadConfig"`: Quickly reconfigures scheduling variables, chunking thresholds, and AIMD control parameters based on the given style (e.g. `DownloadStyle.HYBRID_TURBO` or `"HYBRID_TURBO"`).

</details>
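The AIMD parameters above follow the classic additive-increase/multiplicative-decrease pattern: grow the worker target by `hybrid_aimd_increase_step` while throughput improves by at least `hybrid_speedup_threshold`, and multiply it by `hybrid_aimd_decrease_factor` when throughput declines. A standalone sketch of the idea (not littledl's internal code, defaults taken from the table above):

```python
def aimd_step(
    target_workers: int,
    prev_speed: float,
    curr_speed: float,
    increase_step: int = 1,           # hybrid_aimd_increase_step
    decrease_factor: float = 0.5,     # hybrid_aimd_decrease_factor
    speedup_threshold: float = 0.08,  # hybrid_speedup_threshold
    max_workers: int = 16,
) -> int:
    """Return the next worker target from the last two speed samples."""
    if prev_speed <= 0:
        return target_workers
    relative_change = (curr_speed - prev_speed) / prev_speed
    if relative_change >= speedup_threshold:
        # Additive increase: add workers while speed keeps improving.
        return min(target_workers + increase_step, max_workers)
    if relative_change < 0:
        # Multiplicative decrease: back off quickly when speed drops.
        return max(1, int(target_workers * decrease_factor))
    return target_workers  # Plateau: hold steady.

workers = 4
workers = aimd_step(workers, prev_speed=1.0e6, curr_speed=1.2e6)  # +20% -> 5
workers = aimd_step(workers, prev_speed=1.2e6, curr_speed=0.8e6)  # decline -> 2
```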

## Speed Limiting

```python
from littledl import DownloadConfig, SpeedLimitConfig, SpeedLimitMode

speed_limit = SpeedLimitConfig(
    enabled=True,
    mode=SpeedLimitMode.GLOBAL,
    max_speed=1024 * 1024,  # 1 MB/s
)

config = DownloadConfig(speed_limit=speed_limit)
```

# Authentication

## AuthConfig

```python
from littledl import AuthConfig, AuthType
```

## Authentication Types

### Basic Authentication

```python
auth = AuthConfig(
    auth_type=AuthType.BASIC,
    username="user",
    password="pass",
)
```

### Bearer Token

```python
auth = AuthConfig(
    auth_type=AuthType.BEARER,
    token="your-api-token",
)
```

### API Key

```python
auth = AuthConfig(
    auth_type=AuthType.API_KEY,
    api_key="your-api-key",
    api_key_header="X-API-Key",
)
```

### OAuth2

```python
auth = AuthConfig(
    auth_type=AuthType.OAUTH2,
    client_id="client-id",
    client_secret="client-secret",
    token_url="https://example.com/oauth/token",
)
```
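The snippets above construct an `AuthConfig` but don't show it attached to a download. Assuming `DownloadConfig` accepts an `auth` keyword, by analogy with the `speed_limit` and `retry` keywords used elsewhere in this README (this wiring is not shown in the source and is an assumption):

```python
from littledl import AuthConfig, AuthType, DownloadConfig, download_file_sync

auth = AuthConfig(auth_type=AuthType.BEARER, token="your-api-token")
config = DownloadConfig(auth=auth)  # `auth=` assumed, by analogy with speed_limit/retry

path = download_file_sync("https://example.com/file.zip", config=config)
```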

# Proxy Configuration

## ProxyConfig

```python
from littledl import ProxyConfig, ProxyMode
```

## Proxy Modes

### System Proxy (Auto-detect)

```python
proxy = ProxyConfig(mode=ProxyMode.SYSTEM)
```

### Custom Proxy

```python
proxy = ProxyConfig(
    mode=ProxyMode.CUSTOM,
    http_proxy="http://proxy.example.com:8080",
    https_proxy="https://proxy.example.com:8080",
)
```

### SOCKS5 Proxy

```python
proxy = ProxyConfig(
    mode=ProxyMode.CUSTOM,
    socks5_proxy="socks5://user:pass@proxy.example.com:1080",
)
```
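As with authentication, a `ProxyConfig` presumably attaches to `DownloadConfig`; the `proxy` keyword below is an assumption, not confirmed elsewhere in this README:

```python
from littledl import DownloadConfig, ProxyConfig, ProxyMode

proxy = ProxyConfig(mode=ProxyMode.SYSTEM)   # auto-detect system proxy
config = DownloadConfig(proxy=proxy)         # `proxy=` assumed keyword
```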

# Error Handling

## Exception Types

### DownloadException

Base exception for all download-related errors.

### NetworkError

Network-related errors (connection timeout, DNS failure, etc.).

### AuthenticationError

Authentication failures.
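A typical handling pattern, assuming `NetworkError` and `AuthenticationError` subclass `DownloadException` as the hierarchy above suggests (catch the specific errors first, then the base class):

```python
from littledl import (
    AuthenticationError,
    DownloadException,
    NetworkError,
    download_file_sync,
)

try:
    path = download_file_sync("https://example.com/file.zip")
except AuthenticationError as e:
    print(f"Auth failed, check credentials: {e}")
except NetworkError as e:
    print(f"Network problem, possibly transient: {e}")
except DownloadException as e:
    # Base class catches any remaining download error.
    print(f"Download failed: {e}")
```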

## Retry Configuration

```python
from littledl import DownloadConfig, RetryConfig, RetryMode

retry = RetryConfig(
    enabled=True,
    mode=RetryMode.EXPONENTIAL,
    max_retries=3,
    initial_delay=1.0,
    max_delay=60.0,
)

config = DownloadConfig(retry=retry)
```

# Advanced Usage

## Chunk Management

### Manual Chunk Size

```python
config = DownloadConfig(
    enable_chunking=True,
    chunk_size=8 * 1024 * 1024,  # 8MB chunks
    max_chunks=8,
)
```

### Disabling Chunking

```python
config = DownloadConfig(enable_chunking=False)
```

## Concurrent Downloads

```python
import asyncio
from littledl import download_file

async def download_multiple(urls: list[str]):
    tasks = [download_file(url) for url in urls]
    return await asyncio.gather(*tasks)
```

## Batch Download

Multi-file batch download, with optimizations both for large numbers of small files and for very large files:

```python
from littledl import batch_download_sync

results = batch_download_sync(
    urls=[
        "https://example.com/file1.zip",
        "https://example.com/file2.zip",
        "https://example.com/file3.zip",
    ],
    save_path="./downloads",
    max_concurrent_files=5,
)

for url, path, error in results:
    if path:
        print(f"✓ {url} -> {path}")
    else:
        print(f"✗ {url}: {error}")
```

Async version with more control:

```python
import asyncio
from littledl import BatchDownloader

async def main():
    downloader = BatchDownloader(
        max_concurrent_files=5,
        max_concurrent_chunks_per_file=4,
        enable_adaptive_concurrency=True,
    )
    await downloader.add_urls(urls, "./downloads")
    await downloader.start()

asyncio.run(main())
```

Batch download features:
- Adaptive concurrency: dynamically adjusts concurrent downloads based on network speed
- Small file priority: auto-identifies and prioritizes small files for better UX
- Connection pooling: shared connection pool reduces overhead
- Batch probe: parallel HEAD requests for file info
- Smart chunking: auto-selects optimal chunk strategy based on file size
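Per-batch progress and completion hooks can be attached with `set_progress_callback` and `set_file_complete_callback`. The exact callback signatures aren't documented here; this sketch assumes the progress callback receives a `BatchProgress` and the completion callback a `FileTask`:

```python
import asyncio
from littledl import BatchDownloader

def on_progress(progress):  # assumed to receive a BatchProgress
    print(f"{progress.completed_files}/{progress.total_files} files, "
          f"{progress.overall_speed / 1024:.0f} KB/s")

def on_file_complete(task):  # assumed to receive a FileTask
    print(f"done: {task.filename} ({task.downloaded} bytes)")

async def main():
    downloader = BatchDownloader(max_concurrent_files=5)
    downloader.set_progress_callback(on_progress)
    downloader.set_file_complete_callback(on_file_complete)
    await downloader.add_urls(
        ["https://example.com/file1.zip", "https://example.com/file2.zip"],
        "./downloads",
    )
    await downloader.start()

asyncio.run(main())
```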

## Custom Headers

```python
config = DownloadConfig(
    headers={
        "User-Agent": "MyApp/1.0",
        "Accept": "application/octet-stream",
    }
)
```

## Progress Callback

```python
def on_progress(downloaded: int, total: int, speed: float, eta: int):
    percent = (downloaded / total) * 100
    print(f"\r{percent:.1f}% | {speed/1024:.1f} KB/s | ETA: {eta}s", end="")

config = DownloadConfig(progress_callback=on_progress)
```

## Performance Tuning

### Buffer Size

```python
config = DownloadConfig(
    buffer_size=256 * 1024,  # 256KB buffer
)
```

### Connection Pooling

```python
config = DownloadConfig(
    max_connections=32,
    max_keepalive_connections=16,
)
```

# API Reference

## Core Functions

### download_file_sync

```python
from littledl import download_file_sync

def download_file_sync(
    url: str,
    save_path: str = ".",
    filename: str | None = None,
    config: DownloadConfig | None = None,
) -> Path
```

### download_file

```python
from littledl import download_file

async def download_file(
    url: str,
    save_path: str = ".",
    filename: str | None = None,
    config: DownloadConfig | None = None,
) -> Path
```

### batch_download_sync

```python
from littledl import batch_download_sync

def batch_download_sync(
    urls: list[str],
    save_path: str = "./downloads",
    config: DownloadConfig | None = None,
    max_concurrent_files: int = 5,
    max_concurrent_chunks_per_file: int = 4,
) -> list[tuple[str, Path | None, str | None]]
# Returns [(url, path, error), ...]
```

### BatchDownloader

```python
from littledl import BatchDownloader

downloader = BatchDownloader(
    config=None,                       # DownloadConfig | None
    max_concurrent_files=5,
    max_concurrent_chunks_per_file=4,
    enable_adaptive_concurrency=True,
    enable_small_file_priority=True,
)

await downloader.add_url(url, save_path, filename, priority)
await downloader.add_urls(urls, save_path)
downloader.set_progress_callback(callback)
downloader.set_file_complete_callback(callback)
await downloader.start()
await downloader.pause()
await downloader.resume()
await downloader.cancel()
await downloader.stop()

task = downloader.get_task(task_id)
tasks = downloader.get_all_tasks()
progress = downloader.get_progress()
stats = downloader.get_stats()
```

### FileTask

```python
from littledl import FileTask

# Properties:
task.task_id       # Unique task ID
task.url           # Download URL
task.filename      # Output filename
task.status        # FileTaskStatus enum
task.file_size     # Total file size
task.downloaded    # Downloaded bytes
task.speed         # Current speed
task.progress      # Progress percentage
task.error         # Error message if failed
task.is_small_file # True if < 5MB
task.is_large_file # True if > 100MB
```

### BatchProgress

```python
from littledl import BatchProgress

# Properties:
progress.total_files      # Total number of files
progress.completed_files  # Completed files count
progress.failed_files     # Failed files count
progress.active_files     # Currently downloading count
progress.total_bytes      # Total bytes to download
progress.downloaded_bytes # Downloaded bytes
progress.overall_speed    # Current download speed
progress.eta              # Estimated seconds remaining
progress.progress         # Overall progress percentage
```

## Enums

### AuthType

- BASIC
- BEARER
- DIGEST
- API_KEY
- OAUTH2

### ProxyMode

- SYSTEM - Auto-detect system proxy
- CUSTOM - Use custom proxy settings
- NONE - No proxy

### SpeedLimitMode

- GLOBAL - Limit overall speed
- PER_CHUNK - Limit per-chunk speed

## Multi-language Support

```python
from littledl import set_language, get_available_languages

set_language("zh")  # or "en"
print(get_available_languages())  # {'en': 'English', 'zh': '中文'}
```

# High-Speed Download Mode

## DownloadStyle Enum

Download style options for single-file downloads.

```python
from littledl import DownloadStyle

DownloadStyle.SINGLE        # Single-threaded download
DownloadStyle.MULTI         # Multi-threaded segmented download (aria2-style)
DownloadStyle.ADAPTIVE      # Automatically select best style
DownloadStyle.HYBRID_TURBO  # Hybrid turbo with AIMD congestion control (see apply_style)
```

## StrategySelector

Intelligent strategy selector that analyzes file characteristics and network conditions to automatically choose the optimal download style.

### Algorithm

1. File size + server Range support → base style
2. Network stability prediction → whether to add more threads
3. Historical performance → dynamically adjust thresholds

### Usage

```python
from littledl import StrategySelector, DownloadStyle

selector = StrategySelector(
    default_style=DownloadStyle.ADAPTIVE,
    enable_single=True,
    enable_multi=True,
    max_chunks=16,
)

# Analyze file
profile = selector.analyze_file(
    url="https://example.com/file.zip",
    size=100 * 1024 * 1024,
    supports_range=True,
)

# Get style decision
decision = selector.select_style(profile)
print(f"Recommended style: {decision.style.value}")
print(f"Recommended chunks: {decision.recommended_chunks}")
print(f"Estimated speedup: {decision.estimated_speedup:.1f}x")
```

### Style Decision Result

```python
@dataclass
class StyleDecision:
    style: DownloadStyle          # SINGLE, MULTI, or ADAPTIVE
    confidence: float             # 0.0 - 1.0
    reason: str                   # Human-readable explanation
    recommended_chunks: int       # Recommended number of chunks
    estimated_speedup: float      # Estimated speedup vs single-threaded
```

## DynamicStyleAllocator

Dynamic style allocator for multi-file batch downloads.

```python
from littledl import DynamicStyleAllocator, DownloadStyle

allocator = DynamicStyleAllocator(
    selector=selector,
    max_concurrent_files=5,
    max_total_chunks=16,
)

# Add file and get allocation
decision = await allocator.add_file(
    file_id="file1",
    url="https://example.com/file.zip",
    size=100 * 1024 * 1024,
    supports_range=True,
    priority=1,
)
```

## EnhancedBatchDownloader

High-performance batch downloader with intelligent scheduling and aria2-style multi-threaded segmented downloads.

```python
from littledl import EnhancedBatchDownloader

downloader = EnhancedBatchDownloader(
    max_concurrent_files=5,
    max_total_threads=15,
    enable_existing_file_reuse=True,
    enable_multi_source=True,
)

await downloader.add_url(
    "https://example.com/file.zip",
    backup_urls=["https://backup.com/file.zip"]
)
await downloader.start()
```

### Statistics

```python
stats = downloader.get_stats()
# {
#     "total_files": 10,
#     "completed_files": 5,
#     "failed_files": 0,
#     "reused_files": 2,
#     "total_threads": 8,
#     "active_threads": 5,
#     "dynamic_chunks_added": 3,
# }
```

### File Reuse Statistics

```python
reuse_stats = downloader.get_file_reuse_stats()
# {
#     "checks": 100,
#     "hits": 45,
#     "hit_rate": "45.0%",
#     "bytes_saved": "1.2 GB",
#     "quick_hash_hits": 30,
#     "content_matched": 15,
# }
```

## GlobalThreadPool

Global thread pool for unified thread management across multiple file downloads.

```python
from littledl import GlobalThreadPool

pool = GlobalThreadPool(
    max_total_threads=15,
    min_speed_threshold=256 * 1024,
    ewma_alpha=0.3,
)

stats = pool.get_stats()
# {
#     "total_threads": 8,
#     "active_threads": 5,
#     "avg_speed": 5242880.0,
#     "speed_trend": 0.15,
#     "speed_variance": 0.25,
#     "predicted_speed": 5500000.0,
# }
```

## FileReuseChecker

Content-aware file matching for reusing existing files.

```python
from littledl import FileReuseChecker

checker = FileReuseChecker(
    check_hash=True,
    hash_algorithm="md5",
    enable_content_matching=True,
    quick_hash_size=64 * 1024,
)

# Find existing file by content
existing = checker.find_matching_file_by_content(
    target_path=Path("target.jar"),
    search_directory=Path("./minecraft/libraries"),
    size_tolerance=0.01,
)
```

## MultiSourceManager

Multi-source backup with automatic failover.

```python
from littledl import MultiSourceManager

manager = MultiSourceManager()
manager.add_source("https://primary.com/file.zip", priority=1)
manager.add_source("https://backup.com/file.zip", priority=0)

source = manager.get_next_available()
manager.mark_source_failed(source["url"], "404")
manager.mark_source_success(source["url"])
```

# License

Apache-2.0
