Metadata-Version: 2.4
Name: fmusvid
Version: 0.0.1
Summary: A Python video processing library
Home-page: https://github.com/mexyusef/fmusvid
Author: Yusef Ulum
Author-email: yusef314159@gmail.com
License: MIT
Project-URL: Documentation, https://fmusvid.readthedocs.io
Project-URL: Source, https://github.com/mexyusef/fmusvid
Project-URL: Tracker, https://github.com/mexyusef/fmusvid/issues
Keywords: video,processing,editing,capture,webcam,ai,streaming
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Topic :: Multimedia :: Video
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: opencv-python>=4.5.0
Requires-Dist: numpy>=1.19.0
Requires-Dist: Pillow>=8.0.0
Requires-Dist: tqdm>=4.50.0
Requires-Dist: pydantic>=1.8.0
Provides-Extra: ai
Requires-Dist: torch>=1.9.0; extra == "ai"
Requires-Dist: torchvision>=0.10.0; extra == "ai"
Requires-Dist: ultralytics>=8.0.0; extra == "ai"
Requires-Dist: faster-whisper>=0.5.0; extra == "ai"
Provides-Extra: streaming
Requires-Dist: ffmpeg-python>=0.2.0; extra == "streaming"
Requires-Dist: av>=9.0.0; extra == "streaming"
Provides-Extra: gui
Requires-Dist: PyQt6>=6.3.0; extra == "gui"
Provides-Extra: dev
Requires-Dist: pytest>=6.0.0; extra == "dev"
Requires-Dist: pytest-cov>=2.10.0; extra == "dev"
Requires-Dist: black>=21.5b2; extra == "dev"
Requires-Dist: isort>=5.9.0; extra == "dev"
Requires-Dist: flake8>=3.9.0; extra == "dev"
Requires-Dist: mypy>=0.800; extra == "dev"
Provides-Extra: all
Requires-Dist: ffmpeg-python>=0.2.0; extra == "all"
Requires-Dist: av>=9.0.0; extra == "all"
Requires-Dist: PyQt6>=6.3.0; extra == "all"
Requires-Dist: flake8>=3.9.0; extra == "all"
Requires-Dist: faster-whisper>=0.5.0; extra == "all"
Requires-Dist: torch>=1.9.0; extra == "all"
Requires-Dist: torchvision>=0.10.0; extra == "all"
Requires-Dist: mypy>=0.800; extra == "all"
Requires-Dist: pytest>=6.0.0; extra == "all"
Requires-Dist: black>=21.5b2; extra == "all"
Requires-Dist: ultralytics>=8.0.0; extra == "all"
Requires-Dist: pytest-cov>=2.10.0; extra == "all"
Requires-Dist: isort>=5.9.0; extra == "all"
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: keywords
Dynamic: license
Dynamic: license-file
Dynamic: project-url
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

# FMUS-VID Library

FMUS-VID is a Python library for video processing and manipulation, with optional extras for AI features (object detection, speech transcription, translation), RTMP streaming, webcam capture, a PyQt6 GUI, and a command-line interface.

## Installation

```bash
# Basic installation
pip install fmusvid

# With AI features (quotes keep the brackets from being
# interpreted by shells such as zsh)
pip install "fmusvid[ai]"

# With streaming support
pip install "fmusvid[streaming]"

# With GUI support
pip install "fmusvid[gui]"

# With all features
pip install "fmusvid[all]"
```

## Quick Start

### Basic Video Processing

```python
from fmusvid.video import VideoProcessor
from fmusvid.operations import basic

# Load a video
video = VideoProcessor("input.mp4")

# Apply operations
video.apply(basic.cut(start=10, end=20))
video.apply(basic.resize(width=1280, height=720))

# Save the result
video.save("output.mp4")
```
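For orientation, a time-based trim like `basic.cut(start=10, end=20)` ultimately reduces to frame-index arithmetic. The helper below is a standalone sketch in plain Python (not part of the fmusvid API) showing the conversion, assuming a constant frame rate:

```python
def time_to_frame(seconds: float, fps: float) -> int:
    """Map a timestamp in seconds to the nearest frame index."""
    return round(seconds * fps)

def cut_range(start: float, end: float, fps: float) -> range:
    """Frame indices covered by a [start, end) cut at the given fps."""
    return range(time_to_frame(start, fps), time_to_frame(end, fps))

# A 10s-20s cut of a 30 fps video spans frames 300 through 599.
frames = cut_range(10, 20, fps=30.0)
print(frames.start, frames.stop - 1)  # 300 599
```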

### Object Detection

```python
from fmusvid.video import VideoProcessor
from fmusvid.ai import detection

# Load a video
video = VideoProcessor("input.mp4")

# Detect objects
results = video.apply(detection.detect_objects(model="yolo"))

# Save the result with bounding boxes
video.save("output_with_detection.mp4")
```

### RTMP Streaming with Multilingual Subtitles

```python
import asyncio

from fmusvid.stream import RTMPCapture, RTMPOutput
from fmusvid.ai.speech import WhisperTranscriber
from fmusvid.ai.translation import LLMTranslator
from fmusvid.operations.text import SubtitleOverlay

async def main():
    # Create components
    rtmp_capture = RTMPCapture("rtmp://localhost/live/stream")
    transcriber = WhisperTranscriber(model="medium", device="cuda")
    translator = LLMTranslator(model="ollama:mistral", target_languages=["es", "fr", "de"])

    # Create outputs for different languages
    outputs = {
        "en": RTMPOutput("rtmp://localhost/live/stream_en"),
        "es": RTMPOutput("rtmp://localhost/live/stream_es"),
        "fr": RTMPOutput("rtmp://localhost/live/stream_fr"),
        "de": RTMPOutput("rtmp://localhost/live/stream_de")
    }

    # Process frames in real time
    async for frame, audio in rtmp_capture.aiter_frames_with_audio():
        # Transcribe audio
        text = await transcriber.transcribe_chunk(audio)

        # Translate text into each target language
        translations = await translator.translate_async(text)

        # Keep an English feed subtitled with the original transcription,
        # then render a subtitled frame per translated language
        frames = {"en": SubtitleOverlay(position="bottom").apply(frame, text)}
        for lang, translation in translations.items():
            subtitle = SubtitleOverlay(position="bottom")
            frames[lang] = subtitle.apply(frame, translation)

        # Write frames to the matching outputs
        for lang, output_frame in frames.items():
            await outputs[lang].write_frame(output_frame)

asyncio.run(main())
```

### Webcam Capture

```python
from fmusvid.capture import CameraCapture

# List available cameras
from fmusvid.capture import list_cameras
cameras = list_cameras()
for camera in cameras:
    print(f"Camera {camera['id']}: {camera['name']} ({camera['width']}x{camera['height']})")

# Create a camera capture instance
camera = CameraCapture(camera_id=0, width=1280, height=720)

# Start the camera
camera.start()

# Capture a frame
frame = camera.get_frame()

# Capture an image and save it
camera.capture_image("webcam_capture.jpg")

# Record video
camera.record_video("webcam_recording.mp4", duration=10)  # 10 seconds

# Stop the camera
camera.stop()
```

## GUI Applications

FMUS-VID includes a PyQt6-based GUI application for webcam capture:

```bash
# Install with GUI support
pip install fmusvid[gui]

# Run the camera GUI application
python examples/camera_gui.py
```

## Example Scripts

The library includes several example scripts to demonstrate its capabilities:

- `examples/basic_example.py`: Basic video processing operations
- `examples/transitions_example.py`: Video transitions and effects
- `examples/ai_features.py`: Object detection and tracking
- `examples/rtmp_live_example.py`: RTMP streaming with real-time transcription and translation
- `examples/rtmp_demo.py`: Demo of RTMP capabilities using local video files
- `examples/camera_gui.py`: PyQt6-based webcam capture application

## Command Line Interface

FMUS-VID provides a comprehensive command-line interface for common video operations:

```bash
# Basic CLI usage
fmusvid [command] [options]
```

### Available Commands

| Command | Description | Example |
|---------|-------------|---------|
| `convert` | Convert video format | `fmusvid convert input.mp4 output.webm` |
| `cut` | Cut a segment from video | `fmusvid cut -i input.mp4 -o output.mp4 -s 10 -e 20` |
| `resize` | Resize video | `fmusvid resize -i input.mp4 -o output.mp4 -w 1280 -h 720` |
| `info` | Display video information | `fmusvid info input.mp4` |
| `concat` | Concatenate multiple videos | `fmusvid concat -i video1.mp4 video2.mp4 -o output.mp4` |
| `extract-audio` | Extract audio from video | `fmusvid extract-audio -i input.mp4 -o audio.mp3` |
| `add-subtitles` | Add subtitles to video | `fmusvid add-subtitles -i input.mp4 -s subtitles.srt -o output.mp4` |
| `generate-subtitles` | Generate subtitles using AI | `fmusvid generate-subtitles -i input.mp4 -o subtitles.srt --model medium` |
| `translate-subtitles` | Translate subtitles | `fmusvid translate-subtitles -i subtitles.srt -o subtitles_es.srt --target es` |
| `camera` | Capture from webcam | `fmusvid camera -o capture.mp4 -d 10` |
| `screen` | Capture screen recording | `fmusvid screen -o screen.mp4 -d 30` |
| `rtmp-stream` | Stream video to RTMP server | `fmusvid rtmp-stream -i input.mp4 -u rtmp://server/live/stream` |
| `rtmp-capture` | Capture RTMP stream | `fmusvid rtmp-capture -u rtmp://server/live/stream -o output.mp4 -d 60` |

### Global Options

| Option | Description |
|--------|-------------|
| `-v, --verbose` | Enable verbose output |
| `--version` | Show version information |
| `--help` | Show help message |
| `--config FILE` | Use custom configuration file |
| `--log-level LEVEL` | Set logging level (debug, info, warning, error) |

For more detailed help on each command:

```bash
fmusvid [command] --help
```

## Dependencies

- OpenCV
- NumPy
- FFmpeg
- PyTorch (for AI features)
- Faster-Whisper (for speech recognition)
- A local LLM runtime, e.g. Ollama (for translation)
- PyQt6 (for GUI applications)

## License

This project is licensed under the MIT License.
