Metadata-Version: 2.4
Name: shadeai
Version: 2.1.0
Summary: Fully automatic censorship removal for language models
Keywords: llm,transformer,abliteration
Author: Assem Sabry
Author-email: Assem Sabry <assem@assem.cloud>
License-Expression: AGPL-3.0-or-later
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: Environment :: GPU
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Requires-Dist: accelerate~=1.10
Requires-Dist: bitsandbytes~=0.45
Requires-Dist: click~=8.1
Requires-Dist: datasets~=4.0
Requires-Dist: hf-transfer~=0.1
Requires-Dist: huggingface-hub~=0.34
Requires-Dist: kernels~=0.11
Requires-Dist: optuna~=4.5
Requires-Dist: peft~=0.14
Requires-Dist: fastapi~=0.115
Requires-Dist: psutil~=7.1
Requires-Dist: pydantic-settings~=2.10
Requires-Dist: questionary~=2.1
Requires-Dist: requests~=2.32
Requires-Dist: rich~=14.1
Requires-Dist: torch>=2.0
Requires-Dist: transformers~=4.57
Requires-Dist: uvicorn~=0.34
Requires-Dist: geom-median~=0.1 ; extra == 'research'
Requires-Dist: imageio~=2.37 ; extra == 'research'
Requires-Dist: matplotlib~=3.10 ; extra == 'research'
Requires-Dist: numpy~=2.2 ; extra == 'research'
Requires-Dist: pacmap~=0.8 ; extra == 'research'
Requires-Dist: scikit-learn~=1.7 ; extra == 'research'
Requires-Python: >=3.10
Project-URL: Changelog, https://github.com/AssemSabry/Shade/releases
Project-URL: Documentation, https://github.com/AssemSabry/Shade
Project-URL: Homepage, https://github.com/AssemSabry/Shade
Project-URL: Issues, https://github.com/AssemSabry/Shade/issues
Project-URL: Repository, https://github.com/AssemSabry/Shade.git
Provides-Extra: research
Description-Content-Type: text/markdown

![Shade v2.0.0 Banner](media/shadev2.webp)

# Shade: Fully Automatic Censorship Removal

<p align="center">
  <a href="https://assem.cloud/"><img src="https://img.shields.io/badge/Website-Assem.cloud-blue?style=flat&logo=google-chrome&logoColor=white" alt="Website"></a>
  <a href="https://x.com/assemsabryy"><img src="https://img.shields.io/badge/X-@assemsabryy-black?style=flat&logo=x&logoColor=white" alt="X"></a>
  <a href="https://www.facebook.com/assemsabryy"><img src="https://img.shields.io/badge/Facebook-assemsabryy-blue?style=flat&logo=facebook&logoColor=white" alt="Facebook"></a>
  <img src="https://img.shields.io/badge/Version-2.1.0-green" alt="Version">
  <img src="https://img.shields.io/badge/License-AGPL--3.0-orange" alt="License">
</p>

---

## 🌟 What is Shade?

**Shade** is a state-of-the-art platform designed to liberate Large Language Models (LLMs) from artificial censorship and safety filters. Using advanced **Abliteration** (directional ablation) and an automated **TPE-based parameter optimizer** powered by [Optuna](https://optuna.org/), Shade removes "safety alignment" without damaging the model's core intelligence.

---

## 🚀 New in Version 2.0.0

The v2.0.0 release transforms Shade from a CLI utility into a complete **Model Liberation Platform**.

- **Ollama One-Click Integration**: Automatically register your uncensored models with Ollama.
- **Model Quality Benchmarking**: Built-in "Sanity Check" system to verify model intelligence after processing.
- **Space Optimizer (Prune)**: Deep-clean temporary files, checkpoints, and bulky Hugging Face cache entries.
- **Proactive Core (Doctor ++)**: Self-healing diagnostic system that can auto-install missing dependencies.
- **Official API & Web Backend**: Ready-to-use FastAPI server for custom app integrations.

---

## ✨ Core Features

### 1. Fully Automated Abliteration
- **No Training Required**: Uses mathematical projection to remove censorship without expensive GPU fine-tuning.
- **Smart Layer Analysis**: Automatically identifies which layers are responsible for refusals.
- **Precision Optimization**: Balances removal of safety filters with the preservation of model intelligence (KL Divergence tracking).
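
The layer analysis above rests on a simple idea from the abliteration literature: a "refusal direction" can be estimated at each layer as the difference between mean activations on refused vs. complied prompts. The sketch below illustrates that difference-of-means step with NumPy; the function name `refusal_direction` and the toy data are illustrative, not Shade's internal API.

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Difference-of-means refusal direction, unit-normalized.

    harmful_acts / harmless_acts: (n_samples, hidden_dim) activations
    collected at one layer for prompts the model refuses vs. complies with.
    """
    direction = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

# Toy 2-D hidden states where the "refusal" signal lives on axis 0.
harmful = np.array([[1.0, 0.2], [0.9, -0.1], [1.1, 0.0]])
harmless = np.array([[-1.0, 0.1], [-0.9, 0.0], [-1.1, -0.1]])
r = refusal_direction(harmful, harmless)
```

Layers where this direction separates the two prompt sets most cleanly are the ones worth ablating.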

### 2. High-End Web Interface (Shade Web UI)
- **Modern Liquid Glass Design**: A premium, responsive web chat interface.
- **Model Comparison Mode**: View original vs. uncensored responses side-by-side.

### 3. Hardware & System Care
- **Multi-GPU Support**: Automatically detects and leverages CUDA, XPU, MLU, and Apple Metal (MPS).
- **GPU Diagnostics**: Real-time VRAM monitoring.
- **Memory Optimization**: Optimized memory management to prevent OOM errors.
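
As a rough sketch of how such backend detection can work in PyTorch (Shade's own detection also covers XPU and MLU; `pick_device` here is illustrative, not Shade's API):

```python
import torch

def pick_device() -> torch.device:
    """Return the best available accelerator, falling back to CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    # Apple Metal (MPS) on macOS builds of PyTorch
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
```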

---

## ⚒️ Getting Started

### 1. Installation (PyPI)
Install Shade directly from PyPI:
```bash
pip install shadeai
```

For research features (plotting, clustering, etc.):
```bash
pip install shadeai[research]
```

### 2. Installation (From Source)
```bash
git clone https://github.com/AssemSabry/Shade.git
cd Shade
pip install -e .
```

### 3. Configuration & Login
To download gated models from Hugging Face, authenticate first:
```bash
shade hf login
```

### 4. Liberate a Model
Run the automatic optimization process on any model ID:
```bash
shade <model_id>
```
*Example:* `shade Qwen/Qwen2.5-1.5B-Instruct`

### 5. Start Web Chat
Launch the web interface to talk to your models:
```bash
shade serve
```

---

## 📋 Command Reference

| Command | Description |
| :--- | :--- |
| `shade <model_id>` | Start the automatic optimization & abliteration process. |
| `shade serve` | Launch the Shade Web UI interface. |
| `shade library` | Manage and launch your saved decensored models. |
| `shade ollama` | Export and register a model with Ollama automatically. |
| `shade benchmark` | Run quality tests to ensure the model's logic is intact. |
| `shade doctor --fix` | Automatically diagnose and fix system/dependency issues. |
| `shade prune --all` | Free up disk space by cleaning cache and checkpoints. |
| `shade hf login` | Securely authenticate with Hugging Face Hub. |
| `shade commands` | Show the complete CLI manual. |

---

## 🐍 Python API Usage

Shade is not just a CLI; it's a powerful library. You can integrate Shade's liberation engine directly into your Python apps.

### 1. Basic Generation
```python
from shade.model import Model
from shade.config import Settings

# Initialize with default settings
settings = Settings()
model = Model("meta-llama/Llama-3.1-8B-Instruct", settings)

# Generate uncensored response
response = model.generate("Your prompt here")
print(response)
```

### 2. Automatic Optimization (Abliteration)
```python
from shade.main import main as run_optimization

# This starts the fully automatic search and removal process
run_optimization()
```

### 3. Integrated Web UI & API
```python
from shade.server import start_server
from shade.config import Settings

# Launch the FastAPI backend and Liquid Glass UI
settings = Settings()
start_server(model_id="Qwen/Qwen2.5-1.5B-Instruct", settings=settings)
```

### 4. System Diagnostics
```python
from shade.cli import run_doctor

# Check for CUDA, RAM, and auto-fix dependencies
run_doctor(fix=True)
```

---

## 🧠 How It Works

Shade identifies the "refusal direction" within the model's high-dimensional space and applies an **Ablation Weight Kernel**. This kernel is optimized specifically for each component (Attention Out-Projection, MLP Down-Projection) to ensure that the censorship is removed with the least amount of "collateral damage" to the model's capabilities.
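
To make the projection concrete, here is a minimal NumPy sketch of directional ablation on a single weight matrix: the component of the matrix's output that lies along the refusal direction is subtracted out. This is an illustration of the general technique, not Shade's actual kernel; `ablate_direction` and `alpha` are names chosen for the example.

```python
import numpy as np

def ablate_direction(W: np.ndarray, r: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Project the direction r out of a weight matrix's output.

    W:     (d_model, d_in) matrix writing into the residual stream
           (e.g. an attention out-projection or MLP down-projection).
    r:     refusal direction, shape (d_model,); normalized internally.
    alpha: ablation weight (1.0 = full removal of the component).
    """
    r = r / np.linalg.norm(r)
    return W - alpha * np.outer(r, r) @ W

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
r = np.array([1.0, 0.0, 0.0, 0.0])
W_abl = ablate_direction(W, r)

x = rng.normal(size=3)
# After full ablation, the layer's output has no component along r:
# r @ (W_abl @ x) == 0 (up to floating point)
```

In practice the per-component `alpha` values are what the TPE optimizer searches over, trading refusal removal against KL divergence from the original model.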

> [!IMPORTANT]
> **Shade** is a fully original, independent project built from the ground up. It is **NOT** a clone, fork, or derivative of any existing repository. All automation logic, UI design, and optimization workflows were developed specifically for this project.

---

## 👤 Meet the Developer

<p align="center">
  <img src="media/assemm.webp" width="600" alt="Assem Sabry">
  <br>
  <b>Assem Sabry</b>
  <br>
  <i>Lead Developer & AI Researcher</i>
</p>

<p align="center">
  <a href="https://assem.cloud/">
    <img src="https://img.shields.io/badge/Visit%20My%20Website-assem.cloud-blue?style=for-the-badge&logo=google-chrome&logoColor=white" alt="Website">
  </a>
  <a href="https://www.facebook.com/assemsabryy">
    <img src="https://img.shields.io/badge/Facebook-assemsabryy-1877F2?style=for-the-badge&logo=facebook&logoColor=white" alt="Facebook">
  </a>
  <a href="https://www.linkedin.com/in/assem7/">
    <img src="https://img.shields.io/badge/LinkedIn-assem7-0A66C2?style=for-the-badge&logo=linkedin&logoColor=white" alt="LinkedIn">
  </a>
</p>

---

## ⚠️ Disclaimer

**Assem Sabry**, the developer of Shade, is **not responsible** for any misuse of this tool. Shade is provided for educational and research purposes only. The primary goal of this project is to allow users to unlock the full potential of open-source language models. Users are expected to interact with de-censored models responsibly.

---

## 📜 Citation

If you use Shade in your research, please cite it:

```bibtex
@misc{shade,
  author = {Sabry, Assem},
  title = {Shade: Fully automatic censorship removal for language models},
  year = {2026},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/AssemSabry/Shade}}
}
```

---

## ⚖️ License

Copyright &copy; 2026 **Assem Sabry**
Licensed under the **GNU Affero General Public License v3.0**. See the [LICENSE](LICENSE) file for details.
