Metadata-Version: 2.4
Name: tsne_pso
Version: 1.1.4
Summary: t-Distributed Stochastic Neighbor Embedding with Particle Swarm Optimization
Home-page: https://github.com/draglesss/t-SNE-PSO
Author: Allaoui, Mebarka, Belhaouari, Samir Brahim, Hedjam, Rachid, Bouanane, Khadra, Kherfi, Mohammed Lamine
Author-email: fattehotmane@hotmail.com
Maintainer-email: Otmane Fatteh <fattehotmane@hotmail.com>
License-Expression: BSD-3-Clause
Project-URL: Homepage, https://github.com/Draglesss/t-SNE-PSO
Project-URL: Bug Tracker, https://github.com/Draglesss/t-SNE-PSO/issues
Project-URL: Documentation, https://github.com/Draglesss/t-SNE-PSO#readme
Classifier: Programming Language :: Python :: 3
Classifier: Operating System :: OS Independent
Classifier: Intended Audience :: Science/Research
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.19.5
Requires-Dist: scipy>=1.6.0
Requires-Dist: scikit-learn>=1.0.0
Requires-Dist: umap-learn>=0.5.3; platform_system != "Windows"
Requires-Dist: tqdm>=4.64.0
Dynamic: author-email
Dynamic: home-page
Dynamic: license-file
Dynamic: requires-python

# TSNE-PSO

[![PyPI version](https://badge.fury.io/py/tsne-pso.svg)](https://badge.fury.io/py/tsne-pso)
[![License](https://img.shields.io/badge/License-BSD_3--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)
[![Python](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)

t-Distributed Stochastic Neighbor Embedding with Particle Swarm Optimization (TSNE-PSO) is a variant of t-SNE that replaces the gradient-descent optimization step with Particle Swarm Optimization (PSO). This implementation is based on the research paper by Allaoui et al. (2025).
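Both classic t-SNE and TSNE-PSO minimize the same objective, the KL divergence between the high-dimensional joint probabilities `P` and the Student-t low-dimensional affinities `Q`; only the optimizer differs. The sketch below is an illustrative NumPy version of that objective, not the package's internal code:

```python
import numpy as np

def tsne_cost(Y, P, eps=1e-12):
    """KL(P || Q) for a candidate embedding Y, using the Student-t kernel of t-SNE.

    Y: (n, d) low-dimensional embedding; P: (n, n) high-dimensional joint
    probabilities (symmetric, zero diagonal, summing to 1)."""
    # Pairwise squared distances in the embedding space.
    d2 = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    # Student-t kernel with one degree of freedom, as in t-SNE.
    num = 1.0 / (1.0 + d2)
    np.fill_diagonal(num, 0.0)  # a point is not its own neighbor
    Q = num / np.sum(num)
    # eps guards the log against exact zeros.
    return np.sum(P * np.log((P + eps) / (Q + eps)))
```

PSO treats each particle as one candidate embedding `Y` and uses this cost to rank particles, whereas classic t-SNE follows the cost's gradient.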

## Features

- **Improved Optimization**: Particle Swarm Optimization explores the embedding space with a swarm of candidate solutions, making it less susceptible to poor local minima than plain gradient descent
- **Multiple Initialization Options**: Supports initialization using PCA, UMAP, t-SNE, or custom embeddings
- **Hybrid Approach**: Optional hybrid optimization combining PSO with gradient descent steps
- **Highly Customizable**: Fine-tune parameters for particles, inertia, cognitive/social weights, and more
- **scikit-learn Compatible**: Follows scikit-learn's API conventions for easy integration

## Installation

Install the latest stable version from PyPI:

```bash
pip install tsne-pso
```

### Dependencies

- numpy >= 1.19.5
- scipy >= 1.6.0
- scikit-learn >= 1.0.0
- tqdm >= 4.64.0 (progress bars)
- umap-learn >= 0.5.3 (for UMAP initialization; not installed by default on Windows)

## Quick Start

```python
from tsne_pso import TSNEPSO
import numpy as np
from sklearn.datasets import load_iris

# Load example data
iris = load_iris()
X = iris.data

# Create and fit the TSNE-PSO model
tsne_pso = TSNEPSO(
    n_components=2,
    perplexity=30.0,
    n_particles=10,
    n_iter=500,
    random_state=42
)
X_embedded = tsne_pso.fit_transform(X)

# Visualize the results
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 8))
scatter = plt.scatter(X_embedded[:, 0], X_embedded[:, 1], c=iris.target)
plt.legend(handles=scatter.legend_elements()[0], labels=iris.target_names)
plt.title('TSNE-PSO visualization of Iris dataset')
plt.show()
```

## Advanced Usage

### Different Initialization Methods

```python
# Using UMAP for initialization
model = TSNEPSO(init='umap', perplexity=30)

# Using t-SNE for initialization
model = TSNEPSO(init='tsne', perplexity=30)

# Using a custom initial embedding (one row per input sample)
n_samples = X.shape[0]  # X is the data being embedded
initial_embedding = np.random.normal(0, 0.0001, (n_samples, 2))
model = TSNEPSO(init=initial_embedding)
```

### Tuning PSO Parameters

```python
model = TSNEPSO(
    n_particles=20,           # swarm size: more particles widen the search at higher cost
    inertia_weight=0.7,       # momentum retained from each particle's previous velocity
    h=1e-20,                  # controls the dynamic cognitive-weight schedule
    f=1e-21,                  # controls the dynamic social-weight schedule
    use_hybrid=True,          # interleave gradient-descent steps with PSO updates
    n_iter=1000               # number of optimization iterations
)
```

## How It Works

TSNE-PSO enhances the original t-SNE algorithm by replacing gradient descent with Particle Swarm Optimization. The algorithm:

1. **Initialization**: Creates a swarm of particles with positions initialized via PCA, UMAP, t-SNE, or randomly
2. **Optimization**: Updates particles using:
   - Cognitive component (attraction to personal best position)
   - Social component (attraction to global best position)
   - Inertia (tendency to continue current trajectory)
3. **Dynamic Parameters**: Adapts cognitive and social weights over iterations
4. **Hybrid Approach**: Optionally applies gradient descent steps to accelerate convergence
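The per-iteration particle update in steps 2–3 can be sketched in plain NumPy. This is the textbook PSO update on a toy objective, not the package's implementation: the real cost is the t-SNE KL divergence, the cognitive/social weights follow the dynamic schedules controlled by `h` and `f`, and the hybrid gradient steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update: inertia, plus a cognitive pull toward each
    particle's personal best and a social pull toward the swarm's global best."""
    r1 = rng.random(positions.shape)  # per-dimension random factors
    r2 = rng.random(positions.shape)
    velocities = (w * velocities
                  + c1 * r1 * (pbest - positions)   # cognitive component
                  + c2 * r2 * (gbest - positions))  # social component
    return positions + velocities, velocities

# Toy objective standing in for the t-SNE KL cost: squared distance to origin.
def cost(x):
    return np.sum(x ** 2, axis=-1)

n_particles, dim = 10, 2
positions = rng.normal(0.0, 1.0, (n_particles, dim))
velocities = np.zeros_like(positions)
pbest = positions.copy()                      # each particle's personal best
gbest = pbest[np.argmin(cost(pbest))].copy()  # swarm's global best

for _ in range(200):
    positions, velocities = pso_step(positions, velocities, pbest, gbest)
    improved = cost(positions) < cost(pbest)
    pbest[improved] = positions[improved]
    gbest = pbest[np.argmin(cost(pbest))].copy()
```

With `w=0.7` and `c1 + c2 = 3.0` the swarm is in the standard stable regime, so `gbest` converges toward the minimum of the toy objective.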

## Citation

If you use this package in your research, please cite the following paper:

```bibtex
@article{allaoui2025t,
  title={t-SNE-PSO: Optimizing t-SNE using particle swarm optimization},
  author={Allaoui, Mebarka and Belhaouari, Samir Brahim and Hedjam, Rachid and Bouanane, Khadra and Kherfi, Mohammed Lamine},
  journal={Expert Systems with Applications},
  volume={269},
  pages={126398},
  year={2025},
  publisher={Elsevier}
}
```

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## License

BSD-3-Clause (the same license as scikit-learn).
