Metadata-Version: 2.4
Name: pyg-hyper-nn
Version: 0.1.1
Summary: PyTorch Geometric-based hypergraph neural networks library
Author-email: Ryusei Nishide <nishide.dev@gmail.com>
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.12
Requires-Dist: numpy>=1.24.0
Requires-Dist: torch-geometric>=2.4.0
Requires-Dist: torch>=2.0.0
Description-Content-Type: text/markdown

# PyG-Hyper-NN

[![PyPI version](https://badge.fury.io/py/pyg-hyper-nn.svg)](https://pypi.org/project/pyg-hyper-nn/)
[![Python 3.12](https://img.shields.io/badge/python-3.12-blue.svg)](https://www.python.org/downloads/)
[![PyTorch Geometric](https://img.shields.io/badge/PyG-2.6-ee4c2c.svg)](https://pytorch-geometric.readthedocs.io/)
[![Code style: ruff](https://img.shields.io/badge/code%20style-ruff-000000.svg)](https://github.com/astral-sh/ruff)
[![Type checked: ty](https://img.shields.io/badge/type%20checked-ty-blue.svg)](https://github.com/astral-sh/ty)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

**PyTorch Geometric-based hypergraph neural networks library with 19+ state-of-the-art models for research and production.**

PyG-Hyper-NN is a comprehensive library of hypergraph neural network models built on PyTorch Geometric. All implementations are faithfully ported from [DHG-Bench](https://github.com/Coco-Hut/DHG-Bench) (ICLR 2026), preserving the exact mathematical operations and algorithmic logic from the original papers. The library provides clean, typed implementations with standardized interfaces, comprehensive tests (373 tests), and production-ready code quality.

## 🚀 Key Features

### 🧠 **19+ State-of-the-Art Models** (from DHG-Bench)
- **Basic Models**: MLP, HGNN, HCHA, UniGNN, UniGCNII ✅
- **Set-Based Models**: AllSet (SetGNN with PMA attention), EquivSetGNN ✅
- **Diffusion Models**: TFHNN (Training-free PageRank propagation), HyperND (p-norm diffusion) ✅
- **Phenomenological Models**: PhenomNN (Multi-scale iterative propagation), PhenomNNS (Simplified variant) ✅
- **Graph Expansion**: CEGCN, CEGAT (Clique expansion), HyperGCN (Mediator-based), LEGCN (Line expansion) ✅
- **Transformers**: HyperGT (Kernelized attention with O(N) complexity) ✅
- **Degree-Based**: HNHN (Learnable alpha/beta normalization), HJRL (Joint node-edge representation) ✅
- **Advanced Architectures**: SheafHyperGNN (Sheaf theory with CP decomposition), EDGNN (Equivariant diffusion) ✅
- **Diverse Approaches**: Message passing, attention, transformers, equivariant operations, diffusion, phenomenological modeling, sheaf theory (a minimal message-passing sketch follows this list)
- **Research-Backed**: Faithful implementations from published papers (AAAI 2019-2024, IJCAI 2021, ICLR 2022-2025, NeurIPS 2019-2023, ICML 2022-2023, etc.)
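
Many of these architectures share the same two-stage primitive: aggregate node features into hyperedge features, then scatter them back to the member nodes. Below is a minimal sketch of that primitive using PyG's `scatter` utility; it illustrates the general pattern only and is not the library's internal code:

```python
import torch
from torch_geometric.utils import scatter

def two_stage_message_passing(x, hyperedge_index, reduce="mean"):
    """Vertex -> hyperedge -> vertex aggregation (illustrative sketch)."""
    node_idx, edge_idx = hyperedge_index  # COO incidence pairs
    num_edges = int(edge_idx.max()) + 1

    # Stage 1: aggregate incident node features into each hyperedge
    edge_feat = scatter(x[node_idx], edge_idx, dim=0, dim_size=num_edges, reduce=reduce)

    # Stage 2: scatter hyperedge features back to their member nodes
    return scatter(edge_feat[edge_idx], node_idx, dim=0, dim_size=x.size(0), reduce=reduce)

x = torch.randn(4, 8)
hyperedge_index = torch.tensor([[0, 1, 2, 1, 2, 3],
                                [0, 0, 0, 1, 1, 1]])
print(two_stage_message_passing(x, hyperedge_index).shape)  # torch.Size([4, 8])
```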

### 🎯 **Clean Architecture**
- **Modular Design**: Separate `layers/` and `models/` for maximum reusability
- **Standardized Interface**: Consistent API across all models
- **Type Safety**: Full type annotations with `ty` checking
- **No External Config**: Pure PyTorch, no args/config file dependencies

### 🔬 **Research-Ready**
- **Faithful Implementations**: All models faithfully ported from [DHG-Bench](https://github.com/Coco-Hut/DHG-Bench), preserving exact mathematical operations
- **Comprehensive Tests**: 373 tests covering all layers and models (100% pass rate)
- **Gradient Flow Verified**: All models tested for proper backpropagation
- **Reproducible**: Fixed initialization and deterministic operations (a seeding sketch follows this list)
- **Documented**: Docstrings with paper references and parameter explanations
- **Mathematical Integrity**: No simplified or "fake" implementations; the original mathematics is preserved exactly
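
Making a full run reproducible is standard PyTorch practice rather than a pyg-hyper-nn API. A minimal sketch:

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Seed every RNG that affects training (plain PyTorch, no library API)."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(42)
# Optional: force deterministic kernels; note that some CUDA scatter
# operations do not provide deterministic implementations.
# torch.use_deterministic_algorithms(True)
```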

### ⚡ **Production-Quality**
- **Modern Python**: Python 3.12+, type hints, dataclasses
- **Code Quality**: Ruff linting + ty type checking (100% pass rate)
- **CI/CD Ready**: GitHub Actions workflows for testing and deployment
- **GPU Optimized**: CUDA 12.6 support with mixed precision training

## 📦 Installation

### Prerequisites

This package requires PyTorch Geometric to be installed. Install it first:

```bash
pip install torch torch-geometric
```

For GPU support with CUDA 12.6:
```bash
pip install torch --index-url https://download.pytorch.org/whl/cu126
pip install torch-geometric
```

### Using uv (Recommended)
```bash
# Clone the repository
git clone https://github.com/nishide-dev/pyg-hyper-nn.git
cd pyg-hyper-nn

# Install with all dependencies
uv sync

# Verify installation
uv run python -c "from pyg_hyper_nn.models import HGNN; print('✅ Installation successful!')"
```

### Using pip
```bash
# Install from source
git clone https://github.com/nishide-dev/pyg-hyper-nn.git
cd pyg-hyper-nn
pip install -e .

# Or install directly from GitHub (when published)
pip install git+https://github.com/nishide-dev/pyg-hyper-nn.git
```

### Requirements
- Python ≥ 3.12
- PyTorch ≥ 2.0
- PyTorch Geometric ≥ 2.4
- torch-scatter, torch-sparse (optional, used by some models)
- NumPy ≥ 1.24

## 🎯 Quick Start

### Basic Node Classification

```python
import torch
from pyg_hyper_nn.models import HGNN

# Create model
model = HGNN(
    in_channels=16,      # Input feature dimension
    hidden_channels=32,  # Hidden layer dimension
    out_channels=7,      # Number of classes
    num_layers=2,        # Number of convolution layers
    dropout=0.6,         # Dropout probability
)

# Prepare data
x = torch.randn(100, 16)  # Node features [num_nodes, in_channels]
hyperedge_index = torch.tensor([
    [0, 1, 2, 1, 2, 3],  # Node indices
    [0, 0, 0, 1, 1, 1],  # Hyperedge indices
])

# Forward pass
out = model(x, hyperedge_index)  # Output: [num_nodes, out_channels]
print(f"Output shape: {out.shape}")  # torch.Size([100, 7])
```
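
If your data is stored as a list of hyperedges (one node list per hyperedge), converting it to this COO format is straightforward. A small sketch (the helper name `to_hyperedge_index` is ours, not part of the library):

```python
import torch

def to_hyperedge_index(hyperedges: list[list[int]]) -> torch.Tensor:
    """Convert a list of hyperedges to COO format [2, num_incidences]."""
    node_idx = [v for edge in hyperedges for v in edge]
    edge_idx = [i for i, edge in enumerate(hyperedges) for _ in edge]
    return torch.tensor([node_idx, edge_idx])

# The same structure as the example above: hyperedges {0, 1, 2} and {1, 2, 3}
hyperedge_index = to_hyperedge_index([[0, 1, 2], [1, 2, 3]])
print(hyperedge_index)
# tensor([[0, 1, 2, 1, 2, 3],
#         [0, 0, 0, 1, 1, 1]])
```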

### Using Different Models

```python
from pyg_hyper_nn.models import HGNN, HyperGCN, UniGNN, MLP

# Reuses x and hyperedge_index from the basic example above

# HGNN - Classic hypergraph convolution (Feng et al., AAAI 2019)
hgnn = HGNN(16, 32, 7, num_layers=2)

# HyperGCN - Supremum-infimum projection (Yadati et al., NeurIPS 2019)
hypergcn = HyperGCN(
    in_channels=16,
    hidden_channels=32,
    out_channels=7,
    num_layers=2,
    fast=True,        # Precompute structure once
    mediators=True,   # Use two-star expansion with mediators
)

# UniGNN - Universal message passing (Huang & Yang, IJCAI 2021)
unignn = UniGNN(
    in_channels=16,
    hidden_channels=32,
    out_channels=7,
    num_layers=2,
    heads=4,  # Multi-head attention
    first_aggregate="mean",  # Vertex-to-hyperedge aggregation
)

# MLP - Baseline model (no graph structure)
mlp = MLP(16, 32, 7, num_layers=2, normalization="bn")

# All models share the same interface
for model in [hgnn, hypergcn, unignn, mlp]:
    out = model(x, hyperedge_index)  # MLP ignores hyperedge_index
    print(f"{model.__class__.__name__}: {out.shape}")
```

### Training Example

```python
import torch
import torch.nn.functional as F
from pyg_hyper_nn.models import HGNN

# Setup (synthetic data for illustration)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(100, 16, device=device)            # Node features
hyperedge_index = torch.tensor([
    [0, 1, 2, 1, 2, 3],                            # Node indices
    [0, 0, 0, 1, 1, 1],                            # Hyperedge indices
], device=device)
y = torch.randint(0, 7, (100,), device=device)     # Node labels
train_mask = torch.rand(100, device=device) < 0.8  # Random 80/20 split
test_mask = ~train_mask

model = HGNN(in_channels=16, hidden_channels=32, out_channels=7, num_layers=2).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

# Training loop
model.train()
for epoch in range(100):
    optimizer.zero_grad()

    # Forward pass
    out = model(x, hyperedge_index)
    loss = F.cross_entropy(out[train_mask], y[train_mask])

    # Backward pass
    loss.backward()
    optimizer.step()

    if epoch % 10 == 0:
        print(f"Epoch {epoch:3d} | Loss: {loss.item():.4f}")

# Evaluation
model.eval()
with torch.no_grad():
    pred = model(x, hyperedge_index).argmax(dim=1)
    acc = (pred[test_mask] == y[test_mask]).float().mean()
    print(f"Test Accuracy: {acc:.4f}")
```
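
The feature list above mentions mixed precision training; this is plain `torch.cuda.amp` usage rather than a pyg-hyper-nn feature. A hedged sketch of the training step, reusing `model`, `optimizer`, `x`, `y`, and the masks from the example above:

```python
# Mixed precision variant of the training step (requires a CUDA device)
scaler = torch.cuda.amp.GradScaler()

model.train()
for epoch in range(100):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        out = model(x, hyperedge_index)
        loss = F.cross_entropy(out[train_mask], y[train_mask])
    scaler.scale(loss).backward()  # Scale the loss to avoid fp16 underflow
    scaler.step(optimizer)         # Unscales gradients, then steps
    scaler.update()                # Adjusts the loss scale for the next step
```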

## 📚 Available Models

### ✅ Implemented Models (19/26)

All implementations are based on the official [DHG-Bench](https://github.com/Coco-Hut/DHG-Bench) implementations, faithfully preserving the mathematical operations and algorithmic logic from the original papers.

| Model | Paper | Venue | Year | Key Features |
|-------|-------|-------|------|--------------|
| **MLP** | - | Baseline | - | Standard multi-layer perceptron |
| **HGNN** | [Hypergraph neural networks](https://cdn.aaai.org/ojs/4235/4235-13-7289-1-10-20190705.pdf) | AAAI | 2019 | Symmetric degree normalization `D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2}` (see the sketch after the table) |
| **HCHA** | [Hypergraph convolution and hypergraph attention](https://www.sciencedirect.com/science/article/abs/pii/S0031320320304404) | PR | 2020 | Asymmetric normalization without D_v^{-1/2} |
| **HyperGCN** | [HyperGCN: A New Method of Training Graph Convolutional Networks on Hypergraphs](https://proceedings.neurips.cc/paper/2019/file/1efa39bcaec6f3900149160693694536-Paper.pdf) | NeurIPS | 2019 | Supremum-infimum projection with mediators |
| **HNHN** | [HNHN: Hypergraph Networks with Hyperedge Neurons](https://grlplus.github.io/papers/40.pdf) | ICML WS | 2020 | Two-stage message passing with learnable alpha/beta normalization |
| **UniGNN** | [UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks](https://www.ijcai.org/proceedings/2021/0353.pdf) | IJCAI | 2021 | Universal vertex-hyperedge message passing framework |
| **UniGCNII** | [UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks](https://www.ijcai.org/proceedings/2021/0353.pdf) | IJCAI | 2021 | GCNII-style initial residual + adaptive identity mapping |
| **AllSet** | [You are AllSet: A Multiset Function Framework for Hypergraph Neural Networks](https://openreview.net/forum?id=hpBTIv2uy_E) | ICLR | 2022 | Multiset functions with PMA attention mechanism |
| **HyperND** | [Nonlinear Feature Diffusion on Hypergraphs](https://proceedings.mlr.press/v162/prokopchik22a/prokopchik22a.pdf) | ICML | 2022 | Iterative p-norm diffusion with personalized PageRank restart |
| **LEGCN** | [Semi-supervised Hypergraph Node Classification on Hypergraph Line Expansion](https://arxiv.org/pdf/2005.04843) | CIKM | 2022 | Line expansion GCN for hypergraphs |
| **EquivSetGNN** | [Equivariant Hypergraph Neural Networks](https://arxiv.org/abs/2207.06680) | ECCV | 2022 | Equivariant set operations with deep residuals |
| **ED-HNN** | [Equivariant Hypergraph Diffusion Neural Operators](https://openreview.net/forum?id=RiTjKoscnNd) | ICLR | 2023 | Equivariant hypergraph diffusion (implemented as **EDGNN**) |
| **PhenomNN** | [From Hypergraph Energy Functions to Hypergraph Neural Networks](https://proceedings.mlr.press/v202/wang23d/wang23d.pdf) | ICML | 2023 | Multi-scale phenomenological modeling with two normalization schemes |
| **SheafHyperGNN** | [Sheaf Hypergraph Networks](https://proceedings.neurips.cc/paper_files/paper/2023/file/27f243af2887d7f248f518d9b967a882-Paper-Conference.pdf) | NeurIPS | 2023 | Sheaf-theoretic learning with diagonal restriction maps and CP decomposition |
| **HJRL** | [Hypergraph Joint Representation Learning for Hypervertices and Hyperedges via Cross Expansion](https://openreview.net/forum?id=fxLaL5s6UH) | AAAI | 2024 | Joint node-edge representation learning with 4 propagation paths |
| **HyperGT** | [Hypergraph Transformer for Semi-Supervised Classification](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10446248) | ICASSP | 2024 | Kernelized attention with O(N) complexity using random Fourier features |
| **TFHNN** | [Training-Free Message Passing for Learning on Hypergraphs](https://openreview.net/pdf?id=4AuyYxt7A2) | ICLR | 2025 | Training-free tensor factorization with personalized PageRank |
| **CEGCN** | Based on clique expansion | - | - | Clique expansion + standard GCN layers |
| **CEGAT** | Based on clique expansion | - | - | Clique expansion + GAT layers with multi-head attention |

**Note**: PhenomNNS (Simplified PhenomNN) and PlainUnigencoder are additional utility models for faster computation and group pooling operations.
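
As a concrete reading of HGNN's normalization in the table above, here is a dense-matrix sketch of one propagation step, `X' = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X`. The library itself uses sparse message passing, so this is illustrative only:

```python
import torch

# Incidence matrix H: 4 nodes x 2 hyperedges ({0, 1, 2} and {1, 2, 3})
H = torch.tensor([[1.0, 0.0],
                  [1.0, 1.0],
                  [1.0, 1.0],
                  [0.0, 1.0]])
W = torch.eye(2)                                   # Hyperedge weights
X = torch.randn(4, 8)                              # Node features

Dv_inv_sqrt = torch.diag(H.sum(dim=1).pow(-0.5))   # D_v^{-1/2}
De_inv = torch.diag(H.sum(dim=0).pow(-1.0))        # D_e^{-1}

# One propagation step (before the learnable linear transform)
X_out = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt @ X
print(X_out.shape)  # torch.Size([4, 8])
```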

### 🚧 Coming Soon (7 models remaining)

The remaining models from DHG-Bench present significant implementation challenges due to custom data structures and non-standard interfaces; the table below highlights three of them:

| Model | Paper | Venue | Year | Implementation Status |
|-------|-------|-------|------|----------------------|
| **T-HyperGNNs** | [T-HyperGNNs: Hypergraph Neural Networks via Tensor Representations](https://ieeexplore.ieee.org/document/10462516) | TNNLS | 2024 | Requires custom tensor aggregation (TMPHN-style) |
| **DPHGNN** | [DPHGNN: A Dual Perspective Hypergraph Neural Networks](https://arxiv.org/pdf/2405.16616) | KDD | 2024 | Requires TAA (Topology-Aware Attention) and multiple graph expansions |
| **EHNN** | [Equivariant hypergraph neural networks](https://arxiv.org/pdf/2208.10428) | ECCV | 2022 | Requires hypernetwork for dynamic weight generation |

**Why these are difficult**: The remaining models depend on special data structures (`neig_dict`, `ehnn_cache`, multiple adjacency matrices) that cannot be easily expressed through PyG's standard `hyperedge_index` interface. See [CLAUDE.md](CLAUDE.md) for a detailed technical analysis.

**Utility function available**: `build_neighbor_dict()` has been implemented as a foundation for future TMPHN implementation (6 tests passing).
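
For orientation, the kind of neighbor dictionary such models need can be derived from `hyperedge_index` roughly as follows. This is a hypothetical sketch of the data structure; `build_neighbor_dict()`'s actual signature and output may differ:

```python
import torch

def neighbor_dict_sketch(hyperedge_index: torch.Tensor) -> dict[int, set[int]]:
    """Map each node to the set of nodes sharing at least one hyperedge.

    Hypothetical illustration only; the library's build_neighbor_dict()
    may differ in signature and output format.
    """
    node_idx, edge_idx = hyperedge_index.tolist()
    members: dict[int, list[int]] = {}
    for v, e in zip(node_idx, edge_idx):
        members.setdefault(e, []).append(v)  # Collect each hyperedge's nodes

    neighbors: dict[int, set[int]] = {}
    for nodes in members.values():
        for v in nodes:
            neighbors.setdefault(v, set()).update(u for u in nodes if u != v)
    return neighbors

hyperedge_index = torch.tensor([[0, 1, 2, 1, 2, 3],
                                [0, 0, 0, 1, 1, 1]])
print(neighbor_dict_sketch(hyperedge_index))
# {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
```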

See [docs/models.md](docs/models.md) for detailed model descriptions and API documentation.

## 🏗️ Project Structure

```
pyg-hyper-nn/
├── src/pyg_hyper_nn/
│   ├── __init__.py              # Package entry point
│   ├── py.typed                 # Type information marker
│   ├── layers/                  # Reusable layer implementations
│   │   ├── __init__.py
│   │   ├── conv.py             # HypergraphConv, AllSetConv, etc.
│   │   ├── attention.py        # PMA, Multi-head attention
│   │   ├── pooling.py          # Hypergraph pooling layers
│   │   ├── mlp.py              # MLP blocks
│   │   └── utils.py            # Initialization utilities
│   └── models/                  # Complete model implementations
│       ├── __init__.py
│       ├── mlp.py              # MLP baseline ✅
│       ├── hgnn.py             # HGNN, HCHA ✅
│       ├── unignn.py           # UniGNN ✅
│       ├── hypergcn.py         # HyperGCN ✅
│       ├── allset.py           # AllSet ✅
│       └── ...                 # Additional models
├── tests/
│   ├── test_layers/            # Layer unit tests
│   │   ├── test_conv.py       # HypergraphConv tests (15 tests) ✅
│   │   └── test_attention.py  # Attention tests
│   └── test_models/            # Model integration tests
│       └── test_basic.py      # MLP, HGNN, UniGNN tests (10 tests) ✅
├── docs/
│   ├── models.md               # Model documentation
│   ├── layers.md               # Layer API reference
│   └── examples/               # Usage examples
├── .github/
│   └── workflows/
│       ├── test.yml           # CI testing
│       └── publish.yml        # PyPI publishing
├── pyproject.toml             # Project configuration
├── ruff.toml                  # Ruff linting config
└── README.md                  # This file
```

## 🧪 Development

### Running Tests

```bash
# Run all tests (373 tests passing!)
uv run pytest tests/ -v

# Run specific test suite
uv run pytest tests/test_models/test_basic.py -v
uv run pytest tests/test_layers/test_conv.py -v

# Run with coverage
uv run pytest tests/ --cov=src --cov-report=html

# View coverage report
open htmlcov/index.html
```

### Code Quality Checks

```bash
# Run all quality checks
uv run ruff check src/ tests/        # Linting
uv run ruff format src/ tests/       # Formatting
uv run ty check src/                 # Type checking

# Auto-fix issues
uv run ruff check --fix src/ tests/

# Current status: ✅ All checks passing!
```

### Pre-commit Hooks

Set up automatic code quality checks before commits:

```bash
# Install pre-commit
uv add --dev pre-commit

# Install the git hooks
uv run pre-commit install

# Run hooks manually on all files
uv run pre-commit run --all-files

# Now hooks run automatically on every commit!
# - ruff lint --fix: Auto-fix linting issues
# - ruff format: Auto-format code
# - ty check: Type checking
```

### Adding Dependencies

```bash
# Add runtime dependency
uv add <package-name>

# Add development dependency
uv add --dev <package-name>

# Update all dependencies
uv lock --upgrade
```

## 📊 Model Interface Standard

All models in PyG-Hyper-NN follow a consistent interface:

```python
class ModelName(nn.Module):
    """Model description with paper reference.

    Args:
        in_channels: Size of input node features.
        hidden_channels: Size of hidden layer features.
        out_channels: Number of output classes.
        num_layers: Number of convolution layers.
        dropout: Dropout probability. Default: 0.5.
        **kwargs: Model-specific parameters.
    """

    def __init__(
        self,
        in_channels: int,
        hidden_channels: int,
        out_channels: int,
        num_layers: int,
        dropout: float = 0.5,
        **kwargs,
    ):
        ...

    def reset_parameters(self) -> None:
        """Reset all learnable parameters."""
        ...

    def forward(
        self,
        x: Tensor,
        hyperedge_index: Tensor,
        hyperedge_weight: Optional[Tensor] = None,
    ) -> Tensor:
        """Forward pass.

        Args:
            x: Node feature matrix of shape (num_nodes, in_channels).
            hyperedge_index: Hyperedge indices in COO format of shape (2, num_edges).
            hyperedge_weight: Optional hyperedge weights of shape (num_hyperedges,).

        Returns:
            Output predictions of shape (num_nodes, out_channels).
        """
        ...
```
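
Because the signature is uniform, model-agnostic code can pass the optional arguments the same way everywhere. A short usage sketch with per-hyperedge weights (illustrative values; how much a given model uses `hyperedge_weight` varies by model):

```python
import torch
from pyg_hyper_nn.models import HGNN

model = HGNN(in_channels=16, hidden_channels=32, out_channels=7, num_layers=2)

x = torch.randn(100, 16)
hyperedge_index = torch.tensor([[0, 1, 2, 1, 2, 3],
                                [0, 0, 0, 1, 1, 1]])
hyperedge_weight = torch.tensor([1.0, 0.5])  # One weight per hyperedge

model.reset_parameters()  # Reinitialize all learnable parameters
out = model(x, hyperedge_index, hyperedge_weight=hyperedge_weight)
print(out.shape)  # torch.Size([100, 7])
```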

## 📖 Documentation

- **[Model Documentation](docs/models.md)** - Detailed model descriptions and API
- **[Layer API](docs/layers.md)** - Layer-level documentation
- **[Examples](docs/examples/)** - Usage examples and tutorials
- **[Contributing Guide](CONTRIBUTING.md)** - Development guidelines
- **[CUDA Setup](docs/CUDA_SETUP.md)** - GPU environment setup

## 🤝 Contributing

Contributions are welcome! We're actively implementing the remaining 7 models.

### Priority Tasks
1. **Remaining Models**: T-HyperGNNs, DPHGNN, EHNN, and the other remaining DHG-Bench models
2. **Preprocessing Infrastructure**: Support for line expansion, Laplacian computation
3. **Documentation**: Model docs, usage examples
4. **Testing**: Additional edge cases, performance benchmarks

See [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines.

## 📖 Citation

This library is based on implementations from [DHG-Bench](https://github.com/Coco-Hut/DHG-Bench), a comprehensive benchmark for deep hypergraph learning. We faithfully preserve the mathematical operations and algorithmic logic from the original papers.

If you use **pyg-hyper-nn** or any of the implemented models in your research, please consider citing the DHG-Bench paper:

```bibtex
@article{li2025dhg,
  title={DHG-Bench: A Comprehensive Benchmark for Deep Hypergraph Learning},
  author={Li, Fan and Wang, Xiaoyang and Zhang, Wenjie and Zhang, Ying and Lin, Xuemin},
  journal={arXiv preprint arXiv:2508.12244},
  year={2025}
}
```

**DHG-Bench Paper**: https://openreview.net/forum?id=lhsb1ChUDF

### Individual Model Citations

When using specific models, please also cite the original papers:

<details>
<summary>Click to expand model citations</summary>

**HGNN** (AAAI 2019):
```bibtex
@inproceedings{feng2019hypergraph,
  title={Hypergraph neural networks},
  author={Feng, Yifan and You, Haoxuan and Zhang, Zizhao and Ji, Rongrong and Gao, Yue},
  booktitle={Proceedings of the AAAI conference on artificial intelligence},
  volume={33},
  pages={3558--3565},
  year={2019}
}
```

**HyperGCN** (NeurIPS 2019):
```bibtex
@inproceedings{yadati2019hypergcn,
  title={Hypergcn: A new method for training graph convolutional networks on hypergraphs},
  author={Yadati, Naganand and Nimishakavi, Madhav and Yadav, Prateek and Nitin, Vikram and Louis, Anand and Talukdar, Partha},
  booktitle={Advances in neural information processing systems},
  volume={32},
  year={2019}
}
```

**UniGNN** (IJCAI 2021):
```bibtex
@inproceedings{huang2021unignn,
  title={UniGNN: a unified framework for graph and hypergraph neural networks},
  author={Huang, Jing and Yang, Jie},
  booktitle={Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence},
  pages={2563--2569},
  year={2021}
}
```

**AllSet** (ICLR 2022):
```bibtex
@inproceedings{chien2022you,
  title={You are allset: A multiset function framework for hypergraph neural networks},
  author={Chien, Eli and Pan, Chao and Peng, Jianhao and Milenkovic, Olgica},
  booktitle={International Conference on Learning Representations},
  year={2022}
}
```

**HyperND** (ICML 2022):
```bibtex
@inproceedings{prokopchik2022nonlinear,
  title={Nonlinear feature diffusion on hypergraphs},
  author={Prokopchik, Konstantin and Benson, Austin R and Tudisco, Francesco},
  booktitle={International Conference on Machine Learning},
  pages={17932--17951},
  year={2022}
}
```

**PhenomNN** (ICML 2023):
```bibtex
@inproceedings{wang2023hypergraph,
  title={From hypergraph energy functions to hypergraph neural networks},
  author={Wang, Yuxin and Gan, Quan and Qiu, Xipeng and Huang, Xuanjing and Wipf, David},
  booktitle={International Conference on Machine Learning},
  pages={36433--36448},
  year={2023}
}
```

**SheafHyperGNN** (NeurIPS 2023):
```bibtex
@inproceedings{duta2023sheaf,
  title={Sheaf hypergraph networks},
  author={Duta, Iulia and Cassar{\`a}, Giulia and Silvestri, Fabrizio and Li{\`o}, Pietro},
  booktitle={Advances in Neural Information Processing Systems},
  volume={36},
  pages={76714--76733},
  year={2023}
}
```

**HyperGT** (ICASSP 2024):
```bibtex
@inproceedings{liu2024hypergraph,
  title={Hypergraph Transformer for Semi-Supervised Classification},
  author={Liu, Zexi and Tang, Bohan and Ye, Ziyuan and Dong, Xiaowen and Chen, Siheng and Wang, Yanfeng},
  booktitle={ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing},
  pages={5690--5694},
  year={2024}
}
```

**HJRL** (AAAI 2024):
```bibtex
@inproceedings{ju2024hypergraph,
  title={Hypergraph Joint Representation Learning for Hypervertices and Hyperedges via Cross Expansion},
  author={Ju, Wei and Luo, Yi and Fang, Yifan and Zhang, Zhiping and Zhang, Ming},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  pages={8633--8641},
  year={2024}
}
```

**TFHNN** (ICLR 2025):
```bibtex
@inproceedings{luo2025training,
  title={Training-Free Message Passing for Learning on Hypergraphs},
  author={Luo, Bohan and Lin, Zhezheng and Feng, Yilong and Wu, Zheng-Jun and Wang, Stan Z},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}
```

For other models (HNHN, LEGCN, EDGNN, HCHA, etc.), please refer to the DHG-Bench paper and the original papers listed in the model table above.

</details>

## 📄 License

MIT License - see [LICENSE](LICENSE) file for details.

---

Built with ❤️ for hypergraph learning research | Based on [DHG-Bench](https://github.com/Coco-Hut/DHG-Bench)
