Metadata-Version: 2.4
Name: pleasant-database
Version: 1.4.2
Summary: package for handling local database files and data
Author-email: Elias Rodkey <elias.rodkey@gmail.com>
License: MIT
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: requests
Requires-Dist: numpy
Requires-Dist: sqlalchemy
Requires-Dist: pleasant-loggers
Requires-Dist: pandas
Provides-Extra: dev
Requires-Dist: pytest; extra == "dev"
Dynamic: license-file

# pleasant_database

This modular Python package allows for easy creation, reading, editing, and deletion of local database files.

## License

[MIT](https://choosealicense.com/licenses/mit/)

## Usage/Examples

**Install Package**

Ensure you have access to the GitHub repository, then run:

```shell
pip install pleasant_database
```

GitHub repository: <https://github.com/EliasRodkey/python-toolkit>

### Import Package

```python
from pleasant_database import DatabaseFile, DatabaseManager, BaseTable
```

### Set Up Table Object

Use the BaseTable class to create a table object that the DatabaseManager will connect to.

```python
from sqlalchemy import Column, Integer, String

class TableName(BaseTable):
    """A table object"""
    __tablename__ = "table_name"
    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String(50), nullable=False)
    age = Column(Integer)
    email = Column(String(100), unique=True)
```

The attributes here represent column names in the table. Additional functionality can be added to avoid duplicate entries, for example:

```python
email = Column(String(100), unique=True)
```

This forces all emails in the table to be unique.
Another option is to add a special hash column whose value is unique to each entry:

```python
import hashlib

def generate_hash(row):
    # Concatenate the values of all relevant columns
    data = f"{row['col1']}_{row['col2']}_{row['col3']}"
    return hashlib.sha256(data.encode()).hexdigest()

df["unique_id"] = df.apply(generate_hash, axis=1)
```
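As a sanity check, the helper above works on any mapping, not only DataFrame rows. Here is a self-contained sketch (repeating the helper so it runs standalone; the column names and values are placeholders):

```python
import hashlib

def generate_hash(row):
    # Concatenate the values of all relevant columns
    data = f"{row['col1']}_{row['col2']}_{row['col3']}"
    return hashlib.sha256(data.encode()).hexdigest()

row_a = {"col1": "alice", "col2": 30, "col3": "a@example.com"}
row_b = {"col1": "bob", "col2": 25, "col3": "b@example.com"}

uid_a = generate_hash(row_a)
uid_b = generate_hash(row_b)

assert uid_a == generate_hash(row_a)  # deterministic: same row, same digest
assert uid_a != uid_b                 # different rows get different digests
print(len(uid_a))  # 64 (a sha256 hexdigest is always 64 hex characters)
```

Note that any change to a relevant column changes the hash, so this scheme only detects exact duplicates of the concatenated fields.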

Additionally, `__table_args__` can be set on the class to enforce a custom unique constraint across a combination of columns:

```python
from sqlalchemy import UniqueConstraint

__table_args__ = (
    UniqueConstraint("column1", "column2", name="unique_combo"),  # Combination must be unique
)
```
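For illustration, here is a self-contained SQLAlchemy sketch of a multi-column unique constraint in action. It uses a plain declarative base and an in-memory SQLite database rather than this package's BaseTable and DatabaseFile, and the Person model and its columns are made up for the example:

```python
from sqlalchemy import Column, Integer, String, UniqueConstraint, create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Person(Base):
    __tablename__ = "people"
    __table_args__ = (
        UniqueConstraint("name", "email", name="unique_combo"),  # combination must be unique
    )
    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String(50), nullable=False)
    email = Column(String(100))

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Person(name="Ada", email="ada@example.com"))
    session.commit()

    # A second row with the same (name, email) pair violates the constraint
    session.add(Person(name="Ada", email="ada@example.com"))
    try:
        session.commit()
        duplicate_rejected = False
    except IntegrityError:
        session.rollback()
        duplicate_rejected = True

print(duplicate_rejected)  # True
```

Either row could still repeat its name or email individually; only the combination is constrained.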

### Creating the DatabaseFile

The DatabaseFile object represents the actual file of the database and is required to initialize a DatabaseManager object.

```python
file = DatabaseFile(db_name, directory="data/database")

# db_name:   valid .db filename
# directory: relative path to the database directory (default "data/database")
```

DatabaseFile functions:

```python
file.create() # Creates a db file in the directory location
file.move(target_directory) # Moves the db file to a new directory
file.exists() # Returns boolean, if the file exists
file.delete() # Deletes the actual db file
```

### Create DatabaseManager

The DatabaseManager takes a table class and a DatabaseFile as arguments and allows for common database operations on that table.

```python
manager = DatabaseManager(table_class: BaseTable, databasefile: DatabaseFile)
```

The DatabaseFile functions can be accessed through the DatabaseManager.file attribute.
Basic database functions:

```python
manager.add_item(entry) # dict with keys matching the db table class columns
manager.add_multiple_items(entries) # list of entry dicts
manager.append_dataframe(df) # pandas DataFrame with columns that match the db table class

manager.fetch_all_items() # returns all table class instances in the table
manager.fetch_item_by_id(id: int) # returns an individual table class instance with data
manager.fetch_items_by_attribute(**kwargs) # allows filtering of table by kwargs
manager.filter_items(filters: dict, use_or=False) # allows filtering of database table, returns ORM objects
manager.to_dataframe() # returns the entire database as a pandas DataFrame

# Flexible query returning a DataFrame — supports column projection, filtering, sorting, pagination, and search
manager.query(
    columns=["name", "age"],          # optional: subset of columns to return (None = all)
    filters={"age": (">", 18)},       # optional: same filter dict format as filter_items()
    order_by=[("age", "desc"), ("name", "asc")],  # optional: str, (col, dir) tuple, or list of tuples
    ascending=True,                   # only used when order_by is a bare string
    limit=10,                         # optional: max rows to return
    offset=0,                         # optional: rows to skip before returning results
    search="john",                    # optional: case-insensitive substring match across string columns
    search_columns=["name", "email"], # optional: restrict search to these columns (default: all str columns)
)

manager.convert_orm_list_to_dataframe(orm_list) # converts a list of ORM objects to a pandas DataFrame

manager.update_item(item_id: int, **kwargs) # updates values kwargs of an item with a given ID
manager.delete_item(item_id: int) # Deletes an item with the given item id
manager.delete_items_by_attribute(**kwargs) # Deletes items in a database where column values match kwargs.
manager.delete_items_by_filter(filters: dict, use_or=False) # Deletes items in a database where filters apply.
manager.clear_table() # Deletes all items in the database table.

manager.start_session() # called automatically when the instance is initialized
manager.end_session() # should be called before exiting program
```
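The `filters` dict used by `filter_items()`, `delete_items_by_filter()`, and `query()` maps column names to `(operator, value)` tuples, e.g. `{"age": (">", 18)}`. As a rough standalone illustration of how such a dict could translate into SQLAlchemy expressions (this is a sketch, not this package's actual implementation; the Person model and the OPS lookup table are made up for the example):

```python
import operator
from sqlalchemy import Column, Integer, String, and_, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Person(Base):
    __tablename__ = "people"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    age = Column(Integer)

# Map filter-dict operator strings to Python comparison functions;
# applied to ORM columns they produce SQL expressions (e.g. Person.age > 18)
OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt,
       "<=": operator.le, "==": operator.eq, "!=": operator.ne}

def build_conditions(model, filters):
    # {"age": (">", 18)} -> [Person.age > 18]
    return [OPS[op](getattr(model, col), value)
            for col, (op, value) in filters.items()]

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Person(name="John", age=17), Person(name="Jane", age=21)])
    session.commit()

    conditions = build_conditions(Person, {"age": (">", 18)})
    names = [p.name for p in session.query(Person).filter(and_(*conditions)).all()]

print(names)  # ['Jane']
```

Combining the conditions with `and_` mirrors the default `use_or=False` behavior; an OR variant would join them with `or_` instead.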

## Version History

### 1.4.1
- Added `search` and `search_columns` parameters to `query()` for case-insensitive substring matching across string columns. Matches any column by default (OR logic), composable with `filters` (AND), auto-wraps term as `%term%`.

### 1.4.0
- `delete_item()` and `update_item()` now raise `ItemNotFoundError` when the target ID does not exist.

### 1.3.x
- `query()` enhancements: added `limit`/`offset` pagination, multi-column sorting, filter support, and column projection.

### 1.2.x
- Added `query()` method returning results as a pandas DataFrame.
