Metadata-Version: 2.1
Name: shy_sh
Version: 1.1.9
Summary: Shell copilot - sh shell AI copilot
Home-page: https://github.com/mceck/shy-sh
License: MIT
Author: Mattia Cecchini
Author-email: matcecco@gmail.com
Requires-Python: >=3.10,<3.14
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Terminals
Classifier: Topic :: Utilities
Provides-Extra: audio
Provides-Extra: aws
Requires-Dist: langchain (>=0.3.20,<0.4.0)
Requires-Dist: langchain-anthropic (>=0.3.9,<0.4.0)
Requires-Dist: langchain-aws (>=0.2.15,<0.3.0) ; extra == "aws"
Requires-Dist: langchain-google-genai (>=2.0.11,<3.0.0)
Requires-Dist: langchain-groq (>=0.2.5,<0.3.0)
Requires-Dist: langchain-ollama (>=0.2.3,<0.3.0)
Requires-Dist: langchain-openai (>=0.3.8,<0.4.0)
Requires-Dist: langgraph (>=0.2.76,<0.3.0)
Requires-Dist: pydantic-settings (>=2.8.1,<3.0.0)
Requires-Dist: pyperclip (>=1.9.0,<2.0.0)
Requires-Dist: pyreadline3 (>=3.5.4,<4.0.0) ; sys_platform == "win32"
Requires-Dist: pyyaml (>=6.0.2,<7.0.0)
Requires-Dist: questionary (>=2.1.0,<3.0.0)
Requires-Dist: speechrecognition (>=3.14.1,<4.0.0) ; extra == "audio"
Requires-Dist: tiktoken (>=0.9.0,<0.10.0)
Requires-Dist: typer (>=0.15.2,<0.16.0)
Requires-Dist: tzlocal (>=5.3.1,<6.0.0)
Project-URL: Repository, https://github.com/mceck/shy-sh
Description-Content-Type: text/markdown

# Shy.sh

An AI copilot for your sh shell

![image_cover](./docs/images/sh.gif)

## Install

```sh
pip install shy-sh
```
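
To enable AWS Bedrock support or voice input, install the corresponding optional extras (`aws` and `audio`, as declared in the package metadata):

```sh
pip install "shy-sh[aws,audio]"
```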

Configure your LLM

```sh
shy --configure
```

Supported providers: openai, anthropic, google, groq, aws, ollama

## Help

Usage: `shy [OPTIONS] [PROMPT]...`

Arguments:

- `prompt [PROMPT]...` The prompt for the copilot (optional; omit it to start an interactive chat)

Options:

- `-x` Do not ask for confirmation before executing scripts
- `-e` Explain the given shell command
- `--configure` Configure the LLM
- `--help` Show this message and exit

## Settings

```sh
shy --configure
 Provider: ollama
 Model: llama3.2
 Agent Pattern: react
 Temperature: 0.0
 Language: klingon
 Sandbox Mode: Yes
```

#### Configurable settings

- Provider: The LLM provider to use [OpenAI, Anthropic, Google, Groq, AWS Bedrock, Ollama(local)].
- API Key: The API key for the LLM provider. (Format for AWS Bedrock: `region_name access_key secret_key`)
- Model: The LLM model to use.
- Agent Pattern: react or function_call. (If you are not using OpenAI, Anthropic, or Google, react is recommended)
- Temperature: The LLM model's temperature setting.
- Language: The language for the LLM's final answers.
- Sandbox Mode: When enabled, no commands or scripts will be executed on your system; you will only receive suggestions. This feature is recommended for beginners.

All the settings are saved in `~/.config/shy/config.yml`
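
The file is plain YAML. Based on the configuration prompts above, it may look roughly like this (the exact key names are an assumption, not taken from the source):

```yaml
# ~/.config/shy/config.yml — key names are illustrative
provider: ollama
model: llama3.2
agent_pattern: react
temperature: 0.0
language: klingon
sandbox_mode: true
```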

## Examples

```sh
> shy find all python files in this folder

🛠️ find . -type f -name '*.py'

Do you want to execute this command? [Yes/no/copy/explain/alternatives]:

./src/chat_models.py
./src/agent/tools.py
./src/agent/__init__.py
./src/agent/agent.py
./src/settings.py
./src/main.py

🤖: Here are all the Python files found in the current folder and its subfolders.
```

```sh
> shy -x convert aaa.png to jpeg and resize to 200x200

🛠️ convert aaa.png -resize 200x200 aaa.jpg

🤖: I converted the file aaa.png to JPEG format and resized it to 200x200 pixels.
```

```sh
> shy resize movie.avi to 1024x768 and save it in mp4

🛠️ ffmpeg -i movie.avi -vf scale=1024:768 -c:v libx264 output.mp4

Do you want to execute this command? [Yes/no/copy/explain/alternatives]: c

🤖: Command copied to the clipboard!
```

```sh
> shy

✨: Hello, how are you?

🤖: Hello! I'm fine thanks

✨: how many files in this folder

🛠️ ls | wc -l

Do you want to execute this command? [Yes/no/copy/explain/alternatives]:

5

✨: exit

🤖: 👋 Bye!
```

```sh
> shy -e "find . -type f -name '*.py' | wc -l"

🤖: This shell command uses `find` to search for files (`-type f`) with the extension `.py` (`-name '*.py'`) in the current directory (`.`) and its subdirectories.
The results are then piped to `wc -l`, which counts the number of lines.
In conclusion, the command presents the total count of Python files (*.py) located within the current directory and its subdirectories.
```

![image_python](./docs/images/python.gif)

![image_ascii](./docs/images/ascii.gif)

## Chat commands

You can use these commands during the chat:

- `/chats` to list all the chats
- `/clear` to clear the current chat
- `/history` to list the recently executed commands/scripts
- `/load [CHAT_ID]` to continue a previous chat

## Privacy

If you are not using Ollama as your provider, please note that information such as the current path, your operating system name, and the last commands executed in the shell may be sent to the LLM provider as part of the context.

