Merge branch 'main' of github.com:HASHIRU-AI/HASHIRU
README.md
CHANGED

This project provides a framework for creating and managing AI agents and tools. It includes features for managing resource and expense budgets, loading tools and agents, and interacting with various language models.

The architecture is a hierarchical system in which a CEO agent manages a set of employee agents and tools. The CEO can autonomously create, invoke, and delete both agents and tools. Each agent carries a reclaimable cost (if it runs locally) or a non-reclaimable cost (if it is cloud-based).
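
As a rough illustration of this cost model, consider the sketch below. It is a minimal, hypothetical example (none of these class or method names come from the HASHIRU codebase): the point is only that deleting a local agent returns its cost to the budget, while cloud usage is spent for good.

```python
from dataclasses import dataclass, field


@dataclass
class BudgetSketch:
    """Hypothetical illustration of reclaimable vs. non-reclaimable costs."""

    resource_budget: float   # budget for local (reclaimable) agents
    expense_budget: float    # budget for cloud (non-reclaimable) usage
    _local_costs: dict = field(default_factory=dict)

    def hire_local_agent(self, name: str, cost: float) -> None:
        # A local agent reserves part of the resource budget...
        self.resource_budget -= cost
        self._local_costs[name] = cost

    def fire_local_agent(self, name: str) -> None:
        # ...and deleting it reclaims that reservation.
        self.resource_budget += self._local_costs.pop(name)

    def invoke_cloud_agent(self, cost: float) -> None:
        # Cloud calls draw down the expense budget permanently.
        self.expense_budget -= cost
```

In other words, hiring and firing local agents only moves budget around, whereas cloud invocations consume it permanently.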

The project is designed to be modular and extensible, allowing users to integrate their own tools and agents. It supports multiple language model integrations, including Ollama, Gemini, Groq, and Lambda Labs.

NOTE: Benchmarking efforts for the HASHIRU architecture can be found in [HASHIRUBench](https://github.com/HASHIRU-AI/HASHIRUBench).

## Directory Structure

* **src/**: Contains the source code for the project.

## Usage

To use the project, follow these steps:

1. Install the required dependencies by running `pip install -r requirements.txt`.
2. Start the application by running `python app.py`. This launches a web interface where you can interact with the agents and tools.

By default, running `python app.py` requires you to authenticate with Auth0; this can be overridden with the CLI argument `--no-auth` to skip authentication.

To use the project with additional tools and agents, you need to:

1. Configure the budget in `src/tools/default_tools/agent_cost_manager.py`.
2. Create tools and place them in the `src/tools/default_tools` or `src/tools/user_tools` directories (a rough sketch of a tool module follows the note below).

Please note that by default we provide many pre-defined tools and agents, so you may not need to create your own unless you have specific requirements.
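
If you do create your own tool, it is typically a small Python module dropped into one of the tool directories. The sketch below is only an illustration: the class name, metadata attributes, and `run` method are hypothetical assumptions, so check the existing modules in `src/tools/default_tools` for the interface the loader actually expects.

```python
# src/tools/user_tools/weather_tool.py -- hypothetical example, not a real project file

class WeatherTool:
    """Illustrative user tool; the attribute names here are assumptions."""

    # Metadata a tool loader could use to describe the tool to agents.
    name = "weather_tool"
    description = "Returns a canned weather report for a given city."
    inputs = {"city": "Name of the city to report on."}

    def run(self, city: str) -> dict:
        # A real tool would call an external API here; this sketch just echoes its input.
        return {"city": city, "forecast": "sunny", "note": "placeholder data"}
```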

## Model Support

The project supports the following language model integrations:

- **Ollama**: Local model management and invocation.
- **Gemini**: Cloud-based model management and invocation from Google.
- **Groq**: Cloud-based model management and invocation from Groq.
- **Lambda**: Cloud-based model management and invocation from Lambda Labs.
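
Conceptually, these integrations sit behind a common chat-style interface, with the local/cloud distinction mapping onto the reclaimable/non-reclaimable cost split described above. The sketch below is not HASHIRU's actual abstraction; it is a hedged illustration of how interchangeable backends could look, using placeholder responses instead of real API calls.

```python
from abc import ABC, abstractmethod


class ChatBackend(ABC):
    """Hypothetical common interface over local and cloud model providers."""

    @abstractmethod
    def chat(self, prompt: str) -> str:
        ...


class OllamaBackend(ChatBackend):
    # Local models: reclaimable, since the model can be unloaded to free resources.
    def chat(self, prompt: str) -> str:
        return f"[local Ollama reply to: {prompt}]"  # placeholder, no real call


class CloudBackend(ChatBackend):
    # Cloud models (Gemini, Groq, Lambda): each call spends non-reclaimable budget.
    def __init__(self, provider: str) -> None:
        self.provider = provider

    def chat(self, prompt: str) -> str:
        return f"[{self.provider} reply to: {prompt}]"  # placeholder, no real call
```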

## Acknowledgements

We would like to thank Hugging Face, Groq and Lambda Labs for sponsoring this project and providing the necessary resources for development.

## Contributing