koffiwind committed · Commit 2210481 · 1 Parent(s): 2580556
.gitignore ADDED
@@ -0,0 +1,4 @@
+ __pycache__/
+ .chainlit/
+ .venv/
+ .env
BuildingAChainlitApp.md ADDED
@@ -0,0 +1,224 @@
+ # Building a Chainlit App
+
+ What if we want to take our Week 1 Day 2 assignment - [Pythonic RAG](https://github.com/AI-Maker-Space/AIE4/tree/main/Week%201/Day%202) - and bring it out of the notebook?
+
+ Well - we'll cover exactly that here!
+
+ ## Anatomy of a Chainlit Application
+
+ [Chainlit](https://docs.chainlit.io/get-started/overview) is a Python package, similar to Streamlit, that lets users write the backend and the front end in a single (or multiple) Python file(s). It is mainly used for prototyping LLM-based chat-style applications - though it is used in production in some settings with millions of MAUs (Monthly Active Users).
+
+ The primary method of customizing and interacting with the Chainlit UI is through a few critical [decorators](https://blog.hubspot.com/website/decorators-in-python).
+
+ > NOTE: Simply put, the decorators (in Chainlit) are just ways we can "plug in" to Chainlit's functionality. (See the sketch after the list below.)
+
+ We'll be concerning ourselves with three main scopes:
+
+ 1. On application start - when we start the Chainlit application with a command like `chainlit run app.py`
+ 2. On chat start - when a chat session starts (a user opens the web browser to the address hosting the application)
+ 3. On message - when the user sends a message through the input text box in the Chainlit UI
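+
+ To make the "plug in" idea concrete, here is a minimal sketch (a toy echo app, not our final code) showing how the chat-start and message scopes map onto Chainlit decorators:
+
+ ```python
+ import chainlit as cl
+
+ @cl.on_chat_start  # runs once per new chat session
+ async def start():
+     await cl.Message(content="Session started - send me anything!").send()
+
+ @cl.on_message  # runs every time the user sends a message
+ async def echo(message: cl.Message):
+     await cl.Message(content=f"You said: {message.content}").send()
+ ```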
+
+ Let's dig into each scope and see what we're doing!
+
+ ## On Application Start:
+
+ The first thing you'll notice is that we have the traditional "wall of imports" - this ensures we have everything we need to run our application.
+
+ ```python
+ import os
+ from typing import List
+ from chainlit.types import AskFileResponse
+ from aimakerspace.text_utils import CharacterTextSplitter, TextFileLoader
+ from aimakerspace.openai_utils.prompts import (
+     UserRolePrompt,
+     SystemRolePrompt,
+     AssistantRolePrompt,
+ )
+ from aimakerspace.openai_utils.embedding import EmbeddingModel
+ from aimakerspace.vectordatabase import VectorDatabase
+ from aimakerspace.openai_utils.chatmodel import ChatOpenAI
+ import chainlit as cl
+ ```
+
+ Next up, we have some prompt templates. As all sessions will use the same prompt templates without modification, and we don't need these templates to be specific per session - we can set them up here, at the application scope.
+
+ ```python
+ system_template = """\
+ Use the following context to answer a user's question. If you cannot find the answer in the context, say you don't know the answer."""
+ system_role_prompt = SystemRolePrompt(system_template)
+
+ user_prompt_template = """\
+ Context:
+ {context}
+
+ Question:
+ {question}
+ """
+ user_role_prompt = UserRolePrompt(user_prompt_template)
+ ```
+
+ > NOTE: You'll notice that these are the exact same prompt templates we used in the Pythonic RAG Notebook in Week 1 Day 2!
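+
+ As a quick sanity check, here's roughly what one of these templates renders to once filled in (a sketch based on the `RolePrompt.create_message` implementation in `aimakerspace/openai_utils/prompts.py`):
+
+ ```python
+ message = user_role_prompt.create_message(question="What is RAG?", context="Some retrieved chunk...")
+ print(message)
+ # {'role': 'user', 'content': 'Context:\nSome retrieved chunk...\n\nQuestion:\nWhat is RAG?\n'}
+ ```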
+
+ Following that - we can create the Python class definition for our RAG pipeline - or *chain*, as we'll refer to it in the rest of this walkthrough.
+
+ Let's look at the definition first:
+
+ ```python
+ class RetrievalAugmentedQAPipeline:
+     def __init__(self, llm: ChatOpenAI, vector_db_retriever: VectorDatabase) -> None:
+         self.llm = llm
+         self.vector_db_retriever = vector_db_retriever
+
+     async def arun_pipeline(self, user_query: str):
+         ### RETRIEVAL
+         context_list = self.vector_db_retriever.search_by_text(user_query, k=4)
+
+         context_prompt = ""
+         for context in context_list:
+             context_prompt += context[0] + "\n"
+
+         ### AUGMENTED
+         formatted_system_prompt = system_role_prompt.create_message()
+
+         formatted_user_prompt = user_role_prompt.create_message(question=user_query, context=context_prompt)
+
+         ### GENERATION
+         async def generate_response():
+             async for chunk in self.llm.astream([formatted_system_prompt, formatted_user_prompt]):
+                 yield chunk
+
+         return {"response": generate_response(), "context": context_list}
+ ```
+
+ Notice a few things:
+
+ 1. We have modified this `RetrievalAugmentedQAPipeline` from the initial notebook to support streaming.
+ 2. In essence, our pipeline is *chaining* a few events together:
+     1. We take our user query, and chain it into our Vector Database to collect related chunks
+     2. We take those contexts and our user's question and chain them into the prompt templates
+     3. We take that prompt template and chain it into our LLM call
+     4. We chain the response of the LLM call to the user
+ 3. We are using a lot of `async` again! (See the consumption sketch just below.)
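+
+ To see why the streaming design matters, here's a minimal sketch of how a caller consumes the pipeline's response (assuming a `pipeline` instance built as shown later in this walkthrough):
+
+ ```python
+ import asyncio
+
+ async def demo(pipeline: RetrievalAugmentedQAPipeline):
+     result = await pipeline.arun_pipeline("What is this document about?")
+     # result["response"] is an async generator - tokens arrive as the LLM produces them
+     async for token in result["response"]:
+         print(token, end="", flush=True)
+
+ # asyncio.run(demo(pipeline))
+ ```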
+
+ Now, we're going to create a helper function for processing uploaded text files.
+
+ First, we'll instantiate a shared `CharacterTextSplitter`.
+
+ ```python
+ text_splitter = CharacterTextSplitter()
+ ```
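+
+ The defaults in `aimakerspace/text_utils.py` are `chunk_size=1000` and `chunk_overlap=200`, so consecutive chunks start 800 characters apart. A quick illustration:
+
+ ```python
+ splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
+ chunks = splitter.split("x" * 2500)
+ print([len(c) for c in chunks])  # [1000, 1000, 900, 100]
+ ```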
+
+ Now we can define our helper.
+
+ ```python
+ def process_text_file(file: AskFileResponse):
+     import tempfile
+
+     # Create a named temp file just to reserve a path on disk
+     with tempfile.NamedTemporaryFile(mode="w", delete=False, suffix=".txt") as temp_file:
+         temp_file_path = temp_file.name
+
+     # Write the uploaded file's bytes to that path
+     with open(temp_file_path, "wb") as f:
+         f.write(file.content)
+
+     text_loader = TextFileLoader(temp_file_path)
+     documents = text_loader.load_documents()
+     texts = text_splitter.split_texts(documents)
+     return texts
+ ```
+
+ Simply put, this writes the uploaded file to a temporary file, loads it with `TextFileLoader`, splits it with our `CharacterTextSplitter`, and returns the resulting list of strings!
+
+ #### QUESTION #1:
+
+ Why do we want to support streaming? What about streaming is important, or useful?
+
+ #### ANSWER #1:
+
+ Streaming renders tokens as the model generates them, which makes the experience more fluid for the user. Instead of waiting for the full completion, the user sees the first tokens almost immediately - so streaming reduces perceived latency, especially for long answers.
+
+ ## On Chat Start:
+
+ The next scope is where "the magic happens". On Chat Start is when a user begins a chat session. This will happen whenever a user opens a new chat window, or refreshes an existing chat window.
+
+ You'll see that our code is set up to immediately show the user a chat box requesting them to upload a file.
+
+ ```python
+ while files is None:
+     files = await cl.AskFileMessage(
+         content="Please upload a Text file to begin!",
+         accept=["text/plain"],
+         max_size_mb=2,
+         timeout=180,
+     ).send()
+ ```
+
+ Once we've obtained the text file - we'll use our processing helper function to process our text!
+
+ After we have processed our text file - we'll need to create a `VectorDatabase` and populate it with our processed chunks and their related embeddings!
+
+ ```python
+ vector_db = VectorDatabase()
+ vector_db = await vector_db.abuild_from_list(texts)
+ ```
+
+ Once we have that piece completed - we can create the chain we'll be using to respond to user queries!
+
+ ```python
+ retrieval_augmented_qa_pipeline = RetrievalAugmentedQAPipeline(
+     vector_db_retriever=vector_db,
+     llm=chat_openai
+ )
+ ```
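+
+ One detail the snippet above glosses over: `chat_openai` has to be instantiated first, as `app.py` does. A minimal sketch:
+
+ ```python
+ chat_openai = ChatOpenAI()  # reads OPENAI_API_KEY from the environment (via .env)
+ ```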
+
+ Now, we'll save that into our user session!
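+
+ Concretely, that's one line (this is exactly what `app.py` does; each connected user gets their own copy under the hood):
+
+ ```python
+ cl.user_session.set("chain", retrieval_augmented_qa_pipeline)
+ ```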
+
+ > NOTE: Chainlit has some great documentation about [User Session](https://docs.chainlit.io/concepts/user-session).
+
+ #### QUESTION #2:
+
+ Why are we using User Session here? What about Python makes us need to use this? Why not just store everything in a global variable?
+
+ #### ANSWER #2:
+
+ From the Chainlit docs: the user session is designed to persist data in memory through the life cycle of a chat session, and each user session is unique to a user and a given chat session. Since a single Python process serves every connected user, a module-level global would be shared across all of them - one user's chain (and uploaded document) would overwrite another's. The user session keeps each user's chain and context isolated, so answers stay grounded in that user's own upload.
+
+ ## On Message
+
+ First, we load our chain from the user session:
+
+ ```python
+ chain = cl.user_session.get("chain")
+ ```
+
+ Then, we run the chain on the content of the message - and stream it to the front end - that's it!
+
+ ```python
+ msg = cl.Message(content="")
+ result = await chain.arun_pipeline(message.content)
+
+ async for stream_resp in result["response"]:
+     await msg.stream_token(stream_resp)
+ ```
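+
+ One small detail from the full `app.py`: once the stream is exhausted, we call `await msg.send()` to finalize the message in the UI.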
+
+ ## 🎉
+
+ With that - you've turned our Pythonic RAG notebook into a working Chainlit application!
+
+ ## 🚧 CHALLENGE MODE 🚧
+
+ For an extra challenge - modify the behaviour of your application by integrating changes you made to your Pythonic RAG notebook (using new retrieval methods, etc.)
+
+ If you're still looking for a challenge, or didn't make any modifications to your Pythonic RAG notebook:
+
+ 1) Allow users to upload PDFs (this will require you to build a PDF parser as well)
+ 2) Modify the VectorStore to leverage [Qdrant](https://python-client.qdrant.tech/) (see the sketch below)
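+
+ For challenge #2, a hedged starting point (collection and document names are illustrative; this uses the `qdrant-client[fastembed]` integration that this repo's `pyproject.toml` pulls in):
+
+ ```python
+ from qdrant_client import QdrantClient
+
+ client = QdrantClient(":memory:")  # in-process instance, handy for prototyping
+ client.add(collection_name="docs", documents=["chunk one", "chunk two"])
+ hits = client.query(collection_name="docs", query_text="what does chunk one say?", limit=1)
+ print(hits[0].document)
+ ```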
+
+ > NOTE: The motivation for these challenges is simple - the beginning of the course is extremely information dense, and people come from all kinds of different technical backgrounds. In order to ensure that all learners are able to engage with the content confidently and comfortably, we want to focus on the basic units of technical competency required. This leads to a situation where some learners, who came in with more robust technical skills, find the introductory material to be too simple - and these open-ended challenges help keep them engaged!
Dockerfile ADDED
@@ -0,0 +1,31 @@
+ # Get a distribution that has uv already installed
+ FROM ghcr.io/astral-sh/uv:python3.13-bookworm-slim
+
+ # Add user - this is the user that will run the app
+ # If you do not set user, the app will run as root (undesirable)
+ RUN useradd -m -u 1000 user
+ USER user
+
+ # Set the home directory and path
+ ENV HOME=/home/user \
+     PATH=/home/user/.local/bin:$PATH
+
+ ENV UVICORN_WS_PROTOCOL=websockets
+
+ # Set the working directory
+ WORKDIR $HOME/app
+
+ # Copy the app to the container
+ COPY --chown=user . $HOME/app
+
+ # Install the dependencies
+ # RUN uv sync --frozen
+ RUN uv sync
+
+ # Expose the port
+ EXPOSE 7860
+
+ # Run the app
+ CMD ["uv", "run", "chainlit", "run", "app.py", "--host", "0.0.0.0", "--port", "7860"]
README.md CHANGED
@@ -1,11 +1,49 @@
  ---
- title: Qdrant RAG
- emoji: 📈
  colorFrom: blue
- colorTo: blue
  sdk: docker
  pinned: false
- short_description: Deploying RAG powered by Qdrant as vector db and fastembed f
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
+ title: Deploy Qdrant RAG
+ emoji: 📉
  colorFrom: blue
+ colorTo: purple
  sdk: docker
  pinned: false
+ license: apache-2.0
  ---

+ # Deploying RAG powered by Qdrant as vector db and fastembed for embedding and retrieval
+
+ #### ❓ QUESTION #1:
+
+ Why do we want to support streaming? What about streaming is important, or useful?
+
+ #### ANSWER #1:
+
+ The goal of streaming in this context is to render the generated answer in chunks as it is produced, reducing perceived latency - especially for answers containing many tokens.
+
+ #### ❓ QUESTION #2:
+
+ Why are we using User Session here? What about Python makes us need to use this? Why not just store everything in a global variable?
+
+ #### ANSWER #2:
+
+ User sessions keep track of each user's activity. They can be used to retrieve context from previous conversations and to keep different users' conversations separate (a global variable would be shared across every session).
+
+ #### ❓ Discussion Question #1:
+
+ Upload a PDF file of the recent DeepSeek-R1 paper and ask the following questions:
+
+ 1. What is RL and how does it help reasoning?
+ 2. What is the difference between DeepSeek-R1 and DeepSeek-R1-Zero?
+ 3. What is this paper about?
+
+ Does this application pass your vibe check? Are there any immediate pitfalls you're noticing?
+
+ #### ❓ Discussion
+
+ Yes - the application passes the vibe check, except for the last question, but that is expected behaviour: my collection still had documents from another uploaded PDF.
+
+ ![image](vibe.png)
+
+ ## 🚧 CHALLENGE MODE 🚧
+
+ Added Qdrant as vector db
+
+ Hugging Face Space link:
aimakerspace/__init__.py ADDED
File without changes
aimakerspace/openai_utils/__init__.py ADDED
File without changes
aimakerspace/openai_utils/chatmodel.py ADDED
@@ -0,0 +1,45 @@
+ from openai import OpenAI, AsyncOpenAI
+ from dotenv import load_dotenv
+ import os
+
+ load_dotenv()
+
+
+ class ChatOpenAI:
+     def __init__(self, model_name: str = "gpt-4o-mini"):
+         self.model_name = model_name
+         self.openai_api_key = os.getenv("OPENAI_API_KEY")
+         if self.openai_api_key is None:
+             raise ValueError("OPENAI_API_KEY is not set")
+
+     def run(self, messages, text_only: bool = True, **kwargs):
+         if not isinstance(messages, list):
+             raise ValueError("messages must be a list")
+
+         client = OpenAI()
+         response = client.chat.completions.create(
+             model=self.model_name, messages=messages, **kwargs
+         )
+
+         if text_only:
+             return response.choices[0].message.content
+
+         return response
+
+     async def astream(self, messages, **kwargs):
+         if not isinstance(messages, list):
+             raise ValueError("messages must be a list")
+
+         client = AsyncOpenAI()
+
+         stream = await client.chat.completions.create(
+             model=self.model_name,
+             messages=messages,
+             stream=True,
+             **kwargs
+         )
+
+         async for chunk in stream:
+             content = chunk.choices[0].delta.content
+             if content is not None:
+                 yield content
aimakerspace/openai_utils/embedding.py ADDED
@@ -0,0 +1,59 @@
+ from dotenv import load_dotenv
+ from openai import AsyncOpenAI, OpenAI
+ import openai
+ from typing import List
+ import os
+ import asyncio
+
+
+ class EmbeddingModel:
+     def __init__(self, embeddings_model_name: str = "text-embedding-3-small"):
+         load_dotenv()
+         self.openai_api_key = os.getenv("OPENAI_API_KEY")
+         self.async_client = AsyncOpenAI()
+         self.client = OpenAI()
+
+         if self.openai_api_key is None:
+             raise ValueError(
+                 "OPENAI_API_KEY environment variable is not set. Please set it to your OpenAI API key."
+             )
+         openai.api_key = self.openai_api_key
+         self.embeddings_model_name = embeddings_model_name
+
+     async def async_get_embeddings(self, list_of_text: List[str]) -> List[List[float]]:
+         embedding_response = await self.async_client.embeddings.create(
+             input=list_of_text, model=self.embeddings_model_name
+         )
+
+         return [embeddings.embedding for embeddings in embedding_response.data]
+
+     async def async_get_embedding(self, text: str) -> List[float]:
+         embedding = await self.async_client.embeddings.create(
+             input=text, model=self.embeddings_model_name
+         )
+
+         return embedding.data[0].embedding
+
+     def get_embeddings(self, list_of_text: List[str]) -> List[List[float]]:
+         embedding_response = self.client.embeddings.create(
+             input=list_of_text, model=self.embeddings_model_name
+         )
+
+         return [embeddings.embedding for embeddings in embedding_response.data]
+
+     def get_embedding(self, text: str) -> List[float]:
+         embedding = self.client.embeddings.create(
+             input=text, model=self.embeddings_model_name
+         )
+
+         return embedding.data[0].embedding
+
+
+ if __name__ == "__main__":
+     embedding_model = EmbeddingModel()
+     print(asyncio.run(embedding_model.async_get_embedding("Hello, world!")))
+     print(
+         asyncio.run(
+             embedding_model.async_get_embeddings(["Hello, world!", "Goodbye, world!"])
+         )
+     )
aimakerspace/openai_utils/prompts.py ADDED
@@ -0,0 +1,78 @@
+ import re
+
+
+ class BasePrompt:
+     def __init__(self, prompt):
+         """
+         Initializes the BasePrompt object with a prompt template.
+
+         :param prompt: A string that can contain placeholders within curly braces
+         """
+         self.prompt = prompt
+         self._pattern = re.compile(r"\{([^}]+)\}")
+
+     def format_prompt(self, **kwargs):
+         """
+         Formats the prompt string using the keyword arguments provided.
+
+         :param kwargs: The values to substitute into the prompt string
+         :return: The formatted prompt string
+         """
+         matches = self._pattern.findall(self.prompt)
+         return self.prompt.format(**{match: kwargs.get(match, "") for match in matches})
+
+     def get_input_variables(self):
+         """
+         Gets the list of input variable names from the prompt string.
+
+         :return: List of input variable names
+         """
+         return self._pattern.findall(self.prompt)
+
+
+ class RolePrompt(BasePrompt):
+     def __init__(self, prompt, role: str):
+         """
+         Initializes the RolePrompt object with a prompt template and a role.
+
+         :param prompt: A string that can contain placeholders within curly braces
+         :param role: The role for the message ('system', 'user', or 'assistant')
+         """
+         super().__init__(prompt)
+         self.role = role
+
+     def create_message(self, format=True, **kwargs):
+         """
+         Creates a message dictionary with a role and a formatted message.
+
+         :param kwargs: The values to substitute into the prompt string
+         :return: Dictionary containing the role and the formatted message
+         """
+         if format:
+             return {"role": self.role, "content": self.format_prompt(**kwargs)}
+
+         return {"role": self.role, "content": self.prompt}
+
+
+ class SystemRolePrompt(RolePrompt):
+     def __init__(self, prompt: str):
+         super().__init__(prompt, "system")
+
+
+ class UserRolePrompt(RolePrompt):
+     def __init__(self, prompt: str):
+         super().__init__(prompt, "user")
+
+
+ class AssistantRolePrompt(RolePrompt):
+     def __init__(self, prompt: str):
+         super().__init__(prompt, "assistant")
+
+
+ if __name__ == "__main__":
+     prompt = BasePrompt("Hello {name}, you are {age} years old")
+     print(prompt.format_prompt(name="John", age=30))
+
+     prompt = SystemRolePrompt("Hello {name}, you are {age} years old")
+     print(prompt.create_message(name="John", age=30))
+     print(prompt.get_input_variables())
aimakerspace/text_utils.py ADDED
@@ -0,0 +1,136 @@
+ import os
+ from typing import List
+ import PyPDF2
+
+
+ class TextFileLoader:
+     def __init__(self, path: str, encoding: str = "utf-8"):
+         self.documents = []
+         self.path = path
+         self.encoding = encoding
+
+     def load(self):
+         if os.path.isdir(self.path):
+             self.load_directory()
+         elif os.path.isfile(self.path) and self.path.endswith(".txt"):
+             self.load_file()
+         else:
+             raise ValueError(
+                 "Provided path is neither a valid directory nor a .txt file."
+             )
+
+     def load_file(self):
+         with open(self.path, "r", encoding=self.encoding) as f:
+             self.documents.append(f.read())
+
+     def load_directory(self):
+         for root, _, files in os.walk(self.path):
+             for file in files:
+                 if file.endswith(".txt"):
+                     with open(
+                         os.path.join(root, file), "r", encoding=self.encoding
+                     ) as f:
+                         self.documents.append(f.read())
+
+     def load_documents(self):
+         self.load()
+         return self.documents
+
+
+ class CharacterTextSplitter:
+     def __init__(
+         self,
+         chunk_size: int = 1000,
+         chunk_overlap: int = 200,
+     ):
+         assert (
+             chunk_size > chunk_overlap
+         ), "Chunk size must be greater than chunk overlap"
+
+         self.chunk_size = chunk_size
+         self.chunk_overlap = chunk_overlap
+
+     def split(self, text: str) -> List[str]:
+         chunks = []
+         for i in range(0, len(text), self.chunk_size - self.chunk_overlap):
+             chunks.append(text[i : i + self.chunk_size])
+         return chunks
+
+     def split_texts(self, texts: List[str]) -> List[str]:
+         chunks = []
+         for text in texts:
+             chunks.extend(self.split(text))
+         return chunks
+
+
+ class PDFLoader:
+     def __init__(self, path: str):
+         self.documents = []
+         self.path = path
+         print(f"PDFLoader initialized with path: {self.path}")
+
+     def load(self):
+         print(f"Loading PDF from path: {self.path}")
+         print(f"Path exists: {os.path.exists(self.path)}")
+         print(f"Is file: {os.path.isfile(self.path)}")
+         print(f"Is directory: {os.path.isdir(self.path)}")
+         print(f"File permissions: {oct(os.stat(self.path).st_mode)[-3:]}")
+
+         try:
+             # Try to open the file first to verify access
+             with open(self.path, 'rb') as test_file:
+                 pass
+
+             # If we can open it, proceed with loading
+             self.load_file()
+
+         except IOError as e:
+             raise ValueError(f"Cannot access file at '{self.path}': {str(e)}")
+         except Exception as e:
+             raise ValueError(f"Error processing file at '{self.path}': {str(e)}")
+
+     def load_file(self):
+         with open(self.path, 'rb') as file:
+             # Create PDF reader object
+             pdf_reader = PyPDF2.PdfReader(file)
+
+             # Extract text from each page
+             text = ""
+             for page in pdf_reader.pages:
+                 text += page.extract_text() + "\n"
+
+             self.documents.append(text)
+
+     def load_directory(self):
+         for root, _, files in os.walk(self.path):
+             for file in files:
+                 if file.lower().endswith('.pdf'):
+                     file_path = os.path.join(root, file)
+                     with open(file_path, 'rb') as f:
+                         pdf_reader = PyPDF2.PdfReader(f)
+
+                         # Extract text from each page
+                         text = ""
+                         for page in pdf_reader.pages:
+                             text += page.extract_text() + "\n"
+
+                         self.documents.append(text)
+
+     def load_documents(self):
+         self.load()
+         return self.documents
+
+
+ if __name__ == "__main__":
+     loader = TextFileLoader("data/KingLear.txt")
+     loader.load()
+     splitter = CharacterTextSplitter()
+     chunks = splitter.split_texts(loader.documents)
+     print(len(chunks))
+     print(chunks[0])
+     print("--------")
+     print(chunks[1])
+     print("--------")
+     print(chunks[-2])
+     print("--------")
+     print(chunks[-1])
aimakerspace/vectordatabase.py ADDED
@@ -0,0 +1,63 @@
+ from qdrant_client import QdrantClient
+ from qdrant_client.http import models
+ from typing import List, Dict, Optional
+ import os
+
+
+ class VectorDatabase:
+     def __init__(
+         self,
+         url=os.getenv("QDRANT_URL"),
+         api_key=os.getenv("QDRANT_API_KEY"),
+         collection_name="testing_col",
+         # embedding_model_name: str = "BAAI/bge-small-en",  # Default model
+     ):
+         """
+         Initialize the Qdrant client, FastEmbed, and collection.
+
+         Args:
+             url (str): URL of the Qdrant server.
+             api_key (str): API key for the Qdrant server.
+             collection_name (str): Name of the collection to use or create.
+         """
+         self.client = QdrantClient(url=url, api_key=api_key)
+         self.collection_name = collection_name
+
+     def upsert_documents(self, texts: List[str]):
+         # Insert into Qdrant (embeddings are computed by FastEmbed under the hood)
+         self.client.add(
+             collection_name=self.collection_name,
+             documents=texts,
+         )
+         print(
+             f"Inserted {len(texts)} documents into collection '{self.collection_name}'."
+         )
+
+     def search_similar(self, query_text: str):
+         search_result = self.client.query(
+             collection_name=self.collection_name,
+             query_text=query_text,
+             limit=1,
+         )
+
+         document = search_result[0].document
+
+         return document
+
+     def delete_collection(self):
+         """
+         Delete the Qdrant collection.
+         """
+         self.client.delete_collection(self.collection_name)
+         print(f"Deleted collection: {self.collection_name}")
+
+     def list_collections(self):
+         """
+         List all collections in the Qdrant database.
+
+         Returns:
+             List[str]: List of collection names.
+         """
+         collections = self.client.get_collections().collections
+         return [collection.name for collection in collections]
app.py ADDED
@@ -0,0 +1,141 @@
+ import os
+ from typing import List
+ from chainlit.types import AskFileResponse
+ from aimakerspace.text_utils import CharacterTextSplitter, TextFileLoader, PDFLoader
+ from aimakerspace.openai_utils.prompts import (
+     UserRolePrompt,
+     SystemRolePrompt,
+     AssistantRolePrompt,
+ )
+ from aimakerspace.openai_utils.embedding import EmbeddingModel
+ from aimakerspace.vectordatabase import VectorDatabase
+ from aimakerspace.openai_utils.chatmodel import ChatOpenAI
+ import chainlit as cl
+ from qdrant_client import QdrantClient
+
+ system_template = """\
+ Use the following context to answer a user's question. If you cannot find the answer in the context, say you don't know the answer."""
+ system_role_prompt = SystemRolePrompt(system_template)
+
+ user_prompt_template = """\
+ Context:
+ {context}
+
+ Question:
+ {question}
+ """
+ user_role_prompt = UserRolePrompt(user_prompt_template)
+
+
+ class RetrievalAugmentedQAPipeline:
+     def __init__(self, llm: ChatOpenAI, vector_db_retriever: VectorDatabase) -> None:
+         self.llm = llm
+         self.vector_db_retriever = vector_db_retriever
+
+     async def arun_pipeline(self, user_query: str):
+         context_data = self.vector_db_retriever.search_similar(user_query)
+
+         context_prompt = ""
+         context_prompt += context_data + "\n"
+
+         formatted_system_prompt = system_role_prompt.create_message()
+
+         formatted_user_prompt = user_role_prompt.create_message(
+             question=user_query, context=context_prompt
+         )
+
+         async def generate_response():
+             async for chunk in self.llm.astream(
+                 [formatted_system_prompt, formatted_user_prompt]
+             ):
+                 yield chunk
+
+         return {"response": generate_response(), "context": context_prompt}
+
+
+ text_splitter = CharacterTextSplitter()
+
+
+ def process_file(file: AskFileResponse):
+     import tempfile
+     import shutil
+
+     print(f"Processing file: {file.name}")
+
+     # Create a temporary file with the correct extension
+     suffix = f".{file.name.split('.')[-1]}"
+     with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as temp_file:
+         # Copy the uploaded file content to the temporary file
+         shutil.copyfile(file.path, temp_file.name)
+         print(f"Created temporary file at: {temp_file.name}")
+
+     # Create appropriate loader
+     if file.name.lower().endswith(".pdf"):
+         loader = PDFLoader(temp_file.name)
+     else:
+         loader = TextFileLoader(temp_file.name)
+
+     try:
+         # Load and process the documents
+         documents = loader.load_documents()
+         texts = text_splitter.split_texts(documents)
+         return texts
+     finally:
+         # Clean up the temporary file
+         try:
+             os.unlink(temp_file.name)
+         except Exception as e:
+             print(f"Error cleaning up temporary file: {e}")
+
+
+ @cl.on_chat_start
+ async def on_chat_start():
+     files = None
+
+     # Wait for the user to upload a file
+     while files is None:
+         files = await cl.AskFileMessage(
+             content="Please upload a Text or PDF file to begin!",
+             accept=["text/plain", "application/pdf"],
+             max_size_mb=2,
+             timeout=180,
+         ).send()
+
+     file = files[0]
+
+     msg = cl.Message(content=f"Processing `{file.name}`...")
+     await msg.send()
+
+     # Load the file
+     texts = process_file(file)
+
+     print(f"Processing {len(texts)} text chunks")
+
+     # Create the vector store and index the chunks
+     vector_db = VectorDatabase()
+     vector_db.upsert_documents(texts)
+     chat_openai = ChatOpenAI()
+
+     # Create a chain
+     retrieval_augmented_qa_pipeline = RetrievalAugmentedQAPipeline(
+         vector_db_retriever=vector_db, llm=chat_openai
+     )
+
+     # Let the user know that the system is ready
+     msg.content = f"Processing `{file.name}` done. You can now ask questions!"
+     await msg.update()
+
+     cl.user_session.set("chain", retrieval_augmented_qa_pipeline)
+
+
+ @cl.on_message
+ async def main(message):
+     chain = cl.user_session.get("chain")
+
+     msg = cl.Message(content="")
+     result = await chain.arun_pipeline(message.content)
+
+     async for stream_resp in result["response"]:
+         await msg.stream_token(stream_resp)
+
+     await msg.send()
chainlit.md ADDED
@@ -0,0 +1,3 @@
+ # Welcome to Chat with Your PDF/Text File
+
+ With this application, you can chat with an uploaded text or PDF file that is smaller than 2MB!
images/docchain_img.png ADDED
pyproject.toml ADDED
@@ -0,0 +1,16 @@
+ [project]
+ name = "aie5-deploypythonicrag"
+ version = "0.1.0"
+ description = "Simple Pythonic RAG App"
+ readme = "README.md"
+ requires-python = ">=3.13"
+ dependencies = [
+     "chainlit>=2.0.4",
+     "maturin>=1.8.1",
+     "numpy>=2.2.2",
+     "openai>=1.59.9",
+     "pydantic==2.10.1",
+     "pypdf2>=3.0.1",
+     "qdrant-client[fastembed]>=1.13.2",
+     "websockets>=14.2",
+ ]
uv.lock ADDED
The diff for this file is too large to render. See raw diff
vibe.png ADDED