ruslanmv committed on
Commit d621767 · 1 Parent(s): a83407e
Files changed (4)
  1. README.md +402 -1
  2. hf/Dockerfile +20 -0
  3. hf/README.md +93 -0
  4. hf/requirements.txt +8 -0
README.md CHANGED
@@ -114,10 +114,411 @@ PROJECT_ID=your_project_id
114
 
115
  This will configure your project to connect to Watsonx.ai using the obtained credentials.
116
 
117
- Step 4: Creation of app.py
+ ## Step 4: Creation of app.py
118
 
119
  In the following section we are going to invoke Large Language Models (LLMs) deployed in watsonx.ai. Documentation: [here](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.html)
120
  This example shows a question-and-answer use case for a provided website.
121
 
122
 
123
 
124
+ ### Section 1: Importing Necessary Libraries
125
+
126
+ ```python
+ # For reading credentials from the .env file
+ import os
+ from dotenv import load_dotenv
+
+ from sentence_transformers import SentenceTransformer
+ from chromadb.api.types import EmbeddingFunction
+
+ # WML python SDK
+ from ibm_watson_machine_learning.foundation_models import Model
+ from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams
+ from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes, DecodingMethods
+
+ import requests
+ from bs4 import BeautifulSoup
+ import spacy
+ import chromadb
+ import en_core_web_md
+ ```
145
+
146
+ **Explanation:**
147
+ - `os` and `dotenv` libraries are used for handling environment variables securely.
148
+ - `sentence_transformers` and `chromadb.api.types` are used for text embedding and database operations.
149
+ - `ibm_watson_machine_learning` SDK helps interact with IBM Watson models.
150
+ - `requests` and `BeautifulSoup` are used for web scraping.
151
+ - `spacy` is used for natural language processing tasks.
152
+
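+ The `import en_core_web_md` line assumes the spaCy model has already been downloaded. A minimal, hypothetical setup sketch (not part of the original script) that fetches the model on first use:
+
+ ```python
+ import spacy
+
+ try:
+     nlp = spacy.load("en_core_web_md")    # succeeds if the model is already installed
+ except OSError:
+     spacy.cli.download("en_core_web_md")  # one-time download; requires network access
+     nlp = spacy.load("en_core_web_md")
+ ```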
153
+ ### Section 2: Setting Up Environment Variables
154
+
155
+ ```python
+ # Important: hardcoding the API key in Python code is not a best practice. We are using
+ # this approach for the ease of demo setup. In a production application these variables
+ # can be stored in an .env or a properties file
+
+ # URL of the hosted LLMs is hardcoded because at this time all LLMs share the same endpoint
+ url = "https://us-south.ml.cloud.ibm.com"
+
+ # These global variables will be updated in get_credentials() function
+ watsonx_project_id = ""
+ # Replace with your IBM Cloud key
+ api_key = ""
+ ```
168
+
169
+ **Explanation:**
170
+ - Hardcoding credentials is not recommended for production; use environment variables instead.
171
+ - `url` is the endpoint for IBM Watson models.
172
+ - `watsonx_project_id` and `api_key` will be populated from environment variables.
173
+
174
+ ### Section 3: Loading Credentials
175
+
176
+ ```python
+ def get_credentials():
+     load_dotenv()
+     # Update the global variables that will be used for authentication in another function
+     globals()["api_key"] = os.getenv("api_key", None)
+     globals()["watsonx_project_id"] = os.getenv("project_id", None)
+ ```
183
+
184
+ **Explanation:**
185
+ - `get_credentials` function loads the `.env` file and updates global variables for `api_key` and `watsonx_project_id`.
186
+
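+ As a quick sanity check (a hypothetical snippet, not in the original script), you can verify that the variables were actually populated before calling the model:
+
+ ```python
+ get_credentials()
+ if not api_key or not watsonx_project_id:
+     raise SystemExit("Missing credentials: set api_key and project_id in your .env file.")
+ ```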
187
+ ### Section 4: Creating the Model
188
+
189
+ ```python
+ def get_model(model_type, max_tokens, min_tokens, decoding, temperature, top_k, top_p):
+     generate_params = {
+         GenParams.MAX_NEW_TOKENS: max_tokens,
+         GenParams.MIN_NEW_TOKENS: min_tokens,
+         GenParams.DECODING_METHOD: decoding,
+         GenParams.TEMPERATURE: temperature,
+         GenParams.TOP_K: top_k,
+         GenParams.TOP_P: top_p,
+     }
+
+     model = Model(
+         model_id=model_type,
+         params=generate_params,
+         credentials={
+             "apikey": api_key,
+             "url": url
+         },
+         project_id=watsonx_project_id
+     )
+
+     return model
+ ```
212
+
213
+ **Explanation:**
214
+ - `get_model` function initializes a Watson model with specified parameters like `max_tokens`, `decoding` method, `temperature`, etc.
215
+ - Credentials and project ID are passed to authenticate the model.
216
+
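+ For reference, a call that mirrors the parameter values used later in `answer_questions_from_web` would look like this (illustrative only; it assumes `get_credentials()` has already populated the globals):
+
+ ```python
+ model = get_model(
+     model_type="meta-llama/llama-2-70b-chat",
+     max_tokens=100,
+     min_tokens=50,
+     decoding=DecodingMethods.GREEDY,
+     temperature=0.7,
+     top_k=50,
+     top_p=1,
+ )
+ ```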
217
+ ### Section 5: Embedding Function
218
+
219
+ ```python
+ class MiniLML6V2EmbeddingFunction(EmbeddingFunction):
+     MODEL = SentenceTransformer('all-MiniLM-L6-v2')
+
+     def __call__(self, texts):
+         return MiniLML6V2EmbeddingFunction.MODEL.encode(texts).tolist()
+ ```
226
+
227
+ **Explanation:**
228
+ - `MiniLML6V2EmbeddingFunction` class uses `SentenceTransformer` to convert text into embeddings, which are numeric representations of the text.
229
+
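+ A quick way to sanity-check the embedding function (the sentences below are hypothetical; `all-MiniLM-L6-v2` produces 384-dimensional vectors):
+
+ ```python
+ embed = MiniLML6V2EmbeddingFunction()
+ vectors = embed(["Electric vehicles can be charged at home.", "EV incentives vary by state."])
+ print(len(vectors), len(vectors[0]))  # 2 vectors, 384 dimensions each
+ ```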
230
+ ### Section 6: Extracting Text from a Webpage
231
+
232
+ ```python
+ def extract_text(url):
+     try:
+         # Send an HTTP GET request to the URL
+         response = requests.get(url)
+
+         # Check if the request was successful
+         if response.status_code == 200:
+             # Parse the HTML content of the page using BeautifulSoup
+             soup = BeautifulSoup(response.text, 'html.parser')
+
+             # Extract contents of <p> elements
+             p_contents = [p.get_text() for p in soup.find_all('p')]
+
+             # Print the contents of <p> elements
+             print("\nContents of <p> elements: \n")
+             for content in p_contents:
+                 print(content)
+             raw_web_text = " ".join(p_contents)
+             # Remove \xa0, which is used in HTML to prevent words from breaking across lines
+             cleaned_text = raw_web_text.replace("\xa0", " ")
+             return cleaned_text
+
+         else:
+             print(f"Failed to retrieve the page. Status code: {response.status_code}")
+
+     except Exception as e:
+         print(f"An error occurred: {str(e)}")
+ ```
261
+
262
+ **Explanation:**
263
+ - `extract_text` function scrapes text content from `<p>` tags of a given webpage URL using `requests` and `BeautifulSoup`.
264
+
265
+ ### Section 7: Splitting Text into Sentences
266
+
267
+ ```python
+ def split_text_into_sentences(text):
+     nlp = spacy.load("en_core_web_md")
+     doc = nlp(text)
+     sentences = [sent.text for sent in doc.sents]
+     cleaned_sentences = [s.strip() for s in sentences]
+     return cleaned_sentences
+ ```
275
+
276
+ **Explanation:**
277
+ - `split_text_into_sentences` function uses `spaCy` to split the extracted text into sentences and clean them.
278
+
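+ Together, the two helpers turn a web page into clean sentences. A small illustrative run (the URL is a placeholder; any public page with `<p>` content works):
+
+ ```python
+ text = extract_text("https://example.com")
+ sentences = split_text_into_sentences(text)
+ print(f"{len(sentences)} sentences; first one: {sentences[0]}")
+ ```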
279
+ ### Section 8: Creating Embeddings
280
+
281
+ ```python
+ def create_embedding(url, collection_name):
+     cleaned_text = extract_text(url)
+     cleaned_sentences = split_text_into_sentences(cleaned_text)
+
+     client = chromadb.Client()
+
+     collection = client.get_or_create_collection(collection_name)
+
+     # Upload text to chroma
+     collection.upsert(
+         documents=cleaned_sentences,
+         metadatas=[{"source": str(i)} for i in range(len(cleaned_sentences))],
+         ids=[str(i) for i in range(len(cleaned_sentences))],
+     )
+
+     return collection
+ ```
299
+
300
+ **Explanation:**
301
+ - `create_embedding` function extracts, cleans, and splits text, then uploads it to a Chroma database collection.
302
+
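+ Because the function combines `get_or_create_collection` with `upsert`, re-running it with the same collection name updates the stored sentences (matched by id) rather than duplicating them. A hypothetical direct query against the returned collection:
+
+ ```python
+ collection = create_embedding("https://example.com", "example_site")
+ results = collection.query(query_texts=["What is this page about?"], n_results=3)
+ print(results["documents"][0])  # the three most similar sentences
+ ```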
303
+ ### Section 9: Creating a Prompt for the Model
304
+
305
+ ```python
+ def create_prompt(url, question, collection_name):
+     # Create embeddings for the text file
+     collection = create_embedding(url, collection_name)
+
+     # Query relevant information
+     relevant_chunks = collection.query(
+         query_texts=[question],
+         n_results=5,
+     )
+     context = "\n\n\n".join(relevant_chunks["documents"][0])
+     # Please note that this is a generic format. You can change this format to be specific to Llama
+     prompt = (f"{context}\n\nPlease answer the following question in one sentence using this "
+               + f"text. "
+               + f"If the question is unanswerable, say \"unanswerable\". Do not include information that's not relevant to the question. "
+               + f"Question: {question}")
+
+     return prompt
+ ```
324
+
325
+ **Explanation:**
326
+ - `create_prompt` function generates a prompt by querying the Chroma database for relevant text chunks based on a question and constructs a formatted prompt.
327
+
328
+ ### Section 10: Main Function
329
+
330
+ ```python
+ def main():
+
+     # Get the API key and project id and update global variables
+     get_credentials()
+
+     # Try different URLs and questions
+     url = "https://www.usbank.com/financialiq/manage-your-household/buy-a-car/own-electric-vehicles-learned-buying-driving-EVs.html"
+
+     question = "What are the incentives for purchasing EVs?"
+     # question = "What is the percentage of driving powered by hybrid cars?"
+     # question = "Can an EV be plugged in to a household outlet?"
+     collection_name = "test_web_RAG"
+
+     answer_questions_from_web(api_key, watsonx_project_id, url, question, collection_name)
+ ```
346
+
347
+ **Explanation:**
348
+ - `main` function initializes credentials and runs the process to answer a question based on the content from a given URL.
349
+
350
+ ### Section 11: Answering Questions from the Web
351
+
352
+ ```python
+ def answer_questions_from_web(request_api_key, request_project_id, url, question, collection_name):
+     # Update the global variables
+     globals()["api_key"] = request_api_key
+     globals()["watsonx_project_id"] = request_project_id
+
+     # Specify model parameters
+     model_type = "meta-llama/llama-2-70b-chat"
+     max_tokens = 100
+     min_tokens = 50
+     top_k = 50
+     top_p = 1
+     decoding = DecodingMethods.GREEDY
+     temperature = 0.7
+
+     # Get the watsonx model
+     model = get_model(model_type, max_tokens, min_tokens, decoding, temperature, top_k, top_p)
+
+     # Get the prompt
+     complete_prompt = create_prompt(url, question, collection_name)
+
+     # Let's review the prompt
+     print("----------------------------------------------------------------------------------------------------")
+     print("*** Prompt:" + complete_prompt + "***")
+     print("----------------------------------------------------------------------------------------------------")
+
+     generated_response = model.generate(prompt=complete_prompt)
+     response_text = generated_response['results'][0]['generated_text']
+
+     # Remove leading and trailing white space
+     response_text = response_text.strip()
+
+     # Print the model response
+     print("--------------------------------- Generated response -----------------------------------")
+     print(response_text)
+     print("*********************************************************************************************")
+
+     return response_text
+ ```
393
+
394
+ **Explanation:**
395
+ - `answer_questions_from_web` function updates the global variables, initializes the model, creates a prompt, generates a response, and prints the answer.
396
+
397
+ ### Section 12: Running the Script
398
+
399
+ ```python
+ # Invoke the main function
+ if __name__ == "__main__":
+     main()
+ ```
404
+
405
+ **Explanation:**
406
+ - This code block ensures that the `main` function is called when the script is run directly.
407
+
408
+ By breaking down the code into these sections, readers can understand the role of each part and how they work together to create a web chat application using Watsonx.ai.
409
+
410
+
411
+ ### Explanation of `run.py` Code
412
+
413
+ Let's break down and explain the `run.py` code step-by-step:
414
+
415
+ #### Section 1: Importing Necessary Libraries
416
+
417
+ ```python
+ # For reading credentials from the .env file
+ import os
+ from dotenv import load_dotenv
+ import streamlit as st
+ import webchat
+ ```
424
+
425
+ **Explanation:**
426
+ - `os` and `dotenv` are used to load environment variables.
427
+ - `streamlit` is a library for creating interactive web applications.
428
+ - `webchat` is a module that contains functions for interacting with IBM Watson models.
429
+
430
+ #### Section 2: Setting Up Environment Variables
431
+
432
+ ```python
+ # URL of the hosted LLMs is hardcoded because at this time all LLMs share the same endpoint
+ url = "https://us-south.ml.cloud.ibm.com"
+
+ # These global variables will be updated in get_credentials() function
+ watsonx_project_id = ""
+ api_key = ""
+ ```
440
+
441
+ **Explanation:**
442
+ - `url` is the endpoint for IBM Watson models.
443
+ - `watsonx_project_id` and `api_key` are initialized and will be populated with actual values from environment variables.
444
+
445
+ #### Section 3: Loading Credentials
446
+
447
+ ```python
+ def get_credentials():
+     load_dotenv()
+     # Update the global variables that will be used for authentication in another function
+     globals()["api_key"] = os.getenv("API_KEY", "")
+     globals()["watsonx_project_id"] = os.getenv("PROJECT_ID", "")
+ ```
454
+
455
+ **Explanation:**
456
+ - `get_credentials` function loads the environment variables from a `.env` file and updates the global `api_key` and `watsonx_project_id`.
457
+
458
+ #### Section 4: Streamlit Application Setup
459
+
460
+ ```python
+ def main():
+     # Get the API key and project id and update global variables
+     get_credentials()
+
+     # Use the full page instead of a narrow central column
+     st.set_page_config(layout="wide")
+
+     # Streamlit app title
+     st.title("🌠Demo of RAG with a Web page")
+
+     # Sidebar for settings
+     st.sidebar.header("Settings")
+     api_key_input = st.sidebar.text_input("API Key", api_key)
+     project_id_input = st.sidebar.text_input("Project ID", watsonx_project_id)
+
+     # Update credentials if provided by the user
+     if api_key_input:
+         globals()["api_key"] = api_key_input
+     if project_id_input:
+         globals()["watsonx_project_id"] = project_id_input
+
+     user_url = st.text_input('Provide a URL')
+     collection_name = st.text_input('Provide a unique name for this website (lower case). Use the same name for the same URL to avoid loading data multiple times.')
+
+     # UI component to enter the question
+     question = st.text_area('Question', height=100)
+     button_clicked = st.button("Answer the question")
+
+     st.subheader("Response")
+
+     # Invoke the LLM when the button is clicked
+     if button_clicked:
+         response = webchat.answer_questions_from_web(api_key, watsonx_project_id, user_url, question, collection_name)
+         st.write(response)
+ ```
496
+
497
+ **Explanation:**
498
+ - `main` function sets up the Streamlit application.
499
+ - `get_credentials` is called to load API credentials.
500
+ - `st.set_page_config` configures the page layout.
501
+ - Streamlit UI components are defined:
502
+ - Title and sidebar settings for API key and project ID.
503
+ - Text input fields for URL and collection name.
504
+ - Text area for the question.
505
+ - Button to trigger the question answering process.
506
+ - When the button is clicked, `webchat.answer_questions_from_web` function is called to get the response, which is then displayed on the page.
507
+
508
+ #### Section 5: Running the Application
509
+
510
+ ```python
511
+ if __name__ == "__main__":
512
+ main()
513
+ ```
514
+
515
+ **Explanation:**
516
+ - Ensures that the `main` function is executed when the script is run directly.
517
+
518
+ ### Summary of the Program
519
+
520
+ The provided code sets up an interactive web application using Streamlit to demonstrate a Retrieval-Augmented Generation (RAG) system. The system allows users to input a URL, which is then scraped for content. This content is embedded and stored in a database. Users can ask questions related to the content, and the system uses IBM Watson's language model to generate relevant answers. The application handles authentication via environment variables and allows users to update credentials through the UI.
521
+
522
+ ### Conclusion
523
+
524
+ In this blog post, we've explored a Python-based web chat application using Watsonx.ai and IBM Watson's powerful language models. The application demonstrates how to build a Retrieval-Augmented Generation (RAG) system that scrapes web content, embeds it, and leverages machine learning to answer user questions. By breaking down the code into manageable sections, we've provided a comprehensive guide to understanding and implementing such a system. This application showcases the potential of combining web scraping, natural language processing, and interactive web frameworks to create sophisticated AI-driven solutions.
hf/Dockerfile ADDED
@@ -0,0 +1,20 @@
1
+ # Use an official Python runtime as a parent image
2
+ FROM python:3.10-slim
3
+
4
+ # Set the working directory in the container
5
+ WORKDIR /app
6
+
7
+ # Copy the current directory contents into the container at /app
8
+ COPY . /app
9
+
10
+ # Install any needed packages specified in requirements.txt
11
+ RUN pip install --no-cache-dir -r requirements.txt
12
+
13
+ # Expose port 8501 for Streamlit
14
+ EXPOSE 8501
15
+
16
+ # Make sure the script is executable
17
+ RUN chmod +x run.py
18
+
19
+ # Run the application
20
+ ENTRYPOINT ["streamlit", "run", "run.py"]
hf/README.md ADDED
@@ -0,0 +1,93 @@
1
+ # WatsonX-WebChat
2
+
3
+ WatsonX-WebChat is an interactive web application that uses IBM Watson's language models to answer questions based on the content of a provided web page URL. This application leverages Retrieval-Augmented Generation (RAG) techniques to provide accurate and contextually relevant answers.
4
+
5
+ ## Features
6
+
7
+ - Extracts and processes text from a given URL.
8
+ - Embeds the text and stores it in a database.
9
+ - Answers user questions based on the embedded content using IBM Watson's language models.
10
+ - Interactive web interface built with Streamlit.
11
+
12
+ ## Setup and Deployment
13
+
14
+ ### Prerequisites
15
+
16
+ - Docker
17
+ - IBM watsonx.ai credentials (API key and project ID)
+
18
+ ### Installation
19
+
20
+ 1. **Clone the repository:**
21
+
22
+ ```sh
23
+ git clone https://github.com/your-username/WatsonX-WebChat.git
24
+ cd WatsonX-WebChat
25
+ ```
26
+
27
+ 2. **Create a `.env` file with your IBM Cloud credentials:**
28
+
29
+ ```plaintext
30
+ API_KEY=your_ibm_cloud_api_key
31
+ PROJECT_ID=your_ibm_cloud_project_id
32
+ ```
33
+
34
+ 3. **Build the Docker image:**
35
+
36
+ ```sh
37
+ docker build -t watsonx-webchat .
38
+ ```
39
+
40
+ 4. **Run the Docker container:**
41
+
42
+ ```sh
43
+ docker run -p 8501:8501 --env-file .env watsonx-webchat
44
+ ```
45
+
46
+ ### Deploy on Hugging Face
47
+
48
+ 1. **Log in to Hugging Face CLI:**
49
+
50
+ ```sh
51
+ huggingface-cli login
52
+ ```
53
+
54
+ 2. **Create a new repository on Hugging Face.**
55
+
56
+ 3. **Push the Docker image to Hugging Face:**
57
+
58
+ ```sh
59
+ docker tag watsonx-webchat huggingface.co/your-username/watsonx-webchat
60
+ docker push huggingface.co/your-username/watsonx-webchat
61
+ ```
62
+
63
+ 4. **Configure the Hugging Face repository to use the Docker image:**
64
+
65
+ - Go to your Hugging Face repository page.
66
+ - Click on "Settings".
67
+ - Under "Custom Docker Image", set the image to `huggingface.co/your-username/watsonx-webchat`.
68
+
69
+ ### Usage
70
+
71
+ 1. **Access the application:**
72
+
73
+ Open your browser and go to the URL provided by Hugging Face after deploying the application.
74
+
75
+ 2. **Enter the required information:**
76
+
77
+ - **API Key**: Your IBM Cloud API key.
78
+ - **Project ID**: Your IBM Cloud project ID.
79
+ - **URL**: The URL of the webpage you want to extract content from.
80
+ - **Collection Name**: A unique name for the webpage's data collection.
81
+ - **Question**: The question you want to ask based on the webpage content.
82
+
83
+ 3. **Get the response:**
84
+
85
+ Click the "Answer the question" button to get a response from the application.
86
+
87
+ ## Contributing
88
+
89
+ Feel free to open issues or submit pull requests if you find any bugs or have suggestions for new features.
90
+
91
+ ## License
92
+
93
+ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
hf/requirements.txt ADDED
@@ -0,0 +1,8 @@
1
+ streamlit
2
+ requests
3
+ beautifulsoup4
4
+ spacy
5
+ sentence-transformers
6
+ chromadb
7
+ ibm-watson-machine-learning
8
+ python-dotenv