---
library_name: transformers
datasets:
- b-mc2/sql-create-context
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
---
# Text-to-SQL Model Usage
## Model Details
- Base Model: `meta-llama/Llama-3.2-1B-Instruct`
- Fine-tuned Model: `pavan-naik/Llama-3.2-1B-Instruct-Text-to-SQL`
- Task: Text-to-SQL query generation
- Framework: PyTorch with 🤗 Transformers and PEFT
## Installation
```bash
pip install peft transformers bitsandbytes accelerate
```
## Required Imports
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
import torch
```
## Loading the Model
### 1. Configure Quantization (Optional)
```python
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
```
### 2. Load Base Model and Tokenizer
```python
# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B-Instruct",
    # quantization_config=bnb_config,  # uncomment to load the 4-bit quantized version
    device_map="auto",
)

# Load the tokenizer from the fine-tuned repository
tokenizer = AutoTokenizer.from_pretrained(
    "pavan-naik/Llama-3.2-1B-Instruct-Text-to-SQL",
    trust_remote_code=True,
)

# Llama has no pad token by default; reuse the EOS token for padding
tokenizer.pad_token = tokenizer.eos_token
```
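As a quick sanity check, you can print the loaded model's memory footprint; with the 4-bit config enabled, the 1B model should drop from roughly 2.5 GB in float16 to well under 1 GB (exact figures vary by setup):

```python
# Rough check that quantization (if enabled) actually took effect
print(f"Memory footprint: {base_model.get_memory_footprint() / 1e9:.2f} GB")
```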
### 3. Load PEFT Adapter
```python
model = PeftModel.from_pretrained(base_model, "pavan-naik/Llama-3.2-1B-Instruct-Text-to-SQL")
```
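If you loaded the base model without quantization, you can optionally merge the LoRA adapter into the base weights with PEFT's `merge_and_unload()`, which removes the adapter indirection at inference time (merging into a 4-bit quantized base is generally not supported):

```python
# Optional: fold the adapter weights into the base model for inference
model = model.merge_and_unload()
model.eval()
```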
## Generating SQL Queries
### Prompt Template
```python
sql_prompt_template = """You are a database management system expert, proficient in Structured Query Language (SQL).
Your job is to write an SQL query that answers the following question, based on the given database schema and any additional information provided. Use SQLite syntax.
Please output only SQL (without any explanations).
### Question: {question}
### Schema: {context}
### Completion: """
```
### Generation Function
```python
def generate_sql(question, context, model, tokenizer, max_new_tokens=128):
    """Generate a SQL query answering `question` against the schema in `context`."""
    prompt = sql_prompt_template.format(question=question, context=context)
    # Truncate at the model's context limit, not at the generation budget,
    # so long schemas are not silently cut off
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True,
                       max_length=tokenizer.model_max_length)
    inputs = {k: v.to(model.device) for k, v in inputs.items()}
    prompt_length = inputs["input_ids"].shape[1]
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        num_return_sequences=1,
        temperature=0.7,
        do_sample=True,
    )
    # Decode only the newly generated tokens, skipping the prompt
    sql_answer = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True).strip()
    return sql_answer
```
## Example Usage
```python
# Define your question and database schema
question = "For each continent, show the city with the highest population and what percentage of its country's total population it represents"
context = """
CREATE TABLE city (city_id INTEGER, name VARCHAR, population INTEGER, country_id INTEGER);
CREATE TABLE country (country_id INTEGER, name VARCHAR, continent VARCHAR);
"""
# Generate SQL query
sql_query = generate_sql(question, context, model, tokenizer)
print(sql_query)
```
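Because the model emits raw SQL, it can be worth checking that a generated query at least parses against the schema before running it on real data. Below is a minimal sketch using Python's built-in `sqlite3` module; `validate_sql` is a hypothetical helper, not part of this model or its libraries:

```python
import sqlite3

def validate_sql(sql, schema):
    """Hypothetical helper: run EXPLAIN against an empty in-memory
    database to check that the query parses against the schema."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema)      # create the tables from the CREATE statements
        conn.execute(f"EXPLAIN {sql}")  # parses and plans the query without executing it
        return True
    except sqlite3.Error as e:
        print(f"Generated SQL failed validation: {e}")
        return False
    finally:
        conn.close()

if validate_sql(sql_query, context):
    print("Query parses against the schema.")
```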
## Notes
- The model generates SQLite-syntax queries
- Adjust `max_new_tokens` to match the expected query complexity
- Lower `temperature` to reduce randomness, or set `do_sample=False` for deterministic greedy decoding (see the sketch after this list)
- The model performs best with clear schema definitions and well-structured questions
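For reproducible outputs, disable sampling rather than setting `temperature=0.0`, which Transformers rejects when sampling is enabled. A minimal greedy-decoding sketch, reusing the prompt template and example inputs from above:

```python
prompt = sql_prompt_template.format(question=question, context=context)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Greedy decoding: always pick the most likely next token
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True).strip())
```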
## Requirements
- Python 3.7+
- PyTorch
- Transformers
- PEFT (Parameter-Efficient Fine-Tuning)
- bitsandbytes (for 4-bit quantization)
- accelerate (for `device_map="auto"`)