import json
import logging
import os
import time
from typing import Any, Dict, List

from openai import OpenAI

from modals.inputs import LLMConfig


class LLM:
    def __init__(self, max_retries: int = 3, retry_delay: float = 1.0):
        self.max_retries = max_retries
        self.retry_delay = retry_delay

    def parse_json(self, response: str) -> Dict[str, Any]:
        """Extract and parse a JSON object or array from a ```json fenced block."""
        import re
        # Greedy match so nested braces/brackets inside the payload are captured
        # (a non-greedy pattern would stop at the first closing brace).
        match = re.search(r'```json\s*(\{.*\}|\[.*\])\s*```', response, re.DOTALL)
        if match:
            try:
                return json.loads(match.group(1))
            except json.JSONDecodeError as e:
                raise ValueError(f"Fenced block is not valid JSON: {e}") from e
        raise ValueError("No ```json fenced block found in the LLM response")

    def step(self, messages: List[Dict[str, str]] = None, llm_config: LLMConfig = None) -> str:
        if llm_config is None:
            raise ValueError("llm_config is required")
        messages = messages or []
        llm = OpenAI(
            api_key=llm_config.api_key,
            base_url=llm_config.base_url
        )
        for attempt in range(self.max_retries):
            try:
                response = llm.chat.completions.create(
                    model=llm_config.model,
                    messages=messages,
                    temperature=0.2
                )
                return response.choices[0].message.content
            except Exception as e:
                logging.error(f"Error in LLM step (attempt {attempt + 1}/{self.max_retries}): {e}")
                if attempt < self.max_retries - 1:
                    time.sleep(self.retry_delay * (2 ** attempt))  # Exponential backoff
                else:
                    raise

if __name__ == "__main__":
    # llm_config = LLMConfig(
    #     base_url="https://openrouter.ai/api/v1",
    #     api_key=os.environ["OPENROUTER_API_KEY"],
    #     model="openai/gpt-4o-mini"
    # )

    # Read the key from the environment rather than hardcoding a secret in source.
    llm_config = LLMConfig(
        api_key=os.environ["GEMINI_API_KEY"],
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
        model="gemini-2.0-flash",
    )

    messages = [
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Tell me a fun fact about space."}
    ]

    llm = LLM()
    response = llm.step(messages, llm_config)
    print(response)