
LLM Educational Assistant Platform

An educational assistant platform built on Gradio that uses large language models to help students understand complex concepts. By decomposing problems into their underlying concepts and visualizing them, it delivers a personalized learning experience.

Features

  1. User Profile Customization

    • Input grade, subject, and learning needs
    • Receive personalized learning experiences and explanations
  2. Concept Decomposition and Visualization

    • Break down complex problems into basic concepts
    • Visualize concept relationships through tree diagrams or network graphs (see the sketch after this list)
    • Intuitively display dependencies and connections between concepts
  3. Interactive Learning Experience

    • Click on any sub-concept to get detailed explanations
    • View targeted examples and exercises
    • Get recommendations for relevant learning resources
  4. Progress Saving and Caching

    • Cache generated concept explanations to improve response speed
    • Save learning records for easy review and revision
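
As a rough illustration of the graph idea above, a concept graph can be built and drawn with networkx and matplotlib, as in the sketch below. The helper name, node labels, and choice of libraries are assumptions for illustration only; the project's actual implementation lives in visualization.py.

import networkx as nx
import matplotlib.pyplot as plt

def draw_concept_graph(sub_concepts):
    """Draw a simple dependency graph: each edge points from a concept
    to a sub-concept it builds on. Illustrative sketch only."""
    graph = nx.DiGraph()
    for parent, children in sub_concepts.items():
        for child in children:
            graph.add_edge(parent, child)
    layout = nx.spring_layout(graph, seed=42)
    nx.draw_networkx(graph, pos=layout, node_color="lightblue", arrows=True)
    plt.axis("off")
    plt.show()

# Example: "Photosynthesis" decomposed into basic concepts
draw_concept_graph({
    "Photosynthesis": ["Light reactions", "Calvin cycle"],
    "Light reactions": ["Chlorophyll", "ATP"],
})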

Installation and Running

Prerequisites

  • Python 3.7+
  • pip (Python package manager)

# Create and activate an isolated environment (conda shown here; plain venv also works)
conda create -n ai_new_dream python=3.11
conda activate ai_new_dream
pip install -r requirements.txt
export OPENAI_API_KEY="your-secret-api-key"   # key read by the app at startup

Automatic Installation and Running

Linux/Mac:

# Add execution permission
chmod +x run.sh

# Run startup script
./run.sh

Windows:

:: Double-click run.bat, or run it from a command prompt
run.bat

Manual Installation and Running

# Create virtual environment
python -m venv venv

# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On Linux/Mac:
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Run application
python app.py

Project Structure

.
├── app.py              # Main application file
├── prompts.py          # LLM prompt templates
├── llm_utils.py        # LLM utility functions
├── visualization.py    # Concept graph visualization module
├── cache_utils.py      # Caching utilities
├── concept_handler.js  # JavaScript for concept click handling
├── requirements.txt    # Project dependencies
├── run.sh              # Linux/Mac startup script
└── run.bat             # Windows startup script

Integrated LLM

This application integrates with the OpenAI API using the gpt-4o-mini model. The implementation is in the call_llm function in llm_utils.py:

def call_llm(prompt: str) -> str:
    """Call the OpenAI API with the gpt-4o-mini model."""
    try:
        from openai import OpenAI
        # The client reads the OPENAI_API_KEY environment variable set earlier
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "You are a helpful education assistant."},
                {"role": "user", "content": prompt}
            ]
        )
        return response.choices[0].message.content
    except Exception as e:
        # Fall back to mock data if the API call fails
        print(f"Error calling OpenAI API: {e}")
        # ...

If the API call fails, the system falls back to mock data to demonstrate functionality.
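
The fallback branch is elided in the snippet above; a minimal sketch of what returning mock data could look like follows. The helper and field names are illustrative assumptions; the project ships its own prepared mock content.

import json

def _fallback_response(prompt):
    # Illustrative placeholder returned when the OpenAI call fails;
    # the real project uses its own prepared mock data instead.
    return json.dumps({
        "concept": "Sample concept",
        "explanation": "Offline placeholder explanation.",
        "sub_concepts": ["Sub-concept A", "Sub-concept B"],
    })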

Customization and Extension

  • Adding New Subjects: Add new options to subject_input in app.py (see the sketch after this list)
  • Adjusting Prompt Templates: Modify the prompt templates in prompts.py to implement specific teaching styles or methods
  • Enhancing Visualization: Modify the visualization functions in visualization.py to implement richer concept graph representations
  • Changing LLM Model: To use a different model, update the model parameter in the call_llm function in llm_utils.py
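
As an illustration of the first point, if subject_input is a Gradio dropdown defined in app.py, adding a subject is just another entry in its choices list. The component definition below is an assumption, not the project's actual code.

import gradio as gr

# Hypothetical definition of subject_input in app.py;
# append new subjects to the choices list.
subject_input = gr.Dropdown(
    choices=["Mathematics", "Physics", "Chemistry", "Computer Science"],
    label="Subject",
    value="Mathematics",
)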

Contributions and Feedback

Questions, suggestions, or code contributions are welcome to help improve this educational assistant platform!

Educational LLM Application

This is an educational application based on large language models, designed to help students break down and understand complex academic concepts.

Features

  • Automatically adjusts content difficulty to the student's grade level and subject
  • Breaks complex problems into interrelated sub-concepts
  • Generates a visual knowledge graph
  • Provides detailed explanations, examples, learning resources, and practice exercises for each concept

How to Run

Running with the Scripts (Recommended)

  1. Make sure Python 3.7 or later is installed
  2. Navigate to the project directory in a terminal

For Mac/Linux users:

chmod +x run.sh  # Add execution permission
./run.sh         # Run the script

For Windows users:

run.bat

Manual Setup

  1. Create and activate a virtual environment:
python -m venv venv
source venv/bin/activate  # Mac/Linux
venv\Scripts\activate     # Windows
  2. Install dependencies:
pip install -r requirements.txt
  3. Start the application:
python app.py

Configuring the OpenAI API

The application uses the OpenAI API for concept decomposition and explanation. Set your API key in the config.py file:

OPENAI_API_KEY = "your-api-key"  # Replace with your actual API key

You can also adjust other configuration parameters (an illustrative config.py sketch follows this list):

  • OPENAI_MODEL: Name of the OpenAI model to use
  • OPENAI_TIMEOUT: API call timeout in seconds
  • OPENAI_MAX_RETRIES: Maximum number of retries when a request fails
  • DEBUG_MODE: Whether to enable debug output
  • USE_FALLBACK_DATA: Whether to use fallback data when the API fails
  • CACHE_ENABLED: Whether to enable response caching
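
Putting these together, a config.py could look like the sketch below. The values are illustrative defaults, not the project's actual configuration.

import os

# Prefer the environment variable if it is set; fall back to a placeholder.
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "your-api-key")

OPENAI_MODEL = "gpt-4o-mini"   # Name of the OpenAI model to use
OPENAI_TIMEOUT = 30            # API call timeout in seconds
OPENAI_MAX_RETRIES = 3         # Maximum retries when a request fails

DEBUG_MODE = False             # Whether to enable debug output
USE_FALLBACK_DATA = True       # Whether to use fallback data when the API fails
CACHE_ENABLED = True           # Whether to enable response caching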

System Architecture

The application consists of the following main components:

  1. app.py - Main application file, including the Gradio interface
  2. llm_utils.py - LLM calling and processing functions
  3. visualization.py - Knowledge graph visualization
  4. prompts.py - LLM prompt templates
  5. cache_utils.py - Response caching functionality
  6. config.py - Application configuration
  7. concept_handler.py - Fallback mock data (used when the API fails)

Customization and Extension

You can customize the application in the following ways:

  1. Adding New Subjects: Extend the domain-specific prompts in prompts.py
  2. Adjusting Prompt Templates: Modify the system and user prompts in prompts.py
  3. Enhancing Visualization: Adjust how the knowledge graph is generated in visualization.py
  4. Changing the Model: Specify a different OpenAI model in config.py

Troubleshooting

If you encounter connection errors, work through the following (a retry sketch follows the list):

  1. Check that your API key is correct
  2. Confirm that your network connection is working
  3. Check that the model name is correct (e.g. "gpt-4o-mini")
  4. Review the application's debug output (enable DEBUG_MODE)
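
If timeouts or transient failures are the cause, OPENAI_TIMEOUT and OPENAI_MAX_RETRIES are the relevant settings; a minimal retry wrapper using them might look like the sketch below. This is not the project's actual llm_utils.py code, only an illustration of how those settings could be applied.

import time
from openai import OpenAI
from config import (OPENAI_API_KEY, OPENAI_MODEL, OPENAI_TIMEOUT,
                    OPENAI_MAX_RETRIES, DEBUG_MODE)

def call_llm_with_retry(prompt):
    client = OpenAI(api_key=OPENAI_API_KEY, timeout=OPENAI_TIMEOUT)
    for attempt in range(1, OPENAI_MAX_RETRIES + 1):
        try:
            response = client.chat.completions.create(
                model=OPENAI_MODEL,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except Exception as exc:
            if DEBUG_MODE:
                print(f"Attempt {attempt} failed: {exc}")
            if attempt == OPENAI_MAX_RETRIES:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff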

Contributing

Issue reports and pull requests to improve this project are welcome.