---
# Dataset Card Metadata
# For more information, see: https://huggingface.co/docs/hub/datasets-cards
# Example: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1

# Basic Information
# -----------------
license: mit

# A list of languages the dataset is in.
language:
- en

# A list of tasks the dataset is suitable for.
task_categories:
- visual-question-answering
- image-text-to-text
- question-answering

# Pretty name for the dataset.
pretty_name: Indian Competitive Exams (JEE/NEET) LLM Benchmark

# Dataset Structure
# -----------------
# List of configurations for the dataset.
configs:
- config_name: default
  data_files:
  - split: test
    path: data/metadata.jsonl # Path to the data file or glob pattern
  images_dir: images # Path to the directory containing the image files

# Splits
# ------
splits:
  test:
    num_examples: 482

# Column Naming
# -------------
# Information about the columns (features) in the dataset.
column_info:
  image:
    description: The question image.
    data_type: image
  question_id:
    description: Unique identifier for the question.
    data_type: string
  exam_name:
    description: Name of the exam (e.g., "NEET", "JEE_MAIN", "JEE_ADVANCED").
    data_type: string
  exam_year:
    description: Year of the exam.
    data_type: int32
  exam_code:
    description: Specific paper code/session (e.g., "T3", "45").
    data_type: string
  subject:
    description: Subject (e.g., "Physics", "Chemistry", "Biology", "Mathematics").
    data_type: string
  question_type:
    description: Type of question (e.g., "MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER").
    data_type: string
  correct_answer:
    description: List containing the correct answer strings (e.g., ["A"], ["B", "C"]) or a single numeric string for INTEGER questions.
    data_type: list[string]

# More Information
dataset_summary: |
  A benchmark dataset for evaluating Large Language Models (LLMs) on questions from
  major Indian competitive examinations: the Joint Entrance Examination (JEE Main &
  Advanced) for engineering and the National Eligibility cum Entrance Test (NEET)
  for medical fields. Questions are provided as images, and metadata includes exam
  details (name, year, subject, question type) and correct answers. The benchmark
  supports several question types: single-correct MCQs, multiple-correct MCQs
  (with partial marking for JEE Advanced), and integer-answer questions.
# Tags to help users find the dataset.
dataset_tags:
- education
- science
- india
- competitive-exams
- llm-benchmark

# How annotations were created.
annotations_creators:
- found            # Questions come from existing exams
- expert-generated # Assuming answers are official/verified

annotation_types:
- multiple-choice

source_datasets:
- original

size_categories:
- n<1K # 482 examples

dataset_curation_process: |
  Questions are sourced from official JEE and NEET examination papers. They are
  provided as images to maintain the original formatting and diagrams. Metadata is
  manually compiled to link each image with its exam details and answers.

personal_sensitive_information: false
---

# JEE/NEET LLM Benchmark Dataset

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

## Dataset Description

This repository contains a benchmark dataset designed for evaluating the capabilities of Large Language Models (LLMs) on questions from major Indian competitive examinations:

* **JEE (Main & Advanced):** Joint Entrance Examination for engineering.
* **NEET:** National Eligibility cum Entrance Test for medical fields.

The questions are presented in image format (`.png`) as they appear in the original papers. The dataset includes metadata linking each image to its exam details (name, year, subject, question type) and correct answer(s). The benchmark framework supports several question types: single-correct MCQs, multiple-correct MCQs (with partial marking for JEE Advanced), and integer-answer questions.
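For concreteness, a single record in `data/metadata.jsonl` ties one question image to its metadata. The record below is illustrative, not taken from the dataset; in particular, the assumption that the image filename matches the question ID is for illustration only (the file stores one JSON object per line; it is pretty-printed here for readability):

```json
{
  "image_path": "images/NEET_2024_T3/N24T3001.png",
  "question_id": "N24T3001",
  "exam_name": "NEET",
  "exam_year": 2024,
  "exam_code": "T3",
  "subject": "Physics",
  "question_type": "MCQ_SINGLE_CORRECT",
  "correct_answer": ["A"]
}
```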
**Current Data:**

* **NEET 2024** (Code T3): 200 questions across Physics, Chemistry, Botany, and Zoology
* **NEET 2025** (Code 45): 180 questions across Physics, Chemistry, Botany, and Zoology
* **JEE Advanced 2024** (Papers 1 & 2): 102 questions across Physics, Chemistry, and Mathematics
* **Total:** 482 questions with comprehensive metadata

## Key Features

* **🖼️ Multimodal Reasoning:** Uses images of questions directly, testing the multimodal reasoning capability of models
* **📊 Exam-Specific Scoring:** Implements authentic scoring rules for each exam and question type, including partial marking for JEE Advanced
* **🔄 Robust API Handling:** Built-in retry mechanism and re-prompting for failed API calls or parsing errors
* **🎯 Flexible Filtering:** Filter by exam name, year, or specific question IDs for targeted evaluation
* **📈 Comprehensive Results:** Generates detailed JSON and human-readable Markdown summaries with section-wise breakdowns
* **🔧 Easy Configuration:** Simple YAML-based configuration for models and parameters

## How to Use

### Using the `datasets` Library

The dataset is designed to be loaded with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the evaluation split
dataset = load_dataset("Reja1/jee-neet-benchmark", split="test")

# Example: Access the first question
example = dataset[0]
image = example["image"]
question_id = example["question_id"]
subject = example["subject"]
correct_answers = example["correct_answer"]

print(f"Question ID: {question_id}")
print(f"Subject: {subject}")
print(f"Correct Answer(s): {correct_answers}")

# Display the image (requires Pillow)
# image.show()
```

### Manual Usage (Benchmark Scripts)

This repository contains scripts to run the benchmark evaluation directly:

1. **Clone the repository:**
   ```bash
   # Replace with your actual repository URL
   git clone https://github.com/your-username/jee-neet-benchmark
   cd jee-neet-benchmark
   # Ensure Git LFS is installed and pull large files if necessary
   # git lfs pull
   ```

2. **Install dependencies:**
   ```bash
   # It's recommended to use a virtual environment
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
   ```

3. **Configure API Key:**
   * Create a file named `.env` in the root directory of the project.
   * Add your OpenRouter API key to this file:
     ```dotenv
     OPENROUTER_API_KEY=your_actual_openrouter_api_key_here
     ```
   * **Important:** The `.gitignore` file is already configured to prevent committing the `.env` file. Never commit your API keys directly.

4. **Configure Models:**
   * Edit the `configs/benchmark_config.yaml` file.
   * Modify the `openrouter_models` list to include the model identifiers you want to evaluate:
     ```yaml
     openrouter_models:
       - "google/gemini-2.5-pro-preview-03-25"
       - "openai/gpt-4o"
       - "anthropic/claude-3-5-sonnet-20241022"
     ```
   * Ensure these models support vision input on OpenRouter (see the sketch below for what a vision request looks like).
   * You can also adjust other parameters such as `max_tokens` and `request_timeout` if needed.
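   For reference, vision-capable OpenRouter models accept images through OpenRouter's OpenAI-compatible chat-completions endpoint. The snippet below is an illustrative sketch of what such a request looks like, not the repository's actual `src/llm_interface.py` code; the model name and image path are placeholders:

   ```python
   import base64
   import os

   import requests

   # Sketch: send one question image to a vision-capable model via OpenRouter.
   # Assumes OPENROUTER_API_KEY is set in the environment (e.g., loaded from .env).
   api_key = os.environ["OPENROUTER_API_KEY"]

   with open("images/NEET_2024_T3/N24T3001.png", "rb") as f:
       image_b64 = base64.b64encode(f.read()).decode("utf-8")

   response = requests.post(
       "https://openrouter.ai/api/v1/chat/completions",
       headers={"Authorization": f"Bearer {api_key}"},
       json={
           "model": "openai/gpt-4o",  # any vision-capable model from the config
           "messages": [
               {
                   "role": "user",
                   "content": [
                       {"type": "text", "text": "Solve this exam question."},
                       {"type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                   ],
               }
           ],
           "max_tokens": 1024,
       },
       timeout=120,
   )
   print(response.json()["choices"][0]["message"]["content"])
   ```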
5. **Run the benchmark:**

   **Basic usage (run one model on all questions):**
   ```bash
   python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "google/gemini-2.5-pro-preview-03-25"
   ```

   **Filter by exam and year:**
   ```bash
   # Run only NEET 2024 questions
   python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "openai/gpt-4o" --exam_name NEET --exam_year 2024

   # Run only JEE Advanced 2024 questions
   python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "anthropic/claude-3-5-sonnet-20241022" --exam_name JEE_ADVANCED --exam_year 2024
   ```

   **Run specific questions:**
   ```bash
   # Run specific question IDs
   python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "google/gemini-2.5-pro-preview-03-25" --question_ids "N24T3001,N24T3002,JA24P1M01"
   ```

   **Custom output directory:**
   ```bash
   python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "openai/gpt-4o" --output_dir my_custom_results
   ```

   **Available filtering options:**
   - `--exam_name`: Choose from `NEET`, `JEE_MAIN`, `JEE_ADVANCED`, or `all` (default)
   - `--exam_year`: Choose from available years (`2024`, `2025`, etc.) or `all` (default)
   - `--question_ids`: Comma-separated list of specific question IDs to evaluate (e.g., `"N24T3001,JA24P1M01"`)

6. **Check Results:**
   * Results for each model run are saved in timestamped subdirectories within the `results/` folder.
   * Each run's folder (e.g., `results/google_gemini-2.5-pro-preview-03-25_NEET_2024_20250524_141230/`) contains:
     * **`predictions.jsonl`**: Detailed results for each question, including:
       - Model predictions and ground truth
       - Raw LLM responses
       - Evaluation status and marks awarded
       - API call success/failure information
     * **`summary.json`**: Overall scores and statistics in JSON format
     * **`summary.md`**: Human-readable Markdown summary with:
       - Overall exam scores
       - Section-wise breakdown (by subject)
       - Detailed statistics on correct/incorrect/skipped questions

## Scoring System

The benchmark implements the authentic scoring system for each exam type. The JEE Advanced multiple-correct rules are illustrated in code after this section.

### NEET Scoring
- **Single Correct MCQ**: +4 for correct, -1 for incorrect, 0 for skipped

### JEE Main Scoring
- **Single Correct MCQ**: +4 for correct, -1 for incorrect, 0 for skipped
- **Integer Type**: +4 for correct, 0 for incorrect, 0 for skipped

### JEE Advanced Scoring
- **Single Correct MCQ**: +3 for correct, -1 for incorrect, 0 for skipped
- **Multiple Correct MCQ**: Partial marking scheme:
  - +4 for selecting all correct options
  - +3 for selecting 3 out of 4 correct options (when 4 are correct)
  - +2 for selecting 2 out of 3 or more correct options
  - +1 for selecting 1 out of 2 or more correct options
  - -2 if any incorrect option is selected
  - 0 for skipped
- **Integer Type**: +4 for correct, 0 for incorrect, 0 for skipped
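As a concrete reading of the multiple-correct rules above, here is a minimal sketch. The benchmark's actual logic lives in `src/evaluation.py`; this is an illustration of the rules, not that code:

```python
def score_jee_advanced_multi(predicted: set[str], correct: set[str]) -> int:
    """Sketch of the JEE Advanced multiple-correct partial-marking rules."""
    if not predicted:                  # question skipped
        return 0
    if predicted - correct:            # any incorrect option selected
        return -2
    if predicted == correct:           # all correct options selected
        return 4
    n_hit, n_correct = len(predicted), len(correct)
    if n_correct == 4 and n_hit == 3:  # 3 out of 4 correct options
        return 3
    if n_correct >= 3 and n_hit == 2:  # 2 out of 3+ correct options
        return 2
    if n_correct >= 2 and n_hit == 1:  # 1 out of 2+ correct options
        return 1
    return 0

# Example: selecting only "B" when {"B", "C"} is correct earns +1.
assert score_jee_advanced_multi({"B"}, {"B", "C"}) == 1
```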
## Advanced Features

### Retry Mechanism
- Automatic retry for failed API calls (up to 3 attempts with exponential backoff)
- Separate retry pass for questions that failed initially
- Comprehensive error tracking and reporting

### Re-prompting System
- If initial response parsing fails, the system automatically re-prompts the model
- Uses the previous response to ask for a properly formatted answer
- Adapts prompts based on question type (MCQ vs. integer)

### Comprehensive Evaluation
- Tracks multiple metrics: correct answers, partial credit, skipped questions, API failures
- Section-wise breakdown by subject
- Detailed logging with color-coded progress indicators

## Dataset Structure

* **`data/metadata.jsonl`**: Contains metadata for each question image, with fields:
  - `image_path`: Path to the question image
  - `question_id`: Unique identifier (e.g., "N24T3001")
  - `exam_name`: Exam type ("NEET", "JEE_MAIN", "JEE_ADVANCED")
  - `exam_year`: Year of the exam (integer)
  - `exam_code`: Paper/session code (e.g., "T3", "P1")
  - `subject`: Subject name (e.g., "Physics", "Chemistry", "Mathematics")
  - `question_type`: Question format ("MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER")
  - `correct_answer`: List of correct answer strings (e.g., ["A"], ["B", "C"], ["42"])
* **`images/`**: Contains subdirectories for each exam set:
  - `images/NEET_2024_T3/`: NEET 2024 question images
  - `images/NEET_2025_45/`: NEET 2025 question images
  - `images/JEE_ADVANCE_2024/`: JEE Advanced 2024 question images
* **`src/`**: Python source code for the benchmark system:
  - `benchmark_runner.py`: Main benchmark execution script
  - `llm_interface.py`: OpenRouter API interface with retry logic
  - `evaluation.py`: Scoring and evaluation functions
  - `prompts.py`: LLM prompts for different question types
  - `utils.py`: Utility functions for parsing and configuration
* **`configs/`**: Configuration files:
  - `benchmark_config.yaml`: Model selection and API parameters
* **`results/`**: Directory where benchmark results are stored (timestamped subdirectories)
* **`jee-neet-benchmark.py`**: Hugging Face `datasets` loading script

## Data Fields

The dataset contains the following fields (accessible via `datasets`):

* `image`: The question image (`datasets.Image`)
* `question_id`: Unique identifier for the question (string)
* `exam_name`: Name of the exam (e.g., "NEET", "JEE_ADVANCED") (string)
* `exam_year`: Year of the exam (int)
* `exam_code`: Paper/session code (e.g., "T3", "P1") (string)
* `subject`: Subject (e.g., "Physics", "Chemistry", "Mathematics") (string)
* `question_type`: Type of question (e.g., "MCQ_SINGLE_CORRECT", "INTEGER") (string)
* `correct_answer`: List containing the correct answer strings (list of strings).
  - For MCQs, these are option identifiers (e.g., `["1"]`, `["A"]`, `["B", "C"]`). The LLM should output the identifier as it appears in the question.
  - For INTEGER questions, this is the numerical answer as a string (e.g., `["42"]`, `["12.75"]`). The LLM should output the number.
  - For some `MCQ_SINGLE_CORRECT` questions, the list holds several acceptable answers; a prediction is correct if it matches any one of them.

## LLM Answer Format

The LLM is expected to return its final answer enclosed in `<answer>` tags. For example:

- MCQ Single Correct (Option A): `<answer>A</answer>`
- MCQ Single Correct (Option 2): `<answer>2</answer>`
- MCQ Multiple Correct (Options B and D): `<answer>B,D</answer>`
- Integer Answer: `<answer>42</answer>`
- Decimal Answer: `<answer>12.75</answer>`
- Skipped Question: `<answer>SKIP</answer>`

The system parses these formats, and the prompts are designed to guide the LLM accordingly.
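A minimal sketch of how such a response might be parsed, assuming the `<answer>` tag format above (the repository's actual parser lives in `src/utils.py` and may differ):

```python
import re


def extract_answer(raw_response: str) -> list[str] | None:
    """Pull the final answer out of an LLM response.

    Returns a list of option/number strings, an empty list for SKIP,
    or None when no well-formed <answer> tag is found (which would
    trigger the re-prompting pass described above).
    """
    match = re.search(r"<answer>\s*(.*?)\s*</answer>",
                      raw_response, re.DOTALL | re.IGNORECASE)
    if not match:
        return None
    content = match.group(1).strip()
    if content.upper() == "SKIP":
        return []
    # Multiple-correct answers are comma-separated, e.g. "B,D".
    return [part.strip() for part in content.split(",") if part.strip()]


# Examples:
# extract_answer("Reasoning... <answer>B,D</answer>")  -> ["B", "D"]
# extract_answer("Too hard. <answer>SKIP</answer>")    -> []
```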
## Troubleshooting

### Common Issues

**API Key Issues:**
- Ensure your `.env` file is in the root directory
- Verify your OpenRouter API key is valid and has sufficient credits
- Check that the key has access to vision-capable models

**Model Not Found:**
- Verify the model identifier exists on OpenRouter
- Ensure the model supports vision input
- Check that your OpenRouter account has access to the specific model

**Memory Issues:**
- Reduce `max_tokens` in the config file
- Process smaller subsets using the `--question_ids` filter
- Use models with smaller context windows

**Parsing Failures:**
- The system automatically attempts re-prompting for parsing failures
- Check the raw responses in `predictions.jsonl` to debug prompt issues
- Consider adjusting the prompts in `src/prompts.py` for specific models

## Current Limitations

* **Dataset Size:** While comprehensive, the dataset would benefit from more JEE Main questions and additional years
* **Language Support:** Currently supports English questions only
* **Model Dependencies:** Requires vision-capable models available through OpenRouter

## Citation

If you use this dataset or benchmark code, please cite:

```bibtex
@misc{rejaullah_2025_jeeneetbenchmark,
  title={JEE/NEET LLM Benchmark},
  author={Md Rejaullah},
  year={2025},
  howpublished={\url{https://huggingface.co/datasets/Reja1/jee-neet-benchmark}},
}
```

## Contact

For questions, suggestions, or collaboration, feel free to reach out:

* **X (Twitter):** [https://x.com/RejaullahmdMd](https://x.com/RejaullahmdMd)

## License

This dataset and the associated code are licensed under the [MIT License](https://opensource.org/licenses/MIT).