pranavdarshan committed on
Commit 4e214ac · verified · 1 Parent(s): 1136f06

Update README.md

Files changed (1)
  1. README.md +26 -3
README.md CHANGED
@@ -1,6 +1,13 @@
 ---
 library_name: transformers
- tags: []
+ tags:
+ - education
+ datasets:
+ - NiharMandahas/Os_evaluator
+ language:
+ - en
+ base_model:
+ - NousResearch/Llama-2-7b-chat-hf
 ---

 # Model Card for Model ID
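The updated front matter points `base_model` at `NousResearch/Llama-2-7b-chat-hf` and declares `library_name: transformers`. A minimal loading sketch, assuming a placeholder repository id (the card itself does not state this repo's id):

```python
# Minimal loading sketch. The repository id is a placeholder and should be
# replaced with the actual id of this model repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pranavdarshan/os-answer-evaluator"  # placeholder, not confirmed by the card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Evaluate this answer on CPU scheduling: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```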
 
@@ -26,13 +33,29 @@ This is the model card of a 🤗 transformers model that has been pushed on the

 <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
+ - **Repository:** https://github.com/PranavDarshan/AutoGrader
+ - **Paper [optional]:** https://ieeexplore.ieee.org/document/10817016
 - **Demo [optional]:** [More Information Needed]

 ## Uses

 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+ The model developed in this work is designed to assist in the automated evaluation of answer scripts, specifically in the domain of operating systems. It aims to streamline grading by reducing the time required for evaluation and removing human bias.
+
+ **Foreseeable users:**
+
+ - **Educators and examiners** – University professors and teachers who assess student responses can use the system to speed up grading and keep it consistent.
+ - **Students** – Fair, unbiased evaluation gives students objective feedback and improves their learning experience.
+ - **Academic institutions** – Schools and universities can integrate the system into their assessment frameworks, making large-scale evaluations more efficient.
+
+ **Affected stakeholders:**
+
+ - **Students who submit handwritten scripts** – The integrated handwriting recognition ensures that handwritten answer scripts are evaluated fairly.
+ - **Educational technology providers** – The model can be adopted into existing learning management systems to enhance automated assessment tools.
+ - **Policy makers in education** – Standardized, unbiased grading could influence educational reforms related to assessment methodologies.
+
+ The model combines a fine-tuned Large Language Model (LLM) with Retrieval-Augmented Generation (RAG) to fetch contextual information from prescribed textbooks, and it integrates handwriting recognition so that manually written answer scripts can also be evaluated. The entire system is deployed as an interactive web platform on AWS SageMaker for scalability and accessibility.
+
+ By addressing the challenges of traditional grading, this model aims to make assessment more efficient, accurate, and fair.
+

 ### Direct Use
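The Uses section added above describes the intended flow: retrieve context from the prescribed textbooks via RAG, then have the fine-tuned LLM grade the (possibly handwriting-recognized) answer. A rough sketch of that flow, assuming a hypothetical `retrieve_context` helper and a prompt format that are not specified by the card or the linked repository:

```python
# Hypothetical grading flow pieced together from the card's description.
# The retrieval step, prompt wording, and scoring convention are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

def retrieve_context(question: str, k: int = 3) -> str:
    # Placeholder for the RAG step: in the described system this would query an
    # index built over the prescribed operating-systems textbook and return the
    # top-k relevant passages. Here it returns a fixed stub.
    return "Relevant textbook passages would be inserted here."

def grade_answer(model, tokenizer, question: str, student_answer: str, max_marks: int = 10) -> str:
    context = retrieve_context(question)
    prompt = (
        f"Reference material:\n{context}\n\n"
        f"Question: {question}\n"
        f"Student answer: {student_answer}\n\n"
        f"Award a score out of {max_marks} marks and justify it."
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=300, do_sample=False)
    # Strip the prompt tokens so only the model's evaluation is returned.
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```

In the full system described above, `student_answer` would come from the handwriting-recognition stage rather than typed input, and the web front end deployed on AWS SageMaker would wrap a call like this.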