codezakh and nielsr (HF Staff) committed
Commit faee48d · verified · 1 parent: 5c8fcae

Improve metadata, link to project page (#1)

- Improve metadata, link to project page (549acf97e8a79cd0f36c28a4391fa8c55f30f5a6)


Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1)
1. README.md +7 -4
README.md CHANGED
@@ -1,7 +1,10 @@
 ---
-library_name: peft
-license: other
 base_model: meta-llama/Llama-3.1-8B-Instruct
+datasets:
+- codezakh/EFAGen-Llama-3.1-8B-Instruct-Training-Data
+library_name: transformers
+license: other
+pipeline_tag: text-generation
 tags:
 - llama-factory
 - lora
@@ -9,12 +12,12 @@ tags:
 model-index:
 - name: llama_factory_output_dir
   results: []
-datasets:
-- codezakh/EFAGen-Llama-3.1-8B-Instruct-Training-Data
 ---
 
 [📃 Paper](arxiv.org/abs/2504.09763)
 
+Project Page: https://zaidkhan.me/EFAGen
+
 This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) trained to generate Executable Functional Abstractions (EFAs) for math problems.
 The training data for this model can be found [here](https://huggingface.co/datasets/codezakh/EFAGen-Llama-3.1-8B-Instruct-Training-Data).
 The model was trained using Llama-Factory and the data is already in Alpaca instruction-tuning format.
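
The card says the training data is already in Alpaca instruction-tuning format, the three-field schema that Llama-Factory reads natively. For orientation, a minimal sketch of one record in that schema; the field values are illustrative, not drawn from the actual dataset.

```python
# One record in Alpaca instruction-tuning format (the schema Llama-Factory
# consumes directly). Values are illustrative, not from the EFAGen dataset.
example = {
    "instruction": "Write an executable functional abstraction (EFA) for the math problem below.",
    "input": "If 3x + 5 = 20, what is x?",
    "output": "# A Python program that parameterizes and solves the problem ...",
}
```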
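The updated metadata sets `library_name: transformers` and `pipeline_tag: text-generation`. Because the card carries a `lora` tag and the previous metadata listed `library_name: peft`, the sketch below assumes the repo holds a PEFT adapter on top of the base model; `adapter_id` is a hypothetical placeholder, not a confirmed repo name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel  # assumption: repo is a LoRA adapter (per the lora tag)

base_id = "meta-llama/Llama-3.1-8B-Instruct"
adapter_id = "codezakh/EFAGen-Llama-3.1-8B-Instruct"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

# Ask the fine-tuned model for an EFA of a concrete math problem.
prompt = "Write an executable functional abstraction for: If 3x + 5 = 20, what is x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```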