dpeelen9 committed on
Commit 5c1c253 · verified · 1 Parent(s): f9eedcc

Model save

Files changed (2):
  1. README.md +16 -16
  2. model.safetensors +1 -1
README.md CHANGED
@@ -7,8 +7,6 @@ tags:
 model-index:
 - name: flashcarder
   results: []
-datasets:
-- dpeelen9/academic-flashcards
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,26 +14,24 @@ should probably proofread and complete it, then remove this comment. -->
 
 # flashcarder
 
-This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on my own academic-flashcards dataset.
+This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.223
+- Loss: 0.1990
 
 ## Model description
 
-This model specializes in returning a Python list of flashcard clues.
+More information needed
 
 ## Intended uses & limitations
 
-It is intended for building flashcards, but could potentially be used to build Python lists of your choice.
+More information needed
 
 ## Training and evaluation data
 
-From the academic-flashcards dataset; I used a split between training and evaluation.
+More information needed
 
 ## Training procedure
 
-See the hyperparameters below; I passed them to the transformers TrainingArguments class, which I then used with the Trainer.
-
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -44,7 +40,7 @@ The following hyperparameters were used during training:
 - eval_batch_size: 16
 - seed: 42
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
-- lr_scheduler_type: cosine
+- lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
 - num_epochs: 10
 
@@ -52,11 +48,15 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 0.1204        | 1.0   | 1020 | 0.2234          |
-| 0.1091        | 2.0   | 2040 | 0.2287          |
-| 0.0997        | 3.0   | 3060 | 0.2395          |
-
-I had initially forgotten early stopping and had set epochs to 30; unfortunately, overfitting was halted later than I wanted, but it could be worse.
+| 0.474         | 1.0   | 1004 | 0.2415          |
+| 0.2877        | 2.0   | 2008 | 0.2146          |
+| 0.2514        | 3.0   | 3012 | 0.2101          |
+| 0.2311        | 4.0   | 4016 | 0.2050          |
+| 0.2131        | 5.0   | 5020 | 0.2004          |
+| 0.2033        | 6.0   | 6024 | 0.2000          |
+| 0.1932        | 7.0   | 7028 | 0.1983          |
+| 0.1849        | 8.0   | 8032 | 0.1997          |
+| 0.1813        | 9.0   | 9036 | 0.1990          |
 
 
 ### Framework versions
@@ -64,4 +64,4 @@ I had initially forgotten early stopping and had set epochs to 30, unfortunate t
 - Transformers 4.51.3
 - Pytorch 2.6.0+cu124
 - Datasets 2.14.4
-- Tokenizers 0.21.1
+- Tokenizers 0.21.1
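The hyperparameter list in the README maps almost one-to-one onto `transformers.TrainingArguments`. This is a minimal configuration sketch, not the author's actual training script: the output directory name is a placeholder, and the learning rate and train batch size fall outside the shown diff hunk, so they are left at library defaults here.

```python
# Sketch of the training configuration implied by the README's hyperparameter
# list (assumed mapping; learning_rate and train batch size are not shown in
# the diff, so library defaults apply).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="flashcarder",        # placeholder
    per_device_eval_batch_size=16,   # eval_batch_size: 16
    seed=42,                         # seed: 42
    optim="adamw_torch",             # OptimizerNames.ADAMW_TORCH, default betas/epsilon
    lr_scheduler_type="linear",      # lr_scheduler_type: linear (updated README)
    warmup_steps=500,                # lr_scheduler_warmup_steps: 500
    num_train_epochs=10,             # num_epochs: 10
)
# These args would then be passed to transformers.Trainer along with the
# model and the train/eval splits.
```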
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0f26e65ca69703c6d5d1c1cd3642c3902ba23c31c442b6b8ebcf4a8c04e32fae
+oid sha256:88f2826190116286a60a87a17939303dd06639b49ccc4ef6e7690d65462bb39b
 size 990345064
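The note removed from the old README mentioned forgetting early stopping; in the new run the validation-loss column bottoms out around epoch 7 and drifts up afterwards. A generic patience-based early-stopping rule makes that visible (in transformers itself this job is done by `EarlyStoppingCallback`; the standalone function below is only an illustration, not that callback's implementation):

```python
# Generic patience-based early stopping, run against the validation losses
# from the updated README's results table.
def best_checkpoint(val_losses, patience=2):
    """Return (best_index, stop_index): the index of the lowest loss seen,
    and the index at which training would halt under the patience rule."""
    best_i = 0
    bad = 0
    for i in range(1, len(val_losses)):
        if val_losses[i] < val_losses[best_i]:
            best_i, bad = i, 0
        else:
            bad += 1
            if bad >= patience:
                return best_i, i
    return best_i, len(val_losses) - 1

# Validation losses from the updated README (epochs 1-9).
losses = [0.2415, 0.2146, 0.2101, 0.2050, 0.2004, 0.2000, 0.1983, 0.1997, 0.1990]
best, stop = best_checkpoint(losses, patience=2)
# best == 6 (epoch 7, loss 0.1983); stop == 8 (training halts after epoch 9)
```

With patience 2, the rule would keep the epoch-7 checkpoint (loss 0.1983) and halt after epoch 9, which matches where this run actually stopped.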