Liu-Xiang committed · Commit 8c840d9 · verified · 1 Parent(s): 03a63bb

Training in progress, step 20

README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5902
+- Loss: 0.5699
 
 ## Model description
 
@@ -39,21 +39,22 @@ The following hyperparameters were used during training:
 - train_batch_size: 32
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 64
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 128
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 100
-- training_steps: 80
+- training_steps: 100
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 2.1581        | 0.02  | 20   | 1.9857          |
-| 1.1128        | 0.04  | 40   | 0.8156          |
-| 0.8092        | 0.05  | 60   | 0.6632          |
-| 0.5965        | 0.07  | 80   | 0.5902          |
+| 2.1594        | 0.04  | 20   | 1.9765          |
+| 1.1079        | 0.07  | 40   | 0.8140          |
+| 0.8124        | 0.11  | 60   | 0.6610          |
+| 0.5827        | 0.14  | 80   | 0.5901          |
+| 0.4148        | 0.18  | 100  | 0.5699          |
 
 
 ### Framework versions
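
The hyperparameter edits above fit together arithmetically: the effective batch size is train_batch_size × gradient_accumulation_steps, so this commit moves from 32 × 2 = 64 to 32 × 4 = 128. Below is a minimal sketch of the corresponding transformers.TrainingArguments; it assumes single-device training, output_dir is a hypothetical placeholder, and the learning rate is omitted because it does not appear in this hunk.

```python
# Sketch only: mirrors the hyperparameters listed in the updated README.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="codellama-7b-lora",   # hypothetical path, not from this commit
    per_device_train_batch_size=32,   # train_batch_size: 32
    per_device_eval_batch_size=8,     # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=4,    # raised from 2 in this commit
    adam_beta1=0.9,                   # Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_steps=100,                 # lr_scheduler_warmup_steps: 100
    max_steps=100,                    # training_steps raised from 80 to 100
)

# total_train_batch_size is derived, not set directly:
assert args.per_device_train_batch_size * args.gradient_accumulation_steps == 128
```

The extra 20 steps also explain the new final table row: evaluation at step 100 lowers validation loss from 0.5901 to 0.5699, which is why the headline Loss at the top of the card changed.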
adapter_config.json CHANGED
@@ -20,10 +20,10 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "k_proj",
+    "o_proj",
     "q_proj",
     "v_proj",
-    "o_proj"
+    "k_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bdd19ae0476802f82752fb3a6199d13bde26c836b1a2e5342892688f59aba329
+oid sha256:e2f6526d19004c7e9832ef5eeb254e3d39aa3d639e20d8c32b9dcf3542f65826
 size 67143296
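
The safetensors change swaps in a new ~64 MB git-lfs object of identical size, i.e. retrained LoRA weights with the same tensor shapes. A hedged sketch of how such an adapter is typically attached to the base model with peft; the adapter repository id is a placeholder, since this commit page does not show it.

```python
# Sketch only: load the CodeLlama base model and attach the LoRA adapter.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",  # base model named in the README
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base, "Liu-Xiang/...")  # placeholder adapter id
```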
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a0d2bc654f8aa85fed95b942ed73d1b0c8b49a104a6c1523d3bca806caa40068
+oid sha256:bec8f9f245df0ded77c6cf1d11f8f56d034ba2c408a5f3f292f80fa38360aba6
 size 4283