Liu-Xiang committed
Commit d113485 · verified · 1 Parent(s): 69185ef

Training in progress, step 40

README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.4620
+ - Loss: 0.4621
 
  ## Model description
 
@@ -50,26 +50,26 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss |
  |:-------------:|:-----:|:----:|:---------------:|
- | 2.1604 | 0.04 | 20  | 1.9793 |
- | 1.1151 | 0.07 | 40  | 0.8205 |
- | 0.8219 | 0.11 | 60  | 0.6630 |
- | 0.5871 | 0.14 | 80  | 0.5909 |
- | 0.4129 | 0.18 | 100 | 0.5730 |
- | 0.5695 | 0.22 | 120 | 0.5218 |
- | 0.4224 | 0.25 | 140 | 0.5122 |
- | 0.6429 | 0.29 | 160 | 0.5232 |
- | 0.4826 | 0.33 | 180 | 0.4914 |
- | 0.3566 | 0.36 | 200 | 0.5035 |
- | 0.5297 | 0.4  | 220 | 0.4846 |
- | 0.4034 | 0.43 | 240 | 0.4812 |
- | 0.6001 | 0.47 | 260 | 0.4841 |
- | 0.4673 | 0.51 | 280 | 0.4717 |
- | 0.3482 | 0.54 | 300 | 0.4801 |
- | 0.5158 | 0.58 | 320 | 0.4717 |
- | 0.3967 | 0.62 | 340 | 0.4658 |
- | 0.5728 | 0.65 | 360 | 0.4661 |
- | 0.4584 | 0.69 | 380 | 0.4630 |
- | 0.3529 | 0.72 | 400 | 0.4620 |
+ | 2.1563 | 0.04 | 20  | 1.9767 |
+ | 1.1222 | 0.07 | 40  | 0.8267 |
+ | 0.814  | 0.11 | 60  | 0.6637 |
+ | 0.5956 | 0.14 | 80  | 0.5908 |
+ | 0.405  | 0.18 | 100 | 0.5643 |
+ | 0.5643 | 0.22 | 120 | 0.5204 |
+ | 0.4326 | 0.25 | 140 | 0.5107 |
+ | 0.6401 | 0.29 | 160 | 0.5211 |
+ | 0.4789 | 0.33 | 180 | 0.4908 |
+ | 0.3577 | 0.36 | 200 | 0.5069 |
+ | 0.5289 | 0.4  | 220 | 0.4851 |
+ | 0.3971 | 0.43 | 240 | 0.4811 |
+ | 0.5972 | 0.47 | 260 | 0.4807 |
+ | 0.4683 | 0.51 | 280 | 0.4712 |
+ | 0.3442 | 0.54 | 300 | 0.4790 |
+ | 0.5148 | 0.58 | 320 | 0.4692 |
+ | 0.3917 | 0.62 | 340 | 0.4661 |
+ | 0.5769 | 0.65 | 360 | 0.4661 |
+ | 0.4603 | 0.69 | 380 | 0.4629 |
+ | 0.3461 | 0.72 | 400 | 0.4621 |
 
 
  ### Framework versions
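The README above describes a LoRA fine-tune of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf). A minimal sketch of how such an adapter checkpoint could be loaded with `transformers` and `peft` follows; the adapter repo id is a placeholder, not something stated in this commit:

```python
# Minimal sketch, assuming this repo's adapter was produced with peft.
# "Liu-Xiang/<adapter-repo>" is a placeholder repo id, not confirmed by this commit.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

# Attach the LoRA weights (adapter_model.safetensors) on top of the frozen base model.
model = PeftModel.from_pretrained(base, "Liu-Xiang/<adapter-repo>")
model.eval()
```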
adapter_config.json CHANGED
@@ -20,10 +20,10 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "o_proj",
-    "k_proj",
     "q_proj",
-    "v_proj"
+    "o_proj",
+    "v_proj",
+    "k_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:49775ef7d34cc3472a86c8c4d1b8fb48671606b55c7a77f1ffebce1455566948
+ oid sha256:b66168fc43ad688ae6b13c39bb716a62b4182cd34e35d3bc430a988691f36ac2
  size 67143296
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f3ec485c11fcaa704474d66d804a3c23738e5bd0e1952efb66a18ce715385745
+ oid sha256:5880b45d9ca7cec45d0c75a2de2e3b6da69558dcaca791d74d6988acf2d52909
  size 4283