Tags: Text Generation · Transformers · PyTorch · Safetensors · Korean · llama · text-generation-inference

This model was developed by the LLM research consortium of Media Group Saram-gwa-Soop Co., Ltd. and Markr Co., Ltd.
The license is CC-BY-NC-SA.

Model Details

Model Developers: Seungyoo Lee (DopeorNope)

Input: Models input text only.

Output: Models generate text only.

Model Architecture
pub-llama-13B-v6 is an auto-regressive language model based on the LLaMA-2 transformer architecture.

Base Model: beomi/llama-2-koen-13b
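
Below is a minimal inference sketch using the Hugging Face transformers library. The repository ID Markr-AI/pub-llama-13B-v6 is taken from this page; the dtype and generation settings are illustrative assumptions, not recommended values.

```python
# Minimal inference sketch; the settings below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Markr-AI/pub-llama-13B-v6"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # weights are stored as F32; fp16 halves GPU memory
    device_map="auto",          # requires accelerate; places layers across available devices
)

prompt = "대한민국의 수도는 어디인가요?"  # "What is the capital of South Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```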

Training Dataset
The DopeorNope/OpenOrca-near-dedup-v1 dataset was created with a near-deduplication algorithm to reduce sample similarity. We will release it soon.
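
The exact deduplication procedure has not been published. Purely as an illustration of the general idea, the sketch below filters near-duplicate documents by character-shingle Jaccard similarity; the shingle size, threshold, and greedy O(n^2) loop are assumptions, and production pipelines typically use MinHash/LSH instead.

```python
# Illustrative near-deduplication sketch (NOT the published method):
# keep a document only if its character-shingle Jaccard similarity to every
# previously kept document stays below a threshold.

def shingles(text: str, k: int = 5) -> set[str]:
    """Return the set of k-character shingles of a whitespace-normalized text."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity of two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def near_dedup(docs: list[str], threshold: float = 0.8) -> list[str]:
    """Greedy O(n^2) filter; large-scale pipelines would use MinHash/LSH to scale."""
    kept: list[str] = []
    kept_shingles: list[set[str]] = []
    for doc in docs:
        s = shingles(doc)
        if all(jaccard(s, t) < threshold for t in kept_shingles):
            kept.append(doc)
            kept_shingles.append(s)
    return kept

# Example: the second string is a near-duplicate of the first and gets dropped.
print(near_dedup(["hello world example", "hello world example!", "something else"]))
```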

Model size: 13.2B parameters (Safetensors, tensor type F32)